Institute for Advanced Simulation (IAS)

DEEP-EST - Dynamical Exascale Entry Platform - Prototype System

Photo: DEEP-EST prototype system at JSC (cluster module, booster module and data analytics module)
Copyright: Megware, Herbert Cornelius

DEEP-EST is an Exascale project funded by the EU Horizon 2020 programme (contract no. ICT-754304), which started in July 2017. Its main goal is to develop an energy-efficient system architecture that fits both High Performance Computing (HPC) and High Performance Data Analytics (HPDA) workloads and satisfies the requirements of end users as well as e-infrastructure operators. To this end, the project partners have built a fully working Modular Supercomputing Architecture (MSA) prototype made up of three modules:

  • the Cluster module (installed at JSC in April 2019),
  • the Extreme Scale Booster module (installed in May 2020),
  • and the Data Analytics module (installed in July 2019).

The prototype system is hosted at the Jülich Supercomputing Centre. For detailed information about the DEEP-EST project, refer to the DEEP projects web pages at www.deep-projects.eu.

Hardware Characteristics

Cluster Module

  • 1 Rack with 50 nodes
  • Nodes: 2 x Intel(R) Xeon(R) Gold 6146 CPU @ 3.20GHz, 192 GB RAM
  • Processors: 100 x Intel(R) Xeon(R) Gold 6146 CPU @ 3.20GHz (1200 cores)
  • Overall peak performance: 45 TFLOPS
  • Main memory: 4 TB (aggregate)
  • Network:

    • InfiniBand EDR
    • 1 Gigabit Ethernet
  • Operating system: CentOS 7.5
  • Cooling: Direct liquid cooling
  • Vendor: Megware

Extreme Scale Booster (ESB)

  • 3 racks with 25 nodes each
  • Nodes: 1 x Intel(R) Xeon(R) Silver 4215 CPU, 48 GB DDR4, 1 x Nvidia(R) V100 32 GB HBM2 GPU
  • Processors: 75 x Intel(R) Xeon(R) Silver 4215 CPU @ 2.5GHz (600 cores total)
  • Accelerators: 75 x Nvidia Volta V100 GPU
  • Overall peak FP64 performance: 550 TFLOPS
  • Main memory: 6 TB (aggregate CPUs + Accelerators)
  • Network:

    • InfiniBand EDR
    • 1 Gigabit Ethernet
  • Operating system: CentOS 7.7
  • Cooling: Direct liquid cooling
  • Vendor: Megware
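The 550 TFLOPS figure can be roughly reproduced from the per-device peaks. The sketch below assumes a 7.0 TFLOPS FP64 peak per V100 (the PCIe variant) and one AVX-512 FMA unit per Silver 4215 core, i.e. 16 double-precision flops per cycle at the 2.5 GHz nominal clock; neither assumption is stated on this page.

```python
# Back-of-the-envelope estimate of the ESB peak FP64 performance.
GPUS = 75
GPU_PEAK = 7.0e12       # assumed FP64 peak per V100 (PCIe variant), flop/s

CORES = 600             # 75 nodes x 8 cores (Xeon Silver 4215)
FLOPS_PER_CYCLE = 16    # assumed: 8-wide AVX-512 FMA, one FMA unit per core
CLOCK = 2.5e9           # nominal clock in Hz

cpu_peak = CORES * FLOPS_PER_CYCLE * CLOCK   # 24 TFLOPS
total = GPUS * GPU_PEAK + cpu_peak           # 549 TFLOPS

print(f"{total / 1e12:.0f} TFLOPS")          # prints "549 TFLOPS"
```

This lands within a percent of the listed 550 TFLOPS; the exact number depends on the V100 variant and on which CPU clock is assumed.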

Data Analytics Module (DAM)

  • 1 rack with 16 nodes
  • Nodes: 2 x Intel(R) Xeon(R) CPU, 384 GB DDR4, 2 TB non-volatile DIMM (NVM), 1 x Nvidia(R) V100 32 GB HBM2 GPU, 1 x Intel(R) Stratix 10 FPGA with 32 GB DDR4
  • Processors: 32 x Intel(R) Xeon(R) Platinum 8260M Scalable Processor @ 2.4GHz (768 cores total)
  • Accelerators: 16 x Nvidia Volta V100 GPU, 16 x Intel(R) Stratix 10 FPGA
  • Overall peak FP64 performance: 170.9 TFLOPS
  • Main memory: 7.1 TB (aggregate CPUs + Accelerators)
  • Non-volatile memory: 32 TB (aggregate)
  • Network:

    • Extoll Tourmalet (100 Gb/s)
    • 40 Gigabit Ethernet
    • 1 Gigabit Ethernet
  • Operating system: CentOS 7.5
  • Cooling: Air cooling
  • Vendor: Megware
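The aggregate memory figures for the DAM follow directly from the per-node configuration; the short cross-check below is plain arithmetic over the per-node capacities listed above.

```python
# Aggregate DAM memory, computed from the per-node specs listed above.
NODES = 16
DDR4 = 384         # GB of DDR4 per node
GPU_HBM2 = 32      # GB of HBM2 on the V100
FPGA_DDR4 = 32     # GB of DDR4 attached to the Stratix 10 FPGA
NVM = 2000         # GB of non-volatile DIMM per node (2 TB)

main_memory_gb = NODES * (DDR4 + GPU_HBM2 + FPGA_DDR4)
nvm_gb = NODES * NVM

print(main_memory_gb / 1000)   # 7.168 -> the ~7.1 TB listed above
print(nvm_gb / 1000)           # 32.0  -> the 32 TB aggregate NVM
```

The small gap between 7.168 TB and the listed 7.1 TB is presumably rounding (7168 GiB is exactly 7 TiB).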

Login/Master and File System Nodes

  • 8 x DELL R640

    • Processor type: Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz
    • Processors: 2 (2 x 10 cores)
    • Main memory: 192 GB
    • Disk capacity for local scratch file system: 384 TB
    • Operating system: CentOS 7.6
  • File systems

    • GPFS file system from JUST ($HOME)
    • BeeGFS file system from local storage ($WORK)

  • Internet address: deep.fz-juelich.de

 

Photo: direct liquid cooled components of the DEEP-EST prototype system
Copyright: Forschungszentrum Jülich GmbH / Ralf-Uwe Limbach

Photo: InfiniBand network switches of the DEEP-EST prototype system
Copyright: Forschungszentrum Jülich GmbH / Ralf-Uwe Limbach

