DEEP Booster

For the Booster, the project has developed two distinct prototypes:

  • A 384-node system built by Eurotech from custom-engineered dual-node cards in the Aurora blade form factor, with an aggregate peak performance of around 500 TFlop/s (AURORA Booster)
  • A smaller 32-node prototype built by the University of Heidelberg and Megware, based on the latest ASIC implementation of EXTOLL (GREEN ICE Booster)

Both Booster prototypes benefit from the high throughput of Intel Xeon Phi co-processors and from the performance of the novel, direct-switched 3D torus interconnect developed by EXTOLL. The Booster interconnect was chosen to ensure scalability up to Exascale levels and to best match the spatial domain decomposition schemes commonly used by scalable HPC codes. The former prototype uses an FPGA implementation of the EXTOLL interconnect and is in 24×7 production use; the latter leverages the brand-new ASIC implementation of EXTOLL and experiments with immersive liquid cooling technology.
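To illustrate why a 3D torus matches spatial domain decomposition well, the sketch below maps linear node ranks onto a periodic 3D grid and computes each node's six nearest neighbours, which is exactly the communication pattern a halo exchange produces. The 8×8×6 grid shape (384 nodes) and the rank ordering are illustrative assumptions, not the prototype's actual topology.

```python
# Hedged sketch: ranks on a 3D torus and their nearest neighbours,
# as used by spatial domain decomposition (halo exchange).
# The 8x8x6 shape is an illustrative choice for 384 nodes.

def rank_to_coord(rank, dims):
    """Convert a linear rank to (x, y, z) coordinates on the torus."""
    x, y, _ = dims
    return (rank % x, (rank // x) % y, rank // (x * y))

def coord_to_rank(coord, dims):
    """Convert (x, y, z) back to a linear rank, wrapping periodically."""
    x, y, z = dims
    cx, cy, cz = coord[0] % x, coord[1] % y, coord[2] % z
    return cx + cy * x + cz * x * y

def torus_neighbours(rank, dims):
    """Return the six nearest-neighbour ranks (+/-1 in each dimension)."""
    cx, cy, cz = rank_to_coord(rank, dims)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    return [coord_to_rank((cx + dx, cy + dy, cz + dz), dims)
            for dx, dy, dz in offsets]

dims = (8, 8, 6)  # 8 * 8 * 6 = 384 nodes
print(torus_neighbours(0, dims))
```

Because every node talks only to its six torus neighbours, the direct-switched links carry the bulk of the traffic without crossing a central switch, which is what makes the topology attractive at scale.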

AURORA Booster

The prototype developed by Eurotech integrates two Booster nodes per Aurora form factor blade. Each blade carries two Altera Stratix V FPGAs (acting as EXTOLL NICs), two Intel Xeon Phi 7120X cards, one board management controller, several voltage, current and temperature sensors, and support hardware. Each FPGA connects to one Intel Xeon Phi via PCI Express and provides up to seven EXTOLL links. Eight blades connect to each other and to one Booster Interface blade via a backplane carrying EXTOLL signals, power, Ethernet and control lines, resulting in a fully integrated system with a complete remote-management software stack.

The Booster Interface blade uses an Intel Xeon E3 CPU connected to an Avago PCI Express switch, which in turn connects to one Altera Stratix V FPGA implementing the EXTOLL NIC and one Mellanox ConnectX-3 InfiniBand NIC. Bridging traffic between the two networks does not require CPU intervention.

Both kinds of blades use Eurotech’s Aurora direct liquid cooling scheme, based on aluminum coldplates tailored to precisely match the height profile of the blade components. Connection to the liquid distribution is via quick disconnects sourced from aerospace contractors, enabling hot-plugging of blades.

The full Booster system comprises 24 chassis fitting into the front and back halves of a 23-inch rack. EXTOLL links are carried via the backplanes and 12-pair Molex copper cables.

DEEP Booster node: two Xeon Phi nodes connected to EXTOLL, interconnected over a backplane

GREEN ICE Booster

To demonstrate the performance impact of the new ASIC implementation of the EXTOLL interconnect, the University of Heidelberg created an alternative Booster prototype based on the latest GREEN ICE technology. In lieu of a tightly integrated Booster board, a passive PCI Express backplane connects eight Intel Xeon Phi 7120D cards and eight EXTOLL TOURMALET NICs in a pairwise manner. Thanks to EXTOLL technology, the Booster nodes can be scaled independently of the scalar nodes. This 32-node system yields a peak performance of 38.4 TFlop/s.
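The quoted peak figure can be checked with a back-of-the-envelope calculation. The per-card value of roughly 1.2 TFlop/s double-precision peak for a Xeon Phi 7120D is an assumption here, taken from commonly cited specifications rather than from the text above.

```python
# Hedged sanity check of the system peak, assuming ~1.2 TFlop/s
# double-precision peak per Xeon Phi 7120D card (an assumed figure).

nodes = 32
tflops_per_node = 1.2        # assumed DP peak of one 7120D card
peak = nodes * tflops_per_node
print(f"{peak:.1f} TFlop/s")  # matches the 38.4 TFlop/s quoted above
```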

To ease integration, four assembled backplanes are completely immersed in a basin of 3M Novec®-649 fluid, which also contains the required power supplies and the management CPU. In operation, the heat produced by these components evaporates the Novec fluid (which has a boiling point of 49 °C). The Novec vapor is then cooled by loops of special copper pipes (with maximized surface area) carrying water as the cooling liquid. The condensed vapor drops back into the basin.
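The two-phase cooling loop can be quantified with a simple energy balance: the fluid evaporated per second equals the dissipated power divided by the latent heat of vaporization. Both numbers below are illustrative assumptions (a nominal heat load and an approximate latent heat for Novec 649), not project figures.

```python
# Hedged estimate of the evaporation rate in the immersion basin.
# Both inputs are assumptions for illustration only.

power_w = 10_000                # assumed heat load of one basin, W
latent_heat_j_per_kg = 88_000   # approx. latent heat of Novec 649, J/kg

evap_rate_kg_per_s = power_w / latent_heat_j_per_kg
print(f"~{evap_rate_kg_per_s * 3600:.0f} kg of fluid evaporates per hour")
```

Since the condensed vapor drips straight back into the basin, this mass flow circulates passively; no pump is needed on the fluid side.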

System management is performed by a Raspberry Pi system via I2C connections to the backplane and power supplies. The Booster Interface is implemented with standard Intel Xeon server boards in air-cooled chassis.

The EXTOLL links are carried via copper cables that attach to the NICs using standard HDI6 connectors.

GREEN ICE Booster
