Integrated Fabric is Key to Many Core Platforms
Intel’s 14-nm Knights Landing (KNL) Xeon Phi chip (see figure) now supports the OmniPath fabric. The chip has more cores (up to 72), on-package memory, and the ability to act as a host processor, but the integrated fabric support will probably be the most important aspect of the system: it changes how large clusters are constructed for high-performance computing (HPC) environments.
The 72-core Knights Landing Xeon Phi chip is available with built-in OmniPath fabric interfaces. The Groveport platform supports the Xeon Phi as a host processor.
OmniPath is Intel’s answer to InfiniBand and other fabrics used in HPC applications, and is part of Intel’s Scalable System Framework. OmniPath switches and adapters have been available for some time, but the adapters typically link to a host processor via PCI Express, which adds a small but noticeable amount of overhead and latency. The fabric is designed to handle clusters with more than 10,000 nodes. It incorporates features like adaptive routing, dispersive routing, dynamic lane scaling, and packet integrity protection, and it employs traffic-flow-optimization algorithms. Links run at 100 Gbits/s.
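Applications generally reach a fabric like this through MPI rather than any OmniPath-specific API; on an OmniPath cluster, an off-the-shelf MPI library carries messages such as the one below over the 100-Gbit/s links. The following is a minimal ping-pong sketch in generic MPI; nothing in the code itself is OmniPath-specific.

#include <mpi.h>
#include <stdio.h>

/* Minimal MPI ping-pong between ranks 0 and 1. On an OmniPath
 * cluster the messages traverse the fabric, but the code is
 * fabric-agnostic; run with at least two ranks (e.g., mpirun -n 2). */
int main(int argc, char **argv)
{
    int rank, peer;
    char buf[64] = "ping";

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = (rank == 0) ? 1 : 0;

    if (rank == 0) {
        MPI_Send(buf, sizeof buf, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, sizeof buf, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 0 got reply: %s\n", buf);
    } else if (rank == 1) {
        MPI_Recv(buf, sizeof buf, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send("pong", 5, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}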
The KNL Xeon Phi can run 288 threads on its 72 1.5-GHz cores. The cores are based on the Atom Silvermont architecture, with enhancements that include support for four threads per core, deeper out-of-order execution, scatter/gather support, and improved branch prediction. The system also has a high-bandwidth cache. The cores support AVX-512, the 512-bit version of Intel’s Advanced Vector Extensions, which lets a single instruction operate on 16 single-precision (or eight double-precision) values at a time. The cores are connected via a 2D mesh. They do not support virtual machines.
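To make the AVX-512 point concrete, here is a minimal sketch of a SAXPY-style loop written with AVX-512F compiler intrinsics. The function name saxpy_avx512 is illustrative, and the loop assumes n is a multiple of 16 for brevity; compile with an AVX-512-capable compiler flag (e.g., -mavx512f on gcc/clang).

#include <immintrin.h>

/* y[i] = a * x[i] + y[i], processing 16 single-precision
 * lanes per instruction with 512-bit registers. */
void saxpy_avx512(float a, const float *x, float *y, int n)
{
    __m512 va = _mm512_set1_ps(a);           /* broadcast a to all 16 lanes */
    for (int i = 0; i < n; i += 16) {
        __m512 vx = _mm512_loadu_ps(x + i);  /* load 16 floats from x */
        __m512 vy = _mm512_loadu_ps(y + i);  /* load 16 floats from y */
        vy = _mm512_fmadd_ps(va, vx, vy);    /* fused multiply-add: a*x + y */
        _mm512_storeu_ps(y + i, vy);         /* store 16 results back to y */
    }
}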
The chips have up to 16 Gbytes of Multi-Channel DRAM (MCDRAM), a high-bandwidth memory (HBM) from Micron that delivers 490 Gbytes/s of bandwidth. Six memory channels can access up to 384 Gbytes of external DDR4 memory. Two integrated OmniPath ports provide 50 Gbytes/s of bidirectional fabric connectivity. The chip plugs into the LGA 3647 “Socket P.” Internally, the OmniPath ports are linked to the core mesh via a pair of x16 PCI Express links.
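When the MCDRAM is configured in flat mode it appears as a separate NUMA node, and code can allocate from it explicitly. Below is a minimal sketch using the open-source memkind library’s hbwmalloc interface (link with -lmemkind); the buffer size is arbitrary. In cache mode, by contrast, the MCDRAM is transparent and no code changes are needed.

#include <hbwmalloc.h>   /* memkind's high-bandwidth-memory interface */
#include <stdio.h>

int main(void)
{
    size_t n = 1 << 20;
    double *buf;

    /* hbw_check_available() returns 0 when high-bandwidth
     * memory (MCDRAM in flat mode) can be allocated from. */
    if (hbw_check_available() != 0) {
        fprintf(stderr, "no high-bandwidth memory exposed\n");
        return 1;
    }

    buf = hbw_malloc(n * sizeof *buf);   /* allocate from MCDRAM */
    if (!buf)
        return 1;

    for (size_t i = 0; i < n; i++)       /* touch pages so they are mapped */
        buf[i] = (double)i;

    printf("buf[n-1] = %f\n", buf[n - 1]);
    hbw_free(buf);                       /* release the MCDRAM allocation */
    return 0;
}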
The Xeon Phi will be taking on high-end GPGPUs like Nvidia’s Tesla P100 in workloads such as deep learning. The Tesla P100 uses 160-Gbyte/s NVLink connections to tie multiple chips together.
The Xeon Phi is supported by Intel’s HPC Orchestrator, which is based on OpenHPC. It integrates provisioning tools, resource-management applications, and development tools, and it provides testing and validation support for tools like Intel’s Parallel Studio XE Cluster Edition 2016 suite.
This latest fabric-based platform will definitely make things more interesting in the HPC space, and will likely find its way into high-performance embedded computing (HPEC) applications.