(Image courtesy of Intel)

Intel Strikes Back Against AMD With New Ice Lake Server CPUs

April 9, 2021
All the largest cloud vendors plan to roll out services based on the new server chips, which integrate up to 40 cores built on Intel's 10-nm node and its new Sunny Cove architecture.

Intel rolled out its new lineup of Xeon Scalable server processors, betting that its customers will choose its built-in artificial intelligence and security acceleration over AMD's Epyc CPUs.

Intel launched its new generation of Xeon Scalable CPUs, code-named Ice Lake—its first line of server processors based on the 10-nm node that has been plagued by prolonged delays. Intel is trying to counter AMD's Epyc CPUs with the new Xeon CPUs, which bring huge generational upgrades over its previous server chip line, Cascade Lake, built on a 14-nm node.

Navin Shenoy, VP and GM of Intel's data center business, said the chips promise 46% more performance on average than their predecessors for data center workloads. All of the largest cloud industry players plan to roll out services based on the chips, which integrate up to 40 cores—up from 28 cores in Cascade Lake—and new accelerators for AI and cryptography.

The Ice Lake CPUs incorporate up to 40 cores, 60 MB of shared cache, and 64 lanes of PCIe Gen 4, with varying frequency and power envelopes across the lineup. The chips also bring improvements in memory capacity and bandwidth versus Cascade Lake: they support 16 DDR4 DIMM slots per socket, giving them access to more memory, which eases one of the main performance bottlenecks in the data center.

Intel said it will supply the new chips to more than 50 unique OEMs, including Cisco, Lenovo, Supermicro, Dell and HPE, to be used in more than 250 server platforms. The company also said its new Xeon CPUs will be used by 15 global leaders in the network infrastructure market and 20 research labs and service providers in the high-performance computing (HPC) space.

The announcement came a month after AMD launched its latest generation of Epyc CPUs for the data center, code-named Milan, touting them as the fastest server processors in the world.

To strike back against AMD, Intel has focused on bringing more flexibility to its chips. Lisa Spelman, who leads Intel's server processor and memory group, said Intel tailored the Ice Lake CPUs to speed up a wide range of workloads, from the cloud to the network to the edge (Fig. 1). The chips have more levers that customers can pull to improve performance, including new instructions and other built-in features to handle AI, 5G, and cryptography in hardware.

"It could not come at a more crucial time,” said DD Dasgupta, the vice president of product management in the cloud and compute unit at Cisco, one of Intel's major OEM customers.

Intel, which commands around 90% market share in central processing chips for data centers, has been fighting delays in its development of the 10-nm node for years. At the start of the year, it appointed Pat Gelsinger as chief executive to reboot its strategy and begin reclaiming its lead over AMD in chip design and over TSMC in process technology.

As part of his strategy, which he announced last month along with $20 billion of investment in Intel's fabs, he aims to revive Intel's famed “tick-tock” development model. For years, Intel alternated improvements in its process technology with overhauls of its microarchitectures: every "tick" represented a shrinking of the process node, and each "tock" introduced a new chip architecture.

The Silicon Valley giant toiled for years to improve the production process to the point where it would be economically viable to make its most advanced server chips on the 10-nm node. Those struggles left it behind its top manufacturing rival, TSMC, the world's largest contract chip manufacturer, which pulled ahead with its 7-nm node when volume production started in 2018.

These delays also opened the door for AMD to roll out new server processors, based on TSMC's 7-nm node, which AMD contends are now more advanced than Intel's chips. AMD's Epyc CPUs have been prying market share from Intel's Xeon Scalable CPUs in the data-center market in recent years, growing from around 1% to more than 7% at the end of 2020.

Intel is aiming to fight back against AMD with its latest generation of Xeon CPUs. The company has previously said that its 10-nm technology is on par with TSMC's 7-nm node.

Ice Lake's performance gains are the result of both the "tick" of shrinking the process node—the 10-nm technology—and the "tock" of a new CPU design—called Sunny Cove. Intel said the Sunny Cove cores pump out 20% more instructions per clock (IPC) than Cascade Lake's cores. IPC is a measure of how much work a core completes every cycle, which shapes how much data the CPU can process at once.
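As a rough rule of thumb (a generic model, not a formula Intel published), per-core throughput is the product of IPC and clock frequency, so a 20% IPC gain at an equal clock works out to roughly 20% more per-core throughput:

```python
# Rough per-core throughput model: instructions/second = IPC * clock.
# Generic rule of thumb, not an Intel formula; figures are illustrative.
def core_throughput(ipc, clock_hz):
    return ipc * clock_hz

cascade = core_throughput(ipc=1.00, clock_hz=3.0e9)  # normalized baseline
ice = core_throughput(ipc=1.20, clock_hz=3.0e9)      # +20% IPC, same clock
print(f"Per-core speedup: {ice / cascade:.2f}x")     # prints 1.20x
```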

Intel's new Sunny Cove cores add larger caches, so cores can fetch data faster during processing. The Ice Lake chips integrate 48 KB of L1 data cache per core, the memory closest to the CPU's execution units, up 50% from the previous generation. Intel enlarged the secondary L2 cache by 25% to 1.25 MB per core, and it also grew the L3 cache shared by all the cores to 1.5 MB per core.
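Scaling those per-core figures to the 40-core flagship matches the 60 MB of shared cache quoted on the spec sheet; a quick sanity check using the numbers above:

```python
# Sanity-check Ice Lake's cache figures by scaling the per-core sizes
# (48 KB L1 data, 1.25 MB L2, 1.5 MB L3 slice) to the 40-core flagship.
CORES = 40
L1D_KB, L2_MB, L3_MB = 48, 1.25, 1.5  # per core

print(f"Total L1 data: {CORES * L1D_KB / 1024:.2f} MB")  # 1.88 MB
print(f"Total L2:      {CORES * L2_MB:.0f} MB")          # 50 MB, private
print(f"Shared L3:     {CORES * L3_MB:.0f} MB")          # 60 MB, shared
```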

The server chip at the top of the performance stack, the Xeon Platinum 8380, packs 40 cores that can run up to 80 threads at the same time and adds 60 MB of shared cache, up from less than 40 MB in its predecessor, the 28-core Xeon Platinum 8280. The cores run at a base clock of 2.4 GHz and can boost up to 3.4 GHz on a single core, or 3.0 GHz across all cores.

The server processor consumes up to 275 W of power under maximum load, up from 205 W in the previous generation of Intel's Cascade Lake CPUs, meaning that it also releases more heat.

There are also huge improvements in I/O. Intel said the chips pull information faster from the storage, networking, and server accelerator cards attached to the 64 lanes of PCIe Gen 4 per socket, up from 48 lanes of PCIe Gen 3 in the Cascade Lake server chips (Fig. 2). AMD said its Milan CPUs contain up to 128 PCIe Gen 4 lanes for a single socket.

The PCIe Gen 4 lanes double the data-transfer speeds of the PCIe Gen 3 slots in the previous generation of Cascade Lake CPUs, giving its customers roughly 32 GB/s in each direction over 16 lanes. Intel said the Ice Lake CPUs can also communicate with each other within the same server more rapidly than ever, thanks to recent improvements in its proprietary Ultra Path Interconnect (UPI) links.
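The doubling follows from the standard PCIe link rates (these are PCI-SIG figures, not Intel-specific): Gen 3 signals at 8 GT/s per lane and Gen 4 at 16 GT/s, both with 128b/130b encoding:

```python
# Per-direction PCIe bandwidth from standard link rates:
# Gen 3 = 8 GT/s per lane, Gen 4 = 16 GT/s, both 128b/130b encoded.
def pcie_gb_per_s(gt_per_s, lanes):
    encoding = 128 / 130                     # usable bits per raw bit
    return gt_per_s * encoding * lanes / 8   # bits to bytes

print(f"Gen 3 x16: {pcie_gb_per_s(8, 16):.1f} GB/s")   # ~15.8 GB/s
print(f"Gen 4 x16: {pcie_gb_per_s(16, 16):.1f} GB/s")  # ~31.5 GB/s
```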

Moreover, Intel reduced the price tags on the new generation of chips. The Xeon Platinum 8380 costs $8,099, compared to the starting price of $10,099 for the Xeon Platinum 8280.

Intel said the chips also close the gap in memory bandwidth with AMD. The chips integrate eight DDR4 memory channels running at 3,200 MT/s, boosting memory bandwidth by more than 30% over their predecessors. The server chips can be fitted with 16 DDR4 DIMMs of 256 GB each, resulting in system memory of up to 4 TB, up from 1 TB.
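Working the math on those figures (assuming eight 64-bit DDR4-3200 channels, as the specs indicate):

```python
# Peak DRAM bandwidth and capacity from the quoted configuration:
# eight DDR4-3200 channels (64 bits wide) and 16 DIMM slots per socket.
CHANNELS = 8
TRANSFERS_PER_S = 3.2e9  # DDR4-3200 moves 3,200 MT/s per channel
BYTES_PER_TRANSFER = 8   # 64-bit channel width

bandwidth = CHANNELS * TRANSFERS_PER_S * BYTES_PER_TRANSFER
print(f"Peak bandwidth: {bandwidth / 1e9:.1f} GB/s")  # 204.8 GB/s

DIMMS, GB_PER_DIMM = 16, 256
print(f"Max DRAM capacity: {DIMMS * GB_PER_DIMM // 1024} TB")  # 4 TB
```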

Intel said its customers can use a combination of DDR4 DRAM and its Optane persistent memory, based on 3D XPoint, to increase the capacity to up to 6 TB. The chips are designed for single- and dual-socket servers.

The Ice Lake CPUs represent one of Intel's largest generational leaps in performance in years. But industry analysts warn that AMD pulled ahead with a "clean sweep" in general-purpose performance: its Milan server CPUs combine up to 64 cores with 256 MB of shared cache and beat out Intel's Cascade Lake CPUs on common data-center workloads.

Intel, trying to differentiate the Ice Lake CPUs from rivals, said the chips run faster than AMD's Milan CPUs when it comes to workloads that can take advantage of its onboard AI and other special-purpose processing features. These features come in the form of AVX-512 and other new instructions only found in the Ice Lake CPUs for 5G, cloud, and other workloads (Fig. 3). 

Intel said Ice Lake is the only server CPU on the market with integrated AI acceleration, called Deep Learning Boost, which was introduced in its Cascade Lake CPUs and reduces the number of instructions the CPU needs to carry out AI chores. According to Intel, the hardware and software upgrades boost performance by up to 74% for AI inference over its Cascade Lake lineup.
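Deep Learning Boost is built around the AVX-512 VNNI instructions, which fold the widen, multiply, and accumulate steps of an INT8 dot product into a single instruction. A minimal NumPy sketch of the arithmetic pattern being accelerated (illustrative only; this Python code does not itself invoke VNNI, and the actual instruction differs in signedness details):

```python
import numpy as np

# The INT8 dot product at the heart of quantized AI inference. VNNI
# performs the widen-multiply-accumulate pattern below in one
# instruction per group of element pairs; NumPy just shows the math.
rng = np.random.default_rng(seed=0)
a = rng.integers(-128, 128, size=1024, dtype=np.int8)
b = rng.integers(-128, 128, size=1024, dtype=np.int8)

# Widen to 32 bits before accumulating so the sum cannot overflow.
acc = int(np.dot(a.astype(np.int32), b.astype(np.int32)))
print(acc)
```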

Intel said it uses the integrated AI instructions to pump out up to 1.5 times more performance for AI workloads than AMD's most advanced Epyc CPU, and 1.3 times more than Nvidia's flagship A100 GPU. That improves the CPU's ability to handle AI locally rather than offloading those workloads to GPUs, FPGAs, or other accelerators in data centers.

Additionally, Intel is trying to stand out from rivals by building in safeguards to block hackers from accessing secrets in the data center or maliciously altering data in the server's memory.

Intel incorporated its Software Guard Extensions (SGX) technology in Ice Lake, reinforcing the server's internal defenses. The SGX technology turns parts of the server's memory into secure enclaves that isolate data and other secrets in the CPU. Data in these zones is inaccessible to other software and applications running on the CPU. That helps block attacks that attempt to steal data by hijacking the operating system (OS), BIOS, or other software to pry into the server's memory.

The flagship and other high-end chips in the family reserve up to 512 GB of memory for secure enclaves. AMD's chips use a rival "confidential computing" technology that competes with Intel's SGX (Fig. 4).

Intel upgraded the underlying architecture of the server chips to run cryptography faster, reducing the performance penalty that tends to come with encryption-heavy workloads. The firm is bringing other features into the fold, including what it calls "total memory encryption" technology, which can encrypt the entire memory space to protect against physical attacks.
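For a sense of the workload class in question, here's a short sketch of bulk authenticated encryption using Python's third-party cryptography package; whether it actually lands on Ice Lake's new crypto instructions depends on the underlying OpenSSL build, so treat the hardware speedup as an assumption rather than a given:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Bulk AES-GCM: the encryption-heavy workload class that Ice Lake's
# upgraded crypto hardware targets. Whether this stack uses the new
# instructions depends on the OpenSSL build underneath.
key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)  # 96-bit nonce, unique per message

plaintext = b"\x00" * (1 << 20)  # 1 MB of data to encrypt
ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
print(f"Encrypted {len(plaintext)} bytes -> {len(ciphertext)} bytes")
```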

The chips, according to Intel, are also ideal for workloads in enormous cloud data centers. The company is trying to regain goodwill with Amazon, Microsoft, Google, and other cloud giants that are increasingly looking to replace the general-purpose chips, largely designed by Intel, in their servers. They are instead investing in internally designed chips based on Arm's CPUs.

To further differentiate itself, Intel is packaging its central processing chips with its memory, networking, and other chips to bring better performance and cost efficiencies to data centers. Intel is complementing its Xeon Scalable CPUs with its wide range of other server chips, such as its Optane memory, NAND storage, Ethernet networking ASICs, and programmable FPGAs.

Shenoy said that is one of Intel's unique advantages, improving its competitiveness over rivals that have more limited product portfolios or are unable to bundle them together as closely. Intel expanded its lineup of AI chips for servers last year with its purchase of Habana Labs, and it also offers networking switch chips for data centers.

“Intel is uniquely positioned with the architecture, design, and manufacturing to deliver the breadth of intelligent silicon and solutions our customers demand,” he said in a statement.

The Ice Lake lineup spans more than 40 SKUs: a third sit in the top performance bracket (8 to 40 cores clocked at up to 3.6 GHz, at 140 to 270 W), 10 target scalable performance (8 to 32 cores at base clock frequencies of 2.2 to 2.8 GHz, at 105 to 205 W), and the rest are designed specifically for the cloud, networking, and other server workloads.

Intel continues to dial in production of its Ice Lake CPUs. The semiconductor giant said it shipped more than 200,000 of the server chips to customers in the first quarter of 2021.

About the Author

James Morra | Senior Editor

James Morra is a senior editor for Electronic Design, covering the semiconductor industry and new technology trends, with a focus on power management. He also reports on the business behind electrical engineering, including the electronics supply chain. He joined Electronic Design in 2015 and is based in Chicago, Illinois.
