
Intel Trying to Repel Rivals in Data Center Business

Aug. 15, 2018

Holding more than 95 percent of the market, Intel’s data center business has long looked bulletproof. But the company seems increasingly vulnerable as it continues to struggle with 10-nanometer technology and rivals like Nvidia and Advanced Micro Devices threaten to erase its manufacturing lead—or at least make it less important.

Intel is still building server chips with 14-nanometer technology because of manufacturing problems with the 10-nanometer process, which would let it carve more transistors onto each slab of silicon. The delay could open the door for AMD to muscle into the market by releasing chips based on 7-nanometer technology ahead of Intel. The company also risks being marginalized by Nvidia, which has been faster to pivot into artificial intelligence.

The threat adds to the challenges facing Intel’s next chief executive, who will fill the position after Brian Krzanich’s dismissal over a relationship with a subordinate. The company is trying to rebuild goodwill with customers in the aftermath of Meltdown and Spectre. Intel is also struggling to counter the growth of graphics chips and other accelerators that take over artificial intelligence and other tasks from its Xeon processors.

Despite the issues Intel has grappled with over the last few years, Navin Shenoy, vice president and general manager of the data center business, painted a more favorable picture of the company’s future in a presentation last week at Intel’s Santa Clara, California, headquarters. “I don’t talk to customers about nanometers,” he said. “Customers care about delivered system performance on their workloads.”

In recent years, Intel has taken steps to show that it can continue growing by making small yet significant product changes over time. The company, founded more than 50 years ago, is focused not only on etching more transistors onto its chips but also the memory and other chips surrounding them. And it is increasingly using software to close the artificial intelligence gap with Nvidia.

That represents the new normal for Intel. The company said it would introduce a new generation of server chips every year for the foreseeable future, getting off the ground with Cascade Lake before the end of the year. These chips were redesigned to protect against the Meltdown and Spectre vulnerabilities, with an eye toward eliminating the performance losses associated with software patches.

Following Cascade Lake is Cooper Lake, which the company said would be the last generation of server chips built with 14-nanometer technology. Cooper Lake will use the same architecture as the first generation of 10-nanometer server chips, codenamed Ice Lake, due in 2020. The company said that would allow customers to quickly upgrade from the short-lived Cooper Lake to Ice Lake.

Intel’s mishandling of its manufacturing lead could result in lost customers. AMD’s server chips based on 7-nanometer technology could become available more than a year before Intel releases processors based on competitive 10-nanometer technology. Many industry analysts expect AMD to steal data center market share, cutting into Intel’s dominance and profitability.

Even though Intel is prepared to lose customers to its archrival—and has acknowledged it could lose as much as 20 percent market share—the company is still a moneymaking machine. Over the last year, the data center business reported revenues of $21.2 billion from expensive server chips, which cost thousands of dollars each. Profits came to around $10.6 billion, more than AMD’s total revenue of $6.5 billion over the same period.

While Intel’s management is putting on a brave face—“We are confident we will maintain our performance leadership in 2019,” Shenoy said—shareholders are sweating over its manufacturing woes. Last month, Intel said that data center revenues in the latest quarter grew 26 percent from a year earlier to $5.5 billion. But the results still undershot Wall Street estimates, driving shares down more than five percent. Some analysts say they are still too high.

Intel is depending on sales to data centers to counterbalance the slowing personal computer market. Shenoy said that the market for Intel’s server hardware is growing faster than he anticipated, and the company has raised its estimate of that market from $70 billion to $90 billion. In addition, Intel’s data center sales have increased more than five percent this year after falling slightly, on average, over the previous four years.

The performance has been bolstered by Intel’s push into artificial intelligence, which typically runs in the cloud. Intel has enhanced its server chips in recent years to better handle both training and inferencing tasks. Last year, the result was more than $1 billion in sales of Xeon processors for artificial intelligence jobs, Shenoy said.  “The step-function increase in performance led to a meaningful business impact for us,” he added.

It was the first time Intel disclosed where it stands in the market for artificial intelligence chips, which Nvidia currently dominates. The company estimates the market will grow from $2.5 billion to $10 billion by 2022. Complicating the business for Intel is that customers including Google and Facebook are building custom chips for both training artificial intelligence models and running them. That could reduce Intel to a supporting role.

In recent years, the company has shown a willingness to adjust to the mounting data center competition. Last year, Intel created a new unit focused on artificial intelligence, headed by Naveen Rao, a founder and former chief executive of Nervana Systems. The company is set to release a custom neural network processor, called Spring Lake, in 2019. Intel has also developed a software tool called nGraph that allows TensorFlow or any other machine learning environment to be used with any Intel processor.
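To make the idea concrete, the sketch below shows the general pattern a framework bridge like nGraph aims for: a model written against a stock framework, with execution redirected to Intel silicon behind the scenes. The module name ngraph_bridge and its enable-on-import behavior are assumptions about how such a bridge is packaged, not confirmed details of Intel’s API.

```python
# Hedged sketch of the framework-bridge pattern described above.
# ASSUMPTION: the bridge ships as an importable module named "ngraph_bridge"
# that reroutes supported TensorFlow ops to nGraph; consult Intel's own
# documentation for the real package name and API.
import numpy as np
import tensorflow as tf
import ngraph_bridge  # hypothetical import; no other code changes needed

# An ordinary TensorFlow 1.x-style graph with nothing nGraph-specific in it.
x = tf.placeholder(tf.float32, shape=[None, 784], name="pixels")
w = tf.Variable(tf.random_normal([784, 10]), name="weights")
logits = tf.matmul(x, w)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # The bridge is expected to compile and run the matmul through nGraph,
    # letting the same script target whichever Intel processor is present.
    out = sess.run(logits, feed_dict={x: np.zeros((1, 784), np.float32)})
    print(out.shape)  # (1, 10)
```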

Intel said that Cascade Lake would introduce new Vector Neural Network Instructions, which can be used to identify faces in photographs and handle other 8-bit inference tasks faster than the current Skylake chips. The company said that Cooper Lake would support new technology that compacts 32-bit floating point numbers into 16 bits by lowering the precision, speeding up training. But it still seems unlikely that Intel’s chips can handle training faster than Nvidia’s.
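The 16-bit format described here, commonly known as bfloat16, keeps a 32-bit float’s sign and exponent bits and throws away most of the mantissa, halving memory and bandwidth at the cost of precision. The NumPy sketch below only illustrates that conversion; it is not Intel’s hardware path or any Intel library.

```python
# Illustrative sketch of the float32 -> bfloat16 idea (truncation form).
import numpy as np

def float32_to_bfloat16_bits(x: np.ndarray) -> np.ndarray:
    """Keep the upper 16 bits (sign, exponent, top of mantissa) of each float32."""
    bits = x.astype(np.float32).view(np.uint32)
    return (bits >> 16).astype(np.uint16)

def bfloat16_bits_to_float32(b: np.ndarray) -> np.ndarray:
    """Re-expand stored bfloat16 bit patterns to float32 for arithmetic."""
    return (b.astype(np.uint32) << 16).view(np.float32)

weights = np.array([3.14159, -0.00123, 1.0e8], dtype=np.float32)
compact = float32_to_bfloat16_bits(weights)    # 2 bytes per value instead of 4
restored = bfloat16_bits_to_float32(compact)   # approximate values come back
print(weights)
print(restored)  # e.g. 3.14159 becomes roughly 3.140625
```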

The company is plotting other changes. Intel is building memory controllers into Cascade Lake and Cooper Lake to support Optane persistent memory, which is based on the company’s 3D XPoint technology and could challenge DRAM as main memory in servers. Even though the Optane modules are slower than DRAM, they can store significantly more information and, unlike DRAM, hold onto it after powering down. They are also cheaper, the company said.

Today, preserving information outside main memory means hauling it over to flash storage connected through the PCIe interface. Intel’s Optane modules instead plug into the same memory bus that traditional DRAM uses to reach the processor, so the trip takes less time, goosing performance. Intel is winning over customers including Huawei, Tencent and Google, all of which are experimenting with the new memory in data centers.
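To see why sitting on the memory bus matters to software, consider the sketch below: with a direct-access (DAX) style mount, an application can map persistent memory into its address space and update it with ordinary loads and stores instead of block I/O. The /mnt/pmem path is an illustrative assumption, and real deployments would typically rely on Intel’s persistent-memory libraries for crash-consistent writes.

```python
# Hedged sketch: treating persistent memory like memory rather than storage.
# ASSUMPTION: a persistent-memory device is mounted with DAX at /mnt/pmem.
import mmap
import os

path = "/mnt/pmem/table.bin"   # hypothetical file on a DAX-mounted filesystem
size = 4096

fd = os.open(path, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, size)                # reserve space backed by the pmem device
buf = mmap.mmap(fd, size)             # map it; loads/stores go over the memory bus

buf[0:5] = b"hello"                   # an ordinary in-memory write, no block I/O
buf.flush()                           # ask the OS to make the update durable
buf.close()
os.close(fd)
```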

Intel is also trying to reduce communication bottlenecks within data centers and within its chips. The company has been sampling a new network interconnect, Cascade Glacier, which will be available in the first quarter of next year and is suited to connecting servers and directing the information traveling between them. Intel also announced it would put silicon photonics in future products to reduce the wiring used in networking and switching, which can waste power.

The company also said that it would give customers more room to customize chips with different speeds and attached memory. Shenoy said that around half of the server chips Intel sells are “off-the-roadmap” versions that have been adjusted to handle specific workloads. Intel also said it would allow more customers to order semi-custom chips. Current customers include Verizon’s Oath and Tencent’s WeChat.

The company is also working to give customers a broader range of chips to choose from. Intel has expanded into field-programmable gate arrays—more commonly called FPGAs—that can be reshaped to handle different tasks, and application-specific integrated circuits—more widely known as ASICs—that target specific workloads. It also enlisted Raja Koduri, formerly AMD’s chief graphics architect, to bootstrap a new business building graphics chips.

Intel also recently hired Jim Keller, who helped develop custom chips for Apple before leading the development of AMD’s Zen architecture, to take charge of the company’s silicon engineering. Over the three months since he was hired, Keller has been focused on getting Intel’s latest products out the door. Speaking at the company’s headquarters last week, he sang the praises of Intel’s engineering culture and noted the challenges facing it.

“The technology is so good but getting it out is really complicated and hard,” he told analysts.

About the Author

James Morra | Senior Editor

James Morra is a senior editor for Electronic Design, covering the semiconductor industry and new technology trends, with a focus on power electronics and power management. He also reports on the business behind electrical engineering, including the electronics supply chain. He joined Electronic Design in 2015 and is based in Chicago, Illinois.
