What you’ll learn:
- Why Moore’s Law is running out of steam and FPGAs are gaining momentum.
- Find out the benefits of using FPGAs.
- See how FPGAs are being used in the data center—for example, in SmartNICs—in ways that don’t require intimate knowledge of FPGA programming.
Part 1 of this two-part series considered what the end of Moore’s Law means for organizations struggling to manage the data deluge. Cloud service providers noticed the demise of Moore’s Law early on and acted promptly to address declining performance. They have looked at many technologies, such as GPUs, NPUs, and even building their own ASIC chips. However, another alternative is emerging, one that could be even more versatile and powerful than any acceleration option currently available: FPGAs.
The technology has been around for decades, but the ubiquity of this highly configurable technology, coupled with proven performance in a variety of acceleration use cases, is coming to cloud service providers’ attention. Could this be the ultimate answer to bridging the gap between the compute needs of the future and the flattening performance curve of server CPUs?
FPGAs Gain Steam
“Everything old is new again” isn’t often true in the technology world, but it can be said of field-programmable gate arrays (FPGAs), which have been around for more than 40 years. They have traditionally been used as an intermediate step in the design of application-specific integrated circuit (ASIC) semiconductor chips. The advantage of FPGAs is that they require the same tools and languages as those used to design semiconductor chips, but it’s possible to rewrite or reconfigure the FPGA with a new design on the fly. The disadvantage is that FPGAs are bigger and more power-hungry than ASICs.
As the cost of producing ASICs increased, however, it became harder and harder to justify the investment in ASIC production. At the same time, FPGAs became more efficient and cost-competitive. It therefore made sense to stop at the FPGA stage and release the product based on the FPGA design.
Now, many industries take advantage of FPGAs, particularly in networking and cybersecurity equipment, where they perform specific hardware-accelerated tasks.
In 2010, Microsoft Azure started looking into using FPGA-based SmartNICs in standard servers to offload compute- and data-intensive tasks from the CPU to the FPGA. Today, these FPGA-based SmartNICs are used broadly throughout Microsoft Azure’s data centers, supporting services like Bing and Microsoft 365.
When it became clear that FPGAs were a legitimate option for hardware acceleration, Intel bought Altera, the second-largest producer of FPGA chips and development software, for $16.7 billion in 2015. Since then, several cloud companies have added FPGA technology to their service offerings, including AWS, Alibaba, Tencent, and Baidu, to name a few.
The Many Benefits of FPGAs
FPGAs are attractive for several reasons. One is that they offer a good balance among versatility, power, efficiency, and cost. Another is that FPGAs can be used for virtually any processing task. It’s possible to implement parallel processing on an FPGA, but other processing architectures can be implemented as well.
Yet another attraction of FPGAs is that details such as data-path widths and register lengths can be tailored specifically to the needs of the application. Indeed, when designing a solution on an FPGA, it’s best to have a specific use case and application in mind in order to truly exploit the power of the FPGA.
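To make the idea of tailoring register lengths concrete, here’s a toy sketch (in Python, purely for illustration—real FPGA designs are written in a hardware description language) of how a designer sizes a register to the application’s actual value range rather than a generic CPU word width. The 9,000-byte jumbo-frame figure is an illustrative assumption, not from the article.

```python
import math

def min_register_bits(max_value: int) -> int:
    """Smallest register width (in bits) that can hold values 0..max_value."""
    if max_value < 0:
        raise ValueError("max_value must be non-negative")
    return max(1, math.ceil(math.log2(max_value + 1)))

# Example: a packet-length counter that never exceeds 9,000 bytes
# (jumbo frames) needs only 14 bits on an FPGA, versus a fixed
# 32-bit register on a general-purpose CPU.
print(min_register_bits(9000))  # -> 14
```

Multiplied across thousands of registers and data paths, this kind of right-sizing is where much of an FPGA design’s area and power advantage comes from.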
Even just considering the two largest vendors, Xilinx and Intel, there’s a vast array of choice when it comes to FPGA processing power. For example, compare the smallest FPGAs, which can be used on drones for image processing, with the extremely large FPGAs used for machine learning and artificial intelligence. FPGAs generally provide very good performance per watt. Take FPGA-based SmartNICs—they can process up to 200 Gb/s of data without exceeding the power limits of server PCIe slots.
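The performance-per-watt claim can be sanity-checked with back-of-the-envelope arithmetic. A standard PCIe slot supplies up to 75 W to a card; treating that as a ceiling (an assumption—actual SmartNIC draw is typically lower) gives a lower bound on throughput per watt:

```python
# Back-of-the-envelope performance-per-watt for an FPGA SmartNIC.
throughput_gbps = 200  # line rate quoted in the text
slot_power_w = 75      # PCIe slot power limit; assumes the card stays within it

gbps_per_watt = throughput_gbps / slot_power_w
print(f"{gbps_per_watt:.2f} Gb/s per watt")  # -> 2.67 Gb/s per watt
```

Even at the full slot budget, that works out to more than 2.5 Gb/s of processed traffic per watt.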
It’s possible to create highly efficient solutions with FPGAs that do just what is required, when required, because FPGAs are reconfigurable and can be tailored specifically to the application. One of the drawbacks of generic multiprocessor solutions is that there’s an overhead in cost due to their universal nature. A generic processor can do many things well at the same time, but it will always struggle to compete with a specific processor designed to accelerate a specific task.
With the wide selection of FPGAs available, you should be able to find the right model at the right price point for your application needs. As with any chip technology, cost falls dramatically with volume—this is also the case with FPGAs. They’re widely used today as an alternative to ASIC chips, providing a volume base and competitive pricing that’s only set to improve over the coming years.
Only the Beginning
The end of Moore’s Law and its rapid doubling of processing power doesn’t sound the death knell for computing. But it does mean that we must reconfigure our assumptions about what constitutes high-performance computing architectures, programming languages, and solution design. Hennessy and Patterson (see Part 1) even refer to this as the start of a new “golden age” of innovation in computer and software architecture. Wherever that innovation may lead, it’s safe to say that server acceleration is possible now, and FPGAs provide an agile alternative with many benefits to consider.
Daniel Proch is Vice President of Product Management at Napatech.