Moore’s Law continues to deliver ever more computing power. The integration of billions of transistors on a single die in multicore processors and other advanced devices such as ASICs and FPGAs means that, even with core supply voltages down to 1 V or less, current demand at the point of load (POL) is beginning to exceed 100 A.
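To put the trend in perspective, current at the point of load is simply power divided by rail voltage, so holding a core’s power budget constant while its supply drops below 1 V pushes the current sharply upward. The short sketch below runs that arithmetic for a hypothetical 120-W core; the power figure and voltages are illustrative only, not taken from any specific device.

```python
# Illustrative only: at a fixed core power budget, load current scales
# inversely with the rail voltage (I = P / V).
CORE_POWER_W = 120.0  # hypothetical power budget for an advanced IC

for v_core in (3.3, 1.8, 1.2, 1.0, 0.9):
    i_load = CORE_POWER_W / v_core
    print(f"{v_core:.1f} V rail -> {i_load:6.1f} A at the point of load")
# At 0.9 V the same 120 W already demands about 133 A.
```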
Electronic Design: With power requirements for many of today’s ICs quickly rising above 100 A, what challenges arise for engineers who are tasked with designing the architecture to power these devices?
Dunlap: With increasingly power-hungry ICs, the classic problems of power-supply design remain: how to increase system efficiency and minimize power loss in an ever-shrinking footprint. This is compounded by output voltages that continue to decline and load steps that grow as smaller-geometry processes are adopted. As a result, a power supply that met system requirements for transient response and output regulation a few years ago may not pass the latest specifications.
Adams: In addition to the extremely tight restrictions and performance metrics required by the IC, two key design issues arise when you start to reach these levels: space and thermal management. The available board space isn’t increasing, but the power density and requirements are. So how do you manage a complicated design without simply adding phases, which consume space? Furthermore, the required intelligence and tighter performance tolerances mean you’re not only dealing with analog signals, ground planes, and power paths, but with digital signals and digital grounds too, so it’s now a much more complicated power subsystem. These requirements are pushing the engineer, in many instances, to look for proven “plug and play” solutions.
Le Fèvre: The high currents now required to power FPGAs, ASICs, and multicore processors are not new. One example is Intel’s Voltage Regulator Module (VRM) and Enterprise Voltage Regulator-Down (EVRD) design guidelines (version 11.1), which specify ICC_CORE requirements from 40 A to 180 A. Clearly these VRM guidelines are aimed primarily at Intel processors, and despite the large number of non-isolated power modules available on the market, these products are not always suitable for powering network-packet, traffic-manager, and fabric-interface processors, which all require different interfaces and communication protocols. One example is the way Intel specifies the output-voltage setting through the voltage identification (VID) table, which is not necessarily the way many system designers would want it. Designing a multiphase voltage regulator is not necessarily a challenge. What is more difficult is making it even smaller than before, while also making it easy to parallel with full monitoring and control via the PMBus 1.2 or PMBus+ 1.3 specifications.
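To make that contrast concrete, the sketch below shows the two voltage-setting styles Le Fèvre describes: a fixed VID table lookup versus encoding a target voltage as the PMBus VOUT_COMMAND (0x21) word in LINEAR16 format. The VID codes, the 2^-13 exponent, and the 0.9-V target are placeholders for illustration, not values from any Intel guideline or specific controller.

```python
# Two ways a regulator's output voltage can be set (values are placeholders).

VID_TABLE = {0b000001: 1.000, 0b000010: 0.950, 0b000011: 0.900}  # example codes only

def vid_to_volts(vid_code: int) -> float:
    """VID style: the processor's code indexes a fixed voltage table."""
    return VID_TABLE[vid_code]

def volts_to_linear16(volts: float, exponent: int = -13) -> int:
    """PMBus style: encode the target voltage as a LINEAR16 mantissa,
    the payload of VOUT_COMMAND (0x21), using the VOUT_MODE exponent."""
    return round(volts / 2 ** exponent)

def linear16_to_volts(mantissa: int, exponent: int = -13) -> float:
    """Decode a LINEAR16 mantissa back to volts."""
    return mantissa * 2 ** exponent

print(vid_to_volts(0b000011))            # 0.9 V from a VID code
word = volts_to_linear16(0.9)            # 7373 with a 2^-13 exponent
print(word, linear16_to_volts(word))     # round-trips to ~0.9000 V
```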
ED: What challenges does this trend pose for power companies?
Adams: A module company attempts to marry the best technologies with its own cutting-edge topologies and create a canned solution that meets the exact specifications of the chips being used. The biggest challenge for us is staying one step ahead of the power needs of the latest generation of advanced ICs. Chip designers typically create reference designs to support their core-voltage requirement by working at a discrete level. For them it is important to keep their “perceived” total bill-of-materials cost low. This works for the few OEMs that drive large volume and have the support of power engineers and semiconductor FAEs to assist in the design. However, a large portion of the customer base doesn’t have that luxury and relies heavily on modules for these high-current requirements. Thus, getting hold of this information is critical. Without it, we’d be shooting in the dark: is the requirement 100 A, or is it actually 120 A? To better address this challenge, we’ve started to partner more closely with our customers and jointly work with the chip vendors to better understand future power requirements, ensuring our products meet these critical performance metrics.
Le Fèvre: Multiphase POLs have been on the market for decades. But outside of the well-defined Intel VRM specifications, the power requirements of network processors present a number of challenges. These include selecting the best controller, one that embeds the latest features such as dynamic loop compensation, active current sharing, and the ability to communicate with the board power manager to optimize the voltage for traffic conditions. Few controllers available today meet all of these requirements, although some recently introduced semiconductor products are opening up interesting opportunities. Another challenge will be the implementation of PMBus+, which is expected soon but will require chipmakers to update firmware as soon as the final specification is released. A further challenge will be managing boards and systems that carry both PMBus 1.2 and PMBus+ units. Power companies may have to provide dual-mode compliance, which could be extremely challenging.
Dunlap: For Intersil, this means the continued evolution of IC architectures to enable power supplies with faster transient response and improved regulation. Examples include Intersil’s R4 or EAPP modulator technology for single- and multiphase applications, and the new ChargeMode control technology for digital control loops. But with the increase in performance also comes the challenge of creating a power supply that can be developed with faster time-to-market and high reliability. As a result, the focus has been on developing products that provide a compensation-free system.
ED: Is this trend changing the way you approach the priorities and tradeoffs within your latest designs?
Le Fèvre: Ericsson began research on digital power in 2004. Since the inception of the 3E concept (enhanced performance, energy management, and end-user value), we have worked very closely with our customers, designing products to match the requirements of their next-generation equipment. High-current POL regulation is part of our technology roadmap and very much aligned with what network-processor manufacturers expect from power-supply manufacturers. In this respect, our priorities have not changed. However, because of the relative uniqueness of advanced control ICs, it is strategically important to select the right partner early in the design phase and work in tight cooperation to co-develop our respective products. This imposes a certain number of tradeoffs and priorities for any power-supply company.
Dunlap: Older designs forced a compromise. To ensure stability with external compensation components, power supplies had to trade off bandwidth and performance; conversely, high-performance devices were complex and required significant time to design and optimize. With the market demanding ease of use, Intersil responded by developing new ICs that break through this paradox. For instance, the new ZL8800 digital controller delivers a wide-bandwidth, fast-transient power supply without the need for compensation. With its digital ChargeMode control technology, it remains stable without tuning, even as components age.
Adams: Definitely. It means you can’t drive designs going forward that serve 90% of the needs and then let the customer figure out the last 10%. You now need to solve the most challenging 10% first and then figure out how to adapt the design for the other 90%: either create a derivative of that product that backs off some of the performance, or position it as more commercially viable in high-value applications. Because these supplies are performance-driven and such high currents leave no room for tolerance, it also means the end of power as a commoditized standard product.
ED: Digital power has received a lot of press for its ability to address complex loading requirements in distributed power architectures. How can digital help address rising current densities at the board level?
Dunlap: One of the key advantages of digital power is the ability to integrate into the controller IC all of the small-signal components surrounding the power supply that set its functionality and performance. With the controller’s on-board memory, multiple parameters can be set up and dynamically changed via the PMBus interface, allowing for greater sophistication in system control. This lets the system react to changing states, optimizing for peak efficiency.
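As a rough illustration of that runtime visibility, here is a minimal telemetry sketch that reads output voltage and current over PMBus, assuming a Linux I2C bus and the smbus2 Python package; the bus number and the 0x40 device address are assumptions, while VOUT_MODE, READ_VOUT, and READ_IOUT are standard PMBus commands.

```python
# Minimal PMBus telemetry sketch (bus number and device address are placeholders).
from smbus2 import SMBus

VOUT_MODE, READ_VOUT, READ_IOUT = 0x20, 0x8B, 0x8C  # standard PMBus command codes
POL_ADDR = 0x40                                     # hypothetical 7-bit address

def twos(value: int, bits: int) -> int:
    """Sign-extend a two's-complement field of the given width."""
    return value - (1 << bits) if value >= (1 << (bits - 1)) else value

def decode_linear11(word: int) -> float:
    """Decode a PMBus LINEAR11 word (5-bit exponent, 11-bit mantissa)."""
    return twos(word & 0x7FF, 11) * 2 ** twos(word >> 11, 5)

with SMBus(1) as bus:
    exp = twos(bus.read_byte_data(POL_ADDR, VOUT_MODE) & 0x1F, 5)    # LINEAR16 exponent
    vout = bus.read_word_data(POL_ADDR, READ_VOUT) * 2 ** exp        # volts
    iout = decode_linear11(bus.read_word_data(POL_ADDR, READ_IOUT))  # amps
    print(f"VOUT = {vout:.3f} V, IOUT = {iout:.1f} A")
```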
Adams: The biggest way that digital can add value is through the stability and performance of the supply itself. It isn’t necessarily going to let you deliver 100 A in a smaller space; that will come from the design of the power train. But digital will deliver a more controlled 100 A, with tighter regulation at that 100 A. Using an automotive analogy, digital is the computer behind the engine that keeps it performing at its optimal level under all conditions. Digital proves especially critical when you’re working with high-reliability systems, where you need a continuously compensated circuit to keep the system working at peak performance. As geometries and complexities change, “good enough” is no longer good enough.
Le Fèvre: Energy management, and how that energy is best used, is crucial for the whole industry, and the conventional ways of powering boards are changing as systems gain in complexity. Besides offering better switching control within power modules, digital power adds a new level of flexibility that makes complexity simple. An example is powering a multicore processor that requires a sub-volt (0.9 V) core voltage at a total of 240 A, distributed across four cores at 60 A per core, with each core having its own power source. To power such a configuration efficiently, and also reduce ripple and noise, the power sources will need to work in parallel to distribute and offset phases and, under low traffic, to shed phases or even turn off cores that are not required. Doing this in conventional analog, or even in some analog-digital hybrid way, would require a significant number of external components and shunts to measure output current, reducing efficiency, requiring additional test and verification, and limiting upgrade flexibility. Complex power schemes can be achieved easily by implementing digital POLs with a PMBus interface and using intuitive GUI-based software to configure all the necessary parameters.
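The phase-shedding idea can be sketched in a few lines: enable the fewest phases that keep each one near its efficient operating current, and bring the rest back as load rises. Note that 240 A at 0.9 V is roughly 216 W delivered to the cores. The phase count, per-phase sweet spot, and load points below are illustrative assumptions, not values from any particular POL.

```python
# Illustrative phase-shedding decision for a multiphase rail (values assumed).
PHASES_TOTAL = 4           # e.g. one phase per 60-A core slice
PHASE_SWEET_SPOT_A = 40.0  # current at which a single phase is most efficient

def phases_to_enable(load_a: float) -> int:
    """Enable the fewest phases that keep per-phase current near its sweet spot."""
    for n in range(1, PHASES_TOTAL + 1):
        if load_a / n <= PHASE_SWEET_SPOT_A:
            return n
    return PHASES_TOTAL  # heavy load: run every phase

for load in (15, 70, 150, 240):   # amps drawn by the 0.9-V rail
    n = phases_to_enable(load)
    print(f"{load:3d} A load -> {n} phase(s), {load / n:5.1f} A per phase")
```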
ED: Aside from digital, are there other developments that will allow power to keep pace with the increasing current requirements of today’s chips?
Adams: FET and inductor companies will continue development with new materials and packaging technologies. As a module company, we are focused on two key areas: optimizing module packaging by looking at the z axis as well as the x and y axes, and bringing in our own proprietary technologies to marry with the latest innovations from power-component vendors. At these power levels, we have to be creative in getting heat out of the module and driving toward higher efficiencies and orientations that allow our customers to optimize their board space.
Le Fèvre: Today, in the network-processor space, the average current per core is approximately 60 A and is expected to soon reach 90 A. Considering that some ASICs combine multiple cores, the total current could reach 400 A. It is easy to parallel multiple digital POLs to achieve very high power. But aside from digital, there are interesting developments in new materials expected to increase power density without jeopardizing energy efficiency. Gallium nitride (GaN), for example, is now reaching the market in commercial products, and research into graphene and nanotubes looks promising. Another area is very high-frequency conversion technology, which laboratories are exploring with the aim of miniaturizing high-current (100-A) power sources to the point where they could be integrated within an ASIC.
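As a back-of-the-envelope illustration of that paralleling, the snippet below sizes how many identical digital POLs a 400-A rail would need for a given per-module rating and derating margin; the 50-A rating and 80% derating are assumptions made for the example, not vendor figures.

```python
import math

# Illustrative sizing of paralleled POL modules (rating and derating assumed).
RAIL_CURRENT_A = 400.0  # total current for a multicore ASIC rail (from the text)
MODULE_RATED_A = 50.0   # hypothetical per-module continuous rating
DERATING = 0.8          # keep each module at or below 80% of its rating

modules = math.ceil(RAIL_CURRENT_A / (MODULE_RATED_A * DERATING))
per_module = RAIL_CURRENT_A / modules
print(f"{modules} modules, {per_module:.1f} A each "
      f"({per_module / MODULE_RATED_A:.0%} of rating)")
# -> 10 modules, 40.0 A each (80% of rating)
```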
Dunlap: With PMBus interfaces on digital power products, it is easy to control and configure them from the system level. But with the increasing complexity of load processors, it is essential to have a direct link from the load back to the power supply. With revision 1.3 of the PMBus specification being released at APEC this year, the inclusion of AVSBus to support this need helps ensure that future power-supply requirements can continue to be met.
ED: So what can we expect to see at APEC?
Le Fèvre: Above all, I expect APEC to be the kickoff for PMBus+ and the place where universities and laboratories will present research into new materials and topologies. APEC 2014 looks very promising. Considering the level of maturity that digital power technology has now reached, I also expect to see a lot of interesting developments in this area.
Dunlap: At APEC this year, Intersil will be introducing its fourth generation of digital control technology, utilizing ChargeMode control. The ZL8800 will be demonstrated, showcasing the unique ability to be compensation-free while delivering fast transient response.
Adams: We’re sure to see continued advancements in materials such as GaN and silicon carbide (SiC) at this year’s show. New power topologies will also be an important topic. We will be announcing the first isolated and non-isolated products based on our patented Solus Power Topology. This SEPIC-fed buck topology delivers 25% higher density in the same board space than a standard buck converter, allowing for a significantly lower profile, a 25% reduction in power loss, and a more-than-50% improvement in transient response.