Optimized Power Supplies Beget Superior Data-Center Efficiency

March 12, 2009
Data centers are conspicuous consumers of power. Power-supply design that maximizes efficiency can deliver big payoffs.

Large data centers devour huge amounts of electrical power (see "Energy-Hungry IT Centers See Hope In Digital Power"). At the heart of efforts to reduce that waste, power-supply makers offer a host of ways to minimize the supplies' contribution to the problem.

In broad terms, the challenge is to deliver power to hundreds of servers’ processors at very low voltage levels from multiphase ac mains. The processors have three power rails—logic, memory, and I/O—all requiring voltages on the order of 1 V, regulated to a precision of one one-hundredth of a volt.

The processor loads are highly variable, depending on how hard the processors are working, that is, how many gates are switching. Those loads vary from milliamps to tens of amps, with transient demands on the order of hundreds of amps per microsecond.

A complicating factor is that the voltages to the different power rails on the processors must be managed in terms of regulation as well as sequencing. They must be applied and removed in a fixed order, with precise timing. The challenge is to create an efficient power-distribution network that can convert power from the grid to those dynamic low-voltage loads.
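
To make the sequencing requirement concrete, here is a minimal sketch of the kind of fixed-order rail sequencing described above. The rail names, target voltages, delays, and the enable/disable calls are illustrative assumptions, not details from any particular controller.

```python
import time

# Illustrative sketch of fixed-order rail sequencing with defined delays.
# Rail names, voltages, and delays are hypothetical, not from the article.
POWER_UP_ORDER = [
    ("logic",  1.0, 0.002),   # (rail, target volts, settle delay in seconds)
    ("memory", 1.5, 0.002),
    ("io",     1.8, 0.002),
]

def enable_rail(name, volts):
    """Stand-in for a real regulator-enable call (e.g., a GPIO or bus command)."""
    print(f"enabling {name} rail at {volts:.2f} V")

def disable_rail(name):
    print(f"disabling {name} rail")

def power_up():
    for name, volts, delay in POWER_UP_ORDER:
        enable_rail(name, volts)
        time.sleep(delay)          # wait for the rail to settle before the next one

def power_down():
    # Rails are removed in the reverse of the power-up order.
    for name, _, delay in reversed(POWER_UP_ORDER):
        disable_rail(name)
        time.sleep(delay)

if __name__ == "__main__":
    power_up()
    power_down()
```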

Five power distribution alternatives for data centers are either in use or under discussion (Fig. 1). But which is optimal in terms of overall efficiency? The debate continues, but an interesting analysis, "AC vs. DC Power Distribution for Data Centers" by American Power Conversion’s Neil Rasmussen, provides a methodology for comparing them.

Approaches a and b in Figure 1 represent, respectively, the configurations generally used in North America and the rest of the world for data centers. Approach c represents the configuration used in telecom central offices.

In the North American data-center configuration, the mains power goes through an uninterruptible power supply (UPS) and a transformer-based power-distribution unit (PDU), and then to the server rack. In the rest of the world, where ac mains voltages are higher, there’s no need for the PDU.

The UPS provides pre-regulated ac power, either from the ac line or, when the utility power drops out, from a battery bank. Its first stage is simply an ac-dc rectifier whose output is applied to the battery bank. Its second stage is an inverter. The inverter feeds the PDU, which provides power factor correction and steps the ac voltage down for distribution around the data center. Inside each equipment cabinet, a front-end converter rectifies the ac and steps it down for distribution across the backplane.

In the telco central office, the UPS has historically comprised a large 48-V lead-acid battery array to provide the mandated “five nines” (99.999%) availability. Therefore, 48 V is distributed directly to the backplane busses in equipment cabinets.

Challenging the traditional approaches, the configurations in d and e are based on producing a high dc voltage at the front end/UPS and bussing it around the data center, with proportionally lower I²R losses. The version in e would require the last step-down to 48 V to occur in the equipment cabinets, while the version in d would use a common step-down converter and allow for conventional 48-V distribution on backplanes.
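
A rough calculation shows why a higher distribution voltage cuts conduction losses. The figures below (delivered power, cable resistance, and the 380-V dc level often discussed for data centers) are illustrative assumptions, not values from Rasmussen's analysis.

```python
# Back-of-the-envelope comparison of conduction (I²R) loss for the same
# delivered power at different distribution voltages. Numbers are hypothetical.
def conduction_loss(power_w, bus_volts, loop_resistance_ohms):
    current = power_w / bus_volts                  # I = P / V
    return current ** 2 * loop_resistance_ohms     # P_loss = I²R

for volts in (48, 380):
    loss = conduction_loss(power_w=10_000, bus_volts=volts, loop_resistance_ohms=0.02)
    print(f"{volts:>3} V bus: {loss:8.1f} W lost in 20 mΩ of cabling")
```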

By discounting certain assumptions in other studies that favor the high-voltage dc concepts, Rasmussen’s analysis essentially concludes that the conventional “rest of the world” approach is optimum. Read the analysis yourself and decide what you think.

Inside the equipment cabinets, the common model is the intermediate bus architecture (IBA). The IBA takes that nominal 48-V dc backplane voltage and applies it to a step-down converter on each board, often in the form factor of a fractional “brick” (Fig. 2). This brick isolates and transforms the 48 V to a somewhat regulated intermediate bus voltage, which in turn supplies a number of point-of-load (POL) dc-dc converters that step down and tightly regulate the voltages for critical ICs. For the sake of efficiency and control, these are always switch-mode converters.

To simplify this explanation, we’ve been treating the intermediate bus voltage provided by the dc-dc brick regulators as if it were standardized at 12 V. In actuality, it might be 12, 8, or 5 V dc, depending on the system design. A higher bus voltage suffers less from I²R losses, but the efficiency of each simple dc-dc buck regulator at each POL decreases as the ratio of input to output voltage grows. Thus, the choice of bus voltage involves some tradeoffs.
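
The tradeoff can be sketched with a toy model. The loss coefficients and the assumed falloff of buck efficiency with step-down ratio below are made-up illustrations of the trend, not measured data.

```python
# Simplified model of the intermediate-bus-voltage tradeoff described above.
# Loss coefficients and the buck-efficiency model are illustrative assumptions.
def backplane_loss(power_w, bus_volts, resistance_ohms=0.01):
    return (power_w / bus_volts) ** 2 * resistance_ohms

def buck_efficiency(bus_volts, load_volts=1.0):
    # Crude assumption: POL efficiency falls slightly as the step-down ratio grows.
    ratio = bus_volts / load_volts
    return 0.95 - 0.002 * ratio

for bus in (5, 8, 12):
    eff = buck_efficiency(bus)
    i2r = backplane_loss(power_w=300, bus_volts=bus)
    print(f"{bus:>2} V bus: POL efficiency ~ {eff:.1%}, backplane loss ~ {i2r:.2f} W")
```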

FLATTENING EFFICIENCY CURVES

Even though each improvement in POL efficiency yields only a small energy saving, the effects add up. In fact, they compound as they’re reflected back up through the power-conversion chain (Fig. 3). In the past, power-supply efficiency numbers on data sheets referred to peak efficiency, which usually occurred between 50% and 80% of rated load. Actual efficiency would fall off slightly at full load and sometimes dramatically at loads around 20% or so. Recent standards worldwide recognize that loads are variable and call for high efficiencies (80% and more) across the load range from 20% to 100%.
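
A simple loss model shows why the curve sags at light load: fixed housekeeping and gate-drive losses don't scale down with output power. The coefficients below are illustrative assumptions, not figures from any data sheet.

```python
# Simple loss model showing why efficiency sags at light load:
# fixed (gate-drive/housekeeping) losses dominate when output power is small.
# Coefficients are illustrative, not from any data sheet.
P_RATED = 100.0    # W, rated output
P_FIXED = 1.5      # W, load-independent losses
K_COND  = 0.04     # conduction-loss coefficient (scales with load squared)

def efficiency(load_fraction):
    p_out = load_fraction * P_RATED
    p_loss = P_FIXED + K_COND * P_RATED * load_fraction ** 2
    return p_out / (p_out + p_loss)

for pct in (10, 20, 50, 80, 100):
    print(f"{pct:>3}% load: efficiency ~ {efficiency(pct / 100):.1%}")
```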

Achieving and improving efficiency across the full load range has become a point of differentiation among power-supply manufacturers at all power levels, with the application of proprietary techniques and considerable squabbling over patents. Broadly, efficiency-boosting techniques include synchronous rectification, cycle skipping, and multiple phases and phase shedding.

Synchronous rectification replaces the flywheel diode with a power FET. In more detail, a basic buck or boost converter requires only a single switching transistor plus a rectifying diode. The problem with the diode is its losses: conduction loss (current times forward drop) and reverse-recovery loss. Schottky diodes help, with their relatively low forward-voltage drop and good reverse-recovery characteristics. But what’s really needed for efficiency is another MOSFET switch in place of the silicon diode.

The tradeoff is added complexity, both in terms of parts count and timing control. The need for added timing control arises because there must be a certain amount of dead time between one switch opening and the other closing.

Also, that dead time requires some sort of diode to conduct between the time the top switch opens and the bottom switch closes. This can be handled by the intrinsic body diode in the MOSFET or by an external Schottky.
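
A quick comparison of conduction losses shows why the extra MOSFET is worth the trouble. The load current, Schottky forward drop, and MOSFET on-resistance below are illustrative assumptions.

```python
# Why synchronous rectification helps: conduction-loss comparison.
# Device parameters are illustrative.
I_LOAD = 20.0        # A through the rectifying element
V_F_SCHOTTKY = 0.4   # V forward drop of a Schottky diode
R_DS_ON = 0.005      # ohms, on-resistance of the synchronous MOSFET

diode_loss = I_LOAD * V_F_SCHOTTKY       # P = I * Vf
fet_loss   = I_LOAD ** 2 * R_DS_ON       # P = I^2 * Rds(on)

print(f"Schottky conduction loss:     {diode_loss:.1f} W")
print(f"Sync. MOSFET conduction loss: {fet_loss:.1f} W")
```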

Let’s take synchronous switching one step further. In high-power ac-dc flyback-topology converters, a quasi-resonant, or valley-switching, power supply varies its switching frequency with input-voltage changes so that the MOSFET turns on at the lowest point, or valley, of the switching-MOSFET drain voltage.

Cycle skipping improves efficiency at light loads. In skip mode, a new cycle is initiated only when the output voltage drops below the regulating threshold. The switching frequency is proportional to the load current. With synchronous rectification, care must be taken to open the switch when the current through the inductor reverses so the MOSFET’s body diode blocks the reverse current.
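
The following toy simulation illustrates the skip-mode behavior just described: a cycle fires only when the output dips below the threshold, so the effective switching rate tracks the load. The component values and the charge delivered per cycle are arbitrary assumptions.

```python
# Toy illustration of pulse-skip behavior: a switching cycle fires only when
# the output falls below the regulation threshold, so the effective switching
# rate scales with load current. All values are illustrative.
V_REF = 1.00               # V, regulation threshold
C_OUT = 100e-6             # F, output capacitance
CHARGE_PER_CYCLE = 20e-6   # C delivered to the output per switching cycle
DT = 1e-6                  # s, simulation step

def cycles_per_ms(load_amps):
    v_out, cycles = V_REF, 0
    for _ in range(1000):                          # simulate 1 ms
        v_out -= load_amps * DT / C_OUT            # load discharges the cap
        if v_out < V_REF:                          # below threshold: fire a cycle
            v_out += CHARGE_PER_CYCLE / C_OUT
            cycles += 1
    return cycles

for load in (0.1, 1.0, 5.0):
    print(f"{load:4.1f} A load: ~{cycles_per_ms(load)} switching cycles per ms")
```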

Multiple phases and phase shedding involve ganging multiple low-current switching converters. These run at a common switching frequency, but with their clock phases shifted.

Paralleling the outputs of several switching regulators provides increased current capacity, along with other advantages. Each parallel switcher in a multiphase dc-dc converter operates at a relatively low frequency. Combined, however, they deliver the responsiveness and regulation performance of a single-phase converter running at a very high switching frequency, without the switching losses associated with higher frequencies. Also, staggering the phases smooths out the inherent output ripple.

So far, so good. Things get really interesting with multiphase switching regulators, though, when shedding phases to handle light loads with higher efficiency. At light loads, it makes sense to shut down some phases, because the efficiency of individual converters is greater at higher loads. As the total load increases, dormant phases can be brought back on line. The tricky part here lies in phase synchronization and balancing—adjusting the relative phase angles on the fly.
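
A phase-shedding policy can be sketched in a few lines. The per-phase current rating, target utilization, and maximum phase count below are illustrative assumptions, not any vendor's algorithm.

```python
# Sketch of a phase-shedding policy: run only as many phases as the load needs,
# keeping each active phase near its efficient operating region.
# Per-phase current rating and target utilization are illustrative.
import math

PHASE_RATING_A = 25.0    # design current per phase
TARGET_UTIL    = 0.7     # aim to run each active phase near 70% of its rating

def phases_needed(load_amps, max_phases=6):
    if load_amps <= 0:
        return 1                      # keep one phase alive for regulation
    n = math.ceil(load_amps / (PHASE_RATING_A * TARGET_UTIL))
    return min(max(n, 1), max_phases)

for load in (3, 20, 60, 120):
    print(f"{load:>3} A load: run {phases_needed(load)} phase(s)")
```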

For certain situations, it’s better to drive all clock phases in sync. In some of its dc-dc regulators, Primarion (now part of Infineon) does just that. The chips can switch between two modes: one “normal,” the other called Active Transient Response (ATR).

In normal mode, phase pulses are evenly distributed to minimize the combined ripple. In ATR, the clocks to all phases are time-aligned, effectively paralleling the inductors to reduce total inductance and increase the transient ramp rate. This technique has been applied to POLs with eight phases to deliver di/dt rates in excess of 800 A/µs at the inductors and over 1500 A/µs at the output capacitor.
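
The arithmetic behind that claim is straightforward: with N phase inductors effectively in parallel, the aggregate current slew rate is roughly N times that of a single phase. The inductance and rail voltages below are assumptions chosen only to illustrate the scaling.

```python
# Why aligning all phases boosts transient slew rate: N inductors effectively
# in parallel give roughly N times the di/dt of a single phase.
# Inductance and rail voltages are illustrative assumptions.
V_IN, V_OUT = 12.0, 1.0    # V
L_PER_PHASE = 100e-9       # H, per-phase inductor
PHASES = 8

single_phase_didt = (V_IN - V_OUT) / L_PER_PHASE      # A/s
aligned_didt = PHASES * single_phase_didt

print(f"one phase:              {single_phase_didt / 1e6:6.0f} A/µs")
print(f"all {PHASES} phases aligned:  {aligned_didt / 1e6:6.0f} A/µs")
```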

DIGITAL POWER

Five or so years ago, there was a curious debate about whether it was better to close the control loop in switch-mode power supplies in the analog domain or the digital domain. Eventually, everyone realized that was the wrong debate, that “digital” ought to mean telemetry and a programmable bus (with maybe the parallel possibility of programming some parameters with external resistors or by pin-strapping).

Those goals could be achieved regardless of the implementation of the control loop. It’s perfectly possible to put an analog loop inside a digital “wrapper.” (There’s still a debate on the digital side as to whether control is better implemented via a microcontroller or a state machine.)

There are a number of advantages to a two-way control and monitoring bus. Downstream, it enables rapid reconfiguration. Used bidirectionally and coupled with a graphical user interface (GUI), it provides a way to manage the control loop in-circuit (Fig. 4). Upstream, the two-way control and monitoring bus facilitates system diagnostics and prediction of potential failures. Through temperature monitoring, it provides a way to manage multiple server fans to avoid hot spots in cabinets.

With the market clearly in favor of some kind of control bus, the question came down to implementation—whether an industry-standard bus was better, or whether proprietary buses would provide more innovation. It’s the classic open-source versus closed-source debate. As it worked out, the debate led to a legal situation that stalemated development for over a year.

I’ve been told privately that those issues will be resolved soon, perhaps as early as the 2009 APEC conference or shortly after. Unfortunately, that news will come too late for this report. But Electronic Design will report on it online as soon as it happens.

In the meantime, the best way to explain the situation is by looking first at PMBus, the open-standard digital power-management protocol for communication between the power converter and other devices. PMBus defines the transport and physical interface. It additionally provides a command language. It doesn’t, though, address communication between one power source and another.

The transport layer is based on the SMBus (System Management Bus) extension of the I2C serial bus with packet error checking and host notification features. In lieu of polling, it adds a third signal line that enables slave devices such as POL converters to interrupt the system host/bus master. In addition, there are hardwired signals for turning slave devices on and off and for write-protecting memory-held data. Also included are packet specifications and a power-control-specific command set.
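
As a hedged sketch of what PMBus telemetry looks like in practice, the following Python fragment reads the output voltage and temperature of a POL over SMBus/I2C using the smbus2 package. The bus number and device address are assumptions; the command codes are standard PMBus commands, and the decoding assumes the device reports values in PMBus linear formats.

```python
# Hedged sketch: reading telemetry from a PMBus POL converter over SMBus/I2C
# with the smbus2 package. The bus number and device address are assumptions;
# the command codes are standard PMBus commands, and the decoding assumes the
# device uses the PMBus "linear" data formats.
from smbus2 import SMBus

POL_ADDR   = 0x20    # hypothetical 7-bit SMBus address of the POL converter
VOUT_MODE  = 0x20    # PMBus command: VOUT data format / exponent
READ_VOUT  = 0x8B    # PMBus command: read output voltage
READ_TEMP1 = 0x8D    # PMBus command: read temperature sensor 1

def to_signed_5bit(value):
    return value - 32 if value & 0x10 else value

def decode_linear11(word):
    # Linear11: 5-bit signed exponent in the top bits, 11-bit signed mantissa.
    exponent = to_signed_5bit((word >> 11) & 0x1F)
    mantissa = word & 0x7FF
    if mantissa & 0x400:
        mantissa -= 0x800
    return mantissa * 2 ** exponent

with SMBus(1) as bus:                       # I2C bus number is platform-specific
    exp = to_signed_5bit(bus.read_byte_data(POL_ADDR, VOUT_MODE) & 0x1F)
    vout = bus.read_word_data(POL_ADDR, READ_VOUT) * 2 ** exp   # Linear16
    temp = decode_linear11(bus.read_word_data(POL_ADDR, READ_TEMP1))
    print(f"VOUT = {vout:.3f} V, temperature = {temp:.1f} C")
```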

For a more detailed explanation of PMBus and its relation to power in the data center, see “PMBus Takes Command of Data Center Power Issues” by Intel’s Brian Griffith in Electronic Design’s sister publication, Power Electronics Technology.

A strongly competitive proprietary approach has been Power-One’s Z-Bus. PMBus places a certain coding and computational overhead on the system controller. Z-Bus uses a separate controller that can handle many POLs.

The proprietary Z-system preceded PMBus and operates differently. It relies on an external controller that gets its commands via an I2C interface and communicates with Z-POLs over a single synchronization/data line called the Z-Bus. The Z-Bus synchronizes multiple Z-POLs to a master clock in the manager chip and carries bidirectional data between the POLs and the dynamic power manager (DPM).

Beyond voltage regulation, programmable parameters include output tracking and sequencing, switching frequency, interleaving, feedback-loop compensation, and active digital current sharing among multiple Z-POL converters. Among the programmable protection features are output overcurrent and overvoltage, input undervoltage, power-good signal limits, and fault management. Real-time reporting includes output voltage and current and POL temperature. All of this can be programmed via a GUI.

In late 2007, the race between Z-Bus and PMBus entered a new phase when a jury agreed with Power-One that any use of PMBus to control POLs infringed Power-One’s patents. Abruptly, new product announcements about digital POLs for IBA applications almost dried up—but not completely.

Last year, it appeared that Ericsson Power Modules might have sidestepped the patent problem by moving PMBus control to the formerly “dumb” bricks and tightening up their regulation, relieving the POLs of that burden. Specifically, Ericsson introduced new fractional bricks in the standard quarter-brick footprint that can be set via PMBus for output levels between 8.5 and 13.5 V, to ±2% precision, with a current capacity of up to 33 A at 12 V. Thanks to the digital control loop, typical efficiency is 96% at half load, and that efficiency is nearly flat out to full load.

Physically, Ericsson added a separate header for the communication bus at the opposite end from a brick’s standard input/output pins. The bus includes pins that allow two of these bricks to load-share while automatically phase-interleaving their switching signals to minimize conducted interference. Whether that still infringes Power-One’s patent claims remains an open question, but it was the only interesting new product development on the PMBus front during the past year.

OTHER PLAYERS

Companies that introduced PMBus-based POLs before the Texas jury reached its decision for Power-One in November 2007 included Intersil, Linear Technology, Maxim Integrated Products, Primarion/Infineon, and Texas Instruments. Additional players in the field took a hard look at the breadth of Power-One’s claims.

In 2005, Zilker Labs (recently acquired by Intersil) introduced a line of POLs that use state-machine control, rather than a microcontroller. (For a complete discussion of the methodology’s merits, see “PMBus Controller Takes A State-Machine Approach.”) For designers, the real appeal of the Zilker/Intersil approach may be ease of use—in particular, how simple it is to use multiple Zilker controllers in the same circuit.

The controllers can be individually programmed (pin-strap, resistor, or SMBus) for regulated output voltage, turn-on delay, and output-voltage ramp rate. SMBus programming provides the highest precision, but simply using a pair of resistors allows the output to be positioned between 0.6 and 5.5 V in 10-mV steps. Alternatively, simple pin-strapping (three pins: low, high, or open) enables the output voltage to be set to any of nine values between 0.6 and 5 V.

For power sequencing, a group of Zilker’s POL devices might be configured to power up in a predetermined order by issuing PMBus commands to assign preceding and succeeding devices in the sequencing chain. An even simpler approach uses Zilker’s patented autonomous sequencing mode, which requires no I2C/SMBus host.

All that’s necessary is that the I2C pins of each device be interconnected. In that case, sequence order is determined by each device’s bus address. Phase spreading is possible when all converters are synchronized to the same switching clock. In that case, the phase offset for each chip is determined by its device address, where phase offset = device address × 45°.
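
In code, that address-based phase-spreading rule is a one-liner; the wrap at 360° is an assumption for addresses beyond eight.

```python
# The address-based phase-spreading rule quoted above:
# phase offset = device address x 45 degrees (wrapped to a full cycle,
# which is an assumption for addresses of 8 and above).
def phase_offset_degrees(device_address):
    return (device_address * 45) % 360

for addr in range(8):
    print(f"device address {addr}: phase offset {phase_offset_degrees(addr):3d} degrees")
```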

Radically different from all the other players, Vicor’s Factorized Power approach, introduced in 2003, is based on very efficient, non-regulating, isolated POL-like converters at the load, with regulation provided upstream. Advantages include a higher bus voltage, with lower I²R losses and voltage drop.

Also, moving the bulk capacitors that store the energy needed to handle the load’s transient current demands from the output of the POL upstream to its input reduces the amount of capacitance needed by the square of the POL’s step-down ratio. Finally, the approach makes it possible to precisely control the load voltage through the isolation barrier without long, noise-sensitive feedback lines or opto- or magnetic couplers.

Factorized power introduces its own terminology. Isolated voltage transformation modules (VTMs) are at the load, while preregulator modules (PRMs) can be found upstream. Load regulation is performed using feedback to the upstream PRM. The PRM adjusts the factorized bus voltage to maintain the load voltage in regulation.

The key to this is the VTM’s function as a current transformer. It multiplies the current (or divides the voltage) by a “K” factor. This takes place with essentially a 100% transformation duty cycle, so there’s no loss of efficiency at high values of K. Thus, the bus voltage can be (and is) greater than 12 V. In fact, it’s limited only by safety concerns.

The bulk capacitance at the VTM input reflects itself at the POL with a gain equal to the square of the VTM current gain. Only very small amounts of ceramic bypass capacitance, effective over a short time scale of less than a microsecond, are needed at the load.
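
A quick calculation illustrates that reflection. The capacitance value and the 48:1 current gain below are illustrative assumptions.

```python
# Capacitance reflected through the VTM: input-side bulk capacitance appears
# at the load scaled by the square of the current gain K. Values are illustrative.
K = 48                   # VTM current gain (e.g., 48 V in -> 1 V out, so K = 48)
C_INPUT_BULK = 220e-6    # F of bulk capacitance at the VTM input

effective_at_load = C_INPUT_BULK * K ** 2
print(f"220 µF at the 48-V input looks like ~{effective_at_load * 1e3:.0f} mF at the 1-V load")
```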

PRMs employ a ZVS buck-boost control architecture. One can operate with input voltages from 1.5 to 400 V, and step up or step down over a 5:1 range, with a conversion efficiency up to 98%. In normal configurations, the output voltage is approximately equal to the input voltage: 48 V unregulated to 48 V regulated. One PRM can put out up to 300 W, and VTMs and PRMs may be paralleled for higher output power.

A VTM employs a zero-voltage switching and zero-current switching (ZCS/ZVS) topology that Vicor calls a sine amplitude converter, or SAC (Fig. 5). The power train is a low-Q, high-frequency, controlled oscillator with high spectral purity and common-mode symmetry, resulting in practically noise-free operation.

The control architecture locks the operating frequency to the power-train resonant frequency, optimizing efficiency and minimizing output impedance by effectively canceling reactive components. ROUT, an equivalent resistance that summarizes all losses in the VTM, can be as low as 0.8 mΩ for a single VTM. If that isn’t low enough, or if more power is required, VTMs can be paralleled for current sharing. The SAC-based VTM is, for the most part, a linear voltage/current converter with a flat output impedance up to about 1 MHz.

The secondary current in a SAC VTM is a virtually pure sinusoid. The very low, essentially non-inductive output impedance of the VTM allows an almost instantaneous response to a 100% step change in load current. Because there’s no internal regulation circuitry in a VTM, and none of the attendant loop delays and stability issues, no internal control action is required to respond to the change in load. The internal ASIC controller simply continues controlling and synchronizing the operation of the switches to maintain operation at resonance.
