Low-Load Efficiency

Aug. 30, 2010

It’s impossible to understand the essentials of maximizing switching-supply efficiency across all potential loads without understanding the sources of circuit losses. Fortunately, it isn’t necessary to consider all possible circuit configurations. The analysis that follows looks at a basic step-down (buck) voltage regulator (Fig. 1). Allowing for circuit differences, there are parallels in all switching-regulator topologies.

In a non-synchronous buck-regulator circuit, the forward-voltage drop across the low-side rectifier diode is in series with the output voltage. Naturally, the diode’s losses seriously impact efficiency. For example, in a regulator stepping down a 12-V intermediate bus voltage to 3.3 V, the 0.4-V forward voltage of a Schottky diode represents roughly a 12% loss. The greater the step-down, the worse the situation.
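
As a rough illustration of that claim, here is a short Python sketch with assumed example values (a 12-V bus and a 0.4-V Schottky drop, as above). It shows both the simple drop-to-output ratio and the slightly smaller figure you get when you account for the diode conducting only during the (1 – D) portion of each cycle.

```python
# Back-of-the-envelope look at the rectifier diode's efficiency penalty in a
# non-synchronous buck converter. Example values are assumptions, not taken
# from any particular design.

V_IN = 12.0   # intermediate bus voltage (V)
V_FWD = 0.4   # Schottky forward drop (V)

for v_out in (5.0, 3.3, 1.8, 1.2):
    duty = v_out / V_IN                        # ideal buck duty cycle
    rough = V_FWD / v_out                      # simple drop-to-output ratio
    weighted = V_FWD * (1 - duty) / v_out      # diode conducts only (1 - D) of the cycle
    print(f"VOUT={v_out:4.1f} V  D={duty:4.2f}  "
          f"rough penalty {rough:5.1%}, duty-weighted {weighted:5.1%}")
```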

So the first step in pursuing efficiency is synchronous rectification—which is essentially replacing the diode with a switch, usually another MOSFET. This replaces the diode junction’s roughly constant voltage drop with the MOSFET’s conduction and switching losses and adds complexity, though it gains something on the order of 4% in efficiency. However, a new efficiency limitation arises from the dead-time delay that the synchronous controller inserts to prevent “shoot-through,” in which both MOSFETs conduct simultaneously (Fig. 2).
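
To see where that gain comes from, the sketch below compares the low-side Schottky’s loss to a synchronous MOSFET’s conduction and gate-drive losses. The component values (RDS(on), gate charge, drive voltage, switching frequency) are assumptions, and the exact improvement depends heavily on the operating point and the devices chosen.

```python
# Rough comparison of low-side rectification losses: Schottky diode vs.
# synchronous MOSFET. All component values are assumptions for illustration.

V_IN, V_OUT = 12.0, 3.3
I_OUT = 10.0          # load current (A)
F_SW = 500e3          # switching frequency (Hz)

V_FWD = 0.4           # Schottky forward drop (V)
R_DSON = 5e-3         # low-side MOSFET on-resistance (ohm), assumed
Q_G = 20e-9           # low-side total gate charge (C), assumed
V_DRIVE = 5.0         # gate-drive supply (V)

duty = V_OUT / V_IN
t_low = 1 - duty      # fraction of the cycle the low-side element conducts

p_out = V_OUT * I_OUT
p_diode = V_FWD * I_OUT * t_low
p_fet = I_OUT**2 * R_DSON * t_low + Q_G * V_DRIVE * F_SW  # conduction + gate drive

print(f"Schottky loss: {p_diode:5.2f} W  ({p_diode / p_out:5.1%} of output power)")
print(f"Sync FET loss: {p_fet:5.2f} W  ({p_fet / p_out:5.1%} of output power)")
```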

The good thing about dead time is that while neither MOSFET is conducting, the low-side MOSFET’s parasitic body diode generally acts as a clamp on the negative inductor voltage swing. Unfortunately, like the Schottky, the body diode is lossy (and slower to turn off), and this can impose a 1% to 2% efficiency penalty.
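
As a rough check on that penalty, the sketch below estimates body-diode conduction loss over the two dead-time intervals in each switching cycle, again with assumed example values (it ignores reverse-recovery loss, which adds a bit more).

```python
# Rough estimate of body-diode conduction loss during dead time.
# All values are assumed for illustration.

V_BD = 0.8        # body-diode forward drop (V), higher than a Schottky's
I_OUT = 10.0      # load current (A)
T_DEAD = 60e-9    # dead time per switching edge (s)
F_SW = 500e3      # switching frequency (Hz)
P_OUT = 3.3 * I_OUT

# The body diode carries the inductor current during two dead-time
# intervals per switching cycle (high-to-low and low-to-high transitions).
p_dead = V_BD * I_OUT * 2 * T_DEAD * F_SW
print(f"Dead-time loss ~ {p_dead:5.2f} W  ({p_dead / P_OUT:5.2%} of output power)")
```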

So what do you do? Some designers actually “put back” the Schottky, which goes into conduction at a lower voltage than the body diode, in parallel with the lower MOSFET. The MOSFET has lower conduction losses than the lone Schottky, and the Schottky helps during dead time.

Even with that approach, though, there are still efficiency problems at light loads when switching frequencies are high. That’s because, under light loads, the current in the switching supply’s inductor discharges to zero. This leads to a deeper investigation of dead time.
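
To put a number on “light load,” the following sketch estimates the boundary current below which the inductor current ramps all the way to zero each cycle, using assumed values for inductance and switching frequency.

```python
# Estimate of the load current below which the inductor current reaches zero
# each cycle (the continuous/discontinuous conduction boundary).
# Inductance and frequency are assumed example values.

V_IN, V_OUT = 12.0, 3.3
L = 2.2e-6        # output inductance (H)
F_SW = 500e3      # switching frequency (Hz)

duty = V_OUT / V_IN
# Peak-to-peak inductor ripple current in a buck converter
i_ripple = V_OUT * (1 - duty) / (L * F_SW)
# Inductor current touches zero when the load falls below half the ripple
i_boundary = i_ripple / 2
print(f"Ripple = {i_ripple:4.2f} A p-p, inductor current reaches zero below "
      f"{i_boundary:4.2f} A load")
```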

Recall the situation. For the least complexity, you would simply drive the lower MOSFET’s gate with the complement of the signal on the upper MOSFET’s gate, or, even more simply, hold the lower MOSFET on until the beginning of the next cycle. In that kind of design, the inductor current can begin to flow in the reverse direction, so the regulator’s controller might be designed to sense the inductor current’s zero crossing in each cycle and either shut off the synchronous rectifier at that point or disable it altogether at light loads.

That keeps the controller simple, but consider this: when the inductor current reverses, the synchronous rectifier must pull current from the output, storing the energy in the input bypass capacitor and replacing the lost output energy during the next half cycle. That’s not good because it dissipates power in all of the circuit’s parasitic resistances and switching inefficiencies.


Then how about this for a solution: suppose the controller was designed not to switch the synchronous rectifier at all at low loads? Have it revert to non-synchronous operation, with the Schottky performing the commutation. That sort of regulator is said to be “pulse-skipping.” However, this re-introduces the diode drop as a drag on efficiency. One solution after another creates new problems, which eventually lead back to the original problem.

WHY BOTHER?
Let’s dig more deeply into low-load problems. We know that power circuits are designed to deliver peak efficiency when driving some specific “normal” load. Operated away from that load, they lose efficiency, and the most drastic degradation occurs at the lightest loads. The most extreme case is “no load” (Fig. 3).

But, by definition, “low load” means the system isn’t using very much power to start with. Even if power-supply efficiency at low load isn’t quite as high as it is at full or near-full load conditions, can a few percentage points add up to anything significant? In fact, there are many reasons to be concerned about power losses at low load.

One issue is the relative amount of time the system spends operating far below peak load. Particularly in battery-powered systems, many conditions will drive large parts of the system into standby, sleep, or hibernate modes. Any power consumed during these light-load conditions directly shortens battery life.

Another factor has to do with regulation. Any switching power supply has an output LC filter. Under normal conditions, the load draws current and the LC filter averages the switching waveform. With no load, however, the output can rise toward the peak value of the switching waveform. That can be a problem.

Some power supplies, like the ones that run hard-disk drives, carry a minimum load requirement. If the supply is well designed, it will incorporate protective circuitry that shuts it down when the load falls below that minimum. If it is not well designed, the supply can burn itself out in a few seconds. Even if the supply doesn’t burn out completely, it can suffer damage and degradation that might be difficult to quantify.

Furthermore, a regulator’s control-loop compensation is optimized for a particular load, and it may not be stable at light loads. It’s also possible that feedback signals at light load may be overwhelmed by noise that could also cause unstable operation.

In switching supplies that use pulse-width modulation (PWM), the pulse duration shrinks along with load demands. That means switching losses increase relative to conduction losses. In other words, the FETs spend a larger fraction of each cycle in transition compared to the time they spend cut off or fully conducting.
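
The sketch below illustrates the effect with a deliberately simple loss model: a fixed switching-plus-gate-drive loss and a lumped series resistance, both assumed values. Efficiency holds up near full load and collapses as the load shrinks.

```python
# Why fixed-frequency PWM efficiency collapses at light load: conduction loss
# scales with I^2, but switching, gate-drive, and controller losses are
# roughly constant. Loss coefficients are assumed example values.

V_OUT = 3.3
P_FIXED = 0.5          # switching + gate-drive + controller losses (W), ~load independent
R_LOSS = 0.015         # lumped series resistance: FETs, inductor, traces (ohm)

for i_load in (10.0, 5.0, 1.0, 0.1):
    p_out = V_OUT * i_load
    p_loss = P_FIXED + i_load**2 * R_LOSS
    eff = p_out / (p_out + p_loss)
    print(f"I = {i_load:5.1f} A   efficiency ~ {eff:5.1%}")
```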

SPECIFIC TECHNIQUES
Of course, it is possible to reduce switching losses by operating at lower switching frequencies. But this approach demands either larger inductors and capacitors or higher peak currents in the output inductor and switching FETs.

A better alternative is to change from PWM to pulse frequency modulation (PFM) during light load operation (Fig. 4). In this case, the switching frequency and related switching losses scale down with the load current.
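
Continuing the simple loss model from the PWM sketch above, this variation assumes the switching-related loss scales linearly with load current, as it roughly does under PFM, and shows how light-load efficiency recovers.

```python
# Contrast with the fixed-frequency PWM case above: in PFM, switching events
# (and therefore switching and gate-drive losses) scale roughly with load
# current. Coefficients are assumed for illustration.

V_OUT = 3.3
P_SW_FULL = 0.5      # switching-related loss at full load (W)
R_LOSS = 0.015       # lumped conduction resistance (ohm)
I_FULL = 10.0        # full-load current (A)

for i_load in (10.0, 5.0, 1.0, 0.1):
    p_out = V_OUT * i_load
    p_sw = P_SW_FULL * (i_load / I_FULL)   # frequency, hence loss, tracks the load
    p_loss = p_sw + i_load**2 * R_LOSS
    print(f"I = {i_load:5.1f} A   efficiency ~ {p_out / (p_out + p_loss):5.1%}")
```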


Cycle-skipping is another way to improve efficiency at light loads. In skip mode, a new cycle is initiated only when the output voltage drops below the regulating threshold. The switching frequency is proportional to the load current. In applying this approach in a supply with synchronous rectification, the driver must open the switch each time the current through the inductor reverses, so the MOSFET’s body diode blocks the reverse current.
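
A behavioral sketch of that decision logic, with hypothetical function and signal names, might look like this:

```python
# Behavioral sketch of skip-mode control as described above: a cycle is
# initiated only when the output sags below the regulation threshold, and the
# synchronous (low-side) switch is opened when the inductor current reverses
# so the MOSFET's body diode blocks reverse current. Names are hypothetical.

def skip_mode_step(v_out, i_inductor, low_side_on, v_ref=3.3, hysteresis=0.02):
    """Return (start_new_cycle, low_side_on) for one control evaluation."""
    start_new_cycle = v_out < (v_ref - hysteresis)     # output sagged: fire a pulse
    if low_side_on and i_inductor <= 0.0:              # zero crossing: open the switch
        low_side_on = False
    return start_new_cycle, low_side_on

# Example evaluations with made-up sensed values
print(skip_mode_step(v_out=3.31, i_inductor=0.4, low_side_on=True))   # regulating, keep on
print(skip_mode_step(v_out=3.31, i_inductor=-0.1, low_side_on=True))  # reverse current: open
print(skip_mode_step(v_out=3.25, i_inductor=0.0, low_side_on=False))  # sag: start new cycle
```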

Yet another approach is to supply the load using multiple supplies whose clocks operate at different phases, shedding or adding capacity as needed. In practice, this allows each parallel switcher in a multiphase dc-dc converter to operate at a relatively low frequency.

When combined, though, they produce the responsiveness and regulation performance of a single-phase, very high switching-frequency converter without the switching losses associated with higher frequencies. Another advantage is that, by staggering the phases, the inherent output ripple is smoothed out.

At light loads, it makes sense to shut down some phases, because the efficiency of individual converters is greater at higher loads. As the total load increases, dormant phases can be brought back on line. The tricky part here lies in phase synchronization and balancing—adjusting the relative phases on the fly.
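
A phase-shedding policy can be as simple as keeping each active phase near the per-phase current where its efficiency peaks. The sketch below assumes a four-phase converter and an 8-A per-phase sweet spot, both made-up numbers.

```python
# Sketch of a phase-shedding policy for a multiphase converter: keep each
# active phase near the per-phase load where its efficiency peaks.
# The phase count and per-phase "sweet spot" are assumptions.

import math

N_PHASES = 4           # phases available
I_SWEET = 8.0          # per-phase current near peak efficiency (A), assumed

def active_phases(i_load):
    """Choose how many phases to keep running for a given total load."""
    if i_load <= 0:
        return 1                                   # keep one phase alive for regulation
    return min(N_PHASES, max(1, math.ceil(i_load / I_SWEET)))

for i_load in (30.0, 18.0, 9.0, 2.0, 0.3):
    n = active_phases(i_load)
    print(f"load {i_load:5.1f} A -> {n} phase(s), {i_load / n:4.1f} A per phase")
```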

Interestingly, to achieve the fastest transient response, it can make sense to provide the ability to drive all clock phases in sync. Normally, the clock phases are evenly distributed to minimize the combined ripple. But some regulators can switch to a mode in which the clocks to all phases are time-aligned, effectively paralleling the inductors to reduce total inductance and speed up the transient current ramp.

MOSFET EFFICIENCY
So far, this analysis has concentrated on controller design. What about losses in the switching MOSFETs? Broadly, MOSFET losses can be attributed to channel and body-diode conduction, switching transitions, gate drive, output capacitance, and reverse recovery. For simplification, these can be reduced to conduction losses, switching losses, and “others.”

For any particular MOSFET technology, conduction losses are inversely proportional to MOSFET size, while switching losses are directly proportional to MOSFET size. Optimizing efficiency is a matter of balancing the two.

Switching losses arise from total gate charge (QG), pre-threshold gate-to-source charge (QGS1), post-threshold gate-to-source charge (QGS2), and gate-to-drain charge (QGD). The important losses, though, occur as the MOSFET turns on—that is, as the gate drive voltage transitions from its threshold voltage (Vt) to its plateau voltage (VPlat). After that, switching losses (Fig. 5) are virtually zero. Hence, QGS2 and QGD are the principal contributors to switching loss. (Some data sheets summarize these as “switching charge,” Qsw.)
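
Because the gate driver must supply Qsw while the FET crosses its linear region, the transition time is roughly Qsw/IDrive, and the loss follows from the voltage-current overlap during that interval. A small sketch with assumed, datasheet-style numbers:

```python
# Rough switching-loss estimate from the switching charge Qsw = QGS2 + QGD.
# The gate driver must deliver Qsw while the FET crosses its linear region,
# so the transition time is roughly Qsw / I_DRIVE. Values are assumed.

V_IN = 12.0        # input voltage (V)
I_OUT = 10.0       # load current (A)
F_SW = 500e3       # switching frequency (Hz)
Q_GS2 = 2e-9       # post-threshold gate-source charge (C)
Q_GD = 5e-9        # gate-drain (Miller) charge (C)
I_DRIVE = 1.0      # effective gate-drive current (A)

q_sw = Q_GS2 + Q_GD
t_transition = q_sw / I_DRIVE          # time spent between Vt and VPlat
# ~ (1/2) * V * I * t per edge, with two hard-switched edges per cycle
p_switching = V_IN * I_OUT * t_transition * F_SW
print(f"Qsw = {q_sw * 1e9:.1f} nC, transition ~ {t_transition * 1e9:.1f} ns, "
      f"switching loss ~ {p_switching:.2f} W")
```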

For both the top and bottom switch, the object is to examine the operating conditions and select a MOSFET that, for those conditions, exhibits essentially the same conduction losses as switching losses. That provides reasonable assurance of near-minimum total loss.

The first step is to calculate two characteristics for the MOSFET: switching loss per unit switch charge (expressed in W/nC) and conduction loss per unit drain-source on-resistance (expressed in W/mΩ), and take their ratio. The second step is to compare that ratio to the device’s RDS(on)/Qsw. The closer the two ratios, regardless of their actual values, the lower the total losses.

For the high-side FET, switching loss per unit switch charge is:

(VIn × IOut/IDrive + QG/Qsw × VDrive) × fsw

Conduction loss per unit drain-source on resistance is:

(IOut² + Ipp²/12) × (VOut/VIn)

For the low-side FET, substitute Vfd, the forward-voltage drop of the FET’s body diode, for VIn in the first expression, and use (1 – VOut/VIn) in place of VOut/VIn in the second. In both expressions, IDrive is the gate-drive current, roughly (VGate – VThreshold)/(RDrive + RGate), and VDrive is the supply voltage for the gate driver.
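
Tying the pieces together, the sketch below applies the selection procedure to a few hypothetical high-side candidates. All device parameters and operating conditions are assumptions chosen only to show the mechanics of the ratio comparison.

```python
# Sketch of the high-side FET selection procedure described above: compute
# switching loss per unit switch charge (W/nC) and conduction loss per unit
# on-resistance (W/mohm), take their ratio, and compare it to each candidate's
# RDS(on)/Qsw. Candidate devices and all operating values are assumptions.

V_IN, V_OUT = 12.0, 3.3
I_OUT = 10.0        # load current (A)
I_PP = 2.0          # inductor ripple current, peak-to-peak (A)
F_SW = 500e3        # switching frequency (Hz)
I_DRIVE = 1.0       # effective gate-drive current (A)
V_DRIVE = 5.0       # gate-drive supply (V)

def high_side_ratios(q_g, q_sw, r_dson):
    """Return (target ratio, device ratio) in matching units (mohm/nC)."""
    # Switching loss per unit switch charge, W per nC
    p_sw_per_nC = (V_IN * I_OUT / I_DRIVE + (q_g / q_sw) * V_DRIVE) * F_SW * 1e-9
    # Conduction loss per unit on-resistance, W per milliohm
    p_cond_per_mohm = (I_OUT**2 + I_PP**2 / 12) * (V_OUT / V_IN) * 1e-3
    target = p_sw_per_nC / p_cond_per_mohm      # ideal RDS(on)/Qsw balance point
    device = (r_dson * 1e3) / (q_sw * 1e9)      # candidate's RDS(on)/Qsw, mohm/nC
    return target, device

# Hypothetical candidates: (total gate charge QG, switch charge Qsw, RDS(on))
candidates = {
    "FET_A (small die)":  (10e-9,  4e-9, 12e-3),
    "FET_B (medium die)": (20e-9,  8e-9,  6e-3),
    "FET_C (large die)":  (40e-9, 16e-9,  3e-3),
}
for name, (q_g, q_sw, r_dson) in candidates.items():
    target, device = high_side_ratios(q_g, q_sw, r_dson)
    print(f"{name:20s} target {target:5.2f}  device {device:5.2f} mohm/nC")
```

With these made-up numbers, the smallest candidate lands closest to the target ratio, which is what you would expect for a high-side switch that conducts for only a small fraction of each cycle.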
