These Power-Saving Techniques Promote Longer ADC System Life

March 24, 2011
A number of ways to reduce power consumption in ADC circuit designs are described.

Fig. 1. Resistive-attenuator options for measuring high-voltage signals offer a simple way to limit an input signal to match the input range of an ADC with a reduced supply voltage (a). However, the simple divider increases the source impedance seen by the ADC, prompting the modifications in (b) and (c).

Fig 2. Calculations of average power for three SAR ADCs, each sampling 16 channels in 1 ms, illustrate the pitfalls of making design decisions based only on data appearing on the front page of a data sheet.

Fig 3. Plots of the variation in supply current versus sampling rate for a SAR ADC provide useful tradeoff information.

Table 1. MAX11200 output data rates with the resulting noise-free resolution (NFR), effective resolution, and RMS noise

Table 2. Comparison of an ideal 12-bit ADC plus PGA with a low-noise ΔΣ ADC

Electronic design 101: Less power consumption translates into longer battery life or greater control-system functionality in a portable sensor, 4- to 20-mA control loop, or other system design that contains an analog-to-digital converter (ADC). Multiple approaches and tradeoffs are available when selecting an ADC to meet an application’s power budget.

The most obvious power-reduction approach is to use an ADC that operates at a lower supply voltage. Today’s ADCs can operate from analog and digital supplies of 3, 2.5, or even 1.8 V, and a voltage drop from, say, 5 V to 3 V gives you an instant power savings of 40% (assuming the supply current stays the same).
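A minimal sketch of that arithmetic, assuming the supply current is unchanged so that power scales linearly with supply voltage:

```python
# Power savings from lowering the ADC supply voltage, assuming the
# supply current is unchanged so that P = V * I scales with V.
def supply_power_savings(v_old, v_new):
    """Fractional power savings when dropping the supply from v_old to v_new."""
    return 1.0 - (v_new / v_old)

print(f"5 V -> 3 V:   {supply_power_savings(5.0, 3.0):.0%} savings")   # 40%
print(f"5 V -> 1.8 V: {supply_power_savings(5.0, 1.8):.0%} savings")   # 64%
```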

Reducing the digital supply voltage creates two somewhat minor downsides: the need for a separate digital power pin on the ADC, and the possibility of a lower drive current on the digital outputs. If power is reduced by lowering the analog supply voltage, a lower signal-to-noise ratio (SNR) becomes the primary concern. Today’s lower-noise process technologies and design techniques, however, allow the SNR of a lower-voltage ADC to equal that of a higher-power ADC running from a higher analog supply.

To optimize power, it’s important to examine the rest of the analog front end, in addition to the analog supply voltage. Traditional sensors and analog-input front ends require an input range of 0 to 5 V or even ±10 V to achieve the highest dynamic performance or connect directly to the sensor. In the past, reducing the supply voltage decreased the ADC’s dynamic range. Assuming the sensor output remained unchanged at 5 or ±10 V, the signal then had to be attenuated to match the ADC’s input range.

This attenuation is easily accomplished by adding a resistive divider between the sensor and ground. Large resistor values can be used to limit the power consumption (Fig. 1a). ADCs typically require a low source impedance, though, which conflicts with the need for low power consumption in this resistor-attenuator approach.

Another option increases the resistor value between the source and the ADC input and reduces the resistor value between the ADC input and ground (Fig. 1b). Such a change decreases the effective impedance seen by the ADC from 50 kΩ to 9.5 kΩ, but it also reduces the ADC’s input range. Assuming a 10-V source, the 0- to 5-V input range shrinks to a span of 0 to 0.5 V.

In Figures 1a and 1b, adding a bypass capacitor to ground between the resistive divider and the ADC input will isolate the source impedance from that seen by the ADC input. Such a bypass capacitor rapidly transfers charge to the sampling capacitor during the ADC’s signal-acquisition phase. Unfortunately, the bypass capacitor also limits the bandwidth of the input signal.
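To make those tradeoffs concrete, here is a short sketch of the divider arithmetic. The resistor values (100 kΩ/100 kΩ for a Fig. 1a-style divider, 190 kΩ/10 kΩ for a Fig. 1b-style divider) and the 10-nF bypass capacitor are illustrative assumptions chosen to reproduce the impedances quoted above, not values taken from the figures:

```python
import math

def divider(r_top, r_bottom, v_source):
    """Output voltage and Thevenin (source) impedance of a resistive divider."""
    v_out = v_source * r_bottom / (r_top + r_bottom)
    r_thev = (r_top * r_bottom) / (r_top + r_bottom)   # parallel combination
    return v_out, r_thev

def bypass_bandwidth(r_thev, c_bypass):
    """-3-dB bandwidth of the divider plus bypass capacitor (single-pole RC)."""
    return 1.0 / (2.0 * math.pi * r_thev * c_bypass)

# Fig. 1a-style divider: equal 100-kΩ resistors (assumed values)
v_a, r_a = divider(100e3, 100e3, 10.0)      # -> 5.0 V, 50 kΩ
# Fig. 1b-style divider: 190 kΩ over 10 kΩ (assumed values)
v_b, r_b = divider(190e3, 10e3, 10.0)       # -> 0.5 V, 9.5 kΩ

print(f"Fig. 1a: {v_a:.1f} V full scale, {r_a/1e3:.1f} kΩ source impedance")
print(f"Fig. 1b: {v_b:.1f} V full scale, {r_b/1e3:.1f} kΩ source impedance")

# Adding an assumed 10-nF bypass capacitor to the Fig. 1b divider
print(f"Input bandwidth with 10-nF bypass: {bypass_bandwidth(r_b, 10e-9):.0f} Hz")
```

The last line shows the bandwidth penalty mentioned above: a larger bypass capacitor (or a higher source impedance) lowers the usable input bandwidth.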

Thus, a third option places a buffer amplifier between the resistive divider and the ADC (Fig. 1c). Of course, buffers and other amplifier/filter signal-conditioning stages add power dissipation.

Conversely, a decrease in the analog supply voltage and input range isn’t much of a concern if the sensor output is small. One such example is the resistive Wheatstone-bridge network commonly found in sensor systems. It offers a typical full-scale output swing of 2 mV for every 1 V of sensor excitation.

In this setup, the ADC measures a full-scale range of only 5 to 10 mV of the sensor output. Moreover, the ADC’s reduced input range doesn’t matter as much as other parameters, such as higher resolution, a low noise floor, and excellent overall dynamic range.
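For reference, the bridge arithmetic behind that 5- to 10-mV figure, assuming typical excitation voltages of 2.5 V and 5 V (the excitation values are illustrative):

```python
# Full-scale output of a Wheatstone bridge with a 2-mV/V sensitivity,
# for two commonly used excitation voltages (assumed, for illustration).
BRIDGE_SENSITIVITY = 2e-3          # volts of output per volt of excitation

for v_excitation in (2.5, 5.0):    # volts
    v_full_scale = BRIDGE_SENSITIVITY * v_excitation
    print(f"{v_excitation:.1f}-V excitation -> {v_full_scale*1e3:.0f}-mV full-scale output")
```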

Burst-Mode Processing

Another technique in power-sensitive ADC design is “burst-mode processing.” The ADC is powered up for a quick burst of conversions and then powered down to a low-power sleep mode.

This mode of operation is ideal for applications that include a fast microcontroller or FPGA, plus an ADC capable of producing at least several kilosamples per second. While powered down, the ADC supply currents can be reduced to a few microamperes or less. As a result, the average power is dramatically less than power consumed at the ADC’s fastest sampling rate.

Burst-mode processing takes advantage of the ADC’s ability to cycle on and off at an effective rate that’s lower than its maximum sampling rate. Nearly every ADC datasheet specifies power dissipation at the maximum sample rate (also known as output rate or throughput rate).

Consider three similar ADCs with integrated multiplexers, each measuring 16 analog inputs over a period of 1 ms for an effective sample rate of 1 ksample/s per channel (Fig. 2). At their maximum sample rates, ADC #1 dissipates 8.3 mW at 3 Msamples/s, ADC #2 dissipates 6.0 mW at 1 Msample/s, and ADC #3 dissipates 4.7 mW at 300 ksamples/s.

In scanning only the front page of the datasheets, it would appear that the worst choice for power dissipation would be the 3-Msample/s ADC. But upon further investigation of its active power, shutdown (or standby) power, and effective sample rate, it becomes clear that the faster ADC is actually the better choice.

For ADC #1, the 8.3-mW active power is enabled for only 5.3 µs (333 ns per conversion × 16 conversions), and its standby/shutdown power of 6 µW applies for the remaining 994.7 µs of the 1-ms period. Average power is [(active power × active time) + (shutdown power × shutdown time)]/total cycle time, which at this effective throughput rate of 1 ksample/s works out to 50 µW.

ADC #2 is similar to ADC #1, but has a 1-Msample/s maximum sample rate. Active power is 6 mW for 16 µs (1 µs per conversion) and shutdown power is 6 µW for 984 µs, yielding an average power of roughly 102 µW, more than twice that of ADC #1.

ADC #3 is built from a lower-speed core, with a 300-ksample/s maximum sample rate. Power dissipation is only 4.65 mW, but converting 16 samples now takes 53 µs (10 times longer than ADC #1), with a shutdown power of 15 µW for 947 µs. Therefore, average power for ADC #3 is 260.7 µW, more than five times that of ADC #1.
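The arithmetic behind Figure 2 is simple enough to script. The sketch below reproduces the quoted figures from the active power, per-conversion time, and shutdown power of each device; it is a worked example of the average-power formula above, not a general model:

```python
def burst_average_power(p_active, t_conversion, n_conversions,
                        p_shutdown, t_cycle):
    """Average power for burst-mode operation: convert n samples, then sleep
    for the remainder of the cycle."""
    t_active = t_conversion * n_conversions
    t_sleep = t_cycle - t_active
    return (p_active * t_active + p_shutdown * t_sleep) / t_cycle

T_CYCLE = 1e-3   # 16 channels scanned once per millisecond

adcs = {
    "ADC #1 (3 Msps)":   dict(p_active=8.3e-3,  t_conversion=333e-9,  p_shutdown=6e-6),
    "ADC #2 (1 Msps)":   dict(p_active=6.0e-3,  t_conversion=1e-6,    p_shutdown=6e-6),
    "ADC #3 (300 ksps)": dict(p_active=4.65e-3, t_conversion=3.33e-6, p_shutdown=15e-6),
}

for name, params in adcs.items():
    p_avg = burst_average_power(n_conversions=16, t_cycle=T_CYCLE, **params)
    print(f"{name}: {p_avg*1e6:6.1f} µW average")
# Prints roughly 50, 102, and 262 µW, in line with the figures in the text
# (the small ADC #3 difference comes from rounding the per-conversion time).
```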

One potential drawback to burst-mode processing is the possible requirement for a microcontroller or FPGA with a faster clock rate. Another is the need to power the voltage reference off and on. If the ADC has an internal reference, it will require a period of time (often >100 µs) to power up and settle before the ADC can deliver its guaranteed linearity specifications.

For applications that operate in burst mode with a voltage reference external to the ADC, the reference can be powered on at all times. That reference should consume very little power. The MAX6029, for example, is a series reference that draws only 5.25 µA maximum over temperature. Preset voltage outputs include 2.048, 2.5, 3, 3.3, 4.096, and 5 V, which match up well with nearly every ADC. The 2.048-V version, for instance, adds only 15.75 µW (5.25 µA from a 3-V supply) of average power.

Slower Sample Rate

Most ADC datasheets specify supply current for two conditions: the fastest sampling rate and power-down mode. These data points are good to know, but many systems run the ADC at a sample rate lower than the maximum. In that case, it’s helpful to investigate how the supply current changes with sample rate.

Consider a plot of supply current versus sample rate for the 300-ksample/s ADC used in Figure 2, operating with a 3-V supply (Fig. 3). Power dissipation is 3 V × 0.62 mA = 1.86 mW at 300 ksamples/s, but only 1.26 mW at 100 ksamples/s, yielding a power savings of 32%.

By powering up for a conversion and powering down between conversions, a SAR ADC can significantly save power at lower sample rates. Similar power-dissipation profiles can be seen on most SAR ADCs, but the power savings may be less dramatic if some of the internal circuitry remains active between conversions. In any case, it’s a good idea to check the curve of typical supply current versus sampling rate on a SAR ADC datasheet.
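One way to reason about such a curve is a first-order model in which the ADC draws a fixed charge per conversion plus a small static bias current that stays on between conversions. The two coefficients below are not datasheet values; they are simply fit to the two operating points quoted above (0.62 mA at 300 ksamples/s and 0.42 mA, or 1.26 mW/3 V, at 100 ksamples/s), so the sketch only illustrates the shape of the tradeoff:

```python
# First-order model of SAR ADC supply current vs. sample rate:
#   I(fs) = I_static + Q_conv * fs
# where Q_conv is the charge drawn per conversion and I_static is the
# bias current that remains on between conversions.
I_STATIC = 0.32e-3          # amperes (fit to the text's two points, not a datasheet value)
Q_CONV = 1.0e-9             # coulombs per conversion (fit, not a datasheet value)
V_SUPPLY = 3.0              # volts

def supply_current(sample_rate):
    return I_STATIC + Q_CONV * sample_rate

for fs in (300e3, 100e3, 10e3):
    i = supply_current(fs)
    print(f"{fs/1e3:5.0f} ksps: {i*1e3:.2f} mA, {i*V_SUPPLY*1e3:.2f} mW")
# 300 ksps -> 1.86 mW and 100 ksps -> 1.26 mW, matching the Figure 3 endpoints.
```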

SAR Versus Delta-Sigma ADCs

This reduction in supply current at lower sample rates is essentially unique to SAR ADCs. The other ADC type used primarily for precision applications, the delta-sigma (ΔΣ) ADC, typically doesn’t save power at lower output rates, because the ΔΣ modulator achieves high accuracy by oversampling the input signal and then averaging the result. The sampling circuitry of a SAR ADC, in contrast, doesn’t run continuously. For each sample, it takes a single “snapshot” of the analog input.

Running a ΔΣ ADC at lower output rates won’t save power (see “Lower-Power Delta-Sigma Design”). However, it does provide lower averaged noise and, thus, better effective resolution. As an example, the MAX11200 24-bit ΔΣ ADC offers low power (<1 mW maximum) and high effective resolution (23+ bits). The output rate and oversampling ratio can be varied to achieve higher effective resolution at lower output rates.

Operating on an internal oscillator of 2.4576 MHz or 2.048 MHz, the MAX11200 achieves an effective resolution of 21.7 bits at 120 samples/s and 23.6 bits at 10 samples/s. Multiple sample rates are selectable via software control, each with a resulting noise-free resolution (NFR), effective resolution, and RMS noise (see Table 1).

Enhance Dynamic Range, Drop The Gain Stage

When considering whether to use a SAR or ΔΣ ADC, it’s helpful to look at power dissipation for the complete signal chain, which may include a programmable-gain amplifier (PGA). Many SAR ADC signal chains amplify or attenuate the input signal so that it occupies a substantial portion of the ADC’s full-scale input range; that gain can come from a PGA integrated into the ADC or from an external PGA.

For example, a design that measures a 20-mV signal sourced by a Wheatstone-bridge sensor may include a gain stage of 100 between the bridge and the ADC. (ADCs often provide an input range of 0 to 3 V, or 0 to 5 V.) Assuming a 12-bit ADC biased with a 3.0-V reference, the least significant bit (LSB) is 0.73 mV. Without gain, the ADC would resolve only 27 LSBs in the 20-mV signal (20 mV/0.73 mV). After adding a 100-V/V gain stage, the ADC resolves 2740 LSBs in the same signal.
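A quick sketch of that LSB arithmetic, using the numbers from the text (12-bit converter, 3.0-V reference, 20-mV signal, 100-V/V gain):

```python
# LSB size and signal span for a SAR ADC with and without a front-end gain stage.
V_REF = 3.0          # volts (ADC reference / full-scale range)
N_BITS = 12
V_SIGNAL = 20e-3     # volts, Wheatstone-bridge output
GAIN = 100.0         # V/V, internal or external PGA

lsb = V_REF / 2**N_BITS                                      # ≈ 0.73 mV
print(f"LSB size: {lsb*1e3:.2f} mV")
print(f"Without gain:   {V_SIGNAL / lsb:.0f} LSBs")          # ≈ 27
print(f"With x100 gain: {V_SIGNAL * GAIN / lsb:.0f} LSBs")   # ≈ 2730 (the text's 2740
                                                             # comes from rounding the LSB to 0.73 mV)
```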

The cost of high-resolution, high-performance ΔΣ ADCs has dropped enough to make them an affordable alternative to the SAR ADC plus PGA. With the low noise and resulting high effective resolution of a ΔΣ ADC, the PGA and its power dissipation can be eliminated altogether.

Many ΔΣ ADCs connect directly to the sensor, while providing the same input-signal granularity (resolution) as a SAR ADC plus PGA. The low noise level of a ΔΣ ADC (below 1 µV) makes this performance possible. Effective resolution, determined by the ADC’s input range and inherent noise level, captures the ADC noise under essentially dc conditions, making quantization noise a non-issue:

Effective resolution = log2(voltage input range/voltage noise)
= log2(20 mV/210 nV)
= 16.5 bits

Using the same 20-mV bridge signal and the aforementioned ADC (with its noise level of 210 nV RMS), it’s possible to achieve an effective resolution of 16.5 bits. After calculating effective resolution, the designer can derive the resulting noise-free resolution (effective resolution – 2.7 bits) and the resulting noise-free counts. Noise-free counts are defined as the number of readings achievable by an ADC without noise disturbance. For example, an ADC with 12.0-bit noise-free resolution (an ideal 12-bit ADC) offers 4096 noise-free counts. The noise-free count in the previous example is:

Noise-free counts (LSBs) = 2^NFR
= 2^(16.5 – 2.7)
= 2^13.8
= 14,263 LSBs
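The same chain of calculations in script form, using the 20-mV range and 210-nV RMS noise from the example above; the 2.7-bit spacing between effective and noise-free resolution is the rule of thumb quoted in the text:

```python
import math

V_RANGE = 20e-3       # volts, input signal range
V_NOISE_RMS = 210e-9  # volts RMS, ADC input-referred noise

effective_resolution = math.log2(V_RANGE / V_NOISE_RMS)   # ≈ 16.5 bits
noise_free_resolution = effective_resolution - 2.7        # ≈ 13.8 bits
noise_free_counts = 2 ** noise_free_resolution            # ≈ 14,700 unrounded; the text's
                                                           # 14,263 uses NFR rounded to 13.8 bits

print(f"Effective resolution:  {effective_resolution:.1f} bits")
print(f"Noise-free resolution: {noise_free_resolution:.1f} bits")
print(f"Noise-free counts:     {noise_free_counts:,.0f} LSBs")
```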

Thus, when compared with a lower-resolution SAR ADC plus PGA, the ΔΣ ADC with low noise offers a higher effective resolution, higher noise-free resolution, and more noise-free counts. Table 2 compares the specifications of an ideal 12-bit ADC plus PGA with those of a low-noise ΔΣ ADC. Not only does the ΔΣ ADC achieve more noise-free counts and higher resolution, it does so on a lower power budget. The main tradeoff (usually) is a lower maximum sample rate for the ΔΣ ADC.

To summarize, the need for low power dissipation has resulted in many new techniques to reduce total system power: different ADC architectures, burst-mode processing, SAR ADC operation at a lower sampling rate, and dropping the supply voltage. These techniques impose various tradeoffs, but they can provide longer battery life, or perhaps allow the use of a higher-performance ADC while meeting the power budget of a 4- to 20-mA current loop.
