
Resolution vs Accuracy vs Sensitivity Cutting Through the Confusion

Dec. 1, 1998

This article is part of the TechXchange: Why Low IQ is the Smart Thing to Do.

When selecting an analog-to-digital (A/D) board, an external data acquisition system, or other measurement device, many parameters must be considered. Product characteristics such as price and measurement speed are easily understood.

But when you examine the resolution, accuracy, and sensitivity of a product, be careful. Even experienced engineers confuse these related, yet very different, specifications.

The most common mistake is to assume that more resolution always represents a better measurement. As a result, you assume that an A/D board with 16-bit resolution is better than a 14-bit board, and a 20-bit board is still better. Actually, the 14-bit board may be a better choice if it is more accurate and has better sensitivity than other higher-resolution products.

There is a false implication that the higher the resolution, the better suited the product is to make measurements. The reality is that accuracy and sensitivity are of equal or greater importance than resolution. You should pay equal attention to these parameters when selecting a product.

Keep in mind, too, that more resolution rarely comes without a penalty in acquisition speed.

Accuracy

Accuracy describes the amount of uncertainty that exists in a measurement with respect to the relevant absolute standard. It can be defined in several different ways and is dependent on the specification philosophy of the supplier as well as product design.

Most accuracy specifications include a gain and an offset parameter. Offset parameters, usually expressed as an absolute amount such as volts or ohms, generally are independent of the magnitude of the input. Typically, gain errors depend on the magnitude of the input and usually are expressed as a percentage of the reading. An example of an accuracy specification and what can be expected of the reading is found in Table 1.

Table 1. Readings as a Function of Accuracy

Input Voltage    Range of Readings Within the Accuracy Specification
0 V              -1 mV to +1 mV
5 V              4.994 V (5 V - 0.1% of 5 V - 1 mV) to 5.006 V (5 V + 0.1% of 5 V + 1 mV)
10 V             9.989 V (10 V - 0.1% of 10 V - 1 mV) to 10.011 V (10 V + 0.1% of 10 V + 1 mV)

Conditions: Input: 0 to 10 V; Accuracy: ±(0.1% of input + 1 mV)
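The arithmetic behind Table 1 can be sketched in a few lines of Python; the function name and parameter names are illustrative, not part of any product API:

```python
# Sketch of the Table 1 arithmetic: a reading is bounded by
# ±(gain error as a percentage of the input + a fixed offset).

def reading_bounds(v_in, gain_pct=0.1, offset_v=1e-3):
    """Return the (low, high) limits of a reading within the accuracy spec."""
    err = v_in * gain_pct / 100.0 + offset_v
    return v_in - err, v_in + err

for v in (0.0, 5.0, 10.0):
    lo, hi = reading_bounds(v)
    print(f"{v:5.1f} V -> {lo:.3f} V to {hi:.3f} V")
```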

Resolution

Resolution describes, in relative terms, the degree to which a change can be detected. It is expressed as a fraction of an amount to which you can easily relate. For example, printer manufacturers often describe resolution as dots per inch, which is easier to relate to than dots per page.

In the data acquisition world, resolution most often is expressed as a number of bits, such as 12, 16, or 20. In the digital multimeter world, resolution usually is described in digits, such as 4½, 5½, or 6½.

To relate bits of resolution to actual measurement parameters such as voltage or temperature, first you must do some math. Suppose that a data acquisition device with ±10 V full scale has 16 bits of resolution. To relate this to volts, first calculate 2¹⁶, which is 65,536.

As a result, the device can resolve one part out of 65,536; on a ±10-V range having a 20-V peak-to-peak (p-p) range, the device can resolve 20 V/65,536 = 305 µV. This generally means that the smallest change that can be detected by the measurement device is 305 µV.
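The bits-to-volts arithmetic above is a one-liner; a quick Python sketch (the function name is illustrative):

```python
def lsb_size(full_scale_pp_v, bits):
    """Smallest step an ideal converter can resolve: span / 2^bits."""
    return full_scale_pp_v / (2 ** bits)

# 16 bits over a ±10-V (20-V p-p) range: 20 / 65,536 ≈ 305 µV
print(f"{lsb_size(20.0, 16) * 1e6:.0f} µV")
```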

In reality, not all of the resolution is necessarily useful because other factors enter into the situation, most notably noise. A product specified to 16-bit resolution may have 16 counts (4 bits) of p-p noise. Instead of resolving 16 bits, it can only resolve 12. If the noise is random in nature, then simple averaging can improve resolution nearer to the stated specification if you are willing to live with the effective reduction in sampling rate.

Let’s look at another example. Averaging reduces noise by the square root of the number of samples. If a measurement has 3 bits of p-p noise, equal to 2³ or 8 counts of noise, then averaging 64 samples would reduce the noise to one count. The net result would be recovering essentially the full 16-bit resolution.

With this approach, the measurement rate would be 64 times slower than without averaging, which may or may not be a limitation, depending on the application. Although averaging is very effective, it isn’t perfect. You will never be able to average things such as nonlinearity, so this technique only goes so far.

This approach only works as long as the noise is Gaussian. It is difficult to reduce slowly varying 1/f noise with averaging, and the net result often is worse than the square root of the samples might predict.
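The √N behavior of averaging can be sketched as follows (assuming the noise is random and Gaussian, per the caveat above; the function name is illustrative):

```python
import math

def averaged_noise(noise_counts, n_samples):
    """Averaging N samples reduces random (Gaussian) noise by sqrt(N)."""
    return noise_counts / math.sqrt(n_samples)

# 3 bits of p-p noise = 2**3 = 8 counts; 64 averages bring it to 1 count
print(averaged_noise(2 ** 3, 64))  # 1.0
```

Note the cost shown in the text: the effective sample rate drops by the same factor N, here 64.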

Sensitivity

Sensitivity is an absolute quantity; resolution is a relative quantity. Sensitivity describes the smallest absolute amount of change that can be detected by a measurement, often expressed in terms of millivolts, microhms, or tenths of a degree.

Sensitivity should not be confused with accuracy—they are entirely different parameters. For example, a device specified with 1-mV sensitivity may only be accurate to 10 mV with an applied input of 10 V. Yet if the 10-V input signal changed by 1 mV, the device still could observe the difference. Sensitivity sometimes can be improved by averaging.

The actual sensitivity is as much a function of the measurement device as it is the environment in which the measurement is being made. A device may be perfectly capable of making measurements with 1-µV sensitivity. But if the cabling is not adequately shielded and free of thermally generated voltages, then achieving 1-µV sensitivity will be impossible.

The easiest way to determine the sensitivity of a device is to look at its performance on its lowest range. The noise specification on this range will largely dictate the device’s sensitivity. Other subtle factors, such as short-term input offset-voltage drift and the quality of the input connectors, will influence it as well. Often the test setup (interconnects, shielding) limits the ultimate sensitivity unless appropriate care is taken.

Now let’s tie these three strands together by applying them to a real application. For this illustration, let’s use the IOtech Personal Daq/56 USB-based data acquisition system. Table 2 includes excerpts from the measurement specifications for this product. Assume that we have a sensor or transducer whose output can range from a few microvolts to 3 V.

Table 2. Personal Daq/56 Specifications

Speed vs Resolution

Speed Designation          Maximum Sample Rate (S/s)    Resolution (Bits rms, -4-V to +4-V Range)
Slow, 60-Hz Rejection      3.2                          22
Medium, 60-Hz Rejection    9.2                          21
Fast                       48                           17
Very Fast                  80                           15


Programmable Voltage Ranges

                          RMS Noise (µV)
Range                     Slow    Medium    Fast
-4 V to +4 V              4       5         60
-2 V to +2 V              4       4         30
-1 V to +1 V              2       3         20
-500 mV to +500 mV        1.5     2         15
-250 mV to +250 mV        <1      2         8

Let’s say that under a certain condition, the output of our sensor is 200 mV, and under another condition, the output is 3 V. Let’s determine the resolution, sensitivity, and accuracy of the measurement under each of these conditions.

First we must consider the usual trade-off between speed and resolution. For this example, we want to maximize resolution, so let’s select the slow conversion rate that will give us the best noise rejection (line cycle integration) and result in 22 bits of available resolution.

Accuracy

Based on the specification for a one-year performance over a 15 to 35°C ambient temperature range, the accuracy is 0.01% of reading + 0.002% of range (exclusive of noise). This is absolute accuracy. We included the exclusive-of-noise note because noise is specified separately in Table 2.

  • With a 200-mV signal using the 250-mV range:
    0.01% × 200 mV = ±20 µV
  • For percentage of range using the 250-mV range:
    0.002% × 250 mV = ±5 µV

The uncertainty due to the noise on this range, from Table 2, is 1 µVrms or ±3 µV p-p, a fairly small value.

Unless we correct for it (a simple arithmetic subtraction of the measured offset), we also must include the input offset voltage uncertainty. The worst case specification is 20 µV for the Personal Daq/56.

The total uncertainty in the measurement for the 200-mV sensor output is

±(20 µV + 5 µV + 20 µV + 3 µV) = ±48 µV

This implies that the 200-mV sensor output can be measured to an accuracy of ±48 µV.

Using the same methodology, we can calculate the accuracy for the 3-V sensor output. In this case, we must use the 4-V range.

Total Uncertainty = ±(300 µV + 80 µV + 20 µV + 12 µV)
= ±412 µV

(The ±12-µV noise term is half of the 24-µV p-p noise on the 4-V range, which corresponds to the 4-µVrms specification in Table 2.)

So, the accuracy of our measurement when the sensor output is 3 V will be conservatively ±412 µV.
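The worst-case error budget for both readings can be tallied programmatically. A sketch with the article's numbers; the function and parameter names are illustrative:

```python
def total_uncertainty_uv(reading_v, range_v, gain_pct=0.01, range_pct=0.002,
                         offset_uv=20.0, noise_uv=3.0):
    """Worst-case sum of gain, range, offset, and noise errors, in µV."""
    gain_err_uv = reading_v * gain_pct / 100.0 * 1e6
    range_err_uv = range_v * range_pct / 100.0 * 1e6
    return gain_err_uv + range_err_uv + offset_uv + noise_uv

# 200-mV reading on the 250-mV range: 20 + 5 + 20 + 3 = 48 µV
print(total_uncertainty_uv(0.200, 0.250))
# 3-V reading on the 4-V range (±12-µV noise): 300 + 80 + 20 + 12 = 412 µV
print(total_uncertainty_uv(3.0, 4.0, noise_uv=12.0))
```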

Resolution

Table 2 shows the available resolution specified on the 4-V range is 22 bits rms. Let’s determine the resolution at the 3-V signal level.

Remember that resolution is the ratio between the maximum signal we are measuring and the smallest part we can resolve. The noise specification for the 4-V range is 4 µVrms, which represents the smallest part we can resolve unless we average.

4 µVrms/3 V = 1 part in 750,000, which is approximately 2²⁰, or 20 bits rms

This can be improved somewhat by averaging.

For the 200-mV example, 1 µVrms/200 mV = 1 part in 200,000, which is approximately 2¹⁸, or 18 bits rms
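These noise-limited figures follow directly from the base-2 logarithm of the signal-to-noise ratio; a short sketch (the function name is illustrative):

```python
import math

def noise_limited_bits(signal_v, noise_vrms):
    """Effective rms resolution in bits: log2(signal / rms noise)."""
    return math.log2(signal_v / noise_vrms)

print(round(noise_limited_bits(3.0, 4e-6)))    # 4-V range, 3-V signal: ~20 bits
print(round(noise_limited_bits(0.200, 1e-6)))  # 250-mV range, 200-mV signal: ~18 bits
```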

For both examples, the resolution is limited by noise. Had we used a 16-bit converter instead of the 22-bit converter, then the analog-to-digital converter, rather than the noise, would have been the limiting factor, yielding 16-bit resolution.

Sensitivity

Our most sensitive measurement can be made on the 250-mV range, where the noise is only 1 µVrms. In this case, our sensitivity is 1 µVrms, or 6 µV p-p (peak-to-peak noise is computed as ~6x the rms value). If we confine our measurement to the 4-V range, our sensitivity in this case is 4 µVrms or 24 µV p-p.
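The rms-to-peak-to-peak conversion used here is a rule of thumb (roughly 6× for Gaussian noise); a minimal sketch:

```python
def pp_from_rms(noise_vrms, crest_factor=6.0):
    """Approximate p-p noise from rms, assuming Gaussian statistics (~6x)."""
    return crest_factor * noise_vrms

print(pp_from_rms(1e-6) * 1e6)  # 250-mV range: ≈ 6 µV p-p
print(pp_from_rms(4e-6) * 1e6)  # 4-V range: ≈ 24 µV p-p
```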

Observations

A summary of results is found in Table 3. It shows that we can obtain our best sensitivity on the lowest range. However, the lowest range generally limits the dynamic range of our measurement, in this case to ±250 mV. If you are unsure how big your sensor output might be, look for the highest range with the best sensitivity when setting up the measurement.

Table 3. Actual Accuracy, Resolution, and Sensitivity Values

Sensor Signal    Best Range    Accuracy    Resolution (Without Averaging)    Sensitivity
200 mV           ±250 mV       ±48 µV      18 bits                           6 µV
3 V              ±4 V          ±412 µV     20 bits                           24 µV

Our measurement accuracy is determined largely by the value we are measuring (0.01% of reading) because the system’s range error is only 0.002%. On some data acquisition equipment, this spec is larger. If it is, then you compromise accuracy by selecting a range higher than what you need to make the measurement.

Conclusions

Different companies have different guidelines about how they derive their specifications, particularly when several components determine the specification, which often is the case for accuracy. Some companies simply add the accuracy errors and publish a worst-case specification. Others use a two-sigma calculation, effectively the square root of the sum of the squares of the error components. Still others simply add the typical, rather than worst-case, accuracy specifications of each component.

It may be necessary to discuss specification philosophy with your supplier before selecting a particular product for your application. The ultimate test is to evaluate the device before purchasing it to determine whether its performance is acceptable.


About the Author

Tom DeSantis

Tom DeSantis is the founder and president of IOtech. He holds a bachelor’s degree in electrical engineering and has 20 years of experience in the test and measurement industry. 
