“There are three types of lies: lies, damn lies, and statistics.” With those words, my MIT statistics professor introduced his course. Although the data acquisition system buyer typically does not face this same deception, when you look at the myriad ways that different vendors specify their products, you may begin to wonder if the obfuscation is intentional.
The following specifications show the way five industry leaders specify the accuracy of a 16-channel, 12-bit, ~200 kS/s analog input (ignoring for the moment any temperature drift specifications and using the ±10-V input range if specifications vary by range):
- 0.02% of reading, ±1.5 least significant bits (LSBs).
- 0.076% of reading, plus 6.385 mV (offset), plus 3.906 mV of noise and quantization error.
- ±30 µV, ±1.0 LSB integral nonlinearity (INL), ±0.5 LSB differential nonlinearity (DNL), ±40 µV, ±0.3 LSB rms (noise).
- 0.02% of full-scale range, ±1.5 LSBs (INL), ±0.75 LSB (DNL).
These specifications show the formats used by Measurement Computing, National Instruments, Keithley Instruments, Data Translation, and Datel. As you can see, there is little consistency in the way different manufacturers specify input accuracy.
When you add temperature-induced errors into the mixture, the mud gets thicker still. The buyer is required to perform an assortment of complex mathematical calculations to determine which of these boards is best suited for an application.
Regardless of vendor or specification, all of these data acquisition boards measure one thing: voltage. Virtually all current-input boards measure the voltage across a precision resistor and determine current via Ohm’s law.
Also, virtually all sensors and input-source scale factors are provided in terms of volts per engineering unit. Be it a simple conversion like I = V/R or a complex calculation like the look-up table algorithm or the 6th order polynomial required to convert thermocouple voltages to temperature, it almost always boils down to a signal source with a voltage output and an input system that measures voltage.
Why, then, doesn’t anybody provide data acquisition accuracy specs in units of voltage, such as millivolts or microvolts? I’ve posed this question to many old-timers in this industry. Although their responses weren’t identical, the consensus is that it’s done this way because that’s the way the analog-to-digital converter (ADC) specs have always been written.
Perhaps the only logical reason is that it allows vendors to specify accuracy of different input ranges with a single number. When you specify accuracy in LSBs, it automatically scales itself to different input ranges. When creating catalogs and data sheets, space costs money, so there is a real benefit to short-version specs.
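That automatic scaling is easy to see with a little arithmetic. The Python sketch below (the function name and the 12-bit example are mine, used purely for illustration) converts an LSB-denominated spec into volts for several common input ranges:

```python
def lsb_volts(span_v: float, bits: int) -> float:
    """Voltage width of one LSB for a given full-scale span and resolution."""
    return span_v / (2 ** bits)

# A "±1.5 LSB" spec scales automatically with the range it is applied to:
for span in (20.0, 10.0, 2.0):  # ±10-V, ±5-V, ±1-V ranges
    print(f"span {span} V -> 1.5 LSB = {1.5 * lsb_volts(span, 12) * 1e3:.3f} mV")
```

On the ±10-V range of a 12-bit system, one LSB is 20 V / 4,096 ≈ 4.88 mV, so ±1.5 LSB is about ±7.3 mV; on the ±1-V range the same spec shrinks to about ±0.73 mV.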
The advent of the web, however, allows us to add information at virtually no cost. This factor alone may help the market provide more detailed, easier-to-use specs.
Without wanting to label an entire industry—including myself—as lazy, I believe customers would be best served by stating the specifications in voltage rather than percent ± this and ± LSB that. In the future, I hope to provide specifications in terms more directly useful to my customers.
However, inertia is a powerful force, and the huge number of specifications already written would be time-consuming and expensive to rewrite. Specifications should become clearer over time, but I expect the change to be slow.
The main contributors to error in data acquisition systems are offset error, gain error, integral nonlinearity, and differential nonlinearity (see sidebar for definitions of these terms). To determine the overall worst-case error, simply sum the contributions of each of these pseudo-independent errors.
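The summation is straightforward once every term is expressed in volts. The sketch below uses hypothetical numbers for a 12-bit, ±10-V board, not any vendor's actual figures, and the function name is mine:

```python
def worst_case_error_v(offset_v, gain_frac, reading_v, inl_lsb, dnl_lsb, lsb_v):
    """Worst-case sum of the four pseudo-independent error terms, in volts."""
    return (abs(offset_v)
            + abs(gain_frac * reading_v)            # gain error scales with reading
            + (abs(inl_lsb) + abs(dnl_lsb)) * lsb_v)

# Hypothetical 12-bit, ±10-V board: 30 µV offset, 0.02% gain error,
# 1.0 LSB INL, 0.5 LSB DNL, evaluated at a full-scale 10-V reading.
lsb = 20.0 / 4096                                   # one LSB in volts
err = worst_case_error_v(30e-6, 0.0002, 10.0, 1.0, 0.5, lsb)
```

With those numbers the worst case comes to roughly 9.4 mV, dominated by the gain and linearity terms.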
At first glance, you might think it sufficient to provide the sum of all the errors in volts. It would make apples-to-apples comparisons a snap. However, this will not always be true. An application that needs to determine a zero-crossing time may require a low offset error but not be affected by gain error at all.
Likewise, the FAA mandates that general aviation fuel-tank gauges be accurate when empty (a measurement only typically affected by offset error), yet provides no specific requirements for accuracy at other fuel levels. An industrial control application may require that the measurements have no missing codes, but may not require low offset or gain errors to perform perfectly.
Unlike linearity and offset errors, gain errors vary with the input signal level. The worst-case gain error occurs only at full scale. At half scale, the error will be half that of full scale.
If your system needs high accuracy at one-quarter scale but not at full scale, using the worst-case error spec may overstate the error that is important to you. This may lead you to purchase a higher-resolution, more expensive system than you actually require. Based on input from our customers, we will begin providing specifications for each of the component errors as well as a totalized worst case. It is our goal to encourage other vendors to join us in this endeavor.
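The proportional behavior of gain error can be sketched in a couple of lines of Python (the 4-mV full-scale figure here is hypothetical):

```python
def gain_error_at(reading_v: float, full_scale_v: float, gain_err_fs_v: float) -> float:
    """Gain error at an arbitrary reading; it shrinks linearly below full scale."""
    return gain_err_fs_v * (reading_v / full_scale_v)

# A 4-mV gain error at 10-V full scale contributes only 1 mV at quarter scale:
quarter = gain_error_at(2.5, 10.0, 0.004)
```

This is why a worst-case spec evaluated at full scale can overstate the error at the signal levels your application actually cares about.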
So, if for now we’re left with this mishmash of formats, how is a customer without a strong background in measurement electronics supposed to weed through the various databooks and web pages to pick the best solution? Even if you have the background to decipher all these specifications, who has the time?
I recommend you make your vendors do the work for you. If you can determine the accuracy in voltage you need, let your vendor’s technical sales people do the calculations for you.
Calculating the Required Accuracy
Here’s all you need to do. Determine the transfer function from the units you want to measure to voltage. Let’s say your engineering unit conversion is X volts per unit.
Now let’s assume you need an accuracy of Y units. To determine the accuracy you need in volts, multiply X by Y. If your transfer function is provided as units per volt rather than volts per unit, invert your term (use your calculator’s 1/x key) before you multiply by Y.
As an example, let’s use a pressure transducer that has an output of 1 V per psi, and you want to accurately measure to 0.001 psi. Your voltage accuracy requirement is 1 V/psi × 0.001 psi = 0.001 V or 1 mV. If the system you are evaluating offers accuracy better than 1 mV, it should work for you.
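The same conversion can be captured in a few lines of Python. The function below is illustrative (the name and flag are mine); the `units_per_volt` flag performs the 1/x inversion described above:

```python
def required_accuracy_v(scale: float, accuracy_units: float,
                        units_per_volt: bool = False) -> float:
    """Convert a required accuracy in engineering units to volts.

    `scale` is volts per unit, unless units_per_volt is True, in which
    case it is inverted first (the calculator's 1/x step).
    """
    x = 1.0 / scale if units_per_volt else scale
    return x * accuracy_units

# 1 V/psi transducer, 0.001-psi target -> 1 mV
print(required_accuracy_v(1.0, 0.001))
```

A sensor specified the other way around, say 2 psi per volt, would use `units_per_volt=True` and yield a 0.5-mV requirement for the same 0.001-psi target.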
Some people would suggest a system that offers twice the accuracy you really need, and there is some sense in that. However, overspecifying data acquisition systems often adds needless expense and complexity.
Most data acquisition systems offer better specifications in voltage as the full-scale input range decreases. For example, you might expect a typical 12-bit data acquisition system to offer roughly ±5-mV accuracy at a full-scale input range of ±5 V and a ±1-mV accuracy with a full-scale input range of ±1 V. However, if your sensor output ranges from -2.5 to +3.5 V, you will not be able to take advantage of the ±1-V input range for the entire signal.
Also, unless your sample rate is slow and your data acquisition board has software programmable gains, you cannot auto-range your input section. This means your system may have to rely on the lower accuracy of the larger input range. A final word of caution here: some boards capable of auto-ranging only offer fully calibrated results on one scale, once again making it likely that you will be forced to use the larger input range.
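Picking the smallest input range that still covers the entire signal is a simple search; the sketch below uses a hypothetical list of available bipolar ranges:

```python
def smallest_covering_range(v_min: float, v_max: float,
                            ranges=(1.0, 2.0, 5.0, 10.0)):
    """Return the smallest bipolar ±R input range that covers the whole signal."""
    for r in sorted(ranges):
        if -r <= v_min and v_max <= r:
            return r
    return None  # signal exceeds every available range

# A -2.5-V to +3.5-V sensor cannot use the ±1-V or ±2-V ranges:
best = smallest_covering_range(-2.5, 3.5)  # -> 5.0
```

Here the ±5-V range is the best available fit, even though the signal never approaches -5 V.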
Conclusion
As the data acquisition board industry matures, there is a growing need for vendors to agree on standard specifications and how boards are tested and verified to meet spec. Standard methods need to be published so concerned scientists and industrial engineers can understand the measurements they make and more accurately compensate for uncertainty. There needs to be a quick and easy way for data acquisition buyers to compare apples to apples as they investigate products from various vendors.
Offset
In a data acquisition system, offset is defined as the difference in voltage between zero and the voltage that is required to produce the output code (from the ADC) that represents zero. In practice, this measurement is not made at the code for zero but rather at the transition between the codes for zero and zero plus one. This transition occurs at the ½ LSB point.
The code edge is measured rather than the actual zero because the code edge is easily defined while the exact center of the zero code is not. Offset error exists throughout the transfer function of the input system and is at the same level as the zero error throughout the input range. Offset is measured in voltage and often converted to fractional LSBs for specification purposes.
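The volts-to-LSB conversion mentioned above is a single division. A small sketch with hypothetical numbers:

```python
def offset_in_lsbs(offset_v: float, span_v: float, bits: int) -> float:
    """Express an offset voltage as fractional LSBs for spec purposes."""
    return offset_v / (span_v / 2 ** bits)

# 2.4 mV of offset on a 12-bit, ±10-V (20-V span) input:
frac = offset_in_lsbs(2.4e-3, 20.0, 12)  # about 0.49 LSB
```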
Gain Error
Gain error is defined as the difference between the actual slope of the system’s transfer function and the slope of the ideal transfer function. While offset error is measured at the zero point in the transfer function, the gain error is defined at the full-scale point.
Although gain error is present at all points in the transfer function, full scale is where the gain error is at its maximum and easiest to measure. Like offset, gain error is measured at a code transition point. The last code transition occurs at 1½ LSB below full scale.
Remember that offset error exists equally at all codes and must be subtracted from any measured voltage. Gain error also exists across the entire transfer function, but its magnitude is proportional to the actual measurement point. The ±X% of reading specification is a true gain-error spec.
Integral Nonlinearity
INL is defined as the deviation of the data acquisition system’s actual transfer function from the ideal straight-line transfer function. Most ADCs and data acquisition systems are specified for endpoint INL. This means that the deviations are evaluated from the actual transfer function and the mathematical straight line drawn through the ideal endpoints for the system.
This method typically is preferred over the best-straight-line method for two reasons. First, it is a more conservative approach to specifying INL. Second, it is simply an easier test to perform.
Differential Nonlinearity
DNL is the difference between the analog input span corresponding to a code and the span for the ideal LSB. Again, code edges are used as the evaluation points for DNL measurements. It is important to note that DNL errors exist for every code pair through the entire transfer function.
As a result, the DNL specification is defined by the worst occurrence of error in the transfer function. If a data acquisition system has a DNL specification of less than ½ LSB, the system is guaranteed to have no missing codes throughout the transfer function. Conversely, if the system has a DNL greater than ½ LSB, it may have missing codes. A system with DNL greater than 1 LSB will likely have missing codes, and worse systems run the possibility of exhibiting nonmonotonic behavior (a lower code for an increase in input voltage).
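DNL can be computed directly from measured code-edge voltages. The sketch below uses made-up edge voltages for a three-code example (the function name is mine):

```python
def dnl_lsbs(code_edges_v, lsb_v):
    """Per-code DNL: each code's actual width minus the ideal LSB, in LSBs."""
    return [(hi - lo) / lsb_v - 1.0
            for lo, hi in zip(code_edges_v, code_edges_v[1:])]

# Three codes with widths of 1.0, 0.6, and 1.4 LSB (lsb_v = 1 mV):
edges = [0.0, 1.0e-3, 1.6e-3, 3.0e-3]
worst = max(abs(d) for d in dnl_lsbs(edges, 1.0e-3))  # 0.4 LSB
```

The worst occurrence across all code pairs (0.4 LSB here) is what goes on the data sheet.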
About the Author
Bob Judd is vice president of marketing at Measurement Computing. He has been involved in the data acquisition market since 1984 in various marketing and engineering capacities. Mr. Judd holds a B.S. from Brown University and an M.S. from MIT. Measurement Computing, 16 Commerce Blvd., Middleboro, MA 02346, 508-946-5100, e-mail: [email protected].
Published by EE-Evaluation Engineering
All contents © 2000 Nelson Publishing Inc.
November 2000