
Low Cost Outside, High Performance Inside

The age-old maxim "garbage in, garbage out" holds true nowhere better than in data acquisition applications. Regardless of the cost of a data acquisition device, if the input signal is distorted or corrupted before digitization or if the digitizer itself is inaccurate or nonlinear, the recorded data will be of limited use. It's instructive to examine how low-cost data acquisition products actually achieve a sub-$500 selling price while providing an appropriate level of signal integrity.

Several manufacturers house the data acquisition circuitry in a small module that connects to the PC via a USB link or an Ethernet network. Bob Judd, general manager at Measurement Computing, explained this trend. "New, extremely low-cost USB and Ethernet interface chips allow data acquisition devices to be designed for less than previously possible with plug-in boards. The chip sets are less expensive than the PCI interface required on plug-in boards, and the separate cable and screw-terminal board usually required with a PCI board are built into the USB and Ethernet products."

Conventional low-cost PC plug-in PCI boards also are available with improved specifications resulting from advanced ICs as well as clever engineering. For example, high-speed, large, and relatively low-cost field-programmable gate arrays (FPGAs) have been used in several board-level products to avoid the cost of an off-the-shelf PCI interface and reduce the need for onboard buffer memory. These FPGAs are available to all manufacturers but have proven especially effective at reducing the cost of plug-in PCI board data acquisition systems.

According to Roy Wan, product manager at ADLINK Technology, “Speed, accuracy, and resolution are the key buying factors for data acquisition boards. To provide these features at low cost, in the DAQ-2213 board we reduced the number of channels from 64 to 16 and made the analog output optional to reduce production costs. In addition, we were able to reduce the need for first-in, first-out memory through innovative FPGA coding.”

Mike Berryman of Advantech's Industrial Automation Group agrees on the importance of speed and accuracy. But for his ADAM brand distributed data acquisition products, the network interface plays a significant role. "As powerful low-cost embedded computing becomes available, these distributed devices now can handle some advanced software languages, such as Java, that can easily provide web-page interfaces to the user. This means that the user can interact with the data acquisition instrument via a web browser, eliminating the need for expensive or proprietary software interfaces," he explained.

In another case, Scientific Solutions intentionally splits the PC interface function from data acquisition and signal conditioning. An external desktop unit handles sensitive analog signals, connecting digitally to the PC-mounted BaseBoard to reduce noise.

Regardless of the product form factor, ADC speed directly affects cost, and in the last few years, the price of a 1-MS/s 16-b ADC has fallen from about $40 to $10. As a result, many older 12-b boards have been replaced by 16-b products. Unfortunately, high ADC resolution often is not supported by corresponding high accuracy in the signal-conditioning path. In some cases, 16-b is more of a marketing number than a useful specification.

Under the Hood

Accuracy

Achieving high accuracy in any data acquisition system requires attention to a large number of circuit- and signal-related details. As can be seen in the comparison chart that accompanies this article, only a few products quote an overall accuracy that complements the product's resolution. For example, the Data Translation Model DT9810 with 10-b resolution has a ±0.1% accuracy, or one part in a thousand, nearly matching the ADC's 1,024 digitizing levels.

On the other hand, the company’s Model DT9816-A with 16-b resolution has 0.02% accuracy. By itself, a perfect 16-b converter would be accurate to 1 LSB or 0.0015% FS. However, given the module’s single-ended inputs and $349 price, 0.02% may be a very practical and realistic accuracy specification.
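As a quick sanity check on these figures, the short Python sketch below converts an ideal converter's resolution into a percentage of full scale. It is illustrative arithmetic only, not taken from any vendor's software:

```python
# One LSB of an ideal N-bit converter expressed as a percentage of full
# scale (illustrative arithmetic only).

def lsb_percent_fs(bits: int) -> float:
    """Size of one LSB as a percentage of full scale."""
    return 100.0 / (2 ** bits)

print(f"10-b: {lsb_percent_fs(10):.4f}% FS")  # ~0.0977%, close to the 0.1% spec
print(f"16-b: {lsb_percent_fs(16):.4f}% FS")  # ~0.0015% FS
```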

Of course, the results you actually achieve can be much better than the absolute accuracy specified. In a nearly constant-temperature environment, over a short time period, the relative accuracy of the samples usually will be significantly better than the specified absolute accuracy.

To decide whether a product's stated accuracy is adequate for your application, you really want to look at the lower-level specifications that make up the overall accuracy number. For example, the zero and gain drifts for the DT9810 are ±20 μV/°C and ±50 ppm/°C, respectively. With a single 2.44-V input range and a 10-b ADC, an LSB is equivalent to 2.38 mV. A 10°C temperature change would result in a 200-μV offset and a 500-ppm or 0.05% gain change. Clearly, this device is relatively insensitive to errors caused by temperature changes.

For the DT9816-A, drift coefficients and other relevant details are shown in Table 1, reproduced from the product’s data sheet. This device’s drift specifications of ±25 μV/°C and ±50 ppm/°C are similar to those of the DT9810. But on the ±5-V range, an LSB is equivalent to only 0.153 mV—15 times smaller than in the previous example. In this case, a 10°C temperature change results in an offset of almost two LSBs and a gain change equivalent to about 33 LSBs at FS.
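The drift arithmetic in the two preceding paragraphs is easy to verify. The Python sketch below reproduces it from the quoted data-sheet coefficients; it is a worked example, not vendor code:

```python
# Temperature-drift error budget for the two modules discussed above, using
# the drift coefficients quoted from their data sheets and the 10 degC
# temperature change used as the example in the text.

def lsb_volts(span_v: float, bits: int) -> float:
    """Size of one LSB in volts for a given input span."""
    return span_v / (2 ** bits)

delta_t = 10.0  # degC temperature change

# DT9810: single 2.44-V range, 10-b ADC, +/-20 uV/degC zero, +/-50 ppm/degC gain
lsb_10 = lsb_volts(2.44, 10)                # ~2.38 mV per LSB
offset_10 = 20e-6 * delta_t                 # 200 uV, well under 1 LSB
gain_10 = 50e-6 * delta_t                   # 500 ppm = 0.05% gain change

# DT9816-A: +/-5-V range (10-V span), 16-b ADC, +/-25 uV/degC zero, +/-50 ppm/degC gain
lsb_16 = lsb_volts(10.0, 16)                # ~0.153 mV per LSB
offset_16_lsb = (25e-6 * delta_t) / lsb_16  # ~1.6 LSB of offset error
gain_16_lsb = (50e-6 * delta_t) * 2 ** 16   # ~33 LSB of gain error at FS

print(f"DT9810:   LSB = {lsb_10 * 1e3:.2f} mV, offset drift = {offset_10 * 1e6:.0f} uV, "
      f"gain change = {gain_10 * 100:.2f}%")
print(f"DT9816-A: LSB = {lsb_16 * 1e3:.3f} mV, offset = {offset_16_lsb:.1f} LSB, "
      f"gain error at FS = {gain_16_lsb:.0f} LSB")
```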

To reduce the loss of accuracy caused by a temperature change, you need to recalibrate your data acquisition system at the new temperature. This procedure will improve accuracy if the onboard reference has a significantly lower temperature coefficient than that quoted for the overall product or if a good external reference is used. Arranging your test application to avoid a large temperature change is an obvious step, but that isn’t always possible.

There are many other data-sheet details that may be overlooked by a would-be customer. For example, some products present a different input load when channels are off compared to when they are on. Also, the bandwidth may not be stated or may have been confused with the sampling rate.

In the case of the DT9816-A, the 16-b converters are only monotonic to 14 b. This means that a small input signal increase may or may not be reflected in a corresponding increase in the value of the ADC output code.

The differential nonlinearity of 0.006% corresponds to the ±2-LSB inherent quantizing error, and the 0.007% overall nonlinearity is consistent with these numbers. These values are in no way bad, but they support the product’s monotonic 14-b performance rather than its 16-b specification.
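For readers who want to convert such percentage specs to LSBs themselves, the same arithmetic in Python looks like this (illustrative only):

```python
# Converting a %FS nonlinearity spec into LSBs at a given resolution
# (illustrative arithmetic only).

def nonlinearity_lsb(percent_fs: float, bits: int) -> float:
    """A %FS error expressed in LSBs of an N-bit converter."""
    return percent_fs / 100.0 * 2 ** bits

print(f"DNL: {nonlinearity_lsb(0.006, 16):.1f} LSB")  # ~3.9 LSB, about +/-2 LSB
print(f"INL: {nonlinearity_lsb(0.007, 16):.1f} LSB")  # ~4.6 LSB
```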

The comparison chart only lists the major specifications for several low-cost products. It’s not possible in such a chart to present the range of information required to make a meaningful, in-depth comparison. Just for the two examples discussed here, offset and gain drift, differential and overall nonlinearity, signal range, resolution, and overall accuracy had to be considered to understand how a product might behave under different circumstances. Fortunately, Data Translation has provided a comprehensive data sheet. Many times, specifications are not presented so clearly or completely.

Finally, shielding or the lack of it may affect accuracy in high-resolution data acquisition systems. For example, both Data Translation products are packaged in plastic enclosures, but they are fully shielded internally. In addition, the inputs are protected against electrostatic discharge to 8,000 V. These features may not be present in other plastic-packaged modules that externally appear to be similar.

Isolation

Industrial applications often call for isolation. Of the data acquisition products listed in the chart, only the Advantech ADAM Series provides 2,000-V isolation. But what is isolation, why do you need it, and does it have anything to do with accuracy?

In the worst case, all the signals that you need to record may have separate nonground references. In a simpler case, the reference could be common to all the signals but 1,000 V above ground. In the first scenario, each input channel must be separately isolated. The second scenario can be handled by a module or board with a single isolated reference connection.

An alternative solution to the first scenario is to attenuate the input signals to bring them within the voltage range of a module or board with differential inputs. This technique works to a degree but loses resolution and increases signal noise.
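A rough sketch of that trade-off: every factor-of-two attenuation costs one bit of resolution referred to the original signal. The Python example below uses hypothetical values, not figures from any product in the chart:

```python
# Resolution lost when an off-ground input is attenuated to fit a converter's
# input range. A sketch of the trade-off described above; the 20:1 ratio and
# 16-b converter are hypothetical, not taken from any product in the chart.
import math

def effective_bits(adc_bits: int, attenuation: float) -> float:
    """Bits of resolution remaining on the original signal after attenuation."""
    return adc_bits - math.log2(attenuation)

# Example: a signal riding 100 V off ground attenuated 20:1 into a +/-5-V input
print(f"{effective_bits(16, 20):.1f} effective bits")  # ~11.7 of the original 16
```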

The techniques commonly used to provide true galvanic isolation are not perfect, and their characteristics will degrade high-frequency accuracy and linearity. If signals are isolated via analog techniques before being digitized, the common-mode rejection of the isolation becomes more of a factor at high frequencies than if isolation occurs at a digital signal interface.

For example, in a typical switch-mode power supply application, the gate drive of a FET is referenced to the output switching waveform, not to a DC level. The isolation must reject the several-hundred-volt common-mode square wave upon which the much smaller gate signal is riding. So, you need to understand the isolation method and location to determine its effect.
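The size of the resulting error can be estimated from the isolation barrier's common-mode rejection. In the sketch below, the 300-V swing stands in for the several-hundred-volt waveform just described, and the 80-dB CMRR figure is a hypothetical example, not a product specification:

```python
# Input-referred error from a common-mode square wave, estimated from the
# isolation barrier's CMRR at the switching frequency (hypothetical values).

def cm_error_volts(v_common_mode: float, cmrr_db: float) -> float:
    """Common-mode voltage leaking through as input-referred error."""
    return v_common_mode / (10 ** (cmrr_db / 20.0))

print(f"{cm_error_volts(300.0, 80.0) * 1e3:.0f} mV of input-referred error")  # 30 mV
```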

More important than the possible accuracy degradation, however, is the safety-related need for isolation in applications involving off-ground measurements. Some experts have commented that the combination of a battery-operated laptop PC and a nonisolated data acquisition module provides isolation, and it does. The problem is that this approach floats the entire measurement system at the signal reference level. How safe is it to operate a computer that has its chassis at 1,000 V off ground when your body is at ground potential?

Isolation is required for the following reasons:
• To provide operator safety.
• To avoid signal corruption when mixed references are involved.
• To maintain input sensitivity and resolution in the converted output.
• To protect the measurement equipment in the presence of high off-ground common-mode voltages.

Speed

Speed also affects cost. Tom DeSantis, president of IOtech, said that lower-speed boards can benefit from off-the-shelf input stages while higher-speed boards require discrete designs that cost substantially more. Lower-speed ADCs also are less expensive than higher-speed ADCs.

Low-cost data acquisition systems generally will not have a separate ADC per channel unless the channel count is very small. Nevertheless, multiplexed systems do increase in cost as the channel count increases because of the need for additional connectors, cables, and often screw terminals or other signal connection types.

Speed also may affect accuracy, according to Kristi Hobbs, a data acquisition product manager at National Instruments, who was discussing the common ADC multiplexing approach to providing multiple input channels. In this type of design, the programmable-gain instrumentation amplifier (PGIA) situated ahead of the ADC must settle to sufficient accuracy before the ADC samples the amplifier output.

This requirement exists for all multiplexed ADC systems but becomes more difficult to satisfy at higher ADC speeds. The size of the error produced depends on both the settling time of the amplifier and the signal levels that it is handling. Worst case, a multiplexed ADC must provide its specified accuracy when alternately digitizing FS positive and FS negative signal levels. To address this requirement, NI developed a custom PGIA optimized for fast settling to 18-b accuracy, low noise, and high linearity.
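A first-order way to see why settling dominates at high speed: if the amplifier settles exponentially with time constant tau, a full-scale step must decay to within half an LSB before the ADC samples. The sketch below uses a hypothetical time constant, not an NI specification:

```python
# First-order settling estimate for a multiplexed PGIA: assuming single-pole
# (exponential) settling, a full-scale step must decay to within 1/2 LSB
# before the ADC samples. The 100-ns time constant is hypothetical.
import math

def settling_time_ns(tau_ns: float, bits: int) -> float:
    """Time for a full-scale step to settle within 1/2 LSB of N bits."""
    return tau_ns * math.log(2 ** (bits + 1))

tau = 100.0  # ns, hypothetical amplifier time constant
for bits in (12, 16, 18):
    print(f"{bits}-b: {settling_time_ns(tau, bits):.0f} ns")  # ~901, ~1178, ~1317
```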

The NI-STC 2 device is an example of a digital ASIC that controls inter- and intra-board synchronization and timing.

If a family of closely related products can be defined, as NI has done for the company’s new M Series data acquisition devices, then the development costs associated with an ASIC solution make sense. In high-volume production, an ASIC provides significant cost savings over an FPGA-based solution.

Calibration

“The largest cost reduction resulting from software replacing hardware is in the area of calibration,” according to Mr. DeSantis. “Just a few years ago, calibration was accomplished by selecting a calibration constant from an onboard EEPROM, which would drive a calibration DAC, which would correct for input stage inaccuracies by applying a correction signal to the input stage.

“Today, the calibration constants still are contained in an onboard EEPROM,” he continued, “but instead of driving correction DACs, the EEPROM constants are read by the software driver, and the correction is accomplished in the driver. This saves the cost of the DACs and the associated circuitry required to apply the correction signal.”
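A minimal sketch of that driver-side correction follows. The function name and the constant format are hypothetical, since no vendor source is shown here; real drivers read device-specific constants from the onboard EEPROM at startup.

```python
# Driver-side calibration of the kind described above: offset and gain
# constants read from the device EEPROM are applied to raw ADC codes in
# software. Names and constant formats are hypothetical.

def apply_calibration(raw_code: int, offset_code: float, gain: float) -> float:
    """Correct a raw ADC code using stored offset and gain constants."""
    return (raw_code - offset_code) * gain

# Constants a driver might have read from the onboard EEPROM
offset_code, gain = 12.5, 1.00032

print([apply_calibration(c, offset_code, gain) for c in (0, 1000, 32767)])
```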

Taking a different approach, NI developed a fast software routine that uses low-cost, off-the-shelf pulse-width modulation (PWM) parts for calibration. Not only is the new method less expensive, but it executes four times faster than the previous technique and is more accurate.

Conclusion

A few consistent themes run through the development of all types of data acquisition products. First, very sophisticated ICs now are available that support both high performance and low cost. The two objectives no longer are mutually exclusive to the degree they once were.

According to John Tucker, product line manager at Keithley Instruments, “Higher levels of integration available in state-of-the-art components have made possible low-cost, small, and high-performance products. This is evidenced in ASICs and FPGAs that can place the functions of several discrete components into one chip. Lower-cost, simple, standard interfaces such as USB also help to drive down cost.”

Software has advanced to become much more an integral part of a data acquisition solution than simply a control program that operates the hardware. In some areas, software has replaced hardware. Further, because software represents such a large part of a product's development cost, reducing this investment by leveraging the same driver for a complete family of products can lower the product's selling price.

These trends mean that the job of selecting a suitable data acquisition product has become easier while finding the best one for a specific application is much harder. You no longer can ignore the low-cost end of the market because of poor accuracy, inadequate resolution, or slow speed.

If your application requires special data acquisition system attributes, it always will be easy to narrow the field to a few suppliers' products. But low cost no longer means low performance.

To get the best value, read and compare data sheets thoroughly. If you still have questions, ask for clarification. And if you intend to buy more than one system, obtain a demonstration unit and make certain it performs as you need it to. With so many good solutions to choose among, the data acquisition system customer truly is in the driver’s seat.

September 2005
