Just as quiescent current (IDDQ) testing of CMOS devices is attracting increased recognition as an almost indispensable and very cost-effective test technique, questions are being raised as to its future applicability. This dichotomy became particularly apparent at the 1996 IEEE International Test Conference.
A capacity crowd attended the Second IEEE International Workshop on IDDQ Testing. At the same conference, a panel entitled “Will IDDQ Testing Leak Away In Deep Submicron Technology?” contemplated the potential demise of IDDQ.
The factors that have led to increased adoption of IDDQ testing during the second half of this decade include:
The need for increased fault coverage when testing today’s more complex, larger and denser ICs.
Recent IDDQ test-implementation advancements resulting in reduced test time per IDDQ test vector, which minimizes test-throughput penalties.
New and improved IDDQ test-program preparation tools which automatically select or generate the minimum number of test vectors to provide a specified fault coverage.
Concern about the future of IDDQ testing focuses on the dramatic decrease of test margins when ICs are being scaled down to the submicron level. This could lead to a higher incidence of rejecting good devices and passing bad ones.
Fortunately, there are ways to deal with the submicron-device test problems, such as applying new IDDQ-test-specific design for testability (DFT) rules or alternative test techniques. But before examining these solutions, let’s review the principles and rationale of IDDQ testing, the new test-implementation possibilities and automatic generation of minimal but optimum IDDQ test vector sets.
IDDQ Test Principle and Purpose
When developing functional or logic test programs, fault-coverage adequacy is commonly determined by using stuck-at-one and stuck-at-zero fault simulation. However, CMOS ICs are prone to contain other process-related faults, such as gate-oxide and metal-bridging defects. These, as well as certain types of open-circuit defects, may not be readily found with stuck-at fault-model-based tests and pose latent performance and quality risks.
The basic CMOS IC combines an N-channel FET and a P-channel FET, forming a gate where one transistor is in the conducting state (on) while the other is nonconducting (off). The gate input level determines which one of the two is conducting and, except during switching, the two never conduct at the same time. As a result, the current drawn during the quiescent periods between state changes is in the picoamp range for a single gate or several microamps for fairly large ICs.
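To put these magnitudes in perspective, a back-of-the-envelope calculation shows how picoamp-level per-gate leakage adds up to the microamp-level totals cited above. The per-gate leakage and gate count below are assumed round numbers for illustration, not measured data.

```python
# Illustrative arithmetic only: both values are assumptions, chosen to
# match the orders of magnitude quoted in the text (picoamps per gate,
# microamps for a fairly large IC).
leakage_per_gate = 5e-12   # amps; assumed ~5 pA average quiescent leakage per gate
gate_count = 1_000_000     # assumed gate count for a "fairly large" IC

iddq_total = leakage_per_gate * gate_count
print(f"Estimated fault-free IDDQ: {iddq_total * 1e6:.1f} uA")  # prints 5.0 uA
```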
An internal short results in an above-normal quiescent current flow during the time the shorted transistor is supposed to be in the “off” state. A floating gate input (open) may cause increased current since neither transistor may be in a fully nonconducting state. Process defects may cause leakage currents that decrease operating margins, causing erratic behavior or early failure.
By measuring the IDDQ drawn from the VDD supply and comparing its value with established pass/fail limits, you can distinguish bad from good circuits. To examine all circuits on a device, a series of IDDQ tests must be performed. However, the test vector set required to sequentially place FETs into off states, so that excessive current flow may be detected, is much smaller than a functional vector set. This is because at any time (except during switching) approximately half of the transistors are innately in the off state.
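The pass/fail comparison described above can be sketched in a few lines. The vector IDs, readings and the 10-µA limit here are all invented for illustration; in practice, limits come from characterizing known-good devices.

```python
# Sketch of the IDDQ pass/fail screen described in the text.
# The limit and readings are assumed values, not real device data.
IDDQ_LIMIT = 10e-6  # amps; assumed pass/fail limit

def iddq_screen(readings):
    """Return (passed, failing_vectors) for one device.

    readings: dict mapping test-vector id -> measured quiescent current.
    A device fails if any single vector's IDDQ exceeds the limit.
    """
    failing = [v for v, i in readings.items() if i > IDDQ_LIMIT]
    return (len(failing) == 0, failing)

good = {1: 2e-6, 2: 3e-6, 3: 2.5e-6}    # all readings below the limit
bad = {1: 2e-6, 2: 450e-6, 3: 2.5e-6}   # vector 2 exposes an internal short
print(iddq_screen(good))  # (True, [])
print(iddq_screen(bad))   # (False, [2])
```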
IDDQ testing provides immediate benefits since it expands the fault coverage domain. In addition, the quality improvements of shipped products, due to weeding out marginal devices, are equally significant.
For example, Steven McEuen of Ford Microelectronics found that a sample group of 59,000 ICs that passed functional tests—but failed IDDQ tests—exhibited an 8.3% Boolean failure level after a 1,000-hour life test. And Ken Wallquist of Philips Semiconductor had a 2.3% failure rate in an experiment with 40,000 ICs in a 24-hour burn-in.1
While these numbers quantify the qualitative improvements possible with IDDQ testing, Timothy Henry of IBM and Thomas Soo of Intel used IDDQ to improve production flow. When IDDQ testing was utilized in addition to stuck-at fault testing, they found that device burn-in could be eliminated, leading to a cost saving of $1.25M for the i960JX CPU product alone.2
IDDQ Test Instrumentation Choices
Whenever new logic inputs are applied to a CMOS device, many internal transistors switch from an “on” state to an “off” state, and vice versa. During this switching time, the current consumption of the device under test (DUT) may approach several amperes.
The supply providing this current usually is located some distance away from the DUT. The interconnecting wiring resistance and the internal impedance of the supply are usually quite significant, causing substantial VDD voltage drops at the DUT during heavy current flows.
To accommodate the large switching transient current and maintain a constant VDD, a fairly large capacitor (CDD) usually is placed near the DUT. A small bypass capacitor (CHF) is placed directly at the DUT to provide a low-impedance path to ground for high frequencies.
The magnitude of IDDQ, being in the nanoamp to microamp range, is more than 120 dB below the switching current level. In addition, IDDQ can only be measured after each switching event and after related transients have subsided, a period dependent on the capacitor values, interconnecting wiring resistance and the power-supply impedance.
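The 120-dB figure can be checked with the standard decibel formula for a current ratio; the specific current values below are assumed, taken from the ranges quoted above.

```python
import math

# Dynamic-range check: dB = 20 * log10(I_switch / I_ddq) for a current ratio.
# Both currents are assumed representative values from the text.
i_switch = 1.0   # amps; switching transients can reach several amperes
i_ddq = 1e-6     # amps; quiescent current in the microamp range

ratio_db = 20 * math.log10(i_switch / i_ddq)
print(f"{ratio_db:.0f} dB")  # prints 120 dB; nanoamp IDDQ pushes this past 180 dB
```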
Handling large currents, measuring IDDQ with nanoamp resolution and minimizing settling times to achieve reasonable throughput is a formidable challenge. Many solutions have been proposed and implemented, each with advantages and disadvantages.
Figure 1 shows the various locations where IDDQ measurements could be performed. The more distant from the DUT, the easier it is to install and deploy a conventional measurement subsystem, but the less effective it may be.
At the other extreme, an on-chip IDDQ monitor is fast and effective but uses semiconductor real estate. Between these extremes, many options exist which are more practical for the majority of today’s VLSI test applications.3
IDDQ is measured using one of two basic principles. The first one places a resistor in the current path, senses the voltage drop across it and applies Ohm’s law. The second one, referred to as the Keating-Meyer principle, performs a voltage-to-time conversion accomplished by measuring the time needed to let the voltage across a capacitor drop by a predefined amount.
The capacitor in this case is the intrinsic capacitance of the DUT plus, in some cases, the high-frequency bypass capacitor. The rate of voltage decay is proportional to IDDQ, which depletes the charge remaining on the capacitors after the DUT is disconnected from the VDD source.
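As a numeric sketch of the Keating-Meyer conversion, the current is recovered from the time a fixed voltage droop takes: I = C * dV / dt. The capacitance and droop values below are assumed for illustration.

```python
# Keating-Meyer in miniature: after the DUT is disconnected from VDD,
# IDDQ discharges the node capacitance; measuring the time for a fixed
# voltage droop yields the current. All three values are assumptions.
c_node = 1e-9     # farads; DUT intrinsic plus bypass capacitance (assumed)
delta_v = 0.1     # volts; predefined droop threshold (assumed)
delta_t = 20e-6   # seconds; measured time to reach the droop (assumed)

iddq = c_node * delta_v / delta_t
print(f"IDDQ = {iddq * 1e6:.1f} uA")  # prints IDDQ = 5.0 uA
```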
Most actual implementations used today are derivatives from these two methods and fall into these categories:
Parametric Measurement Unit (PMU)—All IC ATE used for device characterization contain at least one precision source/measure instrument—the PMU. Since IDDQ tests require high-resolution current measurements, it was only natural that the first IDDQ tests were conducted with PMUs.
However, PMUs are relatively slow, typically performing only a few hundred tests per second. And some PMUs cannot supply sufficient current during the nonquiescent portions of the test. To overcome these problems, some PMUs were redesigned and often used in conjunction with separate high-current voltage sources. Teradyne, for instance, developed a high-performance PMU with an IDDQ test option that achieved an overall test rate of 2,500 vectors/s.
Device Power-Supply (DPS) Monitors—Measuring IDDQ at a remotely located DPS results in throughput penalties. The portion of the CDD charge consumed during each vector application must be replenished by the DPS, and IDDQ can only be measured at the end of each recharge transient. But by integrating a DPS/IDDQ unit into the test head and carefully managing interconnections, the size of CDD can be minimized, consequently reducing the IDDQ test time from the millisecond into the microsecond range, commented Ulrich Schoettmer, strategic test consultant at Hewlett-Packard BSTD.
The Hewlett-Packard HP 83000 DPS features integrated IDDQ measurement capability, performing current monitoring outside the DPS regulating feedback loop. Advantest has designed a high-speed measuring circuit that uses a nonfeedback voltage source and a high-speed current source. The circuit provides satisfactory operation with values for CDD as small as 1,000 pF.4
Monitor on Pin Electronics Card—Placing the monitor in the test head is attractive because it is easily accessible and retrofittable. One implementation, integrating the monitor with the pin electronics, uses a sampling resistor within a feedback loop and achieves measurements in the nanosecond range with reduced CDD values.5
Monitor on Test Fixture, QTAG—Being closest to the DUT, test fixture-mounted monitors are more likely to provide faster and more accurate measurements. The performance these monitors can provide is demonstrated with the QuiC-Mon and the OCIMU circuits. QuiC-Mon, housed on a PCB, is based on the Keating-Meyer principle and achieves measurement rates of up to 250 kHz and 100-nA resolution.6 OCIMU performs accurate IDDQ measurements in the 1-µA to 1-mA range at a >10-kHz rate and can tolerate high-capacitive (>2 µF) loads.7
Recognizing the performance advantages offered by on-test-fixture monitors but concerned about multivariant implementations, the Quality Test Action Group (QTAG) was formed at the 1993 International Test Conference to create appropriate standards.8 The desired standards for on-test-fixture IDDQ monitors would specify the footprint on the fixture (Figure 2), describe interconnections and define standard software to drive the monitor. QTAG also would assure that monitors are readily available.3
DUNA 2, a monitor contained on a single IC, is the first implementation designed to comply with the QTAG class-1 standard.9 Sample devices are available for evaluation and development is continuing.
Finding an Optimum Set of IDDQ Test Vectors
IDDQ measurements are more time-consuming than functional testing and constitute an additional test operation. Consequently, minimizing the number of IDDQ test vectors is essential to maintain a reasonable test throughput.
Most automated test pattern generation (ATPG) programs today, such as Mentor Graphics’ FastScan™ and FlexTest™ or Viewlogic’s Sunrise TestGen IDDQ, address IDDQ test-throughput issues. “The major goal of IDDQ test generation is to provide a set of IDDQ test vectors, representative of the maximum number of current measurements allowed by the manufacturing environment, which detects the maximum number of faults,” said Ralph Sanchez, technical marketing engineer at Mentor Graphics. “A secondary goal of IDDQ test generation is to furnish a minimal set of IDDQ test vectors which detects all faults.”
An optimum IDDQ test vector set may be obtained by letting a program evaluate and select suitable vectors from the functional set or by generating a new set via an IDDQ-specific ATPG. In the first case, each vector is examined for IDDQ test effectiveness and a minimal subset is selected. When any of the chosen test vectors is applied during functional test, a strobe command is automatically generated to perform the IDDQ measurement.
In the second case, IDDQ-specific ATPG tools not only generate test vectors, but also place the DUT in an IDDQ-testable state. For instance, high-current output drivers must be turned off or the write commands to embedded RAMs must be disabled, to ensure that the circuit is not drawing excessive current.
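Selecting a minimal strobe subset from an existing vector set resembles greedy set cover: repeatedly pick the vector detecting the most not-yet-covered faults. The sketch below is hypothetical; the vector names and fault lists are invented, and this is not how any particular ATPG tool is implemented.

```python
# Hypothetical greedy selection of IDDQ strobe vectors. Each candidate
# vector maps to the set of faults an IDDQ measurement at that vector
# would detect (fault lists here are invented for illustration).
def select_strobes(vector_faults, target):
    covered, chosen = set(), []
    while covered < target:  # proper-subset test: stop at full coverage
        best = max(vector_faults, key=lambda v: len(vector_faults[v] - covered))
        gain = vector_faults[best] - covered
        if not gain:
            break  # remaining faults undetectable by any candidate vector
        chosen.append(best)
        covered |= gain
    return chosen, covered

vector_faults = {
    "v1": {"f1", "f2", "f3", "f4"},
    "v2": {"f3", "f4", "f5"},
    "v3": {"f5", "f6"},
}
all_faults = {"f1", "f2", "f3", "f4", "f5", "f6"}
strobes, covered = select_strobes(vector_faults, all_faults)
print(strobes)  # ['v1', 'v3'] -- two strobe points cover all six faults
```

Note the diminishing returns: the first vector picked covers most faults and each later pick adds less, mirroring the ranked, descending-coverage strobe lists the tools produce.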
It also is more efficient to let one tool perform IDDQ fault-coverage evaluation, vector set minimization, circuit state initialization and IDDQ/functional test integration. For example, Viewlogic’s Sunrise TestGen™ tool combines several IDDQ-specific features.
“It generates a minimal set of high fault-coverage vectors for IDDQ purposes while constraining these vectors to inhibit excessive current flow during the quiescent state,” explained Steve Smith, director of marketing at Viewlogic. “This set of vectors is ranked and compacted, and the most efficient strobe points are selected by the TestGen tool.
“Typically, 10 to 15 IDDQ strobe points may be adequate. In most cases, the point ranked number one achieves about 40% of the IDDQ coverage,” Mr. Smith continued. “The rest provide coverage in descending order. Because the tool operates from a gate-level netlist and contains a circuit simulator, it yields a vector set that gets the most use out of any particular strobe point.”
Test-generation suites that contain design-for-testability checks can ensure that the device is innately IDDQ testable and can verify the appropriateness of functional vectors considered for IDDQ tests. When using an existing vector set, test cycles which conflict with any one of the IDDQ test constraints will not be used for IDDQ measurement, said Mr. Sanchez. When generating a new set of vectors, the IDDQ checks are used during ATPG as a bounding condition.
The Future of IDDQ
CMOS device geometries are continually being shrunk and supply voltages are being reduced to obtain more functionality and better performance from each device. An unfortunate by-product of this down-scaling is a decrease in threshold voltage levels and an increase in quiescent current.
Today, the mean IDDQ value for defective products is about 14 times larger than the mean value for good products at 25°C. However, as CMOS technology moves toward smaller feature sizes, the mean IDDQ for the good product will increase dramatically.
As a result, the mean IDDQ for the defective product will be only a small percentage greater than the mean IDDQ of the good product. This makes it very difficult or impossible to draw a line which separates good and defective product based upon IDDQ values.10
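A simple calculation illustrates the shrinking separation. The defect current and good-device means below are assumed values, chosen only to reproduce the roughly 14-times ratio cited above.

```python
# Illustration of the scaling problem (all numbers assumed): the same
# defect current that stands out 14x against today's low background
# leakage becomes a small fractional increase once good-device IDDQ grows.
defect_current = 50e-6  # amps; assumed extra current drawn by a typical defect

ratios = []
for good_mean in (4e-6, 500e-6):  # assumed means: today vs. deep submicron
    bad_mean = good_mean + defect_current
    ratios.append(bad_mean / good_mean)
    print(f"good {good_mean * 1e6:6.0f} uA   defective {bad_mean * 1e6:6.0f} uA   "
          f"ratio {ratios[-1]:5.2f}x")
```

With the assumed numbers, the defective-to-good ratio falls from 13.5x to 1.1x, leaving no clean threshold between the two populations.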
If IDDQ testing is to be useful for future technologies, new techniques to control the quiescent current must be developed. Potential solutions include cooling the device during test (perhaps to 200 K), reverse-biasing the substrate during test and architectural changes such as partitioning of the device.10
Since neither testing at subzero temperatures nor invoking stringent design-for-test rules is an attractive proposition, some experts express doubts about the long-term applicability of IDDQ testing. However, others have much more confidence in its future.
For instance, Manoj Sachdev of Philips Research Laboratories stated: “No matter which test methodology—Boolean or IDDQ—is used, deep submicron devices will be much more difficult to test. DFT is an accepted overhead in Boolean testing. Why should it not be in IDDQ testing?”11
References
1. Hawkins, C. and Soden, J., “Viewpoint: Dealing with Yield Losses in IDDQ Testing,” IEEE Spectrum, January 1996, pp. 68-69.
2. Henry, T. R. and Soo, T., “Burn-in Elimination of a High-Volume Microprocessor Using IDDQ,” International Test Conference Proceedings, 1996, pp. 242-249.
3. Baker, K., et al., “Plug & Play IDDQ Monitoring with QTAG,” International Test Conference Proceedings, 1995, pp. 739-749.
4. Isawa, K. and Hashimoto, Y., “High-Speed IDDQ Measurement Circuit,” International Test Conference Proceedings, 1996, pp. 112-117.
5. Johnson, G. and Wilstrup, J., “A General-Purpose ATE Based IDDQ Measurement Circuit,” International Test Conference Proceedings, 1995, pp. 97-105.
6. Wallquist, K., et al., “A General-Purpose IDDQ Measurement Circuit,” International Test Conference Proceedings, 1993, pp. 642-649.
7. Manhaeve, H., et al., “An Off-Chip IDDQ Current Measurement Unit for Telecommunication ASICs,” International Test Conference Proceedings, 1994, pp. 203-212.
8. Baker, K., “QTAG: A Standard for Test Fixture Based IDDQ/ISSQ Monitors,” International Test Conference Proceedings, 1994, pp. 194-202.
9. Baker, K., et al., “Development of a Class 1 QTAG Monitor,” International Test Conference Proceedings, 1994, pp. 213-222.
10. Williams, T. W., et al, “IDDQ Test: Sensitivity Analysis of Scaling,” International Test Conference Proceedings, 1996, pp. 786-792.
11. Sachdev, M., “MOS Scaling and IDDQ,” Test Technology Newsletter of the IEEE Computer Society, November-December 1996, p. 6.
These companies provided information for this article:
Advantest America (847) 634-2552
Hewlett-Packard (800) 452-4844
Mentor Graphics (503) 685-7000
Teradyne (818) 991-2900
Viewlogic Systems (510) 440-1000
Copyright 1997 Nelson Publishing Inc.
March 1997