Design-For-Test The Smart Way: dFT With A "Big T" And A "Little d"
Design-for-test, or DFT, should facilitate high-quality test, not change the design. Test techniques and strategies need to supply a high-quality test that screens out defective devices before they ship to customers. Any change to the design, or special design requirements for test, will impact the schedule. Moreover, any additional logic and routing will lower the product yield. Fortunately, many test technologies that have little or no impact on the design are already popular in the industry.
Macro testing enables small memories or other blocks that require specific pattern sequences to be tested without any design impact. And embedded deterministic test compression technology yields up to 100X compression without any requirements or changes to the core design. These test methods can also perform at-speed testing without the need for special hardware at the tester or fixture. Internal phase-locked-loop (PLL) clock-generation logic can be controlled and manipulated during automatic test pattern generation (ATPG). The bottom line is that more testing is often necessary, but the impact on the design must be minimal.
Testing, of course, is necessary to detect a defective product. Fabrication processes are imperfect systems in which a percentage of the product contains defects. The ratio of defect-free parts to total parts produced is the process yield. Sources of defects include, among others, particles getting into the silicon and design tolerances that sit too close to process variations. The opportunity for a defect to occur increases with the area of the logic and routing required within a device. Consequently, any logic added to the device design will lower the device yield.
As the complexity of designs and defects has increased over time, so has the demand for effective testing. Thus, various DFT techniques have emerged to help facilitate effective and efficient testing. However, there's often a tradeoff between the amount of DFT logic added and its effect on the development process and yield.
Evolution Of Test
Test strategies in the early days of IC production were based on running functional patterns that mimicked the design's desired operation. Functional patterns can be very effective. But the complexity of developing them and grading their coverage of potential defects increases exponentially with design size. As design complexity increased, this approach hit an effectiveness and efficiency barrier. Out of necessity, scan testing was born.

Scan testing is a DFT technique that essentially turns all flip-flops or latches into a series of shift registers in test mode. As a result, each of these sequential elements can be directly loaded with a desired value. In addition, a functional-mode value captured into a sequential element can be unloaded and verified.
With such controllability and observability, the complex nature of a circuit can be tested as many small, individual combinational segments between sequential elements. The idea is that if each of the circuit's structural pieces functions correctly individually, then the sum of them should operate correctly in functional mode. ATPG tools took advantage of scan and could automatically achieve very high test coverage of structural faults.
Even for a small circuit, creating a functional pattern to verify that a stuck-at-1 (sa1) fault doesn't occur at an internal gate can be very complex. To create a functional pattern to detect the sa1, the primary inputs (PIs) must be driven, and the clock must be pulsed to get values into the first set of registers (Fig. 1). This action is repeated until a value of 0 exists at the sa1 fault site.
Next, the 0 value that was controlled at the fault site must propagate to an output, which also may require many cycles of driving PIs and pulsing clocks. Conversely, with scan, each sequential element is directly loaded as if it were part of a long shift register. The complexity of the pattern creation is dramatically reduced to only the logic between scan chain B and scan chain C. Once the scan chains are loaded, the circuit is put back into functional mode and the functional clock is pulsed. Shifting out the captured contents of the scan chains performs verification.
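To make the load/capture/unload flow concrete, here is a minimal Python sketch of scan-based detection of a single stuck-at fault. The tiny logic cone, the pattern, and the fault site are hypothetical stand-ins for the logic between scan chains B and C, not from any particular design.

```python
# A minimal sketch of the scan flow described above: shift a pattern in,
# pulse the functional clock once, shift the response out, and compare.
# The logic cone, pattern, and sa1 fault site are all hypothetical.

def scan_test(pattern, logic_cone, expected):
    """Load the scan chain, apply one functional clock pulse, then
    unload and verify the captured response."""
    scan_chain = list(pattern)          # shift in (test mode)
    captured = logic_cone(scan_chain)   # one capture pulse (functional mode)
    return captured == expected         # shift out and compare

# Defect-free cone between the chains: computes (a AND b) OR c.
good_cone = lambda cells: [(cells[0] & cells[1]) | cells[2]]
# Same cone with the AND output stuck at 1 (the sa1 fault).
faulty_cone = lambda cells: [1 | cells[2]]

pattern, expected = [0, 1, 0], [0]      # drives 0 onto the fault site
print(scan_test(pattern, good_cone, expected))    # True  (passes)
print(scan_test(pattern, faulty_cone, expected))  # False (defect caught)
```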
The scan methodology is now accepted as a necessity for testing moderately sized to large digital designs scheduled for volume production. There's a logic penalty, though, because every flip-flop or latch must be converted to a scannable flip-flop or latch.
Similarly, embedded memory testing initially was performed either by using a tester to provide the sequence of patterns to the memory, or via an internal processor to generate and verify a sequence of writes and reads. Both of these strategies became very difficult with the increasing size and speed of embedded memories. One solution applies the desired sequences from a circuit embedded within the logic: memory built-in self-test (BIST).
Today, the foundation of testing for digital devices is built on scan and memory BIST. Both methodologies incur an increase in area. But because their value is so significant, memory BIST is typically used for large memories and scan for random logic.
Memory Testing
Increased integration enables many more functions to be embedded within one device. As a result, designs commonly contain large memory banks as well as many small memories distributed throughout the device. Designs can often have over 400 small memories, many of them in timing-critical interfaces.

The routing and area required to implement a memory BIST solution to test all of these memories would slow down the development cycle and have a significant impact on the production yield. However, embedded memories must be tested with at least a simple March algorithm to detect common memory defects (a minimal sketch of one such algorithm appears below). Without applying a memory test algorithm to the small memories, test quality will suffer. Defective devices will escape detection during test and be shipped. This situation posed a critical problem with few viable solutions. The traditional options were:
- Add memory BIST to all small memories. Test quality will be improved, but area and routing will take a significant hit.
- Don't test the small memories. No increase in area or routing, but test quality will suffer.
- Develop functional patterns from an internal processor. This allows high-quality test without increasing area or routing overhead.
This historical approach requires specific circuit knowledge and a large amount of effort to implement, though. In addition, it may not be possible to test all of the memories in parallel, and test time could take too long.
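To ground the discussion, here is a minimal Python sketch of the kind of "simple March algorithm" mentioned above, in the style of March C-. The memory size, the defective cell, and the behavioral memory model are illustrative assumptions, not the algorithm any particular tool applies.

```python
# A minimal sketch of a simple March test in the style of March C-:
# { up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); up(r0) }.
# The memory size and the defective cell below are illustrative.

def march_test(size, read, write):
    """Run the March elements against a memory exposed through read/write
    callbacks; return the sorted list of failing addresses."""
    up, down = range(size), range(size - 1, -1, -1)
    fails = set()

    def element(order, expect, write_val):
        for addr in order:
            if expect is not None and read(addr) != expect:
                fails.add(addr)
            if write_val is not None:
                write(addr, write_val)

    element(up, None, 0)    # up(w0): initialize
    element(up, 0, 1)       # up(r0, w1)
    element(up, 1, 0)       # up(r1, w0)
    element(down, 0, 1)     # down(r0, w1)
    element(down, 1, 0)     # down(r1, w0)
    element(up, 0, None)    # up(r0): final read
    return sorted(fails)

# Behavioral memory with a hypothetical stuck-at-0 cell at address 5.
mem = [0] * 16
write = lambda addr, val: mem.__setitem__(addr, 0 if addr == 5 else val)
print(march_test(16, lambda addr: mem[addr], write))  # -> [5]
```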
A technique called macro testing was developed to apply specific pattern sequences to small memories without adding logic to the design (Fig. 2). Instead of adding BIST logic, macro testing exploits ATPG features to use existing scan chains to apply the memory test sequences.1 The user simply defines the pattern sequences and the memories they should be applied to. Then, the ATPG tool can determine how to load scan chains so that the desired pattern value (address and data) is available at each small memory. ATPG also ensures that expected memory outputs (reads) are propagated to scan cells for observation and verification. Because macro testing is automated through ATPG, hundreds of different memories can be effectively targeted in parallel.
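The core idea of macro testing, mapping each step of a user-defined sequence onto a scan-chain load, can be sketched as follows. The pin-to-scan-cell table, chain length, and two-bit address are hypothetical; a real ATPG tool derives the loads through whatever logic sits between the scan cells and the memory ports.

```python
# A minimal sketch of the macro-test mapping: each step of a user-defined
# memory sequence becomes one scan load. All names and values are assumed.

PIN_TO_CELL = {"addr0": 3, "addr1": 7, "din": 1, "we": 4}  # assumed mapping
CHAIN_LEN = 10

def step_to_scan_load(addr, din, we):
    """Build one scan load placing the desired values on the memory pins;
    cells the step doesn't constrain stay as don't-cares ('X')."""
    load = ["X"] * CHAIN_LEN
    load[PIN_TO_CELL["addr0"]] = str(addr & 1)
    load[PIN_TO_CELL["addr1"]] = str((addr >> 1) & 1)
    load[PIN_TO_CELL["din"]] = str(din)
    load[PIN_TO_CELL["we"]] = str(we)
    return "".join(load)

# Two steps of a sequence: write 1 to address 2, then read it back
# (the read data is captured into scan cells and shifted out).
print(step_to_scan_load(addr=2, din=1, we=1))  # -> "X1X01XX1XX"
print(step_to_scan_load(addr=2, din=0, we=0))  # -> "X0X00XX1XX"
```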
The Demand For More Patterns
Random logic testing through scan has been very effective over the years. But now, more patterns and pattern types are needed to deal with increases in design size and new defect mechanisms. Logic BIST has promised to take over the market year after year for about 15 years. It's a necessity for some applications, such as fielded-system test. But why is it still only used for special cases, and why hasn't it become a common production-test approach?

Logic BIST often requires a significant change to the design. Implementing it affects both the design process and silicon area. The design logic must be changed to prevent any unknown state, or X value, from propagating to an observation point. Random-pattern-resistant logic must be made more testable with test points. Even then, logic BIST alone achieves limited test quality in production, because many faults remain random-pattern resistant. Given the effort required to insert it, logic BIST is typically reserved for fielded-system test and other special applications, with reuse in production alongside the deterministic patterns needed for high-quality test.
It's widely accepted that the more patterns and pattern types applied to random logic, the better the test quality. Most patterns are based on scan testing, because scan ATPG is easy to apply and achieves high coverage of various fault models. Figure 3 gives an estimate of the adoption of fault types and pattern counts in the industry. The stuck-at fault model and patterns were effective for testing earlier process technologies. But as 130-nm process technologies were introduced, the population of timing-related defects grew to the point where patterns specifically targeting timing defects became necessary.2,3 The transition fault model addresses the detection of timing defects. Yet transition pattern sets are often three to five times larger than a stuck-at pattern set for the same design.
Looking forward, new fault models and patterns specifically targeting timing defects are likely to be added to the test portfolio for 90-nm and smaller process technologies. These include multiple-detect patterns4 and patterns based on locating likely bridge pairs from the GDSII physical structure of the device. While these new pattern types are also scan patterns, the pattern volume required to keep up with test-quality requirements is growing dramatically.
A critical test-time issue also appeared at 130 nm: existing tester capacity had to be maintained while performing all of the necessary tests. Focusing on test-data volume helps fit a pattern set into the tester. But without effective test-time compression, many more testers would be necessary to support steady chip production capacity with all of the new required tests.
Several test approaches introduced to the industry aim to supply the necessary patterns for test coverage within a reasonable amount of tester time. The goal of each approach is to provide a mechanism to apply the necessary scan patterns, but in much less tester time. A clear opportunity to speed up the test time for each pattern is to make scan-chain loading faster. Scan chains can often be 5000 scan cells in length or longer, requiring at least 5000 test cycles to load each pattern. So, how can these chains be shortened without adding many more scan chains and increasing chip I/O and tester requirements?
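A quick back-of-the-envelope calculation shows why chain length dominates tester time. The cell, chain, and pattern counts below are assumed for illustration; only the 5000-cycle load matches the figure quoted above.

```python
# Rough scan test-time arithmetic; all counts are illustrative assumptions.
cells = 500_000     # total scan cells in the design (assumed)
chains = 100        # scan chains driven directly by the tester (assumed)
patterns = 10_000   # scan patterns to apply (assumed)

shift_cycles = cells // chains        # 5,000 cycles to load one pattern
total_cycles = patterns * shift_cycles
print(f"{shift_cycles} cycles per load, {total_cycles:,} shift cycles total")

# Driving 100X more (and therefore shorter) internal chains through an
# on-chip decompressor would cut the per-pattern load to ~50 cycles.
```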
Compression techniques that use the BIST strategy of a signature generator on the scan outputs have the same issue as logic BIST: design changes are needed to fix internal X states. Linear feedback shift registers (LFSRs) were first reseeded about a decade ago to address compression, by computing a seed whose expanded output delivers the required values to specific scan-cell locations for targeted fault detection.5 The scan cells that require a specific value are called specified bits. Reseeding LFSRs also fills the non-specified bits with random data. This technique significantly reduces the test data, but it's less effective at reducing test time.
A problem with reseeding LFSRs is that one LFSR bit is necessary for every scan cell that requires a specified bit. So if a pattern requires 3000 specified bits, then 3000 bits of LFSR must be available (often with a shadow register of the same size). Consequently, either a huge LFSR is required or many LFSRs are necessary to apply the various types of patterns. Either case requires a large amount of logic for the LFSR and shadow block.
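Below is a minimal Python sketch of the reseeding idea. The 4-bit LFSR, its taps, and the three specified bits are illustrative assumptions; a production tool solves linear equations for the seed rather than searching, and as noted above the LFSR must be far larger.

```python
# A minimal sketch of LFSR reseeding: find a seed whose expanded output
# places every specified bit at its required scan position.
from itertools import product

def lfsr_stream(seed, taps, length):
    """Expand a seed into `length` scan-fill bits (Fibonacci-style LFSR)."""
    state = list(seed)
    out = []
    for _ in range(length):
        out.append(state[-1])                  # bit shifted into the chain
        feedback = 0
        for t in taps:
            feedback ^= state[t]
        state = [feedback] + state[:-1]
    return out

def reseed(specified, taps, n_bits):
    """Return a seed matching every {position: value} requirement."""
    for seed in product([0, 1], repeat=n_bits):
        stream = lfsr_stream(seed, taps, max(specified) + 1)
        if all(stream[pos] == val for pos, val in specified.items()):
            return seed, stream
    return None

# Only 3 of 12 scan cells need specific values; the rest are random fill.
specified = {2: 1, 7: 0, 11: 1}
seed, stream = reseed(specified, taps=(0, 3), n_bits=4)
print(seed)    # a 4-bit seed suffices for 3 specified bits; a pattern
print(stream)  # with 3000 specified bits would need a 3000-bit LFSR
```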
One way around this problem is to avoid sophisticated patterns and only generate patterns with a small number of specified bits. But this will affect test quality if all patterns or pattern types can't be supported. Another workaround inserts test points throughout the circuit to reduce the number of specified bits required, though this would put the focus back on design changes and increase silicon-area overhead.
Illinois scan is another technique introduced to deal with pattern compression. It takes a very simple approach: each tester scan-input signal is broadcast to a very large number of internal scan chains. This compresses patterns because increasing the number of internal scan chains shortens their length. A bypass mode detects faults that otherwise can't be detected due to dependencies between internal scan chains that share common tester scan inputs.
The design impact and silicon area are small with such an approach. However, compression may be only 2X to 11X, which might not be enough to apply all of the patterns necessary for a high-quality test.6
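The dependency that forces bypass mode can be sketched in a few lines of Python. The per-chain specified-bit maps below are made up; the point is only that chains sharing a tester input must agree at every shift position.

```python
# A minimal sketch of the Illinois scan dependency: chains fed by the same
# tester input are loaded identically, so a pattern fits broadcast mode
# only if all chains agree at every shift position.

def can_broadcast(chain_requirements):
    """chain_requirements: one {shift_position: value} dict per internal
    chain sharing a tester scan input. True if no position conflicts."""
    merged = {}
    for chain in chain_requirements:
        for pos, val in chain.items():
            if merged.setdefault(pos, val) != val:
                return False    # conflicting values -> needs bypass mode
    return True

print(can_broadcast([{0: 1, 2: 0}, {2: 0}, {4: 1}]))  # True: broadcast OK
print(can_broadcast([{2: 0}, {2: 1}]))  # False: apply via serial bypass
```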
Embedded Deterministic Test (EDT), a recently introduced compression technique, uses a continuous flow through a ring generator.7 The ring generator is a logic block that provides specified bits to scan-cell locations while being continuously updated with each scan load cycle. As a result, the ring generator can provide thousands of specified bits for various pattern types with 64 bits or fewer of state. In turn, EDT can supply any type of pattern that's desired for test quality, but with minimal logic located only at the scan-chain I/O.
The work is done in the ATPG process, which solves a series of linear equations so that loading a compressed sequence of bits through the ring generator supplies all of the required specified bits. Unknown values (X's) within the logic are recognized during ATPG and masked out at the EDT compactor, so internal design changes aren't required. The end result is support for all scan pattern types, but with very minimal logic and no design changes. Compression achieved with EDT offers up to a 100X improvement in test time and data volume versus a non-EDT method.
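The linear-equation view can be sketched directly in Python. In the toy example below, each specified scan cell's value is the XOR of an assumed subset of five compressed input bits; the 3-by-5 bitmask matrix is made up, whereas in EDT it would follow from the structure of the ring generator.

```python
# A toy version of the linear-algebra step: solve for compressed input
# bits so every specified scan cell receives its required value.

def solve_gf2(rows, rhs, num_vars):
    """Solve A*x = b over GF(2). `rows` holds A's rows as bitmasks.
    Returns one solution (free variables zeroed) or None if inconsistent."""
    rows, rhs = list(rows), list(rhs)
    pivots = {}            # solved row index -> its pivot column
    r = 0
    for col in range(num_vars):
        p = next((i for i in range(r, len(rows)) if rows[i] >> col & 1), None)
        if p is None:
            continue       # no pivot available in this column
        rows[r], rows[p] = rows[p], rows[r]
        rhs[r], rhs[p] = rhs[p], rhs[r]
        for i in range(len(rows)):
            if i != r and rows[i] >> col & 1:
                rows[i] ^= rows[r]   # eliminate the column elsewhere
                rhs[i] ^= rhs[r]
        pivots[r] = col
        r += 1
    if any(rhs[r:]):
        return None        # over-constrained: ATPG would re-target
    x = 0
    for i, col in pivots.items():
        if rhs[i]:
            x |= 1 << col
    return x

rows = [0b10011, 0b01110, 0b00101]   # which inputs feed each specified cell
rhs = [1, 0, 1]                      # values those cells must hold
seed = solve_gf2(rows, rhs, num_vars=5)
print(format(seed, "05b"))           # -> '00001', the compressed stream
```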
Simple Logic For Accurate At-Speed Testing
Probably the biggest test challenge at 130 nm and smaller is detecting the dramatic increase in subtle timing defects relative to previous technologies. The most reliable solution is to use the chip's PLL to supply at-speed clocking. But the device clock tree and PLL logic are often among the most critical parts of the design. Fortunately, clock-switch design strategies exist in which the clock tree isn't modified, yet PLL clock pulses can be made programmable during test.

Designs with PLLs have a multiplexer added to the PLL clock path to enable an external tester clock, or shift clock, which is used to load the scan chains of a design that will be scan tested. Figure 4 illustrates the use of this same multiplexer to supply the programmable PLL pulsing during test.
The PLL clock is fed into a clock-switch block that can be configured to supply various PLL clock pulses. Clock pulses from the clock switch propagate to the circuit through the scan-clock path. The control bits for the PLL clock switch can add up, but ATPG named capture procedures can program them automatically.
The control bits associated with each pulse sequence are defined up front. When ATPG wants a particular clocking sequence, it loads the scan cells so that the control bits are in the corresponding state. Consequently, accurate PLL clocking can be supplied during scan testing, controlled by simple logic that is programmed automatically during scan-chain loading. The point of the PLL programmability is to let the ATPG tool determine the control, not to modify the clock tree or add complex control logic.
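A behavioral Python sketch of such a clock switch is shown below. The two-bit encoding and the pulse counts are invented for illustration; actual control-bit definitions come from the named capture procedures the design team sets up.

```python
# A behavioral sketch of a programmable PLL clock switch: control bits
# loaded through the scan chains select how many PLL pulses reach the
# core during capture. The encoding below is an invented example.

PULSE_SEQUENCES = {
    (0, 0): 0,   # no capture pulse (shift only)
    (0, 1): 1,   # single pulse: stuck-at capture
    (1, 0): 2,   # launch + capture: at-speed transition test
    (1, 1): 3,   # extended sequence for deeper sequential logic
}

def capture_pulses(control_bits):
    """Return the number of at-speed PLL pulses the clock switch gates
    onto the scan-clock path for the scan-loaded control bits."""
    return PULSE_SEQUENCES[tuple(control_bits)]

# For a transition pattern, ATPG loads (1, 0) into the control cells:
print(capture_pulses([1, 0]))   # -> 2 (launch and capture at speed)
```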
Tradeoffs are often made between test quality on one side and development impact and yield on the other. DFT is most effective if high-quality test can be achieved without seriously affecting the design. The focus should be on the ATPG tools and the testing portion of the DFT process, not design changes. A macro testing approach enables testing of small memories using specific algorithms designed to provide high-quality test, but without design or silicon impact.
For logic testing, EDT makes it possible to apply all of the tests necessary to achieve very high-quality test without affecting the design. Moreover, only minimal logic is placed around the scan I/O. Macro testing and EDT can both use programmable PLL clock switching to perform accurate at-speed testing. The value of all these techniques is that they have minimal, if any, impact on the design process and silicon area. Thus, the tradeoffs of test quality versus design/area impact can be avoided with the right DFT approaches.
The author would like to thank Richard Illman of Cadence Design Foundry UK Ltd. for his insights related to test focus versus design change during our previous collaborations on publications.
References:
1. Boyer, J., and Press, R., "New Methods Test Small Memory Arrays," Test & Measurement World, Reed Business Information, 2003, pp. 21-26.
2. Kim, K., Mitra, S., and Ryan, P., "Delay Defect Characteristics and Testing Strategies," IEEE Design & Test of Computers, Sept.-Oct. 2003, pp. 8-16.
3. Benware, B., Madge, R., Lu, C., and Daasch, R., "Effectiveness Comparisons of Outlier Screening Methods for Frequency-Dependent Defects on Complex ASICs," IEEE VLSI Test Symposium (VTS 03), 2003.
4. Benware, B., Schuermyer, C., Ranganathan, S., Madge, R., Krishnamurthy, P., Tamarapalli, N., Tsai, K.-H., and Rajski, J., "Impact of Multiple-Detect Test Patterns on Product Quality," International Test Conference, 2003.
5. Hellebrand, S., Tarnick, S., Rajski, J., and Courtois, B., "Generation of Vector Patterns Through Reseeding of Multiple-Polynomial Linear Feedback Shift Registers," International Test Conference, 1992.
6. Pandey, A., and Patel, J., "An Incremental Algorithm for Test Generation in Illinois Scan Architecture Based Designs," IEEE Design, Automation and Test in Europe Conference, 2002.
7. Rajski, J., Tyszer, J., Kassab, M., Mukherjee, N., Thompson, R., Tsai, K.-H., Hertwig, A., Tamarapalli, N., Mrugalski, G., Eide, G., and Qian, J., "Embedded Deterministic Test for Low-Cost Manufacturing Test," International Test Conference, 2002.