Design-For-Test Links Design And Manufacturing

Dec. 4, 2003
A new frontier for EDA is its ability to flow information from design to manufacturing and back again. In the process, DFT has begun to encroach on the front end of the design process.

Historically, testability has been an afterthought in the design process. But the heightening complexity of chip designs, and especially SoCs, is forcing testability (and manufacturability) into a more central position. It's no longer enough for test engineers to insert DFT scan chains into a flattened chip layout after synthesis. Now, embedded test structures are regarded as the first line of defense for failure analysis and yield improvement.

A shift is afoot in which EDA tools will eventually pass more information forward to test engineers. Conversely, and perhaps more importantly, information gained from failure and yield analysis will wend its way back to designers. In the process, designers stand to learn how to make their products more testable from the outset. In the past, designers considered design-for-test (DFT) a non-value-added element. But that perception is turning around rapidly, with DFT gaining much more attention from front-end designers, even at the RTL stage.

DFT techniques have been applied to chips as a mechanism for weeding out yield casualties. It's understood that there will always be yield fallout in any silicon fabrication process. It's also understood that yields will continue to decrease as design rules descend further into the nanometer realm.

Furthermore, in the SoC world, designers must incorporate intellectual-property (IP) blocks from many sources, some internal and some external. Designers may know little about what's going on inside a given IP block beyond its functional specification. Thus, they're forced to rely on DFT to bring test access to that block out to other parts of the overall chip design and, ultimately, to the outside world.

The traditional way to apply DFT is to add scan structures to facilitate testing via automatic test-pattern generation (ATPG). This is accomplished by connecting all of the design's registers in serial fashion, allowing test engineers to shift data in and out through a few ports at the chip level (Fig. 1). That allows access, for test purposes, not only to the pins of the device but also to the internal registers. That's been the mainstream technology for manufacturing test over the last 10 years, and it probably won't relinquish that status anytime soon.
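To make the mechanics concrete, here's a minimal Python sketch of that serial access, assuming a toy eight-cell chain; the stimulus values are illustrative and not taken from any real design.

# Toy model of scan-based access: the design's flip-flops are chained
# serially so that internal state can be shifted in and out through a
# single chip-level port.

class ScanChain:
    def __init__(self, length):
        self.cells = [0] * length  # internal flip-flops, normally hidden

    def shift(self, scan_in_bits):
        """Shift a stimulus in; the previous contents emerge at scan-out."""
        scan_out_bits = []
        for bit in scan_in_bits:
            scan_out_bits.append(self.cells[-1])   # last cell drives scan-out
            self.cells = [bit] + self.cells[:-1]   # each cell takes its neighbor's value
        return scan_out_bits

chain = ScanChain(length=8)
previous_state = chain.shift([1, 0, 1, 1, 0, 0, 1, 0])  # load new stimulus
print(previous_state)  # the full internal state, observed via one pin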

However, as multi-megagate designs migrate down to 130- and 90-nm silicon processes, the effectiveness of scan-based DFT techniques is diminishing (Fig. 2). Some of the lost effectiveness can be attributed to lapses in following DFT rules, but much of it is traceable to the shift to smaller geometries. Traditionally, the metric for determining test effectiveness has been the stuck-at fault model. A stuck-at fault is a hard, static failure in which a node is stuck at logic zero or one. Typically, ATPG techniques are used to create static test vectors to detect these faults.
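As a rough illustration of the stuck-at model, the following sketch injects a stuck-at-0 fault into a made-up two-gate circuit and enumerates which input vectors expose it, which is essentially the search an ATPG tool performs.

# Toy stuck-at fault detection: compare good-circuit and faulty-circuit
# outputs for every input vector; any vector that differs detects the fault.
from itertools import product

def circuit(a, b, c, stuck_node=None):
    n1 = a & b                   # internal node n1 = AND(a, b)
    if stuck_node == "n1/0":     # inject stuck-at-0 on n1
        n1 = 0
    return n1 | c                # output = OR(n1, c)

detecting = [v for v in product([0, 1], repeat=3)
             if circuit(*v) != circuit(*v, stuck_node="n1/0")]
print(detecting)  # vectors exposing n1 stuck-at-0: [(1, 1, 0)]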

For memories, built-in self-test (BIST) techniques are the preferred method. BIST adds test circuitry to the design and applies the memory test from these circuits on-chip.

Historically, ASIC suppliers have preferred to see fault coverage of greater than 90%. However, for a 1-million-gate design fabricated on a 0.25-µm process, 90% coverage would produce a defect rate of 0.28%, or 2800 ppm. That's unacceptable considering that these defects won't be found until devices are assembled on pc boards. Even when all DFT rules are followed, it's very unusual for the fault coverage of a conventional ASIC design to exceed 98%. If DFT rules aren't followed, a project will suffer the consequences in device fallout after board assembly.
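The article doesn't show the arithmetic behind these figures, but defect-level estimates of this kind commonly come from the Williams-Brown model; the yield value below is an assumption chosen to reproduce the quoted 2800 ppm, not a number from the article.

% Williams-Brown defect-level model: Y is process yield, T is fault coverage.
\[ DL = 1 - Y^{\,1-T} \]
% With T = 0.90 and an assumed yield of Y \approx 0.972:
\[ DL = 1 - 0.972^{\,0.1} \approx 0.0028 = 2800\ \mathrm{ppm} \]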

The real problem is that at nanometer geometries, other types of faults begin to appear, primarily speed-related failures. Speed-related (or, as they're sometimes called, at-speed) failures aren't static; they're better characterized as resistive failures, in which resistive nodes or bridges cause given nodes to be slow to rise or fall.

For example, resistive vias can cause errors in high-speed circuits yet still pass stuck-at testing. Transition fault models test whether a node transitions properly at all, while transition delay fault models determine whether the delay between two logic values is acceptable. Path delay fault models test for excessive delay along a predetermined path, caused, for example, by resistive-capacitive coupling with other paths. None of these failures will reveal themselves through purely static stuck-at testing.
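A small sketch shows why purely static tests miss these defects; the timing numbers are illustrative. A slow-to-rise node still settles to the right value eventually, so only a sample taken at the system clock rate catches it.

# Toy slow-to-rise defect: the node eventually reaches 1, so a static
# (slow-clock) test passes, while an at-speed sample catches the delay.

NODE_DELAY_NS = 4.0    # defective rise time (illustrative)
CLOCK_PERIOD_NS = 2.5  # at-speed capture window (illustrative)

def sample_node(after_ns):
    """Value seen at the node 'after_ns' after a 0 -> 1 launch."""
    return 1 if after_ns >= NODE_DELAY_NS else 0

print("static test sees:", sample_node(after_ns=100.0))         # 1, looks fault-free
print("at-speed test sees:", sample_node(after_ns=CLOCK_PERIOD_NS))  # 0, fault detected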

As a result, designers are moving toward adopting an at-speed test methodology. When separate segments of the design run at different clock speeds, applying at-speed tests across all of those clock domains boosts test complexity and increases the number of patterns required to identify the faults. Microprocessor manufacturers have been doing at-speed test for years, but the practice is beginning to spread more widely throughout the industry. Manufacturers such as Intel have typically relied on functional testing to snare speed-related failures. Functional test exercises the chip as though it were in its target system. It doesn't use scan, which is what lends itself to precise fault-coverage metrics. As a result, a downfall of a functional at-speed methodology is that it's very difficult to determine whether you've covered the entire design.

Designers are therefore seeking ways to supplement their standard test methodologies, which typically contain static, stuck-at components, with an at-speed methodology. These methodologies aren't mutually exclusive, but rather complementary.

This is where ATPG comes in once again, as ATPG tools evolve to provide coverage metrics for at-speed tests. One of the most common at-speed fault models used with ATPG is the transition fault, which is very similar to the stuck-at fault. The ATPG tool looks at every node in the design to determine whether it's slow to rise or fall, targeting the complete design for any speed problem. The side effect of this methodology is that the transition patterns typically number from two to 10 times the number of standard stuck-at patterns.

The much greater volume of test vectors for at-speed test has an adverse impact on both test time and the volume of test data. Will all of this data fit on the tester? Most large designs already stress the limits of tester memory. Even more compelling is the impact on test time, which, in turn, affects manufacturing throughput.
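A back-of-the-envelope estimate makes the concern concrete; the pattern and scan-cell counts below are assumptions for illustration, not figures from the article.

% Uncompressed scan-data volume: one stimulus bit and one expected-response
% bit per scan cell per pattern.
\[ V \approx N_{\mathrm{pat}} \times N_{\mathrm{cells}} \times 2\ \text{bits} \]
% For example, with 10^4 patterns and 10^6 scan cells:
\[ V \approx 2 \times 10^{10}\ \text{bits} \approx 2.5\ \text{GB of tester memory} \]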

The need to reduce test pattern volume and testing time is pushing designers toward embedded test and BIST techniques. The trick for tool vendors is to disrupt the lives of RTL designers as little as possible.

Mentor Graphics is attempting to address these issues with the latest version of its FastScan ATPG tool, which introduces what Mentor terms embedded-deterministic-test (EDT) technology, embodied in the TestKompress product.

Like traditional ATPG, EDT technology is based on deterministic test-pattern generation. It differs in that, by adding two very small pieces of logic in the scan path, it can create highly compressed test patterns offering the same coverage as ATPG. Deterministic test-pattern generation means support for a wide range of industry-standard fault models as well as easy extension to new fault models. It also produces far fewer test patterns than random techniques.

EDT logic blocks are inserted only in the scan chain path (scan in and out). As a result, the system design isn't affected, which means immunity from engineering change orders or other changes to the system logic. The EDT logic lets users increase the number of scan chains in their designs up to 10 times, which results in shorter chains and shorter test time.

Two main embedded blocks make up the EDT intellectual property. A decompressor on the input side takes the compressed data and decodes and decompresses it to fill all of the design's scan cells. A compactor on the output side takes the data emerging from the internal scan chains and compacts it on the way out. Total logic needed for both parts is only about 20 gates per internal scan chain.
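Mentor's actual decompressor is proprietary, but the dataflow can be sketched with a stand-in: a toy linear (LFSR-based) decompressor fanning one compressed input bit per cycle out to several short chains, and an XOR tree standing in for the compactor. This is a sketch of the concept only, not EDT's real logic.

# Toy stand-in for EDT-style hardware: a 4-bit LFSR "decompressor" fans
# each compressed input bit out to several short scan chains, and an
# XOR tree "compactor" folds chain outputs back onto one tester channel.

NUM_CHAINS = 4

def decompress(compressed_bits, state=0b1011):
    """Expand one compressed bit per cycle into NUM_CHAINS chain inputs."""
    for bit in compressed_bits:
        feedback = ((state >> 3) ^ (state >> 2)) & 1   # illustrative taps
        state = ((state << 1) | (feedback ^ bit)) & 0xF
        yield [(state >> i) & 1 for i in range(NUM_CHAINS)]

def compact(chain_bits):
    """XOR the parallel chain outputs into a single observed bit."""
    result = 0
    for bit in chain_bits:
        result ^= bit
    return result

# In silicon, the chains would capture circuit responses between load and
# unload; here we feed the loaded values straight back to show the dataflow.
for cycle, chain_inputs in enumerate(decompress([1, 0, 1, 1])):
    print(f"cycle {cycle}: chains load {chain_inputs}, "
          f"compacted bit = {compact(chain_inputs)}")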

The EDT architecture is designed for very high-speed operation. The decompressor itself won't be a bottleneck in high-speed shift operation. It's also very modular. As a result, during place and route, the logic can be easily segmented and placed next to the blocks it drives. The decompressor also provides very high encoding capacity, which is critical for nanometer designs that require additional types of tests to detect new failure modes.

The compactor side of the architecture also offers advantages. For one thing, it handles unknown states. That's significant because almost all designs contain unknown values during test. Other compaction schemes require users to bound all X sources so that they don't propagate to the compactor and corrupt the signature. X bounding means that users must modify their functional design, a situation that no test engineer wants to impose on the designers.
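A toy three-valued XOR shows the problem, and why masking (one of several possible X-handling schemes, sketched here generically rather than as Mentor's design) preserves the fault data.

# Toy 3-valued XOR compaction: one unknown (X) input corrupts the whole
# compacted bit, illustrating why X sources must be handled or masked.

def xor3(a, b):
    """XOR over {0, 1, 'X'}: any X makes the result unknown."""
    if a == "X" or b == "X":
        return "X"
    return a ^ b

def compact(bits, mask=None):
    """Fold chain outputs with XOR, optionally masking suspect chains."""
    result = 0
    for i, bit in enumerate(bits):
        if mask and i in mask:
            continue  # masked chain: excluded from the signature
        result = xor3(result, bit)
    return result

responses = [1, 0, "X", 1]           # chain 2 captured an unknown value
print(compact(responses))            # 'X' -- signature corrupted
print(compact(responses, mask={2}))  # 0  -- masking chain 2 preserves data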

In addition, the compactor supports diagnostics. Based on failure information at the compactor's output, it's possible to determine which internal node is defective. This capability is especially important as manufacturers roll out new silicon processes and ramp up yields.

In EDT, the tester stores patterns in the form of compressed stimuli and responses (Fig. 3). Applying one pattern involves sending a compressed stimulus from the ATE system to the tool's decompressor. The continuous-flow decompressor receives the stimulus on its inputs and produces, on its outputs, the data that's loaded into the scan chains. That data contains the values required to detect the targeted faults.

Once the scan chains are loaded and the test is applied, the responses are captured back into the scan chains and shifted out. On the way out, they pass through the compactor, which is designed to ensure that the information related to fault detection isn't masked. Compacted responses are compared to "golden" references on the tester.

EDT uses continuous-flow application of the test patterns. Unlike signature-based approaches, it performs cycle-by-cycle comparison of actual device responses against the expected responses.

The advantages of EDT over traditional ATPG flow from the resultant shorter scan chains. A circuit implemented with traditional ATPG might have, say, two very long scan chains. In EDT, those two chains are reconfigured into, perhaps, 20 very short scan chains driven by a decompressor and observed by a compactor. The tester, however, "sees" the design as having only two inputs and two outputs, or two scan chains. But the actual EDT chains are 10 times shorter; the test requires 10 times fewer cycles; and the process consumes 10 times less test data. And because the tester needs 10 times less memory, a less costly tester can be used.
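The scaling can be written down directly; the factor of 10 below is simply the article's example ratio.

% Shift cycles per pattern track the longest chain's length. Splitting
% c external chains into 10c internal ones:
\[ L_{\mathrm{EDT}} = \frac{N_{\mathrm{cells}}}{10\,c} = \frac{L_{\mathrm{ATPG}}}{10} \]
% so shift cycles per pattern, test time, and tester data volume all drop
% by roughly 10x while the tester still drives only c channels.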

Embedded test, in general, is making its way deeper into the SoC designer's toolbox. Without embedded test, which should be understood to mean embedded test IP in each core and layer of a large, hierarchical SoC design, several difficulties ensue. For one, scan/ATPG flows don't scale well with larger designs, and they don't map directly onto hierarchy levels. Scan/ATPG methods generally require development of flat test patterns for the entire SoC, making it difficult to ensure high fault coverage. The speeds and densities of today's multi-million-gate SoCs mean longer test times, because larger pattern sets must be used.

For LogicVision, a vendor of embedded test IP and software tools, the answer lies in a BIST-based approach. In its new LV2004 product family, LogicVision offers an integrated approach to embedded test throughout the design cycle and the end product's lifecycle (Fig. 4). Tailored for hierarchical SoC architectures, the LV2004 suite allows testability to be embedded throughout the design with minimal impact on overall design times.

A set of customizable IP modules performs embedded test on different portions of an SoC design. Integration of these modules follows a divide-and-conquer hierarchical approach: the design is partitioned into multiple hierarchical cores, and each core is supplied with dedicated, embedded test modules, such as a logic BIST controller and memory BIST controllers. LogicVision's hierarchical embedded-test insertion flow is bottom-up, as sketched below: test modules are inserted first at the lowest-level cores and then at progressively higher levels of the hierarchy.
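Here's a toy rendering of that bottom-up traversal; the core names and controller labels are placeholders, not LogicVision's actual tool interface.

# Toy bottom-up insertion: walk the core hierarchy depth-first, giving
# each core its own test controllers before processing its parent.

def insert_embedded_test(core, depth=0):
    for child in core.get("children", []):
        insert_embedded_test(child, depth + 1)  # children first
    core["test_modules"] = ["logic BIST controller",
                            "memory BIST controllers"]
    print("  " * depth + f"inserted test modules into {core['name']}")

soc = {"name": "top", "children": [
    {"name": "cpu_core", "children": [{"name": "l1_cache"}]},
    {"name": "dsp_core"},
]}
insert_embedded_test(soc)  # lowest-level cores are instrumented first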

That bottom-up flow could result in some unpredictability in the test-integration process. LogicVision's Embedded Test Planner tool eliminates that unpredictability by providing architectural verification at the RTL level. With this tool, designers can make architectural decisions early in the design flow that meet embedded-test requirements.

The LV2004 suite also provides links to physical design. It generates Synopsys Design Constraint (SDC) files for all embedded test structures as well as static timing analysis scripts. It also generates ScanDef files that support scan reordering between different chains. Reordering of scan flip-flops within a chain can be an effective means of minimizing routing congestion. Because the logic-BIST approach drives the use of a larger number of smaller scan chains, further routing gains can be had if flip-flops are moved from one chain to another.

In LogicVision's hierarchical approach, BIST controllers are added to individual cores, making them fully self-testable independent of other portions of the design (Fig. 5). A top-level BIST controller tests inter-core logic at speed as well. The BIST controllers of all cores are connected to the chip's overall IEEE 1149.1 test-access-port (TAP) interface. Each core comes with its own localized TAP, called a Wrapper TAP, through which all BIST controllers within the core are accessed. This not only makes each core reusable, but it also keeps the interconnect requirements between the core and the chip's TAP low and constant.
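Schematically, the access path looks like the sketch below. The class and method names are hypothetical, and a real 1149.1 TAP is a 16-state machine driven by TCK and TMS, which this model deliberately glosses over.

# Schematic model of hierarchical test access: the chip-level TAP selects
# a core's wrapper TAP, which in turn runs that core's BIST controllers.

class WrapperTAP:
    def __init__(self, core_name, bist_controllers):
        self.core_name = core_name
        self.bist_controllers = bist_controllers

    def run_bist(self):
        return {c: "pass" for c in self.bist_controllers}  # stubbed results

class ChipTAP:
    def __init__(self):
        self.wrappers = {}

    def register_core(self, wrapper):
        # One fixed hookup per core keeps chip-level wiring constant.
        self.wrappers[wrapper.core_name] = wrapper

    def test_core(self, core_name):
        return self.wrappers[core_name].run_bist()

chip = ChipTAP()
chip.register_core(WrapperTAP("cpu_core", ["logic_bist", "mem_bist_0"]))
chip.register_core(WrapperTAP("dsp_core", ["logic_bist"]))
print(chip.test_core("cpu_core"))  # {'logic_bist': 'pass', 'mem_bist_0': 'pass'}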

Facilitating test for a full SoC on a hierarchical basis is an important step forward. Equally or perhaps even more important is using the information one gleans from test to improve both design and manufacturing.

Cadence's efforts in this regard are reflected in its recently announced Encounter Test Solutions suite, which consists of two products: Encounter Test Design Edition and Encounter Test Manufacturing Edition (Fig. 6). Encounter Test Solutions are the first fruits of Cadence's acquisition of IBM's Test and Design Services operation in September 2002.

The Design Edition, targeted at design and test engineers, includes DFT insertion; capacity for designs of greater than 50 million gates; memory BIST; embedded core test; test compression with X-state masking; and compact, high-coverage delay tests. The Manufacturing Edition comprises a failure diagnostic environment that analyzes design intent and manufacturing information.

The Manufacturing Edition accelerates defect detection and links design data with the manufacturing flow by enabling a true feedback loop. Faults are quickly localized to the logic netlist and layout, with navigation supported by state-of-the-art viewer technology. The fault-isolation tool also works with tests generated by other ATPG tools, such as Synopsys' TetraMAX or Mentor's FastScan. On top of that, the tool generates diagnostic patterns for first-silicon debug and provides links to automatic-test and failure-analysis equipment.

Need More Information?
Cadence Design Systems
www.cadence.com

Credence Systems Corp.
www.credence.com

LogicVision Inc.
www.logicvision.com

Mentor Graphics Corp.
www.mentor.com

Synopsys Inc.
www.synopsys.com

Syntest Technologies
www.syntest.com

Teseda Corp.
www.teseda.com

