Many IC designers finally have embraced design for testability (DFT) in the form of scan insertion for digital circuit designs because of the significant time-to-production advantages these techniques deliver. But the advent of system-on-a-chip (SOC) designs, including those containing analog and mixed-signal blocks, is causing many of the traditional DFT insertion methods to break.
In a typical, full-custom design flow, as illustrated in Figure 1 (see the September 2001 issue of Evaluation Engineering), the design is produced in Verilog or VHDL code. That code then is synthesized into a gate-level netlist and sent through the scan insertion process to improve its controllability and observability, which in turn facilitates automatic test pattern generation (ATPG) and fault simulation.
For all-digital designs of reasonable size, this flow works very well and is in wide use. It also works when all the blocks in the SOC are supplied as soft intellectual property (IP) that can be integrated into the overall full-chip synthesis scheme.
But what happens when we begin to construct custom SOCs from previously designed blocks, some of which may have been separately packaged devices in previous lives? The full chip synthesis model starts to break down at this point. This is especially true when third-party IP is licensed and included in a new SOC but has been encrypted to protect the supplier’s proprietary data or when analog and mixed-signal circuit blocks are included.
In the case of protected IP, it may be impossible to include the block of logic in an overall full chip synthesis design strategy since the original behavioral source code is unavailable to the SOC design team for integration into the total circuit design. Even if the source code is supplied, going through the synthesis, scan insertion, ATPG, and fault-simulation processes for a logic block that already has undergone those processes can waste time and money. And for analog and mixed-signal blocks, no commercial tool can synthesize them within the overall full chip environment.
SOC DFT Challenges
Looking at a typical SOC design composed of both internally developed and third-party-supplied IP, including mixed-signal blocks, reveals some interesting challenges (Figure 2, see the September 2001 issue of Evaluation Engineering). Hopefully, an IP developer already has inserted DFT, usually in the form of scan, into the so-called hard IP blocks or has implemented a built-in self-test (BIST) function.
In either case, there may be some sort of wrapper around the logic circuitry to provide access to its DFT features, and the block may have been supplied with the test vectors necessary to exercise it, but only in isolation.
If there are so-called firm IP blocks, typically supplied with Verilog or VHDL test benches, those blocks must be integrated into the overall design and the test benches used to create actual test vectors. It may be left up to the SOC integrator to design a test wrapper for a firm IP core that will work at the SOC level. The analog and mixed-signal blocks usually are supplied either as hard IP or as combinations of Verilog/VHDL and SPICE netlists.
Tying together all of these elements using the full chip synthesis model no longer is practical. A new strategy is necessary. Many large integrated device manufacturers already have put strict rules in place to ensure that the IP—internally developed or licensed—used in their new SOCs is either testable with traditional DFT techniques or BISTed and includes the necessary test-access mechanism information.
But just ensuring that the individual blocks are testable is not enough. There must be a way to access each block once it is part of the overall chip. In addition, all of the test features must fit the final SOC die size and package pin constraints, not to mention the potential timing impact and the power-consumption issues.
Often, this means unwrapping an individual IP core and rewrapping it with a test-access mechanism that better fits into the overall SOC design. This concept is illustrated in Figure 3 (see the September 2001 issue of Evaluation Engineering). For example, it may be necessary to concatenate the scan chains of a particular block to reduce the number of nodes required for test access. Multiplexers might be used so the SOC package pins reserved for test purposes can access multiple IP blocks.
Or it may be desirable, in terms of test time on the manufacturing floor, to run multiple scan patterns and BISTs in parallel. So the SOC integrator not only needs the ability to unwrap and wrap test access mechanisms, but also must be able to stitch them together into an overall SOC DFT strategy.
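A rough model shows why those stitching decisions matter. The Python sketch below, using entirely hypothetical core names, scan-chain lengths, and pattern counts, compares the scan-test time when cores share a single test-access port against giving each core its own scan pins and testing them in parallel.

    # Illustrative sketch only: core names, scan-chain lengths, and pattern
    # counts are assumptions, not figures from the article.
    cores = {
        "cpu": {"chain_length": 12_000, "patterns": 800},
        "dsp": {"chain_length": 8_000,  "patterns": 600},
        "usb": {"chain_length": 2_000,  "patterns": 300},
    }

    def scan_cycles(core):
        # Shifting dominates: roughly one clock per scan flip-flop per pattern.
        return core["patterns"] * (core["chain_length"] + 1)

    # One shared test-access port: cores are tested one after another.
    serial_cycles = sum(scan_cycles(c) for c in cores.values())

    # Dedicated scan pins per core: the slowest core sets the test time,
    # but more package pins must be reserved for test.
    parallel_cycles = max(scan_cycles(c) for c in cores.values())

    print(f"serial access:   {serial_cycles:,} scan clock cycles")
    print(f"parallel access: {parallel_cycles:,} scan clock cycles")

The serial arrangement saves package pins; the parallel arrangement saves tester time. Which matters more depends on the pin budget and the production volume.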
Performing these functions on already synthesized and scan-inserted blocks is very important. Without this capability, it may be necessary to go back through the full chip synthesis each time a block is reused. That is one reason why so much attention is being paid to DFT strategy issues. Lack of attention to these kinds of DFT details can create a situation where it literally takes longer to develop the test programs for SOCs than it does to design the SOC’s functionality itself.
What to Test Where
Another set of trade-offs rears its ugly head when the issue of fault coverage, and with it product quality, surfaces. Should an SOC integrator include functional testing, performance testing, parametric testing, or structural testing? With how many test insertions? At what levels: wafer test, initial package test, or burn-in and final test?
Each option must be evaluated and the test strategy and test-resource partitioning adjusted to achieve the highest test quality at the lowest cost. That typically means creating a matrix of the available test levels and the expected fault categories and figuring out what can best be tested where.
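Even a simple data structure can capture that matrix. The short Python sketch below is purely illustrative; the fault categories assigned to each test insertion are assumptions, not recommendations.

    # Hypothetical "what to test where" matrix: entries are illustrative only.
    plan = {
        "wafer test":      ["structural (stuck-at)", "gross parametric"],
        "initial package": ["structural (stuck-at)", "functional"],
        "burn-in":         ["structural (stuck-at)"],
        "final test":      ["parametric", "functional", "performance"],
    }

    for insertion, categories in plan.items():
        print(f"{insertion:16s} -> {', '.join(categories)}")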
This gets even more complicated when the design team has used the core assembly approach instead of the full chip synthesis approach. The options offered with the full chip synthesis approach simply are not available using the core assembly approach, so DFT and test-engineering professionals need even more interaction with design teams and IP suppliers to get things right as quickly as possible. They must consider scan-chain type differences such as level-sensitive scan design (LSSD) vs. muxed-D, multiple clock domains, different test-access mechanisms, varying scan chain depths, and various kinds of BIST implementations.
Test-strategy development and test-resource partitioning in this environment are much bigger tasks and have to be attacked much earlier in the process. Each IP element has to contain the necessary testability features, and those features must be reusable at the SOC level.
DFT/BIST Trade-Offs
Implementing some of the testing resources in the SOC design itself involves still another complex set of trade-offs. Some DFT and BIST implementations could have a negative impact on product performance if they interfere with critical, high-speed signal paths. Depending on the tools available, implementing DFT and BIST could increase the time it takes to complete the design. On the other hand, this extra time up-front can significantly reduce the test-program development time later on, actually shortening the overall time-to-market cycle.
Also, there are issues associated with die size, wafer yield, and device packaging. Adding gates, flip-flops, or BIST structures can increase die size, reducing the total number of devices per wafer that can be fabricated. Bigger die sometimes can lower yield per wafer.
But with today’s fabrication processes, particularly with 300-mm wafers, these issues usually are far less onerous than they were only a few years ago. They do, however, have to be considered in the context of tester cost, test development time, and device fault coverage in calculating the overall manufacturing and test costs for new designs.
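The die-size side of that calculation is easy to approximate. The sketch below combines a first-order gross-die estimate with a simple Poisson yield model; the wafer diameter is the 300 mm mentioned above, but the die area, the DFT area adder, and the defect density are assumed, illustrative values.

    import math

    def gross_die(wafer_diameter_mm, die_area_mm2):
        # First-order estimate; ignores edge loss and scribe lanes.
        return int(math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2)

    def poisson_yield(die_area_mm2, defects_per_mm2):
        # Simple Poisson defect model: Y = exp(-A * D0).
        return math.exp(-die_area_mm2 * defects_per_mm2)

    # Hypothetical 50-mm^2 die on a 300-mm wafer, plus a 5% DFT/BIST area adder.
    for area in (50.0, 52.5):
        dies = gross_die(300.0, area)
        good = int(dies * poisson_yield(area, 0.002))
        print(f"{area:5.1f} mm^2: ~{dies} gross die, ~{good} good die per wafer")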
Fitting Tests Into the Tester
Another element to be considered when developing DFT strategies for SOC designs is the cost of the tester and the use of that tester during the manufacturing process. Three principal factors drive the capital equipment cost for new ATE:
- The number of pins.
- The memory depth behind each pin to accommodate large test vector sets, particularly for scan vectors.
- Raw device speed requirements.
These requirements come on top of the basic tester functions needed for continuity and DC parametric testing, handler binning, and overall test-program flow. Unchecked, designs will continue to require more pins, more memory, and higher tester-pin driver/receiver speeds, as the rough calculation below illustrates.
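The numbers behind the vector-memory driver are easy to approximate. The short sketch below uses assumed, purely illustrative pattern and chain-length figures to show why scan vectors dominate the per-pin memory requirement.

    # Illustrative only: pattern count and chain length are assumptions.
    patterns = 5_000        # ATPG patterns for the full SOC
    longest_chain = 20_000  # flip-flops in the longest scan chain
    bits_per_cycle = 2      # stimulus bit plus expected-response bit per pin

    shift_cycles = patterns * (longest_chain + 1)
    bits_per_scan_pin = shift_cycles * bits_per_cycle

    print(f"~{shift_cycles:,} tester cycles")
    print(f"~{bits_per_scan_pin / 8 / 2**20:.0f} MB of vector memory behind each scan pin")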
DFT, and BIST for that matter, are the only viable solutions when it comes to controlling the cost of ATE. Too little DFT and BIST means high-cost ATE, long test-development times, and high manufacturing test costs. Too much DFT and BIST may make a design too big to fit in the target package or too expensive in terms of silicon area, particularly in high-volume consumer applications. As a result, trade-offs must be made early in the device architecture development and detailed design phases to arrive at the right strategy.
Then Add Mixed-Signal Blocks
When the design team begins to integrate analog circuitry into SOC designs in the form of DACs, ADCs, and other analog and RF circuits, the test problem becomes potentially unsolvable without the inclusion of DFT and BIST techniques. In addition to the tester costs, the team is faced with adding expensive analog instrumentation to the ATE and executing tests that can be very time-consuming on the manufacturing test floor.
For example, testing a 12-bit DAC or ADC using traditional instrumentation methods with expensive mixed-signal ATE can take upwards of 1 s. Consider a design that includes multiple ADCs and DACs, and you can see how the test cost can easily begin to exceed the device fabrication cost.
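A back-of-the-envelope calculation makes the point. The converter count and the ATE cost-per-second figure in the sketch below are assumptions chosen for illustration, not data from the article.

    # Assumed figures for illustration only.
    converters = 4                 # ADCs and DACs on a hypothetical SOC
    seconds_per_converter = 1.0    # ~1 s each with conventional instrumentation
    ate_cost_per_second = 0.04     # assumed loaded mixed-signal ATE cost, $/s

    converter_test_cost = converters * seconds_per_converter * ate_cost_per_second
    print(f"~${converter_test_cost:.2f} per device just for converter tests")

At a few cents per tester-second, several seconds of converter testing add a meaningful fraction of the selling price of a low-cost part before any digital test time is counted.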
Fortunately, analog and mixed-signal BIST techniques now are available to help solve both the ATE cost and manufacturing test time problems. Histogram-based analog BIST, for example, can test a 12-bit ADC or DAC at full device operating speeds in less than 20 ms. This technique also provides the test results in digital format via the IEEE 1149.1 test access port, reducing or eliminating the need for expensive analog instrumentation.
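The flavor of the histogram technique can be sketched off-chip. The Python model below assumes an ideal full-scale ramp stimulus to a 12-bit ADC and is an illustration of the general histogram method, not any particular vendor's BIST: hits per output code are counted, and DNL and INL fall out of the counts.

    import numpy as np

    BITS = 12
    CODES = 2 ** BITS

    # Hypothetical ADC behavior: an ideal ramp stimulus plus a little noise.
    rng = np.random.default_rng(0)
    ramp = np.linspace(0.0, 1.0, 200 * CODES)
    codes = np.clip((ramp * CODES + rng.normal(0, 0.3, ramp.size)).astype(int),
                    0, CODES - 1)

    # Histogram of output codes; each code ideally gets the same number of hits.
    hist = np.bincount(codes, minlength=CODES).astype(float)
    ideal_hits = codes.size / CODES

    dnl = hist[1:-1] / ideal_hits - 1.0   # drop the two end codes
    inl = np.cumsum(dnl)

    print(f"worst DNL = {np.max(np.abs(dnl)):.2f} LSB, "
          f"worst INL = {np.max(np.abs(inl)):.2f} LSB")

In an on-chip implementation, the counting is done in hardware at the converter's full operating speed, and only the results need to come out through the 1149.1 port.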
In Conclusion
Many organizations are looking for a silver-bullet, one-size-fits-all test strategy. That solution seldom exists because each design is unique. A good test strategy for a small chip that sells for less than $5 in very large volume will differ markedly from a strategy for a large, multimillion-gate SOC that pushes the technology performance envelope and costs several hundred dollars per device in medium volumes.
DFT strategies for SOC designs must take the available design and test-flow options into account. With the full chip synthesis design strategy, the emphasis often can be on scan design only. With the core assembly approach, more sophisticated methods are needed—methods that include the capability to deal with scan, BIST, and analog/mixed-signal circuit blocks.
It takes teamwork and cooperation among design, test, and the newly emerging DFT engineering disciplines to come up with the right strategy in each case. And it looks like that teamwork actually is beginning to happen in a big way.
About the Author
Jon Turino is the product marketing manager for BIST technology at Fluence Technology. Previously, he held senior marketing and management positions at Syntest Technologies, Integrated Measurement Systems, and Mentor Graphics. Mr. Turino founded the IEEE P1149 testability bus standardization committee and was instrumental in the adoption of JTAG boundary scan and the mixed-signal testability bus standards. Fluence Technology, 8700 SW Creekside Place, Beaverton, OR 97008, 503-672-8800.