
It’s Déjà Vu All Over Again

Yogi Berra almost certainly wasn’t talking about memory testing when he made his famous Bronx comment. However, a comparison of semiconductor test industry responses to questions asked in 1995 and again today confirms that the major drivers really haven’t changed that much. They’re still talking about speed, cost of test, and ATE throughput and utilization.

We asked industry leaders several questions, but the first two elicited the most important replies:

To what extent are prevailing technical trends, such as higher operating speed, increased memory capacity, greater variety of architectures, and other memory-device innovations, affecting the outlook for obsolescence of existing memory test equipment? What will be the likely configuration of future memory ATE?

In 1995, Keith Lee, then Advantest America’s marketing manager, said, “Two major technical factors of memory devices are changing rapidly. One is accelerating operating frequencies,…and the other is memory density.”

At that time, the top speeds were in the 100-MHz to 200-MHz range, and 32-bit wide static random access memories (SRAMs) were being used. Test time had already become an issue, and Mr. Lee commented that his company “has progressed from 32 to 64 devices in parallel and will soon be introducing a system that can handle 128 devices in parallel.”

When asked the same question this year, Robert Graybeal, Schlumberger’s product marketing manager for Rambus and future memory technologies, replied, “Memory types are diverging into at least two markets from an ATE point of view. On one hand, there’s the niche market of ever-soaring high-speed memories, such as synchronous SRAMs (SSRAMs), currently running higher than 1 Gb/s. They’re expensive and have limited applications in specialized products.

“On the other hand, mainstream 200-MHz memories are inexpensive and manufactured in huge volumes and can be tested on fairly ordinary testers that do not require advanced technologies,” he continued. “While these lower-end dynamic random access memories (DRAMs) remain especially attractive for manufacturers to build, their continued life expectancy is definitely limited.”

Both Mr. Lee and Mr. Graybeal referred to ATE that tests singulated, packaged devices precisely positioned by a handler and connected to the tester by a contactor—the kind of technology used by most ATE manufacturers today. As memory devices have grown to multimegabit sizes, their test times have extended to as much as 10 minutes. Testing many devices in parallel has been used for several years to reduce the cost of test per device.
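
The economics are simple division: the tester’s time cost is amortized across every site tested in one insertion. As a rough sketch, with an assumed 10-minute test and an assumed $120/hour tester operating cost (illustrative figures, not data from the article):

```python
# Back-of-the-envelope parallel-test economics.
# The test time and hourly tester cost are illustrative assumptions.

def cost_per_device(test_minutes, sites, tester_cost_per_hour):
    """Tester cost allocated to each device when `sites` devices are
    tested simultaneously for `test_minutes` per insertion."""
    insertions_per_hour = 60.0 / test_minutes
    devices_per_hour = insertions_per_hour * sites
    return tester_cost_per_hour / devices_per_hour

# A 10-minute multimegabit DRAM test on a $120/hour tester:
for sites in (1, 32, 64, 128):
    print(f"{sites:4d} sites: ${cost_per_device(10, sites, 120.0):.3f}/device")
```

Doubling the site count halves the allocated tester cost per device, which is why the industry keeps pushing from ×32 to ×64 to ×128.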

During the last few years, ATE manufacturers have realized that packaged-device parallel testing by itself will not provide the cost reduction required by what is increasingly a consumer-driven demand for larger and cheaper memory devices. This year, Mr. Graybeal commented, “Currently, a great deal of research and experimenting is underway to prove various strategies for reducing cost of test, including at-speed wafer test, strip packaging, and wafer-level burn-in. At a minimum, we still see this three years away, but the architectural plans for all future memory test equipment will consider additional capabilities for wafer-level test within the same high-speed framework.”

SSRAMs may not yet be the subject of full-speed wafer testing, but DRAMs are. Mark Brandemuehl, marketing director at FormFactor, was referring to at-speed direct wafer probing in a recent EE article:

“The DRAM industry is leading advances in parallelism and test cost reduction. Since 1998, 32-DUT in-parallel testing has become the standard in the DRAM industry.…FormFactor expects the DRAM industry to continue pushing the limits of parallelism with 64-DUT in-parallel testing becoming the standard by 2003 and 128-DUT in-parallel following closely behind.”1

In addition to at-speed wafer testing, cost-reducing initiatives include test during burn-in and strip processing. Because burn-in is required to ensure reliability, it makes sense to perform as many lower-speed tests as possible at the same time, ideally prior to singulation. This approach precipitates failures earlier in the production cycle, which lowers cost and reduces the time needed for final device test on expensive full-speed functional testers. Strip processing improves test efficiency because a number of packaged devices are handled simultaneously rather than one at a time, as has been the practice.

“ATE will definitely begin to see introductions of test during burn-in, full wafer test, and strip processing,” said Gary Fleeman, memory product manager at Advantest America. “These technologies are developmental, and we are not seeing significant business volumes today. But,” he continued, “I’m certain that some segment or combination of these will come to market and offer better solutions than today’s singulated solutions.”

Achieving At-Speed Performance

Regardless of the production stage at which the testing is done, the testers have had to change to accommodate faster devices. “Many existing testers were not designed to handle speeds greater than 100 MHz,” said Michael Schoknecht, an applications engineer at Micro Control. “Others that theoretically can produce signals at the higher speeds cannot maintain their signal quality and can cause false failures during testing.

“A change coming in memory testers will be the switch from algorithmic pattern generators (APGs) to vector-based systems,” he explained. “The APGs simply are not fast enough to keep pace with the high operating speeds of the memories. Vector-based systems offer the flexibility that is needed to test the many varieties of memories because they can be set up to apply test signals to any set of pins as required.”
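
To make the distinction concrete, consider what an APG computes on the fly: an address/data sequence defined by an algorithm such as the classic March C- memory test. The sketch below is purely conceptual, not any vendor’s implementation; a vector-based system would instead store an equivalent sequence as explicit per-pin vectors, which is where the pin-assignment flexibility Mr. Schoknecht describes comes from.

```python
# Illustrative sketch of what an algorithmic pattern generator (APG)
# computes on the fly: the classic March C- memory test sequence.
# This is a conceptual example, not any tester vendor's implementation.

def march_c_minus(n_addresses):
    """Yield (operation, address, data_bit) tuples for March C-:
    up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); down(r0)."""
    up = range(n_addresses)
    down = range(n_addresses - 1, -1, -1)
    for a in up:            # write a background of 0s
        yield ("w", a, 0)
    for a in up:            # ascending: read 0, write 1
        yield ("r", a, 0)
        yield ("w", a, 1)
    for a in up:            # ascending: read 1, write 0
        yield ("r", a, 1)
        yield ("w", a, 0)
    for a in down:          # descending: read 0, write 1
        yield ("r", a, 0)
        yield ("w", a, 1)
    for a in down:          # descending: read 1, write 0
        yield ("r", a, 1)
        yield ("w", a, 0)
    for a in down:          # final descending read of the 0 background
        yield ("r", a, 0)

# 10N operations for an N-address array; the first few for N = 2:
print(list(march_c_minus(2))[:6])
```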

Mr. Schoknecht described the relationship among several of the test concepts. “Because memories are so much faster, at-speed testers are becoming much more expensive. By using test during burn-in, many of the long, pattern-specific tests can be offloaded, improving the high-speed tester throughput.”

Six or seven minutes of an overall 10-minute DRAM test can be offloaded to a slower tester integrated into the burn-in oven, remarked Aehr Test Systems’ Vice President of Marketing Carl Buck. “You can’t eliminate the need for the high-speed tester, but you can perform a large percentage of the testing in a slower system. So as an alternative to buying a roomful of very expensive 800-MHz testers, many companies find the parallel test approach provides a good return on investment.”
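
The arithmetic behind that return on investment is straightforward. A quick sketch using the figures quoted above (the 10-tester fleet size is an assumption for illustration):

```python
# Throughput gain from offloading slow pattern tests into burn-in.
# Uses the 10-minute/7-minute figures quoted above; the fleet size
# of ten testers is an illustrative assumption.

full_test = 10.0                 # minutes per device, no offload
offloaded = 7.0                  # minutes absorbed by the burn-in system
remaining = full_test - offloaded

speedup = full_test / remaining  # each high-speed tester finishes sooner
print(f"High-speed tester throughput gain: {speedup:.1f}x")

testers_needed = 10 / speedup    # a 10-tester line shrinks accordingly,
print(f"Expensive 800-MHz testers needed: {testers_needed:.0f} of 10")
# provided the burn-in ovens run the offloaded 7 minutes in parallel.
```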

CST takes three approaches to the problem of increased speed in double data rate (DDR) devices. According to company president Cecil Ho, you can test the devices twice, once on each clock edge; use a special ASIC to generate the signals required to run at full (double) speed; or test complete memory modules in their motherboard operating environment. Each method is applicable to at least one of the company’s low-cost memory-module test product lines.
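
The first of those approaches amounts to de-interleaving the double-rate stream. A minimal sketch with made-up data values, assuming data beats alternate between rising and falling clock edges:

```python
# Splitting a DDR data burst into two single-data-rate streams so a
# slower tester can check each clock edge separately. Data values are
# made up for illustration.

ddr_burst = [0xA5, 0x3C, 0x7E, 0x12, 0xFF, 0x00, 0x55, 0xAA]

rising_edge = ddr_burst[0::2]    # beats latched on the rising edge
falling_edge = ddr_burst[1::2]   # beats latched on the falling edge

# Each half-rate stream is now compared against expected data at the
# tester's native single-data-rate clock.
print("rising: ", [hex(b) for b in rising_edge])
print("falling:", [hex(b) for b in falling_edge])
```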

MOSAID also has developed PC-based high-speed solutions for SDRAM and DDR-SDRAM module test. Glen Evans, general manager of the MOSAID systems division, said that results from the new Acuid Tester correlated closely with those of mainframe ATE. The tester uses a series of interchangeable test headers to provide the specific pin electronics required by the various types of memory modules.

Accuracy

Achieving high accuracy and high throughput simultaneously is a major problem. In a March 1995 press release, Teradyne announced the Model J995 Memory Test System for multiple-site wafer testing with up to 144 I/O channels in each of two test heads and Tester-Per-Site™ architecture. The tester had a base rate of 60 MHz, multiplexed to 120 MHz, and an overall timing accuracy of ±500 ps.

Compared to the tester’s minimum pulse width of 2.5 ns, an uncertainty of 500 ps may not have been a problem. After all, the fastest device speed was 120 MHz, or about 8 ns per cycle, so the uncertainty was only 6% of a clock period.

Today’s ±50-ps uncertainty in testers capable of 1-GHz rates and beyond remains about 5% of a clock cycle. Achieving very low jitter at high speed is not a trivial design problem. Many complementary techniques have been applied, including SiGe circuitry, system-on-a-chip (SOC) levels of integration, careful transmission-line matching and termination, and water cooling.
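
Both ratios check out with a quick calculation:

```python
# Checking the accuracy-to-cycle ratios quoted above.

def uncertainty_fraction(jitter_ps, rate_mhz):
    period_ps = 1e6 / rate_mhz        # clock period in picoseconds
    return jitter_ps / period_ps

print(f"1995: {uncertainty_fraction(500, 120):.0%} of a 120-MHz cycle")   # 6%
print(f"2000: {uncertainty_fraction(50, 1000):.0%} of a 1-GHz cycle")     # 5%
```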

According to a 1999 International Test Conference (ITC) paper that relates some of Lucent’s ATE experience, “To achieve subnanosecond measurement accuracy, the test engineer must compensate for errors which cannot be accounted for in tester calibration. This is a complex task that requires the test engineer and the device designer to simulate the device in the test environment. Most of the errors [we considered] can only be corrected by the use of a simulation model.

“These errors result from interaction between the device under test and the tester and the differences in tester termination. Consequently, simulation is absolutely necessary to resolve the timing errors. By using simulation to compensate for the major errors…the authors successfully implemented and correlated a model to significantly improve accuracy. Lucent Technologies has used this simplified model since July 1998.”2

The problem of achieving high accuracy is exacerbated by the slight inequalities distributed among the multiple sites used in ×32, ×64, or even ×128 parallel testing. Tester accuracy is important because it can adversely affect yield. As the authors of another ITC paper said, “The less accurate a tester is, the harder it is to tell whether a device under test is capable of running at speed or not.”

They developed a mathematical model that related tester uncertainty to yield loss, using as an example an RDRAM part with both 800- and 600-Mb/s speed bins. They concluded that, “For the forecasted Rambus average selling prices,…every 1 ps of better tester accuracy means an added $1.1M to the bottom line of a 100-million parts volume.”3
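
The mechanism is guardbanding: a part goes into the fast bin only if it meets the bin’s timing with margin at least equal to the tester’s uncertainty, so every picosecond of inaccuracy downgrades real fast parts into the cheaper bin. The sketch below is a much-simplified stand-in for the paper’s model; the device-speed distribution and bin prices are assumptions, not figures from the reference.

```python
# Much-simplified illustration of the tester-accuracy-to-revenue link.
# The speed distribution and bin prices below are assumed for
# illustration; see reference 3 for the real model.
import random

def binned_revenue(guardband_ps, n=100_000, seed=1):
    """Revenue when parts make the fast bin only if their true cycle
    time beats the bin limit by at least the tester uncertainty."""
    random.seed(seed)
    fast_limit_ps = 1250.0                  # 800-Mb/s bin: 1.25-ns cycle
    price_fast, price_slow = 20.0, 12.0     # assumed bin prices, dollars
    revenue = 0.0
    for _ in range(n):
        true_cycle = random.gauss(1225.0, 25.0)   # assumed device spread
        fast = true_cycle <= fast_limit_ps - guardband_ps
        revenue += price_fast if fast else price_slow
    return revenue

for gb in (25, 50, 75):
    print(f"{gb}-ps guardband: ${binned_revenue(gb):,.0f} per 100,000 parts")
```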

Life Beyond SRAMs/DRAMs

In 1995, Advantest’s Mr. Lee foretold the current success of flash memory: “Flash-memory demand has been explosive, although relatively small in numbers compared to the DRAM market. But, the future will realize the expansion of applications for flash memory as manufacturers look to replace EPROM devices and users’ needs for mobile computer-based products escalate.”

Today’s flash testers provide improved functionality on several levels. Advantest’s Mr. Fleeman described the benefits of “facilitating engineering on the exact platform as production, often in the same fixturing right down to the socketing. For exhaustive retention testing in a lab environment, a more dedicated single- or dual-site nonproduction test solution might be better. But,” he concluded, “there is no better correlation and yield-improvement method than characterization in a production equipment set.”

Tester throughput has improved as a direct result of the processor-per-device architectures adopted by several vendors. Because each device can be tested at its own fastest rate, tests are completed as quickly as possible. In earlier, centrally controlled testers, it was necessary to insert wait states in test procedures to allow for the possibility of a slow device.
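
The saving comes from eliminating worst-case padding. A sketch with assumed step timings:

```python
# Why processor-per-site testers finish sooner: a centrally controlled
# tester pads every step to the worst-case device timing, while a
# per-site processor lets each device run at its actual speed.
# Step timings below are illustrative assumptions, in seconds.

steps = [
    (30, 45),      # (typical device time, worst case budgeted centrally)
    (120, 180),
    (60, 90),
    (90, 140),
]

central = sum(worst for _typ, worst in steps)   # wait states included
per_site = sum(typ for typ, _worst in steps)    # device's own pace

print(f"centrally controlled: {central} s per device")
print(f"processor-per-site:   {per_site} s for a typical device")
print(f"saved: {central - per_site} s ({1 - per_site / central:.0%})")
```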

To Answer the Obsolescence Question

Obsolescence of existing test equipment is being caused by the requirements of new memory devices. However, some vendors with extra ATE capacity are using it to offload the very high-speed functional testers they have had to purchase. It is practical for them to have separate burn-in, low-speed test, and final high-speed test phases because they already have the low-speed testers. Of course, this approach does incur an extra device insertion, so yields may be down for purely mechanical-damage reasons.

In addition to speed, there comes a point at which the floor space, extra operators, and higher maintenance that older machines require are no longer economical, even though the testers are paid for. Don’t forget the deterioration of an older tester’s utilization, caused not just by its lower reliability but also by more frequent calibration. And while you’re using older equipment, others are investing in the latest at-speed wafer testers, strip processing, or wafer-stage burn-in. For how much longer will you be competitive?

Speed, cost of test, and ATE throughput and utilization remain the industry drivers. There is no substitute for a new, fast tester, but your legacy equipment can help you to use it more efficiently, at least for a while.

References

  1. Brandemuehl, M., “Parallel Test Reduces Costs at Wafer Probe,” EE-Evaluation Engineering, September 2000, pp. 16-24.
  2. Warwick, T., et al., “An Accurate Simulation Model of the ATE Test Environment for Very High Speed Devices,” 1999 Proceedings of the ITC, pp. 524-531.
  3. Dalal, W. and Miao, S., “The Value of Tester Accuracy,” 1999 Proceedings of the ITC, pp. 518-523.

Published by EE-Evaluation Engineering
All contents © 2000 Nelson Publishing Inc.
No reprint, distribution, or reuse in any medium is permitted
without the express written consent of the publisher.

October 2000
