Press releases can make it seem like EDA tool qualification for a particular IC process node is the “end game.” But in truth, qualification is just the first publicly visible step of ongoing collaborations between an EDA vendor and the foundry. That activity actually begins long before qualification, during process development for a new node, and continues long into the process node’s life cycle.
When it comes to physical verification, a new node doesn’t just mean an updated design rule manual (DRM). A node may require brand-new design rules to be written, and these rules may require new types of measurements and analysis, which places new requirements on the EDA tools themselves.
Once the foundry describes an initial set of design constraints required by the new process, it creates a regression test suite to validate that physical verification decks produce the expected results, both at initial release and over time. This test suite is developed by iteratively creating test chips, running the design rules against test chips, observing results, identifying and analyzing errors, updating the design rules (and the tool itself, if required), and so on, until the foundry and its development partners determine that the flow is acceptable for initial production.
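The validation loop above can be sketched in miniature: run the deck against a test chip, diff the reported violations against a golden reference, and flag any rule whose results have drifted. This is a hypothetical illustration; the rule names, counts, and the absence of a real tool invocation are all assumptions, not details from any actual foundry regression suite.

```python
# Toy model of a physical-verification regression check. In practice the
# "results" come from running the signoff tool on a test chip; here they
# are hard-coded dicts mapping rule name -> violation count (illustrative
# rule names, not from a real design rule manual).

def compare_results(golden, actual):
    """Return the set of rules whose violation counts differ from the
    golden reference, including rules present on only one side."""
    rules = set(golden) | set(actual)
    return {r for r in rules if golden.get(r, 0) != actual.get(r, 0)}

# Golden counts captured when the deck was last validated.
golden = {"M1.S.1": 0, "M1.W.2": 3, "V1.EN.1": 1}
# Counts from the current deck/tool revision under test.
actual = {"M1.S.1": 0, "M1.W.2": 5, "V1.EN.1": 1, "M2.S.1": 2}

drifted = compare_results(golden, actual)
print(sorted(drifted))  # rules needing analysis before the deck can ship
```

Each drifted rule then feeds the iterative step the article describes: analyze the mismatch, update the rule (or the tool), and re-run until the foundry accepts the flow.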
| Download this article in .PDF format |
This file type includes high resolution graphics and schematics when applicable.
During this initial rule development and validation, the foundry’s focus is on accuracy. The foundry has extensive expertise in writing accurate checks, but may not always be aware of the best ways to optimize rule decks for software runtime performance and reduced memory requirements. Typically, performance and memory optimization are addressed through collaborative work between the foundry and the EDA supplier.
EDA companies invest a significant amount of resources to analyze rule decks and make recommendations on techniques to get accurate results in the fastest possible time with the least memory. This collaborative work begins prior to the first V0.5/alpha deck releases, and it may continue well past the first (V1.x) production deck release.
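One family of optimizations the vendors apply is eliminating redundant work: many checks in a large deck re-derive the same intermediate layers, so computing each derived layer once and reusing it cuts both runtime and peak memory. The sketch below illustrates the idea with a cache; the layer expressions are stand-ins, not real Calibre/SVRF syntax.

```python
# Illustration of common-subexpression reuse in a rule deck: several
# checks request the same derived layer, but the derivation runs only
# once thanks to memoization. "derive" is a placeholder for a real
# geometric layer operation.
from functools import lru_cache

calls = 0  # counts how many times real derivation work happens

@lru_cache(maxsize=None)
def derive(expr):
    """Pretend to compute a derived layer from a layer expression."""
    global calls
    calls += 1
    return f"layer({expr})"

# Four checks, but only two distinct derived layers among them.
requests = ["AND M1 M2", "AND M1 M2", "OR M1 V1", "AND M1 M2"]
layers = [derive(r) for r in requests]
print(calls)  # derivation ran for each distinct expression only
```

Without the cache, the derivation would run four times; with it, only twice. At production-deck scale, this kind of restructuring is one contributor to the runtime and memory reductions described below.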
For example, in May of this year, Mentor announced the results of work with TSMC on its 20-nm physical verification kit, which reduced signoff runtimes by a factor of three and memory requirements by 60%, compared to the initial design kits released last year. Such efforts provide a significant benefit for customers: turnaround times remain consistent with their expectations even though the number and complexity of checks have grown considerably. Minimizing memory demands also reduces the scale of the compute platform upgrades customers must make to support the new node.
In addition to these immediate benefits, there is a positive learning curve effect associated with such collaborative efforts. We’re finding that our joint optimization work with TSMC on the 16-nm node is progressing even faster than 20 nm, even accounting for the added complexity that 16 nm requires.
Besides the efforts aimed at performance improvement, there are ongoing changes to the DRM and coding changes for the signoff tools throughout the lifespan of a process node. These changes ripple through the design enablement ecosystem each time they occur. Moreover, the changes do not simply involve design rule specifications, but also how the rules are implemented, which can involve intricate operational behaviors that are subtly different for each physical verification platform.
Each foundry tends to have a primary physical verification tool that it uses to create the original rule decks and to validate updates. For every other tool, the EDA vendor or the foundry must then make corresponding changes and run the regression tests to confirm that the tool's results match those of the foundry's reference tool. Early adopters of a new process may not be able to tolerate the delay this ripple effect introduces, so many customers opt to use the same verification tool as their foundry.
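That cross-tool correlation step can be sketched as a per-rule diff of reported violations: any violation seen by only one of the two tools must be reconciled before the ported deck is considered matched. The rule names and (x, y) violation locations below are invented for illustration.

```python
# Hedged sketch of correlating a ported deck against the foundry's
# reference tool. Results are modeled as rule -> list of violation
# locations; a mismatch on either side flags the rule for analysis.

def correlate(reference, ported):
    """Per rule, return the violations reported by only one tool."""
    report = {}
    for rule in set(reference) | set(ported):
        ref_hits = set(reference.get(rule, []))
        new_hits = set(ported.get(rule, []))
        diff = ref_hits ^ new_hits  # symmetric difference: one-sided hits
        if diff:
            report[rule] = sorted(diff)
    return report

reference = {"M1.S.1": [(10, 20), (40, 5)], "V1.EN.1": []}
ported = {"M1.S.1": [(10, 20)], "V1.EN.1": [(7, 7)]}

report = correlate(reference, ported)
print(report)  # rules where the two tools disagree
```

An empty report is the acceptance criterion: the ported deck reproduces the reference tool's results on the regression suite.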
Tool optimization, as opposed to qualification, is not a single event and does not have a finish line. It’s a continuous process, like everything else in the innovative electronics industry. In fact, there will always be multiple process nodes in some stage of qualification and optimization at any point in time, as new issues are discovered or as foundries and EDA vendors develop new techniques to improve performance, reliability, and yield.
The goal of every EDA vendor is to provide the best, most effective technology possible at any point in time for every process node its customers use. Achieving that goal requires constant experimentation, innovation, and most importantly collaboration between the foundries, EDA vendors, and design customers. It’s a process that never ends!
Michael White is the product marketing director for Mentor Graphics’ Calibre Physical Verification products. Prior to Mentor Graphics, he held various product marketing, strategic marketing, and program management roles for Applied Materials, Etec Systems, and the Lockheed Skunk Works. He received an MS in engineering management from the University of Southern California and a BS in system engineering from Harvey Mudd College.