Verification And Software Dominate EDA’s Future

Jan. 8, 2010
With the cost of design skyrocketing, EDA must hasten its transition from a hardware-centric industry to a software-centric one. Meanwhile, look for virtual platforms and transaction-level modeling to become mainstream technologies in verification.

Many challenges face system designers as we head into 2010. But some of the most difficult are equally challenging for the EDA vendors who must provide the tools and methodologies to deal with them.

The burgeoning complexity of designs at process nodes of 45 nm and below portends a serious crisis in at least two significant areas: verification and software. These issues, and many closely related ones, including physical verification, keep EDA CTOs awake long into the night. Some pieces of the puzzle are already in place while others are still brewing.

In general, designers are ramping up for doing more design work at higher levels of abstraction. “This is driven not only by rising complexity and cost, but also by the advent of technologies like 3D chip stacks, which rule out iteration in the back end,” says Mike Gianfagna, vice president of marketing at Atrenta.

So, says Gianfagna, power management, design verification, design for test, and timing closure will all be “close to done” before handoff to synthesis and place and route. This will make the traditional back end of IC design a more predictable, routine process, which in turn will hasten the back-end flow’s maturity, commoditization, and consolidation.

The overall cost of design is rising rapidly (Fig. 1a), and projections call for the cost of software design and debug to skyrocket. “As we move into 32 nm and below, which we’ll start to see in the next year and a half, the cost of design is a huge issue,” says Tom Borgstrom, director of solutions marketing at Synopsys. “Some projections show the cost for large designs on those nodes will approach $100 million. That affects fundamental economic decisions.”

SOFTWARE DESIGN A BURDEN

For an example of where this kind of complexity is leading, look no further than Intel Labs’ December announcement of a 48-core single-chip “cloud computer.” Fabricated on a 45-nm, high-k metal-gate process, this test chip consumes as little as 25 W in power-conservation mode and 125 W in full-performance mode, Intel says. While Intel plans a commercial release of six- and eight-core processors in 2010, the cloud-computer test chip’s architecture could potentially scale up to 100 cores on a single die.

Such complexity places enormous burdens on the software-development side of the system house (Fig. 1b). Thus, EDA vendors have embedded software on their radar screens. “One of the biggest trends rising to the top is software,” says John Bruggeman, chief marketing officer at Cadence Design Systems.

Bruggeman sees Intel’s acquisition last summer of Wind River Systems as a “massive signal to the marketplace that the total value proposition that a chip company delivers to its end customers must include software.” Bruggeman also noted Cavium’s recent acquisition of embedded-Linux house MontaVista Software.

What this means to Bruggeman, Cadence, and the rest of the EDA community is that the problem set for designers is changing rapidly and dramatically. “It points toward the concept of hardware design for software,” says Bruggeman. “We have all talked about design for manufacturing and yield. You will see in years to come that those initiatives will pale relative to hardware design for software.”

Central to this concept is the ability to perform simultaneous design and verification of hardware and software. “It’s no longer enough to have an Eclipse-based toolset to debug software,” says Bruggeman, “because the problem is usually in the integration of hardware and software and not necessarily in the software itself.”

Speaking from his prior experience as chief marketing officer at Wind River Systems, Bruggeman sees a huge opportunity in this space for the EDA industry. “At Wind River, I knew this problem intimately and I knew that the solution set would come not from embedded software but from EDA. But not from EDA as we know it now, but EDA as it will be.”

CO-DESIGN GOING MAINSTREAM

These days, it’s common to see system-on-a-chip (SoC) development teams in which software designers outnumber hardware designers by a large margin. Software developers will increasingly turn to ways of getting an early start on their work.

Two approaches to watch in 2010 are virtual platforms (VPs) based on transaction-level models (TLMs) and FPGA-based rapid prototyping. VPs are, for the most part, SystemC TLM 2.0-based platforms that stitch together TLMs of major intellectual property (IP) blocks, whether from IP vendors or homegrown. With VPs, software teams can get busy up to 12 months before first silicon.
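For readers who haven’t seen one, the listing below is a minimal sketch of what “stitching together” TLM 2.0 blocks looks like in SystemC. It assumes an OSCI SystemC installation with the TLM 2.0 library; the CpuStub and DummyTimer modules, the address, and the register behavior are purely illustrative and not drawn from any vendor’s IP.

#include <systemc.h>
#include <tlm.h>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>
#include <cstring>
#include <iostream>

// A placeholder memory-mapped peripheral modeled at the transaction level.
struct DummyTimer : sc_module {
    tlm_utils::simple_target_socket<DummyTimer> socket;

    SC_CTOR(DummyTimer) : socket("socket") {
        socket.register_b_transport(this, &DummyTimer::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_time& delay) {
        unsigned int ticks = 0xCAFE;                       // canned register value
        if (trans.is_read() && trans.get_data_length() == 4)
            std::memcpy(trans.get_data_ptr(), &ticks, 4);
        delay += sc_time(10, SC_NS);                       // crude timing annotation
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

// A stand-in for a processor model issuing loosely timed reads.
struct CpuStub : sc_module {
    tlm_utils::simple_initiator_socket<CpuStub> socket;

    SC_CTOR(CpuStub) : socket("socket") { SC_THREAD(run); }

    void run() {
        unsigned int data = 0;
        sc_time delay = SC_ZERO_TIME;
        tlm::tlm_generic_payload trans;
        trans.set_command(tlm::TLM_READ_COMMAND);
        trans.set_address(0x1000);                         // illustrative address
        trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
        trans.set_data_length(4);
        socket->b_transport(trans, delay);                 // blocking TLM 2.0 call
        std::cout << "read 0x" << std::hex << data << " after " << delay << std::endl;
    }
};

int sc_main(int, char*[]) {
    CpuStub cpu("cpu");
    DummyTimer timer("timer");
    cpu.socket.bind(timer.socket);                         // "stitching" the two blocks
    sc_start();
    return 0;
}

The point is that once blocks expose standard b_transport interfaces, a platform is assembled by binding sockets rather than by wiring pins, which is what makes early software bring-up practical.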

“What’s been slowing adoption of virtual platforms is scant availability of TLM 2.0 IP,” says Borgstrom of Synopsys. But the TLM 2.0 standard is gradually gaining wider support.

Prototyping with TLM 2.0-based VPs is much faster than simulating at the register-transfer level (RTL), but the speed comes at a price: you lose some of the RTL’s fidelity when you move up in abstraction to a TLM. One way to recover that fidelity is with FPGA-based rapid prototyping.

According to Borgstrom, more than 70% of all design teams are using FPGA-based rapid prototypes at some point in their development process. “You can boot a Linux or mobile platform OS (operating system) in seconds and connect to real-world I/O like SATA, PCIe (PCI Express), or USB,” says Borgstrom.

Further, the Stratix IV FPGAs from Altera and Virtex-6 FPGAs from Xilinx should spawn commercial prototyping systems with capacities of up to 100 Mgates and runtime performance of 10 MHz to 50 MHz. Moreover, many of today’s FPGA-based systems allow mixed simulation that combines a virtual platform with the hardware-based implementation through use of the SCE-MI transaction interface standard.

Another trend we’ll see more of is hybridization of the TLM 2.0-based VP and the FPGA-based rapid prototype into what Synopsys calls a “system prototype.” New designs will continue to use legacy components for which there are no readily available TLMs. In such cases, the best approach is to combine TLM 2.0 models with FPGA-based prototypes of the legacy blocks and bring them together into one simulation environment (Fig. 2).
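Conceptually, the glue in such a system prototype is a router that decides, per transaction, whether a block is served by a host-side TLM model or by legacy RTL running in the FPGA. The sketch below illustrates that split; the Transaction struct, the address map, and the send_to_fpga() stub are hypothetical and are not the SCE-MI API, which defines its own interfaces.

#include <cstdint>
#include <cstdio>

// Hypothetical transaction record; real flows use TLM generic payloads
// or SCE-MI messages instead.
struct Transaction {
    uint64_t addr;
    uint32_t data;
    bool     is_read;
};

// Stand-in for a TLM model of a new block, simulated on the host.
uint32_t tlm_model_access(const Transaction& t) {
    return t.is_read ? 0xABCD0000u : 0u;       // canned response
}

// Stand-in for the transactor that would ship the transaction to the
// FPGA prototype holding the legacy RTL.
uint32_t send_to_fpga(const Transaction& t) {
    std::printf("-> FPGA: %s 0x%llx\n", t.is_read ? "read" : "write",
                (unsigned long long)t.addr);
    return 0;                                  // response would come back from hardware
}

// The "system prototype" glue: new IP stays virtual, legacy IP runs in the FPGA.
uint32_t route(const Transaction& t) {
    const uint64_t kLegacyBase = 0x40000000ull;   // hypothetical address split
    return (t.addr >= kLegacyBase) ? send_to_fpga(t) : tlm_model_access(t);
}

int main() {
    uint32_t a = route({0x00001000ull, 0, true});   // hits the TLM model
    uint32_t b = route({0x40002000ull, 0, true});   // hits the FPGA prototype
    std::printf("a=0x%x b=0x%x\n", a, b);
    return 0;
}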

REUSING MORE THAN JUST IP

SoC complexity demands the adoption of reuse methodologies. On the design side, IP reuse is long established. In the verification realm, adoption of reuse lags. But according to John Lenyo, GM of Mentor Graphics’ Design Verification Technology Division, there’s growing interest among designers in creating a more modular, reusable verification environment.

One way to do that is through adoption of the Open Verification Methodology (OVM), which was designed from the ground up to enable reuse of SystemVerilog testbenches. A reuse methodology such as OVM, which was jointly developed by Mentor Graphics and Cadence, enables SystemVerilog testbenches to be ported from design to design, block to block, and chip to chip (Fig. 3).
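OVM itself is a SystemVerilog class library, so any C++ rendering is only an analogy, but the principle that makes its testbenches portable can be sketched briefly: stimulus and checking code is written against an abstract interface, and only a thin adapter changes when the environment moves from block to chip, or from simulator to emulator. The class names below are illustrative, not OVM’s.

#include <cstdio>
#include <memory>
#include <utility>
#include <vector>

// A block-agnostic bus transaction, analogous to an OVM sequence item.
struct BusTxn { unsigned addr; unsigned data; };

// Abstract pin-level adapter: the only piece that changes per block,
// per chip, or when the testbench is retargeted to an emulator.
struct BusAdapter {
    virtual void drive(const BusTxn& t) = 0;
    virtual ~BusAdapter() {}
};

// Reusable driver, analogous to an OVM driver/agent: it never touches pins.
struct Driver {
    explicit Driver(std::unique_ptr<BusAdapter> a) : adapter(std::move(a)) {}
    void run(const std::vector<BusTxn>& seq) {
        for (const auto& t : seq) adapter->drive(t);
    }
    std::unique_ptr<BusAdapter> adapter;
};

// Two interchangeable back ends: a simulation adapter and an emulator stub.
struct SimAdapter : BusAdapter {
    void drive(const BusTxn& t) override { std::printf("sim write 0x%x\n", t.addr); }
};
struct EmuAdapter : BusAdapter {
    void drive(const BusTxn& t) override { std::printf("emu write 0x%x\n", t.addr); }
};

int main() {
    std::vector<BusTxn> seq = {{0x10, 1}, {0x14, 2}};
    Driver sim_tb(std::unique_ptr<BusAdapter>(new SimAdapter()));
    Driver emu_tb(std::unique_ptr<BusAdapter>(new EmuAdapter()));
    sim_tb.run(seq);   // same sequence, simulation back end
    emu_tb.run(seq);   // same sequence, emulator back end
    return 0;
}

Swapping the adapter rather than rewriting the tests is, loosely, the same move that lets a testbench be retargeted from simulation to emulation.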

Echoing the trend cited by Tom Borgstrom of Synopsys regarding rapid prototyping, Lenyo sees more use of hardware acceleration in the simulation process. “We can take a SystemVerilog testbench written in OVM and move that onto our Veloce emulator without changing it,” Lenyo says.

The ability to run OVM testbenches on an emulator opens the way to using emulation much earlier in the design cycle. “We’re starting to see people use the emulator as a very fast simulator to see, for example, streaming video. This is something that cannot be done in simulation,” says Lenyo.

Another trend in verification is growing concern about closure and about managing the mountains of data that simulation generates. The answer lies in EDA vendors’ ability to present that data in a usable form.

Hardware verification languages (HVLs) that gained popularity about 10 years ago were intended to overcome the difficulty of hand-writing test vectors by generating them through constrained-random techniques. But the result was a great deal of redundant stimulus, which made simulation inefficient.

In 2010, Mentor Graphics plans to announce a technology that aims to eliminate the simulation of redundant vectors. According to Lenyo, the technology uses graph-based algorithms to generate vectors that neither repeat nor overlap. The result will be a verification environment that reaches closure much faster.
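Mentor has not published the details of that technology, so the toy comparison below is only meant to show why nonrepeating generation pays off: constrained-random stimulus revisits the same points many times before it covers a space, while a systematic walk covers each point exactly once. The 16-point stimulus space is invented for illustration.

#include <cstdio>
#include <cstdlib>
#include <set>

int main() {
    const int kOpcodes = 4, kModes = 4;            // toy 16-point stimulus space
    const int kTotal = kOpcodes * kModes;

    // Constrained-random style: draw until every (opcode, mode) pair is hit.
    std::srand(1);
    std::set<int> covered;
    int random_draws = 0;
    while ((int)covered.size() < kTotal) {
        int op = std::rand() % kOpcodes, mode = std::rand() % kModes;
        covered.insert(op * kModes + mode);
        ++random_draws;                            // repeats still cost a simulation run
    }

    // Nonrepeating enumeration: each pair is generated exactly once.
    int systematic_draws = 0;
    for (int op = 0; op < kOpcodes; ++op)
        for (int mode = 0; mode < kModes; ++mode)
            ++systematic_draws;                    // would drive the DUT here

    std::printf("random: %d vectors, systematic: %d vectors for full coverage\n",
                random_draws, systematic_draws);
    return 0;
}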

Changes are also brewing in the logic-synthesis portion of the design flow. According to Sanjiv Kaul, vice chairman and acting VP of marketing at Oasys Design Systems, RTL synthesis as we know it is not scaling to the demands of deep-submicron IC implementation.

EDA vendors such as Oasys Design Systems are pioneering next-generation synthesis tools that meld synthesis and physical design. The problem is the size and complexity of designs. At 65 nm, most SoC designs entail hundreds of blocks. Dividing up constraints among all those blocks is a nightmare. “Logic synthesis is less meaningful because it doesn’t incorporate placement information,” says Kaul. The result is a long cycle of iterations.

The coming trend in logic synthesis, then, is tools that handle the entire chip, taking in RTL and a fixed-gate floorplan if one is available or automatically generating one if not. The tool will then partition and place the RTL on the floorplan, implementing all the way through placed gates while checking timing along the way. If timing constraints are not being met, the tool starts over with the original RTL and reimplements in a different architecture.
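A rough skeleton of that control flow might look like the following; every type and function here is a hypothetical placeholder, not a window into any vendor’s tool.

// Hypothetical placeholders for the data a physically aware synthesis
// tool would carry; none of these reflect actual tool internals.
struct Rtl {};
struct Floorplan {};
struct PlacedGates { bool timing_met; };

Floorplan generate_floorplan(const Rtl&) { return {}; }
PlacedGates partition_and_place(const Rtl&, const Floorplan&, int arch) {
    return { arch > 0 };          // pretend a later architecture closes timing
}

// Whole-chip flow as described above: place the RTL on the floorplan,
// check timing as you go, and re-implement from the original RTL with a
// different architecture if constraints are missed.
PlacedGates synthesize(const Rtl& rtl, const Floorplan* user_fp) {
    Floorplan fp = user_fp ? *user_fp : generate_floorplan(rtl);
    for (int arch = 0; ; ++arch) {
        PlacedGates result = partition_and_place(rtl, fp, arch);
        if (result.timing_met) return result;   // timing clean: done
        // otherwise start over from the original RTL with another architecture
    }
}

int main() { Rtl rtl; return synthesize(rtl, nullptr).timing_met ? 0 : 1; }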

HOLDING DOWN POWER

The ongoing push for minimal power consumption isn't news, but what is interesting is that power stinginess is no longer restricted to portable consumer electronics. “We see low power spilling into all segments,” says Bob Smith, VP of product marketing for Magma Design Automation’s Design Implementation Business Unit.

Thus, there’s more interest in both the Unified Power Format (UPF) and the Common Power Format (CPF) as means of specifying power intent. Both will gain momentum in 2010 as design teams look to ensure that power constraints are maintained throughout the flow. “There’s a huge push to optimize for low power at the system level,” says John Lenyo of Mentor Graphics. CPF and UPF aid in pre-synthesis verification of power-gating logic.

Expect to see a lot more activity on the low-power front in general, both in tools and methodologies and in the broader proliferation of low-power design practices. According to Magma’s Bob Smith, designers are expressing increased interest in a variety of low-power techniques, such as dynamic voltage and frequency scaling (DVFS).
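As a reminder of what DVFS amounts to at run time, here is a minimal, hypothetical governor that picks a voltage/frequency operating point from measured utilization; the table values are invented for illustration.

#include <cstdio>

struct OperatingPoint { double volts; int mhz; };

// Hypothetical table of legal voltage/frequency pairs for some core.
static const OperatingPoint kTable[] = {
    {0.85, 200}, {1.00, 500}, {1.10, 800}
};

// Pick the lowest point that still leaves headroom over the measured load.
OperatingPoint choose(double utilization, int current_mhz) {
    double demand_mhz = utilization * current_mhz / 0.8;   // target 80% busy
    for (const auto& op : kTable)
        if (op.mhz >= demand_mhz) return op;
    return kTable[2];                                      // saturate at the top point
}

int main() {
    OperatingPoint op = choose(0.30, 800);                 // light load at 800 MHz
    std::printf("scale to %.2f V, %d MHz\n", op.volts, op.mhz);
    return 0;
}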

PHYSICAL DESIGN TRENDS

At deeper submicron nodes, physical design takes on even more importance. According to Joe Sawicki, VP and GM of Mentor Graphics’ Design to Silicon Division, the biggest problem the industry faces in place-and-route is management of power and performance in the face of increased variability. Chip designers are not process engineers, nor do they want to be. Thus, they have a difficult time envisaging how their place-and-route decisions will interact with the variability inherent in their chosen process technology.

The answer, Sawicki believes, will come in the form of closer integration between Mentor’s Olympus-SoC place-and-route system and its Calibre physical verification and design-for-manufacturing (DFM) suite. “We’ve been doing fundamental work here on integrating the place-and-route platform with Calibre,” says Sawicki.

An upcoming version of Calibre will allow the router to invoke its applications incrementally during routing runs. “In areas where you have found problems, you need a self-healing routing mode that uses Calibre inline as you perform place and route,” says Sawicki.
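In outline, such a check-and-repair loop might look like the sketch below; every type and call is a hypothetical stand-in, not the Calibre or Olympus-SoC interface.

#include <vector>

// Hypothetical stand-ins for routing data and an inline sign-off check.
struct Net {};
struct Violation { Net* net; };

std::vector<Violation> run_signoff_check(const std::vector<Net*>& region) {
    (void)region; return {};                 // a real flow would call the DRC/DFM engine here
}
void rip_up_and_reroute(Net* net) { (void)net; }

// Route, check the just-routed region inline, and repair only the offenders,
// instead of waiting for a full-chip verification run at the end.
void self_healing_route(const std::vector<Net*>& region) {
    for (int pass = 0; pass < 3; ++pass) {   // bounded repair passes
        std::vector<Violation> v = run_signoff_check(region);
        if (v.empty()) return;               // region is clean
        for (const Violation& viol : v) rip_up_and_reroute(viol.net);
    }
}

int main() { std::vector<Net*> region; self_healing_route(region); return 0; }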

Another perspective on the future of placement and routing comes from Antun Domic, senior VP and general manager of the implementation group at Synopsys. The amount of data generated by physical design at nodes such as 28 nm is going to force a move from a flat flow to a hierarchical approach. “With as many as 100 million placeable objects, a flat approach with all of the iterations and ECOs (engineering change orders) in the middle is not feasible,” says Domic. Look for EDA vendors to begin pushing hierarchical techniques.

Layout rules are becoming increasingly complex, especially as double patterning becomes more prevalent at the 22-nm node. This will necessitate further integration of routing with layout verification, says Domic. “Double patterning will generate a lot of activity in routing and in the rule-checking space,” Domic says. Further, the tradeoffs are getting tougher between what is routed by tools intrinsically, i.e., using correct-by-construction methods, and what is checked with design rule checking/layout versus schematic (DRC/LVS) rules.
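At its core, checking whether a layout can be split across two masks is a graph-coloring problem: shapes closer than the single-exposure spacing limit are linked in a conflict graph, and decomposition succeeds only if that graph can be two-colored. The sketch below runs that check on a hand-built graph; production decomposers, of course, work on geometry rather than an adjacency list.

#include <cstdio>
#include <queue>
#include <vector>

// Return true if the conflict graph is two-colorable, assigning each polygon
// to mask 0 or mask 1; return false if an odd cycle makes decomposition fail.
bool two_color(const std::vector<std::vector<int>>& adj, std::vector<int>& mask) {
    mask.assign(adj.size(), -1);
    for (size_t start = 0; start < adj.size(); ++start) {
        if (mask[start] != -1) continue;
        mask[start] = 0;
        std::queue<int> q;
        q.push((int)start);
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (int v : adj[u]) {
                if (mask[v] == -1) { mask[v] = 1 - mask[u]; q.push(v); }
                else if (mask[v] == mask[u]) return false;   // odd cycle: not decomposable
            }
        }
    }
    return true;
}

int main() {
    // Three polygons all within minimum spacing of one another: an odd cycle.
    std::vector<std::vector<int>> conflict = {{1, 2}, {0, 2}, {0, 1}};
    std::vector<int> mask;
    std::printf("decomposable: %s\n", two_color(conflict, mask) ? "yes" : "no");
    return 0;
}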

When it comes to physical verification, Mentor’s Sawicki sees changes coming as well. “The DFM hype started at 130 nm, where the first tapeouts were debacles. But we built up a lot of experience using DFM on real designs. This has put the lie to the myth that EDA doesn’t get tools ready in time,” says Sawicki.

“One of the biggest areas in which we see DFM happening is chemical-mechanical polishing (CMP),” says Sawicki. The infrastructure has been built for metal fill, and that will be tied into a timing-aware physical-design environment.

“We’re seeing this move forward at 45 nm. By the time we hit 28 nm, all metal fill will be done in a model-based and timing-aware fashion,” Sawicki says. We can expect to see intelligent fill algorithms and modeling to ensure that designs have no hot spots. The transition to a fully model-based methodology will occur at the 32- and 28-nm nodes, Sawicki says.
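In much-simplified form, timing-aware fill boils down to measuring metal density per window and adding fill only where a window is below target and not flagged as timing-critical. The grid, threshold, and flags below are invented for illustration and reflect no foundry’s rules.

#include <cstdio>

int main() {
    const int kRows = 2, kCols = 3;
    // Measured metal density per window and whether a critical net crosses it.
    double density[kRows][kCols]  = {{0.15, 0.40, 0.22}, {0.05, 0.33, 0.10}};
    bool   critical[kRows][kCols] = {{false, false, true}, {false, true, false}};
    const double kTarget = 0.30;   // hypothetical minimum density

    for (int r = 0; r < kRows; ++r) {
        for (int c = 0; c < kCols; ++c) {
            if (density[r][c] >= kTarget) continue;        // already dense enough
            if (critical[r][c]) {
                // Timing-aware choice: skip, or use low-capacitance fill, rather
                // than add capacitance next to a critical net.
                std::printf("window (%d,%d): under target but timing-critical, skip\n", r, c);
            } else {
                std::printf("window (%d,%d): add fill to reach %.0f%%\n", r, c, kTarget * 100);
            }
        }
    }
    return 0;
}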

Sawicki sees a need for a new approach to transistor-level parasitic extraction. “It’s becoming very difficult to achieve the necessary accuracy in a rule-based methodology,” he says. Thus, EDA will pursue an approach driven by fundamental physics for extraction. It will be required for digital blocks at the 32-nm node and for high-performance analog at 45 nm and even 65 nm, Sawicki estimates.
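To see why rule-based extraction strains at these nodes, consider the kind of coefficient-based estimate it relies on, sketched below: capacitance approximated from area and perimeter terms fit to precharacterized patterns. The coefficients here are placeholders; the 3D field effects that such formulas average away are exactly what a physics-driven approach must capture.

#include <cstdio>

// Rule-based (pattern-fitted) style estimate: C ~ area*Ca + perimeter*Cf.
// Ca and Cf are per-layer coefficients characterized offline; values are invented.
double wire_cap_ff(double width_um, double length_um, double ca_ff_per_um2,
                   double cf_ff_per_um) {
    double area      = width_um * length_um;
    double perimeter = 2.0 * (width_um + length_um);
    return area * ca_ff_per_um2 + perimeter * cf_ff_per_um;
}

int main() {
    // A 0.05-um by 10-um wire with placeholder coefficients.
    std::printf("estimated C = %.3f fF\n", wire_cap_ff(0.05, 10.0, 0.04, 0.02));
    return 0;
}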

About the Author

David Maliniak | MWRF Executive Editor

In his long career in the B2B electronics-industry media, David Maliniak has held editorial roles as both generalist and specialist. As Components Editor and, later, as Editor in Chief of EE Product News, David gained breadth of experience in covering the industry at large. In serving as EDA/Test and Measurement Technology Editor at Electronic Design, he developed deep insight into those complex areas of technology. Most recently, David worked in technical marketing communications at Teledyne LeCroy. David earned a B.A. in journalism at New York University.
