The Next Level Of Design Entry—Will 2012 Bring Us There?

Dec. 5, 2011
Cadence's Frank Schirrmeister explains how the industry is edging its way toward adoption of virtual platforms and transaction-level modeling with hybrid TLM/RTL approaches and ever-larger FPGAs.

It’s the end of the year again, a time for predictions and revisiting past years’ forecasts. Last year, I wrote about “EDA’s Next Step: System-Level Design Automation.” Based on the 2009 edition of the International Technology Roadmap for Semiconductors (ITRS), I had concluded that only drastic design productivity improvements on both the hardware and software sides (26-fold and 50-fold, respectively) could keep costs in check through the next decade. “System-Level Design Automation” was my solution. Roughly a year prior, I had written that “2010 Will Change The Balance In Verification,” suggesting that embedded software running on the embedded processor would become an important part of test benches for hardware.

So where are we? The fundamentals of the ITRS roadmap did not change in its 2010 update. The industry still needs huge productivity improvements in both hardware and software design over the next 10 years. Most design teams will also confirm that hardware can no longer be developed independently of software considerations. However, the engines used to execute representations of the hardware on which software can be developed, debugged, and verified are rapidly growing closer together. To achieve the required rates of increased design productivity, automation must be pushed to higher levels of abstraction.

Current State Of The Art

When asked to provide pre-silicon representations of a design under development (to enable early software development), design teams have several choices. Often a previous-generation chip satisfies basic software development needs. Depending on how well the hardware abstraction layers are designed, their implementations can be replaced with versions targeting the actual next-generation chip once it is available, leaving the software above them largely untouched.
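
As a minimal illustration of that point, the sketch below shows one possible shape of such a hardware abstraction layer in C++; the interface and names (uart_if, ConsoleUart, print_banner) are hypothetical and not taken from any particular board support package. Code written against the interface is unchanged when the implementation underneath it is swapped for one targeting the next-generation chip.

    #include <cstdint>
    #include <cstdio>

    // Interface the software stack programs against; it hides chip-specific registers.
    struct uart_if {
        virtual void init(uint32_t baud) = 0;
        virtual void put_char(char c) = 0;
        virtual ~uart_if() = default;
    };

    // Host-side stand-in for the previous-generation chip's UART. A real
    // implementation would write the chip's memory-mapped registers instead.
    struct ConsoleUart : uart_if {
        void init(uint32_t) override {}
        void put_char(char c) override { std::putchar(c); }
    };

    // Application code sees only uart_if, so it runs unchanged once an
    // implementation for the next-generation chip is dropped in underneath.
    void print_banner(uart_if& uart) {
        uart.init(115200);
        for (const char* p = "boot ok\n"; *p; ++p) uart.put_char(*p);
    }

    int main() {
        ConsoleUart uart;
        print_banner(uart);
        return 0;
    }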

But to enable software development for the specific new features that characterize the next-generation design, the first available options are virtual prototypes based on transaction-level models (TLMs). These highly abstract models execute the real software binaries, with little or no modification from what will eventually be loaded into the actual chip, and reach hundreds of megahertz in equivalent execution speed.
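
To make the abstraction concrete, here is a minimal loosely timed sketch in the style of SystemC TLM-2.0; it is my own illustration, assuming SystemC 2.3 with the standard TLM utilities, and the Cpu and Memory modules are hypothetical. A whole read or write completes in a single blocking function call, with no clocks and no pin-level activity, which is why such models can reach the execution speeds mentioned above.

    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_initiator_socket.h>
    #include <tlm_utils/simple_target_socket.h>

    struct Memory : sc_core::sc_module {
        tlm_utils::simple_target_socket<Memory> socket;
        unsigned char mem[256] = {};

        SC_CTOR(Memory) : socket("socket") {
            socket.register_b_transport(this, &Memory::b_transport);
        }

        // One call completes the whole transaction; no cycle-level detail is modeled.
        void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
            unsigned char* data = trans.get_data_ptr();
            sc_dt::uint64 addr = trans.get_address();
            if (trans.is_write()) mem[addr] = *data;
            else                  *data = mem[addr];
            trans.set_response_status(tlm::TLM_OK_RESPONSE);
        }
    };

    struct Cpu : sc_core::sc_module {
        tlm_utils::simple_initiator_socket<Cpu> socket;

        SC_CTOR(Cpu) : socket("socket") { SC_THREAD(run); }

        void run() {
            tlm::tlm_generic_payload trans;
            sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
            unsigned char data = 0xAB;
            trans.set_command(tlm::TLM_WRITE_COMMAND);
            trans.set_address(0x10);
            trans.set_data_ptr(&data);
            trans.set_data_length(1);
            trans.set_streaming_width(1);
            socket->b_transport(trans, delay);   // one function call per bus transaction
        }
    };

    int sc_main(int, char*[]) {
        Cpu cpu("cpu");
        Memory mem("mem");
        cpu.socket.bind(mem.socket);
        sc_core::sc_start();
        return 0;
    }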

After the less abstract register-transfer level (RTL) description has been developed, four different execution engines become available, each with its own pros and cons. Classic RTL simulation usually comes first and can be used even when the RTL is still unstable. While its execution speed in the hertz range is too slow to enable real software development, it is still sometimes used to bring up low-level software.

When RTL becomes stable enough, parts of it can be mapped into hardware-based execution engines. Verification acceleration then combines the use of test benches in classic RTL simulation with the design under development mapped into a verification compute platform that executes the design in real hardware. Now users can execute in the range of hundreds of kilohertz, which becomes reasonable for software execution.

Classic in-circuit emulation reaches speeds on the order of several megahertz and has been used for software bring-up, development, and debug since its inception.

FPGA-based prototypes offer the next level of speed improvement, often running an order of magnitude faster than emulation. Arguably, FPGA prototypes are the primary engines on which software is currently developed pre-silicon. Once the design is back from fabrication, the actual next-generation chip replaces its previous-generation predecessor for software development, allowing execution at the target speed of the design.

Hybrid Prototypes

While virtual prototypes are one step on the path to TLM as the next level of design entry beyond RTL, they are not yet part of the mainstream design flow. More and more users are interested in combining their advantages (speed and early availability) with the better accuracy of the RTL-based execution engines. It is safe to assume that in 2012 we will see increasing hybrid use of TLM- and RTL-based techniques.

The other aspects of transaction-level modeling as a design entry point are implementation and verification. The industry has provided technologies for high-level synthesis from TLMs but has been less quick to standardize on a single modeling style. Not only can implementation to RTL be automated, but verification methodologies can be up-leveled as well. The 2010 book TLM-Driven Design and Verification Methodology by Brian Bailey et al. is a good example of combining both implementation and verification. On the path to the next level of design entry, combining TLM for design and verification with TLM for software enablement will be an important step.
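
For a flavor of what such design entry can look like, below is a small behavioral SystemC sketch of the kind high-level synthesis tools can refine to RTL; the Mac module and its ports are purely illustrative, and actual modeling styles differ from tool to tool, which is exactly the standardization gap noted above. The designer writes the untimed multiply-accumulate once, and the synthesis tool derives the cycle-by-cycle implementation.

    #include <systemc>
    using namespace sc_core;

    SC_MODULE(Mac) {
        sc_in<bool> clk, rst;
        sc_in<int>  a, b;
        sc_out<int> acc;

        // Untimed algorithm; an HLS tool would schedule it into clocked RTL.
        void behavior() {
            int sum = 0;
            acc.write(0);
            wait();                              // leave reset
            while (true) {
                sum += a.read() * b.read();
                acc.write(sum);
                wait();                          // the tool may re-schedule these steps
            }
        }

        SC_CTOR(Mac) {
            SC_CTHREAD(behavior, clk.pos());
            reset_signal_is(rst, true);
        }
    };

    int sc_main(int, char*[]) {
        sc_clock clk("clk", 10, SC_NS);
        sc_signal<bool> rst;
        sc_signal<int> a, b, acc;

        Mac mac("mac");
        mac.clk(clk); mac.rst(rst); mac.a(a); mac.b(b); mac.acc(acc);

        rst = true;  a = 2;  b = 3;
        sc_start(20, SC_NS);                     // hold reset for two cycles
        rst = false;
        sc_start(50, SC_NS);                     // accumulate a*b every cycle
        return 0;
    }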

While I am sure the industry will make progress in that direction during 2012, an interesting TLM-related situation was created this fall by the two big FPGA vendors, Altera and Xilinx. Both have announced devices that combine programmable logic of up to several million ASIC gate equivalents with hard implementations of ARM Cortex-A9-based multiprocessor subsystems. Both have also announced virtual prototypes for their devices. The Xilinx Zynq platform encourages extensibility with user logic at the TLM level by offering an “Extensible Virtual Platform,” which means design teams can create a model for software development even before the RTL of the user-defined logic exists, whether that RTL is later written by hand or generated with high-level synthesis.

Given that I am a system-level enthusiast, I am betting that early TLM extensibility will show real value to users. Verification will still be required at all levels, so hybrids of TLM and hardware will also become a key part of the design and verification flow. Whether this will happen on an FPGA platform or via the development of ASICs and application-specific standard parts (ASSPs) using the different engines described earlier remains to be seen. 

Either way, especially with the setup of FPGA vendor solutions extending into the virtual prototype space, we are sure to get a fascinating system-level case study in 2012!

Frank Schirrmeister, senior director of product marketing at Cadence and responsible for the System Development Suite, has an MSEE from the Technical University of Berlin. He also discusses system-level and embedded software design technology adoption in his blog “Frankly Speaking.”

About the Author

Frank Schirrmeister

Frank Schirrmeister is Senior Director at Cadence Design Systems in San Jose, responsible for product management of the Cadence System Development Suite, accelerating system integration, validation, and bring-up with a set of four connected platforms for concurrent HW/SW design and verification.
