Add Arrows To Your Quivers To Meet UDSM Challenges

Jan. 12, 2006

Design in the era of ultra-deep submicron (UDSM) silicon is a constantly moving target. To hit the bullseye, you've got to fit many more gates into a given amount of area, verify that much more functionality, and know that almost anything can go wrong when you're deep into the umpteenth simulation run.

An expanding palette of tools, methodologies, and languages can be combined in innovative ways to find the right solution. Among other things, the design challenges will call for improved use, and reuse, of IP. They'll also mean adopting some design methodologies that have been slow to see broad adoption, namely algorithmic design and coprocessor synthesis. Such technologies won't fix everything, but they're part of a broader solution.

Similarly, the verification puzzle must be addressed through a mix of elements that includes tools, languages, and smart planning to achieve adequate coverage.

THE DESIGN SIDE
Today's end-user markets are driven by increasingly integrated portable electronics, more often than not involving some kind of wireless connectivity. For designers, evolving industry standards in the multimedia and wireless domains mean more digital signal and control processing.

The flexibility to integrate and adjust to these evolving standards has become critical. Plus, the increasing complexity of emerging algorithms drives the desire to implement more of the signal processing in software instead of hardware.

It's too costly, and risky, to go completely custom with the implementation of all of this portable functionality. So, designers need ways to implement reconfigurability.

Fortunately, several alternatives will begin to take off in the near term. First, more designers are turning to FPGA and/or structured-ASIC technology. Second, platforms built from multiple custom processors enable complex algorithms to be implemented in software instead of hardware.

The customized instruction sets of these processors increase software performance while keeping power consumption under control. The ability to spread the processing load across multiple processors also makes it possible to fully exploit the parallelism inherently present in DSP algorithms.

In addition, reconfigurable computing combines reconfigurable logic and a custom processor into a reconfigurable processor. Such a processor consists of a core that's tightly linked to an array of reconfigurable computing elements, which are (re-)configured at run time to accelerate critical software.

All of the above is only possible through the use of electronic system-level (ESL) tools and methodologies, which ultimately provide the required programming, algorithm, and architecture design tools.

On the algorithmic side, the industry will start to see IP offered in conjunction with algorithmic synthesis environments that effectively explore the design-space alternatives. No longer can you just trade off area and frequency as in RTL synthesis.

Rather, system-level parameters will be employed to trade off power, sample rates, and system throughput in addition to area and frequency. Combining algorithmic synthesis and IP hardware will reduce development times and integration issues, as well as increase the use of third-party IP.
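To make those trade-offs concrete, here is a rough, hand-written RTL sketch of one point in that design space; an algorithmic synthesis tool would derive such variants automatically from an untimed description. All names here (dot_engine, TAPS, LANES) are invented for illustration and aren't tied to any particular tool.

// A dot-product engine: LANES multiply-accumulate units work in parallel,
// so one output takes ceil(TAPS/LANES) cycles. More LANES buys sample rate
// and throughput at the cost of area.
module dot_engine #(
  parameter int TAPS  = 16,   // filter length
  parameter int LANES = 4     // MAC units instantiated in parallel
) (
  input  logic                clk,
  input  logic                rst,
  input  logic                start,
  input  logic signed [15:0]  coeff  [TAPS],
  input  logic signed [15:0]  sample [TAPS],
  output logic signed [39:0]  result,
  output logic                done
);
  localparam int STEPS = (TAPS + LANES - 1) / LANES;

  logic [$clog2(STEPS+1)-1:0] step;
  logic signed [39:0]         acc, partial;
  logic                       busy;

  // LANES multiply-accumulates per clock cycle
  always_comb begin
    partial = '0;
    for (int l = 0; l < LANES; l++)
      if (step * LANES + l < TAPS)
        partial += coeff[step * LANES + l] * sample[step * LANES + l];
  end

  always_ff @(posedge clk) begin
    if (rst) begin
      busy <= 1'b0; done <= 1'b0; acc <= '0; step <= '0;
    end else begin
      done <= 1'b0;
      if (start && !busy) begin
        busy <= 1'b1; acc <= '0; step <= '0;
      end else if (busy) begin
        acc  <= acc + partial;
        step <= step + 1'b1;
        if (step == STEPS - 1) begin
          busy <= 1'b0;
          done <= 1'b1;
        end
      end
    end
  end

  assign result = acc;
endmodule

Doubling LANES roughly doubles multiplier area while halving the cycles needed per output sample, which is exactly the kind of knob a system-level flow turns against power, sample-rate, and throughput targets.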

We'll also see more use of application-specific compute engines to address worsening SoC performance and power-consumption problems. The most successful design technologies will offer an "incrementalist" approach that preserves existing design investment and infrastructure.

Standard processors alone don't have enough hardware parallelism to execute compute-intensive SoC application software in a power-efficient manner. Thus, design teams are proactively evaluating technologies for developing application-specific computing engines to boost parallelism.

Look for new concepts and applications of IP reuse, for both design and verification IP, to support SoC methodologies in 2006. Next-generation reuse methodologies embrace the concept that for reuse to be effective, it must bring together all aspects of IP, treating software, hardware, verification, and documentation with equal respect.

Traditional approaches generally use a text specification as the golden reference, with all of the downstream views created and maintained by hand. Going forward, high-level specification languages can capture this information once and use it to generate and update the downstream views. Clearly, applying compilers to hardware isn't a new concept, but applying a machine-readable specification to SoC creation is.

A design team should imagine adding an IP block to its SoC design, configuring its properties, automatically stitching it together with the other IP blocks, automatically running tests based on the IP and SoC configuration information, and generating documentation and software headers. Then, anytime the specification changes, a team member would update the information in one place, and only one time. Such correct-by-construction flows will be one of the key areas in 2006.
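As a small, purely hypothetical example of what such a flow might emit, the package below holds register addresses for an imaginary UART block as they could be generated from a single machine-readable specification; the same source would also produce the matching C header, the documentation table, and the register tests. Every name and offset is invented.

// Hypothetical output of a spec-driven flow. Regenerating this package, the
// C header, the docs, and the tests from one specification keeps them from
// drifting apart when the spec changes.
package uart0_regs_pkg;
  // Base address of the (imaginary) UART block in the SoC memory map
  localparam logic [31:0] UART0_BASE      = 32'h4000_1000;

  // Register offsets; the same spec would emit, e.g., #define UART0_CTRL 0x00
  localparam logic [31:0] UART0_CTRL_OFFS = 32'h0000_0000; // enable, parity, stop bits
  localparam logic [31:0] UART0_STAT_OFFS = 32'h0000_0004; // tx-empty, rx-full flags
  localparam logic [31:0] UART0_BAUD_OFFS = 32'h0000_0008; // baud-rate divisor
  localparam logic [31:0] UART0_DATA_OFFS = 32'h0000_000C; // tx/rx data window

  // Field positions within the control register
  localparam int CTRL_ENABLE_BIT = 0;
  localparam int CTRL_PARITY_BIT = 1;
endpackage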

THE VERIFICATION SIDE
Electronic designers need new methodologies for functional verification. A successful verification methodology encompasses test-bench automation, assertion-based verification, coverage-driven verification, and transaction-level modeling. Critical capabilities include simulation, formal checkers, emulation and functional prototypes, and the ability to integrate these technologies into a comprehensive verification environment.

It's increasingly important that the verification environment be set up to automatically detect bugs, using features such as assertions, automated response checkers, and scoreboards. Furthermore, the environment must generate the right stimulus to provoke those bugs, whether from directed tests or from powerful test-benches that apply constrained-random techniques to deploy a wide range of scenarios with a relatively small amount of code. Once test-bench randomization takes place, functional coverage is required to determine which of the possible scenarios actually occurred.
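A minimal sketch of that generate-and-detect pairing, assuming a simple packet-based design, might look like the SystemVerilog below; the names (packet, scoreboard, addr, data, len) are invented and aren't tied to any particular methodology library.

// Constrained-random stimulus: constraints steer the generator toward
// legal, interesting packets.
class packet;
  rand bit [7:0]  addr;
  rand bit [31:0] data;
  rand bit [3:0]  len;

  constraint c_len  { len inside {[1:8]}; }
  constraint c_addr { addr != 8'hFF; }   // 0xFF reserved in this fictional design
endclass

// Automated response checking: a scoreboard compares what came out of the
// design with what went in.
class scoreboard;
  packet expected[$];   // queue of packets driven into the design

  function void add_expected(packet p);
    expected.push_back(p);
  endfunction

  function void check_actual(packet p);
    packet e = expected.pop_front();
    if (e.addr != p.addr || e.data != p.data)
      $error("Scoreboard mismatch: expected addr=%0h data=%0h, got addr=%0h data=%0h",
             e.addr, e.data, p.addr, p.data);
  endfunction
endclass

A test then repeatedly calls randomize() on a packet, drives it into the design, logs it with add_expected(), and lets a monitor feed observed traffic to check_actual(), while functional coverage reports which of the randomized scenarios actually occurred.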

Standardization of the SystemVerilog hardware design and verification language as IEEE Std 1800-2005 was one of the most significant events of 2005 for IC development. Expect 2006 to be the year of wide SystemVerilog adoption, resulting in productivity improvements and more first-silicon successes.

SystemVerilog holds the key to many of the requirements of a modern verification system (see the figure). It adds advanced functional verification constructs, such as assertions, constrained-random data generation, and functional coverage, to the language so they can be used seamlessly with existing HDL-based environments. Another benefit of SystemVerilog is its unified structure, which permits the detection and generation mechanisms to work together.
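For a brief, hypothetical illustration of those constructs, assuming bus signals named req, gnt, and burst_len, the checker below uses an assertion to detect a missing grant and a covergroup to measure which scenarios the constrained-random stimulus actually exercised.

// All signal names and coverage bins here are invented for illustration.
module bus_checks (
  input logic       clk,
  input logic       rst_n,
  input logic       req,
  input logic       gnt,
  input logic [3:0] burst_len
);
  // Detection: every request must be granted within 1 to 8 cycles
  property p_req_gets_gnt;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:8] gnt;
  endproperty
  assert property (p_req_gets_gnt)
    else $error("req was not granted within 8 cycles");

  // Measurement: which burst lengths and grant outcomes were actually seen
  covergroup cg_bus @(posedge clk);
    cp_len    : coverpoint burst_len { bins short_b = {[1:4]}; bins long_b = {[5:15]}; }
    cp_gnt    : coverpoint gnt;
    len_x_gnt : cross cp_len, cp_gnt;
  endgroup

  cg_bus cov = new();
endmodule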

In addition to greater use of SystemVerilog, the complexity of today's designs demands that SystemC modeling be integrated into RTL flows. This trend is currently driven primarily by the verification benefits inherent in SystemC—the ability to model and simulate something very quickly.

However, a new trend is emerging thanks to transaction-level modeling (TLM), which links design and verification much more closely together (see "The Rise Of Transaction-Level Modeling" online at www.elecdesign.com, Drill Deeper 11775). TLM has been a major driver of SystemC's adoption, primarily for its verification benefits. Combining behavioral synthesis technology with SystemC and TLM will enable more companies to exploit abstraction for design as well as verification.
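TLM is most naturally written in SystemC, but the central idea, components exchanging whole transactions through a channel rather than toggling pins, fits in a few lines. For consistency with the earlier sketches it's rendered here in SystemVerilog; every name is illustrative.

// A transaction carries a whole read or write; no clocks or pin wiggling.
class bus_txn;
  bit [31:0] addr;
  bit [31:0] data;
  bit        write;
endclass

module tlm_style_example;
  // Transactions flow through a channel between producer and consumer
  mailbox #(bus_txn) chan = new();

  // Producer: thinks purely in terms of reads and writes
  initial begin
    bus_txn t;
    t = new();
    t.addr  = 32'h4000_1000;
    t.data  = 32'hDEAD_BEEF;
    t.write = 1'b1;
    chan.put(t);
  end

  // Consumer: a fast abstract memory model today, a pin-level RTL driver
  // later, with the producer's code unchanged
  initial begin
    bus_txn t;
    chan.get(t);
    $display("write=%0b addr=%h data=%h", t.write, t.addr, t.data);
  end
endmodule

Because timing detail is hidden, the same transaction-level stimulus runs much faster than pin-level RTL simulation and can be reused once the RTL exists.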

It's also crucial to recognize that design complexity has compelled new levels of specialization in project teams and increased interdependence between those specialists. Among them are systems engineers, software engineers, verification engineers, logic designers, mixed-signal designers, and product engineers responsible for post-silicon debug.

Too often, members of these various disciplines use poorly integrated manual verification methods. Moreover, the overall verification process isn't managed from a common plan with metrics that are relatable across those disciplines. Further adoption of verification methodologies will unify these disparate elements into a more holistic approach.
