SoC Co-Design Is Pushing The Limits Of Software And Hardware Simulation

Oct. 30, 2000
Hardware and software co-design tools try to keep up as transistor count continues to rise.

System-on-a-chip (SoC) design is a hot topic, and one answer to quickly developing an SoC is to use co-design and co-verification tools. With fast time-to-market as a goal, the ability to design and test on both hardware and software sides is imperative. Delaying software development and testing until the hardware is solid is no longer an option.

A number of electronic-design-automation (EDA) vendors are pushing various approaches to co-design and co-verification, an area where new tools and new approaches abound. Hardware simulation is often required as well to get the necessary performance for simulating an SoC at a reasonable speed. This is especially true when testing software-intensive algorithms, such as new network protocols or streaming-media transport and decoding.

One alternative is to use standard SoCs that are reconfigurable (see "Reconfigurable SoCs," p. 90). The other is to use tools that allow concurrent design of hardware and software, along with tools that simulate the hardware before it's available.

Co-design and co-verification tools are emerging technologies. Their actual definition can vary significantly depending upon which company is presenting information on its technology. Most agree with the basic premise that co-design involves concurrent design of hardware and software with a coordinated exchange of development changes that occur throughout the design process. This includes product specification, partitioning and repartitioning of a system's hardware and software architecture, implementation of the design for simulation purposes, including hardware/software co-verification support, plus the final system implementation in silicon.

While the tools in this space are being used effectively by small groups on new projects, their use by large groups on large projects is only now being attempted. Many organizations are still in the midst of completing projects with conventional development tools. But new projects built around co-design tools are popping up all over the place. For example, Tim Redfield, software engineer at Vanteon Corp., indicates that his company is evaluating new tools in the co-design space. This consulting firm, which develops ASIC, SoC, and FPGA designs, is doing so because the new tools are becoming almost a requirement for developing larger SoCs.

The size of SoCs is both a boon and a bane to co-design. As SoCs become larger, implementation and design become more complex, making tools that simplify and manage the development tasks a necessity. On the other hand, simulations slow down as complexity rises. Likewise, software is becoming a larger part of an SoC system design, further aggravating simulation overhead.

System complexity is relative. A large SoC used to be around a million transistors. Now, a large SoC is pushing four times that number.

Co-design tools help address complexity by providing a consistent design framework. This is a way to partition the design between hardware and software, and to manage changes to different parts of the design, especially the relationships between hardware and software support. Often, co-verification is part of a co-design tool or else the next step when testing a design.

Hardware and software design issues aren't always in sync with each other, though. The hardware design addresses implementation details, which must be supported through software device drivers. Software design issues, in turn, are often split between enhancing performance and minimizing resource usage. The hardware and software designs must complement each other because the hardware has to run the software fast enough to meet the final product's requirements. If a designer builds the wrong hardware, the software may run too slowly. Obviously, correct operation is equally important.

Software development tools are independent of the co-design tools, but the application code they produce is brought back into the loop for co-verification. Also, many co-design tools provide a mechanism to generate basic device drivers for use with developer-generated software.

One of the major co-design tools is Mentor Graphics' Seamless CVE. It creates virtual hardware prototypes designed specifically to run application software. Seamless CVE can provide partitioned, high-performance co-verification using instruction-set simulators when simulating a processor core.

Cadence's Virtual Component Co-design (VCC) is another encompassing co-design and co-verification tool. Its multilingual support includes designs based on C, C++, MATLAB, and SDL, as well as the Cadence Cierto Signal Processing Worksystem (SPW). This tool addresses hardware/software partitioning, bus and processor loading analysis, and RTOS scheduling and resource contention. Communication refinement helps convert an abstract token-level interface description into the actual signal-level interfaces. VCC co-verification includes software simulation as well as support for Cadence's Affirma HW/SW verifier.

Various ways to address design complexity and simulation overhead exist. One method is using a higher-level specification. Register-transfer-level (RTL) system definitions are a common way to specify a design, but higher-level specification languages and tools can help. On the design side, a higher-level language lets a designer specify a portion of the system with fewer lines of code. On the simulation side, it lets a simulation run at the algorithmic level instead of at the gate or register level.
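
To illustrate the gap in abstraction, consider a minimal sketch in ordinary C++ (the function and values are purely illustrative, not taken from any particular tool): a four-tap moving-average filter described algorithmically takes a handful of lines, whereas a gate- or register-level model of the same block would need explicit adders, registers, and control logic.

    // Algorithmic description of a 4-tap moving average: behavior only,
    // with no clocks, registers, or gates spelled out.
    #include <array>
    #include <cstdio>

    int moving_average(const std::array<int, 4>& taps) {
        int sum = 0;
        for (int t : taps) sum += t;   // sum the four samples
        return sum / 4;                // divide by the tap count
    }

    int main() {
        std::array<int, 4> samples = {8, 12, 10, 6};
        std::printf("avg = %d\n", moving_average(samples));  // prints 9
        return 0;
    }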

Another approach to optimizing simulation is to partition the design based upon proven intellectual property (IP), such as processor core designs. It's possible to simulate a processor core at the same level as the rest of the design. But, it's more efficient to simply simulate the core at a higher level, executing the code at the logical instruction level instead of at the gate level.
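
As a hedged illustration of why this pays off, the sketch below models a hypothetical three-instruction processor at the instruction level in plain C++. Each instruction costs one switch statement rather than the thousands of gate evaluations a gate-level model would perform per clock, which is where the speedup comes from; the instruction set and program are invented for the example.

    // Toy instruction-set simulator: decode and execute one instruction at a
    // time, keeping only architectural state (registers), not gate-level detail.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    enum Op : uint8_t { LOAD_IMM, ADD, HALT };
    struct Insn { Op op; uint8_t rd, rs; int32_t imm; };

    int main() {
        int32_t reg[4] = {0};
        // Hypothetical program: r0 = 2; r1 = 3; r0 = r0 + r1; halt
        std::vector<Insn> program = {
            {LOAD_IMM, 0, 0, 2}, {LOAD_IMM, 1, 0, 3},
            {ADD, 0, 1, 0}, {HALT, 0, 0, 0}};

        for (size_t pc = 0; pc < program.size(); ++pc) {
            const Insn& i = program[pc];
            switch (i.op) {                              // one decode per instruction
                case LOAD_IMM: reg[i.rd] = i.imm;        break;
                case ADD:      reg[i.rd] += reg[i.rs];   break;
                case HALT:     std::printf("r0 = %d\n", reg[0]); return 0;
            }
        }
        return 0;
    }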

Finally, there's hardware-assisted simulation. Large arrays of FPGAs are used to emulate the design, typically providing a tenfold to a one-hundredfold performance boost to simulations. Also, mixing hardware and software emulation is an option. But, this is much more complex to implement.

Design and simulation issues become more complex during this period of transition to new design specification languages. Long gone are the days of laying out an IC using individual transistor cells, but the dominant design specification languages are still only a few steps above this. RTL remains the dominant hardware-description-language (HDL) abstraction because the translation from RTL to hardware is well defined, and it's supported by most vendors.

The two dominant HDLs are VHDL, the very high-speed IC (VHSIC) HDL, and Verilog. Verilog was originally intended as a simulation language, but it eventually became a design and synthesis language as well. VHDL grew out of a U.S. Department of Defense program meant to replace proprietary design languages.

VHDL International (VI) has been the keeper of VHDL with Open Verilog International (OVI) addressing Verilog. The similarity between the two languages wasn't lost on these groups or users. Recently, OVI and VI merged into an organization named Accellera.

Two popular approaches provide a higher level of abstraction. One is enhancing an existing design language, such as Verilog. The other is replacing an old design language with a new language. Actually, it isn't so much a new language, but rather the result of taking an existing programming language and utilizing it as a design language. In this case, C and C++ tend to be the hot programming languages used for co-design tools.

From Co-Design Automation Inc., Superlog is designed as a proper superset of Verilog. It's a proprietary solution that's expected to become an open standard with multivendor support.

The Superlog method offers significant benefits to those currently using Verilog because their existing IP can be employed without modification. In addition, the initial impact of training developers is minimized because the advanced features in Superlog can be utilized when needed and when the features are understood.

Superlog is actually an amalgam of Verilog and C. This approach provides the familiarity of C and Verilog, but it requires a language-specific compiler.

Superlog will face plenty of competition from strict C and C++ alternatives, like SystemC (see "SoC Design Using SystemC," p. 94). Using a conventional programming language for design and simulation is nothing new. A number of ambitious designers and companies have already struck out on their own to develop designs in C and C++.

Two significant advantages come with this approach. First, the bulk of the support required to use the tools is already part of readily available C and C++ development tools. These tools run on a number of platforms, giving designers a choice of development hardware and C/C++ software across a range of performance levels, from a desktop PC to a high-end workstation.

The second advantage is a well-defined language that's familiar to designers. Granted, C and C++ are more familiar to programmers. But, as most SoC designs become heavily weighted toward software development, with one or more processor cores in an SoC, the programming language of choice is often C or C++.

This consistency between application software definitions and hardware definitions isn't by chance. Having both in the same language means that moving algorithms from the hardware to the software side of a design is significantly easier. It also brings hardware and software design teams closer together. This is less of an issue with small design teams or small projects. But on larger projects, the ability to discuss designs and design changes using a common design tool can help reduce confusion between designers.

Originally developed by Synopsys Inc., CoWare Inc., and Frontier Design Inc., SystemC is a C++ class library and design methodology. Today, SystemC is supported through the Open SystemC Initiative (OSCI) and by a number of other companies, such as Cadence. Other C-based solutions include CycleC from C-Level Design Automation, which uses ANSI C, and CynApps' Cynlib, which is based on C++.

Due to its functionality and the companies supporting it, SystemC has gained a major following. SystemC can address different levels of abstraction, and mixing these levels in a design is what allows incremental refinement. Plus, SystemC comes with a number of higher-level constructs, such as queues, that can greatly simplify a design. Lower-level design tools, like CycleC, can be used to define similar objects, but that means a designer has to write more code.
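
A minimal sketch of what those higher-level constructs look like in practice, assuming the SystemC class library is installed (module names and the FIFO depth are illustrative): two modules communicate through a built-in bounded queue, sc_fifo, with blocking reads and writes handled by the library rather than by hand-written handshake logic.

    // Producer/consumer connected by SystemC's built-in FIFO channel.
    #include <systemc.h>

    SC_MODULE(Producer) {
        sc_fifo_out<int> out;                       // queue-style output port
        void run() { for (int i = 0; i < 4; ++i) out.write(i); }
        SC_CTOR(Producer) { SC_THREAD(run); }
    };

    SC_MODULE(Consumer) {
        sc_fifo_in<int> in;                         // queue-style input port
        void run() { while (true) std::cout << "got " << in.read() << std::endl; }
        SC_CTOR(Consumer) { SC_THREAD(run); }
    };

    int sc_main(int, char*[]) {
        sc_fifo<int> channel(2);                    // bounded FIFO, depth 2
        Producer p("p");
        Consumer c("c");
        p.out(channel);
        c.in(channel);
        sc_start(100, SC_NS);                       // run the simulation kernel
        return 0;
    }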

C-Level Design actually supports SystemC with its simulation tool, CSim. It offers CycleC as a high-performance simulation tool that generates VHDL or Verilog code usable with today's back-end tools. Rather than a library, C-Level Design provides its customers with a CycleC methodology style guide.

The main difference between SystemC and CycleC is that CycleC is more of a low-level design tool, comparable to writing VHDL in C. CycleC doesn't use a runtime system as SystemC does, which lets CycleC deliver very good simulation performance. All that's needed is a standard ANSI C/C++ compiler.

CycleC supports asynchronous design techniques, such as multiple clock and asynchronous reset signals that are common in SoC designs. But unlike SystemC, it doesn't provide higher levels of abstraction, such as queues.
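
The following is a generic sketch of that cycle-based style in ordinary C++, not actual CycleC code (the counter and its signals are invented for illustration): state is updated by a plain function called once per clock edge, so a standard compiler and no simulation kernel are all that's required.

    // Cycle-based model of a small counter: one function call per clock edge.
    #include <cstdint>
    #include <cstdio>

    struct Counter {
        uint8_t count = 0;                          // the register (state element)
        void clock_edge(bool reset, bool enable) {
            if (reset)       count = 0;             // synchronous reset
            else if (enable) count = uint8_t(count + 1);  // next-state logic
        }
    };

    int main() {
        Counter c;
        c.clock_edge(true, false);                  // cycle 0: apply reset
        for (int cyc = 1; cyc <= 5; ++cyc)          // cycles 1-5: count up
            c.clock_edge(false, true);
        std::printf("count after 5 cycles = %u\n",
                    static_cast<unsigned>(c.count));  // prints 5
        return 0;
    }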

CynApps' Cynlib is patterned after the Verilog PLI. The Cynlib library is open source and available online. The company's separately priced products include applications like the Verilog co-simulation interface and CynSuite, which includes Cynthesizer, a translator of C++/Cynlib code to Verilog RTL.

The full implications of the differences among the three are beyond the scope of this article, but each has benefits. The SystemC runtime provides designers with a rich set of ports, data types, clocks, and support for untimed modules. CycleC potentially provides better simulation performance.

Because CycleC has no runtime, different parts of the system definition drive other components directly, without synchronization overhead. Although some users have reported significant performance advantages from using CycleC, it hasn't undergone benchmarks and performance testing by a third party.

SystemC offers multiple levels of abstraction from high-level functional models down to detailed, clock-level RTL models. High-level functional models can be refined to more detailed models as necessary.
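
A hedged sketch of what the clocked end of that refinement can look like, again assuming the SystemC library (the counter, its ports, and the test stimulus are illustrative): the same kind of behavior as a functional model, now expressed as a process sensitive to a clock edge.

    // RTL-style SystemC module: a resettable counter clocked on the rising edge.
    #include <systemc.h>

    SC_MODULE(ClockedCounter) {
        sc_in<bool> clk;
        sc_in<bool> reset;
        sc_out<int> count;

        void tick() {
            if (reset.read()) count.write(0);            // synchronous reset
            else              count.write(count.read() + 1);
        }
        SC_CTOR(ClockedCounter) {
            SC_METHOD(tick);
            sensitive << clk.pos();                      // rising clock edge only
        }
    };

    int sc_main(int, char*[]) {
        sc_clock        clk("clk", 10, SC_NS);           // 10-ns clock period
        sc_signal<bool> reset;
        sc_signal<int>  count;
        ClockedCounter dut("dut");
        dut.clk(clk); dut.reset(reset); dut.count(count);

        reset.write(true);  sc_start(10, SC_NS);         // hold reset one cycle
        reset.write(false); sc_start(50, SC_NS);         // then let it count
        std::cout << "count = " << count.read() << std::endl;
        return 0;
    }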

In both of these cases, the resulting application is single-threaded. Simulation performance, then, depends very much upon the quality of the C/C++ compiler and the hardware used to run the application.

The design process is the same for all approaches (Fig. 1). A number of higher-level design products will generate source code that can then be compiled and linked into an application used for simulation tests. In addition, the source code can be created by the designer and be modified as part of an iterative design process.

Conversion of the design to hardware starts with the source code. But a different compiler, like C-Level's System Compiler, transforms the source code into a definition, like an RTL design definition, which can be used by existing tools to generate a chip. There's no reason why the compiler couldn't be incorporated into tools to generate hardware without going through an intermediate step. But given the large investment in an existing infrastructure, it's likely that the two-step process will remain the same for many years.

One reason for keeping the two-step process is that SoC designs are rarely done from scratch. More often, a significant portion of the design consists of existing IP tied together with some custom IP.

Because the C and C++ approach is relatively new, much of a company's existing IP will be in other forms, such as Verilog, VHDL, and RTL. Mixing design languages can lead to a Tower of Babel, but it will probably be the norm for many years to come. One reason is that the other languages provide a way to exchange IP between groups and companies.

Yet another approach is taken by Cynergy Systems Design Inc. Its idea was to extend RTL using C. What resulted is the company's patented RTLC.

A number of the company's tools work with RTLC. One of these tools, ArchGen, is a graphical design application that generates RTLC based on graphical system designs (Fig. 2). ArchGen can simulate the design too, and it utilizes graphical animation for debugging purposes.

Application Specification Virtual Prototype (ASVP) Builder is another RTLC design tool from Cynergy Systems Design Inc. With the Afterburner, also from Cynergy, existing RTL designs can be converted to RTLC.

Simulation is a key aspect to co-design and co-verification. Small to medium-size SoCs can be simulated at reasonable speeds with the entire system defined by using one of the HDLs already discussed. Larger SoCs, though, simply push the hardware requirements too far.

Partitioning To Speed Simulation

One way to provide faster simulations is to partition the design and implement a portion as a black box that can be simulated in a more efficient fashion. In many instances, this is done with the processor cores because application execution is responsible for a large amount of activity and an SoC is usually built around a processor. Memory arrays fall into the same category. They are well defined and well tested.
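
As a hedged sketch of the black-box idea (the class and sizes are illustrative, not from any vendor's library), a memory array can be replaced by a simple functional model: a read or write becomes one array access instead of the evaluation of a full RAM netlist.

    // Behavioral memory model used as a black box during simulation.
    #include <cassert>
    #include <cstdint>
    #include <vector>

    class MemoryModel {
    public:
        explicit MemoryModel(size_t words) : mem_(words, 0) {}
        void     write(uint32_t addr, uint32_t data) { mem_.at(addr) = data; }
        uint32_t read(uint32_t addr) const           { return mem_.at(addr); }
    private:
        std::vector<uint32_t> mem_;     // flat array stands in for the RAM block
    };

    int main() {
        MemoryModel ram(1024);          // models a 1K x 32-bit on-chip RAM
        ram.write(0x10, 0xCAFEF00Du);
        assert(ram.read(0x10) == 0xCAFEF00Du);  // one call, not a gate-level RAM
        return 0;
    }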

The other way to speed up simulation is by throwing hardware at the problem. Hardware-based simulation support is normally provided through an array of FPGAs. Aptix Inc., IKOS Systems Inc., and Quickturn Inc. are offering products in this area, in addition to software simulation support.

Aptix's Explorer spans co-design and co-verification, including Explorer hardware simulation support. The Explorer uses Aptix's Field Programmable Interconnect Component technology to link an array of FPGAs with other core components, such as memory and processor cores. The Explorer 2000 software maps a software design to Explorer's hardware.

The IKOS VStation-5M supports designs up to 5 million gates. It comes with the IKOS VirtualLogic compiler, which utilizes the patented VirtualWires technology. The VStation-5M supports RTL designs as well.

VirtualWires lets designers view and manipulate the multiple FPGAs in the VStation-5M as one logical FPGA. It accomplishes this using custom FPGAs that time-multiplex signals between FPGAs in the array.

From Quickturn, the Mercury Plus also employs custom FPGA chips tuned for emulation. A fully loaded system can handle designs with up to 20 million gates and up to 32 primary clock domains. The system accurately models asynchronous logic.

Both systems are controlled by a workstation. Debugging software provides access to signals within the FPGA arrays, allowing debugging support that's similar to software-based simulations.

While software-based solutions typically run at well under 100 cycles/s, these hardware solutions provide performance in the range of 1k to 100k cycles/s.
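
To put those rates in perspective, a rough calculation (assuming a 100-MHz target clock, a figure chosen only for illustration) shows what they mean in wall-clock time for one second of SoC activity:

    // Wall-clock time to simulate one second of a 100-MHz SoC at each rate.
    #include <cstdio>

    int main() {
        const double target_cycles = 100e6;   // 1 s of a 100-MHz clock
        const double sw_rate = 100.0;         // software simulation, cycles/s
        const double hw_rate = 100e3;         // hardware emulation, cycles/s
        std::printf("software: about %.1f days\n",
                    target_cycles / sw_rate / 86400.0);   // ~11.6 days
        std::printf("hardware: about %.1f minutes\n",
                    target_cycles / hw_rate / 60.0);      // ~16.7 minutes
        return 0;
    }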

Better yet, the systems accept multiple plug-in modules. Typically, the modules contain such standard IP as a bonded-out processor core or an FPGA that simulates other IP to be incorporated into a design. These modules are synchronized with the FPGA implementation. They reduce the number of FPGAs needed to utilize a design, allowing larger SoCs to be simulated. Plus, they provide a well-tested environment that matches what will be implemented in the final chip.

IKOS has a Translation Interface Portal (TIP) product. TIP allows the hardware model to be linked to a software model. In this case, part of the design is simulated in software while the other part is simulated in hardware. TIP's data-streaming support allows high-performance verification with real-world data that might be generated by hardware or software, like a network interface. But generally speaking, even using hardware simulation, co-verification can take quite a bit of time.

Platform-Based Design

Building an SoC beginning with a standard set of IP is frequently called platform-based design. A typical IP collection with this approach is a processor core, a block of memory, and possibly a number of peripheral, interrupt, or DMA controllers.

The advantage of this approach is twofold. First, the base is well defined. Second, the base is typically well tested and doesn't require any verification other than what's necessary to test the linkage between it and additional IP.

CoWare's napkin-to-chip (N2C) product approach is designed to formalize the platform-based approach. It allows an SoC design to be based on an existing platform, which can then be tuned by changing high-level parameters. For instance, the amount of on-board memory used in the design could be altered.

CoWare has a number of domain-specific designs. For example, a digital-camera platform exists, as well as an xDSL platform. Obviously, each uses a different collection of peripherals, but even here there's commonality, typically in the processor core.

The platform-based approach has another advantage with N2C. The N2C interface synthesis support generates the IP to be incorporated in the SoC design, plus the software device drivers required to access this IP.
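
For a sense of what such generated driver code can look like, here's a hedged sketch of a polled, memory-mapped driver for a hypothetical UART block, intended to run on the SoC itself (the register names, offsets, and base address are invented for illustration and aren't N2C output):

    // Memory-mapped driver for a hypothetical UART peripheral on the SoC.
    #include <cstdint>

    namespace uart {
        constexpr uintptr_t BASE     = 0x40000000;  // illustrative base address
        constexpr uintptr_t DATA     = BASE + 0x0;  // transmit data register
        constexpr uintptr_t STATUS   = BASE + 0x4;  // status register
        constexpr uint32_t  TX_READY = 1u << 0;     // transmitter-ready flag

        inline volatile uint32_t* reg(uintptr_t addr) {
            return reinterpret_cast<volatile uint32_t*>(addr);
        }

        inline void put_char(char c) {
            while ((*reg(STATUS) & TX_READY) == 0) { }  // poll until ready
            *reg(DATA) = static_cast<uint32_t>(c);      // write the character
        }
    }  // namespace uart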

SoC designs have normally been done in-house using local hardware and software. But, Virtio Inc. is out to change that. This company's web site provides an interface for creating a system design model using Virtio's Magic-C as the HDL. This model can then be tested online. The simulation runs on servers located at Virtio's site. Though the model is functionally accurate, it isn't necessarily timing accurate.

Virtio's service is free. The Citrix-based client interface can be downloaded from the site. The sales model for the site is based on licensing third-party IP that will be available through the site.

Virtio recognizes the need for in-house development tools, so it's marketing an intranet version of the system. It has the same flexibility, providing access to workstations on an intranet that typically operates at a much higher speed than an Internet connection. Furthermore, it addresses any potential security and secrecy issues.

While few organizations will be designing a large SoC using Virtio's site, it does open the door to smaller companies and startups. Those groups can develop small to medium-size SoCs or ASICs using the site. It minimizes up-front investment because there are no costs until IP is used to make a chip.

Major semiconductor and design-services companies, such as NEC Electronics Inc., are starting to utilize and support more robust co-design and co-verification tools. The fact that many are still building their suite of tools or evaluating third-party tools shows that this space is in flux.

For example, the company's ACE2 initiative began last year and won't be complete until 2002. NEC is working with third-party tools that will support the company's 800-series and MIPS microprocessor cores and its SPX DSP cores. NEC understands the importance of both software and hardware simulation and has developed emulation boards for a number of third-party hardware emulation tools.

Co-design and co-verification tools are becoming invaluable as SoC complexity grows. Hopefully, these tools will be up to the job when designers need them.

Companies Mentioned In This Report
Accellera
www.accellera.org

Aptix Inc.
(408) 428-6200
www.aptix.com

Atmel Corp.
(408) 441-0311
www.atmel.com

Cadence Inc.
(408) 943-1234
www.cadence.com

Chameleon Systems Inc.
(408) 730-3300
www.chameleonsystems.com

C-Level Design Automation
(408) 369-0555
www.cleveldesign.com

Co-Design Automation Inc.
(877) 626-3374
www.co-design.com

CoWare Inc.
(408) 748-2929
www.coware.com

CynApps
(408) 588-4000
www.cynapps.com

Cynergy Systems Design Inc.
(512) 338-0165
www.cae-plus.com

Frontier Design Inc.
(321) 728-7750
www.frontierd.com

IKOS Systems Inc.
(408) 255-4567
www.ikos.com

Mentor Graphics
(800) 547-3000
www.mentor.com

NEC Electronics Inc.
(408) 588-6000
www.necel.com

Open SystemC Initiative
www.systemc.org

Open Verilog International
www.ovi.org

Quickturn Inc.
(408) 914-6000
www.quickturn.com

Synopsys Inc.
(650) 584-5000
www.synopsys.com

SystemC Inc.
(408) 986-8000
www.systemc.org

Tensilica Inc.
(408) 986-8000
www.tensilica.com

Vanteon Corp.
(888) 506-5677
www.vanteon.com

VHDL International
(303) 530-4562
www.vhdl.org

Virtio Inc.
(408) 341-0844
www.virtio.com
