Last year, I predicted that two things would happen in 2012 (see “The Next Level Of Design Entry–Will 2012 Bring Us There?”). First, I suggested that the hybrid, combined use of TLM simulation and the various ways to execute RTL (including hardware-assisted verification) would find further adoption. Second, I thought that more TLM modeling would be used in the FPGA space, where both main vendors were providing virtual prototyping solutions for their new FPGAs containing dual ARM subsystems. So, what happened?
Hybrids In 2012
For the hybrid, combined use of execution engines for TLM and RTL, I compared four different engines to execute hardware to enable software development as early as possible: TLM simulation, RTL simulation, acceleration/emulation, and FPGA-based prototyping. The major tradeoffs between these engines are early time of availability, speed, accuracy of hardware execution, and ability to debug hardware and software. I predicted that we would see more and more hybrid uses of TLM- and RTL-based techniques in 2012.
I got this one right. User interest is significant, sparking Synopsys announcements of hybrids of TLM and FPGA-based prototyping at DAC, Cadence presentations at ARM TechCon and other venues about hybrid TLM and acceleration/emulation usage, and Eve papers about how its products are used with TLM simulation. And, of course, Synopsys contributed to further market consolidation in the hardware-assisted verification space by adding Eve’s FPGA-based system to its existing two FPGA-based systems from the previous ChipIT and Synplicity acquisitions.
Hybrid use will increase even more in 2013, and not only between TLM and RTL but across all engines. We already have users happily hot-swapping RTL execution back and forth between the Cadence Palladium XP Verification Compute Platform and Incisive RTL simulation. Acceleration, the combination of RTL simulation and emulation, is finding further adoption as users scramble for more execution cycles to increase verification coverage.
Cadence made specific announcements in this area with in-circuit acceleration, which combines the best of acceleration and emulation. Furthermore, we see users connecting FPGA-based prototypes containing the stable version of the hardware with the newer, still-to-be-debugged portion running in emulation. And even in the pure TLM space, users are adopting hybrids that swap between instruction-accurate and cycle-accurate processor models to trade off accuracy against execution speed.
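To make that last kind of swap concrete, here is a minimal C++ sketch of the idea (the names ProcessorModel, InstructionAccurateModel, CycleAccurateModel, and CpuState are hypothetical, not any vendor's API): both models implement the same interface, architectural state is handed over at a synchronization point, and only the timing fidelity changes.

```cpp
// Minimal sketch: two interchangeable processor models behind one interface,
// with architectural state handed over at a sync point so a run can trade
// speed (instruction-accurate) for accuracy (cycle-accurate) mid-simulation.
// All names and timing numbers are illustrative only.
#include <cstdint>
#include <iostream>
#include <memory>

// Architectural state that must survive a model swap.
struct CpuState {
    uint64_t pc = 0;
    uint64_t cycles = 0;   // cycle count is only an estimate in IA mode
};

// Common interface both models implement.
class ProcessorModel {
public:
    virtual ~ProcessorModel() = default;
    virtual const char* name() const = 0;
    virtual void run(CpuState& s, uint64_t instructions) = 0;
};

// Fast, functionally correct model: no pipeline timing, rough cycle estimate.
class InstructionAccurateModel : public ProcessorModel {
public:
    const char* name() const override { return "instruction-accurate"; }
    void run(CpuState& s, uint64_t instructions) override {
        s.pc += 4 * instructions;   // pretend every instruction is 4 bytes
        s.cycles += instructions;   // assume 1 cycle per instruction
    }
};

// Slow model that charges a pipeline/memory penalty per instruction.
class CycleAccurateModel : public ProcessorModel {
public:
    const char* name() const override { return "cycle-accurate"; }
    void run(CpuState& s, uint64_t instructions) override {
        s.pc += 4 * instructions;
        s.cycles += instructions * 3;   // stand-in for detailed timing
    }
};

int main() {
    CpuState state;

    // Boot quickly on the instruction-accurate model...
    std::unique_ptr<ProcessorModel> cpu = std::make_unique<InstructionAccurateModel>();
    cpu->run(state, 1000000);
    std::cout << "after " << cpu->name() << " phase: pc=" << state.pc
              << " cycles=" << state.cycles << "\n";

    // ...then swap to the cycle-accurate model at a sync point; the
    // architectural state carries over, only the timing detail changes.
    cpu = std::make_unique<CycleAccurateModel>();
    cpu->run(state, 10000);
    std::cout << "after " << cpu->name() << " phase: pc=" << state.pc
              << " cycles=" << state.cycles << "\n";
    return 0;
}
```

In practice, the swap point is typically chosen after the operating system has booted on the fast model, so that the cost of detailed timing is only paid during the phase of interest.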
In The FPGA Space
The second prediction has to do with TLM modeling in the FPGA space to enable early software development on the ARM-based dual-core subsystems for the complex FPGAs both big vendors provide. We certainly saw interest and adoption in some areas, especially around porting of operating systems and some driver bring-up. However, I had overlooked an adoption trend that in hindsight should have been obvious.
First of all, the ramp-up of the FPGAs themselves is still in its early stages. But in addition, a fair portion of the first wave of adoption of these FPGAs is in projects that consolidate into one chip what was previously two separate chips holding the processor subsystem and the hardware extensions. As a result, not that much new software development was necessary for these projects. Still, with further adoption in 2013, TLM modeling will become more important here as well.
Other Developments In 2013?
It is time for software-driven verification to be adopted at a faster rate. Previously I had suggested that embedded software running on the embedded processor would become an important part of test benches for hardware (see “2010 Will Change The Balance In Verification”). The emergence of hybrid execution, as discussed above, now enables a much more efficient set of choices for actually executing the embedded test software targeted at verification.
At this point, I have personally seen in customer projects a significant variety of combinations of processors executing the test software in conjunction with the hardware block to be verified, the design under test (DUT). Some users are developing verification scenarios even before RTL is available, on pure TLM-based virtual platforms using a virtual DUT, and then combining instruction-accurate or cycle-accurate processor models on the host with the DUT in RTL to refine those scenarios.
The same tests can then be executed in pure RTL simulation, with the processor now running in its RTL form, as well as in acceleration/emulation and FPGA-based prototyping to increase execution speed. The tests can even be executed on the actual silicon once it is back from production.
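What makes this reuse possible is that the test software touches the DUT only through register reads and writes. Here is a minimal, C-style C++ sketch of such an engine-portable test (the register names, addresses, expected values, and the RUN_ON_TARGET switch are purely illustrative, not taken from any real design): only the two access functions change when moving from a TLM virtual DUT to RTL simulation, emulation, FPGA-based prototyping, or silicon.

```cpp
// Minimal sketch of an engine-agnostic test for a DUT. The scenario only
// touches the DUT through reg_read/reg_write, so the same source can run on
// a host against a virtual DUT or be cross-compiled for the embedded
// processor; only the access layer below is reimplemented per target.
#include <cstdint>
#include <cstdio>
#include <map>

// --- thin access layer: swapped out per execution engine --------------------
#ifdef RUN_ON_TARGET
// On the real chip or an FPGA prototype, the registers are memory-mapped.
static void reg_write(uint32_t addr, uint32_t data) {
    *reinterpret_cast<volatile uint32_t*>(addr) = data;
}
static uint32_t reg_read(uint32_t addr) {
    return *reinterpret_cast<volatile uint32_t*>(addr);
}
#else
// On the host, a trivial stand-in for a TLM virtual DUT keeps the test runnable.
static std::map<uint32_t, uint32_t> virtual_dut;
static void reg_write(uint32_t addr, uint32_t data) { virtual_dut[addr] = data; }
static uint32_t reg_read(uint32_t addr) { return virtual_dut[addr]; }
#endif

// --- hypothetical DUT register map -------------------------------------------
constexpr uint32_t DUT_CTRL   = 0x40000000;
constexpr uint32_t DUT_STATUS = 0x40000004;

// --- the verification scenario itself: identical on every engine -------------
int main() {
    reg_write(DUT_CTRL, 0x1);                 // enable the block
    uint32_t status = reg_read(DUT_STATUS);   // the host stand-in returns 0
    std::printf("DUT status after enable: 0x%08x\n", status);
    return (status == 0x0) ? 0 : 1;           // expected value is illustrative
}
```

The same scenario source can therefore be exercised against a virtual DUT long before RTL exists and later moved to the other engines, and eventually to silicon, without touching the test logic itself.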
The processor that executes the tests may be a processor available in the system anyway, but more and more users are considering a dedicated, separate processor for those tests, which, if implemented in the actual silicon, would look like a very advanced built-in self-test (BIST) module. One use model for the setups described is to verify a DUT in hardware using C-based test benches as opposed to the e language or SystemVerilog. Another use model is to model the system environment with connections like USB, PCI, and MIPI for chip interfaces, which is effectively a form of virtualization of the system environment in which the chip resides.
With further adoption of TLM-based design in all areas including FPGAs, more effective hybrid connections between the TLM and RTL execution engines, and a push in software-driven verification, 2013 promises to be an exciting year.