For my first chip design in 1992, which was part of an HDTV encoder, I had to complete a layout that optimized the area for memories. I designed mostly at the gate level and used the register transfer level (RTL) to verify the connections between six chips. At the time, the war between languages to express the transfer between registers was raging. Remember HiLo, DABL, LASAR, UDL/I, and n.2? You just dated yourself.
The industry eventually adopted Verilog and VHDL. Years later, Verilog was extended upward in abstraction with SystemVerilog. While VHDL users claimed that Verilog was finally catching up, the “System” in SystemVerilog led users to believe that there might be overlap with SystemC, which had emerged by then.
Both improvements, first getting to RTL and then extending RTL with transactions and advanced verification capabilities, had something in common: different technologies ended up enabling very similar capabilities. The technologies competed with each other, and standardization eventually smoothed some of the process. In the end, though, users faced competing technologies and had to make a choice.
Today’s Landscape
The discussion has since moved to the system level, and more and more functionality has moved into software. One of the big issues for development teams is how best to enable software development and perform true system-level verification of designs that include complex systems-on-a-chip (SoCs) placed on very complex printed-circuit boards (PCBs), with lots of interfaces to the outside world and, of course, hugely complex software to execute.
During the development process, especially for derivative designs, developers end up managing a large collection of intellectual-property (IP) assets for the individual blocks that make up their design. They will have high-level models in C or SystemC for the new portions of the design, or will start with a high-level model for a green-field development.
RTL in Verilog, VHDL, or SystemVerilog is available for pretty much all the licensed IP, as well as for parts reused from previous designs. Instrumentation such as assertions, monitors, and checkers provides the connection to RTL simulation and acceleration. Other assets include test chips of the system environment connected through in-circuit emulation, as well as connections to real-world interfaces like USB, PCI Express, and MIPI.
All of these IP assets representing parts of the design are available, targeted at different technologies and different system-level verification environments. Virtual prototyping is based on abstracted models that enable early software development and the development of test benches before the design under test is even implemented in RTL.
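To make that concrete, here is a minimal sketch of what such an abstracted model can look like: a loosely timed SystemC/TLM-2.0 register block that firmware could program long before any RTL exists. The module name, register map, and 10-ns access latency are illustrative assumptions, not any particular IP or tool flow.

#include <cstdint>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>

// Behavioral stand-in for a hypothetical timer peripheral in a virtual prototype.
struct TimerModel : sc_core::sc_module {
    tlm_utils::simple_target_socket<TimerModel> socket;  // bus-facing socket
    uint32_t ctrl  = 0;  // assumed control register at offset 0x0
    uint32_t count = 0;  // assumed counter register at offset 0x4

    SC_CTOR(TimerModel) : socket("socket") {
        // Blocking transport calls (reads/writes from a CPU model) land here.
        socket.register_b_transport(this, &TimerModel::b_transport);
    }

    // Loosely timed access handler; assumes 32-bit, single-word transactions.
    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        uint32_t* data = reinterpret_cast<uint32_t*>(trans.get_data_ptr());
        switch (trans.get_address()) {
            case 0x0: if (trans.is_write()) ctrl = *data; else *data = ctrl; break;
            case 0x4: if (trans.is_read())  *data = count;                   break;
            default:
                trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
                return;
        }
        delay += sc_core::sc_time(10, sc_core::SC_NS);  // notional access latency
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

In a virtual prototype, a CPU or instruction-set-simulator model would issue its reads and writes through the socket, so drivers and boot code can be brought up against this behavioral stand-in and later run against the RTL.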
Several engines allow the execution of RTL once development has reached that step. Traditional RTL simulation runs on Linux hosts and is limited in speed. But because it executes in software, it allows very advanced instrumentation, monitoring, and checking with assertions and advanced test benches.
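As an illustration of that instrumentation, here is a minimal sketch of a checker written as a SystemC module that could be bound to signals of the simulated design, assuming a simulator that supports mixed SystemC/RTL simulation. The signal names and the valid/ready handshake rule are hypothetical, chosen only to show the pattern.

#include <systemc>
using namespace sc_core;

// Monitor bound to signals of the simulated design; flags a protocol violation.
SC_MODULE(HandshakeChecker) {
    sc_in<bool> clk, valid, ready;

    SC_CTOR(HandshakeChecker) {
        SC_METHOD(check);
        sensitive << clk.pos();   // evaluate the rule on every rising clock edge
        dont_initialize();
    }

  private:
    bool pending = false;         // a transfer was offered but not yet accepted

    void check() {
        if (pending && !valid.read()) {
            SC_REPORT_ERROR("HandshakeChecker",
                            "'valid' dropped before 'ready' accepted the transfer");
        }
        // The transfer stays pending until valid and ready are seen together.
        pending = valid.read() && !ready.read();
    }
};

Because everything executes in software, a violation like this can stop the run immediately with full visibility into the surrounding state, which is exactly what becomes harder as execution moves into hardware-assisted platforms.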
Once the RTL is available and reasonably stable, simulation acceleration speeds up at least the execution of the design under test. Depending on the interface methods between the hardware-accelerated portion and the test bench, execution speeds in the 10-kHz range, or multiples of it, can be reached. Because simulation is still involved in test-bench execution, debug and analysis remain very advanced.
Later in the flow, in-circuit emulation executes the chip, or at least large portions of it, in the context of the system environment. The chip is connected to that environment using rate adapters, often called “speed bridges.” Now we’re talking about execution speeds in the megahertz range. But when a bug is identified, users often need to reproduce it in simulation acceleration, or even plain simulation, to use the more advanced analysis capabilities to zero in on its root cause.
Finally, FPGA-based prototyping also executes the RTL, but in most cases it’s used less for verification than for software development. It runs in the range of tens of megahertz, which is great for software, but its analysis and debug capabilities are pretty limited. Bugs again need to be reproduced in other environments if their root causes are to be analyzed.
Competition Versus Cooperation
All of these technologies enable environments that are used for system-level verification and software enablement. But are they really competing? I would argue that they are not! They all have their individual strengths and weaknesses. In addition, the complexity of modern designs has made it infeasible to completely model all components of a system for each target environment.
The cost, effort, and required elapsed project time are simply too high. Reproducing a bug found in a faster environment in another, slower environment, just to get at the analysis capabilities needed to find its root cause, has become a huge problem for the customers I talk to. Finally, the cost of maintaining all of these environments has become too high.
So what’s the solution? We as an industry need to work toward a heterogeneous, system-level verification environment that lets users use the individual IP assets they have as-is, instead of requiring a lot of remodeling. We need to remove the need to reproduce bugs found in one environment in another, and instead enable closer connections between the different verification environments, effectively combining their respective advantages.
Looking back to the days of competing languages for expressing register transfers, it’s clear that system-level verification has no single panacea that can be worked out by way of competition. It will require cooperation between environments and technologies to give development teams the best of each individual world.