Verify SoCs Faster And More Predictably With SystemVerilog And Constrained-Random Stimuli

March 5, 2008

Verifying the integration and operation of new IP in a legacy system-on-a-chip (SoC) is challenging, particularly when the legacy SoC environment was built using a directed-test methodology and validation of the new IP requires corner-case stimulus to achieve the required functional coverage. As a result, it becomes necessary to upgrade the legacy environment to support the generation of constrained-random and directed stimuli, functional coverage, and automated checking at the block level using reference models. The goal is to achieve all of this with minimal verification effort by adopting a methodology built around these capabilities.

This article covers the steps taken to develop an environment that supports these capabilities using the methodology detailed in the SystemVerilog Verification Methodology Manual (VMM). It details how a legacy test environment was seamlessly integrated into the new environment using the SystemVerilog VMM methodology. In addition, it discusses the integration of block-level reference models and assertions, along with functional coverage for measuring the adequacy of test cases.

The Challenge
At TI’s Connectivity Interface Solution (CIS) Group, validation of an earlier SoC project (the TSB43DA42) was accomplished using directed test cases. In a directed test environment, each verification item (scenario) from the verification plan is converted into a test. One advantage of a directed-test scheme is that project progress is almost linearly proportional to the amount of time spent on the project, and therefore somewhat predictable. The corresponding disadvantage is that each new feature or variation to be tested requires writing a new test.

Pressed to integrate new IP embodying security functionality into the same SoC environment, and given limited time to complete the verification task, the CIS digital design team faced a clear challenge. The team realized that constrained-random, coverage-driven testing was a must, and it was immediately obvious that the SystemVerilog VMM methodology was the answer to achieving CIS’s goal. Using this methodology, it would no longer be necessary to individually implement and verify each scenario in the verification plan, and we could therefore expect an overall productivity boost.

Although we knew moving to new methodologies could help verification productivity, efficiency, reusability, and completeness, we were concerned about the time needed to learn a new language, methodology, and tools. We were also concerned that much of the legacy verification environment might have to be recreated to fit the new methodology. Because time was a big constraint for validating the new IP, reuse of legacy verification components was essential.

Out With The Old
The TSB43DA42 SoC is a 1394 link layer and integrated physical layer device designed for digitally interfacing advanced audio/video electronics applications. It supports formatting and transmission of IEC61883 data as well as standard 1394 data types, such as asynchronous streams and PHY packets.

TI’s CIS Group’s 1394 devices are used in digital audio/video consumer products, including HDTVs, set-top boxes, audio/video receivers and DVD audio. Beyond individual audio/video applications, these devices are used in home-networking products using the 1394 protocol (Fig. 1).

The legacy TSB43DA42 verification environment consists of a Verilog device model (RTL or gates) and Verilog testbenches (primarily written in the form of Verilog tasks and functions). The basic structure of this verification environment illustrates the relationships between the various models and testbenches (Fig. 2).

In this environment, we had 600 directed test cases. First, each test, based on the functional requirements, set up the configuration register model. Next, based on the test configuration, one or more generators were activated. These Verilog generators produced a fixed data pattern and sent that pattern to the appropriate bus-functional model (BFM) and to the pre-programmed verification scoreboards. Finally, the BFM picked up the generated data and drove the DUT input signals.
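
As a rough sketch of this flow, the fragment below shows the shape of a directed test built from Verilog tasks. The task and signal names (sb_expect, hsdi_bfm_send, tx_data) are illustrative only and not taken from the actual TSB43DA42 environment.

// Illustrative sketch of a legacy directed test: a fixed data pattern is
// pre-programmed into the scoreboard and then driven by a BFM task.
module legacy_directed_test;
  reg       clk;
  reg [7:0] tx_data;
  reg       tx_valid;

  // Pre-program the scoreboard with the expected data (illustrative).
  task sb_expect(input [7:0] data);
    begin
      $display("[SB] expecting 0x%0h", data);
    end
  endtask

  // Bus-functional model: drives the DUT input signals.
  task hsdi_bfm_send(input [7:0] data);
    begin
      @(posedge clk);
      tx_data  <= data;
      tx_valid <= 1'b1;
      @(posedge clk);
      tx_valid <= 1'b0;
    end
  endtask

  // Free-running clock.
  initial begin
    clk = 1'b0;
    forever #5 clk = ~clk;
  end

  // Directed test: a fixed pattern, one scenario per test.
  initial begin
    tx_valid = 1'b0;
    repeat (4) begin
      sb_expect(8'hA5);       // tell the checker what to expect
      hsdi_bfm_send(8'hA5);   // drive the fixed pattern into the DUT
    end
    #50 $finish;
  end
endmodule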

Based on what we had heard about the VMM and its recommendation regarding a layered architecture, we realized that the legacy environment was already structured appropriately but was lacking a constrained-random methodology. Now the challenge for us was to take these legacy environment blocks and incorporate them into the VMM methodology and, at the same time, use SystemVerilog/VMM constrained-random generators for test generation.

To make the best use of some of the major components from the legacy environment and enhance the test generation to constrained-random methodology, we decided to keep most of the BFM and scoreboard functionality and their interfaces to generated objects. We changed only the test-generation portion of the environment.
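
To give a feel for what the new test-generation layer looks like, here is a minimal sketch of a VMM transaction class together with its generated channel and atomic generator. The hsdi_xfer class and its fields are hypothetical; the vmm_data base class and the `vmm_channel/`vmm_atomic_gen macros are the standard VMM mechanisms for creating the transaction FIFO and the constrained-random generator.

`include "vmm.sv"

// Hypothetical transaction describing one HSDI transfer.
class hsdi_xfer extends vmm_data;
  static vmm_log log = new("hsdi_xfer", "class");

  rand bit [7:0] data;
  rand bit [3:0] channel_id;
  rand bit       is_stream;

  constraint c_legal_channel { channel_id inside {[0:7]}; }

  function new();
    super.new(this.log);
  endfunction

  // A complete VMM transaction would also override copy(), compare()
  // and psdisplay(); they are omitted here for brevity.
endclass

// These macros generate hsdi_xfer_channel (the transaction FIFO) and
// hsdi_xfer_atomic_gen (the constrained-random generator) used later.
`vmm_channel(hsdi_xfer)
`vmm_atomic_gen(hsdi_xfer, "HSDI transfer")

The generator randomizes hsdi_xfer objects and pushes them into the channel; the reused BFMs and scoreboards simply consume whatever arrives on the other end.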

Moving Forward
From experience on earlier projects, we knew the potential benefits of a constrained-random, coverage-driven verification methodology. Before fully adopting the new methodology, we talked to several other groups within TI that had experimented with the new SystemVerilog language extensions to Verilog, and found that they had a positive experience.

This further solidified our confidence in adopting VMM, which mattered because there was no room for mistakes: this was a very sensitive customer project with a focus on first-pass success. Although the methodology, based on the book “Verification Methodology Manual for SystemVerilog” (www.vmm-sv.org), was built upon years of experience with several verification languages, its use within TI was relatively new.

We got some assistance from Synopsys’ local applications consultants and engineers, who provided us with a VMM starter kit and helped us modify this generic VMM environment to fit our specific modeling and integration needs. We were able to quickly build a simple high-speed data interface (HSDI) loop-back test to check basic data generation and monitoring. A very important element in building this test environment was scoping the tasks and coming up with the work breakdown structure. A brainstorming session between the design team and Synopsys AEs proved very fruitful in achieving this milestone.
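
The loop-back environment itself follows the standard vmm_env phases (gen_cfg, build, start, wait_for_end, and so on). The skeleton below, building on the hsdi_xfer sketch above, shows roughly how the generator and its channel were instantiated; the class and member names are illustrative, not taken from the starter kit.

// Skeleton of a VMM environment for the HSDI loop-back test.
class hsdi_env extends vmm_env;
  hsdi_xfer_channel    gen_chan;  // generator-to-driver channel
  hsdi_xfer_atomic_gen gen;       // constrained-random generator

  virtual function void build();
    super.build();
    this.gen_chan = new("HSDI gen channel", "");
    this.gen      = new("HSDI atomic gen", 0, this.gen_chan);
    this.gen.stop_after_n_insts = 100;  // number of random transactions per run
    // Drivers and monitors wrapping the legacy BFMs are built here as well
    // (see the transactor sketch further below).
  endfunction

  virtual task start();
    super.start();
    this.gen.start_xactor();
  endtask

  virtual task wait_for_end();
    super.wait_for_end();
    this.gen.notify.wait_for(hsdi_xfer_atomic_gen::DONE);
  endtask
endclass

// A basic test simply runs the environment end to end.
program hsdi_loopback_test;
  hsdi_env env = new();
  initial env.run();
endprogram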

We then built up the HSDI and 1394 data-transactor components to create a flexible, powerful testbench environment. Because the legacy Verilog verification environment was built with a layered structure, we were able to reuse all of the signal-level drivers and monitors virtually without modification. With the new VMM components, we replaced the legacy data generation and high-level transactor functions, while the legacy BFMs were reused (Fig. 3).
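
A sketch of the glue between the two layers: a thin vmm_xactor pulls randomized transactions from the channel and hands each one to the reused signal-level BFM code, shown here behind a hypothetical SystemVerilog interface. Only the vmm_xactor and channel usage follows the VMM library; the interface, task, and signal names are illustrative.

// Hypothetical interface wrapping the reused legacy BFM behavior.
interface hsdi_if (input bit clk);
  logic [7:0] tx_data;
  logic       tx_valid;

  // Signal-level send task, reused essentially as-is from the old environment.
  task send(input logic [7:0] data);
    @(posedge clk);
    tx_data  <= data;
    tx_valid <= 1'b1;
    @(posedge clk);
    tx_valid <= 1'b0;
  endtask
endinterface

// Thin VMM transactor: pulls transactions and calls the legacy BFM task.
class hsdi_driver extends vmm_xactor;
  hsdi_xfer_channel in_chan;   // filled by the atomic generator
  virtual hsdi_if   vif;

  function new(string inst, virtual hsdi_if vif, hsdi_xfer_channel in_chan);
    super.new("hsdi_driver", inst);
    this.vif     = vif;
    this.in_chan = in_chan;
  endfunction

  virtual task main();
    hsdi_xfer tr;
    super.main();
    forever begin
      this.in_chan.get(tr);     // blocking get from the generator's channel
      this.vif.send(tr.data);   // legacy BFM behavior drives the DUT pins
    end
  endtask
endclass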

The legacy environment included a basic scoreboard, but it needed to be updated to work with the new VMM environment. In addition, the new IP included a cycle-accurate reference model in Verilog, a reference model written in C, and a set of assertions to check the block-level functionality. Following the VMM methodology, we were able to incorporate these block-level checks in the chip-level verification environment, providing hooks into the top-level scoreboard to report any failures. The legacy scoreboard, which checked the still-valid older features, was wrapped by a higher-level VMM scoreboard to provide final reports.
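
As an illustration of how such block-level checks plug in, the fragment below binds a simple assertion module onto the security block so that any violation is reported to the simulation log and can be picked up by the scoreboard hooks. The module, instance, and signal names are purely illustrative.

// Illustrative block-level checker; signal names are hypothetical.
module sec_blk_checks (input logic clk, rst_n, req, gnt);
  // A grant must never appear without a pending request.
  property p_gnt_needs_req;
    @(posedge clk) disable iff (!rst_n) gnt |-> req;
  endproperty

  a_gnt_needs_req: assert property (p_gnt_needs_req)
    else $error("sec_blk: grant asserted without a request");
endmodule

// Bound onto the (hypothetical) security block so the chip-level
// environment sees the failure without modifying the RTL itself.
bind sec_block sec_blk_checks u_sec_checks (
  .clk(clk), .rst_n(rst_n), .req(req), .gnt(gnt)
);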

As with any constrained-random test environment, functional coverage measurement was required to ensure that the generated data was exercising the desired functionality. SystemVerilog and VMM offered powerful capabilities for collecting coverage data, while the VCS simulator provided HTML-based tools to visualize the achieved coverage and the coverage holes (Fig. 4).
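
To give a flavor of the coverage model, the sketch below samples the hypothetical hsdi_xfer transaction introduced earlier; the coverpoints and bins are illustrative, not the actual TSB43DA42 coverage plan.

// Illustrative functional coverage on generated HSDI transactions.
class hsdi_cov;
  hsdi_xfer tr;

  covergroup cg_hsdi;
    cp_channel : coverpoint tr.channel_id {
      bins low  = {[0:3]};
      bins high = {[4:7]};
    }
    cp_stream : coverpoint tr.is_stream;
    // Cross coverage catches corner cases such as streams on the upper channels.
    cx_chan_stream : cross cp_channel, cp_stream;
  endgroup

  function new();
    cg_hsdi = new();
  endfunction

  // Called for every observed transaction, e.g. from a monitor callback.
  function void sample_xfer(hsdi_xfer t);
    this.tr = t;
    cg_hsdi.sample();
  endfunction
endclass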

Because the primary goal of this new environment was to test the new functionality added to the SoC, a coverage model was built that emphasized these new features. Initial testing was done with relatively few constraints on the random data generation, and a large portion of the coverage goals were achieved in this way. To hit the few remaining coverage items, VMM provided a straightforward way to add more constraints to generate “directed-random” transactions from the test-writer level, without modification of the base environment.
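
The mechanism is plain constraint layering: a test extends the transaction class with an extra constraint and hands the extended object to the generator through its randomized_obj hook, the standard VMM way to bias generation without touching the base environment. The constraint below, which steers generation toward the hypothetical upper channels, is illustrative only.

// Test-level "directed-random" transaction: same fields, one extra constraint.
class hsdi_xfer_upper_ch extends hsdi_xfer;
  constraint c_upper_stream {
    channel_id inside {[4:7]};   // target the remaining coverage holes
    is_stream == 1'b1;
  }
endclass

// In the test, swap in the biased object before the run starts; the
// channels, drivers, and scoreboard of the base environment are untouched.
program hsdi_directed_random_test;
  hsdi_env env = new();

  initial begin
    hsdi_xfer_upper_ch biased = new();
    env.build();                       // creates the generator and channel
    env.gen.randomized_obj = biased;   // bias all subsequent generation
    env.run();                         // completes the remaining phases
  end
endprogram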
