June 2011—Volume 7, Issue 2

“As a special treat in this issue, we next introduce you to the Online UVM/OVM Methodology Cookbook.”

Tom Fitzpatrick, Editor and Verification Technologist

June 2011 Issue Articles

First Principles - Why Bother With This Methodology Stuff, Anyway?

Many of us are so used to the idea of “verification methodology,” including constrained-random stimulus and functional coverage, that we sometimes lose sight of the fact that there is still a large section of the industry to whom these are new concepts. Every once in a while, it’s a good idea to go back to “first principles” and understand how we got where we are and why things like the OVM and UVM are so popular. Both authors have found themselves trying to explain these ideas to colleagues, and we thought it might be helpful to document some of the discussions we’ve had. If you’re new to the idea of object-oriented testbenches in SystemVerilog, if you’re wondering what all the fuss over UVM at shows like DAC and DVCon is about, or if you’re getting ready to take that plunge, we think these ideas will help you “begin with the end in mind.” If you’re an “expert” at this stuff, we hope this dialog will help you take a step back, appreciate how far we’ve come as an industry, and remember not to get too hung up on the whiz-bang features of a methodology, but to keep in mind the ultimate goal: making sure that our chips are going to work properly.

Online UVM/OVM Methodology Cookbook - Registers/Overview

The UVM register model provides a way of tracking the register content of a DUT and a convenience layer for accessing register and memory locations within the DUT.

The register model abstraction reflects the structure of a hardware-software register specification, since that is the common reference specification for hardware design and verification engineers, and it is also used by software engineers developing firmware layer software. It is very important that all three groups reference a common specification and it is crucial that the design is verified against an accurate model.

The UVM register model is designed to facilitate productive verification of programmable hardware. When used effectively, it raises the level of stimulus abstraction and makes the resulting stimulus code straightforward to reuse, either when the DUT register address map changes or when the DUT block is reused as a sub-component.
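To make the abstraction concrete, here is a minimal sketch of how a sequence might program the DUT through a register model rather than through hard-coded bus addresses. The register block and register names (dut_reg_block, ctrl, status) are illustrative only, not taken from the article; the register model class itself is assumed to have been generated elsewhere from the register specification.

`include "uvm_macros.svh"
import uvm_pkg::*;

class cfg_seq extends uvm_sequence #(uvm_sequence_item);
  `uvm_object_utils(cfg_seq)

  dut_reg_block regmodel;  // hypothetical register model, assigned by the test

  function new(string name = "cfg_seq");
    super.new(name);
  endfunction

  task body();
    uvm_status_e   status;
    uvm_reg_data_t data;

    // Front-door write: the register layer turns this into bus transactions
    // through the register adapter, so the sequence never hard-codes the
    // DUT's address map.
    regmodel.ctrl.write(status, 32'h0000_0001);

    // Front-door read back through the same abstraction.
    regmodel.status.read(status, data);
  endtask
endclass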

A Methodology for Hardware-Assisted Acceleration of OVM and UVM Testbenches

A methodology is presented for writing modern SystemVerilog testbenches that can be used not only for software simulation, but especially for hardware-assisted acceleration. The methodology is founded on a transaction-based co-emulation approach and enables truly single-source, fully IEEE 1800 SystemVerilog-compliant, transaction-level testbenches that work for both simulation and acceleration. Substantial run-time improvements are possible in acceleration mode without sacrificing simulator verification capabilities and integrations, including SystemVerilog coverage-driven, constrained-random, and assertion-based techniques, as well as prevalent verification methodologies such as OVM and UVM.
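As a rough illustration of the transaction-based co-emulation idea (this is a generic pattern, not the article’s code, and all names are hypothetical), the sketch below separates a synthesizable BFM task in an interface, which can be mapped onto the emulator, from a class-based proxy driver that calls it at the transaction level, so only transactions cross the testbench/hardware boundary.

`include "uvm_macros.svh"
import uvm_pkg::*;

class bus_item extends uvm_sequence_item;
  `uvm_object_utils(bus_item)
  rand bit [31:0] addr;
  rand bit [31:0] data;
  function new(string name = "bus_item");
    super.new(name);
  endfunction
endclass

// HDL side: synthesizable BFM, suitable for mapping onto the emulator.
interface bus_bfm(input logic clk);
  logic [31:0] addr, data;
  logic        valid;

  // Transaction-level task callable from the testbench side, implemented
  // with simple clocked statements so it can execute in hardware.
  task automatic write(input logic [31:0] a, input logic [31:0] d);
    @(posedge clk);
    addr  <= a;
    data  <= d;
    valid <= 1'b1;
    @(posedge clk);
    valid <= 1'b0;
  endtask
endinterface

// HVL side: class-based proxy driver; identical in simulation and acceleration.
class bus_driver_proxy extends uvm_driver #(bus_item);
  `uvm_component_utils(bus_driver_proxy)

  virtual bus_bfm vif;  // bound to the BFM instance by the test

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);
      vif.write(req.addr, req.data);  // only transactions cross the boundary
      seq_item_port.item_done();
    end
  endtask
endclass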

Combining Algebraic Constraints with Graph-based Intelligent Testbench Automation

The Questa® inFact intelligent testbench automation tool has already been proven to help verification teams dramatically reduce the time it takes to reach their coverage goals. It does this by intelligently traversing a graph-based description of the test sequences, allowing the user to prioritize the input combinations required to meet the testbench coverage metrics while still delivering those sequences in a pseudo-random order to the device under test (DUT). The rule language, an extended Backus-Naur Form (BNF) used to describe the graph structure, has recently been enhanced with two powerful new features. Algebraic constraints can now be included to define relationships between the fields of the stimulus description (such as the fields of an OVM/UVM sequence item). Also, external testbench values can now be imported into the graph, allowing relationships to be defined between Questa inFact-generated field values and externally selected values. The Questa inFact algorithms can now target cross combinations of fields under its control with fields outside its control. This article describes these powerful new capabilities in more detail with some simple application examples.
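The inFact rule-language syntax itself is not reproduced here; as a rough illustration of what an algebraic relationship between stimulus fields means, the sketch below expresses an equivalent constraint in plain SystemVerilog on a hypothetical sequence item (the item and field names are not from the article).

`include "uvm_macros.svh"
import uvm_pkg::*;

class packet_item extends uvm_sequence_item;
  `uvm_object_utils(packet_item)

  rand int unsigned header_len;
  rand int unsigned payload_len;
  rand int unsigned total_len;

  // Algebraic constraints tying the three stimulus fields together.
  constraint c_len {
    total_len == header_len + payload_len;
    header_len inside {[4:16]};
    total_len  <= 1500;
  }

  function new(string name = "packet_item");
    super.new(name);
  endfunction
endclass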

Data Management - Is There Such a Thing as an Optimized Unified Coverage Database?

With the sheer volume of data produced by today’s verification environments, there is a real need for solutions that deliver both the highest capacities and the performance needed to access and analyze the data in a timely manner. There is no single coverage metric that can be used to measure functional verification completeness, and today’s complex systems demand multiple verification methods. This means there is a requirement not only to unify different coverage metrics but also to unify data from multiple tools and verification engines. Data management forms the foundation of any verification environment.

A Unified Verification Flow Using Assertion Synthesis Technology

As SoC integration complexity has grown tremendously over the last decade, traditional black-box, checker-based verification methodologies have failed to provide the observability needed. Assertion-based verification (ABV) [1] is widely recognized as a solution to this problem. ABV is a methodology in which designers use assertions to capture specific internal design intent or interface specifications and, through simulation, formal verification, or emulation of these assertions, verify that the design correctly implements that intent. Assertions actively monitor a design (or testbench) to ensure correct functional behavior. They detect design errors at their source, greatly increasing observability and reducing debugging effort.
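As a minimal, illustrative example of the kind of assertion the article describes (the module and signal names are hypothetical), the checker below flags a protocol violation at the cycle where it occurs rather than at a distant output:

module req_gnt_checker(input logic clk, rst_n, req, gnt);

  // Every request must be granted within four cycles.
  property p_req_gets_gnt;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] gnt;
  endproperty

  // Fires as soon as a request is not granted in the required window,
  // pinpointing the error at its source.
  a_req_gets_gnt: assert property (p_req_gets_gnt)
    else $error("req not granted within 4 cycles");

endmodule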

Benchmarking Functional Verification

This article describes “asureMark™”, the Functional Verification Capability Maturity Model (FV-CMM™) benchmarking process developed by TVS to help users measure the maturity of their verification processes and to provide a framework for planning improvements.

Universal Verification Methodology (UVM)-based SystemVerilog Testbench for VITAL Models

With the increasing number of different VITAL model families, there is a need to develop a base Verification Environment (VE) which can be reused with each new VITAL model family.

The UVM methodology applied to the SystemVerilog testbench for VITAL models should provide a single, unified VE. The reusability of such a UVM VE is the key benefit compared to the standard approach to verifying VITAL models (directed testing). It also incorporates the rule of the “4 Cs” (Configuration, Constraints, Checkers and Coverage). Thus, instead of writing specific tests for each DUT feature, a single test can be randomized and run as part of a regression, which speeds up the collection of functional coverage.
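A hedged sketch of that “single randomized test” idea is shown below: a configuration object is randomized and published to the environment on every regression run, so one test exercises many DUT features. All class and field names are illustrative, not from the article.

`include "uvm_macros.svh"
import uvm_pkg::*;

class vital_cfg extends uvm_object;
  `uvm_object_utils(vital_cfg)
  rand int unsigned num_transactions;
  rand bit          enable_timing_checks;
  constraint c_reasonable { num_transactions inside {[10:1000]}; }
  function new(string name = "vital_cfg");
    super.new(name);
  endfunction
endclass

class vital_random_test extends uvm_test;
  `uvm_component_utils(vital_random_test)
  vital_cfg cfg;

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    cfg = vital_cfg::type_id::create("cfg");
    if (!cfg.randomize())
      `uvm_fatal("CFG", "configuration randomization failed")
    // Make the randomized configuration visible to the whole environment.
    uvm_config_db#(vital_cfg)::set(this, "*", "cfg", cfg);
  endfunction
endclass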

The results show that the UVM VE, compared with a standard directed testbench, requires nearly equal time to develop. In return, it provides reusability and much faster verification of each new VITAL model. The changes needed are mainly in the test, where the appropriate configuration must be applied.

Efficient Failure Triage with Automated Debug - a Case Study

Functional debug is a dreadful yet necessary part of today’s verification effort. At the 2010 Microprocessor Test and Verification Workshop, experts agreed that debug consumes approximately one third of the design development time. Typically, debugging occurs in two steps: triage and root cause analysis. The triage step is applied when bugs are first discovered while debugging regression failures. This occurs at the chip, system, or sub-system level, where a verification engineer groups different failures together and performs an initial investigation to determine which engineers should deal with the bug. We refer to this as the triage step due to its similarity to how hospital emergency rooms assess incoming patients and determine the next step for treatment. Once the bug has been passed on, the root cause analysis step begins. Here, the engineer responsible for the bug determines its full cause and how to fix it. Both triage and root cause analysis must be performed accurately and efficiently for the debug process to be effective. This article focuses on the often neglected pain of triage. It presents a case study in which a UVM verification environment is enhanced with powerful automation tools to improve the overall debug effort. More specifically, the use of Vennsa’s OnPoint and Mentor Graphics’ Questa suite of tools can distinguish between multiple error sources in the design and the testbench, as well as reduce debugging time by combining different failures that share the same error source.

Are OVM & UVM Macros Evil? A Cost-Benefit Analysis

Are macros evil? Well, yes and no. Macros are an unavoidable and integral part of any piece of software, and the Open Verification Methodology (OVM) and Universal Verification Methodology (UVM) libraries are no exception. Macros should be employed sparingly to ease repetitive typing of small bits of code, to hide implementation differences or limitations among the vendors’ simulators, or to ensure correct operation of critical features. Although the benefits of the OVM and UVM macros may be obvious and immediate, benchmarks and recurring support issues have exposed their hidden costs. Some macros expand into large blocks of complex code that end up hurting performance and productivity, while others unnecessarily obscure and limit usage of otherwise simple, flexible APIs.

The `ovm_field macros in particular have long-term costs that far exceed their short-term benefit. While they save you the one-time cost of writing implementations, their run-time performance and debug costs are incurred over and over again. Consider the extent of reuse across thousands of simulation runs, across projects, and, for VIP, across the industry. These costs increase disproportionately with increased reuse, which runs counter to the goals of reuse.

In most cases, it takes a short amount of time and far fewer lines of code to replace a macro with a “direct” implementation. Testbenches would be smaller and run faster with much less code to learn and debug. The costs are fixed and up-front, and the performance and productivity benefits increase with reuse.
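As an illustration of the trade-off described above, using a hypothetical transaction and field names, the sketch below shows the same class written once with OVM field-automation macros and once with small, direct do_copy/do_compare implementations:

`include "ovm_macros.svh"
import ovm_pkg::*;

// Macro-based version: concise to write, but each `ovm_field macro expands
// into generic automation code executed on every copy, compare, and print.
class bus_txn_m extends ovm_sequence_item;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  `ovm_object_utils_begin(bus_txn_m)
    `ovm_field_int(addr, OVM_ALL_ON)
    `ovm_field_int(data, OVM_ALL_ON)
  `ovm_object_utils_end
  function new(string name = "bus_txn_m");
    super.new(name);
  endfunction
endclass

// Direct version: a few extra lines up front, but simple, fast code that is
// easy to read, debug, and reuse.
class bus_txn_d extends ovm_sequence_item;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  `ovm_object_utils(bus_txn_d)
  function new(string name = "bus_txn_d");
    super.new(name);
  endfunction

  function void do_copy(ovm_object rhs);
    bus_txn_d rhs_;
    super.do_copy(rhs);
    $cast(rhs_, rhs);
    addr = rhs_.addr;
    data = rhs_.data;
  endfunction

  function bit do_compare(ovm_object rhs, ovm_comparer comparer);
    bus_txn_d rhs_;
    $cast(rhs_, rhs);
    return super.do_compare(rhs, comparer) &&
           (addr == rhs_.addr) && (data == rhs_.data);
  endfunction
endclass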
