Horizons Newsletter - October 2008

“Hello and welcome back to Verification Horizons. Over the past few years, I’ve come to really enjoy this particular issue because it gives me a chance to reflect on everything that happened at DAC. Since DAC always seems to be the focal point of our industry’s year, I find it fascinating to see how things have progressed from year to year—within the industry at large and especially here at Mentor Graphics. The central focus of Mentor’s verification activities at DAC this year was our annual luncheon, at which we discussed the topic of “Innovations in Verification.”

Aside from giving me a chance to present to my largest audience of the year (this year, we had a standing-room-only crowd of over 300 attendees!), it’s a wonderful opportunity for me to immerse myself in the full spectrum of Mentor’s verification offerings. I then have the chance through this newsletter to share some of the same thoughts with our ever-expanding (over 35,000 and counting) online audience. So, this quarter’s issue will continue with the same theme.

Perhaps the greatest verification innovation to occur in the past year, if I may say so myself, is the Open Verification Methodology (OVM), which we released in partnership with Cadence back in January. The success of this partnership continues with the recent release of OVM 2.0, which strengthens many of the key features of OVM. Our first article, “A Quick Tour through OVM 2.0,” provides an overview of some of the new features of this exciting release.”

Tom Fitzpatrick, Editor and Verification Technologist

October 2008 Issue Articles

A Quick Tour Through OVM 2.0

Since the initial release of the Open Verification Methodology (OVM) back in January, development and collaboration between Mentor Graphics and Cadence have continued at a brisk pace. Through some great interaction with OVM users, and guided by the recently formed OVM Advisory Group (OAG), we’ve focused our recent efforts on the successful release of OVM 2.0, in which you’ll find many usability enhancements, bug fixes, and other general improvements. In addition to improving the code, OVM 2.0 also includes the eagerly awaited OVM User Guide, which provides step-by-step instructions and guidelines for applying OVM most effectively in the development of verification components, testbench environments, and general verification IP.

In addition to the User Guide, OVM 2.0 improves the implementation of OVM sequences by adding TLM interfaces and other ease-of-use features from OVM scenarios. This “unified sequences” implementation preserves the best of both sequential stimulus mechanisms while simplifying the use model and making sequence/driver interaction a conceptual superset of the existing TLM communication facility. We have also added explicit support in the OVM factory for the creation and overriding of parameterized classes and added much-requested debug methods to trace port/export connectivity throughout the testbench hierarchy.
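To give a flavor of the factory enhancement, here is a minimal sketch of overriding a parameterized class, following general OVM 2.0 conventions; the class names are invented for illustration and are not taken from the article:

    import ovm_pkg::*;
    `include "ovm_macros.svh"

    // A parameterized component registered with the factory.
    class my_driver #(int WIDTH = 8) extends ovm_component;
      `ovm_component_param_utils(my_driver#(WIDTH))
      function new(string name, ovm_component parent);
        super.new(name, parent);
      endfunction
    endclass

    // An error-injecting variant to swap in from a test.
    class my_error_driver #(int WIDTH = 8) extends my_driver#(WIDTH);
      `ovm_component_param_utils(my_error_driver#(WIDTH))
      function new(string name, ovm_component parent);
        super.new(name, parent);
      endfunction
    endclass

    // Called from a test's build() method, for example: every my_driver#(8)
    // created through the factory becomes the error-injecting variant,
    // with no edits to the environment code.
    function void apply_override();
      my_driver#(8)::type_id::set_type_override(my_error_driver#(8)::get_type());
    endfunction

On the debug side, OVM 2.0 adds tracing methods on ports and exports, such as debug_connected_to(), to print how a connection resolves through the testbench hierarchy.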

Request October issue today!

Using Questa inFact in an OVM Environment

The Open Verification Methodology (OVM) provides a framework that includes both a methodology and code libraries that a verification engineer can use to build a testbench that is modular, interoperable, and reusable. It includes mechanisms for creating the various components and other objects that are required and for communicating information between these components.

The Questa inFact® intelligent testbench automation tool allows the user to build testbench components whose activity is controlled by one or more rule graphs that define the verification scenarios and stimuli that are to be applied to the device under verification (DUV) or to one of its interfaces. At simulation run-time these rule graphs interact with the tool’s intelligent algorithms and possibly other testbench components to efficiently achieve the required functional coverage.

The compiled binary rule graphs, known as testengines, are defined by describing all the legal sequences of activity that form each verification scenario, with each step in the sequence corresponding to an action (in Questa inFact terminology). Each action in turn is linked to code in the form of a task or function in the high-level verification language (HVL) code. The description syntax is an extended BNF style and, therefore, provides a very compact representation of both sequential operations and parallel choices, as well as other higher level constructs, such as loops.
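To make the flavor of such a description concrete, here is a generic EBNF-style sketch of a scenario grammar. To be clear, this is illustrative only, not actual Questa inFact syntax, and the action names are invented:

    (* Each identifier stands for an action bound to a task or function
       in the HVL code. Generic EBNF, not inFact's actual syntax. *)
    scenario = setup , { transfer } , teardown ;    (* loop over transfers *)
    transfer = read | write | burst ;               (* parallel choices *)
    burst    = burst_start , data_beat , { data_beat } , burst_end ;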

Request October issue today!

Does Your RTL Simulation Accurately Predict the Behavior of Your Silicon?

For decades, designers have relied on software simulators to predict silicon behavior, so few question whether the underlying software models are sufficiently accurate to do so. In fact, today’s verification methodology is built on this foundation: simulate the logic to ensure the right function is being computed, then use static timing analysis to ensure the logic performs that function within the timing constraints.

So why question whether simulation accurately predicts silicon behavior? The fundamental reason is that the way engineers architect their designs today differs from the designs they did 10, or even 5, years ago. Clearly, designs are much larger and more complex, but that in itself does not cast doubt on whether software simulation is effective at predicting silicon behavior. The reason to revisit this question is that some of the fundamental assumptions that allow the “simulation + static timing” methodology to be effective no longer hold with today’s design styles.

Request October issue today!

Understanding and Debugging the Verification Environment

As more users deploy object-oriented design techniques to develop reusable, transaction-level dynamic verification components, it becomes increasingly important to provide debug and analysis capabilities that work at that level of abstraction and in verification environments composed of class-based objects. You must be able to debug at multiple levels of abstraction, using the same models at the transaction, register-transfer, and gate levels.

Transaction-level modeling (TLM), abetted by standards-based verification components, delivers the required capacity, visibility, and performance to thoroughly debug today’s extremely complex designs.
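As a small illustration of what transaction-level debug builds on, a transaction class can be made self-describing. The sketch below uses OVM’s convert2string() hook; the class and field names are invented:

    import ovm_pkg::*;
    `include "ovm_macros.svh"

    // A self-describing transaction: log messages and debug tools can
    // render it at the transaction level instead of as raw signal activity.
    class bus_txn extends ovm_sequence_item;
      rand bit [31:0] addr;
      rand bit [31:0] data;
      rand bit        is_write;
      `ovm_object_utils(bus_txn)
      function new(string name = "bus_txn");
        super.new(name);
      endfunction
      virtual function string convert2string();
        return $sformatf("%s addr=0x%08h data=0x%08h",
                         is_write ? "WR" : "RD", addr, data);
      endfunction
    endclass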

Request October issue today!

Processor Driven Verification - Use it for More Than Just Sign-off

Current techniques of applying test vectors from an HDL testbench only begin to mimic processor bus behavior. The introduction of processor-driven testbenches into the existing verification methodology enables real-world verification and extensive reuse of testbench software throughout the project.

One of the obstacles to effective use of processor-driven tests has been the difficulty of debugging software running on a processor inside a logic simulator. This paper presents proven debugging and trace techniques that overcome these limitations and enable the benefits of processor-driven tests throughout the various hardware and software integration stages of your project.

Request October issue today!

The Innovation Behind Veloce - Time for a Closer Look

While chip and verification complexity continue to grow, time-to-market pressure requires that chip verification be completed on schedule. To help their customers with these issues, emulation vendors face the challenges of delivering faster runtime performance, addressing capacity needs, supporting different stimulus sources and methodologies, and offering simulation-like ease of use and debug.

Providing the flexibility for users to employ multiple stimulus methodologies in the same project, or even on the same model, using a single emulator platform is an important capability. As verification moves to higher levels of abstraction, hardware-assisted verification tools must be able to handle verification environments built on these advanced methodologies.

Request October issue today!

Bringing Everyone on Board with OVM

The LSI Boulder verification team decided to verify their latest IP core with OVM and Questa. Although the verification team understood OVM well and could develop the primary constrained-random tests, it was necessary to develop an interface that test writers outside the verification team could use to develop their own tests. These test writers are experts in the design they verify, but they do not know OVM or SystemVerilog and do not have time to learn them.

This article describes an approach that preserves the flexibility and capability of the constrained-random environment while allowing directed or semi-directed tests to be developed quickly and easily by engineers who are not experts in OVM or SystemVerilog.
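The article’s interface could take several forms; purely as a sketch of the general idea, the OVM machinery can be hidden behind a plain task so that a directed test reads like simple procedural code. All names below are invented, and bus_txn stands for a hypothetical transaction class:

    import ovm_pkg::*;
    `include "ovm_macros.svh"

    // A single-item sequence that the wrapper drives under the hood.
    class wr_seq extends ovm_sequence #(bus_txn);
      bit [31:0] addr, data;
      `ovm_object_utils(wr_seq)
      function new(string name = "wr_seq");
        super.new(name);
      endfunction
      task body();
        bus_txn t;
        `ovm_create(t)
        t.addr = addr; t.data = data; t.is_write = 1;
        `ovm_send(t)
      endtask
    endclass

    // What a test writer actually calls: no OVM knowledge required.
    task automatic write_reg(ovm_sequencer #(bus_txn) seqr,
                             bit [31:0] addr, bit [31:0] data);
      wr_seq s = wr_seq::type_id::create("s");
      s.addr = addr;
      s.data = data;
      s.start(seqr);
    endtask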

Request October issue today!

A Practical Guide to OVM – Part 2

This article is the second of a series aimed at helping you get started with OVM in a simple, practical way. The emphasis will be on the steps you need to take to write working code.

In the previous article we explored how to hook up a class-based verification environment to a module-based DUT (design-under-test), and also looked at the structure of an OVM transaction and an OVM verification component. In this article we will look at assembling the verification components, running a test, and then reconfiguring the verification environment.

This article was written just as version 2.0 of the OVM class library was being released. The code you see in this article will run with both OVM-1.1 and OVM-2.0, and can be downloaded from the Doulos website at www.doulos.com/knowhow.
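As a reminder of the overall shape of such a testbench, here is a minimal sketch following standard OVM conventions (component names invented; see the article itself for the real code):

    import ovm_pkg::*;
    `include "ovm_macros.svh"

    class my_env extends ovm_env;
      `ovm_component_utils(my_env)
      function new(string name, ovm_component parent);
        super.new(name, parent);
      endfunction
      virtual function void build();
        super.build();
        // Child components are created here via the factory, e.g.
        // agent = my_agent::type_id::create("agent", this);
      endfunction
    endclass

    class my_test extends ovm_test;
      `ovm_component_utils(my_test)
      my_env env;
      function new(string name, ovm_component parent);
        super.new(name, parent);
      endfunction
      virtual function void build();
        super.build();
        env = my_env::type_id::create("env", this);
      endfunction
    endclass

    module top;
      initial run_test("my_test");  // or select with +OVM_TESTNAME=my_test
    endmodule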

Request October issue today!

Closed Loop Requirement Verification

This paper describes a new unifying methodology that brings closure to verification planning and execution. Using this methodology and its enabling tools, design and verification engineers and managers have a complete handle on the verification process. In a typical flow, a verification plan is created in Excel, Word, XML, or a similar format and imported into the simulation tool; as the verification process gets underway, synchronization with the original plan is at best done manually. Worse, it can be unclear whether the original requirements are being verified at all.

This new methodology and tool take away the guessing game. In one document, you describe the verification plan, check the latest verification status, and know when you are done and what you are done with. The methodology works with Mentor’s UCDB and advanced verification technologies.
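For context, plan items are typically linked to coverage objects whose results are collected into the coverage database. Here is a minimal, purely illustrative SystemVerilog covergroup of the kind a plan requirement might map to (names invented):

    // Illustrative only: a covergroup of the kind a verification-plan
    // requirement might be linked to in the coverage database (UCDB).
    module coverage_sketch(input logic       clk,
                           input logic       opcode,
                           input logic [4:0] burst_len);
      covergroup cg_bus_ops @(posedge clk);
        op       : coverpoint opcode    { bins read = {0}; bins write = {1}; }
        len      : coverpoint burst_len { bins single = {1}; bins burst = {[2:16]}; }
        op_x_len : cross op, len;
      endgroup
      cg_bus_ops cg = new();
    endmodule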

Request October issue today!

Concurrent Verification - Resolving Conflicting Goals

The purpose of this article is to point out a common issue that affects most ASIC development projects.

Since verification often consumes most of the man-hours of an ASIC development project, typically double those of the RTL implementation, engineers and managers are trying to get as much mileage and value out of the verification as possible. If the verification environment can also serve the needs of the designer in the early stages of RTL development, that is a big benefit.

Request October issue today!

Automated Register Abstraction Coming to an OVM Near You

Picture this: you’re heading a specialized verification team, and you’ve just been handed a new chip with 57 different IP blocks, 5,871 addressable registers, and 22,273 bit-fields. Oh great, you think, what is the probability of the documentation matching the design? And once you get them in sync, how do you keep them that way?

Not to worry, there is a solution: abstracting the registers in a manner that maintains their synchronization with the corresponding RTL code, documentation, firmware, and more. Register-abstraction tools, such as SpectaReg™ SystemVerilog Register Abstraction (SVRA™) from PDTi, auto-generate SystemVerilog modules to provide everything needed for working with registers from a TLM verification standpoint. Since they are auto-generated, you can be confident that the abstracted registers match the single source spec and derived RTL — when the spec changes, so does everything that depends on it.
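The generated code itself is tool-specific, but purely to illustrate what register abstraction buys you, a hand-written sketch of a single register model might look like the following. This is not actual SpectaReg/SVRA output; the register and its fields are invented:

    // Illustrative only: not actual SpectaReg/SVRA output.
    // One register from a hypothetical spec: CTRL at offset 0x10,
    // with fields enable[0] and mode[4:1].
    class ctrl_reg;
      bit [31:0] addr = 32'h0000_0010;
      rand bit        enable;
      rand bit [3:0]  mode;
      // Pack the named fields into the bus value and back, so testbench
      // code never deals in magic bit positions.
      function bit [31:0] pack();
        return {27'b0, mode, enable};
      endfunction
      function void unpack(bit [31:0] value);
        {mode, enable} = value[4:0];
      endfunction
    endclass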

Request October issue today!

Off to the Races with MVCs

I would like to share some of our team’s recent good experiences with the newly released MVC verification component library. We have found the MVCs to be extremely useful in solving customer verification problems.

The primary reason for using any verification IP is that it should save you the time and effort of developing and verifying a model for your verification environment. At the very least, it should come with a means of generating stimulus and a means of checking the protocol. We were not disappointed with the MVC verification IP library, since each VIP comes as a comprehensive kit of transactors, monitors, functional coverage monitors, responders, masters, slaves, stimulus classes, and a UCDB-ready functional coverage plan. There are also examples that illustrate how to use the MVC in different situations, and a number of the example classes provided can quickly be adapted to solve specific project verification problems. We have used MVCs in customer verification environments based on both the AVM and the OVM, as well as in traditional VHDL and Verilog verification environments.

Request October issue today!
