February 2009 - Volume 5, Issue 1

“In the winter in New England, it’s important to stay warm and be thankful for friendly neighbors and family to help in times of need. In a stormy economy, it’s even more important to protect yourself from the ‘cold’ of risk in your verification process.”

Tom Fitzpatrick, Editor and Verification Technologist

February 2009 Issue Articles

Achieving Higher Quality Design through Questa

The design and verification of a complex image processing system built from a variety of image enhancement techniques is a daunting task. The sheer complexity of the algorithms and their wide range of applications mean there are many scenarios and corner cases to verify in order to ensure the system behaves correctly.

Traditionally, such designs have been verified mostly with directed tests, and teams have been successful doing so. However, as systems grow more complex and time-to-market shrinks, this approach no longer scales: the complexity calls for more powerful verification technologies, while schedule pressure demands that risks be managed and mitigated. Recent advances in higher-level verification techniques aim to address these challenges, but a raw set of features such as constraints, coverage, and assertions does not appeal directly to design teams on its own, as they are left to ponder which one to use where.

This is where a tailored verification approach based on a proven, robust verification platform fits in. In this paper we share our recent experience verifying an image correction algorithm block using the sophisticated verification techniques offered by SystemVerilog together with a robust, easy-to-use verification platform: Questa.
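
To make this concrete, here is a minimal sketch of how SystemVerilog’s constrained-random and coverage features combine for such a block. The transaction class and pixel format are our illustrative assumptions, not details from the paper: a constraint biases stimulus toward the black and white extremes where correction algorithms tend to misbehave, and a covergroup confirms those corner cases were actually exercised.

    // Illustrative sketch only; names and pixel format are assumptions.
    class pixel_txn;
      rand bit [7:0] pixel;  // assumed 8-bit grayscale sample

      // Weight generation toward the extremes, where image-correction
      // logic most often misbehaves, while still covering the middle.
      constraint c_corners { pixel dist { 0 := 2, 255 := 2, [1:254] :/ 6 }; }

      // Functional coverage: prove the corner cases were actually hit.
      covergroup cg;
        coverpoint pixel {
          bins black = {0};
          bins white = {255};
          bins mid   = {[1:254]};
        }
      endgroup

      function new();
        cg = new();
      endfunction
    endclass

A testbench would call txn.randomize() followed by txn.cg.sample() for each generated pixel, then read the coverage report to see which bins remain unhit.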

Request February issue today!

Reduce the Risk of Expensive Post Silicon Debug

A 2007 study by Far West Research found that less than 30 percent of ASIC designs achieve first-silicon success, and that the majority of the failing chips have functional flaws. Hence over 70 percent of all ASICs require extensive debugging in the lab, often referred to as post-silicon debug, and anecdotal evidence suggests that for FPGAs this number is much higher.

Post-silicon debug is hard and costly. It requires expensive instrumentation, and observability is severely impaired compared to simulation, because only a few of the thousands of important internal signals are directly accessible during normal chip operation. As a result, post-silicon debug is often a labor-intensive process.

There is, however, a category of bugs that is even harder to find and debug post-silicon than “regular” functional bugs: bugs that manifest intermittently and are not repeatable. Sometimes they appear (or disappear) only after the chip has warmed up; other times they strike every now and then in a completely random fashion. They behave this way because they are caused by stochastic events deep inside the silicon. Compared to regular, repeatable functional bugs, locating and exterminating them is even more daunting, and a lengthy, unpredictable post-silicon debug process is almost a given. A large share of bugs in this category originate in faultily designed clock domain crossings.
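
To illustrate the root cause (this example is ours, not the article’s), the sketch below shows the classic single-bit crossing bug and its standard two-flop repair:

    // Illustrative only: a classic faulty clock domain crossing.
    module cdc_example (
      input  logic clk_b,   // destination clock domain
      input  logic data_a,  // single-bit signal launched from the clk_a domain
      output logic data_b
    );
      // Buggy version: sampling data_a directly in the clk_b domain can
      // go metastable, producing exactly the intermittent, temperature-
      // sensitive failures described above.
      //   always_ff @(posedge clk_b) data_b <= data_a;

      // Standard fix: a two-flop synchronizer gives metastability a full
      // cycle to resolve before the value is consumed.
      logic sync_ff1;
      always_ff @(posedge clk_b) begin
        sync_ff1 <= data_a;
        data_b   <= sync_ff1;
      end
    endmodule

Because the failure window depends on temperature and voltage, ordinary simulation rarely catches the buggy version, which is why dedicated CDC analysis pays off before tape-out.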

Request February issue today!

Mitigate Multi-Processor Synchronization Risks with Processor-Driven Verification

Multi-processor synchronization techniques are extensions of well-established single-processor, multi-threaded, software-based synchronization techniques. Verifying them requires a high level of concurrent visibility into both the hardware and the processor instruction logic.

The risks of verifying multi-processor synchronization hardware and processor instruction logic are best mitigated with a processor-driven verification methodology and supporting tools. The stimulus must come from the processor itself, in conjunction with the system-level testbench. Debug tools must be non-intrusive and provide concurrent visibility of the hardware and processor state of all processors in a multi-processor design.
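
One concrete way such an environment can expose synchronization failures (a minimal sketch with hypothetical signal names, not a description of any particular product) is a non-intrusive SystemVerilog assertion that watches the state of both processors at once:

    // Hypothetical signals: each input is asserted while that CPU is
    // executing its critical section.
    module mutex_checker (
      input logic clk,
      input logic cpu0_in_critical,
      input logic cpu1_in_critical
    );
      // Mutual exclusion must hold on every clock: the two processors
      // may never occupy the critical section at the same time.
      a_mutex: assert property (
        @(posedge clk) !(cpu0_in_critical && cpu1_in_critical)
      ) else $error("Mutual exclusion violated: both CPUs in critical section");
    endmodule

Because the checker only observes existing state, it satisfies the non-intrusiveness requirement above.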

Request February issue today!

Reusing Legacy VMM VIP in OVM Environments

The Open Verification Methodology (OVM) provides users with a proven methodology for creating modular, reusable verification components and testbenches that accelerate the verification task. With more than 12,000 downloads and 5,200 users on ovmworld.org, the OVM has taken the industry by storm since its open-source release just over a year ago.

Having been architected specifically to enable reuse from the block level to the system level and from project to project, the OVM provides the ideal level of flexibility and automation to simplify the creation of verification intellectual property (VIP). Clearly, verification teams who have taken a look at the OVM have liked what they’ve seen.

Even users of the older Verification Methodology Manual (VMM) have shown substantial interest in OVM. This led to the formation of Accellera’s Verification Intellectual Property Technical Subcommittee (VIP-TSC), chartered with standardizing a solution that enables interoperability between OVM and VMM. In response to the VIP-TSC’s approval of a set of interoperability requirements, Mentor Graphics released an open-source OVM/VMM interoperability library that meets them. The library includes a set of new OVM-based components and classes to handle synchronization and communication between OVM and VMM, an enhanced version of the open-source VMM release that provides IEEE 1800 compliance and additional interoperability infrastructure, and an extensive set of examples and HTML-based documentation.
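
The general shape of such a bridge component is easy to picture. The sketch below is our own illustration with hypothetical type and function names; it is not the interoperability library’s actual API. An OVM component pulls transactions from a legacy VMM channel and republishes them through an OVM analysis port:

    import ovm_pkg::*;
    `include "ovm_macros.svh"

    // Hypothetical sketch: my_txn_channel, my_vmm_txn, my_ovm_txn, and
    // convert_vmm_to_ovm() are assumed to be defined elsewhere.
    class vmm_bridge extends ovm_component;
      `ovm_component_utils(vmm_bridge)

      my_txn_channel vmm_chan;             // legacy VMM-side channel
      ovm_analysis_port #(my_ovm_txn) ap;  // OVM-side publication point

      function new(string name, ovm_component parent);
        super.new(name, parent);
        ap = new("ap", this);
      endfunction

      task run();
        my_vmm_txn v;
        my_ovm_txn o;
        forever begin
          vmm_chan.get(v);            // blocking get from the VMM channel
          o = convert_vmm_to_ovm(v);  // field-by-field conversion
          ap.write(o);                // broadcast into the OVM environment
        end
      endtask
    endclass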

Request February issue today!

A Practical Guide to OVM – Part 3

This is the third and final article of a series aimed at helping you get started with OVM in a simple, practical way. The emphasis of the series has been on the steps you need to take to write working code.

In the previous two articles we explored the overall structure of an OVM class-based verification environment and saw how to assemble OVM verification components, run tests, and reconfigure the verification environment. In this final article we explore sequences and sequencers, which are the main tools used to generate structured test stimulus.

The previous article was published just as version 2.0 of the OVM class library was being released. One of the most important innovations in OVM 2.0 was the revision and improvement of the sequence classes. This article describes the new, improved sequences available in OVM 2.0. The code you see in this article runs with OVM 2.0 and can be downloaded from the Doulos website at www.doulos.com/knowhow.
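
As a taste of what the article covers, here is a minimal OVM 2.0 sequence in the style it describes (the transaction fields and names are our own illustration, not code from the article):

    import ovm_pkg::*;
    `include "ovm_macros.svh"

    class bus_item extends ovm_sequence_item;
      rand bit [7:0] addr;
      rand bit [7:0] data;

      `ovm_object_utils_begin(bus_item)
        `ovm_field_int(addr, OVM_ALL_ON)
        `ovm_field_int(data, OVM_ALL_ON)
      `ovm_object_utils_end

      function new(string name = "bus_item");
        super.new(name);
      endfunction
    endclass

    // body() is the heart of an OVM 2.0 sequence: each item is created
    // via the factory, randomized, and handed to the driver through the
    // start_item/finish_item handshake with the sequencer.
    class bus_seq extends ovm_sequence #(bus_item);
      `ovm_object_utils(bus_seq)

      function new(string name = "bus_seq");
        super.new(name);
      endfunction

      task body();
        bus_item item;
        repeat (10) begin
          item = bus_item::type_id::create("item");
          start_item(item);
          assert(item.randomize());
          finish_item(item);
        end
      endtask
    endclass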

Request February issue today!

Are You in a “Bind” with Advanced Verification?

The first eight years of the 21st century have seen leading companies in our industry adopting new methodologies for verifying their designs.

These companies publish papers highlighting considerable benefits gained by implementing advanced verification technologies such as functional requirements tracking, Coverage Driven Verification (CDV), Assertion Based Verification (ABV), formal verification, constrained-random stimulus, and, most recently, the Open Verification Methodology (OVM). And yet a significant number of customers continue to use HDL-based testbench methods developed in the 1990s or earlier.

Why is this? The most common reason for keeping to traditional verification practices is the lack of an obvious migration path that allows risk-free adoption of new approaches. In this article we describe an approach that has provided such a migration path for many customers beginning to adopt advanced verification techniques. The solution presented combines Questa’s multi-language support with the Verilog bind mechanism.
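
For readers unfamiliar with it, the bind mechanism lets you attach a SystemVerilog checker to an existing HDL module without editing its source. The sketch below uses module and signal names we invented for illustration:

    // An assertion checker written separately from the legacy design.
    module fifo_checker (
      input logic clk,
      input logic full,
      input logic wr_en
    );
      // A write must never be attempted while the FIFO reports full.
      a_no_write_when_full: assert property (
        @(posedge clk) full |-> !wr_en
      ) else $error("Write attempted while FIFO full");
    endmodule

    // Instantiate the checker inside every instance of the legacy
    // 'fifo' module without touching the original HDL code.
    bind fifo fifo_checker u_fifo_checker (
      .clk   (clk),
      .full  (full),
      .wr_en (wr_en)
    );

This is what makes the migration path low-risk: the legacy testbench keeps running unchanged while assertions accumulate alongside it.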

Request February issue today!
