
June 2012—Volume 8, Issue 2

“On a recent visit to the Evergreen Aviation & Space Museum in Oregon, I had an opportunity to see some great examples of what, for their time, were incredibly complex pieces of engineering. ...those successes were the result of early failures where engineers learned the hard way...”

Tom Fitzpatrick, Editor and Verification Technologist

June 2012 Issue Articles

Mentor Has Accellera's Latest Standard Covered

If you can’t measure something, you can’t improve it. For years, verification engineers have used “coverage” as a way to measure the completeness of the verification effort. Of course, there are many types of coverage, from different types of code coverage to functional coverage, as well as many tools, both dynamic and static, that provide coverage information. Simply put, coverage is a way to count interesting things that happen during verification, and coverage is measured by correlating those observed events back to the list of things you wanted to happen (also called a verification plan). Meaningful analysis requires a standard method of storing coverage data from multiple tools and languages, which is what Accellera’s Unified Coverage Interoperability Standard (UCIS) finally delivers.
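As a purely illustrative sketch (the module, signal, and bin names below are hypothetical, not from the article), a small SystemVerilog covergroup shows what “counting interesting things” looks like in practice; the database it produces is the kind of coverage data UCIS lets different tools store and exchange.

// Illustrative only: a small covergroup that counts interesting events
// (operation types and responses) so they can be correlated back to
// items in the verification plan. All names here are hypothetical.
module coverage_sketch;

  typedef enum {READ, WRITE, BURST} op_e;

  op_e        op;
  logic [1:0] resp;

  covergroup bus_cg;
    cp_op   : coverpoint op;            // did we see every operation type?
    cp_resp : coverpoint resp {
      bins okay  = {2'b00};
      bins error = {2'b01, 2'b10};
    }
  endgroup

  bus_cg cg = new();

  // Sample on every transaction; the resulting coverage database is the
  // kind of data UCIS standardizes across tools and languages.
  task automatic record(op_e o, logic [1:0] r);
    op   = o;
    resp = r;
    cg.sample();
  endtask

endmodule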

Request issue today!

Is Intelligent Testbench Automation for You?

As with most emerging technologies, intelligent testbench automation (iTBA) yields better results in some applications than others. This article discusses where iTBA is best applied and will produce optimal results, as well as where it is not well suited and will not.

Request issue today!

Automated Generation of Functional Coverage Metrics

Verification teams are always under pressure to meet their project schedules, while the consequences of not adequately verifying the design can be severe. This leaves the team between a rock and a hard place. The main value of Questa inFact is to help meet schedule requirements by more efficiently, and more predictably, generating the tests needed to reach coverage goals when coverage metrics are used to determine ‘completeness’ of the verification project. An indirect benefit is that a verification team planning to use this capability can expand the scope of its functional coverage metrics, and can therefore expect to avoid the severe consequences of letting serious bugs slip through. In practice, however, this moves the bottleneck to the creation of those expanded coverage metrics. Adding many additional cross coverage goals is especially time consuming, and in my experience bugs are often introduced into the coverage scoreboard during this process. This issue has slowed the overall move to true coverage-driven verification in some organizations, forcing them to fall back on easier, but less reliable, metrics such as code coverage.
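For context, and purely as an invented sketch (the names are not from the article), cross coverage goals are where the hand-coding effort multiplies: two small coverpoints are 4 + 4 bins on their own, but crossing them creates 16, and each added dimension multiplies the count again.

// Illustrative only: expanding a coverage model with cross goals.
package cross_cov_sketch_pkg;

  covergroup config_cg with function sample(bit [1:0] mode,
                                            bit [1:0] burst_len);
    cp_mode  : coverpoint mode;
    cp_burst : coverpoint burst_len;

    // Expanded cross goal: every mode combined with every burst length.
    mode_x_burst : cross cp_mode, cp_burst {
      // Excluding illegal combinations is exactly the sort of detail
      // that is easy to get wrong when the model is written by hand.
      ignore_bins no_long_burst_in_mode0 =
        binsof(cp_mode) intersect {0} && binsof(cp_burst) intersect {2, 3};
    }
  endgroup

endpackage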

Request issue today!

Targeting Internal State Scenarios

The challenges inherent in verifying today’s complex designs are widely understood. Just identifying and exercising all the operating modes of such a design can be difficult. Creating tests that will exercise all these input cases is likewise challenging and labor-intensive. With a directed-test methodology, it is extremely hard to create sufficiently comprehensive tests to ensure design quality, given the engineering effort needed to design, implement, and manage the test suite. Random test methodology helps address the productivity and management challenges, since automation is leveraged more efficiently. However, ensuring that all critical cases are hit with random testing is difficult, due to the inherent redundancy of randomly generated stimulus.
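For readers less familiar with the trade-off, here is a generic sketch (invented names, not the article’s code) of constrained-random stimulus: the constraints encode legality once and the solver generates many legal inputs automatically, but nothing prevents it from revisiting cases that have already been covered.

// Illustrative only: constrained-random stimulus generation.
class bus_item;
  rand bit [3:0]  mode;        // one of the design's operating modes
  rand bit [31:0] addr;
  rand bit [7:0]  len;

  constraint legal_mode  { mode inside {[0:9]}; }
  constraint aligned     { addr[1:0] == 2'b00; }
  constraint mode_vs_len { (mode < 4) -> len <= 8; }
endclass

module random_stim_sketch;
  initial begin
    bus_item item = new();
    repeat (1000) begin
      if (!item.randomize()) $error("randomization failed");
      // Drive 'item' into the DUT here. Many of the 1000 items will
      // repeat cases already hit - the redundancy noted above.
    end
  end
endmodule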

Request issue today!

Mentor's VirtuaLAB & Veloce

With the majority of designs today containing one or more embedded processors, the verification landscape is transforming as more companies grapple with the limitations of traditional verification tools. Comprehensive verification of multi-core SoCs cannot be accomplished without including the software that will run on the hardware. Emulation has the speed and capacity to do this before the investment is made in prototypes or silicon.

This means that, theoretically speaking, emulation’s time has come. However, there has been an intractable and very practical barrier to that arrival: the high cost of emulators has made them affordable only for companies with deep pockets. Fortunately, recently introduced virtualization technologies are demolishing this barrier. It won’t be long before emulators are a common fixture at small and medium-sized companies, as well as larger ones. In this article we will look at how and why that is.

Request issue today!

On the Fly Reset

A common verification requirement is to reset a design part of the way through a simulation to check that it will come out of reset correctly and that any non-volatile settings survive the process. Almost all testbenches are designed to go through some form of reset and initialization process at their beginning, but applying reset at a mid-point in the simulation can be problematic. The Accellera UVM phasing sub-committee has been trying to resolve how to handle resets for a long time and has yet to reach a conclusion.
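As a plain-SystemVerilog sketch of the scenario being described (the signal names are hypothetical, and this deliberately sidesteps the UVM phasing question the committee is debating), the test simply reapplies reset partway through the run and then checks that the design recovers and sticky configuration survives:

// Illustrative only: applying reset in the middle of a simulation.
module reset_on_the_fly_tb;
  logic clk = 0;
  logic rst_n;
  logic [7:0] sticky_cfg;      // stands in for a non-volatile DUT setting

  always #5 clk = ~clk;

  task automatic apply_reset(int cycles);
    rst_n = 0;
    repeat (cycles) @(posedge clk);
    rst_n = 1;
  endtask

  initial begin
    apply_reset(4);                    // the usual power-on reset
    sticky_cfg = 8'hA5;                // program a setting that must survive
    repeat (200) @(posedge clk);       // normal traffic would run here

    apply_reset(4);                    // the mid-simulation, on-the-fly reset
    repeat (10) @(posedge clk);

    // In a real bench this value would be read back from the DUT.
    if (sticky_cfg !== 8'hA5)
      $error("sticky configuration lost across on-the-fly reset");
    $finish;
  end
endmodule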

Request issue today!

A Methodology for Advanced Block-Level Verification

Code reuse is increasingly common, mostly because it helps to make engineers more productive. Through reuse it is possible to use a single code base tuned to specific requirements in lieu of maintaining multiple code bases. A standard ASIC technique is to use strap signals along with register programming at the start of operation to generate a customized configuration. Designs on programmable fabric can take this one step further. The downloadable netlist can be configured by the use of parameters, which optimizes away unneeded logic at synthesis time. IP providers use this technique to generate optimized netlists for every customer with a single infrastructure.
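As an illustrative sketch (invented module and parameter names, not the article’s code), this is the kind of parameterization being described: the feature exists once in the shared code base, and configurations that do not enable it never elaborate the logic, so synthesis optimizes it away.

// Illustrative only: one code base customized per configuration.
module fifo_wrapper #(
  parameter bit HAS_ECC = 1'b0,       // optional feature
  parameter int DEPTH   = 16
) (
  input  logic        clk,
  input  logic        rst_n,
  input  logic [31:0] wr_data,
  output logic [31:0] rd_data
);

  // When HAS_ECC is 0 this block is never elaborated, so the netlist
  // delivered to that customer contains no ECC hardware at all.
  generate
    if (HAS_ECC) begin : g_ecc
      logic [6:0] syndrome;
      // ... ECC encode/decode logic ...
    end
  endgenerate

  // ... shared FIFO storage sized by DEPTH ...

endmodule

// Two customers, one infrastructure:
//   fifo_wrapper #(.HAS_ECC(1), .DEPTH(64)) u_full_featured ( /* ports */ );
//   fifo_wrapper #(.HAS_ECC(0), .DEPTH(16)) u_minimal       ( /* ports */ );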

Request issue today!

Prototyping MATLAB and Simulink Algorithms

Prototyping algorithms on FPGAs gives engineers increased confidence that their algorithm will behave as expected in the real world. In addition to running test vectors and simulation scenarios at high speed, engineers can use FPGA prototypes to exercise software functionality and adjacent system-level functions, such as RF and analog subsystems. Moreover, because FPGA prototypes run faster than simulation, larger data sets can be used, potentially exposing bugs that a simulation model would not uncover.

Class-Based SystemVerilog Debug

Debugging large testbenches has changed recently. Testbenches are larger than they used to be, and they are more like software than they used to be. In addition, they use the object-oriented constructs of the testbench language and may use a new library of verification components. Each of these characteristics adds to the pain of debugging the testbench, which must be done before debugging of the actual device-under-test can begin.
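To make that concrete, here is a generic fragment (invented, not from the article) of the object-oriented style involved: objects are constructed at run time and calls dispatch through virtual methods, so following a transaction means tracking class handles rather than tracing signals.

// Illustrative only: the class-based style that complicates debug.
class packet;
  rand bit [31:0] addr;
endclass

class driver_base;
  virtual task send(packet p);
    $display("base driver: sending to 0x%0h", p.addr);
  endtask
endclass

class retrying_driver extends driver_base;
  // Which send() runs depends on the object's run-time type, not on the
  // declared type of the handle - a frequent source of debug surprises.
  virtual task send(packet p);
    super.send(p);
    $display("retrying driver: retry logic engaged");
  endtask
endclass

module class_debug_sketch;
  initial begin
    retrying_driver rd;
    driver_base     d;
    packet          p;
    rd = new();
    d  = rd;                   // base-class handle to a derived object
    p  = new();
    void'(p.randomize());
    d.send(p);                 // dispatches to retrying_driver::send
  end
endmodule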

 