
Bug Prospecting with Coverage-Aware Stimulus

During discussions with customers about verification and functional coverage closure techniques, one thing that has consistently surprised me is how much reliance is sometimes placed on the unquantified benefits of random testing. There are many well-quantified benefits to constrained-random stimulus generation. Using algebraic constraints to declaratively describe a stimulus domain and automation to create specific stimulus dramatically boosts verification efficiency and results in verification of corner cases that humans would be unlikely to consider. More difficult to quantify, however, are the benefits of redundant stimulus. It is definitely beneficial in provoking some sequential behavior, but how much redundancy is enough and how much is just wasteful?

I work with a tool that accelerates functional coverage closure by efficiently targeting stimulus that will ‘cover’ the coverage model. The typical result is an order-of-magnitude improvement in time-to-coverage closure. The core productivity benefit of reducing time-to-coverage is typically understood by the customers I talk with. As designs have become more complex, verification requirements have increased, coverage models have grown, and coverage closure has become challenging, unpredictable, and time-consuming. So, some automation in achieving coverage closure is a welcome addition to one’s verification toolkit.

However, there is concern over what might be lost in the stimulus-optimization process. Is all that ‘redundant’ stimulus really important and not so redundant after all?
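To make the idea of a declaratively-described stimulus domain concrete, here is a toy sketch in Python (not inFact, and not SystemVerilog constraint syntax): a packet stimulus with a hypothetical constraint, the legal space enumerated from that constraint, and uniform random draws from it. The field names and the constraint itself are invented for illustration.

```python
import itertools
import random

# Toy stimulus item: a packet described by (length, priority).
LENGTHS = range(1, 65)      # 1..64 bytes
PRIORITIES = range(0, 4)    # 0..3

# A declarative constraint, standing in for the algebraic
# constraints the post describes: high-priority packets
# (priority >= 2) must be short (length <= 16).
def legal(length, priority):
    return priority < 2 or length <= 16

# The full legal stimulus space, enumerated up front.
space = [s for s in itertools.product(LENGTHS, PRIORITIES) if legal(*s)]

# Constrained-random generation: uniform draws from the legal space.
random.seed(0)
def random_stimulus():
    return random.choice(space)
```

A real constraint solver samples the space without enumerating it, but the contract is the same: every generated item satisfies the constraints, and the distribution over legal items drives which corner cases get hit.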


This concern seems quite valid. After all, coverage metrics are a subjective measure of verification quality. Achieving coverage of a specific set of coverage metrics proves that a given set of functionality was exercised, but doesn’t prove that this is the only functionality that needed to be exercised. The coverage model is a dynamic, moving target, and is likely to change several times across the typical verification cycle. The coverage model may be expanded as new features are added. The coverage model may be trimmed, or certain areas re-prioritized, as the schedule runs out or as certain coverage is deemed less important or cost-prohibitive. Finally, the coverage model may be expanded to include functional areas where bugs were found. From a coverage-driven verification perspective, verification that isn’t documented in the coverage model effectively does not exist. Verification that uncovered a bug should be repeated as the design is refined and changed. The only way to guarantee this happens is to augment the coverage model.


Across the verification cycle, two activities are taking place in parallel. Tests are added and simulation is run to target coverage closure. Meanwhile, bugs discovered during verification are analyzed to determine whether the coverage model should be expanded to functionality surrounding the bug – an activity I like to refer to as bug prospecting. As an example, let’s say we discover a bug with large packets in heavy traffic conditions. We might want to exercise different combinations of traffic conditions and small, medium, and large packets to see if the coverage model should be enhanced. Stimulus described using a declarative description, such as constraints, does a good job of producing unexpected cases. When stimulus is produced randomly, however, 90% of the stimulus, on average, is redundant. This makes systematic bug prospecting difficult and time-consuming.
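The redundancy of purely random stimulus is easy to demonstrate with the post’s own example. The sketch below (a simplification, not a model of any real tool) builds the packet-size × traffic cross as nine coverage bins and counts how many uniform random draws it takes to hit every bin at least once. The classic coupon-collector result says the average is about n·H(n) ≈ 25.5 draws for 9 bins, so well over half of the random draws repeat a bin that is already covered, while a coverage-aware generator needs exactly 9 targeted draws.

```python
import itertools
import random

# Coverage cross from the post's example: packet size x traffic load.
SIZES = ["small", "medium", "large"]
TRAFFIC = ["light", "moderate", "heavy"]
bins = list(itertools.product(SIZES, TRAFFIC))  # 9 coverage bins

random.seed(1)

def draws_to_cover():
    """Count uniform random draws until every bin is hit once."""
    remaining = set(bins)
    draws = 0
    while remaining:
        remaining.discard(random.choice(bins))
        draws += 1
    return draws

# Average over many trials; theory predicts ~25.5 draws for 9 bins,
# versus exactly 9 draws for a coverage-aware generator.
trials = [draws_to_cover() for _ in range(1000)]
avg = sum(trials) / len(trials)
```

The redundancy fraction grows with the size of the cross, which is why the effect is so pronounced on the large coverage models the post describes.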


A coverage-aware stimulus generation tool like inFact allows the verification engineer to flexibly tailor stimulus according to the requirements of the job at hand. When time-to-coverage closure is the most important thing (regression runs, for example), coverage-aware stimulus provides the ultimate convergence between coverage-closure efficiency and bug prospecting. When it’s time to do some bug prospecting, redundancy can be limited (but not eliminated) and the stimulus targeted a bit more loosely. Then, the coverage model can be enhanced to ensure future regression runs efficiently produce the newly-interesting stimulus. So, far from limiting verification, coverage-aware stimulus actually gives the verification engineer new and improved tools to tackle bug prospecting and coverage closure.


About Matthew Ballance

Matthew Ballance is a functional verification Technical Marketing Engineer, specializing in inFact. He has worked at Mentor for over 10 years on Hardware/Software Co-Verification, Transaction-Level Modeling, and Functional Verification tools. Visit the Intelligent Testbench Automation Blog.
