# Are You Covered?

### Matthew Ballance

Posted Sep 15, 2009

As a marketing guy, spending time with real engineers is always a welcome sanity check. Sometimes it’s easy to abstract a bit too much and forget some of the practical realities of the verification process. A few of my recent sanity-check moments have occurred around the concept of stimulus-coverage closure. In a coverage-driven verification flow that uses constrained-random stimulus, stimulus coverage is very important to ensure that all expected (and critical) stimulus has been generated. Achieving coverage closure in this area seems like it should be almost trivial – especially compared to hitting specific response coverage or triggering assertions in the design. However, several recent customer engagements have highlighted some of the difficulties around just achieving coverage closure for stimulus.

When using randomly-generated stimulus, we can predict the number of expected stimulus items to achieve stimulus coverage closure using a classic problem from probability theory called the Coupon Collector’s Problem. The subject of this problem is a game in which the object is to collect a full set of coupons from a limitless uniformly-distributed random collection. Early in the game, it is easy to fill empty slots in the coupon collection, since the probability is high that each new coupon selection is different from the previously-selected coupons. However, as the coupon collection approaches completeness, each new selection has a high probability of being a duplicate of a previously-selected coupon. After a bit of mathematical derivation, the expected number of selections needed to complete a set of n coupons is shown to be n∙H(n), where H(n) is the n-th harmonic number – a quantity that grows as O(n∙log(n)). Given a collection that contains 250 elements, we would expect to have to make around 1,525 random selections to fill the set. With a typical-size coverage model, uniformly-distributed random stimulus results in a 10-20% stimulus efficiency rate. Put another way, coverage closure could be achieved 5-10 times faster if non-redundant stimulus were used.
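The expectation above is easy to check empirically. Here is a minimal sketch (the function names are mine, not from any verification tool) that simulates the Coupon Collector’s Problem and compares the average against the closed-form n∙H(n):

```python
import random

def coupon_collector_trials(n, trials=200):
    """Average number of uniform random draws needed to see all n coupons."""
    total = 0
    for _ in range(trials):
        seen = set()
        draws = 0
        while len(seen) < n:
            seen.add(random.randrange(n))  # one uniform random selection
            draws += 1
        total += draws
    return total / trials

def expected_draws(n):
    # Closed-form expectation: n * H(n), where H(n) is the n-th harmonic number
    return n * sum(1.0 / k for k in range(1, n + 1))

random.seed(0)
avg = coupon_collector_trials(250)
print(f"simulated: {avg:.0f} draws, analytic: {expected_draws(250):.0f} draws")
```

For n = 250, the analytic expectation works out to roughly 1,525 draws, and the simulated average lands close to it.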

However, life is often not as simple as math and marketing folks predict. There are two primary aspects of stimulus and coverage models that complicate stimulus coverage. First, it is very common for the stimulus model and the coverage model to be mismatched. After all, they serve very different purposes. The stimulus model typically describes the entire valid stimulus space for the design. The coverage model, on the other hand, describes a much smaller set of stimulus that should exercise critical design functionality. Because a random stimulus generator has no knowledge of the stimulus coverage model, it’s often very difficult to hit the corner cases identified in the coverage model.
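A toy illustration of this mismatch (the packet-length scenario and the specific bins are hypothetical, chosen only to make the arithmetic concrete): suppose the stimulus model permits any 16-bit packet length, while the coverage model singles out a few boundary values. A generator that is blind to the bins almost never lands on them:

```python
import random

SPACE = 2**16                                    # full valid stimulus space
CORNER_BINS = {0, 1, 2**15, 2**16 - 2, 2**16 - 1}  # corner-case coverage bins

# Probability that any single unconstrained random draw lands in a corner bin
p_hit = len(CORNER_BINS) / SPACE
print(f"per-draw corner-bin hit probability: {p_hit:.5%}")  # ~0.00763%

# Simulate a generator with no knowledge of the coverage model
random.seed(1)
hits = {v for v in (random.randrange(SPACE) for _ in range(100_000))
        if v in CORNER_BINS}
print(f"corner bins hit after 100,000 items: {len(hits)} of {len(CORNER_BINS)}")
```

Even after 100,000 stimulus items, some of the five bins typically remain unhit, because each draw has well under a 0.01% chance of landing in any of them.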

The constraint structures used to describe stimulus also complicate stimulus-coverage closure. Recently, I’ve seen quite a few cases where the stimulus space is partitioned and re-partitioned by a chain of constraints. In these cases, it isn’t uncommon to see random stimulus with less than 1% efficiency. In other words, random stimulus could be over an order of magnitude less efficient than we would predict using the Coupon Collector’s Problem. Just achieving coverage closure is painfully difficult or practically impossible given the available computing resources.
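Here is a small sketch of how a constraint chain can starve coverage bins (the `mode`/`payload` variables and the particular constraint are hypothetical, not from any customer design). A solver that samples uniformly over the joint solution space gives the constrained branch far less probability than a per-variable view would suggest:

```python
# Constraint chain: a 4-bit 'mode' gates the legal range of an 8-bit 'payload'
# (mode == 0 implies payload < 4). Enumerate the joint solution space.
solutions = [(mode, payload)
             for mode in range(16)
             for payload in range(256)
             if not (mode == 0 and payload >= 4)]

p_mode0 = sum(1 for m, _ in solutions if m == 0) / len(solutions)
print(f"P(mode==0) under uniform-solution sampling: {p_mode0:.4%}")
```

Only 4 of the 3,844 solutions have `mode == 0`, so coverage bins crossing that mode fill at roughly a 0.1% rate rather than the 1-in-16 rate a naive view of `mode` alone would predict – well under the 1% efficiency figure mentioned above.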

In all of these cases, having a coverage-driven stimulus generation tool that is aware of both the stimulus model and the coverage model results in huge productivity gains in achieving stimulus coverage. Many of my customers believe that having coverage-driven stimulus is almost imperative to complete the required verification task.

### More Blog Posts

Matthew, please take a look at the article I presented at DVCon 2009. It addresses some of the issues you have raised. It turns out that it is possible to solve the stimulus-generation redundancy problem using relatively simple math, without a coverage-driven stimulus generator. So, life is almost as simple as marketing folks predict…. (Presentation: http://www.scribd.com/doc/12980881/Stochastic-Modeling-for-Random-Testing-Dvcon2009-Final-9009-New Article: http://www.scribd.com/doc/12980772/Stochastic-Modeling-for-Random-Testing-Dvcon2009-Final-9009)

VICTOR BESYAKOV
10:45 PM Sep 16, 2009

Victor, I attended your presentation at DVCon last year (I was presenting a paper on efficiently leveraging simulation farms with distributed verification). My understanding is that your approach is a very systematic method for grading random seeds (as well as determining how long to use each random seed). While seed-ranking definitely reduces redundant stimulus, it cannot completely eliminate redundancy (and seems to run out of steam with larger coverage spaces). I found it a very interesting way to increase the productivity of existing verification tools, though the math seemed anything but simple...

Matthew Ballance
11:28 PM Sep 16, 2009