
Getting the Most out of Your Directed Test Environment

Quite often I visit customers who have a well-established directed test verification environment.  Although they know their company's long-term success could benefit from an advanced verification methodology, their current verification organization is not ready to make the transition, and they are often too busy with their current projects.  The majority of these customers have a Verilog environment, with a minority using either VHDL or SystemC.

As we look closer at these directed test environments, we often find:

  • Many of the directed tests verify specific DUT functions and there are no compelling reasons to change them.
  • Legacy tests need to be supported, so the verification environment must be considered when evaluating new flows or methodologies.
  • A subset of the directed tests is rather complex, exercising multiple DUT functions together.  These tests often contain complex procedural code and can be difficult to write and maintain, but they are important because they attempt to exercise the DUT in more typical customer use models.  Sometimes they apply a degree of randomness to data or control fields to add variability.
  • A formalized scheme for measuring test coverage is often lacking, so it is difficult to assess how effective the combination of tests is at exercising important DUT functionality.

What options are available to help such customers improve their verification environment? One viable, low-impact way to increase their effectiveness and productivity is an intelligent testbench solution.  A systematic, coverage-driven stimulus generation tool such as inFact can adapt to existing methodologies and is a useful step toward the architecture of an advanced verification environment like OVM.   It handles the more complex directed test architectures that target combinations of DUT functions in a natural way, and does so while enabling re-use of significant portions of the existing directed test environment.  It also includes a formalized scheme for measuring stimulus coverage, addressing a significant shortcoming of many directed test environments.

So what is it about intelligent testbench automation that makes this possible?

Consider a common directed test code structure: three nested for loops, each covering a range of values.  This structure generates all of the for-loop index values in combination, but does so in a fixed ordering like 111, 112, 113, 121, etc.  If test adjacency ordering were important, this for-loop implementation would be insufficient, since it would never detect a bug triggered by adjacencies like 312, 111, and many others.  A rather complex directed test would be needed to generate these cases, either in random order or with all adjacency combinations expressed.
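As a minimal sketch of that structure (the loop bounds and the send_config/check_response task names are hypothetical placeholders for existing testbench code), the SystemVerilog might look like this:

    // Hypothetical three-level directed sweep: every index combination is
    // generated, but always in the same fixed order (111, 112, 113, 121, ...)
    task run_directed_sweep();
      for (int mode = 1; mode <= 3; mode++)
        for (int speed = 1; speed <= 3; speed++)
          for (int burst = 1; burst <= 3; burst++) begin
            send_config(mode, speed, burst);  // existing lower-level testbench task
            check_response();
          end
    endtask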

Next, consider a more complex directed test having nested loops with conditional statements that select different code branches depending on previously selected stimulus options, DUT responses, or both.  This code may also call random() functions to increase the number of stimulus cases generated, adding test variability.   Tasks and functions are typically called from within the procedural code to implement the lower-level functionality needed for the tests.  The test is written with certain verification goals in mind, but we rarely see formalized coverage metrics implemented to measure how well the test covers the different conditions implied by the code structure.
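A rough sketch of this style, again with hypothetical task and function names, might look like the following; the response-dependent branches and the $urandom_range call are what make the implied coverage hard to reason about:

    // Hypothetical complex directed test: branch selection depends on earlier
    // stimulus choices and DUT responses, with some added randomness.
    task run_complex_test();
      for (int frame = 0; frame < 16; frame++) begin
        if (dut_ready())                  // branch on a DUT response
          send_normal_frame(frame);
        else
          send_retry_frame(frame);

        if ($urandom_range(0, 9) < 3)     // occasionally inject an error case
          inject_error(frame);

        check_response();                 // lower-level task re-used from the testbench
      end
    endtask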

These types of test architectures map nicely to a graph-based verification tool: the procedural code translates directly into a graph, and lower-level tasks and functions can be called directly from nodes in the graph.  The structure of the graph is easy to understand since it depicts the various choices at different levels in a protocol, as the graph example below, taken from an I2C testbench, illustrates.

[Graph example: i2cmaster]

Lower-level testbench code implemented as tasks or functions can be called directly from the blue graph “action” nodes, facilitating significant re-use of the existing testbench.  When new DUT functionality needs to be tested, the graph structure is easily extended by adding branches for it.
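The details of inFact's rule language are beyond the scope of this post, but the idea can be sketched in plain SystemVerilog (this is an analogy only, not inFact syntax, and the i2c_* task names are hypothetical): each action node simply dispatches to an existing lower-level task.

    // Conceptual analogy only -- not inFact's rule language. Each graph
    // "action" node maps to an existing testbench task, so adding a new DUT
    // feature means adding a new branch and the task it calls.
    typedef enum {START_COND, SEND_ADDR, WRITE_DATA, READ_DATA, STOP_COND} i2c_action_e;

    task do_action(i2c_action_e node);
      case (node)
        START_COND : i2c_start();                         // re-used testbench tasks
        SEND_ADDR  : i2c_send_addr(7'h50);
        WRITE_DATA : i2c_write_byte($urandom_range(255));
        READ_DATA  : i2c_read_byte();
        STOP_COND  : i2c_stop();
      endcase
    endtask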

Since inFact supports various testbench coding styles (Verilog modules, interfaces, SystemVerilog and SystemC classes, OVM, “e,” Vera, …), it integrates easily into existing environments.  Portions of the testbench environment unrelated to the block(s) managed by inFact remain unchanged, giving users flexibility.  Users planning a migration to an advanced methodology have the option of re-generating their inFact rule graph as an OVM sequence.

During simulation, inFact decides which branches of the graph to traverse based on user-defined traversal goals.  Hard-coded selection of choices in nested “for” loops or conditional logic is replaced by user-configurable graph traversal strategies, which select graph branches randomly or systematically under the control of a path traversal algorithm.  Stimulus coverage (either node or path) can be tabulated by inFact during traversal. Because this coverage measurement is built into the tool, users do not have to learn a new language or methodology, yet can still realize the benefit of knowing their stimulus coverage.
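As a loose illustration of the difference between hard-coded loops and coverage-aware traversal (again, a sketch of the general idea, not inFact's actual algorithm or API, reusing the hypothetical i2c_action_e type from the fragment above), a traversal strategy can track which branches have been taken and steer selection toward uncovered ones:

    // Conceptual sketch of coverage-aware branch selection -- not inFact's
    // implementation. The test marks covered[a] = 1 after each do_action(a).
    bit covered[i2c_action_e];            // which action nodes have been exercised

    function i2c_action_e pick_next_action();
      i2c_action_e uncovered[$];
      for (int i = START_COND; i <= STOP_COND; i++)       // collect unvisited nodes
        if (!covered.exists(i2c_action_e'(i)))
          uncovered.push_back(i2c_action_e'(i));
      if (uncovered.size() > 0)                           // prefer an uncovered node
        return uncovered[$urandom_range(uncovered.size()-1)];
      else
        return i2c_action_e'($urandom_range(STOP_COND));  // all covered: go random
    endfunction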

This approach to directed testing gives users an option to improve the performance of their existing directed test environment and helps prepare them for a subsequent migration to a verification methodology such as OVM.

 
