Getting the Most out of Your Directed Test Environment
Quite often I visit customers who have a well-established, directed test verification environment. Although they know their company’s long-term success could benefit from an advanced verification methodology, their verification organization is not ready to make the transition, and they are often too busy with current projects to attempt it. The majority of these customers have a Verilog environment, with a minority using either VHDL or SystemC.
As we look closer at these directed test environments, we often find:
- Many of the directed tests verify specific DUT functions and there are no compelling reasons to change them.
- Legacy tests need to be supported, so the verification environment must be considered when evaluating new flows or methodologies.
- A subset of the directed tests is rather complex, exercising multiple DUT functions together through intricate procedural code. These tests can be difficult to write and maintain, but they are important because they exercise the DUT in more typical customer use models. Sometimes they apply some amount of randomness to data or control fields to add variability.
- A formalized scheme to measure test coverage is often lacking, so it is difficult to assess how effective the combination of tests is in exercising important DUT functionality.
What options are available to help such customers improve their verification environment? One viable, low-impact way to increase their effectiveness and productivity is an intelligent testbench solution. A systematic coverage-driven stimulus generation tool, such as inFact, can adapt to existing methodologies and is a useful step toward the architecture of an advanced verification environment such as OVM. It addresses the more complex directed test architectures that target combinations of DUT functions in a natural way, and does so by enabling re-use of significant portions of the existing directed test environment. It also includes a formalized scheme for measuring stimulus coverage, thereby addressing a significant shortcoming of many directed test environments.
So what is it about intelligent testbench automation that makes this possible?
Consider a common directed test code structure having three nested for loops, each with a range of values. This structure generates all for-loop index values in combination, but does so in a fixed ordering like 111, 112, 113, 121, etc. If test adjacency ordering were important, this for-loop implementation would be insufficient: it could never detect a bug triggered by adjacencies such as 312 followed by 111, among many others. A rather complex directed test would be needed to generate these cases, either in random order or with all adjacency combinations expressed.
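The ordering limitation above can be sketched quickly. This is an illustrative model (not tool-specific code): the fixed nested-loop order is compared against a shuffled order of the same 27 combinations, which produces adjacencies the loops can never generate.

```python
import itertools
import random

# Three nested loops over the values 1..3 produce every index combination,
# but always in the same lexicographic order: 111, 112, 113, 121, ...
fixed_order = [f"{a}{b}{c}" for a, b, c in
               itertools.product([1, 2, 3], repeat=3)]
assert fixed_order[:4] == ["111", "112", "113", "121"]

# Shuffling the same combinations exercises different test adjacencies
# (e.g. 312 followed by 111), which the fixed ordering can never produce.
rng = random.Random(42)          # fixed seed for reproducibility
shuffled = fixed_order[:]
rng.shuffle(shuffled)

# Both orderings cover the same 27 cases; only the adjacencies differ.
assert sorted(shuffled) == sorted(fixed_order)
adjacencies = set(zip(shuffled, shuffled[1:]))
print(len(fixed_order), len(adjacencies))  # 27 combinations, 26 adjacent pairs
```

Covering *all* ordered adjacencies would require 27 × 26 = 702 ordered pairs, which is exactly the kind of case a hand-written directed test struggles to enumerate.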
Next, consider a more complex directed test having nested loops with conditional statements selecting different code branches depending on previously selected stimulus options, DUT responses, or both. This code may also use calls to random() functions to add to the number of stimulus cases generated, increasing test variability. Tasks and functions are typically called from within the procedural code to implement the lower-level functionality needed for the tests. The test is written with certain verification goals in mind, but we don’t commonly see any formalized coverage metrics implemented that measure how well the test covers the different conditions implied by the code structure.
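A small model makes the point concrete. All names here are hypothetical stand-ins for the kind of test just described: nested loops, a conditional branch driven by a prior stimulus choice, a random() call for variability, and calls to lower-level tasks, with nothing recording which conditions were actually exercised.

```python
import random

def send_write(addr, data):    # stand-ins for the lower-level
    return ("WR", addr, data)  # testbench tasks/functions

def send_read(addr):
    return ("RD", addr)

def directed_test(seed):
    rng = random.Random(seed)
    trace = []
    for addr in range(4):
        for burst in range(1, 3):
            if addr % 2 == 0:                # branch on a prior stimulus choice
                data = rng.randrange(256)    # random data adds variability
                trace.append(send_write(addr, data))
            else:
                trace.append(send_read(addr))
    return trace

trace = directed_test(seed=1)
# The test "covers" its branches implicitly; nothing measures which
# (addr, burst, branch) conditions were actually hit.
print(len(trace))  # 8 transactions
```

The verification intent lives only in the code structure itself, which is exactly why a formalized coverage metric is so often missing from these tests.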
These types of test architectures map nicely to a graph-based verification tool because the procedural code maps directly to a graph, and lower-level tasks and functions can be called directly from nodes in the graph. The structure of the graph is easy to understand since it depicts the various choices at different levels in a protocol, as the graph example below taken from an I2C testbench illustrates.
Lower-level testbench code implemented as tasks or functions can be called directly from the blue graph “action” nodes, thereby facilitating significant re-use of the existing testbench. When new functionality needs to be added to a test, the graph structure is easily extended by adding branches to the graph for the new DUT functionality being tested.
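A minimal sketch of this mapping, with illustrative names rather than inFact's actual API: the protocol choices become a small adjacency-list graph, and each action node is bound to an existing testbench task, so the lower-level code is reused as-is.

```python
def i2c_start(): return "START"   # stand-ins for existing
def i2c_write(): return "WRITE"   # testbench tasks
def i2c_read():  return "READ"
def i2c_stop():  return "STOP"

# Graph as adjacency list: node -> list of possible next nodes.
graph = {
    "start": ["write", "read"],   # choice point in the protocol
    "write": ["stop"],
    "read":  ["stop"],
    "stop":  [],
}
actions = {"start": i2c_start, "write": i2c_write,
           "read": i2c_read, "stop": i2c_stop}

def traverse(choose):
    """Walk the graph from 'start', calling each node's bound task."""
    node, trace = "start", []
    while node is not None:
        trace.append(actions[node]())
        nxt = graph[node]
        node = choose(nxt) if nxt else None
    return trace

# Adding new DUT functionality just means adding a branch to the graph.
print(traverse(lambda options: options[0]))  # ['START', 'WRITE', 'STOP']
```

Extending the test for a new DUT feature means adding one entry to `graph` and one bound task, rather than threading new conditionals through procedural code.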
Since inFact supports various testbench coding styles (Verilog modules, interfaces, SystemVerilog and SystemC classes, OVM, “e,” Vera, and so on), it integrates easily into existing environments. Portions of the testbench environment unrelated to the block(s) managed by inFact remain unchanged, giving users flexibility. Users planning a migration to an advanced methodology have the option of re-generating their inFact rule graph as an OVM sequence.
During simulation, inFact decides which branches of the graph to traverse based on user-defined traversal goals. Hard-coded selection of choices in nested “for” loops or conditional logic is replaced by user-configurable graph traversal strategies, which select graph branches randomly or systematically under control of a path traversal algorithm. Stimulus coverage (either node or path) can be tabulated by inFact during traversal. Because the stimulus coverage is built into the tool, the user does not have to learn a new language or methodology, yet can realize the benefits of knowing their stimulus coverage.
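One way to picture a systematic traversal strategy (a hedged sketch of the idea, not inFact's implementation): enumerate every path through a small choice graph and tabulate path coverage as each one is visited.

```python
from itertools import product

choices = {                       # illustrative choice points
    "op":    ["write", "read"],
    "burst": [1, 2, 4],
}

all_paths = set(product(choices["op"], choices["burst"]))
visited = set()

def next_path():
    """Systematic strategy: pick an uncovered path until all are hit."""
    remaining = all_paths - visited
    path = sorted(remaining)[0] if remaining else None
    if path is not None:
        visited.add(path)
    return path

while (p := next_path()) is not None:
    pass                          # in a real testbench, drive stimulus here

coverage = len(visited) / len(all_paths)
print(f"path coverage: {coverage:.0%}")  # path coverage: 100%
```

A random strategy would simply pick from `all_paths` instead of `remaining`; the coverage tabulation is the same either way, which is what lets the tool report stimulus coverage without the user writing any coverage code.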
This approach to directed testing gives users an option to improve the performance of their existing directed test environment and helps prepare them for a subsequent migration to a verification methodology such as OVM.