July 2009 - Volume 5, Issue 2
“The key to teamwork is to have everyone working together. In verification that means starting with a plan so that everyone knows what they’re doing and progress can be measured effectively.”
July 2009 Issue Articles
Verification Teamwork Requires Process Automation
You’ve heard it all before: More to verify in less time.
Functional verification has to constantly ride the wave of increasing design sizes, competitive pressures, and the limitations of resources and budgets. Development and verification of tomorrow’s technology often require monitoring and controlling dispersed project teams across multiple time zones, which adds further to the complexity of these challenges. The ultimate price of not managing this process effectively is unplanned re-spins, which in turn lead to missed market windows or, even worse, field returns.
To achieve effective management and visibility of this process, there must be comprehensive planning in a form that allows the verification plan to drive the process. There must be accurate measurement of the progress and effectiveness of the process, and the automation to achieve “electronic closure” between these metrics and the plan. Run management is required to provide controllability, repeatability, and further automation of the process. Automating the complete verification process improves time to coverage and time to next bug, and it enables multiple dispersed project teams to accurately estimate the time to completion, allowing products to hit their market windows by avoiding unplanned re-spins and ensuring the necessary quality.
Team-Driven OVM Testbench Creation
In the world of OVM testbench development there exist producers and consumers. Producers create and provide reusable OVM components and templates to consumers, who in turn employ these items to assemble testbenches and to write specific tests. While producers and consumers typically possess differing goals and skill sets, they must work together for success.
This article discusses how these two groups of developers can interact efficiently to create testbenches, enabled by a new tool from Mentor Graphics: Certe™ Testbench Studio (CTS).
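To make the producer/consumer split concrete, here is a minimal OVM sketch: a producer delivers a reusable, parameterized driver, and a consumer assembles it into a test via the factory. All names (`bus_item`, `bus_driver`, `my_bus_test`) are hypothetical illustrations, not code from Certe Testbench Studio.

```systemverilog
// Producer side: a reusable transaction and driver, delivered as a kit.
class bus_item extends ovm_sequence_item;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  `ovm_object_utils(bus_item)
  function new(string name = "bus_item");
    super.new(name);
  endfunction
endclass

class bus_driver extends ovm_driver #(bus_item);
  `ovm_component_utils(bus_driver)
  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction
  task run();
    forever begin
      seq_item_port.get_next_item(req);
      // drive req.addr/req.data onto the bus pins here
      seq_item_port.item_done();
    end
  endtask
endclass

// Consumer side: assemble a test from the producer's components.
class my_bus_test extends ovm_test;
  `ovm_component_utils(my_bus_test)
  bus_driver driver;
  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction
  function void build();
    super.build();
    // factory creation lets the producer's class be overridden later
    driver = bus_driver::type_id::create("driver", this);
  endfunction
endclass
```

The consumer never edits the producer's source; creating through `type_id::create` keeps the testbench open to factory overrides, which is what makes the producer's components reusable across projects.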
Evolving Verification Capabilities
The verification management problem can be described as a process of optimizing often conflicting goals, such as project resources, quality objectives, schedule, and cost. However, a question frequently asked by project teams is: what is the best solution to this problem? Unfortunately, there are very few industry models for assessing an organization’s ability to achieve project goals.
In this article, we propose a simple model that an organization can use to assess its advanced functional verification capabilities, while providing a roadmap for process improvement.
The Dynamic Duo of Veloce and TBX - Accelerating Transaction-Based Verification to the Next Level
System-on-Chip (SoC) designs continue to grow in size and complexity. Traditional verification techniques are rapidly becoming inadequate to the task of performing system-level verification.
This problem is further amplified by the need to verify the enormous amounts of embedded software that are increasingly prevalent in today’s SoC designs. To combat this burgeoning verification challenge, transaction-based verification has been introduced. But is it enough?
Maximizing Your Verification Throughput
The amount of new verification technology released each day can be overwhelming, even for the most technically voracious company. With so many choices, the opportunities for improving performance get lost, especially when most people are comfortable with the age-old view that performance is a simple measure of how long one or two target simulations take to complete. Adding memory footprint to the performance datapoint is a fairly new way to add an extra measure of comfort: measure once and assume the metric will be valid throughout the life of your project, current and future. Although this view is valid, these are not the only throughput metrics that should be considered. Throughput is also the time to run a regression environment, the time to coverage, and the time to resolve a bug. All are equally important in the lifetime of a project. Projects are seldom static: workforce balancing and market influences on project specifications can stress even the most experienced teams.
How you solve your throughput challenges may be a matter of knowing what opportunities exist. There is a range of broadly relevant technologies, including regression suite throughput, coverage closure techniques, and time to bug resolution. There are many other throughput opportunities; however, we will focus on two of them in this article.
Applying Scalable Verification Components to a USB 2.0 HUB
Verifying a USB HUB is always a challenge because you need to reuse the same environment at the system level. The verification environment should facilitate the use of multiple TLM components when only the HUB is ready, and, as the RTL host and devices are created, it should replace the TLM components with the RTL. It should also gather TLM activities at all interfaces in order to analyze them and make sure that all the necessary transactions have been run successfully.
The verification of a USB 2.0 HUB requires that transactions be run at high, full, and low speeds so as to verify the HUB’s capability of handling split transactions targeted to full/low-speed devices in parallel with high-speed transactions. In this article, we will use the verification of a USB 2.0 HUB as an example to show how this can be done. The same approach can be applied to other protocol components with top-level functionality similar to that of a USB 2.0 HUB.
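One way to let the same environment serve both the TLM-only and RTL phases is to swap components through the OVM factory, so the rest of the testbench never changes. The sketch below uses invented names (`usb_device_model`, `usb_device_rtl_agent`); it is an illustration of the technique, not the article's actual code.

```systemverilog
// A TLM device model used while only the HUB RTL exists.
class usb_device_model extends ovm_component;
  `ovm_component_utils(usb_device_model)
  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction
  virtual task run();
    // respond to HUB traffic at the transaction level
  endtask
endclass

// An agent, derived from the model, that drives the same traffic
// onto a real RTL device once it becomes available.
class usb_device_rtl_agent extends usb_device_model;
  `ovm_component_utils(usb_device_rtl_agent)
  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction
  virtual task run();
    // drive the equivalent stimulus onto the RTL device's pins
  endtask
endclass

// In the test, once the RTL device is ready, a single factory override
// replaces every usb_device_model with the RTL-driving agent:
//   usb_device_model::type_id::set_type_override(
//       usb_device_rtl_agent::get_type());
```

Because the environment creates the device through the factory, graduating from TLM to RTL is one override line per component rather than a rewrite of the environment.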
Porting a Legacy Verification Environment for a Wireless SoC to OVM
This article focuses on techniques for porting a legacy verification environment to a SystemVerilog-based environment, with details from the verification of a real-world wireless SoC project. Emphasis will be on using three major strengths of an OVM-based SystemVerilog environment: (1) SystemVerilog modeling with assertions, (2) functional coverage, and (3) simplifying the block-to-top flow using standardized components, directories, and flow.
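The first two strengths named above can be sketched in a few lines of SystemVerilog: a protocol assertion and a functional covergroup living on an interface. The signals (`req`, `gnt`, `chan`) and the 4-cycle bound are invented for illustration and do not come from the wireless SoC project.

```systemverilog
interface radio_if(input logic clk);
  logic req, gnt;
  logic [7:0] chan;

  // Assertion: every request must be granted within 1 to 4 cycles.
  property req_gets_gnt;
    @(posedge clk) req |-> ##[1:4] gnt;
  endproperty
  a_req_gets_gnt: assert property (req_gets_gnt);

  // Functional coverage: sample the channel whenever a grant occurs,
  // binning the channel space to see which regions were exercised.
  covergroup chan_cg @(posedge clk iff gnt);
    coverpoint chan {
      bins low  = {[0:63]};
      bins high = {[64:255]};
    }
  endgroup
  chan_cg cg = new();
endinterface
```

Placing assertions and coverage on the interface keeps them reusable from block level up to the top level, which is exactly what the standardized block-to-top flow depends on.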
Process Improvements Using OVM Registers and IDesignSpec
This paper covers the benefits of deploying an OVM/IDesignSpec-based methodology for design and verification. It discusses the testbench automation provided by the OVM Register package, and it shows the advantages of using the new libraries from Mentor in conjunction with a new tool from Agnisys called IDesignSpec.
This combination of new technologies enables design teams to realize cost savings, quality improvements, and time-to-market (TTM) benefits across the engineering organization.
Interface-based OVM Environment Automation
The increasing size and complexity of today’s Systems-on-Chip is driving the adoption of modular, IP-centric design and verification flows. Increasingly complex IPs, sub-systems, and systems require high levels of modularization, standardization, and re-use, and must be comprehensively verified as quickly and efficiently as possible. Advanced verification methodologies such as OVM provide important verification capabilities, such as coverage-driven verification and reusable verification components, that leverage the benefits of object-oriented development. To quickly access these verification methodologies and apply them to the task at hand, it is important to automate the creation of as much of the verification infrastructure as possible.
This paper presents a methodology for auto-generating much of the OVM verification environment for an IP from an interface-based executable specification, combined with a flexible generator framework. This methodology gives IP verification engineers immediate access to advanced verification capabilities.
Owzthat? – Or How I Was Nearly Out, Caught
One of the defining moments in my career could easily have been one of my last. I had developed an ASIC that was an interface between a low-cost microcontroller and a complex daisy wheel printer mechanism. The ASIC provided a memory-mapped, register-based software interface that had been carefully specified by me, the printer manufacturer, and our software engineer. The whole project had been done on a crash timescale; the software had been developed concurrently with both the ASIC hardware and the mechanism. On the day the risk-wafer ASIC samples arrived, we had a factory in Asia full of printers ready for the electronics, and we needed to ship units to meet customer orders.
Meanwhile we were in the UK with a prototype board bringing the software, the electronic hardware and the mechanism together for the first time. We had a few hours to check that everything worked before the Asian factory started work. Within a couple of hours it became evident that we had a problem.