

Verification Horizons

Stories of an AMS Verification Dude: Model Shmodel

by Martin Vlach, PhDude, IEEE Fella, Chief Technologist AMS, Mentor Graphics

Well, here I am again. Last time I talked about putting stuff together, and by stuff I mean the RTL the digital folks handed me and the SPICE netlist the analog dudes gave me. I finally got it all working together and let ’er rip, thinking that this was easy and I’d be done in no time. But that “no time” turned into hours, and then days, and then nights, too. Simulating the analog stuff just takes forever, and while I don’t mind having a cup of Joe now and then, I got pretty jittery. And don’t talk to me about my boss. She was after me all the time: are we done yet? When are we going to know if we put it together right? Tell me something, anything, I can’t wait, we have a market to go to, my bosses are after me. You get the drill.

So, what is a verification dude to do? I tried a whole bunch of things – SPICE, fast SPICE, faster SPICE, fastest SPICE. I tweaked parameters, I benchmarked this SPICE and that SPICE, this SIM and that SIM, and all it got me was a headache. So I said to myself, there must be a better way to speed up the simulation. I looked around a bit, and sure enough, there is. It’s called modeling. Sounds good, I’ll try it. It shouldn’t be that hard.

Yeah, right. The problem, it turns out, is knowing what you need to model to get the job done. You can’t listen to those analog guys; they just know too much, and for them, if the model doesn’t do everything that their SPICE does, it’s never going to be good enough. It turns out that the big problem is knowing what not to model. Abstraction, they called it. So I started looking into the different kinds of models and abstractions that people write, and I came up with a bunch of ideas to try out. I wrote it all up, and after those corporate writer-dudes got through with it, this is how Chapter 2 of the tome I’m working on turned out:

Taxonomy of Models for AMS Verification

Behavioral modeling of analog circuits has been a subject of discussion for decades, but it has never taken serious hold in IC design. Yet in the absence of formal methods for analog verification, and given that SPICE simulation at any accuracy is extremely slow, the use of models of the analog IP in AMS Verification is today the only way to achieve the massive amounts of simulation required to verify ever-growing SoCs and ASICs. To understand how modeling can be used to achieve better coverage in AMS Verification, I will look at some trends and approaches to modeling, and examine how a modification to the concept of behavioral modeling can benefit verification.

Models are used for two very different reasons on the two sides of the classical design-and-verification V diagram (see Figure 1).

Figure 1: The V diagram of Design and Verification

Top-Down Design: During specification and architectural exploration, behavioral models are used for exploring design trade-offs, selecting circuit topology, and creating the detailed circuit specification. The system architect has a good idea of the high-level interactions and uses a behavioral model that incorporates those aspects of the design that need to be examined. There is no transistor implementation yet. This kind of exploration is commonly done with tools such as Matlab or Simulink. There are no synthesis tools for analog design, so once the architectural exploration is finished, there is no incentive to maintain the behavioral model. As further trade-offs and modifications are made during implementation, the original behavioral model gets out of step with the implementation and no longer accurately represents it.

Bottom-Up Verification: Once the implementation phase is complete, the problems faced by the model writer are completely different. Under a policy of “trust but verify,” the verification engineer must devise methods and models to first verify that the design meets the circuit specification. More importantly, the verification engineer must determine that the circuit in fact implements its intended functionality within the system as a whole. Even if each team in the combined effort of IC design performs its task impeccably, miscommunications at both team and hardware boundaries are a fact of life.

Over the years, behavioral modeling used during the design phase has fallen out of favor. Today, as feature sizes shrink and variability increases, IC design is all about the details and getting second- and even third-order effects under control. Creating behavioral models that cover those second- and third-order effects is possible, but it is extremely time consuming, the gain in simulation speed is not significant enough, the cost of maintaining the model as the underlying implementation changes is very high, and the analog designer’s confidence in the model is, in practice, nonexistent.

On the other hand, the use of verification modeling is increasing. While the tendency of the analog designer to distrust any kind of model that is not SPICE has not abated, the demands of verifying large systems put strong pressure on reducing the simulation time required to verify AMS systems, in order to increase coverage and avoid chip-killing silicon bugs.

It is worthwhile to establish a terminology and classification for the AMS models used at various phases of IC design. They are described here in the order in which they are likely to appear in the design and verification process:

Behavioral model is a functional model that is primarily used during the early exploration of the system architecture, before any circuit implementation exists. The focus is on exploring high-level behaviors that need to be implemented within the system, and on evaluating trade-offs. Behavioral models are often general-purpose models with many parameters that can be used in the exploration. They model behaviors needed for exploring the trade-offs, but those behaviors may not in fact end up being implemented in the final design.

Verification model is a model that is used during the verification phases of the design. Although in the classical V diagram view verification appears only after implementation is complete, such a waterfall process is not, or should not be, commonly practiced any more; verification planning and implementation should start concurrently with design. The focus is on exploring the actual implementation, and especially in the early stages of a project, the verification model may not even provide any functionality, only early checking. A verification model should represent the minimum functionality needed to verify a particular condition, and should model only what has been implemented. Verification models are often special-purpose models with few or no parameters. A good verification model will be written in a way that enables finding bugs in the design. It will incorporate many checks to make sure that the block or IP it models is used in its intended environment. (A sketch contrasting a behavioral model with a verification model follows these definitions.)

Implementation model is a representation of the design that is the basis for detailed physical design: synthesis in digital, and custom design in analog. In digital, it is usually RTL; in analog, it is the schematic from which a SPICE netlist is created for simulation.

Physical model is a representation of the design that includes the effects of the layout and is the next level of detail on the way to manufacturing the IC. In digital design, it will usually be at the gate level, with detailed timing. In analog design, it is the SPICE netlist together with parasitics extracted from the layout.
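To make the contrast between the first two kinds of models concrete, here is a minimal sketch in the SystemVerilog real-number style discussed later in this article. The amplifier block, its port names, parameters, and limits are all invented for illustration, not taken from any particular design.

// Hypothetical amplifier, sketched two ways; all names, parameters,
// and limits here are illustrative assumptions, not from a real design.

// Behavioral model: general purpose, many parameters, written before
// any implementation exists, used to explore trade-offs.
module amp_behavioral #(
  parameter real GAIN = 10.0,  // candidate gain under exploration
  parameter real VOFF = 0.0,   // input-referred offset
  parameter real VSAT = 1.8    // output saturation level
) (
  input  real vin,
  output real vout
);
  always @(vin) begin
    automatic real v = (vin + VOFF) * GAIN;
    vout = (v > VSAT) ? VSAT : (v < 0.0) ? 0.0 : v;
  end
endmodule

// Verification model: special purpose, no parameters, models only the
// implemented gain at the nominal operating point, and carries checks
// that the block is used in its intended environment.
module amp_verif (
  input  real vdd,   // supply, checked rather than modeled in detail
  input  real vin,
  output real vout
);
  always @(vin) vout = vin * 10.0;  // minimum functionality needed

  // The real payload: environment checks that help find integration bugs.
  always @(vdd)
    if (vdd < 1.62 || vdd > 1.98)
      $error("amp supply %0f V outside 1.8 V +/-10%% bounds", vdd);
endmodule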

Levels of Fidelity

Verification models can be further classified into several levels based on the amount of detail and the functionality that they implement. Over the lifecycle of an IC design, the verification models will evolve through these levels. Especially in very large projects with multiple design teams, the simpler models will be used as placeholders while a team designs its piece of the puzzle. In the descriptions below, each higher level includes the behaviors and checks of all previous levels.

Level 0 – Empty: A model that literally has no information other than the definition of the interface. Although counterintuitive, empty models can serve an important role as placeholders during integration (Levels 0 and 1 are sketched after this list).

Level 1 – Anchored load: The inputs may present a load to the driving nodes, and the output is fixed at a typical value. The model can include checking code, for example to make sure that correct levels are established by biasing, and can already be useful in simulation to check that inputs stay within design bounds.

Level 2 – Feed-through: The signal levels at inputs are observed and drive a related output value, although the full signal processing functionality of the component is not represented. This may be used, for example, to ensure that the correct supplies are connected.

Level 3 – Basic: At this modeling level, first order functionality of the block is represented at one environment point, with input wiggles being transmitted to related output wiggles.

Level 4 – Functional: The model is fully functional at one environment point, and includes important second order effects. Practically speaking, this is the last level of verification model that is likely to be used in most projects.

Level 5 – High Fidelity: Fully functional model at all environment points. Only very sophisticated teams will find a need for this modeling level, much less a beneficial return on investment from it. Maintaining such high-fidelity models is very expensive, and creating them in the first place, and making sure that a model actually reflects what has been built, is technically very difficult.
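As a concrete sketch of the lowest levels, consider a hypothetical ADC, again in the real-number style and with invented names and bounds. Note that in a pure real-number abstraction the input load of Level 1 is not representable, so the sketch keeps only the anchored output and the checking code.

// Level 0 - Empty: interface definition only, a placeholder that lets
// the SoC netlist elaborate while the block is still being designed.
module adc_level0 (
  input  real        vin,
  input  real        vref,
  input  wire        clk,
  output logic [9:0] dout
);
  // Intentionally empty: no behavior, no checks.
endmodule

// Level 1 - Anchored load: output fixed at a typical value, plus
// checking code for biasing and input bounds. Values are illustrative.
module adc_level1 (
  input  real        vin,
  input  real        vref,
  input  wire        clk,
  output logic [9:0] dout
);
  assign dout = 10'd512;  // anchored at a typical mid-scale code

  // Check that biasing establishes the correct reference level.
  always @(vref)
    if (vref < 1.14 || vref > 1.26)
      $error("ADC vref %0f V not within 1.2 V +/-5%%", vref);

  // Check that the input stays within design bounds.
  always @(vin)
    if (vin < 0.0 || vin > 1.2)
      $warning("ADC input %0f V outside the 0 to 1.2 V design range", vin);
endmodule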

This description of modeling levels provides a useful framework when deciding what needs to be included in a verification model depending on the state of the project and the properties that need to be verified.

The models themselves can be coded in any of the usual languages and techniques for modeling mixed-signal hardware: Verilog-A or -AMS, VHDL-AMS, real-number models in Verilog or VHDL, or event-driven record/structure-based models in VHDL or SystemVerilog. For SoC AMS verification, real-number models would probably be preferred by most verification engineers, especially for Levels 0-3, because of the engineers’ personal preference for the language, but there is nothing in this classification of levels that prevents continuous-time models in Verilog-AMS or VHDL-AMS from being used at the lower levels, or real-number or event-driven models from being used at the higher levels.
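As one more illustration of the event-driven real-number style suggested here for Levels 0-3, below is a sketch of a Level 2 feed-through model for a hypothetical LDO regulator; the block name and voltage values are assumptions made up for the example.

// Level 2 - Feed-through: input levels are observed and drive a related
// output value, without the block's real signal processing. Useful for
// verifying that the correct supplies are connected.
module ldo_level2 (
  input  real vin,     // unregulated input supply
  input  wire enable,  // digital enable from the power controller
  output real vout     // regulated output, derived from the input level
);
  // If the right supply is present and the block is enabled, produce
  // the nominal regulated level; otherwise pass 0.0 downstream so a
  // miswired supply shows up in the blocks this one feeds.
  always @(vin or enable)
    vout = (enable && vin > 2.7) ? 1.8 : 0.0;
endmodule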

In the end, the choice of modeling approach – continuous time vs. event-driven – will depend on the availability of simulators, the experience of the verification engineer, and the verification task. As a rule of thumb, Level 5 models in the continuous-time AMS languages should be expected to be at most an order of magnitude faster than SPICE (and thus not justifying the expense of building them), and Level 1-4 models one to two orders of magnitude faster, depending of course on the amount of detail. Event-driven models should be another one to two orders of magnitude faster in simulation than their continuous-time counterparts.

Well, there you have it. The people I talk to all seem to like the idea of verification models – the digital folks are comfortable with verification modeling anyway, and the analog dudes don’t expect them to be perfect and do exactly what their SPICE simulators would do, and everybody’s happy. Except me, I still have to go and write those models. But that’s another story.
