
Verification Horizons

Portable VHDL Testbench Automation with Intelligent Testbench Automation

by Matthew Ballance, Mentor Graphics

We’ve come a long way since digital designs were sketched as schematics by hand on paper and tested in the lab by wiring together discrete integrated circuits, applying generated signals, and checking for proper behavior. Design evolved to gate-level on a workstation and on to RTL, while verification evolved from simple directed tests to directed-random, constrained-random, and systematic testing. At each step in this evolution, significant investment has been made in training, development of reusable infrastructure, and tools. This level of investment means that switching to a new verification environment, for example, has a cost and tends to be a carefully planned migration rather than an abrupt switch. In any migration process, technologies that help bring new advances into the existing environment while continuing to be valuable in the future are critical methodological “bridges”.

Questa inFact graph-based intelligent testbench automation provides just such a “bridge” for VHDL testbench environments. inFact offers more productive test creation than the directed tests that predominate in VHDL testbench environments. The boost in test-creation productivity enables more tests to be created more quickly and more comprehensive testing of the design to be done. In addition, graph-based tests are portable, ensuring that any investment made in creating graph-based tests can be leveraged both in the existing verification environment and in its next evolution. This article will show how inFact can be applied within a VHDL testbench for a cache controller, and the benefits of doing so.

Design and Testbench Overview

The design being tested is a very simple direct-mapped, write-through cache controller with a 16-bit address. The cache controller accepts requests from a processor or other bus master and either satisfies each request itself or accesses a memory device to do so.

The functional specification for the design describes how the cache behaves in the presence of various sequences of memory operations. The tests implemented by the testbench are expected to exercise these cases.

Spec ID Description
FSPEC_001 If you read from a memory location and then read from it again with the same address, you will get a cache hit.
FSPEC_002 If you read from a memory location, then read from another location with the same bottom eight address bits but different bits in the top of the address, you will get a cache miss.
FSPEC_003 If you reset the cache and do no writes, all reads will be cache misses until the cache is refilled.
FSPEC_004 If you reset the cache and then write to a location, you will get a cache hit if you read from that same location.
FSPEC_005 If you write to a location and then immediately read from that location, you will get a cache hit.

The structure of the VHDL testbench for the cache controller is shown below. A driver, or bus functional model (BFM), is connected to the cache in place of a processor or other bus master. A stimulus generator, which is responsible for generating sequences of reads and writes, is connected to the driver. In this VHDL testbench, a new test is written by creating a new VHDL architecture for the stimulus generator.

The scoreboard is responsible for checking that the cache controller behaves correctly in the presence of the memory reads and writes issued by the driver. It monitors the reads and writes the cache controller receives from the driver, as well as the reads and writes the cache controller issues to the memory device.
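The article does not show the scoreboard implementation, but its checks imply a simple reference model of a direct-mapped, write-through cache. The sketch below is one minimal way such a model could be written in VHDL; the package name, types, and procedure interface are assumptions for illustration only and are not taken from the actual testbench.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical reference model for the scoreboard (illustration only).
-- Assumes a direct-mapped, write-through cache with 256 lines indexed by
-- the low eight address bits and tagged by the high eight address bits.
package cache_model_pkg is
  type tag_array_t   is array (0 to 255) of std_logic_vector(7 downto 0);
  type valid_array_t is array (0 to 255) of boolean;

  type cache_model_t is record
    tags  : tag_array_t;
    valid : valid_array_t;
  end record;

  -- Predict hit/miss for a read or write at 'addr' and update the model.
  procedure predict_access(
    model      : inout cache_model_t;
    addr       : in    std_logic_vector(15 downto 0);
    expect_hit : out   boolean);
end package;

package body cache_model_pkg is
  procedure predict_access(
    model      : inout cache_model_t;
    addr       : in    std_logic_vector(15 downto 0);
    expect_hit : out   boolean) is
    constant line : natural := to_integer(unsigned(addr(7 downto 0)));
    constant tag  : std_logic_vector(7 downto 0) := addr(15 downto 8);
  begin
    -- A hit requires a valid line whose stored tag matches the new tag
    -- (FSPEC_001/002); a reset operation would clear all valid flags
    -- (FSPEC_003), which is omitted here for brevity.
    expect_hit := model.valid(line) and model.tags(line) = tag;
    -- Both a read (which refills the line) and a write-through write leave
    -- the line valid with the new tag, which is why FSPEC_004 and FSPEC_005
    -- expect a subsequent read of the same address to hit.
    model.tags(line)  := tag;
    model.valid(line) := true;
  end procedure;
end package body;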

Six directed tests were written to exercise the cache and target the behaviors described by the functional specification. The table below summarizes the purpose of each test and the number of memory read/write operations it performs.

Test R/W Description
test1 7 Smoke test
testhit 512 Targets FSPEC_001
testmiss 768 Targets FSPEC_002
testwrite 1024 Targets FSPEC_005
testresetafterhit 1024 Targets FSPEC_003
testresetafterwrite 768 Targets FSPEC_004
Total 4103

As can be seen from the table, a total of 4103 operations are performed against the cache by these tests. When code coverage is collected across all the tests shown above, a total of 75.3% is achieved by the directed tests.

This number could be improved by analyzing uncovered regions of the code and slowly crafting tests to address the missing code coverage. Fundamentally, this is the process of making the tests more comprehensive. Let’s see how applying inFact can rapidly make the tests more comprehensive!

inFact Integration Preparation

The first step in preparing to have inFact drive test stimulus into this testbench environment is to determine how the testbench accepts stimulus from the test. In the case of this testbench, a VHDL record describes the operation (memory read/write, or reset) to be applied to the cache. The record is shown below:

type mem_op_t is (READ, WRITE, RST);

type cpu_req is record
  addr : std_logic_vector(15 downto 0);
  data : std_logic_vector(31 downto 0);
  op   : mem_op_t;
end record;

Figure 1 - Memory-Access Request VHDL Record

inFact describes tests using rules – a formal description of the tests to be produced and the method by which the test stimulus will be communicated to the testbench. inFact rules support transactions, such as that described by the cpu_req record above, using the ‘struct’ construct. inFact rules use the ‘meta_action’ construct to describe stimulus variables within the struct. As can be seen below, an inFact struct is declared to mirror the VHDL record used by the testbench environment.

/*************************************
 * cpu_req.rseg
 **************************************/

rule_segment {
  set mem_op_enum [enum READ,WRITE,RST];
  struct cpu_req {
    meta_action addr[unsigned 15:0];
    meta_action data[31:0];
    meta_action op [mem_op_enum];
  }
}

Figure 2 - Memory-Access Request inFact Struct

In our inFact-based tests, the VHDL test will call inFact to set the values of the VHDL record fields, rather than having procedural code select those values.
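To make the contrast concrete, the fragment below places the two styles side by side. It assumes a variable req of type cpu_req and an inFact generator variable infact_gen, as declared in the full integration shown later in this article; the directed-style field values are arbitrary examples.

-- Directed style: procedural test code selects each field value.
req.op   := READ;
req.addr := x"0100";          -- arbitrary example address
req.data := (others => '0');

-- inFact style: inFact selects the field values, steered by the rules,
-- constraints, and coverage goals described in the following sections.
infact_gen.ifc_fill_item(req);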

A First inFact Test

The cache controller accepts a 16-bit address, which is divided into a page address and a cache-line address.

Figure 3 - Cache Address Layout
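The layout itself is not reproduced here, but it is consistent with FSPEC_002 and with the 256-page binning used later in this article: the upper eight address bits form the page address (which also serves as the tag in this direct-mapped cache), and the lower eight bits form the cache-line address. The small VHDL sketch below shows that decomposition; the package, function, and subtype names are assumptions for illustration.

library ieee;
use ieee.std_logic_1164.all;

package dcc_addr_pkg is
  subtype page_addr_t is std_logic_vector(7 downto 0);
  subtype line_addr_t is std_logic_vector(7 downto 0);

  -- Upper eight bits: page address (the tag in this direct-mapped cache).
  function page_of(addr : std_logic_vector(15 downto 0)) return page_addr_t;
  -- Lower eight bits: cache-line address (the line index).
  function line_of(addr : std_logic_vector(15 downto 0)) return line_addr_t;
end package;

package body dcc_addr_pkg is
  function page_of(addr : std_logic_vector(15 downto 0)) return page_addr_t is
  begin
    return addr(15 downto 8);
  end function;

  function line_of(addr : std_logic_vector(15 downto 0)) return line_addr_t is
  begin
    return addr(7 downto 0);
  end function;
end package body;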

Our first inFact test will perform a memory read and write to each page. The inFact rules below describe the procedure to generate a series of individual accesses to the cache.

rule_graph dcc_traffic_gen {
  import "cpu_req.rseg";
  action init; action infact_checkcov;

  interface fill_item(cpu_req);
  cpu_req item;

  constraint mem_ops_c {
    item.op inside [READ,WRITE];
  }

  dcc_traffic_gen = init
    repeat {
      fill_item(item)
      infact_checkcov
    };
}

Figure 4 - Memory Read/Write Rules

Notice a few things about the rule description above. The rules file shown above imports the previously-created ‘cpu_req.rseg’ file that contains the declaration of the inFact rules struct. The ‘interface’ keyword is used to declare an interface between the rules and the testbench environment. Notice that the ‘fill_item’ interface accepts a struct of type ‘cpu_req’. The mem_ops_c constraint ensures that only READ and WRITE operations are performed by this test. In other words, this test will not perform reset operations.

Finally, note the repeat loop at the bottom of the file. This repeat loop passes our struct item to the interface, selecting values for the struct fields and passing the transaction to the VHDL testbench. The rule description can be viewed as a graph, or flow chart, as seen below.

Figure 5 - Read/Write Graph

Next, our test goal of performing a READ and a WRITE to each page must be described. This is done in two steps.

First, bins are applied to the transaction address to divide the 16-bit address space into 256 pages. Since 65,536 addresses split into 256 bins gives 256 addresses per bin, each bin corresponds to one value of the upper eight address bits, that is, to one page. This will cause inFact to ensure that the addresses of the read and write transactions hit each of the 256 pages.

bin_scheme all_pages {
cpu_req.addr / 256;
}

Figure 6 - Bin Specification to Target all Pages

Next, the variable combinations to target must be described. This is done by annotating a coverage region on the graph as shown below. The MemOpsAllPages goal, shaded here in blue, informs inFact that all combinations of page address and operation must be generated.

Figure 7 - Read/Write All Pages Coverage Goal

Integrating into the VHDL Test

The inFact VHDL test architecture is shown below. The key elements of the inFact integration into the test are as follows. The inFact graph is represented by the ‘infact_gen’ variable in the test process. The inFact graph is initialized at the beginning of the test. On each pass through the graph, a call is made to the ifc_fill_item procedure. This procedure corresponds to the ‘fill_item’ interface declared in the rules, and calls inFact to set values on the cpu_req record fields. The stim_to_driver procedure from the testbench is then called to pass the transaction to the driver, which applies the transaction to the design. Finally, inFact provides the allCoverageGoalsHaveBeenMet function, which indicates when all the goals targeted by the test are complete. When this function returns true, the test loop exits, causing the test to end.

architecture dcc_arch_infact_traffic of dcc_stim_gen is
begin
  infact_gen : process is
    variable infact_gen : dcc_traffic_gen;
    variable req        : cpu_req;
    variable done       : boolean;
  begin
    -- Initialize the inFact generator
    infact_gen.init("infact_gen");

    -- Perform an initial reset
    cpureset(clk, cmd, cmd_req, cmd_ack);

    while true loop
      -- Call inFact to fill the record fields
      infact_gen.ifc_fill_item(req);

      -- Send the command to the BFM
      stim_to_driver(clk, cmd, cmd_req, cmd_ack);

      -- End the test when all goals are complete
      if (infact_gen.allCoverageGoalsHaveBeenMet) then
        exit;
      end if;
    end loop;

    finish(0);
  end process;
end;

Figure 8 - inFact VHDL Testbench Integration

Running the Simple Test

The simple read/write test shown above runs a total of 512 operations (2 operations X 256 pages). Code coverage was collected for just this test. The 512 operations performed by the inFact test achieved 95.4% code coverage, showing that the 512 transactions generated by inFact exercise the design more comprehensively, and with less redundancy, than the 4103 transactions generated by the existing directed tests.

Figure 9 - Code Coverage of inFact Read/Write Test

Testing the Functional Specification

Even though the simple read/write test above achieved better code coverage than the existing directed tests, we still need a test that exercises all the operations described in the functional specification. Next, inFact rules will be created to target each of the functional specification items. From examining the description of each item, it is clear that the longest operation sequence is required by FSPEC_004, which calls for a sequence of three operations: RST, WRITE, READ. Consequently, the graph that targets the testplan will generate sequences of three operations.

The skeleton of a rule description that targets the functional specification items is shown below. Note that it is very similar to the rules that produced only reads and writes. The primary differences are that the testplan rules select a functional specification item to target and send three cpu_req structs to the testbench on each pass through the graph.

rule_graph dcc_testplan_gen {
  import "cpu_req.rseg";
  action init, infact_checkcov;

  interface fill_item(cpu_req);

  meta_action FSPEC_TARGET [
    enum FSPEC_001, FSPEC_002,
    FSPEC_003, FSPEC_004, FSPEC_005];

  cpu_req req1, req2, req3;

  dcc_testplan_gen = init repeat {
    FSPEC_TARGET
    fill_item(req1)
    fill_item(req2)
    fill_item(req3)
    infact_checkcov
  };
}

These rules do not yet specify the relationships between the transactions necessary to exercise the functional specification items. The table below presents a mapping between the English description and a constraint-based description of the functional specification items.

Spec ID Description Constraint
FSPEC_001 If you read from a memory location and then read from it again with the same address, you will get a cache hit.
  req1.op == READ; req2.op == READ; req1.addr == req2.addr
FSPEC_002 If you read from a memory location, then read from another location with the same bottom eight address bits but different bits in the top of the address, you will get a cache miss.
  req1.op == READ; req2.op == READ; req1.addr[7:0] == req2.addr[7:0]; req1.addr[15:8] != req2.addr[15:8]
FSPEC_003 If you reset the cache and do no writes, all reads will be cache misses until the cache is refilled.
  req1.op == RST; req2.op == READ
FSPEC_004 If you reset the cache and then write to a location, you will get a cache hit if you read from that same location.
  req1.op == RST; req2.op == WRITE; req3.op == READ; req3.addr == req2.addr
FSPEC_005 If you write to a location and then immediately read from that location, you will get a cache hit.
  req1.op == WRITE; req2.op == READ; req1.addr == req2.addr
Figure 10 - Functional Spec Mapped to Constraints

The complete rules to exercise each of the functional specification items are shown below. The constraint expressions shown in the table above have been included in the FSPEC_c constraint block.

rule_graph dcc_testplan_gen {
  import "cpu_req.rseg";
  action init, infact_checkcov;

  interface fill_item(cpu_req);

  meta_action FSPEC_TARGET [
    enum FSPEC_001, FSPEC_002,
    FSPEC_003, FSPEC_004, FSPEC_005];
  cpu_req req1, req2, req3;

  constraint FSPEC_c {
    if (FSPEC_TARGET == FSPEC_001) {
      req1.op == READ;
      req2.op == READ;
      req1.addr == req2.addr;
    } else if (FSPEC_TARGET == FSPEC_002) {
      req1.op == READ;
      req2.op == READ;
      req1.addr[7:0] == req2.addr[7:0];
      req1.addr[15:8] != req2.addr[15:8];
    } else if (FSPEC_TARGET == FSPEC_003) {
      req1.op == RST;
      req2.op == READ;
    } else if (FSPEC_TARGET == FSPEC_004) {
      req1.op == RST;
      req2.op == WRITE;
      req3.op == READ;
      req3.addr == req2.addr;
    } else if (FSPEC_TARGET == FSPEC_005) {
      req1.op == WRITE;
      req2.op == READ;
      req1.addr == req2.addr;
    }
  }

  dcc_testplan_gen = init repeat {
    FSPEC_TARGET
    fill_item(req1)
    fill_item(req2)
    fill_item(req3)
    infact_checkcov
  };
}

Figure 11 - Rules targeting the Functional Spec

In terms of test goals, all the functional specification items clearly need to be covered. In addition, just as with the previous test, we will cross the functional specification item with the page address of the first transaction. The coverage goal is shown overlaid on the graph below.

Figure 12 - Coverage Goals for the Functional Spec

In total, this graph will generate 3840 transactions (5 specification items X 256 pages X 3 transactions).

Running the Testplan Test

As with the previous tests, code coverage was collected from the run of the inFact testplan graph test. As can be seen below, the inFact testplan graph test achieved 99.1% coverage after running the 3840 transactions described by the testplan graph.

Figure 13 - Code Coverage from inFact Testplan Graph

Conclusion

As this example shows, adding inFact to a VHDL testbench can quickly and efficiently boost the comprehensiveness of testing, and that boost can have a very positive impact on the level of coverage achieved. In this case, a few lines of rule description generated tests sufficient to boost code coverage to 95%, while a slightly larger description exercised the elements of the functional specification and achieved 99% code coverage.

As the introduction alluded to, however, reuse and portability are also important considerations when selecting verification technology. The good news with inFact is that graphs and rules created in a VHDL environment are fully reusable in, and portable to, other testbench methodologies, such as UVM. The comprehensive verification illustrated in this example, combined with reusability and portability, truly provides the best of both worlds: immediate benefits in a VHDL testbench environment and a “bridge” to whatever testbench environment might come next.

 