Evaluating Coverage Collection Using the VEasy Functional Verification Tool Suite

Samuel Nascimento Pagliarini, Paulo Andre Haacke and Fernanda Lima Kastensmidt
Instituto de Informatica - Universidade Federal do Rio Grande do Sul (UFRGS)
Programa de Pos-Graduacao em Microeletronica
Porto Alegre, Brasil - 91501-970
{snpagliarini, pahaacke, fglima}@inf.ufrgs.br
Abstract—This paper describes a performance evaluation of coverage collection on different simulators. It also describes how coverage is collected using VEasy, a tool suite developed specifically for aiding the process of Functional Verification. A Verilog module is used as an example of where each coverage metric applies. The block and toggle coverage collection algorithms used in VEasy are presented and explained in detail. Finally, the results show that the algorithms used in VEasy are capable of performing coverage collection with a lower simulation overhead when compared with commercial simulators.

Index Terms—Functional verification, coverage collection, coverage analysis, dynamic verification, simulation.
I. INTRODUCTION
The primary goal of Functional Verification (FV) is to
establish confidence that the design intent was captured cor-
rectly and preserved by the implementation [1] [2]. In order
to do that, FV often uses a combination of simple logic
simulation and test cases (i.e. sequences of inputs) generated
for asserting specific features of the design. In the context of
this paper, FV is performed on the Register Transfer Level
(RTL) representation of the design. On top of the simulation,
FV applies specific and specialized constructs, like constrained
randomness [3] [4], assertions [5] [6] and coverage metrics [7],
which is the main theme of the discussions presented later in
this paper.
Verification is a necessary step in the development of today’s
complex digital designs. Hardware complexity continues to
grow, and that growth makes the verification effort
correspondingly more challenging. In fact, it has been shown that the verification
complexity theoretically rises exponentially with hardware
complexity [8]. The Application Specific Integrated Circuit
(ASIC) industry already acknowledges that the verification
process is extremely necessary and hard to accomplish. Veri-
fication itself occupies a fair share of the design cycle time,
some say even 70% [9]. Yet, functional/logic flaws are still
the main reason for silicon re-spins [10]. Therefore, there
is a need to improve current verification practices and
methodologies.
Evaluation of the overall quality of verification is obtained
from coverage metrics, either structural or functional ones.
Functional coverage has become a key element in the verifica-
tion process since it tries to define the expected functionalities
of a given design. However, the continuous increase in the
number of transistors per chip is diminishing validation
effectiveness: test cases are increasingly complex, and
simulation is getting more expensive while providing
less coverage [11].
FV strives to cope with this trend of increasing complexity,
but some of the related challenges are overwhelming. So
far those challenges have been addressed with verification
methodologies and Electronic Design Automation (EDA) tools,
but there is a demand for more innovation and further
automation. This paper describes and compares aspects of
VEasy, an EDA tool suite developed by the authors. VEasy’s
aim is to be a FV solution, including a simulator and a Test-
bench Automation scheme, therefore acting in both domains
of improvement: the simulator as a simple EDA tool and the
Testbench Automation solution as a methodology. Also, VEasy
is capable of collecting and analyzing coverage data, which is
the main topic of this paper.
This paper is organized as follows: Section II explains the
generalities of the tool suite. One example of a possible Design
Under Test (DUT) is presented in section III and it is used
to explain the different types of coverage metrics. Section
IV contains some measurements of the simulation overheads
that enabling coverage collection creates. The algorithms used
in VEasy are also discussed in Section IV. Finally, some
future work considerations are made in Section V and some
conclusions are drawn in Section VI.
II. VEASY - A FUNCTIONAL VERIFICATION TOOL SUITE
The tool suite comprises four main modules:
• The Verilog RTL linting
• The Verilog RTL simulation
• The Testbench Automation methodology
• The coverage collection and analysis
Also, the tool suite has two distinct work-flows: the assisted
flow and the simulation flow. The Verilog linting [12] is
available only in the assisted flow of the tool, which starts
when the Verilog [13] description of the DUT is parsed
and analyzed. The simulation flow is only enabled when the
description complies with the linting rules. Linting guarantees
that the input is written using strictly RTL Verilog construc-
tions. One example of such rule is the single assignment rule,
which ensures that a reg or wire is only assigned in a single
procedural block (typically an always block). Linting checks
are very similar to the ones performed in synthesis tools, but
it also might be used to detect whether a given code complies with
coding style conventions.

Once the code of the DUT is linted, the simulation workflow
may begin. The input of the simulation flow is no longer a
hardware description; instead, it is a verification plan file. The
verification plan file used by VEasy is actually a complete
view of the verification effort. It includes the traditional lists
of features and associated test cases but it also contains
simulation and coverage data. This approach makes it a
unified database of the current verification progress.

As mentioned earlier, the quality of the verification relies
on coverage metrics, either functional or structural ones [7].
Structural coverage metrics are also referred to as code coverage
since there is a direct link between the RTL code and the
metric. VEasy has integrated three different structural coverage
metrics:
• Block coverage
• Expression coverage
• Toggle coverage
All these metrics are widely used by verification teams and
are quite simple. However, although simple and necessary,
this type of metric cannot provide a definitive answer
as to whether a given functionality was actually implemented in the
design. That is why functional coverage collection is also
possible using VEasy, which allows functional coverage from
both inputs and outputs. Output coverage, when necessary, is
performed directly on the primary outputs of the design. The
input coverage, on the other hand, may be performed on the
primary inputs or using specific logical members of VEasy’s
testbench automation methodology.

For the purpose of creating stimulus for the DUT, VEasy
uses a layered testbench automation methodology. This
methodology enables testbench automation using a GUI and/or
a specialized language, where the user is able to create
complex sequence scenarios by using layers of abstraction.
The layers are used to link together sequences of data stimuli.
The tool will automatically extract the design interface and
create a layer0 containing all the physical inputs of the design.
That is the basic data-item used to communicate with the
design while the other layers in the methodology are used
to apply hierarchical rules to this data-item and create a flow
of data that is meaningful to the design. All layers of the
methodology are capable of holding logical members, which
might be manually chosen as points of interest for functional
coverage.

The layered methodology is the most complex aspect of
VEasy and, although not deeply explored in this paper, it
creates the backbone that supports the simulation. On top of
the methodology there is also a constraint solving engine that
allows a hierarchical control of the data being generated.

After defining the layers and sequences, VEasy is ready
to create a simulation snapshot by combining the circuit
description and the test case generation capabilities. The use of
a golden model is optional. All the generated code is ANSI-C
[14] compliant which allows it to be used in a majority of
platforms and compilers. Combining the circuit description
with the test case generation mechanism provides a faster
simulation since there is no context switching between the
simulator and the testbench automation solution.
Our simulation results, so far, are very encouraging. Our
simulator has been compared with two major event-driven
commercial simulators and has shown simulation times that
are less than one tenth of the commercial simulators' times.
Our simulator has also been compared against Icarus [15]
and Verilator [16], the latter being a cycle accurate Verilog
simulator just like VEasy. The results have shown that VEasy
and Verilator have very similar simulation times. Yet, VEasy
allows different types of coverage metrics that are important
for FV.
The different types of coverage metrics are explored in the
next section.
III. VEASY AND THE COVERAGE METRICS
Code coverage measurement metrics are used to assess the
extent to which the DUT’s code has been exercised during
simulation and to judge the quality of the verification. These
metrics are also used as an aid to judge when the verification
is nearing completion (it is common to perform functional
coverage prior to code coverage). So, code coverage identifies
code structures that have not been fully verified and are
possible sources of functional errors.
In order to exemplify the different types of coverage metrics
and how they are obtained from the DUT code, a simple
Verilog module will be used. The source code is shown on
Fig. 1.
 1 module example (clk, rst_n, a, b, c, d);
 2
 3 input clk, rst_n;
 4 input a, b;
 5 output reg [7:0] c;
 6 output reg d;
 7
 8 always @(posedge clk) begin
 9   if (rst_n == 1'b0) begin
10     c <= 8'b0000_0000;
11     d <= 1'b0;
12   end
13   else begin
14     c <= c + 8'd1;
15     d <= c[0];
16
17     if ((a || b) && (c == 0)) begin
18       c <= 8'd128;
19     end
20   end
21 end
22
23 endmodule
Fig. 1: Example of a Verilog code
A. Block coverage
Block coverage measures if each and every block has been
simulated at least once. A block of code is defined as a set
of statements that will always execute together, like the ones
in lines 14 and 15 of Fig. 1. Examining the same figure, one
should consider four different blocks of code. Those blocks
start on the following lines:
• Line 8 (always statement).
• Line 9 (if statement).
• Line 13 (else statement).
• Line 17 (if statement).
Notice that the begin/end pairs are not necessary for creating
blocks of code. The if statement of line 17 does not require
the pair at all, but it would still dictate the creation of a new
block of code. This type of coverage has replaced the older line
coverage metric, in which the execution of every line of code
was measured.
B. Expression coverage
Expression coverage measures how much an expression has
been exercised and therefore complements block coverage.
The expression given on line 17 of Fig. 1 must be exercised in
a way that every sub-expression gets a chance of controlling
the main expression result. In other words, at a given
simulation time, the (a || b) sub-expression must evaluate to
one while the (c == 0) sub-expression evaluates to zero.
This is also true for all the other combinations of inputs.
This behavior makes expression coverage more challenging
to collect than block coverage.
C. Toggle coverage
Toggle coverage measures if every bit of every signal
has been properly stimulated. This measurement is done by
observing if a given bit has toggled from zero to one and vice-
versa. In the code of Fig. 1 all the inputs and outputs of the
module are susceptible to toggle coverage. If the module had
internal signals it would be necessary to cover them as well.
Even for the example of Fig. 1, which is quite simple, there
are 13 bits that must be observed at every clock cycle. For
each bit both transitions must be considered, therefore there
are 26 distinct situations that must be covered. Because of
that, it is predictable that toggle coverage is very costly
to collect.
The results and comparisons of the next sections will show
how hard it is to collect toggle coverage. These results help
to explain why toggle coverage is sometimes performed only
at a gross level [17] in verification processes.
IV. MEASURING SIMULATION OVERHEAD CAUSED BY
COVERAGE
Before explaining how the developed tool suite collects and
analyzes coverage, it is important to understand and measure
the impact that coverage has on simulation. For this task a
set of circuits was chosen along with a set of commercial
simulators. The circuits were chosen based on the different
logic constructions they contain. Four circuits were chosen:
dffnrst is a D-type flip-flop with reset; fsm is a Finite State
Machine (FSM) with 8 states, each performing an 8-bit wide
operation; adder is a 16-bit adder with an enable signal; and
t6507lp [18] is an 8-bit microprocessor with 100 opcodes and
10 addressing modes.
A deeper analysis of the properties found in those circuits is
presented in Tab. I, where the number of blocks, expressions
and toggle bins is shown for each circuit. Toggle bins are
defined as the total number of transitions of interest. Since we
are only interested in one-to-zero and zero-to-one transitions,
dividing the number of toggle bins by two gives the total
number of bits observed in a given circuit. For example, the dffnrst circuit has 8 toggle bins covering 4 one-bit signals: the d input, the q output, plus the clock and reset signals.
Two commercial simulators from major vendors were eval-
uated using those circuits. Simulator A is widely used in the
Field Programmable Gate Array (FPGA) domain while simu-
lator B is widely used in the ASIC domain. For each circuit
a testbench was developed using some simple constraints: all
the data signals are kept completely random except for reset,
which is triggered only once, during the first simulation
cycle. All testbenches were configured to run up to 10 million
clock cycles.
In order to perform a fair comparison, all simulations
were performed using the same computer configuration (64-bit
OS, 6 GB of memory and a quad-core processor operating at
2.66 GHz). Also, no waveform or VCD output was requested
from the simulators. No $display() or printf() operations were
performed in the testbenches. File writing operations were kept
at a minimum, just the necessary to analyze the coverage
data post simulation. Also, coverage was configured to be
collected only for the DUT code, since both simulators'
default behavior is to cover the testbenches as well. Finally, each
scenario for each simulator was evaluated three times and the
average value was calculated. The values reported in all
results and comparisons in this paper are these averages.
The simulation overhead of enabling toggle coverage in
simulator A is shown in Fig. 2, while the overhead of
enabling toggle coverage in simulator B is shown in Fig. 3.
The overheads measured in both simulators may be considered
high. Regarding simulator A, one may notice that the fsm
circuit has a remarkably high overhead, while regarding simulator
B, one may notice that simulating the t6507lp circuit with
toggle coverage more than doubles the simulation time. That is
why some companies choose to perform most of the coverage
tasks only near the project’s end. Although this choice may
increase the number of simulation cycles performed during the
TABLE I: Some properties of the circuits being analyzed.

                   dffnrst   fsm   adder   t6507lp
# of blocks              3     6      15       294
# of expressions         1     2      10        96
# of toggle bins         8    70      74       604
Fig. 2: Simulation overhead due to toggle coverage in simulator A.

Fig. 3: Simulation overhead due to toggle coverage in simulator B.
development of the project, engineers will receive feedback
of the quality of the testbenches late. Clearly this scenario
might lead to inefficient use of engineering resources. It is
also worth mentioning that the same testbench is simulated more
than once when regression testing is considered, which increases
the relevance of coverage overheads.
As shown in Fig. 2 and Fig. 3, the overhead created by
toggle coverage is severe. Although the other types of coverage
also represent significant overheads, toggle coverage is the
most severe one, as shown in Tab. II, which summarizes the
overheads created by all the three metrics plus the overhead
of applying the three metrics at the same time.
As seen in Tab. II, the time it takes to simulate a circuit
with all coverage metrics combined is directly related to the
toggle coverage simulation time, especially for simulator A. It
is also possible to notice that simulating the largest of the
circuits (t6507lp) has created the smallest of the overheads in
simulator A. Actually, the simulation time of this circuit is
already considerable without coverage, as shown on Fig. 2.
Therefore the measured overhead is not so severe. Yet, this
is a particularity of simulator A since the results shown for
simulator B and later from our own simulator reveal otherwise.
This type of scenario has influenced our circuit selection, in
which we have chosen circuits that are purely sequential (a
flip-flop), purely combinational (an adder) and also mixed
circuits of different sizes (a simple fsm and a processor).
It is also important to mention that both commercial sim-
ulators are event-driven, thus allowing them to be used in
more complex scenarios that contain both behavioral and RTL
models. VEasy, on the other hand, is focused only on the FV of
digital circuits; therefore it is a cycle-accurate simulator and
benefits from that behavior with a considerable speed-up
factor. Later VEasy will be compared against these simulators
and, since the simulators have different internal mechanisms,
we chose not to compare actual simulation times but only
the overheads. That is why the values of Tab. II are given
as percentages. Otherwise,
the values measured using VEasy would be smaller by at least
one order of magnitude.
A. VEasy’s block coverage algorithm
The C code from Fig. 4 shows how VEasy collects block
coverage information. The idea of this algorithm is to perform
the necessary coverage operations only once and then disable
them by replacing the function that performs them with another
function. This technique is known as a jump or branch table.
First, a new Handler type is defined as a function pointer
in line 1 of Fig. 4. Then, lines 2 and 3 declare two
function prototypes: cover() and do_nothing(). Next, an array
of Handler is declared and sized according to the number
of blocks of the circuit (the code being considered is the
one from Fig. 1, which has 4 blocks). The contents of the
array are initially set to point only to the cover() function. On
line 8 the cover() function is defined with a single parameter,
 1 typedef void (*Handler)(int);
 2 void cover(int block_id);
 3 void do_nothing(int block_id);
 4
 5 Handler jump_table[4] = { cover,
 6   cover, cover, cover };
 7
 8 void cover(int block_id)
 9 {
10   /* expensive coverage storage */
11   { ... }
12   jump_table[block_id] = do_nothing;
13 }
14
15 void do_nothing(int block_id)
16 {
17   /* empty */
18 }
Fig. 4: Block coverage collection algorithm.
TABLE II: Simulation overhead measured using both simulators and all metrics.

                         Simulator A                                Simulator B
Circuit    Block     Expression   Toggle    All combined   Block   Expression   Toggle    All combined
dffnrst    163.11%   160.67%     175.91%    175.91%        2.56%   1.28%        15.51%     38.46%
fsm        532.24%   539.55%     598.99%    600.25%        1.41%   1.41%        36.48%     60.56%
adder      370.54%   371.21%     407.14%    407.37%        1.16%   1.16%        18.60%     39.53%
t6507lp     12.43%     0.28%      29.04%     29.35%        5.88%   7.06%       151.00%    208.24%
the identification of the block being covered (block_id). The
coverage storage is performed and then the jump table is
updated to reference the do_nothing() function. In other
words, the coverage storage for that given block_id will not be
executed again. Finally, on line 15, the do_nothing() function
is defined as being empty.
It is necessary to call the cover() function at some point in
the simulation. For that purpose the simulation snapshot gen-
erated by VEasy is instrumented with calls to the jump table.
The actual code that performs the storage is not relevant
here; it is sufficient to say that it contains more than one
memory access or even a file I/O operation. Disabling the
execution of this operation creates a small initial overhead
that is justified by the savings it enables later.
B. VEasy’s toggle coverage algorithm
The C code from Fig. 5 shows how VEasy collects toggle
information at each cycle. The collection is executed by
the doToggleCoverage() function, which is called with three
parameters: the old value of a given signal from the last
simulation cycle (oldvalue), the new value of the same signal
from the current simulation cycle (newvalue) and finally a
pointer to the array that holds the toggle status of the signal
(tog[]).
The first operation that the algorithm performs is to find out
if any bit of the signal being evaluated has toggled. Therefore,
an integer variable is declared on line 4 and updated on line
6. The exclusive or of line 6 creates a mask pattern: if a
bit is set in the mask, the corresponding bit has toggled.
However, the mask does not tell whether the bit toggled from
one to zero or vice-versa.
Lines 8 and 9 perform a logical and between the mask and
the oldvalue. For the purpose of detecting a zero-to-one toggle,
the oldvalue is inverted on line 8. For the purpose of detecting
 1 void doToggleCoverage(int oldvalue,
 2   int newvalue, int tog[2])
 3 {
 4   int mask;
 5
 6   mask = oldvalue ^ newvalue;
 7
 8   tog[0] |= ((~oldvalue) & mask);
 9   tog[1] |= (oldvalue & mask);
10 }
Fig. 5: Toggle coverage collection algorithm.
a one-to-zero toggle, the oldvalue is kept the same on line 9.
After performing the logical and, the result is accumulated into
the tog array with a logical or. The first position of the array
pointed to by tog stores the zero-to-one toggles while the
second one stores the one-to-zero toggles. The or operation
guarantees that results from past simulation cycles are not
overwritten.
Analyzing the algorithm from Fig. 5, one notices that there
is no bit shifting. In other words, the algorithm works at
the word level. Each Verilog signal is directly mapped to an
integer, which is analyzed in its entirety. Also, there are no
branch conditions on the algorithm. These two properties are
very positive for the algorithm execution time.
The algorithms described here are compared in the next
section. However, for the purposes of this paper, the
expression coverage collection algorithm will not be considered. It is
sufficient to say that it is directly embedded in the simulation
snapshot, just like the block and toggle coverage collection.
C. Experimental Results and Analysis
All four circuits were also evaluated using VEasy, considering
the three coverage metrics separately and also the three
combined. Results are shown in Tab. III. Again, toggle
coverage creates the most severe simulation overhead. The
overhead seen on the dffnrst circuit is the highest one, since
the simulation time was increased by 35%. Yet, this value is
still lower than the 598% overhead of simulator A and the
36% of simulator B.
A comparison between the average overheads of all three
simulators is shown in Tab. IV. To our surprise, the overheads
measured in simulator A are extremely high, with an average
overhead of 303%. Simulator B also has a considerable
average overhead of 86%. VEasy, on the other hand, has
an average overhead of 39%, which is less than half of the
overhead introduced in simulator B.
TABLE III: Simulation overhead measured using VEasy.

              dffnrst   adder     fsm      t6507lp
Block          3.31%    1.43%    0.07%      2.48%
Expression     4.96%    0.71%    9.16%     13.86%
Toggle        35.54%   25.00%   23.86%     17.82%
All           42.98%   31.67%   44.82%     39.60%
TABLE IV: Average simulation overhead measured using both simulators and VEasy.

              Simulator A                            Simulator B                       VEasy
Block     Expression   Toggle     All       Block  Expression  Toggle   All     Block  Expression  Toggle   All
269.58%   267.93%     302.77%   303.22%     2.75%  2.73%      55.40%   86.70%   1.82%  7.17%      25.55%   39.77%
V. FUTURE WORK
The main goal of this tool suite is to improve the verification
process as a whole. One of the key elements of verification, the
simulation itself, was addressed by developing a cycle-accurate
simulator with coverage capabilities for three structural metrics
and also functional metrics. However, there is a need to
provide different coverage metrics for the user, like path
coverage and FSM specific coverage. Since coverage metrics
are indirect measurements of the verification progress, our
belief is that the user should have as many metrics as possible
available.
Another interesting topic for future research is to create
a methodology for testbench automation based on coverage
goals. The comparisons made in this paper are a valuable
asset for deciding which coverage metric might be used to
feed the stimuli generation. Also, in the future, we would like
to explore parallelism in our simulation environment since
the coverage collection can be performed in parallel with
the actual simulation. It is only necessary to synchronize the
collection and the simulation on a cycle-by-cycle basis.
VI. CONCLUSION
Design verification has been accomplished following two
principal techniques, known as formal and functional verifica-
tion [19]. Although new methodologies have been proposed
[20] and adopted by the industry, these methodologies are
limited [21]. FV is the de-facto industry verification method
and it is mainly simulation based and coverage dependent.
In that context, this paper described a tool that enhances the
FV traditional flow by providing efficient coverage collection
algorithms.
Our proposed solutions provide a collection mechanism that
is fully automated (i.e. requires no user intervention) and
delivers coverage collection with a more feasible simulation
time overhead. Our proposed solutions were compared against
two commercial simulators and, in the worst-case scenario,
delivered coverage collection with an overhead that is less
than 5% higher (only expression coverage in simulator B is more
efficient). However, there is more than one method to collect
expression coverage, like Sum of Products (SOP) scoring,
control scoring and vector scoring. Our algorithm is based
on control scoring and evaluates sub-expressions. The scoring
method of simulator B is SOP based. Since both scoring
methods produce different results, comparing them directly
might be a bit unfair.
Regarding all other metrics, our tool has delivered coverage
collection more efficiently than simulators A and B. Our belief
is that low overhead solutions like the ones presented in this
paper might help verification engineers by providing coverage
information earlier in the verification process, therefore
enabling more efficient use of resources.
REFERENCES
[1] A. Piziali, Functional Verification Coverage Measurement and Analysis. Kluwer Academic, 2004, ch. 2.
[2] Specification for Verification Components/SoC Functional Verification, VSI Alliance, 2004.
[3] J. Yuan, K. Shultz, C. Pixley, H. Miller, and A. Aziz, "Modeling design constraints and biasing in simulation using BDDs," in ICCAD '99: Proceedings of the 1999 IEEE/ACM International Conference on Computer-Aided Design. Piscataway, NJ, USA: IEEE Press, 1999, pp. 584-590.
[4] N. Kitchen and A. Kuehlmann, "Stimulus generation for constrained random simulation," Nov. 2007, pp. 258-265.
[5] Standard for the Property Specification Language (PSL), IEEE Std. 1850, 2005.
[6] Standard for SystemVerilog - Unified Hardware Design, Specification, and Verification Language, IEEE Std. 1800, 2009.
[7] R. Grinwald et al., "User defined coverage - a tool supported methodology for design verification," in Proc. 35th Annual Design Automation Conference, San Francisco, United States, June 15-19, 1998, pp. 158-163.
[8] D. Dempster, M. Stuart, and C. Moses, Verification Methodology Manual: Techniques for Verifying HDL Designs, 2nd ed. Teamwork International, 2001.
[9] S. Fine and A. Ziv, "Coverage directed test generation for functional verification using Bayesian networks," in DAC '03: Proceedings of the 40th Annual Design Automation Conference. New York, NY, USA: ACM, 2003, pp. 286-291.
[10] R. Schutten and T. Fitzpatrick. (2003) Design for verification methodology allows silicon success. [Online]. Available: http://www.eetimes.com/story/OEG20030418S0043
[11] C. Yan and K. Jones, "Efficient simulation based verification by reordering," presented at the Design and Verification Conference, 2010.
[12] L. Bening and H. Foster, Principles of Verifiable RTL Design: A Functional Coding Style Supporting Verification Processes in Verilog. Springer.
[13] Standard for the Verilog Hardware Description Language, IEEE Std. 1364, 2001.
[14] The C Programming Language Standard, ANSI Std. X3.159, 1989.
[15] S. Williams. (1999) Icarus Verilog. [Online]. Available: http://www.icarus.com/eda/verilog/
[16] W. Snyder, D. Galbi, and P. Wasson. (1994) Verilator. [Online]. Available: http://www.veripool.org/wiki/verilator
[17] M. Katrowitz and L. M. Noack, "I'm done simulating; now what? Verification coverage analysis and correctness checking of the DEC chip 21164 Alpha microprocessor," in Proceedings of the 33rd Annual Design Automation Conference, ser. DAC '96. New York, NY, USA: ACM, 1996, pp. 325-330.
[18] S. Pagliarini and G. Zardo. (2009) t6507lp IP core. [Online]. Available: http://opencores.org/project,t6507lp
[19] J. Bergeron, Writing Testbenches: Functional Verification of HDL Models, 2nd ed. Boston: Kluwer Academic, 2003.
[20] O. Cohen et al., "Designers work less with quality formal equivalence checking," presented at the Design and Verification Conference, 2010.
[21] A. Aziz, F. Balarin, R. K. Brayton, S. Cheng, R. Hojati, S. C. Krishnan, R. K. Ranjan, A. L. Sangiovanni-Vincentelli, T. R. Shiple, V. Singhal, S. Tasiran, and H. Y. Wang, "HSIS: A BDD-based environment for formal verification," in Proc. of the Design Automation Conference, 1994, pp. 454-459.