Pre-silicon Validation Technical Backgrounder

Alakesh Chetia, Sr. VP, Business Development
Robert Birch, Chief Technology Officer
TransEDA

Introduction

Pre-silicon validation is generally performed at a chip, multi-chip or system level. The objective of pre-silicon validation is to verify the correctness and sufficiency of the design before sending the design for fabrication. This approach typically requires modeling the complete system, where the model of the design under test may be RTL, and other components of the system may be behavioral or bus functional models. By subjecting the design under test (DUT) to real-world-like input stimuli, pre-silicon validation aims to:

• Validate design sufficiency.
• Validate design correctness.
• Verify implementation correctness.
• Uncover unexpected system component interactions.

To achieve the goals outlined above, a system-level environment is required, with special emphasis on handling the following situations.

Concurrency

Most complex chips have multiple ports or interfaces and there is concurrent, asynchronous and independent activity at these ports in a real system. A system-level verification environment should be able to create and handle such real-world concurrency to qualify as a pre-silicon validation environment. Concurrency needs to be handled in both the test controller and the bus/interface models used.

Some models will return data when a transaction completes, so the test controller or environment can do data checking. Other models require the expected data to be provided up front so the model can do data checking when the transaction completes. Such behavior impacts the test checking methodology and the amount of concurrency that can be generated.
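
To make the distinction concrete, the two checking styles can be sketched as follows. This is a minimal Python sketch with hypothetical BFM classes and a dictionary standing in for the bus target; it does not represent any particular tool's API.

```python
# Two hypothetical BFM checking styles, sketched in Python.

class ReturnDataBFM:
    """Style 1: the model returns read data on completion;
    the environment compares it against its own reference."""
    def __init__(self, memory):
        self.memory = memory  # reference model of the bus target

    def read(self, addr):
        return self.memory[addr]  # data comes back to the caller


class ExpectDataBFM:
    """Style 2: expected data is supplied up front; the model
    itself flags a mismatch when the transaction completes."""
    def __init__(self, memory):
        self.memory = memory
        self.errors = []

    def read(self, addr, expected):
        actual = self.memory[addr]
        if actual != expected:
            self.errors.append((addr, expected, actual))


mem = {0x10: 0xAB}

# Style 1: checking is done on the environment side.
bfm1 = ReturnDataBFM(mem)
assert bfm1.read(0x10) == 0xAB

# Style 2: model-side checking; the test must know the data in advance,
# which limits how freely concurrent traffic can be generated.
bfm2 = ExpectDataBFM(mem)
bfm2.read(0x10, expected=0xAB)
assert bfm2.errors == []
```

Style 2 is the one that constrains concurrency: expected data must be computable before the transaction is issued.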

Results Checking

While it may be relatively easy to generate activity at the different ports or interfaces of a chip, the difficult part is implementing an automated results or data checking strategy. A system-level pre-silicon validation environment should relieve test writers of maintaining or tracking data in test code, to keep the task manageable for a multi-ported system. Tracking the data becomes arduous when different agents in a system interact on the same address segments.
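
One common way to automate this is a shadow memory that mirrors every agent's writes, so reads can be checked without the test writer tracking data. The sketch below is a hypothetical Python illustration of the idea, with `on_write`/`on_read` callbacks standing in for bus monitor hooks; the class and method names are assumptions, not a specific product's interface.

```python
# Minimal shadow-memory checker sketch: the environment mirrors every
# write from any agent, so reads can be checked automatically even when
# several agents touch the same address segments.

class ShadowMemory:
    def __init__(self):
        self._shadow = {}

    def on_write(self, agent, addr, data):
        # Record the last value written, regardless of which agent wrote it.
        self._shadow[addr] = data

    def on_read(self, agent, addr, data):
        # Check read data against the shadow; locations never written
        # are left unchecked.
        expected = self._shadow.get(addr)
        if expected is not None and data != expected:
            raise AssertionError(
                f"{agent}: read {data:#x} at {addr:#x}, expected {expected:#x}")


chk = ShadowMemory()
chk.on_write("cpu", 0x1000, 0xDEAD)
chk.on_write("pci", 0x1000, 0xBEEF)   # another agent overwrites the same address
chk.on_read("agp", 0x1000, 0xBEEF)    # passes: matches the last write
```

The key point is that ordering is resolved in one place: whichever agent wrote last defines the expected data, so the test code never has to reason about it.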

Automated Test Generation

The test creation and/or generation methodology is critical in building a system level pre-silicon validation environment capable of generating real-world-like stimuli. The test generation methodology is closely interrelated to the results checking strategy.

A dynamic test generator and checker are more effective in creating very interesting, reactive test sequences. They are more efficient because errors can be detected as they happen. In a generate/run/post-process method, one may run a simulation for eight hours, only to find during the post-process checking that an error occurred 20 minutes into the simulation, with the balance of the simulation time being useless.
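
The cost difference can be illustrated with a toy run loop in Python. The names are hypothetical and a deliberately buggy function stands in for the DUT; the point is only fail-fast versus full-run-then-check.

```python
def run_simulation(stimulus, dut, checker=None):
    """Toy run loop: apply stimuli in order; with a dynamic checker,
    stop at the first error instead of completing the whole run."""
    log = []
    for cycle, (addr, expected) in enumerate(stimulus):
        actual = dut(addr)
        log.append((cycle, addr, actual))
        if checker and actual != expected:
            return log, f"error at cycle {cycle}"   # fail fast
    return log, None


dut = lambda addr: addr ^ 1          # buggy DUT: flips the low address bit
stimulus = [(a, a) for a in range(1000)]

# Dynamic checking: aborts on the first bad transaction.
log, err = run_simulation(stimulus, dut, checker=True)
assert err == "error at cycle 0" and len(log) == 1

# Generate/run/post-process style: the full (wasted) run completes
# before any checking happens.
log, _ = run_simulation(stimulus, dut)
assert len(log) == 1000
```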

An automated test generation tool should be capable of handling directed testing, pseudo-random testing and reactive testing.

In directed testing, users specify the sequence of events to generate. This is efficient for verifying known cases and conditions. Pseudo-random testing is useful in uncovering unknown conditions or corner cases.

TransEDA [email protected] www.transeda.com

© 2001 TransEDA

Pseudo-random test generation, where transactions are generated from user-defined constraints, can be interspersed with blocks of directed sequences of transactions at periodic intervals to re-create real-life traffic scenarios in a pre-silicon validation environment.
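
Such interspersed generation might be sketched as follows. The constraint set, directed burst and transaction tuples are illustrative assumptions, not a specific tool's input format.

```python
import random

# Sketch of constraint-based pseudo-random generation interspersed with
# directed bursts at periodic intervals.

CONSTRAINTS = {"ops": ["read", "write"], "addr_lo": 0x0, "addr_hi": 0xFF}
DIRECTED_BURST = [("write", 0x10), ("read", 0x10)]  # known-interesting sequence

def generate(n, burst_every, seed=0):
    rng = random.Random(seed)        # seeded, so failures are reproducible
    txns = []
    for i in range(n):
        if burst_every and i % burst_every == 0:
            txns.extend(DIRECTED_BURST)          # periodic directed block
        txns.append((rng.choice(CONSTRAINTS["ops"]),
                     rng.randrange(CONSTRAINTS["addr_lo"],
                                   CONSTRAINTS["addr_hi"] + 1)))
    return txns

txns = generate(20, burst_every=5)
assert ("write", 0x10) in txns and len(txns) == 28  # 20 random + 4 bursts of 2
```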

Dynamic test generation also facilitates reactive test generation. Reactive test generation implies a change in test generation when a monitored event is detected during simulation.
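
A minimal sketch of the reactive idea, assuming a hypothetical monitor callback interface; the event name and transaction tuples are illustrative.

```python
# Reactive generation sketch: when the monitor reports an event during
# simulation, the generator switches its constraint set.

class ReactiveGenerator:
    def __init__(self):
        self.mode = "normal"

    def on_monitor_event(self, event):
        # e.g. the bus monitor saw a retry: shift traffic to stress that path
        if event == "retry_seen":
            self.mode = "retry_stress"

    def next_txn(self):
        if self.mode == "retry_stress":
            return ("read", 0x40)   # hammer the address that caused retries
        return ("write", 0x00)


gen = ReactiveGenerator()
assert gen.next_txn() == ("write", 0x00)
gen.on_monitor_event("retry_seen")      # monitored event detected mid-run
assert gen.next_txn() == ("read", 0x40) # generation changes reactively
```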

Robust, High-quality Verification IP

The quality of verification, and therefore the probability of shippable first-pass silicon, is greatly enhanced with robust, high-quality verification IP, which includes such items as bus functional models (BFMs) and protocol monitors.

A common mistake is to require the project group that develops the RTL to also create the verification IP used to verify the RTL. While this is sometimes required for proprietary interfaces, it runs the risk of making the same wrong assumptions. Further, the rate of maturity of an internally developed model is much slower than a commercial model that has been used by multiple independent design groups.

Whether the design team builds or buys the verification IP, they must ensure that the models can fit into the test generation and checking strategy that is adopted. Also, the models need to operate in a mode that fits into a pseudo-random test methodology. Models that load and execute a pre-compiled test sequence do not work in an environment where one can dynamically generate and check tests.

Models must operate at the appropriate level of abstraction, concurrency and programmable controllability. For example, a processor BFM may simply drive transactions on the bus but not automatically handle deferred transactions and retries. In such a case, the test code or an additional layer of abstraction needs to handle those situations, making test writing more difficult and time-consuming.

Sometimes models generate or respond to signals on the bus with fixed timing and do not provide programmable controllability. To fully validate adherence to a bus protocol, the system must be tested with all variations in cycle timing that the device specification allows. This means that the test generator should be able to change the timing of the models, and to randomly vary delays and cycle relationships such as data latencies and wait states.
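
For illustration, such spec-bounded timing randomization could look like this in Python; the parameter names and min/max ranges are assumptions, not taken from any real device specification.

```python
import random

# Sketch of programmable timing control: the generator randomizes delays
# and wait states within the limits the device specification allows, then
# programs them into the model before each transaction.

TIMING_SPEC = {            # assumed min/max cycles permitted by the spec
    "data_latency": (1, 8),
    "wait_states": (0, 4),
}

def randomize_timing(rng):
    return {name: rng.randint(lo, hi) for name, (lo, hi) in TIMING_SPEC.items()}

rng = random.Random(42)
for _ in range(100):
    t = randomize_timing(rng)   # would be written to the BFM's delay registers
    assert 1 <= t["data_latency"] <= 8
    assert 0 <= t["wait_states"] <= 4
```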

So while, on one hand, a higher level of abstraction (such as the transaction level) is required in the model API so that system-level tests can be created easily, it is equally important for the API to provide low-level control of signal timing.

An Architecture for Pre-silicon Validation

We present here a case study of a pre-silicon validation environment set up to verify several versions of a bridge device connecting a CPU with memory and I/O interfaces.

[Figure: Simulation Environment block diagram. A Test Controller & Data Checker, driven by Test Templates, communicates through a Communication Layer with a CPU Model, PCI Model, AGP Model and Memory Model. These connect to the DUT's CPU I/F, Mem I/F, two I/O I/Fs and Custom Logic. Coverage Metrics are collected from the environment.]

Verification components

The major components of this pre-silicon validation environment are:

* Bus Functional Models: The intelligent BFMs provide a transaction-level API and are designed to handle concurrency and parallelism, which makes them suitable for use in an automated test generation environment. They present a consistent programmer's view and offer a high degree of controllability over model behavior, emulating a real device with real operating characteristics through programmable delay registers and configuration registers.

* Bus Protocol Monitors: The intelligent bus protocol monitors provide dynamic protocol checking and can be used in automated test generation environments. They provide dynamic bus state information, which can be used to give dynamic feedback to user tests or automated test controllers. They are extensible to accommodate user-defined sequences.

* Test Controller and Data Checker: The intelligent test controller utilizes BFMs and transaction generators to create constraint-based concurrent sequences of transactions at the different interfaces of the DUT. The controller can generate transactions pseudo-randomly, from a user-specified sequence, or as a mix of both. It can also perform specific tasks or dynamically reload input constraints when a certain event occurs during simulation. In addition to test stimulus generation, the controller provides automated and dynamic data checking.

Salient Features

Some of the major features and characteristics of this environment are as follows:

* Concurrency: It handles concurrent, asynchronous and independent generation of transaction sequences at the different interfaces of the DUT.

* Results Checking: It has a built-in shadow memory system to handle automatic results checking. The test writer does not need to write code to do data checking; it is done automatically and transparently.

* Automated Test Generation: The intelligent test controller and data checker can handle user-specified directed tests with automated data checking and/or automatically generate pseudo-random sequences of transactions with automated data checking. Being a dynamic test generator, it can handle reactive test generation as well.

* Robust, High-quality Verification IP: The intelligent models and monitors provide an appropriate level of abstraction and controllability for effective system-level pre-silicon validation.

* Ease of Use: It is easy to use and intuitive, without requiring the learning of a new language or methodology.

* Leveraging Design and Application Knowledge: It leverages application-specific knowledge so that only the pertinent application space is tested.

* Configurable and Extensible: It is easily configurable and extensible, allowing pre-silicon validation of multiple different configurations.

* Reusing the Test Environment: This architecture allows easy test and environment reuse. Replacing the appropriate models and monitors made it easy to validate subsequent generations of the DUT.

Conclusion

Pre-silicon validation requires a well-thought-out and proven strategy, and helps achieve shippable first-pass silicon. The key features to consider in a system-level pre-silicon validation environment are concurrency, automated test generation and results checking, and intelligent models and monitors comprising proven, high-quality verification IP. The example application outlined the pre-silicon validation of a bridge device, noting the use of BFMs, monitors, and an intelligent test controller and data checker.