Methodologies and flows for chip design
Chip Specification Language (CSL)
Derek Pappas
Overview
Shared infrastructure
• Infrastructure elements are shared between teams
• Infrastructure elements include hierarchies,
connections, vectors, vector readers/writers, state
elements (e.g. registers, register files, …), ISAs,
pipelines, address maps, …
• Infrastructure elements do not include control and
data path elements, which are typically implemented
differently in RTL and simulation
The cost of not automating vs automating
• Failing to automate the generation of infrastructure
code costs companies $$$ and increases time to
market
• Conversely, a significant reduction in engineering and
verification time can be achieved by automating the
infrastructure generation process
Infrastructure changes
• Changes in the architecture and RTL cause changes in
simulation, verification, documentation, and software
• Manually generated infrastructure is time-consuming to
maintain
• Manually generated infrastructure limits the number of
verification points/test benches on projects
• Teams should not spend time creating and maintaining
infrastructure when the infrastructure can be generated
Infrastructure changes (continued)
• Changes in infrastructure are communicated through
emails, meetings, build breaks, and tickets
• Often documentation is not updated to reflect the
infrastructure changes
Throwing problems over the wall
• Throwing problems over the wall gives the illusion that
the generation of infrastructure in one domain can be
done quickly
• The next domain has to reimplement the same
infrastructure elements
• The changes have to be made in all domains
Problem: Parallel bookkeeping
• Eliminate the parallel bookkeeping required to
synchronize shared infrastructure elements between
teams
Automatically generate shared infrastructure
• 100x reduction in code
• Rapidly bring up the project
• Rapidly make changes
• Propagate changes to all affected parties
HW/SW/Simulation/Verification interface
methodology
[Diagram: HW, SW, Verif, and Sim views plus DOCS; equivalent infrastructure across all file views/teams, with documentation]
Shared infrastructure
• Examples:
• Interfaces
• Registers
• Vectors
CSL infrastructure generation flow
[Diagram: CSL specification -> CSL compiler -> HW, SW, Verif, Sim, and DOCS views]
Reduce integration time
• Tighter software/hardware integration
• Tighter simulation/hardware integration
• Tighter test/simulation/hardware integration
Benefits
• Rapidly bring up test benches, simulation and design
• Test RTL from day 1
• Spend time on the architecture and design
• No time wasted bringing up and maintaining
infrastructure
• 10-100x reduction in work required to create the project
infrastructure
Benefits (continued)
• Generate infrastructure code and documentation which
is in sync
• Create a specification for infrastructure which can be
compiled
• Cloud-based team solution (to be built) for instant
collaboration using a GUI and table-driven entry
• To be constructed using the existing C++ class
hierarchy and compiler, with web-based tools and an
underlying database
Benefits (continued)
• Extensive CSL documentation (manuals)
• Needs to be converted from FrameMaker, which is no
longer supported by Adobe
• CSL is thoroughly tested
• Automatic and manual test suites
• Regression/reports
• Needs to be turned into a web-based report and run on
a continuous integration server like Jenkins
Dramatically increase the number of verification
points
• A verification point in an RTL design is either a set of
interfaces or state that is driven by stimulus vectors and
compared against expected vectors and/or state from an
alternate implementation
• Teams are bandwidth limited when creating verification
points in designs
• Creation and maintenance of test infrastructure is time-consuming
Typical objections to infrastructure meta
languages and compilers
Typical objections by HW engineers to using
meta specification languages
• “I can write the infrastructure code as fast as I can write
the meta language spec”
• No one can write code as accurately as a working
compiler can.
• “I only write the infrastructure code once”
• Infrastructure code changes and needs to be synced
across domains
Typical objections by HW engineers to using
meta specification languages (continued)
• “Integration does not take long so I do not need a meta specification
language/GUI/compiler”
• Integration lasts for an entire project cycle, and if the entire design is
to have complete coverage with respect to test points it will take an
army of verification engineers to generate the test benches.
• Integration also accounts for a huge amount of the delays on projects.
We brought up 64-processor application-specific chips at Sun in 2
weeks (under 3 weeks including a basic compiler and RTL model),
using a meta language to generate the complex interconnect for the
entire chip in 2 days.
Typical objections by HW engineers to using
meta specification languages (continued)
• “Generated code is not flexible and I can’t add custom
code to it”
• Wrong. Generated code has points where custom
code can be inserted to modify the behavior of the
design
• “No one uses tools to do that…”
• Hardware engineers who do not embrace well-understood
software engineering principles reject
meta language compilers
Typical objections by HW engineers to using
meta specification languages (continued)
• “We already have point tools for memory map
generation, interconnect generation”
• The individual tools typically do not communicate with
one another, and parts of the infrastructure
specification need to be replicated in each
overlapping tool’s input format.
• “We have SystemC/System Verilog“
• These are not meta description languages
The Objections are not factual…
• In reality, no programmer can manually create and
maintain the large body of infrastructure code on a project
• Infrastructure code needs to be consistent across
different views, which is not possible with “parallel
bookkeeping”
• It takes a lot longer to manually create and maintain
infrastructure code than it does to write a compressed
representation in a meta language and/or to use a GUI
to describe the specification and to compile the
specification
Meta specification language
Guaranteeing consistency between different
views
• A single source is used to describe the shared
infrastructure
• A single source eliminates parallel bookkeeping
problems
Automating the chip specification process
• A meta level specification is constructed using a GUI
and the Chip Specification Language (CSL)
• The CSL specification is checked for correctness by
software
• Generation of shared infrastructure code for RTL,
simulation, software, and verification
CSL Grammar/EBNF
• CSL is well defined
• The language is parsed using ANTLR
Meta specification language benefits
• Single source
• Rapid changes
• Synchronization of changes across all domains
• Minimal time to infrastructure
• Minimal code set
Requirements for a meta specification language
• Different domains share infrastructure
• Need an unambiguous shared specification
• The specification should compile
CSL Language
CSL Attributes
• CSL is object oriented
• CSL is similar to C++/Java
• No pointers/memory management
• Scopes
• CSL is easy to learn
CSL Documentation
• CSL grammar is specified for each major component
(e.g. interconnect, test benches, …)
• Examples for each language construct are shown
• CSL code + diagrams + generated Verilog/C++
Interconnect, hierarchy and test
benches
Connections
Connection types
• Scalar
• Vector
• Concatenations
• Structures
• Hierarchies
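The aggregate connection types above can be pictured with a hypothetical generated C++ view (the struct name, field widths, and packing order are illustrative assumptions, not actual CSL compiler output):

```cpp
#include <cstdint>

// Hypothetical generated view of an aggregated connection: four 10-bit
// scalars a, b, c, d concatenated into one 40-bit signal group.
struct Abcd {
    uint16_t a, b, c, d;  // each carries a 10-bit value

    // Concatenation {a, b, c, d} -> abcd[39:0]
    uint64_t pack() const {
        return (uint64_t(a & 0x3FF) << 30) | (uint64_t(b & 0x3FF) << 20) |
               (uint64_t(c & 0x3FF) << 10) | uint64_t(d & 0x3FF);
    }

    // Inverse: recover the scalars from the packed group.
    static Abcd unpack(uint64_t v) {
        return { uint16_t((v >> 30) & 0x3FF), uint16_t((v >> 20) & 0x3FF),
                 uint16_t((v >> 10) & 0x3FF), uint16_t(v & 0x3FF) };
    }
};
```

Generating the pack/unpack pair from one declaration is what keeps the RTL port, simulator variable, and vector formats in agreement.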
Signal types
• [SLIDES UNDER CONSTRUCTION]
Aggregated structures
• [SLIDES UNDER CONSTRUCTION]
Connections and functional units
Hierarchies and pipelines
[Diagram: hierarchy top contains z{z0}, z{z1}, z{z2}; each z contains y; each y contains x{x0} and x{x1}; 10-bit signals a0_0[9:0] … d1_1[9:0] are piped between the instances]
Auto routing connections between end points
• End points, name(s), and the type to route are specified
• Intermediate connections do not need to be specified
• Benefits
• Consistency between different file views
• Rapid floor planning changes to meet full chip timing
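The routing rule the bullets describe can be sketched as a path computation (a simplified illustration; the function name and dotted-path format are assumptions, not the CSL implementation): the router walks up from the source endpoint to the deepest common ancestor, then down to the destination, and every scope on the way gets a punched-through port.

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split a hierarchical path like "top.z0.y.x0" into its scopes.
static std::vector<std::string> splitPath(const std::string& path) {
    std::vector<std::string> scopes;
    std::stringstream ss(path);
    std::string part;
    while (std::getline(ss, part, '.')) scopes.push_back(part);
    return scopes;
}

// List the intermediate scopes a signal must be routed through to get
// from src to dst: up to the deepest common ancestor, then down.
std::vector<std::string> routeThrough(const std::string& src,
                                      const std::string& dst) {
    auto s = splitPath(src), d = splitPath(dst);
    size_t common = 0;
    while (common < s.size() && common < d.size() && s[common] == d[common])
        ++common;
    std::vector<std::string> route;
    for (size_t i = s.size() - 1; i > common; --i)  // up: output ports
        route.push_back(s[i - 1]);
    for (size_t i = common; i + 1 < d.size(); ++i)  // down: input ports
        route.push_back(d[i]);
    return route;
}
```

For the example connection z{z0}.y.x{x0} -> z{z1}.y.x{x0}, the signal exits through y and z0, then enters z1 and y; none of those intermediate ports had to be written by hand.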
Connections and functional units
[Diagram: signals a[9:0], b[9:0], c[9:0], d[9:0] form the signal group abcd; the connection z{z0}.y.x{x0} -> abcd -> z{z1}.y.x{x0} is routed through the hierarchy top -> z{z0}, z{z1}, z{z2} -> y -> x{x0}, x{x1}]
Connections are autorouted across hierarchies
Test bench generation
C/C++/SystemC Simulator
infrastructure generation
Generation of simulator infrastructure code
• Simulator code is generated from the meta specification
• Equivalent verification points are chosen between the
RTL design and simulator
• Vector writers/readers are generated automatically
• Vector readers/writers are generated in the
corresponding test benches
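A minimal sketch of what a generated vector writer/reader pair could look like (the text format, one hex word per signal per cycle, is an assumption for illustration; the point is that both sides share one generated format):

```cpp
#include <cstdint>
#include <sstream>
#include <string>
#include <vector>

// Write one stimulus/expected vector per line, one hex word per signal.
void writeVectors(std::ostream& os,
                  const std::vector<std::vector<uint64_t>>& cycles) {
    for (const auto& cycle : cycles) {
        for (size_t i = 0; i < cycle.size(); ++i)
            os << (i ? " " : "") << std::hex << cycle[i];
        os << "\n";
    }
}

// Read the vectors back; the simulator and the RTL test bench both use
// readers generated from the same specification, so they always agree.
std::vector<std::vector<uint64_t>> readVectors(std::istream& is) {
    std::vector<std::vector<uint64_t>> cycles;
    std::string line;
    while (std::getline(is, line)) {
        std::istringstream ls(line);
        std::vector<uint64_t> cycle;
        uint64_t word;
        while (ls >> std::hex >> word) cycle.push_back(word);
        cycles.push_back(cycle);
    }
    return cycles;
}
```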
Automatic simulator infrastructure generation
benefits
• The simulator infrastructure code is updated
automatically
• The generated infrastructure in the simulator is in sync
with the corresponding infrastructure elements in the
other domains (i.e. RTL, software, verification)
Automatic test bench generation
• Maintain consistency between the RTL design unit
under test (DUT), the algorithmic or cycle based
simulator, tests, and test benches
• Generate test benches at every level
• The generation of infrastructure for test bench
generation/design coverage is not limited by the
bandwidth of the design, simulation, and verification
teams
Level of verification detail
• Creating test benches
• Integration of multiple units
• Leaf level testing
• Lots to maintain
Connections (signals) become vectors
• Consistency between the design, the simulator and the
test bench is maintained
Different levels of abstraction in models
[Diagram: C++ simulator words (uint abcd0, uint abcd1) compared (==) against RTL test bench signals a1_0[9:0] … d1_1[9:0]]
Infrastructure equivalency requirement
[Diagram: the C++ simulator and the RTL test bench must provide equivalent infrastructure at every point: input vector and expected vector, vector reader and vector checker, module under test instantiation, inputs/outputs, inter-unit connections, and the design hierarchy]
Infrastructure maintenance/parallel bookkeeping
• Single test bench
• At least 7 infrastructure elements to maintain
• Different teams need to keep them in sync
• Changes break regressions/tests
Design hierarchy equivalency verification point
[Diagram: the design hierarchy in the C++ simulator is matched (==) against the design hierarchy in the RTL test bench]
Unit interface equivalency
[Diagram: unit inputs/outputs in the C++ simulator are matched (==) against the unit interface in the RTL test bench]
Vector equivalency
[Diagram: the input vector and expected vector are shared (==) between the C++ simulator and the RTL test bench]
Test bench generation
[Diagram: the generated RTL test bench consists of a vector reader, the module under test instantiation, a vector checker, and the inter-unit connections, matched (==) against the C++ simulator]
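The generated pieces named above (vector reader, module under test instantiation, vector checker) reduce to a generic check loop. This sketch is a stand-in for generated test bench code, with the DUT modeled as a function, not the CSL compiler's actual output:

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// One generated test bench run: read an input vector each cycle, drive
// the module under test, and check its outputs against the expected
// vector. Returns the number of mismatching cycles.
int runTestBench(const std::vector<uint64_t>& inputs,
                 const std::vector<uint64_t>& expected,
                 const std::function<uint64_t(uint64_t)>& dut) {
    int mismatches = 0;
    for (size_t cycle = 0; cycle < inputs.size(); ++cycle) {
        uint64_t out = dut(inputs[cycle]);         // drive inputs, sample outputs
        if (out != expected[cycle]) ++mismatches;  // vector checker
    }
    return mismatches;
}
```

Because the same loop is stamped out for every verification point, adding a test bench costs a specification entry rather than hand-written harness code.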
State infrastructure/registers
[Diagram: register file (RF) state in the C++ simulator is matched (==) against the RF in the RTL test bench; a comparator checks the results of the input vector against the expected vector]
Standalone algorithmic simulators can be built to
generate vectors
[Diagram: a standalone software simulator runs the test(s) and generates the input and expected vectors that drive the RTL test bench (==)]
Signal Inheritance
Objects share features
• The same bits are reused in different types of objects
• Connections
• Inputs/outputs
• Registers
• Pipelines
• Vectors
Reuse of features
• Objects inherit features from a meta description
• All derived objects share the same features
Reuse benefits
• Reuse reduces time to code
• Reuse reduces errors
• Reuse eliminates parallel book keeping
• Reuse reduces time to make changes
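One way to render the inheritance idea in C++ (hypothetical; the type names are invented, and the 22-bit instruction layout is taken from the pipeline example later in the deck): the field layout is declared once and every derived view shares it.

```cpp
#include <cstdint>

// Meta description: the field layout, declared exactly once.
struct InstructionFields {
    unsigned opcode : 4;  // opcode[3:0]
    unsigned srca   : 6;  // srca[5:0]
    unsigned srcb   : 6;  // srcb[5:0]
    unsigned dest   : 6;  // dest[5:0] -> 22 bits total
};

// Derived views all reuse the same layout, so a change to the meta
// description propagates to every view at once.
struct InstructionPort     { InstructionFields f; };              // connection/IO view
struct InstructionRegister { InstructionFields f; bool valid; };  // state element view
using InstructionVectorElement = InstructionFields;               // test-vector view
```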
Example: pipeline connected to a signal
[Diagram: a 22-bit signal instruction[21:0] = {opcode[3:0], srca[5:0], srcb[5:0], dest[5:0]} is connected to pipeline stages with valid, stall, reset, and enable controls; the fields opcode_0, srca_0, srcb_0, dest_0 repeat at each stage]
Access elements in aggregated structures
instruction[21:0]
instruction.srca
instruction.srcb
instruction.dest
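In a generated C++ view, the dotted accesses above could reduce to shift-and-mask helpers over the packed 22-bit word (field placement within the word is an assumption for illustration; the widths match the pipeline example):

```cpp
#include <cstdint>

// instruction[21:0] = {opcode[3:0], srca[5:0], srcb[5:0], dest[5:0]}
// (assumed layout: opcode in the top bits, dest in the bottom bits)
inline uint32_t opcode(uint32_t insn) { return (insn >> 18) & 0xF; }
inline uint32_t srca(uint32_t insn)   { return (insn >> 12) & 0x3F; }
inline uint32_t srcb(uint32_t insn)   { return (insn >> 6) & 0x3F; }
inline uint32_t dest(uint32_t insn)   { return insn & 0x3F; }
```

Generating these accessors from the one aggregate declaration means a field-width change never silently breaks a hand-written mask.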
Memory maps
Memory maps (CSL under construction)
• The CSL memory map specification integrates tightly
with the other CSL components
Pipeline Generation
Generating pipelines (under construction)
• Generation of pipeline code can eliminate register
overrun and underrun errors
• Valid, stall, and enable logic is generated
• Cookie cutter/cut and paste pipeline logic is generated
• Pipeline naming conventions are followed and names
are updated automatically
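The cookie-cutter stage logic the bullets describe might look like this in a generated C++ simulator (a sketch under assumed semantics: stall holds the stage's contents, a deasserted enable drops them):

```cpp
#include <cstdint>

// One generated pipeline stage with valid/stall/enable control; this is
// the repetitive logic the compiler would stamp out for every stage.
struct PipeStage {
    uint32_t data = 0;
    bool valid = false;

    // Advance on a clock edge: hold on stall, clear on !enable,
    // otherwise latch the incoming data and valid bit.
    void clock(uint32_t dataIn, bool validIn, bool stall, bool enable) {
        if (!enable) { valid = false; return; }  // stage disabled: drop contents
        if (stall) return;                       // stalled: hold current contents
        data = dataIn;
        valid = validIn;
    }
};
```

Because every stage uses the same generated template, overrun/underrun bugs from hand-copied valid/stall logic cannot creep in stage by stage.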
Control, Status, Query,
Configure, Measure
Control, Status, Query, Configure, Measure
• Control
• Status
Out of band communication
• [UNDER CONSTRUCTION]
Non-shared infrastructure elements
• Non-shared infrastructure elements that need to integrate with RTL
for tape-out models and that can be generated from a specification:
• Clock trees
• Testability logic
• IO pads
• Note: it can be argued that most of this logic should be in the RTL
design
• Some design teams “hack” the clock trees and testability logic into
the netlist, which further delays tape out, instead of putting the clock
trees and testability logic under test/regression control
More CSL components
• The CSL language is very comprehensive
• Many more CSL components are supported by the CSL
compiler and documented.
• Some components need to be built out