Technical University Tallinn, Estonia. Copyright 2000-2003 by Raimund Ubar.
Raimund Ubar
Tallinn Technical University, D&T Laboratory
Estonia
Design for Testability
Course Map

(Diagram.) The D&T field spans Test and Design. Tools: fault modelling, fault simulation, test generation, fault diagnosis, DFT, BIST. Models: defect level, logic level, high-level system modelling. Theory: Boolean differential analysis, BDDs, DDs.
Motivation of the Course
• The increasing complexity of VLSI circuits has made test generation one of the most complicated and time-consuming problems of digital design
• The more complex systems become, the more important the problems of test and design for testability will be, because of the very high cost of testing electronic products
• Engineers involved in SoC design and technology should be
– made better aware of the importance of test,
– shown the very close relationship between design and test, and
– trained in test technology,
so that they can design and produce high-quality, defect-free and fault-tolerant products
Goals of the Course
• The main goal of the course is to give the basic knowledge needed to answer the question: how can testing quality be improved at the increasing complexity of today's systems?
• This knowledge includes
– understanding how physical defects can influence the behavior of systems, and how fault modelling can be carried out
– learning the basic techniques of fault simulation, test generation and fault diagnosis
– understanding the meaning of testability, and how the testability of a system can be measured and improved
– learning the basic methods of making systems self-testable
• A further goal is to give some hands-on experience in solving test-related problems
Objective of the Course
VLSI design flow:
• Specification — hardware description languages (VHDL)
• Implementation — full custom, standard cell, gate arrays
• Manufacturing — CMOS
• Verification — simulation, timing analysis, formal verification
• Testing — automatic test equipment (ATE), structural scan testing, Built-in Self-Test
Content of the Course
Lecture course – 16 h. Laboratory work – 8 h.

Lecture course:
• Introduction (1 h)
– General philosophy of digital test. Fault coverage. Types of tests. Test application. Design for test. Economy of test and the quality of the product.
• Overview of mathematical methods in testing (2 h)
– Boolean differential algebra for test generation and fault diagnosis
– Binary decision diagrams and digital circuits
– Generalization of decision diagrams for modeling digital systems
• Fault modeling (2 h)
– Faults, errors and defects. Classification of faults. Modeling defects by Boolean differential equations. Functional faults. Fault equivalence and fault dominance. Fault masking.
Content of the Course
Lecture course (cont.):
• Test generation for VLSI (3 h)
– Combinational circuits, sequential circuits, finite state machines, digital systems, microprocessors, memories. Delay testing. Defect-oriented test generation. Universal test sets.
• Fault simulation and diagnostics (3 h)
– Test quality analysis. Simulation algorithms: parallel, deductive, concurrent, critical path tracing. Fault diagnosis: combinational and sequential methods. Fault tables and fault dictionaries.
• Design for testability (2 h)
– Testability measures. Ad hoc testability improvement. Scan-path design. Boundary Scan standard.
• Built-in Self-Test (3 h)
– Pseudorandom test generators and signature analysers. BIST methods: BILBO, circular self-test, store-and-generate, hybrid BIST, broadcasting BIST, embedded BIST.
References
1. N. Nicolici, B. M. Al-Hashimi. Power-Constrained Testing of VLSI Circuits. Kluwer Academic Publishers, 2003, 178 p.
2. R. Rajsuman. System-on-a-Chip: Design and Test. Artech House, Boston, London, 2000, 277 p.
3. S. Mourad, Y. Zorian. Principles of Testing Electronic Systems. J. Wiley & Sons, New York, 2000, 420 p.
4. M. L. Bushnell, V. D. Agrawal. Essentials of Electronic Testing. Kluwer Academic Publishers, 2000, 690 p.
5. A. L. Crouch. Design for Test. Prentice Hall, 1999, 349 p.
6. S. Minato. Binary Decision Diagrams and Applications for VLSI CAD. Kluwer Academic Publishers, 1996, 141 p.
7. M. Abramovici et al. Digital Systems Testing and Testable Design. Computer Science Press, 1995, 653 p.
8. D. Pradhan. Fault-Tolerant Computer System Design. Prentice Hall, 1995, 550 p.
Overview
1. Introduction
2. Theory: Boolean differential algebra
3. Theory: Decision diagrams
4. Fault modelling
5. Test generation
6. Fault simulation
7. Fault diagnosis
8. Testability measuring
9. Design for testability
10. Built-in Self-Test
Overview: Introduction
• Role of testing
• How much to test?
– The problem is money
– Complexity vs. quality
– Hierarchy as a compromise
• Testability – another compromise
• Quality policy
• History of test
• Course map
Introduction: the Role of Test
(Diagram.) Dependability comprises reliability, security and safety; it is supported by fault tolerance, fault diagnosis, test and BIST. Design for testability covers test and diagnosis.

"There is no security on this earth; there is only opportunity." — Douglas MacArthur (General)
Introduction : How Much to Test?
Amusing test:

Paradox 1: a digital model is finite, an analog model is infinite. However, the complexity problem was introduced by the Digital World.

Paradox 2: if I can show that the system works, then it should not be faulty. But what does "it works" mean? A 32-bit accumulator has 2^64 input combinations, all of which should work. So, you should test all of them!

"All life is an experiment. The more experiments you make, the better." (American wisdom)

(Figure.) A system maps stimuli X to responses Y. In the analog case a few samples allow interpolation; in the digital case you cannot extrapolate.
Introduction: How Much to Test?
Paradox: even 2^64 input patterns (!) for the 32-bit accumulator will not be enough. A short can change the circuit into a sequential one, and because of that you will need 2^65 input patterns.

Paradox: mathematicians calculated that exhaustive testing of the Intel 8080 would need 37 (!) years; the manufacturer did it in 10 seconds. The majority of functions will never be activated during the lifetime of the system.

"Time can be your best friend or your worst enemy." (Ray Charles)

(Figure.) A bridging fault between two lines of the circuit Y = F(x1, x2, x3) introduces a state q, turning the function into Y = F(x1, x2, x3, q).
Introduction: the Problem is Money?
(Figure.) The cost of quality is the sum of the cost of testing, which rises towards 100% fault coverage, and the cost of faults, which falls as coverage rises; the optimum test/quality point lies between 0% and 100%. The test coverage function grows steeply at first and then saturates over time.

"How to succeed? Try too hard! How to fail? Try too hard!" (From American wisdom)

Conclusion: "The problem of testing can only be contained, not solved." (T. Williams)
Introduction: Hierarchy
Paradox: to generate a test for a block buried in a system (a 16-bit counter feeding a sea of gates, which needs a sequence of 2^16 bits), the computer needed 2 days and 2 nights. An engineer did it by hand in 15 minutes. So, why computers?

"The best place to start is with a good title. Then build a song around it." (Wisdom of country music)
Introduction: Complexity vs. Quality
Problems:
• Traditional low-level test generation and fault simulation methods and tools for digital systems have lost their importance for complexity reasons
• The traditional stuck-at fault (SAF) model does not guarantee test quality for deep-submicron technologies
• How to improve test quality at the increasing complexity of today's systems?

Two main trends:
– defect-oriented test, and
– high-level modelling
• Both trends are driven by the increasing complexity of systems based on deep-submicron technologies
Introduction: A Compromise
• The complexity problem in testing digital systems is handled by raising the abstraction level from the gate level to the register-transfer level (RTL), the instruction set architecture (ISA) level, or behavioral levels
– But this moves us even further away from the real life of defects (!)
• To handle defects in circuits implemented in deep-submicron technologies, new defect-oriented fault models and defect-oriented test methods should be used
– But this increases the complexity even more (!)
• A promising compromise and solution is:

to combine the hierarchical approach with defect orientation
Introduction: Testability
Amusing testability:

Theorem: you can test an arbitrary digital system with only 3 test patterns if you design it appropriately.

Proof (sketch, figures): System → FSM → Scan-Path → combinational circuit → NAND implementation. With input sequences such as 011 and 101, every NAND gate sees the three patterns 01, 10 and 11 that it needs.
Introduction: Quality Policy
(Diagram.) Quality policy relates yield, testing and defect level:

Y = (1 − P)^n — probability of producing a good product (yield), where
P – probability of a defect
n – number of possible defects
Pa – probability of accepting a bad product; design for testability and testing reduce the defect level (DL)
Introduction: Defect Level
DL = 1 − Y^(1−T), where Y is the yield and T the fault coverage of the test.

DL (%) for some values of Y and T (rounded values from the slide):

           T = 10%   T = 50%   T = 90%
Y = 90%:      9         5         1
Y = 50%:     45        25         5
Y = 10%:     81        45         9

DL falls as T grows. Paradox: improving testability lowers the defect level even at constant yield.
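The two quality relations of these slides are easy to evaluate numerically; a minimal sketch in Python (the function names are illustrative, not from the course material):

```python
def yield_fraction(p, n):
    """Yield Y = (1 - P)^n: probability that none of the n possible
    defect sites of a product is defective (defect probability P each)."""
    return (1.0 - p) ** n

def defect_level(y, t):
    """Defect level DL = 1 - Y^(1 - T) for yield Y and fault
    coverage T, both given as fractions in [0, 1]."""
    return 1.0 - y ** (1.0 - t)
```

For example, with Y = 90% and T = 90%, DL comes out near 1%, matching the table above.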
Introduction: History of Test
Historical Test:
1960s: Racks. Functional testing. Belle époque for optimization…
1970s: Boards. Structural testing. Complexities, automata…
1980s: VLSI. Design for testability (DFT). Interactivity vs. testability? Hierarchy: top-down, bottom-up, yo-yo…
1990s: VLSI. Self-test, fault tolerance. Testability, Boundary-Scan standard
2000s: Systems-on-Chip (SoC). Built-in Self-Test (BIST)
The years teach much which the days never know (Ralph Waldo Emerson)
Introduction: Test Tools
(Diagram.) From the system model, test generation produces the test, and fault simulation produces the fault dictionary. The test experiment applies the test to the system; fault diagnosis of the test result gives either a go/no-go decision or a located defect. With BIST, these test tools move onto the chip.
Introduction: Course Map
(Diagram.) The D&T field spans Test and Design. Tools: fault modelling, fault simulation, test generation, fault diagnosis, DFT, BIST. Models: defect level, logic level, high-level system modelling. Theory: Boolean differential analysis, BDDs, DDs.
Overview
1. Introduction
2. Theory: Boolean differential algebra
3. Theory: Decision diagrams
4. Fault modelling
5. Test generation
6. Fault simulation
7. Fault diagnosis
8. Testability measuring
9. Design for testability
10. Built-in Self-Test
Overview: Boolean Differential Algebra
• Boolean derivatives
– Boolean vector derivatives
– Multiple Boolean derivatives
– Boolean derivatives of complex functions
• Overview of applications of Boolean derivatives
• Boolean derivatives and sequential circuits
• Boolean differentials and fault diagnosis
• Universal test equation
Boolean Derivatives
Traditional algebra measures speed: dy/dx for y = F(x). Boolean algebra measures change: y, F(X) ∈ {0,1}, x_i, x_k ∈ {0,1}.

∂F(X)/∂x_i = 1 : F(X) will change if x_i changes
∂F(X)/∂x_k = 0 : F(X) will not change if x_k changes
Boolean Derivatives
Boolean function:
y = F(X) = F(x1, x2, …, xn)

Boolean partial derivative:
∂F(X)/∂x_i = F(x1, …, x_i, …, xn) ⊕ F(x1, …, ¬x_i, …, xn)
           = F(x1, …, 0, …, xn) ⊕ F(x1, …, 1, …, xn)
Boolean Derivatives
Useful properties of Boolean derivatives:

∂(¬F(X))/∂x_i = ∂F(X)/∂x_i
∂F(X)/∂(¬x_i) = ∂F(X)/∂x_i

∂F(X)/∂x_i = 0 — if F(X) is independent of x_i
∂F(X)/∂x_i = 1 — if F(X) always depends on x_i

Test generation algorithm: solve the differential equation

∂F(X)/∂x_i = 1
Boolean Derivatives
Useful properties of Boolean derivatives (cont.): these properties allow us to simplify the Boolean differential equation to be solved when generating a test pattern for a fault at x_i.

If F(X) is independent of x_i:
∂(F(X) ∧ G(X))/∂x_i = F(X) ∧ ∂G(X)/∂x_i
∂(F(X) ∨ G(X))/∂x_i = ¬F(X) ∧ ∂G(X)/∂x_i
Technical University Tallinn, ESTONIACopyright 2000-2003 by Raimund Ubar
30
Boolean Derivatives

Worked example (the formulas are garbled in the source): a function y = F(x1, …, x6), given as a nested AND-OR expression, is differentiated with respect to x5. The derivative ∂y/∂x5 is transformed step by step using the properties above — constant and x5-independent factors are eliminated — until the calculation ends with a solvable condition of the form ∂y/∂x5 = 1, whose solutions are the test patterns for a fault on x5.
Boolean Vector Derivatives
Vector derivatives describe the simultaneous change of several variables (multiple faults):

∂F(X)/∂(x_i, x_j) = F(x1, …, x_i, …, x_j, …, xn) ⊕ F(x1, …, ¬x_i, …, ¬x_j, …, xn)

Example (circuit figure): y = F(x1, x2, x3, x4) = x1x2 ∨ x3x4

∂F(X)/∂(x2, x3) = (x2 ⊕ x3)(x1 ⊕ x4) ∨ ¬(x2 ⊕ x3)(x1 ∨ x4)
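A vector derivative is again directly computable: flip all the listed variables at once and XOR with the original value. A sketch (function and helper names are illustrative):

```python
def vector_derivative(f, idxs, x):
    """dF(X)/d(x_i, x_j, ...): XOR of F at x and F at the point where
    all listed variables are complemented simultaneously."""
    xf = list(x)
    for i in idxs:
        xf[i] ^= 1
    return f(tuple(x)) ^ f(tuple(xf))

# y = x1 x2 v x3 x4 (variables indexed 0..3)
f = lambda x: (x[0] & x[1]) | (x[2] & x[3])
```

Checking all 16 points against the closed form (x2 ⊕ x3)(x1 ⊕ x4) ∨ ¬(x2 ⊕ x3)(x1 ∨ x4) confirms the expression given for ∂F/∂(x2, x3).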
Boolean Vector Derivatives
Interpretation of the components of the vector derivative ∂F(X)/∂(x2, x3) = 1 for y = x1x2 ∨ x3x4 (figures): a solution with x1 = x4 = 1 activates two paths from the fault site to the output simultaneously; a solution with x1 ⊕ x4 = 1 activates a single path.
Boolean Vector Derivatives
Calculation of vector derivatives with Karnaugh maps (maps garbled in the source): for y = x1x2 ∨ x3x4, the map of the function as given and the map with x2, x3 complemented are XORed cell by cell; the cells where the result is 1 are the solutions of ∂F(X)/∂(x2, x3) = 1.
Multiple Boolean Derivatives
For y = x1x2 ∨ x3x4, a test for a fault on x3 is a solution of

∂y/∂x3 = x4(¬x1 ∨ ¬x2) = 1

Whether a simultaneous fault on x2 can mask the fault on x3 is decided by the multiple (second-order) derivative:

∂(∂y/∂x3)/∂x2 = x1x4

(Figures.) If the test is chosen with x1 = 0, the fault in x2 cannot mask the fault in x3 (no fault masking); with x1 = x4 = 1, fault masking is possible.
Derivatives for Complex Functions
Boolean derivative for a complex (composed) function F = F_k(F_j(X), X):

∂F_k/∂x_i = (∂F_k/∂F_j) ∧ (∂F_j/∂x_i)

Example (figure garbled in the source): for a circuit in which x4 reaches the output y through the internal variable x3, ∂y/∂x4 is computed as (∂y/∂x3)(∂x3/∂x4), under an additional side condition (on the slide, a constraint on the branch through x2) that blocks the parallel reconvergent path.
Overview of Applications of Boolean Derivatives

• Fault simulation — calculate the value of ∂F(X)/∂x_i
• Test generation
– single faults: find the solution of ∂F(X)/∂x_i = 1
– multiple faults: find the solution of ∂F(X)/∂(x_i, x_j) = 1
– decompositional approach (complex functions): ∂F_k/∂x_i = (∂F_k/∂F_j)(∂F_j/∂x_i)
• Fault masking analysis: ∂y/∂x_i = 1 together with ∂²y/∂x_i∂x_j = 0
• Defect modelling — finding the logic constraints for defects: ∂F(X)/∂x_i = 1
Bool. Derivatives for Sequential Circuits
Boolean derivatives for the state transfer and output functions of an FSM:

y(t) = λ(x(t), q(t)), q(t+1) = δ(x(t), q(t))

The derivatives ∂y(t)/∂x(t), ∂y(t)/∂q(t), ∂q(t)/∂q(t−1) and ∂q(t)/∂x(t−1) describe how a change propagates to the output directly from the input, from the state, through the state register, and from an earlier input.
Bool. Derivatives for Sequential Circuits
Boolean derivatives for the JK flip-flop:

Q(t) = J(t)¬Q(t−1) ∨ ¬K(t)Q(t−1)

∂Q(t)/∂J(t) = ¬Q(t−1)
∂Q(t)/∂K(t) = Q(t−1)
— the erroneous signal will propagate from the inputs to the output

∂Q(t)/∂Q(t−1) = J(t)K(t) ∨ ¬J(t)¬K(t)
— the erroneous signal was stored in the previous clock cycle
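These flip-flop derivatives can be verified by exhaustive enumeration against the XOR-of-cofactors definition; a small self-checking sketch, assuming the next-state equation Q(t) = J ¬Q(t−1) ∨ ¬K Q(t−1):

```python
def jk_next(j, k, q_prev):
    """JK flip-flop next state: Q(t) = J*NOT(Q(t-1)) OR NOT(K)*Q(t-1)."""
    return (j & (1 - q_prev)) | ((1 - k) & q_prev)

# Check all three derivative formulas over every input combination
for j in (0, 1):
    for k in (0, 1):
        for q in (0, 1):
            # dQ/dJ = NOT Q(t-1)
            assert jk_next(0, k, q) ^ jk_next(1, k, q) == 1 - q
            # dQ/dK = Q(t-1)
            assert jk_next(j, 0, q) ^ jk_next(j, 1, q) == q
            # dQ/dQ(t-1) = J K  v  NOT(J) NOT(K)
            assert jk_next(j, k, 0) ^ jk_next(j, k, 1) == (j & k) | ((1 - j) & (1 - k))
```

The loop passing for all 8 combinations confirms the three formulas on the slide.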
Boolean Differentials
dx – fault variable, dx ∈ {0,1}; dx = 1 if the value of x has changed because of a fault.

Partial Boolean differential:
d_{x_i}F = (∂F(X)/∂x_i) ∧ dx_i

Full Boolean differential:
dF = F(X) ⊕ F(X ⊕ dX)
   = F(x1, …, xn) ⊕ F(x1 ⊕ dx1, …, xn ⊕ dxn)
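The full Boolean differential compares the fault-free output with the output after each x_i has been flipped wherever dx_i = 1; a minimal sketch (names illustrative):

```python
def full_differential(f, x, dx):
    """dF = F(x1,...,xn) XOR F(x1+dx1,...,xn+dxn), '+' meaning XOR:
    dF = 1 iff the fault vector dx changes the output at input x."""
    faulty = tuple(xi ^ di for xi, di in zip(x, dx))
    return f(x) ^ f(faulty)

# y = x1(x2 v x3): a single fault on x1 at x = (1,1,0) flips y
f = lambda x: x[0] & (x[1] | x[2])
```

The same function is reused below for both directions of the universal test equation.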
Boolean Differentials and Fault Diagnosis
(Figures: a three-gate circuit implementing y = ¬x1(x2 ∨ x3), observed in two test experiments; dx_i^v denotes the fault variable of line x_i when the line carries the value v.)

Model of the experiment:
dy = y ⊕ ¬(x1 ⊕ dx1)((x2 ⊕ dx2) ∨ (x3 ⊕ dx3))

Correct output signal at x1 = 0, x2 = 1, x3 = 1 (dy = 0):
dy = 1 ⊕ ¬dx1(¬dx2 ∨ ¬dx3) = 0, i.e. ¬dx1^0(¬dx2^1 ∨ ¬dx3^1) = 1

Erroneous output signal at x1 = 0, x2 = 0, x3 = 0 (dy = 1):
dy = ¬dx1(dx2 ∨ dx3) = 1, i.e. ¬dx1^0(dx2^0 ∨ dx3^0) = 1
Boolean Differentials and Fault Diagnosis
The two experiments are combined using the rule dx_k^0 ∧ dx_k^1 = 0 (a stuck-at fault cannot flip a line in both directions):

¬dx1^0(¬dx2^1 ∨ ¬dx3^1) = 1 and ¬dx1^0(dx2^0 ∨ dx3^0) = 1

Diagnosis: ¬dx1^0 ∧ dx2^0 ∧ ¬dx3^0 = 1, i.e.
– the line x3 works correctly,
– there is a fault x2 ≡ 1,
– the fault x1 ≡ 1 is missing.
Boolean Differentials and Fault Diagnosis

Fault diagnosis and test generation as direct and reverse mathematical tasks:

dy = F(x1, …, xn) ⊕ F(x1 ⊕ dx1, …, xn ⊕ dxn), i.e. dy = F(X, dX)

Direct task – test generation: dX and dy = 1 given, X = ?
Reverse task – fault diagnosis: X and dy given, dX = ?
Test Tasks and Test Tools
(Diagram, as before.) From the system model, test generation produces the test and fault simulation the fault dictionary; the test experiment applies the test to the system, and fault diagnosis of the test result gives a go/no-go decision or a located defect.
Universal Test Equation
Fault diagnosis and test generation are direct and reverse mathematical tasks over the model of the test experiment:

dy = F(x1, …, xn) ⊕ F(x1 ⊕ dx1, …, xn ⊕ dxn)

F(X, dX) = dy, where
F – the given system with its possible faults
X – test vector
dX – fault vector (fault)
dy – result of the test experiment

Direct task – test generation: dX and dy = 1 given, X = ?
Reverse task – fault diagnosis: X and dy given, dX = ?
Fault simulation is a special case of fault diagnosis.
Basics of Theory for Test and Diagnostics
Two basic tasks:
1. Which test patterns are needed to detect a fault (or all faults)?
2. Which faults are detected by a given test (or by all tests)?

(Figure.) The theory supports all levels of the design hierarchy: Boolean differential algebra and BDDs at the gate level; decision diagrams (DDs) for systems, ALUs and multipliers at the higher levels.
Overview
1. Introduction
2. Theory: Boolean differential algebra
3. Theory: Decision diagrams
4. Fault modelling
5. Test generation
6. Fault simulation
7. Fault diagnosis
8. Testability measuring
9. Design for testability
10. Built-in Self-Test
Overview: Decision Diagrams
• Binary Decision Diagrams (BDDs)
• Structurally Synthesized BDDs (SSBDDs)
• High-Level Decision Diagrams (DDs)
• DDs for Finite State Machines
• DDs for digital systems
• Vector DDs
• DDs for microprocessors
• DD synthesis from behavioral descriptions
• Example of DD synthesis from a VHDL description
Binary Decision Diagrams
(Figure: a functional BDD of a seven-variable function y = F(x1, …, x7); the exact expression is garbled in the source.)

Simulation on a BDD: for an input vector, e.g. x1…x7 = 0110100, a single path from the root node to a terminal 0 or 1 is traced; here the path ends in terminal 1, so y = 1.

A Boolean derivative ∂y/∂x_i can be read off the same diagram: it equals 1 exactly when complementing x_i redirects the traced path to the opposite terminal.
Binary Decision Diagrams
Functional synthesis of BDDs — Shannon's theorem:

y = F(X) = x_k ∧ F(X)|_{x_k=1} ∨ ¬x_k ∧ F(X)|_{x_k=0}

Using the theorem for BDD synthesis (example, partly garbled in the source): a four-variable function y = F(x1, x2, x3, x4) is expanded around x1; each cofactor is then expanded further (around x2, then x3, x4) until only the constants 0 and 1 remain, giving the BDD node by node.
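Shannon's expansion y = x_k F|_{x_k=1} ∨ ¬x_k F|_{x_k=0} is exactly the recursive rule behind BDD construction. A sketch that builds a decision tree this way and simulates it by path tracing (a real BDD package would additionally share identical subgraphs; names are illustrative):

```python
def build_bdd(f, vars, assign=None):
    """Build a decision tree by Shannon expansion: each node is a tuple
    (var, low, high); leaves are the constants 0/1. Nodes whose two
    cofactors are equal are skipped (the basic BDD reduction rule)."""
    assign = assign or {}
    if not vars:
        return f(assign)                      # all variables fixed
    v, rest = vars[0], vars[1:]
    low = build_bdd(f, rest, {**assign, v: 0})   # cofactor F|v=0
    high = build_bdd(f, rest, {**assign, v: 1})  # cofactor F|v=1
    return low if low == high else (v, low, high)

def bdd_eval(node, x):
    """Trace one path from the root to a terminal (BDD simulation)."""
    while isinstance(node, tuple):
        v, low, high = node
        node = high if x[v] else low
    return node

f = lambda a: (a['x1'] & a['x2']) | (a['x3'] & a['x4'])
bdd = build_bdd(f, ['x1', 'x2', 'x3', 'x4'])
```

Evaluating the diagram must agree with the function itself on every input vector.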
Binary Decision Diagrams
Elementary BDDs (figures): an AND gate y = x1x2x3 gives a single chain of nodes x1, x2, x3 leading to terminal 1; an OR gate a chain in which each node exits directly to terminal 1; a NOR gate the same chain with inverted exits; an adder one BDD per output function (e.g. the sum y = x1 ⊕ x2 ⊕ x3).
Binary Decision Diagrams
Elementary BDDs for storage elements (figures): the D flip-flop (q = cD ∨ ¬c q′), the RS flip-flop (q = c(S ∨ ¬R q′) ∨ ¬c q′, with an extra exit to U for the forbidden input combination), and the JK flip-flop; q′ is the previous state and U denotes the unknown value.
Building a SSBDD for a Circuit
Given circuit (figure): an AND gate y = a ∧ b fed by two OR gates, a = x1 ∨ x21 and b = x22 ∨ x3, where x21 and x22 are the fanout branches of input x2.

Superposition of Boolean functions: y = a ∧ b = (x1 ∨ x21)(x22 ∨ x3)

Superposition of DDs: starting from the DD-library element for y = a ∧ b, node a is replaced by the DD of a and node b by the DD of b, giving the structurally synthesized BDD (SSBDD) with nodes x1, x21, x22, x3. Compare to a functional BDD: in the SSBDD every node represents one signal path of the circuit.
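The superposition result can be written down directly as a small graph and simulated by path tracing; a sketch for an SSBDD of y = (x1 ∨ x21)(x22 ∨ x3), with one node per signal path (the dictionary encoding is an illustrative assumption, not the course's notation):

```python
# SSBDD for y = (x1 v x21)(x22 v x3); x21 and x22 are the fanout
# branches of input x2 and carry equal values in the fault-free circuit.
# Each entry: node -> (successor on value 0, successor on value 1);
# the integers 0 and 1 are the terminal nodes.
SSBDD = {
    'x1':  ('x21', 'x22'),
    'x21': (0, 'x22'),
    'x22': ('x3', 1),
    'x3':  (0, 1),
}

def ssbdd_simulate(graph, root, values):
    """Trace the single activated path from the root to a terminal."""
    node = root
    while node not in (0, 1):
        node = graph[node][values[node]]
    return node

def simulate_circuit(x1, x2, x3):
    """Fault-free simulation: both fanout branches of x2 get x2."""
    return ssbdd_simulate(SSBDD, 'x1',
                          {'x1': x1, 'x21': x2, 'x22': x2, 'x3': x3})
```

Giving x21 and x22 different values in the `values` dictionary models a fault on one fanout branch only — the property that makes SSBDDs attractive for structural fault reasoning.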
Representing a Circuit by an SSBDD

(Figure: a seven-gate NAND circuit with inputs 1…5, a fanout stem 7 with branches 71, 72, 73, internal lines a…e, and output y; the fanout-free region feeding y is taken as a macro.)

By repeated superposition, the macro is compacted into a single structurally synthesized BDD with nodes 1, 2, 5, 6, 71, 72, 73 (the intermediate formulas are garbled in the source).

To each node of the SSBDD a signal path in the circuit corresponds.
High-Level Decision Diagrams
(Figure: a datapath with registers R1, R2, input IN, an adder and a multiplier, and multiplexers M1, M2, M3 controlled by y1…y4.)

Superposition of high-level DDs gives a single DD for R2: depending on the values of the control variables y1…y4, the terminal nodes are R1 + R2, IN + R2, R1 * R2, IN * R2, IN, R1, or R2 (hold).

Instead of simulating all the components in the circuit, only a single path in the DD has to be traced.
High-Level DDs for Finite State Machines
(Figures.) The state transition diagram of an FSM (states 1…6, transitions labelled with inputs x1, x2, outputs 0/1, reset Res) is converted into a DD: the nonterminal nodes test Res, the previous state q′ and the inputs x1, x2; the terminal nodes hold the pairs q.y (next state and output).
High-Level DDs for Digital Systems
(Figures: a digital system consisting of a control path — an FSM with state q and condition input COND — and a data path with registers A, B, C, memory M, and multiplexers MUX1, MUX2.)

The behavior is captured by one DD per variable: e.g. the DD for A tests q and the condition variables xA, xC and selects among A (hold), B + C, A + 1, A + B + C, and C + B; similar DDs describe B, C and the state variable q.
High-Level DDs for Digital Systems
(Figures.) The same digital system shown as a behavioral flowchart (states s0…s5): starting from Begin, the conditions xA, xB, xC select among the operations A = B + C, A = A + 1, B = B + C, C = A + B, A = C + B and A = A + B + C before reaching END. The register DDs of the previous slide are synthesized from exactly this description.
High-Level Vector Decision Diagrams
(Figures.) The system of 4 DDs for A, B, C and q can be joined into a single vector DD for the concatenated variable M = A.B.C.q: the nonterminal nodes test q, xA, xB, xC, and each terminal updates the relevant components of M (e.g. A′ = B′ + C′ together with the next state). Working with one vector DD instead of several scalar DDs again reduces simulation to tracing a single path.
Decision Diagrams for Microprocessors
High-level DDs for a microprocessor (example). Instruction set:

I1: MVI A,D — A ← IN
I2: MOV R,A — R ← A
I3: MOV M,R — OUT ← R
I4: MOV M,A — OUT ← A
I5: MOV R,M — R ← IN
I6: MOV A,M — A ← IN
I7: ADD R — A ← A + R
I8: ORA R — A ← A ∨ R
I9: ANA R — A ← A ∧ R
I10: CMA — A ← ¬A

DD-model of the microprocessor (one DD per variable, each branching on the instruction code I):
A: I ∈ {1,6} → IN; I ∈ {2,3,4,5} → A; 7 → A + R; 8 → A ∨ R; 9 → A ∧ R; 10 → ¬A
R: I = 2 → A; I = 5 → IN; I ∈ {1,3,4,6–10} → R
OUT: I = 3 → R; I = 4 → A
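The DD model for each register is essentially a dispatch on the instruction code; a sketch of a DD-driven simulator for this instruction set (the 8-bit wrap-around and the state encoding are illustrative assumptions):

```python
MASK = 0xFF  # assume an 8-bit data path (illustrative)

def step(instr, state, inp=0):
    """Execute one instruction on the DD model: registers A, R and the
    port OUT are updated per the decision diagrams of the slide."""
    a, r, out = state['A'], state['R'], state['OUT']
    # DD for A: branch on the instruction code I
    if instr in (1, 6):   a = inp                 # MVI A,D / MOV A,M
    elif instr == 7:      a = (a + r) & MASK      # ADD R
    elif instr == 8:      a = a | r               # ORA R
    elif instr == 9:      a = a & r               # ANA R
    elif instr == 10:     a = ~a & MASK           # CMA
    # DD for R
    if instr == 2:        r = state['A']          # MOV R,A
    elif instr == 5:      r = inp                 # MOV R,M
    # DD for OUT
    if instr == 3:        out = state['R']        # MOV M,R
    elif instr == 4:      out = state['A']        # MOV M,A
    return {'A': a, 'R': r, 'OUT': out}
```

Each call traces exactly one terminal per DD, mirroring the single-path simulation property of decision diagrams.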
Decision Diagrams for Microprocessors
High-level DD-based structure of the microprocessor (example, figure): the DD-model induces a structural view with blocks A, R and OUT connected to IN and controlled by the instruction code I — one structural unit per DD.
Vector DDs for Microprocessors

(Figures, heavily garbled in the source.) The output behavior of the microprocessor is represented by vector DDs over time: the nodes test the instruction bytes I1, I2, I3 and the time steps t, and the terminals describe the address bus AB and data bus DB values (PC + 1, PC + 2, INP, INP + 1, …). The example traces the instruction SHLD (I1.I2.I3 = 0.4.2), which stores L at the address (DB(t=3).DB(t=2)) and H at ((DB(t=3).DB(t=2)) + 1).
DD Synthesis from Behavioral Descriptions
Procedural description of a microprocessor:

BEGIN
Memory state: M
Processor state: PC, AC, AX
Internal state: TMP
Instruction format: IR = OP.A.F0.F1.F2
Execution process EXEC:
BEGIN DECODE OP (
0: AC ← AC + M[A]
1: M[A] ← AC, AC ← 0
2: M[A] ← M[A] + 1, IF M[A] = 0 THEN PC ← PC + 1
3: PC ← A
…
7: IF F0 THEN AC ← AC + 1
   IF F1 THEN IF AC = 0 THEN PC ← PC + 1
   IF F2 THEN (TMP ← AC, AC ← AX, AX ← TMP) )
END
END
DD Synthesis from Behavioral Descriptions
Symbolic execution tree (figure): from Start, the tree branches on OP = 0, 1, 2, 3, …, 7; the OP = 2 branch splits further on M[A] = 0 / M[A] ≠ 0, and the OP = 7 branch on F0, F1, AC = 0 and F2, with the operations AC = AC + M[A]; M[A] = AC, AC = 0; M[A] = M[A] + 1; PC = PC + 1; PC = A; AC = AC + 1; and AC = AX, AX = AC at the nodes.
DD Synthesis from Behavioral Descriptions
Generation of a nonprocedural description via symbolic execution — terminal contexts:

No | Input assertions | Output assertions
1 | OP = 0 | AC = AC + M[A]
2 | OP = 1 | M[A] = AC, AC = 0
3 | OP = 2, M[A] + 1 = 0 | M[A] = M[A] + 1, PC = PC + 1
4 | OP = 2, M[A] + 1 ≠ 0 | M[A] = M[A] + 1
5 | OP = 3 | PC = A
6 | OP = 7, F0 = 0, F1 = 0, F2 = 0 | no change
7 | OP = 7, F0 = 0, F1 = 0, F2 = 1 | AC = AX, AX = AC
8 | OP = 7, F0 = 0, F1 = 1, AC = 0, F2 = 1 | AC = AX, AX = AC
9 | OP = 7, F0 = 0, F1 = 1, AC = 0, F2 = 0 | no change
DD Synthesis from Behavioral Descriptions
From the input/output assertions of the previous slide, a decision diagram for AC is synthesized (figure): the root node tests OP (0 → AC + M[A]; 1 → #0; 2, 3 → AC unchanged; 7 → a subtree testing F0 and F2 that selects among AC + 1, AX and AC).
DD Synthesis from VHDL Descriptions
entity rd_pc is
  port ( clk, rst : in bit ;
         rb0 : in bit ;
         enable : in bit ;
         reg_cp : out bit ;
         reg : out bit ;
         outreg : out bit ;
         fin : out bit ) ;
end rd_pc ;

architecture archi_rd_pc of rd_pc is
  type STATETYPE is (state1, state2) ;
  signal state, nstate : STATETYPE ;
  signal enable_in : bit ;
  signal reg_cp_comb : bit ;
begin

  seq: process(clk, rst)
  begin
    if rst='1' then state <= state1 ;
    elsif (clk'event and clk='1') then state <= nstate ;
    end if ;
  end process ;

  process(clk, enable)
  begin
    if clk='1' then enable_in <= enable ; end if ;
  end process ;

  process(clk, reg_cp_comb)
  begin
    if clk='0' then reg_cp <= reg_cp_comb ; end if ;
  end process ;

  comb: process (state, rb0, enable_in)
  begin
    case state is
      when state1 =>
        outreg <= '0' ; fin <= '0' ;
        if (enable_in='0') then
          nstate <= state1 ; reg <= '1' ; reg_cp_comb <= '0' ;
        else
          nstate <= state2 ; reg <= '1' ; reg_cp_comb <= '1' ;
        end if ;
      when state2 =>
        if (rb0='1') then
          nstate <= state2 ; reg <= '0' ; reg_cp_comb <= '1' ;
          outreg <= '0' ; fin <= '0' ;
        elsif (enable_in='0') then
          nstate <= state1 ; reg <= '0' ; reg_cp_comb <= '0' ;
          outreg <= '1' ; fin <= '1' ;
        else
          nstate <= state2 ; reg <= '0' ; reg_cp_comb <= '0' ;
          outreg <= '0' ; fin <= '1' ;
        end if ;
    end case ;
  end process ;

end archi_rd_pc ;

VHDL description of four processes representing a simple control unit.
DD Synthesis from VHDL Descriptions
(Figures; the VHDL processes for state and enable_in are those of the previous slide.) DDs for state, enable_in and nstate: the DD for state tests rst (1 → state1) and clk (rising edge → nstate, otherwise hold); the DD for enable_in tests clk (1 → enable, otherwise hold); the DD for nstate tests state′, enable_in and rb0 and ends in the terminals state1 (#1) and state2 (#2). Superposition then merges these DDs into one.
DD Synthesis from VHDL Descriptions

(Figures; the VHDL processes for reg_cp and comb are those shown two slides earlier.) DDs for the total VHDL model: the state DD (testing rst and clk as before) is combined with output DDs in which the four outputs outreg, fin, reg_cp, reg are handled as one vector — the terminals #0011, #0001, #0100, #1100, #0010 give their values for each combination of state, rb0 and enable.
DD Synthesis from VHDL Descriptions
Simulation and fault tracing on the DDs — one simulated test sequence on the DD model of the control unit:

time   1 2 3 4 5 6
rst    1 0 0 0 0 0
enable 0 1 1 1 0 0
rb0    x x 1 0 0 0
state  1 1 2 2 2 1
outreg 0 0 0 0 1 0
fin    0 0 0 1 1 0
reg_cp 0 1 1 0 0 0
reg    1 1 0 0 0 1
Overview
1. Introduction
2. Theory: Boolean differential algebra
3. Theory: Decision diagrams
4. Fault modelling
5. Test generation
6. Fault simulation
7. Fault diagnosis
8. Testability measuring
9. Design for testability
10. Built-in Self-Test
Overview: Fault Modelling
• Faults, errors and defects
• Stuck-at faults (SAF)
• Fault equivalence and fault dominance
• Redundant faults
• Transistor-level physical defects
• Mapping transistor defects to the logic level
• Fault modelling by Boolean differential equations
• Functional fault modelling
• Faults and the test generation hierarchy
• High-level fault modelling
• Fault modelling with DDs
Fault and defect modeling
Defects, errors and faults:
• An instance of incorrect operation of the system being tested is referred to as an error
• The causes of observed errors may be design errors or physical faults – defects
• Physical defects do not allow a direct mathematical treatment of testing and diagnosis
• The solution is to deal with fault models

(Figure: a defect in a component of a system manifests itself as a fault, which may cause an error.)
Fault and defect modeling
Why logic fault models?
• the complexity of simulation reduces (many physical faults may be modeled by the same logic fault)
• one logic fault model is applicable to many technologies
• logic fault tests may be used for physical faults whose effect is not completely understood
• they give the possibility to move from the lower physical level to the higher logic level

Stuck-at fault model (figures): two different defects – a broken line and a bridge to ground (0 V) at an input of an OR gate – are both covered by the single model stuck-at-0.
Fault and defect modeling

Fault models are explicit or implicit:
 explicit faults may be enumerated
 implicit faults are given by some characterizing properties

Fault models are structural or functional:
 structural faults are related to structural models; they modify the interconnections between components
 functional faults are related to functional models; they modify the functions of components

Example (two AND gates and an OR gate implementing y = x1x2 ∨ x2x3, where input x2 fans out to branches x21 and x22):
 Structural faults: line a is broken; short between x2 and x3
 Functional fault: instead of y = x1x2 ∨ x2x3 the circuit implements y = x2x3
Fault and defect modeling

Structural faults
• Structural fault models assume that the components are fault-free and only their interconnections are affected:
 a short is formed by connecting points not intended to be connected
 an open results from the breaking of a connection
• Common structural fault models:
 a line is stuck at a fixed logic value v (v ∈ {0, 1}); examples:
  a short between ground or power and a signal line
  an open on a unidirectional signal line
  any internal fault in the component driving its output that keeps it at a constant value
 bridging faults (shorts between signal lines), with two types, AND and OR bridging faults (depending on the technology)
Gate-Level Faults

[Figure: a broken input of an OR gate behaves as stuck-at-0; a broken input of an AND gate behaves as stuck-at-1. A stem fans out to branches 1, 2 and 3, with possible breaks at three positions along the stem.]

Broken at 1: branches 1, 2, 3 stuck (or stuck stem)
Broken at 2: branches 2, 3 stuck
Broken at 3: branch 3 stuck
Stuck-at Fault Properties

Fault equivalence and fault dominance (3-input NAND gate with inputs A, B, C and output D):

A B C D   Fault class
1 1 1 0   A/0, B/0, C/0, D/1   Equivalence class
0 1 1 1   A/1, D/0
1 0 1 1   B/1, D/0             Dominance classes
1 1 0 1   C/1, D/0

Fault collapsing: equivalent faults are merged into one representative fault, and a dominating fault can be dropped from the fault list, since every test for the dominated fault also detects it.
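The equivalence class in the table can be recovered mechanically: two stuck-at faults are equivalent exactly when they produce the same faulty function. A minimal sketch (the gate and line names follow the slide; the enumeration itself is generic):

```python
from itertools import product

def with_fault(line, val):
    # function of a 3-input NAND with `line` stuck at `val`
    def f(a, b, c):
        ins = {'A': a, 'B': b, 'C': c}
        if line in ins:
            ins[line] = val
        return val if line == 'D' else 1 - (ins['A'] & ins['B'] & ins['C'])
    return f

def table(f):
    # truth table of a 3-input function as a tuple
    return tuple(f(*v) for v in product((0, 1), repeat=3))

# group stuck-at faults by the faulty function they produce
classes = {}
for line in 'ABCD':
    for val in (0, 1):
        classes.setdefault(table(with_fault(line, val)), []).append(f"{line}/{val}")

equivalent = [g for g in classes.values() if len(g) > 1]
print(equivalent)  # [['A/0', 'B/0', 'C/0', 'D/1']]
```

All four faults in the class force the NAND output to a constant 1, which is why a single pattern (111) tests them all.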
Fault Redundancy

Redundant gates (bad design):

Implemented function: y = x1x2x4 ∨ x1(x3 ∨ x4)

Internal signal dependencies: since x1x2x4 implies x1(x3 ∨ x4), the output does not depend on x2 at all:

  ∂y/∂x2 = 0

The pattern needed to exercise x2 is impossible to apply: the fault turning the OR gate into an XOR is not testable, and the faults at x2 are not testable.

Optimized function: y = x1(x3 ∨ x4)
Fault Redundancy

Hazard control circuitry:

[Figure: an AND-OR circuit in which a redundant AND gate has been added to mask a static hazard. The fault "redundant gate output stuck-at-0" is not testable.]

Error control circuitry:

[Figure: a decoder with an error signal E. E = 1 if the decoder is fault-free, so the fault "E stuck-at-0" is not testable as long as the decoder works.]
Transistor Level Faults

Transistor-level defects (CMOS gate):
- stuck-at-0, stuck-at-1
- broken line (change of the function)
- bridging, short (change of the function)
- stuck-open (a new state: the output becomes a memory node)
- stuck-on, stuck-off (change of the function)

The SAF model is not able to cover all the transistor-level defects.

How can transistor defects be modelled?
Transistor Level Stuck-on Faults

[Figure: CMOS NOR gate with the p-transistor driven by x1 stuck on.]

NOR gate with the stuck-on defect:

x1 x2   y   yd
0  0    1   1
0  1    0   0
1  0    0   VY / IDDQ
1  1    0   0

For the input "10" both a pull-up and a pull-down path conduct, so the output settles at an intermediate voltage determined by the resistance divider of the conducting networks (RP pull-up, RN pull-down):

  VY = VDD · RN / (RN + RP)

The defect can therefore be detected by measuring the output voltage VY or the quiescent supply current IDDQ.
Transistor Level Stuck-off Faults

[Figure: CMOS NOR gate with the n-transistor driven by x1 stuck off (open).]

NOR gate with the stuck-off defect:

x1 x2   y   yd
0  0    1   1
0  1    0   0
1  0    0   Y'
1  1    0   0

For the input "10" there is no conducting path from VDD to VSS: the output floats and keeps its previous value Y'. A test sequence is needed: 00, 10 (the first pattern sets the output to 1, the second exposes the memory effect).
Bridging Faults

Wired AND/OR model:

          Fault-free   W-AND       W-OR
x1 x2     x1 x2        x'1 x'2     x'1 x'2
0  0      0  0         0   0       0   0
0  1      0  1         0   0       1   1
1  0      1  0         0   0       1   1
1  1      1  1         1   1       1   1

[Figure: the bridged lines x1 and x2 are modelled by an AND gate (W-AND) or an OR gate (W-OR) driving both x'1 and x'2.]
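The wired-AND/OR table translates directly into a sketch: both shorted lines read back the AND (or OR) of the two driven values.

```python
def wired_and(x1, x2):
    # AND-type bridge: the 0-driver wins, both lines see the AND
    v = x1 & x2
    return v, v

def wired_or(x1, x2):
    # OR-type bridge: the 1-driver wins, both lines see the OR
    v = x1 | x2
    return v, v

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, wired_and(x1, x2), wired_or(x1, x2))
```

Note that the two models agree on 00 and 11; only the patterns 01 and 10, where the drivers conflict, can distinguish an AND bridge from an OR bridge.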
Bridging Faults

Dominant bridging model:

          Fault-free   x1 dom x2   x2 dom x1
x1 x2     x1 x2        x'1 x'2     x'1 x'2
0  0      0  0         0   0       0   0
0  1      0  1         0   0       1   1
1  0      1  0         1   1       0   0
1  1      1  1         1   1       1   1

[Figure: in "x1 dom x2" the driver of x1 wins and forces its value on both lines; in "x2 dom x1" the driver of x2 wins.]
Delay Faults

Two models:
- gate delay
- path delay

Test pattern pairs: the first pattern initializes the circuit, and the second pattern sensitizes the fault.

Robust delay test: when the path under test is faulty and the test pair is applied, the fault is detected independently of the delays along the other paths.

[Figure: two AND-OR circuits with transitions applied to x1, x2, x3. In the first, the delay fault is activated but not detected, because a transition along a side path masks its effect at the output y. In the second, the transitions propagate along the tested path only, giving a robust delay test.]
Mapping Transistor Faults to Logic Level

[Figure: CMOS complex gate with inputs x1 … x5 and a short defect between two internal nodes.]

A transistor fault causes a change in the logic function that is not representable by the SAF model.

Defect variable d:
 d = 0 - the defect is missing
 d = 1 - the defect is present

Function: y = x1x2 ∨ x3x4x5
Faulty function: yd (the function realized by the gate when the short is present)
Generic function with defect: y* = ¬d·y ∨ d·yd

The physical defect is mapped onto the logic level by solving the equation

  ∂y*/∂d = 1
Mapping Transistor Faults to Logic Level

[Figure: the same complex gate with the short, the fault-free function y = x1x2 ∨ x3x4x5 and the faulty function yd.]

Generic function with defect: y* = ¬d·y ∨ d·yd

Test calculation by the Boolean derivative:

  ∂y*/∂d = ∂(¬d·y ∨ d·yd)/∂d = y ⊕ yd = 1

The solutions of this equation are exactly the input patterns that distinguish the gate with the defect from the fault-free one.
Why Boolean Derivatives?

Given:
 y = x1x2 ∨ x3x4x5
 yd - the faulty function
 y* = ¬d·y ∨ d·yd

Direct approach: substitute d = 0 and d = 1 into y* and solve

  ∂y*/∂d = (y*|d=0) ⊕ (y*|d=1) = 1

BD-based approach: using the properties of Boolean derivatives, the equation reduces to the distinguishing function

  y ⊕ yd = 1

and the procedure of solving the equation becomes easier.
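For small gates the distinguishing function can also be solved by brute force. In this sketch y is the function from the slide, while yd is a stand-in for the faulty function (its exact expression is an assumption for illustration, not taken from the slide); the solutions of y ⊕ yd = 1 are exactly the patterns that detect d.

```python
from itertools import product

def y(x):
    # fault-free function from the slide: y = x1x2 v x3x4x5
    x1, x2, x3, x4, x5 = x
    return (x1 & x2) | (x3 & x4 & x5)

def yd(x):
    # hypothetical faulty function standing in for the short
    x1, x2, x3, x4, x5 = x
    return (x1 | x3) & (x2 | (x4 & x5))

# solutions of dy*/dd = y XOR yd = 1 are the patterns detecting the defect
tests = [x for x in product((0, 1), repeat=5) if y(x) ^ yd(x)]
print(len(tests), tests)  # 4 detecting patterns out of 32
```

Enumerating 2^5 points is trivial here; the value of the Boolean-differential machinery is that it yields the same set symbolically, without enumeration.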
Functional Fault vs. Stuck-at Fault

A full 100% stuck-at fault test is not able to detect the short:

Function: y = x1x2 ∨ x3x4x5

No  Full SAF test (x1 x2 x3 x4 x5)   Test for the defect (x1 x2 x3 x4 x5)
1   1 1 1 0 -                        1 0 - 0 1
2   0 - - 1 1                        1 - 0 0 1
3   0 1 1 0 1                        0 1 1 1 0
4   1 0 1 1 0
5   1 1 0 0 -

The full SAF test does not cover any of the patterns able to detect the given transistor defect: the defect has to be described as a functional fault yd.
Defect coverage for 100% Stuck-at Test

          Probabilistic defect        Denumerable defect
          coverage, %                 coverage, %
Circuit   Tmin      Tmax              Tmin      Tmax
C1        66.68     72.01             81.00     83.00
C2        70.99     77.05             84.29     84.76

Results:
• the difference between stuck-at fault and physical defect coverage reduces when the complexity of the circuit increases (C2 is more complex than C1)
• the difference between stuck-at fault and physical defect coverage is higher when the defect probabilities are taken into account, compared to the traditional method where all faults are assumed to have the same probability
Generalization: Functional Fault Model

Component with defect:

[Figure: a component F(x1, x2, …, xn) with output y and a defect d; the logical constraints Wd describe when the defect affects y.]

  y* = F*(x1, x2, …, xn, d) = ¬d·F ∨ d·Fd

where d = 1 if the defect is present.

Constraints calculation:

  Wd : ∂y*/∂d = 1

Fault model: (dy, Wd), or with several constraint alternatives (dy, {Wkd}).
Functional Fault Model Examples

Component with defect: y* is defined as before, and the constraints Wd follow from ∂y*/∂d = 1.

Constraints examples:

N  Fault (defect)            Constraints
1  SAF x ≡ 0                 x = 1
2  SAF x ≡ 1                 x = 0
3  Short between x and z     x = 1, z = 0
4  Exchange of x and z       x = 1, z = 0
5  Delay fault on x          x = 1, x' = 0
Functional Fault Model for Stuck-ON

[Figure: CMOS NOR gate with a stuck-on transistor, as before; for the input "10" both networks conduct and VY = VDD · RN / (RN + RP).]

x1 x2   y   yd
0  0    1   1
0  1    0   0
1  0    0   Z: VY / IDDQ
1  1    0   0

With Z denoting the event that the intermediate output voltage (or the elevated IDDQ) is interpreted as erroneous:

  y* = ¬d(¬x1¬x2) ∨ d(¬x1¬x2 ∨ x1¬x2·Z)

Condition of potentially detecting the fault:

  Wd : ∂y*/∂d = x1·¬x2·Z = 1
Functional Fault Model for Stuck-Open

[Figure: CMOS NOR gate with a stuck-open transistor; no conducting path from VDD to VSS for the input "10", so the output keeps its previous value y'.]

x1 x2   y   yd
0  0    1   1
0  1    0   0
1  0    0   Y'
1  1    0   0

  y* = ¬d(¬x1¬x2) ∨ d(¬x1¬x2 ∨ x1¬x2·y')

  Wd : ∂y*/∂d = x1·¬x2·y' = 1

A test sequence is needed, since y' = 1 must be set by the preceding pattern: 00, 10.

t   x1 x2   y
1   0  0    1
2   1  0    1 (erroneous; the fault-free output is 0)
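The memory effect behind the two-pattern test can be sketched with a small state-holding model (a hypothetical Python model, not the slide's notation):

```python
class NorStuckOpen:
    # NOR gate with the pull-down transistor driven by x1 stuck open;
    # on input (1, 0) neither network conducts: the output floats and
    # keeps its previous value, i.e. the gate gains a memory state
    def __init__(self, init=0):
        self.y = init

    def step(self, x1, x2):
        if (x1, x2) != (1, 0):
            self.y = 1 - (x1 | x2)   # normal NOR behaviour
        return self.y                # on (1, 0): memory effect

def nor(x1, x2):
    # fault-free reference gate
    return 1 - (x1 | x2)

gate = NorStuckOpen()
r1 = gate.step(0, 0)   # initialise: both faulty and good gates output 1
r2 = gate.step(1, 0)   # good gate gives 0; faulty gate keeps the old 1
print(r1, r2, nor(1, 0))  # 1 1 0 -> the sequence 00, 10 detects the defect
```

Applying the pattern 10 alone from an unknown (here 0) state would produce the correct value 0 and the defect would escape, which is exactly why a single-pattern SAF test is insufficient.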
Functional Fault Model

Example: bridging fault between leads xk and xl (wired-AND model):

[Figure: the short d makes x*k = f(xk, xl, d).]

  x*k = ¬d·xk ∨ d·(xk ∧ xl)

  Wd : ∂x*k/∂d = xk·¬xl = 1

The condition means that in order to detect the short between leads xk and xl on the lead xk, we have to assign to xk the value 1 and to xl the value 0.
Functional Fault Model

Sequential constraints: a short between leads xk and xl may change a combinational circuit into a sequential one.

Example: a bridging fault between the output y and an input lead causes a feedback loop.

[Figure: three-gate AND circuit with inputs x1, x2, x3 and output y; the bridge feeds y back into the first gate. Equivalent faulty circuit shown with the feedback made explicit.]

  y* = ¬d·(x1x2x3) ∨ d·(x2x3·(x1 ∨ y'))

  Wd : ∂y*/∂d = ¬x1·x2·x3·y' = 1

t   x1 x2 x3   y
1   1  1  1    1
2   0  1  1    1 (erroneous; the fault-free output is 0)
First Step to Quality

How to improve the test quality at the increasing complexity of systems?

First step to the solution: the functional fault model was introduced as a means for mapping physical defects from the transistor or layout level to the logic level.

[Figure: a bridging fault in a low-level component k is mapped through the constraints WFk and WSk up to the high (system) level and its environment.]
Fault Table: Mapping Defects to Faults

Each row gives a physical defect di (a short between two nodes of the gate), the erroneous function it produces, its probability pi, and marks for the input patterns t0 … t15 that detect it:

No  Fault di  Erroneous function f^di               pi           Detecting patterns (marks)
1   B/C       not((B*C)*(A+D))                      0.010307065  1 1 1 1
2   B/D       not((B*D)*(A+C))                      0.000858922  1 1 1 1
3   B/N9      B*(not(A))                            0.043375564  1 1 1 1 1 1 1
4   B/Q       B*(not(C*D))                          0.007515568  1 1 1 1 1 1 1 1 1
5   B/VDD     not(A+(C*D))                          0.001717844  1 1 1
6   B/VSS     not(C*D)                              0.035645265  1 1 1
7   A/C       not((A*C)*(B+D))                      0.098990767  1 1 1 1
8   A/D       not((A*D)*(B+C))                      0.013098561  1 1 1 1
9   A/N9      A*(not(B))                            0.038651492  1 1 1 1 1 1 1
10  A/Q       A*(not(C*D))                          0.025982392  1 1 1 1 1 1 1 1 1
11  A/VDD     not(B+(C*D))                          0.000214731  1 1 1
12  C/N9      not(A+B+D)+(C*(not((A*B)+D)))         0.020399399  1 1 1 1 1
13  C/Q       C*(not(A*B))                          0.033927421  1 1 1 1 1 1 1 1 1
14  C/VSS     not(A*B)                              0.005153532  1 1 1
15  D/N9      not(A+B+C)+(D*(not((A*B)+C)))         0.007730298  1 1 1 1 1
16  D/Q       D*(not(A*B))                          0.149452437  1 1 1 1 1 1 1 1 1
17  N9/Q      not((A*B)+(B*C*D)+(A*C*D))            0.143654713  1
18  N9/VDD    not((C*D)+(A*B*D)+(A*B*C))            0.253382006  1
19  Q/VDD     SA1 at Q                              0.014386944  1 1 1 1 1 1 1
20  Q/VSS     SA0 at Q                              0.095555078  1 1 1 1 1 1 1 1 1
Probabilistic Defect Analysis

[Bar chart: probabilities of physical defects (arbitrary units) for each functional fault, i.e. for each actual function performed, from "not detectable" through the erroneous functions of the fault table to SA1 and SA0 at Q.]

[Bar chart: effectiveness of the input patterns 0000 … 1111 in detecting real physical defects (arbitrary units).]
Hierarchical Defect-Oriented Test Analysis

[Flow: physical defect analysis at the complex-gate level, gate-level fault analysis and simulation at the module level, high-level fault analysis and simulation at the system level. A defect is first mapped to a functional fault, the functional fault is activated at the gate level, and finally the fault is detected at the high level.]
Faults and Test Generation Hierarchy

[Figure: hierarchy gate, network of gates, module, network of modules, system; a functional approach on one side and a structural approach on the other, with constraints WFk and WSk attached to each component k and its environment.]

Interpretation of WFk:
- as a test on the lower level
- as a functional fault on the higher level
Register Level Fault Models

RTL statement: K: (If T, C) RD ← F(RS1, RS2, …, RSm), → N

Components (variables) of the statement:
 K  - label
 T  - timing condition
 C  - logical condition
 RD - destination register
 RS - source register
 F  - operation (microoperation)
 ←  - data transfer
 →N - jump to the next statement

RT-level faults:
 K → K'      - label faults
 T → T'      - timing faults
 C → C'      - logical condition faults
 RD → R'D    - register decoding faults
 RS → R'S    - data storage faults
 F → F'      - operation decoding faults
 ← → ←'      - data transfer faults
 →N → →N'    - control faults
 (F) → (F)'  - data manipulation faults
Fault Models for High-Level Components

Decoder:
- instead of the correct line, an incorrect one is activated
- in addition to the correct line, an additional line is activated
- no lines are activated

Multiplexer (n inputs, log2 n control lines):
- stuck-at-0 (1) on inputs
- another input selected (instead of, or in addition to, the correct one)
- value, followed by its complement
- value, followed by its complement on a line whose address differs in 1 bit

Memory fault models:
- one or more cells stuck-at-0 (1)
- two or more cells coupled
Fault models and Tests

Dedicated functional fault model for a multiplexer:
– stuck-at-0 (1) on inputs
– another input selected (instead of, or in addition to, the correct one)
– value, followed by its complement
– value, followed by its complement on a line whose address differs in one bit

From this functional fault model, a test description covering all the listed faults is derived.
Combinational Fault Models

Exhaustive combinational fault model:
- exhaustive test patterns
- pseudoexhaustive test patterns
- exhaustive output-line-oriented test patterns
- exhaustive module-oriented test patterns
Fault modeling on SSBDDs

The nodes of an SSBDD represent signal paths through gates.

The two possible faults of a DD node represent all the stuck-at faults along the corresponding path.

[Figure: a macro of seven NAND gates with inputs 1 … 7 (input 7 fanning out to branches 71, 72, 73), internal lines a … e and output y, together with its SSBDD: nodes 6 and 73 on the upper path and 1, 2, 5, 72, 71 on the lower path, ending in the terminal values 0 and 1.]
Fault Modeling on High Level DDs

High-level DDs (RT level):

[Figure: datapath with multiplexers M1 (control y1: R1 / IN), M2 (control y2: R2 / IN / R1), M3 (control y3 selecting + or *), and register R2 loaded under y4; its high-level DD for R2 has nonterminal nodes y4, y3, y1, y2 selecting among R2 (hold), R1 + R2, IN + R2, R1 * R2, IN * R2, IN and R1.]

Terminal nodes represent RTL-statement faults: data storage, data transfer and data manipulation faults.

Nonterminal nodes represent RTL-statement faults: label, timing condition, logical condition, register decoding, operation decoding and control faults.
Fault modeling on DDs

The fault model for DDs is defined as the faulty behaviour of a node m labelled with a variable x(m):
• the output edge for x(m) = i is always activated
• the output edge for x(m) = i is broken
• instead of the given output edge for x(m) = i, another edge or a set of edges is activated

This fault model leads to an exhaustive test of the node.
Overview
1. Introduction
2. Theory: Boolean differential algebra
3. Theory: Decision diagrams
4. Fault modelling
5. Test generation
6. Fault simulation
7. Fault diagnosis
8. Testability measuring
9. Design for testability
10. Built in Self-Test
Overview: Test Generation

• Universal test sets - exhaustive and pseudoexhaustive tests
• Structural gate-level test generation methods
 – Path activation principle
 – Test generation algorithms: D-alg, PODEM, FAN
 – ENF-based test generation
 – Multiple fault testing
 – Defect-oriented test generation
 – Test generation for sequential circuits
• Hierarchical test generation
• DD-based test generation
 – SSBDDs and macro-level test generation
 – RT-level test generation
 – Microprocessor behavior test generation
Functional testing: universal test sets

Universal test sets:
1. Exhaustive test (trivial test)
2. Pseudo-exhaustive test

Properties of exhaustive tests
1. Advantages (concerning the stuck-at fault model):
 - test pattern generation is not needed
 - fault simulation is not needed
 - no need for a fault model
 - the redundancy problem is eliminated
 - single and multiple stuck-at fault coverage is 100%
 - easily generated on-line by hardware
2. Shortcomings:
 - long test length (2^n patterns are needed, where n is the number of inputs)
 - the CMOS stuck-open fault problem remains
Functional testing: universal test sets

Pseudo-exhaustive test sets:
– Output function verification
 • maximal parallel testability
 • partial parallel testability
– Segment function verification

[Figure: a 16-input circuit partitioned into four 4-input cones. Exhaustive test: 2^16 = 65536 patterns. Pseudo-exhaustive, sequential (each cone tested exhaustively in turn): 4 × 16 = 64 patterns. Pseudo-exhaustive, parallel segment function verification can reduce this further, down to 16.]
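The pattern counts quoted above follow from simple arithmetic:

```python
n = 16                        # circuit inputs
k = 4                         # inputs per cone
cones = 4                     # number of output cones

exhaustive = 2 ** n           # every input combination
pseudo_seq = cones * 2 ** k   # each 4-input cone tested exhaustively in turn
print(exhaustive, pseudo_seq) # 65536 64
```

Parallel pseudo-exhaustive testing can push the count down further (here to 16) by overlapping the exhaustive sub-tests of independent cones in the same patterns.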
Functional testing: universal test sets

Output function verification (maximum parallelity) - exhaustive test generation for an n-bit ripple-carry adder:

No  c0 a0 b0   c1 a1 b1   c2 a2 b2   c3 …
1   0  0  0    0  0  0    0  0  0    0
2   0  0  1    0  0  1    0  0  1    0
3   0  1  0    0  1  0    0  1  0    0
4   0  1  1    1  0  0    0  1  1    1
5   1  0  0    0  1  1    1  0  0    0
6   1  0  1    1  0  1    1  0  1    1
7   1  1  0    1  1  0    1  1  0    1
8   1  1  1    1  1  1    1  1  1    1

Only c0 and the ai, bi are applied; the carries ci shown are produced by the adder itself. Each full-adder slice receives all 2^3 = 8 combinations of (ci, ai, bi).

Good news: the bit number n is arbitrary, and the test length is always 8 (!)
Bad news: the method is correct only for the ripple-carry adder.
Testing carry-lookahead adder

General expressions:

  Gi = ai·bi
  Pi = ai ⊕ bi
  Ci+1 = Gi ∨ Pi·Ci
  Ci+1 = Gi ∨ Pi·Gi-1 ∨ Pi·Pi-1·Ci-1 = …

3-bit carry-lookahead adder (bits 1 … 3):

  C3 = G3 ∨ P3G2 ∨ P3P2G1 ∨ P3P2P1C0

where

  C1 = G1 ∨ P1C0 = a1b1 ∨ (a1 ⊕ b1)C0 = f(a1, b1, C0)
  P3P2P1C0 = (a3¬b3 ∨ ¬a3b3)(a2¬b2 ∨ ¬a2b2)(a1¬b1 ∨ ¬a1b1)C0
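The flattened two-level carry expression can be checked against the iterative definition; the helper below is a sketch, with P taken as XOR as in the expansion above (bit indices 1 … 3 map to list indices 0 … 2):

```python
def cla_carries(a, b, c0):
    # generate Gi = ai*bi, propagate Pi = ai XOR bi, C_{i+1} = Gi + Pi*Ci
    G = [x & y for x, y in zip(a, b)]
    P = [x ^ y for x, y in zip(a, b)]
    C = [c0]
    for g, p in zip(G, P):
        C.append(g | (p & C[-1]))
    return C

# flattened form C3 = G3 + P3 G2 + P3 P2 G1 + P3 P2 P1 C0
a, b, c0 = [1, 0, 1], [0, 1, 1], 1
G = [x & y for x, y in zip(a, b)]
P = [x ^ y for x, y in zip(a, b)]
flat = G[2] | (P[2] & G[1]) | (P[2] & P[1] & G[0]) | (P[2] & P[1] & P[0] & c0)
print(cla_carries(a, b, c0)[3], flat)  # 1 1 -> the two forms agree
```

The hardware computes the flattened form in two gate levels, which is where the speed comes from; the wide AND-OR structure it requires is what makes the adder harder to test pseudo-exhaustively.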
Testing carry-lookahead adder

  C3 = G3 ∨ P3G2 ∨ P3P2G1 ∨ P3P2P1C0, with
  P3P2P1C0 = (a3¬b3 ∨ ¬a3b3)(a2¬b2 ∨ ¬a2b2)(a1¬b1 ∨ ¬a1b1)C0

[Table: the test patterns needed to set the carry output to 0 and to 1 through this two-level logic.]

For a 3-bit carry-lookahead adder, at least 9 test patterns are needed for testing only this part of the circuit, i.e. pseudoexhaustive testing with a length independent of n will not work.

The increase in speed implies worse testability.
Functional testing: universal test sets

Output function verification (partial parallelity)

[Figure: four inputs x1 … x4 feed six two-input functions F1(x1, x2), F2(x1, x3), F3(x2, x3), F4(x2, x4), F5(x1, x4), F6(x3, x4); a pattern set is shown in which every input pair runs through all four value combinations.]

Test lengths for this example:
- exhaustive testing: 16
- pseudo-exhaustive, fully parallel: 4
- pseudo-exhaustive, partially parallel: 6
Structural Test Generation

Structural gate-level testing: fault sensitization.

• A fault a/0 is sensitized by the value 1 on the line a
• A test t = 1101 is simulated, both without and with the fault a/0
• The fault is detected, since the output values in the two cases differ
• A path from the faulty line a is sensitized (bold lines) to the primary output

[Figure: AND-OR circuit with inputs A, B, C, D = 1, 1, 0, 1; the values along the sensitized path are shown as good/faulty pairs, 1/0 on line a and at the output.]
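The simulate-twice-and-compare idea can be sketched on a hypothetical two-AND/one-OR circuit (an illustrative stand-in, not the slide's exact example circuit):

```python
def simulate(A, B, C, D, a_stuck0=False):
    # hypothetical circuit: line a is the output of the first AND gate
    a = A & B
    if a_stuck0:
        a = 0              # inject the stuck-at-0 fault on line a
    return a | (C & D)     # OR gate drives the primary output

t = (1, 1, 0, 1)           # A=B=1 sets a=1 (sensitisation); C&D=0 (propagation)
good, bad = simulate(*t), simulate(*t, a_stuck0=True)
print(good, bad)           # 1 0 -> the outputs differ, the fault is detected
```

Note the two conditions the pattern must satisfy at once: it drives the faulty line to the value opposite to the stuck value, and it sets the side inputs so the difference reaches an output (here C&D must be 0, otherwise the OR gate masks the fault).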
Structural Test Generation

Structural gate-level testing: path activation.

[Figure: the seven-NAND macro with the fault site on branch 71 of input 7; the fault effect D is propagated along the bold path to the output y.]

Symbolic fault modeling:
 D = 0 - the fault is missing
 D = 1 - the fault is present

Fault sensitization: x7,1 = D
Fault propagation: x2 = 1, x1 = 1, b = 1, c = 1
Line justification: x7 = D = 0: x3 = 1, x4 = 1; b = 1 and c = 1 are already justified

The activated path corresponds to solving ∂y/∂x7,1 = 1 for the macro function y.
Structural Test Generation

Multiple path fault propagation:

[Figure: a reconvergent-fanout circuit of AND and OR gates with inputs x1 … x4, in which the fault effect D branches into several paths. Activating a single path is not possible; three paths must be activated simultaneously to propagate D to the output y.]
Structural Test Generation Algorithms

D-algorithm (Roth, 1966):

1. Select a fault site and assign D
2. Propagate D along all available paths, using the D-cubes of the gates
3. Backtrack to find the input values needed

Example: propagation D-cubes for a 3-input AND gate (inputs 1, 2, 3, output 4):

1 2 3 4
D 1 1 D
1 D 1 D
1 1 D D
Structural Test Generation Algorithms

D-algorithm: cube calculus for C = NAND(A, B).

Singular cover:
a b c
1 1 0
x 0 1
0 x 1

Propagation D-cubes:
a b c
1 D ¬D
D 1 ¬D
D D ¬D

Primitive D-cubes for the fault c ≡ 0:
a b c
0 x D
x 0 D

Intersection of cubes: let A = (a1, a2, … an) and B = (b1, b2, … bn), where ai, bi ∈ {0, 1, x, D, ¬D}:
1) x ∩ ai = ai
2) if ai ≠ x and bi ≠ x, then ai ∩ bi = ai if bi = ai, and ai ∩ bi = ∅ otherwise
3) A ∩ B = ∅ if ai ∩ bi = ∅ for any i

Propagation of D-cubes in the circuit (gates G1 … G6, signals 1 … 6): starting from the primitive cube for x2, the D-drive propagates D through G4 and then through G6; the consistency operation intersects the result with the cube of G5, giving the final test cube 1 D 0 D 1 D over signals 1 … 6.
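The three intersection rules translate directly into code; here 'Dbar' stands for the complemented fault effect and None plays the role of the empty cube ∅:

```python
def intersect(A, B):
    # componentwise intersection over the alphabet {0, 1, 'x', 'D', 'Dbar'}
    out = []
    for a, b in zip(A, B):
        if a == 'x':
            out.append(b)        # rule 1: x meets anything
        elif b == 'x' or a == b:
            out.append(a)        # rule 2: equal non-x values survive
        else:
            return None          # rule 3: any contradiction empties the cube
    return tuple(out)

c1 = ('D', 1, 'x', 'x')          # cube reached by the D-drive (illustrative)
c2 = ('x', 1, 0, 'D')            # cube required by the consistency operation
print(intersect(c1, c2))         # ('D', 1, 0, 'D')
print(intersect((1,), (0,)))     # None: 1 and 0 contradict
```

The D-algorithm is essentially a backtracking search over such intersections: every propagation or justification step intersects the current cube with a gate cube, and an empty result forces a backtrack.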
Structural Test Generation Algorithms

PODEM algorithm (Goel, 1981):

1. Controllability measures are used during backtracking:
 - decision gate: the "easiest" input is chosen first
 - imply gate: the "most difficult" input is chosen first
2. Backtracking always ends only at the primary inputs
3. D-propagation is guided by observability measures

[Figure: AND gates with required output values 0 and 1, illustrating the choice of which input to set first.]
Structural Test Generation Algorithms

FAN algorithm (Fujiwara, 1983):

1. Special handling of fanouts (by using counters):
 - PODEM: backtracking continues over fanouts up to the primary inputs
 - FAN: backtracking breaks off at a fanout; the value is chosen on the basis of the counter values

2. Heuristics are introduced into D-propagation:
 - PODEM: moves step by step (without predicting problems)
 - FAN: finds the bottlenecks and makes the appropriate decisions at the beginning, before starting D-propagation

[Figure: a fanout point with counters C = 6, C = 3, C = 2 on its branches; the value 1 is chosen.]
Structural Test Generation Algorithms

Test generation by using disjunctive normal forms:

  y = x1x2 ∨ x1x3x4 ∨ ¬x1x2x3

A test for a stuck-at fault on a literal activates its product term through that literal while keeping all the other terms at 0. The left columns give the literal values of each term, the right columns the resulting test pattern:

x1x2   x1x3x4   ¬x1 x2 x3        x1 x2 x3 x4
0 1    0 0 1    1  1  0     y=0  0  1  0  1
1 0    1 0 1    0  0  0     y=0  1  0  0  1
0 0    0 1 1    1  0  1     y=0  0  0  1  1
1 0    1 1 0    0  0  1     y=0  1  0  1  0
1 1    1 1 0    1  1  1     y=1  No test (conflicting literal requirements)
…
Multiple Fault Testing

Multiple fault phenomena:

• The multiple stuck-fault (MSF) model is a straightforward extension of the single stuck-fault (SSF) model, where several lines can be simultaneously stuck
• If n is the number of possible SSF sites, there are 2n possible SSFs, but 3^n - 1 possible MSFs
• If we assume that the multiplicity of faults is no greater than k, then the number of possible MSFs is Σ(i = 1 … k) C(n, i)·2^i
• The number of multiple faults is very big; however, their consideration is needed because of possible fault masking
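Both counts follow from the formula; for unlimited multiplicity (k = n) the sum collapses to 3^n - 1, since every site independently is fault-free, stuck-at-0 or stuck-at-1:

```python
from math import comb

def msf_count(n, k):
    # number of multiple stuck-at faults of multiplicity at most k on n sites:
    # choose the i faulty sites, then pick a stuck value (0 or 1) for each
    return sum(comb(n, i) * 2 ** i for i in range(1, k + 1))

n = 10
print(msf_count(n, 1), msf_count(n, n), 3 ** n - 1)  # 20 59048 59048
```

Even for a modest n = 10 the full MSF list is three orders of magnitude larger than the SSF list, which is why MSF coverage is argued about via masking analysis rather than enumerated directly.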
Multiple Fault Testing

Fault masking:
• Let Tg be a test that detects a fault g
• A fault f functionally masks the fault g iff the multiple fault {f, g} is not detected by any pattern in Tg

Example (two-NAND circuit with lines a, b, c):
 The test 011 is the only test that detects the fault c ≡ 0
 The same test does not detect the multiple fault {c ≡ 0, a ≡ 1}
 Thus a ≡ 1 masks c ≡ 0

• Let T'g ⊆ T be the set of all tests in T that detect a fault g
• A fault f masks the fault g under a test T iff the multiple fault {f, g} is not detected by any test in T'g
Multiple Fault Testing

Circular fault masking:

A multiple fault F may remain undetected by a complete test T for single faults because of circular masking among the faults in F.

Example (two-NAND circuit with inputs a, b, c, d):
• The test T = {1111, 0111, 1110, 1001, 1010, 0101} detects every SSF
• The only test in T that detects the single faults b ≡ 1 and c ≡ 1 is 1001
• However, the multiple fault {b ≡ 1, c ≡ 1} is not detected, because under the test vector 1001, b ≡ 1 masks c ≡ 1 and c ≡ 1 masks b ≡ 1
Multiple Fault Testing

Testing multiple faults by pairs of patterns:

To test a path under the condition of multiple faults, a two-pattern test is needed. As a result, either the faults on the path under test are detected or the masking fault is detected.

Example (two-NAND circuit with inputs a, b, c, d):
 The lower path from b to the output is under test
 A pair of patterns is applied on b
 There is a masking fault c ≡ 1
 1st pattern: the fault on b is masked
 2nd pattern: the fault on c is detected

The possible results at the output:
 01 - no faults detected
 00 - either b ≡ 0 or c ≡ 1 detected
 11 - the fault b ≡ 1 is detected
Multiple Fault Testing

Testing multiple faults by groups of patterns:

  y = x1x2 ∨ x1x3x4 ∨ ¬x1x2x3

Multiple fault: x1 ≡ 1, x2 ≡ 0, x3 ≡ 1

       x1x2   x1x3x4   ¬x1 x2 x3   y   y'
T1     0 1    0 0 1    1  1  0     0   0
T2     1 1    1 0 1    0  1  0     1   1
T3     1 0    1 0 1    0  0  0     0   1

Under T1 and T2 the faults mask each other; under T3 the multiple fault is detected. This is an example where the method of test pairs does not help, while a group of patterns does.
Defect-Oriented Test Generation

Defect-level constraints calculation:

  y* = F*(x1, x2, …, xn, d) = (¬d ∧ F) ∨ (d ∧ Fd)

where d = 1 if the defect is present.

  Wd : ∂y*/∂d = 1

[Figure: component F(x1, x2, …, xn) with a defect d; the constraints Wd describe the defect at the logic level.]
Defect-Oriented Test Generation

Test generation for a bridging fault (bridge between leads 73 and 6):

[Figure: the seven-NAND macro with the bridge between branch 73 of input 7 and input 6; the component with the defect is described by the constraints Wd, and a path is activated from the fault site to the output y.]

Fault manifestation: Wd = ¬x6·x7 = 1: x6 = 0, x7 = 1, x7 = D
Fault propagation: x2 = 1, x1 = 1, b = 1, c = 1
Line justification: b = 1: x5 = 0

Solving ∂y/∂x7,3 ∧ Wd = 1 yields the test x1 = 1, x2 = 1, x5 = 0, x6 = 0, x7 = 1.
Test generation for Sequential Faults

Time frame model:

[Figure: the sequential circuit unrolled into copies of its combinational part CC, with the state register R between consecutive time frames; inputs x and outputs y per frame.]

Fault sensitization: a test pattern consists of an input pattern and a state
Fault propagation: to propagate a fault to the output, an input pattern and a state are needed
Line justification: to reach the needed state, an input sequence is needed
Hierarchical Test Generation

• In high-level symbolic test generation, the test properties of components are often described in the form of fault-propagation modes
• These modes will usually contain:
 – a list of control signals such that the data on the input lines is reproduced without logic transformation at the output lines - an I-path, or
 – a list of control signals that provide a one-to-one mapping between data inputs and data outputs - an F-path
• The I-paths and F-paths constitute connections for propagating test vectors from input ports (or any controllable points) to the inputs of the Module Under Test (MUT), and for propagating the test response to an output port (or any observable points)
• In the hierarchical approach, top-down and bottom-up strategies can be distinguished
Hierarchical Test Generation Approaches

[Figure: a module embedded in a system. Bottom-up approach: the module's precomputed test, with the values a, c, D fixed and the variable x free, is propagated through the system (A = ax, B = bx, C = cx). Top-down approach: the constraints a', c', D' are fixed by the system and the module's test is generated under them (A = a'x, D' = d'x, C = c'x).]
Hierarchical Test Generation Approaches

Bottom-up approach:
• Pre-calculated tests for components, generated at the low level, are assembled at a higher level
• It fits well the uniform hierarchical approach to test, which covers both component testing and communication network testing
• However, the bottom-up algorithms ignore the incompleteness problem: the constraints imposed by other modules and/or the network structure may prevent the local test solutions from being assembled into a global test
• The approach works well only if the corresponding testability demands are fulfilled

[Figure: as before, the module's local test values a, c, D are fixed and x is free.]
Hierarchical Test Generation Approaches

Top-down approach:
• The top-down approach has been proposed to solve the test generation problem by deriving environmental constraints for the low-level solutions
• This method is more flexible, since it does not narrow the search for the global test solution to pregenerated patterns for the system modules
• However, the method is of little use when the system is still under development in a top-down fashion, or when "canned" local tests for modules or cores have to be applied

[Figure: as before, the constraints a', c', D' are fixed by the environment and x is free.]
Test Generation with SSBDDs

The nodes represent signal paths through gates; the two possible faults of a DD node represent all the stuck-at faults along the signal path.

[Figure: the seven-NAND macro with inputs 1 … 7 (input 7 fanning out to branches 71, 72, 73) and its SSBDD with nodes 6, 73 on the upper path and 1, 2, 5, 72, 71 on the lower path.]

Test pattern for the node 71: inputs 1 … 6 = 1 1 0 0 1 1, with the fault effect applied through input 7; the output y differs between "no fault" and "fault 71 ≡ 0".
Structural Test Generation on SSBDDs

Multiple path fault propagation by DDs:

[Figure: the reconvergent-fanout circuit from the multiple-path example, with two decision diagrams for its output y.]

A functional DD over the input variables x1 … x4 is used for testing the inputs; a structural DD over the path variables x11, x12, …, x42 is used for testing the paths.
Example: Test Generation with SSBDDs

Testing stuck-at-0 faults on paths (circuit with inputs x1 … x4, fanout branches x11, x12, x13, x21, x22, x31, x32 and output y):

Test pattern:
x1 x2 x3 x4   y
1  1  0  -    1

Tested faults: x12 ≡ 0, x21 ≡ 0

[SSBDD: the activated path runs through the tested nodes to the terminal 1; the remaining nodes are set so that only faults on the activated path affect y.]
Example: Test Generation with SSBDDs

Testing stuck-at-0 faults on paths (continued):

Test pattern:
x1 x2 x3 x4   y
1  0  1  1    1

Tested faults: x12 ≡ 0, x31 ≡ 0, x4 ≡ 0
Example: Test Generation with SSBDDs

Testing stuck-at-0 faults on paths (continued):

Test pattern:
x1 x2 x3 x4   y
0  1  1  0    1

Tested faults: x13 ≡ 1, x22 ≡ 0, x32 ≡ 0
Example: Test Generation with SSBDDs

Testing stuck-at-1 faults on paths:

Test pattern:
x1 x2 x3 x4   y
0  0  1  1    0

Tested faults: x11 ≡ 1, x12 ≡ 1, x22 ≡ 1
Example: Test Generation with SSBDDs
(Figure: the same circuit and SSBDD.)

Testing stuck-at-1 faults on paths.
Test pattern: x1 x2 x3 x4 = 1 0 0 1, y = 0
Tested faults: x21≡1, x31≡1, x13≡0
Example: Test Generation with SSBDDs
(Figure: the same circuit and SSBDD.)

Testing stuck-at-1 faults on paths.
Test pattern: x1 x2 x3 x4 = 1 0 1 0, y = 0
Tested fault: x4≡1
Not yet tested fault: x32≡1
Transformation of BDDs
(Figure: step-by-step transformation of the SSBDD with nodes x11, x12, x13, x21, x22, x31, x32, x4 into an optimized BDD over x1, x2, x3, x4 by successively merging the nodes that represent the same variable.)
Example: Test Generation with BDDs
(Figure: the same circuit with its SSBDD and the optimized BDD.)

Testing stuck-at faults on the inputs.
Test pair (D = 0, 1): x1 x2 x3 x4 = D 1 0 –, y = D
Tested faults: x1≡0, x1≡1
Multiple Fault Testing with SSBDDs
Method of pattern groups on SSBDDs

(Figure: the circuit and its SSBDD with one node per fan-out branch of x1, x2, x3 and for x4.)

Disjunctive normal forms tend to explode; DDs provide an alternative.

Test group for testing a part of the circuit:
x1 x2 x3 x4   y
 1  1  0  –   1
 0  1  1  –   0
 1  0  1  –   0
 1  1  0  –   0
Test Generation for Digital Systems
(Figure: a data path with registers R1, R2, input IN, multiplexers M1, M2, M3, control signals y1…y4, and the decision diagram of R2 whose branches select R1 + R2, IN + R2, R1 * R2, IN * R2.)

High-level test generation with DDs: conformity test.
Multiple paths are activated in a single DD; the control function y3 is tested.

Test program:
Control: for D = 0, 1, 2, 3 apply y1 y2 y3 y4 = 0 0 D 2
Data: choose R1, R2, IN so that the values of R1 + R2, IN + R2, R1 * R2, IN * R2 are pairwise different
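The conformity-test idea can be sketched as follows (a minimal sketch, not from the slides: the helper name and the operand values are assumptions; the four candidate functions and the control pattern 0 0 D 2 are taken from the slide):

```python
# Conformity test sketch: exercise every control value D of y3 with
# data chosen so that the four candidate functions give pairwise
# different results, making a wrong branch selection observable.
def conformity_patterns(r1, r2, inp):
    funcs = {0: r1 + r2, 1: inp + r2, 2: r1 * r2, 3: inp * r2}
    if len(set(funcs.values())) != 4:
        raise ValueError("data does not distinguish the branches")
    # one pattern (y1, y2, y3, y4) per control value D, with the
    # expected fault-free response attached
    return [((0, 0, d, 2), funcs[d]) for d in range(4)]

patterns = conformity_patterns(2, 3, 4)   # hypothetical operands
# -> [((0,0,0,2), 5), ((0,0,1,2), 7), ((0,0,2,2), 6), ((0,0,3,2), 12)]
```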
Test Generation for Digital Systems
(Figure: the same data path and decision diagram.)

High-level test generation with DDs: scanning test.
A single path is activated in a single DD; the data function R1 * R2 is tested.

Test program:
Control: y1 y2 y3 y4 = 0 0 3 2
Data: for all specified pairs of (R1, R2)
Test Generation for Digital Systems
High-level path activation on DDs.

(Figure: a data path with registers R1, R2, R3, function unit F, multiplexers A, B, C, control signals y1, y2, y3, and the decision diagrams for Y, R3 and R1 with branches C, C + R′2, R′1, R′3, F(B, R′3), A, etc.)

Transparency functions on decision diagrams:
Y = C   when y3 = 2, R′3 = 0   (C is to be tested)
R1 = B  when y1 = 2, R′3 = 0   (R1 is to be justified)
Test Generation for Digital Systems
Modelling the Control Path by DDs
FSM state transitions and output functions:

State q   Condition   Next state q′   y1 y2 y3
   0         –             1           0  0  1
   1       R2 = 0          2           1  2  0
   1       R2 ≠ 0          3           0  2  1
   2         –             4           2  0  0
   3         –             4           2  1  1
   4         –             0           1  1  2

(Figure: the corresponding DD for the FSM, branching on q and on the condition R′2 = 0.)
Test Generation for Digital Systems
(Figure: the complete system model — the data path with its decision diagrams for Y, R3, R1 and A, and the control-path DD for q′ and y1, y2, y3.)
Test Generation for Digital Systems
High-level test generation for the data path (example).

Test generation steps:
• Fault manifestation
• Fault-effect propagation
• Constraints justification

(Figure: the time-expanded activation scheme over the frames t, t−1, t−2, t−3 — fault manifestation C = D at q′ = 4; propagation A = D1, R′1 = D2, B = D2 up to R3 = D; justification of the constraints R′2 = 0, y2 = 0 at q′ = 2, 1, 0.)
Test Generation for Digital Systems
Test generation step: fault-effect propagation.

(Figure: the same time-expanded scheme together with the DDs of Y, R3 and of the control path used for propagating the fault effect D to an observable output.)
Test Generation for Digital Systems
Test generation step: line justification at time t−1.

(Figure: path activation procedures on the DDs of Y, R3, R1, A and y2, together with the time-expanded scheme and the control-path DD.)
Test Generation for Digital Systems
High-level test generation example — symbolic test sequence (data columns A, B, C, R1, R2, R3, Y are only partially specified):

t   q′   y1 y2 y3   data
1   0    0  0  1    0
2   1    1  2  0    0
3   2    2  0  0    D2  D2  0
4   4    1  1  2    D1  D   D   D

(Figure: the time-expanded activation scheme of the previous slides.)
Test Generation for Microprocessors
Modelling a microprocessor by high-level DDs (example).

Instruction set:
I1:  MVI A,D    A ← IN
I2:  MOV R,A    R ← A
I3:  MOV M,R    OUT ← R
I4:  MOV M,A    OUT ← A
I5:  MOV R,M    R ← IN
I6:  MOV A,M    A ← IN
I7:  ADD R      A ← A + R
I8:  ORA R      A ← A ∨ R
I9:  ANA R      A ← A ∧ R
I10: CMA        A ← ¬A

(Figure: DD-model of the microprocessor — decision diagrams for OUT, R and A branching on the instruction variable I.)
Test Generation for Microprocessors
Test program synthesis (example).

Scanning test for the adder: instruction sequence I5, I1, I7, I4 for all needed pairs of (A, R).

(Figure: the DD-model and the time-expanded scheme — t−3: load R (I5, IN(1)); t−2: load A (I1, IN(2)); t−1: test A ← A + R (I7); t: observe A on OUT (I4).)
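The scanning test above can be sketched behaviourally (a minimal sketch: the instruction semantics follow the slide's table, while the word width and helper names are assumptions):

```python
# Behavioural sketch of the slide's instruction set (registers A, R,
# input IN; I3/I4 move R or A to OUT for observation).
def step(instr, A, R, IN=0):
    if instr in ("I1", "I6"):  A = IN            # A <- IN
    elif instr == "I2":        R = A             # R <- A
    elif instr == "I5":        R = IN            # R <- IN
    elif instr == "I7":        A = A + R         # ADD R
    elif instr == "I8":        A = A | R         # ORA R
    elif instr == "I9":        A = A & R         # ANA R
    elif instr == "I10":       A = ~A & 0xFF     # CMA, 8-bit assumed
    return A, R

def adder_scan_test(a_val, r_val):
    """Scanning test I5, I1, I7, I4 for one operand pair (A, R)."""
    A = R = 0
    A, R = step("I5", A, R, IN=r_val)   # t-3: load R
    A, R = step("I1", A, R, IN=a_val)   # t-2: load A
    A, R = step("I7", A, R)             # t-1: A <- A + R
    return A                            # t:   I4 observes A on OUT
```

Running `adder_scan_test` for all needed operand pairs yields the expected sums against which the real responses are compared.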
Test Generation for Microprocessors
Data generation for a test program (example).

Conformity test for the decoder: instruction sequence I5, I1, D, I4 for all D ∈ {I1 … I10} at given A, R, IN.

Data generation: IN = 0, A = 101, R = 110

Functions observed on A:
I1, I6:   IN      = 0
I2–I5:    A       = 101
I7:       A + R   = 1011
I8:       A ∨ R   = 111
I9:       A ∧ R   = 100
I10:      ¬A      = 010

The data IN, A, R are generated so that the values of all functions are different.
Defect-Oriented Hierarchical Test Generation
Multi-level approach with the functional fault model:

• System level: high-level symbolic fault manifestation, high-level symbolic fault-effect propagation, and high-level constraints justification
• Module level: the functional fault is activated
• Component level: low-level test generation and satisfaction of the low-level fault constraints for the defect

(Figure: the hierarchy Defect → Component → Module → System with these activities attached to its levels.)
Overview
1. Introduction
2. Theory: Boolean differential algebra
3. Theory: Decision diagrams
4. Fault modelling
5. Test generation
6. Fault simulation
7. Fault diagnosis
8. Testability measuring
9. Design for testability
10. Built-in Self-Test
Overview: Fault Simulation
• Overview of the methods
• Low (gate) level methods
• Parallel fault simulation
• Deductive fault simulation
  – Gate-level fault list propagation (library based)
  – Boolean full differential based (general approach)
  – SSBDD based (tradeoff possibility)
• Concurrent fault simulation
• Critical path tracing
• Parallel critical path tracing
• Hierarchical fault simulation
Fault simulation
Goals:
• Evaluation (grading) of a test T (fault coverage)
• Guiding the test generation process
• Constructing fault tables (dictionaries)
• Fault diagnosis

(Figures: two flow charts — test evaluation: generate an initial T, evaluate it, and update T until the fault coverage is sufficient; test generation: select a target fault, generate a test for it, fault simulate, discard the detected faults, and repeat until no faults remain.)
Fault simulation
Fault simulation techniques:
• serial fault simulation
• parallel fault simulation
• deductive fault simulation
• concurrent fault simulation
• critical path analysis
• parallel critical path analysis

Common concepts:
• fault specification (fault collapsing)
• fault insertion
• fault effect propagation
• fault discarding (dropping)

Comparison of methods: by the fault table — rows are test patterns Tj, columns are faults Fi; entry (i, j) = 1 (0) if Fi is detectable (not detectable) by Tj.
Parallel Fault Simulation
Parallel patterns: one computer word carries the value of a signal for several test patterns, so the fault-free and the faulty circuit are each simulated over a whole pattern set with single word-wide logic operations; an inserted stuck-at-1 fault shows up as the bits where the faulty output word differs from the fault-free one (detected errors).

Parallel faults: alternatively, the bits of a word represent one test pattern applied to the fault-free machine and to several faulty machines (e.g. stuck-at-0 and stuck-at-1 copies); the faults are inserted by forcing the corresponding bits, and the detected errors are again the differing output bits.

(Figure: a two-gate circuit — an AND gate over x1, x2 feeding an OR gate with x3 — simulated with short words in both modes.)
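The parallel-pattern mode can be sketched with plain integer bit operations (a minimal sketch: the two-gate circuit y = x1 ∧ x2, z = y ∨ x3 is an assumption read from the figure, and the packed pattern words are made up):

```python
# Parallel (bit-wise) fault simulation: each bit of a word holds one
# test pattern, so one pass through the netlist evaluates the whole
# pattern set at once.
def simulate(x1, x2, x3):
    y = x1 & x2          # AND gate
    return y | x3        # OR gate -> output z

WORD = 0b1111            # four patterns packed LSB-first
x1, x2, x3 = 0b0011, 0b0101, 0b0000
good = simulate(x1, x2, x3)

# insert "x2 stuck-at-1" by forcing its word to all-ones
faulty = simulate(x1, WORD, x3)

detected = (good ^ faulty) & WORD   # bit = 1 -> that pattern detects
# here only pattern 1 (x1=1, x2=0, x3=0) detects: detected == 0b0010
```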
Parallel Simulation of Faults
Interpretation of the values in a computer word: bit 0 holds the value in the good circuit, bits 1…31 hold the values in the faulty circuits #1…#31.

Three-valued logic, coded with two words A = (A1, A2):

value   0   1   u
A1      0   1   0
A2      0   1   1

Boolean operations:
C = A ∧ B:   C1 = A1 ∧ B1,  C2 = A2 ∧ B2
C = ¬A:      C1 = ¬A2,      C2 = ¬A1

Fault insertion on a line z uses a mask word (selecting the faulty machines) and a word of stuck values: the masked bits of z are replaced by the corresponding stuck values.

(Figure: example words — mask for z, stuck values for z, z before fault insertion, z after fault insertion.)
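The mask-based insertion step can be written in one line (a minimal sketch; the example words are made up, not the slide's):

```python
# Mask-based fault insertion for parallel faulty machines packed in
# one word: 'mask' selects the machines in which line z is faulty,
# 'stuck' holds the values forced there; all other bits keep z.
def insert_fault(z, mask, stuck):
    return (z & ~mask) | (stuck & mask)

z     = 0b0110   # value of z in machines 0..3 before insertion
mask  = 0b0101   # machines 0 and 2 carry a fault on z
stuck = 0b0001   # machine 0: stuck-at-1, machine 2: stuck-at-0
z_faulty = insert_fault(z, mask, stuck)   # -> 0b0011
```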
Deductive Fault Simulation
Gate-level fault list propagation (library of formulas for gates).

(Figure: a five-input macro — two AND gates feeding OR gates — with internal nodes a, b, c and output y.)

La — the faults causing an erroneous signal on the node a
Ly — the faults causing an erroneous signal on the output node y

Fault list calculation:
La = L4 ∩ L5
Lb = L1 ∩ L2
Lc = L3 ∪ La
Ly = Lb \ Lc
-----------------------------------------------------------
Ly = (L1 ∩ L2) \ (L3 ∪ (L4 ∩ L5))
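The list calculation above maps directly onto set operations (a minimal sketch; the fault lists on the input lines are hypothetical):

```python
# Deductive fault-list propagation with Python sets, reproducing the
# slide's macro result Ly = (L1 & L2) - (L3 | (L4 & L5)).
L1, L2 = {1, 9}, {1, 2, 9}       # hypothetical input fault lists
L3, L4, L5 = {3}, {4, 9}, {5, 9}

La = L4 & L5        # AND gate, inputs at the controlling value
Lb = L1 & L2
Lc = L3 | La        # OR gate, no input at the controlling value
Ly = Lb - Lc        # faults visible on the output y
# Ly == {1}: fault 9 reaches b but is masked through c
```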
Deductive Fault Simulation
(Figure: the same five-input macro with the applied test pattern.)

Fault list calculated at gate level:
Ly = (L1 ∩ L2) \ (L3 ∪ (L4 ∩ L5))

Macro-level fault propagation — solving the Boolean differential equation of the macro:
dy = y ⊕ [ ((x1 ⊕ dx1) ∧ (x2 ⊕ dx2)) ∧ ¬((x3 ⊕ dx3) ∨ ((x4 ⊕ dx4) ∧ (x5 ⊕ dx5))) ]

For the applied test pattern this reduces to
dy = (dx1 ∧ dx2) ∧ ¬(dx3 ∨ (dx4 ∧ dx5)),
where dxk = 1 stands for a fault propagated to the input xk, i.e. dxk corresponds to Lk.
Deductive Fault Simulation with DDs
Macro-level fault propagation on the SSBDD.

(Figure: the same macro with its SSBDD over the nodes 1, 2, 3, 4, 5; fault list propagated: Ly = (L1 ∩ L2) \ (L3 ∪ (L4 ∩ L5)).)

Fault list calculation on the DD:
• Faults on the activated path: Ly = (L1 ∩ L2)
• First-order fault masking effect: Ly = (L1 ∩ L2) \ L3
• Second-order masking effect (tradeoff): Ly = (L1 ∩ L2) \ (L3 ∪ (L4 ∩ L5))

There is a tradeoff between speed and accuracy: the fewer masking orders are taken into account, the faster the simulation, but the more pessimistic the result — fewer faults are reported detected than in reality.
Concurrent Fault Simulation
• A good circuit N is simulated
• For every faulty circuit NF, only those elements in NF are simulated that differ from the corresponding ones in N
• These differences are maintained for every element x in N in the form of a concurrent fault list

Example: a gate simulated concurrently (fault-free values a = 0, b = 1, c = 1):

Faults in list   a b c   Origin of faults
Fault-free       0 1 1
F1               1 1 0   faults propagated to a
F2               0 0 1   faults propagated to b
F3: a ≡ 1        1 1 0   fault activated at the gate
F4: b ≡ 1        0 1 1   fault activated at the gate
Concurrent Fault Simulation
• A fault is said to be visible on a line when its values for N and NF are different
• In deductive simulation, only visible faults belong to the fault lists
• In concurrent simulation, faults are excluded (dropped) from the fault list only when the element in NF becomes equivalent to the element of N

Example: a simple circuit simulated (the first gate drives c, a second gate with input d drives e):

Faults in list   a b c d e
Fault-free       0 1 1 1 0
F1               1 1 0 1 1
F2               0 0 1 1 0   dropped
F3: a ≡ 1        1 1 0 1 1
F4: b ≡ 1        0 1 1 1 0   dropped
F5: c ≡ 0        – – 0 1 1
F6: d ≡ 0        – – 1 0 1
Critical Path Tracing
(Figure: the five-input macro, its SSBDD, and two small AND-gate circuits illustrating the problems.)

Problems:
• The critical path is not continuous
• The critical path breaks on the fan-out
Parallel Critical Path Tracing
(Figure: a small circuit with fan-out stem x1 and output y, simulated in parallel for four patterns.)

T1: no faults detected
T2: x1 ≡ 1 detected
T3: x1 ≡ 0 detected
T4: no faults detected
Detected-faults vector: – 1 0 –

Handling of fan-out points:
• fault simulation, or
• Boolean differential calculus: the derivative ∂y/∂x of y = F(x1(x), …, xk(x)), where x1, …, xk are the fan-out branches of the stem x
Hierarchical Concurrent Fault Simulation
(Figure: a chain of high-level components; the first pattern P enters, and each component transforms a set of patterns with faults, P; P1(R1) … Pn(Rn), into the next such set.)
Hierarchical fault simulation
Main ideas of the procedure:
• A target block is chosen
• This block is represented at the gate level
• Fault simulation is carried out at the low (gate) level
• The faults are propagated through the other blocks at the high level
Hierarchical fault simulation

(Figure: a target block under fault analysis placed between a gate-level block and a register-level block.)

To be simulated at given faults (complex pattern entering the target block):
P10 1100 (to be fault simulated); P11 0010, R11 = {3}; P12 1001, R12 = {2,4,8}

Low-level fault simulation:
P20 1010; P21 1100, R21 = {2,4,9}; P22 0110, R22 = {1,3}; P23 1011, R23 = {6,10}; P24 1110, R24 = {5,8}; P25 0010, R25 = {7,11,12}

High-level fault propagation:
P30 0100; P31 1000, R31 = {2,4,9}; P32 1100, R32 = {1,3}; P33 0110, R33 = {6,10}; P34 1100, R34 = {5,8}; P35 0100, R35 = {7,11,12}

Updated complex pattern (equal faulty values merged, values equal to the fault-free one dropped):
P30 0100; P31 1000, R31 = {2,4,9}; P32 1100, R32 = {1,3,5,8}; P33 0110, R33 = {6,10}
Hierarchical fault simulation
Definition of the complex pattern:
• D = {P, (P1, R1), …, (Pk, Rk)}
• P is the fault-free pattern (value)
• Pi (i = 1, 2, …, k) are faulty patterns, caused by the sets of faults Ri
• All the simulated faults causing the same faulty pattern Pi are put together in one group Ri
• R1 … Rk are the propagated fault groups causing, correspondingly, the faulty patterns P1 … Pk
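Building a complex pattern from per-fault values can be sketched directly from this definition (a minimal sketch; the helper name is an assumption, the numbers are the DA example computed later in these slides):

```python
# Sketch of building a complex pattern D = {P, (P1,R1), ..., (Pk,Rk)}:
# faults producing the same faulty value share one group Ri, and
# faults whose value equals the fault-free P are dropped (invisible).
def make_complex(fault_free, value_under_fault):
    groups = {}
    for fault, value in value_under_fault.items():
        if value != fault_free:
            groups.setdefault(value, set()).add(fault)
    return fault_free, groups

P, R = make_complex(8, {4: 3, 3: 4, 7: 4, 9: 5, 5: 7, 1: 9, 8: 9, 2: 8})
# P == 8, R == {3: {4}, 4: {3, 7}, 5: {9}, 7: {5}, 9: {1, 8}},
# matching the final complex vector DA = {8, 3(4), 4(3,7), 5(9), 7(5), 9(1,8)}
```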
Fault Simulation with DD-s
Fault propagation through a complex RT-level component.

(Figure: the subsystem computing A, and its decision diagram — A branches on q, xA, xC into A + 1, A − 1, A + B, B + C, A + C, A and 0.)

Complex vectors at the inputs:
Dq = {1, 0 (1,2,5), 4 (3,4)}
DxA = {0, 1 (3,5)}
DxC = {1, 0 (4,6)}
DA = {7, 3 (4,5,7), 4 (1,3,9), 8 (2,8)}
DB = {8, 3 (4,5), 4 (3,7), 6 (2,8)}
DC = {4, 1 (1,3,4), 2 (2,6), 5 (6,7)}

The new DA is to be calculated.
Fault Simulation with DD-s
Example of high-level fault simulation — given the complex vectors:
• Dq = {1, 0 (1,2,5), 4 (3,4)}
• DxA = {0, 1 (3,5)}
• DxC = {1, 0 (4,6)}
• DA = {7, 3 (4,5,7), 4 (1,3,9), 8 (2,8)}
• DB = {8, 3 (4,5), 4 (3,7), 6 (2,8)}
• DC = {4, 1 (1,3,4), 2 (2,6), 5 (6,7)}

Final complex vector for A:
• DA = {8, 3 (4), 4 (3,7), 5 (9), 7 (5), 9 (1,8)}
Overview
1. Introduction
2. Theory: Boolean differential algebra
3. Theory: Decision diagrams
4. Fault modelling
5. Test generation
6. Fault simulation
7. Fault diagnosis
8. Testability measuring
9. Design for testability
10. Built-in Self-Test
Overview: Fault Diagnosis
• Overview of the methods
• Combinational methods of diagnosis
  – Fault table based methods
  – Fault dictionary based methods
  – Minimization of diagnostic data in fault tables
  – Methods for improving the diagnostic resolution
• Sequential methods of diagnosis
  – Edge-pin testing
  – Guided-probe fault location
• Design error diagnosis
Fault diagnosis
Fault diagnosis methods:
• Combinational methods
  – Fault localization is carried out after the whole testing experiment has finished, by combining all the gathered experimental data
  – The diagnosis is made by using fault tables or fault dictionaries
• Sequential methods (adaptive testing)
  – Fault localization is carried out step by step, where each step depends on the result of the diagnostic experiment at the previous step
• Sequential fault diagnosis can be carried out either
  – by observing only the output responses of the UUT, or
  – by probing also the internal control points of the UUT with a special probe (guided probing)
Combinational Fault Diagnosis

Fault localization by fault tables:

      F1 F2 F3 F4 F5 F6 F7 | E1 E2 E3
T1     0  1  1  0  0  0  0 |  0  0  1
T2     1  0  0  1  0  0  0 |  0  1  0
T3     1  1  0  1  0  1  0 |  0  1  0
T4     0  1  0  0  1  0  0 |  1  0  1
T5     0  0  1  0  1  1  0 |  1  0  1
T6     0  0  1  0  0  1  1 |  0  0  0

E1 matches the column of F5 — fault F5 located.
E2 matches the columns of both F1 and F4 — these faults are not distinguishable.
E3 matches no column — diagnosis not possible.
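The column-matching step can be sketched with the slide's fault table encoded as bit strings over T1…T6:

```python
# Fault localization by a fault table: compare the observed pass/fail
# vector E (one character per test T1..T6) with every fault column.
TABLE = {"F1": "011000", "F2": "101100", "F3": "100011", "F4": "011000",
         "F5": "000110", "F6": "001011", "F7": "000001"}

def diagnose(E):
    return sorted(f for f, col in TABLE.items() if col == E)

located   = diagnose("000110")   # fault located
ambiguous = diagnose("011000")   # faults not distinguishable
no_match  = diagnose("100110")   # diagnosis not possible
```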
Combinational Fault Diagnosis
• Fault dictionaries contain the same data as the fault tables, with the difference that the data is reorganised
• The column bit vectors can be represented by ordered decimal codes or by some kind of compressed signature

Fault localization by fault dictionaries:

No   Bit vector   Decimal   Faults
1    000001       01        F7
2    000110       06        F5
3    001011       11        F6
4    011000       24        F1, F4
5    100011       35        F3
6    101100       44        F2

Test results: E1 = 06 → F5; E2 = 24 → F1 or F4; E3 = 38 → no match.
Combinational Fault Diagnosis
Minimization of diagnostic data:
• To reduce the cost of building a fault table, detected faults may be dropped from simulation
• All the faults detected for the first time by the same vector produce the same column vector in the table and will be included in the same equivalence class of faults
• Testing can stop after the first failing test; no information from the following tests can be used

      F1 F2 F3 F4 F5 F6 F7
T1     0  1  1  0  0  0  0
T2     1  0  0  1  0  0  0
T3     0  0  0  0  0  1  0
T4     0  0  0  0  1  0  0
T5     0  0  0  0  0  0  0
T6     0  0  0  0  0  0  1

With fault dropping, only 19 (fault, test) entries need to be simulated instead of all 42. The following faults remain indistinguishable: {F2, F3}, {F1, F4}. A tradeoff between computing time and diagnostic resolution can be achieved by dropping faults only after k > 1 detections.
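The 19-versus-42 comparison can be reproduced by counting simulated entries under fault dropping (a minimal sketch; the table is the slide's original, undropped fault table):

```python
# Fault dropping: a fault is simulated only until its k-th detection.
# Counting the simulated (fault, test) entries for k = 1 reproduces
# the slide's 19-versus-42 comparison.
TABLE = {"F1": "011000", "F2": "101100", "F3": "100011", "F4": "011000",
         "F5": "000110", "F6": "001011", "F7": "000001"}

def simulated_entries(table, k=1):
    left = dict.fromkeys(table, k)       # detections left before drop
    count = 0
    for t in range(6):                   # tests T1..T6 in order
        for f in [f for f in table if left[f] > 0]:
            count += 1
            if table[f][t] == "1":
                left[f] -= 1
    return count

# simulated_entries(TABLE, k=1) == 19; without dropping: 7 * 6 == 42
```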
Improving Diagnostic Resolution
Generating tests to distinguish faults:
• To improve the fault resolution of a given test set T, it is necessary to generate tests that distinguish among faults equivalent under T
• Consider distinguishing between faults F1 and F2: a test is to be found which detects one of these faults but not the other
• The following cases are possible:
  – F1 and F2 do not influence the same outputs: a test should be generated for F1 (F2) using only the subcircuit feeding the outputs influenced by F1 (F2)
  – F1 and F2 influence the same set of outputs: a test should be generated for F1 (F2) without activating F2 (F1)
• How to activate a fault without activating another one?
Improving Diagnostic Resolution
Faults influencing different outputs: F1: x3,1 ≡ 0, F2: x4 ≡ 1.

Method:
• F1 may influence both outputs; F2 may influence only x8
• The test pattern 0010 activates F1 up to both outputs, and F2 only to x8
• If both outputs are wrong, F1 is present; if only x8 is wrong, F2 is present

(Figure: the example circuit with inputs x1…x4, fan-out branches x3,1, x3,2, and outputs x7, x8.)
Improving Diagnostic Resolution
How to activate a fault without activating another one? F1: x3,2 ≡ 0, F2: x5,2 ≡ 1.

Method:
• Both faults influence the same output of the circuit
• The test pattern 0100 activates the fault F2; F1 is not activated: the line x3,2 has the same value as it would have if F1 were present
• The test pattern 0110 also activates only F2: F1 is now activated at its site but not propagated through the AND gate

(Figure: the same circuit with fan-out branches x5,1, x5,2.)
Improving Diagnostic Resolution
How to activate a fault without activating another one? F1: x3,1 ≡ 1, F2: x3,2 ≡ 1.

Method:
• Both faults may influence only the same output
• Both faults are activated to the same OR gate (neither of them is blocked)
• However, since the faults produce different values at the inputs of that gate, they can be distinguished:
  – If x8 = 0, F1 is present
  – Otherwise, either F2 is present or neither of the faults F1 and F2 is present

(Figure: the same circuit.)
Sequential Fault Diagnosis
Sequential fault diagnosis by edge-pin testing.

(Figure: diagnostic tree built over the fault table — T1 splits {F1…F7} into the passing class {F2, F3} and the failing class {F1, F4, F5, F6, F7}; further tests T2, T3, T4 refine the classes down to single faults, except {F1, F4}.)

• Two faults, F1 and F4, remain indistinguishable
• Not all test patterns used in the fault table are needed
• Different faults are identified by test sequences of different lengths
• The shortest sequence contains two patterns, the longest four
Sequential Fault Diagnosis
Guided-probe testing at the gate level.

(Figure: the faulty circuit and the search tree — probing proceeds from the output x8 inwards: x8 → x6 → x4, x5,2 → x2, x3,1, x3,2, x3; at each step a passing measurement redirects the search and a failing one descends towards the fault, ending in verdicts such as "line x3,1 is faulty", "line x2 is faulty", "OR gate x8 is faulty", "AND gate x6 is faulty", or "NOR gate x5 is faulty".)
Sequential Fault Diagnosis
Guided-probe testing at the macro level.

(Figure: the seven-input macro with fan-out branches 71, 72, 73, internal lines a–e, output y, and its SSBDD; there is a fault on the line 71.)

Nodes to be probed:
• Gate level: c, e, d, 1, a, 71 (6 attempts)
• Macro level (DD): 1, 71 (2 attempts)

Rules on DDs:
• Only the nodes whose leaving direction coincides with the leaving direction from the DD need to be probed
• If simulation shows that a node cannot explain the faulty behaviour, it can be dropped
Design error diagnosis
Design error sources:
• manual interference of the designer with the design during synthesis
• bugs in CAD software

Design error types:
• gate replacements
• extra or missing inverters
• extra or missing wires
• incorrectly placed wires
• extra or missing gates

Main approaches to design error diagnosis:
• An error model (of the design error types) is explicitly described
  – e.g. single gate replacement over the basis {AND, OR, NAND, NOR}
• Diagnosis without an error model
Design Error Diagnosis
Single gate design error model:
• The stuck-at fault model is used, with subsequent translation of the diagnosis into the design error domain
• This makes it possible to exploit standard gate-level ATPGs for verification and design error diagnosis purposes
• A hierarchical approach is used for generating test patterns which
  – first localize the faulty macro (tree-like subcircuit), and
  – then localize the erroneous gate in the faulty macro

Basic idea of the single gate error: to detect a design error in the implementation at an arbitrary gate sk = gk(s1, s2, …, sh), it is sufficient to apply a pair of test patterns which detect the faults si ≡ 1 and si ≡ 0 at one of the gate inputs si, i = 1, 2, …, h.

(Figure: the gate gk embedded between IN and OUT logic.)
Design Error Diagnosis
Mapping stuck-at faults into design errors:

(Table: for each implemented gate type — AND, OR, NAND, NOR — the stuck-at faults detected at its inputs s1, s2 are mapped onto the possible design errors: replacement by one of the other gate types or by NOT(x1) / NOT(x2).)

(Figure: the example circuit with gates g1…g14 and output y.)

Path-level diagnosis: x1 ≡ 1, x2,2 ≡ 0, x2,1 ≡ 1, x3,1 ≡ 1
Gate-level diagnosis: g6, g8, g9, g12
Additional test T8 = (00011): x3,1 ≡ 1 is missing ⇒ remove g9 and g12
x2,2 ≡ 0 suspected, x2,2 ≡ 1 is missing ⇒ g6 is correct
Diagnosis: g8 (x1 ≡ 1, x2,1 ≡ 1): AND8 replaced by OR8
Overview
1. Introduction
2. Theory: Boolean differential algebra
3. Theory: Decision diagrams
4. Fault modelling
5. Test generation
6. Fault simulation
7. Fault diagnosis
8. Testability measuring
9. Design for testability
10. Built-in Self-Test
Overview: Testability Measuring
• Quality policy of electronic design
• Tradeoffs of design for testability
• Testability criteria
• Testability measures
  – Heuristic measures
  – Probabilistic measures
• Calculation of signal probabilities
  – Parker–McCluskey method
  – Cutting method
  – Conditional probabilities based method
Design for Testability
The problem is QUALITY.

(Figure: quality policy loop — yield Y (determined by P, n) → testing → defect level DL (determined by Pa) → design for testability.)

P — probability of a defect
n — number of defects
Pa — probability of accepting a bad product
Y = (1 − P)^n — probability of producing a good product
Design for Testability
The problem is QUALITY.

n — number of defects
m — number of faults tested
P — probability of a defect
Pa — probability of accepting a bad product
T — test coverage, T = m/n

Y = (1 − P)^n

The m tested faults are absent with probability (1 − P)^m, hence

DL = 1 − (1 − P)^n / (1 − P)^m = 1 − (1 − P)^(n−m) = 1 − Y^((n−m)/n) = 1 − Y^(1−T)
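The relation can be sketched numerically (a minimal sketch; Y and T are given as fractions):

```python
# Defect level from yield and test coverage, per DL = 1 - Y**(1 - T).
def defect_level(Y, T):
    return 1 - Y ** (1 - T)

# e.g. Y = 90%, T = 90% leaves a defect level of about 1%
dl = defect_level(0.9, 0.9)
```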
Design for Testability
The problem is QUALITY:  DL = 1 − Y^(1−T)

DL (%) as a function of yield Y and test coverage T:

          T = 10%   T = 50%   T = 90%
Y = 90%      8         5         1
Y = 50%     45        25         5
Y = 10%     81        45         9

DL ↓ when T ↑ — improving testability lowers the defect level.
Design for Testability
Tradeoffs:

Lowering DL requires raising T; testability is improved by DFT (resynthesis or adding extra hardware), which in turn increases logic complexity, area, the number of I/O pins and power consumption, and may degrade performance and yield.

Economic tradeoff:
C(Design + Test) < C(Design) + C(Test)
Design for Testability
Economic tradeoff:

C(Design + Test) < C(Design) + C(Test)
C(DFT) + C(Test′) < C(Design) + C(Test)

C(Test) = C_TGEN + (C_APPLIC + (1 − Y) · C_TS) · Q
  (test generation, test application, troubleshooting, production volume Q)

C(DFT) = (C_D + ΔC_D) + Q · (C_P + ΔC_P)
  (design cost and product cost with their DFT overheads)
Testability Criteria
Qualitative criteria for design for testability:

Testing cost:
– test generation time
– test application time
– fault coverage
– test storage cost (test length)
– availability of automatic test equipment

Redesign-for-testability cost:
– performance degradation
– area overhead
– I/O pin demand
Testability of Design Types
General important relationships:

T(sequential logic) < T(combinational logic)
  Solution: scan-path design strategy
T(control logic) < T(data path)
  Solutions: data-flow design, scan-path design strategies
T(random logic) < T(structured logic)
  Solutions: bus-oriented design, core-oriented design
T(asynchronous design) < T(synchronous design)
Testability Estimations for Circuit Types
Circuits that are less controllable:
• decoders
• circuits with feedback
• counters
• clock generators
• oscillators
• self-timing circuits
• self-resetting circuits

Circuits that are less observable:
• circuits with feedback
• embedded RAMs, ROMs and PLAs
• error-checking circuits
• circuits with redundant nodes
Testability Measures
Evaluation of testability:

Testability combines controllability — C0(i), C1(j) — with observability — OY(k), OZ(k).

(Figure: a defect inside a macro; to detect it, the value 1 must be controllable at the defect site and the fault effect must be observable at an output, so the probability of detecting the defect depends on both measures.)
Heuristic Testability Measures
Controllability calculation.
Value: the minimum number of nodes that must be set in order to produce 0 or 1.
For inputs: C0(x) = C1(x) = 1.
For other signals, recursive calculation rules:

AND (y = x1 ∧ x2):
  C0(y) = min{C0(x1), C0(x2)} + 1
  C1(y) = C1(x1) + C1(x2) + 1
NOT (y = ¬x):
  C0(y) = C1(x) + 1
  C1(y) = C0(x) + 1
OR (y = x1 ∨ x2):
  C1(y) = min{C1(x1), C1(x2)} + 1
  C0(y) = C0(x1) + C0(x2) + 1
XOR (y = x1 ⊕ x2):
  C0(y) = min{C0(x1) + C0(x2), C1(x1) + C1(x2)} + 1
  C1(y) = min{C0(x1) + C1(x2), C1(x1) + C0(x2)} + 1
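The recursive rules translate directly into code (a minimal sketch; every node value is a pair (C0, C1), inputs start at (1, 1)):

```python
# SCOAP-style heuristic controllability per the rules above.
def and_gate(a, b):
    # 0 via the easier input; 1 needs both inputs at 1
    return (min(a[0], b[0]) + 1, a[1] + b[1] + 1)

def or_gate(a, b):
    # 0 needs both inputs at 0; 1 via the easier input
    return (a[0] + b[0] + 1, min(a[1], b[1]) + 1)

def not_gate(a):
    return (a[1] + 1, a[0] + 1)

x1 = x2 = (1, 1)            # primary inputs
y = and_gate(x1, x2)        # (2, 3)
z = not_gate(y)             # (4, 3)
```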
Heuristic Testability Measures
Observability calculation.
Value: the minimum number of nodes which must be set in order to propagate the fault.
For primary outputs: O(y) = 1.
For other signals, recursive calculation rules:

AND:  O(x1) = O(y) + C1(x2) + 1
NOT:  O(x) = O(y) + 1
OR:   O(x1) = O(y) + C0(x2) + 1
XOR:  O(x1) = O(y) + 1
Heuristic Testability Measures
Controllability and observability for the example macro (inputs 1…7, fan-out branches 71, 72, 73, internal lines a–e, output y):

x     C0(x)  C1(x)  O(x)
1       1      1      9
2       1      1     11
3       1      1      8
4       1      1      8
5       1      1      9
6       1      1      7
7       3      2      6
71      3      2     10
72      3      2      8
73      3      2      6
a       4      2      8
b       4      2      6
c       4      2      4
d       4      2      6
e       5      3      3
y       6      3      1
Heuristic Testability Measures
Testability calculation:
T(x ≡ 0) = C1(x) + O(x)
T(x ≡ 1) = C0(x) + O(x)

x     C0(x)  C1(x)  O(x)  T(x≡0)
1       1      1      9     10
2       1      1     11     12
3       1      1      8      9
4       1      1      8      9
5       1      1      9     10
6       1      1      7      8
7       3      2      6      8
71      3      2     10     12
72      3      2      8     10
73      3      2      6      8
a       4      2      8     10
b       4      2      6      8
c       4      2      4      6
d       4      2      6      8
e       5      3      3      6
y       6      3      1      4
Probabilistic Testability Measures
Controllability calculation:

Value: the probability of producing 0 or 1 on a node.

For inputs: C0(i) = p(xi = 0), C1(i) = p(xi = 1) = 1 − p(xi = 0)

For other signals, recursive calculation rules:

– NOT: py = 1 − px
– AND: py = px1 · px2; for n inputs, py = product over i = 1…n of pxi
– OR: py = 1 − (1 − px1)(1 − px2); for n inputs, py = 1 − product over i = 1…n of (1 − pxi)
Probabilistic Testability Measures
Probabilities at reconverging fanouts:

For a reconvergent structure (x1 and x2 feed two AND gates a and b whose outputs reconverge in an OR gate y), straightforward calculation gives

py = 1 − (1 − pa)(1 − pb) = 1 − (1 − px1(1 − px2))(1 − px2(1 − px1)) = ?

which is wrong, because a and b are correlated.

Correction of signal correlations: if both inputs of an AND gate come from the same signal x, then py = px · px = px^2 must be corrected to py = px, since x · x = x (every exponent is reduced to 1).
Calculation of Signal Probabilities
Straightforward methods (example circuit: a = NAND(x1, x2), b = NAND(x2, x3), c = NAND(a, b), y = AND(c, x2); for all inputs pk = 1/2):

Parker–McCluskey algorithm (reduce all exponents to 1 after each multiplication):

py = pc · p2 = (1 − pa·pb) · p2 =
   = (1 − (1 − p1p2)(1 − p2p3)) · p2 =
   = p1p2^2 + p2^2·p3 − p1p2^3·p3 =
   = p1p2 + p2p3 − p1p2p3 = 0.38

Calculation gate by gate (ignores the reconverging fanout of x2):

pa = 1 − p1p2 = 0.75, pb = 0.75, pc = 1 − pa·pb = 0.4375, py = pc·p2 ≈ 0.22 (wrong)
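Both numbers can be checked by brute force. The circuit below is a reconstruction of the slide's figure (a, b, c as NAND gates, y as the AND of c with the fanout branch of x2) and is therefore an assumption:

```python
from itertools import product

def circuit(x1, x2, x3):
    # Assumed reconstruction of the slide's macro:
    # a = NAND(x1, x2), b = NAND(x2, x3), c = NAND(a, b), y = AND(c, x2)
    a = 1 - (x1 & x2)
    b = 1 - (x2 & x3)
    c = 1 - (a & b)
    return c & x2

# Exact p(y=1) with p(xk=1) = 1/2: average over all 8 input combinations.
exact = sum(circuit(*v) for v in product((0, 1), repeat=3)) / 8
print(exact)                      # 0.375, the slide's 0.38

# Gate-by-gate propagation ignores the reconverging fanout of x2:
pa = pb = 1 - 0.5 * 0.5           # NAND: 1 - p1*p2 = 0.75
pc = 1 - pa * pb                  # 0.4375
py = pc * 0.5                     # 0.21875 -> the (wrong) 0.22
print(py)
```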
Probabilistic Testability Measures
Parker–McCluskey (same circuit; for all inputs pk = 1/2):

Observability of signal a:

p(∂y/∂a = 1) = pb · p2 = (1 − p2p3) · p2 = p2 − p2^2·p3 = p2 − p2p3 = 0.25

Testability of the fault a ≡ 1:

p(a ≡ 1) = p(∂y/∂a = 1) · p(a = 0) = (p2 − p2p3) · p1p2 = p1p2^2 − p1p2^2·p3 = p1p2 − p1p2p3 = 0.125
Calculation of fault testabilities
Calculating the probability of detecting the fault b ≡ 1 (same circuit):

P(b ≡ 1) = P(∂y/∂b = 1) · P(b = 0)

To propagate b to the output, a = 1 and x2 = 1 are needed:

P(a = 1) = 1 − p1p2
P(x2 = 1) = p2
P(b = 0) = p2p3

P(b ≡ 1) = (1 − p1p2) · p2 · p2p3 = p2p3(1 − p1p2) = p2p3 − p1p2p3 = 1/8
Calculation of Signal Probabilities
Using BDDs (same circuit; the BDD of y has two 1-paths, L1 through (x1, x21, x23) and L2 through (NOT x1, x22, x3, x23)):

py = p(L1) + p(L2) =
   = p1·p21·p23 + (1 − p1)·p22·p3·p23 =
   = p1p2 + p2p3 − p1p2p3 = 0.38

(the fanout branches x21, x22, x23 all carry the probability p2; exponents are reduced to 1)

For all inputs: pk = 1/2
Heuristic Testability Measures
Using BDDs for controllability calculation (same circuit):

Gate-level calculation gives:

x  C0(x)  C1(x)
a  3      2
b  3      2
c  5      4
y  2      6

The BDD-based algorithm for the heuristic measure is the same as for the probabilistic measure:

C1(y) = min[C1(L1), C1(L2)] + 1 =
      = min[C1(x1) + C1(x2), C0(x1) + C1(x2) + C1(x3)] + 1 =
      = min[2, 3] + 1 = 3
Probabilistic Testability Measures
Using BDDs (same circuit; for all inputs pk = 1/2):

Observability of the fanout branch x21 (paths L1, L2, L3 in the BDD):

p(∂y/∂x21 = 1) = p(L1) · p(L2) · p(L3) = p1 · p23 · (1 − p3) = 0.125

Testability of the fault x21 ≡ 0:

p(x21 ≡ 0) = p21 · p(∂y/∂x21 = 1) = p2 · p1 · (1 − p3) = 0.125

Why is p(x21 ≡ 0) = p21 · p(∂y/∂x21 = 1) here?
Calculation of Signal Probabilities
Cutting method:

• The complexity of exact calculation is reduced by using lower and upper bounds on the probabilities
• All reconvergent fanout branches except one are cut
• The probability range [0, 1] is assigned to all cut lines
• The bounds are propagated by straightforward calculation

Example (the macro from the testability slides; for all inputs pk = 1/2):

pk     [pLB, pHB]      exact pk
p7     3/4             3/4
p71    [0, 1]          3/4
p72    [0, 1]          3/4
p73    3/4             3/4
pa     [1/2, 1]        5/8
pb     [1/2, 1]        5/8
pc     5/8             5/8
pd     [1/2, 3/4]      11/16
pe     [1/4, 3/4]      19/32
py     [34/64, 54/64]  41/64
Calculation of Signal Probabilities
Method of conditional probabilities (same macro; for all inputs pk = 1/2):

py = sum over i ∈ {0, 1} of p(y / x7 = i) · p(x7 = i)

NB! The conditional probability pairs

Pk = [Pk* = p(xk / x7 = 0), Pk** = p(xk / x7 = 1)]

are propagated, not bounds as in the cutting method:

Pk   [Pk*, Pk**]
Pa   [1, 1/2]
Pb   [1, 1/2]
Pc   [1, 1/2]
Pd   [1/2, 3/4]
Pe   [1/2, 5/8]
Py   [1/2, 11/16]

py = p(y/x7=0)·(1 − p7) + p(y/x7=1)·p7 = (1/2 × 1/4) + (11/16 × 3/4) = 41/64
Calculation of Signal Probabilities
Combining BDDs and conditional probabilities:

Using BDDs gives correct results only inside the blocks, not for the whole system (signals between blocks such as x, y, z, w may be correlated).

New method:
• Block level: use BDDs and straightforward calculation
• System level: use conditional probabilities
Overview
1. Introduction
2. Theory: Boolean differential algebra
3. Theory: Decision diagrams
4. Fault modelling
5. Test generation
6. Fault simulation
7. Fault diagnosis
8. Testability measuring
9. Design for testability
10. Built-In Self-Test
Overview: Design for Testability
• Ad Hoc Design for Testability Techniques
  – Method of test points
  – Multiplexing and demultiplexing of test points
  – Time sharing of I/O for normal working and testing modes
  – Partitioning of registers and large combinational circuits
• Scan-Path Design
  – Scan-path design concept
  – Controllability and observability by means of scan-path
  – Full and partial serial scan-paths
  – Non-serial scan design
  – Classical scan designs
• Boundary Scan Standard
• Synthesis of Testable Circuits
Ad Hoc Design for Testability Techniques
Method of Test Points:

Problem: Block 1 is not observable, Block 2 is not controllable.

Improving controllability and observability:

• 1-controllability: an OR gate with control point CP is inserted between the blocks, and an observation point OP is tapped from the line (CP = 0 — normal working mode; CP = 1 — controlling Block 2 with signal 1)
• 0-controllability: an AND gate with control point CP is inserted (CP = 1 — normal working mode; CP = 0 — controlling Block 2 with signal 0)
Ad Hoc Design for Testability Techniques
Method of Test Points:

Problem: Block 1 is not observable, Block 2 is not controllable.

Improving controllability with two control points:

• OR + AND solution (OR gate with CP1 followed by an AND gate with CP2):
  Normal working mode: CP1 = 0, CP2 = 1
  Controlling Block 2 with 1: CP1 = 1, CP2 = 1
  Controlling Block 2 with 0: CP2 = 0

• Multiplexer solution (MUX selects between Block 1's output and CP1, addressed by CP2):
  Normal working mode: CP2 = 0
  Controlling Block 2 with 1: CP1 = 1, CP2 = 1
  Controlling Block 2 with 0: CP1 = 0, CP2 = 1
Ad Hoc Design for Testability Techniques
Multiplexing monitor points:
OUT
01
2n-1
x0
xn
x1
MUX
To reduce the number of output pins for observing monitor points, multiplexer can be used:
2n observation points are replaced by a single output and n inputs to address a selected observation point
Disadvantage:
only one observation point can be observed at a time
Ad Hoc Design for Testability Techniques
Multiplexing monitor points:
OUT
01
2n-1
c
MUX
To reduce the number of output pins for observing monitor points, multiplexer can be used:
To reduce the number of inputs, a counter (or a shift register) can be used to drive the address lines of the multiplexer
Disadvantage:
Only one observation point can be observed at a time
Counter
Ad Hoc Design for Testability Techniques
Demultiplexer for implementing control points:
0
1
N
DMUX
To reduce the number of input pins for controlling testpoints, demultiplexer and a latch register can be used.
N clock times are required between test vectors to set up the proper control values
x
CP1
CP2
CPN
N = 2n
x1x2
xN
Ad Hoc Design for Testability Techniques
Demultiplexer for implementing control points:
0
1
N
c
DMUX
To reduce the number of input pins for controlling testpoints, demultiplexer and a latch register can be used.
To reduce the number of inputs for addressing, a counter (or a shift register) can be used to drive the address lines of the demultiplexer
Counter
x
CP1
CP2
CPN
N = 2n
Time-sharing of outputs for monitoring
To reduce the number of output pins for observing monitor points, time-sharing of the working outputs can be introduced: a MUX feeds either the original circuit outputs or the monitor points to the same pins, so no additional outputs are needed.

To reduce the number of address inputs, again a counter or shift register can be used.

Disadvantage: only one observation point can be observed at a time.
Time-sharing of inputs for controlling
To reduce the number of input pins for controlling test points, time-sharing of the normal input lines can be introduced: a DMUX routes the normal inputs either to the circuit or to the control points CP1 … CPN.

To reduce the number of inputs for driving the address lines of the demultiplexer, a counter or shift register can be used.
Ad Hoc Design for Testability Techniques
Examples of good candidates for control points:
– control, address, and data bus lines on bus-structured designs
– enable/hold inputs of microprocessors
– enable and read/write inputs to memory devices
– clock and preset/clear inputs to memory devices (flip-flops, counters, ...)
– data select inputs to multiplexers and demultiplexers
– control lines on tristate devices

Examples of good candidates for observation points:
– stem lines associated with signals having high fanout
– global feedback paths
– redundant signal lines
– outputs of logic devices having many inputs (multiplexers, parity generators)
– outputs from state devices (flip-flops, counters, shift registers)
– address, control and data busses
Ad Hoc Design for Testability Techniques
Redundancy should be avoided:
• If a redundant fault is present, it may invalidate some tests for nonredundant faults
• Redundant faults cause difficulty in calculating fault coverage
• Much test generation time can be spent in trying to generate a test for a redundant fault
Redundancy intentionally added:
• To eliminate hazards in combinational circuits
• To achieve high reliability (using error detecting circuits)
Logical redundancy — hazard control circuitry:

[Figure: a two-level AND-OR circuit in which a redundant AND gate is added to eliminate a hazard; the stuck-at-0 fault at the output of the redundant gate is not testable.]
Ad Hoc Design for Testability Techniques
Fault redundancy — error control circuitry:

The error signal E of the checker stays at its fault-free value as long as the decoder is fault-free, so the stuck-at fault that fixes E at that value is not testable.

Testable error control circuitry: an additional input T is added (T = 0 — normal working mode, T = 1 — testing mode), which allows the error signal to be exercised.
Ad Hoc Design for Testability Techniques
Partitioning of registers (counters):

[Figure: a register chain REG 1 → REG 2 is split by inserting AND gates at the data and clock inputs, with control points "Tester Data" / "Data Inhibit" and "Tester Clock" / "Clock Inhibit", plus an observation point OP between the halves.]

A 16-bit counter divided into two 8-bit counters:
• instead of 2^16 = 65,536 clocks, 2 × 2^8 = 512 clocks are needed
• if the halves are tested in parallel, only 256 clocks are needed
Ad Hoc Design for Testability Techniques
Partitioning of large combinational circuits:

The time complexity of test generation and fault simulation grows faster than a linear function of circuit size; partitioning a large circuit into blocks C1 and C2 reduces these costs.

Multiplexers and demultiplexers (MUX1–MUX4, DMUX1, DMUX2) around C1 and C2 implement I/O sharing of the normal and testing modes. Three modes can be chosen:
– normal mode
– testing C1
– testing C2
Scan-Path Design
[Figure: a combinational circuit with inputs IN and outputs OUT whose state register R is replaced by a scan register with Scan-IN and Scan-OUT; each scan flip-flop has a multiplexer controlled by T selecting between the data input D and the previous stage.]

The complexity of testing is a function of the number of feedback loops and their length: the longer a feedback loop, the more clock cycles are needed to initialize and sensitize patterns.

A scan register is a register with both shift and parallel-load capability:
• T = 0 — normal working mode: flip-flops are connected to the combinational circuit
• T = 1 — scan mode: flip-flops are disconnected from the combinational circuit and connected to each other to form a shift register
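The two modes above can be sketched as a small behavioural model (class and signal names are illustrative, not from any standard):

```python
class ScanRegister:
    """Flip-flops with parallel load (T = 0) and serial shift (T = 1)."""

    def __init__(self, width):
        self.q = [0] * width

    def clock(self, T, scan_in=0, parallel=None):
        if T == 0 and parallel is not None:
            self.q = list(parallel)           # normal mode: capture data
            return None
        if T == 1:
            scan_out = self.q[-1]             # scan mode: shift one position
            self.q = [scan_in] + self.q[:-1]
            return scan_out
        return None

r = ScanRegister(4)
for bit in (1, 0, 1, 1):                      # shift a test pattern in
    r.clock(T=1, scan_in=bit)
print(r.q)                                    # [1, 1, 0, 1]
```

Shifting a response out works the same way: clock in scan mode width times and collect the returned bits.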
Scan-Path Design and Testability
[Figure: IN and OUT are connected through a DMUX and a MUX to the scan path, with SCAN-IN and SCAN-OUT giving direct access.]

Two possibilities for improving controllability/observability: test data can be brought to and from internal nodes either through the normal I/O (via MUX/DMUX) or through the scan path.
Boundary Scan Standard
Boundary Scan Architecture
[Figure: several chips on a board, each with internal logic, a ring of boundary scan cells (BSC) and a TAP controller. TDI enters the first chip, the TDO of each chip feeds the TDI of the next, and TMS/TCK are broadcast to all TAPs; Data_in/Data_out pass through the boundary cells.]
Boundary Scan Architecture
[Figure: the register structure of a boundary scan device — boundary scan registers around the internal logic, plus the Device ID register, the Bypass register, the Instruction Register (IR) and the other data registers, all connected between TDI and TDO.]
Boundary Scan Cell
[Figure: a boundary scan cell, used at the input or output pins. Two multiplexers (controlled by Shift DR and Test/Normal) and two flip-flops (clocked by Clock DR and Update DR) connect the system pin, the system logic, and the serial path from the previous cell to the next cell.]
Boundary Scan Working Modes
SAMPLE mode:
Get snapshot of normal chip output signals
Boundary Scan Working Modes
PRELOAD mode:
Put data on boundary scan chain before next instruction
Boundary Scan Working Modes
EXTEST instruction:
Test off-chip circuits and board-level interconnections
Boundary Scan Working Modes
INTEST instruction
Feeds external test patterns in and shifts responses out
Boundary Scan Working Modes
Bypass instruction:
Bypasses the corresponding chip using a 1-bit register.

[Figure: a single flip-flop between TDI and TDO, gated by Shift DR and clocked by Clock DR.]
Boundary Scan Working Modes
IDCODE instruction:
Connects the component's device identification register serially between TDI and TDO in the Shift-DR TAP controller state.

Allows a board-level test controller or an external tester to read out the component ID.

Required whenever a JEDEC identification register is included in the design.

Register layout (TDI → TDO): Version (4 bits, any format) | Part Number (16 bits, any format) | Manufacturer ID (11 bits, coded form of JEDEC) | 1
Fault Diagnosis with Boundary Scan
[Figure: board-level nets with a short between two of them and an open on another; a single-bit test value (1, 0, 0, 0, 0, 1) is driven on the nets. The open is assumed to behave as stuck-at-0 and the short as wired AND.]
Fault Diagnosis with Boundary Scan
[Figure: the same nets driven with 2-bit codes (10, 00, 00, 01, 11); under the stuck-at-0 and wired-AND assumptions, the shorted nets both produce 00.]

Kautz showed in 1974 that a sufficient condition to detect any pair of short-circuited nets is that the "horizontal" codes must be unique for all nets. Therefore the test length is ⌈log2(N)⌉.
Fault Diagnosis with Boundary Scan
[Figure: the nets driven with 3-bit codes (101, 000, 001, 011, 110); the wired-AND short maps the shorted nets onto the same code 001.]

All-0s and all-1s are forbidden codes because of stuck-at faults. Therefore the final test length is ⌈log2(N + 2)⌉.
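The counting-sequence construction can be sketched as follows (the helper name is made up): each of the N nets gets a unique ⌈log2(N + 2)⌉-bit code, with the all-0 and all-1 words skipped:

```python
from math import ceil, log2

def interconnect_test_codes(n_nets):
    # Width ceil(log2(N+2)) leaves room for N distinct codes with the
    # all-0 and all-1 words excluded (they clash with stuck-at faults).
    width = ceil(log2(n_nets + 2))
    return [format(k, '0%db' % width) for k in range(1, n_nets + 1)]

codes = interconnect_test_codes(5)
print(codes)            # ['001', '010', '011', '100', '101']
# Test vector i drives every net with bit i of its code.
```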
Fault Diagnosis with Boundary Scan
[Figure: the same nets with one extra bit prepended to each code (0 101, 0 000, 0 001, 0 011, 1 110).]

To improve the diagnostic resolution we have to add one more bit.
Synthesis of Testable Circuits
Function: y = NOT(x1)·NOT(x3) v x1·x2

[Figure: a two-level NOT–AND–OR implementation of y.]

Test generation for this implementation shows that 4 test patterns are needed.
Synthesis of Testable Circuits
Two implementations of the same function y = NOT(x1)·NOT(x3) v x1·x2:

[Figure: the NOT–AND–OR implementation next to a restructured AND-gate implementation.]

• Here: 4 test patterns are needed.
• Here: only 3 test patterns are needed.
Synthesis of Testable Circuits
Reed–Muller (AND–XOR) expansion:

y = c0 ⊕ c1·x3 ⊕ c2·x2 ⊕ c3·x2x3 ⊕ c4·x1 ⊕ c5·x1x3 ⊕ c6·x1x2 ⊕ c7·x1x2x3

Calculation of the constants for y = NOT(x1)·NOT(x3) v x1·x2:

fi   x1 x2 x3   y   Ci
f0   0  0  0    1   C0 = f0 = 1
f1   0  0  1    0   C1 = f0 ⊕ f1 = 1
f2   0  1  0    1   C2 = f0 ⊕ f2 = 0
f3   0  1  1    0   C3 = f0 ⊕ f1 ⊕ f2 ⊕ f3 = 0
f4   1  0  0    0   C4 = f0 ⊕ f4 = 1
f5   1  0  1    0   C5 = f0 ⊕ f1 ⊕ f4 ⊕ f5 = 1
f6   1  1  0    1   C6 = f0 ⊕ f2 ⊕ f4 ⊕ f6 = 1
f7   1  1  1    1   C7 = f0 ⊕ f1 ⊕ f2 ⊕ f3 ⊕ f4 ⊕ f5 ⊕ f6 ⊕ f7 = 0

Result: y = 1 ⊕ x3 ⊕ x1 ⊕ x1x3 ⊕ x1x2
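The constants Ci are exactly the coefficients produced by the GF(2) Möbius (butterfly) transform of the truth table; a sketch:

```python
def reed_muller_coeffs(truth):
    # Positive-polarity Reed-Muller coefficients from a truth table
    # listed in (x1, x2, x3) order, x3 being the fastest-changing bit.
    c = list(truth)
    n = len(truth).bit_length() - 1
    for i in range(n):
        bit = 1 << i
        for j in range(len(truth)):
            if j & bit:
                c[j] ^= c[j ^ bit]       # XOR butterfly step over GF(2)
    return c

y = [1, 0, 1, 0, 0, 0, 1, 1]             # y = NOT(x1)NOT(x3) v x1x2
print(reed_muller_coeffs(y))             # [1, 1, 0, 0, 1, 1, 1, 0]
# index 1 = x3, 4 = x1, 5 = x1x3, 6 = x1x2 -> y = 1 ^ x3 ^ x1 ^ x1x3 ^ x1x2
```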
Synthesis of Testable Circuits
Test generation method for the AND–XOR implementation of

y = 1 ⊕ x3 ⊕ x1 ⊕ x1x3 ⊕ x1x2

[Figure: the AND–XOR network with the test patterns applied.]

Test patterns (x1 x2 x3):
0 0 0
1 1 1
0 1 1
1 0 1
1 1 0
Testability as a Trade-off
Amusing testability:

Theorem: You can test an arbitrary digital system with only 3 test patterns if you design it appropriately.

Proof sketch: System → FSM → Scan-Path → combinational circuit → NAND gates. If the whole system is reduced to 2-input NAND gates, the input sequences 011 and 101 apply all three patterns a NAND gate needs (01, 10, 11), and the resulting output sequence 110 together with 011 or 101 again provides all three patterns for the next gates.
Overview
1. Introduction
2. Theory: Boolean differential algebra
3. Theory: Decision diagrams
4. Fault modelling
5. Test generation
6. Fault simulation
7. Fault diagnosis
8. Testability measuring
9. Design for testability
10.Built in Self-Test
Overview: Built-In Self-Test
• Motivation for BIST
• Test generation in BIST
  – Pseudorandom test generation with LFSR
  – Weighted pseudorandom test
• Response compaction
  – Signature analyzers
• BIST implementation
  – BIST architectures
  – Hybrid BIST
  – Test broadcasting in BIST
  – Embedding BIST
  – Testing of NoC
  – IEEE P1500 Standard
Built-In Self-Test
• Motivations for BIST:
  – need for cost-efficient testing
  – doubts about the stuck-at fault model
  – increasing difficulties with TPG (Test Pattern Generation)
  – growing volume of test pattern data
  – cost of ATE (Automatic Test Equipment)
  – test application time
  – gap between tester and UUT (Unit Under Test) speeds

• Drawbacks of BIST:
  – additional pins and silicon area needed
  – decreased reliability due to increased silicon area
  – performance impact due to additional circuitry
  – additional design time and cost
BIST Techniques
• BIST techniques are classified as:
  – on-line BIST — includes concurrent and nonconcurrent techniques
  – off-line BIST — includes functional and structural approaches

• On-line BIST — testing occurs during normal functional operation
  – Concurrent on-line BIST — testing occurs simultaneously with the normal operation mode; usually coding techniques or duplication and comparison are used
  – Nonconcurrent on-line BIST — testing is carried out while a system is in an idle state, often by executing diagnostic software or firmware routines

• Off-line BIST — the system is not in its normal working mode; usually on-chip test generators and output response analyzers or microdiagnostic routines are used
  – Functional off-line BIST is based on a functional description of the Component Under Test (CUT) and uses functional high-level fault models
  – Structural off-line BIST is based on the structure of the CUT and uses structural fault models (e.g. SAF)
Built-in Self-Test in SoC
• Advances in microelectronics technology have introduced a new paradigm in IC design: System-on-Chip (SoC)
• SoCs are designed by embedding predesigned and preverified complex functional blocks (cores) into one single die
• Such a design style allows designers to reuse previous designs and leads to shorter time-to-market and reduced cost
• Testing of SoC, on the other hand, is a problematic and time-consuming task, mainly due to the resulting complexity and high integration density
• On-chip test solutions (BIST) are becoming a mainstream technology for testing such SoC-based systems

[Figure: a System-on-Chip containing embedded DRAM, interface control, a complex core, UDL, a legacy core, a DSP core, self-test control and an 1149.1 interface.]
Built-In Self-Test in SoC
[Figure: an SoC with CPU, SRAMs, ROM, DRAM, MPEG, UDL and Peripheral Component Interconnect; a Core Under Test is surrounded by a wrapper and connected to a test pattern source and sink through Test Access Mechanisms.]

System-on-Chip testing — test architecture components:
• test pattern source & sink
• Test Access Mechanism
• core test wrapper

Solutions:
• off-chip solution — need for external ATE
• combined solution — mostly on-chip, ATE needed for control
• on-chip solution — BIST
Built-In Self-Test in SoC
[Figure: an embedded tester for testing multiple cores (C880, C1355, C1908, C2670, C3540 on one SoC): each core has its own BIST block; a test controller and tester memory are connected through the test access mechanism.]
Built-In Self-Test Components
[Figure: a BIST Control Unit driving Test Pattern Generation (TPG), the Circuitry Under Test (CUT) and Test Response Analysis (TRA).]

• BIST components:
  – test pattern generator (TPG)
  – test response analyzer (TRA)
• TPG & TRA are usually implemented as linear feedback shift registers (LFSR)
• Two widespread schemes:
  – test-per-scan
  – test-per-clock
BIST: Test per Scan
[Figure: the CUT's scan paths lie between a test pattern generator and a test response analyser, under BIST control.]

• Assumes an existing scan architecture
• Drawback: long test application time

Example — initial test set:
T1: 1100, T2: 1010, T3: 0101, T4: 1001

Test application: each 4-bit pattern is shifted in, followed by one capture clock T.
Number of clocks = 4 × 4 + 4 = 20
BIST: Test per Clock
Test per clock:

[Figure: a combinational circuit under test surrounded by a scan-path register.]

Initial test set: T1: 1100, T2: 1010, T3: 0101, T4: 1001

Test application: the patterns are overlapped in one serial stream (T1, T4, T3, T2), so a new test pattern is applied on every clock.

Number of clocks = 10 (instead of 16)
Test Generation in BIST
Pseudorandom test generation by LFSR:

[Figure: an LFSR with stages X0, X1, …, Xn and feedback coefficients h0, h1, …, hn feeding the CUT; a second LFSR compacts the responses.]

• Uses special LFSR registers
• Several proposals: BILBO, CSTP
• Main characteristics of an LFSR:
  – polynomial
  – initial state
  – test length
Linear Feedback Shift Register (LFSR)
Pseudorandom test generation by LFSR:

Polynomial: P(x) = 1 + x^3 + x^4

[Figure: the standard (external-XOR) LFSR and the modular (internal-XOR) LFSR implementing P(x) = 1 + x^3 + x^4; the stage outputs correspond to 1, x, x^2, x^3, x^4.]
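A behavioural sketch of a 4-stage standard (Fibonacci-type) LFSR; the tap placement below is one common convention for P(x) = 1 + x^3 + x^4, and the nonzero seed is arbitrary:

```python
def lfsr_states(seed, steps):
    # 4-bit standard LFSR: feedback = bit3 XOR bit2, shifted into bit0.
    state, out = seed, []
    for _ in range(steps):
        out.append(state)
        fb = ((state >> 3) ^ (state >> 2)) & 1
        state = ((state << 1) | fb) & 0xF
    return out

states = lfsr_states(0b0001, 15)
print(len(set(states)))   # 15 distinct nonzero states: maximal-length sequence
```

Because the polynomial is primitive, the register cycles through all 2^4 − 1 = 15 nonzero states before repeating.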
Built-In Self-Test with LFSR
[Figure: fault coverage as a function of time — steep at the beginning, flattening out.]

The main motivations for using random patterns are:
– low generation cost
– high initial efficiency

Reasons for the high initial efficiency:
• A circuit with n inputs may implement any of 2^(2^n) functions
• A test vector partitions these functions into 2 equal-sized equivalence classes (the correct circuit is in one of them)
• The second vector partitions them into 4 classes, etc.
• After m patterns the fraction of functions distinguished from the correct function is

  sum over i = 1…m of 2^(−i) = 1 − 2^(−m),  m ≤ 2^n
Built-In Self-Test with LFSR
Pseudorandom test generation by LFSR:

Full identification is achieved only after all 2^n input combinations have been tried out (exhaustive test). A better fault model (stuck-at-0/1) may limit the number of partitions necessary, leaving only the faults with low probability in an equivalence class with the fault-free circuit.

Pseudorandom testing of sequential circuits — the following rules are suggested:
• clock signals should not be random
• control signals, such as reset, should be activated with low probability
• data signals may be chosen randomly

Microprocessor testing:
• A test generator picks an instruction at random and generates random data patterns
• Repeating this sequence a specified number of times produces a test program which tests the microprocessor by randomly exercising its logic
Pseudorandom Test Length
[Figure: fault coverage vs. time for pseudorandom test — the curve saturates well below 100%.]

Problems:
• very long test application time
• low fault coverage
• area overhead
• additional delay

Possible solutions:
• weighted pseudorandom test
• combining pseudorandom test with deterministic test
  – multiple seeds
  – bit flipping
• hybrid BIST
BIST: Weighted pseudorandom test
Hardware implementation of a weight generator:

[Figure: an LFSR whose stages are ANDed together to produce signals with probabilities 1/2, 1/4, 1/8, 1/16; a MUX controlled by "weight select" picks the desired weighted value for Scan-IN.]
BIST: Weighted pseudorandom test
Problem: random-pattern-resistant faults

Solution: weighted pseudorandom testing — the probabilities of the pseudorandom signals are weighted; the weights are determined by circuit analysis.

NCV — noncontrolling value. The more faults that must be tested through a gate input, the more the other inputs should be weighted toward NCV.

NDI — the number of circuit inputs feeding a gate, counted as the number of PIs or SRLs in its backtrace cone; it is a relative measure of the number of faults to be detected through the gate.

PI — primary input; SRL — scan register latch.
BIST: Weighted pseudorandom test
NCV — noncontrolling value. The more faults that must be tested through a gate input, the more the other inputs should be weighted toward NCV.

R_I = NDI_G / NDI_I

R_I is the desired ratio of the noncontrolling value to the controlling value for each input I of gate G.
BIST: Weighted pseudorandom test
Example (a gate G with three inputs; input 1 is driven from 1 PI, input 2 from 2 PIs, input 3 from 3 PIs, so NDI_G = 6):

R_1 = NDI_G / NDI_1 = 6/1 = 6
R_2 = NDI_G / NDI_2 = 6/2 = 3
R_3 = NDI_G / NDI_3 = 6/3 = 2

More faults must be detected through the third input than through the others; this results in the other inputs being weighted more heavily toward NCV.
BIST: Weighted pseudorandom test
Calculation of signal weights (starting from W0_G = 1, W1_G = 1 at the gate output):

R_1 = 6: W0_1 = 1, W1_1 = 6
R_2 = 3: W0_2 = 1, W1_2 = 3
R_3 = 2: W0_3 = 1, W1_3 = 2

W0, W1 — weights of the signals
WV — the value toward which the input is biased: WV = 0 if W0 ≥ W1, else WV = 1

Calculation of W0, W1 for an input I of gate G:

Function   W0_I         W1_I
AND        W0_G         R_I · W1_G
NAND       W1_G         R_I · W0_G
OR         R_I · W0_G   W1_G
NOR        R_I · W1_G   W0_G
BIST: Weighted pseudorandom test
Backtracing from the outputs to all the inputs of the given cone; weights are calculated for all gates and PIs.

At the inputs of gate G: W0_1 = 1, W1_1 = 6; W0_2 = 1, W1_2 = 3; W0_3 = 1, W1_3 = 2.

Propagating through the input gates to the primary inputs PI1 … PI6:

For PI1: R_G = 1, W0 = 6, W1 = 1
For PI2 and PI3: R_G = 2, W0 = 2, W1 = 3
For PI4 – PI6: R_G = 3, W0 = 3, W1 = 2
BIST: Weighted pseudorandom test
Calculation of signal probabilities:

WF — weighting factor indicating the amount of biasing toward the weighted value:
WF = max{W0, W1} / min{W0, W1}

Probability of the weighted value: P = WF / (WF + 1)

For PI1: W0 = 6, W1 = 1, WV = 0, WF = 6, P1 = 1 − 6/7 ≈ 0.15
For PI2 and PI3: W0 = 2, W1 = 3, WV = 1, WF = 1.5, P1 = 0.6
For PI4 – PI6: W0 = 3, W1 = 2, WV = 0, WF = 1.5, P1 = 1 − 0.6 = 0.4

(P1 denotes the probability of value 1 at the input.)
BIST: Weighted pseudorandom test
Calculated signal probabilities:
For PI1: P1 = 0.15
For PI2 and PI3: P1 = 0.6
For PI4 – PI6: P1 = 0.4

Probability of detecting the fault ≡1 at input 3 of gate G:

1) equal probabilities (p = 0.5):
P = 0.5 · (0.25 + 0.25 + 0.25) · 0.5^3 = 0.5 · 0.75 · 0.125 ≈ 0.046

2) weighted probabilities:
P = 0.85 · (0.6·0.4 + 0.4·0.6 + 0.6^2) · 0.6^3 = 0.85 · 0.84 · 0.22 ≈ 0.16
BIST: Response Compression
1. Parity checking: the compacted value is the parity of the response stream,
   P(R) = r1 ⊕ r2 ⊕ … ⊕ rm

2. One counting: P(R) = sum over i = 1…m of ri

3. Zero counting: P(R) = sum over i = 1…m of (1 − ri)

[Figure: the response bits ri from the UUT are fed into a parity flip-flop or into a counter.]
BIST: Response Compression
4. Transition counting:

P(R) = sum over i = 2…m of (ri ⊕ ri−1)

a) transitions 0 → 1: sum of (NOT ri−1) · ri
b) transitions 1 → 0: sum of ri−1 · (NOT ri)

[Figure: the response bit ri is compared with the delayed bit ri−1 and the transitions are counted.]

5. Signature analysis
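The four counting-based schemes fit in a few lines each; the response string below is a made-up example:

```python
from functools import reduce

r = [1, 0, 1, 1, 0, 0, 1, 0]                       # UUT response string

parity      = reduce(lambda a, b: a ^ b, r)        # 1. parity checking
ones        = sum(r)                               # 2. one counting
zeros       = len(r) - sum(r)                      # 3. zero counting
transitions = sum(r[i] ^ r[i - 1]                  # 4. transition counting
                  for i in range(1, len(r)))
print(parity, ones, zeros, transitions)            # 0 4 4 5
```

Each scheme loses information in its own way: two different responses with the same one-count, for instance, compact to the same value (aliasing).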
BIST: Signature Analyser
Polynomial: P(x) = 1 + x^3 + x^4

[Figure: the standard and modular LFSRs for P(x) = 1 + x^3 + x^4 used as signature analysers; the UUT's response string is shifted in.]

The response is compacted by the LFSR; the content of the LFSR after the test is called the signature.
BIST: Signature Analysis
The principles of CRC (Cyclic Redundancy Coding) are used in LFSR-based test response compaction.

Coding theory treats binary strings as polynomials:

R = r(m−1) r(m−2) … r1 r0 — an m-bit binary sequence
R(x) = r(m−1)·x^(m−1) + r(m−2)·x^(m−2) + … + r1·x + r0 — a polynomial in x

Example: 11001 corresponds to R(x) = x^4 + x^3 + 1

Only the coefficients are of interest, not the actual value of x. However, for x = 2, R(x) is the decimal value of the bit string.
BIST: Signature Analysis
Arithmetic of coefficients:
– linear algebra over the field of 0 and 1: all integers are mapped into either 0 or 1
– mapping: any integer n is represented by the remainder of dividing n by 2:
  n = 2m + r, r ∈ {0, 1}, i.e. n ≡ r (modulo 2)

"Linear" refers to the arithmetic unit (the modulo-2 adder) used in the CRC generator: it is linear since each bit has equal weight upon the output.

Examples:

Addition: (x^4 + x^3 + x + 1) + (x^4 + x^2 + x) = x^3 + x^2 + 1

Multiplication: (x^4 + x^3 + x + 1) · (x + 1) = (x^5 + x^4 + x^2 + x) + (x^4 + x^3 + x + 1) = x^5 + x^3 + x^2 + 1
BIST: Signature Analysis
Division of one polynomial P(x) by another G(x) produces a quotient polynomial Q(x) and, if the division is not exact, a remainder polynomial R(x):

P(x)/G(x) = Q(x) + R(x)/G(x)

Example:

(x^7 + x^3 + x) / (x^5 + x^3 + x + 1) = (x^2 + 1) + (x^3 + x^2 + 1)/(x^5 + x^3 + x + 1)

The remainder R(x) is used as a check word in data transmission: the transmitted code consists of the unaltered message P(x) followed by the check word R(x). Upon receipt, the reverse process occurs: the message P(x) is divided by the known G(x), and a mismatch between R(x) and the remainder from the division indicates an error.
BIST: Signature Analysis
In signature testing we mean the use of CRC encoding as the data compressor G(x) and the use of the remainder R(x) as the signature of the test response string P(x) from the UUT
The signature is the CRC code word:

P(x)/G(x) = Q(x) + R(x)/G(x)

Example: P(x)/G(x) = (x^7 + x^3 + x)/(x^5 + x^3 + x + 1)

                1 0 1                 = Q(x) = x^2 + 1
  1 0 1 0 1 1 | 1 0 0 0 1 0 1 0        P(x), G(x)
                1 0 1 1 1 ...          wait
BIST: Signature Analysis
The division process can be mechanized using an LFSR
The divisor polynomial G(x) is defined by the feedback connections
Each shift creates an x^5 term, which is replaced using x^5 = x^3 + x + 1

[Figure: 5-stage LFSR with stages x^0 … x^4 and input IN; the sequence 01010001 is shifted into the LFSR]
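The LFSR division can be simulated bit-serially. This sketch (not from the slides) shifts the response bits into a dividing register for G(x) = x^5 + x^3 + x + 1; whenever an x^5 term appears it is replaced using x^5 = x^3 + x + 1, exactly as stated above.

```python
def lfsr_signature(bits, g=0b101011, degree=5):
    """Shift the response bits (highest order first) into a dividing LFSR;
    the final register state is the remainder, i.e. the signature."""
    state = 0
    for b in bits:
        state = (state << 1) | b
        if state >> degree:      # an x^5 term appeared ...
            state ^= g           # ... replace it: x^5 = x^3 + x + 1 (mod 2)
    return state

# Response string P(x) = x^7 + x^3 + x  ->  signature R(x) = x^3 + x^2 + 1
assert lfsr_signature([1, 0, 0, 0, 1, 0, 1, 0]) == 0b1101
```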
BIST: Signature Analysis
Aliasing:

[Figure: UUT response of length L compacted by the signature analyzer (SA) into an N-bit signature]

L - test length
N - number of stages in the Signature Analyzer

All k = 2^L possible responses are mapped onto k = 2^N possible signatures, N << L; hence a faulty response may produce the same signature as the correct response
BIST: Signature Analysis
2^L - number of different possible responses

No aliasing is possible for those strings with L - N leading zeros, since they are represented by polynomials of degree at most N - 1, which are not divisible by the characteristic polynomial of the LFSR. There are 2^(L-N) such strings.

Probability of aliasing:

P = (2^(L-N) - 1)/(2^L - 1) ≈ 2^(-N) for large L
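The aliasing probability can be verified by brute force for a small case. This sketch (not from the slides) uses G(x) = x^4 + x + 1 (N = 4) and response length L = 8; an error string aliases exactly when it is divisible by G(x).

```python
def remainder(p: int, g: int) -> int:
    """Remainder of modulo-2 polynomial division, bit i = coefficient of x^i."""
    deg_g = g.bit_length() - 1
    while p.bit_length() - 1 >= deg_g:
        p ^= g << ((p.bit_length() - 1) - deg_g)
    return p

L, N, g = 8, 4, 0b10011                      # G(x) = x^4 + x + 1
# Count nonzero L-bit error strings that divide evenly, i.e. that alias:
aliasing = sum(1 for e in range(1, 2 ** L) if remainder(e, g) == 0)
assert aliasing == 2 ** (L - N) - 1          # 15 aliasing error strings
prob = aliasing / (2 ** L - 1)               # = (2^(L-N) - 1) / (2^L - 1)
assert abs(prob - 2 ** -N) < 2 ** -N / 10    # close to the 2^-N approximation
```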
BIST: Signature Analysis
Parallel Signature Analyzer:

[Figure: Single Input Signature Analyser, an LFSR with feedback taps x^4, x^3, x^2, x, 1 compacting one UUT output]
[Figure: Multiple Input Signature Analyser (MISR), the same LFSR with the UUT outputs XORed into its stages in parallel]
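A MISR can be sketched in a few lines (not from the slides). Each clock, the next n-bit response word is XORed into the shifting LFSR state, so all UUT outputs are compacted in parallel; the tap mask for x^4 + x^3 + 1 and the sample response words are illustrative assumptions.

```python
def misr_signature(vectors, taps=0b1100, n=4):
    """MISR sketch: taps is an assumed mask for the feedback polynomial
    x^4 + x^3 + 1; vectors is a sequence of n-bit response words."""
    state = 0
    for v in vectors:
        fb = bin(state & taps).count("1") & 1   # XOR of the tapped stages
        state = ((state << 1) | fb) & ((1 << n) - 1)
        state ^= v                              # parallel response inputs
    return state

sig = misr_signature([0b1010, 0b0111, 0b1100, 0b0001])
# A single flipped response bit changes the signature:
assert sig != misr_signature([0b1010, 0b0110, 0b1100, 0b0001])
```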
Built-In Self-Test
Signature calculation for multiple outputs:

[Figure: LFSR test pattern generator drives the combinational circuit; a multiplexer selects one circuit output at a time for the single-input LFSR signature analyzer]
LFSR: Signature Analyser
[Figure: 4-stage LFSR (flip-flops FF) with taps 1, x, x^2, x^3, x^4; as a test pattern generator it delivers stimuli to the UUT, and as a signature analyzer it compacts the UUT response string into a signature]
Test-per-Clock BIST Architectures
BILBO - Built-In Logic Block Observer:

[Figure: LFSR test pattern generator, combinational circuit, LFSR signature analyzer in a chain]

CSTP - Circular Self-Test Path:

[Figure: a single register acting as both test pattern generator and signature analyser, closed in a loop around the combinational circuit]
BIST: BILBO
Working modes:

B1 B2   Mode
0  0    Reset
0  1    Normal mode
1  0    Scan mode
1  1    Test mode

Testing modes:
CC1: LFSR 1 - TPG, LFSR 2 - SA
CC2: LFSR 2 - TPG, LFSR 1 - SA

[Figure: LFSR 1 feeds CC1, which feeds LFSR 2, which feeds CC2, in a ring; both LFSRs are controlled by B1, B2]
BIST: Circular Self-Test
[Figure: circuit under test with its flip-flops FF connected into a circular self-test path]
Functional Self-Test
• Traditional BIST solutions use special hardware for pattern generation on chip, which may introduce area overhead and performance degradation
• New methods have been proposed which exploit specific functional units like arithmetic blocks or processor cores for on-chip test generation
• It has been shown that adders can be used as test generators for pseudorandom and deterministic patterns
• Today, there is no general method for using arbitrary functional units for built-in test generation
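The adder-based generation mentioned above can be sketched as an accumulator-based TPG (not from the slides): an n-bit adder repeatedly adds a constant to an accumulator, and with an odd increment the accumulator cycles through all 2^n patterns. The width and increment chosen here are illustrative assumptions.

```python
def accumulator_tpg(n=4, increment=7, seed=0):
    """Yield 2^n test patterns from an n-bit adder/accumulator."""
    mask = (1 << n) - 1
    state = seed
    for _ in range(1 << n):
        yield state
        state = (state + increment) & mask   # the functional adder reused as TPG

patterns = list(accumulator_tpg())
assert len(set(patterns)) == 16   # an odd increment exhausts all 4-bit patterns
```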
BIST Embedding Example
[Figure: SoC modules M1 through M6 with embedded BIST resources LFSR1, LFSR2, MISR1, MISR2, BILBO, CSTP, and a MUX]

Concurrent testing:
LFSR1 or CSTP (via MUX) feeds M2, compacted by MISR1
M2 feeds M5, compacted by MISR2 (Functional BIST)
CSTP feeds M3, compacted by CSTP
LFSR2 feeds M4, compacted by BILBO
Test-per-Scan BIST Architectures
STUMPS: Self-Test Using MISR and Parallel Shift Register Sequence Generator

[Figure: test pattern generator feeding scan chains R1 … Rn through combinational blocks CC1 … CCn into a MISR]

LOCST: LSSD On-Chip Self-Test

[Figure: TPG and SA connected through boundary-scan (BS) cells and a scan path through the CUT, from scan-in SI to scan-out SO, supervised by a test controller with an Error output]
Software BIST
To reduce the hardware overhead of BIST applications, the hardware LFSR can be replaced by software
Software BIST is especially attractive for testing SoCs because of the availability of computing resources directly in the system (a typical SoC usually contains at least one processor core)
[Figure: SoC with a CPU core and a ROM storing for each core j its LFSR configuration and pattern count:
LFSR1: 001010010101010011, N1: 275
LFSR2: 110101011010110101, N2: 900, ...]

Test program for core j: load (LFSRj); for (i = 0; i < Nj; i++) ... end;
Software based test generation:
The TPG software is the same for all cores and is stored as a single copy. The characteristics of the LFSR are specific to each core, stored in the ROM, and loaded upon request. For each additional core, only the BIST characteristics of that core have to be stored.
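The scheme above can be sketched in software (not from the slides): one generic LFSR routine, with per-core configuration loaded from a "ROM" table. The polynomials, seeds, and counts used here are illustrative assumptions, not the slide's actual data.

```python
# Assumed per-core "ROM" entries: tap mask, seed, number of patterns.
ROM = {
    "core1": {"taps": 0b10100, "seed": 0b00101, "count": 10},  # x^5 + x^3 + 1
    "core2": {"taps": 0b11000, "seed": 0b10110, "count": 20},  # x^5 + x^4 + 1
}

def lfsr_patterns(taps, seed, count, n=5):
    """Software Fibonacci LFSR: generate `count` n-bit pseudorandom patterns."""
    state, out = seed, []
    for _ in range(count):
        out.append(state)
        fb = bin(state & taps).count("1") & 1   # XOR of tapped stages
        state = ((state << 1) | fb) & ((1 << n) - 1)
    return out

# The same generic routine serves every core; only its data differs:
cfg = ROM["core1"]
pats = lfsr_patterns(cfg["taps"], cfg["seed"], cfg["count"])
assert len(pats) == 10 and all(0 <= p < 32 for p in pats)
```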
Problems with BIST
[Figure: fault coverage vs. test application time for pseudorandom test; coverage rises steeply at first, then saturates]

Problems:
• Very long test application time
• Low fault coverage
• Area overhead
• Additional delay

Possible solutions:
• Weighted pseudorandom test
• Combining pseudorandom test with deterministic test
– Multiple seeds
– Bit flipping
• Hybrid BIST

The main motivations for using random patterns are:
- low generation cost
- high initial efficiency
Store-and-Generate test architecture
• ROM contains test patterns for hard-to-test faults
• Each pattern Pk in ROM serves as an initial state (seed) of the LFSR for test pattern generation (TPG)
• Counter 1 counts the number of pseudorandom patterns generated starting from Pk
• After the cycle finishes, Counter 2 is incremented to read the next pattern Pk+1

[Figure: ROM (addressed by Counter 2, read signal RD) seeds the TPG (clocked by Counter 1, clock CL), which drives the UUT]
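The store-and-generate loop can be sketched as two nested counters (not from the slides): the outer loop plays the role of Counter 2 stepping through the ROM seeds, the inner loop the role of Counter 1 expanding each seed into LFSR patterns. The seed values, tap mask, and burst length are illustrative assumptions.

```python
def store_and_generate(rom_seeds, per_seed, taps=0b11000, n=5):
    """Expand each ROM seed P_k into `per_seed` pseudorandom LFSR patterns."""
    patterns = []
    for seed in rom_seeds:                 # Counter 2: ROM address
        state = seed                       # load P_k into the TPG (LFSR)
        for _ in range(per_seed):          # Counter 1: patterns per seed
            patterns.append(state)
            fb = bin(state & taps).count("1") & 1
            state = ((state << 1) | fb) & ((1 << n) - 1)
    return patterns

pats = store_and_generate([0b00001, 0b01101, 0b10111], per_seed=8)
assert len(pats) == 3 * 8
assert pats[0] == 0b00001 and pats[8] == 0b01101   # each burst starts at its seed
```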
Built-In Self-Test
[Figure: SoC core under test driven by a BIST controller: a PRPG feeds the core, a MISR compacts its responses, and a ROM stores deterministic patterns]
• Hybrid test set contains a limited number of pseudorandom and deterministic vectors
• Pseudorandom test vectors can be generated either by hardware or by software
• The pseudorandom test is improved by a stored test set which is specially generated to shorten the on-line pseudorandom test cycle and to target the random-resistant faults
• The problem is to find a trade-off between the on-line generated pseudorandom test and the stored test
Hybrid BIST
Optimization of Hybrid BIST
Cost curves for BIST:
[Figure: cost vs. the number k of pseudorandom patterns; the cost of pseudorandom test patterns C_GEN grows with k, the cost of stored test C_MEM falls together with the number of remaining faults after applying k pseudorandom test patterns r_NOT(k), and the total cost C_TOTAL reaches its minimum at the optimal length L_OPT]
k     rDET(k)  rNOT(k)  FC(k)    t(k)
1     155      839      15.6%    104
2     76       763      23.2%    104
3     65       698      29.8%    100
4     90       608      38.8%    101
5     44       564      43.3%    99
10    104      421      57.6%    95
20    44       311      68.7%    87
50    51       218      78.1%    74
100   16       145      85.4%    52
200   18       114      88.5%    41
411   31       70       93.0%    26
954   18       28       97.2%    12
1560  8        16       98.4%    7
2153  11       5        99.5%    3
3449  2        3        99.7%    2
4519  2        1        99.9%    1
4520  1        0        100.0%   0
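The trade-off can be computed directly from such a table. This sketch (not from the slides) models the total cost as k + w * t(k), where t(k) is the number of deterministic patterns still needed after k pseudorandom patterns; the relative memory weight w = 20 is an illustrative assumption.

```python
# (k pseudorandom patterns applied, t(k) deterministic patterns still needed),
# taken from the cost table above:
table = [
    (1, 104), (2, 104), (3, 100), (4, 101), (5, 99), (10, 95), (20, 87),
    (50, 74), (100, 52), (200, 41), (411, 26), (954, 12), (1560, 7),
    (2153, 3), (3449, 2), (4519, 1), (4520, 0),
]

def best_switch_point(rows, w=20):
    """Return (k, total_cost) minimizing k + w * t(k)."""
    return min(((k, k + w * t) for k, t in rows), key=lambda kt: kt[1])

k_opt, cost = best_switch_point(table)
assert (k_opt, cost) == (411, 411 + 20 * 26)   # minimum total cost = 931
```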
Hybrid BIST for Multiple Cores
[Figure: embedded tester for testing multiple cores; an SoC with benchmark cores C3540, C1908, C880, C1355, C2670, each with its own BIST logic, reached by the embedded tester (test controller + tester memory) over a test access mechanism]
Multi-Core Hybrid BIST Optimization
Cost functions for HBIST:

COST_T,k = COST_P,k + Σj COST_D,k^j -> min

Iterative optimization:

[Figure: iterative search over the test length k for the minimum total cost; the solution E* is found at j*, k]
Optimized Multi-Core Hybrid BIST
The pseudorandom test is carried out for all cores in parallel; the deterministic tests are applied sequentially
Test-per-Scan Hybrid BIST
[Figure: SoC with cores s838, s1423, s3271, s298, each equipped with LFSRs and multiple scan paths; an embedded tester (test controller + tester memory) accesses them over the TAM]
Deterministic tests can only be carried out for one core at a time, so only one test access bus is needed at the system level.
Every core's BIST logic is capable of producing its own independent pseudorandom test set, so the pseudorandom tests of all the cores can be carried out simultaneously.
Broadcasting Test Patterns in BIST
Concept of test pattern sharing via a novel scan structure, to reduce the test application time:
[Figure: traditional single scan design, one scan chain feeding CUT 1 and CUT 2 in series, vs. broadcast test architecture, the same scan input feeding the chains of CUT 1 and CUT 2 in parallel]
While one module is tested by its test patterns, the same test patterns can be applied simultaneously to other modules in the manner of pseudorandom testing
Broadcasting Test Patterns in BIST
Examples of connection possibilities in Broadcasting BIST:
[Figure: j-to-j connections, scan input j of CUT 1 tied to scan input j of CUT 2, vs. random connections between the scan inputs of the two CUTs]
Broadcasting Test Patterns in BIST
Scan configurations in Broadcasting BIST:

[Figure: common MISR, the scan chains of CUT 1 … CUT n between Scan-In and Scan-Out feeding one shared MISR, vs. individual and multiple MISRs, each CUT having its own MISR 1 … MISR n]
Hybrid BIST with Test Broadcasting
[Figure: SoC with multiple cores 1, 2, …, k, …, n on a TAM, tested by an embedded tester containing an LFSR emulator, tester memory, and a test controller]
Not for all cores can 100% fault coverage be achieved by a pure pseudorandom test. Additional deterministic tests have to be applied to achieve 100% coverage. The deterministic test patterns are precomputed and stored in the system.
SoC with multiple cores to be tested
Hybrid BIST with Test Broadcasting
The hybrid test consists of a pseudorandom test of length LP and a deterministic test of length LD
LDk - length of the deterministic test set dedicated to the core Ck
L - deterministic test patterns moved from the pseudorandom part to the deterministic part

[Figure: bits vs. test length; pseudorandom patterns over LP, then deterministic patterns over LD, with LDk and L marked]
Testing of Networks-on-Chip (NoC)
• Consider a mesh-like topology of NoC consisting of
– switches (routers), – wire connections between them and – slots for SoC resources, also referred to as tiles.
• Other topological architectures, e.g. honeycomb or torus, may be implemented; the choice depends on the constraints on power, area, speed, and testability
• The resource can be a processor, memory, ASIC core etc.
• The network switch contains buffers, or queues, for the incoming data and the selection logic to determine the output direction, where the data is passed (upward, downward, leftward and rightward neighbours)
Testing of Networks-on-Chip
• Useful knowledge for testing NoC network structures can be obtained from the interconnect testing of other regular topological structures
• The test of wires and switches is to some extent analogous to testing of interconnects of an FPGA
• A switch in a mesh-like communication structure can be tested using only three different configurations
Testing of Networks-on-Chip
• Arbitrary shorts and opens in an n-bit bus can be tested by ⌈log2(n)⌉ test patterns
• When testing the NoC interconnects we can regard different paths through the interconnect structures as one single concatenated bus
• Assuming we have a NoC whose mesh consists of m x m switches, we can view the test paths through the matrix as a wide bus of 2mn wires (2m buses of n wires each)

[Figure: concatenated bus concept; an m x m matrix traversed by 2m buses]
Testing of Networks-on-Chip
• The stuck-at-1 and stuck-at-0 faults are modeled as shorts to Vdd and ground
• Thus we need two extra wires, which makes the total bit width of the bus 2mn + 2 wires
• From the above facts we find that 3⌈log2(2mn + 2)⌉ test patterns are needed in order to test the switches and the wiring in the NoC (three switch configurations, ⌈log2(2mn + 2)⌉ patterns each)
Testing of Networks-on-Chip
Test patterns for the concatenated bus (counting sequence):

Wire   Code
0      0 0 0
1      0 0 1
2      0 1 0
3      0 1 1
4      1 0 0
5      1 0 1
6      1 1 0
7      1 1 1

Each test pattern is one bit-slice of the codes. Detected faults: stuck-at-1, stuck-at-0, all opens and shorts.

6 wires tested (plus 2 extra wires); 3⌈log2(2mn + 2)⌉ test patterns needed
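The counting-sequence test can be generated mechanically. This sketch (not from the slides) assigns wire j its binary index as a code word and emits the ⌈log2(w)⌉ patterns as bit-slices of those codes; since any two wires carry different codes, every short between two wires and every stuck or open line changes some observed code.

```python
from math import ceil, log2

def bus_test_patterns(w):
    """Return the counting-sequence patterns for a w-wire bus;
    pattern i sets wire j to bit i of j."""
    n = ceil(log2(w))
    return [[(j >> i) & 1 for j in range(w)] for i in range(n)]

patterns = bus_test_patterns(8)
assert len(patterns) == 3                       # log2(8) patterns for 8 wires
codes = [tuple(p[j] for p in patterns) for j in range(8)]
assert len(set(codes)) == 8                     # all wire codes are distinct
```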
IEEE P1500 standard for core test
• The following components are generally required to test embedded cores
– a source for application of test stimuli and a sink for observing the responses
– Test Access Mechanisms (TAM) to move the test data from the source to the core inputs and from the core outputs to the sink
– a wrapper around the embedded core

[Figure: embedded core inside a wrapper; one TAM connects the test pattern source to the core, another TAM carries responses to the test responses' sink]
IEEE P1500 standard for core test
• The two most important components of the P1500 standard are
– the Core Test Language (CTL) and
– a scalable core test architecture
• Core Test Language
– Its purpose is to standardize the core test knowledge transfer
– The CTL file of a core must be supplied by the core provider
– This file contains information on how to
• instantiate a wrapper,
• map core ports to wrapper ports,
• and reuse core test data
IEEE P1500 standard for core test
Core test architecture
• It standardizes only the wrapper and the interface between the wrapper and the TAM, called the Wrapper Interface Port (WIP)
• The P1500 TAM interface and wrapper can be viewed as an extension of IEEE Std. 1149.1, since
– the 1149.1 TAP controller is a P1500-compliant TAM interface,
– and the boundary-scan register is a P1500-compliant wrapper
• The wrapper contains
– an instruction register (WIR),
– a wrapper boundary register consisting of wrapper cells,
– a bypass register and some additional logic
• The wrapper has to allow normal functional operation of the core, plus it has to include a 1-bit serial TAM.
• In addition to the serial test access, parallel TAMs may be used.
IEEE P1500 standard for core test
[Figure: system chip with cores 1 … n, each inside a P1500 wrapper containing a WIR and connected through WPI/WPO (parallel) and WSI/WSO (serial) ports plus functional inputs/outputs; a user-defined on-chip test access mechanism (TAM) links the wrappers to the source/sink of stimuli/responses, which may be off-chip or on-chip; each wrapper is controlled through the P1500 Wrapper Interface Port (WIP)]
Practical works to the course "Design for Testability"
Artur Jutman
Tallinn Technical UniversityEstonia
Practical Works
There are two practical works within this course:
• Test Generation
• Built-In Self-Test
We provide only brief descriptions of these works in this handout. The descriptions are given just to give a short overview of what should be done during these exercises. The full description of the mentioned works is available on the Web at the following URL:
http://www.pld.ttu.ee/diagnostika/labs
All the laboratory works are based on the Turbo Tester (TT) tool set, which will be preinstalled in the computer classes. However, it is freeware, and if you wish to have a copy of TT at your own disposal, do not hesitate to download it from:
http://www.pld.ttu.ee/tt
Practical Work on Test Generation
Overview
The objectives of this practical work are the following:
• to practice manual and automatic test pattern generation
• to perform fault simulation and to analyze the simulation information
• to compare the efficiency of different methods and approaches
There are three types of circuits to practice with. The main difference between them is their size. The gate-level schematic is available for the smallest one; based on that schematic, test vectors should be generated manually. For the second circuit, its function is known: it is an adder. A functional test should be generated manually for this circuit. In addition, automatic test pattern generators (ATPG) should also be run on the adder. These ATPGs have different settings; the best settings should be found during the work. The third circuit is too large to analyze its function or schematic. Therefore, the test for this circuit should be generated automatically.
Practical Work on Test Generation
Workflow of manual & automatic test pattern generation
Practical Work on Test Generation
Steps
1. Apply manually as many random vectors as you think will be enough for the first circuit. However, remember that the goal is to obtain a test with the best possible fault coverage using the smallest possible number of test patterns.

2. Using a certain algorithm, prepare a test which is better than that.
3. Repeat the steps above (1, 2) for the adder. Use functional methods instead.
4. Run the ATPGs with default settings.
5. Try different settings of the "genetic" and "random" ATPGs to obtain a shorter test. Run the deterministic ATPG without tuning again and perform the test compaction using the optimize tool.
6. Compare the results and decide which test generation method (or several methods) is the best for the given circuit. Why?
7. Repeat steps 4,5,6 for the third circuit.
8. Calculate the cost of testing for all the methods you used.
Practical Work on Built-In Self-Test
Overview
The objectives of this practical work are the following:
• to explore and compare different built-in self-test techniques
• to learn to find the best LFSR architectures for the BILBO and CSTP methods
• to study the Hybrid BIST approach
Let us have a system-on-chip (SoC) with several cores, which have to be tested by a single BIST device using the Broadcasting BIST method. Our task is then to minimize the time requirements via selection of a proper BIST configuration suitable for all the cores simultaneously. We are going to solve this problem by simulating different configurations and selecting the best one. There are three combinational circuits (cores) in our SoC. First we have to select the best configuration for each circuit separately and then select the best one for the SoC as a whole. Another problem to be solved here is the search for an optimal combination of stored and generated vectors in the Hybrid BIST approach.
Practical Work on Built-In Self-Test
Selection of the Best Configuration for Broadcasting BIST (a workflow)
Practical Work on Built-In Self-Test
Steps
1. Choose a proper length of the TPG (Test Pattern Generator) and SA (Signature Analyzer) for BILBO. It depends on the parameters of the selected circuits.
2. For each circuit, find the configuration that gives the best fault coverage and test length. Run BIST emulator with at least 15 different settings. You have to obtain three best configurations (one for each circuit).
3. Take the first configuration and apply it to the second and the third circuit. Do the same with the 2nd and the 3rd one. Choose the best configuration.
4. Repeat steps 1-3 in CSTP mode. Be sure to select a proper TPG/SA length. Compare the efficiency of CSTP and BILBO methods.
5. Write the schedule for the Hybrid BIST device. The target is to halve the initial test lengths. The main task is to find the minimal number of stored seeds needed to approach the target test length.
6. Answer the following question: Which method is better: Hybrid BIST or BILBO if each stored seed costs as much as 50 generated test vectors?
Contact Data
Prof. Raimund Ubar Artur JutmanE-mail: [email protected] E-mail: [email protected]
www.pld.ttu.ee/~raiub/ www.pld.ttu.ee/~artur/
Tallinn Technical University
Computer Engineering Department
Address: Raja tee 15, 12618 Tallinn, Estonia
Tel.: +372 620 2252,
Fax: +372 620 2253