Sponsored by the U.S. Department of Defense. © 2005 by Carnegie Mellon University
CAV 2005 - page 1
Pittsburgh, PA 15213-3890
Automated Assume-Guarantee Reasoning for Simulation Conformance
Sagar Chaki, Edmund Clarke, Nishant Sinha, Prasanna Thati
Overview
Assume-guarantee style simulation checking between labeled transition systems, carried out in an automated manner.
Automated learning of assumptions, which are expressible as regular tree languages.
An efficient algorithm, LT, that learns minimal deterministic tree automata using queries and counterexamples.
Experimental results, implications, and future work.
Labeled Transition System
Directed graph with an initial node. Edges labeled with actions drawn from an alphabet.
[Diagram: an LTS M with edges labeled by actions.]
Σ(M) = {a, b, c, d, e, f}
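The definition above is easy to make concrete. Below is a minimal sketch (not from the talk) of an LTS as a dict-based directed graph; the state names and the concrete edges are invented for illustration, chosen only so that the alphabet matches the slide's Σ(M).

```python
# A labeled transition system: a directed graph with an initial state and
# edges labeled with actions drawn from an alphabet.

class LTS:
    def __init__(self, init, edges):
        self.init = init
        self.succ = {}                        # (state, action) -> set of states
        for src, act, dst in edges:
            self.succ.setdefault((src, act), set()).add(dst)

    def alphabet(self):
        return {act for (_, act) in self.succ}

    def post(self, state, action):
        """Successor states of `state` under `action` (empty set if none)."""
        return self.succ.get((state, action), set())

# An LTS over the alphabet {a, b, c, d, e, f} of the slide (invented states):
M = LTS(0, [(0, "a", 1), (0, "a", 2), (1, "b", 3), (2, "c", 4),
            (2, "d", 5), (3, "e", 6), (4, "f", 7)])
```

Note that `post` returns a set, since an LTS may be nondeterministic: here state 0 has two a-successors.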
A-G Simulation between LTSs
We have a concurrent implementation M1 || M2 and a specification S. We wish to check whether M1 || M2 ≼ S.
We wish to use the assume-guarantee rule (AG-NC):

    M1 || A ≼ S        M2 ≼ A
    -------------------------
          M1 || M2 ≼ S

We also want to construct a counterexample if simulation does not hold.
Simulation between LTS and tree
t ≼ M ⇔ there exists a homomorphism from t to M
[Diagram: a tree t, with root action a and subtrees labeled b and c, mapped node-by-node into the LTS M.]
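The homomorphism condition can be checked recursively: a state simulates a tree node when some matching successor of that state simulates all of the node's subtrees. A sketch, under the assumption that trees are `(action, subtrees)` pairs and the LTS is a `(state, action) -> successors` dict (all concrete names invented):

```python
def post(succ, q, a):
    """Successor states of q under action a (empty set if none)."""
    return succ.get((q, a), set())

def simulates(succ, q, t):
    """Does the LTS, started at state q, simulate tree t = (action, subtrees)?
    Some a-successor of q must simulate every subtree of t."""
    act, kids = t
    return any(all(simulates(succ, q2, k) for k in kids)
               for q2 in post(succ, q, act))

# q0 --a--> q1, q1 --b--> q2, q1 --c--> q3   (invented example)
succ = {("q0", "a"): {"q1"}, ("q1", "b"): {"q2"}, ("q1", "c"): {"q3"}}
t = ("a", [("b", []), ("c", [])])
```

Here `simulates(succ, "q0", t)` holds because q1 has both a b-successor and a c-successor, while a tree with a d-labeled subtree would be rejected.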
Simulation ⇔ Language Containment
L(M) = the set of trees that M can simulate
[Diagram: an LTS M over the actions a, b, c, together with some of the trees in L(M).]
M1 ≼ M2 iff L(M1) ⊆ L(M2)
Weakest assumption
The weakest assumption that satisfies the first premise of AG-NC corresponds to the language

    W = complement(L(M1)) ∪ L(S)

In fact:

    L(M1 || M2) ⊆ L(S)
    ⇔ L(M1) ∩ L(M2) ⊆ L(S)
    ⇔ L(M2) ⊆ complement(L(M1)) ∪ L(S) = W
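The second step of the derivation is a plain set identity, which can be sanity-checked by brute force over a small finite universe (the universe elements here stand in for trees; nothing in this check is specific to the talk):

```python
from itertools import combinations, product

U = frozenset(range(4))

def subsets(s):
    """All subsets of s (2^|s| of them)."""
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# L(M1) ∩ L(M2) ⊆ L(S)   iff   L(M2) ⊆ complement(L(M1)) ∪ L(S) = W,
# checked for every choice of the three languages over the universe U.
holds = all(((l1 & l2) <= ls) == (l2 <= ((U - l1) | ls))
            for l1, l2, ls in product(subsets(U), repeat=3))
```

This exhausts all 16³ = 4096 triples of languages over a 4-element universe, so `holds` being true confirms the equivalence on this toy scale.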
Initial Idea
Check if L(M2) ⊆ W = complement(L(M1)) ∪ L(S).
If so, we are done.
Otherwise let t be a tree such that t ∈ L(M2) and t ∈ L(M1) \ L(S).
Hence t ∈ L(M1 || M2) \ L(S).
Report t as a counterexample.
Problems and Solutions
How do we represent W?
The tree languages we encounter are regular. We will use tree automata to represent them.
How do we check L(M2) ⊆ W efficiently?
The naïve approach composes M1, M2, and S. Instead, we incrementally learn an approximation of W using the two AG-NC premises.
Tree Automaton
A = (S, S0, Σ, δ, ⊗, F)
S, S0, Σ, δ, and F are exactly as in finite automata; ⊗ is a cross-transition relation.
A reads a binary tree t in a bottom-up manner.
[Diagram: the input tree a(b, c).]
Tree Automaton
Reading the tree a(b, c) bottom-up:
• the leaves b and c are read from the start state S0, giving δ(S0, b) and δ(S0, c);
• the two sibling results are merged by the cross-transition: X = δ(S0, b) ⊗ δ(S0, c);
• finally the root action is applied: δ(X, a).
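The bottom-up run just described can be written out directly. The automaton below (state names, transition tables) is an invented example, built only to accept the slide's tree a(b, c):

```python
def run(delta, cross, s0, tree):
    """Bottom-up run of a deterministic tree automaton on a binary tree.
    tree = (action, subtrees); leaves are read from the start state s0,
    sibling results are merged via the cross-transition table `cross`."""
    act, kids = tree
    states = [run(delta, cross, s0, k) for k in kids]
    if not states:                        # leaf: read from the start state
        return delta[(s0, act)]
    if len(states) == 1:                  # unary node: ordinary transition
        return delta[(states[0], act)]
    left, right = states                  # binary node: cross, then transition
    return delta[(cross[(left, right)], act)]

delta = {("s0", "b"): "sb", ("s0", "c"): "sc", ("x", "a"): "sa"}
cross = {("sb", "sc"): "x"}
final = run(delta, cross, "s0", ("a", [("b", []), ("c", [])]))
```

With "sa" as the sole accepting state, this automaton accepts exactly the trees whose run ends in "sa", mirroring the F component of the definition.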
Facts about Tree Automata
The theory of tree automata mirrors that of finite automata. Deterministic TA have the same accepting power as nondeterministic TA.
Languages accepted by TA are said to be regular. Regular tree languages are closed under the standard set operations.
There is a version of the Myhill-Nerode theorem for TA: every regular language is accepted by a unique minimal DTA.
The tree language of any LTS M is regular.
Hence W = complement(L(M1)) ∪ L(S) is a regular language and is accepted by a minimal DTA A(W). We will learn A(W).
Algorithm LT
Learns a minimal DTA that accepts an unknown regular tree language W. Uses a minimally adequate teacher (MAT) that can answer two kinds of queries:
• Membership: Is a tree t in W, i.e., t ∈ W?
• Candidate: Is a proposed DTA C the correct answer, i.e., L(C) = W? If not, return a counterexample tree t.
LT is a generalization of the L* algorithm by Angluin. It is closer in spirit to the improved version of L* by Rivest and Schapire.
Context
A context is a tree with a hole into which you can plug other trees and contexts.
[Diagram: plugging the tree t into the hole of context c yields the tree c[t].]
Overview of LT
LT uses an observation table to record the information obtained by querying the MAT. It iteratively:
• Augments the rows of the table using membership queries until the table is closed.
• Constructs a candidate DTA C from the table and makes a candidate query.
• If the answer is yes, then LT terminates with C. Otherwise it adds a single context to E and repeats.
LT terminates due to an upper bound on the number of rows.
The space and time complexity of LT is polynomial in the size of A(W), assuming each MAT query takes unit resource.
Observation Table
The table has one column per context in E (a set of contexts) and one row per tree, in three blocks:
• rows for S (a set of trees): the candidate states of A(W);
• rows for S × Σ: the transitions;
• rows for S × S: the cross transitions.
The entries record membership: T(s, e) = 1 if e[s] ∈ W, and 0 otherwise. The contexts in E serve as experiments that can distinguish between states of A(W).
[Diagram: a sample row, s ↦ 0 1 0 1 1 1 0 0 …]
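Filling the table amounts to one membership query per (tree, context) pair. A sketch, with trees as nested `(action, subtrees)` tuples, a distinguished hole label, and a stub membership oracle standing in for the MAT (all concrete names are invented):

```python
HOLE = "?"                                 # label marking the context's hole

def plug(context, tree):
    """Substitute `tree` for the hole of `context`, yielding context[tree]."""
    act, kids = context
    if act == HOLE:
        return tree
    return (act, tuple(plug(k, tree) for k in kids))

def fill_table(S, E, member):
    """T(s, e) = 1 if e[s] is in W (asked via the membership oracle), else 0."""
    return {(s, e): int(member(plug(e, s))) for s in S for e in E}

# Toy oracle: W = the trees whose root is labeled "a".
member = lambda t: t[0] == "a"
S = [("a", ()), ("b", ())]                 # trees
E = [(HOLE, ()), ("a", ((HOLE, ()),))]     # contexts: the hole itself, and a(·)
T = fill_table(S, E, member)
```

Two trees get distinct rows exactly when some context in E separates them; here the empty context already distinguishes the two trees in S.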
Closure
The table is closed when every row in the S × Σ section equals the row of some tree in S.
[Diagram: the row of (s, a) in the S × Σ section matches the row of s' among the S rows, defining the transition δ(s, a) = s'.]
⊗-Closure
Likewise, the table is ⊗-closed when every row in the S × S section equals the row of some tree in S.
[Diagram: the row of the pair (s1, s2) in the S × S section matches the row of s' among the S rows, defining the cross transition s1 ⊗ s2 = s'.]
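Both closure conditions reduce to the same check: each extension row must already occur among the S rows. A sketch over rows given as plain tuples (the concrete data is invented):

```python
def unclosed(s_rows, ext_rows):
    """s_rows: tree -> row; ext_rows: extension -> row, where an extension is
    either a transition (s, action) or a cross pair (s1, s2).
    Returns the extensions whose row matches no S row: the table is closed
    iff this list is empty.  When an extension's row does match some tree
    s' in S, that match defines delta(s, a) = s' or s1 (x) s2 = s'."""
    base = set(s_rows.values())
    return [x for x, r in ext_rows.items() if r not in base]

s_rows = {"t1": (0, 1), "t2": (1, 0)}
ext_rows = {("t1", "a"): (0, 1),       # matches t1's row: delta(t1, a) = t1
            ("t1", "t2"): (1, 1)}      # matches nothing: add this tree to S
```

In LT, each entry returned by `unclosed` triggers the "augment the rows" step of the loop: the offending tree is added to S and the table is refilled with membership queries.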
Learning A(W)
Recall that we wish to check M1 || M2 ≼ S for the implementation M1 || M2 and specification S. The MAT's queries are answered as follows.
Membership: t ∈ W iff M1 || t ≼ S.
Candidate: to decide whether L(C) = W:
• Phase 1: check M1 || C ≼ S; if it fails, return t ∈ L(C) \ W, otherwise proceed to phase 2.
• Phase 2: check M2 ≼ C; if it fails, return t ∈ W \ L(C) or a global counterexample.
Candidate Query
Check M1 || C ≼ S (premise 1):
• No: then ¬(L(C) ⊆ W); return a counterexample t ∈ L(C) \ W.
• Yes: then L(C) ⊆ W. Check M2 ≼ C (premise 2):
  - Yes: C satisfies both AG premises, hence M1 || M2 ≼ S.
  - No: let t' ∈ L(M2) \ L(C) and check M1 || t' ≼ S:
    • Yes: return t' ∈ W \ L(C).
    • No: return t' as a counterexample to M1 || M2 ≼ S.
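This flow transcribes into straight-line code. The three simulation checks are passed in as oracles returning `(holds, counterexample)` pairs, so this is a sketch of the control flow only, not of the checks themselves (oracle names are invented):

```python
def candidate_query(check_m1C_S, check_m2_C, check_m1t_S):
    """One candidate query for a proposed assumption C.
    Each oracle returns (True, None) or (False, counterexample_tree)."""
    ok, t = check_m1C_S()              # premise 1: M1 || C simulated by S?
    if not ok:
        return ("refine", t)           # t in L(C) \ W: C is too permissive
    ok, t = check_m2_C()               # premise 2: M2 simulated by C?
    if ok:
        return ("holds", None)         # both premises hold: M1 || M2 conforms
    ok2, _ = check_m1t_S(t)            # does M1 || t' still conform to S?
    if ok2:
        return ("refine", t)           # t' in W \ L(C): C is too restrictive
    return ("fails", t)                # genuine counterexample to M1 || M2
```

The two "refine" outcomes feed the counterexample tree back to LT, which adds a distinguishing context to E and proposes a new candidate.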
Results
(T1, M1: direct approach; T2, M2: assume-guarantee; Gain = M1/M2; |A|: assumption size; MQ#, CQ#: numbers of membership and candidate queries.)

Result    T1(Direct)  M1(Direct)  T2(AG)  M2(AG)  Gain(M1/M2)  |A|  MQ#   CQ#
Invalid   *           2146        325     207     10.4         8    265   3
Valid     *           2080        309     163     12.8         8    279   3
Valid     *           2077        309     163     12.7         16   279   3
Valid     *           2076        976     167     12.4         16   770   4
Valid     *           2075        969     167     12.4         16   767   4
Invalid   *           2074        3009    234     8.9          24   1514  5
Invalid   *           2075        3059    234     8.9          24   1514  5
Invalid   *           2072        3048    234     8.9          24   1514  5
Related Work
MAT-based learning for DFA: Angluin (1987), improved by Rivest and Schapire (1993).
MAT-based learning for DTA: Sakakibara (1990), Drewes and Högberg (2003); asymptotically more expensive than ours.
Automated assume-guarantee using DFA learning: Cobleigh et al. (2003); Barringer et al. (2003) explore various assume-guarantee proof rules; dynamic A-G (Chaki, Clarke, Sharygina, Sinha: FM 2005).
Other kinds of automata learning: Büchi automata (Maler et al.), timed automata (Jonsson et al.).
Future work
We could explore the use of other assume-guarantee rules and the effect of the ordering of the components on overall complexity.
We could learn nondeterministic automata. These may be exponentially more compact than the corresponding deterministic versions, but we might have to sacrifice canonicity.
We could apply automated assume-guarantee reasoning via learning to other types of verification problems, such as LTL model checking and deadlock detection.