Second lecture at the LASER Summer School, Elba Island, Sept. 2013
Development of dynamically evolving and self-adaptive software
2. Understanding and managing change
LASER 2013, Isola d’Elba, September 2013
Carlo Ghezzi, Politecnico di Milano
Deep-SE Group @ DEIB
Tuesday, September 10, 13
The global picture: the machine and the world (Jackson/Zave)

[Diagram: the World (the environment) and the Machine overlap in a region of shared phenomena. Goals and requirements, together with domain properties and assumptions, belong to the world; the specification sits at the interface between world and machine.]

P. Zave, M. Jackson, Four dark corners of requirements engineering, TOSEM 1997
Domain properties and assumptions
• Domain property
  - statement about problem world phenomena
  - often holds regardless of any software-to-be, e.g., the laws of physics
    avgTrainAcceleration(t1, t2) > 0 implies trainSpeed(t2) > trainSpeed(t1)
• Assumption
  - statement about problem world phenomena, constraints
  - may be violated
  ‣ “humans behave as instructed by the machine”
  ‣ “temperature is in the range -40..+40 Celsius”
  ‣ “device generates a measure every 2 ms.”
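The train-speed domain property can be spelled out on sample data; the trace below is illustrative, not from the lecture:

```python
# Illustrative check of the domain property:
# avgTrainAcceleration(t1, t2) > 0 implies trainSpeed(t2) > trainSpeed(t1).
# speed_at is a hypothetical speed trace (m/s), sampled once per second.

speed_at = {0: 10.0, 1: 12.0, 2: 15.0, 3: 15.0}

def avg_acceleration(t1, t2):
    return (speed_at[t2] - speed_at[t1]) / (t2 - t1)

def property_holds(t1, t2):
    # material implication: premise false, or conclusion true
    return not (avg_acceleration(t1, t2) > 0) or speed_at[t2] > speed_at[t1]

# The property holds for every ordered pair of time points in the trace.
print(all(property_holds(t1, t2)
          for t1 in speed_at for t2 in speed_at if t2 > t1))  # True
```

Note that this kind of property holds by the physics of the domain (here, by the definition of average acceleration), which is exactly what distinguishes it from an assumption.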
Domain assumptions
“Domain assumptions bridge the gap between requirements and specifications” (M. Jackson & P. Zave)

May concern:
• usage profiles
• users’ responsiveness
• remote servers’ response time
• network latency
• sensors/actuators behaviors
• ...
Dependability arguments
• Assume you have a formal representation for
  – R = requirements
  – S = specification
  – D = Dp + Da (domain properties and assumptions)
• If S and D are both satisfied and consistent, it is necessary to prove
  – S, D |= R
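As a toy illustration of the dependability argument (not from the lecture), a propositional encoding of S, D |= R can be checked by brute force over truth assignments; the atoms and predicates below are hypothetical:

```python
from itertools import product

# Hypothetical propositional encoding of a dependability argument.
# Atoms: req_sent (the machine issues a request), ack (the environment
# acknowledges it).
ATOMS = ["req_sent", "ack"]

def S(v):  # specification: the machine always issues the request
    return v["req_sent"]

def D(v):  # domain assumption: an issued request gets acknowledged
    return (not v["req_sent"]) or v["ack"]

def R(v):  # requirement: the acknowledgement is obtained
    return v["ack"]

def entails(premises, conclusion):
    """Check premises |= conclusion over all truth assignments."""
    for bits in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

print(entails([S, D], R))  # True: S together with D entails R
print(entails([S], R))     # False: without D the requirement does not follow
```

The second check makes the role of D concrete: the specification alone is too weak, and only with the domain assumption does the requirement follow.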
Change
• Requirements change
• Environment changes
• Change is often a manifestation of uncertainty
• Change asks for evolution (of the machine)
Changes may cause evolution
• Changes are exogenous phenomena that may concern
  - R
  - D (actually, Da)
• Changes likely break the dependability argument
• Evolution (of the machine) is a consequence of change
  ‣ we need to change S (and hence the implementation) to continue to satisfy the dependability argument
    S, D |= R
Evolution (1970s) = software maintenance
• Traditionally, changes to software (the machine, S) in response to changes in D, R are performed manually (by software engineers) and applied off-line, after delivery
• Maintenance (Lientz and Swanson) classified as
  - Corrective maintenance
    ✓ modification to correct discovered problems
  - Adaptive maintenance
    ✓ modification to keep the software usable in a changed or changing environment
  - Perfective maintenance
    ✓ modification to improve it functionally/non-functionally
  - Preventive maintenance
    ✓ modification that anticipates any of the above

Lientz B., Swanson E., Software Maintenance Management, Addison Wesley, 1980
Early studies on software evolution

• Software evolution recognized as a crucial problem since the 1970s (work by M. Lehman and L. Belady)
• Three categories of software
  - S-programs
    - written according to an exact specification of what the program can do
  - P-programs
    - written to implement certain procedures that completely determine what the program can do (e.g., a program to play chess)
  - E-programs
    - written to perform some real-world activity
    - how they should behave is linked to the environment in which they run
    - need to adapt to varying requirements and circumstances in that environment

Lehman, M., Programs, Life Cycles, and Laws of Software Evolution, Proc. IEEE, 1980
Lehman’s “laws” of software evolution (1)
• Continuing Change — E-type systems must be continually adapted or they become progressively less satisfactory.
• Increasing Complexity — As an E-type system evolves its complexity increases unless work is done to maintain or reduce it.
• Self Regulation — The evolution process of an E-type system is self-regulating, with the distribution of product and process measures close to normal.
• Conservation of Organizational Stability (invariant work rate) — The average effective global activity rate in an evolving E-type system is invariant over product lifetime.
Lehman’s “laws” of software evolution (2)

• Conservation of Familiarity — Over the lifetime of a system, the incremental change in each release is approximately constant.
• Continuing Growth — The functional content of E-type systems must be continually increased to maintain user satisfaction over their lifetime.
• Declining Quality — The quality of E-type systems will appear to be declining unless they are rigorously maintained and adapted to operational environment changes.
• Feedback System — E-type evolution processes constitute multi-level, multi-loop, multi-agent feedback systems and must be treated as such to achieve significant improvement over any reasonable base.
Software evolution (2000): agility
• Common to all variants:
  - incorporate feedback in an iterative development process that supports progressive calibration of objectives and adjustment of requirements
Software evolution (late 2010s)
• Run-time evolution
• Self-managing evolution
Evolution and adaptation
Adaptation is a special case of evolution, due to changes in the domain assumptions Da

• an increasingly relevant phenomenon, often due to uncertainty
  ‣ cyber-physical systems
    - interaction with the physical environment
  ‣ user-intensive systems
    - changes in usage profile
  ‣ cloud/service infrastructure
    - platform/software volatility
On-line evolution and self-adaptive systems
• More and more often, systems are required to be continuously running
• This asks for on-line evolution, i.e., applying changes to the machine as the system is running and providing service
• The special case of self-adaptive systems
  - on-line adaptation is self-managed
Self-adaptive system (SaS)
• D decomposed into Df and Dc
  – Df is the fixed/stable part
  – Dc is the changeable part
• A SaS should
  - detect changes to Dc
  - modify itself (the machine: S, and the implementation), if necessary, to keep satisfying the dependability argument
    S, D |= R
Paradigm shift
• SaSs ask for a paradigm shift, which involves both development time (DT) and run time (RT)
• The boundary between DT and RT fades
• Reasoning and reacting capabilities must enrich the RT environment
  - detect change
  - reason about the consequences of change
  - react to change
Models+verification@runtime
• To detect change, we need to monitor the environment
• The observed changes must be retrofitted into models of the machine+environment that support reasoning about the dependability argument (a learning step)
• The updated models must be verified to check for violations of the dependability argument
• In case of a violation, a self-adaptation must be triggered
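The detect/reason/react steps above can be sketched as a minimal runtime loop; the class and parameter names here are hypothetical, not the lecture's implementation:

```python
# Minimal sketch of a models@runtime adaptation loop, assuming a
# hypothetical model object with update/verify operations.

class RuntimeModel:
    """Keeps an estimate of a changeable domain parameter (Dc)."""
    def __init__(self, threshold):
        self.failure_rate = 0.0     # current estimate of Dc
        self.threshold = threshold  # bound taken from the requirement R

    def update(self, observed_failures, observed_runs):
        # learning step: retrofit monitored data into the model
        self.failure_rate = observed_failures / observed_runs

    def satisfies_requirement(self):
        # verification step: check the dependability argument
        return self.failure_rate < self.threshold

def adaptation_loop(model, observations, adapt):
    """Monitor -> update -> verify -> (possibly) self-adapt."""
    for failures, runs in observations:
        model.update(failures, runs)           # monitor + learn
        if not model.satisfies_requirement():  # reason
            adapt(model)                       # react

adaptations = []
m = RuntimeModel(threshold=0.05)
adaptation_loop(m, [(1, 100), (9, 100)],
                lambda mdl: adaptations.append(mdl.failure_rate))
print(adaptations)  # adaptation triggered only for the second batch (0.09)
```

In a real system the verification step would be model checking over the updated model rather than a threshold comparison, but the loop structure is the same.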
Lifecycle of self-adaptive systems

[Diagram: at development time, requirements (Reqs) and environment knowledge (Env) yield a specification, which is implemented; at run time the implementation executes, execution and environment are monitored, monitored data feed a reasoning step over the specification, and reasoning triggers self-adaptation of the running system.]
Autonomic computing

[Figure: the autonomic computing view, revisited]
Zooming in
• I. Epifani, C. Ghezzi, R. Mirandola, G. Tamburrelli, "Model Evolution by Run-Time Parameter Adaptation", ICSE 2009
• C. Ghezzi, G. Tamburrelli, "Reasoning on Non-Functional Requirements for Integrated Services", RE 2009
• I. Epifani, C. Ghezzi, G. Tamburrelli, "Change-Point Detection for Black-Box Services", FSE 2010
• A. Filieri, C. Ghezzi, G. Tamburrelli, "A formal approach to adaptive software: continuous assurance of non-functional requirements", Formal Aspects of Computing, 24, 2, March 2012
Problem setting

• Focus on non-functional requirements
  – reliability, performance, energy consumption, cost, ...
• Quantitatively stated in probabilistic terms
• Dc decomposed into Du, Ds
  – Du = usage profile
  – Ds = S1 ∧ ... ∧ Sn, where Si is the assumption on the i-th service

[Diagram: a user interacts with an integrated service, realized as a workflow W that uses external services S1, S2, ..., Sn, each with uncertain behavior.]
An example

3 probabilistic requirements:
R1: “Probability of success is > 0.8”
R2: “Probability of an ExpShipping failure for a user recognized as ReturningCustomer is < 0.035”
R3: “Probability of an authentication failure is < 0.06”

[Activity diagram: Login, then Search; Buy loops back to Search on [buy more]; on [proceed], CheckOut branches to NrmShipping ([normal]) or ExpShipping ([express]), then Logout. The model distinguishes returning customers from new customers.]
Assumptions
[Figure: domain knowledge about the user profile (transition probabilities for returning customers (RC) vs. new customers (NC)) and reliability assumptions on the external services.]
DTMC model

[Figure: a discrete-time Markov chain over the workflow states (Login, ReturningCustomer/NewCustomer, Search, Buy, CheckOut, NrmShipping, ExpShipping, Logout, Success) plus failure states (FailedLg, FailedChk, FailedNrmSh, FailedExpSh), with transition probabilities derived from the domain assumptions.]

Property check via model checking:
R1: “Probability of success is > 0.8” (checked value: 0.84)
R2: “Probability of an ExpShipping failure for a user recognized as ReturningCustomer is < 0.035” (checked value: 0.031)
R3: “Probability of an authentication failure is < 0.06” (checked value: 0.056)
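A reachability property such as R1 ("probability of success") can be computed on a DTMC by iterating x = P·x with the target state fixed at 1. The sketch below uses a deliberately small illustrative chain, not the slide's exact model:

```python
# Sketch: checking a reachability property on a small DTMC by fixed-point
# iteration. The chain and probabilities are illustrative, not the slide's.

# States: 0=Login, 1=Search, 2=CheckOut, 3=Success, 4=Fail (3 and 4 absorbing).
P = [
    [0.0, 0.97, 0.0, 0.0, 0.03],  # Login -> Search (0.97) or Fail (0.03)
    [0.0, 0.3,  0.7, 0.0, 0.0],   # Search repeats or proceeds to CheckOut
    [0.0, 0.0,  0.0, 0.95, 0.05], # CheckOut -> Success (0.95) or Fail (0.05)
    [0.0, 0.0,  0.0, 1.0, 0.0],   # Success (absorbing)
    [0.0, 0.0,  0.0, 0.0, 1.0],   # Fail (absorbing)
]

def prob_reach(P, target, iters=200):
    """Probability of eventually reaching `target` from each state."""
    n = len(P)
    x = [1.0 if s == target else 0.0 for s in range(n)]
    for _ in range(iters):
        x = [x[s] if s == target else sum(P[s][t] * x[t] for t in range(n))
             for s in range(n)]
    return x

p = prob_reach(P, target=3)
print(round(p[0], 4))  # probability of success from Login: 0.9215
```

A probabilistic model checker such as PRISM solves the same system of equations exactly; the iteration above is just the transparent version of that computation.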
What happens at run time?

• Actual environment behavior is monitored
• Model updated
  - e.g., by using a Bayesian approach to estimate the DTMC matrix (posterior) given run-time traces and prior transition probabilities
• Boils down to an updating rule

[Equation: the updating rule combines a-priori knowledge with run-time observations into a-posteriori knowledge.]
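One common concrete form of such an updating rule (an assumption here; the transcript does not preserve the lecture's formula) treats the prior row of the DTMC as Dirichlet pseudo-counts and returns the posterior mean:

```python
# Sketch of a Bayesian update for one row of a DTMC transition matrix:
# the prior is expressed as pseudo-counts, and the posterior mean blends it
# with observed run-time transitions. All numbers below are illustrative.

def update_row(prior_probs, prior_strength, observed_counts):
    """Posterior mean of the transition probabilities for one DTMC state.

    prior_probs:     a-priori transition probabilities (design-time Da)
    prior_strength:  how many pseudo-observations the prior is worth
    observed_counts: transitions observed by the monitor at run time
    """
    pseudo = [p * prior_strength for p in prior_probs]
    total = sum(pseudo) + sum(observed_counts)
    return [(c0 + c) / total for c0, c in zip(pseudo, observed_counts)]

# Design-time assumption: ExpShipping succeeds with probability 0.97.
prior = [0.97, 0.03]   # [success, failure]
observed = [80, 20]    # monitored run-time outcomes
posterior = update_row(prior, prior_strength=100, observed_counts=observed)
print([round(x, 3) for x in posterior])  # [0.885, 0.115]
```

The prior strength controls how quickly monitored evidence overrides the design-time assumption: a large value makes the model conservative, a small one makes it reactive.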
In our example

R2: “Probability of an ExpShipping failure for a user recognized as BigSpender is < 0.035”

Checked value after the update: 0.633. Requirement violated!
Model update and failure prediction
• Model checking is applied after each update
• Model checking may predict requirements violations
• ... and trigger self-adaptations before violations manifest themselves
In our example

R2: “Probability of an ExpShipping failure for a user recognized as ReturningCustomer is < 0.035”

[Figure: the DTMC after the run-time update; the value associated with the ExpShipping failure is now 0.067.]

Requirement violated, even though no returning customers have been observed!
Another example

Discrete Time Markov Reward Model (D-MRM). Units: $

[Figure: the workflow chain annotated with state rewards (costs in $): Login 2, Search 8, Buy 16, CheckOut 6, NrmShipping 20, ExpShipping 50, Logout 1, End 0; branch probabilities include 0.2, 0.56, 0.14, 0.3.]

What’s the average cost of a session? 60.625
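The average cost of a session in a Markov reward model is the expected reward accumulated before absorption, the least solution of v = r + P·v on the transient states. A sketch on a toy chain (illustrative numbers, not the slide's model, so it does not reproduce the 60.625 figure):

```python
# Sketch: expected accumulated reward before absorption in a discrete-time
# Markov reward model, via fixed-point iteration v = r + P·v.
# The chain and rewards below are illustrative, not the slide's model.

# States: 0=Login ($2), 1=Search ($8), 2=CheckOut ($6), 3=End ($0, absorbing).
rewards = [2.0, 8.0, 6.0, 0.0]
P = [
    [0.0, 1.0, 0.0, 0.0],  # Login -> Search
    [0.0, 0.5, 0.5, 0.0],  # Search repeats or proceeds to CheckOut
    [0.0, 0.0, 0.0, 1.0],  # CheckOut -> End
    [0.0, 0.0, 0.0, 1.0],  # End (absorbing, zero reward)
]

def expected_cost(P, rewards, absorbing, iters=500):
    """Expected total reward accumulated before reaching an absorbing state."""
    n = len(P)
    v = [0.0] * n
    for _ in range(iters):
        v = [0.0 if s in absorbing else
             rewards[s] + sum(P[s][t] * v[t] for t in range(n))
             for s in range(n)]
    return v

v = expected_cost(P, rewards, absorbing={3})
print(round(v[0], 6))  # average cost of a session starting at Login: 24.0
```

As with the reachability check, a reward-enabled probabilistic model checker computes this quantity directly from the annotated model; the iteration just makes the underlying equations explicit.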