Zue2015Uncertainties


Toward Coverage of Uncertainty in Adaptive Analog and Mixed Signal Systems

William Chipman, Christoph Grimm, Carna Radojicic
Design of Cyber-Physical Systems, TU Kaiserslautern
Website: http://cps.cs.uni-kl.de/

Summary / Abstract

In modern systems, analog and mixed-signal (AMS) circuits are tightly coupled with their HW/SW and controlling applications. Because of this, verification of the AMS portion exclusively is not sufficient. Many of these AMS circuits control, or contribute significantly to the control of, critical systems. In order to achieve ‘first time right’ system deployment, the accuracy of the models and the validation of the application fitness are at least as important as the AMS modeling and accuracy. In this paper we discuss and give an overview of methods that strive for validation of AMS systems with increased coverage. In particular, we focus on modeling, verification and validation of uncertainties in the context of Cyber-Physical Systems. We also describe a novel approach to determining unknown uncertainties.

1 Introduction

Today, most systems include complex Analog/Mixed-Signal (AMS) subsystems: transceivers, power drivers, PLLs and power management are all crucial parts. AMS blocks usually provide the connection between the HW/SW subsystems and the physical environment of an application. Despite improved methods for verification, many projects fail due to preventable issues; improved methods and new approaches for verification and validation are therefore required. Adding to this need, in future systems such as the Internet of Things (IoT) and Cyber-Physical Systems (CPS), the interaction of AMS systems with HW/SW systems and applications is becoming much more complex.

In this paper, we give an overview of the state of the art in verification of AMS systems, with a focus on the challenge of showing correctness of a system with uncertain parameters that is embedded in an unknown or uncertain environment. In the remainder of this section, we give a classification and overview of verification activities. In Section 2, we describe the specific challenge that uncertainties pose for verification and validation, and classify uncertainties. In Section 3, we give an overview of approaches for verification and validation of systems with uncertainties. In Section 4, we describe uncertainties in self-modifying systems. In Section 5, we discuss scalability and applicability. In Section 6, we discuss a novel system to be used for dealing with unknown uncertainties. Section 7 concludes with an outlook.

1.1 Reasons for failed design projects

Analysis of the reasons for failed AMS projects shows that insufficient verification coverage can be classified as follows:

Insufficient verification arises when the verification coverage is not measured properly. This allows for system behavior states that are not tested. It can also occur when specific components are integrated late, or after the verification process completes.

Inaccurate modeling is a typical issue of both AMS and digital systems. It can be caused by the use of incomplete behavioral models, inaccurate device models, or unforeseen issues that are not model-related but stem from inconsistencies introduced through the physical design, such as crosstalk, power integrity and other environmental issues.

Incomplete specification, as well as changing or wrong specification, is also very common. This issue arises because the specification process itself can be considered a part of the design process, and is often done at least partially in parallel with other design activities in order to minimize time to market.

While experience shows that all three classes of design failure occur with nearly equal likelihood, verification of an AMS system against its specification can only cover the first. This has made the first the subject of much research and tool development, while leaving the second and third to the experience of engineers and the development of methodology. To explain by which approaches the other classes of design failures can be covered as well, we must first discuss the design activities that target correctness in design.

1.2 Design activities that target ‘correctness’

A design is considered correct if it fulfills all specified properties and requirements of a given application. Correctness is targeted by

1. verification,

2. validation, and

3. modeling activities.

Verification activities target the proof of specified properties. Verification means to show that a model of the system fulfills specified properties; basically, it means to show that “we do things right” [8]. Optimally, verification is done in a comprehensive (‘formal’) way, i.e. by model checking, or else by modeling/simulation.

Validation activities target consistency of the design with its application. Validation means to show that functional and non-functional properties of a system fulfill the requirements and expectations; basically, it means to show that “we do the right things” [8]. As a validation activity we consider the executable specification that creates an initial model of the design. It allows designers to abstract from the application context. After the design of a component, integration validation shows that the design is working properly in the context of the application (see [17]). Validation is usually based on a small number of explicitly given use cases or scenarios in which a design has to be validated.

Modeling is an activity that is often overlooked in this context. Modeling creates a representation of an existing system. Models describe useful properties to the needed level of detail and abstract less important properties for a particular purpose. Both verification and validation are based on models and assumptions: models of the design, models of the environment, and assumptions about scenarios of the application. However, with increasing complexity, models are becoming more abstract and critical for coverage.

Activity                   Does                             Coverage
Verification, semi-formal  shows that model of design       quantified
Verification, formal       has properties as specified      comprehensive
Validation, semi-formal    shows functionality of model     quantified
Validation, formal         of design in system              comprehensive
Modeling, dependable       creates abstraction of design    safe inclusion of uncertainties

Table 1  Overview and classification of design activities that aim at showing correctness.

Table 1 gives an overview of the design activities that target ‘correctness’. The verification and validation activities are classified into semi-formal and formal activities. Semi-formal activities try to quantify and, if possible, increase coverage. Formal activities strive for comprehensive coverage.

2 Correctness in an uncertain world

With shrinking structures, increasing complexity, and more complex interactions of components and the application, the impact of deviations (e.g. process, voltage, temperature) is growing and becoming harder to predict. Adding to this, the Internet of Things and Cyber-Physical Systems have opened the system borders of embedded systems. Impacts of the surrounding environment and related systems have to be accounted for in order to fully establish correctness of the system. Therefore, it becomes insufficient to show properties of a system in a given scenario and given environment (correctness). Experience, e.g. the crash of the LH2904 aircraft [26], shows that the failure of complex systems is caused by complex interactions involving different kinds of deviations from “planned operation”. To sufficiently prove correctness of the system, these possible deviations must be taken into account and included in the verification and validation process.

Means to show correctness and dependability require a more complex understanding that is still evolving, and might include:

1. Deviations due to aging, drift, etc.,

2. (Multiple) faults in components or communication,

3. Unforeseen scenarios and use cases,

4. Changes in the configuration or adaptive behavior.

System engineers have dealt with types (1) and (2) for a long time and have developed elaborate methods to analyze their effects and to tolerate them [19, 23, 36]. Model inaccuracies (3) have also been studied: [22] categorizes possible uncertainties in models and presents a conceptual mathematical description to handle them. A characterization of the location, level and nature of uncertainties is discussed in the literature [6, 37], which describes a quantification of model uncertainties and introduces a design flow for probabilistic uncertainty models. In contrast, uncertainties emerging due to adaptive CPS behavior (type 4) have not been systematically studied, but they are certainly a growing concern because of the rapidly growing number of CPS with an increasing ability for adaptation.

Therefore, for validation of AMS systems in the IoT or for CPS, it will become more important to understand how a system can adapt itself to changing and maybe unforeseen environments. But how can we model, validate, and verify unforeseen situations? To, at least partially, answer this question, we first formalize it.


2.1 Classification of uncertainties

Walker [37] defines the term uncertainty as “any deviation from the unachievable ideal of completely deterministic knowledge of the relevant system”. It includes all kinds of deviations, faults, failures, errors, unforeseen changes, etc. that are not part of a deterministic model. Uncertainties can take a deterministic model into a non-deterministic state. Walker classifies uncertainties according to the location of the uncertainty: they can be in the inputs of a system, in the parameters, or in the model (structure, behavior) itself (see [37]). Input uncertainties are typically caused by a lack of knowledge of initial operating conditions, or by unforeseen scenarios. Parameter uncertainties are values whose accurate value we do not know (e.g. π). For many values we have statistical knowledge (e.g. mean value, probability density function), or we at least know a range from which a value is chosen randomly (non-deterministic choice). Table 2 gives an overview and examples from AMS systems. In addition to location, we will need to add a classification according to the modeling approach, and according to whether an uncertainty is static or dynamic, to define a formal model.

Table 2  Classification by location [37]

Location                 Examples
input uncertainties      uncertain initial values or stimuli
parameter uncertainties  tolerances, component aging
modeling uncertainties   abstraction of accurate models

Depending on the level of uncertainty, different mathematical approaches can be used to capture its behavior. Probabilistic models allow us to quantify the impact of uncertainties; we distinguish between discrete and continuous probabilistic models. We further distinguish two kinds of non-deterministic models: the non-deterministic choice of a value from a (continuous) range, and the non-deterministic choice from a discrete set. Table 3 gives an overview of modeling approaches.

Table 3  Classification by modeling approach

Modeling approach             Examples
continuous non-deterministic  tolerances, drift, aging
continuous probabilistic      white, colored noise
discrete non-deterministic    possible failure
discrete probabilistic        sporadic failure
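As a small illustration of the discrete probabilistic modeling approach, a sporadic failure can be sketched as a two-state Markov chain. This is our own toy example; the transition probabilities and function name are invented for illustration:

```python
import random

def markov_failure_trace(n_steps, p_fail=0.01, p_repair=0.2, seed=3):
    """Discrete probabilistic uncertainty as a two-state Markov chain:
    'ok' -> 'failed' with probability p_fail per step,
    'failed' -> 'ok' with probability p_repair per step.
    Probabilities are invented for this illustration.
    """
    rng = random.Random(seed)
    state, trace = "ok", []
    for _ in range(n_steps):
        if state == "ok" and rng.random() < p_fail:
            state = "failed"
        elif state == "failed" and rng.random() < p_repair:
            state = "ok"
        trace.append(state)
    return trace

trace = markov_failure_trace(1000)
```

A discrete non-deterministic model of the same failure would instead drop the probabilities and simply allow either transition at every step.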

Furthermore, we distinguish between dynamic and static uncertainties. Static uncertainties are unknowns that are constant over time (i.e. during a symbolic simulation run). Dynamic uncertainties are unknown variables whose values are determined for every use during a symbolic simulation according to a given law. A dynamic uncertainty can often be modeled as a sequence of (possibly correlated) static uncertainties.

Table 4  Dynamic and static uncertainties.

         Examples
static   process variations, aging
dynamic  noise, quantization error, sporadic errors, SEU errors
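The static/dynamic distinction can be made concrete with a toy numeric simulation (the filter model, names and values are our own, invented for the example): a static uncertainty is drawn once per run and held fixed, while a dynamic uncertainty is drawn anew at every time step:

```python
import random

def simulate(n_steps, seed=0):
    """One run of a toy first-order filter with both kinds of uncertainty.

    Illustrative sketch only; coefficients and ranges are invented.
    """
    rng = random.Random(seed)
    # Static uncertainty: a process-variation gain, drawn ONCE per run
    # and constant for the whole simulation run.
    gain = rng.uniform(0.95, 1.05)
    y = 0.0
    trace = []
    for _ in range(n_steps):
        # Dynamic uncertainty: noise, drawn anew at EVERY time step.
        noise = rng.gauss(0.0, 0.01)
        y = 0.9 * y + gain * 1.0 + noise
        trace.append(y)
    return gain, trace

gain, trace = simulate(100)
```

Repeating the run with the same seed reproduces the same static draw, which is exactly the property a symbolic simulator exploits when it treats a static uncertainty as a single symbol for the whole run.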

2.2 Formal representation of uncertainties

In order to (formally) verify and validate systems with uncertainties, we need concrete, mathematical models of uncertainties. These provide us with a basis for modeling uncertainties, and also for computing or simulating them. There is a vast range of theories, and we only give a few examples for each modeling approach.

Continuous, non-deterministic uncertainties are ranges or intervals. Among several approaches to compute with ranges, Affine Arithmetic (e.g. [4, 14]) offers a good trade-off between efficiency and accuracy if used for computations. An Affine Arithmetic Form (AAF) x consists of a center value x0 and a linear combination of deviation terms xiεi that model deviations from the center value. The deviation symbols εi are symbols with unknown value in [−1, 1]; the deviation variables xi scale these ranges:

x = x0 + ∑_{i=1}^{n} xi·εi,    εi ∈ [−1, 1]

Linear operations on AAFs give accurate results with constant size of the expression:

c·(x ± y) = c·(x0 ± y0) + ∑_{i=1}^{n} c·(xi ± yi)·εi

Non-linear operations can be approximated by a linear approximation plus an additional deviation term that guarantees safe inclusion of the higher-order terms (see [14]).

Continuous, probabilistic uncertainties are random variables that are chosen according to a probability density function. In [18, 25], normally distributed uncertainties are represented; the symbols εi are then assumed to be normally distributed random variables whose standard deviation is scaled by xi. However, such modeling is limited to normal distributions, and only applicable to linear systems.

Non-deterministic discrete uncertainties can be modeled by non-determinism in automata [31]; there is a comprehensive theory on automata that we assume to be known to the reader. Probabilistic discrete uncertainties require the modeling of the probabilities of the state transitions (e.g. in probabilistic automata [30] or Markov processes [27]). While there are many different comprehensive theoretical frameworks for each modeling approach, a theoretical framework that covers all different kinds of uncertainties seems to be missing.
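The AAF equations above can be sketched in a few lines of Python. This is an illustrative sketch of the linear operations only, not one of the cited implementations; class and method names are our own:

```python
from itertools import count

class AAF:
    """Affine Arithmetic Form: x = x0 + sum_i xi * eps_i, eps_i in [-1, 1].

    Minimal sketch covering only the linear operations from the text
    (add, subtract, multiply by a constant c) plus a safe enclosure.
    """
    _fresh = count()  # supplies a fresh deviation-symbol index per uncertainty

    def __init__(self, center, terms=None):
        self.x0 = float(center)
        self.terms = dict(terms or {})  # symbol index i -> deviation xi

    @classmethod
    def from_interval(cls, lo, hi):
        """Model a continuous non-deterministic uncertainty [lo, hi]."""
        return cls((lo + hi) / 2, {next(cls._fresh): (hi - lo) / 2})

    def __add__(self, other):
        terms = dict(self.terms)
        for i, xi in other.terms.items():
            terms[i] = terms.get(i, 0.0) + xi
        return AAF(self.x0 + other.x0, terms)

    def __sub__(self, other):
        terms = dict(self.terms)
        for i, xi in other.terms.items():
            terms[i] = terms.get(i, 0.0) - xi
        return AAF(self.x0 - other.x0, terms)

    def scale(self, c):
        """c * x = c*x0 + sum_i (c*xi) * eps_i."""
        return AAF(c * self.x0, {i: c * xi for i, xi in self.terms.items()})

    def interval(self):
        """Safe enclosure, since each eps_i ranges over [-1, 1]."""
        r = sum(abs(xi) for xi in self.terms.values())
        return (self.x0 - r, self.x0 + r)

# A +/-5% tolerance around 100. Shared deviation symbols cancel in
# x - x, a correlation that plain interval arithmetic cannot express.
x = AAF.from_interval(95.0, 105.0)
print((x - x).interval())   # exact cancellation: (0.0, 0.0)
print((x + x).interval())   # (190.0, 210.0)
```

Because `from_interval` hands out a fresh εi per call, two independently created uncertainties keep separate symbols, while reuse of the same form preserves correlation, which is the key advantage of AAFs over intervals.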


3 Verification of systems with uncertainties

In the previous sections we have described means to model uncertainties at their origins. However, once these uncertainties propagate through the system, they are either canceled or lead to an error. Modeling the system at its starting state is therefore only the starting point; modeling of the functionality is required for proper validation of the system. In order to evaluate the impact of uncertainties, we will briefly describe two different approaches: the first option is to use numerical simulation, and the second, ‘formal’ option is to use symbolic simulation. Numerical simulation can only evaluate a single combination of all uncertainties. To increase coverage, multi-run simulation techniques are used: Worst Case and Monte Carlo analysis. The main issue with multi-run simulation is that it can require a high number of simulation runs to get results with a high degree of confidence.

3.1 Monte Carlo analysis

Monte Carlo methods determine statistical properties of an uncertain system. For this purpose, values for uncertain parameters are chosen randomly from probabilistic models and used in repeated simulation runs. The number of simulation runs is independent of the number of uncertainties, but it increases with

1. rare events such as errors, and

2. the desired confidence.

For (1), solutions are available. In order to reduce the number of simulation runs, the original (probabilistic) model has to be biased to make rare events more likely, e.g. by variance reduction methods ([28] gives an overview and comparison), importance sampling [11, 21], or blocking useless samples [35]. (2) is simply a fundamental problem of probabilistic methods.
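The basic (unbiased) Monte Carlo loop can be sketched as follows. The performance model, parameter values and function name here are hypothetical, invented purely to illustrate the method; rare failure events are exactly where the run count explodes, motivating the variance-reduction techniques cited above:

```python
import random

def monte_carlo_fail_rate(n_runs, spec=1.15, seed=1):
    """Estimate P(performance > spec) for a toy circuit whose gain
    depends on two uncertain parameters (hypothetical model)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_runs):
        # Uncertain parameters drawn from their probabilistic models.
        r1 = rng.gauss(100.0, 2.0)
        r2 = rng.gauss(100.0, 2.0)
        gain = (r1 + r2) / (2 * 90.0)  # toy performance measure
        if gain > spec:
            failures += 1
    return failures / n_runs

# Small, spec-dependent failure rate: confidence grows only with n_runs.
print(monte_carlo_fail_rate(10_000))
```

Note that the loop body is independent of how many uncertainties exist; only the desired confidence and the rarity of failures drive `n_runs` upward.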

3.2 Worst Case analysis

The objective of Worst Case analysis is to show that system performances, even in the presence of uncertainties, remain in a specified range. For deterministic uncertainties, we can consider only the corner cases of continuous uncertainties [5], and all possible values for discrete uncertainties. However, this has two problems: first, for n uncertainties it requires 2^n simulation runs. Second, it can lead to a false estimation, since worst-case performances are not always found at the corner cases.

A systematic approach to efficiently finding corner cases is the use of Design of Experiments (DoE), e.g. in [33]. In this approach, statistical methods are used to analyze the existing simulation runs and to determine inputs that are likely candidates for worst-case parameters. However, it is applicable only to continuous uncertainties. With DoE, the border between the passing and the failing operating regions is hard to detect, since in the failure regions continuous signals often become discontinuous.
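The corner-case enumeration can be sketched with a hypothetical performance function (names and ranges are our own). It shows both problems from the text: the 2^n run count, and a worst case in the interior that the corners miss:

```python
from itertools import product

def corner_case_analysis(ranges, perf):
    """Evaluate a performance function at all 2**n corners.

    `ranges` maps parameter name -> (lo, hi). Illustrative sketch:
    for n continuous uncertainties this needs 2**n runs, and the
    true worst case may still lie in the interior of the ranges.
    """
    names = list(ranges)
    corners = product(*(ranges[name] for name in names))
    results = [perf(dict(zip(names, c))) for c in corners]
    return min(results), max(results), len(results)

# Hypothetical performance whose maximum is NOT at a corner:
# perf peaks at v = 1.0, inside the range (0.5, 1.5).
perf = lambda p: -(p["v"] - 1.0) ** 2
lo, hi, runs = corner_case_analysis({"v": (0.5, 1.5), "t": (-40, 125)}, perf)
print(runs)  # 4 corners for 2 uncertainties
```

Here all four corners report −0.25, yet the true maximum over the range is 0.0 at v = 1.0, so corner analysis alone would misjudge the worst case.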

3.3 Formal verification

Complex systems with uncertainties show the limits of simulation-based approaches. While simulation is able to cope with complexity and heterogeneity, it fails to explore all uncertainties with sufficient coverage. An alternative could be the use of formal methods that promise comprehensive coverage.

Compared with numeric simulation, symbolic simulation (and model checking) provides not only much more coverage, but also more insight and information about an analyzed system. Symbolic simulation of circuits has been used for a long time [34, 16], but the initial approaches were limited to linear circuits and systems. Since then, many other approaches have been developed, such as:

• convex polyhedra [10],

• non-convex orthogonal polyhedra ([12, 7]),

• ellipsoids [9],

• polytopes [3], or Affine Arithmetic Forms [13, 18, 32]that are equivalent to polytopes.

4 Uncertainties in self-modifying systems

A self-modifying system is one which is capable of changing its behavior while in an active state. While this ability is quite useful for the longevity and robustness of the system, it creates many issues for the verification and validation process. Self-modification instead of partial or total failure can add stability in an uncertain environment, but this stability comes at a price: system models need to account for changes that could be made but may not be.

As described previously, showing a high degree of confidence in the verification process can require a high number of simulation runs. When the system can make changes to itself while running, both the model and the test runs need to be able to anticipate these changes. This adds complexity (and time) to the modeling and simulation process.

In cyber-physical systems and especially the IoT, systems that can self-heal to overcome partial failure or to avoid catastrophic failure will be vital. Once a system begins to self-heal, it becomes extremely difficult to confirm that the system remains in a deterministic state. CPS are quite often real-time and/or safety systems, so if these traverse into a non-deterministic state due to self-imposed modifications, the entire system can become suspect. Models must be developed that take these potential changes into account as uncertainties and allow for testing that can verify and validate the system without fully understanding which changes could be or have been made. To add even more uncertainty, once self-modifications begin, it becomes extremely difficult to determine what the circular (fed-back) inputs will be and how the system will deal with these values.

4.1 Types of self-modification

Self-modifications can occur for several reasons:


• Failure of parts or subsystems

• Modifications to logic based on new and previouslyunknown inputs

• Learning that new behaviors yield better results thanprevious

When a part or subsystem fails, a self-healing system must first detect the failure and then decide whether to ignore the failure, to reroute around it, or to compensate in some other fashion. Determining when failures occur can be as simple as repeated failed heartbeat responses or can involve more complicated self-healing algorithms. The modifications to the system logic due to advanced self-healing algorithms can lead to adjustments to the circular inputs. These changes may handle the known inputs in an unknown manner, leading to further deviations and changes to the system logic. Moreover, unknown or unanticipated inputs to the system, coupled with unexpected modifications to the system, can move a system into a state that is well beyond what has been tested and modeled. Finally, self-learning systems can modify their logic to adjust behaviors when the new behaviors are determined to be better or more useful than previous ones. This self-learning-based modification can quickly move a system into behavior patterns that are unexpected and well outside what has been verified and validated.

4.2 Systems that require self-modifying behavior

While many cyber-physical systems can be improved by being designed as adaptive and self-modifying, we will look closer at autonomous vehicles and smart environmental systems. Autonomous vehicles range from unmanned air vehicles (UAVs) to driverless automobiles. UAVs came into use over a decade ago, but were strictly remote-controlled vehicles operated by a person. These vehicles have morphed over time into aircraft that can take off, fly an extended mission, and then land with no human interaction. During flights, UAVs must be able to detect and compensate for failures or other changes to the system or external environment. They must also be fully proven to remain in a stable, deterministic state in spite of failures or changes in input. If they move into an unstable state, the safety of the entire system comes into question.

According to [29], forecasts show that over 94 million self-driving vehicles will be in service by 2035. In 2013, Google introduced their driverless car, followed by Daimler’s version [29]. While safety is of the highest priority for these vehicles, proving that safety before these vehicles enter the roadway must come first. Proving safety through verification and validation must not only account for the uncertainties of failures in the internal environment but must also deal with the uncertainties of the quickly changing environment of a high-speed vehicle. [29] shows that running full combinatorial model simulations would require x · 10^12 test cases.

Cyber-physical systems that control environmental systems have been in place for several years. As the requirements of these systems expand, so does the scope of the uncertainties introduced by the external environment. In addition to the general control of the environmental systems, these systems must deal with uncertainties in the external inputs, such as emergency situations, weather factors, component and sub-system failures, and unknown uncertainties due to self-healing behavior changes in the system.

4.3 Uncertainties due to unanticipated modifications

Uncertainties that are introduced by unexpected or unforeseen modifications not only make verification and validation difficult, they can also cause instabilities that may lead to system-wide problems. Safety issues affecting the system or any external entities must be proven to be minimal and within prescribed limits. Additionally, these changes cannot be allowed to change the system fundamentally from the specified functionality. Minor changes in the mode of function are expected, but overall functionality and end results must be maintained, and validation must show that, despite known and unknown modifications to behavior, functionality remains intact.

Modeling and verification must show that, in the face of unexpected changes, the entire system does not move from a deterministic state into a persistent non-deterministic state. Any potential non-determinism must be modeled, and it must be shown that edge cases can be recognized and self-corrected. This edge-case modeling must also account for lost or missing data and inputs. Data may be lost through internal system failures or through external issues: interference, crosstalk, etc. Data loss may be partial or complete, and algorithms for handling it vary.

Quite often these CPS tend to function as black boxes that accept inputs and output results. These results, coupled with new inputs, are fed circularly into the system, generating new outputs. When the CPS uses self-diagnostics to self-heal and initiate behavior modifications, this can lead to unforeseen changes to the outputs and therefore changes to the circular inputs. These unexpected inputs can force further self-diagnostics and healing modifications.

5 Applicability and scalability

While formal methods are promising, industrial adoption is rather low. The reasons are that formal methods are hard to integrate into design flows, and seem to be less scalable compared with numerical simulation.

5.1 Ability to integrate in design flows

In most formal verification approaches, the design models have to be translated into specific models for (formal) verification. In [1], a smart regression cube is described that handles sparse data in a linear regression to build models from minimal sampling data. [2] describes a potential novel approach for automated use-case generation for systems that have unknown risky uncertainty that can only be determined at runtime. [39] requires system modeling in the form of recurrence equations on which formal verification can be applied. In [38], mixed-signal behavior needs to be formulated in the form of SAT constraints which represent the input to an NL-SMT solver. [15, 12] analyze mixed-signal behaviors of linear-hybrid automata. Tools for translation are not provided (as in [39, 38, 15, 12]), or only for a subset (e.g. [24]): [24] translates a subset of VHDL-AMS to labeled hybrid Petri nets, on which formal methods are applied.

However, for the design of circuits and systems, languages such as VHDL-AMS, Verilog-A or SystemC AMS are used. Verification and validation involves a vast set of modeling IP in terms of device models, test benches, and other models. This infrastructure makes vast use of all language features, and even compatibility among different simulation tools for the same language is questionable. Hence, a manual or partially automatic translation is a major weakness in the design flow.

An approach to overcome this would be the direct use of existing modeling languages and tools for formal verification, as done e.g. in [18, 32].

5.2 Scalability

Simulation-based verification of systems with k uncertainties requires multi-run simulations. For worst-case analysis, 2^k simulation runs are required to simulate all corner cases; however, this does not guarantee that the worst case has been found. For Monte Carlo simulation, a constant number of simulation runs (e.g. 500) is needed to determine statistical properties; however, to increase confidence, the number has to be increased.

In [20], the run-time complexity of symbolic simulation based on Affine Arithmetic is analyzed. It is shown that the memory complexity is k + 2 ∈ O(k), and that the given implementation has a constant overhead in run-time compared with numeric simulation.

Table 5  Comparison of run-time complexity.

method             complexity  comment
Monte-Carlo        O(1)        large number of runs required
Corner-Case        O(2^k)      exponential with k
affine arithmetic  O(1)        only one simulation run required
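The run counts compared in Table 5 can be made concrete with a toy helper (our own sketch; the Monte Carlo budget of 500 runs follows the example in the text, the rest follows the table):

```python
def required_runs(k, method):
    """Number of simulation runs as a function of k uncertainties.

    Illustrative constants: Monte Carlo uses a typical fixed budget
    (500, as in the text), corner-case analysis needs all 2**k
    corners, and symbolic simulation with affine arithmetic needs a
    single run.
    """
    return {"monte_carlo": 500, "corner_case": 2 ** k, "affine": 1}[method]

for k in (10, 20, 30):
    # corner-case explodes: 1024, then ~1e6, then ~1e9 runs
    print(k, required_runs(k, "corner_case"), required_runs(k, "affine"))
```

Even at k = 30 the corner-case count exceeds a billion runs, which is why the single symbolic run, despite its per-run overhead, scales so much better.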

6 Smart Campus

A novel cyber-physical system has been designed to study the effects of unknown uncertainties on a larger system. The smart campus, while controlling lighting and other environmental systems, must also recognize scenarios based on human traffic patterns, determine unusual and new behavior patterns, detect and anticipate emergency situations, and maintain system security, all while remaining in a stable deterministic state.

Figure 1  Self-healing Smart Campus

In order to achieve these requirements, self-awareness is required in the system. This self-awareness consists of self-diagnosis and self-healing subsystems; these subsystems can be seen in Figure 1.

In addition to the uncertainties due to unknown inputs to the system while running, modeling and verification must also account for the unknown uncertainties introduced by the self-aware subsystems. Because of the complexity of the smart campus system and the high level of unknown uncertainties possible in this system, a combination of several verification and validation techniques will be required to solve the related issues and show that large, semi-diverse CPS can be successfully modeled and validated.

7 Conclusion

For verification and validation of Analog/Mixed-Signal and Cyber-Physical Systems, uncertainties play a central role. Traditionally, simple parameter uncertainties from inside the system had to be handled. This is going to change as more uncertainties come from inputs to the system and from behavioral modifications during runtime.

We have given a comprehensive and rather abstract discussion of uncertainties, including a classification, and an overview of approaches and means for their verification by simulation, by formal verification, and ideas towards validation. We have described uncertainties in self-modifying systems, including the types of self-modification, systems that require the ability to self-modify, and a brief discussion of the uncertainties due to unanticipated modifications. The design of modeling techniques for applicability and scalability has also been explained, along with a comparison of run-time complexity for several approaches. Finally, a novel smart campus system has been described that has enough complexity that it will require a new approach to effectively verify and validate.

While there are many novel approaches being tried to verify and validate AMS and CPS systems, there is no one-size-fits-all approach. In order to make the process widely accepted in industry, a design for a non-IP process that can be applied universally to different systems needs to be developed.

8 Literature

[1] Hossein Ahmadi, Tarek Abdelzaher, Jiawei Han, Nam Pham, and Raghu Ganti. The sparse regression cube: A reliable modeling technique for open cyber-physical systems. In IEEE/ACM Second International Conference on Cyber-Physical Systems, 2011.

[2] Shaukat Ali and Tao Yue. U-Test: Evolving, modelling and testing realistic uncertain behaviours of cyber-physical systems. In Software Testing, Verification and Validation (ICST), 2015 IEEE 8th International Conference on, pages 1–2, 2015.

[3] Matthias Althoff, Akshay Rajhans, Bruce H. Krogh, Soner Yaldiz, Xin Li, and Larry Pileggi. Formal Verification of Phase-Locked Loops Using Reachability Analysis and Continuization. In IEEE/ACM International Conference on Computer Aided Design (ICCAD), pages 659–666, November 2011.

[4] Marcus Andrade, Joao Comba, and Jorge Stolfi. Affine Arithmetic (Extended Abstract). Interval ’94, St. Petersburg, Russia, 1994.

[5] Kurt Antreich, Helmut Gräb, and Claudia Wieser. Practical Methods for Worst-Case and Yield Analysis of Analog Integrated Circuits. International Journal of High Speed Electronics and Systems, 4(3):261–282, 1993.

[6] Paul D. Arendt, Daniel W. Apley, and Wei Chen. Quantification of Model Uncertainty: Calibration, Model Discrepancy, and Identifiability. Journal of Mechanical Design, 134(10):100908, Sep 2012. doi: 10.1115/1.4007390.

[7] Eugene Asarin, Olivier Bournez, Thao Dang, and Oded Maler. Approximate Reachability Analysis of Piecewise-Linear Dynamical Systems. In Third International Workshop on Hybrid Systems: Computation and Control (HSCC’00), volume LNCS 1790, pages 20–31, 2000.

[8] Barry Boehm. Guidelines for verifying and validating software requirements and design specifications. In Euro IFIP 79, pages 711–719. North Holland, 1979.

[9] F. L. Chernousko and A. I. Ovseevich. Method of ellipsoids: Guaranteed estimation in dynamical systems under uncertainties and control. In Abstracts of the International Conference on Interval and Computer-Algebraic Methods in Science and Engineering (INTERVAL/94), April 1994.

[10] Alongkrit Chutinan and Bruce H. Krogh. Computing Polyhedral Approximations to Flow Pipes for Dynamic Systems. In Proceedings of the 37th IEEE Conference on Decision and Control, 1998.

[11] Charles E. Clark. Importance sampling in Monte Carlo analysis. Operations Research, 9(5):603–620, 1961.

[12] Thao Dang, Alexandre Donzé, and Oded Maler. Verification of Analog and Mixed-Signal Circuits Using Hybrid System Techniques. In Formal Methods in Computer Aided Design, volume LNCS 3312, pages 21–36. Springer Berlin Heidelberg, 2004.

[13] N. Femia and G. Spagnuolo. True worst-case circuit tolerance analysis using genetic algorithms and affine arithmetic. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, 47(9):1285–1296, Sep 2000.

[14] Luiz Henrique de Figueiredo and Jorge Stolfi. Affine Arithmetic: Concepts and Applications. Numerical Algorithms, 37(1-4):147–158, 2004.

[15] Goran Frehse. PHAVer: Algorithmic Verification of Hybrid Systems Past HyTech. In 8th International Workshop on Hybrid Systems: Computation and Control (HSCC’05), volume LNCS 3414, pages 258–273, 2005.

[16] G. G. E. Gielen, H. C. C. Walscharts, and W. Sansen. ISAAC: A symbolic simulator for analog integrated circuits. IEEE Journal of Solid-State Circuits, 24(6):1587–1597, Dec 1989.

[17] Christoph Grimm, Martin Barnasconi, Alain Vachoux, and Karsten Einwich. An Introduction to Modeling Embedded Analog/Mixed-Signal Systems using SystemC AMS Extensions, 2008.

[18] Christoph Grimm, Wilhelm Heupke, and Klaus Waldschmidt. Analysis of mixed-signal systems with affine arithmetic. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 24(1):118–123, 2005.

[19] Christoforos Hadjicostis. Coding Approaches to Fault Tolerance in Combinational and Dynamic Systems, volume 660 of The Springer International Series in Engineering and Computer Science. Kluwer Academic Publishers, Norwell, MA, USA, 2001.

[20] Wilhelm Heupke, Christoph Grimm, and Klaus Waldschmidt. Modeling Uncertainty in Nonlinear Systems with Affine Arithmetic. In Advances in Specification and Design Languages for SoC, pages 198–213. Springer-Verlag, 2006.

[21] Dale E. Hocevar, Michael R. Lightner, and Timo-thy N. Trick. A study of variance reduction tech-niques for estimating circuit yields. IEEE Trans-actions on Computer Aided-Design, CAD-2(3):180–192, 1983.

[22] Armen Der Kiureghian and Ove Ditlevsen. Aleatoryor epistemic? Does it matter? Structural Safety,31(2):105–112, 2009. Risk Acceptance and RiskCommunication Risk Acceptance and Risk Commu-nication.

[23] Israel Koren and C. Mani Krishna. Fault-Tolerant Systems. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2007.

[24] Scott Little, Nicholas Seegmiller, David Walter, and Chris J. Myers. Verification of analog/mixed-signal circuits using labeled hybrid Petri nets. In International Conference on Computer-Aided Design (ICCAD 2006), pages 275–282, 2006.

[25] J. D. Ma and R. A. Rutenbar. Fast interval-valued statistical modeling of interconnect and effective capacitance. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 25(4):710–724, April 2006.

[26] Main Commission Aircraft Accident Investigation, Warsaw, Poland. Report on the accident to Airbus A320-211 aircraft in Warsaw on 14 September 1993, 1994.

[27] James R. Norris. Markov Chains. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 1998.

[28] M. Perninge, M. Amelin, and V. Knazkins. Comparing variance reduction techniques for Monte Carlo simulation of trading and security in a three-area power system. In Transmission and Distribution Conference and Exposition: Latin America, 2008 IEEE/PES, pages 1–5, August 2008.

[29] Horst Pfluegl. Safety and reliability testing of autonomous vehicles. Technical report, AVL, 2015.

[30] Michael O. Rabin. Probabilistic automata. Information and Control, 6(3):230–245, 1963.

[31] M. O. Rabin and D. Scott. Finite automata and their decision problems. IBM Journal of Research and Development, 3(2):114–125, April 1959.

[32] Carna Radojicic, Thiyagarajan Purusothaman, and Christoph Grimm. Symbolic simulation of mixed-signal systems with extended affine arithmetic. In Proceedings of the EDA Workshop 2015, pages 21–26, 2015.

[33] Monica Rafaila, Christian Decker, Christoph Grimm, and Georg Pelz. Simulation-based sensitivity and worst-case analyses of automotive electronics. In 13th IEEE Symposium on Design and Diagnostics of Electronic Circuits and Systems, pages 309–312. IEEE, April 2010.

[34] Kishore Singhal and Jiri Vlach. Symbolic analysis of analog and digital circuits. IEEE Transactions on Circuits and Systems, 24(11):598–609, November 1977.

[35] A. Singhee and R. A. Rutenbar. Statistical blockade: A novel method for very fast Monte Carlo simulation of rare circuit events, and its application. In Design, Automation and Test in Europe Conference and Exhibition (DATE '07), pages 1–6, April 2007.

[36] Arun K. Somani and Nitin H. Vaidya. Understanding fault tolerance and reliability. Computer, 30(4):45–50, April 1997.

[37] Warren E. Walker, Poul Harremoës, Jan Rotmans, Jeroen P. van der Sluijs, Marjolein B. A. van Asselt, Peter Janssen, and Martin P. Krayer von Krauss. Defining uncertainty: A conceptual basis for uncertainty management in model-based decision support. Integrated Assessment, 4(1):5–17, 2003.

[38] Leyi Yin, Yue Deng, and Peng Li. Verifying dynamic properties of nonlinear mixed-signal circuits via efficient SMT-based techniques. In International Conference on Computer-Aided Design (ICCAD 2012), pages 436–442, 2012.

[39] Mohamed H. Zaki, Ghiath Al Sammane, Sofiène Tahar, and Guy Bois. Combining symbolic simulation and interval arithmetic for the verification of AMS designs. In 7th International Conference on Formal Methods in Computer-Aided Design (FMCAD 2007), pages 207–215, 2007.