
Computing (2013) 95:633–648
DOI 10.1007/s00607-013-0338-9

EDITORIAL

Software architecture-based analysis and testing: a look into achievements and future challenges

Antonia Bertolino · Paola Inverardi · Henry Muccini

Published online: 26 July 2013
© Springer-Verlag Wien 2013

A. Bertolino (B)
Istituto di Scienza e Tecnologie dell’Informazione “A. Faedo” (ISTI-CNR), Area della Ricerca CNR di Pisa, 56100 Pisa, Italy
e-mail: [email protected]

P. Inverardi · H. Muccini
Dipartimento di Ingegneria e Scienze dell’Informazione e Matematica, University of L’Aquila, Via Vetoio 1, 67100 L’Aquila, Italy
e-mail: [email protected]

H. Muccini
e-mail: [email protected]

1 The role of software architecture in testing and analysis

The term software architecture (SA) has been introduced to denote the high-level structure of a software system. SA has been proposed as a means for managing complexity and improving reuse, by supporting the decomposition of a system into its high-level components and their interconnections. SA became prevalent in the beginning of the ’90s, when its main role was to describe the system structure (by identifying architectural components and connectors) and the required system behavior. Over the years, the SA scope has evolved, and today it also captures the architecturally-relevant decisions behind design [50], taken by a variety of stakeholders to satisfy their own specific concerns and codified into different views and viewpoints [46].

Nowadays, the relevance of SA in both the academic and industrial worlds is unquestionable, and SAs are used for documenting and communicating design decisions and architectural solutions [24], for driving analysis techniques [57,62,63,79], for code generation purposes in model-driven engineering [3,37], for product line engineering [15], for risk and cost estimation [36,67], and more.

SAs have also been advocated since the 1990s as a means for improving the dependability of complex software systems. In this light, different methods have been proposed, on the one side for assessing the correctness of architectural decisions with respect to system goals and requirements, and on the other as a reference model to drive a more effective system design and implementation. Among these methods, testing and analysis play a central role.

In this editorial paper, while introducing the two papers selected for this special issue, we report on those that we consider the most relevant advances in the field of architecture-based testing and analysis over the years. We do this based on our own journey in this topic, which we ourselves contributed to stir and shape.

Many dedicated events have been held to discuss the role played by SA in the analysis and testing of software systems. Starting from the first ROSATEA workshop run in 1998 (and followed by the ROSATEA 2006^1 and ROSATEA 2007^2 editions), the role played by SA in analysis and testing has also been discussed in dependability venues (such as the Architecting Dependable Systems workshops and books^3), in testing venues (such as the Charette session of the “6th Workshop on Automation of Software Test” at ICSE 2011), and in software architecture venues (such as the “Workshop on Architecture-Based Testing and System Validation”, part of the WICSA 2011 programme^4). Very recently, three events (the “Working meeting on Architecture-Based Testing: Moving Research into Practice” held in Pisa in 2011^5, the industrial and practitioner-oriented meeting “Workshop on Architecture-Based Testing: Best Practices, Best Potential” held at the CMU-SEI in February 2011, and the “Architecture-based Testing: Industrial Needs and Research Opportunities” BOF session^6 at WICSA 2011) showed a renewed interest in the topic.

Summarizing, we can classify the role of SA in testing and analysis into three main areas:

Architecture evaluation: evaluates the goodness of the architecture with respect to quality requirements.

Architecture-based analysis: analyzes an architecture or architectural model with respect to functional and non-functional qualities. Usually the analysis serves the purpose of comparing alternative architectures to make informed early design decisions.

Architecture-based testing: uses an architecture to produce artifacts useful in testing the implementation of a system (e.g., test cases, test plans, coverage measures) and executes code-level test cases to check the implementation.

When dealing with SA-based analysis and testing, two perspectives can be taken: (i) the one where the architect wants to analyze or test the software architecture itself (e.g., with respect to given requirements or goals), or (ii) the other where the developer wants to analyze or test the developed system against the decisions made within its SA. This implies conformance between the architecture and the implemented system. This paper will cover both perspectives.

1 See: http://www.di.univaq.it/muccini/Rosatea2006/.
2 See: http://www.di.univaq.it/muccini/Rosatea2007.
3 See: http://www.cs.kent.ac.uk/people/staff/rdl/ADSFuture/resources.htm.
4 See: http://www.cs.bilkent.edu.tr/ABT-2011/.
5 See: http://labsewiki.isti.cnr.it/projects/ast/ast2011pisa/main.
6 See: http://wwwp.dnsalias.org/wiki/WICSA_2011_BABT:_Architecture-based_Testing.


Using this classification, the rest of this editorial paper briefly describes the historical trends in the use of software architecture for code testing and analysis (Sects. 2, 3, 4), and identifies what we envision to be the challenges for the near future (Sect. 5). Finally, the papers that have been selected for the special issue are introduced (Sect. 6).

2 Architecture evaluation

Architecture evaluation (AE) consists in evaluating the compliance of the software architecture with quality requirements. Different methods have been proposed.

Dobrica and Niemela [31] analyze and compare eight well-known AE methods, including SAAM (scenario-based architecture analysis method) with its three extensions SAAMCS, ESAAMI, and SAAMER, ATAM (architecture trade-off analysis method), SBAR (scenario-based architecture reengineering), ALPSM (architecture level prediction of software maintenance), and SAEM (software architecture evaluation model). Most of these AE techniques are based on scenarios, and help to assess the SA quality with respect to a multitude of quality attributes. Different techniques specialize on different evaluation goals: ATAM locates and analyzes trade-offs, SBAR and SAAMER evaluate the potential of the designed SA to achieve the software quality requirements, SAAMCS focuses on risk assessment, while SAAM and ESAAMI help to identify potential trouble spots [31]. Among them, SAAM and ATAM can be considered the most mature and widely applied methods.

More recently, other AE methods have been introduced. The System Architecture Tradeoff Analysis Method^7 is a variant of ATAM, designed to evaluate the system and software architectures of a software-reliant system. It is used to assess the consequences of architectural decisions in light of business goals and quality-attribute-requirement goals. The Holistic Product Line Architecture Assessment (HoPLAA) method [60] extends the ATAM architecture evaluation to product line architectures: in a first stage, HoPLAA focusses on the core architecture evaluation, while during a second stage it evaluates the individual product architectures. The strength of HoPLAA is that it takes less time than performing the ATAM analysis separately on each individual product architecture. The Continuous Performance Assessment of Software Architecture (CPASA) [66] provides a method for assessing performance requirements using the system architecture as a reference. CPASA, being designed with agile and incremental software development processes in mind, extends the PASA [75] framework, which instead requires a full and finalized requirements specification. The Tiny Architectural Review Approach (TARA) [76] is a quick and inexpensive, non-scenario-based approach, recently introduced to make architecture evaluation more easily applicable in industry. Starting from the observation that scenario-based methods are perceived to be expensive when applied in industrial contexts, and that there is no clear confidence in the benefits of such assessments, TARA aims to be quicker and less expensive than most of the scenario-based methods.

7 See: http://www.sei.cmu.edu/architecture/tools/evaluate/systematam.cfm?location=quaternary-nav&source=651988.


3 Architecture-based analysis

While AE focuses on evaluating the SA itself, architecture-based analysis (ABA) analyzes the system's expected qualities based on the decisions made at the architecture level. In other words, assuming the SA model correctly implements the desired requirements, ABA aims at using the produced architectural artifacts to select the architecture that best satisfies the expected system qualities and stakeholder concerns.

In a recent study, 48 practitioners from 40 different IT companies have been interviewed to understand their perceptions and needs associated with existing architectural languages [55]. Architecture-based analysis is practiced by about 74 % of the interviewed practitioners, mostly for the verification of extra-functional properties. Still, on a different question, 63 % of the respondents expressed the need to (better) analyze architecture descriptions semantically.

Since the ABA topic is quite extensive, we will cover what we consider the most relevant analysis techniques applied at the architectural level, focussing on model checking techniques (Sect. 3.1), performance and reliability analysis techniques (Sect. 3.2), and fault tolerance (Sect. 3.3).

3.1 Model checking software architectures

Model checking [23] is a widely used formal verification technique whose aim is to analyze, through an exhaustive and automatic approach, a system's behavior with respect to selected properties.

In the context of software architectures, model checking consists in assessing whether an SA specification satisfies desired architectural properties. The input to this process is an SA behavioral specification and a set of SA properties. For each property, the model checker processes the SA inputs and returns “true” (in case the SA satisfies the property), or “false” (and in that case, it also returns a counter-example).
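As a purely illustrative sketch, not drawn from any of the tools cited below, the following Python fragment mimics this process on a toy client-server architectural specification: components are given as small labelled transition systems, the connector is rendered as synchronization on shared actions, and an exhaustive exploration of the composite state space either confirms an invariant or returns a counter-example trace. All element names, actions and the property itself are invented for illustration.

```python
from collections import deque

# Behavioural SA specification (invented): each element is a labelled
# transition system {state: [(action, next_state), ...]}.
CLIENT = {"idle": [("request", "waiting")], "waiting": [("reply", "idle")]}
SERVER = {"ready": [("request", "busy")], "busy": [("work", "done")],
          "done": [("reply", "ready")]}
ELEMENTS = [CLIENT, SERVER]
SHARED = {"request", "reply"}   # connector-mediated actions: all owners move together

def moves(state):
    """All composite moves enabled in a tuple of local states."""
    actions = {a for i, s in enumerate(state) for a, _ in ELEMENTS[i][s]}
    enabled = []
    for action in actions:
        if action in SHARED:
            targets, ok = list(state), True
            for i, s in enumerate(state):
                owns = any(a == action for trs in ELEMENTS[i].values() for a, _ in trs)
                if owns:
                    nxt = [t for a, t in ELEMENTS[i][s] if a == action]
                    if nxt:
                        targets[i] = nxt[0]
                    else:
                        ok = False      # an owner cannot synchronise right now
            if ok:
                enabled.append((action, tuple(targets)))
        else:
            for i, s in enumerate(state):
                for a, t in ELEMENTS[i][s]:
                    if a == action:
                        enabled.append((action, state[:i] + (t,) + state[i + 1:]))
    return enabled

def check(invariant, init=("idle", "ready")):
    """Exhaustive breadth-first exploration of the composite state space.
    Returns None if the invariant holds in every reachable state, otherwise
    the trace of actions leading to the first violation found."""
    frontier, seen = deque([(init, [])]), {init}
    while frontier:
        state, trace = frontier.popleft()
        if not invariant(state):
            return trace
        for action, nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, trace + [action]))
    return None

# Invented architectural property: the server is never busy while the client is idle.
violation = check(lambda s: not (s[0] == "idle" and s[1] == "busy"))
print("property holds" if violation is None else "counter-example: %s" % violation)
```

Real SA model checkers of course operate on far richer notations and property languages (e.g., temporal logics), but the input/output contract is the same: a behavioral specification, a property, and a verdict possibly accompanied by a counter-example.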

Sixteen model checking SA techniques have been classified and compared in [79]. Starting from Wright [4], considered to be the first seminal work on model checking SA, many other approaches have been proposed, including approaches for checking real-time systems properties (e.g., Tsai et al.'s approach [72], and the Fujaba [20] approach and tool for model checking distributed, embedded, and real-time systems), approaches for checking dynamically evolving and reconfigurable architectures (e.g., CHAM [25], PoliS [22], and ArchWARE [61]), approaches for model checking SA based on UML models (e.g., Bose [16], Charmy [62], and AutoFOCUS [2]), and approaches for checking concurrent and distributed architectures (e.g., Darwin/FSP [53] and SAM [43]).

By looking at how the model checking SA approaches evolved over time, we can notice that: (i) while earlier model checking SA approaches relied on formal textual notations, more recent ones make use of model-based notations, or a combination of model-based and formal textual notations; (ii) SA properties are typically specified using existing languages, even if some new formal languages or graphical abstractions have been proposed; (iii) while most of the model checking approaches make use of existing (general purpose) model-checking engines, some approaches (e.g., LTSA^8 and PoliSMC [22]) introduced their own SA-specific model checking engine; (iv) the most recent approaches provide a more comprehensive support for architectural element modeling and checking, enabling the specification and checking of properties related to components, connectors, interfaces, types and instances, semantics, and configuration.

8 See: http://www.doc.ic.ac.uk/ltsa/.

The first paper in this special issue, titled “An Architectural Approach to the Analysis, Verification and Validation of Software Intensive Embedded Systems”, deals with model checking based on an SA behavioral specification (see Sect. 6 for further information).

3.2 Performance and reliability analysis at the software architecture level

According to the above-cited survey [55], the interviewed practitioners identified the assessment of extra-functional properties as the most important reason for conducting architecture-based analysis. Here, we focus on the two extra-functional analyses that received the greatest attention from the SA community, namely architecture-based performance and reliability analyses.

Software performance engineering is a systematic, quantitative approach to constructing software systems that meet performance objectives [74]. When applying software performance engineering at the software architecture level, architectural models are enriched with performance annotations, with the purpose of either selecting the architectural solution that potentially yields the best performance, or measuring the performance of an actually developed system in terms of its components (in line with the two perspectives defined in Sect. 1).
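The following minimal sketch, which is our own illustration and not the method of any of the cited approaches, conveys the idea: two hypothetical architectural alternatives are annotated with invented per-component service demands, and an elementary M/M/1-based open queueing approximation is used to compare their end-to-end response times.

```python
# Two hypothetical architectural alternatives, annotated with per-request
# service demands (in seconds) of each architectural component.
ARCH_A = {"web_tier": 0.004, "app_server": 0.012, "database": 0.020}
ARCH_B = {"web_tier": 0.004, "app_server": 0.012, "cache": 0.002, "database": 0.008}

def response_time(demands, arrival_rate):
    """Very simple open queueing approximation: every component is treated as
    an M/M/1 queue, so its residence time is D / (1 - U) with U = arrival_rate * D;
    the end-to-end response time is the sum over the components."""
    total = 0.0
    for demand in demands.values():
        utilisation = arrival_rate * demand
        if utilisation >= 1.0:
            return float("inf")         # the component saturates at this load
        total += demand / (1.0 - utilisation)
    return total

for name, architecture in (("A", ARCH_A), ("B", ARCH_B)):
    rt = response_time(architecture, arrival_rate=30.0)
    print("architecture %s: %.1f ms at 30 requests/s" % (name, rt * 1000))
```

The cited approaches derive far more faithful models (e.g., layered queueing networks) from the architectural description, but the workflow is the same: annotate the architecture, solve a performance model, and use the predictions to choose among alternatives.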

Initial approaches for software performance engineering at the software architecture level were proposed in the late '90s. Williams and Smith [74] describe how to use software performance engineering techniques to perform an early assessment of an SA: more specifically, given the ICAD application, they consider three different architectural alternatives and informally demonstrate the importance of inter-component communication in determining performance. In [7] the authors describe how to derive a queueing network model from a Chemical Abstract Machine formal architectural specification, for providing a quantitative performance comparison among different architectural alternatives. Petriu and others [63–65] propose a formal approach for building layered queueing network performance models from the architectural patterns used in the system, by using the PROGRES graph rewriting system. In [63] the proposed approach is applied to a telecommunication product.

Other approaches have been presented in subsequent years. Balsamo et al. [8] survey model-based performance prediction approaches. They discuss software performance engineering approaches addressing early software performance analysis, and conclude that most of the approaches apply performance analysis at the architecture level. Another mainstream work recognizing the importance of SA for performance engineering is the Future of Software Engineering paper [77], where the authors acknowledge the need for automatic performance optimization of architecture, design and run-time configuration. More recently, Koziolek [48] surveys performance prediction and measurement approaches for component-based systems. Based on this work, state-of-the-art performance evaluation approaches for component-based systems are classified into five main categories: prediction approaches based on UML, prediction approaches based on proprietary meta-models, prediction approaches focussing on middleware, formal performance specification approaches, and monitoring approaches for system implementations.

Reliability can be seen as the ability of a system or component to perform its required functions under stated conditions for a specified period of time [1]. As discussed in [45], the reliability of a software architecture depends on three main factors: the reliability of individual components (e.g., implementation technology, size, and complexity), the component interactions (e.g., component dependencies and interactions), and the execution environment.

Goseva-Popstojanova and Trivedi [40] provide a survey of architecture-based approaches, categorizing them into state-based, path-based, and additive models. State-based models use the control flow to represent the system architecture, and the probabilities of transfer of control between components, to estimate the system reliability. In path-based models, the system reliability is computed by considering the possible execution paths of the program. Additive models, instead, focus on estimating the overall application reliability using the components' failure data. This study classifies state-of-the-art approaches and discusses the different models' assumptions, limitations and applicability.
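To give a flavour of the state-based family, the following sketch, our own toy example in the spirit of the models surveyed in [40] (component names, reliabilities and transfer probabilities are all invented), estimates system reliability as the probability that a run through the architectural control flow terminates successfully.

```python
# Invented architecture-level control flow: probabilities of transferring
# control between components, plus per-component reliabilities.
TRANSFER = {
    "UI":      {"Logic": 1.0},
    "Logic":   {"Storage": 0.6, "UI": 0.1, "Exit": 0.3},
    "Storage": {"Logic": 1.0},
    "Exit":    {},                      # absorbing state: successful termination
}
RELIABILITY = {"UI": 0.999, "Logic": 0.995, "Storage": 0.990, "Exit": 1.0}

def system_reliability(start="UI", sweeps=200):
    """Probability that a run starting at `start` reaches Exit with every
    visited component behaving correctly, computed by fixed-point iteration
    over the equations r_i = R_i * sum_j P[i][j] * r_j (with r_Exit = R_Exit)."""
    success = {component: 0.0 for component in TRANSFER}
    for _ in range(sweeps):
        for component in TRANSFER:
            if component == "Exit":
                success[component] = RELIABILITY[component]
            else:
                success[component] = RELIABILITY[component] * sum(
                    prob * success[target]
                    for target, prob in TRANSFER[component].items())
    return success[start]

print("estimated system reliability: %.4f" % system_reliability())
```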

A survey conducted by Immonen and Niemela in 2008 [45] compares state-of-the-art reliability and availability prediction methods from the viewpoint of SA. The approaches are classified into state- and path-based. State-based approaches are divided into composite (where architecture and failure behaviors are combined into a single model) and hierarchical (making use of approximate or accurate hierarchical models). In path-based approaches, the reliability of the software is a weighted average of the reliabilities of all the paths. The study identified as the main shortcomings at that time the lack of tool support, the lack of methods that consider variability in an explicit way, a weak validation of the methods and their results, and a limited ability to bridge the gap between reliability requirements and analysis.

Other relevant papers on the topic have been published since the surveys presented above. Among them, the work by Roshandel et al. [71] leverages standard SA models to predict the reliability of software systems by means of Bayesian models, the paper by Cheung et al. [21] presents a framework for predicting the reliability of individual software components during architectural design, and the article by Brosch et al. [19] presents an approach for reliability modeling and prediction of component-based software architectures by explicitly modeling the system usage profile and execution environment.

3.3 Architecting fault tolerant systems

Fault tolerance (together with fault prevention, removal, and forecasting) is one of the four means to attain dependability [6]. The essence of fault tolerance is in detecting errors and carrying out the subsequent system recovery actions to avoid service failures in the presence of faults. Advances made since the 1970s have included a plethora of fault tolerance mechanisms, a good understanding of the basic principles of building fault tolerant software, and the dedication of a considerable fraction of requirements analysis, run-time resources, development effort and code to ensuring fault tolerance.

Introducing fault tolerance at the architecting phase has the clear benefit of allowing developers to make good decisions, very early in the process, about what redundant resources to use, how to use them efficiently, and how to establish whether the chosen recovery strategy will be successful. A number of approaches, methods and styles/patterns for architecting fault tolerant systems have been surveyed and compared in [59].

In the last decade there has been a steady stream of research papers, mostly in dedicated and specific workshops and conferences. Between 2001 and 2003 the main focus was on architectural styles and patterns for fault tolerant architectures. Existing approaches can be classified according to different criteria. Some reuse a library of styles, as in [32], where a library of fault tolerance patterns is used in order to generate a fault-tolerant architecture, or in [52], where FT systems are architected by using a library of existing styles. Others introduce domain-specific fault-tolerant styles, as in [54], where the SOA style is modified in order to add fault tolerance mechanisms to SOA systems, or in [30], where a web service FT style is used in which a fault local to a web service is managed internally, but if the web service is unable to do so the failure is immediately propagated to a Global Fault Manager. Other approaches propose styles that support the idealised fault-tolerant component model, as in [9], where the Idealised Fault-Tolerant Component style (combined with role-based collaboration styles) is used to produce a dependable software architecture, or in [18], where the Idealised FT architectural component (iFTComponent), the Idealised FT architectural connector (iFTConnector), and, in general, the Idealised FT architectural element (iFTE) are introduced; the iFTE has its own style that prescribes the way components and connectors inside it are integrated. Others combine existing styles, as in [28], where the C2 and the Idealised Fault-Tolerant Component styles are combined to create an Idealised C2 Component style, or in [78], where the pipe-and-filter, repository and object-oriented styles are combined to create a fault-tolerant architecture style.
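As a rough, purely hypothetical rendering in code of the intuition behind the idealised fault-tolerant component (it does not reproduce the style definitions of [18] or [28]), the following sketch separates normal from abnormal activity and distinguishes locally handled faults from failure exceptions propagated to the enclosing architectural context.

```python
class ComponentFailure(Exception):
    """Failure exception signalled to the enclosing architectural element."""

class IdealisedFTComponent:
    """Hypothetical sketch of an idealised fault-tolerant component: normal
    activity and abnormal (exception handling) activity are kept separate,
    internally recoverable faults remain invisible to clients, and anything
    the component cannot handle is propagated as a failure exception."""

    def service(self, request):
        try:
            return self._normal_activity(request)
        except Exception as fault:
            return self._abnormal_activity(request, fault)

    def _normal_activity(self, request):
        if request < 0:
            raise ValueError("negative request")     # an internal fault
        return request * 2                           # normal behaviour

    def _abnormal_activity(self, request, fault):
        if isinstance(fault, ValueError):
            return 0                                 # local recovery: degraded reply
        raise ComponentFailure(str(fault))           # unrecoverable: propagate

component = IdealisedFTComponent()
print(component.service(21))    # normal activity -> 42
print(component.service(-3))    # fault handled locally -> 0
```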

Since 2004 the area has grown further, with new contributors joining in, a renewed interest in the topic (indicated by a larger number of published papers), and a new maturity of the research community, as demonstrated by the many journal and book publications produced. New formal and diagrammatic modeling languages for describing fault tolerant architectures have been introduced. For example, in [27] a new Architecture Description Language called MAMA-dl is defined, which combines a language for describing MAMA fault management architectures with a Fault-Tolerant Layered Queueing Network specification. In [17] the B-Method (used to specify architectural elements, interfaces and exception types) is combined with the CSP algebra (to specify architectural scenarios). In [33] an algebra of actors is extended with mechanisms to model and detect crash failures (of actors). In [18] (stereotyped) Component and Sequence diagrams are used to describe the four main idealised fault-tolerant architectural elements, enforcing the principles associated with the ideal fault-tolerant component model.

A number of analysis techniques have been proposed for analyzing fault tolerant architectures, ranging from model and consistency checking of FT SAs (e.g., in [29]), to unit, integration, conformance, and scenario-based testing (as in [26]), and performance analysis (as in [27]).

4 Architecture-based testing

Architecture-based testing (ABT) consists in using software architecture artifacts (e.g., software architecture specification models, architectural design decisions, architectural documentation) to select abstract (architecture-level) test specifications to be used for testing those systems implementing the selected architecture.

ABT is an instance of specification-based testing: an SA model describing the system's expected behavior and properties is used to generate test specifications (according to certain testing goals and coverage and adequacy criteria), which are successively refined into concrete test cases and executed on the running implementation. More specifically, the high-level tests are passed to the software tester, who has to (i) derive code-level tests corresponding to the specified architectural test sequences, and (ii) actually run the tests and observe the behavior of the current implementation.
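A minimal sketch of the derivation step, assuming a toy architectural interaction model and an all-transitions coverage criterion of our own choosing (it does not reproduce the criteria of [69] or the method of [57]), is the following: each architectural transition yields one abstract test sequence, which a tester would then refine into code-level tests.

```python
from collections import deque

# Toy architectural interaction model: transitions are labelled with
# connector-level interactions (all names invented for illustration).
TRANSITIONS = [
    ("S0", "client.request -> dispatcher", "S1"),
    ("S1", "dispatcher.forward -> worker", "S2"),
    ("S2", "worker.result -> dispatcher", "S3"),
    ("S1", "dispatcher.reject -> client", "S0"),
    ("S3", "dispatcher.reply -> client", "S0"),
]
INITIAL = "S0"

def shortest_prefix(target_state):
    """Shortest interaction sequence leading from INITIAL to `target_state`."""
    frontier, seen = deque([(INITIAL, [])]), {INITIAL}
    while frontier:
        state, path = frontier.popleft()
        if state == target_state:
            return path
        for source, label, target in TRANSITIONS:
            if source == state and target not in seen:
                seen.add(target)
                frontier.append((target, path + [label]))
    return None

def all_transitions_suite():
    """One abstract test sequence per architectural transition: a prefix that
    reaches the transition's source state, followed by the transition itself."""
    suite = []
    for source, label, target in TRANSITIONS:
        prefix = shortest_prefix(source)
        if prefix is not None:                 # skip unreachable transitions
            suite.append(prefix + [label])
    return suite

for number, sequence in enumerate(all_transitions_suite(), start=1):
    print("abstract test %d: %s" % (number, " ; ".join(sequence)))
```

Each printed sequence is only an abstract, architecture-level test specification; turning it into executable code-level tests with concrete inputs, drivers and oracles is precisely the refinement step described above.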

Following the two perspectives reported in Sect. 1, ABT can be used to test the architecture itself, or to test the system's implementation, either for integration purposes or for conformance to the expected behavior and design decisions expressed by the system's software architecture.

In general, deriving a functional test plan means identifying, on some reference model, those classes of behavior that are relevant for testing purposes. Since an architectural model points out architecturally relevant information, such as component and connector behaviors, the deployment and configuration of architectural elements, and architectural styles and constraints, architecture-based test cases will typically reveal (integration) failures related to those aspects.

By looking at the state of the art on ABT, we may recognize the following as the most relevant contributions to the field. In [12] the authors analyze the advantages of using SA-level unit and integration testing for the reuse of test results and for testing extra-functional properties. In [69] the authors propose a family of architecture-based (integration) test criteria based on the SA specification, adapting specification-based approaches. These two papers represent the seminal attempts in ABT. In [14] Bertolino et al. outline an automatable method for the integration testing of software systems based on a formal description of their software architecture. In [68] Richardson et al. present an architecture-based integration testing approach that takes into consideration architecture testability, simulation, and slicing. In [41] Harrold presents an approach for effective software architecture regression testing, and in [42] she also discusses the use of software architecture for testing. In [70] Rosenblum adapts his strategy for component-based systems testing to SAs; the author shows how formal models of test adequacy can be used in conjunction with architectural models to guide testing. In [11] the authors present an approach for deriving test plans for the conformance testing of a system implementation with respect to an architecture description, while in [13] the authors establish a relation between SA tests and concrete, executable tests. In [47] the authors propose a technique to test data flow and control flow properties at the architectural level: six architecture relations among architecture units are defined and then used to define architecture testing paths, and five architecture-level testing criteria are proposed. In [58] a framework for SA-based regression testing is presented, coping with two main types of evolution: architectural evolution and code evolution. The approach proposed by us in [57] still remains, to the best of our knowledge, the most comprehensive attempt to tackle the whole cycle of SA-based testing, spanning the whole spectrum from test derivation down to test execution, and relying on empirical hands-on experience with real-world case studies.

By looking at the state of the art on ABT from a temporal perspective, we can notice that: (i) while the preliminary papers were mostly introducing the ABT topic and focussing on selecting test cases, later approaches focus more on assessment criteria and test execution; (ii) architecture-related coverage criteria have been (almost) constantly proposed over time; (iii) regression testing has recently gained a certain attention from the community [58]; (iv) the testability topic, while initially analyzed in [41,68,69], has been silent in subsequent years. More recently, the Software Engineering Institute has been elaborating new testability profiles.^9

5 Challenges

In this paper we have briefly classified and summarized the relevant research proposals that have been conducted over the years on topics related to architecture-based analysis and testing. Still, much research is needed to transfer academic research into industrial practice, and to keep pace with the new requirements that modern systems expose.

Regarding the role played by software architecture in analysis and testing in practice, an initiative named Architecture Support for Testing (AST), conducted by the University of L'Aquila (Italy), ISTI-CNR Pisa (Italy), and the Software Engineering Institute (SEI), Pittsburgh (US), started in 2010. The main objective of AST is to collect industrial needs related to the use of system architecture descriptions for supporting the system testing process, and to provide solutions on the topic. The main AST research question is: "How can we use, or better use, architecture to help test our systems?". Within AST, an initial workshop was held by the SEI in Pittsburgh (in February 2011) with practitioners from over 15 companies. The output was a set of 29 important model problems^10 in architecture-based testing. These model problems were grouped into four main areas: (i) AST and requirements, (ii) AST and product lines, (iii) the scope of AST, and (iv) AST and integration testing. The model problems are available online.^11 Research solutions to those model problems have been discussed in subsequent meetings (as already listed in Sect. 1).

9 See: http://www.sei.cmu.edu/architecture/research/archpractices/More-Architecture-Support-for-Testing.cfm.
10 A model problem is a problem that, if solved, would result in a significant improvement over current testing practice.
11 See: http://labsewiki.isti.cnr.it/projects/ast/ast2011pisa/main.

The already discussed study conducted in [55], although not directly related to the role of software architecture in testing and analysis (but rather to the needs and challenges perceived by practitioners when using existing architectural languages), has nevertheless provided useful information. 63 % of the respondents reported the need to analyze architecture descriptions semantically. The analyses named in the answers are: (i) data flow analysis, (ii) run-time dependencies analysis, (iii) performance, (iv) scalability, (v) security, (vi) requirements analysis (from either informal specifications or UML models), (vii) simulation, (viii) finding design flaws, (ix) change impact analysis, and (x) cost/value analysis. Analysis also scores third among the perceived needs, following design and communication. However, analysis is not only a need, but is also practiced by about 74 % of the interviewed practitioners. Further, the top-most reason for analysis is to check extra-functional properties (about 48 %), followed by behavioral concerns (about 24 %). The respondents that do not carry out any analysis (about 26 %) in most cases use the architectural-language-based model for documentation purposes only. Motivations for not carrying out analysis are: no value perceived (about 44 %), architectural languages (AL) too limited/imprecise (about 44 %), and lack of skills, competencies or available resources (about 11 %).

Shifting our focus from current software systems and practice to the new emerging paradigms in systems development, the most relevant challenges concern: (i) analysis and testing of dynamic and adaptive systems, (ii) analysis and testing of systems under uncertainty, (iii) analysis and testing of systems in the cloud, (iv) analysis and testing of mobile-enabled systems, and (v) analysis and testing of secure trustworthy systems. In these domains it is interesting to see whether SA can still play a role. Indeed, emerging work in the considered areas shows that the use of architectural concerns can be even more evident and profitable.

Analysis and testing of dynamic and adaptive systems: In the last few years a lot of research has been focussing on dynamically evolving and adaptive systems, and specifically on the analysis and design of their architectures. Indeed, the architectural abstraction is the favorite one for modeling a system's changes and adaptations [39,49]. Challenges in this domain concern the possibility of using the architecture to constrain the scope of change within well-defined boundaries (e.g., not allowing a run-time change that has the potential to undermine the system quality), thus making the system's analysis easier. The second paper in this special issue, titled "Architecture-Based Resilience Evaluation for Self-Adaptive Systems", deals with the evaluation of alternative adaptation solutions for a given self-adaptive system (see Sect. 6 for further information).

Analysis and testing of systems under uncertainty: Uncertainty consists in partial ignorance and incomplete or conflicting information [73]. While for a long time uncertainty has been (sort of) ignored, assuming that everything in our systems was fully specifiable and that systems could be engineered to be trouble-free [38], today's research points out the need to formally model the uncertainty that modern systems bring with them due to their ubiquitous nature. SA is an important source of uncertainty, and the scope of uncertainty includes not knowing the exact impact of architectural alternatives on properties of interest [34]. On the other hand, SA can also be a way to control the uncertainty brought by the components or by the execution environment, thus permitting the guarantee of properties of interest [5]. The questions to be answered in future work are: (i) how to use architectural specifications embracing uncertainty for testing the system-to-be? (ii) how to verify the architecture's conformance to properties/requirements in the presence of uncertainty? (iii) how to predict software quality from an architectural specification in the presence of uncertainties? (iv) how to make architectural design decisions taking uncertainties into account?

Analysis and testing of systems in the cloud: Cloud computing enables on-demand access to distributed resources. When architecting cloud-based applications, new requirements such as elasticity, multi-tenancy, and application independence introduce new challenges in architecture description, analysis and testing. Initial work on how to model the architecture of cloud-based applications has been presented in [35], but analysis and testing of such architectures is still to be developed. The questions to be answered in future work are: (i) how to reflect the peculiarities of cloud-based applications in architectural languages? (ii) what are the new challenges when analyzing and testing the architecture of cloud-based applications? (iii) how to test cloud-based applications in the presence of elasticity?

Analysis and testing of mobile-enabled systems: Modern applications make more and more use of things, that is, static or mobile computational units with variable computational power and resources, always connected, and typically able to sense data from the surrounding environment. New scenarios arise when architecting what we will refer to as mobile-enabled systems^12: (i) resources can appear and disappear depending on the computational environment, (ii) computation can be off-loaded to other nodes, (iii) context-aware information can be collected and aggregated, (iv) the communication strength varies in unpredictable ways, and so on. All these scenarios impose new challenges when architecting mobile-enabled systems.

12 See: http://www.sei.cmu.edu/community/mobs2013/.

Analysis and testing of secure trustworthy systems: In current distributed pervasive systems, security and trust emerge among the most crucial qualities that a software architecture should guarantee. Security is a multi-faceted concern, involving all layers of a system, and cannot be dealt with as an afterthought. System designers should take security principles into account from the architectural stages: as plainly stated by McGraw, "testing that doesn't take the software architecture into account probably won't cover anything interesting about software risk" [56]. Concerning architectural analysis, research is active on approaches for performing risk-based analysis (e.g., [51]) and formal verification [44] of security-related properties. On the other hand, trust is not a static property that is gained once and for all, but should be continuously reinforced by moving the testing phase on-line: in the recently concluded European project "Trusted Architecture for Securely Shared Services" (TAS3)^13, a trusted architecture preserving personal privacy and confidentiality in dynamic environments has been developed, which embeds mechanisms for on-line functional testing [10]. Challenging open questions remain, such as how to assess security properties in a compositional way, and how to mitigate side-effects in on-line testing for trustworthy architectures.

13 The project page is maintained at: http://vds1628.sivit.org/tas3/.

6 Introduction to the selected papers

This special issue comprises two papers that resulted from the selection of six submissions. Those submissions responded to an open call that was widely advertised in the software engineering community. All the papers underwent a rigorous and selective review process. We want to take this opportunity to thank all the authors and the reviewers for their hard work.

The two selected papers provide a rather comprehensive picture of the many dimensions we mentioned earlier in this editorial, including the more futuristic ones.

The first paper is "An Architectural Approach to the Analysis, Verification and Validation of Software Intensive Embedded Systems", by DeJiu Chen, Lei Feng, Tahir Naseer Qureshi, Henrik Lönn, and Frank Hagl. It describes a holistic development approach for embedded systems centered around the architecture and its description language. The approach makes effective use of architectural abstractions to analyze functional and non-functional properties of the system by exploiting the architectural description, and ensures its conformance to the system through systematic and rigorous development steps down to the code.

The second paper is "Architecture-Based Resilience Evaluation for Self-Adaptive Systems", by Javier Camara, Rogério de Lemos, Marco Vieira, Raquel Almeida, and Rafael Ventura. It addresses the new emerging domain of self-adaptive systems and proposes an architecture-based approach to compare different adaptation mechanisms of a self-adaptive software system. Besides proposing a new technique, the paper shows that changes at the architectural level are indeed relevant for the dependability of the system.

References

1. IEEE Standard Computer Dictionary (1991) A compilation of IEEE standard computer glossaries. IEEE Std 610
2. AutoFOCUS 3. Autofocus project. http://autofocus.in.tum.de/index.php/Main_Page, last visit: 2013
3. Abi-Antoun M, Aldrich J, Garlan D, Schmerl B, Nahas N, Tseng T (2005) Improving system dependability by enforcing architectural intent. In: Proceedings of the 2005 workshop on architecting dependable systems, WADS '05. ACM, New York, pp 1–7
4. Allen R, Garlan D (1997) A formal basis for architectural connection. ACM Trans Softw Eng Methodol 6(3):213–249
5. Autili M, Cortellessa V, Di Ruscio D, Inverardi P, Pelliccione P, Tivoli M (2012) Integration architecture synthesis for taming uncertainty in the digital space. In: Calinescu R, Garlan D (eds) Monterey workshop, volume 7539 of Lecture Notes in Computer Science. Springer, Berlin, pp 118–131


6. Avizienis A, Laprie J-C, Randell B, Landwehr CE (2004) Basic concepts and taxonomy of dependable and secure computing. IEEE Trans Dependable Sec Comput 1(1):11–33
7. Balsamo S, Inverardi P, Mangano C (1998) An approach to performance evaluation of software architectures. In: Proceedings of the 1st international workshop on software and performance, WOSP '98. ACM, New York, pp 178–190
8. Balsamo S, Di Marco A, Inverardi P, Simeoni M (2004) Model-based performance prediction in software development: a survey. IEEE Trans Softw Eng 30(5):295–310
9. Beder DM, Romanovsky A, Randell B, Rubira CMF (2001) On applying coordinated atomic actions and dependable software architectures in developing complex systems. In: 4th IEEE international symposium on object-oriented real-time distributed computing (ISORC'01), Magdeburg
10. Bertolino A, De Angelis G, Kellomaki S, Polini A (2012) Enhancing service federation trustworthiness through online testing. IEEE Computer 45(1):66–72
11. Bertolino A, Corradini F, Inverardi P, Muccini H (2000) Deriving test plans from architectural descriptions. In: Ghezzi C, Jazayeri M, Wolf AL (eds) ICSE. ACM, New York, pp 220–229
12. Bertolino A, Inverardi P (1996) Architecture-based software testing. In: Joint proceedings of the second international software architecture workshop (ISAW-2) and international workshop on multiple perspectives in software development (Viewpoints '96) on SIGSOFT '96 workshops, ISAW '96. ACM, New York, pp 62–64
13. Bertolino A, Inverardi P, Muccini H (2001) An explorative journey from architectural tests definition down to code tests execution. In: Müller HA, Harrold MJ, Schäfer W (eds) ICSE. IEEE Computer Society, pp 211–220
14. Bertolino A, Inverardi P, Muccini H, Rosetti A (1997) An approach to integration testing based on architectural descriptions. In: ICECCS. IEEE Computer Society, p 77
15. Bosch J (2000) Design and use of software architectures: adopting and evolving a product-line approach, 1st edn. Addison-Wesley Professional, Boston
16. Bose P (1999) Automated translation of UML models of architectures for verification and simulation using SPIN. In: Proceedings of the 14th IEEE international conference on automated software engineering, Cocoa Beach, pp 102–109
17. Brito PH, Lemos R, Rubira CM (2008) Development of fault-tolerant software systems based on architectural abstractions. In: Proceedings of the 2nd European conference on software architecture, ECSA '08. Springer, Berlin, pp 131–147
18. Brito PHS, de Lemos R, Rubira CMF, Martins E (2009) Architecting fault tolerance with exception handling: verification and validation. J Comput Sci Technol 24(2):212–237
19. Brosch F, Koziolek H, Buhnova B, Reussner R (2012) Architecture-based reliability prediction with the Palladio component model. IEEE Trans Softw Eng 38(6):1319–1339
20. Burmester S, Giese H, Hirsch M, Schilling D, Tichy M (2005) The Fujaba real-time tool suite: model-driven development of safety-critical, real-time systems. In: International conference on software engineering, ICSE, pp 670–671
21. Cheung L, Roshandel R, Medvidovic N, Golubchik L (2008) Early prediction of software component reliability. In: Proceedings of the 30th international conference on software engineering, ICSE '08. ACM, New York, pp 111–120
22. Ciancarini P, Franzé F, Mascolo C (2000) Using a coordination language to specify and analyze systems containing mobile components. ACM Trans Softw Eng Methodol 9(2):167–198
23. Clarke EM, Grumberg O, Peled DA (1999) Model checking. MIT Press, Cambridge
24. Clements P, Bachmann F, Bass L, Garlan D, Ivers J, Little R, Merson P, Nord R, Stafford J (2010) Documenting software architectures: views and beyond, 2nd edn. Addison-Wesley Professional, Boston
25. Corradini F, Inverardi P, Wolf AL (2006) On relating functional specifications to architectural specifications: a case study. Sci Comput Program 59(3):171–208
26. da Brito PHS, Rocha CR, Filho FC, Martins E, Rubira CMF (2005) A method for modeling and testing exceptions in component-based software development. In: Proceedings of the second Latin-American conference on dependable computing, LADC'05. Springer, Berlin, pp 61–79
27. Das O, Woodside CM (2004) Analyzing the effectiveness of fault-management architectures in layered distributed systems. Perform Eval 56(1–4):93–120
28. de Guerra PAC, Rubira CMF, Romanovsky A, de Lemos R (2004) A dependable architecture for COTS-based software systems using protective wrappers. In: de Lemos R, Gacek C, Romanovsky A (eds) Architecting dependable systems II, volume 3069 of Lecture Notes in Computer Science. Springer, Berlin, pp 144–166


29. de Lemos R (2004) Analysing failure behaviours in component interaction. J Syst Softw 71(1–2):97–115
30. Dialani V, Miles S, Moreau L, De Roure D, Luck M (2002) Transparent fault tolerance for web services based architectures. In: Proceedings of the 8th international Euro-Par conference on parallel processing, Euro-Par '02. Springer, London, pp 889–898
31. Dobrica L, Niemela E (2002) A survey on software architecture analysis methods. IEEE Trans Softw Eng 28(7):638–653
32. Domokos P, Majzik I (2005) Design and analysis of fault tolerant architectures by model weaving. In: Proceedings of the ninth IEEE international symposium on high-assurance systems engineering, HASE '05. IEEE Computer Society, Washington, DC, pp 15–24
33. Dragoni N, Gaspari M (2005) An object based algebra for specifying a fault tolerant software architecture. J Logic Algebr Program 63(2):271–297 (Special issue on process algebra and system architecture)
34. Esfahani N, Malek S, Razavi K (2013) GuideArch: guiding the exploration of architectural solution space under uncertainty. In: Proceedings of the 2013 international conference on software engineering, ICSE '13. IEEE Press, Piscataway, pp 43–52
35. Everton Cavalcante ALM, Batista T (2013) Cloud-ADL: an architecture description language for modeling cloud computing applications. In: Proceedings of the 7th European conference on software architecture, ECSA '13. Springer, Berlin, pp 320–323
36. Fairbanks GH (2010) Just enough software architecture: a risk-driven approach, 1st edn. Marshall and Brainerd
37. Fujaba Project (2005) http://www.cs.uni-paderborn.de/cs/fujaba/publications/index.html. University of Paderborn, Software Engineering Group
38. Garlan D (2010) Software engineering in an uncertain world. In: Proceedings of the FSE/SDP workshop on future of software engineering research, FoSER '10. ACM, New York, pp 125–128
39. Garlan D, Cheng S-W, Huang A-C, Schmerl BR, Steenkiste P (2004) Rainbow: architecture-based self-adaptation with reusable infrastructure. IEEE Comput 37(10):46–54
40. Goseva-Popstojanova K, Trivedi KS (2001) Architecture-based approach to reliability assessment of software systems. Perform Eval 45(2–3):179–204
41. Harrold MJ (1998) Architecture-based regression testing of evolving systems. In: Proceedings of the international workshop on the role of software architecture in testing and analysis—ROSATEA 98, pp 73–77
42. Harrold MJ (2000) Testing: a roadmap. In: Finkelstein A (ed) ACM ICSE 2000, the future of software engineering, pp 61–72
43. He X (2005) A framework for ensuring system dependability from design to implementation. In: Proceedings of the 3rd international workshop on modelling, simulation, verification and validation of enterprise information systems, MSVVEIS 2005, in conjunction with ICEIS 2005
44. Heyman T, Scandariato R, Joosen W (2012) Reusable formal models for secure software architectures. In: 2012 Joint working IEEE/IFIP conference on software architecture (WICSA) and European conference on software architecture (ECSA), pp 41–50
45. Immonen A, Niemela E (2008) Survey of reliability and availability prediction methods from the viewpoint of software architecture. Softw Syst Model 7(1):49–65
46. ISO/IEC/IEEE (2011) ISO/IEC/IEEE 42010:2011 Systems and software engineering—architecture description
47. Jin Z, Offutt J (2001) Deriving tests from software architectures. In: Proceedings of the 12th international symposium on software reliability engineering, ISSRE 2001, pp 308–313
48. Koziolek H (2010) Performance evaluation of component-based software systems: a survey. Perform Eval 67(8):634–658
49. Kramer J, Magee J (2007) Self-managed systems: an architectural challenge. In: Briand LC, Wolf AL (eds) FOSE, pp 259–268
50. Kruchten P (2004) An ontology of architectural design decisions in software intensive systems. In: 2nd Groningen workshop on software variability, pp 54–61
51. Kuz I, Zhu L, Bass L, Staples M, Xu X (2012) An architectural approach for cost effective trustworthy systems. In: 2012 Joint working IEEE/IFIP conference on software architecture (WICSA) and European conference on software architecture (ECSA), pp 325–328


52. Li J, Chen X, Huang G, Mei H, Chauvel F (2009) Selecting fault tolerant styles for third-party components with model checking support. In: Proceedings of the 12th international symposium on component-based software engineering, CBSE '09. Springer, Berlin, pp 69–86
53. Magee J, Kramer J, Giannakopoulou D (1999) Behavior analysis of software architectures. In: Proceedings of the 1st working IFIP conference on software architecture, WICSA, San Antonio
54. Mahdian F, Rafe V, Rafeh R, Rahmani AT (2009) Modeling fault tolerant services in service-oriented architecture. In: Proceedings of the 2009 third IEEE international symposium on theoretical aspects of software engineering, TASE '09. IEEE Computer Society, Washington, DC, pp 319–320
55. Malavolta I, Lago P, Muccini H, Pelliccione P, Tang A (2013) What industry needs from architectural languages: a survey. IEEE Trans Softw Eng 39(6):869–891
56. McGraw G (2006) Software security: building security in. Pearson Education, Inc
57. Muccini H, Bertolino A, Inverardi P (2003) Using software architecture for code testing. IEEE Trans Softw Eng 30(3):160–171
58. Muccini H, Dias MS, Richardson DJ (2006) Software architecture-based regression testing. J Syst Softw 79(10):1379–1396
59. Muccini H, Romanovsky A (2007) Architecting fault tolerant systems. Technical Report CS-TR-1051, Newcastle University
60. Olumofin FG, Misic VB (2005) Extending the ATAM architecture evaluation to product line architectures. In: Fifth working IEEE/IFIP conference on software architecture (WICSA 2005). IEEE Computer Society, Pittsburgh, pp 45–56
61. Oquendo F, Warboys B, Morrison R, Dindeleux R, Gallo F, Garavel H, Occhipinti C (2004) ArchWare: architecting evolvable software. In: Oquendo F, Warboys B, Morrison R (eds) Proceedings of the 1st European workshop on software architecture, EWSA 2004 (St Andrews, Scotland, UK), volume 3047 of Lecture Notes in Computer Science. Springer, Berlin, pp 257–271
62. Pelliccione P, Inverardi P, Muccini H (2009) Charmy: a framework for designing and verifying architectural specifications. IEEE Trans Softw Eng 35:325–346
63. Petriu D, Shousha C, Jalnapurkar A (2000) Architecture-based performance analysis applied to a telecommunication system. IEEE Trans Softw Eng 26:1049–1065
64. Petriu DC, Wang X (1998) Deriving software performance models from architectural patterns by graph transformations. In: Ehrig H, Engels G, Kreowski H, Rozenberg G (eds) TAGT, volume 1764 of Lecture Notes in Computer Science. Springer, Berlin, pp 475–488
65. Petriu DC, Wang X (1999) From UML descriptions of high-level software architectures to LQN performance models. In: Nagl M, Schürr A, Münch M (eds) AGTIVE, volume 1779 of Lecture Notes in Computer Science. Springer, Berlin, pp 47–62
66. Pooley RJ, Abdullatif AAL (2010) CPASA: continuous performance assessment of software architecture. In: Proceedings of the 2010 17th IEEE international conference and workshops on the engineering of computer-based systems, ECBS '10. IEEE Computer Society, Washington, DC, pp 79–87
67. Poort ER, van Vliet H (2011) Architecting as a risk- and cost management discipline. In: Proceedings of the 2011 ninth working IEEE/IFIP conference on software architecture, WICSA '11. IEEE Computer Society, Washington, DC, pp 2–11
68. Richardson DJ, Stafford J, Wolf AL (1998) A formal approach to architecture-based software testing. Technical report, University of California, Irvine
69. Richardson DJ, Wolf AL (1996) Software testing at the architectural level. In: ISAW-2, in joint proceedings of the ACM SIGSOFT '96 workshops, pp 68–71
70. Rosenblum D (1998) Challenges in exploiting architectural models for software testing. In: Proceedings of the international workshop on the role of software architecture in testing and analysis—ROSATEA
71. Roshandel R, Medvidovic N, Golubchik L (2007) A Bayesian model for predicting reliability of software systems at the architectural level. In: Overhage S, Szyperski CA, Reussner R, Stafford JA (eds) Software architectures, components, and applications, volume 4880 of Lecture Notes in Computer Science. Springer, Berlin, pp 108–126
72. Tsai JJP, Prasad Sistla A, Sahay A, Paul R (1997) Incremental verification of architecture specification language for real-time systems. In: Proceedings of the 3rd workshop on object-oriented real-time dependable systems (WORDS '97). IEEE Computer Society, Washington, DC, p 215
73. Walley P (1996) Measures of uncertainty in expert systems. Artif Intell 83(1):1–58
74. Williams LG, Smith CU (1998) Performance evaluation of software architectures. In: Proceedings of the 1st international workshop on software and performance, WOSP '98. ACM, New York, pp 164–177


75. Williams LG, Smith CU (2002) PASA^SM: a method for the performance assessment of software architectures. In: Proceedings of the 3rd international workshop on software and performance, WOSP '02. ACM, New York, pp 179–189
76. Woods E (2012) Industrial architectural assessment using TARA. J Syst Softw 85(9):2034–2047 (Selected papers from the 2011 joint working IEEE/IFIP conference on software architecture, WICSA 2011)
77. Woodside M, Franks G, Petriu DC (2007) The future of software performance engineering. In: 2007 Future of software engineering, FOSE '07. IEEE Computer Society, Washington, DC, pp 171–187
78. Yuan L, Dong JS, Sun J, Basit HA (2006) Generic fault tolerant software architecture reasoning and customization. IEEE Trans Reliab 55(3):421–435
79. Zhang P, Muccini H, Li B (2010) A classification and comparison of model checking software architecture techniques. J Syst Softw 83(5):723–744
