
Component based testing during the software development cycle



Master's Thesis in Telematics
for the Award of the Academic Degree
Diplom Ingenieur
at the
Graz University of Technology

Component based testing during the software development cycle

submitted by
Gerhard Fliess

Institute for Information Processing and Computer Supported New Media (IICM)
Graz University of Technology

September 2003

Supervisor: Univ.-Doz., Dipl.-Ing., Dr.techn. Klaus Schmaranz


Diplomarbeit aus Telematik
zur Verleihung des akademischen Grades
Diplom Ingenieur
an der
Technischen Universität Graz

Komponentenbasiertes Testen im Software-Entwicklungszyklus

vorgelegt von
Gerhard Fliess

Institut für Informationsverarbeitung und Computergestützte neue Medien (IICM)
Technische Universität Graz

September 2003

Begutachter: Univ.-Doz., Dipl.-Ing., Dr.techn. Klaus Schmaranz
Betreuer: Univ.-Doz., Dipl.-Ing., Dr.techn. Klaus Schmaranz


Acknowledgments

Many people have had a strong influence on who I am and are therefore, in one way or another, part of the reason that I was able to complete my studies with this thesis.

First of all I'd like to thank my parents for supporting my education and giving me the freedom to make my own decisions. My personal development over the last years was strongly shaped by the flat-sharing community I lived in and the friends I went sailing with. I also want to thank all the friends I made music with and those who shared many hours with me while discovering Graz at night. Thanks for your time and for the interesting discussions, sometimes late into the night.

At the university I have to thank Klaus Schmaranz for supervising this thesis and for giving courses of lasting value that strongly influenced my view of software development. Special thanks to Egon Valentini for joining the team that made CrashIt and this thesis possible.

Additional thanks go to Astrid for being so patient during the final editorial steps of this work.

Gerhard Fliess

Graz, October 3, 2003


I hereby certify that the work reported in this thesis is my own and that work

performed by others is appropriately cited.

Signature of the author:

Ich versichere hiermit wahrheitsgemäß, die Arbeit bis auf die dem Aufgabensteller bereits bekannte Hilfe selbständig angefertigt, alle benutzten Hilfsmittel vollständig und genau angegeben und alles kenntlich gemacht zu haben, was aus Arbeiten anderer unverändert oder mit Abänderungen entnommen wurde.


Abstract

Over the past ten years the IT community has learned how to design and implement highly reusable classes. This development was driven by the documentation of design patterns and the definition of the component based approach.

Now another aspect is becoming more and more important: do these classes behave as defined during the design phase?

Existing test-benches are based on the approach that test-classes must be implemented or that additional test code has to be written into the implementation. This code will contain roughly the same rate of errors as the implementation code, so a different way of defining test cases is useful to avoid these errors. Defining the tests descriptively, e.g. by writing an XML file, has the advantage that developers can specify a test in a language-independent format, which also simplifies compatibility tests. These files describe what should be done during the test without implementing the necessary steps, and they can be created before all features of the component under test have been implemented.

This thesis covers some theoretical topics on testing and software design and introduces a framework that is designed to fulfill the requirements of a modern software test-bench.


Kurzfassung

Die IT-Gemeinde hat in den letzten 10 Jahren gelernt, wie man vielfältig wiederverwendbare Klassen entwirft und implementiert. Das wurde durch die Definition der Design Patterns und das Aufkommen des komponentenbasierten Ansatzes verstärkt.

Nun wird ein weiterer Aspekt immer wichtiger, nämlich, ob sich die so entwickelten Klassen wirklich so verhalten, wie es das Design vorgibt.

Bestehende Testumgebungen verfolgen den Ansatz, dass spezielle Testklassen benutzt werden oder zusätzlicher Programmcode für die Tests implementiert werden muss. Diese zusätzlichen Programmzeilen haben dieselbe Fehlerrate wie der eigentliche Programmcode. Darum muss ein anderer, weniger fehleranfälliger Weg gefunden werden, Tests zu definieren. Die Definition der Testfälle in einer beschreibenden Art, z.B. als XML-Datei, hat den Vorteil, dass die Testfälle unabhängig von der Sprache beschrieben werden. Dieser Ansatz erleichtert Kompatibilitätstests. Die Testbeschreibungen können darüber hinaus schon erstellt werden, wenn noch nicht alle zu testenden Klassen implementiert sind. Sie beschreiben, was während des Tests zu tun ist, ohne die einzelnen Vorgänge zu implementieren.

Diese Arbeit behandelt einige theoretische Aspekte der Softwareentwicklung und des Testens und stellt ein Framework vor, das erstellt wurde, um die oben genannten Anforderungen an ein modernes Software-Testwerkzeug zu erfüllen.


Table of Contents

Acknowledgments
Abstract
Kurzfassung

1 Introduction

2 Motivation
  2.1 Starting Point
  2.2 Approach to a Solution
  2.3 Related Work
    2.3.1 jUnit
    2.3.2 iContract

3 Aspects of calls and contracts
  3.1 Method-calls
    3.1.1 Types of calls
    3.1.2 Components and States
    3.1.3 Method-call model
    3.1.4 Sequence of calls
  3.2 Contracts
  3.3 State-machines as sequence-models
    3.3.1 Limitations of state-machines
    3.3.2 Enhanced state-machines
  3.4 The categorization of contracts
    3.4.1 Linear Contracts
    3.4.2 Pseudo Linear Contracts
    3.4.3 Looped Contracts
    3.4.4 States and pre-conditions
    3.4.5 How to combine components

4 General software quality factors
  4.1 A small example
    4.1.1 Interfaces
    4.1.2 Implementations and usage of the framework
  4.2 Important Quality Factors
    4.2.1 External factors
    4.2.2 Internal Factors
  4.3 Design patterns
    4.3.1 Hook and HookUp
    4.3.2 Components
  4.4 Exceptions

5 Testing
  5.1 Goals for running tests
    5.1.1 Targets of test runs
  5.2 Test-design concepts
    5.2.1 Test principles
    5.2.2 Descriptive or declarative configurations
  5.3 Exception safety and robustness

6 CrashIt - A short introduction
  6.1 What is CrashIt?
  6.2 Main concept
  6.3 Testing topics covered by CrashIt
    6.3.1 Correctness
    6.3.2 Robustness
    6.3.3 Compatibility

7 Using CrashIt in the software design cycle
  7.1 Methodology concepts
  7.2 Development-models
    7.2.1 The waterfall model
    7.2.2 The spiral model
    7.2.3 Prototyping
    7.2.4 Extreme programming
  7.3 Using CrashIt in different models
    7.3.1 Which way is the best?

8 The Implementation of CrashIt
  8.1 Overall architecture
  8.2 Interfaces and design-decisions
    8.2.1 The CrashIt environment
    8.2.2 Accessing the configuration, the configuration-layer
    8.2.3 Using the framework
    8.2.4 Interfaces for test-cases
    8.2.5 The XML-subsystem
  8.3 Available applications
    8.3.1 Stand-alone Application
    8.3.2 Ant-task

9 Conclusion
  9.1 Personal experience
  9.2 Project related topics
  9.3 Outlook

Appendix

A The usage of CrashIt - a simple example
  A.1 Introduction
  A.2 A service example
    A.2.1 Parts of the framework
    A.2.2 Interfaces
    A.2.3 Implementations
  A.3 The CrashIt Configuration
    A.3.1 The Configuration file
    A.3.2 The Test-configuration File
    A.3.3 The Testcase files
    A.3.4 The Component Files
    A.3.5 The Connection Files
    A.3.6 The Contract Files
    A.3.7 Creating Objects - the Parameter-Converter
    A.3.8 Flowcontrol

Bibliography
Glossary
Index

List of Figures

3.1 Model of a deterministic call
3.2 Model of a nondeterministic call
3.3 A one-way state-machine
3.4 State-machine for permutations of calls
3.5 State-machine similar to a one-way state-machine
3.6 An enhanced state-machine
3.7 Linear contract
3.8 A pseudo linear sequence
3.9 A looped sequence
3.10 A call of a deterministic method
3.11 A call of a nondeterministic method
7.1 Elements of a methodology
7.2 The waterfall model
7.3 The spiral model
7.4 Life-cycle in XP
8.1 CrashIt modules
8.2 UML diagram of the environment classes
8.3 UML diagram of internal configuration-interfaces
8.4 UML diagram of the configuration tags (incomplete)

Listings

2.1 jUnit vector test
2.2 Implementation of a stack enhanced by iContract
8.1 CrashIt tag-library example
8.2 CrashIt as ant-task
A.1 Service interface
A.2 ServiceProvider Interface
A.3 Configurator
A.4 A sample application without a Configurator
A.5 A sample application using a Configurator
A.6 The main configuration file
A.7 A simple configuration without Flowcontrol
A.8 A Flowcontrolled configuration
A.9 A simple Testcase-sequence without Flowcontrol
A.10 A Testcase-sequence using Flowcontrol
A.11 Calling a method
A.12 Description of a component
A.13 The connections config file
A.14 Connections file with direct connections
A.15 A connection that uses a Contract
A.16 Definition of the service-Contract
A.17 Definition of the used Contracts
A.18 Converter property file


Chapter 1

Introduction

What would life be if we had no courage to attempt anything?

Vincent van Gogh

If someone is about to solve a problem by writing an application, it is important to write down the problem and work out its main aspects. After the problem has been specified in this way, it is useful to check whether someone else has solved it before. Sometimes good solutions are found that can easily be adapted, but the available solutions do not always cover all important aspects. This is the time to start your own implementation.

I stumbled over the problem of testing applications that are implemented with a component-based approach during a lecture held by Klaus Schmaranz at the Graz University of Technology. During a seminar project two other colleagues and I tried to solve it, but we had to stop after defining the main aspects of the problem. We concluded that no existing solution covers all our requirements or is extensible enough to be adapted so that it could fulfill them.

One exciting year later the first prototype of a new test-bench was released. Egon Valentini and I had implemented it as a core framework which fulfills most of our requirements and which is extensible enough to form the basis of more complex test-benches. It provides a more general approach to defining test-cases than other frameworks and includes interfaces for implementing custom configurations, which can thus be integrated into larger software development applications.

Many people associate "testing" with proving that an application runs correctly. In fact, testing is a destructive process with the aim of finding bugs and thus crashing an application in a secure testing environment. Using this definition as a starting point, the created test tool is supposed to motivate programmers to detect more bugs and finally to create better programs. In line with these ideas it was named CrashIt.

Integrating testing into the design process is one key factor for a successful test process and therefore essential for a reliable and stable product. CrashIt simplifies the integration of the test design into early phases of the development cycle.

There are several approaches to meeting the requirements of such test applications. However, no test application exists that could accompany engineers through the whole design and implementation cycle of a software project. At the suggestion of Klaus Schmaranz, Egon Valentini and I therefore created a tool that helps software developers to verify their designs and to test their implementations through an extensive and complete check. It includes both unit- and integration-tests in a single session. Furthermore, we realized that the idea behind this test tool has the potential to change the status of testing in the IT community: the design is based on the idea of raising testing from a neglected to a widely recognised activity.

CrashIt was designed and implemented in teamwork by Egon Valentini and me (Gerhard Fliess) between May 2002 and June 2003. The main modules of the framework were developed together. This thesis and Egon's thesis ([Val03]) are the results of our


teamwork. In his thesis Egon describes the design of the framework, the module interconnections (refer to [Val03, chapter 6, p.62]) and the implementation of the main modules and Flowcontrol (refer to [Val03, section 5.1.3, p.49]). Furthermore it contains a general introduction to CrashIt, a critical evaluation of the project and a look at the future of this application. In addition, he explains the main modules and some use-cases ([Val03, chapter 6, p.62]).

I was responsible for the runtime environment (also called CrashItEnvironment, see 8.2.1) and for a method to optimally describe tests through XML files, and I implemented the modules that are used by CrashIt to generate test reports, i.e. I wrote all parsers and document generation classes.

In my thesis I discuss the usage of CrashIt in different software-design models as well as the principles of testing, and I explain the runtime environment of this test tool. As these two theses cover the same software project, both cover some common topics. We shared some topics and wrote some general sections only once. This applies to chapters 1, 4, 5 and 6. Chapter 1 was written in equal parts by Egon and me; chapter 4 was mainly written by Egon, except for the sections on design patterns and exceptions. Chapter 5 was written by me, and chapter 6 includes only a few parts of the complete chapter, which can be found in Egon's thesis. Klaus Schmaranz supervised our theses and we agreed on this procedure after releasing the first version of CrashIt.


Chapter 2

Motivation

People forget how fast you did a job,

but they remember how well you did it.

Howard W. Newton

2.1 Starting Point

During the development of an application some phases are more interesting and challenging than others. Most developers prefer designing and implementing an application to writing documentation or installation manuals. The most condemned task is testing the application.

The starting point, which is an invitation to take a critical look at testing, is excellently described by Kent Beck:

Testing Strategy: Oh yuck. Nobody wants to talk about testing. Testing is the ugly stepchild of software development. The problem is, everybody knows that testing is important. Everybody knows they don't do enough testing. [...] Remember the principle "Work with human nature, not against it." That is the fundamental mistake in the testing book I've read. They start with the premise that testing is at the center of development. You must do this test and that test and oh yes this other one, too. If we want programmers and customers to write tests, we had better make the process as painless as possible, realizing that the tests are there as instrumentation, and it is the behavior of the system being instrumented that everyone cares about, not the tests themselves. [...]

Tests are most valuable when the stress level rises, when people are working too much, when human judgment starts to fail. So the tests must be automatic - returning an unqualified thumbs up/thumbs down indication of whether the system is behaving. [Bec00, Chap 18]

In other words, Kent Beck explains that software engineering needs tools

• to test parts of an application as well as whole applications automatically,

• resulting in the generation of a test report.

Moreover, there is also a need for

• a tool which developers may use during the whole life-cycle of a software project,

• computer-aided generation of test-cases from a specification [Bec00], and

• testing using contracts [Mey97, Chap. 12-14].

2.2 Approach to a Solution

In order to provide a solution to this situation, CrashIt was created. It consists of a Java framework which can automatically test Java components on the basis of test-cases. After all tests have been run, it generates a test report.


In CrashIt an automatic test does not cover the generation of test-cases. Automatic test means in this case that CrashIt runs a test procedure composed of:

1. testing single components,

2. loading and connecting components via Contracts and testing the root component (Contracts are used to check whether conditions between two components hold during communication),

3. loading and combining components into a complete application and trying to find bugs there, too.

The user has to specify the test-cases before running a test. The point of writing test-cases manually is that the test engineer gets a feeling for which tests are useful and which are futile. For example, if a method computes the addition of two values, it would be useless to test this method for every permutation of these two values. Appropriate tests instead consist in inspecting the method and trying to find those inputs where the addition fails, e.g. value1 = maxInteger and value2 = maxInteger: if the method returns an integer value, the addition of value1 and value2 produces an overflow.

The goal of designing appropriate test-cases is understanding and scrutinizing a system in terms of correctness, robustness, extendibility, compatibility and performance (see also section 4.2). CrashIt meets these premises with extended possibilities for formulating test-cases. It also provides a mechanism to group test-cases into simple or complex sequences, so-called Testcase-sequences. Not only single test-cases but also sequences of test-cases can crash a system.

Moreover, CrashIt has the capability to organize unit- and integration-tests in one test run. Well designed systems can be split up into several parts, so-called modules, that possibly consist of one or more components. To perform a detailed test, each component and all possible combinations of them have to be checked. CrashIt can test all components at once and further run tests on connected components. The connections between the components must be specified in the configuration, and CrashIt will create the connections before the test is started.

A new approach to creating modular software which is moreover correct and robust consists in defining contracts between components. This approach is also known as Design by Contract. The advantage of Design by Contract is that most components shrink, because the predefined conditions guarantee that some cases cannot occur when the component is executed; supplementary checks can consequently be dropped. As the program shrinks, it becomes easier to keep an overview of its components and to maintain the code (see also [Val03, chapter 4, p.34] and [Mey97]). During a test CrashIt supports Design by Contract through the possibility to insert Contracts between components. These Contracts are simple Java classes which have to fulfill some simple precepts caused by the fact that Java does not support multiple inheritance from classes. Contracts have to:

• implement the Contract interface, and

• either implement the interfaces of both components,

• or be derived from one component and implement the interface of the second one.
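The precepts above can be sketched as follows. Note that the interface names Service and Contract and the checked conditions are invented for this example and are not CrashIt's actual API:

```java
// Hypothetical component interface (not part of CrashIt).
interface Service {
    int handle(int request);
}

// Hypothetical marker interface standing in for CrashIt's Contract interface.
interface Contract { }

// A Contract between a caller and a Service: it implements both the Contract
// interface and the component's interface, and checks conditions on every call.
class ServiceContract implements Contract, Service {
    private final Service target;

    ServiceContract(Service target) { this.target = target; }

    public int handle(int request) {
        if (request < 0) // pre-condition the caller must fulfill
            throw new IllegalArgumentException("request must be non-negative");
        int result = target.handle(request);
        if (result < request) // post-condition the component must fulfill
            throw new IllegalStateException("result must not be smaller than request");
        return result;
    }
}

public class ContractDemo {
    public static void main(String[] args) {
        // The contract is inserted between the two parties; Service happens to
        // be a functional interface here, so a lambda can act as the component.
        Service checked = new ServiceContract(r -> r + 1);
        System.out.println(checked.handle(41)); // prints 42
    }
}
```

Because the contract object itself satisfies the component's interface, it can be slipped between two components without either of them noticing, which is exactly what allows it to be removed again from a packaged application.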

CrashIt is able to load Contracts dynamically. It is possible to deliver them for additional tests, but in general they are not part of a packaged application.

Finally, a very important requirement is the ability to compare test results with expected results. This is an easy task for simple result types like integer, whereas it quickly becomes a complex job if user-defined types have to be validated. The approach implemented in CrashIt consists of special Result-checker classes. These are Java classes that can be extended for a user's own purposes; Result-checker classes for simple types are already implemented.
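A minimal sketch of this idea, assuming a checker consists of nothing more than a comparison method (the names ResultChecker, EqualityChecker and ToleranceChecker are ours, not CrashIt's actual class names):

```java
import java.util.Objects;

// Base class a user extends for custom result comparisons (hypothetical name).
abstract class ResultChecker {
    abstract boolean matches(Object actual, Object expected);
}

// Sufficient for simple types such as integers and strings.
class EqualityChecker extends ResultChecker {
    boolean matches(Object actual, Object expected) {
        return Objects.equals(actual, expected);
    }
}

// A user-defined checker: floating-point results compared up to a tolerance.
class ToleranceChecker extends ResultChecker {
    private final double eps;
    ToleranceChecker(double eps) { this.eps = eps; }
    boolean matches(Object actual, Object expected) {
        return Math.abs((Double) actual - (Double) expected) <= eps;
    }
}

public class CheckerDemo {
    public static void main(String[] args) {
        System.out.println(new EqualityChecker().matches(42, 42));              // true
        System.out.println(new ToleranceChecker(1e-6).matches(0.1 + 0.2, 0.3)); // true, despite rounding
    }
}
```

The point of the design is that the framework only ever sees the base type, so arbitrarily complex user types can be validated by plugging in a new subclass.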


2.3 Related Work

As the topic of writing stable and reliable software is getting more and more important, several approaches for testing and designing such applications are available. Some software-development models have increased the importance of testing by making it an integral part of the process; e.g. eXtreme Programming forces developers to implement tests as the first step in a development cycle.

CrashIt is therefore not the first approach to supporting developers in writing stable and reliable applications. jUnit and iContract are two interesting projects that deal with these topics in different ways.

2.3.1 jUnit

jUnit is a small but powerful framework for writing repeatable tests that has been designed by Erich Gamma and Kent Beck [jUn03]. It consists of two main classes: TestCase and TestSuite. Both implement the same interface Test, which allows a test to be started by invoking the method run.

A TestCase performs one or more tests by invoking method calls and comparing the results of these calls with expected values. Concrete TestCases can be implemented by extending the class TestCase and implementing methods whose names start with test. If a concrete TestCase class is applied to the framework, jUnit searches for those specially named methods and invokes them. A test can succeed, cause a failure (an assertion does not hold) or cause an error (an unexpected exception occurs). The result of a test run can be formatted by a TestFormatter and used as a report of the applied tests.

A TestSuite can combine and start several TestCases. The method that is used to add a TestCase requires an instance of type Test, and therefore a TestSuite can also combine several TestSuites. This simple design allows a flexible structure of test-classes. There are also methods for setting up and shutting down tests, so that it is possible to initialize the environments they need.


package junit.samples;

import junit.framework.*;
import java.util.Vector;

public class VectorTest extends TestCase {
    protected Vector fEmpty;
    protected Vector fFull;

    public static void main(String[] args) {
        junit.textui.TestRunner.run(suite());
    }

    protected void setUp() {
        fEmpty = new Vector();
        fFull = new Vector();
        fFull.addElement(new Integer(1));
        fFull.addElement(new Integer(2));
        fFull.addElement(new Integer(3));
    }

    public static Test suite() {
        return new TestSuite(VectorTest.class);
    }

    public void testCapacity() {
        int size = fFull.size();
        for (int i = 0; i < 100; i++)
            fFull.addElement(new Integer(i));
        assertTrue(fFull.size() == 100 + size);
    }

    public void testContains() {
        assertTrue(fFull.contains(new Integer(1)));
        assertTrue(!fEmpty.contains(new Integer(1)));
    }

    public void testElementAt() {
        Integer i = (Integer) fFull.elementAt(0);
        assertTrue(i.intValue() == 1);

        try {
            fFull.elementAt(fFull.size());
        } catch (ArrayIndexOutOfBoundsException e) {
            return;
        }
        fail("Should raise an ArrayIndexOutOfBoundsException");
    }

    public void testRemoveAll() {
        fFull.removeAllElements();
        fEmpty.removeAllElements();
        assertTrue(fFull.isEmpty());
        assertTrue(fEmpty.isEmpty());
    }
}

Listing 2.1: jUnit vector test

This approach is simple to use and can be learned easily. Some development tools

integrate wizards for defining and applying jUnit tests on applications. jUnit is

widely used and there are various extensions for testing web or J2EE (Java 2 Enterprise Edition) applications. A small example, which is part of the sample package of jUnit, is shown in Listing 2.1 on page 9.


Benefits and Weaknesses

We claim that the source code of test-classes contains a nearly equal density of defects: test-classes written in this way are as buggy as the original code. In a typical project at least as much test code as program source code will be written, and therefore this approach might be dangerous. It is true that the test-classes are run often, but how often will they be reviewed?

A solution to this problem is to define the tests in some way other than programming test classes, e.g. in XML files. A further aspect is that it is safer to specify what should be done in a test than to implement it. This approach may impose some limitations on the test design, but if the configuration possibilities are flexible enough, this should not be a real restriction. The definition of a test in a separate, non-Java source file also has the advantage that it can be created automatically or serve as the source for an automatically generated test report.

jUnit is a well-established framework with a huge number of related projects. Another advantage of jUnit is that it can be integrated easily into an IDE (Integrated Development Environment), as has already been done for Eclipse1.

2.3.2 iContract

iContract is a Java tool that provides developers with support for Design by Con-

tract(DbC). DbC refers to the concept of considering the interfaces between system

components as contracts that are specified as integral parts of the sourcecode. Until now, the explicit specification of contracts by means of class invariants and message pre- and postconditions has been available only for Eiffel and some formal specification languages like VDM2. iContract is a prototype tool that provides similar support for Java. It enables developers to take advantage of the following benefits:

1 Eclipse, see online http://www.eclipse.org

2 see online http://www.ifad.dk/Products/vdmtools.htm


1. support of design for testability by enhancing the system’s observability (fail-

ure occurs close to fault),

2. uniform implementation of invariant-, pre- and postcondition checks among

members of your team,

3. documentation and code are always in sync and

4. semantic level specification of what requirements/benefits a class/message of-

fers.

iContract is a freely available sourcecode preprocessor, which instruments the code

with checks for class invariants, pre- and postconditions that may be associated with

methods in classes and interfaces. Special comment tags (e.g. @pre, @post) are in-

terpreted by iContract and converted into assertion check code that is inserted into

the sourcecode. The semantics of iContract also include quantifiers (forall, exists) to specify properties of enumerations, implications, old- and return-value references in postconditions, as well as the naming of exception classes to throw. iContract

supports the propagation of invariants, pre- and postconditions via inheritance and

multiple interface implementation, as well as multiple interface extension mecha-

nisms. Due to the non-mandatory nature of the comment tags, source code that

contains DbC annotations remains fully compatible with Java and can be processed

with standard Java compilers enabling a risk-free adoption of the technique. [Kra01]

A small example:

/**
  @inv (top >= 0 && top < max)
*/
class MyStack
{
  private Object[] elems;
  private int top, max;

  /**
    @pre  (sz > 0)
    @post (max == sz && elems != null)
  */
  public MyStack(int sz)
  {
    max = sz;
    elems = new Object[sz];
  }

  /**
    @pre  !isFull()
    @post (top == $prev(top) + 1) && elems[top-1] == obj
  */
  public void push(Object obj)
  {
    elems[top++] = obj;
  }

  /**
    @pre  !isEmpty()
    @post (top == $prev(top) - 1) && $ret == elems[top]
  */
  public Object pop()
  {
    return elems[--top];
  }

  /**
    @post ($ret == (top == max))
  */
  public boolean isFull()
  {
    return top == max;
  }

  /**
    @post ($ret == (top == 0))
  */
  public boolean isEmpty()
  {
    return top == 0;
  }
} // End MyStack

Listing 2.2: Implementation of a stack enhanced by iContract
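To see what the preprocessing step amounts to, the following hand-written sketch shows roughly what instrumented code for push() might look like. It is an illustration of the idea, not iContract's actual generated output; the invariant here uses top <= max so that a full stack is legal:

```java
// Hand-written sketch of compiled @pre/@post/@inv checks.
// The real iContract output differs; this only shows the principle
// of turning comment tags into runtime assertion code.
class CheckedStack {
    private Object[] elems;
    private int top, max;

    CheckedStack(int sz) {
        if (!(sz > 0)) throw new RuntimeException("@pre violated: sz > 0");
        max = sz;
        elems = new Object[sz];
        if (!(max == sz && elems != null)) throw new RuntimeException("@post violated");
        checkInvariant();
    }

    void push(Object obj) {
        if (isFull()) throw new RuntimeException("@pre violated: !isFull()");
        int prevTop = top;                        // corresponds to $prev(top)
        elems[top++] = obj;
        if (!(top == prevTop + 1 && elems[top - 1] == obj))
            throw new RuntimeException("@post violated");
        checkInvariant();                         // @inv checked after every call
    }

    boolean isFull()  { return top == max; }
    boolean isEmpty() { return top == 0; }

    private void checkInvariant() {
        if (!(top >= 0 && top <= max)) throw new RuntimeException("@inv violated");
    }
}
```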

Benefits and Weaknesses

iContract compiles pre-, postconditions and invariants into assertion checks that are added to the original program. This additional code slows an application down slightly, so that time-critical sections may fail.

Specifying conditions or invariants can become a complex task when user-defined types are involved or when a check requires complicated calculations. The syntax of iContract is not as powerful as Java or Eiffel, so it does not permit unlimited possibilities for expressing conditions. An awkward workaround would be to implement the checks in ordinary Java methods and then call these methods from the condition expressions in iContract.

A benefit of iContract is the small effort required to implement conditions, because they can simply be expressed in comment tags.


As shown in the previous sections, both projects have their benefits and weaknesses. Both are interesting projects that are useful in many cases. However, they are not able to address all requirements that led to the implementation of CrashIt.

CrashIt is designed to combine unit and integration tests with the concept of contracts. This was done in a way that allows a smooth integration of these concepts into existing development processes.

One topic of this thesis is how CrashIt can be used and how it fits into different development models. This will be explained in later chapters, and an extensive example of its usage can be found in the appendix (see page 88). The next sections will address some theoretical concepts in order to change the developers' view on software design.


Chapter 3

Aspects of calls and contracts

If you are in a hurry, take the longer way.

Lao Tse

Sometimes it makes sense to lean back and think of concepts that are used over and

over again to get a new view on them. Even reinventing things often opens a new

angle and helps to gain a deeper understanding of the related topics.

This chapter tries to open a new view on method-calls and contracts to facilitate

developers in making their design decisions.

3.1 Method-calls

Calling a method of a component is a very basic process, but it is not as simple as it

seems. Each method must be called by using the correct parameters and will often

return a value or an object reference.

There are a lot of possible reasons why a call may cause an error. Some of these

reasons, like an incorrect signature, can be checked by the compiler. Other reasons

cannot be checked by the compiler because it is not possible to specify them in the


chosen language. For example, there can be dependencies between the parameters, such as two integer parameters where the first must be greater than the second. Other reasons for an error during a call can be a busy device, an incorrectly initialized object, a reference to a deleted object, etc. The only thing a developer can do is specify the behavior of the object when the call cannot be finished successfully. The behavior is defined by an exception, so that it is possible to react to this error.

Some methods are very simple or do not use any critical resources, so that it is not possible for an error to occur during a call.

3.1.1 Types of calls

According to this view there are two types of method-calls: calls that will terminate successfully in any case and method-calls that can be interrupted because something goes wrong. To make it easier to distinguish between these types, they will be named deterministic call and nondeterministic call.

Developers often concentrate on the deterministic calls and forget to define the be-

havior of the nondeterministic ones. Both method types are equally important for

the design process and also for testing the software.

It is important to accept an exception as a valid result of a call. The difference between an exception and the result of a successful call is that the execution of the program will follow a different path. A call is invalid if an error occurs but is not reported by an exception; this often causes a crash of the whole application.

3.1.2 Components and States

One basic assumption is that each component has internal states. Developers often

do not recognize these states and, consequently, there is no representation of these

states in the program. Each component has at least two states, justCreated and

deleted. Depending on the functionality there may be only these two states or a

lot of other states between them. Sometimes flags or status variables are used to


store state information. Even when they are implemented, they cannot be used for testing, as there can be a mistake in the code that is used to set the state information, so everything seems to be right although an error has occurred. That is why the only reliable way of testing is to monitor the behavior of the class.

Before describing the behavior of a component by defining a sequence of calls we

should take a closer look at method-calls and define a model for them.

3.1.3 Method-call model

Calling a method changes the internal state of a component. This can result in a

new state or the component can stay in the same one. During the call the compo-

nent changes into an intermediate state in which the component calculates the result

of the method call. Sometimes it is necessary to consider this state, but often the

component stays in this state for a very short time and therefore it can be ignored.

When a method-call is blocking, this state must be considered. A further example for considering this intermediate state is when the method solves a concurrency problem. In this case the method has to be defined as synchronized, which means that another call of this method cannot be made as long as the execution of the method has not been finished.
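A minimal sketch of this synchronized case, with an invented Device class: while one thread executes work(), the component remains in its intermediate state, and the synchronized modifier keeps a second call from entering until the first has finished:

```java
// Illustrative component (not from the thesis): the busy flag marks
// the intermediate state entered while work() executes. Because both
// methods are synchronized, a second thread calling work() must wait
// until the first call has left the intermediate state.
class Device {
    private boolean busy = false;

    synchronized void work() throws InterruptedException {
        busy = true;            // intermediate state entered
        Thread.sleep(10);       // simulate the calculation of the result
        busy = false;           // intermediate state left
    }

    synchronized boolean isBusy() { return busy; }
}
```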

Another important point is that during the execution of a method an error can oc-

cur. This will probably cause the component to switch into a state that cannot be

directly reached from the original state. This seems to be a needlessly complicated

view of method calls, but this model allows a simpler test-framework and it is im-

portant to understand this view of method-calls to work with the test-framework.

The test-framework becomes simpler because this model defines deterministic state-

machines for this nondeterministic problem of calling methods.

Here are some rules for understanding the following graphics and the more enhanced

ones in later chapters.


• circle: Equals a state.

• line with arrow: Equals a transition between two states. The arrow shows the direction and points at the result state.

• solid line: Equals a normal transition. This transition is the reason why the

method was written.

• dotted items: Denote internal intermediate states or transitions that are used

by the model. Transitions caused by events like exceptions must be specified

for error handling in order to make it possible to test the correct behavior.

• rectangle: Equals an event, like a method-call or the occurrence of an excep-

tion. In section 3.4.5 we will see that an event can have more than one effect,

however, up to now the only effect is that the transition is executed.

Model of deterministic calls

In this example the component is in the state A before a method is called. During

the call the component is in the intermediate state A*. After the result has been

calculated there is only one possible state, in this example labeled as B.

Figure 3.1: Model of a deterministic call

Model of nondeterministic calls

A nondeterministic call is quite similar to a deterministic call except for the fact that the intermediate state A* has two possible transitions into further states. If the


result can be calculated, the component will switch into state B. If an error occurs,

the component will switch into the state C.

Figure 3.2: Model of a nondeterministic call
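The two models can be sketched as a tiny state-machine. The Component class and the error trigger below are invented for illustration; a nondeterministic call passes through the intermediate state A* and ends in either B or C:

```java
// States of the call model: A (start), A_STAR (intermediate),
// B (successful end-state), C (error end-state).
enum State { A, A_STAR, B, C }

// Illustrative component, not from CrashIt: call() models a
// nondeterministic method-call that may end in B or in C.
class Component {
    State state = State.A;

    State call(boolean errorOccurs) {
        state = State.A_STAR;                       // intermediate state during execution
        state = errorOccurs ? State.C : State.B;    // exception path vs. normal result
        return state;
    }
}
```

A deterministic call is the special case in which the error branch cannot be taken, so A* has exactly one outgoing transition.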

3.1.4 Sequence of calls

It is very important to describe the usage and the behavior of components both in

order to use and to test the component. Thinking of the description of a component

and considering the testability during the design process makes it easier to design

simpler and therefore better components. If it is easy to test a component, it is probably easy to use it. Conversely, components that are hard to test often contain design problems.

The behavior and the allowed usage can be described by a sequence of method-calls

and their results. Some methods cannot be called when a certain state has not

been reached. These calls define a possible sequence of method-calls which is the

only way the component can be used.

Sequences of method calls are the first step to define contracts of components.

3.2 Contracts

When a component is implemented, there is no direct way to specify its usage

and behavior. The usage is written down in the documentation of a component.


Documentation is human-readable, but it is difficult to write a program that extracts test-cases for the component from the documentation. A good way of solving this problem is to define contracts between components. These contracts can be written in a machine-readable form, and therefore it is possible to transform them into human-readable texts or diagrams.

One part of a contract is the way of calling methods of a component. Depending

on the language exceptions may be thrown when an error occurs. Contracts can

not solve the problem that has caused the exception, but they can specify a way of

dealing with it.

A correct signature cannot prohibit an illegal call of a method. Methods often

need some pre-conditions which must be fulfilled before a method can be used.

After a successful call the result has to match a post-condition and the state of the

component may have changed.

Contracts are defined by signatures and a possible sequence of method-calls.

Describing the correct sequence can be difficult, as there may be many variants.

A good design should prevent complex sequences and limit the amount of possible

variants.

One good approach to describe the correct sequences is to define the start- and the

end-state for each method-call. In some cases it is useful to consider the intermediate

states. The description can be defined as set of states combined with a set of methods

and transitions between them.

• S: set of states

• Ss ⊂ S: set of start states

• Se ⊂ S: set of end states

• M: set of methods

• Ss × M → Se: the transition relation


In this model transitions correspond to method calls.
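Such a contract can be represented directly as a transition table. The sketch below stores the mapping Ss × M → Se in a map keyed by state and method name; the state and method names used here are invented examples, not taken from CrashIt:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a contract as the relation Ss x M -> Se:
// a lookup table from (start state, method) to end state.
class Contract {
    private final Map<String, String> transitions = new HashMap<String, String>();

    // declare that calling 'method' in 'startState' leads to 'endState'
    void allow(String startState, String method, String endState) {
        transitions.put(startState + ":" + method, endState);
    }

    // returns the end state, or null if the call violates the contract
    String apply(String state, String method) {
        return transitions.get(state + ":" + method);
    }
}
```

A test-framework can then replay a recorded sequence of calls against this table and report the first call for which apply() yields no end state.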

There are various types of contracts which can be classified by taking a look at the

time line of permissible method-calls. This will be modeled by using a state-machine.

3.3 State-machines as sequence-models

State-machines are well-known constructs to control different processes. They are

defined by states and transitions between these states [ASU99, p. 137]1. A state-machine does not remember the earlier states; it only knows the current state.

This behavior is not enough for modeling the allowed sequences of method calls, and

there are other limitations, too, so that it is necessary to enhance the functionality

of state-machines.

3.3.1 Limitations of state-machines

Before defining a more enhanced state-machine we should discuss their limitations.

The one-way-limitation

Figure 3.3: A one-way state-machine

A state-machine allows only one way of applying the transitions. There is no way

to do parallel transitions. It would be better to specify the correct sequence of calls,

but sometimes it is not possible to specify only one allowed way. Figure 3.3 shows a sequence of transitions that results in state Y. Reaching state Y in this example is only possible by going from state A to state B, and so on.

1 The page number refers to the German version.


Sometimes a state should be reached when several other states have been reached

without defining an order. State-machines that allow such sequences will quickly

get too complex to handle.

Figure 3.4: State-machine for permutations of calls

Defining groups of states

One solution to this problem could be building groups of states, but state-machines do not support groups. There must be policies defining which transitions are valid within the group and which cannot be applied. These policies become less complex if they can be defined for state-machines as in figure 3.5.

Figure 3.5: State-machine similar to a one-way state-machine

In this state-machine all transitions that result in a new state can be called in this

state again.


Components which can be modeled by such state-machines consist of methods that

can be called at least twice without causing an error. This might not apply to all methods, but a lot of methods show this behavior. For instance, during the setup of a component it might be necessary to store references to other components by calling a method. This method can be called several times without causing an error. An example of a method that does not show such behavior could be the opening of a file containing the setup information. Calling this method twice may cause an exception when the file is still open. A robust implementation avoids such an exception and makes it possible to call the method more than once.
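The robust variant described above can be sketched as an idempotent method (the Config class is invented for illustration):

```java
// Illustrative sketch: open() may be called any number of times.
// The second and later calls are no-ops instead of errors, so the
// method fits into a state-machine group where transitions repeat.
class Config {
    private boolean open = false;

    void open() {
        if (open) return;      // already open: do nothing instead of failing
        // ... load the setup file here ...
        open = true;
    }

    boolean isOpen() { return open; }
}
```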

3.3.2 Enhanced state-machines

Enhanced state-machines make it possible to test contracts as defined on page 19 by using a set of states. They support groups of states: all transitions that are allowed between these states can be applied. A method that causes a switch to a new state but cannot be called again in this state can also be called once in these state-machines. After a state-machine has been defined, this enhanced functionality can be applied by grouping states.

Figure 3.6: An enhanced state-machine

3.4 The categorization of contracts

There are various categories of contracts. For systems that deal with contracts it is important to define these categories, because some types do not need


additional algorithms but others need loop-detectors or other algorithms based on

graph-theory.

3.4.1 Linear Contracts

Linear contracts are the simplest type. Each state has only one transition that ends

in another state.

Figure 3.7: Linear contract

It is very simple to handle such contracts, as there is exactly one way in which a component that supports this type of contract can be used.

3.4.2 Pseudo Linear Contracts

This type is slightly more complex than the linear type. One or more states have

one or more transitions that end either in the same state or in a state that has not

been visited.

Figure 3.8: A pseudo linear sequence

Building a system that is able to handle such contracts must solve two problems:


• There are different ways of reaching a state.

• It must be determined how often a transition that ends in the same state is called.

3.4.3 Looped Contracts

Developers should avoid writing contracts that have this behavior, but sometimes

it is necessary for a component to have such a contract.

Figure 3.9: A looped sequence

A system for these contracts must solve several problems:

• There are different ways of reaching a state.

• It must be determined how often a transition that ends in the same state is called.

• Loops must be detected.

• All ways to reach a state must be found.

• The shortest way to reach a state must be found.
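The last of these problems, finding the shortest way to reach a state, can be solved with a breadth-first search over the transition graph. The sketch below is illustrative and not CrashIt's actual algorithm; the visited-check doubles as loop protection:

```java
import java.util.*;

// Breadth-first search over a transition graph given as an adjacency
// map (state -> successor states). Returns the shortest path from
// start to goal, or null if the goal state cannot be reached.
class ShortestPath {
    static List<String> find(Map<String, List<String>> edges, String start, String goal) {
        Map<String, String> parent = new HashMap<String, String>();
        Deque<String> queue = new ArrayDeque<String>();
        queue.add(start);
        parent.put(start, null);
        while (!queue.isEmpty()) {
            String s = queue.poll();
            if (s.equals(goal)) break;
            for (String next : edges.getOrDefault(s, Collections.<String>emptyList())) {
                if (!parent.containsKey(next)) {   // visited-check also breaks loops
                    parent.put(next, s);
                    queue.add(next);
                }
            }
        }
        if (!parent.containsKey(goal)) return null;
        LinkedList<String> path = new LinkedList<String>();
        for (String s = goal; s != null; s = parent.get(s)) path.addFirst(s);
        return path;
    }

    // small example graph containing a loop between A and B
    static Map<String, List<String>> sampleGraph() {
        Map<String, List<String>> edges = new HashMap<String, List<String>>();
        edges.put("A", Arrays.asList("B"));
        edges.put("B", Arrays.asList("A", "C"));
        return edges;
    }
}
```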


3.4.4 States and pre-conditions

Most models for testing software components use pre-conditions to define what must be done before a component can be used. Pre-conditions are often defined in a very formal way, resulting in a lot of work to define them.

States are defined during the design of a component. Using this view on components may result in a simpler design, and simpler designs cause fewer errors.

In some way there is a relation between states and pre-conditions:

• Methods that need the same pre-conditions are in the same state.

• The set of all states that must be passed to reach a state can be interpreted as the pre-conditions of this state.

It seems possible to derive most pre-conditions by using states. However, this has not been proven, and evaluating this assumption and developing an algorithm should be part of further research.

3.4.5 How to combine components

Components will be used in combination with other components. All these com-

ponents have contracts that must be fulfilled. The components are connected by

applying method-calls on each other. The component that applies a method-call on another component is called the caller.

As described in section 3.1.3 there are different types of method calls, determin-

istic calls and nondeterministic calls. Figure 3.10 shows the model of combining

components by applying deterministic calls.

This is quite simple because invoking a deterministic call will always result in an end-state. Figure 3.10 shows the model of a deterministic call between two components: both involved components change into their end-states.


Figure 3.10: A call of a deterministic method

Figure 3.11: A call of a nondeterministic method

Calling a nondeterministic method is more complex because there is more than one

possible end-state. Both components must handle all possible exceptions and switch into the corresponding state (cf. figure 3.11).


Chapter 4

General software quality factors

Start by doing what is necessary, then do what is possible,

and suddenly you are doing the impossible.

St. Francis of Assisi

Software quality is a controversial topic, as applications are getting more and more complex. Developers have to manage the trade-off between fast development cycles in a complex environment and customers that need stable, reliable products.

The situation gets worse when applications are used by people that do not have a technical background. Technicians often find solutions logical and stable when they use them, as they know the processes in the background, the relations between them and the resulting operation of a certain device. People who do not have this background information, because it is not their main topic and they only want to use a device or an application, often have problems with these products because their usage is not intuitive and the resulting bugs do not allow an efficient usage of the product.

The problem of a non-intuitive user interface or behavior of a product can only be solved by usability tests during the design. Decreasing the rate of errors can be done during implementation, but the basis must also be established during the design.


It is common sense that it is impossible to write huge applications without bugs, but this should still be the goal during the development process. Developers must not lean back and give up on this goal just because everybody says that it is impossible to reach. Economic conditions force companies to decrease development time and costs, but this is short-sighted because bugs in delivered applications cause much bigger costs than bugs that are detected during the development process.

It doesn't matter if it is a dishwasher, car, washing machine or mobile phone: if a device doesn't work, there is often a bug in the software.

[...]

Experts largely agree that software could be substantially more reliable if producers enforced it. Further, they believe that liability for bugs could speed this process up.

Buggy software costs the US economy 60 bn. USD annually [...]

There are no automatic test-tools1: Developers devote one half of their time to programming, the other to finding and patching bugs. In the test phase, a program reveals approximately 10 bugs per 1000 lines of code. The problem for developers is that no robust tool exists to measure the reliability of their software designs.

[ORF03]

Users have accepted that an application crashes once a day, not because they know about the complexity of the program but because almost every program behaves in that way. As a consequence they have accepted this misbehavior as a bothering, unavoidable fact.

As our life is getting more and more digitalized, more reliable software is needed. In the IT-world there is a lack of experts, so lots of projects are delegated to non-experts, resulting in faulty products. By means of new and more powerful programming languages, tools for testing and formal specifications, the IT-community is trying to remedy this situation. This is not the final solution, but certainly a step in the right direction.

1 The term “automatic test-tools” does not mean that they can automatically generate test-cases, but that they can automatically run test-cases. Automatically generated test-cases would never be as efficient as test-cases manually created by intelligent inspection of source-code.

One of the greatest “revolutions” in software design during the last few years has been the invention of the object-oriented approach together with the introduction of the OOP quality factors, like reusability, modularity and extendibility (section 4.2 on page 33).

The evolution of OOP temporarily ended in the definition of the Design Patterns, which can be interpreted as the object-oriented counterparts of the established paradigms of structural languages. These patterns allow the implementation of components that fulfill the goals of the OOP approach.

Christopher Alexander provides a good description of patterns:

“Each pattern describes a problem which occurs over and over again in our

environments, and then describes the core of the solution to that problem, in

such a way that you can use this solution a million times over, without ever

doing it the same way twice” [GHRV95, Chap. 1]

This definition describes patterns in buildings and towns, but it can also be applied

to OOP. Patterns define solutions for general problems using a range of known

patterns in a very formal way. A very good book by Gamma, Helm, Johnson and Vlissides (☞GOF) is Design Patterns: Elements of Reusable Object-Oriented Software. Section 1.1 [GHRV95, p.2] defines the term pattern by specifying four elements: the pattern name, the problem, a solution and its consequences. The book addresses 23 patterns that are grouped into 3 categories. All patterns are explained and further treated by discussing an example, so that it is easy to get an overview of all patterns. Knowing these patterns is not enough; they must be internalized to understand their deeper meanings and their potential.

These 23 patterns are patterns for object-oriented development environments.


As the development environment provides special possibilities, e.g. the J2EE2 platform, existing patterns will be adapted3 or new ones will be invented4 (see [AM01]).

In combination with the ideas of OOP a further definition is becoming popular, namely that of a component [Sch02]. A component is represented by a class that concentrates on one topic and which is connected to other components via a Hook and a HookUp (see page 40 for details).

Another aspect of modern languages is that they are very useful for writing correct and robust programs, as they provide the chance to handle unexpected situations by defining a clear mechanism for signalling such situations. This can be done by defining exceptions which track faulty states of a program and guide the program to a defined end- or error-state.

The aim of this chapter is to explain the terms which are needed in the following chapters, including design patterns and software quality factors, by means of a simple example5.

4.1 A small example

The example is implemented as a component-based framework6 that allows building string-manipulation applications. The framework captures the main design and thus assigns the responsibilities. It therefore dictates the architecture and allows the developer to concentrate on the specific requirements of the concrete application that is based on the framework.

2 A good book on J2EE is Mastering Enterprise JavaBeans by Ed Roman [Rom02].

3 The command pattern has been adapted to J2EE, see [AM01, chapter 7.2].

4 The business delegate [AM01, chapter 8.1] was designed especially for J2EE.

5 The entire example and the usage of CrashIt as its test bench can be found in the appendix.

6 A framework is a set of cooperating classes that are designed to support developers in creating applications for different domains [GHRV95, p. 26].

The framework defines Services that can be used by a ServiceProvider. A

Service deals with strings on which it performs some computations by invoking

the method String execute(String value).

The framework consists of several classes, of which two are important for the further

explanations:

Service: it processes a string value and returns the modified value. A Service must be initialized before it is used; it is possible to check whether the service is initialized or not. A Service has a unique id that the ServiceProvider uses to address it.

ServiceProvider: it manages different Services. The ServiceProvider7 can ini-

tialize all registered services. A service can be used by passing two parameters

to the ServiceProvider, the id of the service and the value that should be

processed.

4.1.1 Interfaces

There are different interfaces that correspond to the parts of this framework. In this section the interfaces are described briefly; detailed documentation can be found in Appendix A.2.1 on page 89. The service interface (see listing A.1) consists of four methods:

• String getServiceId(): returns the id of the concrete Service

• String execute(String) throws ServiceException: performs a calcula-

tion on a value

• void init() throws ServiceException: initializes the service

7The ServiceProvider can be interpreted as Mediator [GHRV95, Chap. 5].


• boolean isInitialized(): returns true if the service has been initialized

The service-provider interface (see listing A.2) is defined by8:

• void registerService(Service) throws ServiceException: registers an

uninitialized Service

• void initServices() throws ServiceException: initializes all services

• String ask(String id, String value) throws ServiceException: forces

the service that corresponds to the id to process the assigned value
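The two interfaces can be sketched in Java roughly as follows. The method signatures are taken from the listings referenced above; only the body of ServiceException is an assumption, since the thesis merely shows that it is thrown:

```java
// Minimal sketch of the two framework interfaces described above.
// ServiceException is assumed to be a plain checked exception.
class ServiceException extends Exception {
    public ServiceException(String message) { super(message); }
}

interface Service {
    String getServiceId();                                // unique id, used for addressing
    String execute(String value) throws ServiceException; // performs a calculation on a value
    void init() throws ServiceException;                  // must be called before execute()
    boolean isInitialized();
}

interface ServiceProvider {
    void registerService(Service service) throws ServiceException; // registers an uninitialized Service
    void initServices() throws ServiceException;                   // initializes all registered services
    String ask(String id, String value) throws ServiceException;   // dispatches value to the service with this id
}
```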

4.1.2 Implementations and usage of the framework

In this example two simple services are implemented:

Echo: returns the given value

Concat: concatenates the given value with the previously stored values (The first

time Concat is called, it returns the unchanged value.)

These two services can consequently be registered at the ServiceProvider9 and

addressed by it. This can be done by implementing the following steps:

1. an instance of a ServiceProvider must be created

2. both services are loaded

3. an external object registers the services with the provider and

4. the provider initializes these registered services

8 A smarter implementation of these components would go beyond the scope of this example, but would be necessary for real-world applications.
9 This implementation is done in the class org.service.ServiceProviderImpl, which is part of the CrashIt example. The whole example is part of the CrashIt distribution.


5. the services can be used by applying the corresponding key and a value
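The five steps can be compressed into the following sketch, under the assumption of a naive map-based provider. The class names EchoService, ConcatService and SimpleServiceProvider are illustrative; the thesis implementation lives in org.service.ServiceProviderImpl:

```java
import java.util.HashMap;
import java.util.Map;

// Assumed to be a plain checked exception, as in the interface listings.
class ServiceException extends Exception {
    public ServiceException(String message) { super(message); }
}

interface Service {
    String getServiceId();
    String execute(String value) throws ServiceException;
    void init() throws ServiceException;
    boolean isInitialized();
}

class EchoService implements Service {          // Echo: returns the given value
    private boolean ready;
    public String getServiceId() { return "echo"; }
    public String execute(String value) { return value; }
    public void init() { ready = true; }
    public boolean isInitialized() { return ready; }
}

class ConcatService implements Service {        // Concat: appends to previously stored values
    private boolean ready;
    private final StringBuilder stored = new StringBuilder();
    public String getServiceId() { return "concat"; }
    public String execute(String value) {
        stored.append(value);                   // the first call returns the value unchanged
        return stored.toString();
    }
    public void init() { ready = true; }
    public boolean isInitialized() { return ready; }
}

class SimpleServiceProvider {                   // illustrative stand-in for ServiceProviderImpl
    private final Map<String, Service> services = new HashMap<>();
    public void registerService(Service s) {                              // step 3
        services.put(s.getServiceId(), s);
    }
    public void initServices() throws ServiceException {                  // step 4
        for (Service s : services.values()) s.init();
    }
    public String ask(String id, String value) throws ServiceException {  // step 5
        Service s = services.get(id);
        if (s == null) throw new ServiceException("unknown service id: " + id);
        return s.execute(value);
    }
}
```

Steps 1 and 2, creating the provider and loading the services, are performed by the caller before registration.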

An example of an application that uses the framework in the described way can be

found in Listing A.4 on page 95. Services and ServiceProvider will be connected

at runtime, which allows a very flexible configuration and extension of an application

that uses this framework. If this application must be modified, only its configuration

has to be updated without recompiling the framework. For instance, the implemen-

tation used in the appendix supports a Configurator class. It is responsible for

setting up the ServiceProvider. Configurator parses a configuration file, where

the used services are specified.

In this version services can be added during start-up. A smarter implementation would be able to add services during execution, but this example is designed to give a clear view of the basic ideas, not to provide a perfect solution to the problem.

Before starting to explain the remaining terms, the following sections deal with

achievable software quality factors.

4.2 Important Quality Factors

The premises that must be fulfilled to produce the high-quality software the economy demands can be grouped into these external factors:

1. Correctness is the exact representation of a software product with respect

to its specification.

2. Robustness is the ability of software to respond appropriately to abnormal

or modified conditions.

3. Extendibility is the ease of extending software products to new changes of

specification or other problem domains.


4. Compatibility is the ease of combining software elements with others.

5. Ease of use is the ease with which people of various backgrounds and qualifi-

cations can learn to use software products and apply them to solve problems.

6. Performance is the capability to predict the time behavior of an application depending on the software architecture, the amount of processed data and the hardware

and into the following internal factors:

1. Simplicity is the ability to model real-world problems in intuitive software structures. [Bec00, Chap 17]

2. Modularity is the ability to decompose an application into separate modules which communicate only over defined interfaces. [Sch, Chap 2.4]

3. Intuitivity is the capability to write code so that other developers can read it like a book, even without comments in the code. [Sch01]

External factors define characteristics of a program as judged by its users, whereas internal factors concern software engineers, who have access to the software design and its source code. The following sections explain these factors in detail. Further external and internal factors are explained in [Mey97], [Ale01] and [McC77].

4.2.1 External factors

The journalist in [ORF03] tries to explain why many software products are not reliable (neither correct nor robust). As an outside user, he judges the current situation in the software industry. This judgment by external users corresponds to the definition of the external factors. In this section the most important external factors with respect to CrashIt will be discussed.


Correctness

Correctness is the prime quality of a software product; no other factor carries as much weight. The best user interface counts for nothing if the software is not correct. Correctness is theoretically derived from the exact implementation of the URD (User Requirements Document). But the URD itself must also be correct and complete. For small systems an exact URD is possible, but for huge applications like an operating system the numerous user requirements could easily conflict with each other. To create reliable software, work on correctness already starts when the user requirements are declared and verified against the client's wishes, and it ends only with the maintenance of the product.

A method which helps to achieve correctness is a layered structure of the software: the system can be split into clearly defined and limited layers, each relying on the lower ones. Proving the correctness of a layered system is an incremental process: the correctness of a layer depends on the correctness of the lower layers. If, for example, a system makes use of libraries, then it must first be ensured that these are correctly implemented. Relying on this assumption it is possible to test the correctness of the system. The basic layer should be selected by evaluating user requirements, operating system, language and compiler, hardware, running environment and user groups. Design patterns may help to find the layers and the interfaces between them. [Hoa72], [BBM+78]

Robustness

Correctness is the exact representation of the specification, whereas robustness also expresses how a system reacts in situations which are not included in the specification. The strength of robustness consists of the capability to notice failures, to guide the system into a secure state, or to catch the error and start a recovery process. The following points help to achieve robustness:


• Clear and simple design: this makes it easier, e.g., to locate errors and to react accordingly

• Formal specifications, see [Jon96]

• Start testing already during the specification phase

• Formulate not only valid tests but also invalid tests, which verify the robustness

Extendibility

An essential principle for improving extendibility is modularity: the better a system can be split into autonomous parts, the higher the likelihood that a change will only affect one or a few modules rather than the whole system (further details are explained in the section Modularity on page 39).

Compatibility

A commonly known situation: a new operating system is installed and some old applications do not run anymore. This is a typical consequence of neglected compatibility. Compatibility is the factor which implies the most work: depending on the size and diversity of the user group, it demands considering every environment in which users will run the new application. For this factor, too, a layered program structure can help by moving all user- or system-dependent variations into one or a few layers. Using this technique, only a few layers are affected by compatibility concerns. Of course, if the specification is not good enough, migration to other platforms is nearly impossible.

Ease of use

The factor ease of use should already be considered when the user requirements are defined. The more software designers understand and study the desires, backgrounds and living environments of the user group, the better the user requirements can be expressed through a simple and clear structure. The software design is developed upon the user requirements: the simpler and clearer these requirements are, the easier the process of development becomes and the more uncomplicated the structure of the software can be. Simplicity and clarity are premises for this factor.

Performance

This important factor cannot be ignored if, for example, the calculation of tomorrow's weather takes more than 24 hours. Mostly, the performance of a system correlates with the hardware capabilities, but also with the complexity of the program and with how strongly data types are abstracted. (The abstraction of types means their generalization; this has the disadvantage that special types must be derived from general ones, which causes performance costs.) The software community shows two typical attitudes:

• Either a program is designed with respect to speed, omitting intuitiveness and simple design (such a program is so specialized that it is not open for reusability or extendibility),

• or it is designed with respect to generality: the program is made as general as possible so that it can be used for many different problem domains, without regard for processing time or other limited resources.

4.2.2 Internal Factors

Internal factors describe qualities relevant to software designers. These factors are in general the premise for the external factors: correctness, robustness, extendibility, compatibility and ease of use are based on simplicity and modularity.


Simplicity

Simplicity could be one of the hardest things in people's lives:

It seems crazy, but sometimes it is easier to do something more complicated

than to do things simple. This is particularly true, when you have been suc-

cessful doing the complicated thing in the past. Learning to see the world in

the simplest possible terms is a skill and a challenge. The challenge is that

you may have to change your value system. [Bec00, Chap 24]

Most people associate quality work with simplicity. Take a car, for example: it has few controls, but they are usefully arranged, so that driving is a simple task. Car manufacturers hide complex engines behind simple-to-use controls. The same recipe achieves simplicity in software: first avoid complexity, second hide complex processes behind simple-to-use interfaces. Simplicity will not only influence the external factors, but also:

• A simple design is easier to communicate than a complicated one.

• Simplicity encourages developers to adopt simple strategies and thereby to achieve simple and, furthermore, correct, robust and extendible programs.

• A simple design gives quicker, richer and more exact feedback about the quality of the project.

Principles to improve simplicity are:

• develop software using a modular approach and Design Patterns (see [GHRV95])

• small initial investment in the design

• incremental change: The strategy of simplicity in design will work by gradual

change

• be open for extensions and changes in the specification


• the ability to understand the user requirements, their background and living environment. Only in this way are software developers able to deliver the software solutions the users actually desire.

A further key to simplicity is uniformity in the programming style (e.g. by using style guides). Another key is of course intuitivity, so that the concept of a system is understandable and logical for colleagues in the development team. Finally, each developer should keep in mind that a system has to be maintained. It is therefore useful to acquire the habit of writing exact documentation and, furthermore, of thinking and developing in simple terms.

Modularity

A software construction method is modular, then, if it helps designers pro-

duce software systems made of autonomous elements connected by a coherent,

simple structure. [Mey97, Chap. 3]

A modular design method can be characterized by these five criteria:

• Modular decomposability: a system can be split into less complex and relatively autonomous parts that communicate over a simple structure.

• Modular composability is connected to reusability: it represents the capability to construct new systems from existing modules. Only the communication structure has to be designed.

• Modular understandability: each module can be read and understood by itself, knowing no or only a few other modules.

• Modular continuity: an extension in a module does not trigger changes in other modules.

• Modular protection: the consequences of an error caught or caused by a module are limited to that module or, at worst, propagate to a few neighboring modules.


4.3 Design patterns

A design pattern describes a problem that occurs repeatedly in our environment and gives a solution to this problem (see also chapter 4 on page 28). Design patterns often seem to be very similar, or they can be described in similar ways; however, their small differences are quite important. Patterns are sometimes based on related concepts, which is why their deeper meaning can only be grasped by studying their application and its consequences. Carefully worded definitions of the patterns can be found in the design pattern book [GHRV95]. In this document only a few patterns are used; the most important one in this scope will be discussed in the next section. It is a basic pattern that is often used although it is not explicitly mentioned. Further reading about design patterns can be found in [Ale01] and [Sut00].

4.3.1 Hook and HookUp

The Hook Hook-Up pattern (refer to [Sch02]) is split into two parts, the Hook and the Hook-Up (see the following subsections).

This pattern is often used unconsciously within other patterns like the Observer pattern (see [GHRV95]). The deeper intention of the Hook Hook-Up pattern is to connect components. It is not described in detail in the design patterns book [GHRV95] because it was only later defined by Klaus Schmaranz [Sch02]. Several patterns use the hook mechanism for connecting components, but depending on the behavior of the connected components these patterns describe different functionalities. For example, the Observer pattern combines components by using the hook mechanism, but in contrast to a Mediator the observer notifies all connected components when an event occurs. The mediator, which also uses the hook mechanism for connecting the components, chooses only one component to forward the event to.


There are other patterns that use the hook mechanism, but it is not the aim of this document to explain all design patterns. For a better understanding of the next sections, however, it is useful to have a closer look at the hook pattern.

Hook

The definition of a Hook consists of two parts: the abstract Hook and the concrete Hook. The abstract Hook defines an interface for a component; it declares the methods (e.g. execute) through which this component can be used. A concrete Hook implements this abstract Hook interface. The Hook method is the "reason" why a class is written: it contains the functionality that is needed. If more than one Hook method is needed, several abstract Hook interfaces must be defined to make clear how this component can be used. In the service example (section A.2 on page 88) the Service interface is an abstract Hook and the Echo class is a concrete Hook because of the execute method that is used to ask the service.

Hook-Up

The Hook-Up is the second part of the Hook Hook-Up pattern. Whereas the Hook defines what happens when a method is called, the Hook-Up decides when the method is invoked. The Hook-Up can also be split into two parts: the abstract Hook-Up and the concrete Hook-Up. An abstract Hook-Up defines a method that is used to connect a Hook with another component. A concrete Hook-Up is a component that implements the abstract Hook-Up interface. In the example, the ServiceProvider interface (Listing A.2 on page 92) is an abstract Hook-Up because it contains the Hook-Up method registerService(Service). The ServiceProviderImpl class is a concrete Hook-Up.
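Reduced to code, the two roles might look as follows. This is a generic sketch; the names UpperCaseHook and Pipeline are illustrative rather than part of the thesis example:

```java
import java.util.ArrayList;
import java.util.List;

// Abstract Hook: defines WHAT happens when the component is used.
interface Hook {
    String execute(String value);
}

// Abstract Hook-Up: defines how Hooks are connected; the Hook-Up
// decides WHEN the Hook method is invoked.
interface HookUp {
    void register(Hook hook);
}

// Concrete Hook (illustrative): one topic, no side effects.
class UpperCaseHook implements Hook {
    public String execute(String value) { return value.toUpperCase(); }
}

// Concrete Hook-Up (illustrative): invokes every registered Hook in order.
class Pipeline implements HookUp {
    private final List<Hook> hooks = new ArrayList<>();
    public void register(Hook hook) { hooks.add(hook); }
    public String run(String value) {
        for (Hook hook : hooks) value = hook.execute(value);
        return value;
    }
}
```

The Observer and Mediator variants mentioned above differ only in how the concrete Hook-Up selects which registered Hooks to invoke.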


4.3.2 Components

The idea of components is based on the main concept of object-oriented program-

ming. [Sch02]

• A component is responsible for one function or can deal with one type of data.

• A component must be free of side-effects.

• The communication with a component is only allowed through methods, public

member-variables are not admitted.

• There are only a few ways in which the class can be used.

Because of these constraints components are able to signal misuse. Misuse can only occur by applying wrong values to methods or by calling methods in a wrong sequence.

Developers often violate these ideas even though they are an important part of the OOP approach; the stronger term component was therefore defined to make clear that, if someone writes a component, this class complies with these ideas and limitations. In the service example the different services are typical components.

Some guidelines on how components should be designed will now be discussed. These guidelines widely correlate with guidelines for designing exchangeable and expandable components, and they also encourage the design of well-testable applications. Complying with them is therefore not a restriction of the software designer's freedom but rather leads to a good design for an exchangeable and expandable component framework.

Side-effect-free : A component must avoid side effects. It should encapsulate its functionality without affecting other components (e.g. by changing global variables), except those components that are registered in order to receive reactions from the component.


Composition by Hook and Hook-Up : The Hook Hook-Up pattern must be

used for binding components [Sch, Chap. 3.4.7]. This includes the requirement

for defined interfaces for each Hook and Hook-Up. The interfaces describe how

a component is registered and which methods can be used. The Hook Hook-Up

pattern allows test teams to connect components via a proxy that can be used

for monitoring the test. This can be done dynamically without recompiling

the component.

Dynamically loadable : This correlates with the guideline of using the Hook

Hook-Up pattern. Components must be dynamically loadable in order to

replace them without recompiling the framework. Moreover, this allows the

usage of components that are not known at compile-time. [Mey97, Chap. 2]

Thinking of the inner state of a component : During the design process the

inner state of a component should always be present. The inner state of

a component will be changed by calling a method. If the method finishes

correctly, the next state is reached. If an error occurs during the execution,

another state is reached. Thinking of the effects of method-calls will help to

design components with small interfaces.

Usage of exceptions : Exceptions have to be triggered on unpredictable events or on misuse of a component. Even wrong method calls should trigger exceptions, because a programmer is not forced to handle special return values: if a method returns true when it succeeds and false when it fails, the programmer can ignore this result; if an exception is thrown when it fails, the programmer must handle it. Using exceptions also forces designers to think about the handling of misuse or wrong method calls during the design. Thinking about the impact of an exception on the inner state of a component during the design phase facilitates a robust component design. Notice that it is not allowed to use exceptions for controlling the program flow, because this would violate the concept of an exception. Exceptions are not designed to handle the default cases but to indicate dangerous situations (see section 4.4 on page 44).


Definition of the usage : Components must be defined with respect to the context in which they are used. Obligations and benefits with reference to the context can then be defined; this definition can also be called a contract. For more details about contracts refer to [Val03, chapter 4, p. 34].
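The guideline "dynamically loadable" can be sketched with Java's reflection facilities. In a real framework the class name would come from a configuration file; the names used here (Service, EchoService, ServiceLoader) are illustrative stand-ins for the thesis example:

```java
// Sketch of dynamic loading: the concrete class name is only a string,
// so a component can be exchanged without recompiling the framework.
interface Service {
    String execute(String value);
}

class EchoService implements Service {
    public String execute(String value) { return value; }
}

class ServiceLoader {
    // Loads a Service implementation that need not be known at compile time.
    static Service load(String className) throws Exception {
        return (Service) Class.forName(className)
                              .getDeclaredConstructor()
                              .newInstance();
    }
}
```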

4.4 Exceptions

Even when all components are used correctly, difficulties can occur, e.g. an interrupted network connection. Programs should be able to manage such situations, and exceptions are the correct tool to solve these problems.

An exception can be interpreted as a possible result of a method call that occurs

only when something goes wrong. This is an important aspect because exceptions

are thrown when a dangerous situation has been reached, whereas they must not be

used for controlling the program flow. If the program flow depends on a result of a

method call, the result value should be used and not a possibly thrown exception.

Several languages implement the exception concept in different ways. Java, for example, supports exceptions as part of the language by providing special keywords to handle these situations. All implementations allow the definition of additional program code that is executed when a dangerous situation has been signaled by throwing an exception.

Notice that for the test design an exception can be a valid test result. It is necessary to test whether an exception occurs as a result of misuse or of an unexpected event in the application. The occurrence of an exception is part of the component's behavior.
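As a hedged sketch of this idea, a robustness check can treat a thrown exception as the desired answer to a misuse, here calling execute() before init(); the class names are illustrative:

```java
class ServiceException extends Exception {
    public ServiceException(String message) { super(message); }
}

// Illustrative component that signals misuse instead of silently failing.
class EchoService {
    private boolean initialized = false;
    public void init() { initialized = true; }
    public String execute(String value) throws ServiceException {
        if (!initialized)
            throw new ServiceException("execute() called before init()");
        return value;
    }
}

class RobustnessCheck {
    // Returns true when the component correctly reports the misuse.
    static boolean detectsMisuse() {
        EchoService service = new EchoService();
        try {
            service.execute("x");     // misuse: init() was never called
            return false;             // no exception: the misuse went unnoticed
        } catch (ServiceException expected) {
            return true;              // the exception IS the desired test result
        }
    }
}
```

From the test-design perspective the catch branch is the expected path: the exception is a valid test result.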


Chapter 5

Testing

The English Positivist philosophers Locke, Berkeley, and Hume said that anything that can't be measured doesn't exist. When it comes to code, I agree with them completely. Software features that can't be demonstrated by automated tests simply don't exist.

Kent Beck

Another philosophical approach to testing can be found by interpreting the articles of Sir Karl Popper. He defines the matter of a scientific thesis or a scientific work by two factors, the logical and the empirical matter [Pop94, S 40]. The logical matter comprises the sum of all works that can be derived from a thesis; this can be interpreted as the reusability of a component. The empirical matter of a work, on the other hand, is defined by the sum of all proofs that demonstrate the correctness of a thesis; this second factor can be interpreted as the testability of a component.

During the last few years patterns and components have become popular, so more and more papers and books about these "solutions" can be found in nearly every software design division. As a consequence we claim that the evolution of writing reusable and expandable software components by using the ideas of the design patterns is nearly finished. We can say that the IT community has learned to carry out the


ideas of reusable and extendable components and is able to fulfill the goals of the

OOP approach. So the logical matter of components that define an application can

now be satisfied. But what about the second factor?

The aim of this section is to discuss different test concepts. It will also show how the component-based approach described in the earlier sections allows tests to be defined and thus helps to increase the empirical matter. How can this be done and what are the steps to achieve this goal?

5.1 Goals for running tests

In this scope another aspect becomes more important: do these reusable components react as defined during the design? At this point two new terms gain importance: correctness and robustness. The target of a test is to check implementations against these terms.

If someone is in the situation that the test of a new software component must be

defined, there are two frequently asked questions:

• What are the goals of this test?

• Does this test verify the correctness of a component?

However, these questions are dangerous because they imply that a test is carried out to show the correctness of a program, which is wrong!

A test is done in order to find failures or misbehaviors of a program, not to show that everything functions correctly. This psychological aspect of testing is one of the reasons why testing is not as popular as designing. Testing is a destructive task that stresses the components in order to provoke errors. It cannot be done by using only the default test cases, as they will merely produce default test results. Instead it must be done with test cases that are more complex than the default ones, and


this must be carried out in an environment that is close to the real-life situations in

which the component will be used.

A typical situation is that after a test someone asks: "...ok, this was the test, but did it fail or succeed?" Such situations must be avoided! It must be clear how the result of a test run is to be interpreted before the test is done. It is therefore helpful to use tools that are able to generate clear summaries of what happened during the test.

Almost all scenarios that are discussed in this work deal with the problem of verifying the correctness and robustness of a program, but there are other possible targets too.

5.1.1 Targets of test runs

Keeping the main goal of finding errors in mind, other useful targets can be defined for a test. For all targets it must be clear that a test is done to find misbehaviors and not to show that "everything is ok" in order to ease the development team's conscience. It is not the aim of this work to give a complete introduction into all possible test scenarios, but it is necessary to keep all requirements of a software project in mind when the tests are designed. It is likely that not all requirements of a project can be proved by only showing the correctness and robustness of a program; testing these aspects is therefore an important but not sufficient part of the whole test process. What are possible goals for test runs?

Correctness-tests : Testing the correctness of an application or a component

can only be done when its behavior is known and well-documented. The

correctness of a component can be tested by applying method calls in the

correct order on the component. These calls must also use correct parameters

so that the component is used correctly. If there are different ways how the

component can be used, all ways must be included in the test. For example, if a

class can be instantiated by using different constructors, all these constructors


must be used in test-cases. That applies to methods with different signatures

as well.

This might imply a big effort but it is necessary to guarantee that all possible

ways of using a component are included in the test.

Robustness-tests : In contrast to testing the correctness of a component, testing the robustness uses incorrect calls, i.e. calls with incomplete or illegal parameters, or invokes methods in the wrong order. This test concentrates on the ability of the tested component to detect and report misuse. The component must also not crash when it is misused.

Even if testing the correctness implies a big effort when all correct ways of using a component are to be included in the test, testing all ways of its misuse is nearly impossible. The only way to reduce the possible misuses of a component is a clear and simple design, which will also decrease the effort for testing the component's robustness.

For testing the robustness it is necessary to accept exceptions as desired answers to illegal method calls. Consequently, testing the correctness and the robustness can often be done in the same test run, when the result of a call is simply compared to the desired results.
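One possible way to combine both kinds of test in a single run is to let each test case carry either an expected value or the expectation that an exception is thrown, and then only compare outcomes; all names here are illustrative:

```java
// Assumed to be a plain checked exception, as elsewhere in the example.
class ServiceException extends Exception {
    public ServiceException(String message) { super(message); }
}

// Illustrative component under test: returns the length of a string and
// reports a null value as misuse.
class LengthService {
    public String execute(String value) throws ServiceException {
        if (value == null) throw new ServiceException("null value");
        return Integer.toString(value.length());
    }
}

class TestCase {
    final String input;
    final String expected;   // null means: a ServiceException is the desired answer
    TestCase(String input, String expected) {
        this.input = input;
        this.expected = expected;
    }
    boolean passes(LengthService service) {
        try {
            return service.execute(input).equals(expected);  // correctness case
        } catch (ServiceException e) {
            return expected == null;                         // robustness case
        }
    }
}
```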

Performance-tests : Testing the performance of an application always means comparing its behavior with target values. It is necessary to carry out the test with a multiple of the expected number of critical actions. Critical actions can be network requests, user interactions, queries etc. Notice that the test should be done with at least 7-10 times the expected number of critical actions to ensure that the application reaches the expected performance. The performance test must be done on an appropriate computer system which closely resembles the system used for running the application. Performance tests can deal with different aspects:

• time behavior : This deals with questions like "How long does it take to execute a request?" or "How long does it take the application to start up?" A related topic is the monitoring of the response time, i.e. the time that an application needs to return the first partial answer.

• resource consumption: Nearly every application is forced to reduce its consumption of resources. There are different types of resources. One of the most important ones is the memory usage of the application. Memory is limited, so an application should optimize the amount of required memory. If a program uses more memory than it should, this often has a bad influence on the time behavior, because the missing memory must be simulated by using the much slower hard disk space. Other resources are limited by harder system-dependent variables. For instance, operating systems often allow only a limited number of open file handles or network connections. If an application tries to exceed these limits, this will often cause unsafe situations for the application if nobody has considered this situation during the design.

A third group of limited resources also causes a deterioration of the performance. For example, database connections are also often limited, but the upper limit of connections varies depending on the complexity of the executed queries. Testing an application with a huge number of database connections that perform simple queries can suggest a good performance, but in real life the application uses more complex queries and the performance will drop radically. As a consequence, the test must be done with typical use cases, especially when more complex subsystems whose behavior cannot be described very well are included.
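The time-behavior aspect described above can be sketched as a small Java harness; the stand-in action, the repetition factor and the target value are invented for illustration.

```java
// Minimal sketch of a time-behavior test: execute a critical action many
// more times than expected in production and compare the elapsed time
// with a target value.
public class TimeBehaviorTest {
    // Measure the elapsed wall-clock time for many repetitions of an action.
    public static long measure(Runnable action, int repetitions) {
        long start = System.currentTimeMillis();
        for (int i = 0; i < repetitions; i++) {
            action.run();
        }
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        // Stand-in for a critical action such as a request or a query.
        Runnable action = () -> {
            StringBuilder b = new StringBuilder();
            for (int i = 0; i < 1000; i++) b.append(i);
        };
        // Run 10 times the expected 100 critical actions and compare
        // the elapsed time with an (illustrative) target value.
        long elapsed = measure(action, 10 * 100);
        long targetMs = 2000;
        System.out.println("elapsed: " + elapsed + " ms, target met: " + (elapsed <= targetMs));
    }
}
```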

Interoperability : Interoperability is necessary when more than one system architecture is used. Different byte orders on the network layer or different implementations of the used protocols may cause errors that can hardly be found. Another important case for testing interoperability occurs when a new standard is implemented by different groups for the first time.

Interoperability-tests must always be carried out by combining the different implementations in a way that all possible combinations of the related components are tested. This permutation can be skipped when a reference implementation is available.

There are several other possible test scenarios, but this document concentrates on correctness and robustness, so discussing other factors would go beyond its scope. Other possible test scenarios can be derived from the requirements of the application. In each case it is necessary to think about the goals of a test run and write them down. After this part has been finished the test is designed and implemented, which often causes a big effort, sometimes comparable to the design itself. It is useful to start the test design and its realization during the design phase to avoid untestable modules. In eXtreme Programming [Bec00, Chap. 9] the definition of the test is the last step before the implementation is started. This confirms the importance of testing in the design cycle.

Thinking of how a component can be tested during the design often facilitates a modular and extendable design.

5.2 Test-design concepts

Defining good test-cases is often hard work and test engineers need a lot of information about the program to find all important test-cases. The component-based approach simplifies this process because a lot of test-cases can easily be found by studying the descriptions of the components.

The component-based approach is based on a detailed description of the interfaces between the components. Without this description it is not possible to write or later use components, so designers are forced to write down how components must be used and how these components must react. Pre- and postconditions and the sequence in which methods have to be invoked are part of the documentation and facilitate the process of defining the test-tasks.


The component-based approach implies the development of programs that are easy to test, because each component interacts with other components only by using a defined interface. This is only true when the system is split into quite small components that interact in a comprehensible way, but this is a requirement of the component-based approach.

If the component-based approach is used, another big problem is solved: the design of an integration test. Testing the whole system is often quite complex as the system often hides information and components cannot be accessed during the test. The component-based approach makes it possible to use proxies between the components during the test, so the communication between the involved components can be monitored. This is very useful to determine whether a component is misused or wrongly implemented. Even when proxies are used it can be very complicated to determine what went wrong, so the test must be a step-by-step procedure.
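A minimal monitoring proxy of this kind can be sketched with Java's dynamic proxies. The Service interface and the logging format below are assumptions for illustration, not CrashIt's actual Contract mechanism.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Sketch of a monitoring proxy placed between a client and a supplier
// during an integration test.
public class MonitoringProxy {
    public interface Service { String echo(String s); }

    // Wraps a supplier so every call crossing the interface is recorded.
    public static Service monitor(final Service supplier, final StringBuilder log) {
        return (Service) Proxy.newProxyInstance(
            Service.class.getClassLoader(),
            new Class<?>[] { Service.class },
            new InvocationHandler() {
                public Object invoke(Object p, Method m, Object[] args) throws Throwable {
                    log.append("call ").append(m.getName()).append('\n');
                    return m.invoke(supplier, args);   // forward to the real component
                }
            });
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        Service real = new Service() { public String echo(String s) { return s; } };
        Service proxied = monitor(real, log);
        proxied.echo("hi");
        System.out.print(log);   // shows the monitored communication
    }
}
```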

5.2.1 Test principles

There are some simple principles for how component-based systems can be tested. The main idea is to define tests from a simple to a more complex scope. Here are some rules for how tests should be designed. These simple rules seem obvious, but they define a method that can be used when a test must be defined by hand1.

1. Test Components on their own: First, all components that can be used without other components should be tested. In the service example the two services Echo and Concat should be tested on their own in order to verify that their reactions correspond to the specifications. The goal of this test is to check whether the component is implemented correctly.

2. Combine few components via proxies : As a next step, the test-engineer should combine only a few components (mostly denoted as ☞Client and ☞Supplier) via a proxy. This test is useful to check whether a component is used correctly by its Hook-Up component. The proxy allows the engineer to monitor the communication between the Hook-Up and the Hooked component. For instance, CrashIt supports these proxies, but they are called Contracts (refer to [Val03, p. 59] for detailed information). Contracts are slightly smarter than simple proxies because they can interact with each other. In the service example this test can be implemented using the class org.service.ServiceContract.

1In later versions of CrashIt there will be an automated way to generate test configurations using this approach.

3. Combine more components directly and new ones via Contracts : If the test has been successful so far, more components can be combined into a more complex system. Components that have already been tested by using Contracts can now be connected directly. New components should be connected by using Contracts between the client and the supplier. The reason for this test is to check whether a growing framework reacts as defined in the specification. If a problem occurs, it can be useful to connect components that have already been tested again by using Contracts. This makes it possible to gain more information on what has happened during the test. The reason why already tested components should be connected directly is that the system then reacts in a way which is more similar to how it will react in a real world environment. The results can be distorted, for example, by the delay caused by a Contract, so only the important Contracts should be used in this stage.

This test cannot be carried out in the mentioned example (see section 4.1 on page 30) as there are not enough components.

4. Test the whole system: After all these tests have been passed, a test of the whole system should be done. Now the real performance and delays of an event can be determined. In the small service example this is done by using the configurator. The configurator connects the components directly without Contracts. Sometimes it can be useful to connect components via Contracts even at this stage in order to trace the communication between the components, for example to identify components that have a poor performance.


Developing test configurations by using this simple schema is comprehensible, and configurations can easily be enhanced by including new components or new test cases without dropping the old configuration.

This schema can be used in all test tools that are able to deal with components. In JUnit [jUn03] such configurations can be implemented by writing test suites that combine several test classes.
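The staged schema above can also be sketched in a framework-independent way: run the stages from the simple to the complex scope and stop at the first failing stage, since later stages build on the earlier ones. The stage names and the abort-on-failure policy mirror the four rules; everything else is invented for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BooleanSupplier;

// Run the test stages in order and stop at the first failing stage.
public class StagedTestRun {
    public static String run(Map<String, BooleanSupplier> stages) {
        for (Map.Entry<String, BooleanSupplier> e : stages.entrySet()) {
            if (!e.getValue().getAsBoolean())
                return "failed at: " + e.getKey();
        }
        return "all stages passed";
    }

    public static void main(String[] args) {
        Map<String, BooleanSupplier> stages = new LinkedHashMap<>();
        stages.put("1 components on their own", () -> true);
        stages.put("2 few components via proxies", () -> true);
        stages.put("3 more components, new ones via Contracts", () -> true);
        stages.put("4 whole system", () -> true);
        System.out.println(run(stages));
    }
}
```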

5.2.2 Descriptive or declarative configurations

Writing the test-code is not the only way to carry out tests. It is also possible to use external files to define such test configurations instead of implementing test-classes. This has the advantage that these test scenarios can be designed and written before a single class has been implemented. The test team can start their work while the development team is working on the implementation. The approach of writing files that describe the test sequence avoids more errors in the test configuration than writing a test-class, because describing what should be done is simpler than implementing the test-case. Implemented test-cases often contain copy-and-paste errors, and their error-rate is as high as in the source code of the program. Therefore it is useful to change the medium by using different formats for defining the test-cases and implementing the program.

Another advantage of describing the test scenario in a language-independent format is the possibility of using the same test-cases to test another implementation based on another language or environment. For example, the configuration can be used for testing Java component-based frameworks and another test-bench can use the same configuration for testing a C++ implementation.

An implementation of a test-class may be slightly more flexible than the description of a test-case, because only aspects that can be expressed in the used description can be tested. However, a small set of needed operations and definitions is enough for testing component-based frameworks. If this method is applied to a framework that has a clean component-based design this disadvantage loses importance.

5.3 Exception safety and robustness

As mentioned before, exceptions are an important tool for increasing the robustness of a program. Even when a program is written without exceptions, critical situations may appear. Without exceptions these situations have to be indicated by special return values. No one forces the programmer to handle these special return values and they are often ignored. If a situation is indicated by an exception, the programmer must handle it, because otherwise a compile error will occur (in the case of checked exceptions in Java). Even when exceptions are used in a program, it is still the decision of the programmer how an exception must be handled. When an exception is ignored this is called a silent catch. In some uncommon cases these silent catches are necessary, but in the majority of cases they are programming mistakes and therefore have to be avoided.

This is only one reason why it is necessary to test if a component throws an exception

when it is supposed to. Testing the robustness of a component can often be done

by testing if an exception occurs when the component is used in a wrong way.

It is useful to use only a small number of different exceptions that may occur during a method call, because handling many different exceptions individually may cause a big effort. In a layered architecture, for example, the application layer should not deal with low-level exceptions. Each level should define and handle its own exceptions.

Java offers the option of wrapping an exception within another by using a pre-defined constructor of the Java exception object. Sometimes it is useful to use these nested exceptions2 to forward an exception through different API levels that use different exception types, but generally each level should handle its exceptions on its own to provide a transparent usage of this level even when an exception occurs.

2Compare to the chain of responsibility [GHRV95, p. 223]. Nested exceptions are not equal to a chain of responsibility, but it is a related concept.
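A minimal sketch of this layering, assuming illustrative class names: the data layer wraps a low-level failure into its own exception type, using the pre-defined `Exception(String, Throwable)` constructor for the nested exception, so the application layer never deals with low-level details.

```java
// Sketch of layer-specific exceptions with nested causes; names are invented.
public class LayeredExceptions {
    public static class DataAccessException extends Exception {
        public DataAccessException(String msg, Throwable cause) {
            super(msg, cause);       // wraps the low-level exception as the cause
        }
    }

    // Low-level operation standing in for e.g. a driver call.
    public static void lowLevelRead() throws java.io.IOException {
        throw new java.io.IOException("disk failure");
    }

    // The data layer translates the low-level exception into its own type.
    public static void readRecord() throws DataAccessException {
        try {
            lowLevelRead();
        } catch (java.io.IOException e) {
            throw new DataAccessException("could not read record", e);
        }
    }

    public static void main(String[] args) {
        try {
            readRecord();
        } catch (DataAccessException e) {
            System.out.println(e.getMessage() + " (cause: " + e.getCause().getMessage() + ")");
        }
    }
}
```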

However, despite all their advantages, exceptions are not as widely used as they should be. Programmers think that it is inconvenient to be forced to think of critical situations during the implementation and the design, but it is necessary. Exceptions themselves are not very popular, let alone testing them. A good test concept must also cover exception testing.

The purpose of the last chapters was to motivate the topic of testing in a theoretical

way. The following ones will discuss this important topic from another point of

view.


Chapter 6

CrashIt - A short introduction

It is not enough to be busy; so are the ants.

The question is: What are we busy about?

Henry David Thoreau

Almost every developer implements tests for the written code to verify it. Sometimes a framework like JUnit is used to implement these tests. This enhances the potential of the test code because it can be started automatically and can therefore be shared between different programmers who do not know details of the components that will be verified by running these tests. As JUnit uses implemented test-classes, it is a declarative approach. The framework which was designed and implemented during this work uses a descriptive approach. It is called CrashIt.

The aim of this chapter1 is to give a short overview of CrashIt, its concepts and its usage. A full introduction to CrashIt can be found in [Val03, p. 41].

1This chapter includes some passages of the corresponding chapter in [Val03, p. 41].


6.1 What is CrashIt?

CrashIt is a framework which supports developers and test-engineers during the unit- and integration-tests of software components. The aim of the project is to motivate developers to write test-cases based on their specifications. These test-cases are processed by the framework, which finally returns a report about the whole test.

A test2 of a component-based system consists of

1. checking the behavior of each component (= component-test or unit-test) and

2. monitoring the communication between components in order to detect misusages and check the behavior of components in a network (= integration-test).

The framework currently provides the capabilities to

• define tests (test-cases, test-case-sequences, configuration),

• run tests and log their results,

• check communication between components and

• create reports about tests.

User Groups

CrashIt was designed for three different user groups that consequently have different requirements:

Framework-developers write modules for the framework. They know its core functionality and are able to extend it or integrate the framework into other software development-tools.

2Testing is defined as a process with the aim of finding differences between the implementation and the specified requirements. The premise is that the specification is correct.

Component-developers write components to be tested by CrashIt.

Test-engineers run unit- or integration-tests on component(s). This user group is able to write complete test-configurations (see section A.3 on page 96 for more details). Furthermore, test-engineers analyze test-results and try to give hints where bugs could be located. [Bec00] suggests that not only component-developers should test, but also customers, because they know the requirements of the product.

6.2 Main concept

CrashIt can be used as a stand-alone application or as an Ant task [Apa03] (see 8.3 on page 83). The descriptive test data are stored in several files. CrashIt uses XML files3 that include the necessary test-configurations. A test is separated into different parts and each part is defined in an individual file.

Test-case : It models one method call. It contains

• how the call has to be done, by defining the TestClass,

• what should be done, by defining the method and all needed parameters, and

• how the result should be verified (ResultChecker).

Test-sequence : It combines several test-cases into a sequence. These sequences include flow-control information that can be used to skip tests if an unexpected result appears. The sequences define the test-cases and refer to a connection description that is used to set up a test scenario. First, this connection description is used to initialize and connect the needed components. After this has been finished the test-sequence is executed. This is necessary to guarantee that each test-sequence uses new components that do not include information of older test runs.

The connection description is split into a definition of the components and their relations. It can additionally include contract-components that can be used as proxies between two components4.

3The current version supports only XML files, but it is possible to define one's own file-format or another source that stores the configuration, e.g. a database.

Test-configuration : It defines a whole test scenario, including different test-sequences and configurations. The sequence of the defined test-sequences also includes flow-control information, so that some test-sequences can be skipped or, depending on certain results, other test-sequences can be invoked.

Defining this configuration in several separate files provides the possibility of reading only the data needed for the test. Files including data that would be used by skipped test-cases or test-sequences are not read. If the test-configuration were stored in only one file, the whole file would have to be read before the test starts, even when a lot of test-cases and test-sequences are skipped.
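As an illustration of the idea (and only that), a test-case file might look roughly like the following. All element names below are invented for this sketch; the actual format is defined by CrashIt itself and documented in [Val03].

```xml
<!-- Hypothetical sketch only: the tags are invented to illustrate how a
     test-case models one method call with its TestClass, method,
     parameters and ResultChecker. -->
<testcase name="echo-simple">
  <testclass>org.service.Echo</testclass>
  <method name="echo">
    <param type="java.lang.String">hello</param>
  </method>
  <resultchecker class="org.service.EqualsChecker">
    <expected>hello</expected>
  </resultchecker>
</testcase>
```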

6.3 Testing topics covered by CrashIt

The framework has been designed to support developers and test-engineers in testing applications and their components. As mentioned above, there are different approaches to running tests on software components. Some of these approaches can be implemented by using CrashIt. Chapter 5 in [Val03] includes a detailed explanation of how CrashIt can be used to increase the quality of a software project by addressing the different approaches. The next sections will only give a quick overview of how this can be done.

4These proxies are noted in item 2 on page 52 and refer to the concept described in [Val03, Chapter 4, p. 37].


6.3.1 Correctness

The correctness of a component can be verified by invoking methods of this component and comparing the results with expected values. This can be done in CrashIt by using the ResultChecker of the applied test-case. ResultCheckers are a more general approach to verifying the impact of a method-call than the assert-methods of JUnit. The ResultCheckers can for instance be used for verifying whether a database entry or a file has been written (see 24, page 80 for details). It is easy to implement and integrate one's own ResultChecker to verify special impacts of method calls which are not covered by existing ones.
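A hypothetical sketch of the ResultChecker idea may help here; the real interface and its implementations are defined by CrashIt and documented in [Val03], so all names below are invented for illustration.

```java
// Invented names: a checker verifies the impact of a method call, either
// the returned value or a side effect such as a written file.
public class ResultCheckerSketch {
    public interface ResultChecker {
        boolean check(Object result);
    }

    // Checker comparing the returned value with a desired value.
    public static class EqualsChecker implements ResultChecker {
        private final Object expected;
        public EqualsChecker(Object expected) { this.expected = expected; }
        public boolean check(Object result) { return expected.equals(result); }
    }

    // Checker verifying a side effect instead of a return value.
    public static class FileWrittenChecker implements ResultChecker {
        private final java.io.File file;
        public FileWrittenChecker(java.io.File file) { this.file = file; }
        public boolean check(Object result) { return file.exists(); }
    }

    public static void main(String[] args) {
        ResultChecker checker = new EqualsChecker("hello");
        System.out.println("check passed: " + checker.check("hello"));
    }
}
```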

6.3.2 Robustness

The robustness of a program can be tested by applying method-calls with wrong parameters or in a wrong sequence. Even for checking this quality factor the ResultCheckers are useful, because they can verify whether everything is still consistent despite the appearance of a failure. The provided reasons and hints, e.g. in a thrown exception, can also be verified by a ResultChecker. Testing for robustness is just as important as testing for correctness.

6.3.3 Compatibility

It will be possible to use CrashIt in two ways for compatibility-testing. First, it is possible to implement proxies that verify and record whether a component behaves as defined in a general description for this component type. Interfaces and the components that implement them can be tested by using this approach.

It is also possible to do this in JUnit by implementing an abstract TestClass that uses only the interface definition of a component. An inherited TestCase that instantiates a concrete implementation of the interface can be implemented, and this test can also verify whether the component behaves as described in the design. These abstract TestClasses can be implemented after the interfaces have been defined, as specified in extreme programming (see page 68). In this approach it is assumed that the two implementations are written in the same language.
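The abstract-test idea can be sketched without depending on JUnit itself; the Service interface and the class names are assumptions for illustration.

```java
// The abstract class tests only against the interface definition, and a
// subclass per implementation supplies the concrete instance under test.
public class CompatibilityTest {
    public interface Service { String echo(String s); }

    public static abstract class AbstractServiceTest {
        // One subclass per implementation provides the instance under test.
        public abstract Service createService();

        // Written only against the interface definition.
        public boolean testEchoReturnsInput() {
            return "x".equals(createService().echo("x"));
        }
    }

    // Concrete test for one implementation of the interface.
    public static class DefaultServiceTest extends AbstractServiceTest {
        public Service createService() {
            return new Service() { public String echo(String s) { return s; } };
        }
    }

    public static void main(String[] args) {
        System.out.println("compatible: " + new DefaultServiceTest().testEchoReturnsInput());
    }
}
```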

This leads to the second way of doing compatibility tests with CrashIt, which was one of the reasons for starting this project. This method is not yet available, but it is planned to realize it in the near future. CrashIt is currently able to test Java components and these tests are defined in a language-independent way. If an implementation of CrashIt becomes available for other languages or other platforms, it will be possible to use the same test-configuration for testing the components, because the test has been defined in XML files and has not been implemented in a specific language.


Chapter 7

Using CrashIt in the software

design cycle

There is no useful rule without an exception.

Thomas Fuller

This chapter will show how CrashIt can be used in different software development models. It will not provide a single way for all development models, but it provides ideas on how CrashIt can be used during the development cycle. Test-benches are traditionally closely bound to extreme programming, but all other models also need test-benches to verify the development progress.

A development-process consists of several topics that interact with each other. Design models are schemes for how these abstract topics can be combined. The topics and their relations are part of a methodology.


7.1 Methodology concepts

A methodology is defined as a series of related methods or techniques for reaching a concrete goal. There are 13 elements that define a methodology (see figure 7.1) [Coc02, chapter 4, p. 115].

Figure 7.1: Elements of a methodology

These elements1 can be used in each team endeavor, whether it deals with software or not. When the defined steps of a development-model are generalized, they can also be applied to non-software projects.

The testing environment is related to almost every methodology element. It is part of the used tools and techniques, it forces the team members to increase their skills, it helps to increase the quality of the product and is necessary to verify whether a certain milestone has been reached. Testing also has an influence on the required activities: sometimes new activities like implementing test classes and running the tests are necessary, and sometimes activities become unnecessary, e.g. when a test-tool is able to generate the documentation of the test run. Testing is therefore one key topic in the development process.

1A detailed discussion of the elements can be found in [Coc02, 115-120].

7.2 Development-models

There are different models that are based on different - sometimes historical - approaches. Typically they focus on processes, milestones and activities, but all other methodology elements have a strong influence on the whole development process. The next sections will quickly recap the concepts of the best-known models. A detailed description of the different models is not necessary for the rest of this chapter, but it is useful to understand their main concepts.

7.2.1 The waterfall model

This is one of the oldest models. It is based on the assumption that the development cycle can be separated into different independent phases. These phases are passed through one after the other. The original model did not support a step back to a previous phase, but the need for iteration was quickly recognized. The second assumption, that a phase is a strictly separated item, has also been revised in newer models. A new phase can only start when the previous one has been finished. When a phase fails, new requirements must be defined and, depending on the consequences, one or more phases must be passed again. There are typically eight steps in the model:

1. Document the system concept.

2. Identify system requirements and analyze them.

3. Break the system into components (architectural design).


4. Design each component (detailed design).

5. Code the system components and test them individually (coding, debugging,

and unit testing).

6. Integrate the pieces and test the integrated components (integration test).

7. Test the whole system (system testing).

8. Deploy the system and operate it.

Figure 7.2: The waterfall model

The advantage of the waterfall-model is that the development process can be strictly planned, including milestones and deadlines, because no phase overlaps with a previous one.

The disadvantage is that there is nearly no time for reflection or revision, and going back to the previous step is quite expensive.


The model is useful for small and simple projects that use well-known technologies, as in such an environment projects can be calculated easily and appropriately.

CrashIt and the waterfall model

Planning the test-scenario should be done as early as possible, but in this model it is useful to start at the point where the pieces of the system have been designed. At this point it is possible to define the proxies between the components that can be used for monitoring the communication between them.

The next step is to define the tests for each component during its design. The idea of writing the tests before the code has been written is part of eXtreme Programming but fits very well even in this development model. If it is simple to define tests for a component, it is in all probability also simple to use it.

During the implementation of each component, it can be tested by using the test-cases defined in the previous phase.

The system- and integration-tests can be done by combining components using the connection-configuration. As mentioned above, the connection-configuration is applied before a test sequence is invoked.

7.2.2 The spiral model

The basic idea of the spiral model is an evolutionary development process that uses the waterfall model for each cycle. Analyzing the risk of each step is one main idea. Components with high priority are designed and implemented first, reviewed, and shown to the users so that the gained feedback can be used in the next cycle. Each cycle starts with a risk assessment in order to lead the next steps in the right direction. The spiral model is one of the currently used development models. Its concepts have a big influence on other models and commercial solutions like the Rational Unified Process.


Figure 7.3: The spiral model

The advantage of the spiral model is that it focuses on re-use and error limitation. Generally, it puts software quality factors up front and tries to minimize the risk of the development. The feedback of users is an important part of each development process and is integrated into the spiral model.

The disadvantages of the model are that it must be adapted for each project, as it is not generally applicable, and that the risk assessments cause overhead.

CrashIt and the spiral model

CrashIt can be used during each cycle when the current implementation must be verified. Analogous to its usage in the waterfall model, it can be used in the corresponding steps.


7.2.3 Prototyping

Prototyping is based on the idea of implementing a basic prototype and enhancing it step by step. It follows similar steps to the previous models but has its focus on re-designing and implementing a sample solution.

The advantage of prototyping is that in each phase a running solution is available that can be used for tests or for evaluating the user's acceptance.

Its disadvantages are that the complexity of a grown program can quickly become unmanageable and that it encourages code-and-fix rather than a good design.

CrashIt and prototyping

CrashIt can be used to verify the prototypes after one step has been finished. This model makes integration tests necessary, but it may take a big effort to adapt the unit tests between two cycles, because the interfaces may change strongly during a re-design. The descriptive approach of defining the tests that is used in CrashIt may allow a faster update of the test-cases for the new version.

7.2.4 Extreme programming

Extreme programming is a combination of several successful ideas. It covers four values (communication, simplicity, feedback and courage) and defines four basic activities: coding, designing, listening and testing. Other well-known aspects of XP are:

The Planning Game : It combines the technical estimates and business priorities

to define the next milestone.

Small releases : A simple system should run as quickly as possible. This running solution is then enhanced in short release cycles.


Refactorings : Changing the system structure without changing its behavior. This may improve the reliability and performance of the system.

Simple design : The system design has to be as simple as possible. Programmers

should avoid solving problems that are parts of future releases.

Pair programming : All code is written by two programmers using one machine. It is less likely that two developers overlook a bug than one.

Coding standards : All developers write their code in accordance with naming

and design rules. This will simplify the communication between the involved

developers.

Testing : Programmers write test-cases to demonstrate that a feature has been implemented. The tests are written before a new feature is implemented.

Extreme programming is an interesting mixture of techniques and concepts, but it
does not seem applicable to all projects, because some technologies or distributed
applications have to be planned precisely. It can be hard to implement such
applications from scratch by redefining and enhancing a prototype, or the required
refactorings may become very expensive.

If a running prototype exists, XP seems to be the right model, especially when the
customer is involved in design decisions. Figure 7.4 shows the life cycle in XP,
where the customer is able to define features that have to be included in the next
cycle [JC00, chapter 2, p. 13]. In well-established teams it is possible to reserve an
amount of man-power for refactorings and internal improvements. The remaining
amount can be assigned by the customer to implement important features.
This provides a good balance between adding features and maintaining the code.

Some aspects, like writing the tests before starting to implement a feature, might
look academic. But writing a test is similar to using a component: the test is the
first application of the component, and writing it is therefore the first verification
of the design. Other aspects, like pair-programming, may contradict business
guidelines.


Figure 7.4: Life-cycle in XP

Extreme programming is a controversial model as it defines extreme ways for solving

problems. However, it is an interesting approach that can be adapted and used as

a basis for successful developments.

CrashIt and XP

Testing is a central concept in XP, so CrashIt can be used as a test-bench in
this approach to verify that a feature has been successfully implemented. In
this model the descriptive approach that CrashIt uses to define the test-cases
also simplifies changing a test-case, because the descriptive data are independent
of the current implementation. A test can be run even when some interfaces
or classes have not yet been fully adapted, because no test-class has to be compiled.
In JUnit, for example, a test can only be run once all interfaces have been fully
adapted; otherwise the test-class cannot be compiled.
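The practical difference can be sketched with plain Java reflection, which is presumably the mechanism a descriptive test engine relies on under the hood: the method under test is located by name at run time, so the test description needs no recompilation when unrelated parts of an interface change. The class and method names below are purely illustrative, not CrashIt code.

```java
import java.lang.reflect.Method;

// A toy component whose interface may still change during a re-design.
class Greeter {
    public String greet(String name) { return "Hello " + name; }
}

class ReflectiveRunner {
    // Locates and invokes the method under test by name at run time, as a
    // descriptive test engine can: nothing here needs to be recompiled
    // when unrelated parts of the component's interface change.
    static Object run(Object target, String methodName, Object arg) {
        try {
            Method m = target.getClass().getMethod(methodName, arg.getClass());
            return m.invoke(target, arg);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

A compiled JUnit test, by contrast, references the interface directly and therefore breaks at compile time as soon as any part of it changes.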


7.3 Using CrashIt in different models

As mentioned above, CrashIt can be used in every model, as each of them requires
proof that a component has been correctly implemented. Not all models define
when test-cases should be written, but a general rule is to write them as early as
possible.

7.3.1 Which way is the best?

There is no general rule for when a given model should be used; it depends on the
application and the experience of the developers. The approach used by XP might
be a bit extreme, but it forces developers to think about the tests very early. When
implementing a new application, XP might miss some important points, but for
maintaining a product in cooperation with users it might be the right way.

The following steps are a proposal for combining the different models during the
development cycle.

Analyzing : Start with analyzing the user-requirements and derive the software-

requirements from them.

Design : After the software- and user-requirements have been written down the

needed components and modules can be specified. This will result in an overall

design including the main interfaces between the modules. Walk-throughs of

the core functionalities are necessary to verify the design. During this phase

it is useful to define the tests of the main components and integration tests.

Defining the tests improves the information obtained during the walk-through

because this information is written down in the test as valid usage of the

components. The test scenario should also include invalid calls to verify whether
the components are able to report misuse. For this aspect it is necessary to think
of the tests early in the design, because otherwise it can be impossible to test
the component, as it may not provide methods for verifying misuse or may not
throw exceptions.

Implementing a prototype : After all interfaces and the tests have been written

down the first prototype can be implemented. The prototype is finished when

all test-cases are successful.

Iterative development : Now new features and required refactorings can be dis-
cussed on the basis of a running system, and an iterative development can be
started. This process iterates over the following steps until the development is
finished:

1. Define the necessary changes.

2. Alter the test-cases.

3. Implement the changes.

4. Verify that the implementation fulfills the tests.

These steps combine concepts of all the models in order to focus on testing the
application. There are only a few environments where software must not contain
any bugs, e.g. applications in medical environments or in aircraft. For almost
every other application, users have accepted that it is not possible to implement
it without a single bug. But as IT solutions gain more and more influence on our
lives, it becomes more important that software is free of bugs. This is a goal that
is hard to reach, but it must still be the goal of every developer. Focusing on
testing during development leads to stable, reliable applications with few or no
bugs.


Chapter 8

The Implementation of CrashIt

Nothing is particularly hard if you divide it into small jobs.

Henry Ford

The goal of this chapter is to explain the internal structure of CrashIt. This may
help developers to extend the framework. It is not necessary to implement addi-
tional components when CrashIt is used as a test-bench, but in some cases it makes
sense to implement one's own components. Anyone who wants to implement an
additional component for CrashIt should contact the development team, because
a similar component may already be planned or may become part of a future
release.

8.1 Overall architecture

The overall architecture of CrashIt follows one main principle: it must be possible
to replace everything that is useful to exchange, and there must be a simple way
of integrating new components or extensions.


This requirement forces the framework to support a declarative configuration that
can be widely adapted without recompiling components of the framework. It is
possible to write a custom configuration-subsystem, but a full-featured XML-based
version is part of the current CrashIt implementation1.

Figure 8.1: CrashIt modules

The architecture consists of several main parts:

Configurator : This component is responsible for initializing several sub-components.
It therefore uses a configuration object that includes all configuration informa-
tion. The configurator is able to load and connect the components that are
necessary for the test.

1For custom versions implement org.dinopolis.crashit.configurator.config.Configuration
and see org.dinopolis.crashit.impl.TestApplication for how a configuration can be used


Test-engine : This component is responsible for running the tests. It uses a test-
configuration that holds all information about the test.

ContractAdministrator : This component is responsible for managing contracts
which are used during a test. These contracts can interact with each other,
and they therefore use the ContractAdministrator, which is able to return the
available contracts for a test run.

Logger : CrashIt provides a configurable logging interface that allows different
logging mechanisms to be used. An implementation of a logging framework
(carried out by two students of Graz University of Technology) and an adapter
for using Log4J as the logging mechanism in CrashIt are included in the current
distribution.

ResultSummary : This component is able to create summaries of what happened

during the test.

This is a simple overview of the components that are part of the whole framework.

A detailed description of all components can be found in [Val03, chapter 6].

8.2 Interfaces and design-decisions

To use a framework as the basis of an application or to extend it, it is necessary
to understand its design decisions and the resulting interface structure. The next
sections point out these decisions and explain how the framework can be used to
implement a test-application.

8.2.1 The CrashIt environment

One really important class of the framework is org.dinopolis.crashit.CrashIt-
Environment. This is an abstract class that implements several static methods
for registering and creating instances of CrashIt components. CrashIt components
themselves must implement the interface org.dinopolis.crashit.CrashItFramework-
Component, which is used to register, initialize and shut down a component instance.
Each concrete implementation of a CrashIt environment should inherit from this
class, because the static methods of the abstract environment class are used by
several other components within the framework. The CrashItEnvironment can be
interpreted as a singleton [GHRV95] that is used in the whole framework. As it
does not provide a static getInstance() method, it is a pseudo-singleton2.

The singleton pattern is a very useful design pattern, but it has one big drawback,
especially when it is used in Java. There is no simple way of generalizing a single-
ton, because the pattern is based on the assumption that a static method
getInstance returns the only instance of the class. As it is not possible to de-
fine static methods in interfaces, it is not possible to generalize a singleton in Java.
However, it is necessary to use the CrashItEnvironment as a base class for fu-
ture implementations. Consequently, two further interfaces have been defined: the
org.dinopolis.CrashItFramework interface provides methods for configuring the
framework, and org.dinopolis.FrameworkConfigurator is used to set up concrete
CrashItFramework implementations. These implementations have to be realized as
singletons3 and have to inherit from the abstract CrashItEnvironment class.

This seems a rather complicated approach to solving the problem, but done this
way the pseudo-singleton class can be used throughout the framework for
addressing the components, and a general configuration of the framework becomes
possible, too.

2Pseudo-singleton means that the class behaves like a singleton, because it is the only
way of accessing the registered components, but it does not support the instantiation of a
singleton object.
3The package org.dinopolis.crashit.xml includes the implementations of these interfaces
that are required for the XML implementation
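The pseudo-singleton idea can be reduced to a short sketch. This is an illustration, not CrashIt's real API: an abstract class that is never instantiated itself, but whose static methods act as the single, framework-wide component registry.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative reduction of the pseudo-singleton: the class itself is
// abstract and never instantiated, yet its static methods are the only
// way of accessing the registered components (names are assumptions).
abstract class Environment {
    private static final Map<String, Object> components =
        new HashMap<String, Object>();

    static void register(String name, Object component) {
        components.put(name, component);
    }

    static Object get(String name) {
        return components.get(name);
    }
}
```

Concrete subclasses can add behavior by inheritance while every part of the framework still addresses the same static registry.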


Figure 8.2: UML diagram of the environment classes

8.2.2 Accessing the configuration, the configuration-layer

As mentioned above, the configuration of the framework can be handled in a very
flexible way. This is true for the format used, but internally there are some fixed
interfaces that represent the configuration. The configuration-subsystem of CrashIt
is responsible for converting the stored information into objects that can be accessed
through these interfaces. One important pattern has been used in the design of
these internal configuration interfaces: the Iterator pattern [GHRV95]. All objects
that hold configuration data are implemented as lists that support an iterator for
accessing their elements. The elements of these lists are value-objects that store
the required data (e.g. org.dinopolis.crashit.configurator.config.Descrip-
tionOfComponent). This structure allows the lists to read the configuration exactly
at the moment when it is needed. If some data are not needed because their
corresponding test-task has been skipped, these data will not be read. Figure 8.3
on page 79 shows these internal interfaces.
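The lazy-reading idea behind these lists can be sketched as follows. The class is illustrative, not one of CrashIt's real configuration classes, and a comma-separated string stands in for the unparsed part of an XML file: the backing data is parsed only when an iterator is first requested, so configuration belonging to skipped test-tasks is never read.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Illustrative sketch: a configuration list that parses its backing data
// only on first access via the iterator (Iterator pattern).
class LazyConfigList implements Iterable<String> {
    private final String rawData;  // stands in for unparsed configuration
    private List<String> parsed;   // stays null until first access

    LazyConfigList(String rawData) { this.rawData = rawData; }

    public Iterator<String> iterator() {
        if (parsed == null) {      // read the configuration on demand only
            parsed = new ArrayList<String>();
            for (String entry : rawData.split(",")) {
                parsed.add(entry.trim());
            }
        }
        return parsed.iterator();
    }
}
```

If the iterator is never requested, the parsing work is never done.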

8.2.3 Using the framework

CrashIt is designed as a framework, but how can it be used for writing a test-
application? There are two implementations that use CrashIt to realize a test-
bench (see 8.3 on page 83). An application that uses this framework for setting up
a test application has to perform the following steps4:

1. Load a FrameworkConfigurator. This can be done by using a class-loader.

2. Create a new CrashItFramework instance by invoking the createEnviron-
ment() method of the configurator.

3. Apply the test-configuration to the CrashItFramework instance.

4. Initialize the framework by invoking the static method initialize of the
CrashItEnvironment class.

After these steps have been applied the framework can be used to run the configured

test. The test is started by invoking the method run() of the CrashItEnvironment

class.

4See the source code of class org.dinopolis.crashit.impl.TestApplication for an imple-
mentation of these steps.


Figure 8.3: UML diagram of internal configuration-interfaces

After the tests have finished, the shutdown process is signaled by invoking
initializeShutDown and completed by calling the method shutDown. Splitting
the shutdown process into two steps gives the involved components a chance to
shut down concurrently, so that there is enough time to close all opened files or
database connections.
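The bootstrap steps described above can be sketched in code. The interface and method names FrameworkConfigurator and createEnvironment() follow the text; applyConfiguration() and the Dummy stand-ins are assumptions made for illustration, not CrashIt's real signatures.

```java
// Hedged sketch of the bootstrap steps from section 8.2.3; only the names
// FrameworkConfigurator and createEnvironment() come from the text.
interface CrashItFramework {
    void applyConfiguration(String testConfig);
}

interface FrameworkConfigurator {
    CrashItFramework createEnvironment();
}

// Minimal stand-in implementations so that the sketch can run.
class DummyFramework implements CrashItFramework {
    String config;
    public void applyConfiguration(String testConfig) { this.config = testConfig; }
}

class DummyConfigurator implements FrameworkConfigurator {
    public CrashItFramework createEnvironment() { return new DummyFramework(); }
}

class Bootstrap {
    static CrashItFramework launch(FrameworkConfigurator configurator,
                                   String testConfig) {
        // Step 1 (loading the configurator via a class-loader, e.g. with
        // Class.forName) is assumed to have produced 'configurator'.
        CrashItFramework framework = configurator.createEnvironment(); // step 2
        framework.applyConfiguration(testConfig);                      // step 3
        // Step 4: in the real framework, the static initialize() of the
        // CrashItEnvironment class would now be invoked, and run() would
        // start the configured test; afterwards initializeShutDown and
        // shutDown would end the run in two phases.
        return framework;
    }
}
```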


8.2.4 Interfaces for test-cases

The last sections described some important interfaces for the configuration and its
usage. Beyond these examples there are several other interfaces and components
that are especially important for describing the test-case. It is even more likely
that one of these components will be implemented for a special test-application
than that a new configuration will be written.

The test-case is the smallest part of the test configuration. A test-case is split into
two parts.

TestClass : This part defines how the test will be applied; for example, a method
of a local component is executed or a method of a remote object is called.
Separating the invocation of a method into its own class was done to increase
the possibilities for compatibility-testing5.

A new way of invoking a method can easily be realized by implementing
the interface org.dinopolis.crashit.testengine.TestClass, and it can be
integrated into the test by defining it in the test-configuration.

ResultChecker : ResultCheckers are a more general approach to verifying the
result of a method-call. Verifying the result of a method-call is the main
concept of running a test. Other test-frameworks solve this problem by pro-
viding several methods that compare the result of the call with the expected
result. But what if the call does not return a result because the method is
void, or how can it be verified that a database entry has actually been writ-
ten by the method? In such cases the result can only be checked by either
invoking a further method or by implementing additional functionality that
verifies the result. Neither solution excludes the possibility that an error is
hidden by the additional code that is needed to verify the result.

ResultCheckers can be used to implement this additional code only once. They
will be verified, and test-engineers can trust them. For each object-type that
is to be verified in a test-run an appropriate ResultChecker must exist. This
seems to cause additional effort for the test-engineer, but once a checker has
been written it can be reused in other test-cases, too6.

5If other components are to be loaded, it is necessary to re-implement or extend the current
component factory, which is responsible for creating the objects that are to be tested.

A ResultChecker can be realized by writing a class that implements the org.di-
nopolis.crashit.testengine.ResultChecker interface; it can then easily be
used in a test-run, as it is enough to define it in the test-configuration.
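A hedged sketch of the idea follows: the interface name comes from the text, but its method signature is an assumption. The concrete checker compares string results while ignoring surrounding whitespace, so this comparison logic is written and verified once instead of being repeated in every test.

```java
// The interface name follows the text; its signature is an assumption.
interface ResultChecker {
    boolean check(Object actual, Object expected);
}

// A checker for string results that ignores surrounding whitespace. Once
// verified, the same checker can be reused in every test-case that has to
// compare values of this object type.
class TrimmedStringChecker implements ResultChecker {
    public boolean check(Object actual, Object expected) {
        if (!(actual instanceof String) || !(expected instanceof String)) {
            return false;  // this checker is only responsible for strings
        }
        return ((String) actual).trim().equals(((String) expected).trim());
    }
}
```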

8.2.5 The XML-subsystem

As mentioned in the last sections, the configuration is one of the topics with the
biggest influence on the framework design. CrashIt supports a configuration-layer
and provides a full-featured XML-based implementation of its interfaces. The
XML-based implementation itself uses a small framework for defining and extending
XML-configurations. This XML-framework is part of CrashIt and is based on the
concept of tag-libraries. A SAX-based parser reads the XML-file that stores the
configuration and, depending on the type of the file, uses different tag-configurations
for processing it.

The reason for this approach is that SAX-based parsers quickly become unmanage-
able, and in some cases it is therefore rather complex to extend their functionality.
If a system that uses this tag-library-based approach has to be extended, describing
the new tags and implementing their additional functionality in separate classes
keeps the development controllable, because the relations and semantics of the tags
are separated from their functionality.

The tag-library itself uses an XML-format for storing the tag-names, their corre-

sponding classes and the relations between the different tags.

6The current implementation of CrashIt supports several basic object-type ResultCheckers but

in the future the number of available checkers is likely to increase.


<taglib>
  <name>Connection List Tags</name>
  <owner>org.dinopolis.crashit.xml.tags.connections.ConnectionListOwner</owner>
  <tags>
    <tag>
      <name>connectionlist</name>
      <class>org.dinopolis.crashit.xml.tags.connections.ConnectionsListTag</class>
      <attributes>
        <attribute>
          <name>path</name>
        </attribute>
      </attributes>
      <nestedtags>
        <tag>connection</tag>
      </nestedtags>
    </tag>
    <tag>
      <name>connection</name>
      <class>org.dinopolis.crashit.xml.tags.connections.ConnectionListTag</class>
      ...

Listing 8.1: CrashIt tag-library example

Each tag-library definition is parsed and mapped into an internal representation,
an org.dinopolis.crashit.xml.TagContext. This context can be used to generate
instances of the tags that have been defined in the tag-library file, and it is used
by a parser to process concrete configuration files. The parser also needs a concrete
handler that stores the processed information. This handler is called
TagContextOwner.

Figure 8.4 shows an incomplete UML diagram of the configuration tags. The
StartTag extends the ConfigBasicTag, which itself extends the BasicTag. The
ConfigBasicTag holds a reference to the concrete TagContextOwner for this con-
figuration environment, the ConfigurationOwner.

Parsing and generating the tag-context is an expensive task and should only be
done once for each file. Therefore a manager (org.dinopolis.crashit.xml.Tag-
ContextManager) has been implemented that can be used to generate a context.
This manager caches generated tag-contexts to optimize the process of reading the
configuration files.
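The caching idea can be sketched as follows; the class and method names are assumptions for illustration, not the real TagContextManager API. The point is that the expensive parse of a tag-library file happens at most once per file name.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative cache in the style of the described manager: repeated
// requests for the same file name return the context generated the first
// time, so the expensive parsing step runs at most once per file.
class ContextCache {
    private final Map<String, String> cache = new HashMap<String, String>();
    int parseCount = 0;  // exposed only to make the caching observable

    String getContext(String fileName) {
        String context = cache.get(fileName);
        if (context == null) {
            parseCount++;  // stands in for the expensive parsing step
            context = "context-for-" + fileName;
            cache.put(fileName, context);
        }
        return context;
    }
}
```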

A tag-library-based approach is a common way of handling complex configurations

or applications. It can be found in popular projects like JavaServerPages, J2EE, or

ant.


Figure 8.4: UML diagram of the configuration tags (incomplete)

8.3 Available applications

As mentioned above, CrashIt is designed as a framework, but there are also two
applications available that are based on it: a stand-alone application (org.dinop-
olis.crashit.impl.TestApplication) and an ant-task (org.dinopolis.crash-
it.ant.CrashItAntTask). Both implementations use the XML-configuration, so
the same configuration can be used in both. Notice that depending on the logging
sub-system, additional libraries have to be visible in the classpath. CrashIt uses
Log4J as its default logging framework.


8.3.1 Stand-alone Application

This application is implemented as a Java class that can be started by:

java org.dinopolis.crashit.impl.TestApplication <config-file>

The <config-file> refers to a file that stores the main configuration for the test.

A detailed description of this file can be found in section A.3.1 on page 97.

8.3.2 Ant-task

Adding an ant-task to a build.xml is simple, as ant provides a mechanism for
integrating new tasks. The new task must be visible in the classpath of the
environment in which ant is used.

The CrashIt ant-task can be used by adding the following lines to the build.xml file:

<target name="test">
  <taskdef name="crashit"
           classname="org.dinopolis.crashit.ant.CrashItTask"/>
  <crashit testfile="<config-file>"/>
</target>

Listing 8.2: CrashIt as ant-task

When the target test is invoked by calling "ant test", CrashIt will start. In-
tegrating CrashIt into an existing development environment that is based on ant
is therefore very simple.


Chapter 9

Conclusion

9.1 Personal experience

Designing and implementing CrashIt, and the need to deal with topics like XP
(eXtreme Programming), have surely improved my skills in designing software. The
work-flow defined in section 7.3.1 will be my way of developing future projects, and
this work will therefore have a strong influence on how I organize and develop them.
The teamwork and the focus on designing and planning before starting to
implement made it possible to meet all milestones and showed that this is a
successful methodology in software development. It was an interesting and
exciting year that I would not want to miss.

9.2 Project related topics

The current version of CrashIt can be seen as the core of a bigger framework that
will be implemented in the future. It will now be used in a lecture about software
design, and hopefully a lot of the resulting feedback can be incorporated into
future releases.

This version is the proof that the ideas of CrashIt can be used for testing
component based systems.

CrashIt has some disadvantages in comparison to existing test-benches:

• At the moment all configuration-files must be written by hand. This will be
improved when an integrated environment is available. Such an environment
will emphasize the advantages of CrashIt.

• There is no integration into any existing IDE. Integration can currently only
be achieved by using the ant-task ([Apa03]).

The biggest advantage of CrashIt up to now is that no test-code must be written,
and the test-team can therefore start to design and implement the tests in parallel
to the development team, without the need for existing interfaces. Another
advantage of CrashIt is its modular design, so it should be easy to include it in
another environment, e.g. for testing J2EE applications.

CrashIt is now available at crashit.hti.at but will in the future be available at
www.dinopolis.org.

9.3 Outlook

There are several projects that are pending or are in progress.

Result Store : The result-store of CrashIt has been redesigned by a group of
students and will be included in CrashIt during this term.

C++ : Porting CrashIt or reimplementing it as a test-bench for C++ programs
will be part of future master's theses or dissertations.

J2EE : The implementation of a J2EE version of CrashIt is planned during a
pending J2EE project.


Automated test-sequence generation : This will be part of a following disserta-
tion. Some theoretical work on this topic was done during the design of
CrashIt, but it must be reviewed and tested before it is published.


Appendix A

The usage of CrashIt - a simple

example

A.1 Introduction

The intention of this example is to give a short introduction to designing component
based frameworks and to show the usage of CrashIt with a small component based
example. In this example, several components define a small framework for
managing string based services.

This example uses the hook pattern, one of the basic design patterns. For
understanding the concepts of CrashIt and its usage in this simple example it is
sufficient to know the hook pattern and its application. Now let us have a look at
this concrete example of a small component based framework.
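For readers unfamiliar with it, the hook pattern can be sketched as a template method with a variation point. This is a generic illustration under that interpretation of the pattern, not code from the example framework:

```java
// Generic hook-pattern illustration: the template method fixes the
// overall work-flow and delegates one step, the hook, to subclasses.
abstract class Report {
    // Template method: the surrounding steps are fixed ...
    final String render(String data) {
        return "[header]" + formatBody(data) + "[footer]";
    }

    // ... while this hook method is the variation point.
    abstract String formatBody(String data);
}

class UpperCaseReport extends Report {
    String formatBody(String data) { return data.toUpperCase(); }
}
```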

A.2 A service example

Sometimes it is necessary to administrate different services. For example, storing
a document on a local hard disk or storing it on an FTP (File Transfer Protocol)
server can be described using similar parameters. In the first version of an
application, storing documents on an FTP server may not be supported. By
applying an update, this additional functionality can easily be added if the
mechanism for storing a document is clearly separated and, for example,
implemented as a service of the application.

In this example a service is simpler than storing a document, but it uses a similar
mechanism for implementing services. Here a service is able to deal with strings
and to perform calculations on them. Consequently, a method String
execute(String) is used that processes the given string value.

A.2.1 Parts of the framework

The framework consists of three different parts:

Service: It processes a string value and returns the result as another string. A

Service must be initialized and throws an exception if it is used before it has

been initialized. It is possible to check if the service has been initialized or

not. A Service hast to provide an id through which a provider may address it.

ServiceProvider: It manages different services. The ServiceProvider1 can ini-

tialize all registered services. A service can be used for processing a certain

value by passing two parameters to the ServiceProvider: the id of the service

that is to be used and the value that is to be processed.

Configurator: It can be used to configure the ServiceProvider by applying a
concrete Configurator to the ServiceProvider.2 The Configurator regis-
ters the different Services and makes the provider start their initialization.

1The service provider can be interpreted as a Mediator [GHRV95, chapter 5].
2The Configurator used in this example can be configured by a property file. This is a simple
way to keep the configuration changeable.


The implementation of this framework is achieved by defining interfaces for each
part and using these interfaces to implement concrete components. To minimize
the effort of component development, an abstract base class should be implemented.
If a new component is needed, the programmer only has to extend the abstract
class, so the benefits of inheritance and of using interfaces can be combined.

A.2.2 Interfaces

There are three interfaces that correspond to the three different parts of this example

framework:

Service Interface

The Service interface (see listing A.1) consists of four methods:

• getServiceId

• execute

• init

• isInitialized

This is a very small interface, because an implementation that could be used in a
real-world environment would be too extensive for this example. A service interface
in a real-world implementation should possibly contain mechanisms for supporting
different versions, compatibility checks, a mechanism for applying additional
parameters, or the possibility of initiating a roll-back.

ServiceProvider Interface

The ServiceProvider interface (see listing A.2) is very small too. Even a smarter

implementation of a provider would go beyond the scope of this example. The

Page 107: Component based testing during the software development cycle

APPENDIX A. THE USAGE OF CRASHIT - A SIMPLE EXAMPLE 91

package org.services;

//------------------------------------------------------------------
/**
 * This is an interface for a simple string-based Service.
 *
 * @pre: Before sending requests to a Service which implements
 *       this interface, it should be initialized by calling
 *       {@link #init()}.
 *
 * @author Gerhard Fliess
 */
public interface Service
{
  //----------------------------------------------------------------
  /**
   * Usage: This method returns the <b>id</b> of the Service.
   *
   * @return String
   */
  public String getServiceId();

  //----------------------------------------------------------------
  /**
   * Usage: The Service processes <b>value</b> and returns the
   * modified string.
   *
   * @param value
   * @return String
   * @throws ServiceFailure This exception is thrown when the
   *         Service is unable to answer the request or when it is
   *         not initialized.
   * @pre: isInitialized() == true
   */
  public String execute(String value) throws ServiceFailure;

  //----------------------------------------------------------------
  /**
   * Usage: Over this method the Service can be initialized.
   *
   * @throws ServiceFailure
   * @pre: isInitialized() != true
   * @post: isInitialized() == true
   */
  public void init() throws ServiceFailure;

  //----------------------------------------------------------------
  /**
   * Usage: This method returns <b>true</b> if the Service is
   * initialized, otherwise <b>false</b>.
   *
   * @return boolean
   */
  public boolean isInitialized();
}

Listing A.1: Service interface
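As a complement to listing A.1, a minimal implementation honoring the init-before-execute contract might look like this. The interface is repeated in condensed form so the sketch is self-contained; UpperCaseService and this simple ServiceFailure class are illustrations, not code of the example framework.

```java
// Condensed repetition of the Service contract from listing A.1.
class ServiceFailure extends Exception {
    ServiceFailure(String msg) { super(msg); }
}

interface Service {
    String getServiceId();
    String execute(String value) throws ServiceFailure;
    void init() throws ServiceFailure;
    boolean isInitialized();
}

// Illustrative implementation: execute() fails with a ServiceFailure
// until init() has been called, as the interface contract demands.
class UpperCaseService implements Service {
    private boolean initialized = false;

    public String getServiceId() { return "upper"; }
    public void init() { initialized = true; }
    public boolean isInitialized() { return initialized; }

    public String execute(String value) throws ServiceFailure {
        if (!initialized) {
            throw new ServiceFailure("Service 'upper' is not initialized");
        }
        return value.toUpperCase();
    }
}
```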


package org.services;

// --------------------------------------------------------------------------
/**
 * A ServiceProvider manages different string-based Services.
 * All registered Services must implement the {@link Service} interface.<br>
 * A ServiceProvider can be initialized by registering all services or
 * via a {@link ProviderConfig}.
 *
 * @author Gerhard Fliess
 */
public interface ServiceProvider
{
  // ---------------------------------------------------------------
  /**
   * Method : registerService
   * @param service
   * @throws ServiceFailure
   *
   * Usage : Via this method, Services can be registered.
   *         After all Services have been registered,
   *         {@link #initServices()} should be called.
   *
   * @pre: service.isInitialized() == false
   *
   * @package : org.services
   */
  public void registerService(Service service) throws ServiceFailure;

  // ---------------------------------------------------------------
  /**
   * Method : initServices
   * @throws ServiceFailure
   *
   * Usage : This method invokes the initialisation of all
   *         registered Services.
   *
   * @post: For all services: service.isInitialized() == true
   *
   * @package : org.services
   */
  public void initServices() throws ServiceFailure;

  // ---------------------------------------------------------------
  /**
   * Method : ask
   * @param serviceId
   * @param value
   * @return String
   * @throws ServiceFailure : thrown if the Service is not
   *         available or another failure appears during the
   *         request.
   *
   * Usage : Via this method, an external user can invoke the
   *         Service (specified by <b>serviceId</b>) to process
   *         <b>value</b>. The answer of the Service is forwarded
   *         to the external user.
   *
   * @package : org.services
   */
  public String ask(String serviceId, String value) throws ServiceFailure;

  // ---------------------------------------------------------------
  /**
   * Method : config
   * @param config
   * @throws ServiceFailure
   *
   * Usage : This method is used to configure the provider.
   *         The configuration is contained in
   *         {@link org.services.ProviderConfig}.
   *
   * @pre: config must be valid
   * @package : org.services
   */
  public void config(ProviderConfig config) throws ServiceFailure;
}

Listing A.2: ServiceProvider interface


interface consists of four methods:

• registerService

• initServices

• ask

• config

As with the extensions of the Service interface, the ServiceProvider interface for a real-world application should contain more methods than this simple one. Possible extensions for a provider include mechanisms for searching for a service, or at least a compatible one. Other possible extensions would be methods for registering already initialized services and mechanisms to unregister services or versions of services. In order to keep the example simple, all these extensions are dropped.
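To make the omitted extensions concrete, they could be sketched as follows. This is purely illustrative: the class and method names (ExtendableProvider, findService, unregisterService) are assumptions for this sketch, not part of the example framework, and the interface is a simplified stand-in for the Service interface of listing A.1.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for the Service interface from listing A.1
// (no init()/ServiceFailure, to keep the sketch self-contained).
interface LookupService {
  String getServiceId();
  String execute(String value);
}

// Example service used for the demonstration below.
class EchoLookup implements LookupService {
  public String getServiceId() { return "echo"; }
  public String execute(String value) { return value; }
}

// Illustrative provider with the discussed extensions; findService and
// unregisterService are assumed names, not part of the example framework.
class ExtendableProvider {
  private final Map<String, LookupService> services =
      new HashMap<String, LookupService>();

  void registerService(LookupService service) {
    services.put(service.getServiceId(), service);
  }

  // extension: search for a registered service by its id (null if absent)
  LookupService findService(String serviceId) {
    return services.get(serviceId);
  }

  // extension: remove a previously registered service again
  boolean unregisterService(String serviceId) {
    return services.remove(serviceId) != null;
  }
}
```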

Configurator Interface

package org.services;

/**
 * Interface of a Configurator to initialize a
 * {@link org.services.ServiceProvider}.
 *
 * @author Gerhard Fliess
 */
public interface ProviderConfig
{
  // ---------------------------------------------------------------
  /**
   * Method : configure
   * @param provider
   * @throws ServiceFailure
   *
   * Usage : This method configures the ServiceProvider
   *         denoted by <b>provider</b>.
   *
   * @pre: provider must exist
   * @package : org.services
   */
  public void configure(ServiceProvider provider) throws ServiceFailure;
}

Listing A.3: Configurator

The Configurator interface, too, is very small in order to keep the example simple. In real-world applications designers should avoid interfaces with only one or two methods.


If an interface is that small, the probability of a design failure or forgotten functionality is relatively high.

The Configurator interface of this example consists of only one method for configuring its ServiceProvider. A configurator in a real-world implementation would be more complex: for example, it should be possible to pass additional parameters such as the configuration file to use, and there should be methods to define constraints for the initialized services.

A.2.3 Implementations

This version of the example supports two services:

Echo: returns the given value

Concat: concatenates the given values

There are also implementations of a ServiceProvider (org.services.ServiceProviderImpl) and a configurator (org.services.SimpleProviderConfig). The configurator automatically registers the services that are defined in the services.properties file.
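In code, the two services are only a few lines each. The following sketch uses a simplified interface (without the init()/ServiceFailure machinery of listing A.1); the accumulating behavior of Concat is an assumption about what "concatenates the given values" means, suggested by the repeated concat request in the example applications below.

```java
// Simplified service interface (the real one from listing A.1 also has
// init(), isInitialized() and throws ServiceFailure).
interface StringService {
  String getServiceId();
  String execute(String value);
}

// Echo: returns the given value unchanged.
class Echo implements StringService {
  public String getServiceId() { return "echo"; }
  public String execute(String value) { return value; }
}

// Concat: concatenates all values received so far (assumed behavior).
class Concat implements StringService {
  private final StringBuilder received = new StringBuilder();
  public String getServiceId() { return "concat"; }
  public String execute(String value) {
    received.append(value);
    return received.toString();
  }
}
```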

Next, two examples are given which show how the framework can be used. The first one (see listing A.4) shows how the framework can be used without a configurator. These steps are important for defining test sequences for the framework:

1. an instance of the service provider must be created

2. both services are created and registered at the provider

3. the provider is forced to initialize the registered services

4. the services can be used by applying the corresponding key and a value


package org.services;

/**
 * This example shows how a provider can be initialized without
 * a configurator.
 *
 * @author Gerhard Fliess
 */
public class ServiceExample1
{
  public static void main(String[] args)
  {
    ServiceProvider provider = new ServiceProviderImpl("MainProvider");

    try
    {
      provider.registerService(new Echo());
      provider.registerService(new Concat());

      provider.initServices();

      System.out.println("Echo request: " + provider.ask("echo", "12345"));
      System.out.println("Concat request: " + provider.ask("concat", "12345"));
      System.out.println("Concat request: " + provider.ask("concat", "12345"));
    }
    catch(ServiceFailure e)
    {
      e.printStackTrace();
    }
  }
}

Listing A.4: A sample application without a Configurator

The second one (see listing A.5) shows how a provider can be configured by using the Configurator. This way of setting up a framework follows the object-oriented approach and should be preferred when designing frameworks3:

1. an instance of the service provider must be created

2. the provider is configured by using the Configurator

3. the services can be used by applying the corresponding key and a value

A test of this example would contain Testcase-sequences that test these two different setup methods, and it would test each of the used components separately.

3A configurator follows the Delegator-pattern, see [GHRV95].


package org.services;

/**
 * This example shows the usage of a configurator.
 *
 * @author Gerhard Fliess
 */
public class ServiceExample2
{
  public static void main(String[] args)
  {
    ServiceProvider provider = new ServiceProviderImpl("MainProvider");

    try
    {
      provider.config(new SimpleProviderConfig());

      System.out.println("Echo request: " + provider.ask("echo", "12345"));
      System.out.println("Concat request: " + provider.ask("concat", "12345"));
      System.out.println("Concat request: " + provider.ask("concat", "12345"));
    }
    catch(ServiceFailure e)
    {
      e.printStackTrace();
    }
  }
}

Listing A.5: A sample application using a Configurator

A.3 The CrashIt Configuration

CrashIt has an extensible design that allows developers to define their own file formats. The default format of configuration files used in CrashIt is XML (eXtensible Markup Language). Since XML itself is extensible, CrashIt also provides a framework for writing new tags for the XML files. For using CrashIt it is not necessary to write new tags, but if new tags or additional properties are needed, CrashIt can easily be extended. In such cases, however, please contact the development team, because a similar extension may already be planned for the next release. The team can also provide help for developing extensions, and it is possible that these extensions become part of the whole framework.

The definition of a whole configuration is split across several files, which can be stored in different directories. This split is justified by two observations:

• A Testcase-configuration can be quite extensive, and some of its parts may not be processed at all, e.g. if previous tests failed (see also section A.3.8 on page 112). It would therefore be wasteful to parse parts of the configuration that are never executed.

• Some testcases or Testcase-sequences may be executed several times. It would likewise be wasteful to read the same files twice, so CrashIt provides a cache which stores such files.
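Such a cache can be pictured as a map from file name to parsed content, so that each file is parsed at most once. The sketch below only illustrates the idea and is not CrashIt's actual cache class; parseFile stands in for the real XML parsing step.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a parse cache: a file is "parsed" at most once,
// later lookups return the stored result. Not CrashIt's implementation.
class ConfigCache {
  private final Map<String, String> parsed = new HashMap<String, String>();
  private int parseCount = 0;

  // stands in for the real XML parsing step (assumption for the sketch)
  private String parseFile(String fileName) {
    parseCount++;
    return "<parsed contents of " + fileName + ">";
  }

  // returns the parsed content, reusing the cached copy when available
  String get(String fileName) {
    String cached = parsed.get(fileName);
    if (cached == null) {
      cached = parseFile(fileName);
      parsed.put(fileName, cached);
    }
    return cached;
  }

  int getParseCount() { return parseCount; }
}
```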

Within the whole configuration, files are referred to by a unique id. Therefore, additional lists must exist that map the ids to the corresponding files. Most configurations are separated into two types of files: one file defines the mappings between a unique id and a part of the framework, and that part is then defined in a separate file. The files that contain the mappings are parsed during the setup of CrashIt; all other files are parsed when they are needed.

The following sections describe the different configuration files. Several files are needed to define a test, which may seem like a big effort, but nearly all of these files can be written during the design phase and must be written only once.

A.3.1 The Configuration File

This is the main configuration file for a test run. It provides information about where the Test-configuration can be found, plus additional information for the CrashIt environment. The file is separated into two different parts that are embedded in a <configuration> ... </configuration> environment:

• the definition of the Test-configuration, <test>...</test> (see listing A.6 lines 2-13; for a detailed explanation, read section Test-configuration Related Part on page 98)

• and the definition of the ResultStore, <resultstoreconfig>...</resultstoreconfig> (see listing A.6 lines 14-17; for a detailed explanation, see section ResultStore Related Part on page 99)


 1 <configuration>
 2   <test>
 3     <dir>exampleservice</dir>
 4     <config>
 5       <dir>testconfiguration</dir>
 6       <file>testconfiguration.xml</file>
 7     </config>
 8     <components>components.xml</components>
 9     <connections>connections.xml</connections>
10     <contracts>contracts.xml</contracts>
11     <resultchecker>resultchecker.xml</resultchecker>
12     <testclasses>testclasses.xml</testclasses>
13   </test>
14   <resultstoreconfig>
15     <class>org.dinopolis.crashit.resultstore.impl.XMLResultStoreConfig</class>
16     <result>result.xml</result>
17   </resultstoreconfig>
18 </configuration>

Listing A.6: The main configuration file

Test-configuration Related Part

Tags for the description of the Test-configuration are:

<dir> (listing A.6 line 3): This is the root directory where all Test-configuration files are placed. All subdirectories are defined relative to this directory.

<config> (listing A.6 lines 4-7): This tag is a frame for additional properties of the Test-configuration:

• <dir> (listing A.6 line 5): Defines the subdirectory of the previously declared <dir> where the Test-configuration files can be found.

• <file> (listing A.6 line 6): Defines the name of the primary Test-configuration file (see also section A.3.2 on page 100).

<components> (listing A.6 line 8): This tag defines the file which contains the mappings between the component ids and the XML files that describe these components. (Detailed information about this file can be found in section A.3.4 on page 106.)

<connections> (listing A.6 line 9): This tag defines the file which contains the mappings between the connection ids and the XML files that define the connections (see section A.3.5 on page 107).

<contracts> (listing A.6 line 10): This tag defines the file which contains the mappings between the contract ids and the XML files that specify the contracts (refer to section A.3.6 on page 111).

<resultchecker> (listing A.6 line 11): Resultcheckers are used to determine whether a result is valid or not. This tag defines the file which contains the mappings between the Resultchecker ids and the XML files that define the properties of these classes. (Detailed information about this mapping file can be found on page 105.)

<testclasses> (listing A.6 line 12): Testclasses are used to perform a single testcase on a component. This tag defines the file describing the mappings between the Testclass ids and the appropriate XML files.

ResultStore Related Part

This section describes the tags of a CrashIt-configuration which are used for the ResultStore. The development of the ResultStore is not finished yet4; this section describes the configuration of the current version.

<class> (listing A.6 line 15): This tag defines the class which configures the ResultStore. A default class is currently provided.

<result> (listing A.6 line 16): Defines the filename into which the ResultStore stores the test results.

4The implementation which is used in this version of CrashIt will be replaced by another one written by Arne Tauber and Mario Ivkovic.


A.3.2 The Test-configuration File

The configuration of the Testcase-sequences consists of a file that merges the Testcase-sequence files into one configuration. A Test-configuration can be controlled by the results of the Testcase-sequences. This provides a Flowcontrol mechanism which makes it possible to skip tests if something has gone wrong.

Listing A.7 on page 101 shows an example of a configuration that does not use Flowcontrol. The tags in this file are also valid for configurations that support Flowcontrol.

<flow-control> (listing A.7 line 2): This tag is used to enable or disable

the Flowcontrol. It supports the values on and off.

<path> (listing A.7 line 3): This defines the directory that contains all configuration files for Testcase-sequences. It is relative to the directory that is defined in the configuration file (listing A.6 line 3).

<testsequences> (listing A.7 lines 4-11): This section defines the Testcase-sequences and the order in which they are applied during the test. Each Testcase-sequence is defined in a separate file. It is possible to define a variable which stores the result of a Testcase-sequence. This is optional, but necessary if Flowcontrol is enabled, because Flowcontrol needs the results of the Testcase-sequences to decide which sequence is applied next. If Flowcontrol is not used, this section should only consist of testsequence-tags.

Listing A.8 on page 101 shows a configuration that uses the Flowcontrol mecha-

nism. This file is not part of the service example. It is used to explain the syntax

of a Flowcontrolled configuration. It is a simple configuration that uses only the

if-construct. The if-section of the configuration must include a condition-section

and a then-section. The else-section is optional. A condition can consist of a sim-

ple compare statement or a boolean operation. Listing A.8 defines an if-statement


 1 <testconfiguration>
 2   <flow-control>off</flow-control>
 3   <path>test-sequences</path>
 4   <testsequences>
 5     <testsequence result="echo">testEcho.xml</testsequence>
 6     <testsequence result="concat">testConcat.xml</testsequence>
 7     <testsequence result="serviceconfigtest">testServiceConfig.xml</testsequence>
 8     <testsequence result="echoFlow">testEchoFlow.xml</testsequence>
 9     <testsequence result="contracts">testContracts.xml</testsequence>
10     <testsequence result="servicetest">testService.xml</testsequence>
11   </testsequences>
12 </testconfiguration>

Listing A.7: A simple configuration without Flowcontrol

 1 <testconfiguration>
 2   <flow-control>on</flow-control>
 3   <path>test-sequences</path>
 4   <testsequences>
 5     <testsequence result="simple">simple.xml</testsequence>
 6     <testsequence result="first">first.xml</testsequence>
 7     <if>
 8       <condition>
 9         <or>
10           <compare operation="equal">
11             <target>simple</target>
12             <value type="Boolean">true</value>
13           </compare>
14           <compare operation="equal">
15             <target>first</target>
16             <value type="Boolean">true</value>
17           </compare>
18         </or>
19       </condition>
20       <then>
21         <testsequence>simple.xml</testsequence>
22       </then>
23       <else>
24         <testsequence>first.xml</testsequence>
25       </else>
26     </if>
27     <testsequence>beantest.xml</testsequence>
28   </testsequences>
29 </testconfiguration>

Listing A.8: A Flowcontrolled configuration


(lines 7-26) which consists of two compare operations combined by an or-statement. Written in pseudo code, this statement would look like this:

if (simple == true) or (first == true)
then { apply(simple.xml) }
else { apply(first.xml) }

The names of the Testcase-sequence results simple and first can be interpreted as

variables that store the result. Conditions can be more complex than in this example,

as it is possible to combine boolean operations on results. Writing such a file causes more effort than writing a simple file, because the syntax of the Flowcontrol must be described in XML, too. Future versions of CrashIt will contain a graphical tool to define these sequences.
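Evaluating such a condition amounts to looking up the named results and combining them with boolean operators. The following sketch mirrors the pseudo code above in plain Java; CrashIt's real evaluator of course works on the parsed XML, so the class and method names here are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative evaluator for the or/compare condition of listing A.8.
class FlowControlSketch {
  // results of previously executed Testcase-sequences, keyed by result name
  static final Map<String, Boolean> results = new HashMap<String, Boolean>();

  // compare with operation="equal" for Boolean values: looks up the named
  // result (the "variable") and compares it with the given value
  static boolean compareEqual(String target, boolean value) {
    Boolean stored = results.get(target);
    return stored != null && stored.booleanValue() == value;
  }

  // decide which sequence file to apply next, as in the pseudo code above
  static String nextSequence() {
    if (compareEqual("simple", true) || compareEqual("first", true)) {
      return "simple.xml";
    } else {
      return "first.xml";
    }
  }
}
```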

Independent of whether the Flowcontrol mechanism is used, the applied Testcase-sequences are defined in separate files. These files can in turn define a plain Testcase-sequence or use the Flowcontrol mechanism. Testcase-sequence files refer to testcase files, which are described in section A.3.3 on page 104.

The definition of a Testcase-sequence consists of several parameters:

<start-node> (listing A.9 line 2): This tag is used to define which component should be tested. The component is referred to by its id (see section A.3.4). All test cases are applied to this component.

<path> (listing A.9 line 3): This defines the directory which contains the testcase descriptions for this Testcase-sequence. It is also a relative path.

<flow-control> (listing A.9 line 4): This tag is used to enable or disable Flowcontrol. It supports the two values on and off.

<testcases> (listing A.9 lines 5-10): This section contains the testcases that should be applied to the start-node, or Flowcontrol tags. Each testcase is described in a separate file (see section A.3.3).


 1 <testsequence>
 2   <start-node>echo</start-node>
 3   <path>test-echo</path>
 4   <flow-control>off</flow-control>
 5   <testcases>
 6     <testcase>getServiceId.xml</testcase>
 7     <testcase>init.xml</testcase>
 8     <testcase>isInitializedtrue.xml</testcase>
 9     <testcase>execute.xml</testcase>
10   </testcases>
11 </testsequence>

Listing A.9: A simple Testcase-sequence without Flowcontrol

 1 <testsequence>
 2   <start-node>echo</start-node>
 3   <path>test-echo</path>
 4   <flow-control>on</flow-control>
 5   <testcases>
 6     <testcase>getServiceId.xml</testcase>
 7     <testcase result="initialized">isInitializedtrue.xml</testcase>
 8     <if>
 9       <condition>
10         <compare operation="equal">
11           <target>initialized</target>
12           <value type="Boolean">false</value>
13         </compare>
14       </condition>
15       <then>
16         <testcase>init.xml</testcase>
17         <testcase>execute.xml</testcase>
18         <testcase>getServiceId.xml</testcase>
19       </then>
20       <else>
21         <testcase>execute.xml</testcase>
22       </else>
23     </if>
24   </testcases>
25 </testsequence>

Listing A.10: A Testcase-sequence using Flowcontrol


The Flowcontrol mechanism used for the Testcase-sequence is the same as the mechanism used for controlling the Test-configuration. Results of testcases can be named and used like variables, just as in the Test-configuration. Listing A.10 shows a Testcase-sequence that uses Flowcontrol.

A.3.3 The Testcase Files

These files define a single testcase, which will be applied to some component by CrashIt. Listing A.11 shows a simple example of a testcase. Each testcase must be described in its own testcase file. This may seem like a big effort, but testcase files can be reused in other Testcase-sequences5.

 1 <testcase>
 2   <testclass>
 3     <id>methodCall</id>
 4   </testclass>
 5   <function>execute</function>
 6   <result>
 7     <returntype>String</returntype>
 8     <equals>Otto</equals>
 9   </result>
10   <parameters>
11     <parameter>
12       <type>String</type>
13       <value>Otto</value>
14     </parameter>
15   </parameters>
16   <resultchecker>
17     <id>StringChecker</id>
18   </resultchecker>
19 </testcase>

Listing A.11: Calling a method

Several parameters define a testcase:

<testclass> (listing A.11 lines 2-4): A Testclass implements how a test is conducted. A unique id is assigned to each Testclass; these ids are defined in a separate file, and the testclass-tag uses the id to address the Testclass.

So far only one Testclass has been implemented. This class is able to invoke methods of components that can be accessed locally. Future versions of CrashIt will include smarter Testclasses; for example, it could be useful to implement Testclasses that are able to invoke method calls via RMI (Remote Method Invocation).

The definition of the used Testclasses is done in two kinds of simple files: a file that assigns the ids for the Testclasses (in this example the file testclasses.xml) and one file for each Testclass.

From a more formal point of view, the Test-configuration and the Testclasses define the method for the application of the test.

5Future versions of CrashIt will contain a graphical tool for designing testcases and Testcase-sequences.
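In Java, invoking a method on a locally accessible component boils down to reflection. The following sketch shows an assumed core of such a method-call Testclass; CrashIt's actual implementation (error handling, Resultchecker integration) is more elaborate, and the class name here is invented for the illustration.

```java
import java.lang.reflect.Method;

// Assumed core of a method-call Testclass: look up the method named in the
// <function> tag on the component and invoke it with the given parameters.
class MethodCallSketch {
  static Object invoke(Object component, String function, Object[] params) {
    try {
      // derive the parameter types from the converted parameter objects
      Class<?>[] types = new Class<?>[params.length];
      for (int i = 0; i < params.length; i++) {
        types[i] = params[i].getClass();
      }
      Method method = component.getClass().getMethod(function, types);
      return method.invoke(component, params);
    } catch (Exception e) {
      // the real Testclass would report such a failure as a test result
      throw new RuntimeException(e);
    }
  }
}
```

Applied to the testcase of listing A.11, this would call execute("Otto") on the component under test and hand the returned String to the Resultchecker.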

<function> (listing A.11 line 5): This tag defines which method should be

called on the component.

<parameters> (listing A.11 lines 10-15): This tag can contain several parameter definitions. The parameters defined in this tag are used in the method call. Each parameter definition is embedded in a parameter tag that includes the subtags that are passed to the parameter converter described in section A.3.7:

• <type> which defines the object-type of this parameter.

• <value> which defines the value of this parameter.

Both will be preprocessed by a Parameter-converter, see also section A.3.7 on

page 111.

<result> (listing A.11 lines 6-9): This tag defines which result is expected. Section A.3.7 describes the implementation of the general converter which is used for creating objects. The tag contains two subtags:

• <returntype> This defines the expected object-type of the return value.

• <equals> This defines which value is expected.

<resultchecker> (listing A.11 lines 16-18): Resultcheckers are a more abstract approach to comparing the result of a testcase with the definition in the specification. Simple Resultcheckers compare objects like Strings with the expected objects defined in the specification; however, a more complex Resultchecker can also check a result without comparing it with another object. Resultcheckers are also used for checking complex results, e.g. whether a database entry was written or a file was created during the actual testcase. This feature also makes it possible to check the result of a method that has a void return type but is responsible for changing the state of other components or resources. This can be done by using a Resultchecker that is able to monitor the related components or resources. Resultcheckers are defined in several files, too: one file that assigns the ids (in this example resultchecker.xml) and another for each Resultchecker.
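The idea can be captured in a one-method interface. The StringChecker below is an illustrative guess at the checker referenced in listing A.11, and the side-effect checker only hints at how a void result could still be validated; neither is CrashIt's actual source code.

```java
// Illustrative Resultchecker interface: decides whether an actual result
// satisfies the expectation from the specification.
interface ResultChecker {
  boolean check(Object expected, Object actual);
}

// Simple checker: compares the actual object with the expected one
// (a guess at the StringChecker referenced in listing A.11).
class StringChecker implements ResultChecker {
  public boolean check(Object expected, Object actual) {
    return expected == null ? actual == null : expected.equals(actual);
  }
}

// A checker for side effects: ignores the returned value and instead
// verifies that a file named by the expectation was created.
class FileCreatedChecker implements ResultChecker {
  public boolean check(Object expected, Object actual) {
    return new java.io.File((String) expected).exists();
  }
}
```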

A.3.4 The Component Files

Component files define which component should be loaded and describe how this component must be initialized. They consist of some tags that are embedded in a component-tag:

<class> (listing A.12 line 2): This tag defines which class is a component. The class must be available in the classpath. If this class does not support a simple constructor, a section defining the needed constructor parameters must be added to this file.

<constructor> (listing A.12 lines 3-8): This section is optional if a component supports a simple constructor; however, it is necessary if no simple default constructor is defined. It consists of one or more parameter sections. These parameter sections are equal to the sections in the testcase file (see section A.3.3 or listing A.11 lines 10-15; a detailed description can be found in section A.3.7). If a constructor is described, it will be called after the component-class has been loaded.

<setup> (listing A.12 lines 9-17): This section is also optional. It can be used to define a method that will be invoked after the constructor has been called. It supports the definition of the method-name and further parameters, whose syntax is equal to other parameter sections (see section A.3.7 on page 111).

 1 <component>
 2   <class>org.services.ServiceProviderImpl</class>
 3   <constructor>
 4     <parameter>
 5       <type>String</type>
 6       <value>MainProvider</value>
 7     </parameter>
 8   </constructor>
 9   <setup>
10     <method>
11       <name>setup</name>
12       <parameter>
13         <type>String</type>
14         <value>MainProvider</value>
15       </parameter>
16     </method>
17   </setup>
18 </component>

Listing A.12: Description of a component
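Processing such a component file amounts to loading the class, calling the described constructor and then the setup method, all via reflection. The sketch below shows this for a single String parameter (as in the component description above); it illustrates the mechanism and is not CrashIt's actual loader.

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

// Illustrative sketch of what the component loading amounts to: load the
// class named in <class>, call the constructor described in <constructor>,
// then invoke the <setup> method. Not CrashIt's actual loader.
class ComponentLoaderSketch {
  static Object load(String className, String ctorArg,
                     String setupMethod, String setupArg) {
    try {
      Class<?> clazz = Class.forName(className);
      // constructor with a single String parameter, as in the example file
      Constructor<?> ctor = clazz.getConstructor(String.class);
      Object component = ctor.newInstance(ctorArg);
      if (setupMethod != null) {
        Method m = clazz.getMethod(setupMethod, String.class);
        m.invoke(component, setupArg);
      }
      return component;
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }
}
```

The demonstration below uses java.lang.StringBuilder as a stand-in component, since its constructor and append method have the assumed String signature.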

So far, all files that define the main configuration, the testcases and the involved components have been discussed. The following sections describe how these components can be put together.

A.3.5 The Connection Files

Connections can be grouped, and such groups of connections can be accessed via a unique id. In this example the assignment is done in the file connections.xml, see listing A.13 on page 108.

<path> (listing A.13 line 2): Defines the relative path where the connection

files can be found.

<connection> (listing A.13 lines 3-6): These sections assign an XML connection file (listing A.13 line 5) to a unique id (listing A.13 line 4).

The connection files themselves are a little more complex. Defining these files may require a big effort, depending on how many components are involved.


 1 <connectionlist>
 2   <path>connections</path>
 3   <connection>
 4     <id>basic</id>
 5     <file>direct.xml</file>
 6   </connection>
 7   <connection>
 8     <id>using-contracts</id>
 9     <file>usingcontracts.xml</file>
10   </connection>
11 </connectionlist>

Listing A.13: The connections config file

 1 <connections>
 2   <connection>
 3     <id>provider-echo</id>
 4     <client>
 5       <type>component</type>
 6       <id>provider</id>
 7       <hook>
 8         <method>registerService</method>
 9         <returntype>void</returntype>
10         <parameters>
11           <parameter>
12             <name>service</name>
13             <type>org.services.Service</type>
14           </parameter>
15         </parameters>
16       </hook>
17     </client>
18     <supplier>
19       <type>component</type>
20       <id>echo</id>
21     </supplier>
22   </connection>
23   <connection>
24     <id>provider-concat</id>
25     <client>
26       <type>component</type>
27       <id>provider</id>
28       <hook>
29         <method>registerService</method>
30         <returntype>void</returntype>
31         <parameters>
32           <parameter>
33             <name>service</name>
34             <type>org.services.Service</type>
35           </parameter>
36         </parameters>
37       </hook>
38     </client>
39     <supplier>
40       <type>component</type>
41       <id>concat</id>
42     </supplier>
43   </connection>
44 </connections>

Listing A.14: Connections file with direct connections


Connection files consist of several connection-parts (see listing A.14 lines 2-22 and 23-43). Each connection must be described by a connection-part, which consists of an id, an involved client and a supplier.

 1 ...
 2 <connection>
 3   <id>echo-contract-echo-service</id>
 4   <client>
 5     <type>contract</type>
 6     <id>echo-contract</id>
 7     <hook>
 8       <method>registerService</method>
 9       <returntype>void</returntype>
10       <parameters>
11         <parameter>
12           <name>service</name>
13           <type>org.services.Service</type>
14         </parameter>
15       </parameters>
16     </hook>
17   </client>
18   <supplier>
19     <type>component</type>
20     <id>echo</id>
21   </supplier>
22 </connection>
23 <connection>
24   <id>provider-echo-contract</id>
25   <client>
26     <type>component</type>
27     <id>provider</id>
28     <hook>
29       <method>registerService</method>
30       <returntype>void</returntype>
31       <parameters>
32         <parameter>
33           <name>service</name>
34           <type>org.services.Service</type>
35         </parameter>
36       </parameters>
37     </hook>
38   </client>
39   <supplier>
40     <type>contract</type>
41     <id>echo-contract</id>
42   </supplier>
43 </connection>
44 ...

Listing A.15: A connection that uses a Contract

<id> (listing A.14 line 3): This defines the id that is used to access the following connection definition.

<client> (listing A.14 lines 4-17): This section describes the client. It consists of several tags that are needed to completely define the client of a connection, including its type, the Hook-Up method and the used parameters.


• <type> (listing A.14 line 5): Specifies whether the client is a component or a Contract. This is necessary because Contracts and components have different meanings and behave differently during the test: different factories and managers monitor their behavior, and they must be notified about the connected objects.

• <id> (listing A.14 line 6): Specifies the id of the component or Contract that is used as client. The id must be assigned either in the component description or in the Contract description.

• <hook> (listing A.14 lines 7-16): This section describes the method (Hook-Up) which is used to connect the client and the supplier.

– <method> (listing A.14 line 8): The name of the method.

– <returntype> (listing A.14 line 9): The expected return type.

– <parameters> (listing A.14 lines 10-15): The parameter types of the Hook-Up method.

<supplier> (listing A.14 lines 18-21): This section describes the supplier of the connection.

• <type> (listing A.14 line 19): Defines whether the supplier is a component or a Contract.

• <id> (listing A.14 line 20): The id of the instance that is used as

supplier.

Listing A.14 shows connections between components only. Connections between Contracts and components can be formulated in a similar way. Listing A.15 shows a connection between a Contract and components. It differs only in the definition of the <type> in the <client> and <supplier> sections (compare lines 5, 19 and 26, 40 in listing A.14 with the corresponding lines in listing A.15). Along with the changed object types, the ids are adjusted to make sure that the right objects are connected.

This flexible configuration allows different useful combinations of object connections. If necessary, connections of objects via several monitoring Contracts can be implemented.
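At runtime, a direct connection such as provider-concat amounts to the framework invoking the client's Hook-Up method with the supplier instance as argument. The following Java sketch illustrates this idea; the interface and class names follow the thesis example, but the wiring code itself is hypothetical and not CrashIt's actual implementation:

```java
// Sketch of what a direct connection performs at runtime: the framework
// calls the client's Hook-Up method and passes the supplier instance.
// Names follow the thesis example; the wiring itself is hypothetical.

interface Service {                         // abstract hook
    String execute(String input);
}

interface ServiceProvider {                 // abstract hook-up
    void registerService(Service service);
}

class Echo implements Service {             // concrete hook: the supplier "echo"
    public String execute(String input) { return input; }
}

class Provider implements ServiceProvider { // concrete hook-up: the client "provider"
    private Service service;
    public void registerService(Service service) { this.service = service; }
    String ask(String input) { return service.execute(input); }
}

public class ConnectionSketch {
    public static void main(String[] args) {
        Provider provider = new Provider();
        // what a connection definition expresses: client "provider",
        // Hook-Up method registerService(Service), supplier "echo"
        provider.registerService(new Echo());
        System.out.println(provider.ask("hello")); // prints "hello"
    }
}
```

In CrashIt this single call is additionally observed by the factories and managers mentioned above, and a Contract can be interposed on it.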


A.3.6 The Contract Files

Before Contracts can be used in CrashIt, they have to be described in two files: one file that maps a unique id to a contract (see listing A.17), and another file that describes the contract itself (see listing A.16).

1 <contract>
2   <class>org.services.ServiceContract</class>
3   <clienttypes>
4     <type name="Hook">org.services.ServiceProvider</type>
5   </clienttypes>
6   <suppliertypes>
7     <type name="HookUp">org.services.Service</type>
8   </suppliertypes>
9 </contract>

Listing A.16: Definition of the service-Contract

Multiple instances of the same contract can be defined by assigning different ids to the same contract description, as is done in listing A.17.

 1 <contracts>
 2   <contract>
 3     <id>echo-contract</id>
 4     <file>service-contract.xml</file>
 5   </contract>
 6   <contract>
 7     <id>concat-contract</id>
 8     <file>service-contract.xml</file>
 9   </contract>
10 </contracts>

Listing A.17: Definition of the used Contracts
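Conceptually, such a Contract sits between client and supplier and monitors their communication. The sketch below illustrates the idea with an invented stipulation (non-null arguments and results); it is a simplification for illustration, not the actual org.services.ServiceContract class:

```java
// Hypothetical monitoring Contract: wraps the real supplier and checks
// an invented stipulation before and after delegating each call.
interface Service {
    String execute(String input);
}

class MonitoringContract implements Service {
    private final Service supplier;
    MonitoringContract(Service supplier) { this.supplier = supplier; }

    public String execute(String input) {
        if (input == null)                        // invented client obligation
            throw new IllegalArgumentException("contract violated: null input");
        String result = supplier.execute(input);  // forward to the supplier
        if (result == null)                       // invented supplier obligation
            throw new IllegalStateException("contract violated: null result");
        return result;
    }
}

public class ContractSketch {
    public static void main(String[] args) {
        Service echo = input -> input;                    // the supplier
        Service monitored = new MonitoringContract(echo); // inserted Contract
        System.out.println(monitored.execute("ok"));      // passes both checks
    }
}
```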

A.3.7 Creating Objects - the Parameter-Converter

Several sections need the definition of objects by a given type and a certain value. It was therefore useful to define a general factory for these cases. This factory creates wrappers for initialized objects. These wrappers (converters) support returning the object and its corresponding class. To some developers this may sound useless, as the class of an instance can always be obtained by invoking the .getClass() method, but in some cases - especially when simple types are used - this general factory is necessary. Another reason for implementing a general factory was that only one factory should be responsible for initializing objects and converters, because this allows a single configuration that stores the converters for all supported object-types. Which converters are supported by the framework can be defined in a simple property file.

1 int=org.dinopolis.crashit.util.converters.IntConverter
2 String=org.dinopolis.crashit.util.converters.StringConverter
3 Object=org.dinopolis.crashit.util.converters.ObjectConverter
4 Component=org.dinopolis.crashit.util.converters.ComponentConverter
5 Boolean=org.dinopolis.crashit.util.converters.BooleanConverter

Listing A.18: Converter property file

The keys used in this property file can be used in the different configuration files to define the needed type (see lines 10-15 of listing A.11 on page 104).
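The factory behind this property file can be pictured as a lookup of the key followed by reflective instantiation of the named converter class. The Converter interface and the code below are assumptions for illustration, not CrashIt's actual API:

```java
import java.util.Properties;

// Assumed wrapper interface (illustrative): returns the initialized
// object and the class it should be treated as.
interface Converter {
    Object getObject(String value);
    Class<?> getObjectClass();
}

// Illustrative converter for the key "int": calling getClass() on a
// wrapped value would yield Integer.class, so the converter reports
// the primitive type int.class explicitly instead.
class IntConverterSketch implements Converter {
    public Object getObject(String value) { return Integer.parseInt(value); }
    public Class<?> getObjectClass() { return int.class; }
}

public class ConverterFactorySketch {
    private final Properties mapping = new Properties();

    ConverterFactorySketch() {
        // in CrashIt this mapping would be read from the property file
        mapping.setProperty("int", IntConverterSketch.class.getName());
    }

    // look the key up and instantiate the named converter class by reflection
    Converter create(String key) throws Exception {
        String className = mapping.getProperty(key);
        return (Converter) Class.forName(className)
                                .getDeclaredConstructor()
                                .newInstance();
    }

    public static void main(String[] args) throws Exception {
        Converter converter = new ConverterFactorySketch().create("int");
        System.out.println(converter.getObject("42") + " / " + converter.getObjectClass());
    }
}
```

This shows why a converter is needed for simple types: reflection alone cannot distinguish int from Integer.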

A.3.8 Flowcontrol

Section A.3.2 on page 100 describes the main usage of the Flowcontrol mechanism. The syntax of the used tags may be a bit unusual, but it allows effective parsing and setting up of the Flowcontrol sequences. It has not been designed to be written by hand; rather, it has been designed to facilitate the implementation of a graphical tool for defining and writing these configuration files. This tool will be part of the next version of CrashIt.

The Flowcontrol files in this example define only simple sequences that include one <if>...<then>...<else> construct. Through Flowcontrol, CrashIt can also evaluate <while> statements, nested statements and nested expressions. Even <or> and <and> tags can be used to express a condition. Designing such complex Flowcontrol expressions may be useful in some cases, but in general simple expressions should be preferred, as they are more comprehensible.
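The evaluation of nested <and>/<or> conditions can be modelled as a small expression tree that is evaluated recursively. The following sketch is a hypothetical illustration of this idea, not CrashIt's actual Flowcontrol parser:

```java
import java.util.List;

// Minimal model of nested Flowcontrol conditions: <and> and <or> nodes
// combine child conditions, leaves are plain boolean checks.
interface Condition {
    boolean evaluate();
}

class And implements Condition {
    private final List<Condition> children;
    And(Condition... children) { this.children = List.of(children); }
    public boolean evaluate() {
        return children.stream().allMatch(Condition::evaluate); // all must hold
    }
}

class Or implements Condition {
    private final List<Condition> children;
    Or(Condition... children) { this.children = List.of(children); }
    public boolean evaluate() {
        return children.stream().anyMatch(Condition::evaluate); // one suffices
    }
}

public class FlowcontrolSketch {
    public static void main(String[] args) {
        // roughly: <if><or><and>A B</and>C</or> ... </if>
        Condition condition = new Or(new And(() -> true, () -> false), () -> true);
        if (condition.evaluate()) {
            System.out.println("then-branch");
        } else {
            System.out.println("else-branch");
        }
    }
}
```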


Bibliography

[Ale01] Andrei Alexandrescu. Modern C++ Design, Generic Programming and

Design Patterns Applied. The C++ In-Depth Series. Addison Wesley,

2001. ISBN: 0 201 70431 5.

[AM01] Deepak Alur, John Crupi, and Dan Malks. J2EE Patterns. Sun Microsystems Press, 2001.

[Apa03] Ant development manual, 2003. http://ant.apache.org/manual/develop.html.

[ASU99] Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman. Compilerbau Teil 1. Oldenbourg, 1999. ISBN: 3 486 25294 1.

[BBM+78] Barry W. Boehm, J.R. Brown, G. McLeod, Myron Lipow, and M. Merritt. Characteristics of Software Quality. TRW Series of Software Technology. North-Holland Publishing Co., July 1978.

[Bec00] Kent Beck. eXtreme Programming explained. The XP Series. Addison-Wesley, first edition, October 2000.

[Coc02] Alistair Cockburn. Agile Software Development. Addison-Wesley, 2002.

[GHRV95] Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. Design Patterns, Elements of Reusable Object Oriented Software. Object Oriented Technology. Addison Wesley, Massachusetts, 1995. ISBN: 0 201 63361 2.


[Hoa72] C.A.R. Hoare. The quality of software. Software, Practice and Experience, 2(2):103–105, 1972.

[JC00] Ron Jeffries, Ann Anderson, and Chet Hendrickson. eXtreme Programming Installed. The XP Series. Addison-Wesley, 2000. ISBN: 0 201 70842 6.

[Jon96] Cliff B. Jones. Systematic Software Development Using VDM. Prentice

Hall International, Hemel Hempstead(U.K.), 1996.

[jUn03] jUnit.org. JUnit homepage. available online http://www.junit.org, 2003.

[Kra01] Reto Kramer. iContract - the Java Design by Contract tool. http://www.reliable-systems.com/tools/iContract/iContract.htm, 2001. Visited 09.08.2002.

[McC77] James McCall. Factors in software quality. Technical report, General

Electric, 1977.

[Mey97] Bertrand Meyer. Object-Oriented Software Construction. Prentice Hall,

Upper Saddle River, New Jersey, second edition, 1997.

[ORF03] ORF. Software ist häufigste Fehlerquelle. available online http://futurezone.orf.at/futurezone.orf?read=detail&id=157548, 2003.

[Pop94] Sir Karl Popper. Alles Leben ist Problemlösen. Piper, 1994. ISBN: 3 492 22300 1.

[Rom02] Ed Roman. Mastering Enterprise JavaBeans. Wiley, 2002. ISBN:

0 471 41711 4.

[Sch] Klaus Schmaranz. AK-Softwareentwicklung - Programming for Large

Libraries. Lecture notes.

[Sch01] Klaus Schmaranz. Software-Entwicklung in C. Springer, July 2001.


[Sch02] Klaus Schmaranz. Dinopolis - A Massively Distributable Componentware System. Habilitation, July 2002.

[Sut00] Herb Sutter. Exceptional C++. The C++ In-Depth Series. Addison Wesley, 2000. ISBN: 3 8273 1711 8.

[Val03] Egon Valentini. Development of a framework for contract-based testing of software-components. Master's thesis, IICM, Graz University of Technology, November 2003. available online http://crashit.hti.at.


List of Acronyms

FTP . . . . . . . . . . File Transfer Protocol see also the glossary entry for ☞FTP

IDE . . . . . . . . . . . Integrated Development Environment

J2EE . . . . . . . . . Java 2 Enterprise Edition

RMI . . . . . . . . . . Remote Method Invocation see also the glossary entry for ☞RMI

URD . . . . . . . . . . User Requirements Document see also the glossary entry for

☞URD

XML . . . . . . . . . . eXtensible Markup Language see also the glossary entry for ☞XML

XP . . . . . . . . . . . . eXtreme Programming see ☞XP


Glossary

C: A high-level programming language developed by Dennis Ritchie and Brian

Kernighan at Bell Labs in the mid 1970s. Although originally designed as a

systems programming language, C has proved to be a powerful and flexible

language that can be used for a variety of applications, from business programs

to engineering.

C++: A high-level programming language developed by Bjarne Stroustrup at Bell

Labs. C++ adds object-oriented features to its predecessor ☞C. C++ is one

of the most popular programming languages for graphical applications, such as

those that run in Windows, ☞UNIX, and Macintosh environments.

Client: Inter-software communication may be split into one side, which asks for a service, and another side, which responds. The client represents the module which requests services from the ☞Supplier.

Contract: defines stipulations between two parties which communicate with each other. The stipulations comprise obligations and benefits for both parties, the client and the supplier. In CrashIt, Contracts are concrete objects that can be inserted between client and supplier. Contracts monitor the communication and guarantee that the stipulations hold.

Framework: a set of components that together build an application.

FTP: File Transfer Protocol. The protocol used on the Internet for sending files.


GOF: Gang of Four, an abbreviation for the four authors Gamma, Helm, Johnson and Vlissides, who wrote the well-known book Design Patterns, Elements of Reusable Object-Oriented Software.

Hook: The hook pattern consists of two parts, the hook itself and the hook-up (☞Hook-Up). The definition of the hook consists of two parts: the abstract hook and the concrete hook. The abstract hook defines an interface for a component that declares the methods through which this component can be used. A concrete hook implements this abstract hook interface.

In the example, the Service interface is an abstract hook and the Echo class is a concrete hook, because of the execute method that is used to ask the service.

Hook Hook-Up pattern: The Hook Hook-Up pattern (refer to [Sch02]) is split

in two parts, the Hook and the HookUp. Refer to section 4.3.1 on page 40 for

a detailed explanation.

Hook-Up: The hook-up part of the hook pattern can be split into two parts: the abstract hook-up and the concrete hook-up. An abstract hook-up defines a method that is used to connect a concrete hook with another component. A concrete hook-up is a component that implements the abstract hook-up interface.

In the example, the ServiceProvider interface is an abstract hook-up because it contains the hook-up method register(Service). The ServiceProviderImpl class is a concrete hook-up.

Java: A high-level programming language developed by Sun Microsystems. It

is an object-oriented language similar to ☞C++, but simplified to eliminate

language features that cause common programming errors. Java source code

files are compiled into a format called bytecode, which can then be executed

by a Java interpreter. Compiled Java code can run on most computers because

Java interpreters and runtime environments, known as Java Virtual Machines


(VMs), exist for most operating systems, including UNIX, the Macintosh OS,

and Windows.

Module: The properties of a module can be summarized as follows:

• A module has a clearly defined functionality.

• A module has a clearly defined interface to the external world. This interface is described by functions, not by global variables. The interface also includes a description of the conditions under which the module works properly.

• A module is completely self-contained, i.e. no relation to outside code-parts is needed for it to work.

• Modules are organized hierarchically following their abstraction level.

RMI: Remote Method Invocation A set of protocols being developed by Sun’s

JavaSoft division that enables Java objects to communicate remotely with

other Java objects.

Supplier: Inter-software communication may be split into one part, which asks for a service, and another part, which responds. The supplier represents the module which responds to requests of the ☞Client.

Test: Consists of validating whether a product corresponds to the user requirements (see ☞URD). For this purpose testcases must be developed. In CrashIt these testcases can be grouped into a ☞Testcase-sequence, and these into a ☞Test-configuration.

Testcase: specifies the method call which is applied to a component, and the expected results.

Testcase-sequence: specifies a sequence of testcases (see ☞Testcase) to apply on

a component.

Test-configuration: specifies a sequence of ☞Testcase-sequence.


Unit test: Unit testing involves testing software code at its smallest functional

point, which is typically a single class. Each individual class should be tested

in isolation before it is tested with other units or as part of a module or

application.

The objective of unit testing is to test not only the functionality of the code,

but also to ensure that the code is correct and robust.

UNIX: A popular multi-user, multitasking operating system developed at Bell Labs in the early 1970s.

URD: User Requirements Document. A summary of all requirements that the customer demands from a product. A URD has to be so exact that it can be used like a checklist to validate the final product.

XML: eXtensible Markup Language. A specification developed by the W3C (http://www.w3c.org). XML is a pared-down version of SGML, designed especially for Web documents. It allows designers to create their own customized tags, enabling the definition, transmission, validation, and interpretation of data between applications and between organizations.

XP: eXtreme Programming is a development method that enforces documentation and includes testing as an implicit development task.



Index

A
Available
  CrashIt . . . 86

B
Behavior
  Exception . . . 15
  Observer . . . 40
Behaviors
  Mediator . . . 40

C
Client . . . 51
Coding standards . . . 69
Compatibility . . . 34, 36
Components . . . 42–44
  class . . . 106
  constructor . . . 106
  setup . . . 106
  Contract . . . 44
  Definition . . . 30, 106
  Dynamically loadable . . . 43
  Exceptions . . . 43
  Hook Hook-Up . . . 43
  Side-effect-free . . . 42
components
  states . . . 15
Configuration
  components . . . 98
  config . . . 98
  connections . . . 98
  contracts . . . 99
  dir . . . 98
  resultchecker . . . 99
  testclasses . . . 99
  Description . . . 97
  example
    Flowcontrol . . . 100
  File . . . 97
  ResultStore
    class . . . 99
    result . . . 99
  test configuration . . . 100
Connection
  client.id . . . 110
  client.type . . . 110
  client . . . 109, 110
  connection . . . 107
  hook-method . . . 110
  hook-parameters . . . 110
  hook-returntype . . . 110
  id . . . 109
  path . . . 107
  supplier.id . . . 110
  supplier.type . . . 110
  supplier . . . 110
Contracts
  linear contracts . . . 23
  nested contracts . . . 24
  pseudo-linear contracts . . . 24
Correctness . . . 33, 35
  Test . . . 47
CrashIt . . . 2, 56
  Available . . . 86
  Configuration . . . 96–112
  Flowcontrol . . . 112
  Parameter Converter . . . 111
  ResultStore . . . 99
  Testcase . . . 104
  User groups . . . 57


D
Definition
  Components . . . 30, 106
  deterministic call . . . 15
  Enhanced state-machine . . . 22
  nondeterministic call . . . 15
  one way limitation . . . 20
  Result checker . . . 105
  Silent Catch . . . 54
  Test-Case . . . 58
  Test-class . . . 104
Description
  Configuration . . . 97
  Test
    configuration . . . 100
Design by Contract . . . 7
Design Pattern . . . 29, 40
  Hook . . . 41
  Hook Hook-Up . . . 40
  Hook-Up . . . 41
  Observer . . . 40
deterministic call
  Definition . . . 15
Development models
  Extreme programming . . . 68
  Prototyping . . . 68
  Spiral model . . . 66
  Waterfall model . . . 64
Download
  CrashIt . . . 86

E
Ease of use . . . 34, 36
Eiffel . . . 10
Enhanced state-machine
  Definition . . . 22
Exception . . . 44
  Behavior . . . 15
Extendibility . . . 33, 36
External factors . . . 34
Extreme Programming . . . 68

F
File
  Configuration . . . 97
  Test
    configuration . . . 100
Flowcontrol . . . 112
  Configuration
    example . . . 100
  Sequence
    example . . . 102
  short description . . . 100
  test configuration . . . 100
  test sequence . . . 102
Framework . . . 30

H
Hook Hook-Up . . . 40

I
iContract . . . 10
Integration test . . . 51
intermediate,working
  states . . . 16
InternalFactors . . . 37
Interoperability
  Test . . . 49
Intuitivity . . . 34

J
Java . . . 5
jUnit . . . 8–10
  testcase . . . 8
  testsuite . . . 8

M
Mediator
  Behaviors . . . 40
  Pattern . . . 40
Methodology . . . 63
Modularity . . . 34, 39

N
Nested exceptions . . . 54
nondeterministic call
  Definition . . . 15

O
Observer
  Behavior . . . 40
  Design Pattern . . . 40
one way limitation
  Definition . . . 20
OOP . . . 29

P
Pair programming . . . 69
Pattern
  Mediator . . . 40
Performance . . . 34, 37
  Test . . . 48
Planning Game . . . 68
Prototyping . . . 68

R
Refactoring . . . 69
Response time . . . 49
Result checker
  Definition . . . 105
Result store
  result . . . 99
ResultChecker . . . 60
ResultStore
  class . . . 99
Robustness . . . 33, 35
  Test . . . 48

S
Sequence
  example
    Flowcontrol . . . 102
Service . . . 31, 89
ServiceProvider . . . 31, 89
shut down . . . 79
Silent Catch
  Definition . . . 54
Simplicity . . . 34, 38
Software quality factors . . . 33–39
  Compatibility . . . 34, 36
  Correctness . . . 33, 35
  Ease of use . . . 34, 36
  Extendibility . . . 33, 36
  External factors . . . 34
  Internal Factors . . . 37
  Intuitivity . . . 34
  Modularity . . . 34, 39
  Performance . . . 34, 37
  Robustness . . . 33, 35
  Simplicity . . . 34, 38
states
  components . . . 15
  intermediate,working . . . 16
Supplier . . . 51

T
Test . . . 4
  case
    class . . . 104
    function . . . 105
    parameters . . . 105
    resultchecker . . . 105
    result . . . 105
  configuration
    flow-control . . . 100
    path . . . 100
    Description . . . 100
    File . . . 100
  Correctness . . . 47
  Design principles . . . 51
  Interoperability . . . 49
  Performance . . . 48
  Robustness . . . 48
  sequence
    flow-control . . . 102
    path . . . 102
    start-node . . . 102
    testcases . . . 102
Test-Case
  Definition . . . 58
Test-class
  Definition . . . 104
Test-configuration
  config
    dir . . . 98
    file . . . 98

W
Waterfall model . . . 64
  typical steps . . . 64