



Research Article — SACJ, No. 47, July 2011

Software Testing in Small IT Companies: A (not only) South African Problem

Stefan Gruner, Johan van Zyl

Department of Computer Science, Faculty of Engineering and IT, University of Pretoria, Republic of South Africa

ABSTRACT

Small software companies make up the majority of software companies around the world, but their software development processes are often not as clearly defined and structured as in their larger counterparts. Especially the test process is often the most neglected part of the software process. This contribution analyses the software testing process in a small South African IT company, here called X, to determine the problems that currently cause it to deliver software fraught with too many defects. The findings of a survey conducted with all software developers in company X are discussed, and several typical problems are identified. Solutions to those (or similar) problems often already exist, but a major part of the problem addressed in this contribution is the unawareness, or unfamiliarity, of many small-industrial software developers and IT managers as far as the scientific literature on software science and engineering, and especially in our case software testing, is concerned. We also discuss two prevalent test process improvement models that can be used to reason about the possibilities of process improvement. This contribution is an extended and revised summary of a longer project report [56], which can be obtained from the corresponding author of this article upon request.

CATEGORIES AND SUBJECT DESCRIPTORS:

KEYWORDS: Software process improvement, Software testing, IT management

1 MOTIVATION

Software has become so intrinsic in the fabric of modern society that its value cannot be ignored. This has opened a window for many businesses wanting a ‘slice’ of the software ‘pie’. Moreover, because software engineering is not (yet) a protected profession, anybody with money for investment can hire a few programmers and start his own ‘software business’ — in contrast to, for example, an accounting firm, which is required by law to be operated by state-certified and chartered accountants.

Email: Stefan Gruner [email protected], Johan van Zyl [email protected]

Opportunities like these have given birth to thousands of small software companies (SSC) across the world. Small software companies, with between one and fifty employees, constitute the majority of the global software industry [57].

SSC face many challenges in their pursuit to create acceptable software and to survive in an increasingly competitive market place: The first challenge is that SSC tend not to believe that the same processes and methods used by large software companies are applicable to them as well. The reasons for not adopting more rigorous methods are typically cost, lack of resources, time, and level of complexity [3] [53]. A second challenge facing SSC is that their processes and organisational structures are mostly informal, which usually leads to a chaotic workflow [50] [52]. A possible reason for this is that SSC have to focus on time-to-market to survive and thus often neglect the more resource-intensive rigorous processes. A third challenge is a lack of resources in terms of skills and experience [53] [41]: SSC can often not afford to employ experienced software developers, not to mention software engineers. A fourth challenge is that, despite the many Software Process Improvement (SPI) programs such as CMM (Capability Maturity Model), CMMI (Capability Maturity Model Integration) and ISO/IEC 15504, SSC are either not aware of them, or SPI programs are not tailored towards solving the problems facing small companies [53] [22]. These problems are not specific to developing countries such as South Africa; they are also known in highly developed countries like Germany with its approximately 20,000 software-producing companies. A study conducted on the German software industry [9] indicated:

• Only about 30% of all German IT houses followed a somehow ‘defined’ workflow in their software production process.

• Only about 22% of all German IT houses cooperated with scientific software engineering research institutions for the sake of software quality improvement.

• Only 5% of all German IT houses were on a CMM maturity level of 2 or higher.

• 95% of the approximately 20,000 German IT houses were on the lowest CMM level, 1.

The problems facing small software companies mentioned above are similar to the problems faced by company X, a small South African software company with approximately 40 employees, eight of whom are software developers. Their workflow process is ‘officially’ defined, but not followed consistently in each project. The type of software products developed by company X covers a wide spectrum that ranges from small desktop applications to large web-based content management systems for various platforms. Company X focuses on current customer needs and will attempt any type of development project required by a customer, even if there are no skills in that particular domain. This diverse product portfolio can be attributed to a need for securing revenue in a competitive market.

Not adhering to the defined process during projects can be attributed to a lack of historical project data, which leads, amongst others, to unachievable and unreasonable deadline estimations in current projects. The project becomes rushed as it falls more and more behind schedule, which in turn leads to ad-hoc shortcuts being taken in the process. The result of this rushed and ad-hoc process is untested software of questionable quality. The other major problem facing the company is the lack of skills and experience, as none of the developers have any industry experience apart from that gained while working for company X. Owing to the lack of financial resources, the software developers currently employed are university graduates with little or no industry experience. This leads to customer-facing projects being a training ground for software developers who spend most of their time performing research to come to grips with new technology. The software process used in company X is the well-known Waterfall model, where the software testing phase follows at the end of the process. The rushed and chaotic nature of software projects at company X causes the testing phase to be largely ignored, and most often used for implementation of missing functionality. This leads to late, untested software products fraught with defects. This is not a sustainable model, as frustrated and unsatisfied customers will not return for future cooperation.

The contribution of this article is a critical assessment of the software testing process followed at company X, as well as a proposal for an improved process, based on the software engineering literature as well as on insight from experience and interviews with industry test practitioners. Since company X is a typical case, representative of many others, this contribution should be of interest to a wide range of readers from the IT industry as well as from academic software science and engineering institutes. As an ‘added value’ this article also offers a rich literature database under the following two aspects:

• The aspect of small IT companies (and their software quality procedures) is covered by references [2] [3] [11] [12] [22] [25] [26] [32] [36] [39] [41] [50] [51] [52] [53].

• The aspect of internationality (for comparing our South African situation with the situations in countries such as Australia, Brazil, Canada, Germany, or South Korea) is covered by references [9] [19] [34] [37] [51].

2 PROBLEM ANALYSIS

To study the current testing practices of company X more closely, several interviews were conducted with all its developers [56]. The questionnaire used in the interviews contained fifteen open-ended and two closed-ended questions. The questions were clearly explained to the participants, and interviews lasted for about 30 minutes. The last question required that the developer ‘model’ the testing process as he currently performs it; participants were given one full day to complete this question. The main objective of the interviews was to determine the current state of testing practices in company X and to identify concrete problems within the current testing process. More specifically, the questionnaire focused on eliciting information from the following three categories [56]:

• the level of software testing skills and knowledge of the developers in company X,

• the company’s managerial commitment to software testing as perceived by its developers,

• the current software testing process used in the company.

Unfortunately, this survey was of a rather ‘qualitative-interpretative’ character, since only eight people participated in the survey, which is per se not a statistically significant number. However, since company X is small, those eight people comprised 100% of the company’s software-developing members of staff at the time when the interviews were conducted. Seven of those eight developers are university graduates with qualifications ranging from three-year diplomas to four-year bachelor degrees in Information Technology and Computer Science or related fields. Two of those were, at the time of the survey, still in the process of completing their studies, while one developer had dropped out.

The summaries of the findings of the initial problem analysis, which are described in much finer detail in [56], are given below. As far as the individual testing skills in company X are concerned [56]:

• The developers did not receive rigorous training in software testing at tertiary (university or college) level: The little training received by some of the developers was only a superficial overview of testing and was not explored in depth or practically exercised.

• No further training courses or seminars were attended after their tertiary education was completed: This meant that none of the developers were properly trained in software testing in practice.

• No internal training programs were conducted by company X to further educate its developers.

• Unit tests, which are usually performed in the testing process, were not performed by six out of the eight developers: The main reason given was a lack of knowledge, which can be attributed to the first three problems.

• No test tools were used to assist the developers in the process of software testing: This also implies no test automation, and tedious, repetitive manual ‘debugging’ work.

• Software testing as it is currently done is not integrated into the software process: Almost all of the developers in company X only performed their ‘debugging’ after the implementation phase.

• Six out of eight of the developers in company X did not know that testing entails more than just the execution of code: To them, there did not exist any conceptual difference between testing and ‘debugging’.

As far as the commitment of company X, especially in terms of its management, towards a proper software testing process is concerned, our initial problem analysis has identified the following shortcomings [56]:

• The managers of the company did not regularly read the scientific literature on software testing, nor did they sufficiently inform themselves by other means (e.g. seminar participation) about the current state of the art in this field.

• The company did not provide developers with a company-wide testing policy: There were no testing objectives, principles or check-lists to indicate the view or objectives of the company about how to do software testing.

• No proper testing environments were given to the developers for testing: Developers used virtual machines on their notebook or desktop computers to perform their testing. These virtual machines were not properly managed, such that test results were not sufficiently repeatable. They were mostly shared amongst several developers and thus did not present any pristine testing environment.

• No test templates, historical data, or documentation from previous software projects were provided: Developers sometimes created their own test documentation ad-hoc, which did not adhere to any proper standards.

• The test levels that comprise the testing process were not clearly defined, and neither were the tasks that should be performed within each test level.

The combined effects of these shortcomings, both on the level of the individual employees as well as on the managerial level of company X as a whole, are quite typical:

• Far too little time (only about 10%) per project was dedicated to testing.

• The testing process itself could not be monitored, because most developers had only sketchy documentation about their testing progress. No test results were available for future reference, as those sketches were not part of the deliverables of the project. There was thus no quantitative measurement of the current testing process.

• Moreover, there was a lack of planning on how the testing in a project should be performed. The testing was mostly done ad-hoc as ‘debugging’, and the decision on what needed to be tested was solely at the discretion of the individual developer.

• Few or no test cases were written, due to a lack of testing skills and in-depth theoretical knowledge, such as mentioned by Ammann and Offutt [1]. This also implied that most code written for testing purposes formed part of the production code; in other words, there was no clear separation between production code and test code.

• Defects were not captured in a defect tracking system where they could be properly monitored, managed, and prioritised: Developers mostly wrote the defects they encountered on pieces of paper.

• There was no dedicated test team and there were no testers available: all testing activities were solely performed by the developers themselves, with the usual ‘developer bias’ towards showing the absence of defects rather than their presence.

• Testing was thus systematically geared towards quick and shallow user-acceptance tests, which correspond to the requirements specification of a software system, whereas the deeper small-scale tests, which correspond to the software architecture and its unit level, were too often omitted.

The following section discusses possible solutions to the concrete problems identified here.

3 MATURITY LEVELS IN TESTING

Small IT companies, of which company X is only one example, typically operate at a low or even very low level of maturity as far as their software testing processes are concerned. Test standards, such as the international IEEE 829 or the British BS 7925-2, already exist, but they do not provide sufficient ‘progress guidelines’ for sub-standard IT houses which wish to make progress towards the application of such standards. To be able to measure their process improvement progress, IT companies need criteria against which to check the observable phenomena.

CTP Criterion                                     | TMMi Category                   | TMMi Level
--------------------------------------------------+---------------------------------+-----------
Analysis of quality risks                         | Test monitoring and control     | 2
Test estimation                                   | Test planning                   | 2
Test planning                                     | Test policy and strategy        | 2
Test planning                                     | Test planning                   | 2
Test team building                                | Test planning                   | 2
Design and implementation of test support systems | Test design and execution       | 2
Design and implementation of test support systems | Test environment                | 2
Running and tracking of tests                     | Test monitoring and control     | 2
Running and tracking of tests                     | Test design and execution       | 2
Managing of bugs                                  | Test monitoring and control     | 2
Discovery of test context                         | Test life cycle and integration | 3
Test planning                                     | Peer reviews                    | 3
Test team building                                | Test organisation               | 3
Test team building                                | Test training programme         | 3
Running and tracking of tests                     | Non-functional testing          | 3

Figure 1: Mapping between Black’s CTP Criteria and TMMi Categories at medium Levels of Maturity


The Test Maturity Model (TMM) / Test Maturity Model Integration (TMMi) framework [46] is well known amongst academics, but little known (if not even completely unknown) amongst small IT houses such as company X. On the other hand, the testing handbook by Black, with its Critical Testing Processes (CTP) framework [5], specifically addresses industrial practitioners and may thus be assumed to be wider known amongst industrial software developers. Black [5] describes the CTP processes as lightweight check-lists rather than bureaucratic regulations. This, together with Black’s 20 years of experience in the testing industry and the concise, practical guidelines on implementing a test process improvement project, made the CTP a good candidate for this study. Whereas the CTP can be seen as an agile, easy-to-implement framework, the TMMi follows a more heavyweight approach with a strong foundation in testing principles, best practices, and general testing goals. Their complementary characters made these two frameworks ideal candidates for this study. One of the problems in this context is that the CTP criteria provided by Black do not immediately let us see to which TMMi level (which we take as a reference framework) they correspond. Small IT houses such as company X, which usually start at the lowest possible TMMi level (1), would thus not know which higher TMMi level they would reach by following Black’s criteria. To tackle this problem, we have described and explained a mapping [56] between the categories of TMMi and Black’s criteria, which is summarised in Figure 1. This mapping takes only the medium TMMi levels, (2) and (3), into account, because it is highly unrealistic for small IT companies to reach out for the higher TMMi levels, (4) and (5), which are often not even reached by large international industrial software corporations. It is worth noting that there are also several CTP criteria and recommendations in [5] which cannot be directly mapped to TMMi categories at the above-mentioned levels. Companies using Black’s CTP criteria should also aim at applying for an official TMM/TMMi certificate. Although the CTP positively contributed to the improvement process, this study put more emphasis on the TMMi improvement model, the reason being that company X preferred to align itself with a fast-growing industry standard.

The TMM/TMMi classification was modeled with the older, widely known CMM/CMMi classification in mind, after it had been discovered that even those IT houses which enjoy the benefits of a high CMMi level (in general) can still be ‘pretty bad’ as far as their software testing procedures (in particular) are concerned. In other words, a high CMMi level does not necessarily imply a high TMMi level as well. Purchase and procurement managers should be aware of the possibility of such a discrepancy between CMMi and TMMi, and should thus not be lured into purchase contracts merely on the basis of a high CMMi certificate.

An IT house at the lowest possible TMMi level can be characterised by the following description: “At TMMi level 1, testing is a chaotic, undefined process and is often considered as part of debugging. The organisation does not provide a stable environment to support the processes. Tests are developed in an ad-hoc way after coding is completed. The objective of testing at this level is to show that the software runs without major failures. In the field, the product does often not fulfil its needs, is not stable, or is too slow to work with. Within testing, there is a lack of resources, tools and well-educated staff. At TMMi level 1, there are no defined process areas. Products also tend to be delivered late, budgets are overrun and quality is not according to expectations” [46].

If this is the case in any IT house, whether big or small, improvements should be attempted as soon as possible. Many ‘standard’ solutions to those well-known problems can be found in the software engineering literature, but they must still be adapted to the situation in each case.

4 LITERATURE SOLUTIONS

In section 2, several problems of our case study object, company X, were summarised together with their software testing procedures, including their meta-problem of not knowing that their problems are rather common or ‘typical’, and that the scientific software engineering and software testing literature already provides many quite easily applicable solutions to those kinds of problems. Therefore, in this section, we re-visit the previously identified problems in the context of company X and point to several relevant papers (related work) the authors of which have also dealt with similar problems. Consequently, the papers mentioned in this section can be regarded as ‘tools’ for improving the software testing procedures in company X, which is a typical responsibility of IT management.

4.1 Testing Training and Education

All the developers in company X, except for one, had completed their tertiary education in Computer Science related fields. The developers commented in our interviews that little or no software testing principles had been taught to them at university. Indeed, according to [42] and [13], only a small portion of undergraduate Computer Science/Informatics curricula is allocated to the topic of testing; they recommended that more testing should be taught at tertiary level. Paper [20] agrees that university training on software testing is currently not sufficient. A study conducted by [34] indicated that 27 out of 65 participating organisations reported that less than 20% of their test team received training in software testing at university. Paper [34] concluded that there is either a lack of practical skills delivered to tertiary students in software testing courses, or a complete lack of software testing courses provided by universities. Paper [42] recommends that more testing be taught and that there be more communication between industry practitioners and academics to ensure that undergraduate curricula provide students with a solid foundation in software testing. These papers are all hinting at long-term solutions to a widely recognised problem. In the immediate situation of company X, however, they are not directly applicable. Therefore, such IT houses must in the meantime go through the effort of sending their technical staff to commercially available courses and seminars, as they are often offered by consulting houses and other IT support companies. In South Africa, the Computer Society of South Africa (CSSA), especially its SIGiST special interest group in software testing, is also active with public seminars in this field.

In this context, a study conducted by [34] indicated that 47 out of 65 IT organisations provided opportunities for their testing staff to be trained in software testing. Among the organisations investigated in this study, 37 out of 65 organisations preferred commercial training courses, as opposed to 25 organisations that preferred internal training courses. Another study [19], conducted in Canada, indicated that 31% of the respondents received rigorous training on software testing. Paper [49] advises that small to medium enterprises (SME) should invest in test training courses for developers or testers. The ‘International Software Testing Qualification Board’ (ISTQB) provides professional software testing certifications in almost 40 countries across the world, including our country, South Africa.

The ISTQB syllabi are organised in three levels: ‘Foundation’, ‘Advanced’, and ‘Expert’. The Foundation certification syllabus provides a solid introduction to software testing and is well suited for people entering the field of software testing. The Advanced certification syllabus provides a mid-level certification for practitioners with at least five years of experience. Paper [6] suggests that prospective testers do not even have to attend expensive training, but can gain free access to the ISTQB syllabi from the ISTQB website and only write the exams at accredited testing centres.

The South African Software Testing Qualification Board (SASTQB) was recently established, in 2008, as the 40th national board member of the ISTQB. There are at least three accredited ISTQB training providers in South Africa, and we recommended that all the developers from small IT companies, such as company X, attain the Foundation and Advanced level certifications.

4.2 Test Planning and Test Cases

During the time of this case study, testing in company X was not planned at all. Developers mostly started testing without any idea of how, when, or what needed to be tested. In the context of similar ‘emerging’ software organisations, the authors of [25] performed a study that analysed the current testing practices of a small software company. The results of that study also indicated that no upfront planning of the testing process was done. According to [25], this resulted in deadlines being overrun and increases in the project costs. The authors of [10] suggested that test planning is an essential component of a testing process, as it ensures that the process is repeatable, defined, and managed. This view is also supported by [17], which states that test planning contributes significantly to improving the test process; also [5] has argued in this direction.

On the highest, most general level of test planning is the so-called ‘Master Test Plan’ (MTP). It provides an overall view of the test planning and test management of a project as a whole. According to standard IEEE 829, this entails activities such as:

• selecting the different parts of the project’s test effort,

• setting objectives for each constituent part of the test process identified,

• identifying the risks, assumptions and standards for each part, and

• identifying the different levels of test and their associated tasks, as well as the documentation requirements for each level.

On the small scale of test planning we eventually find the task of defining test cases, which can be regarded as ‘micro-requirements’ that must be fulfilled by the various software units under test (see also the following sub-section 4.3). Standard IEEE 610 defines a test case as a set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a program path or to verify compliance with a requirement.
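As an illustration of this definition, the following minimal sketch captures one test case as a small data structure. It is written in Python for brevity; all field names and example values are our own hypothetical choices and are not prescribed by IEEE 610:

    from dataclasses import dataclass, field

    # A minimal IEEE-610-style test case record: inputs, execution
    # conditions, and expected results for one particular objective.
    @dataclass
    class TestCase:
        identifier: str                                     # e.g. 'TC-LOGIN-001'
        objective: str                                      # requirement or program path to exercise
        preconditions: list = field(default_factory=list)   # execution conditions
        inputs: dict = field(default_factory=dict)          # concrete test inputs
        expected_result: str = ''                           # pass/fail criterion

    login_case = TestCase(
        identifier='TC-LOGIN-001',
        objective='verify that a registered user can log in (a hypothetical requirement)',
        preconditions=['user account exists', 'test database restored to a known state'],
        inputs={'username': 'alice', 'password': 'correct-password'},
        expected_result='user is redirected to the start page',
    )

Even such a simple record makes test cases explicit, repeatable, and separable from the production code, which addresses several of the shortcomings listed in section 2.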

In our case study, the software developers at company X also neither designed nor executed any explicitly defined test cases. This was mainly due to a lack of general knowledge in testing, as well as a lack of guidance from a test policy and test strategy (see also sub-section 4.6). In the case study of [21] amongst twelve software organisations, nine of them did not enforce the use of test case selection methods; it was left to developers and testers to decide which test cases were selected. In [16] it was argued that the absence of clearly defined test cases in a test process impedes the improvement of the test process. They recommended that templates be made available that provide guidelines on the most common steps to create a test case. Another study, conducted for paper [24], investigated the test processes followed by developers amongst six organisations: it revealed that only half of those organisations created test cases for testing their software. None of the organisations in that study developed any test cases before the product programming began.

Test Driven Development (TDD) is a software practice whereby unit tests are defined before product program code is implemented. Production code is then gradually added until the test passes. In [18] it was argued that code developed using TDD had superior quality compared to code developed using traditional approaches such as the waterfall-like model applied in company X. The same study also claimed that 78% of the professional programmers interviewed believed that TDD improved their productivity.
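To make this cycle concrete, the following minimal sketch uses Python’s standard unittest module; the function, its name and the 14% rate are our own illustrative assumptions and are not taken from [18]:

    import unittest

    # Step 1 ('red'): the unit test is written first; it fails as long as
    # price_with_vat() is missing or incomplete.
    class PriceWithVatTest(unittest.TestCase):
        def test_adds_fourteen_percent_vat(self):
            self.assertAlmostEqual(price_with_vat(100.0), 114.0)

    # Step 2 ('green'): just enough production code is written to make the
    # test pass; any further behaviour would be driven by further tests.
    def price_with_vat(net_price, rate=0.14):
        return net_price * (1.0 + rate)

    if __name__ == '__main__':
        unittest.main()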

On the other hand, there is also recent evidence which indicates that not all is well with TDD: In a comparative case study conducted amongst five small-scale software projects [43], it was found that “an unwanted side effect can be that some parts of the code may deteriorate. In addition, the differences in the program code, between TDD and the iterative test-last development, were not as clear as expected. This raises the question as to whether the possible benefits of TDD are greater than the possible downsides. Moreover, it additionally questions whether the same benefits could be achieved just by emphasising unit-level testing activities” [43]. This leads us now to the topic of unit testing.

4.3 Unit Testing

IEEE standard 610 explicates unit testing as ‘testing of individual hardware or software units or groups of related units’; in object-oriented programming (OOP) a unit may represent a single method in a class, or the whole class itself. IEEE standard 1008 defines an integrated approach to structured and documented software unit testing in this context.

None of the developers in company X unit-tested their program code in detail. According to [23] [24] [33], however, it is typically programmers, as opposed to testers, who write unit tests. Those papers also emphasise the importance of unit testing and state that it is the most efficient test to conduct, due to its low granularity and high code coverage. In [24], a study conducted in six software organisations on the testing processes of programmers revealed that even a defined test process did not imply that programmers had a concrete ‘recipe’ for conducting unit and unit integration testing. According to [4], unit testing is seen as an essential phase in software testing. According to [33] [4], unit testing allows individual units to be inspected, which can reveal faults early that would otherwise be difficult to find in system or user-acceptance testing. In [4] it is also noted that unit testing is generally not performed well, and sometimes not performed at all due to cost implications, as was also the case with our study object, company X. In this context, the authors of [49] recommend the following IT management tactics for improving the quality of unit testing done by programmers:

• Force developers to perform unit testing by emphasising that unit testing is their responsibility.

• Motivate developers by having regular meetings and seminars to share information.

• Allow developers the freedom to perform unit testing as they see fit: thereby, allow them to experiment with the different test tools and technologies available.

• Provide developers with the necessary resources: This includes time specifically allocated to the study of testing methods and to practising the learned methods in various exercises.

4.4 Test Automation

In our case study, company X did not use any test tools to assist in the automation of test tasks. In a study by [34] it was found that 44 out of 65 organisations used test automation tools. Interestingly, 30 of those 44 organisations thought that the use of automated test tools was beneficial. According to [34], the biggest factors against the use of test tools were cost, time and difficulty of use. This view is also supported by [14], who states that test tools can be very expensive and involve a lot of effort from the testers. They suggest that organisations may consider using test tools when they have mature test practices or have a large number of tests. Organisations may do well in software testing without the use of test tools, too [14]. Also in [29] it is said that using test tools could pose a problem for organisations having the following characteristics:

• Lack of a well-defined testing process.

• Ad-hoc testing.

• An organisational culture that does not support and embrace testing.

These three issues were relevant for company X too, as it had no structured testing process, its testing activities were ad-hoc, and the company had not yet embraced a ‘culture’ of software testing.

Although testing tools and automated testing lighten the workload by eliminating repetitive manual tasks, the difficulties identified in the literature suggest that manual testing will be sufficient for small companies, such as company X, for the time being. In any approach to automated software testing one must also take into account the possibility of the so-called ‘Heisenbugs’: these appear to be (pseudo) ‘defects’ which are caused by the test apparatus used for automated software testing, not by the software under test itself. Because of the possibility of those ‘Heisenbugs’, automated software testing must never be conducted naively: the testers need to know their test apparatus well, and they also need to have some theoretical knowledge about the phenomenon of ‘Heisenbugs’, their causes and their potential effects.

4.5 ‘Static Testing’ by Program Code Reviews

The only form of testing being conducted in company X during our case study was ‘dynamic’ testing, which is defined by the ISTQB [47] as testing that involves the execution of the software of a component or system. This implies that code must already be available before testing can commence. ‘Static’ testing, on the other hand, does not entail the execution of code, but refers to activities such as program code reviews.

A study conducted on software testing maturity in the Korean defence industry [37] indicated that static test techniques were identified as having a low maturity on the test process improvement (TPI) maturity matrix [27]. A study conducted by [24] on the testing processes in six software organisations revealed that only two organisations conducted code reviews against company coding standards. Only two organisations conducted peer reviews, and only three organisations did inspections. Although the sample size of this study was small, it might be regarded as ‘anecdotal evidence’ for the lack of static testing techniques in many software organisations.

In [24] it was also recommended that organisations encourage the application of static testing alongside dynamic testing. Similarly, [33] mentioned reviews of functional requirements and design as two of the key factors that indicate an organisation’s test maturity. There it was said that functional requirements reviews improve the correctness of requirements even before the software is designed. In terms of design reviews, it was argued in [33] that conducting reviews of the design can help to eliminate flaws in the design before coding commences.

In a recent study conducted by [16] amongst ten software organisations to determine which factors impede the improvement of testing processes, the establishment of review techniques, ranging from informal peer reviews to rigorous code inspections, was suggested as an approach to improving the testing process. In [30] it was argued that reviewing documents can result in a 65% reduction in defects that would otherwise have been carried forward to later phases of the development cycle.

For small IT companies with limited resources (such as company X in our case study) we would recommend, on the basis of our experience, three reviews throughout the software cycle:

• The first review should be conducted during the requirements phase, to try to ensure that all specified requirements are testable.

• The second review should be conducted after the design phase, to try to ensure that the design does not contain flaws which would propagate to the programming phase.

• The third review should be a program code review held during the implementation phase, to try to ensure that developers are programming according to company coding standards, and to identify possible weaknesses or bad programming practices.

4.6 In-House Guidelines: Policy and Strategy

Several of the developers in company X indicated in our semi-structured interviews that they did not know how to perform testing, or what is expected of them in this regard. They also indicated that the company did not provide them with in-house guidelines on how the IT management would want their testing to be conducted.

According to [48], a test policy answers questions such as ‘Why do we test?’, ‘How important is quality?’, and ‘Are we time-driven or quality-driven?’. It was further suggested in [48] that a test policy forms a critical basis for commitment to all test-related activities in an organisation. Similarly, it was suggested in [38] that an explicitly defined test policy should provide a framework for the planning, organisation and execution of all test activities within a company. The author also warns that if a test policy is only given implicitly, it can have the following adverse effects [38]:

• Testing is not consistent over different departments: This will cause the testing approach to be defined individually per project, and the results will not be comparable between projects.

• The test policy will not be recognised and adhered to by the people involved.

• Managers will struggle to properly track a project’s testing status, such that performance will differ based on the testing approach followed individually.

For companies such as our case study company X, we therefore suggest that a test policy be one of the first improvements to be introduced. Any test policy will emphasise the importance of testing and software quality. This will create an awareness, and a sense of responsibility, amongst its employees, especially the software developers, that testing is viewed as a critical part of the software cycle.

In contrast to a test policy (as mentioned above), a test strategy is defined by the ISTQB as a high-level description of the test levels to be performed and the testing within those levels for an organisation. According to [21], a test strategy thus provides an organisation with responsibilities within each test level and also advises the organisation on how to choose test methods and coverage criteria in each phase. There was no such test strategy in company X to guide developers on how to perform testing at the time our case study was conducted. Interestingly, in the study conducted by [21], only four out of the twelve organisations under study used explicitly defined test strategies. Three organisations had implicitly defined strategies that formed part of some standard regulating the testing aspects of a product.

Another test process maturity study [24], conducted in 2007 amongst six software organisations, indicated that there was a definite lack of testing guidelines, too. This study argued that developers did not know what to test or how to test, which provides an ideal opportunity for the implementation of a test strategy. According to [48], a test strategy is based on an organisation-wide test policy; thus test policy and test strategy are closely related to each other. The strategy must be aligned with the organisation’s software development process, as various test activities and procedures are defined for each phase of the software development process. These activities typically include user-acceptance test planning at the requirements phase, system test planning at the architecture phase, integration test planning at the design phase, and unit test design and execution during the implementation phase (see the well-known V-model for comparison).
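As a minimal illustration, this V-model-style alignment can be written down as a simple lookup structure; the following Python snippet merely restates the phase-to-activity pairs from the paragraph above, with phrasing of our own:

    # Illustrative alignment of development phases with test activities,
    # following the V-model as described in the text above.
    V_MODEL_ALIGNMENT = {
        'requirements phase':   'plan user-acceptance testing',
        'architecture phase':   'plan system testing',
        'design phase':         'plan integration testing',
        'implementation phase': 'design and execute unit tests',
    }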



4.7 Test Specialists

Closely related to the issue of in-house guidelines (test policies and test strategies) is the question whether or not an IT house employs specialist software testers who are not also playing the role of software programmers at the same time. There is no ‘one-size-fits-all’ solution to this problem; for some organisations or projects it might be better to keep testers and programmers completely separate, whereas for other organisations and projects it might be better to have programmers and testers ‘mingled’, or in flexibly interchangeable roles. This also depends on the maturity level (see section 3) of the organisation; in any case, the in-house guidelines must clearly answer this organisational question and should not leave it dangling in an undefined limbo state.

At the time of our case study, company X did not employ any dedicated software testers, and all test-related activities were performed by its software developers. As was also recognised by [15], small companies have limited resources and therefore do not (or even cannot) invest in separate quality assurance staff. Another software process improvement study conducted in a small IT house [3] also indicated that not enough resources were available to establish a separate quality assurance group; this coincides with the view of [32]. In the already mentioned study [24], however, five out of the six organisations had independent test teams. That study did not specify the sizes of the organisations, but indicated that some of them had up to only twenty developers; the ratio between developers and testers indicated that the organisations had between one and four developers per tester. Company X at the time of our case study had only eight developers.

In [25] it was found that having a dedicated test role in projects allowed testing to become more structured, as well as allowing developers to focus more on fault finding at the unit level than at the system level of testing. A survey conducted by [34] in Australia regarding software testing practices indicated that 44 out of 65 organisations investigated did indeed have an independent test team. Also [49] recommended that even small to medium sized enterprises (SME) should employ at least one person who is dedicated to testing. If this is not possible, then one may also follow [40], which recommended that some developers in the team be assigned the role of testers temporarily. This would imply that the same developer may not test his own program code, such that there is some degree of independence between development and testing. However, if an IT house wishes to reach TMMi level 3 (see section 3 above), then dedicated testers must be members of a separate quality assurance group.

4.8 Test Timing: Early versus Late Testing

Company X in our case study followed the ‘Waterfall’ approach to software development, with the testing phase being situated between programming and deployment. This practice is in contrast to newer software engineering practices. A study described in [25] on testing in emerging software organisations (similar to company X) revealed that testing was mostly being performed too late in projects. In [28] it was argued that testing has a life cycle of its own and must therefore run concurrently with the software development process. They, too, suggest that testing must start already during the requirements phase of a project; this is also emphasised in [31]. According to [45], about 20% of the testing effort can be spent on planning, 40% on preparation, and the remaining 40% on test execution. This clearly implies that testing cannot occur only at the end of the software development life cycle. In [30] it was suggested that testers be involved in reviewing requirements documents as well as design documentation. According to [30] [47], defects found early in the software life cycle cost much less to fix than the ones detected in the later stages of the development process.

Theoretically, not much testing would be needed if software were properly and formally specified at the beginning of the project cycle, but practice usually differs from theory. As a ‘rule of thumb’ one can say that about 50% of the total project time should be invested into testing [23]. This means: if in some project less than half of its time is spent on testing, then it is most likely that this project itself is in dire straits (and very unlikely that the specifications at the beginning of the project had been done so exceptionally well that hardly any testing is needed in the end any more). This further implies, typically, that when a project runs out of time and goes beyond schedule, which most projects actually do [8], the most important activity of testing is usually the very first victim to be sacrificed at the altar of the deadline cult. In our case study at company X we found that less than 10% of the average total project time was actually spent on software testing, which is five times less than recommended. For comparison, [25] also found that testing activities were given too little time and too few resources in a project, whilst [24] found, too, that developers did not pay enough attention to testing, again mainly due to a lack of available project time.

Business people often say that ‘time is money’, thereby hoping and believing to save money by saving project time. For this short-sighted dogma very high prices will have to be paid later, when previously undetected software ‘bugs’ (due to hasty testing) creep into the daylight after deployment and have to be eliminated at even higher costs. For this problem, however, we do not have any technical solution in our software engineering tool box; this is a problem of the business mind. All project stakeholders, especially on the non-technical side, need to understand that quality requires time; it is indeed the responsibility of the software engineer, in his role as the technological expert, to warn the businessman on the project against any unrealistic presumptions or wishful thinking on the basis of a short-sighted ‘time is money’ dogma.

Of course we will never have an infinite amount of testing time available, and we thus need to allocate our available time as wisely as possible; this implies that we should also have some reasonable priority declarations in place which state what parts of a system under test should be tested first, with highest priority, and what parts of the system can be tested later, with lower priority.

4.9 Test Priorities and Critical Tasks

Related to the question of test timing is the question of which tasks to do first, with highest priority, in an available frame of time. Our ability to assign such test priorities appropriately, however, depends on our depth of understanding of the ‘most critical’ parts and ‘bottleneck’ sections of the software system under consideration. This requires skills, and tools, for the analysis of the structure (and structural complexity) of the software system before the various test tasks can be prioritised. In our case study, the software developers in company X indicated that they did not know what to test, nor how to prioritise their test activities so as to first test the most ‘complex’ program code, or the code most likely to contain defects, or those highly connected sections of the code in which a defect would have the most far-reaching ‘ripple effects’.

In order to address such problems, [44] recommended static code analysis and software visualisation techniques. Static analysis is a fault-detection support technique that aims to evaluate a system or component based on its form, structure, content, or file documentation, whereby the component’s or system’s code itself is not executed [24]. Tools are available to perform static analysis automatically, and we refer to these as Automatic Static Analysis (ASA) tools. ASA tools automate the detection of flaws and poor code constructs by parsing the source code and searching for certain patterns in the code [55]. According to [44], a typical pipeline for a static analysis tool includes the following components:

• Parser: analyses the raw source code and produces a low-level representation thereof. The source code is usually represented as a syntax tree.

• Query engine: checks the source code for certain facts by scanning the syntax tree for specific patterns. These patterns include variables that are used before they are initialised, code modularity, and cyclomatic complexity, to name a few. Thereby, cyclomatic complexity is the measure of the complexity of a software module, and a module is defined as any single method or function that has a single entry and exit point [54].

• Presentation engine: represents query results graphically to show the patterns uncovered by the query engine.

The authors of [44] integrated their own query engine with SolidFX, an existing Interactive Reverse-engineering Environment (IRE). This IRE has the same look as modern Integrated Development Environments (IDE) such as Eclipse or Microsoft Visual Studio. They applied their query engine together with SolidFX and Call-i-Grapher to a number of industrial C and C++ projects. Thereby, their tool revealed the following interesting characteristics:

• complexity ‘hot spots’, based on metrics such as McCabe’s cyclomatic complexity and lines of commented code;

• the modularity of a system and its constituent modules, based on its call graphs;

• the maintainability of a system, based on specified metrics such as lines of code, lines of commented code, and cyclomatic complexity.

Similarly, it was argued in [54] that cyclomatic complexity should be limited, because complex software is harder to maintain, harder to test, more difficult to comprehend, and more likely to cause errors. Since such software visualisation tools are commercially available and specifically designed for industrial application [44], we recommend that small IT houses, too, should make themselves familiar with them. High test priorities can then be given rationally to the ‘hot spots’ identified by those scanner tools.
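To illustrate the principle behind such tools, the following sketch approximates McCabe’s cyclomatic complexity by counting decision points in each function of a Python source file, using only the standard ast module. The choice of counted node types and the threshold of 10 are our own illustrative assumptions, not the method of [44] or [54]:

    import ast

    # Branching constructs: each occurrence adds one decision point to
    # McCabe's count, which starts at 1 for the single entry/exit path.
    BRANCH_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)

    def cyclomatic_complexity(function_node):
        complexity = 1
        for node in ast.walk(function_node):
            if isinstance(node, BRANCH_NODES):
                complexity += 1
            elif isinstance(node, ast.BoolOp):
                # 'a and b and c' hides two additional decision points
                complexity += len(node.values) - 1
        return complexity

    def report_hot_spots(source_text, threshold=10):
        """Print every function whose approximate complexity exceeds the threshold."""
        for node in ast.walk(ast.parse(source_text)):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                count = cyclomatic_complexity(node)
                if count > threshold:
                    print('%s (line %d): complexity %d' % (node.name, node.lineno, count))

Functions reported by such a scan would then be natural candidates for the highest test priority.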

4.10 Test Repeatability and Environments

Experimental repeatability is one of the key characteristics of the scientific method. This also holds for software testing, if we regard a software test run in analogy to a ‘scientific experiment’. One of the methodological preconditions of scientific repeatability is the conduction of experiments under well-defined, isolated and well-controlled laboratory conditions. This is also true in the area of software testing, where isolated and well-maintained ‘test environments’ resemble a scientific laboratory. Though it is well known that not all software testing experiments are repeatable in this strict sense —distributed and concurrent systems notoriously display nondeterministic behaviour, because of the wide network environment on which they run, which is typically beyond our control— most of the ‘ordinary’ software testing experiments can indeed be made repeatable if only those well-controlled laboratory conditions are fulfilled.

In our case study, on the contrary, the test environment available at company X was found inadequate to support its different software projects in a scientifically acceptable way. The developers in our case study mostly used virtual machines running on their personal notebook computers to perform testing, which means that the ‘test laboratory’ was not carefully controlled. Indeed, due to the costs of hardware and support software tools, small organisations often do not have the means to obtain and maintain dedicated test environments [32]. In the already mentioned study [24], conducted amongst six organisations on their implemented testing processes, one of the difficulties listed was that of creating test environments that closely represent production environments.

The author of [33] also developed a set of twenty questions to determine the testing maturity of an organisation: the first question in his questionnaire enquired whether an organisation consistently creates an environment for development and testing that is separate from the production environment. He argued that such a separate testing environment allows an organisation to avoid making ad-hoc changes to production environments, as well as to develop and test software completely before it is deployed in a production environment.

Although a test environment is an important part of software testing, as identified by the literature, it is not always possible (or practically feasible) to use a pristine test environment in and for every project. The main reason for this is cost. Nevertheless, it should be possible also for small IT houses to exercise better control over their available resources, such that development environments and testing environments are kept separate even if they are running on the same hardware devices, and such that the scientific ‘laboratory conditions’ (for the sake of repeatability) are at least ‘approximated’ as far as possible.
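The following minimal sketch shows one inexpensive way of approximating such separation on shared hardware: every environment-specific resource is resolved through a single explicit switch, so that a test run can never accidentally touch development data. All names used here (APP_ENV, the paths, the database identifiers) are illustrative assumptions.

    # Sketch: keeping test and development environments apart on the same
    # machine via one explicit switch. All names here are illustrative.
    import os

    CONFIGS = {
        "development": {"db_dsn": "sqlite:///dev.db",
                        "data_dir": "/srv/app/dev"},
        "test":        {"db_dsn": "sqlite:///test.db",
                        "data_dir": "/srv/app/test"},
    }

    def current_config() -> dict:
        """Resolve the active configuration from the APP_ENV variable."""
        env = os.environ.get("APP_ENV", "development")
        if env not in CONFIGS:
            raise ValueError(f"unknown environment: {env}")
        return CONFIGS[env]

    # A test session is started with APP_ENV=test, so that it runs against
    # its own database and data directory, approximating the isolated
    # 'laboratory conditions' discussed above.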

It is indeed the responsibility of the IT management in a company to fully appreciate that software testing is de-facto the most scientific of all software engineering activities, whereas other software engineering activities still resemble much of an ‘art’ or ‘craft’ [35]. However, the ‘power of science’ cannot be unleashed if IT managers are not aware of it.

4.11 Test Documentation

At the time of our case study, little or no test documentation was created in company X. The few test plans that had been created did not follow any industry standard such as, for example, IEEE 829. A study conducted in Finland [41] about software process improvement in small companies also indicated that there were problems with software test documentation; however, that study did not elaborate on the reasons for this problem. The authors of [36], too, assessed the software process of a small company and found that no written test plans or reports existed. According to them, this lack of documentation caused difficulty in reproducing the defects in order to eliminate them. It was also noted in [36] that once these defects were fixed, the problems and solutions were never recorded at all. This caused the development team to investigate similar problems from scratch again and again, without having access to past documentation for historic orientation.

The IEEE 829 standard provides guidelines for software test documentation which can be used as templates in various testing projects. Those are not too difficult to handle and should thus also be suitable for small IT houses with limited resources. There are templates for a ‘Master Test Plan’ (MTP), ‘Level Test Plan’ (LTP), ‘Level Test Log’ (LTL), ‘Anomaly Report’ (AR), as well as a ‘Master Test Report’ (MTR). Those templates should be stored in the document management system of an IT house such that all relevant employees have access to them. The use of proper test documentation must also be emphasised in the test policy document of a company, and must, last but not least, be proactively enforced by its IT management.
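To give a flavour of how lightweight such documentation can be, the following plain-text skeleton approximates the kind of entries a Level Test Log holds. It is a sketch of the idea, not the normative IEEE 829 layout, and the identifier shown is a made-up placeholder:

    LEVEL TEST LOG  (sketch; not the normative IEEE 829 layout)
    ------------------------------------------------------------
    Log identifier : LTL-2011-007
    Test items     : <component or build under test>
    Environment    : <hardware, OS, tool versions>

    Entry
      Date/time    : <when the test was executed>
      Tester       : <who executed it>
      Test case id : <reference to the Level Test Case>
      Result       : passed | failed
      Anomalies    : <reference to Anomaly Report(s), if any>

A log this small can be filled in even by a two-person project, yet it already makes test runs traceable and repeatable.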

4.12 Test-Related Documentation

A test basis is defined by the ISTQB as “all documents from which the requirements of a component or system can be inferred”. Here it becomes obvious that testing is closely linked to the entire software project and cannot be seen in isolation only. Indeed, the author of [7] argued that insufficient test cases can be attributed to the low quality of the test basis. The software developers of company X in our case study also argued that better requirements specifications should be created so as to improve the testing process. Although the requirements specification is, strictly speaking, a software development process component and not a test process component, it has a direct effect on the quality of the test effort.

According to [15], small organisations typically do not have clearly defined and stable requirements on software projects either. In a study [41] conducted in 2007 regarding software process improvement (SPI) in small companies in Finland, one of the problems identified was that of requirements documents that were not good enough. It was found that structured documents such as requirements documents were created on projects, but their contents and depth were deficient. A possible reason for the haphazard requirements documentation might be a lack of research in requirements engineering for small companies. According to [2], there are not many papers published in requirements engineering conferences that address the requirements processes of small IT houses specifically. The authors of [2] further argued that this may be due to mistaken assumptions that small companies are no different from their larger counterparts, or that small companies do not present noteworthy research challenges. Although not referring specifically to requirements engineering or testing, paper [53] also emphasises that not enough publications exist that provide software engineering solutions specifically for small companies.


In any case, it is obvious that requirements engineering must be done properly in support of test engineering, and it is also clear that the existing ‘general’ literature on requirements engineering remains relevant until more ‘special’ results become known on requirements engineering for small IT houses in particular.

4.13 Test Tracking and Recording

Related to the issue of test documentation is the issue of test tracking and recording (in databases) for future reference, with the purpose of learning lessons from past experiences. Recording the history is also a precondition for being able to gauge the success of any process improvement measures being undertaken. For example: if we are not aware which ratio r of our code units is currently defective (long-term average), and we now enforce a new test policy P which leads to a new average ratio r′ of defective units, then we cannot even tell whether the introduction of P was a successful measure in terms of r > r′. This was, at the time of our case study, the situation in company X.
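The following tiny sketch makes the comparison concrete; the unit counts are invented purely for illustration:

    # Worked example of the defect-ratio comparison described above.
    # The counts are invented purely for illustration.
    units_total_before, units_defective_before = 200, 50
    units_total_after, units_defective_after = 180, 27

    r = units_defective_before / units_total_before        # 0.25
    r_prime = units_defective_after / units_total_after    # 0.15

    # Policy P counts as successful, in the sense above, iff r > r'.
    print(f"r = {r:.2f}, r' = {r_prime:.2f}, improvement: {r > r_prime}")

Without the historic figures behind r, no such comparison is possible at all.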

The reason for this lack of measurability was the absence of any defect tracking system at company X at the time of our study. Once a developer at company X detected a defect, he would either fix it immediately (and then forget about it), or try to remember the defect for later fixing, or scribble a private note on a piece of office paper, which would sooner or later land in the litter bin, too. By contrast, [33] suggested that a defect tracking system allows an organisation to assign ownership to defects and to store the knowledge about defects until they are resolved, and even beyond. According to [33], such a system can also indicate the status of the test effort by reporting on the number of defects found, the number of defects resolved, as well as the defects yet to be fixed. These metrics correspond well with those suggested by [25]. Defect tracking systems such as ‘Bugzilla’ or ‘Trac’ are freely available from the Internet, such that even small IT houses with limited financial resources can (and should) take advantage of them.

The already mentioned Anomaly Report template from standard IEEE 829 can also be incorporated into a defect tracking system, to ensure that all necessary defect data are captured for future reference. The defect tracking system will also automatically store some of the test metrics that can be used to monitor the progress of the test process.
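As a sketch of this incorporation, the record below approximates the kind of fields an IEEE-829-style Anomaly Report contributes to a defect tracker; the exact field selection is our own assumption, not the normative template:

    # Sketch: capturing IEEE-829-style Anomaly Report data as a record
    # that a defect tracking system (e.g. Bugzilla or Trac) could store.
    # The field selection approximates the template; it is not normative.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class AnomalyReport:
        identifier: str              # unique report id
        summary: str                 # one-line description of the anomaly
        observed_in: str             # test item / build where it occurred
        steps_to_reproduce: str      # inputs and procedure that triggered it
        expected_result: str
        actual_result: str
        severity: str                # e.g. "minor", "major", "critical"
        owner: str = "unassigned"    # supports ownership, as suggested by [33]
        status: str = "open"         # open -> in progress -> resolved -> closed
        opened_at: datetime = field(default_factory=datetime.now)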

According to [28], metrics are measurements: collections of data about project activities, resources and deliverables. They can be used as a ‘tool’ to assist in the estimation of projects and the measurement of project progress and performance, as well as in quantifying project attributes. The authors of [28] recommended that the following test metrics be tracked in every project:

• Number of test cases,

• Number of test cases executed,

• Number of test cases passed,

• Number of test cases failed,

• Number of test cases under investigation,

• Number of test cases blocked,

• Number of test cases re-executed.

In addition to these, we recommend also comparing the effort of software testing against the effort of having to deal ‘a-posteriori’ with customers’ complaints about buggy software after product delivery and deployment: we assume that any improvement of an IT house’s testing procedures is in the first place an ‘investment’ on the negative side in the books of that IT house; but the later return on investment, in terms of ‘customer happiness’, might well be worth the effort. Last but not least, we recommend that the time spent on testing should also be recorded in such databases for future reference; a small computational sketch follows.
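The sketch below derives the seven counts recommended by [28] from a list of test-case statuses, together with the testing time we recommend recording; the status names and the record format are illustrative assumptions:

    # Sketch: deriving the test metrics recommended by [28] from a list
    # of test-case statuses. Status names and format are assumptions.
    from collections import Counter

    def test_metrics(statuses: list[str], hours_spent: float) -> dict:
        counts = Counter(statuses)
        # We treat a case as 'executed' if it ran to a verdict at least once.
        executed = counts["passed"] + counts["failed"] + counts["re-executed"]
        return {
            "cases": len(statuses),
            "executed": executed,
            "passed": counts["passed"],
            "failed": counts["failed"],
            "under_investigation": counts["under-investigation"],
            "blocked": counts["blocked"],
            "re_executed": counts["re-executed"],
            "hours_spent_testing": hours_spent,  # our own recommendation
        }

    print(test_metrics(
        ["passed", "passed", "failed", "blocked", "re-executed"],
        hours_spent=12.5))

Stored per project, such records accumulate into exactly the history that makes later estimation possible.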

A study conducted by [34] in Australia indicated that only 38 of the 65 organisations investigated used test metrics for such or similar purposes. 21 of those 38 organisations agreed that metrics improved the quality of the software developed. Similarly, [7] also advises that metrics are an important part of the test process, as they provide valuable insight into the progress of test projects. Company X did not work with any kind of metrics and was therefore unable to make reasonable estimations for future software projects. In this context, [7] also argues that metrics provide a basis for test estimation and help to identify process improvement opportunities. On the other hand, the authors of [48] have warned against using too many metrics when embarking on a test process improvement initiative, as monitoring them could demand too many resources.

It should be emphasised that the combination of numeric metrics and historic records, especially in the area of software testing, is likely to make future software projects more predictable for project planners. An ‘emerging’ IT house with few resources, such as company X, should wisely start with only a small number of metrics, and then stepwise increase the amount of quantitative measurement being taken as the testing process of the company matures. (The path from ‘qualitative understanding’ to ‘quantitative measurement’ has indeed been walked by various scientific disciplines, though at different paces, during the history of science.)

5 TEST WORKFLOW IMPROVEMENT

The first step in improving the test workflow in a small IT house, such as our case study company X, is to gain clarity about what workflow implicitly exists in such a company which does not have any workflow explicitly defined. On the basis of the interviews which were conducted with the employees of company X [56], we were able to extract a de-facto existing workflow, which is shown in Software and Systems Process Engineering Metamodel (SPEM) notation in Figure 2. The ‘issues’ attached to the process nodes in Figure 2 are the problems which we have described and discussed in section 4 of this paper; for more details refer to [56].

The SPEM diagram of Figure 2 shows intuitively that this is not the picture of a well-organised test workflow. It is the picture of ad-hoc activities in which there is no methodological distinction between proper testing and mere ‘debugging’. The document-like icons represent work products that serve as input or as deliverables of the process. The input work product depicted in Figure 2 is a requirements specification. This document is used by developers to determine the functionality that needs to be implemented. The icons with sharp edges pointing to the right represent tasks in the process. Developers would use the requirements specification to implement a feature. They would then proceed to build and run the code artifact for this feature to determine whether it performs the intended functionality. In the case of a failure, the developer would proceed to find the fault that caused it. This code-and-fix cycle continues until all functionality has been implemented and no more apparent bugs exist. At this stage the software product is released to the customer, for further ‘testing’ by the customer. One should also note that the human-like icons in Figure 2 represent developers: no other human role players are involved in the test process, and it is completely up to the developer to determine whether the software is ready for release.

Our reasons for extracting this de-facto workflow representation are threefold:

• On the small scale, it enables us to pinpoint ‘issues’ at each individual node in this process diagram.

• On the large scale, it enables us to ‘see intuitively’ the problems with such a workflow as a whole.

• The visual representation of the existing test processes in company X (workflow) aids understanding and provides a better platform for discussing the ‘issues’ with the company’s non-technical managers, who are ultimately responsible for supervising and enforcing the improvement of the workflow to be applied.

The workflow as depicted in Figure 2 has been somewhat simplified for the sake of clarity; the figure does not include all problems identified in [56]. Also note that the figure only shows the test workflow identified in our case study, not the entire software engineering workflow. Such a representation, however, would have revealed that the test workflow was unfortunately isolated from the other software development phases, instead of being reasonably well integrated with them. [1] warns against the isolation of the test process into separate stages of the software lifecycle, and recommends that testing should run in parallel with development to improve the quality at each stage.

Figure 2: Implicit De-Facto Test Workflow, extracted from Interviews conducted with Employees in Company X.

In any case, the de-facto representation of an immature test workflow should be the starting point from which a company’s IT management can launch its process improvement exercise. As explained above, and as further elaborated in [56], such a test process improvement exercise should take into account the following items:

• Test Policy,

• Test Strategy,

• Test Planning,

• Test Design,

• Test Execution,

• Test Monitoring (Control),

• Test Recording (Storing),

• Test Reporting (Closure).

For each of these activities, further micro-workflows could be defined, depending on the size of the company, its available resources, as well as the size of the project and the number of personnel involved:

• If the company and its projects are only small, then it would be a costly ‘managerialist overkill’ to define further rigid micro-workflows for those items.

• On the other hand, if a company and its projects are larger, then it might well make sense to further increase the precision of the overall testing workflow definition by introducing micro-workflow definitions for those items on a smaller scale.

Figure 3: Improved and explicitly defined Test Workflow for Company X, including consideration of the V-Model.

It must, however, be taken into account that, according to the well-known V-model, different forms of testing correspond to each activity on the software development side: unit testing corresponds to the activity of programming in the small, integration testing corresponds to the level of software architecture (programming in the large), user acceptance testing corresponds to the activity of requirements engineering, and so on. This can be clearly seen in Figure 3, which depicts the improved testing workflow which we suggested to our case study company X on the basis of the preceding analysis, as further described in [56].
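To make the lowest V-model level tangible: a unit test exercises one function or method in isolation. The following minimal sketch uses Python’s built-in unittest module; the priced function and its values are purely hypothetical.

    # Minimal illustration of unit testing, the lowest V-model level.
    # The function under test and its values are purely hypothetical.
    import unittest

    def vat_inclusive_price(net: float, vat_rate: float = 0.14) -> float:
        """Hypothetical unit under test: add VAT to a net price."""
        if net < 0:
            raise ValueError("net price must be non-negative")
        return round(net * (1.0 + vat_rate), 2)

    class VatTest(unittest.TestCase):
        def test_adds_vat(self):
            self.assertEqual(vat_inclusive_price(100.0), 114.0)

        def test_rejects_negative_price(self):
            with self.assertRaises(ValueError):
                vat_inclusive_price(-1.0)

    if __name__ == "__main__":
        unittest.main()

Each higher V-model level (integration, system, acceptance testing) checks correspondingly larger units of work against the corresponding development artefact.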

Figure 3 does not show in all detail that we also suggested some clearly defined activities and deliverables (represented by the ‘envelope’ symbols in the picture) for each phase of the new testing process. Many of those suggested solutions for the improved workflow were indeed derived from the process areas of the TMMi model, in line with our motivation of increasing the TMMi certification level of small IT houses such as company X. Therefore we should now also reason about the TMMi maturity of the improved workflow, to determine how it compares to the maturity of the old workflow (which was at the lowest possible level of the scale).

6 PREPARATION FOR TMMI

With the development of the improved workflow, as shown in the previous section and as further explained in [56], we carefully followed the recommendations made for TMMi level 2. We attempted to include all of those process areas in our new workflow for company X, and this was generally achieved. However, the TMMi is a very comprehensive model with many specific and generic goals, which in turn contain even more specific and generic practices. We attempted to select the most important goals and practices from each process area for company X. It is currently not possible to fulfil all TMMi-2 requirements in each process area, because doing so would just add to the already heavily laden process improvement project. We therefore suggested in [56] that more goals and practices be added at later stages, as the test process in company X becomes stable and more mature.

Especially as far as the TMMi-2 area ‘Test Monitoring and Control’ is concerned, we could not recommend that company X fulfil all those requirements at once; that would have overstretched the company’s capacity. We presume that this would also be true for many other small IT houses which ‘play’ in the same ‘league’ as company X. In fact, the TMMi ‘Test Monitoring and Control’ process area contains 17 specific sub-practices. In our case study we selected only a few practices to add to our improved workflow, for two reasons [56]:

• The first is that company X is only just beginning with its test process improvement program, which needs to be rather simple and relatively easy to implement. We did not want to introduce too much complexity into the process at this early stage, but rather introduce new practices over time.

• The second reason for limiting our monitoring and control phase to basic practices was that company X did not have enough human resources to manage the implementation of these practices.

Based on these suggestions, we can however still ascertain that the improved workflow meets the requirements of this process area, as at least two of the specific practices are met. This is shown in Figure 4.

As far as the approach to TMMi level 3 is concerned, our case study company X turned out to be too poorly equipped to make this goal realistically achievable in the near future; the same will most likely be true for many other small IT houses ‘playing’ in the same ‘league’ as company X. Nevertheless, it turned out that company X seemed to be prepared for a partial approach to TMMi level 3, namely in the area of ‘Test Life Cycle and Integration’ (with two sub-goals seeming to be achievable) as well as in the area of ‘Peer Reviews’ (ditto). This is shown in Figure 5.

In summary, we found that the TMMi is very comprehensive and demands that many practices be implemented in each process area before that process area can be satisfied. We decided that not all these practices and goals in each process area would be beneficial to the suggested new workflow of company X at the time of our case study. The main reason for this decision was that the implementation of all these practices and goals is a large task that could not be handled with the limited resources available in company X at the time of the study. We therefore recommended that an independent team be established in company X to take on this task in ‘baby steps’. This will be necessary should company X decide to submit its test process for official accreditation with the TMMi Foundation in the future. For purposes of future research we also recommended that the suggested workflow be implemented as is, to first establish a structured process. Once the test process has matured and is fully integrated into the activities of the software development division of company X, we would recommend that the specific goals still missing (Figure 4, Figure 5) be tackled. Still, the question arises how much time and effort would be needed to lift a small IT house, such as company X, from the lowest possible level to TMMi level 2. This question shall be addressed in the following section.

7 TRANSITION EFFORTS

Figure 6 provides a ‘to do’ list with recommendations for our case study entity, company X, as further explained in [56]. If all these items were done, then company X would have lifted itself from TMMi level 1 to almost TMMi level 2; let us say, for the sake of illustration, ‘TMMi 1.75’ (if such fractional values existed in the classification).

Figure 4: Assessment of Improved Workflow against TMMi Level 2.

How much effort would it take to make this transition? Clearly this cannot be a trivial one-week task, and indeed [48] suggest that the transition phase could take up to two years to reach TMMi level 2. Company X, however, had really no established test practices at all that could be ‘improved’ or built upon, and therefore we estimate that this effort might take at least two years to complete for an IT house such as company X. It was also suggested in [48] that the following phases should be gone through as a preparation for TMMi certification in an organisation:

• initiation,

• planning,

• implementation,

• deployment.

These four phases will be described in the following, which should also explain the long duration (two years) of the whole transition as mentioned above.

The initiation phase is used to determine the current maturity level of the organisation. Based on the results, recommendations must be made to management. Fortunately, this research has already analysed the current test process and has already made recommendations. We therefore suggested in [56] that the results of this research be presented to the management of company X. As part of the initiation phase, common goals must be set by the organisation regarding what it wants to achieve with this improvement project. These goals can include predictability, higher productivity, efficiency, and effectiveness [48].

The planning phase involves the creation of a project team that will be responsible for the improvement project. A common problem observed by [48] is that organisations do not invest enough resources and attention in an improvement project. Each of the recommendations is assigned to a workgroup that implements and deploys it. We recommended in [56] that company X should establish a steering committee with a test process improvement manager. Specific time and resources must be allocated to the workgroup so that this project is kept separate from day-to-day work activities. Proper communication during this project is important, as many changes will be introduced: all employees of company X need to know which changes are going to be implemented and why these changes are necessary. It was also recommended in [48] to issue a periodic newsletter that communicates the progress of the project; this will assist with employee buy-in. The use of an external consultant to take ownership of the project and guide the entire process is also recommended by [48]. We do not currently foresee this recommendation as feasible for very small companies, such as company X, due to the cost implications. However, the creation of an internal project team to manage this process improvement project can be critical to its success.

Furthermore, [48] recommended that the processes on which the test process depends must also be taken into consideration. As the suggested test process is tightly integrated into the overall software development process, the latter must also be changed to accommodate the changes in the testing process. We have seen throughout our case study that the software development process will be affected by the suggested testing process. Therefore we suggested in [56] that a separate improvement project should be launched for the software development process improvement at large. Such a project could be managed by the current software development manager, such that no additional resources would be needed in this case. We also recommended that the newly suggested test process be documented thoroughly and made available to all stakeholders of the company.

Figure 5: Assessment of Improved Workflow against TMMi Level 3.

The implementation phase of a process improvement project realises the recommendations made in the previous phases. We agree with [48] that the test policy and strategy are the first and most important changes to be implemented. This will provide direction for the whole improvement project. We then recommend implementing each process area of TMMi level 2 (and, later, 3) that is satisfied by the suggested test process. We also recommended in [56] that the structure of the recommended documentation be created: all the templates from the IEEE standard 829 should be adapted to the needs of company X and stored in a document management system. A document management system is already in use in company X; therefore this practice should not present any great difficulties.

Figure 6: To-do list from old workflow to new workflow.

According to [48], the deployment phase of the improvement project is the most difficult one. All the documentation and process guidelines are futile if employees do not adhere to them. We therefore recommended in [56] that company X deploy the suggested test process on an internal software pilot project first. This will allow management to carefully monitor the progress of the test project without running the risk of possible failure with a customer-facing delivery project. Only when the basic practices of the test process have been implemented in small steps, and the test process is producing the desired outcomes, should it be implemented in customer-facing delivery projects. We estimate that this pilot implementation could take between six and eight months for a company similar to company X.


8 FUTURE WORK

Though our case study conducted with company X can be regarded as comprehensive, some aspects were not taken into account; they remain open for future work. For example, we did not investigate the test process from the perspective of the management of company X, because our study started from the interviews with the software developers (though permission was given by the management, which means that the management was aware of the fact that such a study was being carried out). However, an informal ‘chat’ with the Chief Executive Officer (CEO) of company X regarding this case study was encouraging: the CEO seemed convinced that those process improvements, as elaborated in [56] and summarised in the previous sections of this article, should be introduced in company X as soon as possible. This generally positive attitude of the CEO might be regarded as an indicator of awareness of the problems currently being experienced with the software process, and more specifically with the test process, at a too low level of TMMi.

Another perspective that was not taken into account in our case study was that of the customers who were buying the (often defective) software products from company X. We did not analyse the customers’ view of the products they purchased, nor their perceived quality.

This case study only focused on one small company, which might perhaps not be fully representative of the test processes of other small IT houses. Though there is much ‘anecdotal evidence’ of the similarity of many other small IT houses to our case study company X, we can, strictly speaking, not conclude from this one case study that software testing is a general problem facing many other small IT houses.

It is a task for future work to actually implement the suggested testing process in company X, and then to study the effects of the implementation, such that we would be able to judge if (and, if yes, to what extent) the process improvement exercise was successful. This would especially entail the consideration of the following parameters (and their possible correlations): software developers’ disgruntlement, software defect rate, and customers’ satisfaction, as briefly outlined in the following:

• Software developers in company X were, at the time of our case study, disgruntled with the current process because there was often considerable re-work to be carried out after delivery of defective software. The customers would obviously send complaints about such products, and the re-maintenance of those defects disturbed the software developers’ schedules while they were already working on new projects; this meant that the new projects were delayed by the need to fix the defects from previous projects. It is well known that employee disgruntlement can have negative effects on the company as a whole.

• Software defect rates at company X were not counted at the time of our case study. Without such figures, however, it is not possible to measure the effects of any suggested test improvement exercise. Defect measuring is thus the precondition for any subsequent initiative. The introduction of defect rate counting to company X would therefore be the first step of future work in the direction of a long-term study.

• Future work would need to determine how satisfied customers are with the quality of the current software. Once the suggested process has been implemented, one would have to ask the customers again in order to find out if they are happier with the less defective software than previously. Such a study would, however, also require some recurring long-term customers; it would be difficult to conduct on the basis of one-time customers who only buy one product and then never come back.

However, the ultimate goal for future work would be to get company X TMMi-accredited at level 2; this could be regarded as sufficient evidence for the effectiveness of the improvement measures described in [56] and in the previous sections of this article. Since official TMMi accreditations are still quite rare, especially here in South Africa, a small IT house which is able to present its TMMi certification diploma to prospective customers should be able to gain a considerable competitive advantage over its countless competitors on the crowded software market.

Acknowledgments

We would like to thank the software developers in company X for having participated in the survey behind this contribution. Thanks to the participants of the South African Institute for Computer Scientists and Information Technologists (SAICSIT) Postgraduate Student Workshop 2009 for their comments on a related presentation. Last but not least, many thanks to Bernhard Westfechtel for his expertise.

REFERENCES

[1] P. Ammann, J. Offutt, “Introduction to Software Testing”. Cambridge Univ. Press, 2008.

[2] J. Aranda, S. Easterbrook, G. Wilson, “Requirements in the Wild: How Small Companies do it”. Proc. RE’07: 15th IEEE Internat. Requirements Eng. Conf., pp. 39-48, 2007.

[3] J. Batista, A.D. de Figueiredo, “SPI in a Very Small Team: A Case with CMM”. Software Process Improvement and Practice 5/4, pp. 243-250, 2000.

[4] A. Bertolino, “Software Testing Research: Achievements, Challenges, Dreams”. Proc. FOSE’07: Future of Software Engineering Conf., pp. 85-103, 2007.

[5] R. Black, “Critical Testing Processes: Plan, Prepare, Perform, Perfect”. Addison-Wesley, 2003.

[6] R. Black, “ISTQB Certification: Why You Need It and How to Get It”. Testing Experience: The Magazine for Professional Testers 1, pp. 26-30, 2008.

[7] K. Blokland, “A Universal Management and Monitoring Process for Testing”. Proc. ICSTW’08: IEEE Internat. Conf. on Softw. Testing, Verification and Validation, pp. 315-321, 2008.

[8] F. Brooks, “The Mythical Man-Month”. Addison-Wesley, 1975.

[9] M. Broy, D. Rombach, “Software Engineering: Wurzeln, Stand und Perspektiven”. Informatik Spektrum 16, pp. 438-451, 2002.

[10] I. Burnstein, T. Suwanassart, R. Carlson, “Developing a Testing Maturity Model for Software Test Process Evaluation and Improvement”. Proc. Internat. Test Conf., pp. 581-589, 1996.

[11] A. Cater-Steel, “Process Improvement in four Small Software Companies”. Proc. ASWEC: 13th Australian Softw. Eng. Conf., pp. 262-272, 2001.

[12] G. Coleman, F. McCaffery, P.S. Taylor, “Adept: A Unified Assessment Method for Small Software Companies”. IEEE Software 24/1, pp. 24-31, 2007.

[13] S.H. Edwards, “Improving Student Performance by Evaluating how well Students Test their own Programs”. Journal Educ. Resour. Comput. 3/3, pp. 1-, 2003.

[14] A. Farooq, R.R. Dumke, “Evaluation Approaches in Software Testing”. Techn. Rep. FIN-05-2008, Faculty of Comp. Sc., Univ. of Magdeburg, 2008. http://www.cs.uni-magdeburg.de/fin media/downloads/forschung/preprints/2008/TechReport5.pdf

[15] M.E. Fayad, M. Laitinen, R.P. Ward, “Thinking objectively: Software Engineering in the Small”. Communications of the ACM 43/3, pp. 115-118, 2000.

[16] J. Garcia, A. de Amescua, M. Velasco, A. Sanz, “Ten Factors that impede Improvement of Verification and Validation Processes in Software intensive Organizations”. Software Process Improvement and Practice 13/4, pp. 335-343, 2008.

[17] D. Gelperin, B. Hetzel, “The Growth of Software Testing”. Communications of the ACM 31/6, pp. 687-695, 1988.

[18] B. George, L. Williams, “An initial Investigation of Test Driven Development in Industry”. Proc. SAC’03: Annual ACM Symp. on Appl. Comp., pp. 1135-1139, 2003.

[19] A.M. Geras, M.R. Smith, J. Miller, “A Survey of Software Testing Practices in Alberta”. Canadian Journal of Electrical and Comp. Eng. 29/3, pp. 183-191, 2004.

[20] R.L. Glass, R. Collard, A. Bertolino, J. Bach, C. Kaner, “Software Testing and Industry Needs”. IEEE Software 23/4, pp. 55-57, 2006.

[21] M. Grindal, J. Offutt, J. Mellin, “On the Testing Maturity of Software Producing Organizations”. Proc. TAIC PART: Testing: Academic and Industrial Conference on Practice and Research Techniques, pp. 171-180, 2006.

[22] P. Grunbacher, “A Software Assessment Process for Small Software Enterprises”. Proc. EUROMICRO’97: 23rd EUROMICRO Conf., pp. 123-128, 1997.

[23] N. Juristo, A.M. Moreno, W. Strigel (eds.), “Software Testing Practices in Industry”. IEEE Software 23/1, pp. 19-21, 2006.

[24] M. Kajko-Mattsson, T. Bjornsson, “Outlining Developers’ Testing Process Model”. Proc. EUROMICRO’07: 33rd EUROMICRO Conf., pp. 263-270, 2007.

[25] D. Karlstrom, P. Runeson, S. Norden, “A Minimal Test Practice Framework for Emerging Software Organizations”. Software Testing, Verification and Reliability 15/3, pp. 145-166, 2005.

[26] K. Kautz, H.W. Hansen, K. Thaysen, “Applying and Adjusting a Software Process Improvement Model in Practice: The Use of the IDEAL Model in a Small Software Enterprise”. Proc. ICSE’00: 22nd Internat. Conf. on Softw. Eng., pp. 626-633, 2000.

[27] T. Koomen, M. Pol, “Test Process Improvement: A practical step-by-step guide to structured testing”. Addison-Wesley, 1999.

[28] L. Lazic, N. Mastorakis, “Cost effective Software Test Metrics”. WSEAS Transactions on Comp. 7/6, pp. 599-619, 2008.

[29] W.E. Lewis, “Software Testing and Continuous Quality Improvement”. 2nd ed., Auerbach Publ., 2004.

[30] D. Maes, S. Mertens, “10 Tips for Successful Testing”. Testing Experience: The Magazine for Professional Testers 3, pp. 52-53, 2008.

[31] W. Mallinson, “Testing: A Growing Opportunity”, 2002. http://www.testfocus.co.za/featurearticles/Jan2002.htm

[32] K. Martin, B. Hoffman, “An Open Source Approach to Developing Software in a Small Organization”. IEEE Software 24/1, pp. 46-53, 2007.

[33] G.E. Mogyorodi, “Let’s Play Twenty Questions: Tell me about your Organization’s Quality Assurance and Testing”. Crosstalk: The Journal of Defense Software Engineering, without page numbers, 2003. http://www.stsc.hill.af.mil/crosstalk/2003/03/mogyorodi1.html

[34] S.P. Ng, T. Murnane, K. Reed, D. Grant, T.Y. Chen, “A Preliminary Survey on Software Testing Practices in Australia”. Proc. Australian Softw. Eng. Conf., pp. 116-125, 2004.

[35] M. Northover, D.G. Kourie, A. Boake, S. Gruner, A. Northover, “Towards a Philosophy of Software Development: 40 Years after the Birth of Software Engineering”. Zeitschrift für allgemeine Wissenschaftstheorie 39/1, pp. 85-113, 2008.

[36] S. Otoya, N. Cerpa, “An Experience: a Small Software Company Attempting to Improve its Process”. Proc. STEP’99: Software Technology and Engineering Practice Conf., pp. 153-160, 1999.

[37] J. Park, H. Ryu, H.J. Choi, D.K. Ryu, “A Survey on Software Test Maturity in Korean Defense Industry”. Proc. ISEC’08: 1st India Softw. Eng. Conf., pp. 149-150, 2008.

[38] I. Pinkster-o’Riordain, “Test Policy: Gaining Control on IT Quality and Processes”. Proc. ICSTW’08: IEEE Internat. Conf. on Softw. Testing, Verification and Validation, pp. 338-342, 2008.

[39] F.J. Pino, F. Garcia, M. Piattini, “Key Processes to start Software Process Improvement in Small Companies”. Proc. SAC’09: ACM Symposium on Applied Comp., pp. 509-516, 2009.

[40] M. Pyhajarvi, K. Rautiainen, J. Itkonen, “Increasing Understanding of the Modern Testing Perspective in Software Product Development Projects”. Proc. 36th IEEE Internat. Conf. on System Sc., pp. 250-259, 2002.

[41] P. Savolainen, H. Sihvonen, J.J. Ahonen, “SPI with Lightweight Software Process Modeling in a Small Software Company”. LNCS 4764, pp. 71-81, 2007.

[42] T. Shepard, M. Lamb, D. Kelly, “More Testing should be taught”. Communications of the ACM 44/6, pp. 103-108, 2001.

[43] M. Siniaalto, P. Abrahamsson, “Does Test-Driven Development Improve the Program Code? Alarming Results from a Comparative Case Study”. LNCS 5082, pp. 143-156, 2008.

[44] A. Telea, H. Byelas, “Querying Large C and C++ Code Bases: The Open Approach”. Colloquium and Festschrift at the Occasion of the 60th Birthday of Derrick Kourie, 2008. http://www.cs.up.ac.za/cs/sgruner/Festschrift/

[45] E. van Veenendaal, M. Pol, “A Test Management Approach for Structured Testing”. Achieving Software Product Quality, online collection without page numbers, 1997.

[46] E. van Veenendaal, “Test Maturity Model integrated (TMMi): Version 2.0”. TMMi Foundation, http://www.tmmifoundation.org/

[47] E. van Veenendaal, D. Graham, I. Evans, R. Black, “Foundations of Software Testing: ISTQB Certification”. Internat. Thomson Business Press, 2008.

[48] E. van Veenendaal, R. Hendriks, J. van de Laar, B. Bouwers, “Test Process Improvement using TMMi”. Testing Experience: The Magazine for Professional Testers 3, pp. 21-25, 2008.

[49] T.E.J. Vos, J.S. Sanches, M. Mannise, “Size does matter in Process Improvement”. Testing Experience: The Magazine for Professional Testers 3, pp. 91-95, 2008.

[50] C.G. von Wangenheim, T. Varkoi, C.F. Salviano, “Standard-based Software Process Assessments in Small Companies”. Software Process Improvement and Practice 11/3, pp. 329-335, 2006.

[51] C.G. von Wangenheim, A. Anacleto, C.F. Salviano, “Helping Small Companies Assess Software Processes”. IEEE Software 23/1, pp. 91-98, 2006.

[52] C.G. von Wangenheim, S. Weber, J.C.R. Hauck, G. Trentin, “Experiences on Establishing Software Processes in Small Companies”. Information and Software Technology 48/9, pp. 890-900, 2007.

[53] C.G. von Wangenheim, I. Richardson, “Why are Small Software Organizations Different?” IEEE Software 24/1, pp. 18-22, 2007.

[54] A.H. Watson, T.J. McCabe, “Structured Testing: A Testing Methodology Using the Cyclomatic Complexity Metric”. Technical Report NIST 500/235, National Institute of Standards and Technology, 1996. http://www.mccabe.com/pdf/nist235r.pdf

[55] J. Zheng, L. Williams, N. Nagappan, W. Snipes, J.P. Hudepohl, A.M. Vouk, “On the Value of Static Analysis for Fault Detection in Software”. IEEE Transactions on Softw. Eng. 32/4, pp. 240-253, 2006.

[56] J. van Zyl, “Software Testing in a Small Company: A Case Study”. M.IT. Thesis, Department of Comp. Sc., Univ. of Pretoria, December 2009.

[57] K. Kautz, H.W. Hansen, K. Thaysen, “Applying and adjusting a software process improvement model in practice: the use of the IDEAL model in a small software enterprise”. Proc. ICSE’00: 22nd Internat. Conf. on Softw. Eng., ACM, 2000.