
SOFTWARE QUALITY & TESTING (MSIT - 32)

Contributing Author: Dr. B.N. Subraya
Infosys Technologies Ltd., Mysore


Contents

Chapter 1
INTRODUCTION TO SOFTWARE TESTING
1.1 Learning Objectives
1.2 Introduction
1.3 What is Testing?
1.4 Approaches to Testing
1.5 Importance of Testing
1.6 Hurdles in Testing
1.7 Testing Fundamentals

Chapter 2
SOFTWARE QUALITY ASSURANCE
2.1 Learning Objectives
2.2 Introduction
2.3 Quality Concepts
2.4 Quality of Design
2.5 Quality of Conformance
2.6 Quality Control (QC)
2.7 Quality Assurance (QA)
2.8 Software Quality Assurance (SQA)
2.9 Formal Technical Reviews (FTR)
2.10 Statistical Quality Assurance
2.11 Software Reliability
2.12 The SQA Plan


Chapter 3
PROGRAM INSPECTIONS, WALKTHROUGHS AND REVIEWS
3.1 Learning Objectives
3.2 Introduction
3.3 Inspections and Walkthroughs
3.4 Code Inspections
3.5 An Error Checklist for Inspections
3.6 Walkthroughs

Chapter 4
TEST CASE DESIGN
4.1 Learning Objectives
4.2 Introduction
4.3 White Box Testing
4.4 Basis Path Testing
4.5 Control Structure Testing
4.6 Black Box Testing
4.7 Static Program Analysis
4.8 Automated Testing Tools

Chapter 5
TESTING FOR SPECIALIZED ENVIRONMENTS
5.1 Learning Objectives
5.2 Introduction
5.3 Testing GUIs
5.4 Testing of Client/Server Architectures
5.5 Testing Documentation and Help Facilities

Chapter 6
SOFTWARE TESTING STRATEGIES
6.1 Learning Objectives
6.2 Introduction
6.3 A Strategic Approach to Software Testing
6.4 Verification and Validation
6.5 Organizing for Software Testing
6.6 A Software Testing Strategy
6.7 Strategic Issues
6.8 Unit Testing


6.9 Integration Testing
6.10 Validation Testing
6.11 System Testing
6.12 Summary

Chapter 7
TESTING OF WEB BASED APPLICATIONS
7.1 Introduction
7.2 Testing of Web Based Applications: Technical Peculiarities
7.3 Testing of Static Web-based Applications
7.4 Testing of Dynamic Web-based Applications
7.5 Future Challenges

Chapter 8
TEST PROCESS MODEL
8.0 Need for Test Process Model
8.1 Test Process Cluster

Chapter 9
TEST METRICS
9.0 Introduction
9.1 Overview of the Role and Use of Metrics
9.2 Primitive Metrics and Computed Metrics
9.3 Metrics Typically Used within the Testing Process
9.4 Defect Detection Effectiveness Percentage (DDE)
9.5 Setting up and Administering a Metrics Program


Chapter 1

Introduction to Software Testing

1.1 LEARNING OBJECTIVES

You will learn about:

• What is Software Testing?
• The need for Software Testing
• Various approaches to Software Testing
• The defect distribution
• Software Testing fundamentals

1.2 INTRODUCTION

Software testing is a critical element of software quality assurance and represents the ultimate process to ensure the correctness of the product. A quality product enhances the customer's confidence in using the product and thereby improves the business economics. In other words, a good quality product means zero defects, which is derived from a better quality process in testing.

The definition of testing is not well understood. People often use a totally incorrect definition of the word testing, and this is a primary cause of poor program testing. Examples of such definitions are statements like "Testing is the process of demonstrating that errors are not present", "The purpose of testing is to show that a program performs its intended functions correctly", and "Testing is the process of establishing confidence that a program does what it is supposed to do".

Testing the product means adding value to it, which means raising the quality or reliability of the program. Raising the reliability of the product means finding and removing errors. Hence one should not test a product to show that it works; rather, one should start with the assumption that the program contains errors and then test the program to find as many errors as possible. Thus a more appropriate definition is:

Testing is the process of executing a program with the intent of finding errors.

Purpose of Testing

• To show the software works: this is known as demonstration-oriented testing.

• To show the software doesn't work: this is known as destruction-oriented testing.

• To minimize the risk of the software not working up to an acceptable level: this is known as evaluation-oriented testing.
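For illustration only, here is a minimal sketch in Python (the divide function is hypothetical) contrasting a demonstration-oriented test, which exercises only the normal flow, with destruction-oriented tests, which deliberately probe inputs that are likely to expose errors:

    import unittest

    def divide(a, b):
        # Hypothetical function under test.
        return a / b

    class DivideTests(unittest.TestCase):
        # Demonstration-oriented: show that the normal flow works.
        def test_normal_division(self):
            self.assertEqual(divide(10, 2), 5)

        # Destruction-oriented: assume errors exist and go looking for them
        # at boundaries and with invalid input.
        def test_divide_by_zero_raises(self):
            with self.assertRaises(ZeroDivisionError):
                divide(10, 0)

        def test_non_numeric_input_raises(self):
            with self.assertRaises(TypeError):
                divide("10", 2)

    if __name__ == "__main__":
        unittest.main()

The last two tests are the more valuable ones under the destruction-oriented view: each is designed to make the program fail.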

Need for Testing

Defects can exist in software, as it is developed by human beings who can make mistakes during development. However, it is the primary duty of a software vendor to ensure that the software delivered does not have defects and that the customer's day-to-day operations are not affected. This can be achieved by rigorously testing the software. The most common origins of software bugs are:

• Poor understanding and incomplete requirements
• Unrealistic schedule
• Fast changes in requirements
• Too many assumptions and complacency

Some of the major computer system failures listed below give ample evidence that testing is an important activity of the software quality process.

• In April of 1999, a software bug caused the failure of a $1.2 billion military satellite launch, the costliest unmanned accident in the history of Cape Canaveral launches. The failure was the latest in a string of launch failures, triggering a complete military and industry review of U.S. space launch programs, including software integration and testing processes. Congressional oversight hearings were requested.

• On June 4, 1996, the first flight of the European Space Agency's new Ariane 5 rocket failed shortly after launch, resulting in an estimated uninsured loss of half a billion dollars. It was reportedly due to the lack of exception handling for a floating-point error in a conversion from a 64-bit floating-point value to a 16-bit signed integer.


• In January of 2001, newspapers reported that a major European railroad was hit by the after-effects of the Y2K bug. The company found that many of their newer trains would not run due to their inability to recognize the date '31/12/2000'; the trains were started by altering the control system's date settings.

• In April of 1998, a major U.S. data communications network failed for 24 hours, crippling a large part of some U.S. credit card transaction authorization systems as well as other large U.S. bank, retail, and government data systems. The cause was eventually traced to a software bug.

• The computer system of a major online U.S. stock trading service failed during trading hours several times over a period of days in February of 1999, according to nationwide news reports. The problem was reportedly due to bugs in a software upgrade intended to speed online trade confirmations.

• In November of 1997, the stock of a major health industry company dropped 60% due to reports of failures in computer billing systems, problems with a large database conversion, and inadequate software testing. It was reported that more than $100,000,000 in receivables had to be written off and that multi-million dollar fines were levied on the company by government agencies.

• Software bugs caused the bank accounts of 823 customers of a major U.S. bank to be credited with $924,844,208.32 each in May of 1996, according to newspaper reports. The American Bankers Association claimed it was the largest such error in banking history. A bank spokesman said the programming errors were corrected and all funds were recovered.

All the above incidents reiterate the importance of thoroughly testing software applications and products before they are put into production. They also demonstrate that the cost of rectifying a defect during development is much less than the cost of rectifying it in production.

1.3 WHAT IS TESTING?

• "Testing is an activity in which a system or component is executed under specified conditions; the results are observed and recorded and an evaluation is made of some aspect of the system or component" - IEEE

• Executing a system or component is known as dynamic testing.

• Review, inspection and verification of documents (requirements, design documents, test plans, etc.), code and other software work products is known as static testing.

• Static testing is found to be the most effective and efficient way of testing.


• Successful testing of software demands both dynamic and static testing.

• Measurements show that a defect discovered during design that costs $1 to rectify at that stage will cost $1,000 to repair in production. This clearly points out the advantage of early testing.

• Testing should start with small measurable units of code, gradually progress towards testing integrated components of the application, and finally be completed with testing at the application level.

• Testing verifies the system against its stated and implied requirements, i.e., is it doing what it is supposed to do? It should also check that the system is not doing what it is not supposed to do, whether it takes care of boundary conditions, how the system performs in a production-like environment, and how fast and consistently the system responds when data volumes are high.

Reasons for Software Bugs

The following are common reasons for software bugs:

• Miscommunication or no communication - as to the specifics of what an application should or shouldn't do (the application's requirements).

• Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Windows-type interfaces, client-server and distributed applications, data communications, enormous relational databases, and the sheer size of applications have all contributed to the exponential growth in software/system complexity. And the use of object-oriented techniques can complicate instead of simplify a project unless it is well engineered.

• Programming errors - programmers, like anyone else, can make mistakes.

• Changing requirements - the customer may not understand the effects of changes, or may understand and request them anyway: redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors. The enthusiasm of the engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control.

• Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.


• Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read').

• Software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

1.4 APPROACHES TO TESTING

Many approaches have been defined in the literature. The importance of any approach depends on the type of system you are testing. Some of the approaches are given below:

Debugging-oriented:

This approach identifies errors while debugging the program; no distinction is made between testing and debugging.

Demonstration-oriented:

The purpose of testing is to show that the software works. Here, most of the time, the software is demonstrated in its normal sequence/flow, and all the branches may not be tested. This approach mainly satisfies the customer and adds no value to the program.

Destruction-oriented:

The purpose of testing is to show the software doesn’t work.

It is a sadistic process, which explains why most people find it difficult. It is also difficult to design test cases for this purpose.

Evaluation-oriented:

The purpose of testing is to reduce the perceived risk of not working up to an acceptable value.

Prevention-oriented:

In this view, testing is a mental discipline that results in low-risk software. It is always better to forecast possible errors and rectify them early.

In general, program testing is more properly viewed as the destructive process of trying to find the errors (whose presence is assumed) in a program. A successful test case is one that furthers progress in this direction by causing the program to fail. However, one also wants to use program testing to establish some degree of confidence that a program does what it is supposed to do and does not do what it is not supposed to do, and this purpose is best achieved by a diligent exploration for errors.

1.5 IMPORTANCE OF TESTING

The testing activity cannot be eliminated from the life cycle, as the end product must be bug-free and reliable. Testing is important because:

• Testing is a critical element of software quality assurance
• Post-release removal of defects is the most expensive
• A significant portion of life cycle effort is expended on testing

In a typical service-oriented project, about 20-40% of the project effort is spent on testing. It is much more in the case of "human-rated" software.

For example, at Microsoft the tester-to-developer ratio is 1:1, whereas at the NASA shuttle development center (SEI Level 5) the ratio is 7:1. This shows how integral testing is to quality assurance.

1.6 HURDLES IN TESTING

As with many other development activities, testing is not free from hurdles. Some of the hurdles normally encountered are:

• It is usually a late activity in the project life cycle
• There is no "concrete" output, and therefore it is difficult to measure the value addition
• Lack of historical data
• Its importance receives relatively little recognition
• It is politically damaging, as you are challenging the developer
• Delivery commitments
• Too much optimism that the software always works correctly

Defect Distribution

In a typical project life cycle, testing is a late activity. When the product is tested, the defects found may be due to many reasons: programming errors, defects in design, or defects introduced at any other stage of the life cycle. The overall defect distribution is shown in Fig 1.1.


Fig 1.1: Software Defect Distribution (Requirements 56%, Design 27%, Code 7%, Other 10%)

1.7 TESTING FUNDAMENTALS

Before understanding the process of testing software, it is necessary to learn the basic principles of testing.

1.7.1 Testing Objectives

• "Testing is a process of executing a program with the intent of finding an error."
• "A good test is one that has a high probability of finding an as yet undiscovered error."
• "A successful test is one that uncovers an as yet undiscovered error."

The objective is to design tests that systematically uncover different classes of errors and do so with a minimum amount of time and effort.

Secondary benefits include:

• Demonstrating that software functions appear to be working according to specification.

• Showing that performance requirements appear to have been met.

• Data collected during testing provides a good indication of software reliability and some indication of software quality.



Testing cannot show the absence of defects; it can only show that software defects are present.

1.7.2 Test Information Flow

A typical test information flow is shown in Fig 1.2.

Fig 1.2: Test information flow in a typical software test life cycle

In the above figure:

• The software configuration includes a Software Requirements Specification, a Design Specification, and source code.

• A test configuration includes a test plan and procedures, test cases, and testing tools.

• It is difficult to predict the time needed to debug the code; hence it is difficult to schedule.

1.7.3 Test Case Design

Some of the points to be noted during test case design are:

• It can be as difficult as the initial design.

• It can test whether a component conforms to its specification - Black Box Testing.

• It can test whether a component conforms to its design - White Box Testing.


• Testing cannot prove correctness as not all execution paths can be tested.

Consider the following example shown in Fig 1.3.

Fig 1.3

A program with a structure as illustrated above (with less than 100 lines of Pascal code) has about 100,000,000,000,000 possible paths. If one attempted to test these at a rate of 1,000 tests per second, it would take about 3,170 years to test all paths. This shows that exhaustive testing of software is not possible.
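The arithmetic behind this claim is easy to verify with a quick sketch:

    paths = 10 ** 14                      # approximate number of possible paths
    rate = 1_000                          # tests executed per second
    seconds = paths / rate
    years = seconds / (60 * 60 * 24 * 365)
    print(round(years))                   # roughly 3,170 years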

QUESTIONS

1. What is software testing? Explain the purpose of testing.

2. Explain the origin of the defect distribution in a typical software development life cycle.

_________


Chapter 2

Software Quality Assurance

2.1 LEARNING OBJECTIVES

You will learn about:

• Basic principles of Software Quality
• Software Quality Assurance and SQA activities
• Software Reliability

2.2 INTRODUCTION

Quality is defined as "a characteristic or attribute of something". As an attribute of an item, quality refers to measurable characteristics: things we are able to compare to known standards such as length, color, electrical properties, malleability, and so on. However, software, largely an intellectual entity, is more challenging to characterize than physical objects.

Quality of design refers to the characteristics that designers specify for an item. The grade of materials, tolerances, and performance specifications all contribute to the quality of design.

Quality of conformance is the degree to which the design specifications are followed during manufacturing. Again, the greater the degree of conformance, the higher the level of quality of conformance.


Software Quality Assurance encompasses:

• A quality management approach
• Effective software engineering technology
• Formal technical reviews
• A multi-tiered testing strategy
• Control of software documentation and changes made to it
• A procedure to assure compliance with software development standards
• Measurement and reporting mechanisms

Software quality is achieved as shown in Figure 2.1.

Figure 2.1: Achieving Software Quality (software engineering methods, formal technical reviews, standards, SCM & SQA, testing, and measurement)

2.3 QUALITY CONCEPTS

What are quality concepts?

• Quality
• Quality control
• Quality assurance
• Cost of quality

Quality



The American Heritage Dictionary defines quality as "a characteristic or attribute of something". As an attribute of an item, quality refers to measurable characteristics: things we are able to compare to known standards such as length, color, electrical properties, malleability, and so on. However, software, largely an intellectual entity, is more challenging to characterize than physical objects.

Nevertheless, measures of a program's characteristics do exist. These properties include:

1. Cyclomatic complexity

2. Cohesion

3. Number of function points

4. Lines of code
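The first of these properties, cyclomatic complexity, can be computed from a program's control-flow graph as V(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components. A minimal sketch, using an assumed example graph:

    def cyclomatic_complexity(edges: int, nodes: int, components: int = 1) -> int:
        # V(G) = E - N + 2P for a control-flow graph.
        return edges - nodes + 2 * components

    # Example: a flow graph with 11 edges and 9 nodes gives V(G) = 4,
    # i.e. four linearly independent paths through the code.
    print(cyclomatic_complexity(11, 9))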

When we examine an item based on its measurable characteristics, two kinds of quality may be encountered:

• Quality of design
• Quality of conformance

2.4 QUALITY OF DESIGN

Quality of design refers to the characteristics that designers specify for an item. The grade of materials, tolerances, and performance specifications all contribute to quality of design. As higher-grade materials are used and tighter tolerances and greater levels of performance are specified, the design quality of a product increases, provided the product is manufactured according to those specifications.

2.5 QUALITY OF CONFORMANCE

Quality of conformance is the degree to which the design specifications are followed during manufacturing. Again, the greater the degree of conformance, the higher the level of quality of conformance.

In software development, quality of design encompasses requirements, specifications and design of the system.

Quality of conformance is an issue focused primarily on implementation. If the implementation follows the design and the resulting system meets its requirements and performance goals, conformance quality is high.


2.6 QUALITY CONTROL (QC)

QC is the series of inspections, reviews, and tests used throughout the development cycle to ensure that each work product meets the requirements placed upon it. QC includes a feedback loop to the process that created the work product. The combination of measurement and feedback allows us to tune the process when the work products created fail to meet their specifications. This approach views QC as part of the manufacturing process. QC activities may be fully automated, manual, or a combination of automated tools and human interaction. An essential concept of QC is that all work products have defined and measurable specifications to which we may compare the outputs of each process; the feedback loop is essential to minimize the defects produced.
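As a toy illustration of this compare-and-feed-back idea, the sketch below (the metric names and limits are invented for the example) checks measured work-product metrics against their specifications and returns the deviations to be fed back to the producing process:

    # Hypothetical specification limits for two work-product metrics.
    SPEC_LIMITS = {
        "defect_density_per_kloc": 2.0,   # maximum allowed
        "review_coverage_percent": 90.0,  # minimum required
    }

    def qc_check(measured):
        # Return the deviations to feed back to the process that produced the work product.
        deviations = []
        if measured["defect_density_per_kloc"] > SPEC_LIMITS["defect_density_per_kloc"]:
            deviations.append("defect density exceeds specification")
        if measured["review_coverage_percent"] < SPEC_LIMITS["review_coverage_percent"]:
            deviations.append("review coverage below specification")
        return deviations

    print(qc_check({"defect_density_per_kloc": 3.1, "review_coverage_percent": 85.0}))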

2.7 QUALITY ASSURANCE (QA)

QA consists of the auditing and reporting functions of management. The goal of quality assurance is to provide management with the data necessary to be informed about product quality, thereby gaining insight and confidence that product quality is meeting its goals. Of course, if the data provided through QA identify problems, it is management's responsibility to address the problems and apply the necessary resources to resolve quality issues.

2.7.1 Cost of Quality

Cost of quality includes all costs incurred in the pursuit of quality or in performing quality-related activities. Cost of quality studies are conducted to provide a baseline for the current cost of quality, to identify opportunities for reducing the cost of quality, and to provide a normalized basis of comparison. The basis of normalization is usually money. Once we have normalized quality costs on a money basis, we have the necessary data to evaluate where the opportunities lie to improve our processes; furthermore, we can evaluate the effect of changes in money-based terms.

Quality costs may be divided into costs associated with:

• Prevention
• Appraisal
• Failure

Prevention costs include:

• Quality Planning
• Formal Technical Reviews
• Test Equipment
• Training

Appraisal costs include activities performed to gain insight into product condition the "first time through" each process.

Examples of appraisal costs include:

• In-process and inter-process inspection
• Equipment calibration and maintenance
• Testing

Failure costs are costs that would disappear if no defects appeared before shipping a product to the customer. Failure costs may be subdivided into internal and external failure costs.

Internal failure costs are costs incurred when we detect an error in our product prior to shipment.

Internal failure costs include:

• Rework
• Repair
• Failure mode analysis

External failure costs are the costs associated with defects found after the product has been shipped to the customer.

Examples of external failure costs are:

1. Complaint Resolution

2. Product return and replacement

3. Helpline support

4. Warranty work
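To make the normalization idea concrete, the following sketch (all figures invented for illustration) totals the cost of quality across the categories above and shows how much of it is failure cost:

    # Illustrative monthly figures in money units (invented for this example).
    costs = {
        "prevention":       {"quality_planning": 500, "training": 300},
        "appraisal":        {"inspections": 400, "testing": 1200},
        "internal_failure": {"rework": 900, "repair": 250},
        "external_failure": {"complaint_resolution": 700, "warranty_work": 650},
    }

    cost_of_quality = sum(sum(c.values()) for c in costs.values())
    failure_cost = (sum(costs["internal_failure"].values())
                    + sum(costs["external_failure"].values()))

    print("Total cost of quality:", cost_of_quality)                   # 4900
    print("Failure share:", round(failure_cost / cost_of_quality, 2))  # 0.51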

2.8 SOFTWARE QUALITY ASSURANCE (SQA)

Quality is defined as conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software.


The above definition emphasizes three important points.

1. Software requirements are the foundation from which quality is measured. Lack of conformance to requirements is lack of quality.

2. Specified standards define a set of development criteria that guide the manner in which software is engineered. If the criteria are not followed, lack of quality will almost surely result.

3. There is a set of implicit requirements that often goes unmentioned (e.g., the desire for good maintainability). If software conforms to its explicit requirements but fails to meet implicit requirements, software quality is questionable.

2.8.1 Background Issues

QA is an essential activity for any business that produces products to be used by others.

The SQA group serves as the customer's in-house representative. That is, the people who perform SQA must look at the software from the customer's point of view.

The SQA group attempts to answer the questions below and hence ensure the quality of the software. The questions are:

1. Has software development been conducted according to pre-established standards?

2. Have technical disciplines properly performed their role as part of the SQA activity?

SQA Activities

The SQA Plan is interpreted as shown in Fig 2.2.

SQA comprises a variety of tasks associated with two different constituencies:

1. The software engineers who do the technical work, such as:

• Performing quality assurance by applying technical methods
• Conducting formal technical reviews
• Performing well-planned software testing

2. The SQA group, which has responsibility for:

• Quality assurance planning and oversight
• Record keeping
• Analysis and reporting


The QA activities performed by the software engineering team and the SQA group are governed by the following plan:

• Evaluations to be performed
• Audits and reviews to be performed
• Standards that are applicable to the project
• Procedures for error reporting and tracking
• Documents to be produced by the SQA group
• Amount of feedback provided to the software project team

Figure 2.2: Software Quality Assurance Plan

What are the activities performed by the SQA group and the software engineering team? The SQA group:

• Prepares an SQA plan for the project
• Participates in the development of the project's software description
• Reviews software engineering activities to verify compliance with the defined software process
• Audits designated software work products to verify compliance with those defined as part of the software process
• Ensures that deviations in software work and work products are documented and handled according to a documented procedure
• Records any noncompliance and reports it to senior management



2.8.2 Software Reviews

Software reviews are a "filter" for the software engineering process. That is, reviews are applied at various points during software development and serve to uncover errors that can then be removed. Software reviews serve to "purify" the software work products that occur as a result of analysis, design, and coding.

Any review is a way of using the diversity of a group of people to:

1. Point out needed improvements in the product of a single person or a team;

2. Confirm those parts of a product in which improvement is either not desired or not needed;

3. Achieve technical work of more uniform, or at least more predictable, quality than can be achieved without reviews, in order to make technical work more manageable.

There are many different types of reviews that can be conducted as part of software engineering, for example:

1. An informal meeting in which technical problems are discussed.

2. A formal presentation of a software design to an audience of customers, management, and technical staff.

3. A formal technical review, which is the most effective filter from a quality assurance standpoint. Conducted by software engineers for software engineers, the FTR is an effective means of improving software quality.

2.8.3 Cost Impact of Software Defects

To illustrate the cost impact of early error detection, we consider a series of relative costs that are based on actual cost data collected for large software projects.

Assume that an error uncovered during design will cost 1.0 monetary unit to correct. Relative to this cost, the same error uncovered just before testing commences will cost 6.5 units; during testing, 15 units; and after release, between 60 and 100 units.
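As a worked illustration of these relative costs (the defect counts are invented, and 75 units is taken as a rough midpoint of the 60-100 range), the sketch below compares the total correction cost when 100 defects are found mostly early with the cost when they are found mostly late:

    # Relative correction cost per defect, by the phase in which it is found.
    COST = {"design": 1.0, "before_test": 6.5, "during_test": 15.0, "after_release": 75.0}

    def total_cost(defects_by_phase):
        return sum(COST[phase] * count for phase, count in defects_by_phase.items())

    found_early = {"design": 70, "before_test": 20, "during_test": 9, "after_release": 1}
    found_late  = {"design": 10, "before_test": 20, "during_test": 40, "after_release": 30}

    print(total_cost(found_early))  # 410.0 units
    print(total_cost(found_late))   # 2990.0 units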

2.8.4 Defect Amplification and Removal

A defect amplification model can be used to illustrate the generation and detection of errors during the preliminary design, detail design, and coding steps of the software engineering process. The model is illustrated schematically in Figure 2.3.

A box represents a software development step. During the step, errors may be inadvertently generated. Review may fail to uncover newly generated errors and errors from previous steps, resulting in some number of errors that are passed through. In some cases, errors passed through from previous steps are amplified (amplification factor, x) by the current work. The box subdivisions represent each of these characteristics and the percent efficiency for detecting errors, a function of the thoroughness of the review.

Figure 2.3: Defect Amplification Model

Figure 2.4 illustrates a hypothetical example of defect amplification for a software development process in which no reviews are conducted. As shown in the figure, each test step is assumed to uncover and correct fifty percent of all incoming errors without introducing new errors (an optimistic assumption). Ten preliminary design errors are amplified to 94 errors before testing commences, and twelve latent defects are released to the field. Figure 2.5 considers the same conditions except that design and code reviews are conducted as part of each development step. In this case, the ten initial preliminary design errors are amplified to only 24 errors before testing commences, and only three latent defects exist. By recalling the relative costs associated with the discovery and correction of errors, the overall costs (with and without review, for our hypothetical example) can be established.
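A minimal sketch of the mechanics of one step of this model (the figures below reuse the detail-design numbers of the hypothetical example; they are illustrative only, not an attempt to reproduce the published figures exactly):

    def step_output(passed_through, amplified, factor, newly_generated, efficiency):
        # Errors leaving a development step: errors passed through unchanged,
        # errors amplified by the current work, and newly generated errors,
        # reduced by the step's error-detection efficiency.
        incoming = passed_through + amplified * factor + newly_generated
        return incoming * (1 - efficiency)

    # Hypothetical detail-design step: 10 errors arrive (6 pass through,
    # 4 are amplified by 1.5) and 25 new errors are generated.
    print(step_output(6, 4, 1.5, 25, efficiency=0.0))  # 37.0 errors passed on, no review
    print(step_output(6, 4, 1.5, 25, efficiency=0.6))  # 14.8 errors passed on, with review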

To conduct reviews, a developer must expend time and effort, and the development organization must spend money. However, the results of the preceding example leave little doubt that we have encountered a "pay now or pay much more later" syndrome.

Formal technical reviews (for design and other technical activities) provide a demonstrable cost benefit, and they should be conducted.


Figure 2.4: Defect Amplification - No Reviews

Figure 2.5: Defect Amplification - Reviews Conducted


2.9 FORMAL TECHNICAL REVIEWS (FTR)

The FTR is an SQA activity that is performed by software engineers.

The objectives of the FTR are:

• To uncover errors in function, logic, or implementation for any representation of the software
• To verify that the software under review meets its requirements
• To ensure that the software has been represented according to predefined standards
• To achieve software that is developed in a uniform manner
• To make projects more manageable

In addition, the FTR serves as a training ground, enabling junior engineers to observe different approaches to software analysis, design, and implementation. The FTR also serves to promote backup and continuity because a number of people become familiar with parts of the software that they may not have otherwise seen.

The FTR is actually a class of reviews that includes walkthroughs, inspections, round-robin reviews, and other small-group technical assessments of software. Each FTR is conducted as a meeting and will be successful only if it is properly planned, controlled, and attended.

Types of Formal Technical Review

While the focus of this research is on the individual evaluation aspects of reviews, for context several other FTR techniques are discussed as well. Among the most common forms of FTR are the following:

1. Desk Checking, or reading over a program by hand while sitting at one's desk, is the oldest software review technique [Adrion et al. 1982]. Strictly speaking, desk checking is not a form of FTR since it does not involve a formal process or a group. Moreover, desk checking is generally perceived as ineffective and unproductive due to (a) its lack of discipline and (b) the general ineffectiveness of people in detecting their own errors. To correct for the second problem, programmers often swap programs and check each other's work. Since desk checking is an individual process not involving group dynamics, research in this area would be relevant, but none applicable to the current research was found.

It should be noted that Humphrey [1995] has developed a review method, called Personal Review (PR), which is similar to desk checking. In PR, each programmer examines his own products to find as many defects as possible, utilizing a disciplined process in conjunction with Humphrey's Personal Software Process (PSP) to improve his own work. The review strategy includes the use of checklists to guide the review process, review metrics to improve the process, and defect causal analysis to prevent the same defects from recurring in the future. The approach taken in developing the Personal Review process is an engineering one; no reference is made in Humphrey [1995] to cognitive theory.

2. Peer Rating is a technique in which anonymous programs are evaluated in terms of their overall quality, maintainability, extensibility, usability and clarity by selected programmers who have similar backgrounds [Myers 1979]. Shneiderman [1980] suggests that peer ratings of programs are productive, enjoyable, and non-threatening experiences. The technique is often referred to as Peer Reviews [Shneiderman 1980], but some authors use the term peer reviews for generic review methods involving peers [Paulk et al. 1993; Humphrey 1989].

3. Walkthroughs are presentation reviews in which a review participant, usually the software author, narrates a description of the software and the other members of the review group provide feedback throughout the presentation [Freedman and Weinberg 1990; Gilb and Graham 1993]. It should be noted that the term "walkthrough" has been used in the literature variously. Some authors unite it with "structured" and treat it as a disciplined, formal review process [Myers 1979; Yourdon 1989; Adrion et al. 1982]. However, the literature generally describes walkthrough as an undisciplined process without advance preparation on the part of reviewers and with the meeting focused on education of participants [Fagan 1976].

4. Round-robin Review is an evaluation process in which a copy of the review materials is made available and routed to each participant; the reviewers then write their comments/questions concerning the materials and pass the materials with comments to another reviewer, and eventually to the moderator or author [Hart 1982].

5. Inspection was developed by Fagan [1976, 1986] as a well-planned and well-defined group review process to detect software defects; defect repair occurs outside the scope of the process. The original Fagan Inspection (FI) is the most cited review method in the literature and is the source for a variety of similar inspection techniques [Tjahjono 1996]. Among the FI-derived techniques are Active Design Review [Parnas and Weiss 1987], Phased Inspection [Knight and Myers 1993], N-Fold Inspection [Schneider et al. 1992], and FTArm [Tjahjono 1996]. Unlike the review techniques previously discussed, inspection is often used to control the quality and productivity of the development process.

A Fagan Inspection consists of six well-defined phases:

i. Planning. Participants are selected and the materials to be reviewed are prepared and checked for review suitability.

ii. Overview. The author educates the participants about the review materials through a presentation.

iii. Preparation. The participants learn the materials individually.

iv. Meeting. The reader (a participant other than the author) narrates or paraphrases the review materials statement by statement, and the other participants raise issues and questions. Questions continue on a point only until an error is recognized or the item is deemed correct.


v. Rework. The author fixes the defects identified in the meeting.

vi. Follow-up. The “corrected” products are reinspected.

Practitioner Evaluation is primarily associated with the Preparation phase.

In addition to classification by technique type, FTR may also be classified on other dimensions, including the following:

A. Small vs. Large Team Reviews. Siy [1996] classifies reviews into those conducted by small (1-4 reviewers) [Bisant and Lyle 1996] and large (more than 4 reviewers) [Fagan 1976, 1986] teams. If each reviewer depends on different expertise and experiences, a large team should allow a wider variety of defects to be detected and thus better coverage. However, a large team requires more effort due to more individuals inspecting the artifact, generally involves greater scheduling problems [Ballman and Votta 1994], and may make it more difficult for all participants to participate fully.

B. No vs. Single vs. Multiple Session Reviews. The traditional Fagan Inspection provided for one session to inspect the software artifact, with the possibility of a follow-up session to inspect corrections. However, variants have been suggested.

Humphrey [1989] comments that three-quarters of the errors found in well-run inspections are found during preparation. Based on an economic analysis of a series of inspections at AT&T, Votta [1993] argues that inspection meetings are generally not economic and should be replaced with depositions, where the author and (optionally) the moderator meet separately with inspectors to collect their results.

On the other hand, some authors [Knight and Myers 1993; Schneider et al. 1992] have argued for multiple sessions, conducted either in series or in parallel. Gilb and Graham [1993] do not use multiple inspection sessions but add a root cause analysis session immediately after the inspection meeting.

C. Nonsystematic vs. Systematic Defect-Detection Technique Reviews. The most frequently used detection methods (ad hoc and checklist) rely on nonsystematic techniques, and reviewer responsibilities are general and not differentiated for single session reviews [Siy 1996]. However, some methods employ more prescriptive techniques, such as questionnaires [Parnas and Weiss 1987] and correctness proofs [Britcher 1988].

D. Single Site vs. Multiple Site Reviews. The traditional FTR techniques have assumed that the group-meeting component would occur face-to-face at a single site. However, with improved telecommunications, and especially with computer support (see item F below), it has become increasingly feasible to conduct even the group meeting from multiple sites.

E. Synchronous vs. Asynchronous Reviews. The traditional FTR techniques have also assumed that the group meeting component would occur in real time, i.e., synchronously. However, some newer techniques that eliminate the group meeting or are based on computer support utilize asynchronous reviews.

F. Manual vs. Computer-supported Reviews. In recent years, several computer-supported review systems have been developed [Brothers et al. 1990; Johnson and Tjahjono 1993; Gintell et al. 1993; Mashayekhi et al. 1994]. The type of support varies from simple augmentation of the manual practices [Brothers et al. 1990; Gintell et al. 1993] to totally new review methods [Johnson and Tjahjono 1993].

Economic Analyses of Formal Technical Review

Wheeler et al. [1996], after reviewing a number of studies that support the economic benefit of FTR, conclude that inspections reduce the number of defects throughout development, cause defects to be found earlier in the development process where they are less expensive to correct, and uncover defects that would be difficult or impossible to discover by testing. They also note that "these benefits are not without their costs, however. Inspections require an investment of approximately 15 percent of the total development cost early in the process [p. 11]."

In discussing overall economic effects, Wheeler et al. cite Fagan [1986] to the effect that investment in inspections has been reported to yield a 25-to-35 percent overall increase in productivity. They also reproduce a graphical analysis from Boehm [1987] that indicates inspections reduce total development cost by approximately 30%.

The Wheeler et al. [1996] analysis does not specify the relative value of Practitioner Evaluation to FTR, but two recent economic analyses provide indications.

• Votta [1993]. After analyzing data collected from 13 traditional inspections conducted at AT&T, Votta reports that the approximately 4% increase in faults found at collection meetings (synergy) does not economically justify the development delays caused by the need to schedule meetings and the additional developer time associated with the actual meetings. He also argues that it is not cost-effective to use the collection meeting to reduce the number of items incorrectly identified as defective prior to the meeting ("false positives"). Based on these findings, he concludes that almost all inspection meetings requiring all reviewers to be present should be replaced with Depositions, which are three-person meetings with only the author, moderator, and one reviewer present.

• Siy [1996]. In his analysis of the factors driving inspection costs and benefits, Siy reports that changes in FTR structural elements, such as group size, number of sessions, and coordination of multiple sessions, were largely ineffective in improving the effectiveness of inspections. Instead, inputs into the process (reviewers and code units) accounted for more outcome variation than structural factors. He concludes by stating "better techniques by which reviewers detect defects, not better process structures, are the key to improving inspection effectiveness [Abstract, p. 2]." (emphasis added)

Votta's analysis effectively attributes most of the economic benefit of FTR to PE, and Siy's explicitly states that better PE techniques "are the key to improving inspection effectiveness." These findings, if supported by additional research, would further support the contention that a better understanding of Practitioner Evaluation is necessary.

2.2.3 Psychological Aspects of FTR

Work on the psychological aspects of FTR can be categorized into four groups.

1. Egoless Programming. Gerald Weinberg [1971] began the examination of psychological issues associated with software review in his work on egoless programming. According to Weinberg, programmers are often reluctant to allow their programs to be read by other programmers because the programs are often considered to be an extension of the self, and errors discovered in the programs to be a challenge to one's self-image. Two implications of this theory are as follows:

i. The ability of a programmer to find errors in his own work tends to be impaired since he tends to justify his own actions, and it is therefore more effective to have other people check his work.

ii. Each programmer should detach himself from his own work. The work should be considered public property that other people can freely criticize and thus improve in quality; otherwise, one tends to become defensive and reluctant to expose one's own failures.

These two concepts have led to the justification of FTR groups, as well as the establishment of independent quality assurance groups that specialize in finding software defects in many software organizations [Humphrey 1989].

2. Role of Management. Another psychological aspect of FTR that has been examined is the recording of data and its dissemination to management. According to Dobbins [1987], this must be done in such a way that individual programmers will not feel intimidated or threatened.

3. Positive Psychological Impacts. Hart [1982] observes that reviews can make one more careful in writing programs (e.g., double checking code) in anticipation of having to present or share the programs with other participants. Thus, errors are often eliminated even before the actual review sessions.

4. Group Process. Most FTR methods are implemented using small groups. Therefore, several key issues from small group theory apply to FTR, such as groupthink (the tendency to suppress dissent in the interests of group harmony), group deviants (influence by a minority), and domination of the group by a single member. Other key issues include social facilitation (the presence of others boosts one's performance) and social loafing (one member free rides on the group's effort) [Myers 1990]. The issue of moderator domination in inspections is also documented in the literature [Tjahjono 1996].

Perhaps the most interesting research from the perspective of the current study is that of Sauer et al. [2000]. This research is unusual in that it has an explicit theoretical basis and outlines a behaviorally motivated program of research into the effectiveness of software development technical reviews. The finding that most of the variation in effectiveness of software development technical reviews is the result of variations in expertise among the participants provides additional motivation for developing a solid understanding of Formal Technical Review at the individual level.

It should be noted that all of this work, while based on psychological theory, does not address the issue of how practitioners actually evaluate software artifacts.

2.9.1 The Review Meeting

The focus of the FTR is on a work product - a component of the software.

At the end of the review, all attendees of the FTR must decide whether to:

1. Accept the work product without further modification.

2. Reject the work product due to severe errors (once corrected, another review must be performed).

3. Accept the work product provisionally (minor errors have been encountered and must be corrected, but no additional review will be required).

Once the decision is made, all FTR attendees complete a sign-off indicating their participation in the review and their concurrence with the review team's findings.

2.9.2 Review reporting and record keeping

The review summary report is typically a single-page form. It becomes part of the project historical record and may be distributed to the project leader and other interested parties. The review issues list serves two purposes:

1. To identify problem areas within the product


2. To serve as an action-item checklist that guides the producer as corrections are made. An issues list is normally attached to the summary report.

It is important to establish a follow-up procedure to ensure that items on the issues list have been properly corrected. Unless this is done, it is possible that issues raised can "fall between the cracks". One approach is to assign responsibility for follow-up to the review leader. A more formal approach assigns responsibility to an independent SQA group.

2.9.3 Review Guidelines

The following represents a minimum set of guidelines for formal technical reviews:

l Review the product, not the producer

l Set an agenda and maintain it

l Limit debate and rebuttal

l Enunciate problem areas but don’t attempt to solve every problem noted

l Take written notes

l Limit the number of participants and insist upon advance preparation

l Develop a checklist for each work product that is likely to be reviewed

l Allocate resources and schedule time for FTRs.

l Conduct meaningful training for all reviewers

l Review your earlier reviews

2.10 STATISTICAL QUALITY ASSURANCE

Statistical quality assurance reflects a growing trend throughout industry to become more quantitative about quality. For software, statistical quality assurance implies the following steps:

l Information about software defects is collected and categorized

l An attempt is made to trace each defect to its underlying cause

l Using the Pareto principle (80% of the defects can be traced to 20% of all possible causes), isolate the 20% (the "vital few")


l Once the vital few causes have been identified, move to correct the problems that have caused the defects.

This relatively simple concept represents an important step toward the creation of an adaptive software engineering process in which changes are made to improve those elements of the process that introduce errors. To illustrate the process, assume that a software development organization collects information on defects for a period of one year. Some errors are uncovered as the software is being developed. Other defects are encountered after the software has been released to its end user.

Although hundreds of errors are uncovered, all can be traced to one of the following causes:

q Incomplete or Erroneous Specification (IES)

q Misinterpretation of Customer Communication (MCC)

q Intentional Deviation from Specification (IDS)

q Violation of Programming Standards (VPS)

q Error in Data Representation (EDR)

q Inconsistent Module Interface (IMI)

q Error in Design Logic (EDL)

q Incomplete or Erroneous Testing (IET)

q Inaccurate or Incomplete Documentation (IID)

q Error in Programming Language Translation of design (PLT)

q Ambiguous or inconsistent Human-Computer Interface (HCI)

q Miscellaneous (MIS)

To apply statistical SQA, Table 2.1 is built. Once the vital few causes are determined, the software development organization can begin corrective action.

After analysis, design, coding, testing, and release, the following data are gathered.

Ei = the total number of errors uncovered during the ith step in the software engineering process

Si = the number of serious errors

Mi = the number of moderate errors

Ti = the number of minor errors

PS = size of the product (LOC, design statements, pages of documentation) at the ith step


Ws, Wm, Wt = weighting factors for serious, moderate, and trivial errors, where the recommended values are Ws = 10, Wm = 3, Wt = 1.

The weighting factors for each phase should become larger as development progresses. This rewards an organization that finds errors early.

At each step in the software engineering process, a phase index, PIi, is computed:

PIi = Ws (Si/Ei) + Wm (Mi/Ei) + Wt (Ti/Ei)

The error index EI is computed by calculating the cumulative effect of each PIi, weighting errors encountered later in the software engineering process more heavily than those encountered earlier:

EI = Σ (i x PIi) / PS

   = (PI1 + 2PI2 + 3PI3 + ... + iPIi) / PS

The error index can be used in conjunction with the information collected in Table 2.1 to develop an overall indication of improvement in software quality.
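As a minimal sketch of how these formulas might be applied (the per-phase error counts and product size below are invented purely for illustration, not taken from the table):

# Phase index PIi = Ws*(Si/Ei) + Wm*(Mi/Ei) + Wt*(Ti/Ei), and
# error index EI = sum(i * PIi) / PS, using the recommended weights.
WS, WM, WT = 10, 3, 1          # weights for serious, moderate, trivial errors

# Hypothetical per-step data: (Ei, Si, Mi, Ti); PS is the product size.
phases = [
    (40, 5, 15, 20),           # analysis
    (60, 8, 22, 30),           # design
    (90, 10, 35, 45),          # coding
]
PS = 10000                     # e.g., lines of code

phase_indices = [WS * (s / e) + WM * (m / e) + WT * (t / e) for (e, s, m, t) in phases]
EI = sum(i * pi for i, pi in enumerate(phase_indices, start=1)) / PS
print([round(pi, 2) for pi in phase_indices], round(EI, 5))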

Table 2.1: Data Collection for Statistical SQA

                Total         Serious       Moderate        Minor
  Error       No.    %       No.    %       No.    %       No.    %
  IES         205   22        34   27        68   18       103   24
  MCC         156   17        12    9        68   18        76   17
  IDS          48    5         1    1        24    6        23    5
  VPS          25    3         0    0        15    4        10    2
  EDR         130   14        26   20        68   18        36    8
  IMI          58    6         9    7        18    5        31    7
  EDL          45    5        14   11        12    3        19    4
  IET          95   10        12    9        35    9        48   11
  IID          36    4         2    2        20    5        14    3
  PLT          60    6        15   12        19    5        26    6
  HCI          28    3         3    2        17    4         8    2
  MIS          56    6         0    0        15    4        41    9
  TOTALS      942  100       128  100       379  100       435  100


2.11 SOFTWARE RELIABILITY

Software reliability, unlike many other quality factors, can be measured, directed, and estimated using historical and developmental data. Software reliability is defined in statistical terms as the "probability of failure-free operation of a computer program in a specified environment for a specified time". To illustrate, program X is estimated to have a reliability of 0.96 over 8 elapsed processing hours. In other words, if program X were to be executed 100 times and required 8 hours of elapsed processing time, it is likely to operate correctly 96 out of 100 times.

2.11.1 Measures of Reliability and Availability

In a computer-based system, a simple measure of reliability is Mean Time Between Failures (MTBF), where

MTBF = MTTF + MTTR

The acronyms MTTF and MTTR stand for Mean Time To Failure and Mean Time To Repair, respectively.

In addition to a reliability measure, we must develop a measure of availability. Software availability is the probability that a program is operating according to requirements at a given point in time, and is defined as:

Availability = MTTF / (MTTF + MTTR) x 100%

The MTBF reliability measure is equally sensitive to MTTF and MTTR. The availability measure is somewhat more sensitive to MTTR, an indirect measure of the maintainability of the software.
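A minimal sketch of both measures, using invented MTTF and MTTR figures:

# MTBF = MTTF + MTTR and Availability = MTTF / (MTTF + MTTR) x 100%.
mttf_hours = 950.0             # mean time to failure (hypothetical)
mttr_hours = 50.0              # mean time to repair (hypothetical)

mtbf = mttf_hours + mttr_hours
availability = mttf_hours / (mttf_hours + mttr_hours) * 100
print(f"MTBF = {mtbf} hours, Availability = {availability:.1f}%")   # 1000.0 hours, 95.0%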

2.11.2 Software Safety and Hazard Analysis

Software safety and hazard analysis are SQA activities that focus on the identification and assessment of potential hazards that may impact software negatively and cause the entire system to fail. If hazards can be identified early in the software engineering process, software design features can be specified that will either eliminate or control the potential hazards.

A modeling and analysis process is conducted as part of software safety analysis. Initially, hazards are identified and categorized by criticality and risk.

Once hazards are identified and analyzed, safety-related requirements can be specified for the software; i.e., the specification can contain a list of undesirable events and the desired system responses to these events. The role of software in managing undesirable events is then indicated.

Although software reliability and software safety are closely related to one another, it is important to understand the subtle difference between them. Software reliability uses statistical analysis to determine the likelihood that a software failure will occur; however, the occurrence of a failure does not necessarily result in a hazard or mishap. Software safety examines the ways in which failures result in conditions that can lead to a mishap. That is, failures are not considered in a vacuum, but are evaluated in the context of an entire computer-based system.

2.12 THE SQA PLAN

The SQA plan provides a road map for instituting software quality assurance. Developed by the SQA group and the project team, the plan serves as a template for SQA activities that are instituted for each software project.

The structure of an SQA plan, as defined by ANSI/IEEE Standards 730-1984 and 983-1986, is shown below.

I. Purpose of Plan

II. References

III. Management

1. Organization

2. Tasks

3. Responsibilities

IV. Documentation

1. Purpose

2. Required software engineering documents

3. Other Documents

V. Standards, Practices and conventions

1. Purpose

2. Conventions

VI. Reviews and Audits

1. Purpose

2. Review requirements


a. Software requirements

b. Design reviews

c. Software V & V reviews

d. Functional Audits

e. Physical Audit

f. In-process Audits

g. Management reviews

VII. Test

VIII. Problem reporting and corrective action

IX. Tools, techniques and methodologies

X. Code Control

XI. Media Control

XII. Supplier Control

XIII. Record Collection, Maintenance, and Retention

XIV. Training

XV. Risk Management.

2.12.1 The ISO Approach to Quality Assurance Systems

ISO 9000 describes the elements of a quality assurance system in general terms. These elements include the organizational structure, procedures, processes, and resources needed to implement quality planning, quality control, quality assurance, and quality improvement. However, ISO 9000 does not describe how an organization should implement these quality system elements.

Consequently, the challenge lies in designing and implementing a quality assurance system that meets the standard and fits the company's products, services, and culture.

2.12.2 The ISO 9001 Standard

ISO 9001 is the quality assurance standard that applies to software engineering. The standard contains 20 requirements that must be present for an effective quality assurance system. Because the ISO 9001 standard is applicable in all engineering disciplines, a special set of ISO guidelines has been developed to help interpret the standard for use in the software process.

The 20 requirements delineated by ISO 9001 address the following topics:

1. Management responsibility

2. Quality system

3. Contract review

4. Design control

5. Document and data control

6. Purchasing

7. Control of customer supplied product

8. Product identification and traceability

9. Process control

10. Inspection and testing

11. Control of inspection, measuring, and test equipment

12. Inspection and test status

13. Control of nonconforming product

14. Corrective and preventive action

15. Handling, storage, packing, preservation, and delivery

16. Control of quality records

17. Internal quality audits

18. Training

19. Servicing

20. Statistical techniques

In order for a software organization to become registered to ISO 9001, it must establish policies and procedures to address each of the requirements noted above and then be able to demonstrate that these policies and procedures are being followed.


2.12.3 Capability Maturity Model (CMM)

The Capability Maturity Model for Software (also known as the CMM and SW-CMM) has been a model used by many organizations to identify best practices useful in helping them increase the maturity of their processes. It was developed by the software development community along with the Software Engineering Institute at Carnegie Mellon University under the direction of the US Department of Defense.

It is applicable to any size software company. Its five levels, as shown in Figure 2.6, provide a simple means to assess a company's software development maturity and determine the key practices it could adopt to move up to the next level of maturity.

Fig 2.6: The software capability maturity model is used to assess a software company's maturity at software development

Level 1: Initial. The software development processes at this level are ad hoc and often chaotic. There are no general practices for planning, monitoring, or controlling the process. The test process is just as ad hoc as the rest of the process.

Level 2: Repeatable. This maturity level is best described as project-level thinking. Basic project management processes are in place to track the cost, schedule, functionality, and quality of the project. Basic software testing practices, such as test plans and test cases, are used.

Level 3: Defined. Organizational, not just project-specific, thinking comes into play at this level. Common management and engineering activities are standardized and documented. These standards are adapted and approved for use in different projects. Test documents and plans are reviewed and approved before testing begins.

Level 4: Managed. At this maturity level, the organization's process is under statistical control. Product quality is specified quantitatively beforehand, and the software isn't released until that goal is met.

Level 5: Optimizing. This level continuously improves on Level 4. New technologies and processes are attempted, the results are measured, and both incremental and revolutionary changes are instituted to achieve even better quality levels.

Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was 100 software engineering/maintenance personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematical key process area was Software Quality Assurance.

QUESTIONS

1. Quality and reliability are related concepts, but are fundamentally different in a number of ways. Discuss them.

2. Can a program be correct and still not be reliable? Explain.

3. Can a program be correct and still not exhibit good quality? Explain.

4. Explain in more detail the review techniques adopted in Quality Assurance.


Chapter 3

Program Inspections, Walkthroughs and Reviews

3.1 LEARNING OBJECTIVES

You will learn about

l What is static testing and its importance in Software Testing.

l Guidelines to be followed during static testing

l Process involved in inspection and walkthroughs

l Various check lists to be followed while handling errors in Software Testing

l Review techniques

3.2 INTRODUCTION

The majority of the programming community worked under the assumption that programs are written solely for machine execution and are not intended to be read by people. The only way to test a program is by executing it on a machine. Weinberg built a convincing argument for why programs should be read by people, and indicated this could be an effective error-detection process.

Experience has shown that "human testing" techniques are quite effective in finding errors, so much so that one or more of these should be employed in every programming project. The methods discussed in this chapter are intended to be applied between the time that the program is coded and the time that computer-based testing begins. We discuss this based on two observations:


l It is generally recognized that the earlier errors are found, the lower the cost of correcting the errors and the higher the probability of correcting them correctly.

l Programmers seem to experience a psychological change when computer-based testing commences.

3.3 INSPECTIONS AND WALKTHROUGHS

Code inspections and walkthroughs are the two primary "human testing" methods. Both involve the reading or visual inspection of a program by a team of people, and both involve some preparatory work by the participants. Normally the work culminates in a meeting, typically known as a "meeting of the minds", a conference held by the participants. The objective of the meeting is to find errors, but not to find solutions to the errors (i.e. to test, but not to debug).

What is the process involved in inspections and walkthroughs?

The process is performed by a group of people (three or four), only one of whom is the author of the program. Hence the program is essentially being tested by people other than the author, which is in consonance with the testing principle stating that an individual is usually ineffective in testing his or her own program. Inspections and walkthroughs are far more effective than desk checking (the process of a programmer reading his/her own program before testing it) because people other than the program's author are involved in the process. These processes also appear to result in lower debugging (error correction) costs, since, when they find an error, the precise nature of the error is usually located. Also, they expose a batch of errors, thus allowing the errors to be corrected later en masse. Computer-based testing, on the other hand, normally exposes only a symptom of the error, and errors are usually detected and corrected one by one.

Some Observations:

l Experience with these methods has found them to be effective in finding from 30% to 70% of the logic design and coding errors in typical programs. They are not, however, effective in detecting "high-level" design errors, such as errors made in the requirements analysis process.

l Human processes find only the "easy" errors (those that would be trivial to find with computer-based testing), and the difficult, obscure, or tricky errors can only be found by computer-based testing.

l Inspections/walkthroughs and computer-based testing are complementary; error-detection efficiency will suffer if one or the other is not present.

l These processes are invaluable for testing modifications to programs, because modifying an existing program is a more error-prone process (in terms of errors per statement written) than writing a new program.


3.4 CODE INSPECTIONS

An inspection team usually consists of four people. One of the four people plays the role of moderator. The moderator is expected to be a competent programmer, but he/she is not the author of the program and need not be acquainted with the details of the program. The duties of the moderator include:

l Distributing materials for, and scheduling, inspection sessions

l Leading the session,

l Recording all errors found, and

l Ensuring that the errors are subsequently corrected.

Hence the moderator may be called a quality-control engineer. The remaining members usually consist of the program's designer and a test specialist.

The general procedure is that the moderator distributes the program's listing and design specification to the other participants well in advance of the inspection session. The participants are expected to familiarize themselves with the material prior to the session. During the inspection session, two main activities occur:

1. The programmer is requested to narrate, statement by statement, the logic of the program. During the discourse, questions are raised and pursued to determine if errors exist. Experience has shown that many of the errors discovered are actually found by the programmer, rather than the other team members, during the narration. In other words, the simple act of reading aloud one's program to an audience seems to be a remarkably effective error-detection technique.

2. The program is analyzed with respect to a checklist of historically common programming errors (such a checklist is discussed in the next section).

It is the moderator's responsibility to ensure the smooth conduct of the proceedings and that the participants focus their attention on finding errors, not correcting them.

After the session, the programmer is given a list of the errors found. The list of errors is also analyzed, categorized, and used to refine the error checklist to improve the effectiveness of future inspections.

The main benefits of this method are:

l Identifying errors early.

l The programmer usually receives feedback concerning his or her programming style and choice of algorithms and programming techniques.

l Other participants also gain in a similar way by being exposed to another programmer's errors and programming style.


l The inspection process is a way of identifying early the most error-prone sections of the program, thus allowing one to focus more attention on these sections during the computer-based testing processes.

3.5 AN ERROR CHECKLIST FOR INSPECTIONS

An important part of the inspection process is the use of a checklist to examine the program for common errors. The checklist is largely language-independent, as most of the errors can occur with any programming language.

Data-Reference Errors

1. Is a variable referenced whose value is unset or uninitialized? This is probably the most frequent programming error; it occurs in a wide variety of circumstances.

2. For all array references, is each subscript value within the defined bounds of the corresponding dimension?

3. For all array references, does each subscript have an integer value? This is not necessarily an error in all languages, but it is a dangerous practice.

4. For all references through pointer or reference variables, is the referenced storage currently allocated? This is known as the "dangling reference" problem. It occurs in situations where the lifetime of a pointer is greater than the lifetime of the referenced storage.

5. Are there any explicit or implicit addressing problems if, on the machine being used, the units of storage allocation are smaller than the units of storage addressability?

6. If a data structure is referenced in multiple procedures or subroutines, is the structure defined identically in each procedure?

7. When indexing into a string, are the limits of the string exceeded?

Data-Declaration Errors

1. Have all variables been explicitly declared? A failure to do so is not necessarily an error, but it is a common source of trouble.

2. If all attributes of a variable are not explicitly stated in the declaration, are the defaults well understood?

3. Where a variable is initialized in a declarative statement, is it properly initialized?

4. Is each variable assigned the correct length, type, and storage class?


5. Is the initialization of a variable consistent with its storage type?

Computation Errors

1. Are there any computations using variables having inconsistent (e.g. nonarithmetic) data types?

2. Are there any mixed mode computations?

3. Are there any computations using variables having the same data type but different lengths?

4. Is the target variable of an assignment smaller than the right-hand expression?

5. Is an overflow or underflow exception possible during the computation of an expression? That is, the end result may appear to have a valid value, but an intermediate result might be too big or too small for the machine's data representations.

6. Is it possible for the divisor in a division operation to be zero?

7. Where applicable, can the value of a variable go outside its meaningful range?

8. Are there any invalid uses of integer arithmetic, particularly division? For example, if I is an integer variable, whether the expression 2*I/2 is equal to I depends on whether I has an odd or an even value and whether the multiplication or division is performed first.

Comparison Errors

1. Are there any comparisons between variables having inconsistent data types (e.g. comparing a character string to an address)?

2. Are there any mixed-mode comparisons or comparisons between variables of different lengths? If so, ensure that the conversion rules are well understood.

3. Does each Boolean expression state what it is supposed to state? Programmers often make mistakes when writing logical expressions involving "and", "or", and "not".

4. Are the operands of a Boolean operator Boolean? Have comparison and Boolean operators been erroneously mixed together?

Control-Flow Errors

1. If the program contains a multiway branch (e.g. a computed GO TO in Fortran), can the index variable ever exceed the number of branch possibilities? For example, in the Fortran statement,

GOTO(200,300,400), I

Will I always have the value 1,2, or 3?


2. Will every loop eventually terminate? Devise an informal proof or argument showing that each loop will terminate.

3. Will the program, module, or subroutine eventually terminate?

4. Is it possible that, because of the conditions upon entry, a loop will never execute? If so, does this represent an oversight? For instance, for loops headed by the following statements:

DO WHILE (NOTFOUND)

DO I=X TO Z

What happens if NOTFOUND is initially false or if X is greater than Z?

5. Are there any non-exhaustive decisions? For instance, if an input parameter's expected values are 1, 2, or 3, does the logic assume that it must be 3 if it is not 1 or 2? If so, is the assumption valid?

Interface Errors

1. Does the number of parameters received by this module equal the number of arguments sent by each of the calling modules? Also, is the order correct?

2. Do the attributes (e.g. type and size) of each parameter match the attributes of each corresponding argument?

3. Does the number of arguments transmitted by this module to another module equal the number of parameters expected by that module?

4. Do the attributes of each argument transmitted to another module match the attributes of the corresponding parameter in that module?

5. If built-in functions are invoked, are the number, attributes, and order of the arguments correct?

6. Does the subroutine alter a parameter that is intended to be only an input value?

Input/Output Errors

1. If files are explicitly declared, are their attributes correct?

2. Are the attributes on the OPEN statement correct?

3. Is the size of the I/O area in storage equal to the record size?

4. Have all files been opened before use?

5. Are end-of-file conditions detected and handled correctly?


6. Are there spelling or grammatical errors in any text that is printed or displayed by the program?

3.6 WALKTHROUGHS

The code walkthrough, like the inspection, is a set of procedures and error-detection techniques for group code reading. It shares much in common with the inspection process, but the procedures are slightly different, and a different error-detection technique is employed.

The walkthrough is an uninterrupted meeting of one to two hours in duration. The walkthrough team consists of three to five people who play the roles of moderator, secretary (a person who records all errors found), tester, and programmer. It is suggested to have other participants such as:

l A highly experienced programmer,

l A programming-language expert,

l A new programmer (to give a fresh, unbiased outlook)

l The person who will eventually maintain the program,

l Someone from different project and

l Someone from the same programming team as the programmer.

The initial procedure is identical to that of the inspection process: the participants are given the materials several days in advance to allow them to study the program. However, the procedure in the meeting is different. Rather than simply reading the program or using error checklists, the participants "play computer". The person designated as the tester comes to the meeting armed with a small set of paper test cases - representative sets of inputs (and expected outputs) for the program or module. During the meeting, each test case is mentally executed. That is, the test data are walked through the logic of the program. The state of the program (i.e. the values of the variables) is monitored on paper or a blackboard.

The test cases must be simple and few in number, because people execute programs at a rate that is very slow compared to machines. In most walkthroughs, more errors are found during the process of questioning the programmer than are found directly by the test cases themselves.

QUESTIONS

1. Are code reviews relevant to software testing? Explain the process involved in a typical code review.

2. Explain the need for inspection and list the different types of code reviews.

3. Consider a program and perform a detailed review and list the review findings in detail.


CHAPTER 4

Test Case Design

4.1 LEARNING OBJECTIVES

You will learn about:

l Dynamic testing of Software Applications

l White box and black box testing

l Various techniques used in White box testing

l Various techniques used in black box testing

l Static program analysis

l Automation of testing process

4.2 INTRODUCTION

Software can be tested either by running the programs and verifying each step of their execution against expected results, or by statically examining the code or the document against its stated requirement or objective. In general, software testing can be divided into two categories, viz. static and dynamic testing. Static testing is non-execution-based testing and is carried out mostly by human effort. In static testing, we test the design, code, or any other document through inspections, walkthroughs, and reviews, as discussed in the earlier chapters. Many studies show that the single most cost-effective defect reduction process is the classic structural test: the code inspection or walkthrough. Code inspection is like proofreading, and it benefits developers by identifying typographical errors, logic errors, and deviations from the styles and standards normally followed.

Dynamic testing is an execution-based testing technique. The program must be executed to find possible errors. Here, the program, module, or the entire system is executed (run) and the output is verified against the expected result. Dynamic execution of tests is based on specifications of the program, code, and methodology.

4.3 WHITE BOX TESTING

This testing technique takes into account the internal structure of the system or component. The entire source code of the system must be available. This technique is known as white box testing because the complete internal structure and working of the code is available.

White box testing helps to derive test cases to ensure:

1. All independent paths are exercised at least once.

2. All logical decisions are exercised for both true and false paths.

3. All loops are executed at their boundaries and within operational bounds.

4. All internal data structures are exercised to ensure validity.

White box testing helps to:

l Traverse complicated loop structures

l Cover common data areas,

l Cover control structures and sub-routines,

l Evaluate different execution paths

l Test the module and integration of many modules

l Discover logical errors, if any.

l Understand the code

Why is white box testing used to test conformance to requirements?

l Logic errors and incorrect assumptions are most likely to be made when coding "special cases". We need to ensure these execution paths are tested.


l We may find assumptions about execution paths to be incorrect, and so make design errors. White box testing can find these errors.

l Typographical errors are random; they are just as likely to be on an obscure logical path as on a mainstream path.

“Bugs lurk in corners and congregate at boundaries”

4.4 BASIS PATH TESTING

Basis path testing is a testing mechanism proposed by McCabe. The aim is to derive a logical complexity measure of a procedural design and use this as a guide for defining a basis set of execution paths. Test cases which exercise the basis set will execute every statement at least once.

4.4.1 Flow Graph Notation

Flow graph notation helps to represent the various control structures of any programming language. The notations for representing control flow are shown in Fig 4.1.

Fig 4.1: Notations used for control structures

On a flow graph:

l Arrows called edges represent flow of control

l Circles called nodes represent one or more actions.

l Areas bounded by edges and nodes called regions.


l A predicate node is a node containing a condition

Any procedural design/program can be translated into a flow graph. Later, the flow graph can be analyzed for various paths within it.

Note that compound Boolean expressions at tests generate at least two predicate nodes and additional arcs.

Example:

Fig 4.2: Control flow of a program and the corresponding flow diagram

4.4.2 Cyclomatic Complexity

The cyclomatic complexity gives a quantitative measure of the logical complexity. This value gives the number of independent paths in the basis set, and an upper bound for the number of tests to ensure that each statement is executed at least once.

An independent path is any path through a program that introduces at least one new set of processing statements or a new condition (i.e., a new edge).


Fig 4.3: Sample program and corresponding flow diagram

In Fig 4.3, the statements are numbered and the corresponding nodes are numbered with the same numbers. The sample program contains one DO and three nested IF statements.

From the example we can observe that:

l Cyclomatic Complexity of 4 can be calculated as:

1. Number of regions of flow graph, which is 4.

2. #Edges - #Nodes + 2, which is 11-9+2=4.

3. #Predicate Nodes + 1, which is 3+1=4.

The above complexity provides the upper bound on the number of test cases to be generated, or independent execution paths, in the program. The independent paths (4 paths) for the program shown in Fig 4.3 are given below:

l Independent Paths:

1. 1, 8


2. 1, 2, 3, 7b, 1, 8

3. 1, 2, 4, 5, 7a, 7b, 1, 8

4. 1, 2, 4, 6, 7a, 7b, 1, 8

Cyclomatic complexity provides an upper bound for the number of tests required to guarantee coverage of all program statements.
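These relationships are easy to check mechanically. The following is a minimal sketch in Python; the edge list encodes the flow graph of Fig 4.3 as an illustrative assumption:

# Cyclomatic complexity V(G) of a flow graph, computed two ways.
edges = [
    (1, 2), (1, 8),            # node 1 is a predicate node (loop test)
    (2, 3), (2, 4),            # node 2 is a predicate node (first IF)
    (3, "7b"),
    (4, 5), (4, 6),            # node 4 is a predicate node (nested IF)
    (5, "7a"), (6, "7a"),
    ("7a", "7b"), ("7b", 1),
]

nodes = {n for edge in edges for n in edge}
out_degree = {}
for src, _ in edges:
    out_degree[src] = out_degree.get(src, 0) + 1

# A predicate node has more than one outgoing edge (it represents a decision).
predicates = [n for n, d in out_degree.items() if d > 1]

print(len(edges) - len(nodes) + 2)     # V(G) = E - N + 2  -> 4
print(len(predicates) + 1)             # V(G) = P + 1      -> 4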

4.4.3 Deriving Test Cases

Test cases are designed in many ways. The steps involved in test case design are:

1. Using the design or code, draw the corresponding flow graph.

2. Determine the cyclomatic complexity of the flow graph.

3. Determine a basis set of independent paths.

4. Prepare test cases that will force execution of each path in the basis set.

Note: some paths may only be able to be executed as part of another test.

4.4.4 Graph Matrices

Graph matrices can automate derivation of the flow graph and determination of a set of basis paths. Software tools to do this can use a graph matrix. A sample graph matrix is shown in Fig 4.4.

The graph matrix:

l Is a square matrix whose size (number of rows and columns) is equal to the number of nodes.

l Rows and columns of the matrix correspond to the nodes in the flow graph.

l Entries correspond to the edges.

The matrix can associate a number (a link weight) with each edge entry.

Use a link weight of 1 to calculate the cyclomatic complexity. The cyclomatic complexity is calculated as follows:

l For each row, sum column values and subtract 1.

l Sum these totals and add 1.


For the graph matrix of Fig 4.4, this gives 4.
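A minimal sketch of that row-sum computation; the connection matrix below is a small hypothetical example (not the matrix of Fig 4.4), with a 1 marking each edge:

# Cyclomatic complexity from a connection matrix: for each row that has at
# least one entry, sum the entries and subtract 1; sum those totals and add 1.
matrix = [
    [0, 1, 1, 0],   # node 1 -> nodes 2 and 3 (a decision)
    [0, 0, 0, 1],   # node 2 -> node 4
    [0, 0, 0, 1],   # node 3 -> node 4
    [0, 0, 0, 0],   # node 4 is the exit node
]

row_totals = [sum(row) - 1 for row in matrix if sum(row) > 0]
print(sum(row_totals) + 1)    # 2: one decision plus one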

Some other interesting link weights that can be represented in the graph matrix are:

l Probability that a link (edge) will be executed

l Processing time for traversal of a link

l Memory required during traversal of a link

l Resources required during traversal of a link

Fig 4.4: Example of a graph matrix

4.5 CONTROL STRUCTURE TESTING

In programs, conditions are very important, and testing such conditions is more complex than testing other statements like assignment and declarative statements. Basis path testing is one example of control structure testing. There are many ways in which control structures can be tested.


4.5.1 Condition Testing

Condition testing aims to exercise all logical conditions in a program module. Logical conditions may be simple or complex, and may be nested with many relational operations.

We can define:

l Relational expression: (E1 op E2), where E1 and E2 are arithmetic expressions and op is a relational operator. For example, (x+y) > (s/t), where x, y, s and t are variables.

l Simple condition: a Boolean variable or relational expression, possibly preceded by a NOT operator.

l Compound condition: composed of two or more simple conditions, Boolean operators, and parentheses, along with relational operators.

l Boolean expression: Condition without relational expressions.

Normally, errors in expressions can be due to one or more of the following:

l Boolean operator error

l Boolean variable error

l Boolean parenthesis error

l Relational operator error

l Arithmetic expression error

l Mismatch of types

Condition testing methods focus on testing each condition in the program, whatever its type. There are many strategies to identify errors.

Some of the strategies proposed include:

l Branch testing: Every branch is executed at least once.

l Domain Testing: Uses three or four tests for every relational operator depending on the complexity of the statement.

l Branch and relational operator testing: Uses condition constraints. Based on the complexity of the relational operators, many branches will be executed.

Example 1: C1 = B1 & B2

l where B1, B2 are Boolean conditions.


l Condition constraint of the form (D1, D2), where D1 and D2 can be true (t) or false (f).

l The branch and relational operator test requires the constraint set {(t,t), (f,t), (t,f)} to be covered by the execution of C1.

Coverage of the constraint set guarantees detection of relational operator errors.
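A minimal sketch of exercising that constraint set; the condition under test and the driver below are illustrative assumptions:

# Branch and relational operator testing for C1 = B1 & B2: run the condition
# against the constraint set {(t,t), (f,t), (t,f)}.
def c1(b1: bool, b2: bool) -> bool:
    # The condition under test; in a real module B1 and B2 would be Boolean
    # variables or relational expressions.
    return b1 and b2

constraint_set = [(True, True), (False, True), (True, False)]
for b1, b2 in constraint_set:
    print(f"B1={b1}, B2={b2} -> C1={c1(b1, b2)}")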

4.5.2 Data Flow Testing

First, a proper data flow diagram, like the control flow graph (see basis path testing), is drawn. Then test paths are selected according to the locations of definitions and uses of variables. Any variable that has been defined in a program passes through the following states:

D: define the variable, normally in the declarative section.

U: use the variable, which was defined earlier in the program.

K: kill the variable, another state of the variable at any time during the execution of the program.

Any variable that is part of the program will pass through these states. However, the sequence of states is important. The following state pairs indicate normal behavior or possible anomalies during program execution:

l DU: Normal,

l UK, UU: Normal,

l DD: Suspicious

l DK: Probable bug

l KD: Normal

l KK: Probable bug

l KU: bug

l UD: Normal

For example,

DU: Normal means a variable is defined first and then used in the program, which is the normal behavior of data flow in the program.

DK: Probable bug means a variable is defined and then killed before being used in the program. This may be a bug, since there is no reason to define and kill a variable without using it.
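These sequences can be checked mechanically. A minimal sketch, assuming the define/use/kill events of each variable have already been extracted into a string of D/U/K symbols:

# Scan a variable's lifetime, written as a string of D (define), U (use) and
# K (kill) events, and flag the anomalous adjacent pairs listed above.
ANOMALIES = {
    "DD": "suspicious (redefined without use)",
    "DK": "probable bug (defined, then killed without use)",
    "KK": "probable bug (killed twice)",
    "KU": "bug (used after kill)",
}

def check_data_flow(variable, events):
    for first, second in zip(events, events[1:]):
        pair = first + second
        if pair in ANOMALIES:
            print(f"{variable}: {pair} -> {ANOMALIES[pair]}")

check_data_flow("x", "DUUK")   # normal: nothing reported
check_data_flow("y", "DKDU")   # reports the DK anomaly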


4.5.3 Loop Testing

Loops are fundamental to many algorithms. Loops can be categorized as simple, concatenated, nested, and unstructured. Loops can be defined in many ways.

Examples:

Fig 4.5: Different types of loops

To test loops, the following guidelines may be followed (a sketch of simple-loop testing is given after the list):

l Simple Loops of size n:

q Skip loop entirely

q Only one pass through the loop

q Two passes through the loop

q m passes through loop where m<n.

q (n-1), n, and (n+1) passes through the loop. This helps in testing the boundary of the loops.

l Nested Loops

q Start with inner loop. Set all other loops to minimum values.

PDF created with pdfFactory Pro trial version www.pdffactory.com

Page 59: SOFTWARE QUALITY & TESTING - 123seminarsonly.com · MSIT 32 Software Quality and Testing 1 SOFTWARE QUALITY & TESTING (MSIT - 32): Contributing Author : Dr. B.N. Subraya Infosys Technologies

59MSIT 32 Software Quality and Testing

q Conduct simple loop testing on inner loop.

q Work outwards and take the next nested loop.

q Continue until all loops are tested.

l Concatenated Loops

q If independent loops, use simple loop testing.

q If dependent, treat as nested loops.

l Unstructured loops

q Don't test - redesign. Unstructured loops are a sign of poor design.
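As referenced above, here is a minimal sketch of simple-loop testing; the loop under test (summing at most the first count items of a list) and the chosen pass counts are illustrative assumptions:

# Simple-loop testing: exercise a loop with 0, 1, 2, m (< n), n-1, n and n+1 passes.
def sum_first(items, count):
    # Loop under test: sums at most `count` items.
    total = 0
    for i in range(min(count, len(items))):
        total += items[i]
    return total

n = 10                                  # maximum number of passes through the loop
data = list(range(1, n + 1))
for passes in [0, 1, 2, 5, n - 1, n, n + 1]:
    expected = sum(data[:min(passes, n)])
    assert sum_first(data, passes) == expected, f"failed at {passes} passes"
print("simple-loop boundary tests passed")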

4.6 BLACK BOX TESTING

Functional tests examine the observable behavior of software as evidenced by its outputs, without any reference to internal functions. This kind of testing takes the user's point of view; i.e., it is as if the user were testing the normal business functions.

l Black box tests normally determine the quality of the software. It is an advantage to create the quality criteria from this point of view from the beginning.

l In black box testing, software is subjected to a full range of inputs and the outputs are verified for their correctness. Here, the structure of the program is immaterial.

l The black box testing technique can be applied once unit and integration testing are completed.

l It focuses on functional requirements.

l It is a complement to white box testing.

The main objective of black box testing is to find:

1. Incorrect or missing functions

2. Interface errors

3. Errors in data structures or external database access

4. Performance errors

5. Initialization and termination errors.


Some of the techniques used for black box testing are discussed below:

4.6.1 Equivalence Partitioning

The main objective of this method is to partition the input domain so that an optimal set of input data is selected. The steps to be followed are:

1. Divide the input domain into classes of data for which test cases can be generated.

2. Attempt to uncover classes of errors, if any.

3. Identify both right and wrong input data while partitioning the data.

4. Test the program for all types of data.

The method is based on equivalence classes for input conditions. An equivalence class represents a set of valid or invalid states. An input condition is either a specific numeric value, a range of values, a set of related values, or a Boolean condition.

Equivalence classes can be defined by:

l If an input condition specifies a range or a specific value, one valid and two invalid equivalence classes are defined.

l If an input condition specifies a Boolean or a member of a set, one valid and one invalid equivalence class are defined.

Test cases for each input domain data item are developed and executed.

This method uses a smaller number of input data compared to exhaustive testing. However, data at the boundary values are not considered.

Though this method significantly reduces the number of input data to be tested, it does not test combinations of the input data.
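As a minimal sketch of the idea, assume a hypothetical module that accepts an age in the range 18 to 60; the classes and representative values are illustrative:

# Equivalence partitioning for a hypothetical "age" input accepted in [18, 60]:
# one valid class (18..60) and two invalid classes (< 18 and > 60).
def accepts_age(age):
    return 18 <= age <= 60             # module under test (illustrative)

equivalence_classes = {
    "valid: 18..60": (35, True),       # representative value, expected result
    "invalid: < 18": (10, False),
    "invalid: > 60": (75, False),
}

for name, (value, expected) in equivalence_classes.items():
    assert accepts_age(value) == expected, name
print("one test per equivalence class passed")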

4.6.2 Boundary Value Analysis

It is observed that boundary points of inputs are often not tested properly, and this leads to many errors. A large number of errors tend to occur at the boundaries of the input domain. Boundary Value Analysis (BVA) leads to the selection of test cases that exercise boundary values.


BVA complements equivalence partitioning: rather than selecting any element in an equivalence class, select those at the 'edge' of the class.

Examples:

1. For a range of values bounded by a and b, test (a-1), a, (a+1), (b-1), b, (b+1).

2. If input conditions specify a number of values n, test with (n-1), n and (n+1) input values.

3. Apply 1 and 2 to output conditions (e.g., generate table of minimum and maximum size).

4. If internal program data structures have boundaries (e.g., buffer size, table limits), use input data to exercise structures on their boundaries.

BVA and equivalence partitioning together help in testing programs and cover most of the conditions. These methods do not, however, test combinations of input conditions.
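A minimal sketch of generating the BVA values for a range bounded by a and b, reusing the hypothetical age check from the previous sketch:

# Boundary value analysis for an input range [a, b]: test (a-1), a, (a+1), (b-1), b, (b+1).
def boundary_values(a, b):
    return [a - 1, a, a + 1, b - 1, b, b + 1]

def accepts_age(age):
    return 18 <= age <= 60             # module under test (illustrative)

for value in boundary_values(18, 60):
    expected = 18 <= value <= 60
    assert accepts_age(value) == expected, f"boundary case {value} failed"
print("boundary value tests passed")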

4.6.3 Cause-Effect Graphing Techniques

Translation of natural language descriptions of procedures into software-based algorithms is error prone. An example, from the US Army Corps of Engineers:

Executive Order 10358 provides in the case of an employee whose work week varies from the normal Monday through Friday work week, that Labor Day and Thanksgiving Day each were to be observed on the next succeeding workday when the holiday fell on a day outside the employee's regular basic work week. Now, when Labor Day, Thanksgiving Day or any of the new Monday holidays are outside an employee's basic workweek, the immediately preceding workday will be his holiday when the non-workday on which the holiday falls is the second non-workday or the non-workday designated as the employee's day off in lieu of Saturday. When the non-workday on which the holiday falls is the first non-workday or the non-workday designated as the employee's day off in lieu of Sunday, the holiday observance is moved to the next succeeding workday.

How do you test code which attempts to implement this?

Cause-effect graphing attempts to provide a concise representation of logical combinations and corresponding actions.

1. Causes (input conditions) and effects (actions) are listed for a module, and an identifier is assigned to each.

2. A cause-effect graph is developed.

3. The graph is converted to a decision table.

4. Decision table rules are converted to test cases.
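A minimal sketch of the decision-table step, using a small hypothetical rule ("grant access if the badge is valid and either the door is unlocked or the user is an administrator"); the causes, effect and rules are illustrative assumptions:

from itertools import product

# Causes (input conditions) for a hypothetical access rule; the effect is "grant access".
causes = ["badge_valid", "door_unlocked", "is_admin"]

def effect(badge_valid, door_unlocked, is_admin):
    return badge_valid and (door_unlocked or is_admin)

# Build the decision table: one rule per combination of cause values.
decision_table = []
for combination in product([True, False], repeat=len(causes)):
    rule = dict(zip(causes, combination))
    rule["grant_access"] = effect(*combination)
    decision_table.append(rule)

# Each rule in the table becomes a test case: inputs plus expected effect.
for rule in decision_table:
    print(rule)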


Simplified symbology:

Fig 4.6: Representation of cause-effect nodes

4.6.4 Comparison Testing

l In some applications the reliability is critical.

l Redundant hardware and software may be used.

l For redundant software, use separate teams to develop independent versions of the software.

l Test each version with the same test data to ensure all versions provide identical output.

l Run all versions in parallel with a real-time comparison of results.

l Even if only one version will run in the final system, for some critical applications independent versions can be developed and checked with comparison testing or back-to-back testing.


l When the outputs of the versions differ, each is investigated to determine whether there is a defect.

l The method does not catch errors in the specification.
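
A rough sketch of back-to-back testing is shown below (Python). The two versions of the "largest element" function are hypothetical stand-ins for independently developed implementations; both are driven with the same randomly generated test data and any divergence is reported for investigation:

import random

# Two hypothetical, independently developed versions of the same specification:
# "return the largest element of a non-empty list".
def version_a(values):
    return max(values)

def version_b(values):
    largest = values[0]
    for v in values[1:]:
        if v > largest:
            largest = v
    return largest

# Run both versions against the same test data and flag any divergence;
# a difference means at least one version (or the test data) hides a defect.
for _ in range(1000):
    data = [random.randint(-100, 100) for _ in range(random.randint(1, 20))]
    out_a, out_b = version_a(data), version_b(data)
    if out_a != out_b:
        print("divergence found, investigate both versions:", data, out_a, out_b)
        break
else:
    print("no divergence observed on the generated test data")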

4.7 STATIC PROGRAM ANALYSIS

This strategy helps in identifying errors without executing the program. Peer reviewers and programmers use this strategy to uncover probable static errors.

4.7.1 Program Inspections

Program inspections were covered in the previous chapter.

4.7.2 Mathematical Program Verification

If the programming language semantics are formally defined, one can consider a program to be a set of mathematical statements. We can attempt to develop a mathematical proof that the program is correct with respect to its specification. If the proof can be established, the program is verified and testing to check the verification is not required.

There are a number of approaches to proving program correctness. We will consider only the axiomatic approach.

Suppose that at points P(1), ..., P(n) assertions concerning the program variables and their relationships can be made.

The assertions are a(1), ..., a(n).

The assertion a(1) is about the inputs to the program, and a(n) is about the outputs.

We can now attempt, for k between 1 and (n-1), to prove that the statements between P(k) and P(k+1) transform the assertion a(k) into a(k+1).

If a(1) holds and each of these proofs succeeds, then a(n) holds and the program is partially correct. If it can also be shown that the program terminates, the proof is complete.
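
The same idea can be illustrated informally with executable assertions. The sketch below (Python, a hypothetical integer-division example) places an assertion at the entry point, inside the loop, and at the exit point, mirroring a(1), ..., a(n); it is only an executable illustration, not a formal proof:

def integer_division(x, y):
    """Compute quotient and remainder of x divided by y using repeated subtraction."""
    # P(1): a(1) - precondition on the inputs.
    assert x >= 0 and y > 0

    q, r = 0, x
    # P(2): a(2) - loop invariant: x == q*y + r and r >= 0.
    assert x == q * y + r and r >= 0
    while r >= y:
        q, r = q + 1, r - y
        # The loop body preserves the invariant.
        assert x == q * y + r and r >= 0

    # P(n): a(n) - postcondition on the outputs.
    assert x == q * y + r and 0 <= r < y
    return q, r

print(integer_division(17, 5))   # (3, 2)

Termination follows because r decreases by y on every iteration, which completes the argument for total correctness.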

Note: Students are requested to note this as an introductory section.


4.7.3 Static Program Analysers

Static analysis tools scan the source code to try to detect errors; the code does not need to be executed. They are most useful for languages that do not have strong typing.

Static analysers can check for:

1. Syntax

2. Unreachable code

3. Unconditional branches into loops

4. Undeclared variables

5. Uninitialised variables

6. Parameter type mismatches

7. Uncalled functions and procedures

8. Variables used before initialization

9. Non-usage of function results

10. Possible array bound errors

11. Misuse of pointers
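
As a small illustration of how such a tool works, the sketch below uses Python's standard ast module to scan a source fragment without executing it, flagging unreachable code after a return statement and functions that are defined but never called. The source string and the two checks are illustrative only; a production analyser would cover far more of the faults listed above:

import ast

SOURCE = """
def f(x):
    return x * 2
    print("never reached")   # unreachable code

def unused_helper():         # defined but never called
    pass

print(f(3))
"""

tree = ast.parse(SOURCE)

# Check 1: statements that follow a 'return' in the same block are unreachable.
for node in ast.walk(tree):
    body = getattr(node, "body", None)
    if isinstance(body, list):
        for stmt, following in zip(body, body[1:]):
            if isinstance(stmt, ast.Return):
                print(f"unreachable code at line {following.lineno}")

# Check 2: functions that are defined but never called.
defined = {n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}
called = {n.func.id for n in ast.walk(tree)
          if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)}
for name in sorted(defined - called):
    print(f"function '{name}' is never called")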

4.8 AUTOMATED TESTING TOOLS

Test automation is a state-of-the-art technique in which a number of tools help in testing a program automatically. Programmers can use these tools to test their programs and ensure their quality. A number of tools are available in the market; some of the tools which help the programmer are:

1. Static analyser

2. Code Auditors

3. Assertion processors

4. Test file generators

5. Test Data Generators


6. Test Verifiers

7. Output comparators.

The programmer can select a tool depending on the complexity of the program.

QUESTIONS

1. What is black box testing? Explain.

2. What different techniques are available to conduct black box testing?

3. Explain different methods available in white box testing with examples.


Chapter 5

Testing for Specialized Environments

5.1 LEARNING OBJECTIVES

You will learn about:

l Concept of Graphic User Interface (GUI) Testing,

l GUI checklists for windows, data entry and related activities

l Testing Client/Server Architecture

l Testing documentation and help facilities

5.2 INTRODUCTION

The need for specialized testing approaches is becoming mandatory as computer software becomes more complex. White-box and black-box testing methods are applicable across all environments, architectures and applications, but unique guidelines and approaches to testing are sometimes important. Here we address testing guidelines for specialized environments, architectures, and applications that are commonly encountered by software engineers.

5.3 TESTING GUIS

The growth of Graphical User Interfaces (GUIs) in various applications has become a challenge for test engineers. Because of the reusable components provided as part of GUI development environments, the creation of the user interface has become less time consuming and more precise. A GUI is becoming mandatory for any application, as users are accustomed to it. Sometimes the user interface may be treated as a separate layer, easily separated from the traditional functional or business layer; the design and development of the user interface layer then require their own design and development methodology. The main problem here is to understand user psychology during development. Due to the complexity of GUIs, testing and generating test cases have become more complex and tedious.

Because of modern GUI standards (the same look and feel), common tests can be derived.

What guidelines should be followed to help create a series of generic tests for GUIs?

The guidelines can be categorized by the operations they cover. Some of them are discussed below:

For windows:

l Will the window open properly based on related typed or menu-based commands?

l Can the window be resized, moved, scrolled?

l Does the window properly regenerate when it is overwritten and then recalled?

l Are all functions that relate to the window available when needed?

l Are all functions that relate to the window operational?

l Are all relevant pull-down menus, tool bars, scroll bars, dialog boxes, buttons, icons, and other controls available and properly represented?

l Is the active window properly highlighted?

l Do multiple or incorrect mouse picks within the window cause unexpected side effects?

l Are audio and/or color prompts within the window, or as a consequence of window operations, presented according to specification?

l Does the window properly close?

For pull-down menus and mouse operations:

l Is the appropriate menu bar displayed in the appropriate context?

l Does the application menu bar display system-related features (e.g., a clock display)?


l Do pull-down operations work properly?

l Do breakaway menus, palettes, and tool bars work properly?

l Are all menu functions and pull-down sub-functions properly listed?

l Are all menu functions properly addressable by the mouse?

l Are text typeface, size, and format correct?

l Is it possible to invoke each menu function using its alternative text-based command?

l Are menu functions highlighted (or grayed-out) based on the context of current operations within a window?

l Does each menu function perform as advertised?

l Are the names of menu functions self-explanatory?

l Is help available for each menu item, and is it context sensitive?

l Are mouse operations properly recognized throughout the interactive context?

l If multiple clicks are required, are they properly recognized in context?

l If the mouse has multiple buttons, are they properly recognized in context?

l Do the cursor, processing indicator (e.g., an hour glass or clock), and pointer properly change as different operations are invoked?

Data entry:

l Is alphanumeric data entry properly echoed and input to the system?

l Do graphical modes of data entry (e.g., a slide bar) work properly?

l Is invalid data properly recognized?

l Are data input messages intelligible?

l Are basic, standard validations on each data item applied during data entry itself?

l Once the data has been entered completely, if a correction is needed to a specific item, does the system require the entire data to be entered again?

l Are mouse clicks properly used?


l Are help buttons available during data entry?

In addition to the above guidelines, finite state modeling graphs may be used to derive a series of tests that address specific data and program objects that are relevant to the GUI.

5.4 TESTING OF CLIENT/SERVER ARCHITECTURES

Client/server architectures represent a significant challenge for software testers. The distributed nature of client/server environments, the performance issues associated with transaction processing, the potential presence of a number of different hardware platforms, the complexities of network communication, the need to service multiple clients from a centralized (or in some cases distributed) database, and the coordination requirements imposed on the server all combine to make testing of C/S architectures and the software that resides within them considerably more difficult than testing standalone applications. In fact, recent industry studies indicate a significant increase in testing time and cost when C/S environments are developed.

5.5 TESTING DOCUMENTATION AND HELP FACILITIES

Errors in documentation can be as devastating to the acceptance of the program as errors in data or source code. You may have experienced following the user guide and getting results or behaviors that do not coincide with those predicted by the document. For this reason, documentation testing should be a meaningful part of every software test plan.

Documentation testing can be approached in two phases. The first phase, formal technical review, examines the document for editorial clarity. The second phase, live test, uses the documentation in conjunction with the actual program.

Some of the guidelines are discussed here:

l Does the documentation accurately describe how to accomplish each mode of use?

l Is the description of each interaction sequence accurate?

l Are examples accurate and context based?

l Are terminology, menu descriptions, and system responses consistent with the actual program?

l Is it relatively easy to locate guidance within the documentation?

l Can troubleshooting be accomplished easily with the documentation?

l Are the document table of contents and index accurate and complete?

PDF created with pdfFactory Pro trial version www.pdffactory.com

Page 70: SOFTWARE QUALITY & TESTING - 123seminarsonly.com · MSIT 32 Software Quality and Testing 1 SOFTWARE QUALITY & TESTING (MSIT - 32): Contributing Author : Dr. B.N. Subraya Infosys Technologies

70

l Is the design of the document (layout, typefaces, indentation, graphics) conducive to understanding and quick assimilation of information?

l Are all error messages displayed for the user described in more detail in the document?

l If hypertext links are used, are they accurate and complete?

The only viable way to answer these questions is to have an independent third party test the documentation in the context of program usage. All discrepancies are noted, and areas of document ambiguity or weakness are identified for potential rewrite.

QUESTIONS

1. Explain the need for GUI testing and its complexity.

2. List the guidelines a typical tester requires during GUI testing.

3. Select your own GUI-based software system and test the GUI-related functions by using the guidelines listed in this chapter.


Chapter 6

Software Testing Strategies

6.1. LEARNING OBJECTIVES

You will learn about:

l Various testing Strategies in Software Testing

l Basic concept of Verification and Validation

l Criteria for Completion of Testing

l Unit Testing

l Integration Testing

l Validation Testing

l System Testing

l Debugging Process

6.2. INTRODUCTION

A strategy for software testing integrates software test case design methods into a well-planned series of steps that result in the successful construction of software. Just as important, a software testing strategy provides a road map for the software developer, the quality assurance organization, and the customer - a road map that describes the steps to be conducted as part of testing, when these steps are planned and then undertaken, and how much effort, time, and resources will be required. Therefore any testing strategy must incorporate test planning, test case design, test execution, and resultant data collection and evaluation.

A software testing strategy should be flexible enough to promote the creativity and customization that are necessary to adequately test all large software-based systems. At the same time, the strategy must be rigid enough to promote reasonable planning and management tracking as the project progresses. Shooman suggests these issues:

In many ways, testing is an individualistic process, and the number of different types of tests varies as much as the different development approaches. For many years, our only defense against programming errors was careful design and the native intelligence of the programmer. We are now in an era in which modern design techniques are helping us to reduce the number of initial errors that are inherent in the code. Similarly, different test methods are beginning to cluster themselves into several distinct approaches and philosophies.

These approaches and philosophies are what we shall call strategy.

Different test methods begin to cluster into several distinct approaches and philosophies, and this is what is called a strategy.

A testing strategy incorporates:

l Test planning

l Test case design

l Test execution, and

l Resultant data collection and evaluation

l A software testing strategy should be flexible enough to promote the creativity and customization that are necessary to adequately test all large software-based systems.

l At the same time, the strategy must be rigid enough to promote reasonable planning and management tracking as the project progresses.

Types of Testing

The level of test is the primary focus of a system and derives from the way a software system is designed and built up. Conventionally this is known as the “V” model, which maps the types of test to each stage of development.


Component Testing

Starting from the bottom, the first test level is “Component Testing”, sometimes called Unit Testing. It involves checking that each feature specified in the “Component Design” has been implemented in the component.

In theory an independent tester should do this, but in practice the developer usually does it, as they are the only people who understand how a component works. The problem with a component is that it performs only a small part of the functionality of a system, and it relies on co-operating with other parts of the system, which may not have been built yet. To overcome this, the developer either builds, or uses, special software to trick the component into believing it is working in a fully functional system.

Interface Testing

As the components are constructed and tested, they are then linked together to check whether they work with each other. It is quite common that two components which have each passed all their tests, when connected to each other, produce one new component full of faults. These tests can be done by specialists, or by the developers.

Interface Testing is not focussed on what the components are doing but on how they communicate with each other, as specified in the “System Design”. The “System Design” defines relationships between components, and this involves stating:

l What a component can expect from another component in terms of services.


l How these services will be asked for.

l How they will be given.

l How to handle non-standard conditions, i.e. errors.

Tests are constructed to deal with each of these.

The tests are organised to check all the interfaces, until all the components have been built and interfaced to each other, producing the whole system.

System Testing

Once the entire system has been built, it has to be tested against the “System Specification” to check whether it delivers the features required. It is still developer focussed, although specialist developers known as systems testers are normally employed to do it.

In essence System Testing is not about checking the individual parts of the design, but about checking the system as a whole. In effect it is one giant component.

System testing can involve a number of specialist types of test to see if all the functional and non-functional requirements have been met. In addition to functional requirements, these may include the following types of testing for the non-functional requirements:

l Performance - Are the performance criteria met?

l Volume - Can large volumes of information be handled?

l Stress - Can peak volumes of information be handled?

l Documentation - Is the documentation usable for the system?

l Robustness - Does the system remain stable under adverse circumstances?

There are many others, the needs for which are dictated by how the system is supposed to perform.

Acceptance Testing

Acceptance Testing checks the system against the “Requirements”. It is similar to systems testing in that the whole system is checked, but the important difference is the change in focus:

l Systems Testing checks that the system that was specified has been delivered.

l Acceptance Testing checks that the system delivers what was requested.

The customer, and not the developer, should always do acceptance testing. The customer knows what is required from the system to achieve value in the business and is the only person qualified to make that judgement.


The forms of the tests may follow those in system testing, but at all times they are informed by the business needs.

Release Testing

Even if a system meets all its requirements, there is still a case to be answered that it will benefit the business. The link between the “Business Case” and Release Testing is looser than for the other levels, but it is still important.

Release Testing is about seeing whether the new or changed system will work in the existing business environment. Mainly this means the technical environment, and it checks concerns such as:

l Does it affect any other systems running on the hardware?

l Is it compatible with other systems?

l Does it have acceptable performance under load?

These tests are usually run by the computer operations team in a business. The answers to their questions could have a significant financial impact if new computer hardware should be required, and could adversely affect the “Business Case”.

It would appear obvious that the operations team should be involved right from the start of a project to give their opinion of the impact a new system may have. They could then make sure the “Business Case” is relatively sound, at least from the capital expenditure and ongoing running cost aspects. However, in practice many operations teams only find out about a project just weeks before it is supposed to go live, which can result in major problems.

6.3 A STRATEGIC APPROACH TO SOFTWARE TESTING

Testing activity can be planned and conducted systematically; hence, very specific test case design methods can be defined that serve as templates.

A number of software testing strategies have been proposed in the literature. All provide the software developer with a template for testing, and all have the following generic characteristics:

l Testing begins at the module level (or at the class or object level in object-oriented systems) and works outward toward the integration of the entire computer-based system.

l Different techniques are appropriate at different points in time

l Testing is conducted by the developer of the software and, for large projects, by an independent test group.

l Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.


What should a strategy provide?

l A strategy for software testing must accommodate low-level tests that are necessary to verify that a small source code segment has been correctly implemented, as well as high-level tests that validate major customer requirements.

l A strategy must provide guidance for the practitioner and a set of milestones for the manager. Because the steps of the test strategy occur at a time when deadline pressure begins to rise, progress must be measurable and problems must surface.

6.4 VERIFICATION AND VALIDATION

Software testing is one element of a broader topic that is often referred to as verification and validation (V&V). Verification refers to the set of activities that ensure that software correctly implements a specific function. Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.

l Verification refers to the set of activities that ensure that the software correctly implements a specific function.

l Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.

Boehm states it like this:

Verification: “Are we building the product right?”

Validation: “Are we building the right product?”

Fig 6.1: Achieving Software Quality (software engineering methods, formal technical reviews, standards and procedures, SCM and SQA, testing, and measurement all contribute to quality)


Fig 6.1 shows that the application of methods and tools, effective formal technical reviews, and solid management and measurement all lead to quality that is confirmed during testing.

Testing provides the last bastion from which quality can be assessed and, more pragmatically, errors can be uncovered.

However, testing should not be viewed as a safety net. Quality cannot be tested in: if it is not there before you begin testing, it will not be there when you finish testing. Quality is incorporated throughout the software process.

Note:

It is important to note that V&V encompass a wide array of SQA activities that include formal technical reviews, quality and configuration audits, performance monitoring, simulation, feasibility study, documentation review, database review, algorithm analysis, development testing, qualification testing and installation testing.

Although testing plays an extremely important role in V&V, many other activities are also necessary.

6.5 ORGANIZING FOR SOFTWARE TESTING

The software developer is always responsible for testing the individual units (modules) of the program, ensuring that each performs the function for which it was designed. In many cases, the developer also conducts integration testing - a testing step that leads to the construction of the complete program structure. Only after the software architecture is complete does an independent test group (ITG) become involved.

The role of an ITG is to remove the inherent problems associated with letting the builder test the thing that has been built. Independent testing removes the conflict of interest that may otherwise be present. After all, personnel in the ITG team are paid to find errors.

However, the software developer does not turn the program over to the ITG and walk away. The developer and the ITG work closely throughout a software project to ensure that thorough tests will be conducted. While testing is conducted, the developer must be available to correct errors that are uncovered.

The ITG is part of the software development project team in the sense that it becomes involved during the specification process and stays involved (planning and specifying test procedures) throughout a large project.

However, in many cases the ITG reports to the SQA organization, thereby achieving a degree of independence that might not be possible if it were a part of the software development organization.


6.6 A SOFTWARE TESTING STRATEGY

The software engineering process may be viewed as a spiral, as illustrated in Figure 6.2. Initially, system engineering defines the role of software and leads to software requirements analysis, where the information domain, function, behavior, performance, constraints, and validation criteria for software are established. Moving inward along the spiral, we come to design and finally to coding.

To develop computer software, we spiral in along streamlines that decrease the level of abstraction on each turn.

Figure 6.2: Testing Strategy

The strategy for software testing may also be viewed in the context of the spiral.

Unit testing begins at the vertex of the spiral and concentrates on each unit of the software as implemented in source code. Testing progresses by moving outward along the spiral to integration testing, where the focus is on design and the construction of the software architecture. Taking another turn outward on the spiral, we encounter validation testing, where requirements established as part of software requirements analysis are validated against the software that has been constructed. Finally, we arrive at system testing, where the software and other system elements are tested as a whole.

To test computer software, we spiral out along streamlines that broaden the scope of testing with each turn.



Considering the process from a procedural point of view, testing within the context of software engineering is a series of four steps that are implemented sequentially.

The steps are shown in Figure 5.3. Initially, tests focus on each module individually, assuring that it functions properly as a unit; hence the name unit testing. Unit testing makes heavy use of white-box testing techniques, exercising specific paths in a module's control structure to ensure complete coverage and maximum error detection. Next, modules must be assembled or integrated to form the complete software package. Integration testing addresses the issues associated with the dual problems of verification and program construction. Black-box test case design techniques are the most prevalent during integration, although a limited amount of white-box testing may be used to ensure coverage of major control paths. After the software has been integrated (constructed), sets of high-order tests are conducted. Validation criteria (established during requirements analysis) must be tested. Validation testing provides final assurance that the software meets all functional, behavioral and performance requirements. Black-box testing techniques are used exclusively during validation.

The last high-order testing step falls outside the boundary of software engineering and into the broader context of computer system engineering. Software, once validated, must be combined with other system elements (e.g., hardware, people, and databases). System testing verifies that all elements mesh properly and that overall system function/performance is achieved.

Figure 5.3: Software Testing Steps



Criteria for completion of testing

Using statistical modeling and software reliability theory, models of software failures (uncovered during testing) as a function of execution time can be developed.

A version of the failure model, called the logarithmic Poisson execution-time model, takes the form:

f(t) = (1/p) ln(λ0 p t + 1)     (1)

Where

f(t) = the cumulative number of failures that are expected to occur once the software has been tested for a certain amount of execution time t,

λ0 = the initial software failure intensity (failures per unit time) at the beginning of testing,

p = the exponential reduction in failure intensity as errors are uncovered and repairs are made.

The instantaneous failure intensity, λ(t), can be derived by taking the derivative of f(t):

λ(t) = λ0 / (λ0 p t + 1)     (2)

Using the relationship noted in equation (2), testers can predict the drop-off of errors as testing progresses. The actual error intensity can be plotted against the predicted curve, as in Figure 5.4. If the actual data gathered during testing and the logarithmic Poisson execution-time model are reasonably close to one another over a number of data points, the model can be used to predict the total testing time required to achieve an acceptably low failure intensity.

Figure 5.4: Failure intensity as a function of execution time (failures per test hour, starting at λ0, plotted against execution time; the predicted failure intensity curve λ(t) is compared with the data collected during testing)
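
A small numerical sketch of this prediction is shown below (Python). The parameter values λ0 = 25 failures per test hour and p = 0.05 are purely hypothetical; in practice both would be fitted to the failure data collected during testing:

import math

# Hypothetical model parameters fitted from testing data:
lam0 = 25.0   # initial failure intensity (failures per test hour)
p = 0.05      # exponential reduction in failure intensity per failure

def cumulative_failures(t):
    """f(t) = (1/p) * ln(lam0 * p * t + 1): expected failures after t test hours."""
    return (1.0 / p) * math.log(lam0 * p * t + 1.0)

def failure_intensity(t):
    """lambda(t) = lam0 / (lam0 * p * t + 1): instantaneous failure intensity."""
    return lam0 / (lam0 * p * t + 1.0)

# Predict how long to test before the intensity drops to an acceptable level.
target = 0.5  # acceptable failures per test hour
t = 0.0
while failure_intensity(t) > target:
    t += 1.0
print(f"predicted {t:.0f} test hours, ~{cumulative_failures(t):.0f} failures uncovered")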


6.7 STRATEGIC ISSUES

The following issues must be addressed if a successful software testing strategy is to be implemented:

l Specify product requirements in a quantifiable manner long before testing commences. Although the overriding objective of testing is to find errors, a good testing strategy also assesses other quality characteristics such as portability, maintainability and usability. These should be specified in a way that is measurable so that testing results are unambiguous.

l State testing objectives explicitly. The specific objectives of testing should be stated in measurable terms; for example, test effectiveness, test coverage, mean time to failure, the cost to find and fix defects, remaining defect density or frequency of occurrence, and test work-hours per regression test should all be stated within the test plan.

l Understand the users of the software and develop a profile for each user category. Use cases, which describe the interaction scenario for each class of user, can reduce overall testing effort by focusing testing on actual use of the product.

l Develop a testing plan that emphasizes “rapid cycle testing”. The feedback generated from the rapid cycle tests can be used to control quality levels and corresponding test strategies.

l Build “robust” software that is designed to test itself. Software should be designed in a manner that uses antibugging techniques; that is, software should be capable of diagnosing certain classes of errors. In addition, the design should accommodate automated testing and regression testing.

l Use effective formal technical reviews as a filter prior to testing. Formal technical reviews can be as effective as testing in uncovering errors. For this reason, reviews can reduce the amount of testing effort that is required to produce high-quality software.

l Conduct formal technical reviews to assess the test strategy and test cases themselves. Formal technical reviews can uncover inconsistencies, omissions, and outright errors in the testing approach. This saves time and improves product quality.

l Develop a continuous improvement approach for the testing process. The test strategy should be measured. The metrics collected during testing should be used as part of a statistical process control approach for software testing.

6.8 UNIT TESTING

Unit testing focuses verification efforts on the smallest unit of software design: the module. Using the procedural design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The relative complexity of the tests, and of the errors they uncover, is limited by the constrained scope established for unit testing. The unit test is normally white-box oriented, and the step can be conducted in parallel for multiple modules.

6.8.1 Unit test considerations

The tests that occur as part of unit testing are illustrated schematically in figure 6.5.

The module interface is tested to ensure that information properly flows into and out of the program unit under test. The local data structure is examined to ensure that the data stored temporarily maintains its integrity during all steps in an algorithm's execution.

Boundary conditions are tested to ensure that the module operates properly at the boundaries established to limit or restrict processing. All independent paths through the control structure are exercised to ensure that all statements in a module have been executed at least once. And finally, all error-handling paths are tested.

Tests of data flow across a module interface are required before any other test is initiated. If data do not enter and exit properly, all other tests are doubtful.

Figure 6.5: Unit Test (test cases exercise the module's interface, local data structures, boundary conditions, independent paths and error-handling paths)


6.8.2 Checklist for interface tests

1. Number of input parameters equals the number of arguments.

2. Parameter and argument attributes match.

3. Parameter and argument unit systems match.

4. Number of arguments transmitted to called modules equal to number of parameters.

5. Attributes of arguments transmitted to called modules equal to attributes of parameters.

6. Unit system of arguments transmitted to call modules equal to unit system of parameters.

7. Number attributes and order of arguments to built-in functions correct.

8. Any references to parameters not associated with current point of entry.

9. Input-only arguments altered.

10. Global variable definitions consistent across modules.

11. Constraints passed as arguments.

When a module performs external I/O, the following additional interface tests must be conducted:

1. File attributes correct?

2. Open/Close statements correct?

3. Format specification matches I/O statements?

4. Buffer size matches record size?

5. Files opened before use?

6. End-of-File conditions handled?

7. I/O errors handled?

8. Any textual errors in output information?

The local data structure of a module is a common source of errors. Test cases should be designed to uncover errors in the following categories:

1. Improper or inconsistent typing

2. Erroneous initialization or default values


3. Incorrect variable names

4. Inconsistent data types

5. Underflow, overflow, and addressing exceptions

In addition to local data structures, the impact of global data on a module should be ascertained during unit testing.

Selective testing of execution paths is an essential task during the unit test. Test cases should be designed to uncover errors due to erroneous computations, incorrect comparisons, or improper control flow. Basis path and loop testing are effective techniques for uncovering a broad array of path errors.

Among the more common errors in computation are:

1. Misunderstood or incorrect arithmetic precedence

2. Mixed mode operations

3. Incorrect initialization

4. Precision inaccuracy

5. Incorrect symbolic representation of an expression

Comparison and control flows are closely coupled to one another.

Test cases should uncover errors like:

1. Comparison of different data types

2. Incorrect logical operators or precedence

3. Expectation of equality when precision error makes equality unlikely

4. Incorrect comparison of variables

5. Improper or non-existent loop termination.

6. Failure to exit when divergent iteration is encountered

7. Improperly modified loop variables.

Good design dictates that error conditions be anticipated and error-handling paths set up to reroute or cleanly terminate processing when an error does occur.

Among the potential errors that should be tested when error handling is evaluated are:

1. Error description is unintelligible


2. Error noted does not correspond to error encountered

3. Error condition causes system intervention prior to error handling

4. Exception-condition processing is incorrect

5. Error description does not provide enough information to assist in locating the cause of the error.

Boundary testing is the last task of the unit test step. Software often fails at its boundaries. That is, errors often occur when the nth element of an n-dimensional array is processed, when the ith repetition of a loop with i passes is invoked, or when the maximum or minimum allowable value is encountered. Test cases that exercise data structures, control flow and data values just below, at, and just above maxima and minima are very likely to uncover errors.

6.8.3 Unit test procedures

Unit testing is normally considered an adjunct to the coding step. After source-level code has been developed, reviewed, and verified for correct syntax, unit test case design begins. A review of design information provides guidance for establishing test cases that are likely to uncover errors in each of the categories discussed above. Each test case should be coupled with a set of expected results.

Because a module is not a standalone program, driver and/or stub software must be developed for each unit test. The unit test environment is illustrated in figure 5.6. In most applications a driver is nothing more than a “main program” that accepts test case data, passes such data to the module under test, and prints the relevant results. Stubs serve to replace modules that are subordinate to the module to be tested. A stub, or “dummy subprogram”, uses the subordinate module's interface, may do minimal data manipulation, prints verification of entry, and returns.

Drivers and stubs represent overhead. That is, both are software that must be developed but that is not delivered with the final software product. If drivers and stubs are kept simple, actual overhead is relatively low. Unfortunately, many modules cannot be adequately unit tested with “simple” overhead software. In such cases, complete testing can be postponed until the integration test step (where drivers or stubs are also used).

Unit testing is simplified when a module with high cohesion is designed. When a module addresses only one function, the number of test cases is reduced and errors can be more easily predicted and uncovered.
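
A minimal sketch of a driver and a stub is shown below, using Python's standard unittest and unittest.mock modules. The module under test (compute_discounted_price) and its subordinate module (lookup_base_price) are hypothetical; the test class plays the role of the driver, and the patched mock plays the role of the stub:

import unittest
from unittest import mock

# Hypothetical subordinate module: not built yet, so it is replaced by a stub.
def lookup_base_price(item_code):
    raise NotImplementedError

def compute_discounted_price(item_code, rate):
    """Hypothetical unit under test: applies a discount rate to the looked-up base price."""
    if not (0.0 <= rate <= 1.0):
        raise ValueError("rate out of range")
    return lookup_base_price(item_code) * (1.0 - rate)

class ComputeDiscountedPriceTest(unittest.TestCase):
    """The test class acts as the driver; the patched mock acts as the stub."""

    @mock.patch(__name__ + ".lookup_base_price", return_value=100.0)
    def test_nominal_rate(self, stub):
        self.assertEqual(compute_discounted_price("X1", 0.25), 75.0)

    def test_invalid_rate_is_rejected(self):
        with self.assertRaises(ValueError):
            compute_discounted_price("X1", 1.5)

if __name__ == "__main__":
    unittest.main()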


Figure 5.6: Unit Test Environment (a driver passes test cases to the module to be tested, stubs replace its subordinate modules, and the results are collected; the tests cover the interface, local data structures, boundary conditions, independent paths and error-handling paths)

6.9 INTEGRATION TESTING

Integration testing is a systematic technique for constructing the program structure while conducting tests to uncover errors associated with interfacing.

The objective is to take unit-tested modules and build a program structure that has been dictated by design.

6.9.1 Different Integration Strategies

There is often a tendency to attempt non-incremental integration; that is, to construct the program using a “big bang” approach. All modules are combined in advance and the entire program is tested as a whole. And chaos usually results! A set of errors is encountered. Correction is difficult, because the isolation of causes is complicated by the vast expanse of the entire program. Once these errors are corrected, new ones appear and the process continues in a seemingly endless loop.

Incremental integration is the antithesis of the big bang approach. The program is constructed and tested in small segments, where errors are easier to isolate and correct, interfaces are more likely to be tested completely, and a systematic test approach may be applied. We discuss some of the incremental methods here.

6.9.2 Top-down integration

Top-down integration is an incremental approach to the construction of the program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module.

The integration process is performed in a series of five steps:

1. The main control module is used as a test driver, and stubs are substituted for all modules directly subordinate to the main control module.

2. Depending on the integration approach selected (i.e., depth-first or breadth-first), subordinate stubs are replaced one at a time with actual modules.

3. Tests are conducted as each module is integrated.

4. On completion of each set of tests, another stub is replaced with the real module.

5. Regression testing may be conducted to ensure that new errors have not been introduced.

The process continues from step 2 until the entire program structure is built.
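
The following small sketch (Python) walks through these steps for a hypothetical two-level hierarchy: the main control module is first exercised with stubs standing in for its subordinates, and then one stub is replaced by a real module and the tests are re-run as a regression check. The module names and behaviour are invented for illustration:

# Hypothetical hierarchy: main_control calls the subordinates validate and persist.
def validate_stub(record):
    """Stub: minimal behaviour standing in for the real validation module."""
    return True

def persist_stub(record):
    """Stub: verifies entry only; no real storage exists yet."""
    print("persist called with", record)
    return "stub-id"

def main_control(record, validate=validate_stub, persist=persist_stub):
    """Top-level module under test; subordinates are injected so each stub
    can later be replaced by the real module, one at a time."""
    if not validate(record):
        return None
    return persist(record)

# Steps 1-3: test the main control module with both stubs in place.
assert main_control({"name": "x"}) == "stub-id"

# Steps 4-5: replace one stub with a real module and re-run the tests (regression).
def real_validate(record):
    return bool(record.get("name"))

assert main_control({"name": "x"}, validate=real_validate) == "stub-id"
assert main_control({}, validate=real_validate) is None
print("top-down integration steps passed")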

The top-down strategy sounds relatively uncomplicated, but in practice logistical problems arise. The most common of these problems occurs when processing at low levels in the hierarchy is required to adequately test upper levels. Stubs replace low-level modules at the beginning of top-down testing; therefore, no significant data can flow upward in the program structure.

The tester is left with three choices:

1. Delay many tests until stubs are replaced with actual modules.

2. Develop stubs that perform limited functions that simulate the actual module.

3. Integrate the software from the bottom of the hierarchy upward.

The first approach causes us to lose some control over the correspondence between specific tests and the incorporation of specific modules. This can lead to difficulty in determining the cause of errors and tends to violate the highly constrained nature of the top-down approach. The second approach is workable but can lead to significant overhead, as stubs become increasingly complex. The third approach is discussed in the next section.

6.9.3 Bottom-Up Integration

Modules are integrated from the bottom to the top. In this approach, the processing required for modules subordinate to a given level is always available, and the need for stubs is eliminated.

A bottom-up integration strategy may be implemented with the following steps:

1. Low-level modules are combined into clusters that perform a specific software subfunction.

2. A driver is written to coordinate test case input and output.

3. The cluster is tested.

4. Drivers are removed and clusters are combined moving upward in the program structure.

As integration moves upward, the need for separate test drivers lessens. In fact, if the top two levels of the program structure are integrated top-down, the number of drivers can be reduced substantially and the integration of clusters is greatly simplified.
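
A minimal sketch of steps 1 to 3 is shown below (Python). The "pricing" cluster, its two low-level modules and the expected values are hypothetical; the point is that a throw-away driver coordinates the test case input and output for the cluster until it is integrated into its parent:

# Hypothetical low-level cluster: two modules combined into a "pricing" cluster.
def net_price(gross, tax_rate):
    return gross / (1.0 + tax_rate)

def apply_rebate(price, rebate):
    return max(price - rebate, 0.0)

def pricing_cluster(gross, tax_rate, rebate):
    return apply_rebate(net_price(gross, tax_rate), rebate)

def driver():
    """Driver: feeds test case data to the cluster and checks the output.
    It is discarded once the cluster is integrated into its parent module."""
    cases = [
        # (gross, tax_rate, rebate, expected)
        (110.0, 0.10, 0.0, 100.0),
        (110.0, 0.10, 120.0, 0.0),
    ]
    for gross, tax, rebate, expected in cases:
        result = pricing_cluster(gross, tax, rebate)
        assert abs(result - expected) < 1e-9, (gross, tax, rebate, result)
    print("pricing cluster passed its driver tests")

driver()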

6.9.4 Regression Testing

Each time a new module is added as part of integration testing, the software changes.

New data flow paths are established, new I/O may occur, and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly. In the context of an integration test strategy, regression testing is the re-execution of a subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects.

Regression testing is the activity that helps to ensure that changes do not introduce unintended behavior or additional errors.

How is regression testing conducted?

Regression testing may be conducted manually, by re-executing a subset of all test cases, or by using automated capture-playback tools.

Capture-playback tools enable the software engineer to capture test cases and results for subsequent playback and comparison.
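
The sketch below gives a rough idea of the capture-playback principle (Python). The function under test, the input set and the baseline file name are hypothetical; the first run captures the reviewed results as a baseline, and later runs replay the same inputs and report any output that no longer matches:

import json, os

def function_under_test(x):
    """Hypothetical function whose behaviour must not regress."""
    return x * x + 1

BASELINE = "regression_baseline.json"   # hypothetical capture file
test_inputs = list(range(-5, 6))

results = {str(x): function_under_test(x) for x in test_inputs}

if not os.path.exists(BASELINE):
    # Capture phase: record the current (reviewed) results as the baseline.
    with open(BASELINE, "w") as f:
        json.dump(results, f, indent=2)
    print("baseline captured")
else:
    # Playback phase: compare the new results against the captured baseline.
    with open(BASELINE) as f:
        baseline = json.load(f)
    regressions = {k: (baseline.get(k), v) for k, v in results.items()
                   if baseline.get(k) != v}
    print("regressions:", regressions or "none")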


The regression test suite contains three different classes of test cases:

1. A representative sample of tests that will exercise all software functions.

2. Additional tests that focus on software functions that are likely to be affected by the change.

3. Tests that focus on software components that have been changed.

Note:

It is impractical and inefficient to re-execute every test for every program function once a change has occurred.

Selection of an integration strategy depends upon software characteristics and sometimes the project schedule. In general, a combined approach that uses a top-down strategy for the upper levels of the program structure, coupled with a bottom-up strategy for the subordinate levels, may be the best compromise.

Regression tests should focus on critical module functions.

What is a critical module?

A critical module has one or more of the following characteristics:

l Addresses several software requirements

l Has a high level of control

l Is complex or error-prone

l Has a definite performance requirement.

6.9.5 Integration Test Documentation

An overall plan for the integration of the software and a description of specific tests are documented in a test specification. The specification is a deliverable in the software engineering process and becomes part of the software configuration.

Test Specification Outline

I. Scope of testing

II. Test Plan

1. Test phases and builds

2. Schedule


3. Overhead software

4. Environment and resources

III. Test Procedures

1. Order of integration

q Purpose

q Modules to be tested

2. Unit test for modules in build

q Description of test for module n

q Overhead software description

q Expected results

3. Test environment

q Special tools or techniques

q Overhead software description

4. Test case data

5. Expected results for build

IV. Actual Test Results

V. References

VI. Appendices

The following criteria and corresponding tests are applied in all test phases.

Interface integrity. Internal and external interfaces are tested as each module is incorporated into the structure.

Functional validity. Tests designed to uncover functional errors are conducted.

Information content. Tests designed to uncover errors associated with local or global data structures are conducted.

Performance. Tests designed to verify performance bounds established during software design are conducted.


A schedule for integration, overhead software, and related topics is also discussed as part of the “Test Plan” section. Start and end dates for each phase are established, and availability windows for unit-tested modules are defined. A brief description of overhead software (stubs and drivers) concentrates on characteristics that might require special effort. Finally, test environments and resources are described.

6.10 VALIDATION TESTING

At the culmination of integration testing, the software is completely assembled as a package. Interfacing errors have been uncovered and corrected, and a final series of software tests - validation testing - may begin.

Validation can be defined in many ways, but a simple definition is that validation succeeds when the software functions in a manner that can be reasonably expected by the customer.

What are reasonable expectations?

Reasonable expectations are defined in the software requirements specification - a document that describes all user-visible attributes of the software. The specification contains a section titled “Validation Criteria”.

Information contained in that section forms the basis for a validation testing approach.

6.10.1 Validation Test Criteria

A test plan outlines the classes of tests to be conducted, and a test procedure defines the specific test cases that will be used in an attempt to uncover errors in conformity with the requirements. Both the plan and the procedure are designed to ensure that:

l All functional requirements are satisfied

l All performance requirements are achieved

l Documentation is correct and human-engineered, and

l Other requirements like portability, error recovery, and maintainability are met.

6.10.2 Configuration Review

An important element of the validation process is the configuration review. The intent of the review is to ensure that all elements of the software configuration have been properly developed, are catalogued, and have the necessary detail to support the maintenance phase of the software life cycle.


6.10.3 Alpha and Beta Testing

If software is developed as a product to be used by many customers, it is impractical to perform formal acceptance tests with each one. Most software product builders use a process called alpha and beta testing to uncover errors that only the end user seems able to find.

The alpha test is conducted at the developer's site by a customer. The software is used in a natural setting, with the developer “looking over the shoulder” of the user and recording errors and usage problems. Alpha tests are conducted in a controlled environment.

The beta test is conducted at one or more customer sites by the end user(s) of the software. Unlike alpha testing, the developer is generally not present; therefore the beta test is a “live” application of the software in an environment that cannot be controlled by the developer.

The customer records all problems (real or imagined) that are encountered during beta testing and reports these to the developer at regular intervals. As a result of problems reported during the beta test, the software developer makes modifications and then prepares for the release of the software product to the entire customer base.

6.11 SYSTEM TESTINGSystem testing is actually a series of different tests whose primary purpose is to fully exercise the

computer-based system. Although each test has a different purpose, all work to verify that all systemelements have been properly integrated and perform allocated functions.

A classic system testing problem is “finger pointing”. This occurs when an error is uncovered, andeach system element developer blames the other for the problem. Rather than indulging in such nonsense,the software engineer should anticipate potential interfacing problems and 1) design error-handling pathsthat test all information coming from other elements of the system; 2) conduct a series of tests thatsimulate bad data or other potential errors at the software interface;3) record the results of tests to use as“evidence” if finger pointing does occur; and 4) participate in planning and design of system tests toensure that software is adequately tested.

In the sections that follow, we discuss the types of system tests that are worthwhile for software-based systems.

6.11.1 Recovery Testing

Many computer-based systems must recover from faults and resume processing within a pre-specified time. In some cases, a system must be fault tolerant; that is, processing faults must not cause overall system function to cease. In other cases, a system failure must be corrected within a specified period of time or severe economic damage will occur.

Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic (performed by the system itself), re-initialization, checkpointing mechanisms, data recovery, and restart are each evaluated for correctness. If recovery requires human intervention, the mean time to repair is evaluated to determine whether it is within acceptable limits.
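As a minimal sketch of the idea (not part of the original text; the process_items function and checkpoint file are hypothetical), a recovery test can inject a fault and then verify that processing resumes correctly from the last checkpoint:

    import json, os, tempfile

    CHECKPOINT = os.path.join(tempfile.gettempdir(), "recovery_demo.ckpt")

    def process_items(items, fail_at=None):
        """Process items one by one, checkpointing progress after each item."""
        start = 0
        if os.path.exists(CHECKPOINT):                    # resume from the last checkpoint
            with open(CHECKPOINT) as f:
                start = json.load(f)["next"]
        results = []
        for i in range(start, len(items)):
            if fail_at == i:
                raise RuntimeError("injected fault")      # forced failure for the test
            results.append(items[i] * 2)
            with open(CHECKPOINT, "w") as f:              # checkpoint after each item
                json.dump({"next": i + 1}, f)
        return results

    # Recovery test: force a failure part-way through, then verify the restart resumes
    if os.path.exists(CHECKPOINT):
        os.remove(CHECKPOINT)
    items = [1, 2, 3, 4]
    try:
        process_items(items, fail_at=2)                   # fails after items 0 and 1
    except RuntimeError:
        pass
    assert process_items(items) == [6, 8], "restart did not resume from the checkpoint"
    print("recovery test passed")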

6.11.2 Debugging

Software testing is a process that can be systematically planned and specified. Test case design can be conducted, a strategy can be defined, and results can be evaluated against prescribed expectations.

Debugging occurs as a consequence of successful testing. That is, when a test case uncovers an error, debugging is the process that results in the removal of the error. Debugging is not testing, but it always occurs as a consequence of testing, as shown in Figure 6.1.

6.11.3 The Debugging Process

The debugging process begins with the execution of a test case. As shown in Figure 6.1, results are assessed and a lack of correspondence between expected and actual results is encountered. In many cases, the non-corresponding data is a symptom of an underlying cause as yet hidden. The debugging process attempts to match symptom with cause, thereby leading to error correction.

The debugging process will always have two outcomes:

1. The cause will be found, corrected, and removed

2. The cause will not be found.

In the latter case, the person performing debugging may suspect a cause, design a test case to help validate his/her suspicion, and work toward error correction in an iterative fashion.


Figure 6.1: Debugging Process (elements: test cases, execution of cases, results, debugging, suspected causes, additional tests, regression tests, identified causes, corrections)

Why is debugging so difficult?

Some characteristics of bugs provide clues:

1. The symptom and the cause may be geographically remote. That is, the symptom may appear in one part of a program, while the cause may actually be located at a site that is far removed. Highly coupled program structures exacerbate this situation.

2. The symptom may disappear (temporarily) when another error is corrected.

3. The symptom may actually be caused by non-errors (e.g., round-off inaccuracies).

4. The symptom may be caused by human error that is not easily traced.

5. The symptom may be a result of timing problems, rather than processing problems.

6. It may be difficult to accurately reproduce input conditions (e.g., a real-time application in which input ordering is indeterminate).

7. The symptom may be due to causes that are distributed across a number of tasks running on different processors.

6.11.4 Debugging Approaches

In general, three categories of debugging approaches may be proposed:

• Brute force

• Backtracking

• Cause elimination

The brute force category of debugging is probably the most common, and least efficient, method for isolating the cause of a software error. Brute force debugging methods are applied when all other methods fail. Using a "let the computer find the error" philosophy, memory dumps are taken, run-time traces are invoked, and the program is loaded with WRITE statements, in the hope that the mass of information produced yields a clue that leads to the cause of the error.
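A minimal sketch of this style (not from the original text; the average function and its defect are invented for illustration) is to load the code with trace output, the modern equivalent of WRITE statements:

    import logging

    logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")

    def average(values):
        total = 0
        for v in values:
            total += v
            logging.debug("after adding %r, total=%r", v, total)   # trace, like a WRITE statement
        logging.debug("dividing by %d", len(values) - 1)            # dump of the suspect value
        return total / (len(values) - 1)                            # defect: should divide by len(values)

    print(average([2, 4, 6]))   # prints 6.0; the trace shows the divisor is 2, not 3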

Backtracking is a common debugging approach that can be used successfully in small programs. Beginning at the site where a symptom has been uncovered, the source code is traced backward (manually) until the site of the cause is found. This process becomes impractical as the number of source lines grows.

Cause elimination is manifested by induction or deduction and introduces the concept of binary partitioning. Data related to the error occurrence are organized to isolate potential causes.

Alternatively, a list of all possible causes is developed and tests are conducted to eliminate each.

If initial tests indicate that a particular cause hypothesis shows promise, the data are refined in an attempt to isolate the bug.
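A hypothetical sketch of binary partitioning over the failing input data (not from the original text; run_test is a stand-in for executing the program on a subset of its data):

    def run_test(records):
        """Stand-in for running the program on a subset of data; fails on a 'bad' record."""
        return all(r != "bad" for r in records)      # True means the test passes

    def isolate_failure(records):
        """Halve the data set repeatedly, keeping the half that still fails."""
        while len(records) > 1:
            mid = len(records) // 2
            first, second = records[:mid], records[mid:]
            records = first if not run_test(first) else second
        return records

    data = ["ok"] * 7 + ["bad"] + ["ok"] * 8
    print(isolate_failure(data))   # -> ['bad']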

6.12 SUMMARY

• Software testing accounts for the largest percentage of technical effort in the software process. Yet, we are only beginning to understand the subtleties of systematic test planning, execution, and control.

• The objective of software testing is to uncover errors.

• To fulfill this objective, a series of test steps (unit, integration, validation, and system tests) are planned and executed.


• Unit and integration tests concentrate on functional verification of a module and incorporation of modules into a program structure.

• Validation testing demonstrates traceability to software requirements, and

• System testing validates software once it has been incorporated into a larger system.

• Each test step is accomplished through a series of systematic test techniques that assist in the design of test cases. With each testing step, the level of abstraction with which software is considered is broadened.

• Unlike testing, debugging must be viewed as an art. Beginning with a symptomatic indication of a problem, the debugging activity tracks down the cause of an error. Of the many resources available during debugging, the most valuable is the counsel of other software engineers.

• The requirement for higher-quality software demands a more systematic approach to testing.

QUESTIONS

1. What is the difference between Verification and Validation? Explain in your own words.

2. Explain unit test method with the help of your own example.

3. Develop an integration testing strategy for any system that you have implemented already. List the problems encountered during this process.

4. What is validation test? Explain.

REFERENCES

1. Software Engineering, A practitioner’s Approach, Fourth Edition, by Roger S. Pressman, McGraw Hill.

2. Effective Methods of Testing, by William Perry, Wiley.

3. The Art of Software Testing, by Glenford J. Myers, John Wiley & Sons.


Chapter 7

Testing of Web Based Applications

7.1 INTRODUCTION

Currently, the Web is the most popular and fastest growing information system deployed on the Internet, accounting for more than 80% of its traffic.

To date, web-based applications deserve a high level of all the software quality characteristics defined in the ISO standards, namely:

Functionality: Verified content of the web site must be ensured, as well as fitness for the intended purpose.

Reliability: Security and availability are of utmost importance, especially for applications that require trusted transactions or that must exclude the possibility that information is tampered with.

Efficiency: Response times are one of the success criteria for on-line services.

Usability: High user satisfaction is the basis for success.

Portability: Platform independence must be ensured at the client level.

Maintainability: The high evolution speed of services requires that applications can be evolved very quickly.

7.2 TESTING OF WEB-BASED APPLICATIONS: TECHNICAL PECULIARITIES

The following are the peculiarities of web-based applications:


• Web-based applications consist to a large degree of components written by somebody else and "integrated" together with application software;

• The user interface is often more complex than in many GUI-based client-server applications;

• Performance behavior is largely unpredictable and depends on many factors which are not under the control of the developers;

• Security threats can come from anywhere;

• We do not have only HTML but also Perl, Java, VRML, etc.;

• Browser compatibility is mandatory but is made difficult by layers and multiple platforms;

• Reference platforms are brand new and are being changed constantly;

• Interoperability issues are magnified, and thorough testing requires substantial investments in software and hardware.

7.3 TESTING OF STATIC WEB-BASED APPLICATIONS

Test levels of static web sites are briefly summarized as follows:

Basic correctness/adherence to standards and guidelines

This refers to syntax, stylistic, and lexical testing that aims to check the basic correctness of web sites.

The common types of problems found in static web applications are:

• Syntax problems

• Stylistic problems

• Lexical problems.

User interaction

User interaction covers:

• Links

• Fast loading

• Compatibility

• Usability testing


The following guidelines provide the necessary steps to test links (a simple automated check is sketched after these guidelines):

• Ensure that all hyperlinks are valid and keep doing so, by means of continuous checks when the site is operational;

• Consistency-check internal and external links, as well as anchors;

• Internal links shall be relative, to minimize overhead and faults when the web site is moved to the production environment;

• External links can change without control; thus automated regression testing shall be promoted;

• Avoid links that require parameter passing;

• Check that content can be accessed by means of a search engine, site map, and navigation structure.
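A minimal sketch of such an automated link check, using only Python's standard library (the starting URL is a placeholder, and real sites may need more robust handling):

    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    from urllib.request import Request, urlopen

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                href = dict(attrs).get("href")
                if href:
                    self.links.append(href)

    def check_links(page_url):
        page = urlopen(Request(page_url)).read().decode("utf-8", "replace")
        collector = LinkCollector()
        collector.feed(page)
        site = urlparse(page_url).netloc
        for href in collector.links:
            absolute = urljoin(page_url, href)
            if urlparse(href).netloc == site:
                print("WARNING: internal link should be relative:", href)
            try:
                status = urlopen(Request(absolute)).getcode()
            except Exception as exc:                      # broken or unreachable link
                status = exc
            print(status, absolute)

    check_links("http://www.example.com/")                # placeholder starting page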

Fast Loading Testing

The following aspects need to be looked into:

• Fast loading is concerned with aspects such as the presence in web pages of a fast-loading abstract/index and the presence of width and height attributes for the IMG tag.

• Fast loading testing is very important if we consider that 85% of web users indicate slow loading times as the reason for avoiding further visits to web sites.

• Two approaches have to be followed:

  - Reduce the size of the transferred data
  - Optimise rendering and HTTP management

• The following rules related to page weight should be established in support of fast loading testing (a small automated check is sketched after this list):

  - Home page weight should be less than a specified size (e.g., 45K)
  - Every page weight should be less than a specified size (e.g., 50K)
  - "Graphical sugar" pictures should be less than 3K (e.g., bullet headers)
  - Keep table text size to a minimum
  - To reach greater sizes, use multiple tables separated one from the other
  - Avoid nested tables
  - Minimize pictures within tables and always specify width and height
  - Every page should contain text or other information before the first <TABLE> tag.
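A minimal sketch of an automated check for two of these rules, page weight and IMG width/height attributes (illustrative only; the URL and the 50K limit are placeholders):

    from html.parser import HTMLParser
    from urllib.request import urlopen

    PAGE_LIMIT = 50 * 1024              # example threshold from the rules above (50K)

    class ImgChecker(HTMLParser):
        def __init__(self):
            super().__init__()
            self.missing = []
        def handle_starttag(self, tag, attrs):
            if tag == "img":
                a = dict(attrs)
                if "width" not in a or "height" not in a:
                    self.missing.append(a.get("src", "<no src>"))

    def check_page(url):
        body = urlopen(url).read()
        verdict = "OK" if len(body) <= PAGE_LIMIT else "over limit"
        print(f"page weight: {len(body)} bytes ({verdict})")
        checker = ImgChecker()
        checker.feed(body.decode("utf-8", "replace"))
        for src in checker.missing:
            print("IMG without width/height:", src)

    check_page("http://www.example.com/")   # placeholder page under test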

Compatibility testing

Compatibility testing concerns cross-browser compatibility, which checks the site behaviour across industry-standard browsers and their recent versions. It checks that pages conform to W3C standards for HTML and other languages. It checks the site behaviour for Java applets and ActiveX controls. Cross-platform compatibility checks the site's behaviour across industry-standard desktop hardware and operating systems.

Usability Testing

Usability testing refers to coherence of look and feel, navigational aids, user interaction, and printing. These aspects must be tested with respect to normal behaviour, destructive behaviour, and inexperienced users.

Structural Aspects

This includes both portability and integrity topics. All filenames on the server side must be in lowercase.

Links to URLs outside the web site must be in canonical form, and links to URLs within the web site must be in relative form.

Moreover, it must be checked that every directory has an index page, that every anchor points to an existing page, and that there are no limbo pages.
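Assuming a local copy of the site's document root is available (the "./site" path below is a placeholder), some of these structural checks can be automated with a short script; this is an illustrative sketch, not part of the original text:

    import os

    def check_structure(doc_root):
        problems = []
        for dirpath, dirnames, filenames in os.walk(doc_root):
            # every directory must have an index page
            if not any(f.startswith("index.") for f in filenames):
                problems.append(f"missing index page: {dirpath}")
            # server-side filenames must be lowercase
            for name in filenames:
                if name != name.lower():
                    problems.append(f"not lowercase: {os.path.join(dirpath, name)}")
        return problems

    for p in check_structure("./site"):     # placeholder document root
        print(p)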

7.4 TESTING OF DYNAMIC WEB-BASED APPLICATIONS

Integration/Security Testing

This includes web-specific issues such as components, proxies, and caching.

A peculiar aspect of the testing of dynamic WWW applications is the testing of cookies. The web is a memoryless system, with no concept of a session. To overcome this, you can use cookies: a small piece of information sent by a web server to be stored on a web browser so it can later be read back from the browser. The problems related to testing are that cookies expire and that users can disable them in the browser.
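An illustrative sketch (the URL is hypothetical) of exercising an application both with cookies accepted and with cookies effectively disabled, so that expiry and degraded behaviour can be checked:

    import http.cookiejar
    import urllib.request

    URL = "http://www.example.com/"        # placeholder page that sets a session cookie

    # Client that accepts cookies: the server's cookie should be stored and sent back
    jar = http.cookiejar.CookieJar()
    accepting = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
    accepting.open(URL)
    print("cookies received:", [(c.name, c.expires) for c in jar])   # inspect expiry too

    # Client that never stores cookies, simulating a user who has disabled them
    refusing = urllib.request.build_opener()
    print("status without cookies:", refusing.open(URL).getcode())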

Besides the basic security checking performed during the previous test level, specific security testing has to be performed when web applications make use of sensitive data.

Aspects to be covered by security testing include: password security and authentication; encryption of business transactions on the WWW (SSL); encryption of e-mails (PGP - Pretty Good Privacy); and firewalls, routers, and proxy servers.


Stress testing

Stress testing is designed to confront programs with abnormal situations. In essence, the tester who performs stress testing asks: "How high can we crank this up before it fails?"

Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume.

For example:

1. Special tests may be designed that generate 10 interrupts per second, when 1 or 2 is the average rate.

2. Input data rates may be increased by an order of magnitude to determine how input functions will respond.

3. Test cases that require maximum memory or other resources may be executed.

4. Test cases that may cause thrashing in a virtual operating system may be designed.

5. Test cases that may cause excessive hunting for disk-resident data may be created.

Essentially the tester attempts to break the program.

A variation of stress testing is a technique called sensitivity testing. In some situations, a very small range of data contained within the bounds of valid data for a program may cause extreme and even erroneous processing or profound performance degradation. This situation is analogous to a singularity in a mathematical function.

Sensitivity testing attempts to uncover data combinations within valid input classes that may cause instability or improper processing.
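The "crank it up until it fails" idea can be sketched as follows (illustrative only; handle_request is a stand-in for the component under test, and the time budget is arbitrary):

    import time

    def handle_request(payload):
        """Stand-in for the component under test."""
        return sum(payload)

    def stress(budget_seconds=1.0):
        payload = list(range(100))
        rate = 10                                    # roughly the normal request rate
        while True:
            start = time.perf_counter()
            try:
                for _ in range(rate):
                    handle_request(payload)
            except Exception as exc:
                print(f"failed at {rate} requests: {exc}")
                return
            elapsed = time.perf_counter() - start
            print(f"{rate} requests handled in {elapsed:.3f}s")
            if elapsed > budget_seconds:
                print(f"time budget exceeded at {rate} requests")
                return
            rate *= 10                               # crank the load up by an order of magnitude

    stress()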

Performance Testing

Performance testing is designed to test run-time performance; it occurs throughout all steps in the testing process. Even at the unit level, the performance of an individual module may be assessed as white-box tests are conducted. However, it is not until all system elements are fully integrated that the true performance of a system can be ascertained.

Performance tests are often coupled with stress testing and often require both hardware and software instrumentation. That is, it is often necessary to measure resource utilization in an exacting fashion. External instrumentation can monitor execution intervals, log events as they occur, and sample machine states on a regular basis. By instrumenting a system, the tester can uncover situations that lead to degradation and possible system failure.
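By way of illustration only (the render_page function below is a stand-in, not from the original text), a simple form of software instrumentation records execution intervals and summarizes them:

    import statistics
    import time

    def render_page(n):
        """Stand-in for the operation whose run-time performance is measured."""
        return sorted(range(n, 0, -1))

    timings = []
    for _ in range(200):
        start = time.perf_counter()
        render_page(5000)
        timings.append(time.perf_counter() - start)

    timings.sort()
    print(f"mean   {statistics.mean(timings) * 1000:.2f} ms")
    print(f"median {statistics.median(timings) * 1000:.2f} ms")
    print(f"p95    {timings[int(len(timings) * 0.95)] * 1000:.2f} ms")   # simple 95th percentile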


7.5 FUTURE CHALLENGES

Future challenges are related to the evolution of the WWW, namely:

• HTTP-NG (new generation HTTP protocol)

• Integration between TV and Web

• Emergence of XML

• New references for user interfaces, like the evolution of HTML:

  - MathML (for publishing math)
  - SMIL (for multimedia presentation)
  - SVG (for publishing diagrams and vector-based graphics)

• Privacy issues

• Digital signatures

• Micropayments

These and other emerging technologies and services will require Internet testing approaches to be continually fine-tuned, to guarantee the reliability and quality of service.

QUESTIONS

1. What are the Technical Peculiarities in Web Site Testing?

2. What are the different levels of static web site testing?

3. What is Stress Testing?

4. What is fast load testing?

5. What is integrity and security checking?


Chapter 8

Test Process Model

8.0 NEED FOR TEST PROCESS MODEL

The standard model for process assessment and improvement includes the main processes for the test process. As a starting point for all process improvement activities, a more detailed process model is needed.

When looking at the software engineering process, a lot of models are available. They are documented and distributed, and their use and tailoring needs are widely known and discussed. The test process, in contrast, is often characterized by trial-and-error implementation by people who are mainly experienced in software engineering and not in the testing area. In addition, the test process models are not well documented and available to those responsible for QA in the companies.

Based on the experience of the problems known from BOOTSTRAP assessments in many companies, and on more than 15 years of experience in organizing and performing testing in a wide range of companies, a standard test process model was defined by SQS and integrated into the standard BOOTSTRAP assessment method.


8.1 TEST PROCESS CLUSTER

Fig 8.1: Test Process Cluster

The test process cluster is explained in the following sections:

8.1.1 Organisational Process

Test Process Definition

The purpose of the test process definition process is to establish the procedures for a stable and repeatable test process and to document these in a way apt for use.

Human Resource Management

The purpose of the human resource management process is to provide the organization and projects with individuals for testing who possess the skill and knowledge to perform their roles effectively.

Infrastructure management

The purpose of the infrastructure management process is to provide a stable and up-to-date environment with apt methods and tools for the software test, and to provide the testing staff with an environment for work.

The processes of this cluster deal with the framework that allows an efficient test to be performed.


8.1.2 Management Process

The test management process mainly consists of managing tests, quality, and risks.

Test management

The purpose of the test management process is to define the necessary processes for coordinating and managing a test project and the appropriate resources for testing a software product.

Quality and test strategy

The purpose of the quality and test strategy process is to identify the appropriate strategy to ensure the quality of the products and services of a project to satisfy the customer needs.

Risk Management

The purpose of the risk management process is to continuously identify and mitigate the project risks throughout the life cycle of the project. The process involves establishing a focus on the management of risks at both the project and the organizational level.

The processes of this cluster deal with the management of the test project, with a focus on the appropriate organization of the process and a permanent watch on potential risks.

8.1.3 Support Processes

Test Documentation

The purpose of the test documentation process is to establish a documentation system that assures the documentation of the test preparation and test protocols.

Configuration Management

The purpose of the configuration management process is to provide a mechanism for identifying, controlling, and tracking the versions of all work products of a test project or process.

Error and Change Management

The purpose of the error and change management process is to ensure that all deviations from the requirements are removed, that all changes of the requirements are analyzed and managed, and that trends are identified.

Joint Reviews

The purpose of the joint reviews process is to maintain a common understanding with the customer of the progress against reaching the customer's goals, and of what should be done to help ensure the development of a product that satisfies the customer.

The processes in this cluster deal with the supporting activities that provide techniques and infrastructure for a successful test project.

8.1.4 Operative Test Processes

To implement a well-organized test process, it is necessary to distinguish between test activities in accordance with the test focus of each activity. The products of each engineering phase can be distinguished, and therefore the tests can also be structured in accordance with these products. As a result, the standard test process model contains 12 processes in these categories.

Test of Documentation

The purpose of the test of documentation process is to ensure that the documented work products of the project activities (e.g., requirements documents) comply with their defined requirements regarding form and content.

Module Test

The purpose of the module test process is to ensure that modules of the software comply with their defined formal coding requirements and with the requirements of the software design.

Module Integration Test

The purpose of the module integration test process is to ensure that the integrated software modules comply with their defined requirements.

OO class test

The purpose of the OO class test is to ensure that the OO classes comply with their defined formal coding requirements and with the requirements of the software design.

OO class integration test

The purpose of the OO class integration test process is to ensure that integrated OO classes comply with their defined requirements.

Functional Test

The purpose of the functional test process is to ensure that the functions of the application fulfill their functional requirements.


Business Workflow test

The purpose of the business workflow test process is to ensure that the business workflow of the functions of the application fulfills its requirements.

User Interface Test

The purpose of the user interface test process is to ensure that the user interface of the application fulfills its requirements.

Performance Test

The purpose of the performance test process is to ensure that the performance of the application complies with its requirements.

Application interface test

The purpose of the application interface test process is to ensure that the interfaces of the application with other systems comply with their requirements.

Installation Test

The purpose of the installation test process is to ensure that the deliverable application can be installed in the defined target environments.

Compatibility test

The purpose of the compatibility test process is to ensure that the application is compatible with other specified applications in the target environment.

How to improve the test process?

The assessment consists of two main results:

1. The capability of each process

2. The key findings: strengths and improvement areas

As a starting point to derive an improvement strategy, the evaluated goals and business strategies are analyzed. To support defining the goals and their priorities, as well as to measure the actual quality of the processes, metrics are selected and adapted to the needs of the company.

Based on these metrics, the analysis of the process capabilities, and the strengths and weaknesses of the test processes, the improvement steps are defined. The selection of improvements from the improvement suggestions will be done by the assessed team and a management representative, and will be supported by the experienced assessor in an improvement workshop. The workshop ensures that the improvement areas are defined based on the needs and the experiences of the assessed organization and the knowledge of the experts.

The next step is the concrete definition of activities. The activities for the test improvement are planned and controlled like a "normal" software engineering project. The predefined metrics are used to define measures that can be used to measure the success of the improvement.

Typical improvements for projects and organizations that are starting with test process improvement are:

• Guarantee independent QA responsibilities

• Implement a structure of testing phases with specified goals and criteria for the next phase

• Evaluation and implementation of tool support for testing activities

Fig 8.2 shows the basic steps on the way to test process improvement.

QUESTIONS

1. Explain the different test processes and their benefits in software application development.

Fig 8.2: Improvement Steps (SQA Test Metrics, SQA Test/Analyze, SQA Test/Advice, SQA Test/Guidance)


Chapter 10

Test Metrics

10.0 INTRODUCTION

The idea of understanding test metrics is to investigate the benefits of adopting some specific approach to testing.

If no formal, quantitative measurements are made, it is possible only to make qualitative statements about the effectiveness of the testing process, which may in the short term assure senior management, but which in the long term will not help to improve the testing process. Typical examples of where metrics can be used in the testing process include:

• For objective assessment of testing quality against agreed standards.

• For estimating the testing effort required to complete a given testing project/task.

• For highlighting complex elements of the system under test that may be error-prone and require additional testing effort.

• To measure and track the cost of testing.

• To assess and track the progress of the testing task.

• To predict when it is appropriate to stop testing a particular AUT.

Formally, metrics are objective numerical measures obtained by inspection and analysis of the products and processes of a software project. Although metrics can be collected at all stages of the SDLC, this chapter focuses on those of relevance to the testing process.


This chapter discusses the role and use of metrics in process improvement, reviews the metrics employed in such programs, discusses the issues involved in setting up and adopting a metrics program, and makes a number of proposals for a simple and effective metrics set that can be used to improve the testing process.

10.1 OVERVIEW OF THE ROLE AND USE OF METRICS

Although the Statistical Techniques clause of ISO 9001 provides guidance on the adoption of a metrics-based approach, there is at present no single universal standard for software metrics in the information technology or software development arena. However, a great deal of work has been performed in the IT industry on software metrics, and many metrics sets have been proposed and adopted by different authors and organizations.

10.2 PRIMITIVE METRICS AND COMPUTED METRICS

A primitive metric is one that can be measured directly. Examples include:

1. The number of defects

2. The complexity of the AUT (Application Under Test)

3. The cost of testing

Primitive metrics typically form the raw data for a metrics program and will represent the observed data collected during the project. Often, plotting the progress of primitive metrics over time is a powerful means of observing the trends within the testing project. For example, plotting the number of defects detected during the testing project against time can provide the test team leader with one means of determining when to stop testing, by observing when the rate of detection of defects declines.
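As an illustration only (the weekly counts are invented), tabulating the defect-detection trend over time might look like this in Python:

    # Defects detected per week of the testing project (invented figures)
    defects_per_week = {1: 34, 2: 41, 3: 27, 4: 15, 5: 8, 6: 3}

    cumulative = 0
    previous = None
    for week, found in sorted(defects_per_week.items()):
        cumulative += found
        trend = "n/a" if previous is None else ("declining" if found < previous else "not declining")
        print(f"week {week}: {found:3d} found, {cumulative:3d} total, detection rate {trend}")
        previous = found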

A computed metric is one that must be calculated from other data or metrics. Examples include:

1. The number of non-comment lines of code written per day

2. The defect density

3. The number of defects detected and/or reported per unit time or per development phase

Computed metrics typically form the basis for drawing conclusions regarding the progress of a process-improvement program. For example, observing the defect detection effectiveness percentage achieved by a testing team across a number of testing projects provides a valuable indication of the change in efficiency of the testing process over time.


10.3 METRICS TYPICALLY USED WITHIN THE TESTING PROCESS

This section reviews the commonly collected and calculated metrics that can be used in improving thetesting process.

Size of the AUT: This metric is typically measured as NCSS (Non-Comment Source Statements). In an object-oriented or GUI-based development, information on the number of objects (and their methods) or windows could be collected. This metric must be collected pragmatically; it does not make good sense to manually count every line of code of a huge application (however, an automated approach using a line-counting tool might be appropriate).

Complexity of the AUT: This metric attempts to take account of the complexity of the AUT in terms of iteration, recursion, conditional branching points, function points, and so on. Caution must be exercised in the use of this metric, since it will be difficult to produce generally applicable conclusions for organizations involved in a number of software development and testing projects, each of which utilizes different implementation technologies.

This is a difficult metric to collect; however, complexity-estimating tools are commercially available that will simplify the collection and use of complexity information.

Cost/effort: This metric is typically measured in payroll months and includes time taken by staff during testing, as well as time spent by managers engaged on testing tasks.

It may also be of benefit to measure the cost/effort involved in administering and collecting the metrics, since comparison of this value with the total effort expended in the testing process will provide information about the efficiency of the metrics program.

Total Defects Found in Testing: In this context, a defect can be defined as any aspect of the behavior of the software that would not exist if the software were fit for purpose. It may be of benefit to consider recording the severity of the observed defects to help in comparing their relative impact. A simple three-point scale of critical, serious, and minor is often sufficient for this purpose.

Communications: This metric is typically measured as the number of interfaces that a given project team has, with the purpose of characterizing the constraints on the project team due to dependencies on entities organizationally and physically distant, such as users, managers, and/or the supplier.

This metric is most useful for large and/or complex organizations where development and testing take place across geographically distinct sites. In particular, the metric can be used to identify possible improvements to the organization and administration of complex testing projects, and it provides a means for assessing the effectiveness of such improvements. For small testing projects involving a few staff located in the same office or site, there is unlikely to be a great deal of benefit from the use of this metric.


10.4 DEFECT DETECTION EFFECTIVENESS PERCENTAGE (DDE)

This computed metric will provide an indication of how effective the testing process is over time (DDE should increase with an increase in the effectiveness of the testing). DDE is calculated as follows:

DDE = (TDFT / (TDFC + TDFT)) x 100, where

TDFT = Total Defects Found by Testing team

TDFC = Total Defects Found by Client (measured up to some standard point after release, say after 6 months)

Defect Removal Effectiveness Percentage (DRE)

This computed metric will provide an indication of how effective the testing task is at the removal of defects. DRE is calculated as follows:

DRE = (TDCT / TDFT) x 100, where

TDCT = Total Defects Closed during Testing

TDFT = Total Defects Found by Testing team

Test Case Design Efficiency Percentage (TDE)

This computed metric will provide information about the effectiveness of the test case design process. TDE is calculated as follows:

TDE = (TDFT / NTC) x 100, where

TDFT = Total Defects Found by Testing team

NTC = Number of Test Cases Run
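The three formulas above can be restated as a small Python helper (an illustration only; the sample counts are invented, and the TDE formula follows the variable definitions given above):

    def dde(tdft, tdfc):
        """Defect Detection Effectiveness: share of all defects caught by the testing team."""
        return tdft / (tdfc + tdft) * 100

    def dre(tdct, tdft):
        """Defect Removal Effectiveness: share of defects found in testing that were closed."""
        return tdct / tdft * 100

    def tde(tdft, ntc):
        """Test case Design Efficiency: defects found per test case run, as a percentage."""
        return tdft / ntc * 100

    # Invented example figures
    tdft, tdfc, tdct, ntc = 180, 20, 162, 400
    print(f"DDE = {dde(tdft, tdfc):.1f}%")   # 90.0%
    print(f"DRE = {dre(tdct, tdft):.1f}%")   # 90.0%
    print(f"TDE = {tde(tdft, ntc):.1f}%")    # 45.0%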

10.5 SETTING UP AND ADMINISTERING A METRICS PROGRAM

The following requirements need to be considered in setting up a typical metrics program, and should be reviewed to determine those appropriate within the context of the scheme planned as part of any specific testing process.

• The need to define organizational objectives for the process-improvement program, to establish:

  - the methods to be used
  - the cost that will be deemed acceptable in running the program
  - the urgency of the program
  - the required support from management for the program

• The need to establish the following roles and responsibilities:

  - What part of the organization will have responsibility for the program?
  - Who will manage, implement, and administer the program?
  - Who will submit/collect the metrics?

• The need to research which metrics to record and what analysis is required. During this process, the following issues must be considered:

  - Whatever metrics are selected initially will almost certainly change. They will be modified by experience during the metrics program itself.
  - Ensure that the results of analysis are immediately useful to project management. If you continue to collect metrics with no visible benefit, management will become discouraged with the process.
  - Be aware of what is realistic (for example, although of potential interest, the number of keystrokes made by an engineer is not a realistic metric to record).
  - Err on the side of generosity when selecting metrics to collect. It may be difficult to go back and obtain metrics for a parameter that was initially not thought to be of interest, but which subsequently turns out to be useful. You can always drop a metric later if it turns out to be of no use.
  - Be aware that the more metrics you decide to record, the greater the effort to collect them, the greater the project resistance to the process, and the larger the requirements for storage and processing of the metrics.

• The need to set up the infrastructure for recording the metrics. The following steps are likely to be implemented:

  - Set up a system for holding the metrics.
  - Generate simple procedures defining what metrics are to be collected and how this is to be achieved, and formally document them in a form that can be distributed and easily understood.

QUESTIONS

1. What is a test process model? Explain.

2. Explain the defect prevention mechanism.
