
Corporate Material for Manual Testing


8/4/2019 Corporate Material for Manual Testing

http://slidepdf.com/reader/full/corporate-material-for-manual-testing 1/94

 

SOFTWARE TESTING


Evolution of Software Testing

The ability to produce software in a cost-effective way is a key factor that determines the effective functioning of modern systems. To produce cost-effective software, a number of activities have been put into practice.

The attitude towards software testing has undergone a major positive change in recent years:

• 1950s - when machine languages were used, testing was nothing but debugging.
• 1960s - compilers were developed, and testing started to be considered an activity separate from debugging.
• 1970s - when software engineering concepts were introduced, software testing began to evolve as a technical discipline.

Over the last two decades there has been an increased focus on better, faster, cost-effective and secure software. This has increased the acceptance of testing as a technical discipline and as a career choice.

What is Software Testing?

The process of exercising the software, or part of it, with a set of inputs to check whether the required results are obtained.

Software testing is the process used to identify the correctness, completeness and quality of developed software.

It is a means of evaluating the system, or a system component, to determine that it meets the requirements of the customer.
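As a minimal illustration of this idea, exercising code with a set of inputs and comparing actual results against expected results might look like the sketch below (the function and the test cases are hypothetical, not from this material):

```python
# Hypothetical unit under test: a discount calculator.
def apply_discount(price, percent):
    """Return the price after deducting the given percentage."""
    return round(price - price * percent / 100, 2)

# A set of (input, expected result) pairs derived from the requirements.
cases = [
    ((100.0, 10), 90.0),
    ((59.99, 0), 59.99),
    ((200.0, 50), 100.0),
]

for (price, percent), expected in cases:
    actual = apply_discount(price, percent)
    status = "PASS" if actual == expected else "FAIL"
    print(f"apply_discount({price}, {percent}) = {actual} (expected {expected}): {status}")
```

The essence of testing is visible even at this scale: chosen inputs, expected outputs, and a comparison of the two.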

Goals of Testing

• Determine whether the system meets the requirements
• Determine whether the system meets the specification
• Find the bugs
• Ensure quality

Advantages of Testing

• Detects defects early
• Reduces the cost of defect fixing
• Prevents defects from being discovered by the customer
• Ensures that the product works to the satisfaction of the customer

 


Introduction to Software process

Process. This term has also found its rightful place in the software industry. It was Deming who popularized the term, and the Japanese managed a miraculous industrial revolution based on the simple concept of a process. "Process is a means by which people, procedures, methods, equipment, and tools are integrated to produce a desired end result" [quoted from CMM for Software, version 2B]. Humphrey, in his book Introduction to the PSP (1997), defines a process in a software development context: "Process defines a way to do a project. Projects typically produce a product, and a product is something you produce for a co-worker, an employer, or a customer."

Now that we know what Process means, how can we use this knowledge to achieve success? Theanswer lies in the following three-step strategy:

1. Analyze the current process by which your organization executes its projects.

2. Figure out the strengths and weaknesses of the current process.

3. Improve upon your process's strengths and remove its weaknesses.

Software Development Life Cycle (SDLC)

The software development life cycle (SDLC) is the entire process of formal, logical steps taken to develop a software product. The phases of SDLC can vary somewhat but generally include the following:

• conceptualization
• requirements and cost/benefit analysis
• detailed specification of the software requirements
• software design
• programming
• testing
• user and technical training
• maintenance
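The ordering of the phases listed above can be sketched as a simple data structure (the phase names follow this material; the function is purely illustrative):

```python
# Ordered SDLC phases, as listed above.
SDLC_PHASES = [
    "conceptualization",
    "requirements and cost/benefit analysis",
    "detailed specification",
    "software design",
    "programming",
    "testing",
    "user and technical training",
    "maintenance",
]

def next_phase(current):
    """Return the phase that follows `current`, or None if it is the last."""
    i = SDLC_PHASES.index(current)
    return SDLC_PHASES[i + 1] if i + 1 < len(SDLC_PHASES) else None

print(next_phase("programming"))  # -> testing
```

Note that in a strictly sequential model each phase has exactly one successor; the iterative models described below relax exactly this constraint.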

There are several methodologies or models that can be used to guide the software development life cycle. Some of these include:

- Linear or waterfall model (the original SDLC method)

- Rapid application development (RAD)

- Prototyping model

- Incremental model

- Spiral model

Waterfall Model

The waterfall model derives its name due to the cascading effect from one phase to the other as is


illustrated in the figure. In this model each phase has a well-defined starting and ending point, with identifiable deliverables to the next phase.

 

Note that this model is sometimes referred to as the linear sequential model or the software life cycle.

 

The model consists of six distinct stages, namely:

 

1. In the requirements analysis phase

(a) The problem is specified along with the desired service objectives (goals)

(b) The constraints are identified

 

2. In the specification phase the system specification is produced from the detailed definitions of (a) and (b) above. This document should clearly define the product function.

Note that in some texts, the requirements analysis and specification phases are combined and represented as a single phase.

3. In the system and software design phase, the system specifications are translated into a software representation. The software engineer at this stage is concerned with:

Data structure


Software architecture

Algorithmic detail and

Interface representations

 

The hardware requirements are also determined at this stage, along with a picture of the overall system architecture. By the end of this stage the software engineer should be able to identify the relationships between the hardware, the software and the associated interfaces. Any faults in the specification should ideally not be passed downstream.

 

4. In the implementation and testing phase the designs are translated into the software domain.

Detailed documentation from the design phase can significantly reduce the coding effort.

Testing at this stage focuses on making sure that any errors are identified and that the software meets its required specification.

 

5. In the integration and system testing phase all the program units are integrated and tested to ensure that the complete system meets the software requirements. After this stage the software is delivered to the customer. [Deliverable - The software product is delivered to the client for acceptance testing.]

 

6. The maintenance phase is usually the longest stage of the software life cycle. In this phase the software is updated to:

Meet changing customer needs

Adapt to accommodate changes in the external environment

Correct errors and oversights previously undetected in the testing phases

Enhance the efficiency of the software

 

Observe that feedback loops allow corrections to be incorporated into the model. For example, a problem or update in the design phase requires a 'revisit' to the specification phase. When changes are made at any phase, the relevant documentation should be updated to reflect that change.

 

Advantages

Testing is inherent to every phase of the waterfall model

It is an enforced disciplined approach

It is documentation driven, that is, documentation is produced at every stage

 

Disadvantages

The waterfall model is the oldest and the most widely used paradigm. However, many projects rarely follow its sequential flow. This is due to the inherent problems associated with its rigid format. Namely:


It only incorporates iteration indirectly, so changes may cause considerable confusion as the project progresses.

As the client usually has only a vague idea of exactly what is required from the software product, the waterfall model has difficulty accommodating the natural uncertainty that exists at the beginning of a project.

The customer only sees a working version of the product after it has been coded. This may result in disaster if any undetected problems persist to this stage.

Prototyping Model

The Prototyping Model is a systems development method (SDM) in which a prototype (an early approximation of a final system or product) is built, tested, and then reworked as necessary until an acceptable prototype is finally achieved, from which the complete system or product can be developed. This model works best in scenarios where not all of the project requirements are known in detail ahead of time. It is an iterative, trial-and-error process that takes place between the developers and the users.

There are several steps in the Prototyping Model:

1. The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the departments or aspects of the existing system.

2. A preliminary design is created for the new system.

3. A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.


4. The users thoroughly evaluate the first prototype, noting its strengths and weaknesses, what needs to be added, and what should be removed. The developer collects and analyzes the remarks from the users.

5. The first prototype is modified, based on the comments supplied by the users, and a second prototype of the new system is constructed.

6. The second prototype is evaluated in the same manner as was the first prototype.

7. The preceding steps are iterated as many times as necessary, until the users are satisfied that the prototype represents the final product desired.

8. The final system is constructed, based on the final prototype.

The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.
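The iterative steps above amount to a build-evaluate-refine loop. The sketch below shows that control flow; everything in it (the function names, the feedback structure, and the rule that users accept the third cut) is a made-up stand-in for the real human activities:

```python
def build_prototype(design):
    """Stand-in for constructing a scaled-down prototype from a design (steps 3, 5)."""
    return {"design": design, "version": design["iteration"]}

def collect_user_feedback(prototype):
    """Stand-in for user evaluation (steps 4, 6); in reality this comes from
    interviews and evaluation sessions, not code."""
    satisfied = prototype["version"] >= 3  # pretend users accept the third cut
    return {"remarks": ["..."], "satisfied": satisfied}

design = {"iteration": 1}
while True:
    prototype = build_prototype(design)
    feedback = collect_user_feedback(prototype)
    if feedback["satisfied"]:                        # step 7: iterate until satisfied
        break
    design = {"iteration": design["iteration"] + 1}  # refine the design from remarks

print(f"Final system built from prototype v{prototype['version']}")  # step 8
```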

Incremental Model

This model combines the elements of the waterfall model with the iterative philosophy of prototyping. However, unlike prototyping, the incremental model focuses on the delivery of an operational product at the end of each increment.

An example of this incremental approach is observed in the development of word processing applications, where the following services are provided in subsequent builds:

1. Basic file management, editing and document production functions

2. Advanced editing and document production functions

3. Spell and grammar checking

4. Advanced page layout

 

The first increment is usually the core product, which addresses the basic requirements of the system. This may either be used by the client or subjected to detailed review to develop a plan for the next increment. This plan addresses the modification of the core product to better meet the needs of the customer, and the delivery of additional functionality. More specifically, at each stage

· The client assigns a value to each build not yet implemented

· The developer estimates the cost of developing each build

· The resulting value-to-cost ratio is the criterion used for selecting which build is delivered next

Essentially the build with the highest value-to-cost ratio is the one that provides the client with the most functionality (value) for the least cost. Using this method the client has a usable product at all of the development stages.
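The selection rule above is simple arithmetic. With some made-up figures (the builds and numbers below are purely illustrative, not from this material), it can be computed as:

```python
# Candidate builds not yet implemented: client-assigned value vs. estimated cost.
builds = {
    "advanced editing":       {"value": 80, "cost": 40},  # ratio 2.0
    "spell/grammar checking": {"value": 60, "cost": 20},  # ratio 3.0
    "advanced page layout":   {"value": 50, "cost": 50},  # ratio 1.0
}

def next_build(candidates):
    """Deliver next the build with the highest value-to-cost ratio."""
    return max(candidates, key=lambda b: candidates[b]["value"] / candidates[b]["cost"])

print(next_build(builds))  # -> spell/grammar checking
```

With these numbers, spell/grammar checking (ratio 3.0) would be delivered before the higher-value but costlier advanced editing build.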

Incremental Model

• Iterative: many releases (increments)
 - First increment: core functionality
 - Successive increments: add/fix functionality
 - Final increment: the complete product
• Each iteration is a short mini-project with a separate lifecycle (e.g., waterfall)
• Increments may be built sequentially or in parallel


Iterative & Incremental Model

• Outcome of each iteration: a tested, integrated, executable system
• Iteration length is short and fixed (e.g., 2 weeks, 4 weeks, 6 weeks)
• Takes many iterations (e.g., 10-15)
• Does not try to "freeze" the requirements and design speculatively
 - Rapid feedback, early insight, opportunity to modify requirements and design
 - In later iterations, requirements and design become stable


Spiral Model

The spiral model is a software development model combining elements of both design and prototyping-in-stages, in an effort to combine the advantages of top-down and bottom-up concepts.

The spiral model was defined by Barry Boehm. It was not the first model to discuss iteration, but it was the first model to explain why the iteration matters. As originally envisioned, the iterations were typically 6 months to 2 years long. This persisted until around 2000.

Each phase starts with a design goal (such as a user interface prototype in an early phase) and ends with the client (which may be internal) reviewing the progress thus far.

Analysis and engineering efforts are applied to each phase of the project, with an eye toward the end goal of the project.


So, for a typical shrink-wrap application, this might mean that you have a rough cut of user elements (without the pretty graphics) as an operable application, add features in phases, and, at some point, add the final graphics.

The spiral model is not used today (2004) as such. However, it has influenced the modern-day concept of agile software development. Agile software development tends to be rather more extreme in its approach than the spiral model.

V- MODEL

The V-model is a software development model which can be seen as an extension of the waterfall model. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase, to form the typical V shape. The V-model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing.


The V-model can be said to have developed as a result of the evolution of software testing. Various testing techniques were defined, and various kinds of testing were clearly separated from each other, which led to the waterfall model evolving into the V-model. The tests in the ascending (validation) arm are derived directly from their design or requirements counterparts in the descending (verification) arm. The 'V' can also stand for the terms Verification and Validation.
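The correspondence between the descending arm and the ascending arm can be written down as a simple mapping. This is only a sketch of the pairing described in this section, using the phase names defined below:

```python
# Each verification phase (left) is where the paired test level (right) is designed.
V_MODEL = {
    "requirements analysis": "user acceptance testing",
    "system design":         "system testing",
    "architecture design":   "integration testing",
    "module design":         "unit testing",
}

for design_phase, test_level in V_MODEL.items():
    print(f"{design_phase:22} <-> {test_level}")
```

Reading the mapping top to bottom traces the left arm of the V; reading it bottom to top traces the order in which the test levels are actually executed.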

Verification Phases

Requirements analysis

In this phase, the requirements of the proposed system are collected by analyzing the needs of the users. This phase is concerned with establishing what the ideal system has to perform; it does not determine how the software will be designed or built. Usually the users are interviewed, and a document called the user requirements document is generated. The user requirements document will typically describe the system's functional, physical, interface, performance, data and security requirements as expected by the user. It is the document the business analysts use to communicate their understanding of the system back to the users. The users carefully review this document, as it will serve as the guideline for the system designers in the system design phase. The user acceptance tests are designed in this phase.

System Design

System engineers analyze and understand the business of the proposed system by studying the user requirements document. They figure out possibilities and techniques by which the user requirements can be implemented. If any of the requirements is not feasible, the user is informed of the issue, a resolution is found, and the user requirements document is edited accordingly.

The software specification document, which serves as a blueprint for the development phase, is generated in this phase. This document contains the general system organization, menu structures, data structures, etc. It may also hold example business scenarios, sample windows and reports for better understanding. Other technical documentation, such as entity diagrams and a data dictionary, will also be produced in this phase. The documents for system testing are prepared in this phase.

Architecture Design

This phase can also be called high-level design. The baseline in selecting the architecture is that it should realize all the requirements within the given time, cost and resources. Software architecture is commonly represented as a two-tier, three-tier or multi-tier model, typically comprising a database layer, a user-interface layer and an application layer. The modules and components representing each layer, their inter-relationships, subsystems, operating environment and interfaces are laid out in detail.

The output of this phase is the high-level design document, which typically consists of the list of modules, a brief description of the functionality of each module, their interface relationships, dependencies, database tables, architecture diagrams, technology details, etc. The integration test design is carried out in this phase.

Module Design

This phase can also be called low-level design. The designed system is broken up into smaller units or modules, and each of them is explained so that the programmer can start coding directly. The low-level design document, or program specification, will contain:

• detailed functional logic of the module, in pseudocode
• database tables, with all elements, including their type and size
• all interface details, with complete API references
• all dependency issues
• error message listings
• complete inputs and outputs for the module

The unit test design is developed in this stage.

Validation Phases

Unit Testing

In the V-model of software development, unit testing is the first stage of the dynamic testing process. According to software development expert Barry Boehm, a fault discovered and corrected in the unit testing phase is more than a hundred times cheaper than if it is found after delivery to the customer.

Unit testing involves analysis of the written code with the intention of eliminating errors. It also verifies that the code is efficient and adheres to the adopted coding standards. Testing is usually white box. It is done using the unit test design prepared during the module design phase, and may be carried out by software testers, software developers or both.
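A minimal white-box unit test might look like the sketch below, using Python's built-in `unittest` module. The unit under test is hypothetical; the point is that the test cases follow a unit test design with one case per branch of the logic:

```python
import unittest

# Hypothetical unit under test (white box: we know its branch structure).
def is_leap_year(year):
    """True for years divisible by 4, except centuries not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTests(unittest.TestCase):
    """One test case per branch, following the unit test design."""

    def test_divisible_by_four(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_not_divisible_by_400(self):
        self.assertFalse(is_leap_year(1900))

    def test_century_divisible_by_400(self):
        self.assertTrue(is_leap_year(2000))

    def test_not_divisible_by_four(self):
        self.assertFalse(is_leap_year(2023))

# Run the suite and report the outcome.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LeapYearTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("unit tests passed:", result.wasSuccessful())
```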

Integration Testing

In integration testing the separate modules are tested together to expose faults in the interfaces and in the interaction between integrated components. Testing is usually black box, as the code is not directly checked for errors. It is done using the integration test design prepared during the architecture design phase. Integration testing is generally conducted by software testers.
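In contrast to the unit level, an integration test feeds data through the interface between two modules and checks only the observable output. The two modules below are hypothetical stand-ins for separately unit-tested components:

```python
# Two separately unit-tested modules (hypothetical).
def parse_amount(text):
    """Parsing module: converts '1,234.50'-style input to a float."""
    return float(text.replace(",", ""))

def format_invoice_total(amounts):
    """Reporting module: sums parsed amounts and renders a total line."""
    total = sum(amounts)
    return f"TOTAL: {total:.2f}"

# Integration test: exercise the parse -> report interface as a black box.
raw = ["1,000.00", "234.50", "15.50"]
output = format_invoice_total(parse_amount(t) for t in raw)
assert output == "TOTAL: 1250.00", output
print(output)
```

A fault in either module's interface (say, the parser returning strings instead of floats) would surface here even if each module passed its own unit tests.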

System Testing

System testing compares the system specifications against the actual system. The system test design, derived from the system design documents, is used in this phase. Sometimes system testing is automated using testing tools. Once all the modules are integrated, several errors may arise; testing done at this stage is called system testing.

 User Acceptance Testing

Acceptance testing checks the system against the requirements of the user. It uses black-box testing with real data, real people and real documents to ensure the ease of use and functionality of the system. Users who understand the business functions run the tests as given in the acceptance test plans, including installation and online help. Hard copies of user documentation are also reviewed for usability and accuracy. The testers formally document the results of each test, and provide error reports and correction requests to the developers.

Benefits

The V-model deploys a well-structured method in which each phase can be implemented using the detailed documentation of the previous phase. Testing activities like test design start at the beginning of the project, well before coding, and therefore save a large amount of project time.


What is Quality?

Quality is the customer's perception of how a good or service is fit for their purpose and how it satisfies stated and implicit specifications.

Quality in an organization is best achieved by management creating a Quality Management System (QMS). A QMS is a formalized system that documents the company structure, management and employee responsibilities, and the procedures required to deliver a quality product or service. Four quality tools should be utilized when creating a QMS: the quality manual, standard operating procedures (SOPs), work instructions, and supporting documentation such as flowcharts and quality records. All four tools must be consistent, coherent and work together to increase the perceived value of the good or service.

How do I manage Quality?

Quality management is effectively managing your company's QMS to achieve maximum customer satisfaction at the lowest overall cost. Quality management (QM) is a continuous process that requires inputs of time, effort and commitment from all company resources.

Eight QM principles form the foundation for effective quality management:

1. Customer Focus - Understand your customer's needs. Measure customer satisfaction. Strive to exceed their expectations.

2. Leadership - Management establishes the strategy and leads the company toward achieving its objectives. Management creates an environment that encourages staff to continuously improve and work towards satisfying the customer.

3. People Involvement - Train your staff effectively. Teamwork and full employee involvement make quality a reality.

4. Continuous Improvement - Continue to make things better.

5. Process Approach - Understand and organize company resources and activities to optimize how the organization operates.

6. Factual Approach to Decision Making - Make decisions based on the facts. Data must be gathered, analyzed and assessed against the objectives.

7. System Approach to Management - Determine the sequence and interaction of processes and manage them as a system. Processes must meet customer requirements.

8. Mutually Beneficial Supplier Relationships - Work with your suppliers to produce a win-win outcome.

The quality of a product or service refers to the perception of the degree to which the product or service meets the customer's expectations.

Quality is essentially about learning what you are doing well and doing it better. It also means finding out what you may need to change to make sure you meet the needs of your service users.


Quality is defined by the customer. A quality product or service is one that meets customer requirements. Not all customers have the same requirements, so two contrasting products may both be seen as quality products by their users. For example, one house-owner may be happy with a standard light bulb - they would see this as a quality product. Another customer may want an energy-efficient light bulb with a longer life expectancy - this would be their view of quality. Quality can therefore be defined as being fit for the customer's purpose.

There are three main ways in which a business can create quality:

One key distinction to make is that there are two common applications of the term Quality as a form of activity or function within a business. One is Quality Assurance, the "prevention of defects", such as the deployment of a quality management system and preventive activities like FMEA. The other is Quality Control, the "detection of defects", most commonly associated with the testing which takes place within a quality management system, typically referred to as verification and validation.

Quality is about:

• knowing what you want to do and how you want to do it
• learning from what you do
• using what you learn to develop your organization and its services
• seeking to achieve continuous improvement
• satisfying your stakeholders - those different people and groups with an interest in your organization.

Definitions of Quality

1. Customer-based: fitness for use, meeting customer expectations.

2. Manufacturing-based: conforming to design, specifications, or requirements; having no defects.

3. Product-based: the product has something that other similar products do not, which adds value.

4. Value-based: the product is the best combination of price and features.

5. Transcendent: it is not clear what it is, but it is something good...

Typically, these are the stages that organizations implementing a quality system aim to follow:

• Agree on standards. These concern the performance that staff, trustees and users expect from the organization.

• Carry out a self-assessment. This means that you compare how well you are doing against these expectations.

• Draw up an action plan. This will include what needs to be done, who will do it, how it will be done, and when.

• Implement. Do the work.


• Review. At this stage, you check what changes have been made and whether they have made the difference you were hoping to achieve.

Why does quality matter?

These are some of the demands on voluntary organizations. They need to show that:

• they meet the often conflicting needs and demands of their service users, and that users are satisfied with the quality of services offered

• they provide users with efficient, consistent services

• the organization is making a real difference

• they can work effectively with limited resources or short-term project funding.

Why is quality important?

The most successful organizations are those that give customers what they want. Satisfied customers are loyal to those suppliers they feel best understand their requirements. As a result they will make repeat purchases and will recommend a business to their friends.

There are two main types of customers for a business:

• end customers - people like you and me, looking to buy an iPod or a plasma screen television

• organizational customers - for example, a company recording audio CDs would buy in blank CDs, record music to them and sell them on as a finished product.

Quality, in the eye of the consumer, means that a product must provide the benefits required by the consumer when it was purchased. If all the features and benefits satisfy the consumer, a quality product has been bought. It is consumers, therefore, who define quality.

Quality as defined by the consumer, he argued, is more important than price in determining demand for most goods and services. Consumers will be prepared to pay for the best quality. Value is thus added by creating those quality standards required by consumers.

Consumer quality standards involve:

• Creating consumer satisfaction

• Exceeding consumer expectations

• Delighting the consumer 

 


Quality management and software development

The elements of a quality management system for software development include:

• Management responsibility
• Quality system
• Control of non-conforming products
• Design control
• Handling, storage, packaging and delivery
• Purchasing
• Purchaser-supplied products
• Product identification and traceability
• Process control
• Inspection and testing
• Inspection and test equipment
• Inspection and test status
• Contract review
• Corrective action
• Document control
• Quality records
• Internal quality audits
• Training
• Servicing
• Statistical techniques

Quality planning

• A quality plan sets out the desired product qualities and how these are assessed, and defines the most significant quality attributes.

• The quality plan should define the quality assessment process.

• It should set out which organizational standards should be applied and, where necessary, define new standards to be used.

Quality plan

• Quality plan structure:
 - Product introduction
 - Product plans
 - Process descriptions
 - Quality goals
 - Risks and risk management
• Quality plans should be short, succinct documents
• If they are too long, no one will read them


Quality attributes

What is a quality assurance system?

Quality assurance is the process of verifying or determining whether products or services meet or exceed customer expectations. Quality assurance is a process-driven approach with specific steps to help define and attain goals. This process considers design, development, production, and service.

When the term 'quality assurance system' is used, it means a formal management system you can use to strengthen your organization. It is intended to raise standards of work and to make sure everything is done consistently. A quality assurance system sets out expectations that a quality organization should meet. Quality assurance is the system set up to monitor the quality and excellence of goods and services. Quality assurance demands a degree of detail in order to be fully implemented at every step.

• Planning, for example, could include investigation into the quality of the raw materials used in manufacturing, the actual assembly, or the inspection processes used.

• The Checking step could include customer feedback, surveys, or other marketing vehicles to determine if customer needs are being exceeded and why they are or are not.

The quality attributes referred to above include: safety, security, reliability, resilience, robustness, understandability, testability, adaptability, modularity, complexity, portability, usability, reusability, efficiency and learnability.


• Acting could mean a total revision in the manufacturing process in order to correct a technical or cosmetic flaw.

Quality assurance verifies that any customer offering, regardless of whether it is new or evolved, is produced and offered with the best possible materials, in the most comprehensive way, and to the highest standards. Quality assurance provides the goal of exceeding customer expectations through a measurable and accountable process.

Quality control

Quality control is a process employed to ensure a certain level of quality in a product or service. It may include whatever actions a business deems necessary to provide for the control and verification of certain characteristics of a product or service. The basic goal of quality control is to ensure that the products, services, or processes provided meet specific requirements and are dependable, satisfactory, and fiscally sound.

Essentially, quality control involves the examination of a product, service, or process for certain minimum levels of quality. The goal of a quality control team is to identify products or services that do not meet a company's specified standards of quality. If a problem is identified, the job of a quality control team or professional may involve stopping production temporarily. Depending on the particular service or product, as well as the type of problem identified, production or implementation may not cease entirely.

Quality control can cover not just products, services, and processes, but also people. Employees

are an important part of any company. If a company has employees that don’t have adequate skills or 

training, have trouble understanding directions, or are misinformed, quality may be severely diminished.

When quality control is considered in terms of human beings, it concerns correctable issues.

Difference between QA & QC

Quality control is concerned with the product, while quality assurance is process-oriented.

Basically, quality control involves evaluating a product, activity, process, or service. By contrast, quality

assurance is designed to make sure processes are sufficient to meet objectives. Simply put, quality

assurance ensures a product or service is manufactured, implemented, created, or produced in the right

way; while quality control evaluates whether or not the end result is satisfactory.

(1) Quality Assurance: A set of activities designed to ensure that the development and/or 

maintenance process is adequate to ensure a system will meet its objectives.

(2) Quality Control: A set of activities designed to evaluate a developed work product.

The difference is that QA is process oriented and QC is product oriented.

Testing therefore is product oriented and thus is in the QC domain. Testing for quality isn't

assuring quality, it's controlling it.


Quality Assurance makes sure you are doing the right things, the right way. Quality Control makes sure the

results of what you've done are what you expected.

QA Activity

The mission of the QA Activity is fourfold. QA improves the quality of specifications through

guidelines and reviews of specifications at critical stages of their development. QA promotes wide

deployment and proper implementation of these specifications through articles, tutorials and validation

services. QA communicates the value of test suites and helps Working Groups produce quality test suites.

And QA designs effective processes that, if followed, will help groups achieve these goals.

The overall mission of the QA Activity is to improve the quality of specification implementation in the field. In order to achieve this, the QA Activity will work on the quality of the specifications themselves, making sure that each specification has a conformance section and a primer, is clear, unambiguous and testable, and maintains consistency between specifications; it will also promote the development of good validators, test tools, and harnesses for implementors and end users to use.

The QA Activity was initiated to address these demands and improve the quality of specifications as well as their implementation. In particular, the Activity has a dual focus:

(1) To solidify and extend current quality practices regarding the specification publication

process, validation tools, test suites, and test frameworks.

(2) To share with the Web community their understanding of issues related to ensuring and

promoting quality, including conformance, certification and branding, education, funding models, and

relationship with external organizations.

QA activities ensure that the process is defined and appropriate. Methodology and standards development are examples of QA activities. A QA review would focus on the process elements of a project, e.g., are requirements being defined at the proper level of detail?

QC activities focus on finding defects in specific deliverables, e.g., are the defined requirements the right requirements? Testing is one example of a QC activity, but there are others, such as inspections.


V & V

In the process of testing, two terms demand particular attention and understanding:

1. Verification

2. Validation

Verification:

"Are we building the product right?", i.e., does the product conform to the specifications? It is one aspect of testing a product's fitness for purpose.

The verification process consists of static and dynamic parts. E.g., for a software product one can inspect

the source code (static) and run against specific test cases (dynamic). Validation usually can only be done

dynamically, i.e., the product is tested by putting it through typical usages and atypical usages ("Can we

break it?").

Verification Techniques

There are many different verification techniques, which are collectively referred to as static testing.

Static testing - Testing that does not involve the operation of the system or component.

Some of these techniques are performed manually while others are automated. Static testing

can be further divided into two categories: techniques that analyze consistency and techniques

that measure some program property.

Consistency techniques - Techniques that are used to ensure program properties such as correct syntax,

correct parameter matching between procedures, correct typing, and correct requirements and

specifications translation.

Measurement techniques - Techniques that measure properties such as error proneness, understandability, and well-structuredness.

Validation:

"Are we building the right product?", i.e., does the product do what the user really requires?

Validation is the complementary aspect. Often one refers to the overall checking process as V & V.


Validation Techniques

There are also numerous validation techniques, including formal methods, fault injection, and

dependability analysis. Validation usually takes place at the end of the development cycle, and looks at the

complete system as opposed to verification, which focuses on smaller sub-systems.

• Formal methods - Formal methods are not only a verification technique but also a validation technique. Formal methods mean the use of mathematical and logical

techniques to express, investigate, and analyze the specification, design, documentation,

and behavior of both hardware and software.

• Fault injection - Fault injection is the intentional activation of faults by either hardware or 

software means to observe the system operation under fault conditions.

• Hardware fault injection - Can also be called physical fault injection because

we are actually injecting faults into the physical hardware.

• Software fault injection - Errors are injected into the memory of the computer 

by software techniques. Software fault injection is basically a simulation of hardware fault

injection.

• Dependability analysis - Dependability analysis involves identifying hazards

and then proposing methods that reduces the risk of the hazard occurring.


• Hazard analysis - Involves using guidelines to identify hazards, their root causes, and

possible counter measures.

• Risk analysis - Takes hazard analysis further by identifying the possible consequences

of each hazard and their probability of occurring.

Verification ensures the product is designed to deliver all functionality to the customer; it typically

involves reviews and meetings to evaluate documents, plans, code, requirements and specifications;


Validation ensures that functionality, as defined in requirements, is the intended behavior of the

product; validation typically involves actual testing and takes place after verifications are completed.

Methods of Testing

WHITE BOX TESTING

This method of testing checks the logic and structure of the program, keeping the requirement specification in mind.

The purpose of white box testing is to

• Initiate a strategic initiative to build quality throughout the life cycle of a software product or service.

• Provide a complementary function to black box testing.

• Perform complete coverage at the component level.

• Improve quality by optimizing performance.

White box testing is a test case design approach that employs the control architecture of the procedural design to produce test cases. Using white box testing approaches, the software engineer can produce test cases that

• Guarantee that all independent paths in a module have been exercised at least once.

• Exercise all logical decisions.

• Execute all loops at their boundaries and within their operational bounds.

• Exercise internal data structures to maintain their validity.

Types of testing under White/Glass Box Testing Strategy:

Static and Dynamic Analysis:

Static analysis techniques

The only generally acknowledged, and therefore most important, characteristic of static analysis techniques is that the testing as such does not necessitate the execution of the program. Essential functions of static analysis are checking whether representations and descriptions of software are consistent, noncontradictory and unambiguous. It aims at correct descriptions, specifications and representations of software systems and is therefore a precondition to any further testing exercise.


Static analysis covers the lexical analysis of the program syntax and investigates and checks the structure and usage of the individual statements. There are principally three different possibilities of program testing, i.e.:

1. testing the program internally, to check completeness and consistency
2. considering pre-defined rules
3. comparing the program with its specification or documentation

This technique, which aims at detecting problems in the translation between specification and program realisation, is called static verification. Verification requires formal specifications and formal definitions of the specification and programming languages used, as well as a method of algorithmic proving that is adapted to these description means. Static verification compares the actual values provided by the program with the target values as pre-defined in the specification document. It does not, however, provide any means to check whether the program actually solves the given problems, i.e. whether the specification as such is correct.

The most important manual technique that allows testing the program without running it is software inspection. The method of inspection originally goes back to Fagan, who saw the practical necessity to implement procedures to improve software quality at several stages during the software life-cycle. In short, a software inspection can be described as follows: "A software inspection is a group review process that is used to detect and correct defects in a software workproduct. It is a formal, technical activity that is performed by the workproduct author and a small peer group on a limited amount of material. It produces a formal, quantified report on the resources expended and the results achieved."

During inspection, either the code or the design of a workproduct is compared to a set of pre-established inspection rules. Inspection processes are mostly performed along checklists which cover typical aspects of software behaviour. "Inspection of software means examining by reading, explaining, getting explanations and understanding of system descriptions, software specifications and programs." Some software engineers report inspection as adequate for any kind of document, e.g. specifications, test plans etc. While most testing techniques are intimately related to the system attribute whose value they are designed to measure, and thus offer no information about other attributes, a major advantage of inspection processes is that any kind of problem can be detected, and thus results can be delivered with respect to every software quality factor.

Walkthroughs are similar peer review processes that involve the author of the program, the tester, a secretary and a moderator. The participants of a walkthrough create a small number of test cases by "simulating" the computer. Its objective is to question the logic and basic assumptions behind the source code, particularly of program interfaces in embedded systems.

Dynamic analysis techniques

While static analysis techniques do not necessitate the execution of the software, dynamic analysis is what is generally considered as "testing", i.e. it involves running the system. "The analysis of the behaviour of a software system before, during and after its execution in an artificial or real application environment characterises dynamic analysis." Dynamic analysis techniques involve running the program formally under controlled circumstances and with specific results expected. It shows whether a system is correct in the system states under examination or not.

Among the most important dynamic analysis techniques are path and branch testing. During dynamic analysis, path testing involves the execution of the program during which as many logical paths of the program as possible are exercised. The major quality attribute measured by path testing is program


complexity. Branch testing requires that tests be constructed in a way that every branch in a program is traversed at least once. Problems when running the branches lead to the probability of later program defects.

Statement Coverage:

In this type of testing, the code is executed in such a manner that every statement of the application is executed at least once. It helps in assuring that all the statements execute without any side effects.

Branch Coverage:

No software application can be written in a continuous mode of coding; at some point we need to branch out the code in order to perform a particular functionality. Branch coverage testing helps in validating all the branches in the code and making sure that no branching leads to abnormal behavior of the application.
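To make the distinction between statement and branch coverage concrete, here is a minimal sketch; the function and inputs are hypothetical:

```python
# Hypothetical function under test.
def sign(n):
    result = "non-negative"
    if n < 0:
        result = "negative"
    return result

# Statement coverage: n = -1 alone executes every statement of sign()...
assert sign(-1) == "negative"

# ...but branch coverage also requires the case where the condition is false.
assert sign(3) == "non-negative"
```

A single test achieving 100% statement coverage can still leave the false outcome of the branch unexercised, which is why branch coverage is the stronger criterion.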

Security Testing:

Security testing is carried out in order to find out how well the system can protect itself from unauthorized access, hacking, cracking, and any code damage that deals with the code of the application. This type of testing needs sophisticated testing techniques.

Mutation Testing:

A kind of testing in which the application is tested for the code that was modified after fixing a particular bug/defect. It also helps in finding out which code and which strategy of coding can help in developing the functionality effectively.

Besides all the testing types given above, there are some more types which fall under both the black box and white box testing strategies, such as: functional testing (which deals with the code in order to check its functional performance), incremental integration testing (which deals with the testing of newly added code in the application), and performance and load testing (which helps in finding out how the particular code manages resources, gives performance, etc.).

Basis Path Testing

Basis path testing is a white box testing technique that allows the test case designer to produce a logical complexity measure of a procedural design and use this measure as an approach for outlining a basis set of execution paths. Test cases are produced to exercise each statement in the program at least one time during testing.

Flow Graphs


The flow graph can be used to represent the logical flow of control and therefore all the execution paths that need testing. To illustrate the use of flow graphs, consider the procedural design depicted in the flow chart below. This is mapped into the flow graph below, where the circles are nodes that represent one or more procedural statements, and the arrows in the flow graph, called edges, represent the flow of control. Each node that includes a condition is known as a predicate node, and has two or more edges coming from it.

[Flow chart and corresponding flow graph: nodes 1–11, with regions, nodes and predicate nodes marked.]


Cyclomatic Complexity

As we have seen before, McCabe's cyclomatic complexity is a software metric that offers an indication of the logical complexity of a program. When used in the context of the basis path testing approach, the value determined for cyclomatic complexity defines the number of independent paths in the basis set of a program, and offers an upper bound for the number of tests that ensure all statements have been executed at least once. An independent path is any path through the program that introduces at least one new group of processing statements or a new condition. A set of independent paths for the example flow graph is:

Path 1: 1-11

Path 2: 1-2-3-4-5-10-1-11

Path 3: 1-2-3-6-8-9-10-11
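The number of independent paths can be computed from the flow graph with McCabe's formula V(G) = E − N + 2, where E is the number of edges and N the number of nodes. A minimal sketch, using a small hypothetical graph rather than the one above:

```python
# Cyclomatic complexity: V(G) = E - N + 2 for a flow graph with
# E edges and N nodes.
def cyclomatic_complexity(edges, nodes):
    return len(edges) - len(nodes) + 2

# Hypothetical flow graph: node 2 is a predicate node (two outgoing edges).
nodes = {1, 2, 3, 4}
edges = {(1, 2), (2, 3), (2, 4), (3, 4)}
print(cyclomatic_complexity(edges, nodes))  # 4 - 4 + 2 = 2 independent paths
```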

Deriving Test Cases

The basis path testing method can be applied to a detailed procedural design or to source code. Basis path testing can be seen as a set of steps:

• Using the design or code as the basis, draw an appropriate flow graph.

• Determine the cyclomatic complexity of the resultant flow graph.

• Determine a basis set of linearly independent paths.

• Prepare test cases that will force execution of each path in the basis set.

Data should be selected so that the conditions at the predicate nodes are tested. Each test case is executed and contrasted with the expected result. Once all test cases have been completed, the tester can ensure that all statements in the program are executed at least once.

Graphical Matrices

The procedure involved in producing the flow graph and establishing a set of basis paths can

be mechanized. To produce a software tool that helps in basis path testing, a data structure, called a

graph matrix, can be quite helpful. A graph matrix is a square matrix whose size equals the number of identified nodes; the matrix entries match the edges between the nodes. A basic flow graph and its associated graph matrix are shown below.

[Figure: flow graph with nodes 1–5 and edges labeled a–g.]


Connection to node

Node 1: edge a
Node 2: edge b
Node 3: edges d, c and f
Node 4: no connections
Node 5: edges e and g

Graph Matrix

In the graph and matrix, each node is represented by a number and each edge by a letter. A letter is entered into the matrix cell corresponding to the connection between the two nodes. By adding a link weight to each matrix entry, the graph matrix can be used to examine program control structure during testing. In its basic form the link weight is 1 or 0. The link weights can be given more interesting characteristics:

• The probability that a link will be executed.

• The processing time expended during traversal of a link.

• The memory required during traversal of a link.

Represented in this form, the graph matrix is called a connection matrix.

Connection to node

Node 1: 1 link; connections = 1 - 1 = 0
Node 2: 1 link; connections = 1 - 1 = 0
Node 3: 3 links; connections = 3 - 1 = 2
Node 4: no links; connections = 0
Node 5: 2 links; connections = 2 - 1 = 1

Cyclomatic complexity is 2 + 1 = 3
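This calculation can be mechanized. A minimal sketch (the matrix below is a small hypothetical binary-decision graph, not the one above): the sum of the link weights gives E, the matrix size gives N, and V(G) = E − N + 2.

```python
# Cyclomatic complexity from a link-weight connection matrix:
# total link weights = E, matrix size = N, V(G) = E - N + 2.
def complexity_from_matrix(matrix):
    edges = sum(sum(row) for row in matrix)
    return edges - len(matrix) + 2

# Hypothetical graph: row 2 has two entries, so node 2 is a predicate node.
matrix = [
    [0, 1, 0, 0],  # 1 -> 2
    [0, 0, 1, 1],  # 2 -> 3 and 2 -> 4
    [0, 0, 0, 1],  # 3 -> 4
    [0, 0, 0, 0],  # 4 is the exit node
]
print(complexity_from_matrix(matrix))  # 4 - 4 + 2 = 2
```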

Control Structure Testing

Although basis path testing is simple and highly effective, it is not enough in itself. Next we consider 

variations on control structure testing that broaden testing coverage and improve the quality of white

box testing.

Condition Testing

Condition testing is a test case design approach that exercises the logical conditions

contained in a program module. A simple condition is a Boolean variable or a relational expression,

possibly with one NOT operator. A relational expression takes the form


E1 <relational-operator> E2

where E1 and E2 are arithmetic expressions and <relational-operator> is one of the following: <, ≤, =, ≠ (nonequality), >, or ≥. A compound condition is made up of two or more simple conditions, Boolean operators, and parentheses. We assume that the Boolean operators allowed in a compound condition include OR, AND and NOT.

The condition testing method concentrates on testing each condition in a program. The

purpose of condition testing is to determine not only errors in the conditions of a program but also

other errors in the program. A number of condition testing approaches have been identified. Branch

testing is the most basic. For a compound condition, C, the true and false branches of C and each

simple condition in C must be executed at least once.

Domain testing requires three or four tests to be produced for a relational expression. For a relational expression of the form

E1 <relational-operator> E2

three tests are required, making the value of E1 greater than, equal to, and less than E2, respectively.
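Those three tests can be sketched as follows; the predicate and the threshold value are hypothetical:

```python
# Hypothetical predicate containing the relational expression E1 >= E2.
def discount_applies(total, threshold):
    return total >= threshold

threshold = 100
# Three domain tests: E1 greater than, equal to, and less than E2.
assert discount_applies(threshold + 1, threshold) is True
assert discount_applies(threshold, threshold) is True   # catches >= vs > mistakes
assert discount_applies(threshold - 1, threshold) is False
```

The equality case is what distinguishes an accidental `>` from the intended `>=`, which is why domain testing insists on all three relations.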

Loop Testing

Loops are the basis of most algorithms implemented using software. However, we often do not consider them when conducting testing. Loop testing is a white box testing approach that concentrates on the validity of loop constructs. Four classes of loops can be defined: simple loops, concatenated loops, nested loops, and unstructured loops.

Simple loops: The following group of tests should be used on simple loops, where n is the maximum number of allowable passes through the loop:

• Skip the loop entirely.

• Only one pass through the loop.

• Two passes through the loop.

• m passes through the loop, where m < n.

• n-1, n, and n+1 passes through the loop.
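The checklist above can be sketched as a small loop-testing harness; the summing function and its loop bound are hypothetical:

```python
# Hypothetical unit: sums at most `limit` items, so the loop makes
# n = limit passes at most.
def sum_first(items, limit):
    total = 0
    for i, value in enumerate(items):
        if i >= limit:
            break
        total += value
    return total

n = 5
# 0, 1, 2, m (< n), n-1, n and n+1 requested passes through the loop.
for passes in (0, 1, 2, 3, n - 1, n, n + 1):
    data = list(range(passes))
    assert sum_first(data, n) == sum(data[:n])
```

The n+1 case is the one that exposes off-by-one errors in the loop's termination condition.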


Nested loop: For the nested loop the number of possible tests increases as the level of nesting

grows. This would result in an impractical number of tests. An approach that will help to limit the

number of tests:

• Start at the innermost loop. Set all other loops to minimum values.

• Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter values.

• Work outward, performing tests for the next loop, but keeping all other outer loops at

minimum values and other nested loops to “typical” values.

• Continue until all loops have been tested.

Concatenated loops: Concatenated loops can be tested using the techniques outlined for simple

loops, if each of the loops is independent of the other. When the loops are not independent the

approach applied to nested loops is recommended.


[Figure: example program.]


Black Box Testing

Black Box Testing is testing without knowledge of the internal workings of the item being

tested. For example, when black box testing is applied to software engineering, the tester would only

know the "legal" inputs and what the expected outputs should be, but not how the program actually

arrives at those outputs. It is because of this that black box testing can be considered testing with

respect to the specifications, no other knowledge of the program is necessary. For this reason, the

tester and the programmer can be independent of one another, avoiding programmer bias toward his

own work. Black box testing is sometimes also called "Opaque Testing", "Functional/Behavioral Testing" or "Closed Box Testing".

In order to implement the black box testing strategy, the tester needs to be thorough with the requirement specifications of the system and, as a user, should know how the system should behave in response to a particular action.

Various testing types that fall under the Black Box Testing strategy are: functional testing,

stress testing, recovery testing, volume testing, User Acceptance Testing (also known as UAT),

system testing, Sanity or Smoke testing, load testing, Usability testing, Exploratory testing, ad-hoc

testing, alpha testing, beta testing, etc.

These testing types are again divided in two groups:

a) Testing in which user plays a role of tester and

b) User is not required.

Testing Strategies/Techniques

• Black box testing should make use of randomly generated inputs (only a test range should be

specified by the tester), to eliminate any guess work by the tester as to the methods of the

function

• Data outside of the specified input range should be tested to check the robustness of the

program

• Boundary cases should be tested (top and bottom of specified range) to make sure the

highest and lowest allowable inputs produce proper output

• The number zero should be tested when numerical data is to be input

• Stress testing should be performed (try to overload the program with inputs to see where it

reaches its maximum capacity), especially with real time systems

• Crash testing should be performed to see what it takes to bring the system down


• Test monitoring tools should be used whenever possible to track which tests have already

been performed and the outputs of these tests to avoid repetition and to aid in the software

maintenance

• Other functional testing techniques include: transaction testing, syntax testing, domain

testing, logic testing, and state testing.

• Finite state machine models can be used as a guide to design functional tests

• According to Beizer the following is a general order by which tests should be designed:

• Clean tests against requirements.

• Additional structural tests for branch coverage, as needed.

• Additional tests for data-flow coverage as needed.

• Domain tests not covered by the above.

• Special techniques as appropriate--syntax, loop, state, etc.

• Any dirty tests not covered by the above.
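Several of the strategies above (randomly generated inputs within a specified range, boundary cases at the top and bottom of the range, the number zero, and out-of-range data) can be sketched together; the age validator and its range are hypothetical:

```python
import random

# Hypothetical unit under test: an age field that accepts 0..120.
def is_valid_age(age):
    return 0 <= age <= 120

random_inputs = [random.randint(0, 120) for _ in range(5)]  # randomly generated inputs
boundaries = [0, 120]                                       # top and bottom of the range
zero = [0]                                                  # the number zero
out_of_range = [-1, 121]                                    # robustness checks

assert all(is_valid_age(a) for a in random_inputs + boundaries + zero)
assert not any(is_valid_age(a) for a in out_of_range)
```

Note that the tester needs only the specified range, not the implementation, to derive every one of these inputs.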

Advantages of Black Box Testing

• more effective on larger units of code than glass box testing

• tester needs no knowledge of implementation, including specific programming

languages

• tester and programmer are independent of each other 

• tests are done from a user's point of view

• will help to expose any ambiguities or inconsistencies in the specifications

• test cases can be designed as soon as the specifications are complete

Disadvantages of Black Box Testing

• Only a small number of possible inputs can actually be tested, to test every possible

input stream would take nearly forever 

• Without clear and concise specifications, test cases are hard to design

• There may be unnecessary repetition of test inputs if the tester is not informed of test

cases the programmer has already tried

• May leave many program paths untested

• Cannot be directed toward specific segments of code which may be very complex (and therefore more error prone).


Levels of testing

There are four levels of testing. They are

• Unit Testing.

• Integration Testing.

• System Testing.

• Acceptance testing

UNIT TESTING

Introduction to Unit Testing

Unit testing. Isn't that some annoying requirement that we're going to ignore? Many developers

get very nervous when you mention unit tests. Usually this is a vision of a grand table with every single

method listed, along with the expected results and pass/fail date. It's important, but not relevant in most

programming projects.

The unit test will motivate the code that you write. In a sense, it is a little design document that says, "What

will this bit of code do?" Or, in the language of object oriented programming, "What will these clusters of objects do?"

The crucial issue in constructing a unit test is scope. If the scope is too narrow, then the tests will be trivial

and the objects might pass the tests, but there will be no design of their interactions. Certainly, interactions

of objects are the crux of any object oriented design.

Likewise, if the scope is too broad, then there is a high chance that not every component of the

new code will get tested. The programmer is then reduced to testing-by-poking-around, which is not an

effective test strategy.

 Need for Unit Test

How do you know that a method doesn't need a unit test? First, can it be tested by inspection? If 

the code is simple enough that the developer can just look at it and verify its correctness, then it is simple enough to not require a unit test. The developer should know when this is the case.

Unit tests will most likely be defined at the method level, so the art is to define the unit test on the

methods that cannot be checked by inspection. Usually this is the case when the method involves a cluster 

of objects. Unit tests that isolate clusters of objects for testing are doubly useful, because they test for 

failures, and they also identify those segments of code that are related. People who revisit the code will

use the unit tests to discover which objects are related, or which objects form a cluster. Hence: Unit tests


isolate clusters of objects for future developers.

Another good litmus test is to look at the code and see if it throws an error or catches an error. If 

error handling is performed in a method, then that method can break. Generally, any method that can

break is a good candidate for having a unit test, because it may break at some time, and then the unit test

will be there to help you fix it.

The danger of not implementing a unit test on every method is that the coverage may be incomplete. Just

because we don't test every method explicitly doesn't mean that methods can get away with not being

tested. The programmer should know that their unit testing is complete when the unit tests cover at the

very least the functional requirements of all the code. The careful programmer will know that their unit

testing is complete when they have verified that their unit tests cover every cluster of objects that form their 

application.

Life Cycle Approach to Testing

Testing will occur throughout the project lifecycle, i.e., from Requirements till User Acceptance Testing. The main objectives of Unit Testing are as follows:

• To execute a program with the intent of finding an error;

• To uncover an as-yet undiscovered error; and

• To prepare a test case with a high probability of finding an as-yet undiscovered error.
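As a minimal sketch of these objectives (the percentage function is hypothetical), a unit test exercises both the normal path and the error path of a single unit:

```python
import unittest

# Hypothetical unit under test.
def percentage(scored, total):
    if total <= 0:
        raise ValueError("total must be positive")
    return round(scored / total * 100, 2)

class PercentageTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(percentage(45, 60), 75.0)

    def test_error_path(self):
        # A method that can raise an error is a good unit-test candidate.
        with self.assertRaises(ValueError):
            percentage(10, 0)

if __name__ == "__main__":
    unittest.main(argv=["example"], exit=False)
```

Tests like `test_error_path` are the ones most likely to uncover an as-yet undiscovered error, because error handling is rarely checked by casual inspection.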

Concepts in Unit Testing:

• The most 'micro' scale of testing;

• Tests particular functions or code modules;

• Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code;

• Not always easily done unless the application has a well-designed architecture with tight code.

Types of Errors detected

The following are the types of errors that may be caught:

• Error in Data Structures

• Performance Errors

• Logic Errors

• Validity of alternate and exception flows

• Identified at analysis/design stages

Unit Testing – Black Box Approach

• Field Level Check

• Field Level Validation


• User Interface Check

• Functional Level Check

Unit Testing – White Box Approach

Statement coverage

Decision coverage

Condition coverage

Multiple condition coverage (nested conditions)

Condition/decision coverage

Path coverage

Unit Testing – Field level checks

• Null / Not Null Checks

• Uniqueness Checks

• Length Checks

• Date Field Checks

• Numeric Checks

• Negative Checks
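As an illustrative sketch (the function name and field rules are hypothetical, not from the original material), several of these field-level checks can be combined in a single validation routine:

```c
#include <ctype.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical validator for a numeric input field, combining a
   null/not-null check, a length check and a numeric check.
   Returns 1 if the field passes all checks, 0 otherwise. */
int is_valid_numeric_field(const char *field, size_t max_len) {
    if (field == NULL || field[0] == '\0')       /* null / not-null check */
        return 0;
    if (strlen(field) > max_len)                 /* length check */
        return 0;
    for (const char *p = field; *p != '\0'; ++p)
        if (!isdigit((unsigned char)*p))         /* numeric check */
            return 0;
    return 1;
}
```

A negative check would then feed inputs such as "-1" or "12a" to the field and confirm that they are rejected.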

Unit Testing – Field Level Validations

• Test all Validations for an Input field

• Date Range Checks (From Date/To Date)

• Date Check Validation with System date

Unit Testing – User Interface Checks

• Readability of the Controls

• Tool Tips Validation

• Ease of Use of Interface Across

• Tab related Checks

• User Interface Dialog

• GUI compliance checks

Unit Testing - Functionality Checks

• Screen Functionalities


• Field Dependencies

• Auto Generation

• Algorithms and Computations

• Normal and Abnormal terminations

• Specific Business Rules, if any.

Unit Testing - Other measures

Function coverage

Loop coverage

Race coverage

Execution of Unit Tests

• Design a test case for every statement to be executed.

• Select the unique set of test cases.

• This measure reports whether each executable statement is encountered.

• Also known as: line coverage, segment coverage and basic block coverage.

• Basic block coverage is the same as statement coverage except the unit of code measured is

each sequence of non-branching statements.

Example of Unit Testing:

int invoice(int x, int y) {
    int d1, d2, s;

    if (x <= 30) d2 = 100;
    else d2 = 90;

    s = 5 * x + 10 * y;

    if (s < 200) d1 = 100;
    else if (s < 1000) d1 = 95;
    else d1 = 80;

    return (s * d1 * d2) / 10000;
}
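For this function, three test cases are enough to achieve statement coverage, since together they execute every statement; they also cover each decision outcome (x<=30 true/false, s<200 true/false, s<1000 true/false). The expected values below were computed by hand from the formula:

```c
#include <assert.h>

/* invoice() as defined in the example above. */
int invoice(int x, int y) {
    int d1, d2, s;
    if (x <= 30) d2 = 100;
    else d2 = 90;
    s = 5 * x + 10 * y;
    if (s < 200) d1 = 100;
    else if (s < 1000) d1 = 95;
    else d1 = 80;
    return (s * d1 * d2) / 10000;
}

/* Three cases executing every statement of invoice(). */
void test_invoice(void) {
    assert(invoice(10, 5)   == 100);  /* x<=30, s=100  -> d2=100, d1=100 */
    assert(invoice(40, 20)  == 342);  /* x>30,  s=400  -> d2=90,  d1=95  */
    assert(invoice(50, 100) == 900);  /* x>30,  s=1250 -> d2=90,  d1=80  */
}
```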


Unit Testing Flow:

(The unit testing flow diagram from the original document is not reproduced in this extraction.)

Advantages of Statement Coverage

• Can be applied directly to object code and does not require processing source code.

• Performance profilers commonly implement this measure.

Disadvantages of Statement Coverage

• Insensitive to some control structures (e.g., the number of loop iterations).

• Does not report whether loops reach their termination condition.

• Statement coverage is completely insensitive to the logical operators (|| and &&).

Method for Decision Coverage

- Design a test case for the pass/failure outcome of every decision point
- Select a unique set of test cases

This measure reports whether Boolean expressions tested in control structures (such as the if-statement and while-statement) evaluated to both true and false.

• The entire Boolean expression is considered one true-or-false predicate regardless of 

whether it contains logical-and or logical-or operators.

• Additionally, this measure includes coverage of switch-statement cases, exception handlers, and

interrupt handlers

• Also known as: branch coverage, all-edges coverage, basis path coverage, decision-decision-path

testing

• "Basis path" testing selects paths that achieve decision coverage.

Advantage:

Simplicity without the problems of statement coverage

Disadvantage

• This measure ignores branches within boolean expressions which occur due to short-circuit

operators.

Method for Condition Coverage:

- Test every condition (sub-expression) in a decision for both true and false outcomes
- Select a unique set of test cases

• Reports the true or false outcome of each Boolean sub-expression, separated by logical-and and

logical-or if they occur.

• Condition coverage measures the sub-expressions independently of each other. 

Multiple Condition Coverage:

• Reports whether every possible combination of Boolean sub-expressions occurs. As with condition coverage, the sub-expressions are separated by logical-and and logical-or, when present.

• The test cases required for full multiple condition coverage of a condition are given by the logical operator truth table for the condition.
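For example (a sketch, not taken from the original material), full multiple condition coverage of a decision with two sub-conditions joined by logical-and requires all four rows of its truth table:

```c
#include <assert.h>

/* A decision with two sub-conditions joined by logical-and. */
int both_positive(int a, int b) {
    return (a > 0) && (b > 0);
}

/* Multiple condition coverage: every combination of the two
   sub-conditions -- TT, TF, FT, FF -- i.e. the full truth table. */
void test_both_positive(void) {
    assert(both_positive( 1,  1) == 1);   /* T && T */
    assert(both_positive( 1, -1) == 0);   /* T && F */
    assert(both_positive(-1,  1) == 0);   /* F, b>0 not evaluated */
    assert(both_positive(-1, -1) == 0);   /* F, b>0 not evaluated */
}
```

Note that because && short-circuits, the last two cases never actually evaluate b > 0; this is the short-circuit behavior the surrounding text mentions.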


Disadvantage:

• Tedious to determine the minimum set of test cases required, especially for very complex Boolean

expressions

• Number of test cases required could vary substantially among conditions that have similar 

complexity

Condition/Decision Coverage:

• Condition/Decision Coverage is a hybrid measure composed of the union of condition coverage and decision coverage.

• It has the advantage of simplicity but without the shortcomings of its component measures.

Path Coverage:

• This measure reports whether each of the possible paths in each function has been followed.

• A path is a unique sequence of branches from the function entry to the exit.

• Also known as predicate coverage; predicate coverage views paths as possible combinations of logical conditions.

• Path coverage has the advantage of requiring very thorough testing.

Function coverage:

• This measure reports whether you invoked each function or procedure.

• It is useful during preliminary testing to assure at least some coverage in all areas of the software.

• Broad, shallow testing finds gross deficiencies in a test suite quickly.

Loop coverage

This measure reports whether you executed each loop body zero times, exactly once, twice and

more than twice (consecutively).

For do-while loops, loop coverage reports whether you executed the body exactly once, and more than

once.

The valuable aspect of this measure is determining whether while-loops and for-loops execute more than once, information not reported by other measures.
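As a sketch (the function name is illustrative, not from the original material), achieving loop coverage for a simple for-loop means driving the loop body zero, one, two, and more than two times:

```c
#include <assert.h>

/* Sums the first n elements of a; the for-loop body runs n times. */
int sum_array(const int *a, int n) {
    int total = 0;
    for (int i = 0; i < n; ++i)
        total += a[i];
    return total;
}

/* Loop coverage: execute the loop body 0, 1, 2 and more than 2 times. */
void test_sum_array_loop_coverage(void) {
    int a[] = {3, 4, 5, 6};
    assert(sum_array(a, 0) == 0);    /* zero iterations  */
    assert(sum_array(a, 1) == 3);    /* exactly once     */
    assert(sum_array(a, 2) == 7);    /* exactly twice    */
    assert(sum_array(a, 4) == 18);   /* more than twice  */
}
```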

Race coverage

This measure reports whether multiple threads execute the same code at the same time.

Helps detect failure to synchronize access to resources.

Useful for testing multi-threaded programs such as in an operating system.

Integration testing

Integration testing (sometimes called Integration and Testing, abbreviated I&T) is the phase of 

Software testing in which individual software modules are combined and tested as a group. It follows unit

testing and precedes system testing.

Testing performed to expose faults in the interfaces and in the interaction between integrated

components.


Integration testing takes as its input modules that have been unit tested, groups them in larger 

aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output

the integrated system ready for system testing.

Integration testing is a logical extension of unit testing. In its simplest form, two units that have

already been tested are combined into a component and the interface between them is tested. A

component, in this sense, refers to an integrated aggregate of more than one unit. In a realistic scenario,

many units are combined into components, which are in turn aggregated into even larger parts of the

program. The idea is to test combinations of pieces and eventually expand the process to test your 

modules with those of other groups. Eventually all the modules making up a process are tested together.

Beyond that, if the program is composed of more than one process, they should be tested in pairs rather 

than all at once. Integration testing identifies problems that occur when units are combined. By using a test

plan that requires you to test each unit and ensure the viability of each before combining units, you know

that any errors discovered when combining units are likely related to the interface between units. This

method reduces the number of possibilities to a far simpler level of analysis.

Purpose

The purpose of integration testing is to verify functional, performance and reliability requirements

placed on major design items. These "design items", i.e. assemblages (or groups of units), are exercised

through their interfaces using Black box testing, success and error cases being simulated via appropriate

parameter and data inputs. Simulated usage of shared data areas and inter-process communication is

tested, and individual subsystems are exercised through their input interface. All test cases are constructed to

test that all components within assemblages interact correctly, for example, across procedure calls or 

process activations, and is done after testing individual modules, i.e. unit testing. The overall idea is a "building block" approach, in which verified assemblages are added to a verified base, which is then used to

support the Integration testing of further assemblages.

The different types of integration testing are big bang, top-down and bottom-up.

Different Approaches of Testing

• Big Bang approach

• Incremental approach

• Top Down approach

• Bottom Up approach

Big Bang (project management)

A big bang project is one that has no staged delivery. The customer must wait, sometimes

months, before seeing anything from the development team. At the end of the wait comes a "big bang". A


common argument against big bang projects is that there are no checkpoints during the project where the customer's expectations can be tested, thus risking that the final delivery is not what the customer had in

mind.

All components are tested in isolation and are mixed together only when the final system is tested for the first time.

Disadvantages:

• Requires both stubs and drivers to test the independent components.

• When failure occurs, it is very difficult to locate the faults.

• After each modification, we have to go through the cycle of testing, locating faults, and fixing them all over again.

Integration testing can be done in a variety of ways but the following are common strategies:

The top-down approach to integration testing requires the highest-level modules be tested and

integrated first. This allows high-level logic and data flow to be tested early in the process, and it tends to

minimize the need for drivers. However, the need for stubs complicates test management, and low-level

utilities are tested relatively late in the development cycle. Another disadvantage of top-down integration testing is its poor support for early release of limited functionality.

The bottom-up approach requires the lowest-level units be tested and integrated first. These units

are frequently referred to as utility modules. By using this approach, utility modules are tested early in the

development process and the need for stubs is minimized. The downside, however, is that the need for 

drivers complicates test management and high-level logic and data flow are tested late. Like the top-down

approach, the bottom-up approach also provides poor support for early release of limited functionality.


Top-down and design

Top-down and bottom-up are strategies of information processing and knowledge ordering,

mostly involving software, and by extension other humanistic and scientific System theories.

In the top-down model an overview of the system is formulated, without going into detail for any

part of it. Each part of the system is then refined by designing it in more detail. Each new part may then be

refined again, defining it in yet more detail until the entire specification is detailed enough to validate the

model.

1. The main control module is used as a test driver and stubs are substituted for all components

directly subordinate to the main module.

2. Depending on the integration approach, subordinate stubs are replaced one at a time with actual

components.

3. Tests are conducted as each component is integrated.

4. Stubs are removed and integration moves downward in the program structure.

Advantage

Can verify major control or decision points early in the testing process.

(Module hierarchy used in the top-down example: M1 at the top level; M2, M3 and M4 on the second level; M5, M6 and M7 on the third level; M8 at the bottom.)

Disadvantage

Stubs are required when performing the integration testing, and in general, developing stubs is very difficult.
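As a sketch of what a stub looks like (module and function names are hypothetical, not from the original material), a stub substitutes a canned answer for a subordinate module that is not yet integrated, so that the higher-level module can still be exercised:

```c
#include <assert.h>

/* Stub for a subordinate module that is not yet integrated: instead of
   real business logic, it returns a canned discount rate. */
int get_discount_rate(int customer_id) {
    (void)customer_id;   /* input deliberately ignored by the stub */
    return 10;           /* canned answer: always a 10% discount */
}

/* Higher-level module under test; it calls the subordinate module. */
int net_price(int customer_id, int gross_price) {
    int rate = get_discount_rate(customer_id);
    return gross_price - gross_price * rate / 100;
}
```

With the stub in place, net_price(42, 200) can already be checked to return 180 even before the real discount module exists.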

Bottom-up

In bottom-up design, first the individual parts of the system are specified in great detail. The parts

are then linked together to form larger components, which are in turn linked until a complete system is

formed. This strategy often resembles a "seed" model, whereby the beginnings are small, but eventually

grow in complexity and completeness.

Major steps

1. Low-level components will be tested individually first.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The driver is removed and integration moves upward in the program structure.
4. Repeat the process until all components are included in the test.

Advantage

Compared with stubs, drivers are much easier to develop.

Disadvantage

Major control and decision problems will be identified later in the testing process.
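A driver, by contrast, is a small control program that feeds test inputs to a low-level unit and checks its outputs until the higher-level modules that will eventually call it exist. A minimal sketch (names hypothetical, not from the original material):

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical low-level utility module under test. */
int max_of_two(int a, int b) {
    return a > b ? a : b;
}

/* Driver: coordinates test case input and output for the unit.
   Returns the number of failing cases; 0 means all cases passed. */
int run_driver(void) {
    struct { int a, b, expected; } cases[] = {
        { 1, 2, 2 }, { 5, 3, 5 }, { 7, 7, 7 },
    };
    int failures = 0;
    for (size_t i = 0; i < sizeof cases / sizeof cases[0]; ++i) {
        int got = max_of_two(cases[i].a, cases[i].b);
        if (got != cases[i].expected) {
            printf("case %zu failed: got %d\n", i, got);
            ++failures;
        }
    }
    return failures;
}
```

Once the real calling modules are integrated, the driver is discarded and integration moves upward.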

Advantages of top-down programming:

• Programming team stays focused on the goal

• Everyone knows his or her job.

• By the time the programming starts there are no questions.

• Code is easier to follow, since it is written methodically and with purpose.

Disadvantages of top-down programming:

• Top-down programming may complicate testing, since nothing executable will even exist

until near the end of the project.

• Bottom-up programming may allow for unit testing, but until more of the system comes

together none of the system can be tested as a whole, often causing complications near the end

of the project, "Individually we stand, combined we fall."

• All decisions depend on the starting goal of the project, and some decisions cannot be

made depending on how specific that description is.


System Testing:

System testing is testing conducted on a complete, integrated system to evaluate the system's

compliance with its specified requirements. System testing falls within the scope of Black box testing and

as such, should require no knowledge of the inner design of the code or logic. System testing should be

performed by testers who are trained to plan, execute, and report on application and system code. They

should be aware of scenarios that might not occur to the end user, like testing for null, negative, and format

inconsistent values.

System testing is actually done to the entire system against the Functional Requirement Specification (FRS) and/or the System Requirement Specification (SRS). Moreover, system testing is

an investigatory testing phase, where the focus is to have almost a destructive attitude and test not only

the design, but also the behavior and even the believed expectations of the customer. It is also intended to

test up to and beyond the bounds defined in the software/hardware requirements specification.

Types of System Testing:

Types of Testing:

• Sanity Testing

• Compatibility Testing

• Recovery Testing

• Usability Testing

• Exploratory Testing

• Adhoc Testing

• Stress Testing

• Volume Testing

• Load Testing

• Performance Testing

• Security Testing

Sanity Testing

Sanity testing checks the major working functionality of the system, i.e., whether the build is stable enough for the major testing effort. This testing is done after coding and before the main testing phase.


Compatibility Testing

Testing how well software performs in a particular hardware/software/operating

system/network/environment etc.

Recovery Testing

Recovery testing is basically done in order to check how quickly and how well the application can recover from any type of crash or hardware failure. The type or extent of recovery is specified in

the requirement specifications.

Usability Testing:

This testing is also called 'Testing for User-Friendliness'. It is done when the user interface of the application is an important consideration and needs to be specific to the particular type of user.

Exploratory Testing:

This testing is similar to the ad-hoc testing and is done in order to learn/explore the

application.

Ad-hoc Testing:

This type of testing is done without any formal Test Plan or Test Case creation. Ad-hoc

testing helps in deciding the scope and duration of the various other kinds of testing, and it also helps testers in learning the application prior to starting any other testing.

Stress Testing:

The application is tested against heavy load such as complex numerical values, large number 

of inputs, large number of queries etc. which checks for the stress/load the applications can

withstand.

Volume Testing:

Volume testing is done against the efficiency of the application. Huge amount of data is

processed through the application (which is being tested) in order to check the extreme limitations of 

the System.

Load Testing:

The application is tested against heavy loads or inputs such as testing of web sites in order to

find out at what point the web-site/application fails or at what point its performance degrades.

User Acceptance Testing:

In this type of testing, the software is handed over to the user in order to find out if the software meets the user's expectations and works as it is expected to.

Alpha Testing:

In this type of testing, the users are invited at the development center where they use the

application and the developers note every particular input or action carried out by the user. Any type


of abnormal behavior of the system is noted and rectified by the developers.

Beta Testing:

In this type of testing, the software is distributed as a beta version to the users and users test

the application at their sites. As the users explore the software, in case if any exception/defect occurs

that is reported to the developers.

Testing Techniques

Manual Testing Technique:

This method of testing is done manually by the tester on the application under test

Why do manual testing?

Even in this age of short development cycles and automated-test-driven development, manual testing contributes vitally to the software development process. Here are a number of good reasons to do manual testing:

• By giving end users repeatable sets of instructions for using prototype software, manual testing allows them to be involved early in each development cycle and draws invaluable feedback from them that can prevent "surprise" application builds that fail to meet real-world usability requirements.

• Manual test cases give testers something to use while awaiting the construction of the application.

• Manual test cases can be used to provide feedback to development teams in the form of a set of repeatable steps that lead to bugs or usability problems.

• If done thoroughly, manual test cases can also form the basis for help or tutorial files for the application under test.

• Finally, in a nod toward test-driven development, manual test cases can be given to the development staff to provide a clear description of the way the application should flow across use cases.

In summary, manual testing fills a gap in the testing repertoire and adds invaluably to the software development process.

Test automation is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions. Commonly, test automation involves automating a manual process already in place that uses a formalized testing process.

Over the past few years, tools that help programmers quickly create applications with graphical user interfaces have dramatically improved programmer productivity. This has increased the pressure on testers, who are often perceived as bottlenecks to the delivery of software products. Testers are being asked to test more and more code in less and less time. Test automation is one way to do this, as manual testing is time consuming. As and when different versions of software are released, the new features have to be tested manually time and again. But now there are tools available that


help the testers in the automation of the GUI, which reduces the test time as well as the cost; other test automation tools support the execution of performance tests.

Many test automation tools provide record and playback features that allow users to record user actions interactively and replay them back any number of times, comparing actual results to those expected. However, reliance on these features poses major reliability and maintainability problems.

Most successful automators use a software engineering approach, and as such most serious test automation is undertaken by people with development experience.

A growing trend in software development is to use testing frameworks such as the xUnit frameworks (for example, JUnit and NUnit), which allow the code to conduct unit tests to determine whether various sections of the code are acting as expected in various circumstances. Test cases describe tests that need to be run on the program to verify that the program runs as expected. All three aspects of testing can be automated.

Another important aspect of test automation is the idea of partial test automation, or automating parts but not all of the software testing process. If, for example, an oracle cannot reasonably be created, or if fully automated tests would be too difficult to maintain, then a software tools engineer can instead create testing tools to help human testers perform their jobs more efficiently. Testing tools can help automate tasks such as product installation, test data creation, GUI interaction, problem detection (consider parsing or polling agents equipped with oracles), defect logging, etc., without necessarily automating tests in an end-to-end fashion.

Test automation is expensive, and it is an addition to, not a replacement for, manual testing. It can be made cost-effective in the longer term though, especially in regression testing. One way to generate test cases automatically is model-based testing, where a model of the system is used for test case generation, but research continues into a variety of methodologies for doing so.


SOFTWARE TESTING LIFECYCLE

The testing lifecycle ensures that all the relevant requirements (inputs) are obtained, planning is adequately carried out, and the test cases are designed and executed as per plan. It also ensures that the results are obtained, reviewed and monitored. The application (or custom-build) testing lifecycle and its phases are:

Test Requirements:

Requirements for testing can be collected from documents such as Requirement Specification documents, Functional Specification documents, Design Specification documents or Use Case documents.

Test Planning:

The test plan is a road map for all testing activities. In this phase the entire testing activity, including the scope, strategies, approaches and techniques to be used in testing that particular application, has to be planned. The entire testing activity has to be scheduled and risks have to be analyzed.

Test Environment Setup:

The test bed has to be installed and network connectivity has to be set up. All software/tool installation and configuration has to be done, and coordination with the vendors of the tools and others has to be set up.

Test Design

Test scenario identification has to be done, followed by test case preparation, test data and test script preparation, and test case reviews and approval. Approved test cases are baselined under Configuration Management.

Test Execution and Defect Tracking

Executing test cases, running test scripts, and capturing, reviewing and analyzing test results.

Raising defects and tracking them to closure.

Test Reporting

Test summary reports, test metrics and process improvements made.

Build Release


TEST PLAN

• The test plan keeps track of possible tests that will be run on the system after coding.

• The test plan is a document that develops as the project is being developed.

• Record tests as they come up

• Test error prone parts of software development.

• The initial test plan is abstract and the final test plan is concrete.

• The initial test plan contains high level ideas about testing the system without getting into the

details of exact test cases.

• The most important test cases come from the requirements of the system.

• When the system is in the design stage, the initial tests can be refined a little.

• During the detailed design or coding phase, exact test cases start to materialize.

• After coding, the test points are all identified and the entire test plan is exercised on the

software.

Purpose of Software Test Plan:

• To achieve 100% CORRECT code. Ensure all Functional and Design Requirements are

implemented as specified in the documentation.

• To provide a procedure for Unit and System Testing.

• To identify the documentation process for Unit and System Testing.

• To identify the test methods for Unit and System Testing.

Advantages of test plan

• Serves as a guide to testing throughout the development.

• We only need to define test points during the testing phase.

• Serves as a valuable record of what testing was done.

• The entire test plan can be reused if regression testing is done later on.

• The test plan itself could have defects just like software!


In software testing, a test plan gives detailed testing information regarding an upcoming

testing effort, including

• Scope of testing

• Schedule

• Test Deliverables

• Release Criteria

• Risks and Contingencies

Process of the Software Test Plan

• Identify the requirements to be tested. All test cases shall be derived using the current Design

Specification.

• Identify which particular test(s) you're going to use to test each module.

• Review the test data and test cases to ensure that the unit has been thoroughly verified and

that the test data and test cases are adequate to verify proper operation of the unit.

• Identify the expected results for each test.

• Document the test case configuration, test data, and expected results. This information shall

be submitted via the on-line Test Case Design (TCD) and filed in the unit's Software

Development File (SDF). A successful Peer Technical Review baselines the TCD and initiates

coding.

• Perform the test(s).

• Document the test data, test cases, and test configuration used during the testing process.

This information shall be submitted via the on-line Unit/System Test Report (STR) and filed in the unit's Software Development File (SDF).

• Successful unit testing is required before the unit is eligible for component integration/system

testing.

• Unsuccessful testing requires a Program Trouble Report to be generated. This document

shall describe the test case, the problem encountered, its possible cause, and the sequence

of events that led to the problem. It shall be used as a basis for later technical analysis.

• Test documents and reports shall be submitted on-line. Any specifications to be reviewed,

revised, or updated shall be handled immediately.

Deliverables: Test Case Design, System/Unit Test Report, Problem Trouble Report (if any).

Test plan template

• Test Plan Identifier

• References

• Introduction of Testing

• Test Items

• Software Risk Issues


• Features to be Tested

• Features not to be Tested

• Approach

• Item Pass/Fail Criteria

• Entry & Exit Criteria

• Suspension Criteria and Resumption Requirements

• Test Deliverables

• Remaining Test Tasks

• Environmental Needs

• Staffing and Training Needs

• Responsibilities

• Schedule

• Planning Risks and Contingencies

• Approvals

• Glossary

Test plan identifier 

Master test plan for the Line of Credit Payment System.

References

List all documents that support this test plan.

Documents that are referenced include:

• Project Plan

• System Requirements specifications.

• High Level design document.

• Detail design document.

• Development and Test process standards.

• Methodology guidelines and examples.

• Corporate standards and guidelines.

Objective of the plan

Scope of the plan

Describe the scope in relation to the software project plan that it relates to. Other items may include resource and budget constraints, the scope of the testing effort, how testing relates to other evaluation activities (analysis and reviews), and possibly the process to be used for change control and communication and coordination of key activities.


Test items (functions)

These are things you intend to test within the scope of this test plan. Essentially, something

you will test, a list of what is to be tested. This can be developed from the software application

inventories as well as other sources of documentation and information.

This can be controlled through a local Configuration Management (CM) process if you have one. This information includes version numbers and configuration requirements where needed (especially if

multiple versions of the product are supported). It may also include key delivery schedule issues for 

critical elements.

Remember, what you are testing is what you intend to deliver to the client.

This section can be oriented to the level of the test plan. For higher levels it may be by

application or functional area, for lower levels it may be by program, unit, module or build.

Software risk issues

Identify what software is to be tested and what the critical areas are, such as:

• Delivery of a third party product.

• New version of interfacing software.

• Ability to use and understand a new package/tool, etc.

• Extremely complex functions.

• Modifications to components with a past history of failure.

• Poorly documented modules or change requests.

There are some inherent software risks such as complexity; these need to be identified.

• Safety.

• Multiple interfaces.

• Impacts on Client.

• Government regulations and rules.

Another key area of risk is a misunderstanding of the original requirements. This can occur at

the management, user and developer levels. Be aware of vague or unclear requirements and

requirements that cannot be tested.

The past history of defects (bugs) discovered during Unit testing will help identify potential

areas within the software that are risky. If the unit testing discovered a large number of defects or a

tendency towards defects in a particular area of the software, this is an indication of potential future

problems. It is the nature of defects to cluster and clump together. If an area was defect-ridden earlier, it will most likely continue to be defect-prone.

One good approach to define where the risks are is to have several brainstorming sessions.


Features to be tested 

This is a listing of what is to be tested from the user's viewpoint of what the system does. This

is not a technical description of the software, but a users view of the functions.

Set the level of risk for each feature. Use a simple rating scale such as (H, M, L): High, Medium and Low. These types of levels are understandable to a user. You should be prepared to discuss why a particular level was chosen.

Features not to be tested 

This is a listing of what is 'not' to be tested from both the user's viewpoint of what the system

does and a configuration management/version control view. This is not a technical description of the

software, but a user's view of the functions.

Identify why the feature is not to be tested; there can be any number of reasons.

• Not to be included in this release of the Software.

• Low risk; it has been used before and was considered stable.

• Will be released but not tested or documented as a functional part of the release of this

version of the software.

 Approach (strategy)

This is your overall test strategy for this test plan; it should be appropriate to the level of the

plan (master, acceptance, etc.) and should be in agreement with all higher and lower levels of plans.

Overall rules and processes should be identified.

• Are any special tools to be used and what are they?

• Will the tool require special training?

• What metrics will be collected?

• Which level is each metric to be collected at?

• How is Configuration Management to be handled?

• How many different configurations will be tested?

• Hardware

• Software

• Combinations of HW, SW and other vendor packages

• What levels of regression testing will be done and how much at each test level?

• Will regression testing be based on severity of defects detected?

• How will elements in the requirements and design that do not make sense or are untestable be processed?


If this is a master test plan the overall project testing approach and coverage requirements must also

be identified.

Specify if there are special requirements for the testing.

• Only the full component will be tested.

• A specified segment or grouping of features/components must be tested together.

Other information that may be useful in setting the approach is:

• MTBF, Mean Time Between Failures - if this is a valid measurement for the test involved and if the data is available.

• SRE, Software Reliability Engineering - if this methodology is in use and if the information is available.

How will meetings and other organizational processes be handled?

Item pass/fail criteria

Identify any show-stopper issues. Specify the criteria to be used to determine whether each test item has passed or failed.

Entry & exit criteria

Entrance Criteria

The Entrance Criteria specified by the System Test Controller should be fulfilled before System Test can commence. In the event that any criterion has not been achieved, the System Test may commence if the Business Team and Test Controller are in full agreement that the risk is manageable.

• All developed code must be unit tested. Unit and Link Testing must be completed and signed

off by development team.

• System Test plans must be signed off by Business Analyst and Test Controller.

• All human resources must be assigned and in place.

• All test hardware and environments must be in place, and free for System test use.

• The Acceptance Tests must be completed, with a pass rate of not less than 80%.

Exit Criteria

The Exit Criteria detailed below must be achieved before the Phase 1 software can be

recommended for promotion to Operations Acceptance status. Furthermore, it is recommended that there be a minimum of 2 days' effort of Final Integration testing AFTER the final fix/change has been retested.

• All High Priority errors from System Test must be fixed and tested

• If any medium or low-priority errors are outstanding - the implementation risk must be signed

off as acceptable by Business Analyst and Business Expert


Suspension Criteria and Resumption Requirements:

This is a particular risk clause to define under what circumstances testing would stop and

restart.

  Resumption Criteria 

In the event that system testing is suspended, resumption criteria will be specified and testing will not re-commence until the software meets these criteria.

• Project Integration Test must be signed off by Test Controller and Business Analyst.

• Business Acceptance Test must be signed off by Business Expert.

Risks and Contingencies

This defines all other risk events, their likelihood and impact, and the countermeasures to overcome them.

Summary

The goal of this exercise is to familiarize students with the process of creating test plans.

This exercise is divided into three tasks as described below.

• Devise a test plan for the group.

• Inspect the plan of another group who will in turn inspect yours.

• Improve the group's own plan based on comments from the review session

Task 1: Creating a Test Plan

The role of a test plan is to guide all testing activities. It defines what is to be tested and what is to be overlooked, how the testing is to be performed (described on a general level) and by whom. It is therefore a managerial document, not a technical one - in essence, it is a project plan for testing.

Therefore, the target audience of the plan should be a manager with a decent grasp of the technical

issues involved.

Experience has shown that good planning can save a lot of time, even in an exercise, so do

not underestimate the effort required for this phase.

The goal of all these exercises is to carry out system testing on Word Pad, a simple word

processor. Your task is to write a thorough test plan in English using the above-mentioned sources as

guidelines. The plan should be based on the documentation of Word Pad.


Task 2: Inspecting a Test Plan

The role of a review is to make sure that a document (or code in a code review) is readable

and clear and that it contains all the necessary information and nothing more. Some implementation

details should be kept in mind:

• The groups will divide their roles themselves before arriving at the inspection. A failure to

follow the roles correctly will be reflected in the grading. However, one of the assistants will

act as the moderator and will not assume any other roles.

• There will be only one meeting with the other group and the moderator. All planning, overview and preparation is up to the groups themselves. You should use the suggested checklists in the lecture notes while preparing. Task 3 deals with the after-meeting activities.

• The meeting is rather short, only 60 minutes for a pair (that is, 30 minutes each). Hence, all

comments on the language used in the other group's test plan are to be given in writing. The

meeting itself concentrates on the form and content of the plan.

Task 3: Improved Test Plan and Inspection Report

After the meeting, each group will prepare a short inspection report on their test plan listing

their most typical and important errors in the first version of the plan together with ideas for correcting

them. You should also answer the following questions in a separate document:

• What is the role of the test plan in designing test cases?

• What were the most difficult parts in your test plan and why?

Furthermore, the test plan is to be revised according to the input from the inspection.


Test Case:

A set of test inputs, execution conditions, and expected results developed for a particular 

objective, such as to exercise a particular program path or to verify compliance with a specific

requirement.

In software engineering, a test case is a set of conditions or variables under which a tester 

will determine if a requirement upon an application is partially or fully satisfied. It may take many test

cases to determine that a requirement is fully satisfied. In order to fully test that all the requirements

of an application are met, there must be at least one test case for each requirement unless a requirement has sub-requirements; in that situation, each sub-requirement must have at least one test case. Some methodologies recommend creating at least two test cases for each requirement: one should perform positive testing of the requirement and the other should perform negative testing.

If the application is created without formal requirements, then test cases are written based on the

accepted normal operation of programs of a similar class.

What characterises a formal, written test case is that there is a known input and an expected output,

which is worked out before the test is executed. The known input should test a precondition and the

expected output should test a post condition.
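As a minimal sketch of such a formal test case, the known input exercises a precondition and the expected output checks a postcondition. The `withdraw` function and its business rule below are illustrative assumptions, not taken from the text:

```python
# Hypothetical function under test (an assumption for illustration):
# debits an account balance and rejects invalid withdrawal amounts.
def withdraw(balance, amount):
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal")
    return balance - amount

def test_valid_withdrawal():
    # Known input (precondition): balance 100, withdrawal 30.
    # Expected output (postcondition): new balance 70.
    assert withdraw(100, 30) == 70

def test_overdraft_rejected():
    # Negative case: withdrawing more than the balance must fail.
    try:
        withdraw(100, 150)
    except ValueError:
        return
    raise AssertionError("overdraft was not rejected")

test_valid_withdrawal()
test_overdraft_rejected()
```

Note that both the expected result and the failure condition are decided before the test is executed, which is what makes the test case formal.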

Under special circumstances, there could be a need to run the test, produce the results, and then have a team of experts evaluate whether the results can be considered a pass. This often happens when determining performance numbers for new products. The first test is taken as the baseline for subsequent test/product release cycles.

Written test cases include a description of the functionality to be tested taken from either the

requirements or use cases, and the preparation required to ensure that the test can be conducted.

Written test cases are usually collected into Test suites.

A variation of test cases is most commonly used in acceptance testing. Acceptance testing is done

by a group of end-users or clients of the system to ensure the developed system meets their 

requirements. User acceptance testing is usually differentiated by the inclusion of happy path or 

positive test cases.


Test Case Template


Test Case Design Techniques

■ Black Box testing Techniques

 – Equivalence Class Partitioning

 – Boundary Value Analysis

 – Cause-Effect Diagram

 – State-Transition

Equivalence Class Partitioning 

 A definition of Equivalence Partitioning from our software testing dictionary:

Equivalence Partitioning: An approach where classes of inputs are categorized for product or 

function validation. This usually does not include combinations of inputs, but rather a single state value per class. For example, with a given function there may be several classes of input that may be used for positive testing. If a function expects an integer and receives an integer as input, this would be considered a positive test assertion. On the other hand, if a character or any other input class other than integer is provided, this would be considered a negative test assertion or condition.

WHAT IS EQUIVALENCE PARTITIONING?

Concepts: Equivalence partitioning is a method for deriving test cases. In this method, classes of 

input conditions called equivalence classes are identified such that each member of the class causes

the same kind of processing and output to occur.

In this method, the tester identifies various equivalence classes for partitioning. A class is a set of input conditions that is likely to be handled the same way by the system. If the system were to

handle one case in the class erroneously, it would handle all cases erroneously.

WHY LEARN EQUIVALENCE PARTITIONING?

Equivalence partitioning drastically cuts down the number of test cases required to test a

system reasonably. It is an attempt to get a good 'hit rate', to find the most errors with the smallest

number of test cases.

DESIGNING TEST CASES USING EQUIVALENCE PARTITIONING

To use equivalence partitioning, you will need to perform two steps:

Identify the equivalence classes

Design test cases


STEP 1: IDENTIFY EQUIVALENCE CLASSES

Take each input condition described in the specification and derive at least two equivalence classes

for it. One class represents the set of cases which satisfy the condition (the valid class) and one

represents cases which do not (the invalid class).

Following are some general guidelines for identifying equivalence classes:

a) If the requirements state that a numeric value is input to the system and must be within a range of values, identify one valid class (inputs which are within the valid range) and two invalid equivalence classes (inputs which are too low and inputs which are too high). For example, if an item in inventory can have a quantity of -9999 to +9999, identify the following classes:

1. The valid class (QTY is greater than or equal to -9999 and is less than or equal to 9999), also written as (-9999 <= QTY <= 9999)

2. The invalid class (QTY is less than -9999), also written as (QTY < -9999)

3. The invalid class (QTY is greater than 9999), also written as (QTY > 9999)

b) If the requirements state that the number of items input by the system at some point must lie within

a certain range, specify one valid class where the number of inputs is within the valid range, one

invalid class where there are too few inputs and one invalid class where there are too many inputs.

For example, specifications state that a maximum of 4 purchase orders can be registered against any one product. The equivalence classes are:

1. The valid class (number of purchase orders is greater than or equal to 1 and less than or equal to 4), also written as (1 <= no. of purchase orders <= 4)

2. The invalid class (no. of purchase orders > 4)

3. The invalid class (no. of purchase orders < 1)

c) If the requirements state that a particular input item must match one of a set of values and each case will

be dealt with the same way, identify a valid class for values in the set and one invalid class

representing values outside of the set.

For example, suppose a specification says that the code accepts between 4 and 24 inputs; each is a 3-digit integer.

• One partition: number of inputs

• Classes “x<4”, “4<=x<=24”, “24<x”

• Chosen values: 3,4,5,14,23,24,25
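The partitioning above can be sketched in code. This is a minimal illustration of the 4-to-24-inputs example, assuming a helper that labels an input count with its equivalence class; the function name and labels are my own, not from the text:

```python
# Sketch: deriving equivalence classes for a routine that accepts
# between 4 and 24 inputs (the example above).
def classify_input_count(x):
    """Return the equivalence class an input count falls into."""
    if x < 4:
        return "invalid: too few"      # class "x < 4"
    if x <= 24:
        return "valid"                 # class "4 <= x <= 24"
    return "invalid: too many"         # class "24 < x"

# One representative per class, plus the boundary values chosen above.
chosen = {
    3: "invalid: too few",
    4: "valid", 5: "valid", 14: "valid", 23: "valid", 24: "valid",
    25: "invalid: too many",
}

for value, expected_class in chosen.items():
    assert classify_input_count(value) == expected_class
```

Each chosen value stands in for its whole class: if the system mishandles one member of a class, it is expected to mishandle them all, which is why a single representative per class suffices.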


Boundary value analysis:

What is boundary value analysis in software testing?

Concepts: Boundary value analysis is a methodology for designing test cases that

concentrates software testing effort on cases near the limits of valid ranges. Boundary value analysis is a method which refines equivalence partitioning. Boundary value analysis generates test cases

that highlight errors better than equivalence partitioning. The trick is to concentrate software testing

efforts at the extreme ends of the equivalence classes. At those points when input values change

from valid to invalid errors are most likely to occur. As well, boundary value analysis broadens the

portions of the business requirement document used to generate tests. Unlike equivalence

partitioning, it takes into account the output specifications when deriving test cases.

How do you perform boundary value analysis?

Once again, you'll need to perform two steps:

1. Identify the equivalence classes.

2. Design test cases.

But the details vary. Let's examine each step.

Step 1: identify equivalence classes

Follow the same rules you used in equivalence partitioning. However, consider the output

specifications as well. For example, if the output specifications for the inventory system stated that a

report on inventory should indicate a total quantity for all products no greater than 999,999, then you'd add the following classes to the ones you found previously:

6. The valid class (0 <= total quantity on hand <= 999,999)

7. The invalid class (total quantity on hand < 0)

8. The invalid class (total quantity on hand > 999,999)

Step 2: design test cases

In this step, you derive test cases from the equivalence classes. The process is similar to

that of equivalence partitioning but the rules for designing test cases differ. With equivalence

partitioning, you may select any test case within a range and any on either side of it; with boundary value analysis, you focus your attention on cases close to the edges of the range. The detailed rules for

generating test cases follow:


Rules for test cases

1. If the condition is a range of values, create valid test cases for each end of the range and

invalid test cases just beyond each end of the range. For example, if a valid range of quantity on

hand is -9,999

through 9,999, write test cases that include:

1. the valid test case quantity on hand is -9,999,

2. the valid test case quantity on hand is 9,999,

3. the invalid test case quantity on hand is -10,000 and

4. the invalid test case quantity on hand is 10,000

You may combine valid classes wherever possible, just as you did with equivalence partitioning, and,

once again, you may not combine invalid classes. Don't forget to consider output conditions as

well. In our inventory example the output conditions generate the following test cases:

1. the valid test case total quantity on hand is 0,

2. the valid test case total quantity on hand is 999,999

3. the invalid test case total quantity on hand is -1 and

4. the invalid test case total quantity on hand is 1,000,000

2. A similar rule applies where the condition states that the number of values must lie within a certain range: select two valid test cases, one for each boundary of the range, and two invalid test cases, one just below and one just above the acceptable range.

3. Design tests that highlight the first and last records in an input or output file.

4. Look for any other extreme input or output conditions, and generate a test for each of them.
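Rule 1 above can be sketched as a small generator of boundary test inputs for a numeric range; the helper name and return shape are assumptions for illustration:

```python
# Sketch: generate boundary-value test inputs for a numeric range,
# per rule 1 above - valid values at each end of the range, invalid
# values just beyond each end.
def boundary_values(low, high):
    return {
        "valid": [low, high],
        "invalid": [low - 1, high + 1],
    }

# The "quantity on hand" range of -9,999 through 9,999 from the text:
cases = boundary_values(-9999, 9999)
assert cases["valid"] == [-9999, 9999]       # -9,999 and 9,999 pass
assert cases["invalid"] == [-10000, 10000]   # -10,000 and 10,000 fail
```

The same helper applied to the output condition (0 to 999,999) reproduces the four output test cases listed earlier.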

Definition of Boundary Value Analysis from our Software Testing Dictionary:

Boundary Value Analysis (BVA). BVA is different from equivalence partitioning in that it

focuses on "corner cases" or values that are usually out of range as defined by the specification. This

means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001. BVA is often used as a technique for stress, load or volume testing. This type of validation is usually performed after positive functional validation has completed (successfully) using requirements specifications and user documentation.

Cause-Effect Graphing 

Cause-Effect Graphing (CEG) is used to derive test cases from a given natural language specification

to validate its corresponding implementation.


The CEG technique is a black-box method, i.e., it considers only the desired external behavior of a

system. As well, it is the only black-box test design technique that considers combinations of causes

of system behaviors.

1. A cause represents a distinct input condition or an equivalence class of input conditions. A cause

can be interpreted as an entity which brings about an internal change in the system. In a CEG, a

cause is always positive and atomic.

2. An effect represents an output condition or a system transformation which is observable. An effect

can be a state or a message resulting from a combination of causes.

3. Constraints represent external constraints on the system.
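Because CEG considers combinations of causes, test cases can be derived from a decision table over the cause values. The sketch below uses a hypothetical rule of my own invention (a payment is approved only if the account is active AND funds are sufficient), not a rule from the text:

```python
# Sketch: a cause-effect graph expressed as a boolean function.
# Causes: account_active, funds_sufficient (each a distinct input
# condition). Effect: payment approved (an observable output).
def effect_payment_approved(account_active, funds_sufficient):
    return account_active and funds_sufficient

# Decision-table style test cases over combinations of causes.
combinations = [
    (True,  True,  True),    # both causes present -> effect occurs
    (True,  False, False),   # missing funds        -> no effect
    (False, True,  False),   # inactive account     -> no effect
    (False, False, False),   # neither cause        -> no effect
]
for active, funds, expected in combinations:
    assert effect_payment_approved(active, funds) == expected
```

Each row of the table becomes one test case, which is exactly what distinguishes CEG from single-input techniques such as equivalence partitioning.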


State-Transition

Changes occurring in the state-based behavior or attributes of an object, or in the various links that the object has with other objects, can be represented by this technique. State models are ideal for describing the behavior of a single object. State transition is the state-based behavior of the instances of a class.

For example

Operation of an Elevator 

An elevator has to go to all the 5 floors in a building. Consider each floor as one state.

Let the lift initially be at the 0th floor (initial state). Now a request comes from the 5th floor; the lift has to respond to that request and move to the 5th floor (next state). Now a request also comes from the 3rd floor (another state); it has to respond to this request as well. Likewise, requests may come from other floors.

Each floor represents a different state; the lift has to handle requests from all the states and has to transition to each state in the sequence in which the requests come.

State Transition Diagram
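The elevator example can be sketched as a minimal state machine, where each floor is a state and servicing a request is a transition. The class and method names below are illustrative assumptions, not from the text:

```python
# Sketch: the elevator as a state machine. Each floor is a state;
# servicing a request transitions the lift to the requested floor.
class Elevator:
    def __init__(self, floors=5):
        self.floor = 0            # initial state: the 0th floor
        self.floors = floors

    def request(self, floor):
        if not 0 <= floor <= self.floors:
            raise ValueError("no such floor")
        self.floor = floor        # transition to the requested state
        return self.floor

lift = Elevator()
assert lift.request(5) == 5       # request from the 5th floor
assert lift.request(3) == 3       # then a request from the 3rd floor
```

State-transition test cases then exercise each valid transition (every floor-to-floor move) and invalid events (a request for a floor that does not exist).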


Sample Test Cases

Test List : 

Test Name : CDL_TCD_USH_TCS_001

Subject : Search
Status : Review
Designer : Edwin
Creation Date : 05/09/2003
Type : MANUAL
Description : User screen GUI check
Execution Status : Passed

Steps : 

Step 1
Description: Login to Citadel using correct GID Number, password and 'Citadel' server selected from the drop-down menu.
Expected Result: Search screen should appear if login is successful.

Step 2
Description: Check the GUI items in the User search screen.
Expected Result:
1. The User search screen should display "Reports, Change Password and Log Off" buttons at the top right corner of the screen.
2. An Identifier drop-down box should be displayed at the top left side of the screen.
3. A Folder check box and text field should be displayed next to the Identifier drop-down list.
4. Major Folder list box should be displayed.
5. Minor Folder list box should be displayed below the Major Folder list box.
6. Document Type Code text field should be displayed along with a "Doc Code List" button.
7. Document Type Description text field should be displayed.
8. Document Date field along with display options list box should be displayed.
9. Scan Date field along with display options list box should be displayed.
10. Buttons namely "Import, Search and Reset" should be displayed below the above fields.


Test execution

When the test design activity has finished, the test cases are executed. Test execution is the phase that follows everything discussed to this point: with test strategies, test planning and test procedures designed and developed, and the test environment operational, it is time to execute the tests created in the preceding phases.

Once development of the system is underway and software builds become ready for testing,

the testing team must have a precisely defined work flow for executing tests, tracking defects found,

and providing information, or metrics, on the progress of the testing effort.

When to stop testing?

Testing is potentially endless. We cannot test until all the defects are unearthed and removed

-- it is simply impossible. At some point, we have to stop testing and ship the software. The question

is when.

Realistically, testing is a trade-off between budget, time and quality. It is driven by profit models. There are two types of approach:

• Pessimistic Approach.

• Optimistic Approach.

The pessimistic and unfortunately most often used approach is to stop testing whenever 

some or any of the allocated resources -- time, budget, or test cases -- are exhausted.

The optimistic stopping rule is to stop testing when either reliability meets the requirement, or 

the benefit from continuing testing cannot justify the testing cost. This will usually require the use of 

reliability models to evaluate and predict reliability of the software under test. Each evaluation

requires repeated running of the following cycle: failure data gathering -- modeling -- prediction.

Defect

Defects are commonly defined as "failure to conform to specifications," e.g., incorrectly

implemented specifications and specified requirement(s) missing from the software. A bug in software

product is any exception that can hinder the functionality of either the whole software or part of it.

Defect Fundamental 

A defect is a flaw or deviation from the requirements. Basically, test cases/scripts are run in order to find out any unexpected behavior of the software product under test. If any such unexpected behavior or exception occurs, it is called a bug.


Discussions within the software development community consistently recognize that most

failures in software products are due to errors in the specifications or requirements—as high as 80

percent of total defect costs.

A defect can also be termed a variance from a desired attribute. These attributes include complete and

correct requirements and specifications, designs that meet requirements and programs that observe

requirements and business rules.

Report a Bug

Before you report a bug, please review the following guidelines:

It’s a good practice to take screen shots of execution of every step during software testing. If 

any test case fails during execution, it needs to be failed in the bug-reporting tool and a bug has to be

reported/logged for the same.

The tester can choose to first report a bug and then fail the test case in the bug-reporting tool

or fail a test case and report a bug. In any case, the Bug ID that is generated for the reported bug

should be attached to the test case that is failed.

At the time of reporting a bug, all the mandatory fields from the contents of bug (such as

Project, Summary, Description, Status, Detected By, Assigned To, Date Detected, Test Lead,

Detected in Version, Closed in Version, Expected Date of Closure, Actual Date of Closure, Severity,

Priority and Bug ID etc.) are filled and detailed description of the bug is given along with the expected

and actual results. The screen-shots taken at the time of execution of test case are attached to the

bug for reference by the developer.

After reporting a bug, a unique Bug ID is generated by the bug-reporting tool, which is then

associated with the failed test case. This Bug ID helps in associating the bug with the failed test case.
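The mandatory fields listed above can be sketched as a simple record. The field names follow the text; the default values, the example data and the record type itself are illustrative assumptions, not part of any particular bug-reporting tool:

```python
from dataclasses import dataclass
from datetime import date

# Sketch: the mandatory bug-report fields listed above as a record.
@dataclass
class BugReport:
    project: str
    summary: str
    description: str          # detailed description with expected
    detected_by: str          # and actual results
    assigned_to: str
    date_detected: date
    severity: str
    priority: str
    status: str = "New"       # initial status, per the text
    bug_id: str = ""          # filled in by the bug-reporting tool

# Example data loosely based on the sample test case above.
bug = BugReport(
    project="Citadel",
    summary="Search screen GUI mismatch",
    description="Folder check box missing on the user search screen",
    detected_by="Edwin",
    assigned_to="Development team",
    date_detected=date(2003, 9, 5),
    severity="Major",
    priority="High",
)
assert bug.status == "New"
```

Once the tool generates a Bug ID, the `bug_id` field would be filled in and attached to the failed test case, tying the two together as described above.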

After the bug is reported, it is assigned a status of ‘New’, which goes on changing as the bug

fixing process progresses.

If more than one tester is testing the software application, it is possible that some other tester has already reported a bug for the same defect found in the application. In such a situation, it becomes very important for the tester to find out whether any bug has been reported for a similar type of defect. If yes, then the test case has to be blocked with the previously raised bug (in this case, the test case has to be executed once the bug is fixed). And if there is no such bug reported previously, the tester can report a new bug and fail the test case against the newly raised bug.

If no bug-reporting tool is used, the test case is written in a tabular manner in a file with four columns: Test Step No, Test Step Description, Expected Result and Actual Result. The expected and actual results are written for each step, and the test case is failed at the step at which it fails.


These are the three basic elements of a bug report: what you did, what you expected to happen, and what actually happened. You need to describe each of them exactly.

This file containing the test case and the screenshots taken is sent to the developers for reference. As the tracking process is not automated, it becomes important to keep the information on the bug updated from the time it is raised until the time it is closed.

Always search the bug database first

The odds are high that if you've found a problem, someone else has found it, too. If you

spend a few minutes of your time making sure that you're not filing a duplicate bug, that's a few more

minutes someone can spend helping to fix that bug rather than sorting out duplicate bug reports.

If you don't understand an error message, ask for help. Don't report an error message you

don't understand as a bug. There are a lot of places you can ask for help in understanding what is

going on before you can claim that an error message you do not understand is a bug. Be brief, but don't leave any important details out.

There are some general guidelines as given below:

• Remember the three basics: what you did, what you expected to happen, and what

happened.

• When you provide code that demonstrates the problem, it should almost never be more than

ten lines long. Anything longer probably contains a lot of code that has nothing to do with the

problem, which makes it more difficult to figure out the real problem.

• If the product is crashing, include a trace log or stack dump (be sure to copy and paste all of the cryptic error codes and line numbers included in the results).

Don't report bugs about old versions.

Every time a new version is released, many bugs are fixed. If you're using a version of a

product that is more than two revisions older than the latest version, you should upgrade to the latest

version to make sure the bug you are experiencing still exists.

Only report one problem in each bug report. If you have encountered two bugs that don't

appear to be related, create a new bug report for each one. This makes it easier for different people

to help with the different bugs.

Defect Tracking

Defect tracking is the process of finding defects in a product (by inspection, testing, or recording feedback from customers) and making new versions of the product that fix the defects. Defect tracking is important in software engineering because complex software systems typically have tens, hundreds, or thousands of defects. Managing, evaluating and prioritizing these defects is a difficult task; defect tracking systems are computer database systems that store defects and help people to manage them.


Severity Vs Priority

The five severity levels are:

1. Critical

2. Major 

3. Average

4. Minor 

5. Exception

1. Critical - The defect results in the failure of the complete software system, of a subsystem, or of a software unit (program or module) within the system.

2. Major - The defect results in the failure of the complete software system, of a subsystem, or of a software unit (program or module) within the system. There is no way to make the failed component(s) work; however, there are acceptable processing alternatives which will yield the desired result.

3. Average - The defect does not result in a failure, but causes the system to produce incorrect, incomplete, or inconsistent results, or the defect impairs the system's usability.

4. Minor - The defect does not cause a failure, does not impair usability, and the desired processing results are easily obtained by working around the defect.

5. Exception - The defect is the result of non-conformance to a standard, is related to the aesthetics of the system, or is a request for an enhancement. Defects at this level may be deferred or even ignored.

In addition to the defect severity levels defined above, a defect priority level can be used with severity categories to determine the immediacy of repair. A five-level repair priority scale is also used in common testing practice. The levels are:

1. Resolve Immediately

2. Give High Attention

3. Normal Queue

4. Low Priority

5. Defer 

1. Resolve Immediately - Further development and/or testing cannot occur until the defect has been repaired. The system cannot be used until the repair has been effected.

2. Give High Attention - The defect must be resolved as soon as possible because it is impairing development and/or testing activities. System use will be severely affected until the defect is fixed.


3. Normal Queue - The defect should be resolved in the normal course of development activities. It can wait until a new build or version is created.

4. Low Priority - The defect is an irritant which should be repaired, but which can be repaired after more serious defects have been fixed.

5. Defer - The defect repair can be put off indefinitely. It can be resolved in a future major system revision or not resolved at all.
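A minimal sketch of how the two scales above might be represented in a defect-tracking script; the level names follow the lists above, but the rule that a defect's starting priority mirrors its severity is an illustrative assumption, not a fixed practice:

```python
# Severity and priority scales as defined in the text above
SEVERITY = {1: "Critical", 2: "Major", 3: "Average", 4: "Minor", 5: "Exception"}
PRIORITY = {1: "Resolve Immediately", 2: "Give High Attention",
            3: "Normal Queue", 4: "Low Priority", 5: "Defer"}

def default_priority(severity_level: int) -> str:
    """Illustrative starting point only: severity suggests a priority,
    but the team may raise or lower it (e.g. a Minor defect on a demo
    screen may still deserve Give High Attention)."""
    return PRIORITY[severity_level]

print(f"A {SEVERITY[1]} defect starts at: {default_priority(1)}")
```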


TEST REPORT

 

The Software Test Report (STR) is a record of the qualification testing performed on a Computer Software Configuration Item (CSCI), a software system or subsystem, or other software-related item. The STR enables the acquirer to assess the testing and its results.

APPLICATION/INTERRELATIONSHIP  

The Data Item Description (DID) contains the format and content preparation instructions for the data product generated by specific and discrete task requirements as delineated in the contract.

This DID is used when the developer is tasked to analyze and record the results of CSCI qualification testing, system qualification testing of a software system, or other testing identified in the contract.

The Contract Data Requirements List (CDRL) should specify whether deliverable data are to be delivered on paper or electronic media; are to be in a given electronic form (such as ASCII, CALS, or compatible with a specified word processor or other support software); may be delivered in developer format rather than in the format specified herein; and may reside in a computer-aided software engineering (CASE) or other automated tool rather than in the form of a traditional document.

PREPARATION INSTRUCTIONS 

General instructions.

a. Automated techniques. Use of automated techniques is encouraged. The term "document" in this DID means a collection of data regardless of its medium.

b. Alternate presentation styles. Diagrams, tables, matrices, and other presentation styles are acceptable substitutes for text when data required by this DID can be made more readable using these styles.

c. Title page or identifier. The document shall include a title page containing, as applicable: document number; volume number; version/revision indicator; security markings or other restrictions on the handling of the document; date; document title; name, abbreviation, and any other identifier for the system, subsystem, or item to which the document applies; contract number; CDRL item number; organization for which the document has been prepared; name and address of the preparing organization; and distribution statement. For data in a database or other alternative form, this information shall be included on external and internal labels or by equivalent identification methods.

d. Table of contents. The document shall contain a table of contents providing the number, title, and page number of each titled paragraph, figure, table, and appendix. For data in a database or other alternative form, this information shall consist of an internal or external table of contents containing


pointers to, or instructions for accessing, each paragraph, figure, table, and appendix or their equivalents.

e. Page numbering/labeling. Each page shall contain a unique page number and display the document number, including version, volume, and date, as applicable. For data in a database or other alternative form, files, screens, or other entities shall be assigned names or numbers in such a way that desired data can be indexed and accessed.

f. Response to tailoring instructions. If a paragraph is tailored out of this DID, the resulting document shall contain the corresponding paragraph number and title, followed by "This paragraph has been tailored out." For data in a database or other alternative form, this representation need occur only in the table of contents or equivalent.

g. Multiple paragraphs and subparagraphs. Any section, paragraph, or subparagraph in this DID may be written as multiple paragraphs or subparagraphs to enhance readability.

h. Standard data descriptions. If a data description required by this DID has been published in a standard data element dictionary specified in the contract, reference to an entry in that dictionary is preferred over including the description itself.

i. Substitution of existing documents. Commercial or other existing documents may be substituted for all or part of the document if they contain the required data.

Content requirements. Content requirements begin on the following page. The numbers shown designate the paragraph numbers to be used in the document. Each such number is understood to have the prefix 10.2 within this DID. For example, the paragraph numbered 1.1 is understood to be paragraph 10.2.1.1 within this DID.

Scope. This section shall be divided into the following paragraphs.

Identification. This paragraph shall contain a full identification of the system and the software to which this document applies, including, as applicable, identification number(s), title(s), abbreviation(s), version number(s), and release number(s).

System overview. This paragraph shall briefly state the purpose of the system and the software to which this document applies. It shall describe the general nature of the system and software; summarize the history of system development, operation, and maintenance; identify the project sponsor, acquirer, user, developer, and support agencies; identify current and planned operating sites; and list other relevant documents.

Document overview. This paragraph shall summarize the purpose and contents of this document and shall describe any security or privacy considerations associated with its use.

Referenced documents. This section shall list the number, title, revision, and date of all documents referenced in this report. This section shall also identify the source for all documents not available through normal Government stocking activities.


Overview of test results. This section shall be divided into the following paragraphs to provide an overview of test results.

Overall assessment of the software tested. This paragraph shall:

a. Provide an overall assessment of the software as demonstrated by the test results in this report

b. Identify any remaining deficiencies, limitations, or constraints that were detected by the testing performed. Problem/change reports may be used to provide deficiency information.

c. For each remaining deficiency, limitation, or constraint, describe:

1) Its impact on software and system performance, including identification of requirements not met

2) The impact on software and system design to correct it

3) A recommended solution/approach for correcting it

Impact of test environment. This paragraph shall provide an assessment of the manner in which the test environment may be different from the operational environment and the effect of this difference on the test results.

Recommended improvements. This paragraph shall provide any recommended improvements in the design, operation, or testing of the software tested. A discussion of each recommendation and its impact on the software may be provided. If no recommended improvements are provided, this paragraph shall state "None."

Detailed test results. This section shall be divided into the following paragraphs to describe the detailed results for each test. Note: the word "test" means a related collection of test cases.

(Project-unique identifier of a test). This paragraph shall identify a test by project-unique identifier and shall be divided into the following subparagraphs to describe the test results.

Summary of test results. This paragraph shall summarize the results of the test. The summary shall include, possibly in a table, the completion status of each test case associated with the test (for example, "all results as expected," "problems encountered," "deviations required"). When the completion status is not "as expected," this paragraph shall reference the following paragraphs for details.

Problems encountered. This paragraph shall be divided into subparagraphs that identify each test case in which one or more problems occurred.

(Project-unique identifier of a test case). This paragraph shall identify by project-unique identifier a test case in which one or more problems occurred, and shall provide:

a. A brief description of the problem(s) that occurred

b. Identification of the test procedure step(s) in which they occurred

c. Reference(s) to the associated problem/change report(s) and backup data, as applicable


d. The number of times the procedure or step was repeated in attempting to correct the problem(s) and the outcome of each attempt

e. Backup points or test steps where tests were resumed for retesting

Deviations from test cases/procedures. This paragraph shall be divided into subparagraphs that

identify each test case in which deviations from test case/test procedures occurred.

(Project-unique identifier of a test case). This paragraph shall identify by project-unique identifier a test case in which one or more deviations occurred, and shall provide:

a. A description of the deviation(s) (for example, the test case run in which the deviation occurred and the nature of the deviation, such as substitution of required equipment, procedural steps not followed, or schedule deviations). (Red-lined test procedures may be used to show the deviations)

b. The rationale for the deviation(s)

c. An assessment of the deviations' impact on the validity of the test case

Test log. This section shall present, possibly in a figure or appendix, a chronological record of the test events covered by this report. This test log shall include:

a. The date(s), time(s), and location(s) of the tests performed

b. The hardware and software configurations used for each test including, as applicable, part/model/serial number, manufacturer, revision level, and calibration date of all hardware, and version number and name for the software components used

c. The date and time of each test-related activity, the identity of the individual(s) who performed the activity, and the identities of witnesses, as applicable

Notes. This section shall contain any general information that aids in understanding this document (e.g., background information, glossary, rationale). This section shall include an alphabetical listing of all acronyms, abbreviations, and their meanings as used in this document and a list of any terms and definitions needed to understand this document.


TESTING METRICS

What is a Test Metric?

Test metrics provide a process for analyzing the current level of maturity of the testing and predicting future trends; ultimately they are meant to enhance the testing process, so that activities which were missed in the current testing can be added in the next build.

Metrics are numerical data which help us to measure test effectiveness.

Metrics are produced in two forms

1. Base Metrics and

2. Derived Metrics.

Examples of Base Metrics:

# Test Cases

# New Test Cases

# Test Cases Executed

# Test Cases Unexecuted

# Test Cases Re-executed

# Passes

# Fails
# Test Cases Under Investigation

# Test Cases Blocked

# 1st Run Fails

# Test Case Execution Time

# Testers

Examples of Derived Metrics:

% Test Cases Complete

% Test Cases Passed

% Test Cases Failed

% Test Cases Blocked

% Test Defects Corrected
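For illustration, the derived metrics can be computed as ratios of the base metrics. The counts below are invented, and the choice of denominator (executed vs. total test cases) is a team convention, not a rule from this material:

```python
# Illustrative base metrics (counts are hypothetical)
base = {
    "total": 200,     # Test Cases
    "executed": 150,  # Test Cases Executed
    "passed": 120,    # Passes
    "failed": 25,     # Fails
    "blocked": 5,     # Test Cases Blocked
}

# Derived metrics are ratios of the base metrics, reported as percentages
derived = {
    "% Test Cases Complete": 100 * base["executed"] / base["total"],
    "% Test Cases Passed":   100 * base["passed"] / base["executed"],
    "% Test Cases Failed":   100 * base["failed"] / base["executed"],
    "% Test Cases Blocked":  100 * base["blocked"] / base["total"],
}

for name, value in derived.items():
    print(f"{name}: {value:.1f}%")
```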


Objective of Test Metrics

The objective of test metrics is to capture the planned and actual quantities of the effort, time and resources required to complete all the phases of testing of the software project.

Test metrics usually cover three things:

1. Test coverage

2. Time for one test cycle.

3. Convergence of testing

Why Testing metrics?

As we all know, a major percentage of software projects suffer from quality problems. Software testing provides visibility into product and process quality. Test metrics are key facts with which project managers can understand their current position and prioritize their activities to reduce the risk of schedule over-runs on software releases.

Test metrics are a very powerful management tool. They help you to measure your current performance. Because today's data becomes tomorrow's historical data, it is never too late to start recording key information on your project. This data can be used to improve future work estimates and quality levels. Without historical data, estimates will be guesses.

You cannot track the project status meaningfully unless you know the actual effort and time

spent on each task as compared to your estimates. You cannot sensibly decide whether your product

is stable enough to ship unless you track the rates at which your team is finding and fixing defects.

You cannot quantify the performance of your new development processes without some statistics on

your current performance and a baseline to compare it with. Metrics help you to better control your 

software projects. They enable you to learn more about the functioning of your organization by

establishing a Process Capability baseline that can be used to better estimate and predict the quality

of your projects in the future.

The benefits of having testing metrics

1. Test metrics data collection helps predict the long-term direction and scope for an organization and

enables a more holistic view of business and identifies high-level goals


2. Provides a basis for estimation and facilitates planning for closure of the performance gap

3. Provides a means for control / status reporting

4. Identifies risk areas that require more testing

5. Provides meters to flag actions for faster, more informed decision making

6. Quickly identifies and helps resolve potential problems and identifies areas of improvement

7. Test metrics provide an objective measure of the effectiveness and efficiency of testing

Key factors to bear in mind while setting up test metrics

1. Collect only the data that you will actually use/need to make informed decisions to alter your strategies; if you are not going to change your strategy regardless of the findings, your time is better spent in testing.

2. Do not base decisions solely on data that is variable or can be manipulated. For example,

measuring testers on the number of tests they write per day can reward them for speeding through

superficial tests or punish them for tracking trickier functionality.

3. Use statistical analysis to get a better understanding of the data. Difficult metrics data should be analyzed carefully. The templates used for presenting data should be self-explanatory.

4. One of the key inputs to the metrics program is the defect tracking system, in which the reported process and product defects are logged and tracked to closure. It is therefore very important to carefully decide on the fields that need to be captured per defect in the defect tracking system, and then generate customizable reports.

5. Metrics should be decided on the basis of their importance to the stakeholders rather than ease of data collection. Metrics that are of no interest to the stakeholders should be avoided.

6. Inaccurate data should be avoided and complex data should be collected carefully. Proper benchmarks should be defined for the entire program.

Deciding on the Metrics to Collect

There are literally thousands of possible software metrics to collect and possible things to


measure about software development. There are many books and training programs available about

software metrics. Which of the many metrics are appropriate for your situation? One method is to start

with one of the many available published suites of metrics and a vision of your own management

problems and goals, and then customize the metrics list based on the following metrics collection

checklist. For each metric, you must consider:

1) What are you trying to manage with this metric? Each metric must relate to a specific

management area of interest in a direct way. The more convoluted the relationship between

the measurement and the management goal, the less likely you are to be collecting the right

thing.

2) What does this metric measure? Exactly what does this metric count? High-level

attempts to answer this question (such as "it measures how much we've accomplished") may

be misleading. The detailed answers (such as "it reports how much we had budgeted for 

design tasks that first-level supervisors are reporting as greater than 80 percent complete") are

much more informative, and can provide greater insight regarding the accuracy and

usefulness of any specific metric.

3) If your organization optimized this metric alone, what other important aspects of your 

software design, development, testing, deployment, and maintenance would be affected?

Asking this question will provide a list of areas where you must check to be sure that you

have a balancing metric. Otherwise, your metrics program may have unintended effects and

drive your organization to undesirable behavior.

4) How hard/expensive is it to collect this information? This is where you actually get to

identify whether collection of this metric is worth the effort. If it is very expensive or hard to

collect, look for automation that can make the collection easier, or consider alternative

metrics that can be substituted.

5) Does the collection of this metric interact with (or interfere with) other business

processes? For example, does the metric attempt to gather financial information on a

different periodic basis or with different granularity than your financial system collects and

reports it? If so, how will the two quantitative systems be synchronized? Who will reconcile

differences? Can the two collection efforts be combined into one and provide sufficient

software metrics information?

6) How accurate will the information be after you collect it? Complex or manpower-

intensive metrics collection efforts are often short circuited under time and schedule pressure


by the people responsible for the collection. Metrics involving opinions (e.g., what percentage

complete do you think you are?) are notoriously inaccurate. Exercise caution, and carefully

evaluate the validity of metrics with these characteristics.

7) Can this management interest area be measured by other metrics? What alternatives

to this metric exist? Always look for an easier-to-collect, more accurate, more timely metric

that will measure relevant aspects of the management issue of concern.

Use of this checklist will help ensure the collection of an efficient suite of software development

metrics that directly relates to management goals. Periodic review of existing metrics against this

checklist is recommended.

Projects that are underestimated, over-budget, or that produce unstable products have the

potential to devastate the company. Accurate estimates, competitive productivity, and renewed

confidence in product quality are critical to the success of the company.

Hoping to solve these problems as quickly as possible, the company management embarks

on the 8-Step Metrics Program

Step 1: Document the Software Development Process

Integrated Software does not have a defined development process. However, the new

metrics coordinator does a quick review of project status reports and finds that the activities of 

requirements analysis, design, code, review, recode, test, and debugging describe how the teams

spend their time. The inputs, work performed, outputs and verification criteria for each activity have

not been recorded. He decides to skip these details for this "test" exercise. The recode activity

includes only effort spent addressing software action items (defects) identified in reviews.

Step 2: State the Goals

The metrics coordinator sets out to define the goals of the metrics program. The list of goals in Step 2 of the 8-Step Metrics Program is broader than (yet still related to) the immediate concerns of Integrated Software. Discussion with the development staff leads to some good ideas on how to tailor these goals into specific goals for the company.

1. Estimates

The development staff at Integrated Software considers past estimates to have been

unrealistic, as they were established using “finger in the wind” techniques. They suggest that the current

plan could benefit from past experience as the present project is very similar to past projects.

Goal: Use previous project experience to improve estimations of Productivity.

2. Productivity

Discussions about the significant effort spent in debugging center on a comment by one of 

the developers that defects found early on in reviews have been faster to repair than defects


discovered by the test group. It seems that both reviews and testing are needed, but the amount of 

effort to put into each is not clear.

Goal: Optimize defect detection and removal.

3. Quality

The test group at the company argues for exhaustive testing. This, however, is prohibitively

expensive. Alternatively, they suggest looking at the trends of defects discovered and repaired over 

time to better understand the probable number of defects remaining.

Goal: Ensure that the defect detection rate during testing is converging towards a level that indicates

that fewer than five defects per KSLOC will be discovered in the next year.

Step 3: Define Metrics Required to Reach Goals and Identify Data to Collect

Working from the Step 3 tables, the metrics coordinator chooses the following metrics for the

metrics program.

Goal 1: Improve Estimates

• Actual effort for each type of software in PH
• Size of each type of software in SLOC

• Software product complexity (type)

• Labor rate (PH/SLOC) for each type

Goal 2: Improve Productivity

•     Total number of person hours per activity

•     Number of defects discovered in reviews

•     Number of defects discovered in testing

•     Effort spent repairing defects discovered in reviews

•     Effort spent repairing defects discovered in testing

•     Number of defects removed per effort spent in reviews and recode

•     Number of defects removed per effort spent in testing and debug

Goal 3: Improve Quality

• Total number of defects discovered
• Total number of defects repaired
• Number of defects discovered / schedule date
• Number of defects repaired / schedule date

Types of test metrics

1. Product test metrics

i. Number of remarks
ii. Number of defects
iii. Remark status
iv. Defect severity


v. Defect severity index
vi. Time to find a defect
vii. Time to solve a defect
viii. Test coverage
ix. Test case effectiveness
x. Defects/KLOC

2. Project test metrics

i. Workload capacity ratio
ii. Test planning performance
iii. Test effort ratio
iv. Defect category

3. Process test metrics

i. Should be found in which phase
ii. Residual defect density
iii. Defect remark ratio
iv. Valid remark ratio
v. Bad fix ratio
vi. Defect removal efficiency
vii. Phase yield
viii. Backlog development
ix. Backlog testing
x. Scope changes

Product test metrics


I. Number of remarks

Definition

The total number of remarks found in a given time period/phase/test type. A remark is a claim made by a test engineer that the application shows an undesired behavior. It may or may not result in software modification or changes to documentation.

Purpose

One of the earliest indicators to measure once the testing commences; provides initial

indications about the stability of the software

Data to collect

Total number of remarks found.

II. Number of defects

Definition

The total number of remarks found in a given time period/phase/test type that resulted in software or documentation modifications.

Purpose

Indicates the quality of the software under test: unlike the raw number of remarks, only confirmed issues that led to software or documentation modifications are counted.

Data to collect

Only remarks that resulted in modifying the software or the documentation are counted.

III. Remark status

Definition

The status of the defect could vary depending upon the defect-tracking tool that is used. Broadly, the following statuses are available:

To be solved: logged by the test engineer and waiting to be taken over by the software engineer.
To be retested: solved by the developer and waiting to be retested by the test engineer.
Closed: the issue was retested by the test engineer and was approved.

Purpose

Track the progress with respect to entering, solving and retesting the remarks. During this

phase, the information is useful to know the number of remarks logged, solved, waiting to be resolved

and retested.

Data to collect

This information can normally be obtained directly from the defect tracking system based on

the remark status.

IV. Defect severity

Definition


The severity level of a defect indicates the potential business impact for the end user 

(business impact = effect on the end user x frequency of occurrence). 

Purpose

Provides indications about the quality of the product under test. A high-severity defect means

low product quality, and vice versa. At the end of this phase, this information is useful to make the

release decision based on the number of defects and their severity levels.

Data to collect

Every defect has severity levels attached to it. Broadly, these are Critical, Serious, Medium

and Low.

V. Defect severity index

Definition

An index representing the average of the severity of the defects.

Purpose

Provides a direct measurement of the quality of the product, specifically reliability, fault tolerance and stability.

Data to collect

Two measures are required to compute the defect severity index. A number is assigned to each severity level: 4 (Critical), 3 (Serious), 2 (Medium), 1 (Low). Multiply the number of defects at each severity level by that level's number and add the totals; divide this by the total number of defects to determine the defect severity index.
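Following the weighting just described (4 = Critical, 3 = Serious, 2 = Medium, 1 = Low), the index can be sketched as follows; the defect counts are invented:

```python
# Hypothetical defect counts per severity level
counts = {"Critical": 2, "Serious": 5, "Medium": 10, "Low": 3}
weights = {"Critical": 4, "Serious": 3, "Medium": 2, "Low": 1}

total_defects = sum(counts.values())                           # 20 defects
weighted_sum = sum(weights[s] * n for s, n in counts.items())  # 2*4 + 5*3 + 10*2 + 3*1 = 46
severity_index = weighted_sum / total_defects                  # average severity weight

print(f"Defect severity index: {severity_index:.2f}")
```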

 

VI. Time to find a defect

Definition

The effort required to find a defect.

Purpose

Shows how fast the defects are being found. This metric indicates the correlation between the

test effort and the number of defects found.

Data to collect

Divide the cumulative hours spent on test execution and logging defects by the number of 

defects entered during the same period.


VII. Time to solve a defect

Definition

The effort required to resolve a defect (diagnosis and correction).

Purpose

Provides an indication of the maintainability of the product and can be used to estimate

projected maintenance costs.

Data to collect

Divide the number of hours spent on diagnosis and correction by the number of defects

resolved during the same period.

VIII. Test coverage

Definition

  Defined as the extent to which testing covers the product’s complete functionality.

Purpose

This metric is an indication of the completeness of the testing. It does not indicate anything

about the effectiveness of the testing. This can be used as a criterion to stop testing.

Data to collect

Coverage could be with respect to requirements, functional topic list, business flows, use

cases, etc. It can be calculated based on the number of items that were covered vs. the total number 

of items.


IX. Test case effectiveness

Definition

  The extent to which test cases are able to find defects

Purpose

This metric provides an indication of the effectiveness of the test cases and the stability of 

the software.

Data to collect

Ratio of the number of test cases that resulted in logging remarks vs. the total number of 

test cases.

X. Defects/KLOC

Definition

  The number of defects per 1,000 lines of code.

Purpose

  This metric indicates the quality of the product under test. It can be used as a basis for 

estimating defects to be addressed in the next phase or the next version.

Data to collect

Ratio of the number of defects found vs. the total number of lines of code (thousands)

Formula used

Defects/KLOC = (total number of defects found / total lines of code) × 1,000

Uses of defect/KLOC

Defect density is used to compare the relative number of defects in various software

components. This helps identify candidates for additional inspection or testing, or for


possible re-engineering or replacement. Identifying defect-prone components allows the concentration of

limited resources into areas with the highest potential return on investment.

Another use of defect density is to compare subsequent releases of a product to track the

impact of defect reduction and quality improvement activities. Normalizing by size allows releases of

various sizes to be compared. Differences between products or product lines can also be compared

in this manner.
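The density computation can be sketched as follows (the defect and line counts are hypothetical):

```python
def defects_per_kloc(defects_found, lines_of_code):
    """Defect density: number of defects per 1,000 lines of code."""
    if lines_of_code == 0:
        raise ValueError("lines_of_code must be positive")
    return defects_found / (lines_of_code / 1000.0)

# Hypothetical example: 45 defects found in a 30,000-line component.
print(defects_per_kloc(45, 30_000))  # 1.5
```

Comparing this figure across components highlights the defect-prone ones discussed above.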

Project test metrics:

I. Workload capacity

Definition

Ratio of the planned workload to the gross capacity for the total test project or phase.

Purpose

This metric helps in detecting issues related to estimation and planning. It serves as an input

for estimating similar projects as well.

Data to collect

Computation of this metric often happens at the beginning of the phase or project. Workload

is determined by multiplying the number of tasks by their norm times. Gross capacity is the

planned working time; the metric is determined by dividing workload by gross capacity.
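A minimal sketch of this computation (the task norm times and capacity figure are hypothetical):

```python
def workload_capacity_ratio(task_norm_hours, gross_capacity_hours):
    """Planned workload (sum of task norm times) divided by planned working time."""
    if gross_capacity_hours <= 0:
        raise ValueError("gross capacity must be positive")
    return sum(task_norm_hours) / gross_capacity_hours

# Hypothetical norm times (hours) for four planned test tasks, against 400 planned hours.
ratio = workload_capacity_ratio([120, 80, 60, 100], 400)
print(ratio)  # 0.9
```

A ratio well above 1 suggests the plan is overcommitted; well below 1 suggests spare capacity.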

II. Test planning performance

Definition

The planned value related to the actual value.

Purpose

Shows how well estimation was done.

Data to collect

  The ratio of the actual effort spent to the planned effort

III. Test effort percentage

Definition

  Test effort is the amount of work spent, in hours or days or weeks. Overall project effort is

divided among multiple phases of the project: requirements, design, coding, testing and such. This

metric can be computed by dividing the overall test effort by the total project effort.

Purpose


The effort spent in testing, in relation to the effort spent in the development activities, will give

us an indication of the level of investment in testing. This information can also be used to estimate

similar projects in the future.

Data to collect

This metric can be computed by dividing the overall test effort by the total project effort.

IV. Defect category

Definition

An attribute of the defect in relation to the quality attributes of the product. Quality attributes of 

a product include functionality, usability, documentation, performance, installation and

internationalization.

Purpose

  This metric can provide insight into the different quality attributes of the product.

Data to collect

  This metric can be computed by dividing the defects that belong to a particular category by

the total number of defects.
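An illustrative sketch of this computation over a list of category tags (the tags themselves are hypothetical samples from a defect tracker):

```python
from collections import Counter

def category_distribution(defect_categories):
    """Percentage of defects falling into each quality-attribute category."""
    total = len(defect_categories)
    if total == 0:
        return {}
    counts = Counter(defect_categories)
    return {category: 100.0 * n / total for category, n in counts.items()}

# Hypothetical category tags, one per logged defect.
dist = category_distribution(["functionality", "usability", "functionality", "performance"])
print(dist["functionality"])  # 50.0
```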

Process test metrics

I. Should be found in which phase

Definition

An attribute of the defect, indicating in which phase the remark should have been found.

Purpose

Are we able to find the right defects in the right phase as described in the test strategy?

Indicates the percentage of defects that are getting migrated into subsequent test phases.

Data to collect

Computation of this metric is done by calculating the number of defects that should have

been found in previous test phases.

II. Residual defect density:

Definition

An estimate of the number of defects that may remain unresolved in the product.

Purpose

The goal is to achieve a defect level that is acceptable to the clients. We remove defects in

each of the test phases so that few will remain.

Data to collect


This is a tricky issue. Released products have a basis for estimation. For new versions,

industry standards, coupled with project specifics, form the basis for estimation.

III. Defect remark ratio

Definition

Ratio of the number of remarks that resulted in software modification vs. the total number of 

remarks.

Purpose

Provides an indication of the level of understanding between the test engineers and the

software engineers about the product, as well as an indirect indication of test effectiveness.

Data to collect

The number of remarks that resulted in software modification vs. the total number of logged

remarks. Valid for each test type, during and at the end of test phases.

IV. Valid remark ratio

Definition

Percentage of valid remarks during a certain period.

Purpose

Indicates the efficiency of the test process.

Data to collect

Ratio of the total number of remarks that are valid to the total number of remarks found.

Formula used

Valid remarks = number of defects + duplicate remarks + number of remarks that will be

resolved in the next phase or release.

V. Phase yield

Definition

Defined as the number of defects found during the phase of the development life cycle vs. the

estimated number of defects at the start of the phase.

Purpose

Shows the effectiveness of the defect removal. Provides a direct measurement of product

quality; can be used to determine the estimated number of defects for the next phase.

Data to collect

Ratio of the number of defects found by the total number of estimated defects. This can be

used during a phase and also at the end of the phase.


VI. Backlog development

Definition

The number of remarks that are yet to be resolved by the development team.

Purpose

Indicates how well the software engineers are coping with the testing efforts.

Data to collect

The number of remarks that remain to be resolved.

VII. Backlog testing

Definition

The number of resolved remarks that are yet to be retested by the test team.

Purpose

Indicates how well the test engineers are coping with the development efforts.

Data to collect

The number of resolved remarks that have not yet been retested.

VIII. Scope changes

Definition

The number of changes that were made to the test scope.

Purpose

Indicates requirements stability or volatility, as well as process stability.

Data to collect

Ratio of the number of changed items in the test scope to the total number of items.

IX. Defect removal efficiency

Definition

The number of defects that are removed per time unit (hours/days/weeks)

Purpose

Indicates the efficiency of defect removal methods, as well as indirect measurement of the

quality of the product.

Data to collect

Computed by dividing the total effort spent on defect detection, defect resolution and

retesting by the number of remarks. This is calculated per test type, during and across test

phases.


Defect removal profiles:

Rating: Very Low

Automated analysis and tools: Simple compiler syntax checking.

Peer reviews: No peer review.

Execution testing and tools: No testing.

Rating: Nominal

Automated analysis and tools: Some compiler extensions for static module and inter-module level code analysis, syntax and type-checking. Basic requirements and design consistency, traceability checking.

Peer reviews: Well-defined sequence of preparation, review, minimal follow-up. Informal review roles and procedures.

Execution testing and tools: Basic unit test, integration test, system test process. Basic test data management, problem tracking support. Test criteria based on checklists.

Rating: Extra High

Automated analysis and tools: Formalized* specification and verification. Advanced distributed processing and temporal analysis, model checking, symbolic execution. (*Consistency-checkable pre-conditions and post-conditions, but not mathematical theorems.)

Peer reviews: Formal review roles and procedures for fixes, change control. Extensive review checklists, root cause analysis. Continuous review process improvement. User/customer involvement, Statistical Process Control.

Execution testing and tools: Highly advanced tools for test oracles, distributed monitoring and analysis, assertion checking. Integration of automated analysis and test tools. Model-based test process management.


 


X. Bad fix ratio

Definition

Percentage of the number of resolved remarks that resulted in creating new defects while

resolving existing ones.

Purpose

Indicates the effectiveness of the defect-resolution process, plus indirect indications as to the

maintainability of the software.

Data to collect

Ratio of the total number of bad fixes to the total number of resolved defects. This can be

calculated per test type, test phase or time period.


supports user-defined macros, so they develop a macro in the editor to count code consistently. The

algorithm implemented ignores blank and commented lines and includes data and executable

statements.

Step 6: Create a metrics database

Integrated Software's metrics database needs to retain the information entered directly from

the forms it has used. It must also permit the analysis of this data and calculation of the metrics

specified. It needs to be able to display metrics graphically for presentations and reports. Since

Integrated Software's defect tracking system keeps history data on defects, this data will be extracted

directly into a spreadsheet, where it can be used to compute and present the defect trend metrics.

Step 7: Define the feedback mechanism

Given the small size of the company, the coordinator decides the results from the metric analysis

should be presented in a meeting, saving the effort of writing a detailed report. The graphs and

metrics calculated will be prepared on overhead transparencies for presentation, and handouts of the

slides will be provided. The data collected and analyzed in the metrics program will be the company's

first baseline, which can be enhanced later as more projects are entered into the metrics program.

The Cost of Software Quality


Testing can be considered an investment. A software organization, whether an in-house IT shop, a market-driven shrink-wrap software vendor, or an Internet ASP, chooses to forego spending money on new projects or additional features to fund the test team. What's the return on that investment (ROI)? Cost of quality analysis provides one way to quantify ROI.

Introduction

The process of negotiating a software testing budget can be painful. Some project managers view testing as a necessary evil that occurs at the end of the project. In these people's minds, testing costs too much, takes too long, doesn't help them build the product, and can create hostility between the test team and the rest of the development organization. No wonder people who view testing this way spend as little as possible on it. Other project managers, though, are inclined to spend more on testing. Why?

Smart software managers understand that testing is an investment in quality. Out of the overall project budget, the project managers set aside some money for assessing the system and resolving the bugs that the testers find. Smart test managers have learned how to manage that investment wisely. In such circumstances, the test investment produces a positive return, fits within the overall project schedule, has quantifiable findings, and is seen as a definite contribution to the project.

What Does Quality Cost?

The cost of quality can be broken down as follows:

Cost of quality = cost of conformance + cost of nonconformance

Conformance costs include prevention costs and appraisal costs. Prevention costs include money spent on quality assurance: tasks like training, requirements and code reviews, and other activities that promote good software. Appraisal costs include money spent on planning test activities, developing test cases and data, and executing those test cases once. Nonconformance costs come in two flavors: internal failures and external failures. The costs of internal failure include all expenses that arise when test cases fail the first time they're run, as they often do. A programmer incurs a cost of internal failure while debugging problems found during her own unit and component testing.

Once we get into formal testing by an independent test team, the costs of internal failure increase. Think through the process: the tester researches and reports the failure, the programmer finds and fixes the fault, the release engineer produces a new release, the system administration team installs that release in the test environment, and the tester retests the new release to confirm the fix and to check for regression.

The costs of external failure are those incurred when, rather than a tester finding a bug, the customer does. These costs will be even higher than those associated with either kind of internal failure, programmer-found or tester-found. In these cases, not only does the same process described for tester-found bugs occur, but you also incur the technical support overhead and the more expensive process of releasing a fix to the field rather than to the test lab. In addition, consider the intangible costs: angry customers, damage to the company image, lost business, and maybe even lawsuits.

Two observations lay the foundation for the enlightened view of testing as an investment. First, like any cost equation in business, we will want to minimize the cost of quality. Second, while it is often cheaper to prevent problems than to repair them, if we must repair problems, internal failures cost less than external failures.
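The cost-of-quality equation can be sketched numerically; all of the figures below are hypothetical, chosen only to show how the categories combine:

```python
# Hypothetical cost figures for one release, all in the same currency unit.
costs = {
    "prevention": 20_000,        # training, reviews, other QA activities
    "appraisal": 35_000,         # test planning, test case development, first execution
    "internal_failure": 25_000,  # debugging, re-releasing and retesting before release
    "external_failure": 60_000,  # support calls, field fixes, lost business
}

# Cost of quality = cost of conformance + cost of nonconformance.
conformance = costs["prevention"] + costs["appraisal"]
nonconformance = costs["internal_failure"] + costs["external_failure"]
cost_of_quality = conformance + nonconformance
print(cost_of_quality)  # 140000
```

In this made-up example, nonconformance costs dominate, which is the usual argument for shifting spend toward prevention and appraisal.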

Examples of Quality Costs Associated with Software Products.

Prevention

• Staff training

• Requirements analysis

• Early prototyping

• Fault-tolerant design

• Defensive programming

• Usability analysis

• Clear specification

• Accurate internal documentation

• Evaluation of the reliability of development tools (before buying them) or of other potential components of the product

Appraisal

• Design review

• Code inspection

• Glass box testing

• Black box testing

• Training testers

• Beta testing

• Test automation

• Usability testing

• Pre-release out-of-box testing by customer service staff

Internal Failure

• Bug fixes

• Regression testing

• Wasted in-house user time

• Wasted tester time

• Wasted writer time

• Wasted marketer time

• Wasted advertisements

• Direct cost of late shipment

• Opportunity cost of late shipment

External Failure

• Technical support calls

• Preparation of support answer books

• Investigation of customer complaints

• Refunds and recalls

• Coding / testing of interim bug fix releases

• Shipping of updated product

• Added expense of supporting multiple versions of the product in the field

• PR work to soften drafts of harsh reviews

• Lost sales

• Lost customer goodwill

• Discounts to resellers to encourage them to keep selling the product

• Warranty costs

• Liability costs

• Government investigations

• Penalties

• All other costs imposed by law