SOFTWARE TESTING Summer Internship Report Training done at Steria (India) Ltd, NOIDA Swapnil Khilariwal Shiv Nadar University

DECLARATION

This is to declare that the original and genuine investigation work on the subject matter, together with the related data collection, has been carried out solely, sincerely and satisfactorily by Swapnil Khilariwal, a student of Shiv Nadar University, Greater Noida, Gautam Buddha Nagar, UP, for the project work titled "Software Testing" submitted to Steria (India) Limited, Noida.

ACKNOWLEDGEMENT

It is my utmost pleasure to express my sincere thanks to my project guide, Mr. Prashant Sarswat, Steria (India) Limited, for providing a helping hand in this project. His valuable guidance, support and supervision throughout this project, titled "Software Testing", are responsible for its present form.

I will remain grateful to Steria (India) Ltd for the guidance and for the facilities provided during the course of the project.

I acknowledge my indebtedness to all those scholars whose work I have drawn upon. They have not only enriched my understanding of the subject but also served as a source of inspiration.

I gratefully acknowledge the support, encouragement and patience of my parents.

INDEX

S.No. CONTENT

1. IMPORTANCE OF TESTING

2. SOFTWARE TESTING LIFE CYCLE

2.1 Test Planning

2.2 Test Analysis

2.3 Test Design

2.4 Construction and verification

2.5 Testing Cycles

2.6 Final Testing and Implementation

2.7 Post Implementation

2.8 Software Testing Life Cycle

2.9 Models for Testing

2.9.1 The V-Model

2.9.2 Verification and validation

2.9.3 High Level Test Planning

3. TYPES OF TESTING

3.1 The box approach

3.1.1 White-Box testing

3.1.1.1 Static Testing

3.1.2 Black-box testing

3.1.3 Grey-box testing

3.2 Visual testing

4 LEVELS OF TESTING

4.1 Test target

4.1.1 Unit testing

4.1.2 Integration testing

4.1.3 System testing

4.1.4 System integration testing

4.2 Objectives of testing

4.2.1 Installation testing

4.2.2 Sanity testing

4.2.3 Regression testing

4.2.4 Acceptance testing

4.2.5 Alpha testing

4.2.6 Beta testing

5 Conclusion

Importance of Testing

Software Testing is the process used to help identify the correctness, completeness, security, and quality of developed computer software. Testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate.

Human beings commit errors in any process. Some of these errors have little impact on our day-to-day life and can be ignored; however, some errors are so severe that they can break the whole system or software. In such situations, care must be taken that these errors are caught well before the system or software is deployed in a production environment.

Let's take some real-life examples and see how errors can cause critical failures or disasters if not caught early.

Examples:

Consider the net banking website of some bank, ABC. A customer logs into the website and transfers an amount to another account. He receives confirmation that the amount was transferred successfully, and it is deducted from his account. Some time later, when he checks with the person to whom he transferred the amount, he learns that it was never received. He now has to visit his bank to settle the dispute, and he is left annoyed by that bank's net banking experience.

This is one situation where testing your software before deploying it is very important. Testing cannot be ignored, because faults affect all the end users of the software.

Had the website been tested thoroughly for all possible user operations, this problem would have been found well in advance and fixed before the website was deployed.
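As a sketch of the kind of check that would have caught this fault, here is a minimal unit test for a hypothetical transfer operation. The `Account` class, the `transfer` function and all values are invented for illustration; a real banking backend would look very different, but the point - verify the credited side as well as the debited side - carries over.

```python
class Account:
    """Toy in-memory account, a stand-in for the real banking backend (illustrative only)."""
    def __init__(self, balance):
        self.balance = balance

def transfer(source, target, amount):
    """Debit source and credit target; both sides must change together."""
    if amount <= 0 or source.balance < amount:
        raise ValueError("invalid transfer")
    source.balance -= amount
    target.balance += amount  # a faulty build might silently skip this credit

def test_amount_reaches_recipient():
    a, b = Account(1000), Account(0)
    transfer(a, b, 250)
    # The scenario from the example: checking the debit alone is not
    # enough -- the credited side must be verified as well.
    assert a.balance == 750
    assert b.balance == 250

test_amount_reaches_recipient()
```

A test suite that only asserted on the sender's balance would have passed against the buggy build described above.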

Let’s take another example.

A great deal of software is nowadays used in air traffic control systems. Suppose we do not test air traffic control software and deploy it directly in a production environment. There is a great deal of risk involved: the software will most probably fail at some point, because it has not been tested for all the scenarios in which it must work. If air traffic control software crashes, it may lead to a very severe disaster.

Now you might have got an idea why testing is such an important process in any kind of product development.

Furthermore, many of the computer systems being developed today are so large and complex that it is not generally possible for any one person to understand every aspect of a system. Many people have specialised in particular areas or aspects: for example, there are database experts, software design experts, algorithm experts, and network and operating system experts. There are also experts in the specific areas of the businesses that the systems support. Good documentation (specifications and the like) is necessary to communicate all the relevant information each person needs to complete their part of a system. However, people are fallible and so make errors, causing faults in the specifications. When dealing with one part of a system, a systems designer or programmer may think they understand some aspect outside their particular area of expertise, but they may not. Assumptions are often made because it is easier and quicker to assume one thing than to find out what the real answer should be.

What do software faults cost?

The cost of software faults that are not detected before a system is put into live operation varies, depending on a number of factors. Some of the more spectacular software failures are well known. For example, the European Space Agency's Ariane 5 rocket exploded 37 seconds into its launch because of a software fault, destroying the product of a development programme that had cost some $7 billion. Although the software had been tested, a series of errors meant that out-of-date test data was used. Another example is the Mariner space probe, which was meant to go to Venus but lost its way as a result of a software fault: the Fortran program had a full stop instead of a comma in a DO (looping) statement, which the compiler accepted as an assignment statement.

A couple of more recent examples involve Mars probes. In the first, one organisation's software calculated the distance to landing in inches while another's calculated it in centimetres. This resulted in the loss of the probe, at a cost of $125 million. In the second, a probe was lost shortly before it should have landed, at a cost of $250 million. It turned out that spurious signals from hardware sensors were interpreted by the software as a landing, so antennas were deployed, protective covers removed, and so on. This fault had been identified in earlier testing, and changes were made to the hardware and the software. However, there was no final regression test, due to time and budget pressure. (The lack of a final test was also responsible for the Hubble telescope being myopic for its first years of operation.)

One final example is a case where the expected results were not calculated beforehand; it cost American Airlines $50 million. By way of background, airlines never want to fly with empty seats - they would rather offer discount seats than fly empty. However, they also do not want to discount a seat when they could receive the full price. Airlines run complex 'yield management' programs to get this balance right, and these programs change frequently to increase profitability. One such change was catastrophic for American Airlines: the program gave reasonable - but wrong - results for the number of discount seats. A spokesman said, 'We're convinced that if we would have done more thorough testing, we would have discovered the problem before the software was ever brought on-line.'

Of course, not every software fault causes such huge failure costs (fortunately!). Some failures are simple spelling mistakes or a misalignment in a report. There is a vast array of different costs, but there is a catch: software is not linear, and small faults can have large effects. For example, a spelling mistake in a screen title reading "ABC SOFTWARE" was a simple mistake, but it cost the company a sale. In another case a programmer spelt an insurance company's name wrong - "Onion" instead of "Union" - only one letter, but a major embarrassment.

The consequences of faults in safety-critical systems can be much more serious. The following examples are all real cases.

Therac-25 (Canada) - this machine was used in the treatment of cancer patients. It had the capability of delivering both x-rays and gamma rays. The normal dose of x-rays is relatively low, while the gamma-ray dosage is much higher. Switching from one type of ray to the other was a slow process, but the operators discovered that if they typed fast it could be made to switch more quickly. However, it only appeared to have switched. Six people died as a result of receiving too high an x-ray dose. This was a design fault.

A train driver in Washington DC was killed when an empty train failed to stop in a stabling siding. The train was fully automated; the driver just opens and closes the doors. The driver had been requesting manual operation, as the trains had been overshooting the stops - this was denied by the humans in "central control". - RISKS Forum, 26 Jan 1998.

Korean Airlines Flight 801, a Boeing 747, 29 survivors (out of 254 passengers and crew). It came in 1,000 ft too low - a combination of "normal" factors: severe weather; the ground proximity system not switched on (or automatically disabled when the landing gear was lowered); and a secondary radar system that should have sounded an alarm but either didn't, or wasn't noticed. - Computer Weekly, 14 Aug 1997.

An Airbus crashed into trees at an air show at Habsheim, France. The software protected the engines by preventing them from being accelerated as fast as the pilot required to avoid hitting trees after a low fly-past.

The following example is not of a safety-critical system but of a banking system. However, it did contribute to a death.

An elderly man bought presents for his grandchildren and, as a result, became overdrawn for the first time in his life. The bank's system automatically sent threatening letters and charged him for being overdrawn. The man committed suicide. Source: a client from a major UK bank.

Why is testing necessary?

Software is likely to have faults, so testing is needed to find these faults as early as possible.

1. Software testing is required to point out the defects and errors that were made during the development phases.

2. It is essential since it ensures the customer's confidence in, and satisfaction with, the application.

3. It is very important to ensure the quality of the product. A quality product delivered to the customers helps in gaining their confidence.

4. Testing is necessary in order to deliver a high-quality product or software application to customers - one that requires a lower maintenance cost and hence gives more accurate, consistent and reliable results.

5. Testing is necessary to ensure effective, optimum performance and capacity utilization, software reliability and quality, and system or application assurance.

6. It is important to ensure that the application does not result in failures, because these can be very expensive to fix in the later stages of development, or after release.

7. It is necessary because it is included in the project plan, and in order to stay in business.

Testing is not done to prove that the software has no faults, because it is impossible to prove that software has no faults. Neither is testing done 'because testing is included in the project plan'. Testing should be included in the project plan, but this is not the reason for testing. (Why is testing included in the plan? That is the reason for testing.)

Testing is essential in the development of any software system. Testing is needed in order to assess what the system actually does, and how well it does it, in its final environment. A system without testing is merely a paper exercise - it may work or it may not, but there is no way of knowing without testing it.

SOFTWARE TESTING LIFE CYCLE

The software testing life cycle (STLC) is basically developed to identify which testing activities need to be carried out, and the best time to perform them in order to accomplish those test activities. Even though testing differs between organizations, there is a testing life cycle.

The STLC is a very important concept that we need to know if we want to understand the testing process in detail. Many people think testing is a static task and find it boring, but it is definitely not. The STLC shows the different phases which are essential for quality control of any software project.

Every company follows its own Software Testing Life Cycle. STLC is affected by the Software Development Life Cycle (SDLC) implemented by the company as well as the management’s views towards Quality Assurance & Control activities.

Software Testing Life Cycle consists of six (generic) phases:

1. Requirements Analysis & Test Planning

2. Test Analysis

3. Test Design

4. Construction and verification

5. Testing Cycles

6. Final Testing and Implementation, and Post Implementation

Software testing has its own life cycle that intersects with every stage of the SDLC. The basic requirement of the software testing life cycle is to control software testing - manual, automated and performance.

Difference between SDLC and STLC

As we can see, SDLC and STLC have some common aspects, but they differ from each other. SDLC vs. STLC is a complex debate, so here are some examples to make the point clear:

1. STLC is a part of SDLC. It is like a SET and a SUBSET. We cannot have STLC running individually. It needs to wait for its roll call before implementing its phases.

2. STLC is limited to the testing of software modules. SDLC is a much vaster model, with more inputs and executions.

3. STLC is the most important part of the SDLC. One cannot release the final product without running it through the STLC process.

4. The STLC team requires skilled developers and testers. The efficiency demanded here is higher than in other parts of the SDLC.

5. STLC is also part of the post-release update cycle. Bug fixes and end-user reports are logged against the application; this log is checked, and fixes are made while building and releasing the new version of the software or program module.

Requirement Analysis/Review

This is a very important phase in STLC. Here the focus is on understanding the requirements of the system with the viewpoint of testing in mind.

In this phase the QA team interacts with the Business Analyst, System Analyst, Development Manager/Team Lead, etc., or, if required, with the client, to completely understand the requirements of the system.

During this phase the QA team takes many important decisions, such as the testing types and techniques to be performed and the feasibility of implementing automation testing.

Test Planning

This is the phase where the Project Manager has to decide what needs to be tested, whether the budget is appropriate, and so on. Naturally, proper planning at this stage greatly reduces the risk of low-quality software. This planning is an ongoing process with no end point. Activities at this stage include preparation of a high-level test plan. According to the IEEE test plan template, the Software Test Plan (STP) is designed to prescribe the scope, approach, resources, and schedule of all testing activities. The plan must identify the items to be tested, the features to be tested, the types of testing to be performed, the personnel responsible for testing, the resources and schedule required to complete testing, and the risks associated with the plan. Almost all of the activities done during this stage are included in, and revolve around, this software test plan.

The major tasks in Test Planning are:

1. Defining the scope of testing

2. Identifying approaches

3. Defining risks

4. Identifying resources

5. Defining the time schedule

In short, the planning of future activities is done in this phase. Example completion criteria are (some are better than others, and using a combination of criteria is usually better than using just one):

100% statement coverage;

100% requirement coverage;

All screens / dialogue boxes / error messages seen;

100% of test cases have been run;

100% of high severity faults fixed;

80% of low & medium severity faults fixed;

Maximum of 50 known faults remain;

Maximum of 10 high severity faults predicted.

Time has run out;

Testing budget is used up.

Test Analysis

Once the test plan is made and decided upon, the next step is to delve a little deeper into the project and decide what types of testing should be carried out at the different stages of the SDLC; whether we need or plan to automate and, if so, when the appropriate time to automate is; and what specific documentation is needed for testing. Proper and regular meetings should be held between the testing teams, project managers, development teams and Business Analysts to check the progress of things. This gives a fair idea of the movement of the project, ensures the completeness of the test plan created in the planning phase, and helps refine the testing strategy created earlier.

We will also start creating test case formats and the test cases themselves. In this stage we need to develop a functional validation matrix based on the business requirements, to ensure that all system requirements are covered by one or more test cases; identify which test cases to automate; and begin review of documentation, i.e. Functional Design, Business Requirements, Product Specifications, Product Externals, etc. We also have to define areas for stress and performance testing. Requirements are also analyzed.
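A functional validation matrix of this kind can be sketched as a simple mapping from requirements to test cases, which makes coverage gaps easy to detect automatically. All requirement and test-case IDs below are hypothetical, for illustration only.

```python
# Requirement and test-case IDs are hypothetical, for illustration only.
matrix = {
    "BR-01 login":              ["TC-001", "TC-002"],
    "BR-02 funds transfer":     ["TC-010"],
    "BR-03 statement download": [],        # gap: no test case covers this yet
}

# Every requirement must be covered by one or more test cases;
# flag the ones that are not.
uncovered = [req for req, cases in matrix.items() if not cases]
print(uncovered)  # ['BR-03 statement download']
```

In practice such a matrix would live in a test management tool rather than a dictionary, but the check is the same: no requirement may map to an empty set of test cases.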

Test Design

The test plans and cases which were developed in the analysis phase are revised, and the functional validation matrix is revised and finalized. In this stage the risk assessment criteria are developed. If automation is planned, the test cases to automate are selected and the writing of scripts for them is begun. Test data is prepared. Standards for unit testing and pass / fail criteria are defined here. The schedule for testing is revised (if necessary) and finalized, and the test environment is prepared.

Construction and verification

In this phase we have to complete all the test plans and test cases, complete the scripting of the automated test cases, and complete the stress and performance testing plans. We have to support the development team in their unit testing phase, and bug reporting is done as and when bugs are found. Integration tests are performed and errors (if any) are reported.

Testing Cycles

In this phase we have to run testing cycles until the test cases execute without errors or a predefined condition is reached: run test cases --> report bugs --> revise test cases (if needed) --> add new test cases (if needed) --> fix bugs --> retest (test cycle 2, test cycle 3, ...). In the bug life cycle the default state is "NEW" and the final state is "CLOSED".
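The bug life cycle can be sketched as a small state machine. Only the NEW (default) and CLOSED (final) states come from the text; the intermediate states and transitions below are typical of defect-tracking tools but are assumptions for illustration.

```python
# Allowed transitions in a simple defect life cycle. NEW and CLOSED follow the
# text; the intermediate states are typical but assumed, not from the report.
TRANSITIONS = {
    "NEW":      {"ASSIGNED"},
    "ASSIGNED": {"FIXED"},
    "FIXED":    {"RETEST"},
    "RETEST":   {"CLOSED", "REOPENED"},  # retest either passes or fails
    "REOPENED": {"ASSIGNED"},
    "CLOSED":   set(),                   # final state
}

def advance(state, new_state):
    """Move a bug to new_state, rejecting illegal jumps (e.g. NEW -> CLOSED)."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = "NEW"                            # default state
for step in ("ASSIGNED", "FIXED", "RETEST", "CLOSED"):
    state = advance(state, step)
print(state)  # CLOSED
```

Encoding the transitions explicitly prevents a bug from being closed without a retest, which mirrors the retesting loop described above.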

Final Testing and Implementation

In this phase we have to execute the remaining stress and performance test cases, complete and update the documentation for testing, and provide and complete the different metrics for testing. Acceptance, load and recovery testing will also be conducted, and the application needs to be verified under production conditions.

Post Implementation

In this phase, the testing process is evaluated and the lessons learnt from it are documented. A line of attack to prevent similar problems in future projects is identified, and plans are created to improve the processes. The recording of new errors and enhancements is an ongoing process. The test environment is cleaned up and the test machines are restored to their baselines in this stage.

Software Testing Life Cycle

The software testing life cycle consists of various testing activities that need to be carried out to validate whether the software meets the required design specification. It also explains which testing activity needs to be carried out and when.

The software testing life cycle consists of a series of stages through which a software product passes, and describes the various activities pertaining to testing that are carried out on the product at each stage.

Phase | Activities | Outcome
Planning | Create high-level test plan | Test plan, refined specification
Analysis | Create detailed test plan, functional validation matrix, test cases | Revised test plan, functional validation matrix, test cases
Design | Revise test cases; select which test cases to automate | Revised test cases, test data sets, risk assessment sheet
Construction | Script the test cases selected for automation | Test procedures/scripts, drivers, test results, bug reports
Testing cycles | Complete testing cycles | Test results, bug reports
Final testing | Execute remaining stress and performance tests; complete documentation | Test results and different metrics on test efforts
Post implementation | Evaluate testing processes | Plan for improvement of the testing process

Models for Testing

THE WATERFALL MODEL

The waterfall model is a sequential design process, often used in software development processes, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of Conception, Initiation, Analysis, Design, Construction, Testing and Maintenance.

The waterfall development model originates in the manufacturing and construction industries; highly structured physical environments in which after-the-fact changes are prohibitively costly, if not impossible. Since no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development.

History of the Waterfall Model

In 1970 Royce proposed what is now popularly referred to as the waterfall model as an initial concept, a model which he argued was flawed (Royce 1970). His paper then explored how the initial model could be developed into an iterative model, with feedback from each phase influencing previous phases, similar to many methods used widely and highly regarded by many today.

Despite Royce's intentions for the waterfall model to be modified into an iterative model, use of the "waterfall model" as a purely sequential process is still popular, and, for some, the phrase "waterfall model" has since come to refer to any approach to software creation which is seen as inflexible and non-iterative.

Most software engineers are familiar with a software life cycle model; the waterfall was the first such model to be generally accepted. Before this, there were informal mental models of the software development process, but they were fairly simple. The process of producing software was referred to as "programming", and it was integrated very closely with testing. The programmers would write some code, try it out, and write some more. After a lot of iterations, the program would emerge. The point is that testing was very much an integral part of the software production process.

The main difference in the waterfall model was that the programming steps were spelled out. Now instead of programming, there are a number of distinct stages such as requirements analysis, structural or architectural design, detailed design, coding, and then finally testing. Although the stratification of software production activities is very helpful, notice what the effect has been on testing. Now it comes last (after the "interesting" part?), and is no longer an integral part of the whole process. This is a significant change, and has actually damaged the practice of testing and hence affected the quality of the software produced in ways that are often not appreciated.

The problems with testing in the classic waterfall model are that testing is very much product-based, and applied late in the development schedule. The levels of detail of test activities are not acknowledged, and testing is now vulnerable to schedule pressure, since it occurs last.

Advantages of waterfall model:

This model is simple and easy to understand and use.

It is easy to manage due to the rigidity of the model – each phase has specific deliverables and a review process.

In this model phases are processed and completed one at a time. Phases do not overlap.

Waterfall model works well for smaller projects where requirements are very well understood.

Disadvantages of waterfall model:

Once an application is in the testing stage, it is very difficult to go back and change something that was not well-thought out in the concept stage.

No working software is produced until late during the life cycle.

High amounts of risk and uncertainty.

Not a good model for complex and object-oriented projects.

Poor model for long and ongoing projects.

Not suitable for the projects where requirements are at a moderate to high risk of changing.

When to use the waterfall model:

This model is used only when the requirements are very well known, clear and fixed.

Product definition is stable.

Technology is understood.

There are no ambiguous requirements

Ample resources with required expertise are available freely

The project is short

THE V MODEL

V- model means Verification and Validation model. Just like the waterfall model, the V-Shaped life cycle is a sequential path of execution of processes. Each phase must be completed before the next phase begins. Testing of the product is planned in parallel with a corresponding phase of development.

The various phases of the V-model are as follows:

Requirements, like the BRS and SRS, begin the life cycle model, just as in the waterfall model. But in this model, before development starts, a system test plan is created. The test plan focuses on meeting the functionality specified during requirements gathering.

The high-level design (HLD) phase focuses on system architecture and design. It provides an overview of the solution, platform, system, product and service/process. An integration test plan is also created in this phase, in order to test the pieces of the software system's ability to work together.

The low-level design (LLD) phase is where the actual software components are designed. It defines the actual logic for each and every component of the system. A class diagram with all the methods and the relations between classes comes under LLD. Component tests are also created in this phase.

The implementation phase is, again, where all coding takes place. Once coding is complete, the path of execution continues up the right side of the V where the test plans developed earlier are now put to use.

Coding: This is at the bottom of the V-Shape model. Module design is converted into code by developers.
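The pairing of development phases with test planning described above can be summed up in a small lookup structure. The phase names follow the text; the representation itself is just an illustrative sketch.

```python
# The left (development) arm of the V paired with the test work planned
# alongside it, as the section describes. Coding sits at the bottom of the V;
# execution then climbs the right-hand side using these plans.
V_MODEL = {
    "Requirements (BRS/SRS)": "System test plan",
    "High-level design":      "Integration test plan",
    "Low-level design":       "Component tests",
}

for dev_phase, test_work in V_MODEL.items():
    print(f"{dev_phase:24} <-> {test_work}")
```

The key property of the V-model is visible here: every development artefact on the left already has its corresponding test artefact planned, long before any test is executed.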

Advantages of V-model:

Simple and easy to use.

Testing activities like planning and test design happen well before coding. This saves a lot of time, and hence gives a higher chance of success than the waterfall model.

Proactive defect tracking – that is defects are found at early stage.

Avoids the downward flow of the defects.

Works well for small projects where requirements are easily understood.

Disadvantages of V-model:

Very rigid and least flexible.

Software is developed during the implementation phase, so no early prototypes of the software are produced.

If any changes happen midway, then the test documents, along with the requirement documents, have to be updated.

When to use the V-model:

The V-shaped model should be used for small to medium sized projects where requirements are clearly defined and fixed.

The V-Shaped model should be chosen when ample technical resources are available with needed technical expertise.

High customer confidence is required for choosing the V-shaped model approach, since no prototypes are produced and there is a very high risk involved in meeting customer expectations.

Verification and validation

BS7925-1 defines verification as "the process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase". The 'conditions imposed at the start of that phase' are the key to understanding verification. These conditions should be generic, in that they should apply to any product of that phase, and be used to ensure that the development phase has worked well. They are checks on the quality of the process of producing a product, such as 'documentation must be unambiguous', 'the document conforms to the standard template' and, in the case of an actual system, 'has the system been assembled correctly'.

The full definition of the term 'validation' as given by BS7925-1 is "the determination of the correctness of the products of software development with respect to the user needs and requirements". The key to remembering validation is the phrase 'with respect to the user needs and requirements'. This means that the checks may be unique to a particular system, since different systems are developed to meet different user needs. (While this last statement may be rather obvious, it is worth stating when comparing validation with verification.) While verification is more to do with the process of producing a system, validation is more concerned with the products produced, i.e. the system itself.

Validation of each of the products of software development typically involves comparing one product with its parent. For example, (using the terminology given in the V-Model of this course) to validate a project specification we would compare it with the business requirement specification. This involves checking completeness and consistency, for example by checking that the project specification addresses all of the business requirements.
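The completeness check just described - confirming that a project specification addresses all of the business requirements - can be sketched in a few lines. The requirement IDs below are hypothetical; a real check would trace through the actual documents.

```python
def uncovered_requirements(business_reqs, spec_refs):
    """Business requirements not addressed anywhere in the project
    specification. IDs are hypothetical, for illustration only."""
    return sorted(set(business_reqs) - set(spec_refs))

business_reqs = ["BR-1", "BR-2", "BR-3"]
spec_refs     = ["BR-1", "BR-3"]   # requirements the project spec traces back to

print(uncovered_requirements(business_reqs, spec_refs))  # ['BR-2']
```

An empty result would mean the child document (the project specification) is complete with respect to its parent (the business requirement specification), which is exactly the parent-child comparison the text describes.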

High Level Test Planning

Before planning for a set of tests:

Set the organizational test strategy

Identify the people to be involved (sponsors, testers, QA, development, support, et al.)

Set up the test organization and infrastructure

Define test deliverables and the reporting structure

Purpose: What is the purpose of a high-level test plan?

Who does it communicate to?

Why is it a good idea to have one?

What information should be in a high level test plan?

What is your standard for contents of a test plan?

Have you ever forgotten something important?

What is not included in a test plan?

The purpose of high level test planning is to produce a high-level test plan! A high-level test plan is synonymous with a project test plan and covers all levels of testing. It is a management document describing the scope of the testing effort, resources required, schedules, etc.

There is a standard for test documentation. It is ANSI/IEEE 829 "Standard for Software Test Documentation". This outlines a whole range of test documents including a test plan. It describes the information that should be considered for inclusion in a test plan under 16 headings. These are described below.

Content of a high level Test Plan

1. Test Plan Identifier

Some unique reference for this document.

2. Introduction

A guide to what the test plan covers and references to other relevant documents such as the Quality Assurance and Configuration Management plans.

3. Test Items

The physical things that are to be tested such as executable programs, data files or databases. The version numbers of these, details of how they will be handed over to testing (on disc, tape, across the network, etc.) and references to relevant documentation.


4. Features to be Tested

The logical things that are to be tested, i.e. the functionality and features.

5. Features not to be Tested

The logical things (functionality / features) that are not to be tested.

6. Approach

The activities necessary to carry out the testing, described in sufficient detail to allow the overall effort to be estimated. This includes the techniques and tools to be used, the completion criteria (such as coverage measures) and constraints such as environment restrictions and staff availability.

7. Item Pass / Fail Criteria

For each test item, the criteria for passing (or failing) that item, such as the number of known (and predicted) outstanding faults.

8. Suspension / Resumption Criteria

The criteria that will be used to determine when (if) any testing activities should be suspended and resumed. For example, if too many faults are found with the first few test cases it may be more cost effective to stop testing at the current level and wait for the faults to be fixed.

9. Test Deliverables

What the testing processes should provide in terms of documents, reports, etc.

Test deliverables

Test plan


Test design specification

Test case specification

Test procedure specification

Test item transmittal reports

Test logs

Test incident reports

Test summary reports

10. Testing Tasks

Specific tasks, special skills required and the inter-dependencies.

11. Environment

Details of the hardware and software that will be needed in order to execute the tests, and any other facilities (including office space and desks) that may be required.

12. Responsibilities

Who is responsible for which activities and deliverables?

13. Staffing and Training Needs

Staff required and any training they will need, such as training on the system to be tested (so they can understand how to use it), training in the business, or training in testing techniques or tools.


14. Schedule

Milestones for delivery of software into testing, availability of the environment and test deliverables.

15. Risks and Contingencies

What could go wrong, and what will be done to minimise the adverse impact if anything does go wrong.

16. Approvals

Names and when approved.

This is rather a lot to remember (though in practice you will be able to use the test documentation standard IEEE 829 as a checklist). To help you remember what is and is not included in a test plan, consider the following table that maps most of the headings onto the acronym SPACE.

Scope: Test Items, Features to be Tested, Features not to be Tested.

People: Staffing and Training Needs, Schedule, Responsibilities.

Approach: Approach.

Criteria: Item Pass/Fail Criteria, Suspension and Resumption Criteria.

Environment: Environment.

TYPES OF TESTING


The box approach

Software testing methods are traditionally divided into white- and black-box testing. These two approaches are used to describe the point of view that a test engineer takes when designing test cases.


White-box testing

White-box testing is when the tester has access to the internal data structures and algorithms, including the code that implements these.

Types of white-box testing

The following types of white-box testing exist:

1. API testing (application programming interface) - testing of the application using public and private APIs

2. Code coverage - creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once)

3. Fault injection methods - intentionally introducing faults to gauge the efficacy of testing strategies

4. Mutation testing methods

5. Static testing - All types

Test coverage

White-box testing methods can also be used to evaluate the completeness of a test suite that was created with black-box testing methods. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested.

Two common forms of code coverage are:

1. Function coverage, which reports on functions executed

2. Statement coverage, which reports on the number of lines executed to complete the test.

Both return a code coverage metric, measured as a percentage.
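The two coverage measures above can be illustrated with a small, self-contained sketch. Everything here (the classify function, the hand-rolled tracer) is an invented example; real projects would use a dedicated tool such as coverage.py rather than the raw sys.settrace hook.

```python
# A minimal sketch of statement coverage measurement using Python's
# built-in tracing hook. The function under test is a made-up example.
import sys

def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

def run_with_coverage(func, inputs):
    """Run func on each input, recording which of its lines execute."""
    executed = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        for value in inputs:
            func(value)
    finally:
        sys.settrace(None)
    return executed

# One test input exercises only the "non-negative" branch...
partial = run_with_coverage(classify, [5])
# ...adding a negative input executes every statement in the function.
full = run_with_coverage(classify, [5, -3])
assert partial < full   # strictly more lines covered by the larger suite
```

The percentage metric mentioned above would be len(executed) divided by the total number of executable lines in the function.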


STATIC TESTING

These techniques are referred to as "static" because the software is not executed; rather the specifications, documentation and source code that comprise the software are examined in varying degrees of detail. There are two basic types of static testing. One of these is people-based and the other is tool-based. People-based techniques are generally known as "reviews" but there are a variety of different ways in which reviews can be performed. The tool-based techniques examine source code and are known as "static analysis". Both of these basic types are described in separate sections below.

What are Reviews?

Reviews are the generic name given to people-based static techniques. More or less any activity that involves one or more people examining something could be called a review. There are a variety of different ways in which reviews are carried out across different organizations and in many cases within a single organization. Some are very formal, some are very informal, and many lie somewhere between the two. The chances are that you have been involved in reviews of one form or another. One person can perform a review of his or her own work or of someone else's work. However, it is generally recognized that reviews performed by only one person are not as effective as reviews conducted by a group of people all examining the same document (or whatever it is that is being reviewed).

Review techniques for individuals


Desk checking and proof reading are two techniques that can be used by individuals to review a document such as a specification or a piece of source code. They are basically the same process: the reviewer double-checks the document or source code on their own. Data stepping is a slightly different process for reviewing source code: the reviewer follows a set of data values through the source code to ensure that the values are correct at each step of the processing.

Review techniques for groups

The static techniques that involve groups of people are generically referred to as reviews. Reviews can vary a lot from very informal to highly formal, as will be discussed in more detail shortly. Two examples of types of review are Walkthroughs and Inspection. A walkthrough is a form of review that is typically used to educate a group of people about a technical document. Typically the author "walks" the group through the ideas to explain them and so that the attendees understand the content. Inspection is the most formal of all the formal review techniques. Its main focus during the process is to find faults, and it is the most effective review technique in finding them (although the other types of review also find some faults). Inspection is discussed in more detail later.

Types of review

We have now established that reviews are an important part of software testing. Testers should be involved in reviewing the development documents that tests are based on, and should also review their own test documentation. In this section, we will look at different types of reviews, and the activities that are done to a greater or lesser extent in all of them. We will also look at the Inspection process in a bit more detail, as it is the most effective of all review types.

Characteristics of different review types

Informal review

As its name implies, this is very much an ad hoc process. Normally it simply consists of someone giving their document to someone else and asking them to look it over. A document may be distributed to a number of people, and the author of the document would hope to receive back some helpful comments. It is a very cheap form of review because there is no monitoring of metrics, no meeting and no follow-up. It is generally perceived to be useful, and compared to not doing any reviews at all, it is. However, it is probably the least effective form of review (although no one can prove that since no measurements are ever done!).

Technical review or Peer review

A technical review may have varying degrees of formality. This type of review focuses on technical issues and technical documents. A peer review would exclude managers from the review. The success of this type of review typically depends on the individuals involved: they can be very effective and useful, but sometimes they are very wasteful (especially if the meetings are not well disciplined), and can be rather subjective. Often this level of review will have some documentation, even if just a list of the issues raised. Sometimes metrics will be kept. This type of review can find important faults, but can also be used to resolve difficult technical problems, for example deciding on the best way to implement a design.

Walkthrough


A walkthrough is typically led by the author of a document, for the purpose of educating the participants about the content so that everyone understands the same thing. A walkthrough may include "dry runs" of business scenarios to show how the system would handle certain specific situations. For technical documents, it is often a peer group technique.

Inspection

An Inspection is the most formal of the formal review techniques. There are strict entry and exit criteria to the Inspection process, it is led by a trained Leader or moderator (not the author), and there are defined roles for searching for faults based on defined rules and checklists. Metrics are a required part of the process. More detail on Inspection is given later in this session.


Looking at the V life cycle diagram that was discussed in Session 2, reviews and Inspections apply to everything on the left-hand side of the V-model. Note that the reviews apply not only to the products of development but also to the test documentation that is produced early in the life cycle. We have found that reviewing the business needs alongside the Acceptance Tests works really well. It clarifies issues that might otherwise have been overlooked. This is yet another way to find faults as early as possible in the life cycle so that they can be removed.

Static analysis

What can static analysis do?

Static analysis is a form of automated testing. It can check for violations of standards and can find things that may or may not be faults. Static analysis is descended from compiler technology. In fact, many compilers may have static analysis facilities available for developers to use if they wish. There are also a number of stand-alone static analysis tools for various different computer programming languages. Like a compiler, the static analysis tool analyses the code without executing it, and can alert the developer to various things such as unreachable code, undeclared variables, etc. Static analysis tools can also compute various metrics about code such as cyclomatic complexity.
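As a toy illustration of the idea above: the checker below parses source code without executing it and flags unreachable statements after a return, one of the findings a static analysis tool can alert the developer to. This is an invented sketch, not a real static analysis tool.

```python
# A minimal static check: parse the source (never execute it) and flag
# statements that appear after a `return` in the same block.
import ast

def find_unreachable(source):
    """Return line numbers of statements that follow a return statement."""
    tree = ast.parse(source)
    unreachable = []
    for node in ast.walk(tree):
        body = getattr(node, "body", None)
        if not isinstance(body, list):
            continue
        seen_return = False
        for stmt in body:
            if seen_return:
                unreachable.append(stmt.lineno)
            if isinstance(stmt, ast.Return):
                seen_return = True
    return unreachable

sample = """
def f(x):
    return x + 1
    print("never runs")   # unreachable
"""
assert find_unreachable(sample) == [4]
```

A production tool would of course handle branches, loops and exceptions as well, and would also compute metrics such as cyclomatic complexity from the same parse tree.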

Black-box testing

Black-box testing treats the software as a "black box", without any knowledge of internal implementation. Black-box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, fuzz testing, model-based testing, exploratory testing and specification-based testing.


Specification-based testing:

Aims to test the functionality of software according to the applicable requirements. Thus, the tester inputs data into, and only sees the output from, the test object. This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior), either "is" or "is not" the same as the expected value specified in the test case. Specification-based testing is necessary, but it is insufficient to guard against certain risks.
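As a sketch of deriving test cases purely from a specification, suppose a made-up grading rule: scores are valid from 0 to 100, and 40 or above is a pass. Equivalence partitioning and boundary value analysis (both listed above) then suggest the cases below; the grade function stands in for the test object.

```python
# Specification-based (black-box) test design: cases come from the
# spec, not the code. The grading rule is an invented example.

def grade(score):
    """Hypothetical implementation under test."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 40 else "fail"

# One value per equivalence partition, plus values at each boundary.
cases = [
    (0, "fail"), (39, "fail"),    # lower limit and just below pass mark
    (40, "pass"), (100, "pass"),  # pass boundary and upper limit
]
for score, expected in cases:
    assert grade(score) == expected

for bad in (-1, 101):             # values just outside the valid range
    try:
        grade(bad)
        assert False, "expected ValueError"
    except ValueError:
        pass
```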

Advantages and disadvantages:

The black-box tester has no "bonds" with the code, and a tester's perception is very simple: a code must have bugs. Using the principle, "Ask and you shall receive," black-box testers find bugs where programmers do not. On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight," because the tester doesn't know how the software being tested was actually constructed.

As a result, there are situations when

(1) A tester writes many test cases to check something that could have been tested by only one test case, and/or

(2) Some parts of the back-end are not tested at all.

Therefore, black-box testing has the advantage of "an unaffiliated opinion", on the one hand, and the disadvantage of "blind exploring", on the other.

Grey-box testing


Grey-box testing (American spelling: gray-box testing) involves having knowledge of internal data structures and algorithms for purposes of designing tests, while executing those tests at the user, or black-box level. The tester is not required to have full access to the software's source code. Manipulating input data and formatting output do not qualify as grey-box, because the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. However, modifying a data repository does qualify as grey-box, as the user would not normally be able to change the data outside of the system under test. Grey-box testing may also include reverse engineering to determine, for instance, boundary values or error messages.

By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up his testing environment; for instance, seeding a database; and the tester can observe the state of the product being tested after performing certain actions. For instance, in testing a database product he/she may fire an SQL query on the database and then observe the database, to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios, based on limited information. This will particularly apply to data type handling, exception handling, and so on.
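The database example above might look like the following sketch, using SQLite; the schema and the place_order operation are invented for illustration.

```python
# Grey-box testing sketch: drive the system through its exposed
# interface, then query the database directly to observe its state.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")

def place_order(connection):
    """System under test: exposed operation that records a new order."""
    cur = connection.execute("INSERT INTO orders (status) VALUES ('new')")
    return cur.lastrowid

# Black-box step: use only the exposed interface.
order_id = place_order(conn)

# Grey-box step: fire a query directly at the database to confirm the
# expected change has been reflected.
row = conn.execute(
    "SELECT status FROM orders WHERE id = ?", (order_id,)
).fetchone()
assert row == ("new",)
```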

Visual testing


The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information he requires, and the information is expressed clearly.

At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing therefore requires the recording of the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-picture webcam and audio commentary from microphones.

Visual testing provides a number of advantages. The quality of communication is increased dramatically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence he requires of a test failure and can instead focus on the cause of the fault and how it should be fixed.

Visual testing is particularly well-suited for environments that deploy agile methods in their development of software, since agile methods require greater communication between testers and developers and collaboration within small teams. Ad hoc testing and exploratory testing are important methodologies for checking software integrity, because they require less preparation time to implement, whilst important bugs can be found quickly. In ad hoc testing, where testing takes place in an improvised, impromptu way, the ability of a test tool to visually record everything that occurs on a system becomes very important.

Visual testing is gaining recognition in customer acceptance and usability testing, because the test can be used by many individuals involved in the development process. For the customer, it becomes easy to provide detailed bug reports and feedback, and for program users, visual testing can record user actions on screen, as well as their voice and image, to provide a complete picture at the time of software failure for the developer.

LEVELS OF TESTING

Test target: Unit testing, Integration testing, System testing, System integration testing

Objectives of testing: Installation testing, Sanity testing, Regression testing, Acceptance testing, Alpha testing, Beta testing

Tests are frequently grouped by where they are added in the software development process, or by the level of specificity of the test. The main levels during the development process, as defined by the SWEBOK guide, are unit, integration, and system testing, which are distinguished by the test target without implying a specific process model. Other test levels are classified by the testing objective.

Test target

Unit testing

Unit testing, also known as component testing, refers to tests that verify the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.

These types of tests are usually written by developers as they work on code (white-box style), to ensure that the specific function is working as expected. One function might have multiple tests, to catch corner cases or other branches in the code. Unit testing alone cannot verify the functionality of a piece of software, but rather is used to assure that the building blocks the software uses work independently of each other.
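A minimal unit test in this style might look like the following sketch, using Python's unittest module; the word_count function and its corner case are invented examples.

```python
# A unit test for a single function, with separate test methods for a
# typical case and a corner case, as described above.
import unittest

def word_count(text):
    """Function under test: count whitespace-separated words."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_typical_sentence(self):
        self.assertEqual(word_count("unit tests run in isolation"), 5)

    def test_corner_case_empty_string(self):
        self.assertEqual(word_count(""), 0)

# Run the suite programmatically (a test runner would normally do this).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```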


Component Testing

What is Component Testing?

BS7925-1 defines a component as "A minimal software item for which a separate specification is available". Components are relatively small pieces of software that are, in effect, the building blocks from which the system is formed. They may also be referred to as modules, units or programs, and so this level of testing may also be known as module, unit or program testing. For some organizations a component can be just a few lines of source code, while for others it can be a small program. Component testing, then, is the lowest level of testing (i.e. it is at the bottom of the V-Model software development life cycle). It is the first level of testing to start executing test cases (but should be the last to specify test cases). It is the opportunity to test the software in isolation and therefore in the greatest detail, looking at its functionality and structure, error handling and interfaces.

Because it is just a component being tested, it is often necessary to have a test harness or driver to form an executable program that can be executed. This will usually have to be developed in parallel with the component or may be created by adapting a driver for another component. It should be kept as simple as possible to reduce the risk of faults in the driver obscuring faults in the component being tested. Typically, drivers need to provide a means of taking test input from the tester or a file, passing it on to the component, receiving the output from the component, and presenting it to the tester for comparison with the expected outcome. The programmer who wrote the code most often performs component testing. This is sensible because it is the most economic approach. A programmer who executes test cases on his or her own code can usually track down and fix any faults that may be revealed by the tests relatively quickly. If someone else were to execute the test cases they may have to document each failure. Eventually the programmer would come to investigate each of the fault reports, perhaps having to reproduce them in order to determine their causes. Once fixed, the fixed software would then be re-tested by this other person to confirm each fault had indeed been fixed. This amounts to more effort and yet the same outcome: faults fixed. Of course it is important that some independence is brought into the test specification activity. The programmer should not be the only person to specify test cases (see Session 1 "Independence"). Both functional and structural test case design techniques are appropriate, though the extent to which they are used should be defined during the test planning activity. This will depend on the risks involved: for example, how important, critical or complex the components are.
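The kind of driver described above can be sketched as follows; the component (round_price) and its test cases are invented examples.

```python
# A minimal test driver: feed each input to the component, capture the
# output, and present it alongside the expected outcome.

def round_price(value):
    """Component under test: round a price to two decimal places."""
    return round(value, 2)

def driver(component, cases):
    """Run each (input, expected) pair and collect the results."""
    results = []
    for given, expected in cases:
        actual = component(given)
        results.append((given, expected, actual, actual == expected))
    return results

cases = [(1.234, 1.23), (2.0, 2.0), (9.999, 10.0)]
for given, expected, actual, ok in driver(round_price, cases):
    print(f"input={given} expected={expected} actual={actual} "
          f"{'PASS' if ok else 'FAIL'}")
```

Keeping the driver this simple, as the text recommends, means a failure is far more likely to lie in the component than in the harness.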

Component Test Strategy


The Software Component Testing Standard BS7925-2 requires that a Component Test Strategy be documented before any of the component test process activities are carried out (including the component test planning). The component test strategy should include the following information:

Test techniques that are to be used for component testing and the rationale for their choice;

Completion criteria for component testing and the rationale for their choice (typically these will be test coverage measures);

Degree of independence required during the specification of test cases;

Approach required (either isolation, top-down, bottom-up, or a combination of these);

Environment in which component tests are to be executed (including hardware and software such as stubs, drivers and other software components);

Test process to be used, detailing the activities to be performed and the inputs and outputs of each activity.


Integration testing


Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together ("big bang"). Normally the former is considered a better practice since it allows interface issues to be localized and fixed more quickly.

Integration testing works to expose defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.

Integration Testing in the Small

What is Integration Testing in the Small?

Integration testing in the small is bringing together individual components (modules/units) that have already been tested in isolation. The objective is to test that the set of components function together correctly by concentrating on the interfaces between the components. We are trying to find faults that couldn't be found at the individual component testing level. Although the interfaces should have been tested in component testing, integration testing in the small makes sure that the things that are communicated are correct from both sides, not just from one side of the interface. This is an important level of testing but one that is sadly often overlooked.

As more and more components are combined together, a subsystem may be formed which has more system-like functionality that can be tested. At this stage it may also be useful to test non-functional aspects such as performance. For integration testing in the small there are two choices that have to be made:

how many components to combine in one go;

in what order to combine components.

The decision over which choices to make is called the 'integration strategy'. There are two main integration strategies: Big Bang and incremental. These are described in separate sections below.

Big Bang integration

"Big Bang" integration means putting together all of the components in one go. The philosophy is that we have already tested all of the components so why not just throw them all in together and test the lot? The reason normally given for this approach is that is saves time - or does it?

If we encounter a problem it tends to be harder to locate and fix the faults. If the fault is found and fixed then re-testing usually takes a lot longer. In the end the Big Bang strategy does not work - it actually takes longer this way. This approach is based on the [mistaken] assumption that there will be no faults.


Incremental integration

Incremental integration is where a small number of components are combined at once. At a minimum, only one new component would be added to the baseline at each integration step. This has the advantage of much easier fault location and fixing, as well as faster and easier recovery if things do go badly wrong. (The finger of suspicion would point to the most recent addition to the baseline.)


However, having decided to use an incremental approach to integration testing we have to make a second choice: in what order to combine the components. This decision leads to three different incremental integration strategies: top-down, bottom-up and thread (functional) integration.

Top-down integration

As its name implies, top-down integration combines components starting with the highest levels of a hierarchy. Applying this strategy strictly, all components at a given level would be integrated before any at the next level down would be added.


Because it starts from the top, there will be missing pieces of the hierarchy that have not yet been integrated into a baseline. In order to test the partial system that comprises the baseline, stubs are used to substitute for the missing components.
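A stub substituting for a missing component might look like this sketch; all names and values are invented, and prices are kept in pence so the arithmetic stays exact.

```python
# Top-down integration sketch: the high-level component is real, while
# the lower-level component that has not yet been integrated is replaced
# by a stub that returns a canned answer.

def tax_rate_stub(region):
    """Stub standing in for the not-yet-integrated tax lookup component."""
    return 20   # canned tax percentage, regardless of region

def price_with_tax(net_pence, region, tax_lookup):
    """High-level component under test; the lookup component is injected."""
    return net_pence * (100 + tax_lookup(region)) // 100

# The partial baseline can be tested before the real lookup exists:
assert price_with_tax(10000, "UK", tax_rate_stub) == 12000

# At a later integration step the stub is swapped for the real component
# without any change to the high-level code.
```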

Bottom-up integration


Bottom-up integration is the opposite of top-down. Applying it strictly, all components at the lowest levels of the hierarchy would be integrated before any of the higher level ones.

Thread integration


Thread integration integrates a minimum capability with respect to time: the history, or thread, of processing determines the minimum number of components to integrate together.

System testing

System testing tests a completely integrated system to verify that it meets its requirements.


System testing has two important aspects, which are distinguished in the syllabus: functional system testing and non-functional system testing. The non-functional aspects are often as important as the functional, but are generally less well specified and may therefore be more difficult to test (but not impossible). If an organization has an independent test group, it is usually at this level, i.e. it performs system testing.


Functional System Testing

Functional system testing gives us the first opportunity to test the system as a whole and is in a sense the final baseline of integration testing in the small. Typically we are looking at end-to-end functionality from two perspectives. One of these perspectives is based on the functional requirements and is called requirement-based testing. The other perspective is based on the business process and is called business process-based testing.

System integration testing

System integration testing verifies that a system is integrated to any external or third-party systems defined in the system requirements.

Objectives of testing

Installation testing

An installation test assures that the system is installed correctly and works on the customer's actual hardware.

Sanity testing

A Sanity test determines whether it is reasonable to proceed with further testing.

Regression testing

Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, or old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code. Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features. It can range from complete, for changes added late in the release or deemed to be risky, to very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk.
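A regression suite in this spirit might be sketched as follows; the normalise function and its bug history are invented examples.

```python
# Regression testing sketch: previously passing tests are kept and
# re-run after every change, so an old bug that re-emerges is caught.

def normalise(name):
    """Current version of the code under test."""
    return " ".join(name.split()).lower()

# The suite accumulates over time; the second case was added when a
# fault involving repeated spaces was first fixed, and re-running it
# guards against that regression re-appearing.
regression_suite = [
    ("Ada Lovelace", "ada lovelace"),
    ("Ada   Lovelace", "ada lovelace"),    # previously fixed fault
    ("  Ada Lovelace  ", "ada lovelace"),  # previously fixed fault
]

for given, expected in regression_suite:
    assert normalise(given) == expected
```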

Acceptance testing

Acceptance testing can mean one of two things:

1. A smoke test is used as an acceptance test prior to introducing a new build to the main testing process, i.e. before integration or regression.

2. Acceptance testing performed by the customer, often in their lab environment on their own hardware, is known as user acceptance testing (UAT). Acceptance testing may be performed as part of the hand-off process between any two phases of development.

Alpha testing

Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.

Advantages of Alpha Testing:


Provides a better view of the reliability of the software at an early stage

Helps simulate real-time user behavior and environment

Detects many showstopper or serious errors

Allows early detection of errors with respect to design and functionality

Disadvantages of Alpha Testing:

In-depth functionality cannot be tested, as the software is still in the development stage

Sometimes developers and testers are dissatisfied with the results of alpha testing

Beta testing

Beta testing comes after alpha testing and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.

Advantages of Beta Testing:

Reduces product failure risk via customer validation.

Allows a company to test post-launch infrastructure.

Improves product quality via customer feedback.

Cost-effective compared to similar data-gathering methods.


Creates goodwill with customers and increases customer satisfaction.

Disadvantages of Beta Testing:

Test management is an issue. Unlike other testing types, which are usually executed inside a company in a controlled environment, beta testing is executed out in the real world, where you seldom have control.

Finding the right beta users and maintaining their participation can be a challenge.

Alpha and Beta testing

Both alpha and beta testing are normally used by software houses that produce mass-market, shrink-wrapped software packages. This stage of testing comes after system testing, and may include elements of integration testing in the large. The alpha or beta testers are given a pre-release version of the software and asked to give feedback on the product. Alpha and beta testing are used where there are no identifiable "end users" other than the general public. The difference between alpha and beta testing lies in where they are carried out: alpha testing is done at the development site, where potential customers are invited into the developer's offices, while beta testing is done at customer sites, with the software sent out to them.

Page 65: Steria Intern Report

Alpha Test vs. Beta Test

What they do

Alpha: Improve the quality of the product and ensure beta readiness.

Beta: Improve the quality of the product, integrate customer input on the complete product, and ensure release readiness.

When they happen

Alpha: Toward the end of the development process, when the product is in a near fully-usable state.

Beta: Just prior to launch, sometimes ending within weeks or even days of final release.

How long they last

Alpha: Usually very long, with many iterations; it is not uncommon for alpha to last 3-5x the length of beta.

Beta: Usually only a few weeks (sometimes up to a couple of months), with few major iterations.

Who cares about it

Alpha: Almost exclusively quality/engineering (bugs, bugs, bugs).

Beta: Usually product marketing, support, docs, quality and engineering (basically the entire product team).

Who participates (tests)

Alpha: Normally performed by test engineers, employees, and sometimes "friends and family"; focuses on testing that would emulate roughly 80% of the customers.

Beta: Tested in the "real world" with "real customers"; the feedback can cover every element of the product.

What testers should expect

Alpha: Plenty of bugs, crashes, missing docs and features.

Beta: Some bugs, fewer crashes, most docs, feature complete.

How they're addressed

Alpha: Most known critical issues are fixed; some features may change or be added as a result of early feedback.

Beta: Much of the feedback collected is considered for and/or implemented in future versions of the product; only important or critical changes are made to the current release.

What they achieve

Alpha: About methodology, efficiency and regimen. A good alpha test sets well-defined benchmarks and measures the product against those benchmarks.

Beta: About chaos, reality, and imagination. Beta tests explore the limits of a product by allowing customers to explore every element of it in their native environments.

When it's over

Alpha: You have a decent idea of how the product performs and whether it meets the design criteria (and whether it is "beta-ready").

Beta: You have a good idea of what your customers think about the product and what they are likely to experience when they purchase it.

What happens next

Alpha: Beta test!

Beta: Release party!

CONCLUSION

The project was developed by performing manual testing at each step of the development process. I learnt that software testing is a very important aspect of software development, since it tells us whether the software works as per the user requirements, and what further changes can be made to the existing software to make it more effective and efficient.