Software Testing (TYIT)
Software Testing
Testing is the process of evaluating a system or its component(s) with the intent to
find whether it satisfies the specified requirements or not. In simple words, testing
is executing a system in order to identify any gaps, errors, or missing
requirements contrary to the actual requirements.
According to the ANSI/IEEE 1059 standard, testing can be defined as "a process of
analyzing a software item to detect the differences between existing and required
conditions (that is, defects/errors/bugs) and to evaluate the features of the
software item."
Who does Testing?
It depends on the process and the associated stakeholders of the project(s). In the
IT industry, large companies have a team with responsibilities to evaluate the
developed software in the context of the given requirements. Moreover, developers
also conduct testing, which is called unit testing (a minimal sketch follows the list
below). In most cases, the following professionals are involved in testing a system
within their respective capacities:
Software Tester
Software Developer
Project Lead/Manager
End User
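As a minimal illustration of the unit testing mentioned above, here is a sketch using Python's standard unittest module; the add function is a hypothetical stand-in for a developer's own unit.

```python
import unittest

def add(a, b):
    """Hypothetical unit under test, standing in for a developer's own code."""
    return a + b

class AddTests(unittest.TestCase):
    def test_adds_two_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_handles_negative_numbers(self):
        self.assertEqual(add(-2, 3), 1)

if __name__ == "__main__":
    unittest.main()
```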
Different companies have different designations for people who test the software
on the basis of their experience and knowledge such as Software Tester, Software
Quality Assurance Engineer, QA Analyst, etc.
Software cannot be tested at just any time during its life cycle. The next two
sections state when testing should be started and when to end it during the SDLC.
When to Start Testing?
An early start to testing reduces the cost and time required to rework and produce
error-free software that is delivered to the client. However, in the Software
Development Life Cycle (SDLC), testing can be started from the Requirements
Gathering phase and continued till the deployment of the software. It also depends
on the development model that is being used. For example, in the Waterfall model,
formal testing is conducted in the testing phase; but in the incremental model,
testing is performed at the end of every increment/iteration and the whole
application is tested at the end.
Testing is done in different forms at every phase of SDLC:
During the requirement gathering phase, the analysis and verification of
requirements are also considered as testing.
Reviewing the design in the design phase with the intent to improve the
design is also considered as testing.
Testing performed by a developer on completion of the code is also
categorized as testing.
When to Stop Testing?
It is difficult to determine when to stop testing, as testing is a never-ending
process and no one can claim that a software is 100% tested. The following
aspects should be considered before stopping the testing process:
Testing Deadlines
Completion of test case execution
Completion of functional and code coverage to a certain point
Bug rate falls below a certain level and no high-priority bugs are identified
Management decision
Verification & Validation
These two terms are very confusing for most people, who use them
interchangeably. The following points highlight the differences between verification
and validation.
1. Verification addresses the concern: "Are you building it right?" Validation addresses the concern: "Are you building the right thing?"
2. Verification ensures that the software system meets all the functionality. Validation ensures that the functionalities meet the intended behavior.
3. Verification takes place first and includes checking the documentation, code, etc. Validation occurs after verification and mainly involves checking the overall product.
4. Verification is done by developers. Validation is done by testers.
5. Verification involves static activities, as it includes collecting reviews, walkthroughs, and inspections to verify a software. Validation involves dynamic activities, as it includes executing the software against the requirements.
6. Verification is an objective process, and no subjective decision should be needed to verify a software. Validation is a subjective process and involves subjective decisions on how well a software works.
Software Testing Myths
Given below are some of the most common myths about software testing.
Myth 1: Testing is Too Expensive
Reality: There is a saying: pay less for testing during software development, or
pay more for maintenance and correction later. Early testing saves both time and
cost in many aspects; however, cutting costs by skipping testing may result in the
improper design of a software application, rendering the product useless.
Myth 2: Testing is Time-Consuming
Reality: During the SDLC phases, testing is never a time-consuming process.
However, diagnosing and fixing the errors identified during proper testing is a
time-consuming but productive activity.
Myth 3: Only Fully Developed Products are Tested
Reality: No doubt, testing depends on the source code, but reviewing
requirements and developing test cases is independent of the developed code.
However, an iterative or incremental approach as a development life cycle model
may reduce the dependency of testing on the fully developed software.
Myth 4: Complete Testing is Possible
Reality: It becomes an issue when a client or tester thinks that complete testing
is possible. All paths may have been tested by the team, but complete testing is
never possible. There might be some scenarios that are never executed by the
test team or the client during the software development life cycle and that may
be executed once the project has been deployed.
Myth 5: A Tested Software is Bug-Free
Reality: This is a very common myth that clients, project managers, and the
management team believe in. No one can claim with absolute certainty that a
software application is 100% bug-free, even if a tester with superb testing skills
has tested the application.
Myth 6: Missed Defects are due to Testers
Reality: It is not a correct approach to blame testers for bugs that remain in the
application even after testing has been performed. This myth relates to the
constraints of time, cost, and changing requirements. However, the test strategy
may also result in bugs being missed by the testing team.
Myth 7: Testers are Responsible for the Quality of the Product
Reality: It is a very common misinterpretation that only testers or the testing
team should be responsible for product quality. Testers' responsibilities include
reporting identified bugs to the stakeholders; it is then the stakeholders' decision
whether to fix the bugs or release the software. Releasing the software anyway
puts more pressure on the testers, as they will be blamed for any error.
Myth 8: Test Automation should be used wherever possible to Reduce Time
Reality: Yes, it is true that test automation reduces testing time, but it is not
possible to start test automation at any time during software development. Test
automation should be started when the software has been manually tested and is
stable to some extent. Moreover, test automation can never be used if
requirements keep changing.
Myth 9: Anyone can Test a Software Application
Reality: People outside the IT industry think, and even believe, that anyone can
test a software and that testing is not a creative job. However, testers know very
well that this is a myth. Thinking up alternative scenarios and trying to crash a
software with the intent of exploring potential bugs is not possible for the person
who developed it.
Myth 10: A Tester's only Task is to Find Bugs
Reality: Finding bugs in a software is the task of the testers, but at the same
time, they are domain experts of the particular software. Developers are only
responsible for the specific component or area assigned to them, but testers
understand the overall workings of the software, what the dependencies are, and
the impact of one module on another.
Quality Assurance, Quality Control, and Testing
Most people get confused when it comes to pinning down the differences among
Quality Assurance, Quality Control, and Testing. Although they are interrelated
and can, to some extent, be considered the same activities, there are
distinguishing points that set them apart. The following points differentiate
QA, QC, and Testing.
1. Quality Assurance (QA) includes activities that ensure the implementation of processes, procedures, and standards in the context of verifying the developed software against the intended requirements. Quality Control (QC) includes activities that ensure the verification of the developed software with respect to documented (or, in some cases, undocumented) requirements. Testing includes activities that ensure the identification of bugs/errors/defects in the software.
2. QA focuses on processes and procedures rather than conducting actual testing on the system. QC focuses on actual testing by executing the software with the aim of identifying bugs/defects through the implementation of procedures and processes. Testing focuses on actual testing.
3. QA consists of process-oriented activities. QC consists of product-oriented activities. Testing consists of product-oriented activities.
4. QA consists of preventive activities. QC is a corrective process. Testing is a preventive process.
5. QA is a subset of the Software Test Life Cycle (STLC). QC can be considered a subset of Quality Assurance. Testing is a subset of Quality Control.
Audit and Inspection
Audit: It is a systematic process to determine how the actual testing process is
conducted within an organization or a team. Generally, it is an independent
examination of the processes involved during the testing of a software. As per IEEE, it
is a review of documented processes that organizations implement and follow.
Types of audit include Legal Compliance Audit, Internal Audit, and System Audit.
Inspection: It is a formal technique that involves formal or informal technical
reviews of any artifact by identifying any error or gap. As per IEEE94, inspection is
a formal evaluation technique in which software requirements, designs, or code
are examined in detail by a person or a group other than the author to detect
faults, violations of development standards, and other problems.
Formal inspection meetings may include the following processes: Planning,
Overview, Preparation, Inspection Meeting, Rework, and Follow-up.
Testing and Debugging
Testing: It involves identifying bugs/errors/defects in a software without
correcting them. Normally, professionals with a quality assurance background are
involved in identifying bugs. Testing is performed in the testing phase.
Debugging: It involves identifying, isolating, and fixing the problems/bugs.
Developers who code the software conduct debugging upon encountering an error
in the code. Debugging is a part of white-box testing or unit testing. Debugging
can be performed in the development phase while conducting unit testing, or in
later phases while fixing reported bugs.
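A small Python sketch of the distinction, using a hypothetical average function: the test only identifies the defect, while debugging isolates and fixes it in the code.

```python
import unittest

def average_buggy(values):
    # Defect: raises ZeroDivisionError for an empty list.
    return sum(values) / len(values)

def average_fixed(values):
    # The fix applied during debugging: guard the empty-list case.
    if not values:
        return 0.0
    return sum(values) / len(values)

class AverageTests(unittest.TestCase):
    def test_empty_list_identifies_defect(self):
        # Testing: this test only identifies the defect; it does not correct it.
        with self.assertRaises(ZeroDivisionError):
            average_buggy([])

    def test_empty_list_after_debugging(self):
        # After debugging, the same input behaves as intended.
        self.assertEqual(average_fixed([]), 0.0)

if __name__ == "__main__":
    unittest.main()
```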
Quality Assurance and Testing
Many organizations around the globe develop and implement different standards to
address the quality needs of their software. This chapter briefly describes some of
the widely used standards related to Quality Assurance and Testing.
ISO/IEC 9126
This standard deals with the following aspects to determine the quality of a
software application:
Quality model
External metrics
Internal metrics
Quality in use metrics
This standard presents some set of quality attributes for any software such as:
Functionality
Reliability
Usability
Efficiency
Maintainability
Portability
The above-mentioned quality attributes are further divided into sub-factors, which
you can explore by studying the standard in detail.
ISO/IEC 9241-11
Part 11 of this standard deals with the extent to which a product can be used by
specified users to achieve specified goals with Effectiveness, Efficiency and
Satisfaction in a specified context of use.
This standard proposes a framework that describes the usability components and
the relationships between them. In this standard, usability is considered in
terms of user performance and satisfaction. According to ISO 9241-11, usability
depends on the context of use, and the level of usability will change as the
context changes.
ISO/IEC 25000:2005
ISO/IEC 25000:2005 is commonly known as the standard that provides the
guidelines for Software Quality Requirements and Evaluation (SQuaRE). This
standard helps in organizing and enhancing the process related to software quality
requirements and their evaluations. In reality, ISO-25000 replaces the two old ISO
standards, i.e. ISO-9126 and ISO-14598.
SQuaRE is divided into sub-parts such as:
ISO 2500n - Quality Management Division
ISO 2501n - Quality Model Division
ISO 2502n - Quality Measurement Division
ISO 2503n - Quality Requirements Division
ISO 2504n - Quality Evaluation Division
The main contents of SQuaRE are:
Terms and definitions
Reference Models
General guide
Individual division guides
Standard related to Requirement Engineering (i.e. specification, planning,
measurement and evaluation process)
ISO/IEC 12119
This standard deals with software packages delivered to the client. It does not
focus or deal with the clients’ production process. The main contents are related to
the following items:
Set of requirements for software packages.
Instructions for testing a delivered software package against the specified
requirements.
Miscellaneous
Some of the other standards related to QA and Testing processes are mentioned
below:
IEEE 829 - A standard for the format of documents used in different stages of software testing.
IEEE 1061 - A methodology for establishing quality requirements and for identifying, implementing, analyzing, and validating the process and product of software quality metrics.
IEEE 1059 - Guide for Software Verification and Validation Plans.
IEEE 1008 - A standard for unit testing.
IEEE 1012 - A standard for Software Verification and Validation.
IEEE 1028 - A standard for software inspections.
IEEE 1044 - A standard for the classification of software anomalies.
IEEE 1044-1 - A guide for the classification of software anomalies.
IEEE 830 - A guide for developing system requirements specifications.
IEEE 730 - A standard for software quality assurance plans.
IEEE 1061 - A standard for software quality metrics and methodology.
IEEE 12207 - A standard for software life cycle processes and life cycle data.
BS 7925-1 - A vocabulary of terms used in software testing.
BS 7925-2 - A standard for software component testing.
This section describes the different types of testing that may be used to test a
software during SDLC.
Manual Testing
Manual testing includes testing a software manually, i.e., without using any
automated tool or script. In this type of testing, the tester takes over the role of
an end user and tests the software to identify any unexpected behavior or bugs.
There are different stages of manual testing, such as unit testing, integration
testing, system testing, and user acceptance testing.
Testers use test plans, test cases, or test scenarios to test a software to ensure
the completeness of testing. Manual testing also includes exploratory testing, as
testers explore the software to identify errors in it.
Automation Testing
Automation testing, also known as test automation, is when the tester writes
scripts and uses other software to test the product. This process involves the
automation of a manual process. Automation testing is used to re-run, quickly and
repeatedly, test scenarios that were previously performed manually.
Apart from regression testing, automation testing is also used to test the
application from the load, performance, and stress points of view. It increases
test coverage, improves accuracy, and saves time and money in comparison to
manual testing.
What to Automate?
It is not possible to automate everything in a software. Areas where a user can
make transactions, such as login or registration forms, and any area where a
large number of users can access the software simultaneously, should be
automated.
Furthermore, all GUI items, connections with databases, field validations, etc. can
be efficiently tested by automating the manual process.
When to Automate?
Test automation should be used after considering the following aspects of a
software:
Large and critical projects
Projects that require testing the same areas frequently
Requirements that do not change frequently
Assessing the application for load and performance with many virtual users
Stable software with respect to manual testing
Availability of time
How to Automate?
Automation is done by using a supportive computer language like VB scripting and
an automated software application. There are many tools available that can be
used to write automation scripts. Before mentioning the tools, let us identify the
process that can be used to automate the testing process:
Identifying areas within the software for automation
Selecting an appropriate tool for test automation
Writing test scripts
Developing test suites
Executing the scripts
Creating result reports
Identifying any potential bugs or performance issues
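As a rough sketch of the last three steps, the following Python script assumes the automated test modules live in a hypothetical ./tests directory; it discovers them with the standard unittest loader, executes them, and writes a plain-text result report.

```python
import unittest

# Assumption: automated test scripts live in a ./tests package.
loader = unittest.TestLoader()
suite = loader.discover("tests")

# Execute the suite and create a result report.
with open("test_report.txt", "w") as report:
    runner = unittest.TextTestRunner(stream=report, verbosity=2)
    result = runner.run(suite)

# Surface potential bugs: failures and errors found during execution.
print(f"ran={result.testsRun} failures={len(result.failures)} errors={len(result.errors)}")
```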
Software Testing Tools
The following tools can be used for automation testing:
HP QuickTest Professional
Selenium
IBM Rational Functional Tester
SilkTest
TestComplete
Testing Anywhere
WinRunner
LoadRunner
Visual Studio Test Professional
WATIR
There are different methods that can be used for software testing. This chapter
briefly describes the methods available.
Black-Box Testing
The technique of testing without having any knowledge of the interior workings of
the application is called black-box testing. The tester is oblivious to the system
architecture and does not have access to the source code. Typically, while
performing a black-box test, a tester will interact with the system's user interface
by providing inputs and examining outputs without knowing how and where the
inputs are worked upon.
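A minimal Python sketch of the idea: shipping_cost below is a hypothetical unit under test. In a real black-box setting the tester would see only its documented contract (say, a flat rate of 5 up to 1 kg, plus 2 per additional kg), not its body, and would derive test cases from that specification alone.

```python
import unittest

def shipping_cost(weight_kg):
    """Hypothetical unit under test. The black-box tester sees only the
    documented contract: flat rate of 5 up to 1 kg, plus 2 per extra kg."""
    if weight_kg <= 1:
        return 5
    return 5 + 2 * (weight_kg - 1)

class ShippingCostBlackBoxTests(unittest.TestCase):
    # Test cases are derived from the specification, not from the code.
    def test_flat_rate_at_boundary(self):
        self.assertEqual(shipping_cost(1), 5)

    def test_extra_weight_is_charged_per_kg(self):
        self.assertEqual(shipping_cost(3), 9)

if __name__ == "__main__":
    unittest.main()
```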
The following lists show the advantages and disadvantages of black-box testing.
Advantages:
Well suited and efficient for large code segments.
Code access is not required.
Clearly separates the user's perspective from the developer's perspective through visibly defined roles.
Large numbers of moderately skilled testers can test the application with no knowledge of implementation, programming language, or operating systems.
Disadvantages:
Limited coverage, since only a selected number of test scenarios is actually performed.
Inefficient testing, since the tester only has limited knowledge about the application.
Blind coverage, since the tester cannot target specific code segments or error-prone areas.
The test cases are difficult to design.
White-Box Testing
White-box testing is the detailed investigation of the internal logic and structure of
the code. White-box testing is also called glass testing or open-box testing. In
order to perform white-box testing on an application, a tester needs to know the
internal workings of the code.
The tester needs to look inside the source code and find out which unit/chunk of
the code is behaving inappropriately.
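A small Python sketch, using a hypothetical classify function: because the tester can read the source, inputs are chosen so that every branch of the code is executed.

```python
import unittest

def classify(n):
    # Visible source: three branches that a white-box tester wants to cover.
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"

class ClassifyWhiteBoxTests(unittest.TestCase):
    # One test input per branch, chosen by inspecting the code above.
    def test_negative_branch(self):
        self.assertEqual(classify(-1), "negative")

    def test_zero_branch(self):
        self.assertEqual(classify(0), "zero")

    def test_positive_branch(self):
        self.assertEqual(classify(7), "positive")

if __name__ == "__main__":
    unittest.main()
```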
The following lists show the advantages and disadvantages of white-box testing.
Advantages:
As the tester has knowledge of the source code, it becomes very easy to find out which type of data can help in testing the application effectively.
It helps in optimizing the code.
Extra lines of code, which can bring in hidden defects, can be removed.
Due to the tester's knowledge of the code, maximum coverage is attained during test scenario writing.
Disadvantages:
Since a skilled tester is needed to perform white-box testing, the costs are increased.
Sometimes it is impossible to look into every nook and corner to find hidden errors that may create problems, as many paths will go untested.
It is difficult to maintain white-box testing, as it requires specialized tools like code analyzers and debugging tools.
Grey-Box Testing
Grey-box testing is a technique to test the application with limited knowledge of
the internal workings of the application. In software testing, the phrase "the more
you know, the better" carries a lot of weight while testing an application.
Mastering the domain of a system always gives the tester an edge over someone
with limited domain knowledge. Unlike black-box testing, where the tester only
tests the application's user interface, in grey-box testing the tester has access to
design documents and the database. With this knowledge, a tester can prepare
better test data and test scenarios while making a test plan.
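A sketch of the idea in Python, with hypothetical names throughout: the tester does not read the body of register_user, but the (assumed) design document reveals the users table, so the test verifies persistence directly in the database.

```python
import sqlite3
import unittest

def register_user(conn, name):
    # Application code under test; a grey-box tester would not read this body,
    # only the interface definition and the design documents.
    conn.execute("INSERT INTO users(name) VALUES (?)", (name,))
    conn.commit()

class RegistrationGreyBoxTests(unittest.TestCase):
    def setUp(self):
        # Schema known from the (assumed) design document.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT)")

    def test_registration_persists_a_row(self):
        register_user(self.conn, "alice")
        # Knowledge of the database lets the tester check persistence directly.
        row = self.conn.execute("SELECT name FROM users").fetchone()
        self.assertEqual(row, ("alice",))

if __name__ == "__main__":
    unittest.main()
```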
The following lists show the advantages and disadvantages of grey-box testing.
Advantages:
Offers the combined benefits of black-box and white-box testing wherever possible.
Grey-box testers don't rely on the source code; instead they rely on interface definitions and functional specifications.
Based on the limited information available, a grey-box tester can design excellent test scenarios, especially around communication protocols and data type handling.
The test is done from the point of view of the user and not the designer.
Disadvantages:
Since access to the source code is not available, the ability to go over the code and test coverage is limited.
The tests can be redundant if the software designer has already run a test case.
Testing every possible input stream is unrealistic because it would take an unreasonable amount of time; therefore, many program paths will go untested.
A Comparison of Testing Methods
The following points differentiate black-box testing, grey-box testing, and
white-box testing.
1. Black-box: The internal workings of an application need not be known. Grey-box: The tester has limited knowledge of the internal workings of the application. White-box: The tester has full knowledge of the internal workings of the application.
2. Black-box: Also known as closed-box testing, data-driven testing, or functional testing. Grey-box: Also known as translucent testing, as the tester has limited knowledge of the insides of the application. White-box: Also known as clear-box testing, structural testing, or code-based testing.
3. Black-box: Performed by end users and also by testers and developers. Grey-box: Performed by end users and also by testers and developers. White-box: Normally done by testers and developers.
4. Black-box: Testing is based on external expectations; the internal behavior of the application is unknown. Grey-box: Testing is done on the basis of high-level database diagrams and data flow diagrams. White-box: The internal workings are fully known and the tester can design test data accordingly.
5. Black-box: The least exhaustive and time-consuming. Grey-box: Partly time-consuming and exhaustive. White-box: The most exhaustive and time-consuming type of testing.
6. Black-box: Not suited for algorithm testing. Grey-box: Not suited for algorithm testing. White-box: Suited for algorithm testing.
7. Black-box: Data domains and internal boundaries can only be tested by trial and error. Grey-box: Data domains and internal boundaries can be tested, if known. White-box: Data domains and internal boundaries can be better tested.
What is the fundamental test process in software testing?
Testing is a process rather than a single activity. This process starts with test
planning, continues through designing test cases, preparing for execution, and
evaluating status, and ends with test closure. So, we can divide the activities
within the fundamental test process into the following basic steps:
1) Planning and Control
2) Analysis and Design
3) Implementation and Execution
4) Evaluating exit criteria and Reporting
5) Test Closure activities
1) Planning and Control:
Test planning has the following major tasks:
i. To determine the scope and risks and identify the objectives of testing.
ii. To determine the test approach.
iii. To implement the test policy and/or the test strategy. (A test strategy is an
outline that describes the testing portion of the software development cycle. It is
created to inform project managers, testers, and developers about some key issues
of the testing process. This includes the testing objectives, the method of testing,
the total time and resources required for the project, and the testing environments.)
iv. To determine the required test resources like people, test environments, PCs,
etc.
v. To schedule test analysis and design tasks, test implementation, execution and
evaluation.
vi. To determine the exit criteria, we need to set criteria such as coverage
criteria. (Coverage criteria specify the percentage of statements in the software
that must be executed during testing. This helps us track whether we are completing
test activities correctly. They show us which tasks and checks we must complete
for a particular level of testing before we can say that testing is finished. A small
coverage-measurement sketch follows this list.)
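A rough sketch of measuring statement coverage in Python, assuming the third-party coverage package (coverage.py) is installed; the function under test and the threshold are hypothetical.

```python
import coverage

def classify(n):
    # Hypothetical code under test.
    if n < 0:
        return "negative"
    return "non-negative"

cov = coverage.Coverage()
cov.start()
classify(5)          # only the non-negative branch is executed here
cov.stop()
cov.save()

percent = cov.report()   # prints a report and returns total coverage (%)
EXIT_THRESHOLD = 80.0    # hypothetical exit criterion from test planning
print("exit criterion met" if percent >= EXIT_THRESHOLD else "keep testing")
```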
Test control has the following major tasks:
i. To measure and analyze the results of reviews and testing.
ii. To monitor and document progress, test coverage and exit criteria.
iii. To provide information on testing.
iv. To initiate corrective actions.
v. To make decisions.
2) Analysis and Design:
Test analysis and test design have the following major tasks:
i. To review the test basis. (The test basis is the information we need in order to
start the test analysis and create our own test cases. Basically, it is the
documentation on which test cases are based, such as requirements, design
specifications, product risk analysis, architecture, and interfaces. We can use the
test basis documents to understand what the system should do once built.)
ii. To identify test conditions.
iii. To design the tests.
iv. To evaluate testability of the requirements and system.
v. To design the test environment set-up and identify any required infrastructure
and tools.
3) Implementation and Execution:
During test implementation and execution, we turn the test conditions into test
cases and procedures and other testware such as scripts for automation, the test
environment, and any other test infrastructure. (A test case is a set of conditions
under which a tester determines whether an application is working correctly or
not.)
(Testware is a term for all the utilities that serve in combination for testing a
software, such as scripts, the test environment, and any other test infrastructure
kept for later reuse.)
Test implementation has the following major tasks:
i. To develop and prioritize our test cases by using techniques, and to create test
data for those tests. (In order to test a software application, you need to enter
some data for testing most of the features. Any such specifically identified data
used in tests is known as test data; see the sketch after this list.)
We also write some instructions for carrying out the tests; these are known as test
procedures.
We may also need to automate some tests using a test harness and automated test
scripts. (A test harness is a collection of software and test data for testing a
program unit by running it under different conditions and monitoring its behavior
and outputs.)
ii. To create test suites from the test cases for efficient test execution.
(A test suite is a collection of test cases that are used to test a software program to
show that it has some specified set of behaviours. A test suite often contains
detailed instructions and information, for each collection of test cases, on the system
configuration to be used during testing. Test suites are used to group similar test
cases together.)
iii. To implement and verify the environment.
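Putting these pieces together, here is a minimal Python sketch with hypothetical names: TEST_DATA is the specifically identified test data, the test case is driven by it, and build_suite groups the test cases into a unittest suite for efficient execution.

```python
import unittest

def is_valid_username(name):
    # Hypothetical unit under test: 3-12 alphanumeric characters.
    return name.isalnum() and 3 <= len(name) <= 12

# Test data: specifically identified inputs paired with expected results.
TEST_DATA = [
    ("bob", True),
    ("ab", False),        # below the minimum length
    ("a" * 13, False),    # above the maximum length
    ("bad name", False),  # contains a space
]

class UsernameTests(unittest.TestCase):
    def test_against_test_data(self):
        for name, expected in TEST_DATA:
            with self.subTest(name=name):
                self.assertEqual(is_valid_username(name), expected)

def build_suite():
    # A test suite groups similar test cases for one execution run.
    suite = unittest.TestSuite()
    suite.addTest(UsernameTests("test_against_test_data"))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(build_suite())
```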
Test execution has the following major tasks:
i. To execute test suites and individual test cases following the test procedures.
ii. To re-execute the tests that previously failed in order to confirm a fix. This is
known as confirmation testing or re-testing.
iii. To log the outcome of the test execution and record the identities and versions
of the software under test. The test log is used for the audit trail. (A test log
records which test cases were executed, in what order, who executed them, and
the status of each test case (pass/fail). These records are documented and called
the test log; a small sketch follows this list.)
iv. To compare actual results with expected results.
v. Where there are differences between actual and expected results, to report the
discrepancies as incidents.
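A small sketch of such a test log in Python; the test case IDs, tester names, statuses, and version string are all hypothetical.

```python
import csv
from datetime import datetime, timezone

# Hypothetical execution results: (test case id, executed by, status).
EXECUTIONS = [
    ("TC-001", "tester1", "pass"),
    ("TC-002", "tester1", "fail"),
    ("TC-003", "tester2", "pass"),
]

SOFTWARE_VERSION = "1.4.2"  # assumed version of the software under test

# Record who executed which test case, in what order, and with what status.
with open("test_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["order", "test_case", "executed_by", "status",
                     "software_version", "executed_at_utc"])
    for order, (case_id, tester, status) in enumerate(EXECUTIONS, start=1):
        writer.writerow([order, case_id, tester, status,
                         SOFTWARE_VERSION, datetime.now(timezone.utc).isoformat()])
```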
4) Evaluating Exit criteria and Reporting:
Based on the risk assessment of the project, we set the criteria for each test
level against which we measure whether we have done "enough testing". These
criteria vary from project to project and are known as exit criteria.
Exit criteria come into the picture when:
The maximum number of test cases has been executed with a certain pass percentage.
The bug rate falls below a certain level.
The deadlines have been reached.
Evaluating exit criteria has the following major tasks:
i. To check the test logs against the exit criteria specified in test planning.
ii. To assess if more tests are needed or if the exit criteria specified should be
changed.
iii. To write a test summary report for stakeholders.
5) Test Closure activities:
Test closure activities are done when the software is delivered. Testing can also
be closed for other reasons, such as:
When all the information needed for testing has been gathered.
When a project is cancelled.
When some target is achieved.
When a maintenance release or update is done.
Test closure activities have the following major tasks:
i. To check which planned deliverables are actually delivered and to ensure that all
incident reports have been resolved.
ii. To finalize and archive testware, such as scripts and test environments, for later
reuse.
iii. To hand over the testware to the maintenance organization, which will support
the software.
iv. To evaluate how the testing went and learn lessons for future releases and
projects.
What is the Psychology of testing?
The comparison of the mindset of the tester and the developer.
The balance between self-testing and independent testing.
There should be clear and courteous communication and feedback on defects
between tester and developer.
Comparison of the mindset of the tester and developer:
Testing and reviewing an application is different from analysing and developing it.
By this we mean that while building or developing an application, we work
positively to solve problems during the development process and to make the
product meet the user specification. However, while testing or reviewing a product,
we look for defects or failures in the product. Thus, building software requires a
different mindset from testing software.
The balance between self-testing and independent testing:
The comparison of the mindsets of the tester and the developer made above is just
a comparison of two different perspectives. It does not mean that the tester cannot
be the programmer, or that the programmer cannot be the tester, although they
often are separate roles. In fact, programmers are testers too: they always test the
components they build. While testing their own code they find many problems, so
programmers, architects, and developers always test their own code before giving
it to anyone. However, we all know that it is difficult to find our own mistakes. So,
programmers, architects, and business analysts depend on others to help test their
work. This other person might be another developer from the same team, or a
testing specialist or professional tester. Giving applications to testing specialists or
professional testers allows an independent test of the system. This degree of
independence avoids author bias and is often more effective at finding defects and
failures.
There are several levels of independence in software testing, listed here from the
lowest level of independence to the highest:
i. Tests by the person who wrote the item.
ii. Tests by another person within the same team, such as another programmer.
iii. Tests by a person from a different group, such as an independent test
team.
iv. Tests by a person from a different organization or company, such as outsourced
testing or certification by an external body.
Clear and courteous communication and feedback on defects between tester
and developer:
We all make mistakes, and we sometimes get annoyed, upset, or depressed when
someone points them out. So, as testers, when we run a test that is good from our
viewpoint because it finds defects and failures in the software, we need to be very
careful about how we react to and report those defects and failures to the
programmers. We are pleased because we found a good bug, but how will the
requirements analyst, the designer, the developer, the project manager, and the
customer react? The people who build the application may react defensively and
take the reported defect as personal criticism. The project manager may be
annoyed with everyone for holding up the project. The customer may lose
confidence in the product because he can see defects.
Because testing can be seen as a destructive activity, we need to take care to
report our defects and failures as objectively and politely as possible.