
Beginners Guide To Software Testing – I

Who will benefit?

Beginners.

For those of you who wish to mould your theoretical software engineering knowledge into a practical approach to working in the real world, and those who wish to take up Software Testing as a profession.

Developers!

This is an era where you need to be an “all-rounder”. It is advantageous for developers to possess testing capabilities so they can test the application beforehand. This will help reduce the overhead on the testing team.

Already a Tester!

You can refresh all your testing basics and techniques and gear up for certifications in Software Testing.

An earnest suggestion: No matter which profession you choose, it is advisable that you possess the following skills:

- Good communication skills – oratory and writing
- Fluency in English
- Good typing skills

By the time you finish reading this article, you will be aware of the techniques and processes that improve your efficiency, skills and confidence to jump-start into the field of Software Testing.

Table Of Contents

1. OVERVIEW
1.1. THE BIG PICTURE
1.2. WHAT IS SOFTWARE? WHY SHOULD IT BE TESTED?
1.3. WHAT IS QUALITY? HOW IMPORTANT IS IT?
1.4. WHAT EXACTLY DOES A SOFTWARE TESTER DO?
1.5. WHAT MAKES A GOOD TESTER?
1.6. GUIDELINES FOR NEW TESTERS
2. INTRODUCTION
2.1. SOFTWARE LIFE CYCLE
2.1.1. VARIOUS LIFE CYCLE MODELS
2.2. SOFTWARE TESTING LIFE CYCLE
2.3. WHAT IS A BUG? WHY DO BUGS OCCUR?
2.4. BUG LIFE CYCLE
2.5. COST OF FIXING BUGS
2.6. WHEN CAN TESTING BE STOPPED/REDUCED?
3. SOFTWARE TESTING LEVELS, TYPES, TERMS AND DEFINITIONS


3.1. TESTING LEVELS AND TYPES
3.2. TESTING TERMS
4. MOST COMMON SOFTWARE ERRORS
5. THE TEST PLANNING PROCESS
5.1. WHAT IS A TEST STRATEGY? WHAT ARE ITS COMPONENTS?
5.2. TEST PLANNING – SAMPLE STRUCTURE
5.3. MAJOR TEST PLANNING TASKS
6. TEST CASE DEVELOPMENT
6.1. GENERAL GUIDELINES
6.2. TEST CASE – SAMPLE STRUCTURE
6.3. TEST CASE DESIGN TECHNIQUES
6.3.1. SPECIFICATION DERIVED TESTS
6.3.2. EQUIVALENCE PARTITIONING
6.3.3. BOUNDARY VALUE ANALYSIS
6.3.4. STATE TRANSITION TESTING
6.3.5. BRANCH TESTING
6.3.6. CONDITION TESTING
6.3.7. DATA DEFINITION – USE TESTING
6.3.8. INTERNAL BOUNDARY VALUE TESTING
6.3.9. ERROR GUESSING
6.4. USE CASES
7. DEFECT TRACKING
7.1. WHAT IS A DEFECT?
7.2. WHAT ARE THE DEFECT CATEGORIES?
7.3. HOW IS A DEFECT REPORTED?
7.4. HOW DESCRIPTIVE SHOULD YOUR BUG/DEFECT REPORT BE?
7.5. WHAT DOES THE TESTER DO WHEN THE DEFECT IS FIXED?
8. TYPES OF TEST REPORTS
8.1. TEST ITEM TRANSMITTAL REPORT
8.2. TEST LOG
8.3. TEST INCIDENT REPORT
8.4. TEST SUMMARY REPORT
9. SOFTWARE TEST AUTOMATION
9.1. FACTORS DETERMINING TEST AUTOMATION
9.2. APPROACHES TO AUTOMATION
9.3. CHOOSING THE RIGHT TOOL
9.4. TOP TEN CHALLENGES OF SOFTWARE TEST AUTOMATION
10. INTRODUCTION TO SOFTWARE STANDARDS
10.1. CMM
10.2. SIX SIGMA
10.3. ISO
11. SOFTWARE TESTING CERTIFICATIONS
12. FACTS ABOUT SOFTWARE ENGINEERING
13. REFERENCES
14. INTERNET LINKS

1. Overview

The Big Picture

All software problems can be termed bugs. A software bug usually occurs when the software does not do what it is intended to do, or does something that it is not intended to do. Flaws in specifications, design, code or other reasons can cause these bugs. Identifying and fixing bugs in the early stages of the software life cycle is very important, as the cost of fixing bugs grows over time. So, the goal of a software tester is to find bugs, find them as early as possible, and make sure they are fixed.

Testing is context-based and risk-driven. It requires a methodical and disciplined approach to finding bugs. A good software tester needs to build credibility and possess the attitude to be explorative, troubleshooting, relentless, creative, diplomatic and persuasive.

Contrary to the perception that testing starts only after the coding phase is complete, it actually begins even before the first line of code is written. In the life cycle of a conventional software product, testing begins at the stage when the specifications are written, i.e. with testing the product specifications, or product spec. Finding bugs at this stage can save huge amounts of time and money.

Once the specifications are well understood, you are required to design and execute the test cases. Selecting an appropriate technique that reduces the number of tests while still covering a feature is one of the most important considerations when designing these test cases. Test cases need to be designed to cover all aspects of the software, i.e. security, database, functionality (critical and general) and the user interface. Bugs surface when the test cases are executed.

As a tester you might have to perform testing under different circumstances: the application could be in its initial stages or undergoing rapid changes, you may have less than enough time to test, or the product might be developed using a life cycle model that does not support much formal testing or re-testing. Further, testing on different operating systems, browsers and configurations has to be taken care of.

Reporting a bug may be the most important, and sometimes the most difficult, task that you as a software tester will perform. By using various tools and clearly communicating with the developer, you can ensure that the bugs you find are fixed.

Using automated tools to execute tests, run scripts and track bugs improves the efficiency and effectiveness of your tests. Also, keeping pace with the latest developments in the field will augment your career as a software test engineer.

What is software? Why should it be tested?

Software is a series of instructions for the computer that performs a particular task, called a program. The two major categories of software are system software and application software. System software is made up of control programs. Application software is any program that processes data for the user (spreadsheets, word processors, payroll, etc.).

A software product should only be released after it has gone through a proper process of development, testing and bug fixing. Testing looks at areas such as performance, stability and error handling by setting up test scenarios under controlled conditions and assessing the results. This is exactly why any software has to be tested. It is important to note that software is mainly tested to see that it meets the customers’ needs and that it conforms to standards. It is a usual norm that software is considered of good quality if it meets the user requirements.

What is Quality? How important is it?

Quality can briefly be defined as “a degree of excellence”. High quality software usually conforms to the user requirements. A customer’s idea of quality may cover a breadth of features – conformance to specifications, good performance on the target platform(s)/configurations, complete fulfilment of operational requirements (even if not specified!), compatibility with all the end-user equipment, and no negative impact on the existing end-user base at introduction time.

Quality software saves a good amount of time and money. Because the software will have fewer defects, time is saved during the testing and maintenance phases. Greater reliability contributes to an immeasurable increase in customer satisfaction as well as lower maintenance costs. Because maintenance represents a large portion of all software costs, the overall cost of the project will most likely be lower than that of similar projects.

Following are two cases that demonstrate the importance of software quality:

Ariane 5 crash – June 4, 1996
- The maiden flight of the European Ariane 5 launcher crashed about 40 seconds after takeoff
- The loss was about half a billion dollars
- The explosion was the result of a software error
- An uncaught exception due to a floating-point error: a conversion from a 64-bit floating-point number to a 16-bit signed integer was applied to a larger than expected number
- The module was re-used from Ariane 4 without proper testing
- The error was not supposed to happen with Ariane 4
- There was no exception handler

Mars Climate Orbiter crash – September 23, 1999
- The Mars Climate Orbiter disappeared as it began to orbit Mars
- It cost about US$125 million
- The failure was due to an error in a transfer of information between a team in Colorado and a team in California
- One team used English units (e.g., inches, feet and pounds) while the other used metric units for a key spacecraft operation

What exactly does a software tester do?

Apart from exposing faults (“bugs”) in a software product and confirming that the program meets the program specification, as a test engineer you need to create test cases, procedures and scripts, and generate data. You execute test procedures and scripts, analyze standards and evaluate the results of system/integration/regression testing. You also…

· Speed up the development process by identifying bugs at an early stage (e.g. the specifications stage)
· Reduce the organization’s risk of legal liability
· Maximize the value of the software
· Assure a successful launch of the product; save money, time and the reputation of the company by discovering bugs and design flaws at an early stage, before failures occur in production or in the field
· Promote continual improvement

What makes a good tester?

As software engineering is now considered a technical engineering profession, it is important that software test engineers possess certain traits, with a relentless attitude, to make them stand out. Here are a few.

· Know the technology. Knowledge of the technology in which the application is developed is an added advantage for any tester. It helps design better and more powerful test cases based on the weaknesses or flaws of the technology. Good testers know what the technology supports and what it doesn’t, so concentrating on these lines will help them break the application quickly.
· Perfectionist and a realist. Being a perfectionist helps testers spot the problem, and being a realist helps them know at the end of the day which problems are really important. You will know which ones require a fix and which ones don’t.


· Tactful, diplomatic and persuasive. Good software testers are tactful and know how to break the news to the developers. They are diplomatic while convincing the developers of the bugs, persuade them when necessary, and get their bug(s) fixed. It is important to be critical of the issue, and not of the person who developed the application, so that the developer is not taken aback by the findings.
· An explorer. A bit of creativity and an attitude for taking risks help testers venture into unknown situations and find bugs that would otherwise be overlooked.
· A troubleshooter. Troubleshooting and figuring out why something doesn’t work helps testers be confident and clear in communicating defects to the developers.
· Possesses people skills and tenacity. Testers can face a lot of resistance from programmers. Being socially smart and diplomatic doesn’t mean being indecisive. The best testers are both socially adept and tenacious where it matters.
· Organized. The best testers realize very well that they too can make mistakes and don’t take chances. They are well organized: they keep checklists, use files, facts and figures to support their findings, which can serve as evidence, and double-check their findings.
· Objective and accurate. They are very objective and know what they report, and so convey impartial and meaningful information that keeps politics and emotions out of the message. Reporting inaccurate information means losing credibility. Good testers make sure their findings are accurate and reproducible.
· Defects are valuable. Good testers learn from them. Each defect is an opportunity to learn and improve. A defect found early costs substantially less than one found at a later stage. Defects can cause serious problems if not managed properly. Learning from defects helps prevent future problems, track improvements, and improve prediction and estimation.

Guidelines for new testers

· Testing can’t show that bugs don’t exist. An important reason for testing is to prevent defects. You can perform your tests, find and report bugs, but at no point can you guarantee that there are no bugs.
· It is impossible to test a program completely. Unfortunately this is not possible even with the simplest program, because the number of inputs is very large, the number of outputs is very large, the number of paths through the software is very large, and the specification is subject to frequent changes.
· You can’t guarantee quality. As a software tester, you cannot test everything and are not responsible for the quality of the product. The main way a tester can fail is to fail to report accurately a defect you have observed. It is important to remember that testers have little control over quality.
· Target environment and intended end user. Anticipating and testing the application in the environment the user is expected to use it in is one of the major factors that should be considered. Also, considering whether the application is a single-user or a multi-user system is important for demonstrating the ability for immediate readiness when necessary. The error case of Disney’s Lion King illustrates this. Disney released its first multimedia CD-ROM game for children, The Lion King Animated Storybook. It was highly promoted and sales were huge. Soon there were reports that buyers were unable to get the software to work. It worked on a few systems – likely the ones the Disney programmers used to create the game – but not on the most common systems that the general public used.
· No application is 100% bug free. It is more reasonable to recognize that there are priorities, which may leave some less critical problems unsolved or unidentified. A simple case is the Intel Pentium bug. Enter the following equation into your PC’s calculator: (4195835 / 3145727) * 3145727 – 4195835. If the answer is zero, your computer is just fine. If you get anything else, you have an old Intel Pentium CPU with a floating-point division bug (see the sketch after this list).
· Be the customer. Try to use the system as a lay user. To get a glimpse of this, get a person who has no idea of the application to use it for a while, and you will be amazed at the number of problems the person comes across. As you can see, there is no procedure involved. Doing this could actually cause the system to encounter an array of unexpected tests – repetition, stress, load, race etc.
· Build your credibility. Credibility, like quality, includes reliability, knowledge, consistency, reputation, trust, attitude and attention to detail. It is not instant; it is built over time, and it gives voice to the testers in the organization. Your keys to building credibility – identify your strengths and weaknesses, build good relations, demonstrate competency, be willing to admit mistakes, re-assess and adjust.
· Test what you observe. It is very important that you test what you can observe and have access to. Writing creative test cases can help only when you have the opportunity to observe the results. So, assume nothing.
· Not all bugs you find will be fixed. Deciding which bugs will be fixed and which won’t is a risk-based decision. Several reasons why your bug might not be fixed: there is not enough time, the bug is dismissed for a new feature, fixing it might be very risky, or it may not be worth it because it occurs infrequently or has a workaround by which the user can prevent or avoid the bug. Making a wrong decision can be disastrous.
· Review competitive products. Gaining a good insight into various products of the same kind, and getting to know their functionality and general behavior, will help you design different test cases and understand the strengths and weaknesses of your application. This will also enable you to add value and suggest new features and enhancements for your product.
· Follow standards and processes. As a tester, you need to conform to the standards and guidelines set by the organization. These standards pertain to reporting hierarchy, coding, documentation, testing, reporting bugs, using automated tools etc.
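
As a quick illustration, the Pentium check mentioned above can be run as a minimal Python sketch; on a correct CPU the expression evaluates to (approximately) zero:

# The classic Pentium FDIV check described above. On a correct CPU/FPU
# the result is zero, up to floating-point rounding; the flawed Pentium
# returned a visibly nonzero value.
x, y = 4195835, 3145727
print((x / y) * y - x)  # expect 0.0, or a negligible rounding error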

2. Introduction

Software Life Cycle

The software life cycle typically includes the following phases: requirements analysis, design, coding, testing, installation and maintenance. In between, there can be a requirement to provide operations and support activities for the product.

Requirements Analysis. Software organizations provide solutions to customer requirements by developing appropriate software that best suits their specifications. Thus, the life of software starts with the origin of requirements. Very often, these requirements are vague, emergent and always subject to change. Analysis is performed to: conduct an in-depth analysis of the proposed project, evaluate technical feasibility, discover how to partition the system, identify which areas of the requirements need to be elaborated with the customer, identify the impact of changes to the requirements, and identify which requirements should be allocated to which components.

Design and Specifications. The outcome of requirements analysis is the requirements specification. Using this, the overall design for the intended software is developed. Activities in this phase – Perform Architectural Design for the software, Design Database (if applicable), Design User Interfaces, Select or Develop Algorithms (if applicable), Perform Detailed Design.

Coding. The development process tends to run iteratively through these phases rather than linearly; several models (spiral, waterfall etc.) have been proposed to describe this process.

Activities in this phase – Create Test Data, Create Source, Generate Object Code, Create Operating Documentation, Plan Integration, Perform Integration.

Testing. The process of using the developed system with the intent to find errors. Defects/flaws/bugs found at this stage are sent back to the developer for a fix and have to be re-tested. This phase iterates until the bugs are fixed and the software meets the requirements.


Activities in this phase – Plan Verification and Validation, Execute Verification and Validation Tasks, Collect and Analyze Metric Data, Plan Testing, Develop Test Requirements, Execute Tests.

Installation. The developed and tested software finally needs to be installed at the client’s site. Careful planning has to be done to avoid problems for the user after installation.

Activities in this phase – Plan Installation, Distribution of Software, Installation of Software, Accept Software in Operational Environment.

Operation and Support. Support activities are usually performed by the organization that developed the software. Both the parties usually decide on these activities before the system is developed.

Activities in this phase – Operate the System, Provide Technical Assistance and Consulting, Maintain Support Request Log.

Maintenance. The process does not stop once the software is completely implemented and installed at the user’s site; this phase undertakes the development of new features, enhancements etc.

Activities in this phase – Reapplying Software Life Cycle.

Various Life Cycle Models

The way you approach a particular application for testing greatly depends on the life cycle model it follows. This is because each life cycle model places emphasis on different aspects of the software, i.e. certain models provide good scope and time for testing whereas others don’t. So, the number of test cases developed, the features covered, and the time spent on each issue depend on the life cycle model the application follows.

No matter what the life cycle model is, every application undergoes the same phases described above as its life cycle.

Following are a few software life cycle models, their advantages and disadvantages.

Waterfall Model

Strengths:
• Emphasizes completion of one phase before moving on
• Emphasizes early planning, customer input, and design
• Emphasizes testing as an integral part of the life cycle
• Provides quality gates at each life cycle phase

Weaknesses:
• Depends on capturing and freezing requirements early in the life cycle
• Depends on separating requirements from design
• Feedback is only from the testing phase to any previous stage
• Not feasible in some organizations
• Emphasizes products rather than processes


Prototyping Model

Strengths:
• Requirements can be set earlier and more reliably
• Requirements can be communicated more clearly and completely between developers and clients
• Requirements and design options can be investigated quickly and at low cost
• More requirements and design faults are caught early

Weaknesses:
• Requires a prototyping tool and expertise in using it – a cost for the development organization
• The prototype may become the production system

Spiral Model

Strengths:
• Promotes reuse of existing software in early stages of development
• Allows quality objectives to be formulated during development
• Provides preparation for the eventual evolution of the software product
• Eliminates errors and unattractive alternatives early
• Balances resource expenditure
• Doesn’t involve separate approaches for software development and software maintenance
• Provides a viable framework for integrated hardware-software system development

Weaknesses:
• The process is usually associated with Rapid Application Development, which is very difficult in practice
• The process is more difficult to manage and needs a very different approach from the waterfall model (the waterfall model has management techniques, such as Gantt charts, to assess progress)

Software Testing Life Cycle

The Software Testing Life Cycle consists of seven (generic) phases: 1) Planning, 2) Analysis, 3) Design, 4) Construction, 5) Testing Cycles, 6) Final Testing and Implementation and 7) Post Implementation. Each phase in the life cycle is described below with its respective activities.

Planning. Plan the high-level test plan and the QA plan (quality goals); identify reporting procedures, problem classification, acceptance criteria, databases for testing, measurement criteria (defect quantities/severity level and defect origin) and project metrics; and finally begin the schedule for project testing. Also, plan to maintain all test cases (manual or automated) in a database.

Analysis. Involves activities that develop functional validation based on business requirements (writing test cases based on these details), develop the test case format (time estimates and priority assignments), develop test cycles (matrices and timelines), identify test cases to be automated (if applicable), define the areas of stress and performance testing, plan the test cycles required for the project and regression testing, define procedures for data maintenance (backup, restore, validation), and review documentation.

Design. Activities in the design phase – revise the test plan based on changes, revise test cycle matrices and timelines, verify that the test plan and cases are in a database or repository, continue to write test cases and add new ones based on changes, develop risk assessment criteria, formalize details for stress and performance testing, finalize test cycles (number of test cases per cycle based on time estimates per test case and priority), finalize the test plan, and estimate resources to support development in unit testing.

Construction (Unit Testing Phase). Complete all plans, complete test cycle matrices and timelines, complete all (manual) test cases, begin stress and performance testing, test the automated testing system and fix bugs, support development in unit testing, and run the QA acceptance test suite to certify that the software is ready to turn over to QA.

Test Cycle(s) / Bug Fixes (Re-Testing/System Testing Phase). Run the test cases (front end and back end), report bugs, verify fixes, and revise/add test cases as required.

Final Testing and Implementation (Code Freeze Phase). Execute all front end test cases – manual and automated; execute all back end test cases – manual and automated; execute all stress and performance tests; provide ongoing defect tracking metrics; provide ongoing complexity and design metrics; update estimates for test cases and test plans; document test cycles and regression testing, and update accordingly.

Post Implementation. A post implementation evaluation meeting can be conducted to review the entire project. Activities in this phase – prepare the final defect report and associated metrics; identify strategies to prevent similar problems in future projects; for the automation team – 1) review test cases to evaluate other cases to be automated for regression testing, 2) clean up automated test cases and variables, and 3) review the process of integrating results from automated testing with results from manual testing.

What is a bug? Why do bugs occur?

A software bug may be defined as a coding error that causes an unexpected defect, fault, flaw, or imperfection in a computer program. In other words, if a program does not perform as intended, it is most likely a bug.

There are bugs in software due to unclear or constantly changing requirements, software complexity, programming errors, timelines, errors in bug tracking, communication gaps, documentation errors, deviation from standards etc.

· Unclear software requirements are due to miscommunication as to what the software should or shouldn’t do. On many occasions, the customer may not be completely clear as to how the product should ultimately function. This is especially true when the software is developed for a completely new product. Such cases usually lead to a lot of misinterpretations on either or both sides.

· Constantly changing software requirements cause a lot of confusion and pressure on both the development and testing teams. Often, a new feature added or an existing feature removed can be linked to other modules or components in the software. Overlooking such issues causes bugs.

· Also, fixing a bug in one part/component of the software might give rise to another in a different or the same component. Lack of foresight in anticipating such issues can cause serious problems and an increase in the bug count. This is one of the major reasons bugs occur, since developers are very often subject to pressure related to timelines, frequently changing requirements, an increase in the number of bugs etc.

· Designing and re-designing, UI interfaces, integration of modules, and database management all add to the complexity of the software and the system as a whole.


· Fundamental problems with software design and architecture can cause problems in programming. Developed software is prone to error, as programmers can make mistakes too. As a tester you can check for data reference/declaration errors, control flow errors, parameter errors, input/output errors etc.

· Rescheduling of resources, re-doing or discarding already completed work, and changes in hardware/software requirements can affect the software too. Assigning a new developer to the project midway can cause bugs, for example if proper coding standards have not been followed, code documentation is improper, or knowledge transfer is ineffective. Discarding a portion of the existing code might just leave its trail behind in other parts of the software; overlooking or not eliminating such code can cause bugs. Serious bugs can especially occur in larger projects, as it gets tougher to identify the problem area.

· Programmers usually tend to rush as the deadline approaches. This is when most of the bugs occur. It is possible that you will be able to spot bugs of all types and severities.

· The complexity of keeping track of all the bugs can again cause bugs by itself. This gets harder when a bug has a very complex life cycle, i.e. when the number of times it has been closed, re-opened, not accepted, ignored etc. goes on increasing.

Bug Life Cycle

The Bug Life Cycle starts when an unintentional software bug/behavior is found and ends when the assigned developer fixes the bug. A bug, when found, should be communicated and assigned to a developer who can fix it. Once fixed, the problem area should be re-tested. Also, confirmation should be made to verify that the fix did not create problems elsewhere. In most cases, the life cycle gets very complicated and difficult to track, making it imperative to have a bug/defect tracking system in place.

See Chapter 7 – Defect Tracking

Following are the different phases of a Bug Life Cycle:

Open: A bug is in the Open state when a tester identifies a problem area.

Accepted: The bug is then assigned to a developer for a fix. The developer accepts it if it is valid.

Not Accepted/Won’t fix: If the developer considers the bug low level or does not accept it as a bug, it is pushed into the Not Accepted/Won’t fix state.

Such bugs are assigned to the project manager, who decides if the bug needs a fix. If it does, the manager assigns it back to the developer; if it doesn’t, it is assigned back to the tester, who will have to close the bug.

Pending: A bug accepted by the developer may not be fixed immediately. In such cases, it can be put in the Pending state.

Fixed: The programmer fixes the bug and resolves it as Fixed.

Close: The fixed bug is assigned to the tester, who re-tests it and puts it in the Close state.


Re-Open: Fixed bugs can be re-opened by the testers if the fix produces problems elsewhere.
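
For illustration only, the phases above can be sketched as a small state machine in Python. The state names follow the list above, but the set of allowed transitions is an assumption about which moves are typical, not a prescribed workflow:

# Illustrative sketch of the bug life cycle as a state machine.
# The transition table below is an assumption based on the phases
# described above; real trackers differ in the details.
ALLOWED = {
    "Open":         {"Accepted", "Not Accepted"},
    "Not Accepted": {"Accepted", "Close"},  # project manager decides
    "Accepted":     {"Pending", "Fixed"},
    "Pending":      {"Fixed"},
    "Fixed":        {"Close", "Re-Open"},   # tester verifies the fix
    "Re-Open":      {"Accepted"},
    "Close":        set(),
}

def transition(state: str, new_state: str) -> str:
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

# Typical happy path: Open -> Accepted -> Fixed -> Close
state = "Open"
for nxt in ("Accepted", "Fixed", "Close"):
    state = transition(state, nxt)
print(state)  # Close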

Cost of fixing bugs

Costs grow by orders of magnitude; they increase roughly tenfold with each stage that passes. A bug found and fixed during the early stages – the requirements or product spec stage – can be fixed by a brief interaction with those concerned and might cost next to nothing.

During coding, a swiftly spotted mistake may take only a little effort to fix. During integration testing, it costs the paperwork of a bug report and a formally documented fix, as well as the delay and expense of a re-test.

During system testing it costs even more time and may delay delivery. Finally, during operations it may cause anything from a nuisance to a system failure, possibly with catastrophic consequences in systems such as an aircraft or an emergency service.
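
To illustrate the tenfold growth with purely hypothetical numbers: a defect that costs $1 to fix at the requirements stage might cost roughly $10 during coding, $100 during integration testing, $1,000 during system testing, and $10,000 once the product is in operation.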

When can testing be stopped/reduced?

It is difficult to determine when exactly to stop testing. Here are a few common factors that help you decide when you can stop or reduce testing:

· Deadlines (release deadlines, testing deadlines, etc.)
· Test cases completed with a certain percentage passed
· Test budget depleted
· Coverage of code/functionality/requirements reaches a specified point
· Bug rate falls below a certain level
· Beta or alpha testing period ends

3. Software Testing Levels, Types, Terms and Definitions

Testing Levels and Types

There are basically three levels of testing i.e. Unit Testing, Integration Testing and System Testing. Various types of testing come under these levels.

Unit Testing: To verify a single program or a section of a single program.
Integration Testing: To verify interaction between system components. Prerequisite: unit testing completed on all components that compose a system.
System Testing: To verify and validate behaviors of the entire system against the original system objectives.

Software testing is a process that identifies the correctness, completeness, and quality of software.

Following is a list of various types of software testing and their definitions, in a random order:

· Formal Testing: Performed by test engineers.
· Informal Testing: Performed by the developers.
· Manual Testing: That part of software testing that requires human input, analysis, or evaluation.


· Automated Testing: Software testing that utilizes a variety of tools to automate the testing process. Automated testing still requires a skilled quality assurance professional with knowledge of the automation tools and the software being tested to set up the test cases.
· Black box Testing: Testing software without any knowledge of the back-end of the system, or of the structure or language of the module being tested. Black box test cases are written from a definitive source document, such as a specification or requirements document.
· White box Testing: Testing in which the software tester has knowledge of the back-end, structure and language of the software, or at least its purpose.
· Unit Testing: The process of testing a particular compiled program, i.e., a window, a report, an interface, etc., independently as a stand-alone component/program. The types and degrees of unit tests can vary among modified and newly created programs. Unit testing is mostly performed by the programmers, who are also responsible for the creation of the necessary unit test data (see the sketch after this list).
· Incremental Testing: Partial testing of an incomplete product. The goal of incremental testing is to provide early feedback to software developers.
· System Testing: A form of black box testing. The purpose of system testing is to validate an application’s accuracy and completeness in performing the functions as designed.
· Integration Testing: Testing two or more modules or functions together with the intent of finding interface defects between the modules/functions.
· System Integration Testing: Testing of software components that have been distributed across multiple platforms (e.g., client, web server, application server, and database server) to produce failures caused by system integration defects (i.e. defects involving distribution and back-office integration).
· Functional Testing: Verifying that a module functions as stated in the specification and establishing confidence that a program does what it is supposed to do.
· End-to-end Testing: Similar to system testing – testing a complete application in a situation that mimics real world use, such as interacting with a database, using network communication, or interacting with other hardware, applications, or systems.
· Sanity Testing: Performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes testing basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.
· Regression Testing: Testing with the intent of determining if bug fixes have been successful and have not created any new problems.
· Acceptance Testing: Testing the system with the intent of confirming readiness of the product and customer acceptance. Also known as User Acceptance Testing.
· Ad hoc Testing: Testing without a formal test plan or outside of a test plan. With some projects this type of testing is carried out as an addition to formal testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed – usually done by skilled testers. Sometimes ad hoc testing is referred to as exploratory testing.
· Configuration Testing: Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations as well as on different operating systems and software.
· Load Testing: Testing with the intent of determining how well the product handles competition for system resources. The competition may come in the form of network traffic, CPU utilization or memory allocation.
· Stress Testing: Testing done to evaluate the behavior when the system is pushed beyond the breaking point. The goal is to expose the weak links and to determine whether the system manages to recover gracefully.
· Performance Testing: Testing with the intent of determining how efficiently a product handles a variety of events. Automated test tools geared specifically to test and fine-tune performance are used most often for this type of testing.
· Usability Testing: Testing for ‘user-friendliness’. A way to evaluate and measure how users interact with a software product or site. Tasks are given to users and observations are made.
· Installation Testing: Testing with the intent of determining whether the product is compatible with a variety of platforms and how easily it installs.
· Recovery/Error Testing: Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
· Security Testing: Testing of database and network software in order to keep company data and resources secure from mistaken/accidental users, hackers, and other malevolent attackers.
· Penetration Testing: Testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.
· Compatibility Testing: Testing used to determine whether other system software components such as browsers, utilities, and competing software will conflict with the software being tested.
· Exploratory Testing: Any testing in which the tester dynamically changes what they’re doing for test execution, based on information they learn as they’re executing their tests.
· Comparison Testing: Testing that compares software weaknesses and strengths to those of competitors’ products.
· Alpha Testing: Testing after the code is mostly complete or contains most of the functionality, and prior to reaching customers. Sometimes a selected group of users is involved. More often this testing will be performed in-house or by an outside testing firm in close cooperation with the software engineering department.
· Beta Testing: Testing after the product is code complete. Betas are often widely distributed, or even distributed to the public at large.
· Gamma Testing: Testing of software that has all the required features but did not go through all the in-house quality checks.
· Mutation Testing: A method of determining test thoroughness by measuring the extent to which the test cases can discriminate the program from slight variants of the program.
· Independent Verification and Validation (IV&V): The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn’t fail in an unacceptable manner. The individual or group doing this work is not part of the group or organization that developed the software.
· Pilot Testing: Testing that involves the users just before the actual release, to ensure that users become familiar with the release contents and ultimately accept it. Typically involves many users, is conducted over a short period of time and is tightly controlled. (See beta testing)
· Parallel/Audit Testing: Testing where the user reconciles the output of the new system with the output of the current system to verify that the new system performs the operations correctly.
· Glass Box/Open Box Testing: The same as white box testing. A testing approach that examines the application’s program structure and derives test cases from the application’s program logic.
· Closed Box Testing: The same as black box testing. A type of testing that considers only the functionality of the application.
· Bottom-up Testing: A technique for integration testing. A test engineer creates and uses test drivers for components that have not yet been developed, because, with bottom-up testing, low-level components are tested first. The objective of bottom-up testing is to call low-level components first, for testing purposes.
· Smoke Testing: A random test conducted before delivery and after complete testing.

Testing Terms

· Bug: A software bug may be defined as a coding error that causes an unexpected defect, fault or flaw. In other words, if a program does not perform as intended, it is most likely a bug.
· Error: A mismatch between the program and its specification is an error in the program.
· Defect: A defect is the variance from a desired product attribute (it can be wrong, missing or extra data). It can be of two types – a defect in the product or a variance from customer/user expectations. It is a flaw in the software system and has no impact until it affects the user/customer and the operational system. Up to 90% of all defects can be caused by process problems.


· Failure: A defect that causes an error in operation or negatively impacts a user/customer.
· Quality Assurance: Oriented towards preventing defects. Quality assurance ensures that all parties concerned with the project adhere to the process and procedures, standards and templates, and test readiness reviews.
· Quality Control: Quality control, or quality engineering, is a set of measures taken to ensure that defective products or services are not produced, and that the design meets performance requirements.
· Verification: Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings.
· Validation: Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verifications are completed.

4. Most common software errors

Following are the most common software errors; knowing them helps you identify errors systematically and increases the efficiency and productivity of software testing.

Types of errors, with examples:

· User Interface Errors: Missing/wrong functions, doesn’t do what the user expects, missing information, misleading or confusing information, wrong content in Help text, inappropriate error messages. Performance issues – poor responsiveness, can’t redirect output, inappropriate use of the keyboard.
· Error Handling: Inadequate protection against corrupted data, inadequate tests of user input, inadequate version control; ignores overflow or data comparison; error recovery problems – aborting errors, recovery from hardware problems.
· Boundary Related Errors: Boundaries in loop, space, time or memory; mishandling of cases outside the boundary.
· Calculation Errors: Bad logic, bad arithmetic, outdated constants, calculation errors, incorrect conversion from one data representation to another, wrong formula, incorrect approximation.
· Initial and Later States: Failure to set a data item to zero, to initialize a loop-control variable, to re-initialize a pointer, or to clear a string or flag; incorrect initialization.
· Control Flow Errors: Wrong returning state assumed, exception-handling based exits, stack underflow/overflow, failure to block or un-block interrupts, comparison sometimes yields wrong result, missing/wrong default, data type errors.
· Errors in Handling or Interpreting Data: Un-terminated null strings, overwriting a file after an error exit or user abort.
· Race Conditions: Assumption that one event or task finishes before another begins, resource races, a task starts before its prerequisites are met, messages cross or don’t arrive in the order sent.
· Load Conditions: Required resources are not available, no large memory area available, low priority tasks not put off, doesn’t erase old files from mass storage, doesn’t return unused memory.
· Hardware: Wrong device, device unavailable, underutilizing device intelligence, misunderstood status or return code, wrong operation or instruction codes.
· Source, Version and ID Control: No title or version ID, failure to update multiple copies of data or program files.
· Testing Errors: Failure to notice/report a problem, failure to use the most promising test case, corrupted data files, misinterpreted specifications or documentation, failure to make it clear how to reproduce the problem, failure to check for unresolved problems just before release, failure to verify fixes, failure to provide a summary report.
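
As one concrete instance of the boundary related errors listed above, here is a minimal Python sketch of a classic loop boundary (off-by-one) bug; the function and values are hypothetical:

# Hypothetical example of a loop boundary error from the list above.
def sum_up_to(n: int) -> int:
    total = 0
    for i in range(1, n):  # BUG: range(1, n) stops at n - 1
        total += i
    return total

def sum_up_to_fixed(n: int) -> int:
    return sum(range(1, n + 1))  # includes the boundary value n

print(sum_up_to(10))        # 45 - wrong, misses the boundary case
print(sum_up_to_fixed(10))  # 55 - correct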


5. The Test Planning Process

What is a Test Strategy? What are its Components?

Test Policy – A document characterizing the organization’s philosophy towards software testing.

Test Strategy – A high-level document defining the test phases to be performed and the testing within those phases for a programme. It defines the process to be followed in each project. This sets the standards for the processes, documents, activities etc. that should be followed for each project. For example, if a product is given for testing, you should decide if it is better to use black-box testing or white-box testing and, if you decide to use both, when you will apply each and to which part of the software. All these details need to be specified in the Test Strategy.

Project Test Plan – A document defining the test phases to be performed and the testing within those phases for a particular project.

A Test Strategy should cover more than one project and should address the following issues: an approach to testing high risk areas first, planning for testing, how to improve the process based on previous testing, environments/data used, test management (configuration management, problem management), what metrics are followed, whether the tests will be automated and if so which tools will be used, what the testing stages and testing methods are, the post-testing review process, and templates.

Test planning needs to start as soon as the project requirements are known. The first document that needs to be produced is the Test Strategy/Testing Approach, which sets the high-level approach for testing and covers all the other elements mentioned above.

Test Planning – Sample Structure

Once the approach is understood, a detailed test plan can be written. This test plan can be written in different styles; test plans can differ completely from project to project within the same organization.

IEEE SOFTWARE TEST DOCUMENTATION Std 829-1998 – TEST PLAN

Purpose: To describe the scope, approach, resources, and schedule of the testing activities; to identify the items being tested, the features to be tested, the testing tasks to be performed, the personnel responsible for each task, and the risks associated with the plan.

OUTLINE: A test plan shall have the following structure:
· Test plan identifier: A unique identifier assigned to the test plan.
· Introduction: Summarizes the software items and features to be tested and the need for them to be included.
· Test items: Identify the test items and their transmittal media.
· Features to be tested
· Features not to be tested
· Approach
· Item pass/fail criteria
· Suspension criteria and resumption requirements
· Test deliverables
· Testing tasks
· Environmental needs
· Responsibilities
· Staffing and training needs
· Schedule
· Risks and contingencies
· Approvals

Major Test Planning Tasks

Like any other process in software testing, the major tasks in test planning are to: develop the test strategy, define critical success factors, define test objectives, identify needed test resources, plan the test environment, define test procedures, identify the functions to be tested, identify interfaces with other systems or components, write test scripts, define test cases, design test data, build the test matrix, determine test schedules, assemble information, and finalize the plan.

6. Test Case Development

A test case is a detailed procedure that fully tests a feature or an aspect of a feature. While the test plan describes what to test, a test case describes how to perform a particular test. You need to develop test cases for each test listed in the test plan.

General Guidelines

As a tester, the best way to determine the compliance of the software to requirements is by designing effective test cases that provide a thorough test of a unit. Various test case design techniques enable testers to develop effective test cases. Besides implementing the design techniques, every tester needs to keep in mind general guidelines that will aid in test case design:

a. The purpose of each test case is to run the test in the simplest way possible. [Suitable techniques - Specification derived tests, Equivalence partitioning]

b. Concentrate initially on positive testing, i.e. the test case should show that the software does what it is intended to do. [Suitable techniques - Specification derived tests, Equivalence partitioning, State-transition testing]

c. Existing test cases should be enhanced and further test cases should be designed to show that the software does not do anything that it is not specified to do i.e. Negative Testing [Suitable techniques - Error guessing, Boundary value analysis, Internal boundary value testing, State-transition testing]

d. Where appropriate, test cases should be designed to address issues such as performance, safety requirements and security requirements [Suitable techniques - Specification derived tests]

e. Further test cases can then be added to the unit test specification to achieve specific test coverage objectives. Once coverage tests have been designed, the test procedure can be developed and the tests executed [Suitable techniques - Branch testing, Condition testing, Data definition-use testing, State-transition testing]

Test Case – Sample Structure

The manner in which a test case is depicted varies between organizations. However, many test case templates are in the form of a table, for example a six-column table with the following fields:

Test Case ID
Test Case Description
Test Dependency/Setup
Input Data Requirements/Steps
Expected Results
Pass/Fail
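
For illustration, a hypothetical filled-in test case in this format might look like the following (all names and values are invented):

Test Case ID: TC_LOGIN_001
Test Case Description: Verify login with a valid username and password
Test Dependency/Setup: A user account "jdoe" exists; the login page is reachable
Input Data Requirements/Steps: 1. Open the login page. 2. Enter username "jdoe" and its valid password. 3. Click Login.
Expected Results: The user is logged in and the home page is displayed
Pass/Fail: Pass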

Test Case Design Techniques

The test case design techniques are broadly grouped into two categories – black box techniques and white box techniques – plus other techniques that do not fall under either category.

Black Box (Functional)
- Specification derived tests
- Equivalence partitioning
- Boundary value analysis
- State-transition testing

White Box (Structural)
- Branch testing
- Condition testing
- Data definition – use testing
- Internal boundary value testing

Other
- Error guessing

Specification Derived Tests

As the name suggests, test cases are designed by walking through the relevant specifications. It is a positive test case design technique.

Equivalence Partitioning

Equivalence partitioning is the process of taking all of the possible test values and placing them into classes (partitions or groups). Test cases should be designed to test one value from each class. Thereby, it uses the fewest test cases to cover the maximum input requirements.

For example, suppose a program accepts integer values only from 1 to 10. The possible test values for such a program would be the range of all integers. In such a program, all integers up to 0 and above 10 will cause an error. So, it is reasonable to assume that if 11 fails, all values above it will fail, and vice versa.

If an input condition is a range of values, let one valid equivalence class be the range itself (1 to 10 in this example). Let the values below and above the range be two respective invalid equivalence classes (i.e. -1 and 11). Therefore, one value from each of these three partitions can be used as a test case for the above example.
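
A minimal Python sketch of the 1-to-10 example above, with one representative test value per partition; the accepts() function is a hypothetical stand-in for the program under test:

def accepts(value: int) -> bool:
    # Hypothetical rule from the example above: integers 1 to 10 only.
    return 1 <= value <= 10

# One representative value per equivalence class:
assert accepts(5)        # valid class: 1 to 10
assert not accepts(-1)   # invalid class: below the range
assert not accepts(11)   # invalid class: above the range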

Boundary Value Analysis

This is a selection technique where the test data are chosen to lie along the boundaries of the input domain or the output range. This technique is often called stress testing and incorporates a degree of negative testing in the test design, by anticipating that errors will occur at or around the partition boundaries.

For example, suppose a field is required to accept amounts of money between $0 and $10. As a tester, you need to check whether this means up to and including $10, or only up to $9.99, i.e. whether $10 itself is acceptable. So, the boundary values are $0, $0.01, $9.99 and $10.


Now, the following tests can be executed. A negative value should be rejected, 0 should be accepted (this is on the boundary), $0.01 and $9.99 should be accepted, and null and $10 should be rejected. In this way, it uses the same concept of partitions as equivalence partitioning.
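
The money-field example above could be sketched as follows in Python; the is_valid_amount() rule (null rejected, $0.00 accepted, $10 rejected) is taken from the tests just listed and is otherwise an assumption:

def is_valid_amount(amount) -> bool:
    # Assumed rule from the example: null (None) is rejected and the
    # accepted range is $0.00 up to but not including $10.00.
    if amount is None:
        return False
    return 0 <= amount < 10

# Tests at and around the boundaries, as listed above:
assert not is_valid_amount(-0.01)  # negative value rejected
assert is_valid_amount(0)          # lower boundary accepted
assert is_valid_amount(0.01)       # just inside the lower boundary
assert is_valid_amount(9.99)       # just inside the upper boundary
assert not is_valid_amount(None)   # null rejected
assert not is_valid_amount(10)     # $10 rejected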

State Transition Testing

As the name suggests, test cases are designed to test the transitions between states by creating the events that cause the transitions.

Branch Testing

In branch testing, test cases are designed to exercise control flow branches or decision points in a unit. This is usually aimed at achieving a target level of decision (branch) coverage: both the IF and the ELSE branch of each decision need to be tested. All branches and compound conditions (e.g. loops and array handling) within the branch should be exercised at least once.
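
A minimal sketch of branch coverage, assuming a hypothetical classify() function: two test cases are enough to execute both the IF and the ELSE branch at least once:

def classify(n: int) -> str:
    if n >= 0:                 # decision point
        return "non-negative"  # IF branch
    else:
        return "negative"      # ELSE branch

# Two test cases give full branch (decision) coverage:
assert classify(5) == "non-negative"  # exercises the IF branch
assert classify(-5) == "negative"     # exercises the ELSE branch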

Condition Testing

The object of condition testing is to design test cases to show that the individual components of logical conditions, and combinations of the individual components, are correct. Test cases are designed to test the individual elements of logical expressions, both within branch conditions and within other expressions in a unit.
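
Continuing with a hypothetical example, condition testing looks inside a compound condition; here each individual component of the AND expression is varied, not just the overall true/false outcome:

def can_checkout(in_stock: bool, logged_in: bool) -> bool:
    return in_stock and logged_in  # compound condition: two components

# Branch testing alone needs only one true and one false outcome;
# condition testing also varies each component individually:
assert can_checkout(True, True) is True    # both components true
assert can_checkout(False, True) is False  # first component false
assert can_checkout(True, False) is False  # second component false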

Data Definition – Use Testing

Data definition-use testing designs test cases to test pairs of data definitions and uses. A data definition is anywhere that the value of a data item is set; a data use is anywhere that a data item is read or used. The objective is to create test cases that will drive execution through paths between specific definitions and uses.
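
A minimal sketch of definition-use pairs, using a hypothetical discount() function: the variable rate has two definitions (one per branch) and one use, so two tests drive execution through both definition-use paths:

def discount(total: float, is_member: bool) -> float:
    if is_member:
        rate = 0.10  # definition 1 of rate
    else:
        rate = 0.0   # definition 2 of rate
    return total * (1 - rate)  # use of rate

# One test per definition-use pair:
assert discount(100.0, False) == 100.0           # definition 2 -> use
assert abs(discount(100.0, True) - 90.0) < 1e-9  # definition 1 -> use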

Internal Boundary Value Testing

In many cases, partitions and their boundaries can be identified from the functional specification for a unit, as described under equivalence partitioning and boundary value analysis above. However, a unit may also have internal boundary values that can only be identified from a structural specification.

Error Guessing

Error guessing is a test case design technique where testers use their experience to guess the possible errors that might occur, and design test cases accordingly to uncover them.

Using any one, or a combination, of the test case design techniques described above, you can develop effective test cases.

What is a Use Case?

A use case describes the system’s behavior under various conditions as it responds to a request from one of the users. The user initiates an interaction with the system to accomplish some goal. Different sequences of behavior, or scenarios, can unfold, depending on the particular requests made and conditions surrounding the requests. The use case collects together those different scenarios.

Use cases are popular largely because they tell coherent stories about how the system will behave in use. The users of the system get to see just what this new system will be and get to react early.

7. Defect Tracking


What is a defect?

As discussed earlier, a defect is the variance from a desired product attribute (it can be wrong, missing or extra data). It can be of two types – a defect in the product or a variance from customer/user expectations. It is a flaw in the software system and has no impact until it affects the user/customer and the operational system.

What are the defect categories?

With the knowledge of testing gained so far, you should now be able to categorize the defects you find. Defects can be categorized into different types based on the core issues they address. Some defects address security or database issues, while others may refer to functionality or UI issues.

Security Defects: Application security defects generally involve improper handling of data sent from the user to the application. These defects are the most severe and are given the highest priority for a fix.
Examples:
- Authentication: Accepting an invalid username/password
- Authorization: Accessibility to pages even though permission has not been given

Data Quality/Database Defects: These deal with improper handling of data in the database.
Examples:
- Values not deleted from/inserted into the database properly
- Improper/wrong/null values inserted in place of the actual values

Critical Functionality Defects: The occurrence of these bugs hampers the crucial functionality of the application.
Examples:
- Exceptions

Functionality Defects: These defects affect the functionality of the application.
Examples:
- JavaScript errors
- Buttons such as Save, Delete and Cancel not performing their intended functions
- A missing feature, or a feature not functioning the way it is intended to
- Continuous execution of loops

User Interface Defects: As the name suggests, these bugs deal with problems related to the UI and are usually considered less severe.
Examples:
- Improper error/warning/UI messages
- Spelling mistakes
- Alignment problems

General Testing Concepts

1. Spike Test: Spike testing (also known as bounce testing) simulates system behavior similar to real production usage, where load is irregular. The system is loaded to 50 percent capacity, then loaded to maximum, then released, and the cycle is repeated. This determines the system's ability to recover when placed under sporadically changing loads.


2. What is the difference between functionality testing and system testing?
Functionality testing is part of system testing. System testing can be divided into four parts: usability, functional, security and performance testing.
OR
Functional testing is black-box type testing geared to the functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).

System testing – black-box type testing that is based on the overall requirements specifications; it covers all the combined parts of a system.

3. What is client-server technology? Please explain.
In client-server technology, the client is the part of the application the user interacts with, while the server stores and serves the data for the application.

4. What is a test condition?
A test condition is a functional attribute of the application to be verified.

5. What is the difference between smoke testing and sanity testing?
Sanity testing: The test engineer covers the basic functionality of the build to validate whether the build is stable enough for complete testing.
Smoke testing: An extra shakeup beyond the sanity process; in this testing, the testing team rejects a build, with specific reasons, where the build is not working.
OR
Sanity testing is an initial effort to check whether the application can be tested further without any interruption; basic GUI functionality and connectivity to the database are the focus here. Smoke testing covers the most crucial functions of the application: testing is carried out to check whether the most crucial functions work, without bothering about in-depth coverage of each test set.

6. How do we test, using WinRunner, that clicking a URL opens the correct page?
By using web testing functions such as web_link_click, web_url_valid and web_link_valid.

7. What are BVA and ECP?
Equivalence class partitioning: Equivalence partitioning is a black-box testing method that divides the input domain of a program into classes of data from which test cases can be derived. In equivalence partitioning, a test case is designed to uncover a group (class) of errors. The input domain is divided into groups of data known as equivalence classes; the process of making these classes is known as equivalence class partitioning. An equivalence class partition represents a set of valid and invalid states for every input condition.


Boundary value analysis: It has been observed that programs that work correctly for a set of values in an equivalence class can fail on some special values, and these values often lie on the boundary of the equivalence class. BVA test cases are extreme test cases.
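As a combined illustration of both techniques, take a hypothetical rule that valid ages are 18 to 60 inclusive; equivalence partitioning picks one representative per class, and BVA adds the extreme values at the class edges:

```python
def is_valid_age(age):
    # Hypothetical rule for illustration: valid ages are 18..60 inclusive.
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per class.
assert is_valid_age(35) is True     # valid class [18..60]
assert is_valid_age(5) is False     # invalid class (below 18)
assert is_valid_age(70) is False    # invalid class (above 60)

# Boundary value analysis: extreme values at the edges of the classes.
assert is_valid_age(17) is False    # just below the lower boundary
assert is_valid_age(18) is True     # lower boundary
assert is_valid_age(60) is True     # upper boundary
assert is_valid_age(61) is False    # just above the upper boundary
```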

8. What is a regular expression in WinRunner?
A regular expression is used to match the pattern of dynamically changing windows or objects.
Example: while executing a script that contains a date field, it will show a mismatch because the recorded script contains a different date. By using a regular expression we can work around this.

9. What is a Test Responsibility Matrix, and can you specify a template for it?
It is a matrix drawn between the functions to be tested and the types of tests, or between test factors and types of tests.

10. What is the difference between ISO, CMM and CMMI?ISO (International Organization for Standardization) is the world’s largest developer of standards. Although ISO’s principal activity is the development of technical standards, ISO standards also have important economic and social repercussions. ISO standards make a positive difference, not just to engineers and manufacturers for whom they solve basic problems in production and distribution, but to society as a whole.The International Standards which ISO develops are very useful. They are useful to industrial and business organizations of all types, to governments and other regulatory bodies, to trade officials, to conformity assessment professionals, to suppliers and customers of products and services in both public and private sectors, and, ultimately, to people in general in their roles as consumers and end users.

CMM: The Capability Maturity Model for Software (CMM) is a framework that describes the key elements of an effective software process. There are CMMs for non-software processes as well, such as Business Process Management (BPM). The CMM describes an evolutionary improvement path from an ad hoc, immature process to a mature, disciplined process. The CMM covers practices for planning, engineering, and managing software development and maintenance. When followed, these key practices improve the ability of organizations to meet goals for cost, schedule, functionality, and product quality. The CMM establishes a yardstick against which it is possible to judge, in a repeatable way, the maturity of an organization's software process and compare it to the state of the practice of the industry. The CMM can also be used by an organization to plan improvements to its software process. It also reflects the needs of individuals performing software process improvement, software process assessments, or software capability evaluations; it is documented and publicly available.

CMMI : Capability Maturity Model® Integration (CMMI) is a process improvement approach that provides organizations with the essential elements of effective processes. It can be used to guide process improvement across a project, a division, or an entire organization. CMMI helps integrate traditionally separate organizational functions, set process improvement goals and priorities, provide guidance for quality processes, and provide a point of reference for appraising current processes.


11. What is the difference between client-server (2-tier) testing and web (3-tier) testing?
Client-server: 2 tiers and a single user. Web: multi-tier and multiple users.
OR

C/S is a two-tier architecture, which means we need to check a 2-tier system, whereas web-based is a multi-tier architecture where we need to check multiple systems: load, performance, and stress, and their impact on different machines.

OR

Client-server transactions are limited, whereas web transactions are unpredictable and unlimited. Client-server user behavior is predictable and controllable, whereas in web testing user behavior is unpredictable and uncontrollable. Client-server variables are the LAN and centralized hardware and software, whereas web testing variables are firewalls, routers, the hosting company and caching systems. Client-server failure is noticed internally, whereas in web testing it becomes known externally.

12. How can we check the HTML pages of a website, where every page has different links?
To check that the pages work with respect to their links, verify that when each link is clicked the correct page is opened; if it is, we go on to stress, performance and load testing.

13. What is the exact difference between a breakpoint and a toggle breakpoint in WinRunner?
With a breakpoint we can stop execution at a specific location; using the toggle option we can remove the breakpoint.

14. What is version control?
Whenever an application comes in for modification, we consider the result a new version of that application. We keep both the old version and the new version of the application in our repository, and use a tool such as VSS (Visual SourceSafe) to manage the versions of the application.

15. What is a bidirectional traceability matrix, briefly?
It is basically a table that establishes connectivity between the requirements and test cases.
OR
It is a traceability matrix that links the test cases with their respective requirements as well as the functional and design specifications. It is bidirectional because, instead of writing the actual requirements and functional specifications in the table, we usually write the corresponding column number and hyperlink it, so clicking it takes you to the respective page; also, if any change has to be made in a document, there is no need to change the RTM.
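A traceability matrix can be kept as simply as a table or spreadsheet. As an illustrative sketch (all identifiers below are invented), a mapping that can be queried in both directions looks like this:

```python
# A minimal bidirectional traceability matrix: requirement IDs mapped to
# test case IDs (all identifiers here are invented for illustration).
rtm = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],            # uncovered requirement -- a coverage gap
}

# Forward direction: which test cases cover a given requirement?
print(rtm["REQ-001"])

# Backward direction: which requirement does a test case trace back to?
reverse = {tc: req for req, tcs in rtm.items() for tc in tcs}
print(reverse["TC-003"])

# Flag requirements with no linked test cases.
print([req for req, tcs in rtm.items() if not tcs])
```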

16. What is the basic difference between product testing and application testing?
Product testing is just like black-box testing: you do not need to test the code, you just want to see whether the product's actual behavior meets what is expected. For application testing you need some knowledge of the software language.
OR


A clear definition of a product is, for example, software released to the market, whereas an application is software developed for a particular client's requirements; a product is in the market for all the people who need it.
OR
Yes, an application is software developed for a particular client's requirements, whereas a product is for all the people who need it. But in practice, application testing and product testing are the same. 1) If someone asks about the project you have tested, your reply is your project name – meaning your product – right? 2) If someone asks which application you have testing experience with, your reply is the application (technology) your product or project was made with, for example Java or ASP – right? The two answers are different, but the meaning is the same, because you tested the same project – your product – which was developed in Java, ASP or another application. So application testing and product testing are the same thing.

17. What is database testing? What is a web server? What is the SDLC in a product-based company?
1) Database testing is also called back-end testing. We conduct this testing based on data validation and data integrity. Data validation means checking whether front-end values are stored correctly in the back-end table content; data integrity means checking whether the impact of front-end operations is reflected in the back-end table content.
2) A web server is a server that receives whatever web requests users send and gives responses to those requests.
3) SDLC means software development life cycle. It contains all the stages in development, such as information gathering, analysis, design, coding, testing and maintenance.

18. What is the difference between severity and priority? Which one is used by a tester?
Severity is technical (e.g. if the application shuts down when we perform one function, the severity level is high). Priority is business-driven (a defect in a build due for delivery may be low severity but high priority). Severity is decided only by the tester; priority is decided by the project manager.
OR

Severity: This is assigned by the tester, based on the issue's seriousness. It can be stated as follows: show stopper: 4, major defect: 3, minor defect: 2, cosmetic: 1. The values for these four categories can be defined by the organization based on its own views.
Show stopper: has the highest severity, as you cannot proceed further with testing the application.
Major: a main defect in the functionality.
Minor: an error in the functionality of one object under one piece of functionality.
Cosmetic: an error in the look and feel of the system, or improper location of an object (something based on the design of the web page).


Priority: This is set by the team lead or the project lead, based on the severity and the time constraints of the module.

19. What is the difference between Quality Assurance, Quality Control, and Testing?
Quality Assurance: A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.
Quality Control: A set of activities designed to evaluate a developed work product.
Testing: The process of executing a system with the intent of finding defects. (Note that the "process of executing a system" includes test planning prior to the execution of the test cases.)

Quality Assurance vs. Quality Control:
• QA: A planned and systematic set of activities necessary to provide adequate confidence that requirements are properly established and products or services conform to specified requirements. QC: The process by which product quality is compared with applicable standards, and the action taken when nonconformance is detected.
• QA: An activity that establishes and evaluates the processes that produce the products. QC: An activity that verifies whether the product meets pre-defined standards.
• QA helps establish processes. QC implements the process.
• QA sets up measurement programs to evaluate processes. QC verifies whether specific attributes are in a specific product or service.
• QA identifies weaknesses in processes and improves them. QC identifies defects for the primary purpose of correcting them.
• QA is the responsibility of the entire team. QC is the responsibility of the tester.
• QA prevents the introduction of issues or defects. QC detects, reports and corrects defects.
• QA evaluates whether quality control is working, primarily to determine whether there is a weakness in the process. QC evaluates whether the application is working, primarily to determine whether there is a flaw/defect in the functionality.
• QA improves the process, which applies to all products that will ever be produced by that process. QC improves the development of a specific product or service.
• QA personnel should not perform quality control unless doing it to validate that quality control is working. QC personnel may perform quality assurance tasks if and when required.

20. What's a 'test plan'?
A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:
• Title
• Identification of software including version/release numbers
• Revision history of document including authors, dates, approvals
• Table of Contents
• Purpose of document, intended audience
• Objective of testing effort
• Software product overview
• Relevant related document list, such as requirements, design documents, other test plans, etc.
• Relevant standards or legal requirements
• Traceability requirements
• Relevant naming conventions and identifier conventions
• Overall software project organization and personnel/contact-info/responsibilities
• Test organization and personnel/contact-info/responsibilities
• Assumptions and dependencies
• Project risk analysis
• Testing priorities and focus
• Scope and limitations of testing
• Test outline – a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
• Outline of data input equivalence classes, boundary value analysis, error classes
• Test environment – hardware, operating systems, other required software, data configurations, interfaces to other systems
• Test environment validity analysis – differences between the test and production systems and their impact on test validity
• Test environment setup and configuration issues
• Software migration processes
• Software CM processes
• Test data setup requirements
• Database setup requirements
• Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
• Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
• Test automation – justification and overview
• Test tools to be used, including versions, patches, etc.
• Test script/test code maintenance processes and version control
• Problem tracking and resolution – tools and processes
• Project test metrics to be used
• Reporting requirements and testing deliverables
• Software entrance and exit criteria
• Initial sanity testing period and criteria
• Test suspension and restart criteria
• Personnel allocation
• Personnel pre-training needs
• Test site/location
• Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
• Relevant proprietary, classified, security and licensing issues
• Open issues
• Appendix – glossary, acronyms, etc.


21. How can World Wide Web sites be tested?
Web sites are essentially client/server applications – with web servers and 'browser' clients. Consideration should be given to the interactions between HTML pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, JavaScript, plug-in applications), and applications that run on the server side (such as CGI scripts, database interfaces, logging applications, dynamic page generators, ASP, etc.). Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include:
• What are the expected loads on the server (e.g., number of hits per unit time), and what kind of performance is required under such loads (such as web server response time and database query response times)? What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)?
• Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
• What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?
• Will down time for server and content maintenance/upgrades be allowed? How much?
• What kinds of security (firewalls, encryption, passwords, etc.) will be required, and what is it expected to do? How can it be tested?
• How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
• What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
• Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?
• Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
• How will internal and external links be validated and updated? How often?
• Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world internet 'traffic congestion' problems to be accounted for in testing?
• How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?
• How are CGI programs, applets, JavaScripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested? A sketch of automating one of these checks follows this list.
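The link-validation point above lends itself to simple automation. As a minimal illustrative sketch (not from the original text, and far short of a production crawler), a standard-library-only link check might start like this; the start URL is a placeholder:

```python
# A minimal link-validation sketch using only the Python standard library.
# Real crawlers need far more care: robots.txt, rate limiting, relative
# URL resolution, retries, and so on.
from html.parser import HTMLParser
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

class LinkCollector(HTMLParser):
    """Collects absolute http(s) links found in anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.startswith("http"):
                    self.links.append(value)

def check_links(page_url):
    collector = LinkCollector()
    html = urlopen(page_url).read().decode("utf-8", errors="replace")
    collector.feed(html)
    for link in collector.links:
        try:
            result = urlopen(link).getcode()   # e.g. 200 for a live page
        except (HTTPError, URLError) as exc:
            result = exc                       # broken or unreachable link
        print(link, "->", result)

# check_links("https://example.com")  # placeholder URL
```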

22. Smoke vs. Sanity
Smoke test: When a build is received, a smoke test is run to ascertain whether the build is stable and can be considered for further testing. Smoke testing can be done to test the stability of any interim build, and can be executed for platform qualification tests.


Sanity testing: Once a new build is obtained with minor revisions, instead of doing a thorough regression, a sanity test is performed to ascertain that the build has indeed rectified the issues and that no further issues have been introduced by the fixes. It is generally a subset of regression testing, and a group of test cases related to the changes made to the app is executed.

Generally, when multiple cycles of testing are executed, sanity testing may be done during the later cycles, after thorough regression cycles.

1. Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach whereby all areas of the application are tested, without getting in too deep. A sanity test is a narrow regression test that focuses on one or a few areas of functionality; sanity testing is usually narrow and deep.
2. A smoke test is scripted, either using a written set of tests or an automated test. A sanity test is usually unscripted.
3. A smoke test is designed to touch every part of the application in a cursory way; it is shallow and wide. A sanity test is used to determine that a small section of the application is still working after a minor change.
4. Smoke testing is conducted to ensure that the most crucial functions of a program work, without bothering with finer details (such as build verification). Sanity testing is cursory testing, performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications; this level of testing is a subset of regression testing.
5. Smoke testing is a normal health check-up on a build of an application before taking it into in-depth testing; sanity testing verifies whether the requirements are met, checking all features breadth-first.

23. C/S application vs. Web application
Both kinds of applications are connected by a network such as a LAN, WAN or MAN.
1. A C/S application is a 2-tier application; a web application is an N-tier application.
2. In a C/S application, the application resides on the client workstation; in a web application, only the browser resides on the client workstation.
3. In a C/S application, user behavior is predictable and controllable; in a web application, user behavior is unpredictable and uncontrollable.
4. In a C/S application, internal errors are easily predictable; in a web application, external errors are unpredictable and uncontrollable.
5. In a C/S application, application processing is done by the application server, and the variables are the environment, the operating system and so on; in a web application, application processing is done by the web server, and the variables are firewalls, routers and so on.

Difference between an application server and a Web server?


Taking a big step back, a Web server serves pages for viewing in a Web browser, while an application server provides methods that client applications can call. A little more precisely, we can say that: A Web server exclusively handles HTTP requests, whereas an application server serves business logic to application programs through any number of protocols. Let’s examine each in more detail.

The Web server

A Web server handles the HTTP protocol. When the Web server receives an HTTP request, it responds with an HTTP response, such as sending back an HTML page. To process a request, a Web server may respond with a static HTML page or image, send a redirect, or delegate the dynamic response generation to some other program such as CGI scripts, JSPs (JavaServer Pages), servlets, ASPs (Active Server Pages), server-side JavaScripts, or some other server-side technology. Whatever their purpose, such server-side programs generate a response, most often in HTML, for viewing in a Web browser. Understand that a Web server’s delegation model is fairly simple. When a request comes into the Web server, the Web server simply passes the request to the program best able to handle it. The Web server doesn’t provide any functionality beyond simply providing an environment in which the server-side program can execute and pass back the generated responses. The server-side program usually provides for itself such functions as transaction processing, database connectivity, and messaging. While a Web server may not itself support transactions or database connection pooling, it may employ various strategies for fault tolerance and scalability such as load balancing, caching, and clustering—features oftentimes erroneously assigned as features reserved only for application servers.

The application server

As for the application server, according to our definition, an application server exposes business logic to client applications through various protocols, possibly including HTTP. While a Web server mainly deals with sending HTML for display in a Web browser, an application server provides access to business logic for use by client application programs. The application program can use this logic just as it would call a method on an object (or a function in the procedural world). Such application server clients can include GUIs (graphical user interfaces) running on a PC, a Web server, or even other application servers. The information traveling back and forth between an application server and its client is not restricted to simple display markup. Instead, the information is program logic. Since the logic takes the form of data and method calls and not static HTML, the client can employ the exposed business logic however it wants. In most cases, the server exposes this business logic through a component API, such as the EJB (Enterprise JavaBeans) component model found on J2EE (Java 2 Platform, Enterprise Edition) application servers. Moreover, the application server manages its own resources. Such gate-keeping duties include security, transaction processing, resource pooling, and messaging. Like a Web server, an application server may also employ various scalability and fault-tolerance techniques.

An example

As an example, consider an online store that provides real-time pricing and availability information. Most likely, the site will provide a form with which you can choose a product. When you submit your query, the site performs a lookup and returns the results embedded within an HTML page. The site may implement this functionality in numerous ways. I’ll show you one scenario that doesn’t use an application server and another that does. Seeing how these scenarios differ will help you to see the application server’s function.

Scenario 1: Web server without an application server

In the first scenario, a Web server alone provides the online store’s functionality. The Web server takes your request, then passes it to a server-side program able to handle the request. The server-side program looks up the pricing information from a database or a flat file. Once retrieved, the server-side program uses the information to formulate the HTML response, then the Web server sends it back to your Web browser. To summarize, a Web server simply processes HTTP requests by responding with HTML pages.

Scenario 2: Web server with an application server

Scenario 2 resembles Scenario 1 in that the Web server still delegates the response generation to a script. However, you can now put the business logic for the pricing lookup onto an application server. With that change, instead of the script knowing how to look up the data and formulate a response, the script can simply call the application server’s lookup service.


The script can then use the service's result when the script generates its HTML response. In this scenario, the application server serves the business logic for looking up a product's pricing information. That functionality doesn't say anything about display or how the client must use the information. Instead, the client and application server send data back and forth. When a client calls the application server's lookup service, the service simply looks up the information and returns it to the client. By separating the pricing logic from the HTML response-generating code, the pricing logic becomes far more reusable between applications. A second client, such as a cash register, could also call the same service as a clerk checks out a customer. In contrast, in Scenario 1 the pricing lookup service is not reusable because the information is embedded within the HTML page. To summarize, in Scenario 2's model, the Web server handles HTTP requests by replying with an HTML page while the application server serves application logic by processing pricing and availability requests.
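A minimal Python sketch of Scenario 2's division of labor may make the reusability point concrete; the catalog, product names and function names below are invented for illustration:

```python
# Sketch of Scenario 2's separation of concerns: the "application server"
# side exposes a pricing lookup that returns plain data, and the
# "web server" side formats that data as HTML.
CATALOG = {"widget": 9.99, "gadget": 24.50}

def lookup_price(product):
    # Business logic: returns data, says nothing about display.
    return CATALOG.get(product)

def render_price_page(product):
    # Presentation logic: turns the service's result into HTML.
    price = lookup_price(product)
    if price is None:
        return "<p>Product not found.</p>"
    return f"<p>{product}: ${price:.2f}</p>"

# The same lookup_price service is reusable by a non-HTML client,
# such as the cash register mentioned above.
print(render_price_page("widget"))   # HTML for the browser
print(lookup_price("gadget"))        # raw data for another client
```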

So, the differences between an application server and a web server:

(1) A web server serves pages for viewing in a web browser; an application server exposes business logic to client applications through various protocols.

(2) A web server exclusively handles HTTP requests; an application server serves business logic to application programs through any number of protocols.

(3) A web server's delegation model is fairly simple: when a request comes in, the web server simply passes it to the program best able to handle it (a server-side program). It may not support transactions or database connection pooling.

(4) An application server is more capable of dynamic behavior than a web server. We can also configure an application server to work as a web server; simply put, an application server is a superset of a web server.

Bug Life Cycle & Guidelines

In this tutorial you will learn about Bug Life Cycle & Guidelines, Introduction, Bug Life Cycle, The different states of a bug, Description of Various Stages, Guidelines on deciding the Severity of Bug, A sample guideline for assignment of Priority Levels during the product test phase and Guidelines on writing Bug Description.

 Introduction:

A bug can be defined as abnormal behavior of the software. No software exists without bugs. The elimination of bugs from the software depends upon the efficiency of the testing done on it. A bug is a specific concern about the quality of the application under test (AUT).


Bug Life Cycle:

In the software development process, the bug has a life cycle. The bug should go through the life cycle to be closed; a specific life cycle ensures that the process is standardized. The bug attains different states in the life cycle, which are described below.

The different states of a bug can be summarized as follows:

1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected
10. Closed

Description of Various Stages:

1. New: When the bug is posted for the first time, its state will be “NEW”. This means that the bug is not yet approved.

2. Open: After a tester has posted a bug, the tester's lead approves that the bug is genuine and changes the state to "OPEN".


3. Assign: Once the lead changes the state to "OPEN", he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to "ASSIGN".

4. Test: Once the developer fixes the bug, he has to assign the bug to the testing team for the next round of testing. Before he releases the software with the bug fixed, he changes the state of the bug to "TEST". This specifies that the bug has been fixed and has been released to the testing team.

5. Deferred: A bug changed to the deferred state is expected to be fixed in an upcoming release. Many factors can lead to this change: the priority of the bug may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.

6. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is changed to “REJECTED”.

7. Duplicate: If the bug is reported twice, or two bugs describe the same issue, then one bug's status is changed to "DUPLICATE".

8. Verified: Once the bug is fixed and the status is changed to “TEST”, the tester tests the bug. If the bug is not present in the software, he approves that the bug is fixed and changes the status to “VERIFIED”.

9. Reopened: If the bug still exists even after the bug is fixed by the developer, the tester changes the status to “REOPENED”. The bug traverses the life cycle once again.

10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “CLOSED”. This state means that the bug is fixed, tested and approved.
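The states above can be encoded as a transition table to make the cycle explicit. The sketch below is one reasonable reading of the stage descriptions; the exact set of allowed transitions is an interpretation, not something the text spells out:

```python
# One possible encoding of the bug life cycle described above; the
# allowed transitions are an interpretation of the stage descriptions.
TRANSITIONS = {
    "NEW":      {"OPEN", "REJECTED", "DUPLICATE"},
    "OPEN":     {"ASSIGN", "DEFERRED"},
    "ASSIGN":   {"TEST", "DEFERRED", "REJECTED"},
    "TEST":     {"VERIFIED", "REOPENED"},
    "REOPENED": {"ASSIGN"},          # traverses the cycle once again
    "VERIFIED": {"CLOSED"},
    "DEFERRED": {"ASSIGN"},          # picked up in a later release
}

def move(state, new_state):
    # Reject any transition the life cycle does not allow.
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

# Walk a bug through the happy path of the cycle.
state = "NEW"
for nxt in ("OPEN", "ASSIGN", "TEST", "VERIFIED", "CLOSED"):
    state = move(state, nxt)
print(state)  # CLOSED
```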

While defect prevention is much more effective and efficient in reducing the number of defects, most organizations conduct defect discovery and removal. Discovering and removing defects is an expensive and inefficient process; it is much more efficient for an organization to conduct activities that prevent defects.

Guidelines on deciding the Severity of Bug:

Indicate the impact each defect has on testing efforts or users and administrators of the application under test. This information is used by developers and management as the basis for assigning priority of work on defects.

A sample guideline for assignment of Priority Levels during the product test phase includes:

1. Critical / Show Stopper — An item that prevents further testing of the product or function under test can be classified as a critical bug. No workaround is possible for such bugs. Examples include a missing menu option or a security permission required to access a function under test.

2. Major / High — A defect that does not function as expected/designed, or causes other functionality to fail to meet requirements, can be classified as a major bug. A workaround can be provided for such bugs. Examples include inaccurate calculations and the wrong field being updated.

3. Average / Medium — Defects that do not conform to standards and conventions can be classified as medium bugs. Easy workarounds exist to achieve functionality objectives. Examples include matching visual and text links that lead to different end points.

4. Minor / Low — Cosmetic defects that do not affect the functionality of the system can be classified as minor bugs.

Guidelines on writing Bug Description:

A bug can be expressed as "result followed by the action": that is, the unexpected behavior that occurs when a particular action takes place is given as the bug description.

1. Be specific. State the expected behavior which did not occur – such as a pop-up that did not appear – and the behavior which occurred instead.

2. Use present tense.
3. Don't use unnecessary words.
4. Don't add exclamation points. End sentences with a period.
5. DON'T USE ALL CAPS. Format words in upper and lower case (mixed case).
6. Always mention the steps to reproduce the bug.

Metrics Used In Testing


In this tutorial you will learn about the metrics used in testing – the product quality measures:

1. Customer satisfaction index
2. Delivered defect quantities
3. Responsiveness (turnaround time) to users
4. Product volatility
5. Defect ratios
6. Defect removal efficiency
7. Complexity of delivered product
8. Test coverage
9. Cost of defects
10. Costs of quality activities
11. Re-work
12. Reliability

and the metrics for evaluating application system testing.

The Product Quality Measures:

1. Customer satisfaction index

This index is surveyed before product delivery and after product delivery (and on an ongoing, periodic basis, using standard questionnaires). The following are analyzed:

- Number of system enhancement requests per year
- Number of maintenance fix requests per year
- User friendliness: call volume to customer service hotline
- User friendliness: training time per new user
- Number of product recalls or fix releases (software vendors)
- Number of production re-runs (in-house information systems groups)

2. Delivered defect quantities

These quantities are normalized per function point (or per LOC) at product delivery (first 3 months or first year of operation), or on an ongoing basis (per year of operation), by level of severity and by category or cause, e.g.: requirements defect, design defect, code defect, documentation/on-line help defect, defect introduced by fixes, etc.

3. Responsiveness (turnaround time) to users

- Turnaround time for defect fixes, by level of severity
- Time for minor vs. major enhancements; actual vs. planned elapsed time

4. Product volatility

Ratio of maintenance fixes (to repair the system & bring it into compliance with specifications), vs. enhancement requests (requests by users to enhance or change functionality)

5. Defect ratios

- Defects found after product delivery per function point
- Defects found after product delivery per LOC
- Pre-delivery defects: annual post-delivery defects
- Defects per function point of the system modifications


6. Defect removal efficiency

- Number of post-release defects (found by clients in field operation), categorized by level of severity
- Ratio of defects found internally prior to release (via inspections and testing), as a percentage of all defects
- All defects include defects found internally plus externally (by customers) in the first year after product delivery

7. Complexity of delivered product

- McCabe's cyclomatic complexity counts across the system
- Halstead's measure
- Card's design complexity measures
- Predicted defects and maintenance costs, based on complexity measures

8. Test coverage

- Breadth of functional coverage
- Percentage of paths, branches or conditions that were actually tested
- Percentage by criticality level: perceived level of risk of paths
- The ratio of the number of detected faults to the number of predicted faults

9. Cost of defects

- Business losses per defect that occurs during operation
- Business interruption costs; costs of work-arounds
- Lost sales and lost goodwill
- Litigation costs resulting from defects
- Annual maintenance cost (per function point)
- Annual operating cost (per function point)
- Measurable damage to your boss's career

10. Costs of quality activities

- Costs of reviews, inspections and preventive measures
- Costs of test planning and preparation
- Costs of test execution, defect tracking, version and change control
- Costs of diagnostics, debugging and fixing
- Costs of tools and tool support
- Costs of test case library maintenance
- Costs of testing & QA education associated with the product
- Costs of monitoring and oversight by the QA organization (if separate from the development and test organizations)

11. Re-work


- Re-work effort (hours, as a percentage of the original coding hours)
- Re-worked LOC (source lines of code, as a percentage of the total delivered LOC)
- Re-worked software components (as a percentage of the total delivered components)

12. Reliability

- Availability (percentage of time a system is available, versus the time the system is needed to be available)
- Mean time between failures (MTBF)
- Mean time to repair (MTTR)
- Reliability ratio (MTBF / MTTR)
- Number of product recalls or fix releases
- Number of production re-runs as a ratio of production runs

Metrics for Evaluating Application System Testing:

Metric = Formula

Test Coverage = Number of units (KLOC/FP) tested / total size of the system (KLOC represents thousands of lines of code; FP represents function points)

Number of tests per unit size = Number of test cases per KLOC/FP

Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria

Defects per size = Defects detected / system size

Test cost (in %) = Cost of testing / total cost *100

Cost to locate defect = Cost of testing / the number of defects located

Achieving Budget = Actual cost of testing / Budgeted cost of testing

Defects detected in testing = Defects detected in testing / total system defects

Defects detected in production = Defects detected in production/system size

Quality of Testing = No of defects found during Testing/(No of defects found during testing + No of acceptance defects found after delivery) *100

Effectiveness of testing to business = Loss due to problems / total resources processed by the system.

System complaints = Number of third party complaints / number of transactions processed


Scale of Ten = Assessment of testing by giving a rating on a scale of 1 to 10

Source Code Analysis = Number of source code statements changed / total number of tests.

Effort Productivity:

Test Planning Productivity = No. of test cases designed / Actual effort for design and documentation

Test Execution Productivity = No. of test cycles executed / Actual effort for testing
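A few of these formulas can be expressed directly as code; as a worked example, the sketch below uses input numbers that are invented purely for illustration:

```python
# Worked examples of three of the metrics formulas above.
def test_coverage(units_tested, total_units):
    # Test Coverage = number of units tested / total size of the system
    return units_tested / total_units

def cost_to_locate_defect(cost_of_testing, defects_located):
    # Cost to locate defect = cost of testing / number of defects located
    return cost_of_testing / defects_located

def quality_of_testing(defects_in_testing, acceptance_defects):
    # Quality of Testing = defects found in testing /
    #   (defects found in testing + acceptance defects after delivery) * 100
    return defects_in_testing / (defects_in_testing + acceptance_defects) * 100

# All numbers below are made up for the sake of the example.
print(test_coverage(45, 50))             # 0.9 (90% of units tested)
print(cost_to_locate_defect(20000, 80))  # 250.0 per defect
print(quality_of_testing(180, 20))       # 90.0 percent
```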

Life Cycle of Testing Process

This article explains the different steps in the life cycle of the testing process. Each phase of the development process has a specific input and a specific output. Once the project is confirmed to start, the development of the project can be divided into the following phases:

- Software requirements
- Software design
- Implementation
- Testing
- Maintenance

In the whole development process, testing consumes the largest amount of time, but most developers overlook this and the testing phase is generally neglected. As a consequence, erroneous software is released. The testing team should be involved right from the requirements stage itself.

The various phases involved in testing, with regard to the software development life cycle are:

1. Requirements stage
2. Test plan
3. Test design
4. Design reviews
5. Code reviews
6. Test case preparation
7. Test execution
8. Test reports
9. Bug reporting
10. Reworking on patches
11. Release to production

Requirements Stage

Normally, in many companies, developers themselves take part in the requirements stage. Especially in product-based companies, a tester should also be involved at this stage, since a tester thinks from the user's side, whereas a developer can't. A separate panel should be formed for each module, comprising a developer, a tester and a user. Panel meetings should be scheduled in order to gather everyone's views. All the requirements should be documented properly for further use; this document is called the "Software Requirements Specification".

Test Plan

Without a good plan, no work is a success, and the testing process of software also requires a good plan. The test plan document is the most important document for bringing in a process-oriented approach. A test plan document should be prepared after the requirements of the project are confirmed, and must contain the following information:

• Total number of features to be tested
• Testing approaches to be followed
• The testing methodologies
• Number of man-hours required
• Resources required for the whole testing process
• The testing tools that are to be used
• The test cases, etc.

Test Design

Test design is done based on the requirements of the project, and it depends on whether testing is to be manual or automated. For automation testing, the different paths for testing are to be identified first. An end-to-end checklist has to be prepared covering all the features of the project.

The test design is represented pictographically. The test design involves various stages. These stages can be summarized as follows:

• The different modules of the software are identified first.
• Next, the paths connecting all the modules are identified.

Then the design is drawn. The test design is the most critical stage, as it decides the test case preparation; the test design thus determines the quality of the testing process.

Test Cases Preparation

Test cases should be prepared based on the following scenarios:

• Positive scenarios
• Negative scenarios
• Boundary conditions
• Real-world scenarios


Design Reviews

The software design is done in a systematic manner or using the UML language. The tester can review the design and suggest ideas and any modifications needed.

Code Reviews

Code reviews are similar to unit testing. Once the code is ready for release, the tester should be ready to do unit testing on it, and must be ready with his own unit test cases. Though a developer does the unit testing, a tester must also do it; developers may overlook some minute mistakes in the code that a tester may find.

Test Execution and Bugs Reporting

Once unit testing is completed and the code is released to QA, functional testing is done. Top-level testing is done at the beginning of the testing to find top-level failures. If any top-level failures occur, the bugs are reported to the developer immediately to get the required workaround.

The test reports should be documented properly and the bugs have to be reported to the developer after the testing is completed.

Release to Production

Once the bugs are fixed, another release is given to QA with the modified changes, and regression testing is executed. Once QA assures the software, it is released to production. Before release to production, another round of top-level testing is done.

The testing process is iterative: once the bugs are fixed, testing has to be done repeatedly. Thus the testing process is never-ending.