Software Testing Imp Doc


Defect severity determines the defect's criticality, whereas defect priority determines the immediacy or urgency of its repair.

1. High Severity & Low Priority: Suppose there is an application which generates banking-related reports weekly, monthly, quarterly and yearly by doing some calculations. In this application there is a fault while calculating the YEARLY report. This is a high severity fault but low priority, because it can be fixed in the next release as a change request.

2. Low Severity & High Priority: Suppose there is a spelling mistake or content issue on the homepage of the BT.com website, which gets lakhs of hits daily all over the UK. Though this fault does not affect the website or other functionality, considering the status and popularity of the website in the competitive market it is a high priority fault.

3. High Severity & High Priority: Suppose there is an application which gives banking-related reports weekly, monthly, quarterly and yearly by doing some calculations. In this application there is a fault while calculating the WEEKLY report. This is a high severity and high priority fault, because it will hamper the functionality of the application immediately, within a week. It should be fixed urgently.

4. Low Severity & Low Priority: Suppose there is a spelling mistake on pages which get very few hits throughout the month on any website. This fault can be considered low severity and low priority.
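As a minimal sketch, severity and priority can be modeled as two independent fields on a defect record; the class and field names below are illustrative, not taken from any particular defect-tracking tool:

    # defect_record.py - severity and priority as independent dimensions
    from dataclasses import dataclass
    from enum import Enum

    class Level(Enum):
        LOW = 1
        HIGH = 2

    @dataclass
    class Defect:
        summary: str
        severity: Level   # technical impact of the fault
        priority: Level   # urgency of fixing it

    # The four combinations from the examples above:
    defects = [
        Defect("Yearly report miscalculated", Level.HIGH, Level.LOW),
        Defect("Spelling mistake on a high-traffic homepage", Level.LOW, Level.HIGH),
        Defect("Weekly report miscalculated", Level.HIGH, Level.HIGH),
        Defect("Typo on a rarely visited page", Level.LOW, Level.LOW),
    ]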

    Software Testing An Introduction

What is 'Software Testing'?
What is the need of testing?


What are 5 common problems in the software development process?
What are 5 common solutions to software development problems?
Why does software have bugs?
What is the solution for these problems? OR How can you prevent defects at the development stage?

What is the role of a "Tester"?
What makes a good Software Test engineer?
What makes a good Software QA engineer?

    Quality Assurance

What is software quality?
What is 'Software Quality Assurance'?
What is 'good code'?
What is 'good design'?
How can new Software QA processes be introduced in an existing organization?

    Verification & Validation

Verification
Verification Techniques
Validation
Validation Techniques
Comparison between Verification & Validation

    Testing Techniques & Types of Testing

What are the different types of Testing? Please elaborate on each type.
Equivalence Partitioning
Boundary Value Analysis
Cause-Effect Graphing Techniques

    Life Cycles & Models

What is the 'Software development life cycle'? OR Explain SDLC
Elaborate on Testing Life Cycle
Bug Life Cycle or Defect Life Cycle, or what do you do after you find a defect?
When to start testing OR Entry Criteria for testing
When to stop testing OR Exit Criteria
V-Model

    Deliverables of Testing

Test Strategy
Test Plan
Test Case


Test Scenario
Difference between Test Strategy & Test Plan

    Common Terminology

What is Priority? What is Severity? Please elaborate.
Traceability Matrix
What is a bug, defect, issue, error?
Risk Analysis & Risk Identification
Hot Fix
What is Defect Removal Efficiency?

    Scenario based testing

What is the difference between Sanity & Smoke testing?
What is the exact difference between a product and a project? Give an example.

What if the software is so buggy it can't really be tested at all?
What if there isn't enough time for thorough testing?
What if the project isn't big enough to justify extensive testing?
Will automated testing tools make testing easier?
What's the best way to choose a test automation tool?
Why is it often hard for organizations to get serious about quality assurance?
Who is responsible for risk management?
Who should decide when software is ready to be released?
What can be done if requirements are changing continuously?
What if the application has functionality that wasn't in the requirements?
How can Software QA processes be implemented without reducing productivity?
How can it be determined if a test environment is appropriate?
What's the best approach to software test estimation?
How can World Wide Web sites be tested?

    Processes & Standards

What is 'configuration management'?
What is SEI? CMM? CMMI? ISO? IEEE? ANSI? Will it help?

    ##############################################

    Section 1 - Software Testing An Introduction

    What is 'Software Testing'?


Testing involves operation of a system or application under controlled (simulated) conditions and evaluation of the results. Example: if the user is in interface A of the application and does B, then C should happen. The controlled conditions should include both normal and abnormal conditions (positive and negative). Testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn't or don't happen when they should. It is oriented to 'Verification'.

Why is there a need of testing? OR Why is there a need of 'independent/separate' testing?

Prior to the concept of testing software as a separate testing project, the testing process existed, but the developer(s) did it at the time of development.

The fact is that if you make something, you hardly feel that there can be something wrong with what you have developed. It is a common trait of human nature: we feel that there is no problem in a system we designed ourselves, that it is perfectly functional and fully working. So the hidden bugs, errors or problems of the system remain hidden, and they raise their head when the system goes into production.

On the other hand, when one person starts checking something made by another person, the chances are very high that the checker/observer will find some problem with the system, even if the problem is only a spelling mistake. Though this seems unflattering in terms of human behavior, it has been used for the benefit of software projects. When you develop something, you give it to be checked/tested to find any problems that never came up during development. If we minimize the problems with the system we developed, it is beneficial for us: our client will be happy if the system works without any problem, and it will generate more revenue for us.

That's why we need Testing!

    What are 5 common problems in the software development process?

Poor requirements - If requirements are unclear, incomplete, too general, or not testable, there will be problems.

Unrealistic schedule - If too much work is crammed into too little time, problems are inevitable.

Inadequate testing - No one will know whether or not the program is any good until the customer complains or systems crash.

Featurism - Requests to pile on new features after development is underway; extremely common.

Miscommunication - If developers don't know what's needed, or customers have erroneous expectations, problems can be expected.


    What are 5 common solutions to software development problems?

Solid requirements - Clear, complete, detailed, attainable, testable requirements that are agreed to by all players. Continuous close coordination with customers/end-users is necessary to ensure that changing/emerging requirements are understood.

Realistic schedules - Adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation should be provided.

Adequate testing - Start testing early on, re-test after fixes or changes, and plan for adequate time for testing and bug-fixing. 'Early' testing could include static code analysis/testing, unit testing by developers, automated post-build testing, etc.

Stick to initial requirements where feasible - Defend against excessive changes and additions once development has begun, and explain the consequences of constant change to end users. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, work closely with customers/end-users to manage expectations.

Communication - Conduct walkthroughs and inspections when appropriate; make extensive use of group communication tools (groupware, bug-tracking tools, change management tools, etc.) to ensure that information/documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes and/or continuous communication with end-users if possible to clarify expectations.

    Why does software have bugs?

Miscommunication - The requirements are not clearly explained by the user, or are misunderstood by the developer, leading to gaps.

Software complexity - The complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Multi-tiered applications, client-server and distributed applications, data communications, enormous relational databases, and the sheer size of applications have all contributed to the exponential growth in software/system complexity.

Programming errors - Programmers are bound to make mistakes while coding, causing defects.

Changing requirements - If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating changes may result in errors.

Time pressures - Scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.

Poorly documented code - It's tough to maintain and modify code that is badly written or poorly documented; the result is bugs.


Software development tools - Visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

What is the solution for these problems? OR How can you prevent defects at the development stage?

Solid requirements - Requirements should be clear, complete, detailed, cohesive, attainable and testable. Use prototypes to help nail down requirements. In 'agile'-type environments, continuous coordination with customers/end-users is necessary.

Realistic schedules - Allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out.

Adequate testing - Start testing early on, re-test after fixes or changes, and plan for adequate time for testing and bug-fixing. 'Early' testing ideally includes unit testing by developers and built-in testing and diagnostic capabilities.

Stick to initial requirements as much as possible - Be prepared to defend against excessive changes and additions once development has begun, and be prepared to explain the consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, work closely with customers/end-users to manage expectations. This will give them a higher comfort level with their requirements decisions and minimize excessive changes later on.

Communication - Conduct walkthroughs and inspections when appropriate; make extensive use of group communication tools (e-mail, groupware, networked bug-tracking tools, change management tools, intranet capabilities, etc.). Ensure that information/documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes if possible to clarify customers' expectations.

    What is the role of a "Tester"?

A tester's focus is to demonstrate an application's weaknesses: to find test cases or configurations that give unexpected results, or to show the software breaking. A tester's responsibilities typically include:

Planning and developing test cases - Writing test plans and documentation, prioritizing the testing based on risk assessment, setting up test data, organizing test teams.

Setting up the test environment - An application will be tested using multiple combinations of hardware and software and under different conditions. Setting up the prerequisites for the test cases (test data) is the task of testers.


Writing test harnesses and scripts - Developing test applications that call the API directly in order to automate the test cases, and writing scripts to simulate user interactions.
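As an illustration, a minimal harness might call an application function directly and check its result; the discount() function and its expected values below are hypothetical, not from any real product:

    # minimal_harness.py - a sketch of a harness that drives an API directly
    def discount(price: float, customer_years: int) -> float:
        """Toy application function under test (hypothetical)."""
        return price * (0.9 if customer_years >= 5 else 1.0)

    def run_case(name, func, args, expected):
        actual = func(*args)
        status = "PASS" if abs(actual - expected) < 1e-9 else "FAIL"
        print(f"{status}: {name} -> expected {expected}, got {actual}")

    if __name__ == "__main__":
        run_case("loyal customer gets 10% off", discount, (100.0, 6), 90.0)
        run_case("new customer pays full price", discount, (100.0, 1), 100.0)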

Planning, writing and running load tests - Non-functional tests that monitor an application's scalability and performance, looking at how the application behaves under the stress of a large number of users.

Writing bug reports - Communicating the exact steps required to reproduce unexpected behavior on a particular configuration, and reporting test results to the development team.

    What makes a good test engineer?

A good test engineer should have:

- A 'test to break' attitude,
- an ability to take the point of view of the customer,
- a strong desire for quality, and an attention to detail,
- tact and diplomacy, useful in maintaining a cooperative relationship with developers,
- an ability to communicate with both technical (developers) and non-technical (customers, management) people,
- previous software development experience, which provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming,
- judgment skills, needed to assess the high-risk areas of an application on which to focus testing efforts when time is limited.

    What makes a good Software QA engineer?

The same qualities a good tester has are useful for a QA engineer. Additionally:

- They must be able to understand the entire software development process and how it fits into the business approach and goals of the organization.
- Communication skills and the ability to understand various sides of issues are important.
- In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed.
- An ability to find problems as well as to see what's missing is important for inspections and reviews.

### Section 1 - Software Testing An Introduction Ends Here ###

Section 2 - Quality Assurance

    What is 'Software Quality Assurance'?


Software Quality Assurance involves the entire software development process: monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'Prevention'.

    What is software quality?

Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term: it depends on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, and the development organization's management, users and testers. Each type of 'customer' will have their own view on 'quality': the accounting department might define quality in terms of profits, while an end-user might define quality as user-friendly and bug-free.

    What is 'good code'?

'Good code' is code that works, is bug-free, and is readable and maintainable. Some organizations have coding 'standards' that all developers are supposed to adhere to, but everyone has different ideas about what's best, or what is too many or too few rules. It should be kept in mind that excessive use of standards and rules can stifle productivity and creativity. 'Peer reviews', 'buddy checks', code analysis tools, etc. can be used to check for problems and enforce standards.

    What is 'good design'?

'Design' could refer to many things, but often refers to 'functional design' or 'internal design'. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; is robust with sufficient error-handling and status-logging capability; and works correctly when implemented. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements. For programs that have a user interface, it's often a good idea to assume that the end user will have little computer knowledge and may not read a user manual or even the on-line help. Some common rules of thumb include:

- The program should act in the way that least surprises the user.
- It should always be evident to the user what can be done next and how to exit.
- The program shouldn't let the user do something stupid without warning them.

    How can new Software QA processes be introduced in an existing organization?


A lot depends on the size of the organization and the risks involved. For large organizations with high-risk projects, serious management buy-in is required and a formalized QA process is necessary.

Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity, so as to keep bureaucracy from getting out of hand.

For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communication among customers, managers, developers, and testers.

### Section 2 - Quality Assurance Ends Here ###

Section 3 - Verification & Validation


    What is Verification?

The standard definition of Verification is: "Are we building the product RIGHT?" That is, Verification is a process that ensures that the software product is being developed in the right way. The software should conform to its predefined specifications. As the product development goes through different stages, an analysis is done to ensure that all required specifications are met.

The Verification part of the Verification and Validation model comes before Validation, and incorporates software inspections, reviews, audits, walkthroughs, buddy checks, etc. During Verification, the work product (the ready part of the software being developed, and the various documents) is reviewed/examined personally by one or more persons in order to find and point out its defects. This process helps in the prevention of potential bugs which could cause the failure of the project.

A few terms involved in Verification:

Inspection:
Inspection involves a team of about 3-6 people, led by a leader, which formally reviews the documents and work product during various phases of the product development life cycle. The work product and related documents are presented to the inspection team, whose members carry different interpretations of the presentation. The bugs that are detected during the inspection are communicated to the next level in order to take care of them.

Walkthroughs:
A walkthrough can be considered the same as an inspection but without formal preparation (of any presentation or documentation). During the walkthrough meeting, the presenter/author introduces the material to all the participants in order to make them familiar with it. Although walkthroughs can help in finding potential bugs, they are mainly used for knowledge sharing or communication purposes.

Buddy Checks:
This is the simplest type of review activity used to find bugs in a work product during verification. In a buddy check, one person goes through the documents prepared by another person in order to find out if that person has made any mistakes, i.e. to find bugs which the author couldn't find previously.

The activities involved in the Verification process are: requirement specification verification, functional design verification, internal/system design verification and code verification. Each activity makes sure that the product is developed the right way and that every requirement, specification, design, piece of code, etc. is verified.

    What is Validation?


The standard definition of Validation is: "Are we building the RIGHT product?" That is, whatever software product is being developed, it should do what the user expects it to do. The software product should satisfy all the functional requirements set by the user. Validation is done during or at the end of the development process in order to determine whether the product satisfies the specified requirements.

Validation and Verification processes go hand in hand, but visibly the Validation process starts after the Verification process ends (after coding of the product is complete). Each Verification activity (such as requirement specification verification, functional design verification, etc.) has its corresponding Validation activity (such as functional validation/testing, code validation/testing, system/integration validation, etc.).

All types of testing methods are basically carried out during the Validation process. Test plans, test suites and test cases are developed and then used during the various phases of the Validation process.

The activities involved in the Validation process are as follows:

Code Validation/Testing:
Developers as well as testers do the code validation. Unit code validation, or unit testing, is a type of testing which the developers conduct in order to find bugs in the code unit/module developed by them. Code testing other than unit testing can be done by testers or developers.

Integration Validation/Testing:
Integration testing is carried out in order to find out whether different (two or more) units/modules coordinate properly. This test helps to find out whether there is any defect in the interfaces between different modules.

Functional Validation/Testing:
This type of testing is carried out in order to find out whether the system meets the functional requirements. In this type of testing, the system is validated for its functional behavior. Functional testing does not deal with the internal coding of the project; instead, it checks whether the system behaves as per the expectations.

User Acceptance Testing or System Validation:
In this type of testing, the developed product is handed over to the user / paid testers in order to test it in a real-time scenario. The product is validated to find out whether it works according to the system specifications and satisfies all the user requirements. As the users / paid testers use the software, it may happen that bugs that are as yet undiscovered come up; these are communicated to the developers to be fixed. This helps in the improvement of the final product.


Comparison between Verification & Validation

Validation: Am I building the right product?
Verification: Am I building the product right?

Validation: Determining if the system complies with the requirements and performs the functions for which it is intended and meets the organization's goals and user needs. It is traditional and is performed at the end of the project.
Verification: The review of interim work steps and interim deliverables during a project to ensure they are acceptable. Determines if the system is consistent, adheres to standards, uses reliable techniques and prudent practices, and performs the selected functions in the correct manner.

Validation: Am I accessing the right data (in terms of the data required to satisfy the requirement)?
Verification: Am I accessing the data right (in the right place; in the right way)?

Validation: A high-level activity.
Verification: A low-level activity.

Validation: Performed after a work product is produced, against established criteria, ensuring that the product integrates correctly into the environment.
Verification: Performed during development on key artifacts, through walkthroughs, reviews and inspections, mentor feedback, training, checklists and standards.

Validation: Determination of the correctness of the final software product by a development project with respect to the user needs and requirements.
Verification: Demonstration of the consistency, completeness, and correctness of the software at each stage, and between each stage, of the development life cycle.

### Section 3 - Verification & Validation Ends Here ###

Section 4 - Testing Techniques & Types of Testing


    What are the different types of Testing? Please elaborate on each type.

Black box testing - A testing technique where the internal code of the application being tested is not known by the tester. In a black box test, the tester only knows the valid inputs and what the expected outcomes should be, not how the program arrives at those outputs. The tester never examines the programming code and does not need any further knowledge of the program other than its specifications.

Various testing types that fall under the black box testing strategy are: functional testing, regression testing, system testing, interface testing, user acceptance testing, sanity testing, smoke testing, load testing, volume testing, stress testing, usability testing, ad-hoc testing, exploratory testing, recovery testing, alpha testing, beta testing, etc.

Functional testing - This type of testing is carried out in order to find out whether the system meets the functional requirements. In this type of testing, the system is validated for its functional behavior. Functional testing does not deal with the internal coding of the project; instead, it checks whether the system behaves as per the expectations.

Regression testing - Testing after fixes or modifications of the software is called regression testing. When a defect is fixed or some new functionality is implemented, impact analysis is done and test cases are re-executed in order to check whether the previous functionality of the application is still working fine and the new changes have not introduced any new bugs. Automated testing tools can be especially useful for this type of testing.
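In practice, a regression suite is often just the existing automated checks re-run after every change; a minimal pytest-style sketch, where tax() stands in for a hypothetical function that was just modified:

    # test_regression.py - re-run after every fix or change (pytest style)
    def tax(amount: float) -> float:
        """Hypothetical function under regression test."""
        return round(amount * 0.18, 2)

    def test_existing_behavior_still_holds():
        # guards functionality that worked before the change
        assert tax(100.0) == 18.0

    def test_fix_for_reported_defect():
        # added when a defect was fixed, kept in the suite forever after
        assert tax(0.0) == 0.0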

System testing - System testing is performed on the entire system in the context of a Functional Requirement Specification (FRS) and/or a System Requirement Specification (SRS). System testing is an investigatory testing phase, where the focus is to have an almost destructive attitude and to test not only the design, but also the behavior and the believed expectations of the customer.

Integration testing - Testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Acceptance testing - The developed product is handed over to the user / paid testers in order to test it in a real-time scenario. The product is validated to find out whether it works according to the system specifications and satisfies all the user requirements.

Smoke testing - Smoke testing is done by developers before a build is released, or by testers before accepting a build for further testing. It is also known as a build verification test.

Sanity testing - A sanity test exercises the basic functions of an application to determine whether the application logic is generally functional and correct (for example, an interest rate calculation for a financial application). If the sanity test fails, it is not reasonable to attempt more rigorous testing.
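Continuing the interest rate example, a sanity check might assert only the most basic expected behavior before deeper testing begins; simple_interest() is an illustrative stand-in, not from any named application:

    # sanity_check.py - a quick go/no-go probe of core logic
    def simple_interest(principal: float, rate: float, years: float) -> float:
        return principal * rate * years

    if __name__ == "__main__":
        # 1000 at 5% for 2 years must yield 100; if even this fails,
        # the build is rejected and rigorous testing is not attempted
        assert abs(simple_interest(1000.0, 0.05, 2) - 100.0) < 1e-9
        print("Sanity check passed - build accepted for further testing")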

Load testing - The application is tested against heavy loads or inputs, such as the testing of web sites, in order to find out at what point the web site/application fails or at what point its performance degrades. Load testing tests the expected usage of a software program by simulating multiple users accessing the program's services concurrently.
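As a minimal sketch, concurrent usage can be simulated with threads; the handle_request() function below is a hypothetical stand-in for the service under load:

    # load_sketch.py - simulate N concurrent users, measure elapsed time
    import time
    from concurrent.futures import ThreadPoolExecutor

    def handle_request(user_id: int) -> int:
        time.sleep(0.01)          # stand-in for real service work
        return user_id

    def run_load(users: int) -> float:
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=users) as pool:
            list(pool.map(handle_request, range(users)))
        return time.perf_counter() - start

    if __name__ == "__main__":
        for n in (10, 100, 500):  # ramp up to find the degradation point
            print(f"{n} concurrent users -> {run_load(n):.2f}s")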

Volume testing - Volume testing refers to testing a software application with a certain data volume. This volume can, in generic terms, be the database size, or it could also be the size of an interface file that is the subject of volume testing. For example, if you want to volume test your application with a specific database size, you will expand your database to that size and then test the application's performance on it.

Stress testing - Stress testing often refers to tests that put a greater emphasis on robustness, availability, and error handling under a heavy load, rather than on what would be considered correct behavior under normal circumstances. In particular, the goal of such tests may be to ensure the software doesn't crash in conditions of insufficient computational resources (such as memory or disk space), unusually high concurrency, or denial of service attacks. Example: a web server may be stress tested using scripts, bots, and various denial of service tools to observe the performance of a web site during peak loads.

Usability testing - Testing of a system against the usability guidelines set by the customer, or testing the system for its user-friendliness. The common points under usability are:

Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?
Efficiency: Once users have learned the design, how quickly can they perform tasks?
Memorability: When users return to the design after a period of not using it, how easily can they re-establish proficiency?
Errors: How many errors do users make, how severe are these errors, and how easily can they recover from the errors?
Satisfaction: How pleasant is it to use the design?

Ad-hoc testing - Ad hoc testing is a commonly used term for software testing performed without planning and documentation. The tests are intended to be run only once, unless a defect is discovered. Ad hoc testing is the least formal of test methods and is a part of exploratory testing. It has been criticized because it isn't structured, but this can also be a strength: important things can be found quickly. It is performed with improvisation; the tester seeks to find bugs with any means that seem appropriate.

Exploratory testing - This testing is similar to ad-hoc testing and is done in order to learn/explore the application.


Recovery testing - Recovery testing is basically done in order to check how fast and how well the application can recover from any type of crash, hardware failure, etc. The type or extent of recovery is specified in the requirement specifications.

Alpha testing - In-house developers often test the software in what is known as 'alpha' testing, which is often performed under a debugger or with hardware-assisted debugging to catch bugs quickly. The software can then be handed over to testing staff for additional inspection in an environment similar to how it was intended to be used.

Beta testing - The software is distributed as a beta version to the users, and the users test the application at their sites. As the users explore the software, any exceptions/defects that occur are reported to the developers.

White Box Testing

White box testing is based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths and conditions.

Unit testing - The most 'micro' scale of testing; tests particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
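A minimal sketch of a unit test, using Python's standard unittest module; the add() function is an illustrative stand-in for a real code module:

    # test_unit.py - unit test for a single function
    import unittest

    def add(a: int, b: int) -> int:
        """The unit under test (illustrative)."""
        return a + b

    class TestAdd(unittest.TestCase):
        def test_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_negative_numbers(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        unittest.main()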

    Other Types of testing include

    Install/uninstall testing - Testing of full, partial, or upgrade install/uninstall processes.

Security testing - Testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.

Compatibility testing - Testing how well software performs in a particular hardware/software/operating-system/network environment.

Context-driven testing - Testing driven by an understanding of the environment, culture, and intended use of the software. For example, the testing approach for life-critical medical equipment software would be completely different from that for a low-cost computer game.

Mutation testing - A method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
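As a deliberately simplified illustration of the idea (not the output of a real mutation tool), a single hand-made mutant should be caught, or 'killed', by a useful test suite:

    # mutation_sketch.py - if the suite passes on the mutant, the suite is weak
    def original(x):
        return x + 1

    def mutant(x):
        return x - 1      # deliberately injected 'bug' (mutation of + to -)

    def suite(func) -> bool:
        """Returns True if all test cases pass for the given implementation."""
        return func(1) == 2 and func(0) == 1

    assert suite(original)        # suite passes on the real code
    assert not suite(mutant)      # a useful suite 'kills' the mutant
    print("Mutant killed - the test data is doing its job")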

Incremental integration testing - Continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

    Testing techniques

    Equivalence Partitioning

This method divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors, and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions.

Equivalence classes may be defined according to the following guidelines:

1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined. (For example, given set A = {1, 4, 5, 7, 9, 12}, any of the members 1, 4, 5, 7, 9 and 12 belongs to the valid class, while a value such as 2, which is not contained in the set, belongs to the invalid class.)

4. If an input condition is Boolean (true or false), then one valid and one invalid equivalence class are defined.

Boundary Value Analysis

This method leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning, since it selects test cases at the edges of a class. Rather than focusing solely on input conditions, BVA also derives test cases from the output domain. BVA guidelines include:


1. For input ranges bounded by a and b, test cases should include the values a and b and values just above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers, and values just above and below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be designed to exercise the data structure at its boundary.

    Cause-Effect Graphing Techniques

Cause-effect graphing is a technique that provides a concise representation of logical conditions and corresponding actions. There are four steps:

1. Causes (input conditions) and effects (actions) are listed for a module, and an identifier is assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.

Does every software project need testers?

While all projects will benefit from testing, some projects may not require independent test staff to succeed.

Which projects may not need independent test staff? The answer depends on the size and context of the project, the risks, the development methodology, the skill and experience of the developers, and other factors. For instance, if the project is a short-term, small, low-risk project, with highly experienced programmers utilizing thorough unit testing or test-first development, then test engineers may not be required for the project to succeed.

In some cases an IT organization may be too small or too new to have a testing staff, even if the situation calls for it. In these circumstances it may be appropriate to instead use contractors or outsourcing, or to adjust the project management and development approach (by switching to more senior developers and agile test-first development, for example). Inexperienced managers sometimes gamble on the success of a project by skipping thorough testing or by having programmers do post-development functional testing of their own work, a decidedly high-risk gamble.

For non-trivial-size projects or projects with non-trivial risks, a testing staff is usually necessary. As in any business, the use of personnel with specialized skills enhances an organization's ability to be successful in large, complex, or difficult tasks. It allows for both a) deeper and stronger skills and b) the contribution of differing perspectives. For example, programmers typically have the perspective of 'what are the technical issues in making this functionality work?'. A test engineer typically has the perspective of 'what might go wrong with this functionality, and how can we ensure it meets expectations?'. Technical people who can be highly effective in approaching tasks from both of those perspectives are rare, which is why, sooner or later, organizations bring in test specialists.

### Section 4 - Testing Techniques & Types of Testing Ends Here ###

Section 5 - Life Cycles & Models

What is the Software Development Life Cycle? OR Elaborate on SDLC

The Systems Development Life Cycle (SDLC) is a conceptual model used in project management that describes the stages involved in an information system development project, from an initial feasibility study through maintenance of the completed application. Various SDLC methodologies have been developed to guide the processes involved, including the waterfall model (the original SDLC method), rapid application development (RAD), joint application development (JAD), the fountain model and the spiral model.

The classic waterfall model was the first SDLC method; the phases it involves are described below.

Briefly, the different phases are:


Feasibility

The feasibility study is used to determine if the project should get the go-ahead. If the project is to proceed, the feasibility study will produce a project plan and budget estimates for the future stages of development.

Requirement Analysis and Design

Analysis gathers the requirements for the system. This stage includes a detailed study of the business needs of the organization. Options for changing the business process may be considered. Design focuses on high-level design (what programs are needed and how they are going to interact), low-level design (how the individual programs are going to work), interface design (what the interfaces are going to look like) and data design (what data will be required). During these phases, the software's overall structure is defined. Analysis and design are very crucial in the whole development cycle: any glitch in the design phase can be very expensive to solve in a later stage of the software development, so much care is taken during this phase. The logical system of the product is developed in this phase.

Implementation

In this phase the designs are translated into code. Computer programs are written using a conventional programming language or an application generator. Programming tools like compilers, interpreters, and debuggers are used to generate the code. Different high-level programming languages like C, C++, Pascal, and Java are used for coding; the right programming language is chosen with respect to the type of application.

Testing

In this phase the system is tested. Normally programs are written as a series of individual modules, each subject to separate and detailed tests. The system is then tested as a whole: the separate modules are brought together and tested as a complete system. The system is tested to ensure that interfaces between modules work (integration testing), that the system works on the intended platform and with the expected volume of data (volume testing), and that the system does what the user requires (acceptance/beta testing).

Maintenance

Inevitably the system will need maintenance. Software will definitely undergo change once it is delivered to the customer, and there are many reasons for change. Change could happen because of unexpected input values into the system; in addition, changes in the system's environment could directly affect the software's operation. The software should be developed to accommodate changes that could happen during the post-implementation period.


    Elaborate on Testing Life Cycle

The software testing life cycle identifies which test activities to carry out and when to accomplish them. Even though testing differs between organizations, there is a common testing life cycle.

The Software Testing Life Cycle consists of the following (generic) phases:

Test Planning,
Test Analysis,
Test Design,
Construction and Verification,
Testing Cycles,
Final Testing and Implementation, and
Post Implementation.

Software testing has its own life cycle that intersects with every stage of the SDLC. The basic requirement in the software testing life cycle is to control/deal with software testing: manual, automated and performance.

Test Planning
This is the phase where the Project Manager has to decide what things need to be tested, whether the appropriate budget is available, etc. Naturally, proper planning at this stage greatly reduces the risk of low-quality software. This planning is an ongoing process with no end point.

Activities at this stage include preparation of a high-level test plan. (According to the IEEE test plan template, the Software Test Plan (STP) is designed to prescribe the scope, approach, resources, and schedule of all testing activities. The plan must identify the items to be tested, the features to be tested, the types of testing to be performed, the personnel responsible for testing, the resources and schedule required to complete testing, and the risks associated with the plan.) Almost all of the activities done during this stage are included in this software test plan and revolve around the test plan.


Test Analysis
Once the test plan is made and decided upon, the next step is to delve a little deeper into the project and decide what types of testing should be carried out at the different stages of the SDLC; whether we need, or plan, to automate, and if yes, when the appropriate time to automate is; and what type of specific documentation is needed for testing.

Proper and regular meetings should be held between the testing teams, project managers, development teams and business analysts to check the progress of things. These meetings give a fair idea of the movement of the project, ensure the completeness of the test plan created in the planning phase, and help refine the testing strategy created earlier. We also start creating test case formats and the test cases themselves. In this stage we need to develop a functional validation matrix based on the business requirements to ensure that all system requirements are covered by one or more test cases, identify which test cases to automate, and begin the review of documentation, i.e. functional design, business requirements, product specifications, product externals, etc. We also have to define the areas for stress and performance testing.

Test Design
The test plans and cases developed in the analysis phase are revised. The functional validation matrix is also revised and finalized. In this stage the risk assessment criteria are developed. If you have decided on automation, then you have to select which test cases to automate and begin writing scripts for them. Test data is prepared. Standards for unit testing and pass/fail criteria are defined here. The schedule for testing is revised (if necessary) and finalized, and the test environment is prepared.

Construction and Verification
In this phase we have to complete all the test plans and test cases, and complete the scripting of the automated test cases; the stress and performance testing plans also need to be completed. We have to support the development team in their unit testing phase, and obviously bug reporting is done as and when bugs are found. Integration tests are performed and errors (if any) are reported.

Testing Cycles
In this phase we have to complete testing cycles until the test cases are executed without errors or a predefined condition is reached: run test cases --> report bugs --> revise test cases (if needed) --> add new test cases (if needed) --> bug fixing --> retesting (test cycle 2, test cycle 3, ...).

Final Testing and Implementation
In this phase we execute the remaining stress and performance test cases, complete/update the documentation for testing, and provide and complete the different metrics for testing. Acceptance, load and recovery testing will also be conducted, and the application needs to be verified under production conditions.

Post Implementation
In this phase, the testing process is evaluated and the lessons learnt from that testing process are documented. A line of attack to prevent similar problems in future projects is identified, and plans are created to improve the processes. The recording of new errors and enhancements is an ongoing process. Cleaning up of the test environment is done, and test machines are restored to baselines in this stage.

The table below describes the STLC in brief.

Software Testing Life Cycle

Phase               | Activities                                                              | Outcome
--------------------|-------------------------------------------------------------------------|-------------------------------------------------------------
Planning            | Create high-level test plan                                             | Test plan, refined specification
Analysis            | Create detailed test plan, functional validation matrix, test cases     | Revised test plan, functional validation matrix, test cases
Design              | Revise test cases; select which test cases to automate                  | Revised test cases, test data sets, risk assessment sheet
Construction        | Script the test cases to automate                                       | Test procedures/scripts, drivers, test results, bug reports
Testing cycles      | Complete testing cycles                                                 | Test results, bug reports
Final testing       | Execute remaining stress and performance tests, complete documentation  | Test results and different metrics on test efforts
Post implementation | Evaluate testing processes                                              | Plan for improvement of the testing process


Bug Life Cycle or Defect Life Cycle, or what do you do after you find a defect?
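One common convention for the defect life cycle can be sketched as a small state machine; the state names and transitions below are typical of many organizations and defect-tracking tools, but they vary in practice:

    # defect_life_cycle.py - one common set of states and allowed transitions
    TRANSITIONS = {
        "NEW":      ["ASSIGNED", "REJECTED", "DEFERRED", "DUPLICATE"],
        "ASSIGNED": ["OPEN"],
        "OPEN":     ["FIXED"],
        "FIXED":    ["RETEST"],
        "RETEST":   ["CLOSED", "REOPENED"],   # tester verifies the fix
        "REOPENED": ["ASSIGNED"],             # fix failed; the cycle repeats
        "REJECTED": [], "DEFERRED": [], "DUPLICATE": [], "CLOSED": [],
    }

    def move(state: str, new_state: str) -> str:
        if new_state not in TRANSITIONS[state]:
            raise ValueError(f"illegal transition {state} -> {new_state}")
        return new_state

    # After you find a defect: log it as NEW, then it flows through the cycle
    state = "NEW"
    for nxt in ("ASSIGNED", "OPEN", "FIXED", "RETEST", "CLOSED"):
        state = move(state, nxt)
    print("Defect verified and closed")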


    What steps are needed to develop and run software tests? OR What are the Entry Criteria for testing?

    The following are some of the steps to consider:

- Obtain requirements, functional design, and internal design specifications and other available/necessary information
- Obtain budget and schedule requirements
- Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
- Identify the application's higher-risk and more important aspects, set priorities, and determine the scope and limitations of tests
- Determine test approaches and methods - unit, integration, functional, system, security, load, usability tests, etc.
- Determine test environment requirements (hardware, software, configuration, versions, communications, etc.)
- Determine testware requirements (automation tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
- Determine test input data requirements
- Identify tasks, those responsible for tasks, and labor requirements
- Set schedule estimates, timelines, milestones
- Prepare test plan document(s) and have needed reviews/approvals
- Write test cases
- Have needed reviews/inspections/approvals of test cases
- Prepare the test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
- Obtain and install software releases
- Perform tests
- Evaluate and report results
- Track problems/bugs and fixes
- Retest as needed

    Exit Criteria for testing OR When to Stop Testing

This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done.

Common factors in deciding when to stop are:
- Deadlines (release deadlines, testing deadlines)
- Test cases completed with a certain percentage passed
- Test budget depleted
- Coverage of code/functionality/requirements reaches a specified point
- The rate at which bugs are being found becomes too small
- The beta or alpha testing period ends

V-Model

The V-model is a software development process which demonstrates the relationships between each phase of the development life cycle and its associated phase of testing.


Requirements Analysis

In this phase, the requirements of the proposed system are collected by analyzing the needs of the user(s). This phase is concerned with establishing what the required system has to perform; however, it does not determine how the software will be designed or built. Usually the users are interviewed, and a document called the user requirements document is generated. The user requirements document will typically describe the system's functional, physical, interface, performance, data, security, etc. requirements as expected by the user. It is the document business analysts use to communicate their understanding of the system back to the users. The users carefully review this document, as it will serve as the guideline for the system designers in the system design phase. The user acceptance tests are designed in this phase.

System Design

System engineers analyze and understand the business of the proposed system by studying the user requirements document. They figure out the possibilities and techniques by which the user requirements can be implemented. If any of the requirements are not feasible, the user is informed of the issue; a resolution is found and the user requirements document is edited accordingly.

The software specification document, which serves as a blueprint for the development phase, is generated. This document contains the general system organization, menu structures, data structures, etc. Other technical documentation, like entity diagrams and a data dictionary, will also be produced in this phase. The documents for system testing are prepared in this phase.

Architecture Design

This phase can also be called high-level design. The baseline in selecting the architecture is that it should realize all of the requirements. The architecture document typically consists of the list of modules, brief functionality of each module, their interface relationships, dependencies, database tables, architecture diagrams, technology details, etc. The integration test design is carried out in this phase.

Module Design

This phase can also be called low-level design. The designed system is broken up into smaller units or modules, and each of them is explained so that the programmer can start coding directly. The low-level design document, or program specification, will contain a detailed functional logic of the module in pseudo code; database tables, with all elements, including their type and size; all interface details, with complete API references; all dependency issues; error message listings; and complete inputs and outputs for the module. The unit test design is developed in this stage.


Coding

The actual coding of the application is done in this phase.

Unit Testing

In the V-model of software development, unit testing is the first stage of the dynamic testing process. It involves analysis of the written code with the intention of eliminating errors. It also verifies that the code is efficient and adheres to the adopted coding standards. Testing is usually white box. It is done using the unit test design prepared during the module design phase. This may be carried out by software testers, software developers, or both.

Integration Testing

In integration testing, the separate modules are tested together to expose faults in the interfaces and in the interaction between integrated components. Testing is usually black box, as the code is not directly checked for errors. It is done using the integration test design prepared during the architecture design phase. Integration testing is generally conducted by software testers.

System Testing

System testing compares the system specifications against the actual system. The system test design is derived from the system design documents and is used in this phase. Sometimes system testing is automated using testing tools. Once all the modules are integrated, several errors may arise; testing done at this stage is called system testing.

User Acceptance Testing

The software is tested in the "real world" by the actual end users. Acceptance testing is performed to determine whether the system satisfies its acceptance criteria. It enables the customer to determine whether or not to accept the system.

Benefits of the V-model

The V-model deploys a well-structured method in which each phase can be implemented using the detailed documentation of the previous phase. Testing activities like test design start at the beginning of the project, well before coding, and this saves a large amount of project time.

    ### Section 5 - Life Cycles & Models Ends Here ###


    Section 6 - Deliverables of Testing

    What is a Test Strategy?

The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy, and reviews the plan with the project team.

The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.

    Inputs for this process

    A description of the required hardware and software components, including test

    tools. This information comes from the test environment, including test tool data. A description of roles and responsibilities of the resources required for the test

    and schedule constraints. This information comes from man-hours and schedules.

    Testing methodology. This is based on known standards.

    Functional and technical requirements of the application. This information comesfrom requirements, change request, technical and functional design documents.

    Requirements that the system can not provide, e.g. system limitations.

    Outputs for this process:

- An approved and signed-off test strategy document and test plan, including test cases.
- Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.


    What is a 'Test plan'?

A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:

- Title
- Identification of software including version/release numbers
- Revision history of document including authors, dates, approvals
- Table of Contents
- Purpose of document, intended audience
- Objective of testing effort
- Software product overview
- Relevant related document list, such as requirements, design documents, other test plans, etc.
- Relevant standards or legal requirements
- Traceability requirements
- Relevant naming conventions and identifier conventions
- Overall software project organization and personnel/contact-info/responsibilities
- Test organization and personnel/contact-info/responsibilities
- Assumptions and dependencies
- Project risk analysis
- Testing priorities and focus
- Scope and limitations of testing
- Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
- Outline of data input equivalence classes, boundary value analysis, error classes
- Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
- Test environment setup and configuration issues
- Software migration processes
- Software Change Management processes
- Test data setup requirements
- Database setup requirements
- Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
- Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
- Test automation - justification and overview
- Test tools to be used, including versions, patches, etc.
- Test script/test code maintenance processes and version control
- Problem tracking and resolution - tools and processes
- Project test metrics to be used
- Reporting requirements and testing deliverables
- Software entrance and exit criteria
- Initial sanity testing period and criteria
- Test suspension and restart criteria
- Personnel allocation
- Personnel pre-training needs
- Test site/location
- Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
- Relevant proprietary, classified, security and licensing issues
- Open issues
- Appendix - glossary, acronyms, etc.

    What is a Test case?

A test case describes an input, action, or event and an expected response, to determine if a feature of a software application is working correctly. A test case may contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results. The level of detail may vary significantly depending on the organization and project context.

The process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.
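To make the listed particulars concrete, here is a minimal sketch of a test case represented as a Python data structure. The field values describe a hypothetical login feature and are invented for illustration, not drawn from this document:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Fields mirror the particulars listed above.
    identifier: str
    name: str
    objective: str
    setup: str
    input_data: dict
    steps: list = field(default_factory=list)
    expected_result: str = ""

tc_login_001 = TestCase(
    identifier="TC-LOGIN-001",
    name="Valid login",
    objective="Verify a registered user can log in",
    setup="User 'alice' exists with a known password",
    input_data={"username": "alice", "password": "correct-horse"},
    steps=["Open login page", "Enter credentials", "Click 'Log in'"],
    expected_result="User is redirected to the dashboard",
)
```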

    What is a Test Scenario?

    The terms "test scenario" and "test case" are often used synonymously. Test scenarios aretest cases or test scripts, and the sequence in which they are to be executed. Testscenarios are test cases that ensure that all business process flows are tested from end toend. Test scenarios are independent tests, or a series of tests that follow each other, where

    each of them dependent upon the output of the previous one. Test scenarios are prepared by reviewing functional requirements, and preparing logical groups of functions that can be further broken into test procedures. Test scenarios are designed to represent bothtypical and unusual situations that may occur in the application. Test engineers defineunit test requirements and unit test scenarios. Test engineers also execute unit testscenarios. It is the test team that, with assistance of developers and clients, develops testscenarios for integration and system testing. Test scenarios are executed through the useof test procedures or scripts. Test procedures or scripts define a series of steps necessary

    Page 31 of 52

  • 8/3/2019 Software Testing Imp Doc

    32/52

    to perform one or more test scenarios. Test procedures or scripts may cover multiple testscenarios.


Difference between Test Strategy & Test Plan

The test strategy is the higher-level document: a formal description of the overall testing approach, developed from the requirements, stating how the product will be tested at each level. The test plan is the project-specific working document derived from it, describing the objectives, scope, approach, and focus of the testing effort, including items such as test cases, conditions, the test environment, pass/fail criteria, and risk assessment.

    ### Section 6 - Deliverables of Testing Ends Here ###


    Section 7 - Common Terminology

    What is Priority? What is Severity? Please elaborate.

Priority - Priority is the order in which a developer has to fix bugs. The available priorities range from P1 (most important) to P5 (least important).

Severity - Severity is how seriously the bug is impacting the application.

- Show Stopper: blocks development and/or testing work
- Critical: crashes, loss of data, severe memory leak
- Major: loss of function
- Normal: regular issue, some loss of functionality under specific circumstances
- Minor: loss of function, or other problem where an easy workaround is present
- Trivial: cosmetic problem, like misspelled words or misaligned text

    Priority and Severity Examples

    High Priority & High Severity : A show stopper error which occurs on the basicfunctionality of the application. (E.g. A site maintains student details, on saving record if it, doesn't allow to save the record then this is high priority and high severity bug.)

    High Priority & Low Severity: The spell mistakes that happens on the cover page or heading or title of an application.

    High Severity & Low Priority: The application generates a show stopper for a link/Report which is rarely used by the end user.

    Low Priority and Low Severity: Any cosmetic or spell issues which is with in a paragraph or in the report (Not on cover page, heading, title).
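As a hedged sketch of how a defect-tracking record might encode these two independent attributes, the enum values below follow the severity levels listed above, while the `Bug` class itself and the sample defects are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    SHOW_STOPPER = 1
    CRITICAL = 2
    MAJOR = 3
    NORMAL = 4
    MINOR = 5
    TRIVIAL = 6

class Priority(Enum):
    P1 = 1  # most important
    P2 = 2
    P3 = 3
    P4 = 4
    P5 = 5  # least important

@dataclass
class Bug:
    summary: str
    severity: Severity  # how seriously the bug impacts the application
    priority: Priority  # how urgently it must be fixed

# High severity, low priority: a crash in a rarely used yearly report.
yearly_report_crash = Bug("Yearly report crashes", Severity.CRITICAL, Priority.P4)
# Low severity, high priority: a typo on the home page of a popular site.
homepage_typo = Bug("Typo on home page", Severity.TRIVIAL, Priority.P1)
```

The point of keeping the two fields separate is exactly the one the examples above make: severity is a property of the defect's impact, while priority is a scheduling decision, and they can vary independently.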


    Traceability matrix

A traceability matrix is created by associating requirements with the test cases/scenarios that satisfy them. Tests are associated with the requirements on which they are based, and with the product tested to meet the requirement.

Traceability requires unique identifiers for each requirement and product. Numbers for products are established in a configuration management (CM) plan.

Traceability ensures completeness: that all lower-level requirements come from higher-level requirements, and that all higher-level requirements are allocated to lower-level requirements. Traceability is also used to manage change and provides the basis for test planning.

    Sample Traceability Matrix

A traceability matrix is a report from the requirements database or repository. What information the report contains depends on your need. Information requirements determine the associated information that you store with the requirements. Requirements management tools capture associated information or provide the capability to add it.

The examples show forward and backward tracing between user and system requirements. User requirement identifiers begin with "U" and system requirements with "S." Tracing S12 to its source makes it clear this requirement is erroneous: it must be eliminated, rewritten, or the traceability corrected.


The sample matrix uses the following columns, with rows grouped under project objectives (e.g. "Objective 1:"):

Unique No. | Requirement | Source of Requirement | Software Reqs. Spec / Functional Req. Doc. | Design Spec. | Program Module | Test Spec. | Test Case(s) | Successful Test Verification | Modification of Requirement | Remarks

    Description of Matrix Fields

Develop a matrix to trace the requirements back to the project objectives identified in the Project Plan, and forward through the remainder of the project life cycle stages. Place a copy of the matrix in the Project File. Expand the matrix in each stage to show traceability of work products to the requirements and vice versa. The requirements traceability matrix should contain the following fields:

- A unique identification number containing the general category of the requirement (e.g., SYSADM) and a number assigned in ascending order (e.g., 1.0; 1.1; 1.2).
- The requirement statement.
- Requirement source (Conference; Configuration Control Board; Task Assignment, etc.).
- Software Requirements Specification/Functional Requirements Document paragraph number containing the requirement.
- Design Specification paragraph number containing the requirement.
- Program Module containing the requirement.
- Test Specification containing the requirement test.
- Test Case number(s) where the requirement is to be tested (optional).
- Verification of successful testing of the requirement.
- Modification field. If the requirement was changed, eliminated, or replaced, indicate the disposition and authority for the modification.
- Remarks.
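As a rough illustration of forward and backward tracing, here is a minimal sketch in Python. The requirement and test-case identifiers are invented for the example, echoing the "S12 has no valid source" situation described above:

```python
# Forward trace: requirement ID -> test cases that cover it.
# Identifiers ("S10", "TC-7", ...) are invented for illustration.
requirement_to_tests = {
    "S10": ["TC-7", "TC-8"],
    "S11": ["TC-9"],
    "S12": [],  # no covering test: a coverage gap to investigate
}

# Backward trace: system requirement -> user requirement it derives from.
system_to_user = {"S10": "U1", "S11": "U2"}  # S12 has no source

# Report requirements with no covering tests and with no source requirement.
untested = [r for r, tests in requirement_to_tests.items() if not tests]
unsourced = [r for r in requirement_to_tests if r not in system_to_user]
print("Requirements without tests:", untested)     # ['S12']
print("Requirements without a source:", unsourced)  # ['S12'] -> erroneous
```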


    What is bug, defect, issue, error?

Bug - A fault in a program which causes the program to perform in an unintended or unanticipated manner. "Bug" is the terminology used by test engineers.

Defect - Nonconformance to requirements or to the functional/program specification. "Defect" is the terminology used by programmers.

Issue - A major problem that will impede the progress of the project and cannot be resolved by the project manager and project team without outside help.

Error - The deviation of a measurement, observation, or calculation from the truth.


    Risk Analysis:

A risk is a potential for loss or damage to an organization from materialized threats. Risk analysis attempts to identify all the risks and then quantify their severity. A threat, as we have seen, is a possible damaging event; if it occurs, it exploits a vulnerability in the security of a computer-based system.

    Risk Identification:

1. Software Risks: Knowledge of the most common risks associated with software development, and the platform you are working on.

2. Business Risks: The most common risks associated with the business using the software.

3. Testing Risks: Knowledge of the most common risks associated with software testing for the platform you are working on, the tools being used, and the test methods being applied.

4. Premature Release Risk: The ability to determine the risk associated with releasing unsatisfactory or untested software products.

5. Risk Methods: Strategies and approaches for identifying risks or problems associated with implementing and operating information technology, products, and processes; assessing their likelihood; and initiating strategies to test those risks.

What is a hot fix?

A hot fix is a single, cumulative package that includes one or more files used to address a problem in a software product. Typically, hot fixes are made to address a specific customer situation and may not be distributed outside the customer organization. Hot fixes are generally provided for high-priority bugs found at the customer site.

What is Defect Removal Efficiency?

The DRE is the percentage of defects that have been removed during an activity, computed with the equation below. The DRE can also be computed for each software development activity and plotted on a bar graph to show the relative defect removal efficiencies of each activity. Or, the DRE may be computed for a specific task or technique (e.g. design inspection, code walkthrough, unit test, 6 months of operation, etc.).

DRE = (Number of Defects Removed / Number of Defects at Start of Process) * 100

Across the testing/release boundary this is often written as DRE = A / (A + B), where A = defects found by the testing team and B = defects found by the customer. For example, DRE = 0.8 means the testing team caught 80% of all defects.
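A small sketch of the computation in Python; the defect counts are invented for the example:

```python
def defect_removal_efficiency(found_by_testing: int, found_by_customer: int) -> float:
    """DRE = A / (A + B), where A = defects found by the testing team
    and B = defects found by the customer after release."""
    total = found_by_testing + found_by_customer
    if total == 0:
        raise ValueError("no defects recorded")
    return found_by_testing / total

# Invented counts: 80 defects caught in testing, 20 reported by customers.
print(defect_removal_efficiency(80, 20))  # 0.8, i.e. 80% removal efficiency
```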


Section 8 - Scenario Based Testing

    What is the Difference between Sanity testing and Smoke testing?

A smoke test determines whether it is possible to continue testing, as opposed to whether it is reasonable. A software smoke test determines whether the program launches and whether its interfaces are accessible and responsive (for example, the responsiveness of a web page or an input button). If the smoke test fails, it is impossible to conduct a sanity test.

A sanity test exercises the smallest subset of application functions needed to determine whether the application logic is generally functional and correct (for example, an interest rate calculation for a financial application). If the sanity test fails, it is not reasonable to attempt more rigorous testing.

Both sanity tests and smoke tests are ways to avoid wasting time and effort by quickly determining whether an application is too flawed to merit any rigorous testing. Many companies run sanity tests on a weekly build as part of their development process.
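A compact sketch of the distinction for a hypothetical web application; the localhost URL and the interest-calculation check are assumptions made for illustration:

```python
import urllib.request

def smoke_test(base_url: str) -> bool:
    """Can we test at all? Check that the application is up and responding."""
    try:
        with urllib.request.urlopen(base_url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def sanity_test() -> bool:
    """Is core logic basically correct? Spot-check one key calculation."""
    # Hypothetical core computation: simple interest on 1000 at 5% over 2 years.
    interest = 1000 * 0.05 * 2
    return interest == 100.0

if smoke_test("http://localhost:8080/"):  # assumed local deployment
    assert sanity_test(), "core logic broken: stop before rigorous testing"
```

If the smoke test fails, the sanity test is never attempted, which mirrors the ordering described above.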

    What is the exact difference between a product and a project? Give an example.

A Project is developed for a particular client, and the requirements are defined by that client. A Product is developed for the market; the requirements are defined by the company itself, by conducting market surveys.

Example:
Project: a shirt we have a tailor stitch to our own specifications.
Product: a ready-made shirt, where the company decides on a set of standard measurements and manufactures to them. Mainframes are a product.
A product typically has many versions, while a project has fewer versions, i.e. it depends upon change requests and enhancements.

    What if the software is so buggy it can't really be tested at all?

The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules, and indicates deeper problems in the software development process (such as insufficient unit testing or insufficient integration testing, poor design, improper build or release procedures, etc.), managers should be notified and provided with some documentation as evidence of the problem.


    What if there isn't enough time for thorough testing?

Use risk analysis, along with discussion with project stakeholders, to determine where testing should be focused.

Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate for most software development projects. This requires judgment skills, common sense, and experience. (If warranted, formal methods are also available.)

    The following points can be considered:

- Which functionality is most important to the project's intended purpose?
- Which functionality is most visible to the user?
- Which functionality has the largest safety impact?
- Which functionality has the largest financial impact on users?
- Which aspects of the application are most important to the customer?
- Which aspects of the application can be tested early in the development cycle?
- Which parts of the code are most complex, and thus most subject to errors?
- Which parts of the application were developed in rush or panic mode?
- Which aspects of similar/related previous projects caused problems?
- Which aspects of similar/related previous projects had large maintenance expenses?
- Which parts of the requirements and design are unclear or poorly thought out?
- What do the developers think are the highest-risk aspects of the application?
- What kinds of problems would cause the worst publicity?
- What kinds of problems would cause the most customer service complaints?
- What kinds of tests could easily cover multiple functionalities?
- Which tests will have the best high-risk-coverage to time-required ratio?

    What if the project isn't big enough to justify extensive testing?

Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is needed to prioritize the areas of testing. The tester might then do ad hoc testing, or write up a limited test plan based on the risk analysis.

    Will automated testing tools make testing easier?

For small projects, the time needed to learn and implement them may not be worth it. For larger projects, or ongoing long-term projects, they can be valuable.

A common type of automated tool is the 'record/playback' type. For example, a tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have them 'recorded', with the results logged by a tool. The 'recording' is typically in the form of text based on a scripting language that is interpretable by the testing tool. If new buttons are added, or some underlying code in the application is changed, etc., the application might then be retested by just 'playing back' the 'recorded' actions and comparing the logged results to check the effects of the changes. The problem with such tools is that if there are continual changes to the system being tested, the 'recordings' may have to be changed so much that it becomes very time-consuming to continuously update the scripts. Additionally, interpretation and analysis of results (screens, data, logs, etc.) can be a difficult task. Note that there are record/playback tools for text-based interfaces also, and for all types of platforms.

Another common approach to automating functional testing is 'data-driven' or 'keyword-driven' automated testing, in which the test drivers are separated from the data and/or actions utilized in testing (an 'action' would be something like 'enter a value in a text box'). Test drivers can be in the form of automated test tools or custom-written testing software. The data and actions can be more easily maintained - such as via a spreadsheet - since they are separate from the test drivers. The test drivers 'read' the data/action information to perform the specified tests. This approach can enable more efficient control, development, documentation, and maintenance of automated tests/test cases.
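A minimal sketch of the data-driven idea: the driver below is generic, and the table of inputs and expected outputs (invented here, but typically maintained in a spreadsheet or CSV file) can grow without changing the driver. The `calculate_interest` function is hypothetical:

```python
def calculate_interest(principal, rate, years):
    # Hypothetical function under test.
    return principal * rate * years

# Data rows, kept separate from the driver; in practice often loaded from a CSV.
test_data = [
    # (principal, rate, years, expected)
    (1000, 0.05, 2, 100.0),
    (1000, 0.05, 0, 0.0),
    (500,  0.10, 1, 50.0),
]

# Generic driver: reads each data row, performs the action, compares
# actual against expected, and makes an unambiguous pass/fail call.
for principal, rate, years, expected in test_data:
    actual = calculate_interest(principal, rate, years)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: interest({principal}, {rate}, {years}) = {actual}, expected {expected}")
```

Adding a new test here means adding a data row, not writing new driver code, which is the maintenance benefit described above.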

Other automated tools can include:

- Code analyzers - monitor code complexity, adherence to standards, etc.
- Coverage analyzers - these tools check which parts of the code have been exercised by a test, and may be oriented to code statement coverage, condition coverage, path coverage, etc.
- Memory analyzers - such as bounds-checkers and leak detectors.
- Load/performance test tools - for testing client/server and web applications under various load levels.
- Web test tools - to check that links are valid, HTML code usage is correct, client-side and server-side programs work, and a web site's interactions are secure.
- Other tools - for test case management, documentation management, bug reporting, and configuration management.

    What's the best way to choose a test automation tool?

In manual testing, the test engineer exercises software functionality to determine if the software is behaving in an expected way. This means that the tester must be able to judge what the expected outcome of a test should be, such as expected data outputs, screen messages, changes in the appearance of a user interface, XML files, database changes, etc.

In an automated test, the computer does not have human-like 'judgment' capabilities to determine whether or not a test outcome was correct. This means there must be a mechanism by which the computer can do an automatic comparison between actual and expected results for every automated test scenario and unambiguously make a pass or fail determination.
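To illustrate that mechanism, here is a brief sketch comparing an actual output against a stored expected ('golden') result; the output structure and values are invented for the example:

```python
# Expected ('golden') result, captured and reviewed once by a human.
expected = {"status": "ok", "rows": 3, "total": 150.0}

def run_report():
    # Stand-in for the real system under test.
    return {"status": "ok", "rows": 3, "total": 150.0}

actual = run_report()
# Field-by-field comparison gives an unambiguous pass/fail verdict,
# plus a precise description of any mismatch for the test log.
mismatches = {k: (expected[k], actual.get(k))
              for k in expected if actual.get(k) != expected[k]}
print("PASS" if not mismatches else f"FAIL: {mismatches}")
```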


- Read through information on the web about test automation, such as the general information available on some test tool vendor sites or some of the automated testing articles.
- Read some books on test automation.
- Obtain some test tool trial versions, or low-cost or open source test tools, and experiment with them.
- Attend software testing conferences or training courses related to test automation.

As in anything else, proper planning and analysis are critical to success in choosing and utilizing an automated test tool. Choosing a test tool just for the purpose of 'automating testing' is not useful; useful purposes might include: testing more thoroughly, testing in ways that were not previously feasible via manual methods (such as load testing), testing faster, or reducing excessively tedious manual testing. Automated testing rarely enables savings in the cost of testing, although it may result in software lifecycle savings (or increased sales), just as with any other quality-related initiative.

With the proper background and understanding of test automation, the following considerations can be helpful in choosing a test tool:

- Analyze the current non-automated testing situation to determine where testing is not being done or does not appear to be sufficient.
- Where is current testing excessively time-consuming?
- Where is current testing excessively tedious?
- What kinds of problems are repeatedly missed with current testing?
- What testing procedures are carried out repeatedly (such as regression testing or security testing)?
- What testing procedures are not being carried out repeatedly but should be?
- What test tracking and management processes can be implemented or made more effective through the use of an automated test tool?

Taking into account the testing needs determined by analysis of these considerations and other appropriate factors, the types of desired test tools can be determined. For each type of test tool (such as functional test tool, load test tool, etc.) the choices can be further narrowed based on the characteristics of the software application. The relevant characteristics will depend, of course, on the situation, the type of test tool, and other factors. Such characteristics could include the operating system, GUI components, development languages, web server type, etc. Other factors affecting a choice could include the experience level and capabilities of test personnel, advantages/disadvantages of developing a custom automated test tool, tool costs, tool quality and ease of use, usefulness of the tool on other projects, etc.

Once a short list of potential test tools is selected, several can be utilized on a trial basis for a final determination. Any expensive test tool should be thoroughly analyzed during its trial period to ensure that it is appropriate and that its capabilities and limitations are well understood. This may require significant time or training, but the alternative is to take a major risk of a mistaken investment.


    Why is it often hard for organizations to get serious about quality assurance?

Solving problems is a high-visibility process; preventing problems is low-visibility. This is illustrated by an old parable:

In ancient China there was a family of healers, one of whom was known throughout the land and employed as a physician to a great lord. The physician was asked which of his family was the most skillful healer. He replied:
"I tend to the sick and dying with drastic and dramatic treatments, and on occasion someone is cured and my name gets out among the lords."
"My elder brother cures sickness when it just begins to take root, and his skills are known among the local peasants and neighbors."
"My eldest brother is able to sense the spirit of sickness and eradicate it before it takes form. His name is unknown outside our home."

This is a problem in any business, but it's a particularly difficult problem in the software industry. Software quality problems are often not as readily apparent as they might be in an industry with more physical products, such as auto manufacturing or home construction.

Additionally, many organizations are able to determine who is skilled at fixing problems, and then reward such people. However, determining who has a talent for preventing problems in the first place, and figuring out how to incentivize such behavior, is a significant challenge.

    Who is responsible for risk management?

Risk management means the actions taken to avoid things going wrong on a software development project, things that might negatively impact the scope, quality, timeliness, or cost of a project. This is, of course, a shared responsibility among everyone involved in a project. However, there needs to be a 'buck stops here' person who can consider the relevant tradeoffs when decisions are required, and who can ensure that everyone is handling their risk management responsibilities.

It is not unusual for the term 'risk management' to never come up at all in a software organization or project. If it does come up, it's often assumed to be the responsibility of QA or test personnel. Or there may be a 'risks' or 'issues' section of a project, QA, or test plan, and it's assumed that this means that risk management has taken place.

It's generally NOT a good idea for a test lead, test manager, or QA manager to be the 'buck stops here' person for risk management. Typically QA/test personnel or managers are not managers of developers, analysts, designers, and many other project personnel, and so it would be difficult for them to ensure that everyone on a project is handling their risk management responsibilities. Additionally, knowledge of all the considerations that go into risk management mitigation and tradeoff decisions is rarely the province of QA/test personnel or managers. Based on these factors, the project manager is usually the most appropriate 'buck stops here' risk management person. QA/test personnel can, however, provide input to the project manager. Such input could include analysis of quality-related risks, risk monitoring, process adherence reporting, defect reporting, and other information.

    Who should decide when software is ready to be released?

In many projects this depends on the release criteria for the software. Unfortunately it is nearly impossible to adequately specify useful criteria without a significant amount of assumptions and subjectivity. For example, if the release criteria are based on passing a certain set of tests, there is likely an assumption that the tests have adequately addressed all appropriate software risks. Additionally, since most software projects involve a balance of quality, timeliness, and cost, testing alone cannot address how to balance all three of these competing factors when release decisions are needed.

A typical approach is for a lead tester or QA or test manager to be the release decision maker. This again involves significant assumptions - such as an assumption that the test manager understands the spectrum of considerations that are important in determining whether software quality is 'sufficient' for release, or the assumption that quality does not have to be balanced with timeliness and cost. In many organizations, 'sufficient quality' is not well defined, is extremely subjective, may have never been usefully discussed, or may vary from project to project or even from day to day.

Release criteria considerations can include deadlines, sales goals, business/market/competitive considerations, business segment quality norms, legal requirements, technical and programming considerations, end-user expectations, internal budgets, impacts on other organization projects or goals, and a variety of other factors. Knowledge of all these factors is often shared among a number of personnel in a large organization, such as the project manager, director, customer service manager, technical lead or manager, marketing manager, QA manager, etc. In smaller organizations or projects it may be appropriate for one person to be knowledgeable in all these areas, but that person is typically a project manager, not a test lead or QA manager.

For these reasons, it's generally not a good idea for a test lead, test manager, or QA manager to decide when software is ready to be released. Their responsibility should be to provide input to the appropriate person or group that makes the release decision. For small organizations and projects, that person could be a product manager or a project manager. For larger organizations and projects, release decisions might be made by a committee of personnel with sufficient collective knowledge of the relevant considerations.

    What can be done if requirements are changing continuously?

This is a common problem for organizations where there are expectations that requirements can be pre-determined and remain stable. If these expectations are reasonable, here are some approaches:


- Work with the project's stakeholders early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance, if possible.
- It's helpful if the application's initial design allows for some adaptability, so that later changes do not require redoing the application from scratch.
- If the code is well-commented and well-documented, this makes changes easier for the developers.
- Use some type of rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes.
- The project's initial schedule should allow for some extra time commensurate with the possibility of changes.
- Try to move new requirements to a 'Phase 2' version of an application, while using the original requirements for the 'Phase 1' version.
- Negotiate to allow only easily-implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.
- Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Then let management or the customers (not the developers or testers) decide if the changes are warranted.
- Balance the effort put into setting up automated testing with the expected effort required to refactor the tests to deal with changes.
- Try to design some flexibility into automated test scripts.
- Focus initial automated testing on application aspects that are most likely to remain unchanged.
- Devote appropriate effort to risk analysis of changes to minimize regression testing needs.