
Testing Strategy

Document Overview

Prepared By: Author's Name
Prepared For: Department Name
Date Created: Month Day, Year
Last Updated: August 13, 2008

RASCI Alignment

(R)esponsible:
(A)uthority:
(S)upport:
(C)onsult:
(I)nform:


Revision Log

Revision   Date       Initials   Description of Revision
1.0        MM/DD/YY              Initial Draft
1.1                              Added section on NetSec


Table of Contents

Revision Log
Table of Contents
1) Overview
   1.1) Test Objectives and Scope
   1.2) Assumptions
2) Testing Timeline
3) Testing Phases Overview
4) Test Phases
   4.1) Unit Testing
   4.2) String Testing
   4.3) Integration Testing
   4.4) User Acceptance Testing
   4.5) Regression Testing
   4.6) Performance Testing
   4.7) Network Security Testing
5) Test Management
   5.1) Test Roles and Responsibilities
      5.1.1) Test Lead
      5.1.2) Test Team
      5.1.3) Project Management Team
      5.1.4) Functional Team
      5.1.5) Development Team
      5.1.6) Technical/Architecture Team
      5.1.7) SMEs and Super Users
   5.2) Test Reports
6) Entry and Exit Criteria
7) Test Planning
   7.1) Test Strategy
   7.2) Unit Test Plans and Cases
   7.3) Integration Test Plans and Scenarios
   7.4) Detailed Test Plan
   7.5) Test Training
8) Test Environment and Tools
   8.1) Landscape
   8.2) Transports/Migration
   8.3) Tools
9) Appendix A: Standard Defect Management Process
   9.1) Defect Tracking Workflow
   9.2) Report Defects
      9.2.1) Severity
      9.2.2) Priority
   9.3) Review/Assign Defects
   9.4) Fix Defects
   9.5) Retest
   9.6) Analyze Defect Data
10) Document Sign Off

1) Overview

The Testing Strategy defines and communicates the approach to testing that supports the application and related processes on <Project>. It is the responsibility of the QA (Testing) Team to ensure that this procedure is implemented and adhered to by the entire project team.

1.1) Test Objectives and Scope

The overall objective of testing is to ensure that the business can continue to operate after “go live.”

The general objectives of all levels of testing are to:

- Validate that the system will support the business, functional, and technical requirements defined in Design, to enable the business operations in a production environment;
- Verify the technical stability of the system, including that online and batch response times meet requirements with the expected volume of transactions in the system;
- Verify that security requirements have been implemented properly; and
- Verify that all previously working functions still work as required (regression testing).

The scope of testing will include all aspects of the project design:

- Project processes and configuration;
- Integration from <Project> to the existing environment;
- Forms, Reports, Interfaces, Conversions, and Enhancements;
- Workflow;
- Security; and
- Integration with external vendors.

1.2) Assumptions

The Testing Strategy presented in this document is based on the following assumptions:

- <Vendor> will participate in test planning, script development, test execution, and test reporting to help facilitate knowledge transfer;
- <Vendor> resources for integration testing and User Acceptance Testing will be available full time;
- Configuration will be frozen during integration testing and User Acceptance Testing;
- The legacy system environments will be made available in time for integration testing;
- <Vendor> will provide selected production data for testing, as identified jointly by <Vendor> and the University of Chicago;
- <Vendor> will provide licenses for any testing tools recommended and subsequently used;
- All deliverables within each testing phase will have an approval and sign-off process;
- The required number of stable testing environments and testing tools will be available as needed throughout the testing phases;
- Appropriate business and IT resources will be assigned as required for all phases of testing;
- Proper actions will be taken to resolve or offer alternate solutions to defects/exceptions in a timely manner;
- Funding is available to purchase additional test software licenses, if necessary;
- The creation and execution of all test cases will be performed by the functional and development teams;
- All business process documentation will be created and made available for the preparation and execution of Integration testing;
- Standards and training in the use of tools and testing methodologies will be made available to all functional teams;
- A standard transport/migration and build strategy will be followed;
- Testing will be conducted by resources knowledgeable in <application> transactions; and
- The testing resources will be trained in the use of test tools.

2) Testing Timeline

The testing schedule for Phase 1 illustrates the different types and cycles of testing that are planned; see the Project Plan for the current timeline.


3) Testing Phases Overview

Testing will be conducted using a "building block" approach, starting with Unit testing. The test phases for unit testing, string testing, integration testing, and system testing (stress and volume) will each have their own processes, procedures, and testing criteria, governed by a Test Approach document. The Test Approach details the scope of testing, the procedures to be adopted, the activities to be completed, the general resources required, and the methods/processes to be used for that phase of testing. It details the activities, dependencies, and effort required to conduct the test. The building blocks for testing, outlined below, are further explained following the table.

Testing Phase: Unit Testing
  Objective: Isolate each part of the program and show that the individual parts are correct. A unit test provides a strict, written contract that the module must satisfy.
  Owner: Functional Team, Development Team
  Environment: DEV

Testing Phase: String/Scenario Testing
  Objective: Validate the full operability of interconnected modules, methods, and objects within modules.
  Owner: Functional Team, Super Users
  Environment: DEV and QA

Testing Phase: Integration Testing
  Objective: Validate the interactions between the core application and all integrated development units around the core application that support the functional and technical requirements of the business solution.
  Owner: Testing Team, Functional Team
  Environment: QA

Testing Phase: User Acceptance Testing
  Objective: Embedded into Integration Test Cycle #3 to provide hands-on, end-user-level testing of organizational design and business operations, including all relevant business cycles, in preparation for go-live.
  Owner: Testing Team, Functional Team, Super Users
  Environment: QA

Testing Phase: Regression Testing
  Objective: Validate specific systems after new code releases have been deployed to ensure that all related functionality, previously working, is still functional. A core set of transactions will be tested in all three Integration Test Cycles.
  Owner: Testing Team, Functional Team
  Environment: QA

Testing Phase: Performance (Stress and Volume) Testing
  Objective: Verify acceptable system performance across all modules/functions/system development units in extreme load/stress/volume simulations of the production environment.
  Owner: Technical Team, Testing Team
  Environment: Production

4) Test Phases

4.1) Unit Testing

Unit testing is the validation of a unique transaction or development object that performs a single process step. Unit testing is performed on the individual process and development object units before they are integrated with other units. Unit tests will be rolled up into future Integration testing.

The objectives of unit testing are:

- To verify that the identified software component works according to specification; and
- To validate the software component's logic against business requirements.

The scope of unit testing includes:

- Configuration: the formal testing performed by the Functional Teams to validate that a business function and its associated configuration support the required processes.
- Development objects: testing that looks primarily at internal program logic to derive test cases. It focuses on testing the individual development components (subroutines, functions, procedures) of a program against the functional and technical Specification. This test is conducted twice: first by the programmer, to discover discrepancies between the module's intended design and its actual behavior; and second by the Functional team, to validate that the development meets the intended business requirement.
- Security: the formal validation that the role definitions work as specified for all the business functions. This testing is generally performed after all other associated unit tests are complete, just prior to the start of integration testing.

A Unit Test Approach will be created to define the specific processes and procedures for configuration as well as development objects unit testing.
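As a minimal sketch of what such a unit test case can look like in code, the example below uses Python's standard unittest module; the posting-date routine and its rules are hypothetical stand-ins for a real development object:

```python
import unittest
from datetime import date

def is_valid_posting_date(posting, period_start, period_end):
    """Hypothetical development object: a posting date is valid
    only if it falls within the open fiscal period."""
    return period_start <= posting <= period_end

class PostingDateUnitTest(unittest.TestCase):
    """Unit test case: objective, steps, and expected results in one place."""

    def test_date_inside_open_period_is_accepted(self):
        self.assertTrue(is_valid_posting_date(
            date(2010, 2, 15), date(2010, 2, 1), date(2010, 2, 28)))

    def test_date_outside_open_period_is_rejected(self):
        self.assertFalse(is_valid_posting_date(
            date(2010, 3, 1), date(2010, 2, 1), date(2010, 2, 28)))

if __name__ == "__main__":
    unittest.main()
```

Each test method plays the role of one documented step with its expected result; the assertions are the "strict, written contract" the module must satisfy.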

4.2) String Testing

String testing combines chains of transactions that flow together and reflect important business processes and scenarios. A string test scenario combines two or more unit test cases to prove that the combined transaction designs and programs work as expected. The objectives of string testing are:

- To verify specific pieces of functionality within and between modules; and
- To ensure that cross-functional configuration and data flow requirements have been properly addressed.

String Testing is planned to be the first phase of testing conducted in the QA Environment.

String testing prioritizes the business processes deemed critical; therefore, not all unit test cases are covered by string tests. Building from the Unit Test Approach, a String Test Approach will be created to define the processes and procedures that are unique to this phase of testing.
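To illustrate how a string test differs from the unit tests it builds on, the sketch below chains two hypothetical unit-tested steps (create an order, then invoice it) into one scenario; the functions and data are illustrative only:

```python
import unittest
from datetime import date

# Hypothetical steps, each already covered by its own unit test case.
def create_order(customer, amount):
    return {"customer": customer, "amount": amount, "status": "OPEN"}

def post_invoice(order, posting):
    if order["status"] != "OPEN":
        raise ValueError("order must be open before invoicing")
    return {"order": order, "posted_on": posting, "status": "INVOICED"}

class OrderToInvoiceStringTest(unittest.TestCase):
    """String test: two unit-tested steps chained into one business flow."""

    def test_order_flows_through_to_invoice(self):
        order = create_order("ACME", 100.0)               # step 1
        invoice = post_invoice(order, date(2010, 2, 15))  # step 2
        self.assertEqual(invoice["status"], "INVOICED")

if __name__ == "__main__":
    unittest.main()
```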


4.3) Integration Testing

Integration testing combines tested units into progressively more complex scenarios and tests these scenarios until the entire solution has been tested.

The objectives of integration testing are:

- To verify the functionality and data links between software components (upstream and downstream);
- To verify data storage and retrieval mechanisms; and
- To verify functionality and data against the specifications to support business processes.

Three cycles of Integration Testing, to be executed in the QA Environment, are recommended:

- Cycle 1: Will include some manual data load. End-to-end functionality to be tested (including inbound and outbound interfaces if available).
- Cycle 2: Will include a full data load. Continued end-to-end functionality tested. All inbound and outbound interfaces to be tested. Re-testing as required based on results from Cycle 1. "Go/No Go" checkpoint conducted before advancing to Cycle 3.
- Cycle 3 (UAT): Will include a full data load. Continued end-to-end functionality tested. All inbound and outbound interfaces to be tested. Re-testing as required based on results from Cycle 2. This testing cycle is performed by the functional teams and super users and involves running the system(s) as if in production mode. "Go/No Go" checkpoint conducted before Go Live.

An Integration Test Approach will be created to define the processes and procedures that are unique to this phase of testing.
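The interface checks run in these cycles (generate source data, trigger the interface, verify the target) can be sketched as follows; the CSV extract and loader are hypothetical placeholders for a real source system and interface program:

```python
import csv
import io

def run_inbound_interface(extract_text, target_table):
    """Hypothetical interface: load a CSV extract into the target table."""
    rows = list(csv.DictReader(io.StringIO(extract_text)))
    target_table.extend(rows)
    return len(rows)

def test_inbound_interface_end_to_end():
    # 1. Generate the file/data from the (simulated) source system.
    extract = "employee_id,name\n1001,Smith\n1002,Jones\n"
    target = []
    # 2. Trigger the interface.
    loaded = run_inbound_interface(extract, target)
    # 3. Verify the results in the target application.
    assert loaded == 2
    assert target[0]["employee_id"] == "1001"

test_inbound_interface_end_to_end()
print("inbound interface check passed")
```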

4.4) User Acceptance Testing

User Acceptance Testing (UAT) is embedded in Integration Test Cycle 3 above. Users will be asked to perform test scripts (note: these are the same test scripts used for Integration Testing). It is assumed that this user group will come from members of the existing Change Network and from Vendor resources who will be Trainers and have a base knowledge of the To-Be business processes (this is incorporated into their knowledge transfer process). UAT involves testing the system under conditions agreed to by the Business in order to demonstrate that the system satisfies the Business requirements. The user acceptance test involves running the system(s) as if in production mode.

The objectives of user acceptance testing are to demonstrate that:

- The system, including data conversion, meets the business requirements specified during the Blueprint Phase; and
- The system meets all system constraints (e.g., data consistency, data flows, security) specified in the Business Requirements.

4.5) Regression Testing

Regression Testing is performed after making a functional improvement or repair to a program, object, or system, to ensure that it still works as correctly as it did before the change. Regression testing will be performed during Integration testing on any transaction that failed during completion of the Integration Test Script in a prior Integration Test Cycle. This will be performed either by re-running the Integration Test Script or by running a Unit Test on the specific function.

Regression testing will ensure that ongoing changes are not introducing defects into previously tested functionality that is in production. Thus regression testing occurs throughout all testing activities and for every new release of the system.

Regression testing is a separate activity performed during the iterative Integration testing when upgrades are applied to the system under test.

Regression testing is required if:

- A defect occurs in testing which requires a change to the existing system (configuration, development, or documentation); or
- New scope is added in a future wave which requires re-testing of functionality already in production (new interfaces, conversions).

The need for regression testing will be evaluated and planned for each integration test cycle, UAT (part of “Go/No Go” decision making process) and for each implementation wave.
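One way to keep the core regression set re-runnable in every cycle is to tag it in the test tool. Below is a minimal sketch assuming pytest as that tool (the strategy's <tool used> placeholder leaves the actual tool open); the marker name and tests are illustrative:

```python
import pytest

# Core regression set, selected with: pytest -m core
# (register the marker in pytest.ini: markers = core: core regression set)
core = pytest.mark.core

@core
def test_invoice_posting_still_balances():
    # Stand-in for a previously passing transaction check.
    debits, credits = 100.0, 100.0
    assert debits == credits

def test_new_wave_interface():
    # New-scope test: runs in full cycles, not in the core regression pass.
    assert True
```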

4.6) Performance Testing

System Testing (stress and volume testing), also sometimes referred to as Performance testing, involves testing the software and hardware design to ensure that performance goals are met.

The objectives of System testing are to:

- Ensure that system online response times are adequate;
- Ensure that all batch processing can occur within the defined window;
- Measure system performance under expected "normal" load; and
- Measure system performance under "peak" load conditions.

Assumptions on Performance:

- Assumption 1
- Assumption 2
- Assumption 3
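To make the "normal" versus "peak" load measurements concrete, the sketch below drives a simulated transaction with concurrent users and checks a 95th-percentile response-time target; the transaction, user counts, and 2-second target are all illustrative, and the real tests would be run with the selected performance tool:

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def execute_transaction():
    """Stand-in for one online transaction; returns its response time (s)."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated server work
    return time.perf_counter() - start

def run_load(concurrent_users, requests_per_user):
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(execute_transaction)
                   for _ in range(concurrent_users * requests_per_user)]
        return [f.result() for f in futures]

# Compare "normal" and "peak" load against a response-time target.
for label, users in [("normal", 10), ("peak", 50)]:
    times = run_load(users, 20)
    p95 = statistics.quantiles(times, n=20)[-1]  # ~95th percentile
    print(f"{label}: p95 = {p95 * 1000:.1f} ms, meets 2 s target: {p95 <= 2.0}")
```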


4.7) Network Security Testing

The University of Chicago’s Network Security Group (NetSec) works to protect the University's networks from malicious computer attacks, service interruptions, and network harassment. They use a variety of open source, custom, and commercial tools to identify anomalies and traffic that may indicate a security concern.

NetSec assists both individuals and local IT support on campus with keeping the network safe. Aside from security monitoring, this group provides other services to the campus community, including:

- Web application security scanning
- Security guidance and assistance
- Incident response and investigation
- Enforcement of network security-related policies
- Certificate authority
- Password/key escrow
- Log backup
- Port scanning

5) Test Management

This section outlines the roles and responsibilities for the management and execution of testing, as well as the reports that will be produced to help manage the testing process.

5.1) Test Roles and Responsibilities

Generally, the resources to design and perform testing will be pulled from the functional and technical teams.

5.1.1) Test Lead

The Test Lead is responsible for the planning, execution, and management of the integration testing cycles, including user acceptance testing as part of Integration Test Cycle 3. The Test Lead's activities include:

- Driving test scenario planning and execution;
- Identifying resource needs for testing and assembling the testing team;
- Providing training for the test team;
- Training the functional team on the specific processes and procedures for the Project;
- Ensuring test scenarios are assigned to the functional team;
- Producing test status reports and metrics, and leading daily test status meetings;
- Leading change control and transport/migration meetings as necessary during the testing process;
- Coordinating the impact of changes on training, roles, security, development objects, etc.; and
- Providing input to and facilitating go/no-go meetings (jointly with Project Management).

5.1.2) Test Team

The Test Team is responsible for working all “bugs” found in Integration testing. This team will be made up of key system experts, both from the vendor and University of Chicago. As problems are identified, the “Fix-It” team lead will work with the Test Lead to assign defect reports to a member of the support team who will need to diagnose and fix the problem.

5.1.3) Project Management Team

The Project Management team is responsible for overall quality assurance on the project, and as such must verify that entry and exit criteria are met for each phase of testing and that the appropriate level of infrastructure and resources are in place to support testing. The Project Management team will also conduct specific “Go/No-Go” checkpoints prior to each Go-live that will primarily evaluate the results of testing to date along with other readiness indicators.

5.1.4) Functional Team

The Functional team is responsible for unit testing and string testing of configuration, as well as reviewing other unit test results from a functionality perspective. The Functional team is also the primary driver of integration testing, including developing integration test scenarios and executing those scenarios.

The Functional team will also support User Acceptance Testing by acting as SMEs to support the Super Users as they execute the integration test scenarios. (Note: it is expected that Integration Testing and User Acceptance Testing Activities will be done concurrently during Integration Test Cycle 3).

5.1.5) Development Team

The Development team is responsible for unit testing all development objects, as well as supporting the other test phases. Development team resources will be assigned to support integration testing and UAT. As defects are identified, development resources will be assigned to implement and unit test fixes for the identified problems.

5.1.6) Technical/Architecture Team

The Technical team will drive Performance Testing (Stress and Volume). In addition, the team will support the other phases of testing by managing the transports/migrations and ensuring availability of the QA environments to support testing.


5.1.7) SMEs and Super Users

Subject Matter Experts (SMEs) from the business, typically those who have not had extensive involvement in the project, will participate in User Acceptance Testing to ensure that the solution meets the needs of the business, to confirm that there are no "show stoppers" that would prevent the system from going live, and to provide another set of eyes on the system.

The following table contains more detailed responsibilities by team per testing phase.

Testing Phase: Unit Test
Purpose: In order to fully test that all the requirements of an application are met, there must be at least one test case for each requirement, unless a requirement has sub-requirements; in that situation, each sub-requirement must have at least one test case. This is frequently done using a traceability matrix. Testing of an interface will begin with the generation of files/data from the source system, followed by the triggering of the interface, and then verification of the results in the target application. Integration testing may have to encompass some regression, system-wide testing to ensure that changes are correct and have not affected unmodified portions of the software, thus ensuring that no new defects have been introduced into previously tested code.
Test Deliverables: Master Test Plan; Traceability Matrix; Unit Test Scripts; Defects; TPR
3rd Party Test Responsibilities:
- Update Test Plan (only the "In scope and out of scope of Integration" section)
- Test scenario creation
- Test case creation
- Requirement traceability matrix
- Test data creation
- Migration document update and review
- Test execution
- Regression test execution
- Test results
- Test summary
- Record defects in the defect tracking tool
- Test re-execution of defects
- Test re-execution where test results are not satisfactory
U of C Test Responsibilities:
- Test Plan creation
- Review and sign off test scenarios
- Review and sign off test cases
- Review and sign off the requirement traceability matrix
- Train on applications to create test data
- Assist with test data creation
- Review the Migration document
- Plan and schedule refresh of the test database
- Plan and determine new test builds
- Plan and schedule execution of code migration
- Review test input and executed test results
- Assist in re-executing some test cases where the test result was unsatisfactory
- Manage test status
- Conduct defect meetings
- Prioritize defects
- Create and execute test cases on integration defined specifically in the test plan as out of scope for the 3rd Party
- Create and execute regression test cases of existing integration relating to systems where existing code has been altered but that do not include any functionality relating to the project
- Execute test cases where the 3rd Party does not have access to the system
Environment: TEST

Testing Phase: String/Scenario Test
Purpose: String testing ensures that the overall system programs and procedures function as expected and in accordance with the functional, non-functional, technical, and existing business requirements. These may be detailed, low-level tests designed to exercise the individual functions, processes, and data flows of the application or system. Scenario testing may have to encompass some regression testing to ensure that changes are correct and have not affected unmodified portions of the software, thus ensuring that no new defects have been introduced into previously tested code.
Test Deliverables: Master Test Plan; Traceability Matrix; String Test Scripts; Test Data; Defects; TPR
3rd Party Test Responsibilities:
- Update Test Plan (only the "In scope and out of scope of String/Scenario test" section)
- Test scenario creation
- Test case creation
- Requirement traceability matrix
- Test data creation
- Migration document update and review
- Test execution
- Regression test execution
- Test results
- Test summary
- Record defects in the defect tracking tool
- Test re-execution of defects
- Test re-execution where test results are not satisfactory
U of C Test Responsibilities:
- Test Plan creation
- Review and sign off test scenarios
- Review and sign off test cases
- Review and sign off the requirement traceability matrix
- Train on applications to create test data
- Assist with test data creation
- Review the Migration document
- Plan and schedule refresh of the test database
- Plan and determine new test builds
- Plan and schedule execution of code migration
- Review test input and executed test results
- Assist in re-executing some test cases where the test result was unsatisfactory
- Manage test status
- Conduct defect meetings
- Prioritize defects
- Create and execute test cases on functionality defined specifically in the test plan as out of scope for the 3rd Party
- Create and execute regression test cases of existing functionality relating to systems where existing code has been altered but that do not include any functionality relating to the project
- Execute test cases where the 3rd Party does not have access to the system
Environment: TEST

Testing Phase: Integration Test
Purpose: As sub-systems are integrated into the final system, integration testing provides assurance that, when all interfaces are in place, all the elements of the system interact appropriately together as an end-to-end solution. The focus of integration testing is the integrity of the solution design and that the end-system functionality meets the customer's needs. Integration testing will be conducted after the completion of system tests (unit tests and string tests). It will rerun some of the functional test cases and all integration test cases in a logical order, and will also test the re-execution of the Project Migration plan into the test environment.
Test Deliverables: Master Test Plan; Traceability Matrix; Integration Test Scripts; Test Data; Defects; TPR
3rd Party Test Responsibilities:
- Review integration test scenarios
- Test re-execution of defects in the TEST environment
- Migration code and review
U of C Test Responsibilities:
- Test Plan creation
- Integration test scenario creation
- Plan and schedule refresh of the test database
- Test data creation
- Review the Migration document
- Plan and determine new test builds
- Plan and schedule execution of code migration
- Test execution
- Regression test execution
- Record test results
- Update test summary
- Record defects in the defect tracking tool
- Manage test status
- Conduct defect meetings
- Prioritize defects
- Test re-execution of defects
- Test re-execution where test results are not satisfactory
Environment: TEST

Testing Phase: Performance Test
Purpose: Stress testing is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. It puts a greater emphasis on robustness, availability, and error handling under heavy load, rather than on what would be considered correct behavior under normal circumstances. In particular, the goals of such tests may be to ensure the software does not crash in conditions of insufficient computational resources (such as memory or disk space), unusually high concurrency, or denial-of-service attacks.
Test Deliverables: Master Test Plan; Traceability Matrix; Performance Test Scripts; Test Data; Defects; TPR
3rd Party Test Responsibilities:
- Test data creation
- Test execution
- Stress test execution
- Test results
- Test summary
- Record defects in the defect tracking tool
- Test re-execution of defects
- Test re-execution where test results are not satisfactory
U of C Test Responsibilities:
- Assist with test data creation
- Review test input and executed test results
- Assist in re-executing some test cases where the test result was unsatisfactory
- Manage test status
- Conduct defect meetings
- Prioritize defects
- Execute test cases where the 3rd Party does not have access to the system
Environment: TEST

Testing Phase: Regression Test
Purpose: Regression testing can be used not only for testing the correctness (with respect to a specification) of a program, but also to track the quality of its output. Regression testing should track the code size, simulation time, and compilation time of the test suite cases.
Test Deliverables: Master Test Plan; Traceability Matrix; Regression Test Scripts; Test Data; Defects; TPR
3rd Party Test Responsibilities:
- Test data creation
- Test execution
- Regression test execution
- Test results
- Test summary
- Record defects in the defect tracking tool
- Test re-execution of defects
- Test re-execution where test results are not satisfactory
U of C Test Responsibilities:
- Assist with test data creation
- Review test input and executed test results
- Assist in re-executing some test cases where the test result was unsatisfactory
- Manage test status
- Conduct defect meetings
- Prioritize defects
- Execute test cases where the 3rd Party does not have access to the system
Environment: TEST

Testing Phase: User Acceptance Test (UAT)
Purpose: To ensure that the delivered system/solution meets the expectations of the business community. Acceptance testing will test the usability of the new software within the affected customer business process(es). Note: this may not be a separate test; it may be included in the Integration Test.
Test Deliverables: Master Test Plan; Traceability Matrix; Integration Test Scripts; Acceptance Test Plan; Acceptance test scenarios; Test Data; Defects; TPR
3rd Party Test Responsibilities:
- Test re-execution of defects in the TEST environment
- Review the Acceptance test plan and test scenarios
U of C Test Responsibilities:
- Acceptance Test Plan creation
- Acceptance test scenario creation
- Plan and schedule refresh of the test database
- Assist with test data creation
- Coordinate and manage acceptance testing with the acceptance users
- Test summary (indicating, at minimum, that the customer has concluded their testing of the product and found that the specifications are met)
- User Acceptance sign-off
Environment: UAT

5.2) Test Reports

Status reports will be used to track the progress of test completion. These reports will include:

- Test Problem Report: This report will indicate the test phase, status, priority, executed vs. planned, and the team responsible for execution or re-testing, as well as defect tracking.
- Traceability Matrix: Detailed mapping to ensure that all in-scope transactions (standard and custom) and business processes have been identified in the Test Scenario matrix and will be tested, as well as planned and actual start and end dates, person/team responsible, and status.

Templates are available as part of the Project Methodology Deliverables.
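As a small sketch of how the Traceability Matrix enforces coverage, the snippet below maps requirements to test cases and flags any requirement without a test; all identifiers are illustrative:

```python
# Every in-scope requirement must map to at least one test case;
# unmapped requirements are coverage gaps.
requirements = ["REQ-001", "REQ-002", "REQ-003"]
test_cases = {
    "TC-101": {"covers": ["REQ-001"], "status": "Passed"},
    "TC-102": {"covers": ["REQ-001", "REQ-002"], "status": "In Progress"},
}

covered = {req for tc in test_cases.values() for req in tc["covers"]}

for req in requirements:
    linked = [tc for tc, d in test_cases.items() if req in d["covers"]]
    print(req, "->", ", ".join(linked) or "NO TEST CASE")

gaps = [req for req in requirements if req not in covered]
if gaps:
    print("Coverage gaps:", ", ".join(gaps))  # REQ-003 in this example
```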

6) Entry and Exit Criteria

Each phase of testing has specific entry and exit criteria. The entry criteria define specific items that must be complete, or conditions that must be met, in order for the test to begin. The exit criteria define specific items that must be complete, or conditions that must be met, to declare the test phase complete and move on to the next phase. The Entrance and Exit Criteria for each testing phase will be clearly defined in that phase's Test Approach.

7) Test Planning

Test planning at the appropriate level of detail is a critical success factor for the entire testing process. Several levels of planning will be conducted; for the exact timing, please consult the Test Lead.

7.1) Test Strategy

The Test Strategy begins the test planning process, by laying out the overall approach to testing, the test cycles, roles and responsibilities for testing and the high level testing timeline. The Test Strategy provides the basis for all other test planning activities.

7.2) Unit Test Plans and Cases

In parallel with the completion of configuration, the unit test cases (UTCs) are created. The UTCs have a targeted completion date. Each UTC includes the objectives of the test, specific steps to follow where applicable, and the expected results. Actual results are documented in the test case.

As development objects are finalized, the unit test plans are created along with the functional specifications. A Unit Test Plan will be created that details all objects to be unit tested, who will test them, and the target dates for completing the testing. Unit Test Cases include the basic objectives of the test, along with specific steps and expected results. Actual results are also documented in the case. All Unit Test Cases should be listed in the Traceability Matrix, where their status will be tracked.

7.3) Integration Test Plans and Scenarios

The Functional team will lead the development of Integration Test Scenarios. A "scenario" is an end-to-end business process that represents a real-life business situation. The scenarios are identified with the help of the business representatives on the Functional teams, to ensure that they represent actual situations that occur during the course of business. The scenarios also need to cover the major design elements that need to be tested.

All Integration Test Cases should be listed in the Traceability Matrix, where their status will be tracked, along with the assigned tester and the target dates for completing the testing.

7.4) Detailed Test Plan


The Detailed Test Plan will define which scenarios are run, by whom, in which order for each of the Integration Test Cycles. Participants in Integration Testing are generally formed into test “teams.” Each “team” will be assigned specific scenarios to be run during the test cycle. The order in which the scenarios should be run is also defined.

The Test plan also identifies the resources required to support testing from a technical perspective.

7.5) Test Training

Training will be conducted for all testing participants. The focus of the training will be to provide an orientation on the overall testing process, and to provide specific instructions on how to use the test tools.

The training will include:

- Logistics for testing: where and when testing will be conducted;
- Procedures for executing the test scenarios and recording the execution of the scenarios;
- Procedures for logging test problems, including system errors, data errors, or problems with the documentation; and
- Procedures for reporting test status.

8) Test Environment and Tools

Testing requires dedicated environments that provide a controlled setting in which to perform testing.

8.1) Landscape

The following diagram represents the landscape that will support the project:

The Quality Assurance environment provides a controlled environment to perform testing. This environment is specifically used as a final validation point for acceptance into production. No changes are permitted in this system. The QA system will be updated via approved transports/ migration from the development system.


DEVELOPMENT (Configuration, Unit Testing, Security)
  -> QA (String, Integration, and Regression Testing)
  -> PRODUCTION (Production, Performance Test)

8.2) Transports/Migration

The process of moving configuration and development object code from one environment to another follows a controlled process.

8.3) Tools

There are a variety of tools that may be used to support testing. Tools can be used for various elements of the testing process, as described below:

- Test Scenario Definition: The Test Scenarios will be defined in the String/Scenario Test Scripts. A template is available as part of the Project Methodology Deliverables.
- Test Data Generation: Data will be loaded manually or using <tool used>.
- Test Script Execution: Test Scripts will be executed manually or using <tool used>.
- Defect Tracking: Defects will be tracked using <tool used>. See Appendix A.
- Volume/Stress Testing: Tools are required to support stress and volume testing in order to simulate the loads associated with production peak volumes. <Tool used> will be used for this purpose.

Specific tools for each testing phase appear in the following table:

Testing Phase                                Tools
Unit Testing - Configuration                 Word UTC template
Unit Testing - Development                   Word UTC template
String Testing                               Word String template
Integration Testing                          Word Integration template; Microsoft Project / Excel
User Acceptance Testing                      Microsoft Project / Excel
Performance (stress and volume) Testing      <tool to be used>; Microsoft Project / Excel
Regression Testing                           <tool to be used>; Microsoft Project / Excel


9) Appendix A: Standard Defect Management Process

The goal of a defect management process is to minimize the resolution time for problems by logging, tracking, and expediting problems as they occur, keeping stakeholders current on resolution status, exploring all factors that can lower mean time to resolution (MTTR), and maintaining a high level of overall customer satisfaction.

This is true whether the defect management process supports a development environment or a production environment. The following appendix describes the standard process to be used during Integration Testing. The defect management process will be refined to align with the Quality Center tool; this will occur during Integration Test Planning.

9.1) Defect Tracking Workflow

- When a deviation from expected results is encountered during execution of a test, a defect is reported.
- New defects are reviewed, prioritized, and assigned to a developer or configuration resource for resolution.
- The developer or configuration resource identifies the cause of the anomaly, modifies the code or configuration, and performs preliminary testing of the change.
- The testing continues through the standard levels (unit, integration, etc.). Provisions are made for a certain amount of regression testing, as determined by the tester(s).
- The defects are analyzed to determine if the system or release is ready for go-live, based on the open defects.
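The workflow can also be read as a small state model. Below is a minimal sketch, with states and transitions inferred from the sections that follow (the QA tool's actual status names may differ):

```python
from enum import Enum

class DefectState(Enum):
    NEW = "New"            # 9.2 Report Defects
    ASSIGNED = "Assigned"  # 9.3 Review/Assign Defects
    FIXED = "Fixed"        # 9.4 Fix Defects
    CLOSED = "Closed"      # 9.5 Retest passed
    REOPENED = "Reopened"  # 9.5 Retest failed

# Allowed transitions; anything else is rejected.
TRANSITIONS = {
    DefectState.NEW: {DefectState.ASSIGNED},
    DefectState.ASSIGNED: {DefectState.FIXED},
    DefectState.FIXED: {DefectState.CLOSED, DefectState.REOPENED},
    DefectState.REOPENED: {DefectState.FIXED},
    DefectState.CLOSED: set(),
}

def advance(current, target):
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current.value} -> {target.value}")
    return target

state = DefectState.NEW
for step in (DefectState.ASSIGNED, DefectState.FIXED, DefectState.CLOSED):
    state = advance(state, step)
print(state.value)  # Closed
```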

9.2) Report Defects

When the actual results or behavior are not as expected, the anomaly should be logged in the Traceability Matrix and the Test Problem Report. Defects should be recorded with the following information:

- Defect severity (Critical, High, Medium, Low)
- Priority (High, Medium, Low)
- Defect #
- Opened by (originator)
- Test Team
- Test Level (integration, system)
- Defect title or short description
- System/application/module
- Test scenario/case/procedure/step identifier
- Problem or defect description

Note: specific fields may change as dictated by the QA tool. Fields will be finalized as part of Integration Test Planning.
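As an illustration only (the QA tool will dictate the final fields), the defect record above can be modeled with the severity and priority scales defined in 9.2.1 and 9.2.2; all values below are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = "Critical"
    HIGH = "High"
    MEDIUM = "Medium"
    LOW = "Low"

class Priority(Enum):
    HIGH = "High"
    MEDIUM = "Medium"
    LOW = "Low"

@dataclass
class Defect:
    defect_id: str
    severity: Severity
    priority: Priority
    opened_by: str          # originator
    test_team: str
    test_level: str         # e.g., "integration", "system"
    title: str              # defect title or short description
    module: str             # system/application/module
    test_step_id: str       # test scenario/case/procedure/step identifier
    description: str

# Example entry with illustrative values:
d = Defect("DEF-0042", Severity.HIGH, Priority.HIGH, "jsmith", "Fix-It",
           "integration", "Invoice total off by one cent", "AP",
           "ITC2-SCN-07-STEP-3", "Posted total differs from expected by 0.01.")
print(d.severity.value, d.priority.value)
```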

9.2.1) Severity

Use the following definitions to determine the severity of a defect identified during testing:

- Critical: The product is unusable with the defect present. The business cannot use the software, or will not accept the product, until the anomaly is fixed.
- High: The defect adversely affects processing. The user can use the product but cannot perform specific tasks. The defect causes impairment of critical application functions, and no workaround exists.
- Medium: The defect affects processing, but the user can use the product. The defect causes impairment of critical system functions, but a workaround exists.
- Low: The defect is cosmetic, or causes inconvenience or annoyance to the users.

9.2.2) Priority

Use the following to determine the priority for a defect. This scheme will help the developers determine in which order to fix the defects.

- High: Extremely urgent; resolve immediately. Unable to continue testing until the defect is resolved.
- Medium: Resolution required ASAP. Testing is impaired, but a workaround exists.
- Low: Resolution required before the test can be considered complete. Testing impact is low.


9.3) Review/Assign Defects

After the initial data has been entered, the defect will be reviewed by the Test Support Team Lead. The Test Lead will validate the priority and assign the defect to the appropriate Development or Functional team member. Critical defects ("showstoppers") will be immediately forwarded to a representative Developer/Configuration resource for resolution, while all other defects may be prioritized and assigned in the daily Test Status meeting.

9.4) Fix Defects

The Configuration resource or Developer identifies the cause of the problem and incorporates the fix into the software, executing pre-test activities and unit testing to confirm the problem is resolved. Some regression testing should be performed to confirm the change has not adversely impacted previously tested areas of the code or configuration. Once the Developer is satisfied that the fix is sound, they record the following minimal information, and the defect is reassigned to the Tester (Originator):

- Date fixed
- Fixed by
- Description of fix
- Classification of resolution (configuration change, program change, data error, other)

9.5) Retest

After correcting the error(s), the test team should retest the conditions that caused the error to ensure the functionality is correct. Depending on the defects and impact across applications, the test team must consider how much regression testing to perform to ensure that fixes have not broken previously tested functionality.

When retest has validated that the problem is fixed, the defect is closed with the following information:

- Date closed
- Verified by
- Comments (allowing for recording information such as what additional defects were opened as a result of this fix)

If during retest it is determined that the problem is not resolved, the defect report will be reopened and returned to the Developer or Configuration resource. If the problem is resolved, but other deviations occur, a NEW defect will be opened.


Anyone can open a defect report, but only the Originator, Test Support Team Lead or the assigned Test Team should close the defect.

9.6) Analyze Defect Data

Throughout testing, the defect database is queried to determine the number of defects still open. Default reporting will be available to display defects sorted on data fields such as Priority and Assigned To.

The Test Lead and Test Teams can use these reports to identify trends or weak areas of the system possibly requiring more attention or resources. They should also use the reports to determine if the system is ready for go-live or for promotion to the next level of testing (based on defect statuses).
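A small sketch of that kind of query, assuming defects are available as records like the one sketched in section 9.2 (the data here is illustrative):

```python
from collections import Counter

# Illustrative open-defect snapshot; in practice this would be
# queried from the defect tracking tool.
open_defects = [
    {"id": "DEF-0042", "priority": "High",   "assigned_to": "dev-a"},
    {"id": "DEF-0043", "priority": "Medium", "assigned_to": "dev-b"},
    {"id": "DEF-0044", "priority": "High",   "assigned_to": "dev-a"},
]

by_priority = Counter(d["priority"] for d in open_defects)
by_assignee = Counter(d["assigned_to"] for d in open_defects)
print("Open by priority:", dict(by_priority))  # {'High': 2, 'Medium': 1}
print("Open by assignee:", dict(by_assignee))

# A simple promotion gate based on open-defect statuses:
print("Ready for promotion:", by_priority.get("High", 0) == 0)
```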


10) Document Sign Off

Phase: Build

The (Deliverable Name) document has been reviewed and found to be consistent with the specifications and/or documented project requirements. The signature below documents acceptance of this document and/or work product by the signing authority.

Organization: University of Chicago________________ Contractor________________

Approved by:

Signature: ___________________________________________________________________

Name: ______________________________________________________________________

Title:

Date:

Organization: University of Chicago________________ Contractor________________

Approved by:

Signature: ___________________________________________________________________

Name: ______________________________________________________________________

Title:

Date:

Organization: University of Chicago________________ Contractor________________

Approved by:

Signature: ___________________________________________________________________

Name: ______________________________________________________________________

Title:

Date:
