
Company XYZ Test Strategy Sample Ver 1


Client X Test Strategy

Quality Control-Test Strategy Template/Sample

Project Name:

Document Review and Sign-off

The sign-off of this document signifies that the authorised signatories below have read, accepted and agreed with the planned objectives and requirements specified within.

The sign-off table records the following for each signatory: Name; Position; Applicable sections; Review/endorsement only or Sign-off required; Sign-off/feedback received; Evidence; Date.

Document Control

Versions

New versions are to be created when significant changes occur, such as incorporating stakeholder feedback.

The version history records: Version No; Issue Date; Nature of Amendment; Issued By.

Page 1 of 28 document.doc

Page 2: Company XYZ Test Strategy Sample Ver 1

Client X Test Strategy

CONTENTS

INTRODUCTION
  Purpose
  Background
  Scope
  Audience
  Safety and Security Standards
  Related Documentation

THE STRATEGY
  Test Objectives
  Strategy Overview
  Release 1.0 Phases
  Assumptions
  Risks

TEST LEVELS
  Unit Test
  Integration Testing
  System Testing
  User Acceptance Testing
  Pre-production Testing
  Performance, Load and Stress Testing
  Regression Testing
  Infrastructure Testing

ENTRY AND EXIT CRITERIA
  System Test
  Acceptance Test

ROLES AND RESPONSIBILITIES

COMPONENTS UNDER TEST
  Component Testing Requirements

TEST AUTOMATION TOOLS
  TestDirector
  WinRunner
  LoadRunner

TEST PLANS AND SPECIFICATIONS
  Test Plans
  Test Specifications

TEST ENVIRONMENTS AND DATA
  Test Environments
  Test Data
  Data Loading Mechanisms
  Data Management

QUALITY MANAGEMENT
  Test Reporting
  Test Recording
  Defect Management
  Change Management (Example)
  Configuration Management (Example)
  Release Management (Example)


Introduction

Purpose

The purpose of this document is to set out the high-level strategy for testing the systems making up Release 1.0 of the Company X initiative.

The initial issued version has been written at an early stage in the project. It is therefore likely that this document will require updating as the project progresses.

Detailed test plans will be produced to supplement this document. See the Related Documentation section for details.

Background

The Company X Operations Center IT implementation project is part of the COO Restructuring Program.

The initial stage of the project, Release 1.0, is being undertaken in the UK by Company X. This stage will overlay some of the existing call center systems at St Albans, and will be carried out in several phases:

Phase 1 – controlled release of solution components;
Phase 2 – production roll-out of the solution.

Scope

This document covers all testing to be performed on the systems making up the Operations Center project for Release 1.0.

Audience

The audience for this document is all those who have an interest in the testing of the systems…

Safety and Security Standards

All Company X security standards must be adhered to. This is particularly important where the production systems are accessed.

Related Documentation

This section lists the testing documentation that will be produced to support this strategy.

Test plans will be produced for:
First level integration tests (sub-sections for each application area);
Infrastructure test;
System (end-to-end integration) test Phase 1;
User acceptance test Phase 1;
Pre-production test Phase 1;
System test Phase 2;
User acceptance test Phase 2;
Pre-production test Phase 2;
Performance, load and stress test.


Note that there is no ordering implied by the above list. Test planning will commence at the earliest opportunity.

Test specifications will be produced for:
Unit tests;
First level integration tests;
System tests;
User acceptance tests;
Performance, load and stress tests.

Test reports will be produced: weekly; at the end of each of the test stages; and at other points as required by XYZ.

Procedures documents will be produced as required. They will include: Procedures for the use of the automated test tools; Detailed defect, configuration management and associated procedures.

The Strategy

Test Objectives

The objectives of the testing are to:
Ensure that systems failure does not occur in production use;
Minimize the risk of defects occurring in production use;
Ensure that the system is secure;
Ensure that data integrity is maintained;
Ensure that operations and data can be maintained in the event of disasters;
Verify that the performance of the systems will be adequate to support the business both for Release 1.0 and for future releases.

These objectives must be achieved within the project timescales and budget. The testing process must be adequately documented so as to provide an audit trail. Performance criteria must be agreed with the ‘business’ at the earliest opportunity.


Strategy Overview

The strategy is influenced by a number of factors: use of software packages; time constraints; budgetary constraints.

The use of packages usually reduces the testing effort, since only modified components require in-depth testing. Testing will therefore be focused on unit and functional testing of modifications to the packages, on integration of the packages with each other and the back-end systems, and on validation that the business requirements are satisfied. Package functionality that has not been modified does not require detailed functional testing, although it will be tested as part of business scenario tests. Note, however, that few of the CRM front-end components are expected to remain unmodified.

Due to project deadlines there is a relatively short time available for testing following development. In addition, budgetary constraints preclude the use of a dedicated test team working in parallel with the development teams. For the deadlines to be met, rework during the test period will need to be minimized.

System functionality will be prioritized in terms of its business criticality. Although all functionality will be tested, in the event of pressure on the testing schedule this prioritization will allow the most important and the most complex functions to be focused on and tested in greater depth.

Users will be involved in testing as early as possible so that they can verify that the business requirements are satisfied before the user acceptance stage is reached.

Release 1.0 Phases

Each of the two phases will implement 10 of the 14 business processes included in Release 1.0. Note however that since a screen may be used by many processes, some of the components necessary for the phase 2 processes are likely to be present in phase 1. All components will be tested at the unit and integration levels, but only the phase 1 processes will be tested in the system and acceptance test business scenarios.

Phase 1 will be rolled-out to a selected group of users who will pilot its production use; although this will not be full production use, it will be against production data and the systems must therefore be adequately tested.

Phase 2 will require regression testing of the processes implemented in Phase 1 to ensure that there has been no impact from the Phase 2 development and that any enhancements or corrections made are correct. It should be possible to run the regression tests in a considerably shorter time than during Phase 1 testing since the system should be robust, the test data defined, and the tests themselves refined. This assumption allows the test period for Phase 2 to be less than twice that allotted for Phase 1.

Following Phase 1, it may be feasible to automate some of the tests, which would further reduce the time taken to regression test the Phase 1 functionality in Phase 2. The feasibility will depend on the people, and their skills, available during the limited time available between Phases 1 and 2, and on how much the Phase 1 functionality, including screens, is expected to change for Phase 2.

Assumptions

1. Development will be planned in such a way that incremental integration can be carried out.
2. The development teams will have enough time to plan, specify and execute unit and first level integration tests.
3. Sufficient resource will be available to plan, specify and manage system and acceptance tests.
4. Functional design specifications will be produced on time and will be adequate to derive tests from.
5. An environment in which integration can be carried out will be available during the development phase.
6. The acceptance environment will be available according to plan.
7. Regression testing of Phase 1 functionality in the Phase 2 tests will take less time than when tested in Phase 1.
8. Unit testing will be thorough.
9. Verification of network security is outside the scope of the tests.
10. There will not be a requirement for audit.

Risks

The following risks may impact the success of the testing effort. The likelihood of each risk materialising is not given here; likelihoods should be recorded in the project risk register.

Each risk is listed below with its impact and possible mitigating actions.

Risk: The start of system testing is delayed due to development slippage.
Impact: Less time available for testing; testing incomplete.
Mitigating actions: Monitor progress. Obtain additional resource. Obtain early confirmation of technical feasibility. Test incrementally during development.

Risk: The testing required cannot be achieved in the time allotted with the planned resources.
Impact: Testing incomplete.
Mitigating actions: Prioritise functionality – focus on the most important. Obtain additional resource.

Risk: Major rework is required during the test period.
Impact: Testing incomplete.
Mitigating actions: Ensure the design is correct. Ensure unit testing is thorough. Early incremental integration and functional testing. Early user involvement.

Risk: The development environment does not allow incremental integration testing.
Impact: Integration tests delayed until system testing – increased chance of major rework being required.
Mitigating actions: Design the development environment to allow integration testing.

Risk: The acceptance environment is not ready in time for system testing.
Impact: Testing delayed and/or incomplete.
Mitigating actions: Ensure the environment is built early and tested thoroughly before use.

Risk: Creation and maintenance of test data may be complex, and restoration of datasets may not be easily achieved, due to the distribution of data elements and the restrictions on access to the CRM database.
Impact: If mechanisms to readily load data are not available, excessive time may be spent on data loading, or inadequate data may be used for unit tests.
Mitigating actions: Ensure the data model is finalised in time to allow data management planning. Thoroughly define test data requirements, and the mechanisms by which the data will be entered, well ahead of the start of testing.

Risk: Regression testing of Phase 1 processes in Phase 2 may not be quicker than when tested in Phase 1.
Impact: Phase 2 tests take longer than expected.
Mitigating actions: Minimize changes made to Phase 1 processes for Phase 2.

Risk: Unit testing is inadequate.
Impact: Large numbers of defects discovered in later tests and in production use.
Mitigating actions: Ensure review and approval processes are followed. Provide guidance, if required, on unit test specification.

Risk: Insufficient resource to script system tests.
Impact: Inadequate testing; inadequate audit trail.
Mitigating actions: Plan development effort to ascertain the resource available for test scripting. Monitor progress and bring in additional resource if required.

Risk: A test manager is not appointed.
Impact: Lack of co-ordination and management of test scripting and execution, resulting in inadequate testing and test documentation.
Mitigating actions: Recruit a test manager from within XYZ, or recruit externally – permanent or contract.

Risk: The existing performance criteria (response comparable to existing systems) may not be achievable, since client/server systems operating over WANs rarely perform to the levels of green screen mainframe systems.
Impact: Contractual difficulties.
Mitigating actions: Investigate performance as a priority. Acquire load test tools. Establish performance criteria based on the time to undertake processes. Establish realistic response times for data retrieval etc. based on industry standards.

Test Levels

This section describes the levels or types of testing that will be used. Infrastructure and telephony testing are described in distinct sections, since the testing approach for these differs from that for the software components.

Unit Test

A unit is the lowest level at which programs are produced. Units may be described as functions, objects, routines or modules. This section describes the testing of these units.

The intention should be to exercise all conditional branches and loop constructs against data values derived from boundary value analysis and equivalence partitioning. Consideration should also be given to possible interactions between input values, and to the possible effects of repeating data elements or groups where multiple records are processed.
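As an illustration of this approach (not project code), the sketch below applies boundary value analysis to a hypothetical field-length rule; the function name and the 30-character limit are assumptions introduced for the example.

```python
import unittest

# Hypothetical unit under test: accepts surnames of 1 to 30 characters.
def validate_surname(value):
    if value is None:
        return False
    return 1 <= len(value) <= 30

class SurnameBoundaryTests(unittest.TestCase):
    """Boundary value analysis: test at, just below and just above each boundary."""

    def test_boundaries(self):
        cases = [
            ("", False),        # below the lower boundary
            ("A", True),        # lower boundary
            ("B" * 30, True),   # upper boundary (maximum length)
            ("C" * 31, False),  # above the upper boundary
            (None, False),      # null value
        ]
        for value, expected in cases:
            self.assertEqual(validate_surname(value), expected)

if __name__ == "__main__":
    unittest.main()
```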

Data retrieval functions should be tested against a dataset that includes the following:
At least one record with maximum length data in each field;
Records with null values in any fields which may be null;
Parent records with and without child records;
Values of zero in numeric fields;
All instances of code values.

Tools or test harnesses may be required to simulate inputs to the object and to record the values returned from a function or updated in a database or external variable.

Tests should be written in such a way that the conditions under test and the data values against which they are tested are clearly stated for the test cases – this may be described for each of the test cases written, or be documented separately.

A unit test specification must be produced. This should clearly state what the conditions under test are and the data values against which the conditions are tested.

Results must be documented in whatever way is most suitable and stored as a record of the testing carried out.

Defects found during this phase of testing will not be formally recorded. The developer carrying out the tests should correct defects and re-run the tests until all tests are successful.

The unit test specification and results should be reviewed and approved by the development team leader or another authorised team member.


Consideration should be given to repeatability of the tests for use in regression testing of later modifications. Where the tests are complex and/or are time consuming to run, the possibility of automation should be considered.

The details of how unit testing will be carried out for the various application areas of the project are described in the following sub-sections.

Application Views

All modifications and customisation of the Application screens will undergo unit testing by the development team. The unit for testing will be application views and their associated business objects, applets and business components.

The tests will cover the following areas:
Layout;
Labelling;
Menus;
Help;
Navigation;
Mandatory/optional fields;
Maximum length data;
Field formatting;
List of values/codes;
Data retrieval;
Validation of input values;
Any processing carried out from the screens, eg calculation of derived values;
Error handling;
Performance.

Tests will be specified against the detailed technical specifications for the views.

Middleware Integration Component Objects

Individual Visual Basic modules, held within a DLL, will be tested in isolation by means of Visual Basic test harnesses which make calls to the DLL functions. The harnesses will display or print the input values and the values returned, printouts or screen dumps of which will constitute the test results.
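The project's harnesses will be written in Visual Basic; purely as an illustrative sketch of the pattern, the following Python harness calls a function exported from a DLL and records its inputs and outputs. The DLL name, function name and signature are hypothetical, not the project's actual middleware components.

```python
import ctypes

# Hypothetical DLL and export; runs on Windows only, and the path,
# function name and signature are placeholders for illustration.
dll = ctypes.WinDLL("middleware_ic.dll")
dll.GetAccountBalance.argtypes = [ctypes.c_long]
dll.GetAccountBalance.restype = ctypes.c_double

def run_case(account_id):
    """Call the DLL function and print input and output as the test record."""
    result = dll.GetAccountBalance(account_id)
    print(f"input={account_id} returned={result}")
    return result

for account in (0, 1, 99999):  # boundary and typical inputs
    run_case(account)
```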

Tests will be specified against the detailed technical specifications.

Back-end Systems

The back-end systems wrappers will provide the interface between the back-end systems and the middleware. The intention is to make use of existing routines where possible.

Unit tests of the back-end systems wrappers will be carried out at two levels: function level tests of new routines, and module level tests in which the integration of the routines within the wrappers is tested. The module level tests may be incremental or may simply treat the wrapper as a module.

Test plans will be produced for each of the wrappers that will describe in detail how the testing will be performed.

Tests will be specified against transaction specifications and the overall middleware functional and technical design documentation.


Tests will be specified, executed and documented according to the existing standards in use by the back-end system support teams.

Interfaces

Test results will consist of the results of the queries on the interface tables.

Integration Testing

Integration tests will be performed at two levels: integration within an application area (first level integration), and integration between the major application areas (end-to-end integration).

The application areas, from the testing viewpoint, are: CRM Application; Middleware; Back-end systems; Telephony; MSOffice and MSOutlook.

The major integration points between the application areas are: Telephony to CRM; CRM client to middleware; Middleware to the back-end systems; CRM server to back-end systems; CRM to MSOffice.

Note that integration within the telephony systems is covered in the section covering telephony testing.

The Integration Pilot

The integration pilot will simply aim to demonstrate connectivity. Tests will be carried out informally, although they should be as thorough as possible. Defects will not be formally recorded.

First Level Integration

Integration within an application area will be carried out in the development environment following unit testing of the components to be integrated.

An integration test plan will be produced which will describe how integration of each application area will be tested. The test plan will describe the sequence in which components will be integrated, will specify the tests that will be run, and will describe how the other application areas will be simulated if required. Simulation in general will be achieved by placing messages on the appropriate MQ queue.

Tests will be specified by reference to the functional design documentation and the detailed technical specifications.

The tests should fully exercise the interfaces between the components. Data inputs should be specified at a similar level to those suggested for unit testing above.

Defects found at this stage will be formally recorded in the project defect management system (see Defect Management section) to allow monitoring of their correction.


End-to-End Integration

This level of integration will be carried out as part of system testing in the acceptance environment. However, prior to this, an informal level of integration will be achieved during the pilot phase, and, where possible, during first level integration tests in the development environment.

The pilot will create a sample process to demonstrate connectivity from the CRM client through to each of the back-end systems.

System Testing

System tests will consist of a set of tests based on the functional specifications and on the detailed system design. The tests will form four logical groups: Functional tests of customised functional areas; Business scenario tests; Integrity tests; Security tests.

User involvement, either as part of the test team or as witnesses, should be obtained. The ‘users’ should be those responsible for specifying the requirements. This will reduce the likelihood of mismatches between user expectations, functional requirements and system functionality being discovered during UAT.

Functional tests will be constructed for functional areas that have been customised or configured (note that all screens will be customised). Tests will be grouped into logical units that constitute discrete areas of functionality. This will allow functional tests to be run in isolation so allowing incremental testing of functionality and facilitating regression testing.

Functional tests will be based on the functional design specifications and cross-referenced to them. Tests will encompass both positive and negative conditions, ie what should happen and what should not happen. A coverage matrix will be used to ensure all functional test conditions are covered. Test cases will be constructed for each functional test condition by examining the possible states of the data utilized by the function. Boundary value analysis and equivalence partitioning should be used to identify data values.

Business scenario tests will cover all functional areas of the systems, whether customised or not. They will be constructed from the business process flows. Users will be asked to assist and to validate the scenarios. Each scenario will cover a number of business processes and will focus on the use of data within the processes from creation through to output to reports and fulfilment. These tests will form the basis for user acceptance tests.

Integrity tests will be devised to ensure that the integrity of data is preserved under all conditions. Tests will cover:
Concurrent access to data records by multiple users;
Concurrent access to data records by different processes;
Rapid sequential update of the same record by the same user and by different users;
Impact on the CRM system of creation and deletion of records in back-end systems, eg where a record which is due to be updated in CRM is updated in the back-end after the record has been read into CRM;
Checks that there is no unexpected interference with existing back-end data;
Any potential conflict between batch (if implemented via EIM) and on-line processes;
Data interactions between the new and old systems when both are in use.

Security tests will ensure that users have access to the areas of the system for which they are authorised according to their role, but not to other areas for which they are not authorised. Security of the network will be verified as part of the infrastructure testing described in a later section.
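A sketch of how such role-based checks might be driven is shown below; the roles, function names and the simulated `attempt` stub are hypothetical, and in practice each check would be executed against the application itself. The point of the matrix is that denied access is asserted as explicitly as granted access.

```python
# Hypothetical role/function access matrix: every role is checked against
# every function, confirming both granted and denied access.
ACCESS_MATRIX = {
    "agent":      {"view_customer": True,  "amend_customer": True,  "admin_codes": False},
    "supervisor": {"view_customer": True,  "amend_customer": True,  "admin_codes": True},
    "read_only":  {"view_customer": True,  "amend_customer": False, "admin_codes": False},
}

def attempt(role, function):
    # Stand-in for driving the real application as `role`; here a simulated
    # authorisation check so the sketch runs end to end.
    granted = {
        "agent": {"view_customer", "amend_customer"},
        "supervisor": {"view_customer", "amend_customer", "admin_codes"},
        "read_only": {"view_customer"},
    }
    return function in granted.get(role, set())

failures = [(role, fn) for role, fns in ACCESS_MATRIX.items()
            for fn, allowed in fns.items() if attempt(role, fn) != allowed]
print("Security test failures:", failures or "none")
```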

Functional tests will be carried out before running the business scenario tests. Integrity and security tests may be run in parallel with the functional and scenario tests.


Tests will be executed by the technical team in the acceptance environment.

Defects will be formally recorded in the project defect management system (see Defect Management section).

User Acceptance Testing

User acceptance tests will be specified against the business process flows and a set of finalised requirements derived from the Functional Catalogue [1] and from scoping studies carried out during the pilot stages of the project. The finalised requirements will be documented in such a way that test conditions can be easily derived.

Tests will be specified by the users, with assistance from the test manager, and will be executed by the users with support from the project team.

Users will be given the opportunity to carry out ‘ad hoc’ tests in which they can explore the capabilities of the systems. These tests will be carried out separately from the formal tests. Users should record their actions to allow the tests to be reproduced in the event of observations being raised. Note that during phase 1 there may be functionality included which is required for phase 1 processes and which provides parts of phase 2 functionality; if such tests are performed in phase 1, they must be undertaken with a clear knowledge of phase 1 scope to avoid unnecessary evaluation of perceived defects.

Where possible, use will be made of the business scenario tests developed for system testing.

Acceptance tests will be run in the acceptance environment and executed by users following successful system testing. The members of the user test team should include:
At least one person who is familiar with the requirements for the system;
CSC staff who are familiar with each of the corresponding functional/process areas in the existing CSC.

The numbers of users required will be established and stated in the acceptance test plan. It is envisaged that between 6 and 7 users will be required, depending on the skill ranges of the users, the involvement of the users in earlier testing, and the requirements and expectations of the lead users for system validation and verification.

Defects will be formally recorded in the project defect management system (see Defect Management section).

Pre-production Testing

Pre-production tests, also known as business readiness tests, will be run with the applications installed in the production environment but against test data. These tests will consist of a sub-set of the user acceptance tests selected to cover a broad range of functionality. The objective is to demonstrate that the production release corresponds to that previously tested, and that the production environment is correctly built and configured.

Performance, Load and Stress Testing

The performance requirement given…

The performance of the application systems operating over the network from the CRM clients to the back-end systems will be measured under simulated load conditions with a large volume of data as described below.

Tests will be run in the acceptance test environment. The performance impact of any differences between the architectures of the acceptance and production environments should be considered when judging the acceptability of the performance measured during testing.


A detailed performance test plan will be produced after the network simulation exercise, described below, has been carried out and the placement of machines finalised.

It is recommended that performance be assessed at the earliest opportunity to ensure that any performance issues are resolved as early as possible and before system and acceptance tests commence.

Network Simulation

The first stage will be simulation of the network using a simulation tool, OpNet Planner. This will be carried out at the earliest opportunity to establish the performance implications of the options for placement of the CRM system across the network.

Load Simulation

The second stage of testing will use the test tool LoadRunner to simulate realistic numbers of users carrying out transactions from the client front-ends through to the back-end systems. The time taken to execute transactions will be automatically recorded for the LoadRunner generated transactions and, while the simulation is running, measurements will be taken of the execution times of transactions executed by testers from workstations.

This stage will require definition of typical transaction mixes that simulate real use; these should include the most business critical functions and the most frequently used functions. This stage of testing cannot therefore be carried out until the system has been developed and is reasonably stable. However, if the network simulation exercise suggests that there may be performance issues associated with the network, it may be worth considering early use of LoadRunner with the sample transactions developed during the pilot phase.

Tests should be performed with the ‘normal’ expected numbers of concurrent users and with the maximum expected number of users. If performance under the maximum loading is unsatisfactory, tests may be needed to establish the boundary at which performance becomes unsatisfactory.

Tests will in general be conducted against the current production volumes. However, tests should also be carried out against the maximum expected future volumes and the future expected number of users to ascertain whether performance will be adequate in the case of expansion of the enterprise.

A baseline set of performance metrics will be recorded for the ‘normal’ simulation. This will be used for comparison purposes following changes to the systems either for performance improvements or for enhancements to functionality, networks or hardware. The validity and reproducibility of the baseline may be compromised by variability of network traffic and other activity on the mainframes. To minimize this, tests may need to be scheduled to be run during quiet periods, such as at night or at weekends. To ascertain variability, each manually recorded metric should be replicated three times and the variance between them calculated; if the variability is excessive (less than 90% confidence) further measurements should be taken. If the variability cannot be reduced, scheduling of the tests may need to be changed or measures taken to reduce the activities causing the variability.
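As a sketch of how the variability of a replicated metric might be quantified, the following computes a 90% confidence interval from three measurements. The sample values and the 10% acceptability threshold are assumptions introduced for the example; the strategy states only that excessive variability (less than 90% confidence) triggers further measurements.

```python
import statistics

# Three replicated response-time measurements (seconds) for one transaction;
# the values are illustrative, not project data.
samples = [2.41, 2.55, 2.38]

mean = statistics.mean(samples)
stdev = statistics.stdev(samples)       # sample standard deviation (n - 1)
t_90_df2 = 2.920                        # two-sided 90% t value for 2 degrees of freedom
half_width = t_90_df2 * stdev / len(samples) ** 0.5

print(f"mean={mean:.3f}s  90% CI = +/- {half_width:.3f}s")
if half_width / mean > 0.10:            # assumed threshold for 'excessive' variability
    print("Variability excessive: take further measurements or reschedule the run.")
```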

Investigation of Performance Issues

If performance is deemed unsatisfactory, further tests will be required to establish the cause of the poor performance. The causes of poor performance could include:
Network latency;
Network bandwidth;
Server characteristics, including memory and processing capacity;
Workstation processing capacity;
Poorly constructed database queries;
Excessive or inefficient data fetches (compounded by network latency and bandwidth);
Poorly tuned databases, including table indexing.

Monitoring of the network, clients and servers will be required during these tests.


LoadRunner provides tools that allow the transaction time to be split between client, server and network. The processing time on the server and client can be split into CPU, input/output, lock and system lock times. Network time can be recorded between any two processors, and can be broken down to show the delay between routers and between router and client. A server monitor for NT and UNIX is available that shows:
Percentage CPU utilisation;
Number of page faults;
Number of file operations per second;
The standard UNIX or NT monitoring tools.

Oracle also includes tools that allow statistics on factors such as use of memory cache, disk reads, etc to be gathered. These are used to tune the Oracle database.

The only additional tools that might be required are more sophisticated network monitoring tools if the network appears to be limiting performance. These are likely to be available within the technical services department.

Regression Testing

Regression testing is the process of ensuring that corrections or enhancements to a system have not affected other previously tested areas of the system. The set of tests that need to be run for regression testing must be determined by considering the impact of the changes made to the system - this requires analysis by the appropriate technical staff. In the event that the impact cannot be fully determined, a wider set of tests than may be necessary must be run.

The regression test suite will be built up from the set of unit, integration and functional system tests devised during development of the system. This suite will be extended in response to enhancements or to cover defects which were not raised against a test already included in the test suite, eg during acceptance testing or during production use.

Each regression test set or script will be associated with a particular data set from which it should start, together with any data modification or loading scripts required.

Defects found during regression testing will be formally recorded in the project defect management system.

Regression tests will be automated where feasible. See the section on automated test tools.


Infrastructure Testing

Infrastructure tests will be fully defined in an infrastructure test plan.

Networks

Network testing will ensure that:

the correct connectivity is achieved;

the network is secure;

the routing of data is optimal;

the load on network segments is balanced.

‘Ping’ will be used to establish connectivity, and network monitoring tools used to establish routing and segment balancing.
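A minimal sketch of scripted connectivity checking using ping is shown below; the host names are placeholders for the actual servers and routers under test.

```python
import platform
import subprocess

# Hypothetical host list: replace with the servers and routers under test.
HOSTS = ["crm-server", "mq-gateway", "backend-ims"]

def ping(host):
    """Return True if a single ping to `host` succeeds."""
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    result = subprocess.run(["ping", count_flag, "1", host],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

for host in HOSTS:
    print(f"{host}: {'reachable' if ping(host) else 'UNREACHABLE'}")
```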

Business Continuity

Business continuity testing will ensure that business can be continued in the event of disaster. Tests will cover:

Failover

Failover testing will ensure that in the event of hardware failure such as disk or server malfunction there is automatic switchover to alternative hardware such as a mirrored disk or hot standby server.

Tests will involve switching off hardware components during operation.

Entry and Exit Criteria

The following sections describe the criteria that must be satisfied before system and user acceptance may commence, and the criteria for completion of these test phases.

Definitions of the severity levels mentioned in the following sections are given in the Defect Management section.

System Test

Entry Criteria

All unit tests of software must be complete, documented and approved;

All functional and first level integration tests of the telephony components should be complete;

All components of the systems must be under configuration management and the contents of the release into system test known and documented;

The system test environment must be operational;


Test data must be available;

A system test plan must have been produced and approved;

System test specifications must be completed and approved;

Defect, change and release management processes must be in place.

Exit Criteria

All planned tests must have been executed;

There must be no known and unresolved severity 1 or 2 defects;

The acceptability of any known and unresolved severity 3 or 4 defects must be agreed with the lead users and appropriate project sponsors;

There must be an agreed plan for the resolution of any known and unresolved defects.

Acceptance Test

Entry Criteria

The entry and exit criteria for system test must have been satisfied;

The acceptance environment must be complete;

Acceptance test specifications must be complete and approved;

A release notice fully describing the content of the release must be provided with the release;

All user documentation and on-line help must be complete and available to the testers.

Exit Criteria

All planned tests must have been executed;

There must be no known and unresolved severity 1 or 2 defects;

The acceptability of any known and unresolved severity 3 or 4 defects must be agreed with the lead users and appropriate project sponsors;

There must be an agreed plan for the resolution of any known defects.

Roles and Responsibilities

Each activity is listed below with the role(s) responsible.

Produce overall test strategy: Test strategist to produce the initial version; to be maintained by the project team under the direction of the project manager.
Approve test strategy: Executive Sponsor (Rowland Strickland); Project Managers; User leads (John Sellers, Gordon Finlay, Barry Reed).
Produce unit test plans/specifications: Developers.
Approve unit testing: Development team leads (or authorised deputy).
Produce first level integration test plans, including test specification: Development team leads.
Approve first level integration test plans: Test manager, project manager.
Produce system test plan: Test manager.
Approve system test plan: Project manager, user leads (individuals to be confirmed).
Produce system test scripts: Development teams and/or dedicated testing resource (individuals to be confirmed), with assistance from the business (John Tomlinson) and the test manager, and input on business scenarios from user leads.
Approve system test scripts: Test manager, project manager, user leads (individuals to be confirmed).
Produce acceptance test plans: Test manager with input from user leads.
Produce acceptance test scripts: Users with support from the project team.
Approve acceptance test plans and scripts: Project manager, user leads and sponsor.
Define test environments: Infrastructure team lead (Greg Renouf) in conjunction with ADS (Application Development Support).
Build and maintain test environments: XYZ and Company X support staff.
Manage use of test environments, including data and application build and load: Test manager.
Execute unit tests: Developer of the module.
Execute system tests: System test team (see below).
Execute acceptance tests: Users (see the Acceptance Test section for details).
Support acceptance tests: Members of the system test team.
Day-to-day management of system and acceptance tests: Test manager.
Produce weekly and end of stage test reports: Test manager.
Evaluation of integration, system and acceptance test observations: Test manager; project manager; development team leads; 'user' representative(s) (see the Observation Management section).
Approval for correction of defects: Project manager.
Approval of change requests (enhancements): Project board.
System builds: Development teams.
Approval of builds for production use: ADS.
Release to production: ADS.
Produce release notes for integration, system and acceptance test deliveries: Development teams.
Produce release notes for production releases: Test or release manager.

The test team for the system test phase will be made up from members of the development teams, supplemented as necessary by members of the user community (John Tomlinson is currently expected to form part of the team). The makeup of the team will need to be defined once detailed estimates for the test effort are produced - this needs further definition of the functional specifications. It should be noted that a development capability will be required to correct defects raised during testing; this will need to be balanced with the use of development resources in testing.


Components Under Test

The diagram on the following page illustrates the system components. The shaded components are those that will be modified.

[Insert diagram of the system components here.]


Component Testing Requirements

The following table describes the testing that will be applied to the components making up the systems.

Each component is listed below with the test levels applied to it.

ACD (Automatic Call Distributor): Functional; Integration; System; UAT.
CTI Server: Functional; Integration; System; UAT.
CRM Client – Modified: Unit (GUI and Functional); Integration; System; UAT.
CRM Client – Unmodified: Integration; System; UAT.
CRM to MQ Integration Component (IC): Unit (Functional); Integration; System; UAT.
CRM Server: Integration; System; UAT.
CRM Interface (if implemented): Unit (Functional); Integration; System; UAT.
MQ: Unit (Functional, limited to exits, eg for security processing); Integration; System; UAT.
MQ-IMS Bridge: Integration.
Back-end Wrappers: Unit (Functional); Integration; System; UAT.
Back-end Systems: Integration; System; UAT.


Test Automation Tools

The Company Y Interactive suite of test tools will be used. This includes:
TestDirector – test management;
WinRunner – a screen-based test capture and replay tool (a CRM-specific add-in is available);
LoadRunner – a load testing tool which allows controlled simulation of multiple client transactions from a central workstation or workstations.

TestDirector and WinRunner licences are in the process of being acquired by XYZ application support, and sufficient licences will be available for this project (20 TestDirector and 15 WinRunner licences will be available organisation wide). It is currently expected that the tools will be installed on the test LAN and available for use by the end of May 2000.

LoadRunner has not yet been purchased, and will require a business case to be presented prior to purchase by XYZ.

WinRunner can be linked to TestDirector to associate WinRunner automated scripts with sets of tests defined within TestDirector.

Company Y consultancy will be used by XYZ Application Development Support (ADS) to establish the tools in the XYZ environment, and ADS staff will receive training from Company Y. It is suggested that the appropriate staff from this project are included in the relevant parts of this training, in particular training on the strategy for structuring tests.

TestDirector

TestDirector will be used to record all tests and their results, to record observations on the systems raised during testing, and to report on the progress of testing.

If preferred, or if TestDirector is not available when test scripting starts, tests may initially be specified in MSWord tables or in an Excel spreadsheet. The content of these documents can then be automatically loaded into TestDirector. Templates will be provided for this purpose as required.
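As an illustration of specifying tests in a spreadsheet-friendly format ahead of loading into TestDirector, the sketch below writes test steps to a CSV file. The column names and test content are hypothetical; the actual layout will be defined by the templates mentioned above.

```python
import csv

# Illustrative rows only: each row is one step of a test, with its action
# and expected result, in a layout suitable for later bulk import.
rows = [
    ("TC-001", 1, "Open the customer search view", "Search view is displayed"),
    ("TC-001", 2, "Enter a known account number and search", "Matching customer record is listed"),
    ("TC-002", 1, "Enter an unknown account number and search", "A 'no records found' message is shown"),
]

with open("test_cases.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Test ID", "Step", "Action", "Expected Result"])
    writer.writerows(rows)
```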

TestDirector allows definition of test cases within a hierarchical tree structure consisting of nodes that categorise the types of tests, within which tests can be further sub-divided or test cases defined. Each test case consists of a set of test steps or actions each with an expected result. Test cases are selected for inclusion into sets of tests for execution.

For functional tests, the test hierarchy will map to the structure of the functional design. Groups of test cases should map to discrete functional areas; this will provide flexibility for the future in the way that functional tests could be combined with each other or into test scenarios.

Unit and functional tests will have test steps defined at a sufficiently low level to allow unambiguous execution so that a tester other than the author of the tests can execute the tests consistently.

A document will be produced describing in detail how TestDirector will be used for defining test cases.

WinRunner

WinRunner will be used to automate tests for regression test purposes.

Note that the use of WinRunner will require in-house expertise in its use. Tests are recorded in scripts that are similar to program source code, and, in situations where only minor changes are required, considerable time can be saved by altering the scripts rather than re-recording the test.

Tests should not be recorded until the components of the systems under test are stable both in terms of expected functionality and presence of defects.

The most important and/or most complex tests will be automated first (provided they are stable).


The process for automation of tests should take the following approach:

1. Before the tests have been recorded, the tests should be run manually until any errors found in the system or scripts are corrected.

2. Once a clean test run has been achieved, the tests should be recorded.

3. For subsequent runs of the tests:

Any required changes or additions should be made to the manual test scripts, and the affected tests run manually until a clean run is achieved;

The remaining tests, which have not been affected by the changes made to the system, should be run using the automated scripts. Any deviations from the expected results should be investigated and the causes identified. If any changes are required to the automated test scripts, they should be made in the manual scripts and those tests re-run manually until correct results are achieved.

4. The new and changed test scripts should then be recorded, or re-recorded, once a clean test run has been achieved.

LoadRunner

LoadRunner will be used to simulate large numbers of transactions being executed from the front-end client for performance and stress testing purposes. Execution times and system statistics are automatically recorded.


Test Plans and Specifications

Test Plans

Test plans set out the detail of why, how, when, where and who for a specific phase of testing. They should supplement and expand on the information given in this test strategy, and should not duplicate it.

Test plans should include the following information:

A detailed strategy for the test phase if this differs from the high level strategy or additional detail is required - any assumptions or justifications will be stated;

Details of what is covered by the tests, including references to change requests and corrected defects;

A description of the data requirements;

Preliminary resource and duration estimates for the test activities, including test specification - these are likely to change as specification progresses, and the plan will therefore need updating following the test specification phase;

Resource allocation to the activities, including test specification;

Responsibilities of the individuals associated with the test phase;

A test schedule;

An assessment of any risks associated with the test phase, together with the actions to be taken to minimize those risks.

Test Specifications

Test specifications will consist of three sections:

1. A uniquely numbered list of the conditions that will be tested. These should include functional and error conditions gathered from the source documentation against which the tests are specified, cross-referenced to the source documentation. They may be copied and pasted from the source documents if appropriate. Although this may seem to be duplication, it ensures visibility of what is being tested and facilitates maintenance if the source documents change. To allow for additional conditions to be entered after the list has been definitively numbered, the initial condition numbers may be incremented by a multiple, eg 2, 5 or 10, to allow for insertion.

2. A matrix of tests against test conditions to show that all test conditions are covered in at least one test (a minimal sketch of such a coverage check follows this list).

3. A set of test scripts.
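A minimal sketch of checking the coverage matrix mechanically is given below; the condition numbers (assigned in increments of 10 to allow insertion, as described above) and test identifiers are illustrative.

```python
# Hypothetical numbered test conditions and the tests that reference them.
conditions = {
    10: "Customer record retrieved by account number",
    20: "Null surname rejected",
    30: "Maximum-length address accepted",
}
tests = {
    "TC-001": [10, 30],
    "TC-002": [20],
}

# Every condition must be covered by at least one test.
covered = {c for refs in tests.values() for c in refs}
uncovered = sorted(set(conditions) - covered)
if uncovered:
    print("Conditions not covered by any test:", uncovered)
else:
    print("All conditions covered by at least one test.")
```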

Parts 1 and 2 will be maintained in a Word document. Part 3, the test scripts, will be recorded within TestDirector although they may initially be recorded in Word. See the Test Automation Tools section for further details.

Each script should correspond to discrete areas of functionality that correspond to the test structure defined for TestDirector.

Each test script should include the following information:

A unique identifier for the script;

A version number;

The author of the script;

The purpose and description of the script, including a list of the test conditions covered and the data conditions against which the test condition is being executed;

Dependencies, including required data and any tests which must have been run previously;


A set of numbered test steps each with the actions to be performed and the expected results. Expected results should pay particular attention to checks on the data flows through the systems.

Test Environments and Data

The network and hardware used for testing are…

Test Environments

Unit testing will be carried out in the development environments. An instance of each database will be required, namely: the CRM environment at the XYZ Data Center; the back-end systems at the XYZ Data Center.

An environment that provides connectivity from CRM through to the back-end systems will be required for the integration pilot. This will utilize the development environments.

Test Data

For unit tests, data will in general be created by individual developers according to the needs of the unit tests they are carrying out. Dynamic data will need to be owned by developers and partitioned in some way to prevent its update by others' tests.

A common data set should be created for unit and functional tests of data retrieval functions, such as population of screen lists or generation of reports. This data set will contain, if applicable:
At least one record with maximum length data in each field;
Records with null values in any fields which may be null;
Parent records with and without child records;
Values of zero in numeric fields;
All instances of code values.
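Purely as an illustration of constructing such a data set, the sketch below loads rows targeting each of these conditions into an in-memory SQLite database; the schema, column names and code values are hypothetical stand-ins for the project data model.

```python
import sqlite3

# Each inserted row targets one of the dataset conditions listed above.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customer (id INTEGER PRIMARY KEY, surname TEXT,
                       balance NUMERIC, status_code TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY,
                     customer_id INTEGER REFERENCES customer(id));
""")
con.executemany("INSERT INTO customer VALUES (?, ?, ?, ?)", [
    (1, "X" * 30, 100.00, "ACT"),  # maximum-length data in a field
    (2, None,     50.00,  "SUS"),  # null value in a nullable field
    (3, "Smith",  0,      "CLO"),  # zero in a numeric field; third code value
])
con.execute("INSERT INTO orders VALUES (1, 1)")  # parent 1 has a child; 2 and 3 do not
con.commit()
```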

System and acceptance tests will use dynamic data created through use of the system, together with data taken from the production back-end systems; the CRM database will therefore start in an ‘empty’ state. Reference data will be created before the tests commence using the most appropriate data loading mechanisms available for each of the systems involved.

Performance testing will require a large set of data that simulates the maximum production volumes of data expected during the lifetime of the systems. This data will require automated loading mechanisms due to its size.

Data Loading Mechanisms

For unit and regression testing it is desirable to have the means to load defined sets of data without the overhead of manual keyboard entry.

The mechanisms available for loading data into the systems are given in the following sub-sections.

CRM

The underlying database tables are not directly accessible. A utility allows data to be loaded from spreadsheets.

Reference data may be loaded through the standard maintenance screens. Note however that these will require modification before use if the underlying data structure is modified.

Back-end Systems

Subsets of data can be copied from the production systems on request to the DBA team.


Direct access to the database is possible via SQL, so datasets can then be modified as required for particular tests.

Data Management

Mechanisms to allow clear-down of databases, loading of defined datasets and restoration of databases to known states must be established. These mechanisms will need to consider the technical issues of distributed data, and the logistics in terms of co-ordination of support staff at the various locations.

Back-end Systems

For the back-end systems, a base set of data will be extracted from the production systems and modified as required. This dataset will include all required reference data. Any names and addresses held in this data set must be made anonymous and dissociated from addresses and telephone numbers.

For unit testing purposes the modification will be achieved by means of SQL scripts created by the developers; these scripts must be saved so that the test start state can always be reproduced by re-loading the base data and re-running the script.
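A sketch of reproducing a test start state from saved scripts is shown below, using SQLite as a stand-in for the back-end databases; the file names are placeholders for the saved base-data extract and the developer's versioned modification script.

```python
import sqlite3

# Reproduce a unit-test start state: reload the saved base data, then
# re-run the developer's saved modification script. File names are
# placeholders for the project's saved artefacts.
con = sqlite3.connect("unit_test.db")
with open("base_data.sql") as base, open("modify_for_test_42.sql") as mod:
    con.executescript(base.read())  # restore the extracted base dataset
    con.executescript(mod.read())   # apply the saved modifications
con.commit()
```

Because both scripts are saved and versioned, the same start state can be recreated at any time, which is the property the strategy requires.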

For system and acceptance test, modification of the base set will be carried out as required prior to the start of the tests. This set of data will then be saved before any tests are run. At the end of each set of functionally related tests, or at any point where substantial modifications have been made to the data during the course of testing, the entire data set will be saved. A log will be maintained describing the purpose of the data, when it was saved and the release of the software under test when it was saved.

CRM

Where there is reference data that is common to both CRM and the back-end systems, the back-end data will be the master and the CRM data should therefore match it.

Quality Management

Test Reporting

During system and acceptance testing, a daily test progress meeting will be held (teleconferencing facilities will be used if necessary). The primary purpose of this meeting will be to facilitate test monitoring and control, to provide first-pass evaluation of observations, and to facilitate communication between the test team, the development teams and project management. The attendees at the meetings will include:
The test manager;
The project manager;
The development team leaders;
A representative of the 'business';
Any others required for a particular purpose, eg a tester to fully explain an observation, or a business representative for a particular area to assist with evaluation.

Weekly test reports will be produced during system and acceptance tests. These will include:

A description of the activities undertaken during the week, including details of the tests run against plan;

Planned activities for the coming week;

A summary of incidents raised and closed during the week, categorised by severity and system;

An overall cumulative summary of the incident and defect status;

Any outstanding issues or risks associated with the testing.


An end of stage test report will be produced at the end of each phase of system testing. This will include:

A summary of the tests run and any not run;

Summarised defect metrics;

Any remaining known faults, with workarounds and proposals for their remediation;

A conclusion on the status of the system, and recommendations for implementation or further testing.

Test Recording

All tests and their results will be recorded in TestDirector (another system could be used).

Results may include file outputs, screen dumps or other material associated with the test. If these are in electronic format, they should be attached to the test record in TestDirector; if in paper format, a copy should be placed in the project filing system and a reference to it made against the test record in TestDirector.

Defect Management

This section describes the processes and procedures for managing defects uncovered during testing.

The following definitions describe the terminology used:

Observation: When a tester observes a discrepancy between the expected results and the actual results for a test, or notices behaviour which s/he considers incorrect or unsuitable, an Observation report will be raised.

Defect: A defect is where the system fails to meet specified requirements or agreed system behaviour, or the software is in error, as shown by error conditions occurring or failure to meet commonly agreed standards of interface usability.

Change: A change request is required where the system's functionality is not considered suitable to support the business processes, and the functionality was not specified as a requirement or stated by the supplier as a function of the product under test.

Issue: An issue is used to categorise an observation if it cannot initially be categorised as a fault, no fault or change. Issues must eventually be resolved.

Severity: The impact on the business of a defect or change request (see the definition of categories below).

Priority: How quickly a defect or change request must be fixed or implemented (see the definition of categories below).

All observations on the systems behaviour made during testing will be recorded in TestDirector by those executing the tests. This will allow association of defects with tests, and simplify their management and reporting.

The following information will be recorded against an observation – attributes shown in bold are mandatory and must be completed when the observation is raised:

The Observation Reference; Observation category: Change(Enhancement), Error, Issue; The Source of the Error: Application (CRM, middleware etc), Test Script, Requirement, Design

Specification etc; The Version of the system in which the error occurred; The Testing Stage (see below); Test script reference1; The dataset in use; The test environment in which the tests were run;

1 Note that if a defect is raised from tests which are not formally scripted, a test should be created which invokes the defect. This will prevent that defect re-occurring and will allow formal retest.


The Reported Date;
The Observer's name;
A keyword summary of the observation;
The details of the observation;
The severity of the observation (as given below);
The priority of the observation (as given below);
The Effort Required to fix/implement;
The Status of the Observation: Open; Approved for correction; In fix; Fixed; Closed No Fault; Closed Withdrawn; Closed Fixed;
The Closed Date (mandatory if the status is set to Closed);
Release fixed in (mandatory if the status is set to Closed Fixed);
Items modified or added during correction or enhancement, together with the version numbers of the items after modification (items may be source code, customised screen definitions, requirements and design documentation, or test specifications);
Comments on the progress of the observation.

The testing stage field will be one of the following: Integration; Phase 1 System; Phase 1 Acceptance; Phase 2 System; Phase 2 Acceptance; Performance.
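The observation record above can be summarised in outline as follows. This is a hedged illustration only: TestDirector's actual schema differs, all names are assumptions, and since the bold markup indicating mandatory attributes has not survived in this copy, the mandatory/optional split shown is indicative.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import List, Optional

class Category(Enum):
    CHANGE = "Change (Enhancement)"
    ERROR = "Error"
    ISSUE = "Issue"

class TestingStage(Enum):
    INTEGRATION = "Integration"
    PHASE1_SYSTEM = "Phase 1 System"
    PHASE1_ACCEPTANCE = "Phase 1 Acceptance"
    PHASE2_SYSTEM = "Phase 2 System"
    PHASE2_ACCEPTANCE = "Phase 2 Acceptance"
    PERFORMANCE = "Performance"

class Status(Enum):
    OPEN = "Open"
    APPROVED = "Approved for correction"
    IN_FIX = "In fix"
    FIXED = "Fixed"
    CLOSED_NO_FAULT = "Closed No Fault"
    CLOSED_WITHDRAWN = "Closed Withdrawn"
    CLOSED_FIXED = "Closed Fixed"

@dataclass
class Observation:
    # Assumed mandatory when the observation is raised
    reference: str
    category: Category
    error_source: str                # e.g. "Application (CRM)", "Test Script"
    system_version: str
    testing_stage: TestingStage
    reported_date: date
    observer: str
    summary: str
    details: str
    # Completed as the observation progresses
    test_script_reference: Optional[str] = None
    dataset: Optional[str] = None
    environment: Optional[str] = None
    severity: Optional[int] = None   # 1 Critical .. 4 Cosmetic
    priority: Optional[str] = None   # High / Medium / Low
    effort_to_fix: Optional[str] = None
    status: Status = Status.OPEN
    closed_date: Optional[date] = None       # mandatory once Closed
    release_fixed_in: Optional[str] = None   # mandatory once Closed Fixed
    items_modified: List[str] = field(default_factory=list)
    comments: List[str] = field(default_factory=list)
```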

The strategy within XYZ application support is to use PVCS ProPlus as the standard tool for configuration and defect management. How this will be used on this project needs further consideration, given that:

TestDirector includes defect management facilities; use of PVCS would require construction of a link to PVCS, or duplication of records with the associated risk of error;
CRM includes both configuration and defect management tools;
LPS has its own procedures for configuration and defect management;
MDp holds code in COBOL object libraries.

Testers may make a preliminary judgement on the severity and priority of defects where appropriate, consulting subject matter experts as needed.

On a daily basis, the test manager will make a preliminary evaluation of the observations raised, consulting the tester as appropriate. Paper copies of the observations will be made and passed to those attending the daily test meetings for further evaluation. Following further evaluation, the observation records will be updated in TestDirector as required.

Evaluation will consider firstly whether the observation is a defect, a change request, an issue, or should be withdrawn as ‘no fault’. If it is a defect, a severity and a priority will be assigned.

Severity definitions are as follows:

Severity  Category  Definition
1         Critical  A defect which prevents use of affected system functionality and that would prevent the system from going live.
2         Major     A defect that materially reduces the ability to use the system. This could be a critical fault for which a workaround is available. Generally would prevent the system going live except in exceptional circumstances.
3         Minor     A defect that does not prevent the system going live, but which requires care or awareness on the part of the user.
4         Cosmetic  A defect that has no material impact on the use of the system, eg spelling mistakes, incorrect layout etc.


Priority definitions are as follows:

Priority  Definition
High      Must be fixed immediately.
Medium    Must be fixed as soon as practical.
Low       Fix when convenient.
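A minimal sketch, assuming the categories above, of how the outcome of the daily evaluation might be recorded against an observation; the function and attribute names are hypothetical:

```python
from enum import Enum
from typing import Optional

class Severity(Enum):
    CRITICAL = 1   # prevents use of affected functionality; blocks go-live
    MAJOR = 2      # materially reduces ability to use the system
    MINOR = 3      # requires user care or awareness; does not block go-live
    COSMETIC = 4   # no material impact, eg spelling or layout

class Priority(Enum):
    HIGH = "Must be fixed immediately"
    MEDIUM = "Must be fixed as soon as practical"
    LOW = "Fix when convenient"

def record_evaluation(obs, classification: str,
                      severity: Optional[Severity] = None,
                      priority: Optional[Priority] = None) -> None:
    """Record the outcome of the daily evaluation meeting.

    classification is one of "defect", "change", "issue" or "no fault";
    per the process above, only defects are graded with a severity
    and a priority.
    """
    obs.classification = classification
    if classification == "defect":
        if severity is None or priority is None:
            raise ValueError("a defect must be assigned a severity and a priority")
        obs.severity = severity
        obs.priority = priority
    elif classification == "no fault":
        obs.status = "Closed No Fault"
```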

Change Management (Example)

Observations raised which are deemed to be change requests after evaluation will be raised in the project change log and resolved within the project change control procedures.

Change requests which originate outside the observation management system and which impact on development or maintenance of the systems will also be recorded in the observation management system once they are approved. The change log will be updated to reflect the status of the change request once the change has been implemented and tested.

All change requests will be identified as such in the observation management system.

Change requests must be authorised according to the project change control procedures.

Configuration Management (Example)

Configuration management will apply at two levels: application objects and hardware components.

As mentioned earlier, it is proposed by XYZ Application Development Support (ADS) that PVCS ProPlus be used as the standard tool for configuration management as well as for defect and change management.

Control of CRM objects will be achieved via the in-built CRM Tools mechanisms. These provide check-out/check-in facilities, but do not maintain a version history. In addition, CRM archive files (.sif) will be regularly transferred to PVCS; a decision is required on the frequency of this transfer. CRM provides an automatic link to external source control tools that checks in a .sif file every time a CRM project (a collection of objects) is checked back into the CRM repository. The transfer options are:

use the automatic transfer mechanism to record each project check-in;
manually transfer archives at key system baseline points.

The first option is recommended, as it allows back-out of any change made within a CRM project.
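Purely as an illustrative sketch of the recommended automatic option: the hook name, paths and staging approach below are assumptions, and the actual PVCS ProPlus check-in step is site-specific and therefore not shown.

```python
import shutil
from datetime import datetime
from pathlib import Path

# Hypothetical staging area from which archives are checked into PVCS
ARCHIVE_DROP = Path("/pvcs/staging/crm_archives")

def on_project_checkin(sif_file: str, project_name: str) -> Path:
    """Invoked after each CRM project check-in to preserve the .sif archive.

    Copies the archive to a staging area under a timestamped name, so that
    every project check-in is recorded and any change made within a CRM
    project can later be backed out. The subsequent PVCS ProPlus check-in
    of the staged file is deliberately left to site-specific tooling.
    """
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    ARCHIVE_DROP.mkdir(parents=True, exist_ok=True)
    target = ARCHIVE_DROP / f"{project_name}-{stamp}.sif"
    shutil.copy2(sif_file, target)
    return target
```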

Integration Component (CRM Client to MQ) objects and, if implemented, code to implement the EIM interface will be stored directly in PVCS.

Further investigation is needed to establish how this tool will be used in conjunction with the mainframe systems.

Release Management (Example)

All releases will be built from defined sets of objects held under configuration management. It should always be possible to re-create a release from the source components. Components generated from lower level components, eg executables from source code, must also be held under configuration management, and the versions of the lower level components recorded against the generated component.

Once a system component is placed under configuration management, all changes made to that component must only be made against an approved observation report, either to correct a defect or to implement an enhancement.

For all releases made into system or acceptance test, a release notice must be produced. This must include:

An overall version number for the release;
A description of the purpose of the release, in particular whether the release is a full release or a patch release which does not replace all components; if the latter, the release on top of which it should be installed must be given;
A list of all components making up the release with their version numbers (taken from the appropriate configuration management system), together with an indication as to whether the component is changed in the release;


The location of release components and/or the media they are held on;
Instructions on how the release should be installed;
A list of all observations (defects and enhancements) which are implemented in the release;
A description of any new functionality that has been developed in the release (excluding that made in response to an observation or change request);
A list of any known defects together with any workarounds;
Any alterations made to system software in the development environment, eg upgrades or patches.
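To make the required contents concrete, the release notice could be modelled as below. This is a sketch under the assumption that a structured record is wanted; all names are illustrative rather than a prescribed format.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Component:
    name: str
    version: str    # taken from the configuration management system
    changed: bool   # changed in this release?

@dataclass
class ReleaseNotice:
    version: str
    purpose: str
    is_patch: bool = False
    base_release: Optional[str] = None   # mandatory for a patch release
    components: List[Component] = field(default_factory=list)
    location: str = ""                   # where components/media are held
    install_instructions: str = ""
    implemented_observations: List[str] = field(default_factory=list)
    new_functionality: str = ""
    known_defects_and_workarounds: List[str] = field(default_factory=list)
    system_software_changes: List[str] = field(default_factory=list)

    def __post_init__(self):
        # A patch release must name the release it installs on top of.
        if self.is_patch and not self.base_release:
            raise ValueError("patch releases must state the base release")
```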
