
CSCI 3428: Software Engineering
Tami Meredith
Chapter 9: Testing the System


TESTING
- The LAST LINE OF DEFENSE!
- Relying on testing is dangerous and leads to low-quality products
- (Chart: # of faults vs. # LOC)
- Faults are often clustered: where there is one, there is usually another
- Fixing faults frequently introduces new faults, or reveals previously undetectable faults
- Faults are frequently NOT fixed because the actual problem has not been identified

System Testing
- Functional tests
- Non-functional tests (performance)
- Acceptance & installation tests
- SYSTEM IN USE! (Live)

Configuration Management
- Versions and releases
- Release 1.0: John Madden Football (1988); versions: PC, C-64, Apple II
- Release 5: John Madden Football 93 (1992); versions: PC, SNES, Genesis
- Release 13: Madden NFL 2000 (1999); versions: PC, PS, N64, Game Boy Color, Mac
- Release 27: Madden NFL 13 (2012); versions: Xbox 360, PS3, Wii (U), PS Vita, iOS, Android
- All platforms to date: MS-DOS, C64/128, Apple II, Genesis, SNES, Amiga, Windows, Game Boy (+ Color, Advance), Game Gear (TV), PS, Saturn, PS2, N64, Mac, NGC, Xbox, Nintendo DS, PSP, Windows Mobile, Xbox 360, PS3, Tapwave Zodiac, Wii (+ U), OS X, iOS, Android, 3DS

Regression Testing
- Tests applied to a new version/release
- Verifies it functions in the same manner as the previous version/release
- Should be used in iterative/incremental development
- Often automated and done by the build team
- Daily builds are advocated by agile methods and done by some companies (e.g., Microsoft)

Function(al) Testing
- Does the software do what it's supposed to? Has the SRS been implemented?
- Generally black-box (as opposed to white-box for unit/integration testing)
- Try to minimise the number of tests while still testing the software fully
- Optimal (complete) testing is infeasible!
  - Too many paths
  - Too many different input combinations (including bad data)
  - Could be destructive

Cause and Effect Graphs
- Yeah, whatever

Non-Functional Testing (Performance)
- Stress: max users, requests, devices
- Volume: max data
- Configuration: all hardware combinations
- Compatibility: with other systems
- Regression: with the system being replaced
- Security: is it safe?
- Timing: performance, efficiency, response speed
- Environmental: use site (heat, humidity, vibration)
- Quality: MTTF, MTBF
- Recovery: equipment failure, data loss
- Maintenance: diagnostic tools and procedures work
- Documentation: sufficient/correct documentation exists
- Human factors: usability

Failure Classification
1. Catastrophic: loss of system or life
2. Critical: severe injury or damage, and mission failure
3. Marginal: injury or damage that delays or degrades the mission
4. Minor: no mission impact, but requires maintenance or repair
- Can you tell this specific classification is MIL-STD-1629A (US)? Others exist using different criteria.

Measurement
- Software metrics:
- MTBF = MTTF + MTTR
- Reliability (0..1): R = MTTF / (1 + MTTF)
- Availability (0..1): A = MTBF / (1 + MTBF)
- Maintainability (0..1): M = 1 / (1 + MTTR)

Acceptance Testing
- Convince the customer it works
- Generally involves the requirements team
- Benchmark test: typical cases representative of use
- Pilot test: use on an experimental basis
- Alpha testing: testing of installed software by the development organisation (e.g., programmers or professional testers)
- Beta testing: testing of installed software by the client organisation (e.g., users)
- Parallel testing: new system run in parallel with the existing system and results compared

Test Plans
- Specialised for purpose (integration, security, etc.)
- Rationale, purpose, overview of testing
- Methods used to perform each test
- Identifies tools, stubs, drivers, and other support needed
- Detailed list of test cases and outputs for those cases
- Descriptions of how to evaluate outputs
- Notes on how to record, track, and describe faults

Test Plans II
- Objectives
- Document references
- System summary
- Major tests: identify testing approaches, broken down as required, correspondences to the SRS indicated
- Test specifications: input/source, output/recording, limitations, requirements, ordering of sub-tests, procedures
- Schedule
- Materials needed

Fault Reports
A. Admin: tester, test #, site, equipment, date
B. Location: where did it occur?
C. Timing: any details possible, sequencing
D. Symptom: how you knew it was a fault
E. End-result: failure caused, consequences
F. Mechanism: how to make it happen again
G. Cause: type of error that caused it
H. Severity: with respect to performance and application domain
I. Cost: to fix, and its impact on the application domain
- Not all of these can be clearly answered
- Components often overlap
- REPRODUCIBILITY is the key!
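The fields A-I of a fault report map naturally onto a simple record type. A minimal sketch in Python, assuming a hypothetical `FaultReport` class (field names are illustrative, not from the slides):

```python
from dataclasses import dataclass

@dataclass
class FaultReport:
    """One fault report, loosely following fields A-I from the slide."""
    tester: str        # A. Admin: who ran the test (plus test #, site, equipment, date)
    test_id: str       # A. Admin: test identifier
    location: str      # B. where the fault occurred
    timing: str        # C. sequencing details, if any
    symptom: str       # D. how the fault was detected
    end_result: str    # E. failure caused, consequences
    mechanism: str     # F. how to make it happen again -- the key field
    cause: str = ""    # G. often unknown when the report is filed
    severity: int = 4  # H. 1 (catastrophic) .. 4 (minor), per MIL-STD-1629A
    cost: str = ""     # I. cost to fix, and its impact

    def is_reproducible(self) -> bool:
        # A report without a reproduction mechanism is hard to act on
        return bool(self.mechanism.strip())
```

A fault tracker would persist records like this in a database and attach workflow state (open, confirmed, fixed, verified) on top.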
Fault (Bug) Trackers
- Also known as issue trackers
- A large number exist
- Generally an interface to a DB system
- Wide variety of features
- May be integrated with other systems (RCS, IDE)

Fault Flow
- (Diagram)

Thoughts
- Testing cannot remove all faults
- Safety, reliability, and security are NOT the same thing
- Assume users will make every mistake possible (and then some)
- Review, review, and then review again
- Short-term costs should not overshadow long-term risks and costs

Famous Moments in Software Engineering: Marvy Medical Stuff
- 1986 to 1996: 450 reports to the US FDA regarding software failure; 24 incidents led to death or injury
  - IV pump ran dry and injected air into a patient
  - Monitor failed to sound when patients stopped breathing
  - Respirator gave extra breaths to a patient
  - Digital display put the data of patient X beside the name of patient Y
- Software failures responsible for 24% of device recalls (FDA, 2011)
- Causes of failure for IT healthcare projects (Abouzahra 2011):
  - 43% incomplete scope
  - 30% unidentified risks
  - 14% unidentified stakeholders (and interfaces)
  - 8% communication (developer/customer)
  - 5% other
- 66% of device failures are triggered by a single variable value
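The formulas from the Measurement slide are easy to sanity-check in code. A minimal sketch (function names are mine; the formulas are exactly as given on the slide):

```python
def mtbf(mttf: float, mttr: float) -> float:
    """Mean time between failures = mean time to failure + mean time to repair."""
    return mttf + mttr

def reliability(mttf: float) -> float:
    """R in 0..1: approaches 1 as mean time to failure grows."""
    return mttf / (1 + mttf)

def availability(mttf: float, mttr: float) -> float:
    """A in 0..1: based on MTBF, so repair time lowers availability."""
    m = mtbf(mttf, mttr)
    return m / (1 + m)

def maintainability(mttr: float) -> float:
    """M in 0..1: approaches 1 as repairs get faster."""
    return 1 / (1 + mttr)

# Example: a system that runs 99 hours between failures, 1 hour to repair
print(reliability(99))        # 0.99
print(maintainability(1.0))   # 0.5
```

Note that all three measures are normalised into 0..1 by the `1 +` term, so they are dimensionless scores rather than probabilities derived from a failure model.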