  • Evaluation and Testing

    Gihan Seneviratne

    April 2013

  • Verification and Validation

    Software testing is part of a broader group of activities called verification

    and validation that are involved in software quality assurance

    Verification (Are the algorithms coded correctly?)

    The set of activities that ensure that software correctly implements a specific

    function or algorithm

    Validation (Does it meet user requirements?)

    The set of activities that ensure that the software that has been built is

    traceable to customer requirements

  • Testing???

    Testing is an activity whose goal is to

    determine if the implementation is correct.

  • Testing Phases

    Determine Test activities

    Select appropriate test data

    Determine the expected results

    Conduct the tests

    Collect test results

    Compare test results with expected results (see the sketch below).
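
    A minimal sketch of these phases in Python; the compute_discount function and its test data are hypothetical examples, not from the slides:

    # Hypothetical function under test.
    def compute_discount(amount):
        """10% discount for orders of 100 or more, otherwise none."""
        return amount * 0.10 if amount >= 100 else 0.0

    # Select test data and determine the expected results up front.
    test_cases = [
        {"input": 50,  "expected": 0.0},
        {"input": 100, "expected": 10.0},
        {"input": 200, "expected": 20.0},
    ]

    # Conduct the tests, collect the results, and compare with expectations.
    results = []
    for case in test_cases:
        actual = compute_discount(case["input"])
        passed = abs(actual - case["expected"]) < 1e-9
        results.append((case["input"], case["expected"], actual, passed))

    for given, expected, actual, passed in results:
        print(f"input={given}: expected={expected}, actual={actual}, "
              f"{'PASS' if passed else 'FAIL'}")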

  • Design of Test activities

    Begins with an analysis of:

    The functional specifications of the system.

    How the system will be used (referred to as use

    cases).

    Will result in black-box testing

  • Black Box testing

    ignores the internal structure

    test activities will be based on

    the expected external behavior

    of the system.

  • A Test case

    Test cases are defined by:

    A statement of case objectives.

    A test data set (pay attention to boundaries; see the sketch below)

    Expected results

    Carried out concurrently with system

    development. Sometimes you may have to

    refine test cases during implementation
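
    A minimal sketch of one such test case in Python's unittest, with test data chosen at the boundaries; the is_eligible function and its 18-65 rule are hypothetical, not from the slides:

    import unittest

    # Hypothetical function under test: eligibility for ages 18-65 inclusive.
    def is_eligible(age):
        return 18 <= age <= 65

    class EligibilityBoundaryTest(unittest.TestCase):
        """Objective: check behaviour at and just outside the 18 and 65 boundaries."""

        def test_lower_boundary(self):
            self.assertFalse(is_eligible(17))  # just below the boundary
            self.assertTrue(is_eligible(18))   # on the boundary

        def test_upper_boundary(self):
            self.assertTrue(is_eligible(65))   # on the boundary
            self.assertFalse(is_eligible(66))  # just above the boundary

    if __name__ == "__main__":
        unittest.main()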

  • High Level: Generic Test Process

    [Flow diagram] Test Planning & Preparation → Test Execution → Analysis & Follow-up,

    looping back whenever the goals of a stage are not yet met, and exiting once they are.

  • Test plan

    describes the test cases

  • Test Planning & Preparation (important but not always fully performed)

    Two important items in a test plan include:

    Testing objectives & Targets in terms of: Reliability

    Performance

    Customer satisfaction

    etc.

    General Test Strategy which includes: What & how much to test

    Techniques of testing

    Tools needed

    Measurements and Analysis (details of)

    People & Skills

    Schedule

  • Don't Forget: Test Risks and Contingencies

    Common to most of your projects: poor project management, lack of time, work left half done, etc.

  • Sample: Recording Test Case & Test suites

    Test Case number (use a numbering scheme such as the one your textbook shows)

    Test Case author

    A general description of the test purpose

    Pre-condition: includes test results of other pre-req modules

    Test inputs

    Expected outputs (if any)

    Post-condition

    Test Case history:

    Test execution date

    Test execution person

    Test execution result(s): Pass / Fail

    If failed: failure information and fix status

    (A sketch of such a record as a data structure follows below.)
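
    A rough sketch of such a record as a Python dataclass; the field names simply mirror the list above and are illustrative, not a prescribed format:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class TestExecution:
        """One entry in the test case history."""
        date: str
        tester: str
        result: str                       # "Pass" or "Fail"
        failure_info: Optional[str] = None
        fix_status: Optional[str] = None

    @dataclass
    class TestCaseRecord:
        case_id: str                      # e.g. "TC-001", per your numbering scheme
        author: str
        description: str                  # general description of the test purpose
        precondition: str                 # incl. results of prerequisite modules
        inputs: dict
        expected_outputs: dict
        postcondition: str
        history: List[TestExecution] = field(default_factory=list)

    # Example usage with made-up data:
    record = TestCaseRecord(
        case_id="TC-001",
        author="A. Tester",
        description="Valid user can log in",
        precondition="User table populated; authentication module already tested",
        inputs={"username": "alice", "password": "secret"},
        expected_outputs={"status": "logged in"},
        postcondition="Session created for the user",
    )
    record.history.append(TestExecution(date="2013-04-01", tester="A. Tester", result="Pass"))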

  • Adequacy of test cases ???

    can be measured based on the concept of

    coverage

    Functional testing (specification-based testing,

    or black box)

    Structural testing (implementation-based, or

    white box); see the sketch below
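
    A small sketch of the two views on the same hypothetical function (not from the slides): the first assertion is a functional test derived only from the specification, the second is a structural test added after reading the code so that the express branch is also covered:

    # Hypothetical function under test.
    def shipping_cost(weight_kg, express=False):
        cost = weight_kg * 2.0
        if express:
            cost *= 1.5        # this branch is easy to miss with spec-only tests
        return cost

    # Functional (black box) test: based only on the stated price of 2.0 per kg.
    assert shipping_cost(10) == 20.0

    # Structural (white box) test: exercises the express branch as well.
    assert shipping_cost(10, express=True) == 30.0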

  • Document templates

    Project test plan

    Component test plan

    Integration test plan

    Use case test plan

    System test plan

  • Relationships among test plans

  • Important test plan items

    Features not tested

    Test suspension criteria and resumption

    requirements

    Risks and contingencies

  • Component test plan

    Purpose is to define an overall strategy and

    specific test cases used to test a certain

    component

    One for each significant component that

    warrants isolated testing

  • Component test plan sections

    Objectives for the class

    Guided inspection requirements

    Building and retaining test suites

    Functional test cases

    Structural test cases

    State-based test cases

    Interaction test cases

  • Use case test plan

    Purpose is to describe system level tests to be derived from a single use case

    Incorporated into system and integration test plans

    Use case levels

    High-level

    End-to-end system

    Functional sub-use cases

  • Use case test plan

    Functionality use cases: modify the data

    maintained by the system in some way

    Report use cases: access information in the

    system, summarize it, and format it for

    presentation to the user

  • Integration test plan

    Important in iterative development

    environments

    Specific sets of functionality delivered in

    stages as the product slowly emerges

    Use cases may not be built up yet

    Use the component test plan

  • System test plan

    Summarizes the individual use case test

    plans

    Provides information on additional testing

    needed

    System test plan table: Use case number | Test case number | Reason for selecting

  • Coverage

    indicates which items have been touched by test cases:

    Code coverage

    Post-condition coverage

    Model-element coverage

    Coverage metrics are stated in terms of the product being tested rather than the process we are using to test it (see the sketch below).
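
    A rough sketch of how code (statement) coverage can be observed with Python's standard-library trace module; the grade function and the single test call are hypothetical:

    import trace

    # Hypothetical function whose coverage we want to observe.
    def grade(score):
        if score >= 50:
            return "pass"
        return "fail"

    tracer = trace.Trace(count=True, trace=False)
    tracer.runfunc(grade, 70)   # only the "pass" branch is executed by this test

    # counts maps (filename, line number) -> execution count; lines of grade()
    # missing from it were never touched by the test.
    for (filename, lineno), count in sorted(tracer.results().counts.items()):
        print(f"{filename}:{lineno} executed {count} time(s)")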

  • Software Faults and Failures: Why Does Software Fail?

    Wrong requirement: not what the customer

    wants

    Missing requirement

    Requirement impossible to implement

    Faulty design

    Faulty code

    Improperly implemented design

  • Software Faults and Failures

    Types of Faults

    Algorithmic fault

    Computation and precision fault

    a formula's implementation is wrong

    Documentation fault

    Documentation doesn't match what the program does

    Capacity or boundary faults

    System's performance not acceptable when certain limits are reached

    Timing or coordination faults

    Performance faults

    System does not perform at the speed prescribed

    Standard and procedure faults

  • Software Faults and Failures: Orthogonal Defect Classification

    Fault Type: Meaning

    Function: Fault that affects capability, end-user interface, product interface with hardware architecture, or global data structure

    Interface: Fault in interacting with other components or drivers via calls, macros, control blocks, or parameter lists

    Checking: Fault in program logic that fails to validate data and values properly before they are used

    Assignment: Fault in data structure or code block initialization

    Timing/serialization: Fault in timing of shared and real-time resources

    Build/package/merge: Fault that occurs because of problems in repositories, management changes, or version control

    Documentation: Fault that affects publications and maintenance notes

    Algorithm: Fault involving efficiency or correctness of an algorithm or data structure, but not design

  • Testing Issues: Factors Affecting the Choice of Test Philosophy

    The number of possible logical paths

    The nature of the input data

    The amount of computation involved

    The complexity of algorithms

  • The Purpose of Evaluation

    the measurement of results against

    established objectives

    Revisit your first chapter and get the list of

    objectives

  • Objectives:

    A Prerequisite for Evaluation

    Before any system can be properly evaluated, it's

    important to have a clearly established set of

    measurable objectives

  • Continued

    Don't wait until the end to determine how the system will be evaluated

    Evaluation starts in the planning stage: you break the problem down into measurable goals and objectives; then, after implementing the program, you measure the results against those goals

    "A highly user-friendly system"... how would you evaluate that?

  • Facts gathering

    Surveys and Questionnaires

    Used to collect information from a large group of respondents.

    Interviews (including focus groups)

    Used to collect information from a small key set of respondents.

    Experiments

    Used to determine the best design features from many options.

    Field studies

    Results are more generalizable since they occur in real settings.

  • Common mistakes

    Standalone development but claims it's

    network/web enabled (e.g. MS Access)

    Single-user system but claims it's multi-user

    enabled

    MS-based development but claims it's platform

    independent

    Server bound only to IP 127.0.0.1 (localhost) but claimed to be web-based (see the sketch below)
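
    A minimal sketch of the difference using Python's standard-library http.server (the port and handler are arbitrary): a server bound to 127.0.0.1 is reachable only from the same machine, while binding to 0.0.0.0 accepts requests from other hosts, as a genuinely web-based system should:

    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Reachable from other machines on the network.
    server = HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler)

    # HTTPServer(("127.0.0.1", 8080), ...) would only ever serve localhost,
    # which is the mistake described above.
    print("Serving on all interfaces, port 8080 ...")
    server.serve_forever()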

  • Cont'd

    Browser compatibility of web-based systems

    Display compatibilities (standard vs. widescreen)

    Backup and disaster recovery (DR)

    Parallel runs, runs with historical data or hand-made data

    HCI issues

    Installation dependencies (works only on your

    PC)

  • Thank you!

    Q & A