8/14/2019 Glossary & Types
    Software Testing Glossary & Type of testing

Acceptance Testing: Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate that the software meets a set of agreed acceptance criteria.

Accessibility Testing: Verifying that a product is accessible to people with disabilities (e.g. deaf, blind, or cognitively impaired users).

Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing.

Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.

Application Binary Interface (ABI): A specification defining requirements for portability of applications in binary form across different system platforms and environments.

Application Programming Interface (API): A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.

Automated Software Quality (ASQ): The use of software tools, such as automated testing tools, to improve software quality.

Automated Testing:

Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing.

The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.

Backus-Naur Form: A metalanguage used to formally describe the syntax of a language.

Basic Block: A sequence of one or more consecutive, executable statements containing no branches.

Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.

Basis Set: The set of tests derived using basis path testing.

Baseline: The point at which some deliverable produced during the software engineering process is put under formal change control.

Benchmark Testing: Tests that use representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration.

Beta Testing: Testing of a pre-release version of a software product, conducted by customers.

Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.

Black Box Testing: Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.

Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

Boundary Testing: Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)

Boundary Value Analysis: In boundary value analysis, test cases are generated using the extremes of the input domain, e.g. maximum, minimum, just inside/outside boundaries, typical values, and error values. BVA is similar to Equivalence Partitioning but focuses on "corner cases".
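
Deriving boundary values for a numeric range can be done mechanically. The sketch below is an illustration, not part of the glossary; the function name `boundary_values` and the one-off offsets are my own choices:

```python
def boundary_values(lo, hi):
    """Derive boundary-value test inputs for an inclusive integer
    range [lo, hi]: extremes, values just inside the boundaries,
    and error values just outside the valid domain."""
    return {
        "min": lo,                 # extreme: lower bound
        "just_above_min": lo + 1,  # just inside the lower boundary
        "just_below_min": lo - 1,  # just outside (error value)
        "max": hi,                 # extreme: upper bound
        "just_below_max": hi - 1,  # just inside the upper boundary
        "just_above_max": hi + 1,  # just outside (error value)
        "typical": (lo + hi) // 2  # a representative mid-range value
    }

# Example: a hypothetical "age" field that accepts 18..65
cases = boundary_values(18, 65)
```

Each generated value then becomes one test case against the input field.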

Branch Testing: Testing in which all branches in the program source code are tested at least once.

Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.

Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner.

CAST: Computer Aided Software Testing.

Capture/Replay Tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.

CMM: The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.

Cause Effect Graph: A graphical representation of inputs and their associated output effects, which can be used to design test cases.

Code Complete: Phase of development where functionality is implemented in its entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

Coding: The generation of source code.

Compatibility Testing: Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, operating systems, or hardware.

Component: A minimal software item for which a separate specification is available.

Component Testing: See Unit Testing.

Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.

Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

Context Driven Testing: The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

Cyclomatic Complexity: A measure of the logical complexity of an algorithm, used in white-box testing.
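
For a control-flow graph, cyclomatic complexity is commonly computed as V(G) = E - N + 2P (edges minus nodes plus twice the number of connected components); it equals the number of linearly independent paths a test suite should cover. A minimal sketch, with the example graph counts chosen by me:

```python
def cyclomatic_complexity(num_edges, num_nodes, num_components=1):
    """V(G) = E - N + 2P for a control-flow graph with E edges,
    N nodes, and P connected components (1 for a single routine)."""
    return num_edges - num_nodes + 2 * num_components

# A simple if/else: 4 nodes (entry, two branch bodies, exit) and
# 4 edges gives V(G) = 4 - 4 + 2 = 2, i.e. two independent paths.
v = cyclomatic_complexity(num_edges=4, num_nodes=4)
```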

Data Dictionary: A database that contains definitions of all data items defined during analysis.

Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.

Data Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
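
The idea can be sketched as a single test routine driven by a table of rows. The data rows are inlined here for brevity (in practice they would be loaded from a CSV or spreadsheet), and the function under test, `add`, is a hypothetical stand-in:

```python
# Each row is (input_a, input_b, expected_sum), as it might appear
# in an external data file.
test_data = [
    (1, 2, 3),
    (0, 0, 0),
    (-5, 5, 0),
]

def add(a, b):  # the (hypothetical) function under test
    return a + b

def run_data_driven(rows):
    """Run the same test logic once per data row; return the
    inputs of any rows whose actual output differs from expected."""
    return [(a, b) for a, b, expected in rows if add(a, b) != expected]

failures = run_data_driven(test_data)  # an empty list means all rows passed
```

Adding a new test case is then a data change, not a code change.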

Debugging: The process of finding and removing the causes of software failures.

Defect: Nonconformance to requirements or functional/program specification.

Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Depth Testing: A test that exercises a feature of a product in full detail.

Dynamic Testing: Testing software through executing it. See also Static Testing.

Emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.

Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.

End-to-End Testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Equivalence Class: A portion of a component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.

Equivalence Partitioning: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
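
A small sketch of the technique, under an assumed specification of my own invention (a "quantity" field accepting integers 1..100): one representative value is tested per class rather than every possible input.

```python
# Three equivalence classes for the hypothetical spec 1 <= quantity <= 100:
# below the range (invalid), inside it (valid), above it (invalid).
partitions = {
    "below_range": -3,   # representative of quantity < 1
    "in_range": 50,      # representative of 1..100
    "above_range": 250,  # representative of quantity > 100
}

def is_valid_quantity(q):  # the behavior under test
    return 1 <= q <= 100

# One test execution per class representative:
results = {name: is_valid_quantity(value) for name, value in partitions.items()}
```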

Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.

Functional Decomposition: A technique used during planning, analysis and design; creates a functional hierarchy for the software.

Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features.

Functional Testing: See also Black Box Testing.

Testing the features and operational behavior of a product to ensure they correspond to its specifications.

Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.

Glass Box Testing: A synonym for White Box Testing.

Gorilla Testing: Testing one particular module or functionality heavily.

Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.

High Order Tests: Black-box tests conducted once the software has been integrated.

Independent Test Group (ITG): A group of people whose primary responsibility is software testing.

Inspection: A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection).

Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.

Installation Testing: Confirms that the application under test installs (and uninstalls) correctly under the supported range of conditions, e.g. a clean install, an upgrade, or installation with limited disk space.

    Load Testing: See Performance Testing.

Localization Testing: Verifying that software adapted for a specific locale (language, regional formats, cultural conventions) functions correctly in that locale.

Loop Testing: A white box testing technique that exercises program loops.

Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real, objective measurement of something, such as number of bugs per line of code.

Monkey Testing: Testing a system or an application on the fly, i.e. running just a few tests here and there to ensure the system or application does not crash.

Mutation Testing: Testing in which bugs are purposely introduced into the application to check whether the existing tests detect them.

Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.

N+1 Testing: A variation of Regression Testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors. See also Regression Testing.

Path Testing: Testing in which all paths in the program source code are tested at least once.

Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".

Positive Testing: Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.

Quality Assurance: All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.

Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.

Quality Circle: A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.

Quality Control: The operational techniques and the activities used to fulfill and verify requirements of quality.

Quality Management: That aspect of the overall management function that determines and implements the quality policy.

Quality Policy: The overall intentions and direction of an organization as regards quality, as formally expressed by top management.

Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

Race Condition: A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
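
Because the unsynchronized failure is timing-dependent and hard to reproduce deterministically, the sketch below shows the moderating mechanism instead: a lock serializing a read-modify-write on a shared counter. The counter/thread setup is my own illustration:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    """Read-modify-write on a shared counter. The lock moderates
    simultaneous access; without it, two threads could read the
    same value and one of the two updates would be lost."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is now exactly 4 * 10_000: no updates were lost
```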

Ramp Testing: Continuously raising an input signal until the system breaks down.

Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power-out conditions.

Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.

Release Candidate: A pre-release version which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).

Sanity Testing: Brief test of major functional elements of a piece of software to determine if it is basically operational. See also Smoke Testing.

Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in workload.

Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

Soak Testing: Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

Software Requirements Specification: A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.

Software Testing: A set of activities conducted with the intent of finding errors in software.

Static Analysis: Analysis of a program carried out without executing the program.

Static Analyzer: A tool that carries out static analysis.

Static Testing: Analysis of a program carried out without executing the program.

Storage Testing: Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.

Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.

Structural Testing: Testing based on an analysis of internal workings and structure of a piece of software. See also White Box Testing.

System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.

Testing:

The process of exercising software to verify that it satisfies specified requirements and to detect errors.

The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).

The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

Test Automation: See Automated Testing.

Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

Test Case:

Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as requirements tested, test steps, verification steps, prerequisites, outputs, test environment, etc.

A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

Test Driven Development: Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, often roughly as many lines of test code as production code.
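
The red-green-refactor cycle can be sketched with the standard `unittest` module. The `slugify` function and its tests are hypothetical examples of mine; the point is the ordering, with the tests written before the code they exercise:

```python
import unittest

# Step 1 (red): the tests are written first, before any production code
# exists, and initially fail.
class TestSlugify(unittest.TestCase):
    def test_replaces_spaces_with_hyphens(self):
        self.assertEqual(slugify("hello world"), "hello-world")

    def test_lowercases_input(self):
        self.assertEqual(slugify("Hello"), "hello")

# Step 2 (green): the simplest production code that makes the tests pass.
def slugify(text):
    return text.lower().replace(" ", "-")

# Step 3 (refactor): improve the code while keeping the suite passing.
suite = unittest.TestLoader().loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```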

Test Driver: A program or test tool used to execute tests. Also known as a Test Harness.

Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers.

Test First Design: Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.

Test Harness: A program or test tool used to execute tests. Also known as a Test Driver.

Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829.

Test Procedure: A document providing detailed instructions for the execution of one or more test cases.

Test Scenario: Definition of a set of test cases or test scripts and the sequence in which they are to be executed.

Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.

Test Specification: A document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests.

Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product, for example. In most cases, however, a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation.

Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

Total Quality Management: A company commitment to develop a process that achieves high quality product and customer satisfaction.

Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases.

Usability Testing: Testing the ease with which users can learn and use a product.

Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.

User Acceptance Testing: A formal product evaluation performed by a customer as a condition of purchase.

Unit Testing: Testing of individual software components.
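
A minimal unit test exercises one component in isolation, one behavior per test. The component here, `word_count`, is a hypothetical example of mine, written with the standard `unittest` module:

```python
import unittest

def word_count(text):
    """The component under test: a small, independently specified unit."""
    return len(text.split())

class TestWordCount(unittest.TestCase):
    """Each test checks one behavior of the unit, including edge cases."""
    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_multiple_words(self):
        self.assertEqual(word_count("software testing glossary"), 3)

suite = unittest.TestLoader().loadTestsFromTestCase(TestWordCount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```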

Validation: The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection and reviewing.

Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.

Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.

Walkthrough: A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.

White Box Testing: Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing.

Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.

ACCEPTANCE TESTING. Testing to verify a product meets customer-specified requirements. A customer usually does this type of testing on a product that is developed externally.

BLACK BOX TESTING. Testing without knowledge of the internal workings of the item being tested. Tests are usually functional.

COMPATIBILITY TESTING. Testing to ensure compatibility of an application or Web site with different browsers, OSs, and hardware platforms. Compatibility testing can be performed manually or can be driven by an automated functional or regression test suite.

CONFORMANCE TESTING. Verifying implementation conformance to industry standards. Producing tests for the behavior of an implementation to be sure it provides the portability, interoperability, and/or compatibility a standard defines.

FUNCTIONAL TESTING. Validating an application or Web site conforms to its specifications and correctly performs all its required functions. This entails a series of tests which perform a feature-by-feature validation of behavior, using a wide range of normal and erroneous input data. This can involve testing of the product's user interface, APIs, database management, security, installation, networking, etc. Functional testing can be performed on an automated or manual basis using black box or white box methodologies.

INTEGRATION TESTING. Testing in which modules are combined and tested as a group. Modules are typically code modules, individual applications, client and server applications on a network, etc. Integration Testing follows unit testing and precedes system testing.

LOAD TESTING. Load testing is a generic term covering Performance Testing and Stress Testing.

PERFORMANCE TESTING. Performance testing can be applied to understand your application or WWW site's scalability, or to benchmark the performance in an environment of third party products such as servers and middleware for potential purchase. This sort of testing is particularly useful to identify performance bottlenecks in high use applications. Performance testing generally involves an automated test suite, as this allows easy simulation of a variety of normal, peak, and exceptional load conditions.

REGRESSION TESTING. Similar in scope to a functional test, a regression test allows a consistent, repeatable validation of each new release of a product or Web site. Such testing ensures reported product defects have been corrected for each new release and that no new quality problems were introduced in the maintenance process. Though regression testing can be performed manually, an automated test suite is often used to reduce the time and resources needed to perform the required testing.

SMOKE TESTING. A quick-and-dirty test that the major functions of a piece of software work without bothering with finer details. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

STRESS TESTING. Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. A graceful degradation under load leading to non-catastrophic failure is the desired result. Often Stress Testing is performed using the same process as Performance Testing but employing a very high level of simulated load.

SYSTEM TESTING. Testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.

UNIT TESTING. Functional and reliability testing in an engineering environment. Producing tests for the behavior of components of a product to ensure their correct behavior prior to system integration.

WHITE BOX TESTING. Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing.