Glossary of software testing terms


This glossary of software testing terms and conditions is a compilation of knowledge, gathered over time, from many different sources. It is provided “as-is” in good faith, without any warranty as to the accuracy or currency of any definition or other information contained herein. If you have any questions or queries about the contents of this glossary, please contact Original Software directly.

Orig’inal a. Not derivative or dependent, first-hand, not imitative, novel in character or style, inventive, creative, thinking or acting for oneself.

Soft’ware n. Programs etc. for computer, or other interchangeable material for performing operations.


Acceptance Testing: Formal testing conducted to enable a user, customer, or other authorized entity to determine whether to accept a system or component. Normally performed to validate that the software meets a set of agreed acceptance criteria.

Accessibility Testing: Verifying a product is accessible to people with disabilities (visually impaired, hard of hearing, etc.).

Actual Outcome: The actions that are produced when the object is tested under specific conditions.

Ad hoc Testing: Testing carried out in an unstructured and improvised fashion. Performed without clear expected results, ad hoc testing is most often used as a complement to other types of testing. See also Monkey Testing.

Alpha Testing: Simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.

Arc Testing: See branch testing.

Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design philosophy. In agile development, testing is integrated throughout the lifecycle, testing the software throughout its development. See also Test Driven Development.

Application Binary Interface (ABI): Describes the low-level interface between an application program and the operating system, between an application and its libraries, or between component parts of the application. An ABI differs from an application programming interface (API) in that an API defines the interface between source code and libraries, so that the same source code will compile on any system supporting that API, whereas an ABI allows compiled object code to function without changes on any system using a compatible ABI.

Application Development Lifecycle: The process flow through the various phases of the application development life cycle. The Design Phase covers everything up to the point of starting development: once all of the requirements have been gathered, analyzed, and verified, and a design has been produced, the programming requirements are ready to be passed to the application programmers.

The programmers take the design documents (programming requirements) and then proceed with the iterative process of coding, testing, revising, and testing again; this is the Development Phase. After the programs have been tested by the programmers, they become part of a series of formal user and system tests. These are used to verify usability and functionality from a user point of view, as well as to verify the functions of the application within a larger framework. The final phase in the development life cycle is to go to production and become a steady state. As a prerequisite to going to production, the development team needs to provide documentation. This usually consists of user training and operational procedures: the user training familiarizes the users with the new application, while the operational procedures documentation enables Operations to take over responsibility for running the application on an ongoing basis. In production, changes and enhancements are handled by a group (possibly the same programming group) that performs the maintenance. At this point in the life cycle of the application, changes are tightly controlled and must be rigorously tested before being implemented into production.

Application Programming Interface (API): An interface provided by operating systems or libraries to support requests for services made of them by computer programs.

Automated Software Quality (ASQ): The use of software tools, such as automated testing tools, to improve software quality.

Automated Software Testing: The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions, without manual intervention.
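
As a minimal sketch of the idea, the fragment below uses Python's built-in unittest module to compare actual outcomes against predicted outcomes automatically; the function under test (add_tax) and its expected behaviour are hypothetical, not taken from this glossary.

    import unittest

    def add_tax(amount, rate=0.2):
        # Hypothetical function under test: apply a flat tax rate to an amount.
        return round(amount * (1 + rate), 2)

    class AddTaxTests(unittest.TestCase):
        def test_standard_rate(self):
            # The framework compares the actual outcome with the predicted outcome.
            self.assertEqual(add_tax(100.00), 120.00)

        def test_zero_amount(self):
            self.assertEqual(add_tax(0.00), 0.00)

    if __name__ == "__main__":
        unittest.main()  # executes every test without manual intervention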

Automated Testing Tools: Software tools used by development teams to automate and streamline their testing and quality assurance process.


Backus-Naur Form (BNF): A metasyntax used to express context-free grammars: that is, a formal way to describe formal languages. BNF is widely used as a notation for the grammars of computer programming languages, instruction sets and communication protocols, as well as a notation for representing parts of natural language grammars. Many textbooks for programming language theory and/or semantics document the programming language in BNF.

Basic Block: A sequence of one or more consecutive, executable statements containing no branches.

Basis Path Testing: A white box test case design technique that fulfills the requirements of branch testing and also tests all of the independent paths that could be used to construct any arbitrary path through the computer program.

Basis Test Set: A set of test cases derived from Basis Path Testing.

Baseline: The point at which some deliverable produced during the software engineering process is put under formal change control.

Bebugging: A popular software engineering technique used to measure test coverage. Known bugs are randomly added to a program source code and the programmer is tasked to find them. The percentage of the known bugs not found gives an indication of the real bugs that remain.

Behavior: The combination of input values and preconditions along with the required response for a function of a system. The full specification of a function would normally comprise one or more behaviors.

Benchmark Testing: Benchmark testing is a normal part of the application development life cycle. It is a team effort that involves both application developers and database administrators (DBAs), and should be performed against your application in order to determine current performance and improve it. If the application code has been written as efficiently as possible, additional performance gains might be realized from tuning the database and database manager configuration parameters. You can even tune application parameters to meet the requirements of the application better. You run different types of benchmark tests to discover specific kinds of information:

A transaction per second benchmark determines the throughput capabilities of the database manager under certain limited laboratory conditions.

An application benchmark tests the same throughput capabilities under conditions that are closer to production conditions.

Benchmarking is helpful in understanding how the database manager responds under varying conditions. You can create scenarios that test deadlock handling, utility performance, different methods of loading data, transaction rate characteristics as more users are added, and even the effect on the application of using a new release of the product.

Benchmark Testing Methods: Benchmark tests are based on a repeatable environment so that the same test run under the same conditions will yield results that you can legitimately compare. You might begin benchmarking by running the test application in a normal environment. As you narrow down a performance problem, you can develop specialized test cases that limit the scope of the function that you are testing. The specialized test cases need not emulate an entire application to obtain valuable information. Start with simple measurements, and increase the complexity only when necessary. Characteristics of good benchmarks or measurements include:

Tests are repeatable.

Each iteration of a test starts in the same system state.

No other functions or applications are active in the system unless the scenario includes some amount of other activity going on in the system.

The hardware and software used for benchmarking match your production environment.

For benchmarking, you create a scenario and then run the applications in this scenario several times, capturing key information during each run. Capturing key information after each run is of primary importance in determining the changes that might improve performance of both the application and the database.

Beta Testing: Comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the company. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.

Big-Bang Testing: An inappropriate approach to integration testing in which you take the entire integrated system and test it as a unit. Can work well on small systems but is not favorable for larger systems because it may be difficult to pinpoint the exact location of the defect when a failure occurs.


Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.

Black Box Testing: Testing without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester would only know the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. It is because of this that black box testing can be considered testing with respect to the specifications; no other knowledge of the program is necessary. For this reason, the tester and the programmer can be independent of one another, avoiding programmer bias toward his own work. For this testing, test groups are often used. (A short illustrative sketch follows the lists below.)

Advantages of Black Box Testing

More effective on larger units of code than glass box testing.

Tester needs no knowledge of implementation.

Tester and programmer are independent of each other.

Tests are done from a user's point of view.

Will help to expose any ambiguities or inconsistencies in the specifications.

Test cases can be designed as soon as the specifications are complete.

Disadvantages of Black Box Testing

Only a small number of possible inputs can actually be tested; to test every possible input stream would take nearly forever.

Without clear and concise specifications, test cases are hard to design.

There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried.

May leave many program paths untested.

Cannot be directed toward specific segments of code which may be very complex (and therefore more error prone).

Most testing related research has been directed toward glass box testing.
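
To illustrate the black box approach, here is a small sketch in Python in which the test cases are derived purely from a specification of inputs and expected outputs; the leap-year function and its specification are hypothetical examples, not part of the glossary.

    def is_leap_year(year):
        # Internal workings: invisible and irrelevant to the black box tester.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    # "Legal" inputs and their expected outputs, taken straight from the
    # specification rather than from the implementation.
    spec_cases = [(2024, True), (2000, True), (1900, False), (2023, False)]

    for year, expected in spec_cases:
        actual = is_leap_year(year)
        assert actual == expected, f"{year}: expected {expected}, got {actual}"
    print("All black box cases passed")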

Block Matching: Automated matching logic applied to data and transaction driven websites to automatically detect blocks of related data. This enables repeating elements to be treated correctly in relation to other elements in the block without the need for special coding. See TestDrive-Gold.

Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components.

Boundary Testing: Tests focusing on the boundary or limits of the software being tested.

Boundary Value: An input value or output value which is on the boundary between equivalence classes, or an incremental distance either side of the boundary.

Boundary Value Analysis: In boundary value analysis, test cases are generated using the extremes of the input domain, e.g. maximum, minimum, just inside/outside boundaries, typical values, and error values.
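
As a rough sketch of boundary value analysis, the helper below generates the usual candidate inputs for a hypothetical numeric field that accepts values from a minimum to a maximum; the function name and the 1-100 range are illustrative assumptions.

    def boundary_values(minimum, maximum):
        # Candidate test inputs for a field accepting minimum..maximum inclusive.
        return [
            minimum - 1,               # just outside the lower boundary (error value)
            minimum,                   # on the lower boundary
            minimum + 1,               # just inside the lower boundary
            (minimum + maximum) // 2,  # a typical value
            maximum - 1,               # just inside the upper boundary
            maximum,                   # on the upper boundary
            maximum + 1,               # just outside the upper boundary (error value)
        ]

    print(boundary_values(1, 100))  # [0, 1, 2, 50, 99, 100, 101]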

Boundary Value Coverage: The percentage of boundary values which have been exercised by a test case suite.

Branch: A conditional transfer of control from any statement to any other statement in a component, or an unconditional transfer of control from any statement to any other statement in the component except the next statement, or, when a component has more than one entry point, a transfer of control to an entry point of the component.

Branch Condition Coverage: The percentage of branch condition outcomes in every decision that has been tested.

Branch Condition Combination Coverage: The percentage of combinations of all branch condition outcomes in every decision that has been tested.

Branch Condition Combination Testing: A test case design technique in which test cases are designed to execute combinations of branch condition outcomes.

Branch Condition Testing: A technique in which test cases are designed to execute branch condition outcomes.

Branch Testing: A test case design technique for a component in which test cases are designed to execute branch outcomes.

Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.

Bug: A fault in a program which causes the program to perform in an unintended manner. See fault.


Capture/Playback Tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time.

Capture/Replay Tool: See capture/playback tool.

CAST: Acronym for computer-aided software testing. Automated software testing in one or more phases of the software life-cycle. See also ASQ.

Cause-Effect Graph: A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.

Capability Maturity Model for Software (CMM): The CMM is a process model based on software best practices effective in large-scale, multi-person projects. The CMM has been used to assess the maturity of organization areas as diverse as software engineering, system engineering, project management, risk management, system acquisition, information technology (IT) or personnel management, against a scale of five levels, namely: Initial, Repeatable, Defined, Managed and Optimized.

Capability Maturity Model Integration (CMMI): Capability Maturity Model Integration (CMMI) is a process improvement approach that provides organizations with the essential elements of effective processes. It can be used to guide process improvement across a project, a division, or an entire organization. CMMI helps integrate traditionally separate organizational functions, set process improvement goals and priorities, provide guidance for quality processes, and provide a point of reference for appraising current processes. Seen by many as the successor to the CMM, the goal of the CMMI project is to improve the usability of maturity models by integrating many different models into one framework.

Certification: The process of confirming that a system or component complies with its specified requirements and is acceptable for operational use.

Chow's Coverage Metrics: See N-switch coverage.

Code Complete: A phase of development where functionality is implemented in its entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Code Coverage: A measure used in software testing. It describes the degree to which the source code of a program has been tested. It is a form of testing that looks at the code directly and as such comes under the heading of white box testing. To measure how well a program has been tested, there are a number of coverage criteria – the main ones being (a brief illustration follows the list):

Functional Coverage – has each function in the program been tested?

Statement Coverage – has each line of the source code been tested?

Condition Coverage – has each evaluation point (i.e. a true/false decision) been tested?

Path Coverage – has every possible route through a given part of the code been executed?

Entry/exit Coverage – has every possible call and return of the function been tested?
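
The toy Python function below suggests how statement coverage and decision (branch) coverage differ; the function and the test values are hypothetical.

    def classify(n):
        label = "non-negative"
        if n < 0:
            label = "negative"
        return label

    # The single test classify(-1) executes every statement (100% statement
    # coverage) but exercises only the TRUE outcome of the decision `n < 0`.
    # Decision/branch coverage additionally needs a case such as classify(1)
    # so that the FALSE outcome is exercised as well.
    assert classify(-1) == "negative"
    assert classify(1) == "non-negative"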

Code-Based Testing: The principle of structural code based testing is to have each and every statement in the program executed at least once during the test. Based on the premise that one cannot have confidence in a section of code unless it has been exercised by tests, structural code based testing attempts to test all reachable elements in the software under the cost and time constraints. The testing process begins by first identifying areas in the program not being exercised by the current set of test cases, followed by creating additional test cases to increase the coverage.

Code-Free Testing: Next generation software testing technique from Original Software which does not require a complicated scripting language to learn. Instead, a simple point and click interface is used to significantly simplify the process of test creation. See TestDrive-Gold.

Fig 1. Code-Free testing with TestDrive-Gold


Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

Coding: The generation of source code.

Compatibility Testing: The process of testing to understand if software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.

Complete Path Testing: See exhaustive testing.

Component: A minimal software item for which a separate specification is available.

Component Testing: The testing of individual software components.

Component Specification: A description of a component's function in terms of its output values for specified input values under specified preconditions.

Computation Data Use: A data use not in a condition. Also called C-use.

Concurrent Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. See Load Testing.

Condition: A Boolean expression containing no Boolean operators. For instance, A<B is a condition but A and B is not.

Condition Coverage: See branch condition coverage.

Condition Outcome: The evaluation of a condition to TRUE or FALSE.

Conformance Criterion: Some method of judging whether or not the component's action on a particular specified input value conforms to the specification.

Conformance Testing: The process of testing to determine whether a system meets some specified standard. To aid in this, many test procedures and test setups have been developed, either by the standard's maintainers or external organizations, specifically for testing conformance to standards. Conformance testing is often performed by external organizations, sometimes the standards body itself, to give greater guarantees of compliance. Products tested in such a manner are then advertised as being certified by that external organization as complying with the standard.

Context Driven Testing: The context-driven school of software testing is similar to Agile Testing in that it advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

Control Flow: An abstract representation of all possible sequences of events in a program's execution.

Control Flow Graph: The diagrammatic representation of the possible alternative control flow paths through a component.

Control Flow Path: See path.

Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

Correctness: The degree to which software conforms to its specification.

Coverage: The degree, expressed as a percentage, to which a specified coverage item has been tested.

Coverage Item: An entity or property used as a basis for testing.

Cyclomatic Complexity: A software metric (measurement). It was developed by Thomas McCabe and is used to measure the complexity of a program. It directly measures the number of linearly independent paths through a program's source code.
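
As a brief aside (not part of the original glossary entry), McCabe's metric is commonly computed from the control flow graph as V(G) = E - N + 2P, where E is the number of edges, N the number of nodes and P the number of connected components; the tiny sketch below applies the formula to an assumed single if/else construct.

    def cyclomatic_complexity(edges, nodes, components=1):
        # McCabe's formula for a control flow graph: V(G) = E - N + 2P.
        return edges - nodes + 2 * components

    # A function containing a single if/else has a control flow graph of
    # 4 nodes and 4 edges, so V(G) = 4 - 4 + 2 = 2: two linearly
    # independent paths (the "if" path and the "else" path).
    print(cyclomatic_complexity(edges=4, nodes=4))  # 2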


Data Case: Data relationship model simplified for data extraction and reduction purposes in order to create test data.

Data Definition: An executable statement where a variable is assigned a value.

Data Definition C-use Coverage: The percentage of data definition C-use pairs in a component that are exercised by a test case suite.

Data Definition C-use Pair: A data definition and computation data use, where the data use uses the value defined in the data definition.

Data Definition P-use Coverage: The percentage of data definition P-use pairs in a component that are exercised by a test case suite.

Data Definition P-use Pair: A data definition and predicate data use, where the data use uses the value defined in the data definition.

Data Definition-use Coverage: The percentage of data definition-use pairs in a component that are exercised by a test case suite.

Data Definition-use Pair: A data definition and data use, where the data use uses the value defined in the data definition.

Data Definition-use Testing: A test case design technique for a component in which test cases are designed to execute data definition-use pairs.

Data Dictionary: A database that contains definitions of all data items defined during analysis.

Data Driven Testing: A framework where test input and output values are read from data files and are loaded into variables in captured or manually coded scripts. In this framework, variables are used for both input values and output verification values. Navigation through the program, reading of the data files, and logging of test status and information are all coded in the test script. This is similar to keyword-driven testing in that the test case is contained in the data file and not in the script; the script is just a "driver," or delivery mechanism, for the data. Unlike in table-driven testing, though, the navigation data isn't contained in the table structure. In data-driven testing, only test data is contained in the data files.
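
A minimal data-driven sketch in Python: the test cases (inputs and expected outputs) live entirely in CSV data, and the script is only the driver. The discount rule, the column names and the inline data are hypothetical; in practice the rows would come from an external file.

    import csv, io

    def discount(total):
        # Hypothetical function under test: 10% off orders of 100 or more.
        return total * 0.9 if total >= 100 else total

    # Inline CSV keeps the sketch self-contained; normally this is a data file.
    test_data = io.StringIO("total,expected\n50,50\n100,90\n200,180\n")

    for row in csv.DictReader(test_data):
        actual = discount(float(row["total"]))
        expected = float(row["expected"])
        # The script never hard-codes a test case; it only reads and checks data.
        assert abs(actual - expected) < 1e-9, f"total={row['total']}: got {actual}"
    print("All data-driven cases passed")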

Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.

Data Flow Coverage: Test coverage measure based on variable usage within the code. Examples are data definition-use coverage, data definition P-use coverage, data definition C-use coverage, etc.

Data Flow Testing: Data-flow testing looks at the lifecycle of a particular piece of data (i.e. a variable) in an application. By looking for patterns of data usage, risky areas of code can be found and more test cases can be applied.

Data Protection: Technique in which the condition of the underlying database is synchronized with the test scenario so that differences can be attributed to logical changes. This technique also automatically resets the database after tests, allowing for a constant data set if a test is re-run. See TestBench.

Data Protection Act: UK legislation surrounding the security, use and access of an individual's information. May impact the use of live data used for testing purposes.

Data Use: An executable statement where the value of a variable is accessed.

Database Testing: The process of testing the functionality, security, and integrity of the database and the data held within. Functionality of the database is one of the most critical aspects of an application's quality; problems with the database could lead to data loss or security breaches, and may put a company at legal risk depending on the type of data being stored. For more information on database testing see TestBench.

Fig 2. Database testing using TestBench for Oracle


Debugging: A methodical process of finding and reducing the number of bugs, or defects, in a computer program or a piece of electronic hardware, thus making it behave as expected. Debugging tends to be harder when various subsystems are tightly coupled, as changes in one may cause bugs to emerge in another.

Decision: A program point at which the control flow has two or more alternative routes.

Decision Condition: A condition held within a decision.

Decision Coverage: The percentage of decision outcomes that have been exercised by a test case suite.

Decision Outcome: The result of a decision.

Defect: Nonconformance to requirements or functional/program specification.

Delta Release: A delta, or partial, release is one that includes only those areas within the release unit that have actually changed or are new since the last full or delta release. For example, if the release unit is the program, a delta release contains only those modules that have changed, or are new, since the last full release of the program or the last delta release of certain modules.

Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Depth Testing: A test that exercises a feature of a product in full detail.

Desk Checking: The testing of software by the manual simulation of its execution.

Design-Based Testing: Designing tests based on objectives derived from the architectural or detail design of the software (e.g., tests that execute specific invocation paths or probe the worst case behavior of algorithms).

Dirty Testing: Testing which demonstrates that the system under test does not work. (Also known as negative testing.)

Documentation Testing: Testing concerned with the accuracy of documentation.

Domain: The set from which values are selected.

Domain Expert: A person who has significant knowledge in a specific domain.

Domain Testing: Domain testing is the most frequently described test technique. The basic notion is that you take the huge space of possible tests of an individual variable and subdivide it into subsets that are (in some way) equivalent. Then you test a representative from each subset.

Downtime: Total period that a service or component is not operational.

Dynamic Testing: Testing of the dynamic behavior of code. Dynamic testing involves working with the software, giving input values and checking if the output is as expected.

Dynamic Analysis: The examination of the physical response from the system to variables that are not constant and change with time.


Emulator: A device that duplicates (provides an emulation of) the functions of one system using a different system, so that the second system behaves like (and appears to be) the first system. This focus on exact reproduction of external behavior is in contrast to simulation, which can concern an abstract model of the system being simulated, often considering internal state.

Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.

End-to-End Testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Entry Point: The first executable statement within a component.

Equivalence Class: A mathematical concept; an equivalence class is a subset of a given set induced by an equivalence relation on that given set. (If the given set is empty, then the equivalence relation is empty, and there are no equivalence classes; otherwise, the equivalence relation and its concomitant equivalence classes are all non-empty.) Elements of an equivalence class are said to be equivalent, under the equivalence relation, to all the other elements of the same equivalence class.

Equivalence Partition: See equivalence class.

Equivalence Partitioning: Leverages the concept of "classes" of input conditions. A "class" of input could be "City Name", where testing one or several city names could be deemed equivalent to testing all city names. In other words, each instance of a class in a test covers a large set of other possible tests.
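
A small sketch of the idea in Python: one representative value is chosen from each class and assumed to behave like every other member of that class. The ticket-pricing rule and its age partitions are hypothetical.

    def ticket_price(age):
        # Hypothetical rule: under 16 pay 5, adults 16-64 pay 10, 65+ pay 7.
        if age < 16:
            return 5
        if age < 65:
            return 10
        return 7

    # Three equivalence classes of the input domain, one representative each.
    representatives = {"child": 9, "adult": 30, "senior": 72}
    expected = {"child": 5, "adult": 10, "senior": 7}

    for partition, age in representatives.items():
        assert ticket_price(age) == expected[partition], partition
    print("One representative per partition passed")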

Equivalence Partition Coverage: The percentage of equivalence classes generated for the component which have been tested.

Equivalence Partition Testing: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.

Error: A mistake that produces an incorrect result.

Error Guessing: Error Guessing involves making an itemized list of the errors expected to occur in a particular area of the system and then designing a set of test cases to check for these expected errors. Error guessing is more testing art than testing science but can be very effective given a tester familiar with the history of the system.

Error Seeding: The process of injecting a known number of "dummy" defects into the program and then checking how many of them are found by various inspections and testing. If, for example, 60% of them are found, the presumption is that 60% of other defects have been found as well. See Bebugging.
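
As a worked illustration of that presumption (the numbers are invented for the example), the remaining genuine defects can be estimated from the seeded/found ratio:

    def estimate_remaining_defects(seeded, seeded_found, real_found):
        # If the same proportion of real defects has been found as of seeded
        # ones, total real defects ~= real_found * seeded / seeded_found.
        estimated_total = real_found * seeded / seeded_found
        return estimated_total - real_found

    # 12 of 20 seeded defects found (60%), alongside 30 genuine defects:
    # estimated total genuine defects = 30 * 20 / 12 = 50, so ~20 still latent.
    print(estimate_remaining_defects(seeded=20, seeded_found=12, real_found=30))  # 20.0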

Evaluation Report: A document produced at the end of the test process summarizing all testing activities and results. It also contains an evaluation of the test process and lessons learned.

Executable Statement: A statement which, when compiled, is translated into object code, which will be executed procedurally when the program is running and may perform an action on program data.

Exercised: A program element is exercised by a test case when the input value causes the execution of that element, such as a statement, branch, or other structural element.

Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.

Exit Point: The last executable statement within a component.

Expected Outcome: See predicted outcome.

Expert System: A domain specific knowledge base combined with an inference engine that processes knowledge encoded in the knowledge base to respond to a user's request for advice.

Expertise: Specialized domain knowledge, skills, tricks, shortcuts and rules-of-thumb that provide an ability to rapidly and effectively solve problems in the problem domain.


Failure: Non-performance or deviation of the software from its expected delivery or service.

Fault: A manifestation of an error in software. Also known as a bug.

Feasible Path: A path for which there exists a set of input values and execution conditions which causes it to be executed.

Feature Testing: A method of testing which concentrates on testing one feature at a time.

Firing a Rule: A rule fires when the “if” part (premise) is proven to be true. If the rule incorporates an “else” component, the rule also fires when the “if” part is proven to be false.

Fit For Purpose Testing: Validation carried out to demonstrate that the delivered system can be used to carry out the tasks for which it was designed and acquired.

Forward Chaining: Applying a set of previously determined facts to the rules in a knowledge base to see if any of them will fire.

Full Release: All components of the release unit that are built, tested, distributed and implemented together. See also delta release.

Functional Specification: The document that describes in detail the characteristics of the product with regard to its intended capability.

Functional Decomposition: A technique used during planning, analysis and design; creates a functional hierarchy for the software. Functional decomposition broadly relates to the process of resolving a functional relationship into its constituent parts in such a way that the original function can be reconstructed (i.e., recomposed) from those parts by function composition. In general, this process of decomposition is undertaken either for the purpose of gaining insight into the identity of the constituent components (which may reflect individual physical processes of interest, for example), or for the purpose of obtaining a compressed representation of the global function, a task which is feasible only when the constituent processes possess a certain level of modularity (i.e. independence or non-interaction).

Functional Requirements: Define the internal workings of the software: that is, the calculations, technical details, data manipulation and processing and other specific functionality that show how the use cases are to be satisfied. They are supported by non-functional requirements, which impose constraints on the design or implementation (such as performance requirements, security, quality standards, or design constraints).

Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features.

Functional Testing: See also Black Box Testing.

Testing the features and operational behavior of a product to ensure they correspond to its specifications.

Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.


Genetic Algorithms: Search procedures that use the mechanics of natural selection and natural genetics. They use evolutionary techniques, based on function optimization and artificial intelligence, to develop a solution.

Glass Box Testing: A form of testing in which the tester can examine the design documents and the code as well as analyze and possibly manipulate the internal state of the entity being tested. Glass box testing involves examining the design documents and the code, as well as observing at run time the steps taken by algorithms and their internal data. See structural test case design.

Goal: The solution that the program or project is trying to reach.

Gorilla Testing: An intense round of testing, quite often redirecting all available resources to the activity. The idea here is to test as much of the application in as short a period of time as possible.

Graphical User Interface (GUI): A type of display format that enables the user to choose commands, start programs, and see lists of files and other options by pointing to pictorial representations (icons) and lists of menu items on the screen.

Gray (Grey) Box Testing: A testing technique that uses a combination of black box testing and white box testing. Gray box testing is not black box testing because the tester does know some of the internal workings of the software under test. In gray box testing, the tester applies a limited number of test cases to the internal workings of the software under test. In the remaining part of the gray box testing, one takes a black box approach in applying inputs to the software under test and observing the outputs.


Harness: A test environment comprised of stubs and drivers needed to conduct a test.

Heuristics: The informal, judgmental knowledge of an application area that constitutes the "rules of good judgment" in the field. Heuristics also encompass the knowledge of how to solve problems efficiently and effectively, how to plan steps in solving a complex problem, how to improve performance, etc.

High Order Tests: High-order testing checks that the software meets customer requirements and that the software, along with other system elements, meets the functional, behavioral, and performance requirements. It uses black-box techniques and requires an outsider perspective. Therefore, organizations often use an Independent Testing Group (ITG) or the users themselves to perform high-order testing. High-order testing includes validation testing, system testing (focusing on aspects such as reliability, security, stress, usability, and performance), and acceptance testing (which includes alpha and beta testing). The testing strategy specifies the type of high-order testing that the project requires. This depends on the aspects that are important in a particular system from the user perspective.


ITIL (IT Infrastructure Library): A consistent and comprehensive documentation of best practice for IT Service Management. ITIL consists of a series of books giving guidance on the provision of quality IT services, and on the accommodation and environmental facilities needed to support IT.

Implementation Testing: See Installation Testing.

Incremental Testing: Partial testing of an incomplete product. The goal of incremental testing is to provide early feedback to software developers.

Independence: Separation of responsibilities which ensures the accomplishment of objective evaluation.

Independent Test Group (ITG): A group of people whose primary responsibility is to conduct software testing for other companies.

Infeasible Path: A path which cannot be exercised by any set of possible input values.

Inference: Forming a conclusion from existing facts.

Inference Engine: Software that provides the reasoning mechanism in an expert system. In a rule based expert system, typically implements forward chaining and backward chaining strategies.

Infrastructure: The organizational artifacts needed to perform testing, consisting of test environments, automated test tools, office environment and procedures.

Inheritance: The ability of a class to pass on characteristics and data to its descendants.

Input: A variable (whether stored within a component or outside it) that is read by the component.

Input Domain: The set of all possible inputs.

Inspection: A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement.

Installability: The ability of a software component or system to be installed on a defined target platform, allowing it to be run as required. Installation includes both a new installation and an upgrade.

Installability Testing: Testing whether the software or system installation being tested meets predefined installation requirements.

Installation Guide: Supplied instructions on any suitable media which guide the installer through the installation process. This may be a manual guide, step-by-step procedure, installation wizard, or any other similar process description.

Installation Testing: Confirms that the application under test recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power-out conditions. Such testing focuses on what customers will need to do to install and set up the new software successfully and is typically done by the software testing engineer in conjunction with the configuration manager. Implementation testing is usually defined as testing which places a compiled version of code into the testing or pre-production environment, from which it may or may not progress into production. This generally takes place outside of the software development environment to limit code corruption from other future releases which may reside on the development network.

Installation Wizard: Supplied software on any suitable media which leads the installer through the installation process. It shall normally run the installation process, provide feedback on installation outcomes and prompt for options.

Instrumentation: The insertion of additional code into the program in order to collect information about program behavior during program execution.

Integration: The process of combining components into larger groups or assemblies.

Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.

Interface Testing: Integration testing where the interfaces between system components are tested.

Isolation Testing: Component testing of individual components in isolation from surrounding components.


KBS (Knowledge Based System): A domain specific knowledge base combined with an inference engine that processes knowledge encoded in the knowledge base to respond to a user's request for advice.

Key Performance Indicator: Quantifiable measurements against which specific performance criteria can be set.

Keyword Driven Testing: An approach to test script writing, aimed at code based automation tools, that separates much of the programming work from the actual test steps. The result is that the test steps can be designed earlier and the code base is often easier to read and maintain.

Knowledge Engineering: The process of codifying an expert's knowledge in a form that can be accessed through an expert system.

Known Error: An incident or problem for which the root cause is known and for which a temporary work-around or a permanent alternative has been identified.


LCSAJ: A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.

LCSAJ Coverage: The percentage of LCSAJs of a component which are exercised by a test case suite.

LCSAJ Testing: A test case design technique for a component in which test cases are designed to execute LCSAJs.

Logic-Coverage Testing: Sometimes referred to as Path Testing, logic-coverage testing attempts to expose software defects by exercising a unique combination of the program's statements known as a path.

Load Testing: The process of creating demand on a system or device and measuring its response. Load testing generally refers to the practice of modeling the expected usage of a software program by simulating multiple users accessing the program's services concurrently. As such, this testing is most relevant for multi-user systems, often ones built using a client/server model, such as web servers. However, other types of software systems can be load-tested also. For example, a word processor or graphics editor can be forced to read an extremely large document, or a financial package can be forced to generate a report based on several years' worth of data. The most accurate load testing occurs with actual, rather than theoretical, results. See also Concurrent Testing, Performance Testing, Reliability Testing, and Volume Testing.
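
A very rough sketch of the idea in Python: concurrent "users" are simulated with a thread pool and response times are collected. The simulated_user function is a stand-in; a real load test would issue genuine requests against the system under test.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def simulated_user(user_id):
        # Stand-in for one user exercising the system under test.
        start = time.perf_counter()
        time.sleep(0.05)          # placeholder for a real request/transaction
        return time.perf_counter() - start

    # Model 50 concurrent users and record the response times observed.
    with ThreadPoolExecutor(max_workers=50) as pool:
        response_times = list(pool.map(simulated_user, range(50)))

    print(f"mean response: {sum(response_times) / len(response_times):.3f}s")
    print(f"worst response: {max(response_times):.3f}s")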

Localization Testing: This term refers to testing software that has been specifically designed (localized) for a specific locality. This test is based on the results of globalization testing, which verifies the functional support for that particular culture/locale. Localization testing can be executed only on the localized version of a product.

The test effort during localization testing focuses on:

Areas affected by localization, such as the UI and content

Culture/locale-specific, language-specific and region-specific areas

In addition, localization testing should include:

Basic functionality, setup and upgrade tests run in the localized environment

Application and hardware compatibility tests according to the product's target region.

Log: A chronological record of relevant details about the execution of tests.

Loop Testing: Loop testing is the testing of a resource or resources multiple times under program control.


Maintainability: The ease with which the system/software can be modified to correct faults, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.

Maintenance Requirements: A specification of the required maintenance needed for the system/software. The released software often needs to be revised and/or upgraded throughout its lifecycle. Therefore it is essential that the software can be easily maintained, and any errors found during re-work and upgrading corrected. Within traditional software testing techniques, script maintenance is often a problem, as it can be very complicated and time consuming to ensure correct maintenance of the software: the scripts these tools use need updating every time the application under test changes. See Code-Free Testing and Self Healing Scripts.

Manual Testing: The oldest type of software testing. Manual testing requires a tester to perform manual test operations on the test software without the help of test automation. Manual testing is a laborious activity that requires the tester to possess a certain set of qualities: to be patient, observant, speculative, creative, innovative, open-minded, resourceful, un-opinionated, and skilful. As a tester, it is always advisable to use manual white box testing and black-box testing techniques on the test software. Manual testing helps discover and record any software bugs or discrepancies related to the functionality of the product. Manual testing can be augmented by test automation. It is possible to record and playback manual steps and write automated test script(s) using test automation tools. However, test automation tools will only help execute test scripts written primarily for executing a particular specification and functionality. Test automation tools lack the ability of decision-making and recording any unscripted discrepancies during program execution. It is recommended that one should perform manual testing of the entire product at least a couple of times before actually deciding to automate the more mundane activities of the product. Manual testing helps discover defects related to the usability testing and GUI testing area. While performing manual tests the software application can be validated as to whether it meets the various standards defined for effective and efficient usage and accessibility. For example, the standard location of the OK button on a screen is on the left and of the CANCEL button on the right; during manual testing you might discover that on some screen it is not. This is a new defect related to the usability of the screen. In addition, there could be many cases where the GUI is not displayed correctly while the basic functionality of the program is correct. Such bugs are not detectable using test automation tools.

Repetitive manual testing can be difficult to perform on large software applications or applications having very large dataset coverage. This drawback is compensated for by using manual black-box testing techniques, including equivalence partitioning and boundary value analysis, with which the vast dataset specifications can be divided and converted into a more manageable and achievable set of test suites. There is no complete substitute for manual testing. Manual testing is crucial for testing software applications more thoroughly. See TestDrive-Assist.

Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.

Modified Condition/Decision Coverage: The percentage of all branch condition outcomes that independently affect a decision outcome that have been exercised by a test case suite.

Modified Condition/Decision Testing: A test case design technique in which test cases are designed to execute branch condition outcomes that independently affect a decision outcome.
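
A minimal illustration (not from the glossary itself): for the decision "a and b", an MC/DC-style set needs each condition to be shown to independently flip the outcome, which three of the four possible combinations achieve.

    def decision(a, b):
        return a and b

    # Case 1 vs case 2 toggles only `a` and the outcome changes, so `a` is
    # shown to independently affect the decision; case 1 vs case 3 does the
    # same for `b`. All four combinations are not required.
    mcdc_cases = [
        (True,  True,  True),   # case 1
        (False, True,  False),  # case 2: only `a` differs from case 1
        (True,  False, False),  # case 3: only `b` differs from case 1
    ]

    for a, b, expected in mcdc_cases:
        assert decision(a, b) == expected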

Monkey Testing: Testing a system or an application on the fly, i.e. a unit test with no specific end result in mind.

Multiple Condition Coverage: See Branch Condition Combination Coverage.

Mutation Analysis: A method to determine test case suite thoroughness by measuring the extent to which a test case suite can discriminate the program from slight variants (mutants) of the program. See also Error Seeding.

Mutation Testing: Testing done on the application where bugs are purposely added to it. See Bebugging.
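
A tiny sketch of a mutant in Python (the age rule is a hypothetical example): a test suite that never exercises the boundary value cannot distinguish the mutant from the original, so the mutant "survives" until a boundary case is added.

    def is_adult(age):
        return age >= 18          # original

    def is_adult_mutant(age):
        return age > 18           # mutant: ">=" deliberately changed to ">"

    # A suite using only ages 17 and 30 passes for both versions, so it
    # cannot tell them apart.
    for candidate in (is_adult, is_adult_mutant):
        assert candidate(30) is True
        assert candidate(17) is False

    # Adding the boundary case 18 "kills" the mutant.
    assert is_adult(18) is True
    assert is_adult_mutant(18) is False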


N-switch Coverage: The percentage of sequences of N-transitions that have been tested.

N-switch Testing: A form of state transition testing in which test cases are designed to execute all valid sequences of N-transitions.

N-transitions: A sequence of N+1 transitions.

N+1 Testing: A variation of regression testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors. See also Regression Testing.

Natural Language Processing (NLP): A computer system to analyze, understand and generate natural human languages.

Negative Testing: Testing a system or application using negative data. (For example, testing a password field that requires a minimum of 9 characters by entering a password of 6.)

Neural Network: A system modeled after the neurons (nerve cells) in a biological nervous system. A neural network is designed as an interconnected system of processing elements, each with a limited number of inputs and outputs. Rather than being programmed, these systems learn to recognize patterns.

Non-functional Requirements Testing: Testing of those requirements that do not relate to functionality, i.e. performance, usability, etc.

Normalization: A technique for designing relational database tables to minimize duplication of information and, in so doing, to safeguard the database against certain types of logical or structural problems, namely data anomalies.


Object: A software structure which represents an identifiable item that has a well-defined role in a problem domain.

Object Oriented: An adjective applied to any system or language that supports the use of objects.

Objective: The purpose of the specific test being undertaken.

Operational Testing: Testing performed by the end-user on software in its normal operating environment.

Oracle: A mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test.

Outcome: The result or visible effect of a test.

Output: A variable (whether stored within a component or outside it) that is written to by the component.

Output Domain: The set of all possible outputs.

Output Value: An instance of an output.


Page Fault: A program interruption that occurs when a page that is marked ‘not in real memory’ is referred to by an active page.

Pair Programming: A software development technique that requires two programmers to participate in a combined development effort at one workstation. Each member performs the action the other is not currently doing: for example, while one types in unit tests, the other thinks about the class that will satisfy the test. The person who is doing the typing is known as the driver while the person who is guiding is known as the navigator. It is often suggested for the two partners to switch roles at least every half-hour or after a unit test is made. It is also suggested to switch partners at least once a day.

Pair Testing: In much the same way as Pair Programming, two testers work together to find defects. Typically, they share one computer and trade control of it while testing.

Pairwise Testing: A combinatorial software testing method that, for each pair of input parameters to a system (typically, a software algorithm), tests all possible discrete combinations of those parameters. Using carefully chosen test vectors, this can be done much faster than an exhaustive search of all combinations of all parameters, by "parallelizing" the tests of parameter pairs. The number of tests is typically O(nm), where n and m are the number of possibilities for each of the two parameters with the most choices. The reasoning behind all-pairs testing is this: the simplest bugs in a program are generally triggered by a single input parameter. The next simplest category of bugs consists of those dependent on interactions between pairs of parameters, which can be caught with all-pairs testing. Bugs involving interactions between three or more parameters are progressively less common, whilst at the same time being progressively more expensive to find by exhaustive testing, which has as its limit the exhaustive testing of all possible inputs. Many testing methods regard all-pairs testing of a system or subsystem as a reasonable cost-benefit compromise between often computationally infeasible higher-order combinatorial testing methods, and less exhaustive methods which fail to exercise all possible pairs of parameters. Because no testing technique can find all bugs, all-pairs testing is typically used together with other quality assurance techniques such as unit testing. See TestDrive-Gold.
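
To make the saving concrete, here is a small illustration in Python with three hypothetical parameters of two values each: four hand-picked cases (out of the eight full combinations) already cover every pair of parameter values, and the script verifies that claim.

    from itertools import combinations, product

    browsers = ["Chrome", "Firefox"]
    operating_systems = ["Windows", "Linux"]
    locales = ["en", "fr"]

    # Four test cases chosen so that every pair of values appears at least once.
    pairwise_suite = [
        ("Chrome",  "Windows", "en"),
        ("Chrome",  "Linux",   "fr"),
        ("Firefox", "Windows", "fr"),
        ("Firefox", "Linux",   "en"),
    ]

    def covered_pairs(suite):
        # Collect every (parameter index, value) pair combination seen in the suite.
        pairs = set()
        for case in suite:
            for (i, x), (j, y) in combinations(enumerate(case), 2):
                pairs.add((i, x, j, y))
        return pairs

    required = covered_pairs(list(product(browsers, operating_systems, locales)))
    assert covered_pairs(pairwise_suite) == required
    print(f"{len(pairwise_suite)} cases cover all {len(required)} value pairs")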

Partial Test Automation: The process of automating parts, but not all, of the software testing process. If, for example, an oracle cannot reasonably be created, or if fully automated tests would be too difficult to maintain, then a software tools engineer can instead create testing tools to help human testers perform their jobs more efficiently. Testing tools can help automate tasks such as product installation, test data creation, GUI interaction, problem detection (consider parsing or polling agents equipped with oracles), defect logging, etc., without necessarily automating tests in an end-to-end fashion.

Pass: Software is deemed to have passed a test if the actual results of the test match the expected results.

Pass/Fail Criteria: Decision rules used to determine whether an item under test has passed or failed a test.

Path: A sequence of executable statements of a component, from an entry point to an exit point.

Path Coverage: The percentage of paths in a component exercised by a test case suite.

Path Sensitizing: Choosing a set of input values to force the execution of a component to take a given path.

Path Testing: Used as either black box or white box testing, the procedure itself is similar to a walkthrough. First, a certain path through the program is chosen. Possible inputs and the correct result are written down. Then the program is executed by hand, and its result is compared to the predefined one. Possible faults have to be written down at once.

Performance: The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate.

Performance Testing: A test procedure that covers a broad range of engineering or functional evaluations where a material, product, or system is not specified by detailed material or component specifications; rather, emphasis is on the final measurable performance characteristics. Also known as Load Testing.

Portability: The ease with which the system/software can be transferred from one hardware or software environment to another.


Portability Requirements: A specification of the required portability for the system/software.

Portability Testing: The process of testing the ease with which a software component can be moved from one environment to another. This is typically measured in terms of the maximum amount of effort permitted. Results are expressed in terms of the time required to move the software and complete data conversion and documentation updates.

Postcondition: Environmental and state conditions that must be fulfilled after the execution of a test or test procedure.

Positive Testing: Testing aimed at showing whether the software works in the way intended. See also Negative Testing.

Precondition: Environmental and state conditions which must be fulfilled before the component can be executed with a particular input value.

Predicate: A logical expression which evaluates to TRUE or FALSE, normally to direct the execution path in code.

Predication: The choice to execute or not to execute a given instruction.

Predicted Outcome: The behavior expected by the specification of an object under specified conditions.

Priority: The level of business importance assigned to an individual item or test.

Process: A course of action which turns inputs into outputs or results.

Process Cycle Test: A black box test design technique in which test cases are designed to execute business procedures and processes.

Progressive Testing: Testing of new features after regression testing of previous features.

Project: A planned undertaking for presentation of results at a specified time in the future.

Prototyping: A strategy in system development in which a scaled down system or portion of a system is constructed in a short time, then tested and improved upon over several iterations.

Pseudo-Random: A series which appears to be random but is in fact generated according to some prearranged sequence.
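A brief illustrative sketch, using only Python's standard library rather than any particular testing product: seeding a generator makes an apparently random series fully reproducible, which is what allows "random" tests to be repeated exactly.

```python
# Minimal illustration: a seeded generator yields a series that looks random
# but is fully reproducible, which is what makes "random" tests repeatable.
import random

def pseudo_random_series(seed, count):
    """Return `count` pseudo-random integers derived from a fixed seed."""
    gen = random.Random(seed)          # prearranged starting state
    return [gen.randint(0, 99) for _ in range(count)]

print(pseudo_random_series(42, 5))
print(pseudo_random_series(42, 5))     # identical list: the sequence is predetermined
```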

Quality Assurance: The activity of providing evidence needed to establish confidence among all concerned that quality-related activities are being performed effectively; all those planned or systematic actions necessary to provide adequate confidence that a product or service will satisfy given requirements for quality.
For software development organizations, TMM (Testing Maturity Model) standards are widely used to measure quality assurance maturity. These standards can be divided into five levels, which a software development company can achieve by performing different quality improvement activities within the organization.

Quality Attribute: A feature or characteristic that affects an item’s quality.

Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements, and whether these arrangements are implemented effectively and are suitable to achieve objectives.

Quality Circle: A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.

Quality Control: The operational techniques and the activities used to fulfill and verify requirements of quality.

Quality Management: That aspect of the overall management function that determines and implements the quality policy. Direction and control with regard to quality generally includes the establishment of the quality policy and quality objectives, quality planning, quality control, quality assurance and quality improvement.

Quality Conundrum: Resource, risk and application time-to-market are often in conflict as IS teams strive to deliver quality applications within their budgetary constraints. This is the quality conundrum.

Fig. 4 The Quality Conundrum

Quality Policy: The overall intentions and direction of an organization as regards quality, as formally expressed by top management.

Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

Query: A question. Often associated with an SQL query of values in a database.

Queuing Time: Incurred when the device which a program wishes to use is already busy. The program therefore has to wait in a queue to obtain service from that device.

ROI: Return on Investment. A performance measure used to evaluate the efficiency of an investment or to compare the efficiency of a number of different investments. To calculate ROI, the benefit (return) of an investment is divided by the cost of the investment; the result is expressed as a percentage or a ratio.

Ramp Testing: Continuously raising an input signal until the system breaks down.

Random Testing: A black-box testing approach in which software is tested by choosing an arbitrary subset of all possible input values. Random testing helps to avoid the problem of only testing what you know will work.
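As a hedged illustration, the sketch below invents a small function and an invariant it should satisfy, then feeds it arbitrary inputs; the function, its invariant and the input ranges are assumptions made purely for the example.

```python
# Illustrative sketch: exercise a (hypothetical) function with arbitrary inputs
# and check an invariant, rather than only the cases we already expect to work.
import random

def normalize(values):
    """Hypothetical unit under test: scale a list so its maximum becomes 1.0."""
    peak = max(values)
    return [v / peak for v in values]

random.seed(7)                                   # keep the "random" run reproducible
for _ in range(1000):
    data = [random.uniform(0.001, 1e6) for _ in range(random.randint(1, 50))]
    result = normalize(data)
    assert len(result) == len(data)              # invariant: size preserved
    assert abs(max(result) - 1.0) < 1e-9         # invariant: maximum scaled to 1.0
```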

Re-testing: Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.

Recoverability: The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure.

Recovery Testing: The activity of testing how well the software is able to recover from crashes, hardware failures and other similar problems. Recovery testing is the forced failure of the software in a variety of ways to verify that recovery is properly performed. Examples of recovery testing:

While the application is running, suddenly restart the computer, and afterwards check the validity of the application's data integrity.

While the application is receiving data from the network, unplug the cable and then plug it back in after some time, and analyze the application's ability to continue receiving data from the point at which the network connection disappeared.

Restart the system while the browser has a definite number of sessions open, and after rebooting check that it is able to recover all of them.

Recreation Materials: A script or set of results containing the steps required to reproduce a desired outcome.

Regression Testing: Any type of software testing which seeks to uncover regression bugs. Regression bugs occur whenever software functionality that previously worked as desired stops working, or no longer works in the same way that was previously planned. Typically regression bugs occur as an unintended consequence of program changes. Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged.
Experience has shown that as software is developed, this kind of re-emergence of faults is quite common. Sometimes it occurs because a fix gets lost through poor revision control practices (or simple human error in revision control), but often a fix for a problem will be "fragile" in that it fixes the problem in the narrow case where it was first observed but not in more general cases which may arise over the lifetime of the software. Finally, it has often been the case that when some feature is redesigned, the same mistakes will be made in the redesign that were made in the original implementation of the feature.
Therefore, in most software development situations it is considered good practice that when a bug is located and fixed, a test that exposes the bug is recorded and regularly retested after subsequent changes to the program. Although this may be done through manual testing procedures using programming techniques, it is often done using automated testing tools. Such a 'test suite' contains software tools that allow the testing environment to execute all the regression test cases automatically; some projects even set up automated systems to automatically re-run all regression tests at specified intervals and report any regressions. Common strategies are to run such a system after every successful compile (for small projects), every night, or once a week. Those strategies can be automated by an external tool, such as TestDrive-Gold from Original Software.
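The practice described above can be sketched as follows; the module, the defect and its identifier are hypothetical, but the pattern of pinning a fixed bug with a permanently retained test is the one the entry describes.

```python
# Illustrative sketch (hypothetical module and defect): once a bug is fixed,
# a test that exposes it is kept in the suite so the fix cannot silently regress.
import unittest

def parse_quantity(text):
    """Hypothetical unit under test: convert user input such as ' 12 ' to an int."""
    return int(text.strip())

class RegressionTests(unittest.TestCase):
    def test_padded_input_defect(self):
        # Hypothetical defect: padded input once caused a failure; this test keeps the fix in place.
        self.assertEqual(parse_quantity("  12 "), 12)

if __name__ == "__main__":
    unittest.main()   # typically re-run automatically after every build or nightly
```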

Fig. 2 A regression test performed in TestDrive-Gold

Relational Operator: Conditions such as “is equal to” or “is less than” that link an attribute name with an attribute value in a rule’s premise to form logical expressions that can be evaluated true or false.

Release Candidate: A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).

Release Note: A document identifying test items, their configuration, current status and other delivery information delivered by development to testing, and possibly other stakeholders, at the start of a test execution phase.

Reliability: The ability of the system/software to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations.

Reliability Requirements: A specification of the required reliability for the system/software.

Reliability Testing: Testing to determine whether the system/software meets the specified reliability requirements.

Requirement: A capability that must be met or possessed by the system/software (requirements may be functional or non-functional).

Requirements-based Testing: An approach to testing in which test cases are designed based on test objectives and test conditions derived from requirements. For example: tests that exercise specific functions or probe non-functional attributes such as reliability or usability.

Result: The consequence or outcome of a test.

Review: A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users or other interested parties for comment or approval.

Risk: A chance of negative consequences.

Risk Management: Systematic application of procedures and practices to the tasks of identifying, analyzing, prioritizing, and controlling risk.

Robustness: The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions.

Root Cause: An underlying factor that caused a non-conformance and possibly should be permanently eliminated through process improvement.

Rule: A statement of the form: if X then Y else Z. The “if” part is the rule premise, and the “then” part is the consequent. The “else” component of the consequent is optional. The rule fires when the “if” part evaluates to true; the optional “else” part applies when it evaluates to false.

Rule Base: The encoded knowledge for an expert system. In a rule-based expert system, a knowledge base typically incorporates definitions of attributes and rules along with control information.

Safety Testing: The process of testing to determine the safety of a software product.

Sanity Testing: Brief test of major functional elements of a piece of software to determine if it’s basically operational. See also Smoke Testing.

Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in work load.

Schedule: A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in the order in which they are to be executed.

Scrambling: Data obfuscation routine to de-identify sensitive data in test data environments to meet the requirements of the Data Protection Act and other legislation. See TestBench.
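As a generic sketch of the idea (this is not TestBench's implementation, and the field names are invented), sensitive values can be replaced with consistent but meaningless substitutes while non-identifying values stay usable for testing:

```python
# Generic sketch of data scrambling (not TestBench's implementation): replace
# sensitive fields with consistent but meaningless substitutes before testing.
import hashlib

def scramble(value, field):
    """Deterministically obfuscate a sensitive value so test data stays de-identified."""
    digest = hashlib.sha256(f"{field}:{value}".encode()).hexdigest()[:8]
    return f"{field.upper()}_{digest}"

customer = {"name": "Alice Smith", "email": "alice@example.com", "balance": 1042.50}
masked = {
    "name": scramble(customer["name"], "name"),
    "email": scramble(customer["email"], "email"),
    "balance": customer["balance"],   # non-identifying values can stay usable for tests
}
print(masked)
```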

Scribe: The person who has to record each defect mentioned and any suggestions for improvement during a review meeting, on a logging form. The scribe has to ensure that the logging form is understandable.

Script: See Test Script.

Security: Preservation of availability, integrity and confidentiality of information:

Availability: ensuring that authorized users have access to information and associated assets when required.

Integrity: safeguarding the accuracy and completeness of information and processing methods.

Confidentiality: ensuring that information is accessible only to those authorized to have access.

Security Requirements: A specification of the required security for the system or software.

Security Testing: Process to determine that an IS (Information System) protects data and maintains functionality as intended. The six basic concepts that need to be covered by security testing are: confidentiality, integrity, authentication, authorization, availability and non-repudiation.

Confidentiality: A security measure which protects against the disclosure of information to parties other than the intended recipient(s). Often ensured by means of encoding using a defined algorithm and some secret information known only to the originator of the information and the intended recipient(s) (a process known as cryptography), but that is by no means the only way of ensuring confidentiality.

Integrity: A measure intended to allow the receiver to determine that the information which it receives has not been altered in transit or by anyone other than the originator of the information. Integrity schemes often use some of the same underlying technologies as confidentiality schemes, but they usually involve adding additional information to a communication to form the basis of an algorithmic check, rather than encoding all of the communication.

Authentication: A measure designed to establish the validity of a transmission, message, or originator. It allows a receiver to have confidence that the information it receives originated from a specific known source.

Authorization: The process of determining that a requester is allowed to receive a service or perform an operation.

Availability: Assuring that information and communications services will be ready for use when expected. Information must be kept available to authorized persons when they need it.

Non-repudiation: A measure intended to prevent the later denial that an action happened, or that a communication took place, etc. In communication terms this often involves the interchange of authentication information combined with some form of provable time stamp.

Self-Healing Scripts: A next generation technique pioneered by Original Software which enables an existing test to be run over an updated or changed application, and to intelligently modernize itself to reflect the changes in the application – all through a point-and-click interface.

Fig 3. Self Healing Scripts.

Simple Subpath: A subpath of the control flow graph in which no program part is executed more than necessary.

Simulation: The representation of selected behavioral characteristics of one physical or abstract system by another system.

Simulator: A device, computer program or system used during software verification, which behaves or operates like a given system when provided with a set of controlled inputs.

Smoke Testing: A preliminary to further testing, which should reveal simple failures severe enough to reject a prospective software release. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire. In the software world, the smoke is metaphorical.

Soak Testing: Involves testing a system with a significant load extended over a significant period of time, to discover how the system behaves under sustained use. For example, in software testing, a system may behave exactly as expected when tested for 1 hour. However, when it is tested for 3 hours, problems such as memory leaks cause the system to fail or behave randomly. Soak tests are used primarily to check the reaction of a subject under test under a possible simulated environment for a given duration and for a given threshold. Observations made during the soak test are used to improve the characteristics of the subject under test further.

Software Requirements Specification: A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.

Software Testing: The process used to measure the quality of developed computer software. Usually, quality is constrained to such topics as correctness, completeness and security, but can also include more technical requirements as described under the ISO standard ISO 9126, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability. Testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding errors. Quality is not an absolute; it is value to some person. With that in mind, testing can never completely establish the correctness of arbitrary computer software; testing furnishes a criticism or comparison that compares the state and behaviour of the product against a specification. An important point is that software testing should be distinguished from the separate discipline of Software Quality Assurance (SQA), which encompasses all business process areas, not just testing.

Today, software has grown in complexity and size. The software product developed by a developer is built according to the System Requirement Specification. Every software product has a target audience. For example, the audience for a video game is completely different from that for banking software. Therefore, when an organization invests large sums in making a software product, it must ensure that the software product is acceptable to the end users or its target audience. This is where software testing comes into play. Software testing is not merely finding defects or bugs in the software; it is the completely dedicated discipline of evaluating the quality of the software.

There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following routine procedure. One definition of testing is "the process of questioning a product in order to evaluate it", where the "questions" are operations the tester attempts to execute with the product, and the product answers with its behavior in reaction to the probing of the tester. Although most of the intellectual processes of testing are nearly identical to those of review or inspection, the word testing is also used to connote the dynamic analysis of the product: putting the product through its paces. Sometimes one therefore refers to reviews, walkthroughs or inspections as "static testing", whereas actually running the program with a given set of test cases in a given development stage is often referred to as "dynamic testing", to emphasize the fact that formal review processes form part of the overall testing scope.

Specification: A description, in any suitable form, of requirements.

Specification Testing: An approach to testing wherein the testing is restricted to verifying that the system/software meets an agreed specification.

Specified Input: An input for which the specification predicts an outcome.

State Transition: A transition between two allowable states of a system or component.

State Transition Testing: A test case design technique in which test cases are designed to execute state transitions.

Statement: An entity in a programming language which is typically the smallest indivisible unit of execution.

Statement Coverage: The percentage of executable statements in a component that have been exercised by a test case suite.

Statement Testing: A test case design technique for a component in which test cases are designed to execute statements. Statement Testing is a structural or white box technique, because it is conducted with reference to the code. Statement testing comes under Dynamic Analysis. In an ideal world every statement of every component would be fully tested. However, in the real world this hardly ever happens. In statement testing every possible statement is tested. Compare this to Branch Testing, where each branch is tested to check that it can be traversed, whether it encounters a statement or not.
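A small invented example shows why the comparison with Branch Testing matters: a single test can execute every statement while still leaving one branch untaken.

```python
# Illustrative sketch (hypothetical function): one test can execute every statement
# yet still miss a branch, which is why statement and branch coverage differ.
def apply_discount(total, is_member):
    price = total
    if is_member:                 # the False branch has no statement of its own
        price = total * 0.9
    return price

# This single test executes 100% of the statements above...
assert apply_discount(100, True) == 90.0

# ...but only branch testing forces the untaken path to be exercised as well.
assert apply_discount(100, False) == 100
```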

Static Analysis: Analysis of a program carried out without executing the program.

Static Analyzer: A tool that carries out static analysis.

Static Code Analysis: The analysis of computer software that is performed without actually executing programs built from that software. In most cases the analysis is performed on some version of the source code, and in other cases on some form of the object code. The term is usually applied to the analysis performed by an automated tool, with human analysis being called program understanding or program comprehension.

Static Testing: A form of software testing where the software isn't actually used. This is in contrast to Dynamic Testing. It is generally not detailed testing, but checks mainly for the sanity of the code, algorithm, or document. It is primarily syntax checking of the code and/or manual reading of the code or document to find errors. This type of testing can be used by the developer who wrote the code, in isolation. Code reviews, inspections and walkthroughs are also used. From the black box testing point of view, static testing involves review of requirements or specifications. This is done with an eye toward completeness or appropriateness for the task at hand. This is the verification portion of Verification and Validation. Bugs discovered at this stage of development are normally less expensive to fix than later in the development cycle.

Statistical Testing: A test case design technique in which a model of the statistical distribution of the input is used to construct representative test cases.

Storage Testing: Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage. See TestBench.

Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.

Structural Coverage: Coverage measures based on the internal structure of the component.

Structural Test Case Design: Test case selection that is based on an analysis of the internal structure of the component.

Structural Testing: See structural test case design.

Structured Basis Testing: A test case design technique in which test cases are derived from the code logic to achieve 100% branch coverage.

Structured Walkthrough: See walkthrough.

Stub: A skeletal or special-purpose implementation of a software module, used to develop or test a component that calls or is otherwise dependent on it.
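A minimal sketch, with hypothetical class names, of how a stub lets a dependent component be tested before (or without) the real implementation:

```python
# Minimal sketch (hypothetical classes): a stub stands in for a dependency so the
# component that calls it can be tested before the real implementation exists.
class PaymentGatewayStub:
    """Skeletal replacement for the real payment gateway dependency."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}   # canned response

class CheckoutService:
    def __init__(self, gateway):
        self.gateway = gateway                             # dependency is injected

    def pay(self, amount):
        result = self.gateway.charge(amount)
        return result["status"] == "approved"

# The component under test runs against the stub, with no real gateway involved.
assert CheckoutService(PaymentGatewayStub()).pay(25.00) is True
```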

Subgoal: An attribute which becomes a temporary intermediate goal for the inference engine. Subgoal values need to be determined because they are used in the premise of rules that can determine higher level goals.

Subpath: A sequence of executable statements within a component.

Suitability: The capability of the software product to provide an appropriate set of functions for specified tasks and user objectives.

Suspension Criteria: The criteria used to (temporarily) stop all or a portion of the testing activities on the test items.

Symbolic Evaluation: See symbolic execution.

Symbolic Execution: A static analysis technique used to analyse if and when errors in the code may occur. It can be used to predict what code statements do to specified inputs and outputs. It is also important for considering path traversal. It struggles when dealing with statements which are not purely mathematical.

Symbolic Processing: Use of symbols, rather than numbers, combined with rules-of-thumb (or heuristics), in order to process information and solve problems.

Syntax Testing: A test case design technique for a component or system in which test case design is based upon the syntax of the input.

System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic. As a rule, system testing takes, as its input, all of the "integrated" software components that have successfully passed integration testing and also the software system itself integrated with any applicable hardware system(s). The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together (called assemblages) or between any of the assemblages and the hardware. System testing is a more limiting type of testing; it seeks to detect defects both within the "inter-assemblages" and also within the system as a whole.

TMM (Testing Maturity Model): A model developed by Dr Ilene Burnstein of the Illinois Institute of Technology, for judging the maturity of the software testing processes of an organization and for identifying the key practices that are required to increase the maturity of these processes. Such a maturity model provides a basis for assessing and improving the testing processes of an organization, and can be used as a benchmark for assessing different organizations for equivalent comparison. The model describes the maturity of the company based upon the project the company is handling and the related clients.

Level 1 - Initial
At maturity level 1, processes are usually ad hoc, and the organization usually does not provide a stable environment. Success in these organizations depends on the competence and heroics of the people in the organization, and not on the use of proven processes. In spite of this ad hoc, chaotic environment, maturity level 1 organizations often produce products and services that work; however, they frequently exceed the budget and schedule of their projects. Maturity level 1 organizations are characterized by a tendency to over-commit, abandon processes in time of crisis, and not be able to repeat their past successes. Level 1 software project success depends on having high quality people.

Level 2 - Repeatable [Managed]
At maturity level 2, software development successes are repeatable and the organization may use some basic project management to track costs and schedule. Process discipline helps ensure that existing practices are retained during times of stress. When these practices are in place, projects are performed and managed according to their documented plans. Project status and the delivery of services are visible to management at defined points (for example, at major milestones and at the completion of major tasks). Basic project management processes are established to track cost, schedule, and functionality. The minimum process discipline is in place to repeat earlier successes on projects with similar applications and scope. There is, however, still a significant risk of exceeding cost and time estimates.

Level 3 - Defined
The organization’s set of standard processes, which are the basis for level 3, are established and improved over time. These standard processes are used to establish consistency across the organization. The organization’s management establishes process objectives for the organization’s set of standard processes, and ensures that these objectives are appropriately addressed. A critical distinction between level 2 and level 3 is the scope of standards, process descriptions, and procedures. At level 2, the standards, process descriptions, and procedures may be quite different in each specific instance of the process (for example, on each particular project). At level 3, the standards, process descriptions, and procedures for a project are tailored from the organization’s set of standard processes to suit a particular project or organizational unit. Effective project management is implemented with the help of good project management software such as TestPlan from Original Software.

Level 4 - Quantitatively Managed
Using precise measurements, management can effectively control the software development effort. In particular, management can identify ways to adjust and adapt the process to particular projects without measurable losses of quality or deviations from specifications. Organizations at this level set quantitative quality goals for both software process and software maintenance. Sub-processes are selected that significantly contribute to overall process performance. These selected sub-processes are controlled using statistical and other quantitative techniques. A critical distinction between maturity level 3 and maturity level 4 is the predictability of process performance. At maturity level 4, the performance of processes is controlled using statistical and other quantitative techniques, and is quantitatively predictable. At maturity level 3, processes are only qualitatively predictable.

Level 5 - Optimizing
Maturity level 5 focuses on continually improving process performance through both incremental and innovative technological improvements. Quantitative process-improvement objectives for the organization are established, continually revised to reflect changing business objectives, and used as criteria in managing process improvement. The effects of deployed process improvements are measured and evaluated against the quantitative process-improvement objectives. Both the defined processes and the organization’s set of standard processes are targets of measurable improvement activities. Process improvements to address common causes of process variation and measurably improve the organization’s processes are identified, evaluated, and deployed. Optimizing processes that are nimble, adaptable and innovative depends on the participation of an empowered workforce aligned with the business values and objectives of the organization. The organization’s ability to rapidly respond to changes and opportunities is enhanced by finding ways to accelerate and share learning. A critical distinction between maturity level 4 and maturity level 5 is the type of process variation addressed. At maturity level 4, processes are concerned with addressing special causes of process variation and providing statistical predictability of the results. Though processes may produce predictable results, the results may be insufficient to achieve the established objectives. At maturity level 5, processes are concerned with addressing common causes of process variation and changing the process (that is, shifting the mean of the process performance) to improve process performance (while maintaining statistical predictability) to achieve the established quantitative process-improvement objectives.

Technical Review: A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. A technical review is also known as a peer review.

Test Approach: The implementation of the test strategy for a specific project. It typically includes the decisions made based on the (test) project’s goal and the risk assessment carried out, the starting points regarding the test process, and the test design techniques to be applied.

Test Automation: The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions. Commonly, test automation involves automating a manual process already in place that uses a formalized testing process.
Over the past few years, tools that help programmers quickly create applications with graphical user interfaces have dramatically improved programmer productivity. This has increased the pressure on testers, who are often perceived as bottlenecks to the delivery of software products. Testers are being asked to test more and more code in less and less time. Test automation is one way to do this, as manual testing is time consuming. As different versions of software are released, the new features have to be tested manually time and again. But now there are tools available that help testers automate the GUI, which reduces the test time as well as the cost; other test automation tools support execution of performance tests.
Many test automation tools provide record and playback features that allow users to interactively record user actions and replay them back any number of times, comparing actual results to those expected. However, reliance on these features poses major reliability and maintainability problems. Most successful automators use a software engineering approach, and as such most serious test automation is undertaken by people with development experience. See Partial Test Automation.

Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

TestBench: A suite of test automation solutions from Original Software that facilitate the management and manipulation of the database and visual layer components. TestBench addresses test verification, disk space, and data confidentiality issues. In addition, control of test data ensures that every test starts with a consistent data state, essential if the data is to be predictable at the end of testing.

Test Case: A commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as requirements testing, test steps, verification steps, prerequisites, outputs, test environment, etc. An alternate definition for Test Case is a set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

Test Case Design Technique: A method used to derive or select test cases.

Test Case Suite: A collection of one or more test cases for the software under test.

Test Charter: A statement of test objectives, and possibly test ideas. Test charters are used, amongst other things, in exploratory testing.

Test Comparator: A test tool that compares the actual outputs produced by the software under test with the expected outputs for that test case.
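In essence, a comparator automates the check sketched below; the record fields are invented for illustration.

```python
# Minimal sketch of what a test comparator does: report any field where the
# actual output of the software under test differs from the expected output.
def compare(expected, actual):
    """Return a list of human-readable differences between two result records."""
    diffs = []
    for key in sorted(set(expected) | set(actual)):
        if expected.get(key) != actual.get(key):
            diffs.append(f"{key}: expected {expected.get(key)!r}, got {actual.get(key)!r}")
    return diffs

expected = {"status": "OK", "rows": 3, "total": 42.0}
actual   = {"status": "OK", "rows": 2, "total": 42.0}
print(compare(expected, actual))   # -> ['rows: expected 3, got 2']
```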

Test Comparison: The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.

Test Completion Criterion: A criterion for determining when planned testing is complete, defined in terms of a test measurement technique.

Test Data: Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.

Test Data Management: The management of test data during tests to ensure complete data integrity and legitimacy from the beginning to the end of a test. See TestBench.

Test-Drive Assist: A new concept in testing from Original Software that delivers active support for manual testing by compiling history on recent testing, making it easy to recreate and isolate software defects. With powerful tracking functions that operate in a non-intrusive and natural fashion, testers can detect and highlight defects more quickly and effectively, resulting in developers correcting these defects fast and efficiently. This means defects can simply be re-created at the touch of a button, whilst the results can eventually be used to build fully automated tests.

Test-Drive: Next Generation automated testing solution from Original Software that allows technicians to define and execute sophisticated tests, without being hindered by complex programming languages. State of the art self-updating technology automatically adapts tests to new software releases and upgrades.

Test Driven Development: Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, i.e. an equal number of lines of test code to the size of the production code.
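A hedged sketch of the rhythm this entry describes, using an invented example: the unit test exists (and fails) before the production code that makes it pass is written.

```python
# Illustrative sketch of the TDD rhythm (hypothetical example): the unit test is
# written first and fails, then just enough production code is added to pass it.
import unittest

# Step 2 - production code, written only after the test below existed and failed.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 1 - the failing test that drove the implementation above.
class FizzBuzzTest(unittest.TestCase):
    def test_multiples(self):
        self.assertEqual(fizzbuzz(3), "Fizz")
        self.assertEqual(fizzbuzz(5), "Buzz")
        self.assertEqual(fizzbuzz(15), "FizzBuzz")
        self.assertEqual(fizzbuzz(7), "7")

if __name__ == "__main__":
    unittest.main()
```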

Test Driver: A program or test tool used to execute software against a test case suite.

Test Environment: A description of the hardware and software environment in which the tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers.

Test Execution: The process of carrying out a test, whether it be manually or using automated test software.

Fig 3. Main view in TestDrive-Assist showing screens and input. Those screens with a red cross contain errors.

Test Execution Phase: The period of time in the application development life cycle during which the components of a software product are executed, and the software product is evaluated to determine whether or not requirements have been satisfied.

Test Execution Schedule: A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in their context and in the order in which they are to be executed.

Test Execution Technique: The method used to perform the actual test execution, e.g. manual, capture/playback tool, etc.

Test First Design: Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.

Test Generator: A program that generates test cases in accordance with a specified strategy.

Test Harness: A program or test tool used to execute a test. Also known as a Test Driver.

Test Infrastructure: The organizational artifacts needed to perform testing, consisting of test environments, test tools, office environment and procedures.

Test Level: A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test.

Test Log: A chronological record of relevant details about the execution of tests.

Test Measurement Technique: A method used to measure test coverage items.

Test Object: The component/system/application to be tested.

Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.

TestPlan: A planning solution from Original Software that enables effective planning, team communications, and accurate tracking of every testing activity.

Test Point Analysis: A formula-based test estimation method based on function point analysis.

Test Procedure: A document providing detailed instructions for the execution of one or more test cases.

Test Records: For each test, an unambiguous record of the identities and versions of the component under test, the test specification, and the actual outcome.

Test Run: Execution of a test on a specific version of the test object.

Test Scenario: Definition of a set of test cases or test scripts and the sequence in which they are to be executed.

Test Script: A test script is the traditional but somewhat outdated way of building test cases. A short program written in a programming language is used to test part of the functionality of a software system. Such scripts underpin the basis of all of the older type automation tools; however, these are cumbersome and difficult to use and update, and the scripts themselves often have errors in them. See Code-Free Testing.

Test Specification: A document specifying the test approach for a software feature or combination of features, and the inputs, predicted results and execution conditions for the tests.

Test Strategy: A high-level document defining the test levels to be performed and the testing within those levels for a program (one or more projects).

Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product, for example. In most cases, however, a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

Test Target: A set of test completion criteria for the test.

Test Type: A group of test activities aimed at testing a component or system regarding one or more interrelated quality attributes. A test type is focused on a specific test objective, i.e. reliability test, usability test, regression test etc., and may take place on one or more test levels or test phases.

Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.

Tester: A person (either a professional tester or a user) who is involved in the testing of a component or system.

Testing:

The process of exercising software to verify that it satisfies specified requirements and to detect errors.

The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item.

The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation.

Thread Testing: A version of component integration testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by levels of a hierarchy.

Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

Test Driven Development: A software development technique consisting of short iterations where new test cases covering the desired improvement or new functionality are written first, then the production code necessary to pass the tests is implemented, and finally the software is refactored to accommodate the changes. The availability of tests before actual development ensures rapid feedback after any change. Practitioners emphasize that test-driven development is a method of designing software, not merely a method of testing (i.e. avoid designing software that is difficult to test).

Total Quality Management: A company commitment to develop a process that achieves high quality product and customer satisfaction.

Traceability: The ability to identify related items in documentation and software, such as requirements with associated tests.

Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases.
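A minimal sketch, with invented requirement and test case identifiers, of how such a matrix can be derived and used to spot untested requirements:

```python
# Minimal sketch (requirement and test IDs are invented): a traceability matrix
# records which test cases cover which requirements and exposes any gaps.
requirements = ["REQ-001", "REQ-002", "REQ-003"]
coverage = {
    "TC-01": ["REQ-001"],
    "TC-02": ["REQ-001", "REQ-002"],
}

matrix = {req: [tc for tc, reqs in coverage.items() if req in reqs]
          for req in requirements}

for req, tests in matrix.items():
    print(req, "->", tests or "NOT COVERED")
# REQ-003 prints as NOT COVERED, flagging a requirement with no associated test.
```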

Tracked Field: A value captured in one part of an automated test process and retained for use at a later stage.

Timeline: A chronological representation of components in a result set, combining different areas of test analysis, such as visual, database, messages and linked programs. See TestBench.

Total Testing: A complete approach to testing from Original Software describing the desire to optimize all the steps in the testing process (e.g. from unit testing to UAT), as well as the different layers of the application’s architecture (e.g. from the user interface to the database).

Understandability: The capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use.

Unit Testing: A procedure used to validate that individual units of source code are working properly. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual program, function, procedure, etc., while in object-oriented programming, the smallest unit is a method, which may belong to a base/super class, abstract class or derived/child class. Ideally, each test case is independent from the others; mock objects and test harnesses can be used to assist testing a module in isolation. Unit testing is typically done by developers and not by end-users.
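A short sketch of a unit tested in isolation; the function and its collaborator are hypothetical, and the mock object stands in for the real dependency as described above.

```python
# Short sketch (hypothetical unit): the smallest testable piece is exercised in
# isolation, with its collaborator replaced by a mock object.
import unittest
from unittest.mock import Mock

def send_reminder(user, mailer):
    """Hypothetical unit under test: email a reminder to an inactive user."""
    if not user["active"]:
        mailer.send(user["email"], "We miss you!")
        return True
    return False

class SendReminderTest(unittest.TestCase):
    def test_inactive_user_gets_mail(self):
        mailer = Mock()                                   # stands in for the real mail system
        result = send_reminder({"active": False, "email": "a@example.com"}, mailer)
        self.assertTrue(result)
        mailer.send.assert_called_once_with("a@example.com", "We miss you!")

if __name__ == "__main__":
    unittest.main()
```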

Unreachable Code: Code that cannot be reached and therefore is impossible to execute.

Unstructured Decisions: This type of decision situation is complex and no standard solutions exist for resolving the situation. Some or all of the structural elements of the decision situation are undefined, ill-defined or unknown.

Usability: The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions.

Usability Requirements: A specification of the required usability for the system/software.

Usability Testing: Testing to determine whether the system/software meets the specified usability requirements.

Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.

Use Case Testing: A black box test design technique in which test cases are designed to execute user scenarios.

User Acceptance Testing: Black-box testing performed on a system prior to its delivery. In most environments, acceptance testing by the system provider is distinguished from acceptance testing by the customer (the user or client) prior to accepting transfer of ownership. In such environments, acceptance testing performed by the customer is known as beta testing, user acceptance testing (UAT), end user testing, site (acceptance) testing, or acceptance testing.

Unit Testing: Testing of individual software components.

V Model: Describes how inspection and testing activities can occur in parallel with other activities.

Validation Testing: Determination of the correctness of the products of software development with respect to the user needs and requirements.

Variable Data: A repository for multiple scenario values which can be used to drive repeatable automated processes through a number of iterations when used in conjunction with an automation solution such as TestDrive.

Verification: The process of evaluating a system or component to determine its completeness.

Version Identifier: A version number, version date, or version date and time stamp.

Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.

Walkthrough: A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.

Warping: The capability to manipulate dates in data to simulate data of different ages in support of testing date-driven systems. Found in TestBench.

Waterline: The lowest level of detail relevant to the Customer.

What If Analysis: The capability of "asking" the software package what the effect will be of changing some of the input data or independent variables.

White Box Testing: (a.k.a. clear box testing, glass box testing or structural testing). Uses an internal perspective of the system to design test cases based on internal structure. It requires programming skills to identify all paths through the software. The tester chooses test case inputs to exercise paths through the code and determines the appropriate outputs. While white box testing is applicable at the unit, integration and system levels of the software testing process, it is typically applied to the unit. So while it normally tests paths within a unit, it can also test paths between units during integration, and between subsystems during a system level test. Though this method of test design can uncover an overwhelming number of test cases, it might not detect unimplemented parts of the specification or missing requirements. But you can be sure that all paths through the test object are executed.

Wide Band Delphi: A consensus-based estimation technique for estimating effort. It was developed in the 1940s at the RAND Corporation as a forecasting tool. It has since been adapted across many industries to estimate many kinds of tasks, ranging from statistical data collection results to sales and marketing forecasts. It has proven to be a very effective estimation tool, and it lends itself well to software projects. However, many see great problems with the technique, such as unknown manipulation of a group and silencing of minorities in order to see a preset outcome of a meeting.

Workaround: Method of avoiding an incident or problem, either from a temporary fix or from a technique that means the Customer is not reliant on a particular aspect of a service that is known to have a problem.

Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.

XML: Extensible Markup Language. XML is a set of rules for designing text formats that let you structure your data. XML makes it easy for a computer to generate data, read data, and ensure that the data structure is unambiguous. XML avoids common pitfalls in language design: it is extensible, platform-independent, and it supports internationalization and localization.

Bibliography:

The Software Test Management Guide (www.ruleworks.com)
The Software Testing Glossary (www.aptest.com)
The International Software Testing Qualifications Board (www.istqb.org)
The Testing Standards Working Party (www.testingstandards.co.uk)
Wikipedia (wikipedia.org)
IEEE (www.ieee.org)

EMEA OFFICE
Original Software Ltd
Grove House
Chineham Court
Basingstoke
Hampshire
RG24 8AG

Phone: +44 (0) 1256 338666 Fax: +44 (0) 1256 338678

USA OFFICE
Original Software Inc
601 Oakmont Lane
Suite 170
Westmont
IL 60559
USA

Phone: 630-321-0092 Fax: 630-321-0223

[email protected]

Copyright Original Software 2007. All rights reserved. All descriptions, specifications and prices are intended for general information only and are subject to change without notice.

OSG-GST-10/07