
PART B-UNIT-6-PROCESS FRAMEWORK
CHAPTER 1 - A FRAMEWORK FOR TESTING AND ANALYSIS


Validation and Verification

[Figure: Actual requirements, SW specs, and the system. Validation relates the system to the actual requirements and includes usability testing and user feedback; verification relates the system to the software specifications and includes testing, inspections, and static analysis.]


6.1 VALIDATION AND VERIFICATION:

EXPLAIN IN DETAIL VALIDATION AND VERIFICATION ACTIVITIES AND THEIR DIFFERENCES. (10M) JAN 2015, JULY 2013

• Validation: Assessing the degree to which a software system actually fulfills its requirements, in the sense of meeting the user's real needs. Validation compares a description, such as a requirements specification, a design, or a running system, against actual needs. A specification is a statement about a particular proposed solution to a problem, and that proposed solution may or may not achieve its goals. Whether a property can be verified, or only validated, depends on how precisely the specification states it. For example, "if a user presses a request button at floor i, an available elevator must arrive at floor i soon" is an unverifiable but validatable specification, because "soon" is not precise.


• Verification: Checking the consistency of an implementation with a specification. For example, an overall design could play the role of "specification" and a more detailed design the role of "implementation"; checking whether the detailed design is consistent with the overall design would then be verification of the detailed design. Later, the same detailed design could play the role of "specification" with respect to source code, which would be verified against the design. Verification is thus a check of consistency between two descriptions, and it depends on having a sufficiently precise specification. For example, "if a user presses a request button at floor i, an available elevator must arrive at floor i within 30 seconds" is a verifiable specification.


• The relation of verification and validation activities with respect to artifacts (specifications, design, code, etc.) produced in a software development project is shown below.

[Figure: Verification and validation activities paired with development artifacts. Actual needs and constraints are validated against the delivered package through user acceptance (alpha, beta test) and through user review of external behavior as it is determined or becomes visible. System specifications are verified against system integration by system test; subsystem design/specs against subsystems by integration test; unit/component specs against units/components by module test. Adjacent specification levels are checked against each other by analysis/review.]

Validation activities check work products against actual user requirements, while verification activities check consistency of work products.


• Validation activities refer primarily to the overall system specification and the final code.

• With respect to the overall system specification, validation checks for discrepancies between actual needs and the system specification intended to fulfill them.

• With respect to the final code, validation aims at checking for discrepancies between actual needs and the final product, to reveal possible failures and to make sure the product meets end-user expectations.

• Verification includes checks for self-consistency and well-formedness.
• Validation against actual requirements necessarily involves human judgment and the potential for ambiguity, misunderstanding, and disagreement.
• A specification should be sufficiently precise and unambiguous.

The dependability properties:
• A system that meets its actual goals is useful, while a system that is consistent with its specification is dependable.
• Dependability properties include correctness, reliability, robustness, and safety.


• Correctness is absolute consistency with a specification in all circumstances; with nontrivial specifications it is almost never achieved.

• Reliability is a statistical approximation to correctness, expressed as the likelihood of correct behavior in expected use.

• Robustness, unlike correctness and reliability, weighs properties as more and less critical, and distinguishes which properties should be maintained even under exceptional circumstances in which full functionality cannot be maintained.

• Safety is a kind of robustness in which the critical property to be maintained is avoidance of particular hazardous behaviors.

• A good requirements document should include both a requirements analysis and a requirements specification. The requirements analysis describes the problem and the specification describes a proposed solution.

6.2 DEGREES OF FREEDOM:
EXPLAIN DEGREES OF FREEDOM. (4M) JAN 2015, JULY 2014, JULY 2013
• A logical correctness argument can, albeit at high cost, be constructed for the verification of programs.


• From computability theory: because nontrivial program properties are undecidable, for each technique for checking a property S there exists at least one program for which that technique cannot obtain a correct answer.

• Exhaustive testing, that is, executing and checking every possible behavior of a program, cannot be completed in any finite time for most programs.

• Suppose we do make use of the fact that programs are executed on real machines with finite representations of memory values. Consider the following trivial Java class:

class Trivial {
    static int sum(int a, int b) { return a + b; }
}

• The Java language definition states that the representation of an int is 32 binary digits, and thus there are 2^32 × 2^32 = 2^64 ≈ 1.8 × 10^19 different inputs on which the method Trivial.sum() would need to be tested to obtain a proof of its correctness. At one nanosecond (10^-9 seconds) per test case, this would take approximately 1.8 × 10^10 seconds, or about 585 years.
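As a quick sanity check of that arithmetic, a minimal sketch (mine, not from the original text):

class ExhaustiveCostEstimate {
    public static void main(String[] args) {
        // Each of the two int parameters has 2^32 possible values,
        // so Trivial.sum has 2^32 * 2^32 = 2^64 distinct inputs.
        double inputs = Math.pow(2, 64);        // about 1.8e19
        double seconds = inputs * 1e-9;         // one nanosecond per test case
        double years = seconds / (60.0 * 60 * 24 * 365);
        System.out.printf("%.2e inputs, about %.0f years%n", inputs, years);
    }
}

Even for this trivial two-argument method, exhaustive testing is hopeless; for programs whose inputs include strings, arrays, or files, the input space is vastly larger still.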

EXPLAIN VERIFICATION TRADE-OFF DIMENSIONS. (8M) JULY 2014, JAN 2014
• A technique for verifying a property can be pessimistic, meaning that it is not guaranteed to accept a program even if the program does possess the property being analyzed, or it can be optimistic if it may accept some programs that do not possess the property. A third degree of freedom is to simplify the property: substitute a simpler, checkable property for the property we actually want to check, as shown below.


• Some analysis techniques may give a third possible answer, "don't know." We can consider these techniques to be either optimistic or pessimistic depending on how we interpret the "don't know" result. Perfection is unobtainable, but one can choose techniques that err in only a particular direction.

[Figure: Verification trade-off dimensions. Perfect verification of arbitrary properties by logical proof or exhaustive testing would require infinite effort. Practical techniques trade this off along three axes: optimistic inaccuracy (typical testing techniques), pessimistic inaccuracy (data flow analysis), and simplified properties (precise analysis of simple syntactic properties). Theorem proving lies toward unbounded effort to verify general properties; model checking offers decidable but possibly intractable checking of simple temporal properties.]


• A software verification technique that errs only in the pessimistic direction is called a conservative analysis. A conservative analysis will often produce a very large number of spurious error reports in addition to a few accurate ones, yet it may still be preferable to an analysis that could accept a faulty program. Often only a careful combination of complementary optimistic and pessimistic techniques can mutually reduce their respective problems and produce acceptable results.

• Several terms related to pessimistic and optimistic inaccuracy appear in the literature on program analysis; the most common are given below:

• Safe - A safe analysis has no optimistic inaccuracy; that is, it accepts only correct programs. A safe analysis related to a program optimization is one that allows that optimization only when the result of the optimization will be correct.

• Sound - Soundness is a term to describe evaluation of formulas. An analysis of a program P with respect to a formula F is sound if the analysis returns True only when the program actually does satisfy the formula. If satisfaction of a formula F is taken as an indication of correctness, then a sound analysis is the same as a safe or conservative analysis.


• Complete- Completeness, like soundness, is a term to describe evaluation of formulas. An analysis of a program P with respect to a formula F is complete if the analysis always returns True when the program actually does satisfy the formula. If satisfaction of a formula F is taken as an indication of correctness, then a complete analysis is one that admits only optimistic inaccuracy. An analysis that is sound but incomplete is a conservative analysis.

• In addition to pessimistic and optimistic inaccuracy, the substitution of simple, checkable properties for the actual properties of interest can be found in the design of modern programming languages. Consider, for example, the property that each variable should be initialized with a value before its value is used in an expression. In the C language, a compiler cannot provide a precise static check for this property, because of the possibility of code like the following:

int i, sum;
int first = 1;
for (i = 0; i < 10; ++i) {
    if (first) {
        sum = 0;
        first = 0;
    }
    sum += i;
}

• Java neatly solves this problem by making code like this illegal; that is, the rule is

that a variable must be initialized on all program control paths, whether or not those paths can ever be executed.

• Consider the following condition that we might wish to impose on requirements documents:

1. Each significant domain term shall appear with a definition in the glossary of the document.

This property is nearly impossible to check automatically, since determining whether a particular word or phrase is a "significant domain term" is a matter of human judgment.

1a. Each significant domain term shall be set off in the requirements document by the use of a standard style term. The default visual representation of the term style is a single underline in printed documents and purple text in on-line displays.

Property (1a) still requires human judgment.


1b. Each word or phrase in the term style shall appear with a definition in the glossary of the document.

Property (1b) can be easily automated in a way that will be completely precise. A minimal sketch of such a check is given below.
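To make this concrete, here is a sketch of such an automated check. It is not from the original text, and it assumes a hypothetical convention in which term-styled phrases are marked <term>...</term> in the document source:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class TermStyleChecker {
    // Return every term-styled phrase that lacks a glossary definition.
    static List<String> undefinedTerms(String document, Set<String> glossary) {
        List<String> missing = new ArrayList<>();
        Matcher m = Pattern.compile("<term>(.*?)</term>").matcher(document);
        while (m.find()) {
            String term = m.group(1).toLowerCase();
            if (!glossary.contains(term)) {
                missing.add(term);
            }
        }
        return missing;
    }

    public static void main(String[] args) {
        Set<String> glossary = new HashSet<>(Arrays.asList("oracle", "scaffolding"));
        String doc = "Each <term>oracle</term> checks one <term>test case</term>.";
        System.out.println(undefinedTerms(doc, glossary));  // prints [test case]
    }
}

Because "term style" is a purely syntactic notion, this check is completely precise; the remaining human judgment is confined to deciding which phrases to mark.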

• As a second example, consider a Web-based service in which user sessions need not directly interact, but do read and modify a shared collection of data on the server. Imposing a well-known, formally verified concurrency control protocol, such as the two-phase locking protocol, substitutes a much simpler, sufficient property (two-phase locking) for the complex property of interest (serializability). Further imposing a global order on lock accesses simplifies testing and analysis again.

• With the adoption of coding conventions that make locking and unlocking actions easy to recognize, it may be possible to rely primarily on flow analysis to determine conformance with the locking protocol, with the role of dynamic testing reduced to a "back-up" that raises confidence in the soundness of the static analysis. A sketch of such a convention follows.
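A minimal sketch of such a convention (not from the original text; the two locks and their ranking are hypothetical). Every thread acquires locks in a fixed global order, which rules out the circular waiting needed for deadlock, and the paired lock()/unlock() calls are easy for a flow analysis to recognize:

import java.util.concurrent.locks.ReentrantLock;

class OrderedLocking {
    // Global order convention: CATALOG (rank 1) before CART (rank 2).
    static final ReentrantLock CATALOG = new ReentrantLock();  // rank 1
    static final ReentrantLock CART = new ReentrantLock();     // rank 2

    static void updateBoth() {
        CATALOG.lock();                 // rank 1 first, never after rank 2
        try {
            CART.lock();
            try {
                // ... read and modify the shared data here ...
            } finally {
                CART.unlock();          // release in reverse order
            }
        } finally {
            CATALOG.unlock();
        }
    }

    public static void main(String[] args) {
        updateBoth();
        System.out.println("locks acquired and released in global order");
    }
}

With this convention, verifying conformance reduces to a largely syntactic check: lock() calls must appear in ascending rank, each matched by an unlock() in a finally block.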


6.3 VARIETIES OF SOFTWARE:
• The basic software testing and analysis techniques were developed primarily for procedural and object-oriented software. These "generic" techniques are at least partly applicable to most varieties of software, but particular application domains (e.g., real-time and safety-critical software) and construction methods (e.g., concurrency and physical distribution, graphical user interfaces) call for particular properties to be verified, change the relative importance of different properties, and impose constraints on the applicable techniques. Typically a software system does not fall neatly into one category but rather has a number of relevant characteristics that must be considered when planning verification.


PART B-UNIT-6-PROCESS FRAMEWORK
CHAPTER 2 - BASIC PRINCIPLES


MENTION THE BASIC PRINCIPLES UNDERLYING A&T TECHNIQUES.
• Analysis and testing (A&T) has been common practice since the earliest software projects.
• Six principles characterize the various approaches and techniques for analysis and testing: sensitivity, redundancy, restriction, partition, visibility, and feedback.

• General engineering principles:
– Partition: divide and conquer
– Visibility: making information accessible
– Feedback: tuning the development process
• Specific A&T principles:
– Sensitivity: better to fail every time than sometimes
– Redundancy: making intentions explicit
– Restriction: making the problem easier


EXPLAIN: A. SENSITIVITY, B. REDUNDANCY, C. RESTRICTION, D. VISIBILITY, E. PARTITION. (10M) JAN 2015, JULY 2014, JULY 2013

2.1 SENSITIVITY:
• Human developers make errors, producing faults in software. Faults may lead to failures, but faulty software may not fail on every execution.
• The sensitivity principle states that it is better to fail every time than sometimes.
• If a fault is detected in unit testing, the cost of repairing it is relatively small.
• If a fault survives at the unit level but triggers a failure detected in integration testing, the cost of correction is much greater.
• If the first failure is detected in system or acceptance testing, the cost is very high indeed, and the most costly faults are those detected by customers in the field.
• A fault that triggers a failure on every execution is unlikely to survive past unit testing. In contrast, a fault that results in a failure only for some unusual configurations of customer equipment, or that causes failures randomly but very rarely, may be difficult and expensive to detect.
• A small C program with three faulty calls to string copy procedures is shown below.


#include <assert.h>
#include <stdio.h>    /* added: needed for printf */
#include <string.h>   /* added: needed for strlen, strcpy, strncpy */

char before[] = "=Before=";
char middle[] = "Middle";
char after[] = "=After=";

void show() {
    printf("%s\n%s\n%s\n", before, middle, after);
}

void stringCopy(char *target, const char *source, int howBig) {
    assert(strlen(source) < howBig);
    strcpy(target, source);
}

int main(int argc, char *argv[]) {
    show();
    strcpy(middle, "Muddled");                     /* Fault, but may not fail */
    show();
    strncpy(middle, "Muddled", sizeof(middle));    /* Fault, may not fail */
    show();
    stringCopy(middle, "Muddled", sizeof(middle)); /* Guaranteed to fail */
    show();
}

Standard C functions strcpy and strncpy may or may not fail when the source string is too long. The procedure stringCopy is sensitive: It is guaranteed to fail in an observable way if the source string is too long.


• The calls to strcpy, strncpy, and stringCopy all pass a source string "Muddled", which is too long to fit in the array middle. For strcpy, the fault may or may not cause an observable failure depending on the arrangement of memory.

• While strncpy avoids overwriting other memory, it truncates the input without warning, and sometimes without properly null-terminating the output.

• The function stringCopy uses an assertion to ensure that, if the source string is too long, the program always fails in an observable manner.

• The sensitivity principle makes these faults easier to detect by making them cause failures more often. It can be applied in three main ways:
– At the design level, by changing the way in which the program fails;
– At the analysis and testing level, by choosing a technique more reliable with respect to the property of interest;
– At the environment level, by choosing a technique that reduces the impact of external factors on the results.
• Examples of application of the sensitivity principle:
– Replacing strcpy and strncpy with stringCopy in the above program.
– Run-time array bounds checking in many programming languages.


– A variety of tools and replacements for the standard memory management library are available to enhance sensitivity to memory allocation and reference faults.
– The fail-fast property of Java iterators provides an immediate and observable failure when an illegal modification occurs during iteration, as sketched below.
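A minimal sketch of the fail-fast behavior (not from the original text):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class FailFastDemo {
    public static void main(String[] args) {
        List<String> items = new ArrayList<>(Arrays.asList("a", "b", "c"));
        for (String s : items) {          // iterates via items.iterator()
            if (s.equals("a")) {
                items.remove(s);          // structural modification during iteration
            }
        }
        // Never reached: the next call to the iterator detects the
        // modification and throws java.util.ConcurrentModificationException.
    }
}

The iterator fails immediately and observably instead of allowing silent corruption (the Java documentation notes the check is best-effort, so it should be used to detect bugs, not relied upon for correctness).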

• A run-time deadlock analysis works better if it is machine-independent, i.e., if the program deadlocks when analyzed on one machine, it deadlocks on every machine.
• A test selection criterion works better if every selected test provides the same result, i.e., if the program fails with one of the selected tests, it fails with all of them (a reliable criterion).

2.2 REDUNDANCY:
• Redundancy is the opposite of independence. In software test and analysis, we wish to detect faults that could lead to differences between intended behavior and actual behavior, so the redundancy we exploit takes the form of making intentions explicit.

• Redundancy can be introduced to declare intent and automatically check for consistency.
• Static type checking is a classic application of this principle: the type declaration is a statement of intent that is at least partly redundant with the use of a variable in the source code.

• The type declaration constrains other parts of the code, so a consistency check can be applied.


• Redundancy checking is not limited to program source code: one can also intentionally introduce redundancy in other software artifacts (e.g., designs), and software design tools typically provide ways to check consistency between different design views or artifacts.
• Run-time checks are another application of redundancy in programming, as the sketch below illustrates.
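A minimal sketch of a run-time check (not from the original text; the Account class is hypothetical): the declared invariant is redundant with the code that should maintain it, and an assertion checks the two for consistency.

class Account {
    private int balance = 0;   // intended invariant: balance >= 0

    // Redundant, executable statement of the invariant above;
    // checked at run time when assertions are enabled (java -ea).
    private boolean invariantHolds() {
        return balance >= 0;
    }

    void deposit(int amount) {
        balance += amount;
        assert invariantHolds() : "balance went negative after deposit";
    }

    void withdraw(int amount) {
        balance -= amount;     // fault: no check that amount <= balance
        assert invariantHolds() : "balance went negative after withdraw";
    }

    public static void main(String[] args) {
        Account a = new Account();
        a.deposit(10);
        a.withdraw(20);        // with -ea, fails here, every time
    }
}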

2.3 RESTRICTION:
• When there are no acceptably cheap and effective ways to check a property, the check can be performed on a more restrictive property, or limited to a smaller, more restrictive class of programs.

• Consider the problem of ensuring that each variable is initialized before it is used, on every execution. It is not possible for a compiler or analysis tool to determine precisely whether this property holds.

• The program shown below illustrates the difficulty: can the variable k ever be uninitialized the first time i is added to it? If someCondition(0) always returns true, then k will be initialized to zero the first time through the loop, before k is incremented, so perhaps there is no potential for a run-time error; but the method someCondition could be arbitrarily complex and might even depend on some condition in the environment.


• Java's solution to this problem is to enforce a stricter, simpler condition: A program is not permitted to have any syntactic control paths on which an uninitialized reference could occur, regardless of whether those paths could actually be executed. The program has such a path, so the Java compiler rejects it.

• The choice of programming language(s) for a project may entail a number of source code restrictions that impact test and analysis.

• Additional restrictions may be imposed in the form of programming standards, such as restricting the use of type casts or pointer arithmetic in C. Other forms of restriction can apply to architectural and detailed design.
• Restrictions can also be imposed at design time: for example, a particular locking scheme can guarantee the property of serializability, so that transactions behave as if they executed serially in some order.
• Stateless component interfaces are an example of restriction applied at the architectural level. An interface is stateless if the service does not remember anything about previous requests. One such stateless interface is the Hypertext Transport Protocol (HTTP) 1.0 of the World Wide Web, which made Web servers much simpler and easier to test, as the sketch below suggests.
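A minimal sketch of the idea (not from the original text; the handler is hypothetical):

class StatelessEcho {
    // Stateless interface: the response is a function of the request alone.
    // Each test case is self-contained; no hidden session state to set up.
    static String handle(String request) {
        return "HTTP/1.0 200 OK\n\n" + request.length() + " bytes requested";
    }

    public static void main(String[] args) {
        // The same request always yields the same response, in any order.
        System.out.println(handle("GET /index.html"));
        System.out.println(handle("GET /index.html"));
    }
}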


/** A trivial method with a potentially uninitialized variable.
 * Maybe someCondition(0) is always true, and therefore k is
 * always initialized before use ... but it's impossible, in
 * general, to know for sure. Java rejects the method.
 */
static void questionable() {
    int k;
    for (int i = 0; i < 10; ++i) {
        if (someCondition(i)) {
            k = 0;
        } else {
            k += i;
        }
    }
    System.out.println(k);
}


2.4 PARTITION:
• Partition, often also known as "divide and conquer," is a general engineering principle. Dividing a complex problem into subproblems to be attacked and solved independently is probably the most common human problem-solving strategy.

• In analysis and testing, the partition principle is widely used and exploited.
• Partitioning can be applied at both the process and technique levels.
• At the process level, we divide complex activities into sets of simple activities that can be attacked independently. For example, testing is usually divided into unit, integration, subsystem, and system testing. In this way, we can focus on different sources of faults at different steps, and at each step, we can take advantage of the results of the former steps.

• Many static analysis techniques divide the overall analysis into two subtasks: first simplify the system to make the proof of the desired properties feasible, and then prove the property with respect to the simplified model.
• Test designers identify a finite number of classes of test cases to execute, either from specifications (functional testing) or from program structure (structural testing).


2.5 VISIBILITY:
• Visibility means the ability to measure progress or status against goals.
• In software engineering, the visibility principle takes the form of process visibility and project schedule visibility.
• Quality process visibility also applies to measuring achieved (or predicted) quality against quality goals.
• Visibility is closely related to observability, the ability to extract useful information from a software artifact.
• A variety of simple techniques can be used to improve observability. For example, Internet protocols like HTTP and SMTP (Simple Mail Transport Protocol, used by Internet mail servers) are based on the exchange of simple textual commands. The choice of simple, human-readable text rather than a more compact binary encoding has a small cost in performance and a large payoff in observability.
• A variant of observability through direct use of simple text encodings is providing readers and writers to convert between other data structures and simple, human-readable and editable text, as sketched below.
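A minimal sketch of such a reader/writer pair (not from the original text; the ConfigEntry record is hypothetical): a record is written as one human-readable key=value line, which can be inspected and even edited by hand, and parsed back.

class ConfigEntry {
    final String key;
    final int value;

    ConfigEntry(String key, int value) {
        this.key = key;
        this.value = value;
    }

    // Writer: encode as simple, human-readable text.
    String toText() {
        return key + "=" + value;
    }

    // Reader: parse the same textual form back into a data structure.
    static ConfigEntry fromText(String line) {
        String[] parts = line.split("=", 2);
        return new ConfigEntry(parts[0], Integer.parseInt(parts[1]));
    }

    public static void main(String[] args) {
        String text = new ConfigEntry("timeout", 30).toText();
        System.out.println(text);                       // timeout=30
        ConfigEntry parsed = ConfigEntry.fromText(text);
        System.out.println(parsed.key + " -> " + parsed.value);
    }
}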


2.6 FEEDBACK:
• Feedback is another classic engineering principle that applies to analysis and testing.
• Feedback applies both to the process itself (process improvement) and to individual techniques.
• Systematic inspection derives its success from feedback: participants in inspections are guided by checklists, and checklists are revised and refined based on experience.
• New checklist items may be derived from root cause analysis, analyzing previously observed failures to identify the initial errors that led to them.


SUMMARY
• The discipline of test and analysis is characterized by six main principles:
– Sensitivity: better to fail every time than sometimes
– Redundancy: making intentions explicit
– Restriction: making the problem easier
– Partition: divide and conquer
– Visibility: making information accessible
– Feedback: tuning the development process
• These principles can be used to understand the advantages and limits of different approaches and to compare different techniques.


PART B-UNIT-6-PROCESS FRAMEWORK
CHAPTER 3 - TEST AND ANALYSIS ACTIVITIES WITHIN A SOFTWARE PROCESS


3.1 THE QUALITY PROCESS:
• The quality process is the set of activities and responsibilities focused primarily on ensuring adequate dependability of the software product; it is also concerned with project schedule and product usability.
• Like other parts of an overall software process, the quality process provides a framework for selecting and arranging activities aimed at a particular goal, while also considering interactions and trade-offs with other important goals.

• The quality process should be structured for:
– Completeness: appropriate activities are planned to detect each important class of faults.
– Timeliness: faults are detected at a point of high leverage, which means they are detected as early as possible.
– Cost-effectiveness: subject to the constraints of completeness and timeliness, cost must be considered over the whole development cycle and product life, so the dominant factor is usually the cost of repeating an activity through many change cycles.

• Activities of the quality process fall within the domains of quality assurance and quality improvement.
• Quality activities should be carried out at the earliest opportunity, because a defect introduced in coding is far cheaper to repair during unit test than later during integration or system test, and most expensive if it is detected by a user of the fielded system.

3.2 PLANNING AND MONITORING:
• Process visibility is a key factor in software quality processes.
• Process visibility in a software quality process emphasizes progress against quality goals.
• If one cannot gain confidence in the quality of the software system long before it reaches final testing, the quality process has not achieved adequate visibility.
• A well-designed quality process balances several activities across the whole development process, selecting and arranging them to be as cost-effective as possible and to improve early visibility.


• Planning improves early visibility, which motivates the use of "proxy" measures: quantifiable attributes that are not identical to the properties we wish to measure but have the advantage of being measurable earlier in development. For example, the number of faults in design or code is not a true measure of reliability, but we may count faults discovered in design inspections as an early indicator of potential quality problems.

• Quality goals can be achieved only through careful planning of activities that are matched to the identified objectives.

• Planning is integral to the quality process.
• The overall analysis and test strategy identifies company- or project-wide standards that must be satisfied:
– procedures required, e.g., for obtaining quality certificates
– techniques and tools that must be used
– documents that must be produced

• A complete analysis and test plan is a comprehensive description of the quality process that includes several items:
– objectives and scope of A&T activities
– documents and other items that must be available
– items to be tested
– features to be tested and not to be tested
– analysis and test activities
– staff involved in A&T
– constraints
– pass and fail criteria
– schedule
– deliverables
– hardware and software requirements
– risks and contingencies


3.3 QUALITY GOALS:
• Quality process visibility includes making distinctions among dependability qualities.
• Product qualities are the goals of software quality engineering, and process qualities are means to achieve those goals.
• Software product qualities can be divided into those that are directly visible to a client and those that primarily affect the software development organization.
• Reliability is directly visible to the client. Maintainability primarily affects the development organization, and indirectly affects the client.
• Properties that are directly visible to users of a software product, such as dependability, latency, usability, and throughput, are called external properties.
• Properties that are not directly visible to end users, such as maintainability, reusability, and traceability, are called internal properties, even though their impact on the software development and evolution processes may indirectly affect users.
• The external properties of software can be divided into dependability and usefulness.
• Quality can be considered as fulfillment of required and desired properties, as distinguished from specified properties.
• A critical task in software quality analysis is to make desired properties explicit.


3.4 DEPENDABILITY PROPERTIES:
BRIEFLY DISCUSS THE DEPENDABILITY PROPERTIES IN THE PROCESS FRAMEWORK. (8M) JAN 2015, JULY 2014, JAN 2014

• Correctness: A program or system is correct if it is consistent with its specification. A specification divides all possible system behaviors into two classes: successes (or correct executions) and failures. All of the possible behaviors of a correct system are successes.
– A program cannot be mostly correct or somewhat correct: either it is correct on all possible behaviors, or it is not correct.
– It is very easy to achieve correctness with respect to some (very bad) specification. Achieving correctness with respect to a useful specification, on the other hand, is seldom practical for nontrivial systems.

• Reliability: A statistical approximation to correctness (100% reliability is indistinguishable from correctness). It is the likelihood of correct function for some "unit" of behavior, and it is relative to a specification and a usage profile: the same program can be more or less reliable depending on how it is used.
• Availability: Particular measures of reliability can be used for different units of execution and different ways of counting success and failure. Availability is an appropriate measure when a failure has some duration in time.
– Between the initial failure of a network router and its restoration we say the router is "down" or "unavailable." The availability of the router is the time in which the system is "up" (providing normal service) as a fraction of total time. Thus, a network router that averages 1 hour of down time in each 24-hour period would have an availability of 23/24, or 95.8%.
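A quick check of that arithmetic (a sketch, not from the original text):

class AvailabilityCheck {
    public static void main(String[] args) {
        double downHours = 1.0;
        double periodHours = 24.0;
        double availability = (periodHours - downHours) / periodHours;
        // Prints 95.8%: up 23 of every 24 hours.
        System.out.printf("availability = %.1f%%%n", availability * 100);
    }
}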


• Mean time between failures (MTBF): MTBF is yet another measure of reliability, also using time as the unit of execution. The hypothetical network router that typically fails once in a 24-hour period and takes about an hour to recover has a mean time between failures of 23 hours.
– Note that availability does not distinguish between two failures of 30 minutes each and one failure lasting an hour, while MTBF does.
• The definitions of correctness and reliability have (at least) two major weaknesses.

– First, since the success or failure of an execution is relative to a specification, they are only as strong as the specification.

– Second, they make no distinction between a failure that is a minor annoyance and a failure that results in catastrophe.

– These are simplifying assumptions that we accept for the sake of precision.
• Safety and hazards: Software safety is an extension of the well-established field of system safety into software. Safety is concerned with preventing certain undesirable behaviors, called hazards.
– Software safety is typically a concern in "critical" systems such as avionics and medical systems, but the basic principles apply to any system in which undesirable behaviors can be distinguished from failure. For example, the developers of a word processor might consider safety with respect to the hazard of file corruption separately from reliability with respect to the complete functional requirements for the word processor.
– Safety is meaningless without a specification of the hazards to be prevented, and in practice the first step of safety analysis is always finding and classifying hazards.
– Typically, hazards are associated with some system in which the software is embedded (e.g., the medical device), rather than with the software alone.
– A distinguishing characteristic of safety is that it is concerned only with these hazards, and not with other aspects of correct functioning.


– The dead-man switch of a lawn mower does not contribute in any way to cutting grass; its sole purpose is to prevent the operator from reaching into the mower blades while the engine runs, by acting as an interlock device.

– Safety is best considered as a quality distinct from correctness and reliability for two reasons. First, by focusing on a few hazards and ignoring other functionality, a separate safety specification can be much simpler than a complete system specification, and therefore easier to verify. Second, even if the safety specification were redundant with regard to the full system specification, it is important because (by definition) we regard avoidance of hazards as more crucial than satisfying other parts of the system specification.

• Robustness: Software that fails under conditions that violate the premises of its design may still be "correct" in the strict sense, yet the manner in which the software fails is important.
– It is acceptable that a word processor fails to write a new file that does not fit on disk, but unacceptable for it to also corrupt the previous version of the file in the attempt.
– It is acceptable for a database system to cease to function when the power is cut, but unacceptable for it to leave the database in a corrupt state.
– It is usually preferable for a Web system to turn away some arriving users rather than become too slow for all, or crash.
– Software that gracefully degrades or fails "softly" outside its normal operating parameters is robust.
– Software safety is a kind of robustness that concerns not only avoidance of hazards (e.g., data corruption) but also partial functionality under unusual situations.
– Robustness, like safety, begins with explicit consideration of unusual and undesirable situations, and should include augmenting software specifications with appropriate responses to undesirable events.


3.5 ANALYSIS:
ILLUSTRATE THE PURPOSE OF SOFTWARE ANALYSIS.
• Analysis techniques that do not involve actual execution of program source code play a prominent role in overall software quality processes.
• Manual inspection techniques and automated analyses can be applied at any development stage.
• Inspection:
– Can be applied to any document, including requirements documents, architectural and design documents, test plans, test cases, and program source code.
– Also benefits the organization by spreading good practices and shared standards of quality.
– Is used primarily where other techniques are inapplicable or do not provide sufficient coverage.
– On the other hand, takes a considerable amount of time, and re-inspecting a changed component can be as expensive as the initial inspection.


• Automated static analyses:
– More limited in applicability than inspection, but used when available because substituting machine cycles for human effort is cost-effective.
– When substantial effort is needed to structure a model for analysis, the cost advantage is diminished.
– Nevertheless, they can check for particular classes of faults for which checking with other techniques is very difficult or expensive.
• Sometimes the best aspects of manual inspection and automated static analysis can be obtained by carefully decomposing the properties to be checked.
• For example, consider the property that special terms of the application domain appear in a glossary of terms.
• This property is not directly amenable to automated static analysis, since current tools cannot distinguish meaningful domain terms from other terms that have their ordinary meanings.
• The property can be checked with manual inspection, but the process is tedious, expensive, and error-prone.
• Hence a hybrid approach can be applied: if each domain term is marked in the text, manually checking that domain terms are marked is fast and inexpensive, and checking that marked terms appear in the glossary can be automated.

3.6 TESTING:
ILLUSTRATE THE PURPOSE OF SOFTWARE TESTING.
• Despite the attractiveness of automated static analyses and manual inspections, dynamic testing remains a dominant technique.
• Dynamic testing is divided into several distinct activities that may occur at different points in a project.
• Tests are executed when the corresponding code is available, but testing activities start earlier, as soon as the artifacts required for designing test case specifications are available.
• Thus, acceptance and system test suites should be generated before integration and unit test suites.
• Through early test design, tests are specified independently from code.
• Moreover, test cases may highlight inconsistencies and incompleteness in the corresponding software specifications.


• Early design of test cases also allows for early repair of software specifications, preventing specification faults from propagating to later stages in development.

• Finally, programmers may use test cases to illustrate and clarify the software specifications, especially for errors and unexpected conditions.

• Just as the "earlier is better" rule dictates using inspection to reveal flaws in requirements and design before they are propagated to program code, the same rule dictates module testing to uncover as many program faults as possible before they are incorporated in larger subsystems of the product.

3.7 IMPROVING THE PROCESS:
• Confronted by similar problems, developers tend to make the same kinds of errors over and over, and consequently the same kinds of software faults are often encountered project after project.
• The quality process and the software development process can be improved by gathering, analyzing, and acting on data regarding faults and failures.
• The goal of quality process improvement is to find cost-effective countermeasures for classes of faults that are expensive because they occur frequently, because the failures they cause are expensive, or because they are expensive to repair.
• Countermeasures may involve prevention or detection, and may be quality assurance activities or other aspects of software development.
• The first step of process improvement is gathering sufficiently complete and accurate raw data about faults and failures.
• A main obstacle is that data gathered in one project go mainly to benefit other projects in the future and may seem to have little direct benefit for the current project.
• It is therefore helpful to integrate data collection with normal development activities, such as version and configuration control, project management, and bug tracking.
• Raw data on faults and failures must be aggregated into categories and prioritized. Faults may be categorized by similar causes and possible remedies.


• The analysis step consists of tracing several instances of an observed fault or failure back to the human error from which it resulted, or even further to the factors that led to that human error.

• The analysis also considers the reasons the fault was not detected and eliminated earlier. This process is known as "root cause analysis".

• For the buffer overflow errors in network applications, the countermeasure could involve differences in programming methods or improvements to quality assurance activities or sometimes changes in management practices.

3.8 ORGANIZATIONAL FACTORS:
WHY ARE ORGANIZATIONAL FACTORS NEEDED IN THE PROCESS FRAMEWORK? (4M) JAN 2014
• The quality process includes a wide variety of activities that require specific skills and attitudes and may be performed by quality specialists or by software developers.
• Planning the quality process involves not only resource management but also the identification and allocation of responsibilities.
• A poor allocation of responsibilities can lead to major problems in which the pursuit of individual goals conflicts with overall project success.
• For example, splitting the responsibilities for development and quality control between a development team and a quality team, and rewarding each separately, may produce undesired results.
• The development team, not rewarded for producing high-quality software, may attempt to maximize productivity to the detriment of quality.
• Combining development and quality control responsibilities in one undifferentiated team, while avoiding the perverse incentive of divided responsibilities, can also have unintended effects: as deadlines near, resources may be shifted from quality assurance to coding, at the expense of product quality.
• These conflicting considerations support both the separation of roles and the mobility of people and roles.


• At Chipmunk, responsibility for delivery of the new Web presence is distributed among a development team and a quality assurance team. The quality assurance team is divided into:
– the analysis and testing group, responsible for the dependability of the system;
– the usability testing group, responsible for usability.
• Responsibility for security issues is assigned to the infrastructure development group, which relies partly on external consultants for final tests based on external attack attempts.
• At Chipmunk:
– specifications, design, and code are inspected by mixed teams;
– scaffolding and oracles are designed by analysts and developers;
– integration, system, acceptance, and regression tests are assigned to the test and analysis team;
– unit tests are generated and executed by the developers;
– coverage is checked by the testing team before integration and system testing begin.
• A specialist has been hired to analyze faults and improve the process. The process improvement specialist works incrementally while the system is being developed and proposes improvements at each release.