Software Testing Life Cycle (STLC)


Contrary to popular belief, software testing is not just a single activity. It consists of a series of activities carried out methodically to help certify your software product. These activities (stages) constitute the Software Testing Life Cycle (STLC).

The different stages in the Software Testing Life Cycle are: Requirement Analysis, Test Planning, Test Case Development, Test Environment Setup, Test Execution, and Test Cycle Closure.

Each of these stages has definite entry and exit criteria, activities, and deliverables associated with it. In an ideal world you would not enter the next stage until the exit criteria for the previous stage are met, but in practice this is not always possible. For this tutorial, we will focus on the activities and deliverables of the different STLC stages. Let's look at them in detail.

Requirement Analysis

During this phase, the test team studies the requirements from a testing point of view to identify the testable requirements. The QA team may interact with various stakeholders (client, business analyst, technical leads, system architects, etc.) to understand the requirements in detail. Requirements may be functional (defining what the software must do) or non-functional (defining system performance, security, availability). Automation feasibility for the given testing project is also assessed in this stage.

Activities:
- Identify types of tests to be performed.
- Gather details about testing priorities and focus.
- Prepare the Requirement Traceability Matrix (RTM).
- Identify test environment details where testing is to be carried out.
- Automation feasibility analysis (if required).

Deliverables:
- RTM
- Automation feasibility report (if applicable)

Test Planning

This phase is also called the Test Strategy phase. Typically, in this stage, a senior QA manager determines effort and cost estimates for the project and prepares and finalizes the test plan.

Activities:
- Preparation of a test plan/strategy document for the various types of testing
- Test tool selection
- Test effort estimation
- Resource planning and determining roles and responsibilities
- Training requirements

Deliverables:
- Test plan/strategy document
- Effort estimation document

Test Case Development

This phase involves the creation, verification, and rework of test cases and test scripts. Test data is identified/created, reviewed, and then reworked as well.

Activities:
- Create test cases and automation scripts (if applicable)
- Review and baseline test cases and scripts
- Create test data (if the test environment is available)

Deliverables:
- Test cases/scripts
- Test data

Test Environment Setup

The test environment determines the software and hardware conditions under which a work product is tested. Test environment setup is one of the critical aspects of the testing process and can be done in parallel with the Test Case Development stage. The test team may not be involved in this activity if the customer/development team provides the test environment, in which case the test team is required to do a readiness check (smoke testing) of the given environment.

Activities:
- Understand the required architecture and environment setup, and prepare a hardware and software requirement list for the test environment
- Set up the test environment and test data
- Perform a smoke test on the build

Deliverables:
- Environment ready with test data set up
- Smoke test results

Test Execution

During this phase, the test team carries out the testing based on the test plans and the test cases prepared. Bugs are reported back to the development team for correction, and retesting is performed.

Activities:
- Execute tests as per plan
- Document test results and log defects for failed cases
- Map defects to test cases in the RTM
- Retest the defect fixes
- Track the defects to closure

Deliverables:
- Completed RTM with execution status
- Test cases updated with results
- Defect reports

Test Cycle Closure

The testing team meets, discusses, and analyzes the testing artifacts to identify strategies to implement in the future, taking lessons from the current test cycle. The idea is to remove process bottlenecks for future test cycles and share best practices for similar projects.

Activities:
- Evaluate cycle completion criteria based on time, test coverage, cost, software, critical business objectives, and quality
- Prepare test metrics based on the above parameters
- Document the learnings from the project
- Prepare the test closure report
- Qualitative and quantitative reporting of the quality of the work product to the customer
- Test result analysis to find the defect distribution by type and severity

Deliverables:
- Test closure report
- Test metrics

Finally, a summary of the STLC stages, with the entry criteria, activities, exit criteria, and deliverables for each:

Requirement Analysis
Entry criteria: Requirements document available (both functional and non-functional); acceptance criteria defined; application architecture document available.
Activities: Analyze business functionality to identify the business modules and module-specific functionalities; identify all transactions in the modules; identify all user profiles; gather user interface/authentication and geographic spread requirements; identify types of tests to be performed; gather details about testing priorities and focus; prepare the Requirement Traceability Matrix (RTM); identify test environment details where testing is to be carried out; automation feasibility analysis (if required).
Exit criteria: Signed-off RTM; test automation feasibility report signed off by the client.
Deliverables: RTM; automation feasibility report (if applicable).

Test Planning
Entry criteria: Requirements documents; Requirement Traceability Matrix; test automation feasibility document.
Activities: Analyze the various testing approaches available; finalize the best-suited approach; prepare the test plan/strategy document for the various types of testing; test tool selection; test effort estimation; resource planning and determining roles and responsibilities.
Exit criteria: Approved test plan/strategy document; effort estimation document signed off.
Deliverables: Test plan/strategy document; effort estimation document.

Test Case Development
Entry criteria: Requirements documents; RTM and test plan; automation analysis report.
Activities: Create test cases and automation scripts (where applicable); review and baseline test cases and scripts; create test data.
Exit criteria: Reviewed and signed-off test cases/scripts; reviewed and signed-off test data.
Deliverables: Test cases/scripts; test data.

Test Environment Setup
Entry criteria: System design and architecture documents are available; environment setup plan is available.
Activities: Understand the required architecture and environment setup; prepare the hardware and software requirement list; finalize connectivity requirements; prepare an environment setup checklist; set up the test environment and test data; perform a smoke test on the build; accept or reject the build depending on the smoke test result.
Exit criteria: Environment setup is working as per the plan and checklist; test data setup is complete; smoke test is successful.
Deliverables: Environment ready with test data set up; smoke test results.

Test Execution
Entry criteria: Baselined RTM, test plan, and test cases/scripts are available; test environment is ready; test data setup is done; unit/integration test report for the build to be tested is available.
Activities: Execute tests as per plan; document test results and log defects for failed cases; update test plans/test cases if necessary; map defects to test cases in the RTM; retest the defect fixes; regression testing of the application; track the defects to closure.
Exit criteria: All planned tests are executed; defects logged and tracked to closure.
Deliverables: Completed RTM with execution status; test cases updated with results; defect reports.

Test Cycle Closure
Entry criteria: Testing has been completed; test results are available; defect logs are available.
Activities: Evaluate cycle completion criteria based on time, test coverage, cost, software quality, and critical business objectives; prepare test metrics based on these parameters; document the learnings from the project; prepare the test closure report; qualitative and quantitative reporting of the quality of the work product to the customer; test result analysis to find the defect distribution by type and severity.
Exit criteria: Test closure report signed off by the client.
Deliverables: Test closure report; test metrics.
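Much of the table above revolves around the RTM and its execution status. As a rough illustration (the requirement and test-case identifiers here are invented), the RTM can be thought of as a mapping from requirements to test cases, from which both coverage gaps and the closure-stage status metrics can be derived:

```python
from collections import Counter

# Hypothetical RTM: each requirement maps to its test cases and their latest status.
rtm = {
    "REQ-001": {"TC-001": "pass", "TC-002": "fail"},
    "REQ-002": {"TC-003": "pass"},
    "REQ-003": {},  # no test case yet -- a coverage gap the RTM is meant to expose
}

# Coverage check: every requirement should have at least one test case.
uncovered = [req for req, cases in rtm.items() if not cases]

# Execution-status rollup, e.g. for the "completed RTM with execution status" deliverable.
status_counts = Counter(status for cases in rtm.values() for status in cases.values())

print("Uncovered requirements:", uncovered)       # ['REQ-003']
print("Execution status:", dict(status_counts))   # {'pass': 2, 'fail': 1}
```

In practice the RTM usually lives in a test-management tool or a spreadsheet, but the underlying structure is exactly this requirement-to-test-case mapping.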

Functional Testing vs Non-Functional Testing

Functional testing is testing done against the business requirements of the application. It is a black-box type of testing. It involves the complete, integrated system, to evaluate the system's compliance with its specified requirements. This type of testing is carried out based on the functional specification document. In actual testing, testers need to verify a specific action or function of the code. Either manual testing or automation tools can be used for functional testing, though functionality testing is often easier with manual testing. Functional testing is executed before non-functional testing.

Five steps to keep in mind in functional testing:
1. Preparation of test data based on the specifications of the functions
2. Business requirements are the inputs to functional testing
3. Based on the functional specifications, find the output of the functions
4. Execution of the test cases
5. Observe the actual and expected outputs

Numerous tools are available to carry out functional testing. The types of functional testing include:
- Unit testing
- Smoke testing
- Sanity testing
- Integration testing
- Interface testing
- System testing
- Regression testing
- User acceptance testing (UAT)

What is non-functional testing?

Non-functional testing is testing done against the non-functional requirements. Most of these criteria are not considered in functional testing, so non-functional testing is used to check the readiness of a system. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of its suitability from the users' perspective. Non-functional testing can be started after the completion of functional testing.

Non-functional tests can be made more effective by using testing tools. Non-functional testing covers software attributes that are not related to any specific function or user action, such as performance, scalability, security, or the behavior of the application under certain constraints. It has a great influence on customer and user satisfaction with the product. Non-functional requirements should be expressed in a testable way; statements like "the system should be fast" or "the system should be easy to operate" are not testable. Basically, non-functional testing measures the major non-functional attributes of software systems. Example non-functional requirements: in how much time does the software complete a task? How fast is the response?

The following testing types fall under non-functional testing:
- Availability testing
- Baseline testing
- Compatibility testing
- Compliance testing
- Configuration testing
- Documentation testing
- Endurance testing
- Ergonomics testing
- Interoperability testing
- Installation testing
- Load testing
- Localization and internationalization testing
- Maintainability testing
- Operational readiness testing
- Performance testing
- Recovery testing
- Reliability testing
- Resilience testing
- Security testing
- Scalability testing
- Stress testing
- Usability testing
- Volume testing
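The response-time example above ("in how much time does the software complete a task?") shows how a non-functional requirement becomes testable once a concrete budget is attached. A minimal sketch, where both the task and the 0.5-second budget are invented for illustration:

```python
import time

def task():
    # Stand-in for the operation under test.
    return sum(range(100_000))

# The testable form of "the system should be fast": an explicit time budget.
BUDGET_SECONDS = 0.5

start = time.perf_counter()
task()
elapsed = time.perf_counter() - start

# A performance test passes or fails against the budget, not a vague feeling.
assert elapsed < BUDGET_SECONDS, f"task took {elapsed:.3f}s, budget is {BUDGET_SECONDS}s"
print(f"task completed in {elapsed:.4f}s (budget {BUDGET_SECONDS}s)")
```

Real performance tests repeat the measurement many times and look at percentiles rather than a single run, but the principle is the same: the requirement is only testable once it names a number.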

What is structural testing (testing of software structure/architecture)? Structural testing is the testing of the structure of the system or component. It is often referred to as white-box, glass-box, or clear-box testing, because in structural testing we are interested in what is happening inside the system/application. In structural testing, the testers are required to have knowledge of the internal implementation of the code: how the software is implemented and how it works. During structural testing the tester concentrates on how the software does what it does. For example, a structural technique wants to know how loops in the software are working; different test cases may be derived to exercise a loop once, twice, and many times. This may be done regardless of the functionality of the software. Structural testing can be used at all levels of testing. Developers use structural testing in component testing and component integration testing, especially where there is good tool support for code coverage. Structural testing is also used in system and acceptance testing, but the structures are different: for example, the coverage of menu options or major business transactions could be the structural element in system or acceptance testing.
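The loop example in the paragraph above can be made concrete. Structural test cases for a function containing a loop exercise the loop zero, one, and many times, regardless of what the function means to the business (the function here is invented for illustration):

```python
def total_price(prices):
    # The loop under test: structural testing asks how this loop behaves,
    # not what "price" means to the business.
    total = 0.0
    for p in prices:
        total += p
    return total

# Structural test cases derived from the loop, not from the requirements:
assert total_price([]) == 0.0                 # loop body executed zero times
assert total_price([9.99]) == 9.99            # exactly once
assert total_price([1.0, 2.0, 3.0]) == 6.0    # many times
print("all loop-coverage cases passed")
```

A black-box (functional) tester would instead derive cases from the pricing rules; the structural tester derives them from the shape of the code itself.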

Test Strategy and Test Plan

Test Strategy

A test strategy document is a high-level document, normally developed by the project manager. This document defines the software testing approach used to achieve the testing objectives. The test strategy is normally derived from the business requirement specification document. The test strategy document is a static document, meaning it is not updated often. It sets the standards for testing processes and activities, and other documents, such as the test plan, draw their contents from the standards set in the test strategy document. Some companies include the test approach or strategy inside the test plan, which is fine and is usually the case for small projects. However, for larger projects there is one test strategy document and a number of test plans, one for each phase or level of testing.

Components of the test strategy document:
- Scope and objectives
- Business issues
- Roles and responsibilities
- Communication and status reporting
- Test deliverables
- Industry standards to follow
- Test automation and tools
- Testing measurements and metrics
- Risks and mitigation
- Defect reporting and tracking
- Change and configuration management
- Training plan

Test Plan

The test plan document, on the other hand, is derived from the product description, the software requirement specification (SRS), or use case documents. The test plan document is usually prepared by the test lead or test manager, and the focus of the document is to describe what to test, how to test, when to test, and who will do which test. It is not uncommon to have one master test plan as a common document for the test phases, with each test phase having its own test plan document. There is much debate as to whether the test plan document should also be a static document like the test strategy document mentioned above, or whether it should be updated often to reflect changes in the direction of the project and its activities. My own personal view is that when a testing phase starts and the test manager is controlling the activities, the test plan should be updated to reflect any deviation from the original plan. After all, planning and control are continuous activities in the formal test process.

A typical test plan contains:
- Test plan identifier
- Introduction
- Test items
- Features to be tested
- Features not to be tested
- Test techniques
- Testing tasks
- Suspension criteria
- Feature pass or fail criteria
- Test environment (entry criteria, exit criteria)
- Test deliverables
- Staff and training needs
- Responsibilities
- Schedule

This is a standard approach to preparing test plan and test strategy documents, but practice can vary from company to company.

Definition of a Test Plan

A test plan can be defined as a document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.

In software testing, a test plan gives detailed information regarding an upcoming testing effort, including:
- Scope of testing
- Schedule
- Test deliverables
- Release criteria
- Risks and contingencies

It can also be described as a detailed account of how the testing will proceed, who will do the testing, what will be tested, how long the testing will take, and to what quality level the testing will be performed.

A few other definitions:

The process of defining a test project so that it can be properly measured and controlled. The test planning process generates a high level test plan document that identifies the software items to be tested, the degree of tester independence, the test environment, the test case design and test measurement techniques to be used, and the rationale for their choice.

A testing plan is a methodological and systematic approach to testing a system such as a machine or software. It can be effective in finding errors and flaws in a system. In order to find relevant results, the plan typically contains experiments with a range of operations and values, including an understanding of what the eventual workflow will be.

A test plan is a document which includes an introduction, assumptions, a list of test cases, a list of features to be tested, the approach, deliverables, resources, risks, and scheduling.

A test plan is a systematic approach to testing a system such as a machine or software. The plan typically contains a detailed understanding of what the eventual workflow will be.

A record of the test planning process detailing the degree of tester independence, the test environment, the test case design techniques and test measurement techniques to be used, and the rationale for their choice.

Test Planning

Test planning involves scheduling and estimating the system testing process, establishing process standards, and describing the tests that should be carried out. As well as helping managers allocate resources and estimate testing schedules, test plans are intended for software engineers involved in designing and carrying out system tests. They help technical staff get an overall picture of the system tests and place their own work in this context. Frewin and Hatton (1986), Humphrey (1989), and Kit (1995) include discussions on test planning.

Test planning is particularly important in large software system development. As well as setting out the testing schedule and procedures, the test plan defines the hardware and software resources that are required. This is useful for system managers who are responsible for ensuring that these resources are available to the testing team. Test plans should normally include a significant amount of contingency so that slippages in design and implementation can be accommodated and staff redeployed to other activities.

Test plans are not static documents but evolve during the development process. Test plans change because of delays at other stages in the development process. If part of a system is incomplete, the system as a whole cannot be tested. You then have to revise the test plan to redeploy the testers to some other activity and bring them back when the software is once again available.

For small and medium-sized systems, a less formal test plan may be used, but there is still a need for a formal document to support the planning of the testing process. For some agile processes, such as extreme programming, testing is inseparable from development. Like other planning activities, test planning is also incremental. In XP, the customer is ultimately responsible for deciding how much effort should be devoted to system testing.

The structure of a test plan

Test plans obviously vary, depending on the project and the organization involved in the testing. Sections that would typically be included in the plan for a large system are:

- The testing process: A description of the major phases of the system testing process. This may be broken down into the testing of individual sub-systems, the testing of external system interfaces, and so on.
- Requirements traceability: Users are most interested in the system meeting its requirements, and testing should be planned so that all requirements are individually tested.
- Tested items: The products of the software process that are to be tested should be specified.
- Testing schedule: An overall testing schedule and resource allocation. This schedule should be linked to the more general project development schedule.
- Test recording procedures: It is not enough simply to run tests; the results of the tests must be systematically recorded. It must be possible to audit the testing process to check that it has been carried out correctly.
- Hardware and software requirements: This section should set out the software tools required and estimated hardware utilisation.
- Constraints: Constraints affecting the testing process, such as staff shortages, should be anticipated in this section.
- System tests: This section, which may be completely separate from the test plan, defines the test cases that should be applied to the system. These tests are derived from the system requirements specification.

Test Case Specifications

The test plan focuses on how the testing for the project will proceed, which units will be tested, and what approaches (and tools) are to be used during the various stages of testing. However, it does not deal with the details of testing a unit, nor does it specify which test cases are to be used. Test case specification has to be done separately for each unit. Based on the approach specified in the test plan, the features to be tested for each unit must first be determined. The overall approach stated in the plan is refined into specific test techniques that should be followed and into the criteria to be used for evaluation. Based on these, the test cases are specified for testing the unit.

There are two basic reasons test cases are specified before they are used for testing. First, it is known that testing has severe limitations and that the effectiveness of testing depends very heavily on the exact nature of the test cases; even for a given criterion, the exact nature of the test cases affects the effectiveness of testing. Second, constructing good test cases that will reveal errors in programs is still a very creative activity that depends a great deal on the tester, so it is important to ensure that the set of test cases used is of high quality. As with many other verification methods, evaluation of the quality of test cases is done through a "test case review", and for any review a formal document or work product is needed. This is the primary reason for having the test case specification in the form of a document.
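Since a test case specification is itself a reviewable work product, it helps to think of it as structured data. One way to sketch its fields (the field names and values here are chosen for illustration, not taken from any standard template):

```python
from dataclasses import dataclass

@dataclass
class TestCaseSpec:
    """A reviewable test case specification for one unit."""
    case_id: str
    feature: str          # the feature of the unit being tested
    technique: str        # test technique refined from the plan's overall approach
    inputs: dict          # concrete input values for this case
    expected_result: str  # the criterion used for evaluation
    notes: str = ""

spec = TestCaseSpec(
    case_id="TC-101",
    feature="login",
    technique="boundary value analysis",
    inputs={"username": "", "password": "secret"},
    expected_result="login rejected with validation error",
)
print(spec.case_id, "->", spec.expected_result)
```

Whether the specification lives in a document, a spreadsheet, or a test-management tool, a fixed structure like this is what makes a formal test case review possible.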

Software Metrics for Reliability

Metrics are used to improve the reliability of the system by identifying problem areas in the requirements (specification), coding (errors), and testing (verification) phases. The different types of software metrics that are used are:

a) Requirements Reliability Metrics

Requirements indicate what features the software must contain, so a clear understanding of the requirements must exist between client and developer; otherwise it is difficult to write the requirements correctly. The requirements must have a valid structure to avoid the loss of valuable information. Next, the requirements should be thorough and detailed, so that the design phase is easy, and they should not contain inadequate information. They should also communicate easily: there should not be any ambiguous data in the requirements, because if there is, it is difficult for the developer to implement the specification. Requirements reliability metrics evaluate these quality factors of the requirements document.

b) Design and Code Reliability Metrics

The quality factors that exist in design and coding are complexity, size, and modularity. More complex modules are harder to understand, and there is a higher probability of errors occurring in them, so the complexity of the modules should be kept low. Size depends on factors such as total lines, comments, and executable statements. According to SATC, the most effective evaluation is a combination of size and complexity: reliability decreases if modules have a combination of high complexity and large size, or of high complexity and small size. In the latter combination reliability also decreases because the small size results in short code that is difficult to alter. These metrics are also applicable to object-oriented code, but additional metrics are required to evaluate its quality.
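The size factors mentioned above (total lines, comments, executable statements) can be counted with a crude line classifier. This is only a sketch of the idea, not a real metrics tool; it ignores strings, multi-line constructs, and everything else a production analyzer would handle:

```python
def size_metrics(source: str) -> dict:
    # Crude classification: each line is blank, a comment, or executable.
    blank = comment = executable = 0
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:
            blank += 1
        elif stripped.startswith("#"):
            comment += 1
        else:
            executable += 1
    return {"blank": blank, "comment": comment, "executable": executable}

sample = """\
# add two numbers
def add(a, b):
    return a + b
"""
print(size_metrics(sample))  # {'blank': 0, 'comment': 1, 'executable': 2}
```

Counts like these only become useful when combined with a complexity measure, which is exactly the SATC point the text makes.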

c) Testing Reliability Metrics

Testing reliability metrics use two approaches to evaluate reliability. First, they ensure that the system is fully equipped with the functions specified in the requirements; because of this, errors due to a lack of functionality decrease. The second approach is evaluating the code, finding the errors, and fixing them.

The current practices of software reliability measurement can be divided into four categories:
1) Product metrics
2) Project management metrics
3) Process metrics
4) Fault and failure metrics

As discussed earlier, software size and complexity play an important role in the design and coding phases. One of the product metrics, the function point metric, is used to estimate the size and complexity of the program. Project management metrics increase reliability by evaluating the management process, whereas process metrics can be used to estimate, monitor, and improve the reliability and quality of the software. The final category, fault and failure metrics, determines when the software performs all the functions specified by the requirements documents without any errors; it takes the faults and failures that arise in the coding phase and analyzes them to achieve this task.

Reliability Growth Models

There are various reliability growth models that have been derived from reliability experiments in a number of different application domains. As Kan (Kan, 2003) discusses, most of these models are exponential, with reliability increasing quickly as defects are discovered and removed. The increase then tails off and reaches a plateau as fewer and fewer defects are discovered and removed in the later stages of testing.The simplest model that illustrates the concept of reliability growth is a step function model (Jelinski and Moranda, 1972). The reliability increases by a constant increment each time a fault (or a set of faults) is discovered and repaired (Figure 1) and a new version of the software is created. This model assumes that software repairs are always correctly implemented so that the number of software faults and associated failures decreases in each new version of the system. As repairs are made, the rate of occurrence of software failures (ROCOF) should therefore decrease, as shown in Figure 1. Note that the time periods on the horizontal axis reflect the time between releases of the system for testing so they are normally of unequal length.
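The equal-step model can be sketched numerically: the rate of occurrence of failures (ROCOF) drops by a fixed decrement each time a repaired version is released. The starting rate and step size below are arbitrary illustrative numbers, not values from any real project:

```python
# Equal-step model (after Jelinski and Moranda): each repaired release
# lowers the rate of occurrence of failures (ROCOF) by a constant step.
initial_rocof = 0.50   # failures per hour in the first test release (arbitrary)
step = 0.08            # constant reliability increment per repair (arbitrary)
releases = 5

rocof = initial_rocof
for release in range(1, releases + 1):
    print(f"release {release}: ROCOF = {rocof:.2f} failures/hour")
    rocof = max(rocof - step, 0.0)  # repair assumed correct, so the rate never rises
```

The `max(..., 0.0)` guard encodes the model's optimistic assumption that repairs never make things worse; the random-step model discussed next drops exactly that assumption.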

Figure 1: Equal-step function model of reliability growth

In practice, however, software faults are not always fixed during debugging, and when you change a program you sometimes introduce new faults into it. The probability of occurrence of these faults may be higher than the occurrence probability of the fault that has been repaired. Therefore, the system reliability may sometimes worsen in a new release rather than improve. The simple equal-step reliability growth model also assumes that all faults contribute equally to reliability and that each fault repair contributes the same amount of reliability growth. However, not all faults are equally probable. Repairing the most common faults contributes more to reliability growth than does repairing faults that occur only occasionally. You are also likely to find these probable faults early in the testing process, so reliability may increase more than when later, less probable, faults are discovered. Later models, such as that suggested by Littlewood and Verrall (1973), take these problems into account by introducing a random element into the reliability growth improvement effected by a software repair; each repair does not result in an equal amount of reliability improvement but varies depending on the random perturbation (Figure 2). Littlewood and Verrall's model allows for negative reliability growth, when a software repair introduces further errors. It also models the fact that, as faults are repaired, the average improvement in reliability per repair decreases. The reason for this is that the most probable faults are likely to be discovered early in the testing process, and repairing these contributes most to reliability growth.

Figure 2: Random-step function model of reliability growth

The above models are discrete models that reflect incremental reliability growth. When a new version of the software with repaired faults is delivered for testing, it should have a lower rate of failure occurrence than the previous version. However, to predict the reliability that will be achieved after a given amount of testing, continuous mathematical models are needed. Many models, derived from different application domains, have been proposed and compared (Littlewood, 1990).
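The raw input to all of these models is observed failure data. As a minimal illustration (the timestamps below are invented), inter-failure times and a simple mean-time-between-failures estimate can be computed as follows; note the widening gaps, which is what reliability growth looks like in the data:

```python
# Hypothetical failure timestamps, in hours of operation since release.
failure_times = [12.0, 30.0, 75.0, 160.0]

# Inter-failure times: the gaps between successive failures.
gaps = [b - a for a, b in zip(failure_times, failure_times[1:])]

mtbf = sum(gaps) / len(gaps)   # mean time between failures
failure_rate = 1.0 / mtbf      # failures per hour, the reciprocal of MTBF

print(f"inter-failure times: {gaps}")      # [18.0, 45.0, 85.0]
print(f"MTBF: {mtbf:.1f} hours")           # MTBF: 49.3 hours
```

A reliability growth model is, in essence, a curve fitted to a series like `gaps` so that the failure rate of future releases can be extrapolated rather than merely averaged.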

(c) Ian Sommerville 2008

Software Quality

(From Wikipedia, the free encyclopedia.) In the context of software engineering, software quality refers to two related but distinct notions that exist wherever quality is defined in a business context:

- Software functional quality reflects how well the software complies with or conforms to a given design, based on functional requirements or specifications. That attribute can also be described as the fitness for purpose of a piece of software, or how it compares to competitors in the marketplace as a worthwhile product.
- Software structural quality refers to how the software meets the non-functional requirements that support the delivery of the functional requirements, such as robustness or maintainability, and the degree to which the software was produced correctly.

Structural quality is evaluated through analysis of the software's inner structure, its source code, at the unit level, the technology level, and the system level, which is in effect how its architecture adheres to sound principles of software architecture as outlined in a paper on the topic by OMG. In contrast, functional quality is typically enforced and measured through software testing. Historically, the structure, classification, and terminology of attributes and metrics applicable to software quality management have been derived or extracted from ISO 9126-3 and the subsequent ISO 25000:2005 quality model, also known as SQuaRE. Based on these models, the Consortium for IT Software Quality (CISQ) has defined five major desirable structural characteristics needed for a piece of software to provide business value: reliability, efficiency, security, maintainability, and (adequate) size.

ISO 9000

ISO 9000 is a series of standards, developed and published by the International Organization for Standardization (ISO), that define, establish, and maintain an effective quality assurance system for manufacturing and service industries. The standards are available through national standards bodies. ISO 9000 deals with the fundamentals of quality management systems, including the eight management principles upon which the family of standards is based. ISO 9001 deals with the requirements that organizations wishing to meet the standard must fulfill. Third-party certification bodies provide independent confirmation that organizations meet the requirements of ISO 9001. Over a million organizations worldwide are independently certified, making ISO 9001 one of the most widely used management tools in the world today. Despite its widespread use, the ISO certification process has been criticized as being wasteful and as not being useful for all organizations.

Capability Maturity Model (CMM) E-Mail Print A AA AAA inShare Facebook Twitter Share This RSS ReprintsThe Capability Maturity Model (CMM) is a methodology used to develop and refine an organization's software development process. The model describes a five-level evolutionary path of increasingly organized and systematically more mature processes. CMM was developed and is promoted by the Software Engineering Institute (SEI), a research and development center sponsored by the U.S. Department of Defense (DoD). SEI was founded in 1984 to address software engineering issues and, in a broad sense, to advance software engineering methodologies. More specifically, SEI was established to optimize the process of developing, acquiring, and maintaining heavily software-reliant systems for the DoD. Because the processes involved are equally applicable to the software industry as a whole, SEI advocates industry-wide adoption of the CMM.The CMM is similar to ISO 9001, one of theISO 9000series of standards specified by the International Organization for Standardization (ISO). The ISO 9000 standards specify an effective quality system for manufacturing and service industries; ISO 9001 deals specifically with software development and maintenance. The main difference between the two systems lies in their respective purposes: ISO 9001 specifies a minimal acceptable quality level for software processes, while the CMM establishes a framework for continuous process improvement and is more explicit than the ISO standard in defining the means to be employed to that end.CMM's Five Maturity Levels of Software Processes At theinitiallevel, processes are disorganized, even chaotic. Success is likely to depend on individual efforts, and is not considered to be repeatable, because processes would not be sufficiently defined and documented to allow them to be replicated. 
At the repeatable level, basic project management techniques are established, and successes can be repeated, because the requisite processes have been established, defined, and documented.
At the defined level, an organization has developed its own standard software process through greater attention to documentation, standardization, and integration.
At the managed level, an organization monitors and controls its own processes through data collection and analysis.
At the optimizing level, processes are constantly being improved through monitoring feedback from current processes and introducing innovative processes to better serve the organization's particular needs.

Comparison between ISO 9000 and CMM
Differences between ISO 9000 (International Organization for Standardization) and CMM (Capability Maturity Model):

ISO 9000 applies to any type of industry; CMM is developed specifically for the software industry.

ISO 9000 addresses the corporate business process; CMM focuses on software engineering activities.

ISO 9000 specifies minimum requirements; CMM gets into the technical aspects of software engineering.

ISO 9000 restricts itself to what is required; CMM suggests how to fulfill the requirements.

ISO 9000 provides pass-or-fail criteria; CMM provides a grade for process maturity.

ISO 9000 has no levels; CMM has five levels: Initial, Repeatable, Defined, Managed, Optimizing.

ISO 9000 does not specify a sequence of steps required to establish the quality system; CMM recommends a mechanism for step-by-step progress through its successive maturity levels.

Certain process elements that are in ISO 9000 are not included in CMM, such as:
1. Contract management
2. Purchase and customer-supplied components
3. Personnel issue management
4. Packaging, delivery, and installation management
Similarly, some processes in CMM are not included in ISO 9000:
1. Project tracking
2. Process and technology change management
3. Intergroup coordination to meet customer requirements
4. Organization-level process focus, process definition, and integrated management.
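The five CMM maturity levels described above can be summarized in a small lookup table. The following is a minimal illustrative sketch: the level numbers and names come from the CMM itself, while the one-line summaries and the helper function are purely hypothetical conveniences.

```python
# Minimal sketch: the five CMM maturity levels as a lookup table.
# The level numbers and names come from the CMM; the summaries and
# the describe_level helper are illustrative only.

CMM_LEVELS = {
    1: ("Initial", "Processes are ad hoc, even chaotic; success depends on individuals."),
    2: ("Repeatable", "Basic project management is in place; successes can be repeated."),
    3: ("Defined", "A standard, documented software process exists organization-wide."),
    4: ("Managed", "Processes are monitored and controlled via data collection and analysis."),
    5: ("Optimizing", "Processes are continuously improved using feedback and innovation."),
}

def describe_level(level: int) -> str:
    """Return a one-line description for a CMM maturity level (1-5)."""
    name, summary = CMM_LEVELS[level]
    return f"Level {level} ({name}): {summary}"

if __name__ == "__main__":
    for n in sorted(CMM_LEVELS):
        print(describe_level(n))
```

This kind of table is only a mnemonic for the five-level structure; an actual CMM assessment evaluates key process areas, not a simple lookup.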

Computer-aided software engineering

Computer-aided software engineering (CASE) is the application of a set of tools and methods to a software system, with the desired end result of high-quality, defect-free, and maintainable software products.[1] It also refers to methods for the development of information systems together with automated tools that can be used in the software development process.[2]

CASE is the use of a computer-assisted method to organize and control the development of software, especially on large, complex projects involving many software components and people. Using CASE allows designers, code writers, testers, planners, and managers to share a common view of where a project stands at each stage of development. CASE helps ensure a disciplined, check-pointed process. A CASE tool may portray progress (or lack of it) graphically. It may also serve as a repository for, or be linked to, document and program libraries containing the project's business plans, design requirements, design specifications, detailed code specifications, the code units, test cases and results, and marketing and service plans.

CASE originated in the 1970s, when computer companies were beginning to borrow ideas from the hardware manufacturing process and apply them to software development (which has generally been viewed as an insufficiently disciplined process). Some CASE tools supported the concepts of structured programming and similar organized development methods. More recently, CASE tools have had to encompass or accommodate visual programming tools and object-oriented programming. In corporations, a CASE tool may be part of a spectrum of processes designed to ensure quality in what is developed. (Many companies have their processes audited and certified as being in conformance with the ISO 9000 standard.)

Some of the benefits of CASE and similar approaches are that, by making the customer part of the process (through market analysis and focus groups, for example), a product is more likely to meet real-world requirements. Because the development process emphasizes testing and redesign, the cost of servicing a product over its lifetime can be reduced considerably.
An organized approach to development encourages code and design reuse, reducing costs and improving quality. Finally, quality products tend to improve a corporation's image, providing a competitive advantage in the marketplace.

Reverse engineering
Reverse engineering is taking apart an object to see how it works in order to duplicate or enhance the object. The practice, taken from older industries, is now frequently applied to computer hardware and software. Software reverse engineering involves translating a program's machine code (the string of 0s and 1s sent to the logic processor) back into the source code it was written in, using programming language statements.

Software reverse engineering is done to retrieve the source code of a program because the source code was lost, to study how the program performs certain operations, to improve the performance of a program, to fix a bug (correct an error in the program when the source code is not available), to identify malicious content in a program such as a virus, or to adapt a program written for one microprocessor for use with another. Reverse engineering for the purpose of copying or duplicating programs may constitute a copyright violation. In some cases, the licensed use of software specifically prohibits reverse engineering.

Someone doing reverse engineering on software may use several tools to disassemble a program. One tool is a hexadecimal dumper, which prints or displays the binary contents of a program in hexadecimal format (which is easier to read than binary). By knowing the bit patterns that represent the processor instructions, as well as the instruction lengths, the reverse engineer can identify certain portions of a program and see how they work. Another common tool is the disassembler, which reads the binary code and then displays each executable instruction in text form.
A disassembler cannot tell the difference between an executable instruction and the data used by the program, so a debugger is used, which allows the disassembler to avoid disassembling the data portions of a program. These tools might be used by a cracker to modify code and gain entry to a computer system or cause other harm.

Hardware reverse engineering involves taking apart a device to see how it works. For example, if a processor manufacturer wants to see how a competitor's processor works, they can purchase the competitor's processor, disassemble it, and then design a processor similar to it. However, this process is illegal in many countries. In general, hardware reverse engineering requires a great deal of expertise and is quite expensive.

Another type of reverse engineering involves producing 3-D images of manufactured parts when a blueprint is not available, in order to remanufacture the part. To reverse engineer a part, it is measured by a coordinate measuring machine (CMM). As it is measured, a 3-D wire-frame image is generated and displayed on a monitor. After the measuring is complete, the wire-frame image is dimensioned. Any part can be reverse engineered using these methods.

The term forward engineering is sometimes used in contrast to reverse engineering.
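The hexadecimal dumper mentioned above is simple enough to sketch directly. The following is a minimal illustrative implementation, not a production tool: it prints each 16-byte chunk of a binary as an offset, the hex bytes, and an ASCII rendering, in the style of the Unix hexdump/xxd utilities.

```python
# Minimal sketch of a hexadecimal dumper: each line shows the byte
# offset, the bytes in hex, and an ASCII rendering (non-printable
# bytes shown as '.'), similar to the Unix hexdump/xxd utilities.

def hex_dump(data: bytes, width: int = 16) -> str:
    lines = []
    for offset in range(0, len(data), width):
        chunk = data[offset:offset + width]
        hex_part = " ".join(f"{b:02x}" for b in chunk)
        ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        # Pad the hex column so the ASCII column stays aligned.
        lines.append(f"{offset:08x}  {hex_part:<{width * 3}} {ascii_part}")
    return "\n".join(lines)

if __name__ == "__main__":
    # Dump the first bytes of this script itself as a demonstration.
    with open(__file__, "rb") as f:
        print(hex_dump(f.read(64)))
```

A real reverse-engineering workflow would pair a dump like this with a disassembler that decodes the hex bytes into processor instructions, as described above.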