
Testing overview


Overview

Testing Steps

Looking at UAT from a high level, there are a few basic steps that need to be undertaken:

Test Strategy – Decide how we are going to approach the testing in terms of people, tools, procedures and support.
Test Scenarios – What are the situations we want to test?
Test Scripts – What are the actual inputs we will use? What are the expected results?

Test Strategy

Why do a Test Strategy? The Test Strategy is the plan for how you are going to approach testing. It is like a project charter that tells the world how you are going to approach the project. You may have it all in your head, and if you are the only person doing the work that might be OK. If, however, you do not have it all in your head, or if others will be involved, you need to map out the ground rules. Here are some of the things that need to be covered in a Test Strategy; you could use this as a template for your own strategy.

Project Name

Overview

Testing stage
Instructions: Identify the type of testing to be undertaken.
Example: User Acceptance Testing

Scheduled for
Example: 01.04.06 to 15.04.06

Location
Example: Testing will be carried out in the Test Center on Level X

Participants
Instructions: Identify who will be involved in the testing. If resources have not been nominated, outline the skills required.
Example:
Testing Manager – J. Smith
2 Testers – To be nominated. The skills required are:
• Broad understanding of all the processes carried out by the accounts receivable area.
• Familiarity with the manual processes currently undertaken for reversing payments.
• Preferably time spent dealing with inquiries from customers over the phone.
• Etc.

A test strategy describes how we plan to cover the product so as to develop an adequate assessment of quality.

A good test strategy is:

Specific

Practical

Justified

The purpose of a test strategy is to clarify the major tasks and challenges of the test project.

Test Approach and Test Architecture are other terms commonly used to describe what I’m calling test strategy.

Example of a poorly stated (and probably poorly conceived) test strategy: “We will use black box testing, cause-effect graphing, boundary testing, and white box testing to test this product against its specification.”

A test strategy should cover: the type of project, the type of software, when testing will occur, critical success factors, and tradeoffs.

Test Plan – Why

· Identify Risks and Assumptions up front to reduce surprises later.

· Communicate objectives to all team members.

· Foundation for Test Spec, Test Cases, and ultimately the Bugs we find.

Failing to plan = planning to fail.

Test Plan – What


· Derived from Test Approach, Requirements, Project Plan, Functional Spec., and Design Spec.

· Details out project-specific Test Approach.

· Lists general (high level) Test Case areas.

· Includes a testing Risk Assessment.

· Includes a preliminary Test Schedule.

· Lists Resource requirements.

What is Test Metrics?
Posted: 02/06/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics, Software Testing Basics & Interview FAQ | Tags: Testing Interview Questions, Testing Strategies, What is test Metrics?

Test metrics consist of:

• Total tests
• Tests run
• Tests passed
• Tests failed
• Tests deferred
• Tests passed the first time
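A minimal sketch, with illustrative counts, of how these raw numbers roll up into the rates usually reported:

    # Illustrative counts for the metrics listed above.
    metrics = {
        "total": 120, "run": 110, "passed": 95,
        "failed": 15, "deferred": 10, "passed_first_time": 80,
    }

    pass_rate = 100.0 * metrics["passed"] / metrics["run"]
    execution_rate = 100.0 * metrics["run"] / metrics["total"]
    print(f"executed {execution_rate:.1f}% of tests, pass rate {pass_rate:.1f}%")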

What is a three-way traceability matrix?
A three-way traceability matrix is a document that maps requirements to their test cases and to any defects raised against them (a mapping of all three).

What is the difference between deliverables and Release notes?
Posted: 02/06/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Basics, Interview FAQs, Software Testing Basics & Interview FAQ, Testing Interview Questions

Release notes are provided with every new release of a build containing bug fixes. They list which bugs were fixed and which bugs are still pending.


Deliverables are provided at the end of testing. The test plan, test cases, defect reports, documented defects that were not fixed, etc., come under deliverables.

What is a test case and a use case?
A use case is a description of a certain feature of an application in terms of actors, actions and responses.

For example: if the user enters a valid user ID and password and clicks the login button, the system should display the home page. Here the user is the ACTOR, the operations performed are the ACTIONS, and the system’s display of the home page is the RESPONSE.

If use cases are provided in the SRS, we write our test cases from them.

A test case is a set of test inputs, execution conditions, and expected results developed to validate a particular piece of functionality of the application under test (AUT).

Traceability Matrix, Defect Leakage, Buffer Overflow, Bidirectional Traceability
Posted: 02/06/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Bidirectional Traceability, Buffer Overflow, Defect Leakage, Traceability Matrix

What is a Traceability Matrix?
A Traceability Matrix is a document used for tracking requirements, test cases and defects. It is prepared to satisfy the client that test coverage is complete end to end. It contains the Requirement/Baseline document reference number, the Test Case/Condition, and the Defect/Bug ID. Using this document a person can trace back to the requirement from a defect ID.
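As a minimal sketch of that mapping (the requirement, test case and bug IDs below are made up for illustration):

    # Requirement -> test cases -> defects, with illustrative IDs.
    traceability = {
        "REQ-001": {"tests": ["TC-101", "TC-102"], "defects": ["BUG-9"]},
        "REQ-002": {"tests": ["TC-201"], "defects": []},
    }

    def requirement_for_defect(defect_id):
        # Trace back from a defect ID to the requirement it affects.
        for req, links in traceability.items():
            if defect_id in links["defects"]:
                return req
        return None

    print(requirement_for_defect("BUG-9"))  # REQ-001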

Boundary value testing
Boundary value testing is a technique to find whether the application accepts the expected range of values and rejects values that fall outside that range.

Example: a user ID text box has to accept alphabetic characters (a-z) with a length of 4 to 10 characters.

BVA is done like this: max = 10, pass; max-1 = 9, pass; max+1 = 11, fail; min = 4, pass; min+1 = 5, pass; min-1 = 3, fail.

In this way we check the corner values and conclude whether the application accepts the correct range of values.
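A minimal Python sketch of those boundary checks, assuming a hypothetical is_valid_user_id validator for the 4-10 character rule:

    import re

    def is_valid_user_id(user_id):
        # Hypothetical validator: 4 to 10 lowercase alphabetic characters.
        return re.fullmatch(r"[a-z]{4,10}", user_id) is not None

    # Boundary cases for the length rule: min-1, min, min+1, max-1, max, max+1.
    cases = {3: False, 4: True, 5: True, 9: True, 10: True, 11: False}
    for length, expected in cases.items():
        got = is_valid_user_id("a" * length)
        print(f"length={length:2d} expected={expected} got={got} "
              f"{'PASS' if got == expected else 'FAIL'}")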

Equivalence testing
Equivalence testing is normally used to check the class (type) of input an object accepts.

Example: a user ID text box has to accept alphabetic characters (a-z) with a length of 4 to 10 characters.

For the positive condition we test the object by entering alphabetic characters only (a-z); the object should accept the value, so the test passes. For the negative condition we test by entering anything other than lowercase alphabetic characters (A-Z, 0-9, blank, etc.); the object should reject the value, so those inputs fail.
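Continuing the sketch (reusing the hypothetical is_valid_user_id from the boundary example above), one representative value per equivalence class is enough:

    # One representative per equivalence class of the user-ID field.
    valid_class = ["abcd", "abcdefghij"]          # lowercase a-z, length 4-10
    invalid_class = ["ABCD", "1234", "    ", ""]  # uppercase, digits, blanks, empty

    for value in valid_class:
        assert is_valid_user_id(value), f"valid representative rejected: {value!r}"
    for value in invalid_class:
        assert not is_valid_user_id(value), f"invalid representative accepted: {value!r}"
    print("all equivalence classes behave as expected")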

What is Defect Leakage?
Defect leakage occurs at the customer or end-user side after the application is delivered. If, after the release of the application to the client, the end user finds defects while using the application, those defects are called defect leakage. Defect leakage is also called bug leak.
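One common way to quantify defect leakage as a metric — an illustrative formulation, not taken from the original post — is the share of all defects that escaped to production:

    def defect_leakage_pct(found_in_testing, found_after_release):
        # Leaked defects as a percentage of all defects found in and after testing.
        total = found_in_testing + found_after_release
        return 100.0 * found_after_release / total if total else 0.0

    print(defect_leakage_pct(found_in_testing=95, found_after_release=5))  # 5.0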

What is Bidirectional Traceability?
Bidirectional traceability must be implemented both forward and backward (i.e. from requirements to end products and from end products back to requirements). When requirements are fully managed, traceability should be established from source requirements to their lower-level requirements and from lower-level requirements back to their source. This helps us determine that all the source requirements have been completely addressed.

What is a Buffer Overflow?
A buffer overflow occurs when a program or process tries to store more data in a buffer (temporary data storage area) than it was intended to hold. Since buffers are created to contain a finite amount of data, the extra information – which has to go somewhere – can overflow into adjacent buffers, corrupting or overwriting the valid data held in them.

What is a Memory Leak?
A memory leak occurs when a programmer dynamically allocates memory space for a variable of some type but then fails to free that space before the program completes. This leaves less free memory for the system to use. Repeated running of the program or function that causes such memory loss can eventually crash the system or result in a denial of service.

What is ‘configuration management’?
Configuration management is a process to control and document any changes made during the life of a project. Revision control, change control, and release control are important aspects of configuration management.

Explain Peer Review in Software Testing.
Posted: 02/06/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Peer Review in Software Testing

Peer review is an alternative form of testing, where some colleagues are invited to examine your work products for defects and improvement opportunities.

Some peer review approaches are:

Inspection – a more systematic and rigorous type of peer review. Inspections are more effective at finding defects than informal reviews. For example, in Motorola’s Iridium project nearly 80% of the defects were detected through inspections, whereas only 60% of the defects were detected through informal reviews.

Team Review – a planned and structured approach, but less formal and less rigorous than an inspection.

Walkthrough – an informal review in which the work product’s author describes it to some colleagues and asks for suggestions. Walkthroughs are informal because they typically do not follow a defined procedure, do not specify exit criteria, require no management reporting, and generate no metrics.

Pair Programming – two developers work together on the same program at a single workstation, continuously reviewing their work.

Peer Desk Check – only one person besides the author examines the work product. It is an informal review, but the reviewer can use defect checklists and analysis methods to increase its effectiveness.

SDLC and STLC
Posted: 02/06/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics, Software Testing Basics & Interview FAQ | Tags: SDLC and STLC

What is quality assurance?
Software QA involves the entire software development PROCESS – monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to ‘prevention’.

What is the difference between QA and testing?
Testing involves operation of a system or application under controlled conditions and evaluation of the results. It is oriented to ‘detection’. Software QA involves the entire software development PROCESS – monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to ‘prevention’.

Describe the difference between validation and verification.
Verification is done through frequent evaluations and meetings to appraise the documents, policy, code, requirements, and specifications, using checklists, walkthroughs, and inspection meetings. Validation is done during actual testing, and it takes place after all the verifications are done.

What are SDLC and STLC? Explain their different phases.

SDLC:


Requirement phase

Designing phase (HLD, DLD (Program spec))

Coding

Testing

Release

Maintenance

STLC:

System study

Test planning

Writing test cases or scripts

Reviewing the test cases

Executing test cases

Bug tracking

Reporting the defects

What are error guessing and error seeding?
Error guessing is a test case design technique in which the tester guesses what faults might occur and designs tests to represent them.

Error seeding is the process of intentionally adding known faults to a program in order to monitor the rate of their detection and removal, and to estimate the number of faults remaining in the program.
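The seeding idea lends itself to a quick estimate. As a hedged sketch (this ratio formula is a common textbook formulation, not something stated in the original post):

    def estimated_remaining_faults(seeded, seeded_found, real_found):
        # If we detected seeded_found of the `seeded` planted faults, assume the
        # same detection rate applies to the real (indigenous) faults.
        estimated_total_real = real_found * seeded / seeded_found
        return estimated_total_real - real_found

    # 16 of 20 seeded faults found (80% detection), 40 real faults found:
    print(estimated_remaining_faults(20, 16, 40))  # 10.0 faults estimated to remain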

What is Cyclomatic Complexity?
Posted: 02/06/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Cyclomatic complexity

Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program. To calculate cyclomatic complexity:

V(G) = E – N + 2

where E is the number of edges and N is the number of nodes in the program’s control-flow graph.
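A minimal sketch of the formula in Python, using a simple if/else control-flow graph as the worked example:

    def cyclomatic_complexity(edges, nodes):
        # V(G) = E - N + 2 for a single connected control-flow graph.
        return edges - nodes + 2

    # An if/else block has 4 nodes (decision, two branches, join)
    # and 4 edges, giving V(G) = 2: two independent paths.
    print(cyclomatic_complexity(edges=4, nodes=4))  # 2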


What you need to know about BVT (Build Verification Testing)
Posted: 02/06/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Build Verification Testing, Tips for BVT success

What is BVT?
A Build Verification Test (BVT) is a set of tests run on every new build to verify that the build is testable before it is released to the test team for further testing. The test cases are core functionality test cases that ensure the application is stable and can be tested thoroughly. Typically the BVT process is automated. If the BVT fails, the build is assigned back to a developer for a fix. BVT is also called smoke testing or build acceptance testing (BAT).

A new build is checked mainly for two things: build validation and build acceptance.

Some BVT basics:
• It is a subset of tests that verify the main functionalities.
• BVTs are typically run on daily builds; if the BVT fails, the build is rejected and a new build is released after the fixes are done.
• The advantage of BVT is that it saves the test team the effort of setting up and testing a build whose major functionality is broken.
• Design BVTs carefully enough to cover basic functionality.
• Typically a BVT should not run for more than 30 minutes.
• BVT is a type of regression testing, run on each and every new build.

BVT primarily checks project integrity: whether all the modules are integrated properly. Module integration testing is very important when different teams develop project modules. Many applications fail due to improper module integration, and in the worst cases a complete project gets scrapped because module integration fails.

What is the main task in a build release? Obviously file ‘check-in’, i.e. including all the new and modified project files associated with the respective build. BVT was primarily introduced to check initial build health, i.e. whether all the new and modified files are included in the release, all file formats are correct, and each file’s version, language and flags are right. These basic checks are worthwhile before the build is released to the test team for testing; you will save time and money by discovering build flaws at the very beginning using BVT.

Which test cases should be included in the BVT? This is a tricky decision to make before automating the BVT task. Keep in mind that the success of the BVT depends on which test cases you include.

Some simple tips for including test cases in your BVT automation suite:
• Include only critical test cases in the BVT.
• All test cases included in the BVT should be stable.
• All the test cases should have a known expected result.
• Make sure the included critical functionality test cases are sufficient for application test coverage.
• Do not include modules that are not yet stable. For some under-development features you cannot predict the expected behavior, as these modules are unstable, and you might already know of failures in these incomplete modules before testing. There is no point in using such modules or test cases in the BVT.

You can simplify this critical test case selection by communicating with everyone involved in the project development and testing life cycle. Such a process should negotiate the BVT test cases, which ultimately ensures BVT success. Set some BVT quality standards; these standards can be met only by analyzing the major project features and scenarios.

Example: test cases to be included in the BVT for a text editor application (some sample tests only):
1) Test case for creating a text file.
2) Test cases for writing into the text editor.
3) Test cases for the copy, cut and paste functionality of the text editor.
4) Test cases for opening, saving and deleting a text file.

These are sample test cases that can be marked as ‘critical’; for every minor or major change in the application these basic critical test cases should be executed, a task easily accomplished by the BVT. A minimal sketch of such a suite follows.
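Here is a hedged, self-contained sketch of those four text-editor checks in Python; the TextEditor class is a hypothetical stand-in for the application under test, not a real API:

    # Hypothetical minimal editor standing in for the application under test;
    # just enough surface for the critical checks listed above.
    class TextEditor:
        def __init__(self):
            self.text = ""
            self.clipboard = ""

        def write(self, s):
            self.text += s

        def copy(self, start, end):
            self.clipboard = self.text[start:end]

        def cut(self, start, end):
            self.copy(start, end)
            self.text = self.text[:start] + self.text[end:]

        def paste(self):
            self.text += self.clipboard

    def bvt_suite():
        # BVT: only critical, stable checks with known expected results.
        editor = TextEditor()
        editor.write("hello world")
        assert editor.text == "hello world"      # writing works
        editor.copy(0, 5)
        editor.paste()
        assert editor.text.endswith("hello")     # copy/paste works
        editor.cut(0, 6)
        assert editor.text.startswith("world")   # cut works
        return "BVT PASSED"

    print(bvt_suite())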


BVT automation suites need to be maintained and modified from time to time, e.g. by including new test cases in the BVT as new stable project modules become available.

What happens when the BVT suite runs? Say the build verification automation test suite is executed after any new build:
1) The result of the BVT execution is sent to all the email IDs associated with that project.
2) The BVT owner (the person executing and maintaining the BVT suite) inspects the result.
3) If the BVT fails, the BVT owner diagnoses the cause of the failure.
4) If the failure cause is a defect in the build, all the relevant information, with failure logs, is sent to the respective developers.
5) The developer, on his initial diagnosis, replies to the team about the failure cause: is this really a bug, and if so, what will the bug-fixing scenario be?
6) Once the bug is fixed, the BVT test suite is executed again; if the build passes the BVT, it is passed to the test team for further detailed functionality, performance and other tests.
This process is repeated for every new build.

Why does a BVT or build fail? A BVT breaks sometimes; this doesn’t mean there is always a bug in the build. There are other reasons a build can fail, such as a test case coding error, an automation suite error, an infrastructure error, or hardware failures. You need to troubleshoot the cause of the BVT break and take proper action after diagnosis.

Tips for BVT success:
1) Spend considerable time writing BVT test case scripts.
2) Log as much detailed information as possible to diagnose the BVT pass or fail result. This will help the developer team debug and quickly find the failure cause.
3) Select stable test cases to include in the BVT. For new features, if a new critical test case passes consistently on different configurations, promote it into your BVT suite. This will reduce the probability of frequent build failures due to new unstable modules and test cases.
4) Automate the BVT process as much as possible – from the build release process to the BVT result, automate everything.
5) Have some penalties for breaking the build: some chocolates or a team coffee party from the developer who breaks the build will do.

Conclusion: BVT is nothing but a set of regression test cases executed for each new build. It is also called a smoke test. The build is not assigned to the test team unless and until the BVT passes. The BVT can be run by a developer or tester, its result is communicated throughout the team, and immediate action is taken to fix the bug if the BVT fails. The BVT process is typically automated by writing scripts for the test cases. Only critical test cases are included, and these should ensure application test coverage. BVT is very effective for daily as well as long-term builds. It saves significant time, cost and resources, and spares the test team the frustration of an incomplete build.

Developers might be doing the unit and integration testing, and not necessarily the BVT; BVT is most often done by a test engineer. Once the build team deploys the test build on the test environments, it is the test engineer’s job to perform the BVT (sniff, sanity, smoke, etc.). If you are able to test the application and execute the test cases, it follows that the BVT has passed; otherwise you would not have been able to test the application at all. It does not matter whether the testing is done manually or is automated using a tool – if the build has a new feature, how could you have automated it yet?

A process you might follow: the developer initiates a mail to the build team (also copied to the test team, with a description of what is to be tested in the new build) to make the build. The build team makes the build, deploys it on the test machines and replies to all, asking the test team to continue testing; if the build fails, they say so in the mail. If the BVT fails, the tester replies to the mail stating that the BVT failed, along with whatever logs are available; otherwise testing continues.

Testing Process
Posted: 02/06/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Testing Process

The testing process is as follows:

1. Test Requirements:
• Requirement Specification documents
• Functional Specification documents
• Design Specification documents (use cases, etc.)
• Use Case documents
• Test Traceability Matrix for identifying test coverage

2. Test Planning:
• Test scope, test environment
• Different test phases and test methodologies
• Manual and automation testing
• Defect management, configuration management, and risk management, etc.
• Evaluation and identification of test and defect-tracking tools

3. Test Environment Setup:
• Test bed installation and configuration
• Network connectivity
• All software/tool installation and configuration
• Coordination with vendors and others

4. Test Case Design:
• Test Traceability Matrix and test coverage
• Test scenario identification and test case preparation
• Test data and test script preparation
• Test case reviews and approval
• Baselining under configuration management

5. Test Case Execution and Defect Tracking:
• Executing test cases
• Testing test scripts
• Capture, review and analysis of test results
• Raising defects and tracking them to closure

6. Test Report and Acceptance:
• Test summary reports
• Test metrics and process improvements made
• Build release
• Receiving acceptance

Bug life cycle
Posted: 02/06/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Bug Life Cycle

The different states of a bug can be summarized as follows:
1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected
10. Closed

Description of the various stages:
1. New: When the bug is posted for the first time, its state is “NEW”. This means the bug is not yet approved.
2. Open: After a tester has posted a bug, the tester’s lead approves that the bug is genuine and changes the state to “OPEN”.
3. Assign: Once the lead changes the state to “OPEN”, he assigns the bug to the corresponding developer or developer team. The state of the bug is now “ASSIGN”.
4. Test: Once the developer fixes the bug, he assigns it to the testing team for the next round of testing. Before releasing the software with the bug fixed, he changes the state to “TEST”, indicating that the bug has been fixed and released to the testing team.
5. Deferred: A bug in the deferred state is expected to be fixed in a later release. Many factors can lead to this state: the priority of the bug may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.
6. Rejected: If the developer feels the bug is not genuine, he rejects it; the state of the bug changes to “REJECTED”.
7. Duplicate: If the bug is reported twice, or two bugs describe the same issue, one bug’s status is changed to “DUPLICATE”.
8. Verified: Once the bug is fixed and the status is “TEST”, the tester retests it. If the bug is no longer present in the software, he approves the fix and changes the status to “VERIFIED”.
9. Reopened: If the bug still exists even after the developer’s fix, the tester changes the status to “REOPENED” and the bug traverses the life cycle once again.
10. Closed: Once the bug is fixed and retested, if the tester feels the bug no longer exists in the software, he changes its status to “CLOSED”. This state means the bug is fixed, tested and approved.
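As a hedged sketch, the life cycle can be encoded as an allowed-transitions table for validating status changes in a home-grown tracker (the transition set is inferred from the descriptions above, not an official specification):

    # Allowed bug-state transitions, inferred from the stage descriptions above.
    TRANSITIONS = {
        "NEW":      {"OPEN", "REJECTED", "DUPLICATE", "DEFERRED"},
        "OPEN":     {"ASSIGN"},
        "ASSIGN":   {"TEST", "DEFERRED"},
        "TEST":     {"VERIFIED", "REOPENED"},
        "VERIFIED": {"CLOSED"},
        "REOPENED": {"ASSIGN"},
    }

    def change_state(current, new):
        if new not in TRANSITIONS.get(current, set()):
            raise ValueError(f"illegal transition {current} -> {new}")
        return new

    state = "NEW"
    for nxt in ("OPEN", "ASSIGN", "TEST", "VERIFIED", "CLOSED"):
        state = change_state(state, nxt)
    print(state)  # CLOSED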

Example for Bug Priority & Severity
Posted: 02/06/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Bug Priority and Severity

High Priority & Low Severity: when the name of the site is wrong on the home page of a website, it is a low-severity bug because it does not affect the major functionality, but since it is the home page of the website it may create a bad impression on customers. Here it should be high priority.

High Priority & Low Severity: if we want to take a printout of a page or document and the printer does not allow us to, it is a high-priority bug because it relates to functionality; but since we can take the printout from another printer, or can change the configuration, it is a low-severity bug, as we can get our work done anyway without being affected by the malfunctioning printer.

Defect severity determines the defect’s criticality, whereas defect priority determines the defect’s immediacy or urgency of repair.

1. High Severity & Low Priority: suppose there is an application which generates some banking-related reports weekly, monthly, quarterly and yearly by doing some calculations, and there is a fault in calculating the yearly report. This is a high-severity fault, but low priority, because it can be fixed in the next release as a change request.

2. Low Severity & High Priority: suppose there is a spelling mistake or content issue on the home page of the BT.com website, which gets lakhs of hits daily all over the UK. Though this fault does not affect the website or its other functionality, considering the status and popularity of the website in the competitive market it is a high-priority fault.

3. High Severity & High Priority: in the same banking-reports application, suppose there is a fault in calculating the weekly report. This is a high-severity and high-priority fault, because it will hamper the functionality of the application immediately, within a week. It should be fixed urgently.

4. Low Severity & Low Priority: a spelling mistake on pages that get very few hits throughout the month on any website can be considered low severity and low priority.

Sample bug report
Posted: 02/06/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Sample Bug Report

The sample bug/defect report below will give you an exact idea of how to report a bug in a bug tracking tool.

Here is an example scenario that caused a bug: let’s assume that in your application under test you want to create a new user. You log on to the application and navigate to USERS menu > New User, then enter all the details in the User form: First Name, Last Name, Age, Address, Phone, etc. Once you enter all this information, you click the SAVE button to save the user, and you should see a success message saying “New User has been created successfully”.

But when you logged in, navigated to USERS menu > New User, entered all the required information and clicked the SAVE button – BANG! The application crashed and you got an error page on screen. (Capture this error message window and save it, e.g. as a Microsoft Paint file.)

This is the bug scenario you would like to report in your bug-tracking tool. How will you report this bug effectively?

Here is the sample bug report for the above example (note that some bug report fields might differ depending on your bug tracking system):

SAMPLE BUG REPORT:
Bug Name: Application crash on clicking the SAVE button while creating a new user.
Bug ID: (created automatically by the bug tracking tool once you save the bug)
Area Path: USERS menu > New Users
Build Number: Version Number 5.0.1
Severity: HIGH (High/Medium/Low) or 1
Priority: HIGH (High/Medium/Low) or 1
Assigned to: Developer-X
Reported By: Your Name
Reported On: Date
Reason: Defect
Status: New/Open/Active (depends on the tool you are using)
Environment: Windows 2003 / SQL Server 2005
Description: Application crashes on clicking the SAVE button while creating a new user, hence a new user cannot be created in the application.
Steps To Reproduce:
1) Log on to the application.
2) Navigate to the Users menu > New User.
3) Fill in all the user information fields.
4) Click the ‘Save’ button.
5) An error page is shown: “ORA1090 Exception: Insert values Error…”
6) See the attached logs for more information (attach more logs related to the bug, if any).
7) Also see the attached screenshot of the error page.
Expected result: On clicking the SAVE button, a success message “New User has been created successfully” should be shown.
(Attach the ‘application crash’ screenshot, if any.)

Save the defect/bug in the bug tracking tool. You will get a bug ID, which you can use for further reference. A default ‘new bug’ mail will go to the respective developer and the default module owner (team leader or manager) for further action.

Website Cookie Testing, Test cases for testing web application cookies?
Posted: 02/06/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Test cases for testing web application cookies, Website Cookie Testing

We will first focus on what exactly cookies are and how they work. It will be easier to understand the test cases for testing cookies once you have a clear understanding of how cookies work, how they are stored on the hard drive, and how cookie settings can be edited.

What is a cookie?
A cookie is a small piece of information stored in a text file on the user’s hard drive by a web server. This information is later used by the web browser to retrieve information from that machine. Generally a cookie contains personalized user data or information that is used to communicate between different web pages.

Why are cookies used?
Cookies are nothing but the user’s identity, used to track where the user navigated throughout the web site’s pages. The communication between web browser and web server is stateless. For example, if you access http://www.example.com/1.html, the web browser simply asks the example.com web server for the page 1.html. If you next request http://www.example.com/2.html, a new request is sent for 2.html, and the web server knows nothing about to whom it served the previous page. What if you want the history of this user’s communication with the web server? You need to maintain the user state and the interaction between browser and server somewhere. This is where the cookie comes into the picture: cookies serve the purpose of maintaining the user’s interactions with the web server.

How do cookies work?
The HTTP protocol used to exchange information files on the web is used to maintain cookies. There are two kinds of HTTP protocol: stateless HTTP keeps no record of previously accessed web page history, while stateful HTTP does keep some history of previous browser and server interactions, and it is the latter that cookies use to maintain user interactions. Whenever the user visits a site or page that uses a cookie, a small piece of code inside that HTML page (generally a call to a language script that writes the cookie, e.g. JavaScript, PHP or Perl) writes a text file, called a cookie, on the user’s machine. Here is an example of the header used to write a cookie, which can be set from any HTML page:

Set-Cookie: NAME=VALUE; expires=DATE; path=PATH; domain=DOMAIN_NAME;

When the user visits the same page or domain later, this cookie is read from disk and used to identify the return visit of the same user to that domain. The expiration time is set when the cookie is written, and is decided by the application that is going to use the cookie.

Generally two types of cookie are written on the user’s machine:

1) Session cookies: active only while the browser that invoked the cookie is open; when we close the browser the session cookie gets deleted. Sometimes a session of, say, 20 minutes is set, after which the cookie expires.
2) Persistent cookies: cookies written permanently on the user’s machine, lasting for months or years.

Where are cookies stored?
When any web page application writes a cookie, it gets saved in a text file on the user’s hard disk drive. The path where cookies are stored depends on the browser; different browsers store cookies in different paths. E.g. Internet Explorer stores cookies under the path “C:\Documents and Settings\Default User\Cookies”, where “Default User” is replaced by the current user you are logged in as, like “Administrator” or a user name like “Vijay”. The cookie path can easily be found by navigating through the browser options. In the Mozilla Firefox browser you can even see the cookies in the browser options themselves: open the browser and click Tools -> Options -> Privacy, then the “Show cookies” button.

How are cookies stored?
Let’s take the example of a cookie written by rediff.com in the Mozilla Firefox browser. When you open the page rediff.com or log in to your Rediffmail account, a cookie is written to your hard disk. To view this cookie, simply click the “Show cookies” button mentioned above, then click the Rediff.com site in the cookie list. You can see the different cookies written by the rediff domain, with different names:

Site: Rediff.com
Cookie name: RMID
Name: RMID (name of the cookie)
Content: 1d11c8ec44bf49e0… (encrypted content)
Domain: .rediff.com
Path: / (any path after the domain name)
Send for: any type of connection
Expires: Thursday, December 31, 2020 11:59:59 PM

Applications where cookies can be used:

1) To implement a shopping cart: cookies are used for maintaining an online ordering system; they remember what the user wants to buy. What if the user adds some products to their shopping cart and then, for some reason, doesn’t want to buy them this time and closes the browser window? The next time the same user visits the purchase page, he can see all the products he added to the shopping cart on his last visit.
2) Personalized sites: when users visit certain pages, they are asked which pages they do or don’t want displayed. The user’s options are stored in a cookie, and while the user is online, those pages are not shown to him.
3) User tracking: to track the number of unique visitors online at a particular time.


4) Marketing: some companies use cookies to display advertisements on user machines. Cookies control these advertisements: when and which advertisement should be shown, what the user’s interests are, which keywords he searches for on the site – all these things can be maintained using cookies.
5) User sessions: cookies can track user sessions for a particular domain using a user ID and password.

Drawbacks of cookies:

1) Even though writing cookies is a great way to maintain user interaction, if the user has set browser options to warn before writing any cookie, or has disabled cookies completely, a site that relies on cookies will be completely disabled and cannot perform any operation, resulting in a loss of site traffic.
2) Too many cookies: if you write too many cookies on every page navigation, and the user has turned on the option to warn before a cookie is written, this could turn the user away from your site.
3) Security issues: sometimes the user’s personal information is stored in cookies, and if someone hacks the cookie, the hacker can get access to that personal information. Even corrupted cookies can be read by different domains and lead to security issues.
4) Sensitive information: some sites may write and store your sensitive information in cookies; this should not be allowed, due to privacy concerns.

Some major test cases for web application cookie testing:

The first obvious test case is to check whether your application is writing cookies properly to disk. You can also use a cookie tester application, if you don’t have a web application to test, to understand the cookie concept for testing.

Test cases:
1) As a cookie privacy policy, make sure from your design documents that no personal or sensitive data is stored in a cookie.
2) If you have no option other than saving sensitive data in a cookie, make sure the data stored in the cookie is stored in encrypted form.
3) Make sure there is no overuse of cookies on your site under test. Overuse will annoy users if the browser prompts for cookies too often, and this could result in loss of site traffic and eventually loss of business.
4) Disable cookies in your browser settings: if your site uses cookies, its major functionality will not work with cookies disabled. Try to access the web site under test and navigate through the site. Check whether appropriate messages are displayed to the user, such as “For smooth functioning of this site make sure that cookies are enabled in your browser”. There should not be any page crash due to disabling cookies. (Please make sure that you close all browsers and delete all previously written cookies before performing this test.)
5) Accept/reject some cookies: the best way to check web site functionality is not to accept all cookies. If your web application writes 10 cookies, randomly accept some, say 5, and reject the other 5. To execute this test case, set your browser options to prompt whenever a cookie is being written to disk; on this prompt window you can either accept or reject each cookie. Then try to access the major functionality of the web site and see whether pages crash or data gets corrupted.
6) Delete cookies: allow the site to write its cookies, then close all browsers and manually delete all cookies for the web site under test. Access the web pages and check the behavior of the pages.
7) Corrupt the cookies: corrupting a cookie is easy, since you know where cookies are stored. Manually edit the cookie in Notepad and change the parameters to some vague values – e.g. alter the cookie content, the name of the cookie, or the expiry date – and check the site functionality. In some cases a corrupted cookie allows the data inside it to be read by another domain; this should not happen with your web site’s cookies. Note that cookies written by one domain, say rediff.com, can’t be accessed by another domain, say yahoo.com, unless the cookies are corrupted and someone is trying to hack the cookie data.
8) Check the deletion of cookies from your web application page: sometimes a cookie written by a domain, say rediff.com, may be deleted by the same domain but by a different page under that domain. This is the general case when you are testing an ‘action tracking’ web portal: an action tracking or purchase tracking pixel is placed on the action web page, and when any action or purchase occurs, the cookie written to disk is deleted to avoid multiple action logging from the same cookie. Check that reaching your action or purchase page deletes the cookie properly and that no further invalid actions or purchases get logged from the same user.
9) Cookie testing on multiple browsers: this is an important case. Check whether your web application page writes cookies properly on different browsers, as intended, and whether the site works properly using these cookies. You can test your web application on the major browsers, such as Internet Explorer (various versions), Mozilla Firefox, Netscape, Opera, etc.
10) If your web application uses cookies to maintain a user’s logged-in state, log in to your web application using some username and password. In many cases you can see the logged-in user ID parameter directly in the browser address bar. Change this parameter to a different value, say from user ID 100 to 101, and press Enter. A proper access-denied message should be displayed, and the user should not be able to see another user’s account.
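As a starting point for automating cases 1 and 2 above, here is a minimal sketch using Python’s third-party requests library; the URL and the list of sensitive strings are placeholders to adapt to your application:

    import requests

    # Placeholder target and known sensitive values to look for in cookie content.
    URL = "https://www.example.com/login"
    SENSITIVE = ["password", "credit_card"]

    response = requests.get(URL)
    for cookie in response.cookies:
        print(f"name={cookie.name} domain={cookie.domain} "
              f"path={cookie.path} expires={cookie.expires} secure={cookie.secure}")
        # Case 1: the server actually set the cookie with sane attributes.
        assert cookie.domain and cookie.path
        # Case 2: no obvious sensitive data stored in plain text.
        for word in SENSITIVE:
            assert word not in (cookie.value or "").lower()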

Test Plan Template
Posted: 02/06/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Test Plan Template


This is an index of a test plan only. Each point will help you elaborate your test plan step by step. Take this as a guideline and develop a full test plan for your project.

Table of Contents:
1. Introduction
1.1. Test Plan Objectives
2. Scope
2.1. Data Entry
2.2. Reports
2.3. File Transfer
2.4. Security
3. Test Strategy
3.1. System Test
3.2. Performance Test
3.3. Security Test
3.4. Automated Test
3.5. Stress and Volume Test
3.6. Recovery Test
3.7. Documentation Test
3.8. Beta Test
3.9. User Acceptance Test
4. Environment Requirements
4.1. Data Entry Workstations
4.2. Mainframe
5. Test Schedule
6. Control Procedures
6.1. Reviews
6.2. Bug Review Meetings
6.3. Change Request
6.4. Defect Reporting
7. Functions to Be Tested
8. Resources and Responsibilities
8.1. Resources
8.2. Responsibilities
9. Deliverables
10. Suspension / Exit Criteria
11. Resumption Criteria
12. Dependencies
12.1. Personnel Dependencies
12.2. Software Dependencies
12.3. Hardware Dependencies
12.4. Test Data & Database
13. Risks
13.1. Schedule
13.2. Technical
13.3. Management
13.4. Personnel
13.5. Requirements
14. Tools
15. Documentation
16. Approvals

Difference between test bed and test data?
Posted: 02/06/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Difference between test bed and test data

A test bed consists of test data plus other hardware and software components, i.e. it is a bigger picture than software data alone. Test data are data which have been specifically identified for use in executing test scripts (they may form a subset of the test bed).

Test Bed: the preparation done for test execution is called the test bed, i.e. installing your application (including prerequisites like IE 6.0, .NET 2.0, IIS, Oracle, creating a Windows user with administrator rights, etc.) and preparing the database/schema – in short, making sure the application is READY for test execution.

Test Data: on the other hand, test data is the data used in the test execution. E.g. the input that needs to be entered while creating a new user – First Name, Last Name, Age, Address, Sex, Phone Number, etc. – comes under test data. Test data are identified for use in executing test scripts and are used to verify the expected results obtained.

Example (test data): suppose I have a login screen that I want to automate, parameterize and run for 10 iterations. I have to prepare a set of data with all the user IDs and passwords before I automate, in order to get the expected results. This is called preparation of test data.

Example (test bed): assume I have a PDA or other handheld device, and I have developed software that converts a song from any format to MP4. After converting the song, I get the list of MP4 files as an XML file, which I can transfer from the computer to the handheld device over a USB cable. To test this entire scenario, I have to set up both the software environment (data) and the hardware (the PDA and the USB connection to my computer).


How to keep your data intact for any test environment?
Posted: 02/06/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Test environment

Many times more than one tester is responsible for testing a build. In this case more than one tester has access to common test data, and each tester will try to manipulate that common data according to his or her own needs. The best way to keep your valuable input data collection intact is to keep personal copies of the same data. It may be in any format: inputs to be provided to the application, or input files such as Word files, Excel files or photo files.

Check that your data is not corrupted: filing a bug without proper troubleshooting is bad practice. Before executing any test case on existing data, make sure the data is not corrupted and that the application can read the data source.

How to prepare data considering performance test cases?
Performance tests require a very large data set. Particularly when an application fetches or updates data in database tables, data volume plays an important role in testing the application for performance. Sometimes manually created data will not expose subtle bugs that may only be caught with actual data created by the application under test. If you want real-world data, which is impossible to create manually, ask your manager to make it available from the live environment. I generally ask my manager whether live-environment data can be made available for testing; such data is useful for ensuring the smooth functioning of the application for all valid inputs.

Take the example of the ‘statistics testing’ in my search engine project: to check the history of user searches and clicks on advertiser campaigns, data spanning several years had to be processed, which was practically impossible to create manually for dates spread over many years. So there was no option other than using a live server data backup for testing. (But first make sure your client allows you to use this data.)

What is the ideal test data?
Test data can be said to be ideal if, with the minimum-sized data set, all the application errors get identified. Try to prepare test data that exercises all application functionality without exceeding the cost and time constraints for preparing the data and running the tests.

How to prepare test data that will ensure complete test coverage?
Design your test data considering the following categories:

1) No data: run your test cases on blank or default data and see whether proper error messages are generated.
2) Valid data set: create it to check that the application functions as per requirements and that valid input data is properly saved in the database or files.
3) Invalid data set: prepare an invalid data set to check application behavior for negative values and alphanumeric string inputs.
4) Illegal data format: make one data set of illegal data format. The system should not accept data in an invalid or illegal format; also check that proper error messages are generated.
5) Boundary condition data set: a data set containing out-of-range data. Identify application boundary cases and prepare a data set that covers both lower and upper boundary conditions.
6) Data set for performance, load and stress testing: this data set should be large in volume.

Creating separate data sets for each test condition in this way will ensure complete test coverage.

Conclusion: preparing proper test data is a core part of “project test environment setup”. A tester cannot pass on responsibility for a missed bug by saying that complete data was not available for testing. Testers should create their own test data in addition to the existing standard production data. Your test data set should be ideal in terms of cost and time. Use the tips provided in this article to categorize test data to ensure complete functional test case coverage.
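To make those six categories concrete, here is a hedged sketch for a numeric ‘age’ field (the field and its assumed 1-120 valid range are illustrative, not from the original article):

    # Illustrative data sets for an "age" field with an assumed valid range of 1-120.
    TEST_DATA = {
        "no_data": ["", None],
        "valid": [1, 30, 120],
        "invalid": [-5, "abc", "12x"],
        "illegal_format": ["12.5.3", "1,000"],
        "boundary": [0, 1, 120, 121],           # just outside and on each boundary
        "performance": list(range(1, 100001)),  # large volume for load/stress runs
    }

    for category, values in TEST_DATA.items():
        print(f"{category}: {len(values)} values, e.g. {values[:3]}")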

Black Box Testing: Types and Techniques of BBT
Posted: 02/06/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Black box testing, Smoke and Sanity testing, Types and techniques of BBT

Black box testing treats the system as a “black box”, so it doesn’t explicitly use knowledge of the internal structure or code; in other words, the test engineer need not know the internal workings of the “black box” or application. The main focus in black box testing is on the functionality of the system as a whole. The term ‘behavioral testing’ is also used for black box testing, and white box testing is sometimes called ‘structural testing’. Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn’t strictly forbidden, but it’s still discouraged.

Each testing method has its own advantages and disadvantages. There are some bugs that cannot be found using only black box or only white box testing. The majority of applications are tested with the black box method; we need to cover the majority of test cases so that most bugs get discovered by black box testing. Black box testing occurs throughout the software development and testing life cycle, i.e. in the unit, integration, system, acceptance and regression testing stages.

Tools used for black box testing: black box testing tools are mainly record-and-playback tools, used in regression testing to check whether a new build has introduced bugs into previously working application functionality. These record-and-playback tools record test cases in the form of scripts such as TSL, VBScript, JavaScript or Perl.

Advantages of black box testing:
- The tester can be non-technical.
- Used to verify contradictions between the actual system and the specifications.
- Test cases can be designed as soon as the functional specifications are complete.

Disadvantages of black box testing:
- The test inputs need to be drawn from a large sample space.
- It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and difficult.
- There is a chance of leaving paths unexercised during this testing.

Methods of black box testing:

Graph-Based Testing Methods: every application is built up of some objects. All such objects are identified and a graph is prepared. From this object graph each object relationship is identified and test cases are written accordingly to discover errors.

Error Guessing: purely based on previous experience and the judgment of the tester. Error guessing is the art of guessing where errors may be hidden. There are no specific tools for this technique; write test cases that cover all the application paths.

Boundary Value Analysis: many systems have a tendency to fail on boundaries, so testing the boundary values of an application is important. Boundary Value Analysis (BVA) is a functional testing technique in which the extreme boundary values are chosen. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.
- Extends equivalence partitioning.
- Test both sides of each boundary.
- Look at output boundaries for test cases too.
- Test min, min-1, max, max+1, and typical values.

BVA techniques:
1. Number of variables: for n variables, BVA yields 4n + 1 test cases.
2. Kinds of ranges: generalizing ranges depends on the nature or type of the variables.

Advantages of Boundary Value Analysis:
1. Robustness testing – boundary value analysis plus values that go beyond the limits.
2. Min-1, Min, Min+1, Nom, Max-1, Max, Max+1.
3. Forces attention to exception handling.

Limitations of Boundary Value Analysis: boundary value testing is efficient only for variables with fixed ranges, i.e. known boundaries.

Equivalence Partitioning: equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived.


How this partitioning is performed while testing: 1. If an input condition specifies a range, one valid and one two invalid classes are defined.2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.3. If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined.4. If an input condition is Boolean, one valid and one invalid class is defined.Comparison Testing: Different independent versions of same software are used to compare to each other for testing in this method.Regression Testing: We can conduct the test on a modified build to ensure the bug fix work and occurrence of side effects in that build.Retesting: We can do the test on a build to ensure the correctness of the ——-modifiedSanity and smoke, we use script for Smoke but we don’t for Sanity. We use smoke when a product is new and sanity for releases but smoke is used when the changes in the product have greater impact on its functionality.Smoke Test:When a build is received, a smoke test is run to ascertain if the build is stable and it can be considered for further testing. Smoke testing can be done for testing the stability of any interim build. Smoke testing can be executed for platform qualification tests.Sanity testing:Once a new build is obtained with minor revisions, instead of doing a through regression, sanity is performed so as to ascertain the build has indeed rectified the issues and no further issue has been introduced by the fixes. It’s generally a subset of regression testing and a group of test cases are executed that are related with the changes made to the app.Generally, when multiple cycles of testing are executed, sanity testing may be done during the later cycles after through regression cycles.1Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In software industry, smoke testing is a shallow and wide approach whereby all areas of the application without getting into too deep, is tested.A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.2A smoke tests is scripted–either using a written set of tests or an automated testA sanity test is usually unscripted.3A Smoke tests are designed to touch every part of the application in a cursory way. It’s is shallow and wide.A Sanity test is used to determine a small section of the application is still working after a minor change.4Smoke testing will be conducted to ensure whether the most crucial functions of a program


work, without bothering with finer details (such as build verification). Sanity testing is a cursory testing, performed whenever a cursory test is sufficient to prove that the application is functioning according to specifications; this level of testing is a subset of regression testing.
5. Smoke testing is a normal health check-up on a build of an application before taking it into in-depth testing. Sanity testing verifies whether the requirements are met, checking all features breadth-first.

Smoke testing is done at the developer side, to decide whether the developed application is eligible for testing at all; sanity testing is done at the testing side, to check whether the application is eligible for further testing.

Here are the differences you can see:

SMOKE TESTING:
- Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach in which all areas of the application are tested without going too deep.
- A smoke test is scripted, using either a written set of tests or an automated test.
- A smoke test is designed to touch every part of the application in a cursory way: shallow and wide.
- Smoke testing is conducted to ensure that the most crucial functions of a program work, without bothering with finer details (such as build verification).
- Smoke testing is a normal health check-up on a build of an application before taking it into in-depth testing.

SANITY TESTING:
- A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.
- A sanity test is usually unscripted.
- A sanity test is used to determine that a small section of the application is still working after a minor change.
- Sanity testing is a cursory testing, performed whenever a cursory test is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing.
- Sanity testing verifies whether the requirements are met, checking all features breadth-first.
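For example, a scripted smoke suite is often just a handful of broad checks run against every new build. A minimal sketch, assuming three invented probe functions standing in for the major areas of an application:

    # Hypothetical wide-but-shallow smoke checks, one per major area of an app.
    def can_launch() -> bool: return True            # stand-in for a real probe
    def can_open_login_page() -> bool: return True   # stand-in for a real probe
    def can_reach_database() -> bool: return True    # stand-in for a real probe

    SMOKE_CHECKS = [can_launch, can_open_login_page, can_reach_database]

    def run_smoke_suite() -> bool:
        """Return True only if every crucial area responds; details come later."""
        results = {check.__name__: check() for check in SMOKE_CHECKS}
        for name, ok in results.items():
            print(f"{name}: {'PASS' if ok else 'FAIL'}")
        return all(results.values())

    if __name__ == "__main__":
        print("Build accepted for further testing:", run_smoke_suite())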


What is the difference between Test Condition and Test Scenario?
Posted: 02/06/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: mr, Test Condition and Test Scenario, Traceability Matrix, Waterfall Model

Ans 1: A test condition is the condition, or process, that you follow to test an application.

Example: you have a login form.

Test Condition 1: when the user name and password are valid, the application will move forward.

The above is a test condition: the basic condition under which that test will pass.

Test Scenario: a test scenario tells you the possible ways (scenarios) in which an application can be tested.

For the above login form you have many ways of testing, e.g. positive testing, negative testing, boundary value analysis, and the like.
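As a rough sketch (not from the original answer; the login() function and its single valid credential pair are invented), such scenarios could be captured as simple automated checks:

    import unittest

    def login(username: str, password: str) -> bool:
        """Hypothetical system under test: accepts one known credential pair."""
        return username == "user1" and password == "secret"

    class LoginScenarios(unittest.TestCase):
        def test_positive_valid_credentials(self):
            self.assertTrue(login("user1", "secret"))

        def test_negative_wrong_password(self):
            self.assertFalse(login("user1", "wrong"))

        def test_negative_empty_fields(self):
            self.assertFalse(login("", ""))

    if __name__ == "__main__":
        unittest.main()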

Explain the waterfall model in detail?

Every testing process follows the waterfall model of the testing process:

Test strategy and planning
Test design
Test environment setup
Test execution
Defect analysis and tracking
Final reporting

Waterfall Model:

It is a linear sequential model.
It is a very simple model to implement.
It is the first process model.
It needs very few resources to implement.
In this model there is no backtracking: if an error occurs at any stage of software development, it cannot be corrected in that build.


Waterfall model: this is a very simple model. It moves like a waterfall from the top to the bottom of the SDLC. The drawback of this model is the ineffectiveness of its verification and validation activities.

What is meant by MR?

MR means Modification Request. Whenever a change comes from the client, we call it a Modification Request; from that MR, the business people work out the impact analysis for the particular MR.

What is a bug traceability matrix and its format?

Functional specification name and number, design specification number, test case number, test result, test pass/fail, defect number, defect status: these are all columns in the traceability matrix format.
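For illustration only (every column value below is invented), one row of such a matrix could be represented like this:

    # One hypothetical traceability matrix row, using the columns listed above.
    traceability_row = {
        "functional_spec_name": "FS-Login",
        "functional_spec_no": "FS-01",
        "design_spec_no": "DS-01",
        "test_case_no": "TC-05",
        "test_result": "Fail",
        "defect_no": "DF-112",
        "defect_status": "Open",
    }
    print(traceability_row)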

Difference b/w SRS and functional specifications? What are the approaches you follow while integrating the modules?
Posted: 02/06/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Difference b/w srs and functional specifications, SRS and Functional Specifications

SRS specifies the software requirements, i.e. the requirements to make the software:

Purpose
Scope
Target users (who will use this system): customers, developers, testing people, etc.
Description of each module
General considerations for all modules
Functional requirements
Navigational requirements
Non-functional requirements
Performance requirements
Security requirements
Resource requirements
  i. Software requirements (operating system)
  ii. Hardware requirements
  iii. Networking requirements
Appendix
  i. Acronyms
  ii. Definitions, etc.

And FRS means the functional decomposition of each module. It contains:

Model screenshots of the screens of each module
The different elements of each page, their location, and the action of each element

What is Adhoc testing?
Posted: 02/06/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Adhoc testing

Ad hoc testing is a subset of monkey testing. No formal test cases are used in it; it is done due to lack of time, and no formal testing approach is applied. Ad hoc testing, random testing, monkey testing and blind testing are one and the same. When the time available to test the application is very short, we do not follow test execution with full test cases; we pick test cases randomly and execute them to see the functional flow.

Difference between Test Methodology and Testing Techniques?
Posted: 02/06/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Tests Methodology and Testing Techniques

What is the difference between test methodology and testing techniques?

Testing Methodology

We begin the testing process by developing a comprehensive plan to test the general functionality and special features on a variety of platform combinations. Strict quality control procedures are used. The process verifies that the application meets the requirements specified in the system requirements document and is bug free. At the end of each testing day, the team prepares a summary of completed and failed tests. Our programmers address any identified issues, and the application is resubmitted to the testing team until every item is resolved. All changes and retesting are tracked through spreadsheets available to both the testing and programming teams. Applications are not allowed to launch until all identified problems are fixed. A report is prepared at the end of testing to show exactly what was tested and to list the final outcomes.

Our software testing methodology is applied in three distinct phases: unit testing, system testing, and acceptance testing.

Unit Testing:

The programmers conduct unit testing during the development phase. Programmers can test their specific functionality individually or with other units. However, unit testing is designed to test small pieces of functionality rather than the system as a whole. This allows the programmers to conduct the first round of testing to eliminate bugs before they reach the testing staff.

System Testing:

The system is tested as a complete, integrated system. System testing first occurs in the development environment but eventually is conducted in the production environment. Dedicated testers, project managers, or other key project staff perform system testing.

Functionality and performance testing are designed to catch bugs in the system, unexpected results, or other ways in which the system does not meet the stated requirements. The testers create detailed scenarios to test the strength and limits of the system, trying to break it if possible. Editorial reviews not only correct typographical and grammatical errors, but also improve the system's overall usability by ensuring that on-screen language is clear and helpful to users. Accessibility reviews ensure that the system is accessible to users with disabilities.

Acceptance Testing:

The software is assessed against the requirements defined in the system requirements document. The user or client conducts the testing in the production environment. Successful acceptance testing is required before client approval can be received.

Test Strategy

The test team worked with the project team members to formulate a test plan, a schedule, and strategies based on the Small IT Solution scope and identified small business customer scenarios.

The identified customer scenario for the solution testing is a small business of approximately 50 PCs. The customer would have a peer-to-peer network in place. They may also have a limited number of existing Microsoft servers or other server products not developed by Microsoft. The customer approach toward migration would be to implement Windows Small Business Server 2003, Standard Edition on new hardware and then migrate existing data and configurations to the new platform.

The development team was expected to have unit tested the recommendations and configurations provided in the documentation. This includes verifying the service recommendations and specifying the correct sequence of steps to ensure proper configuration. The deployment topology and architectural choices regarding the network and servers are assumed to have been validated as best practices by the solution architect and the Microsoft product groups relevant to the solution. The deployment topology, architectural choices and the validation results of unit testing were provided before the solution testing began.

Tell me 4 test case techniques?
Posted: 02/06/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Test Case Techniques

To design test cases, the following techniques will be used:

1) Boundary Value Analysis (BVA)
2) Equivalence Class Partitioning (ECP)
3) Cause-Effect Graph analysis (CE graph) - see the sketch below
4) Error Guessing
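Of the four, cause-effect analysis is the least self-explanatory: it enumerates combinations of input conditions (causes) and the expected outcome (effect). A toy Python sketch, with an invented funds-transfer rule:

    from itertools import product

    # Toy cause-effect decision table for a hypothetical funds-transfer rule:
    # causes: account is active, balance is sufficient; effect: transfer allowed.
    def transfer_allowed(active: bool, sufficient: bool) -> bool:
        return active and sufficient

    # Enumerate every cause combination and record the expected effect.
    for active, sufficient in product([True, False], repeat=2):
        print(f"active={active!s:5} sufficient={sufficient!s:5} "
              f"-> allowed={transfer_allowed(active, sufficient)}")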

Difference between Smoke and Sanity testing
Posted: 02/06/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Difference between Smoke and Sanity testing

Smoke testing is non-exhaustive software testing, ascertaining that the most crucial functions of a program work, but not bothering with finer details.

Sanity testing is a cursory testing; it is performed whenever a cursory test is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.


Waterfall Model
Posted: 01/06/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Waterfall Model

The waterfall model:

Software Development Process: There are various software development approaches defined and designed which are employed during the development process of software; these approaches are also referred to as "Software Development Process Models". Each process model follows a particular life cycle in order to ensure success in the process of software development. One such approach used in software development is "The Waterfall Model".

Waterfall Model
- The classic software life cycle model.
- The first process model to be introduced and followed widely in software engineering to ensure success of the project.
- The whole process of software development is divided into separate process phases.
- All the phases are cascaded to each other, so that the second phase is started as and when a defined set of goals is achieved for the first phase and it is signed off; hence the name "Waterfall Model".
- Also known as the linear sequential model.

[Figure: Waterfall Model process flow]


Testing actually involves two steps: verification and validation.

Verification: Verification is substantiating that the software has been transformed from one form into another as intended, with sufficient accuracy. Verification answers the question "Are we building the software right?".

Validation: Validation is substantiating that the software functions with sufficient accuracy with respect to its requirements specification. Validation answers the question "Are we building the right software?".

In order to represent the testing activity as an ongoing process, the V&V boxes appear under each product of the waterfall model. One important characteristic of the waterfall model is the iteration arrows that join the processes together. Another important characteristic of the waterfall model is the maintenance arrows, which connect the delivered software product to the various other products in the life cycle. Remember that any changes made to the software product after delivery are considered maintenance. A small change, such as the correction of a programming error, may only require a change in the code of the software.

The phases are:

Requirement Analysis & Definition
System & Software Design
Implementation & Unit Testing
Integration & System Testing
Operations & Maintenance

Requirement Analysis & Definition: All possible requirements of the system to be developed are captured in this phase. Requirements are the set of functionalities and constraints that the end-user (who will be using the system) expects from the system.


The requirements are gathered from the end-user by consultation; these requirements are analyzed for their validity, and the possibility of incorporating them in the system to be developed is also studied. Finally, a Requirement Specification document is created, which serves as a guideline for the next phase of the model.

System & Software Design: Before starting the actual coding, it is highly important to understand what we are going to create and what it should look like. The requirement specifications from the first phase are studied in this phase and the system design is prepared. System design helps in specifying the hardware and system requirements, and also helps in defining the overall system architecture. The system design specifications serve as input for the next phase of the model.

Implementation & Unit Testing: On receiving the system design documents, the work is divided into modules/units and actual coding is started. The system is first developed in small programs called units, which are integrated in the next phase. Each unit is developed and tested for its functionality; this is referred to as unit testing. Unit testing mainly verifies whether the modules/units meet their specifications.

Integration & System Testing: As specified above, the system is first divided into units, which are developed and tested for their functionality. These units are integrated into a complete system during the integration phase and tested to check whether all modules/units coordinate with each other and the system as a whole behaves as per the specifications. After successfully testing the software, it is delivered to the customer.

Operations & Maintenance: This phase of the waterfall model is a virtually never-ending (very long) phase. Generally, problems with the developed system (which were not found during the development life cycle) come up after its practical use starts, so the issues related to the system are solved after deployment of the system.

 


Not all the problems come into the picture directly; they arise from time to time and need to be solved, hence this process is referred to as maintenance.

Advantages
- Testing is inherent to every phase of the waterfall model.
- It is an enforced, disciplined approach.
- It is documentation driven, that is, documentation is produced at every stage.
- Good at addressing subtle issues like scalability and reliability.
- Specifications are built into the project.
- Discovery happens early, and projects follow a predictable cycle. Early identification of what will be difficult or expensive about a project allows go/no-go decisions to be made before too much money is spent.
- Simple and easy to use.
- Easy to manage due to the rigidity of the model: each phase has specific deliverables and a review process.
- Phases are processed and completed one at a time.
- Works well for smaller projects where requirements are very well understood.
- Minimizes planning overhead, since planning can be done up front.
- The structure minimizes wasted effort, so it works well for technically weak or inexperienced staff.

Disadvantages
The waterfall model is the oldest and the most widely used paradigm. However, many projects rarely follow its sequential flow, due to the inherent problems associated with its rigid format. Namely:
- It only incorporates iteration indirectly, so changes may cause considerable confusion as the project progresses.
- As the client usually has only a vague idea of exactly what is required from the software product, this model has difficulty accommodating the natural uncertainty that exists at the beginning of a project.
- The customer only sees a working version of the product after it has been coded. This may result in disaster if any undetected problems are precipitated to this stage.
- Relatively high risk of process paralysis.
- Only the final phase produces a non-documentation deliverable.
- Backing up to address mistakes is difficult.
- The problems with one phase are never solved completely during that phase; in fact, many problems regarding a particular phase arise after the phase is signed off. This results in a badly structured system, as not all the problems (related to a phase) are solved during the same phase.
- The project is not partitioned into phases in a flexible way.
- As the customer's requirements keep getting added to the list, not all the requirements are fulfilled; this results in the development of an almost unusable system. These requirements are then met in a newer version of the system, which increases the cost of system development.

Test Cases Example
Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Test Cases Example

1. Test Cases for an ATM

TC 1: successful card insertion.
TC 2: unsuccessful operation due to the card being inserted at the wrong angle.
TC 3: unsuccessful operation due to an invalid account card.
TC 4: successful entry of PIN number.
TC 5: unsuccessful operation due to a wrong PIN number being entered 3 times.
TC 6: successful selection of language.
TC 7: successful selection of account type.
TC 8: unsuccessful operation due to a wrong account type being selected with respect to the inserted card.
TC 9: successful selection of the withdrawal option.
TC 10: successful selection of amount.
TC 11: unsuccessful operation due to wrong denominations.
TC 12: successful withdrawal operation.
TC 13: unsuccessful withdrawal operation due to the amount being greater than the available balance.
TC 14: unsuccessful operation due to lack of cash in the ATM.
TC 15: unsuccessful operation due to the amount being greater than the daily limit.
TC 16: unsuccessful operation due to the server being down.
TC 17: unsuccessful operation due to clicking cancel after inserting the card.
TC 18: unsuccessful operation due to clicking cancel after inserting the card and entering the PIN number.
TC 19: unsuccessful operation due to clicking cancel after language selection, account type selection, withdrawal selection, and entering the amount.


 

 

2. Test Cases for a Mobile Phone

1) Check whether the battery is inserted into the mobile properly.
2) Check switching the mobile on and off.
3) Insert the SIM into the phone and check.
4) Add one user with a name and phone number in the address book.
5) Check an incoming call.
6) Check an outgoing call.
7) Send/receive messages on that mobile.
8) Check that all the numbers/characters on the phone work fine by clicking on them.
9) Remove a user from the phone book and check that the name and phone number are removed properly.
10) Check whether the network is working fine.
11) If it is GPRS enabled, check the connectivity.

 

3. Test cases for sending a message through a mobile phone (assuming all the scenarios)

1. Check for the availability of the mobile.
2. Check the buttons on the mobile.
3. Check whether the mobile is locked or unlocked.
4. Check the unlocking of the mobile.
5. Select the menu.
6. Check for the messages item in the menu.
7. Select the messages.
8. Check for the write-message item in the messages menu.
9. Select write message.
10. Check the buttons for writing alphabets.
11. Check how many characters you can send.
12. Write the message in the write-message screen.
13. Select the options.
14. Select send.
15. Check whether it asks for the phone number of the receiver.
16. Select search for the receiver's phone number, if it exists.
17. Enter the phone number of the receiver.
18. Select OK.
19. Check that the send request succeeds.

 

 

4. Test Cases for a Traffic Signal

1. Verify that the traffic light has three lights (green, yellow, red).
2. Verify that the lights turn on in a sequence.
3. Verify that the lights turn on in a sequence based on the specified times (e.g. green light 1 min, yellow light 10 sec, red light 1 min).
4. Verify that only one light glows at a time.
5. Verify whether the cycle of the traffic light can be accelerated beyond the specified times based on the traffic.
6. Verify whether the traffic lights in some spots are sensor activated.

 

5. Test Cases for a 2-Way Switch

1. Check whether two switches are present.
2. Check whether both switches are connected properly.
3. Check the power supply for both switches.
4. Check that the on/off conditions work properly.
5. Check that an electronic appliance connected to the 2-way switches does not get power when both switches are either in the on state or in the off state.
6. Check that an electronic appliance connected to the 2-way switches gets power when one switch is in the on state and the other is in the off state, or vice versa.

 

6. Test Cases for an Elevator

Some of the use cases would be:

1) The elevator is capable of moving up and down.
2) It stops at each floor.
3) It moves exactly to the floor whose corresponding floor number is pressed.
4) It moves up when called from above and down when called from below.
5) It waits until the 'close' button is pressed.
6) If anyone steps in between the doors at the time of closing, the doors should open.
7) No break points exist.
8) More use cases for the load that the elevator can carry (if required).

 

7. Test Cases for a Calculator

1. It should have the numeric digit keys.
2. It should give the proper output based on the operation.
3. It should not allow characters.
4. It should run from a cell or battery, not from a power supply.
5. It should be small in size.
6. It should perform at least the 4 basic operations: add, subtract, divide, multiply.

 

8. Test Cases for a Bulb

Check that the bulb is of the required shape and size.
Check that the bulb can be fitted into and removed from the holder.
Check whether the bulb glows with the required illumination or not.
Check that the bulb glows when we switch it on.
Check that the bulb goes off when we switch it off.
Check the bulb material.
The life of the bulb should meet the requirement.

 

9. Test Cases for a Yahoo Login Page

1. Test without entering any username and password.
2. Test it with only a username.
3. Test it with only a password.
4. Username with a wrong password.
5. Password with a wrong username.
6. Right username and right password.
7. Cancel after entering username and password.
8. Enter a long username and password that exceed the set character limits.
9. Try copy/paste in the password text box.
10. After successful sign-out, try the "Back" option in your browser. Check whether it gets you to the "signed-in" page.

Types and Levels of Testing in Programming
Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Types and Levels of Testing in Programming

Testing is an important step in the software development life cycle. The process of testing takes place at various stages of development in programming. It is a vital step in the development life cycle, because the process of testing helps to identify mistakes and sends the program for correction. This process is repeated at various stages until the final unit or program is found to be complete, thus giving total quality to the development process. The various levels and types of testing found in a software development life cycle are:

White Box Testing: For this testing process the tester must have access to the source code of the product to be tested, so it is essential that the person doing white box testing has some knowledge of the program being tested. Though not strictly necessary, it is preferable that the programmer performs the white box testing, since this testing process requires handling the source code.


Black Box Testing: This is otherwise called functional testing. Contrary to white box testing, the person doing black box testing need not have programming knowledge. This is because the person doing black box testing accesses the outputs or outcomes as the end user would, and performs thorough functionality testing to check whether the developed module or product behaves functionally the way it should.

Unit Testing: This testing is done for each module of the program to ensure the validity of each module. This type of testing is usually done by developers, by writing test cases for each scenario of the module and recording the results occurring at each step for each module.
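A developer-level unit test for one module function might look like the following minimal sketch (the apply_discount function and its rules are invented for illustration):

    import unittest

    def apply_discount(price: float, percent: float) -> float:
        """Module under test (hypothetical): apply a percentage discount."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTests(unittest.TestCase):
        def test_typical_value(self):
            self.assertEqual(apply_discount(200.0, 10), 180.0)

        def test_zero_discount_leaves_price_unchanged(self):
            self.assertEqual(apply_discount(99.99, 0), 99.99)

        def test_invalid_percent_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()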

Regression Testing: We all know that the development life cycle is subject to continuous changes as per the requirements of the user. If there is a change to an existing system which has already been tested, it is essential to make sure that the new changes do not affect the existing functionality. Regression testing is done to ensure this.

Integration Testing: By doing unit testing for each module as explained above, the process of integration testing as a whole becomes simpler: by correcting the mistakes or bugs in each module, integrating all the units into a system and testing it becomes easier. So one might ask why integration testing is needed. The answer is simple: unit testing, as explained, tests and assures the correctness of each module only; it does not cover how the system would behave, or what errors would be reported, when the modules are integrated. This is done at the level of integration testing.

Smoke Test: This is also (loosely) called sanity testing. It is mainly used to identify environment-related problems and is performed mostly by the test manager. For any application it is always necessary to have the environment checked first for smooth running of the application. So in this testing process the application is run in the environment (technically called a dry run) and checked to confirm that the application runs without any problem or abend in between.

Alpha Testing: The different testing processes described above take place at different stages of development as per requirements and needs. But a final round of testing is always done on the fully finished product, before it is released to end users, and this is called alpha testing. Alpha testing involves both white box testing and black box testing, so it is carried out in two phases.

Beta Testing: This testing process is carried out to further validate the developed software, and it takes place after alpha testing. Even after the alpha phase, the release is generally not made to all end users. The product is released to a set of people, and feedback is obtained from them to ensure the validity of the product. So here the testing is normally done by a group of end users, and therefore this beta testing phase covers black box (functionality) testing only.


Having seen the testing levels and types, let us now see how the testing process takes place in general. After getting an idea of what is to be tested, by communicating with developers and others in the design phase of the software development life cycle, the testing stage proceeds in parallel. The test plan is made ready during the planning stage of testing. This test plan has details of the environment setting, like the software, hardware, and operating system used, the scope and limitations of testing, the test types, and so on. In the next phase the test cases are prepared, giving the details of each step to be checked for the module; the inputs to be used for each action are described and recorded for testing, along with the expected outcome or expected result of each action. The next phase is the actual testing phase. In this phase the testers test based on the test plan and test cases, and record the output or result from each module; thus the actual output is recorded. Then a report is made of the errors or defects between the expected outcome and the actual output in each module at each step. This is sent to the developers for rework, and the testing cycle continues again as above.

This does not mean that the system released is one hundred percent bug free or error free, because no real system can have a zero percentage of errors. But an important point to bear in mind is that a developed system is a quality system only if the system can run for a period of time after its release without error, and after this time period only minimal errors are reported. For achieving this, the testing phase plays an essential role in the software development life cycle.

 

SQL PLUS Testing FAQs
Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: SQL PLUS Testing FAQs

SQL PLUS STATEMENTS

1. What are the types of SQL statement?

Data Definition Language: CREATE, ALTER, DROP, TRUNCATE, REVOKE, NOAUDIT & COMMENT.
Data Manipulation Language: INSERT, UPDATE, DELETE, LOCK TABLE, EXPLAIN PLAN & SELECT.
Transactional Control: COMMIT & ROLLBACK.
Session Control: ALTER SESSION & SET ROLE.
System Control: ALTER SYSTEM.

2. What is a transaction?

A transaction is the logical unit of work between two commits, or between a commit and a rollback.


3. What is the difference between TRUNCATE & DELETE?

TRUNCATE commits after deleting the entire table, i.e. it cannot be rolled back. Database triggers do not fire on TRUNCATE.
DELETE allows filtered deletion. Deleted records can be rolled back or committed. Database triggers fire on DELETE.

4. What is a join? Explain the different types of joins.

A join is a query which retrieves related columns or rows from multiple tables.

Self Join: joining the table with itself.
Equi Join: joining two tables by equating two common columns.
Non-Equi Join: joining two tables using a condition other than equality on the common columns.
Outer Join: joining two tables in such a way that the query can also retrieve rows that do not have a corresponding join value in the other table.
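As a quick illustration of equi vs. outer joins, here is a sketch using Python's built-in sqlite3 module with an invented emp/dept schema (Oracle's outer-join syntax differs):

    import sqlite3

    # Tiny invented schema to contrast an equi join with an outer join.
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE dept (deptno INTEGER PRIMARY KEY, dname TEXT);
        CREATE TABLE emp  (empno INTEGER PRIMARY KEY, ename TEXT, deptno INTEGER);
        INSERT INTO dept VALUES (10, 'ACCOUNTING'), (20, 'RESEARCH');
        INSERT INTO emp  VALUES (1, 'SMITH', 10), (2, 'JONES', NULL);
    """)

    # Equi join: only rows with a matching deptno in both tables.
    print(con.execute(
        "SELECT e.ename, d.dname FROM emp e JOIN dept d ON e.deptno = d.deptno"
    ).fetchall())   # [('SMITH', 'ACCOUNTING')]

    # Outer join: JONES appears even without a matching department.
    print(con.execute(
        "SELECT e.ename, d.dname FROM emp e LEFT OUTER JOIN dept d ON e.deptno = d.deptno"
    ).fetchall())   # [('SMITH', 'ACCOUNTING'), ('JONES', None)]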

5. What is a subquery?

A subquery is a query whose return values are used in the filtering conditions of the main query.

6. What is a correlated sub-query?

A correlated sub-query is a sub-query which has a reference to the main query.

7. Explain CONNECT BY PRIOR.

It retrieves rows in hierarchical order, e.g. (assuming the usual emp table with an mgr column):

SELECT empno, ename FROM emp START WITH mgr IS NULL CONNECT BY PRIOR empno = mgr;

8. Difference between SUBSTR and INSTR?

INSTR(string1, string2 [, n [, m]]) returns the position of the m-th occurrence of string2 in string1. The search begins from the n-th position of string1.

SUBSTR(string1, n [, m]) returns a character string of size m from string1, starting from the n-th position of string1.

9. Explain UNION, MINUS, UNION ALL, INTERSECT.

INTERSECT returns all distinct rows selected by both queries.
MINUS returns all distinct rows selected by the first query but not by the second.
UNION returns all distinct rows selected by either query.
UNION ALL returns all rows selected by either query, including all duplicates.
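These four set operators can be checked quickly with Python's sqlite3 (a sketch with invented tables; note that SQLite spells MINUS as EXCEPT):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE a(x INTEGER);
        CREATE TABLE b(x INTEGER);
        INSERT INTO a VALUES (1), (2), (2), (3);
        INSERT INTO b VALUES (2), (3), (4);
    """)

    q = lambda sql: con.execute(sql).fetchall()
    print(q("SELECT x FROM a UNION SELECT x FROM b"))      # distinct rows from either query
    print(q("SELECT x FROM a UNION ALL SELECT x FROM b"))  # all rows, including duplicates
    print(q("SELECT x FROM a INTERSECT SELECT x FROM b"))  # distinct rows in both queries
    print(q("SELECT x FROM a EXCEPT SELECT x FROM b"))     # in the first query but not the second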

10. What is ROWID?

ROWID is a pseudo column attached to each row of a table. It is 18 characters long; the block number and row number are among its components.

11. What is the fastest way of accessing a row in a table?

Using ROWID.

 CONSTRAINTS

12. What is an Integrity Constraint ?

Integrity constraint is a rule that restricts values to a column in a table.

13. What is Referential Integrity ?

Maintaining data integrity through a set of rules that restrict the values of one or more columns of the

tables based on the values of primary key or unique key of the referenced table.

14. What is the usage of SAVEPOINTs?

SAVEPOINTs are used to subdivide a transaction into smaller parts. They enable rolling back part of a transaction. A maximum of five savepoints are allowed.

15. What is ON DELETE CASCADE ?

When ON DELETE CASCADE is specified ORACLE maintains referential integrity by automatically

removing dependent foreign key values if a referenced primary or unique key value is removed.

16. What are the data types allowed in a table ?

CHAR,VARCHAR2,NUMBER,DATE,RAW,LONG and LONG RAW.

17. What is difference between CHAR and VARCHAR2 ? What is the maximum SIZE allowed for each

type ?

CHAR pads blank spaces to the maximum length. VARCHAR2 does not pad blank spaces. For CHAR it

is 255 and 2000 for VARCHAR2.

18. How many LONG columns are allowed in a table? Is it possible to use LONG columns in a WHERE clause or ORDER BY?

Only one LONG column is allowed per table. It is not possible to use a LONG column in a WHERE or ORDER BY clause.

19. What are the prerequisites (i) to modify the datatype of a column, and (ii) to add a column with a NOT NULL constraint?

To modify the datatype of a column, the column must be empty. To add a column with a NOT NULL constraint, the table must be empty.

20. Where are the integrity constraints stored in the data dictionary?

The integrity constraints are stored in USER_CONSTRAINTS.

21. How will you activate/deactivate integrity constraints?

The integrity constraints can be enabled or disabled by ALTER TABLE ... ENABLE constraint / DISABLE constraint.

22. If a unique key constraint on a DATE column is created, will it validate rows that are inserted with SYSDATE?

It won't, because SYSDATE contains a time component attached to the date.

23. What is a database link ?

Database Link is a named path through which a remote database can be accessed.

24. How do you access the current value and next value from a sequence? Is it possible to access the current value in a session before accessing the next value?

sequence_name.CURRVAL and sequence_name.NEXTVAL. It is not possible: only after you access the next value in the session can the current value be accessed.

25. What is CYCLE/NOCYCLE in a sequence?

CYCLE specifies that the sequence continues to generate values after reaching either its maximum or minimum value. After an ascending sequence reaches its maximum value, it generates its minimum value. After a descending sequence reaches its minimum, it generates its maximum.

NOCYCLE specifies that the sequence cannot generate more values after reaching its maximum or minimum value.

26. What are the advantages of a VIEW?

To protect some of the columns of a table from other users.
To hide the complexity of a query.
To hide the complexity of calculations.

27. Can a view be updated/inserted/deleted? If yes, under what conditions?

A view can be updated/deleted/inserted if it has only one base table; if the view is based on columns from more than one table, then insert, update and delete are not possible.

28. If a view on a single base table is manipulated, will the changes be reflected on the base table?

Yes; and likewise, if changes are made to the base tables of a view, the changes are reflected in the view.

1. Difference between group functions and single row functions.

A group function operates on many rows and returns a single result (e.g. SUM(), AVG, MIN, MAX); group functions are not allowed in PL/SQL procedural statements. A single row function returns one result per row (e.g. UPPER, LOWER, CHR); single row functions are allowed in PL/SQL procedural statements.

2. Difference between DECODE and TRANSLATE.

DECODE is value-by-value replacement; TRANSLATE is character-by-character replacement.

e.g. SELECT DECODE('ABC','A',1,'B',2,'ABC',3) FROM dual;  o/p: 3
e.g. SELECT TRANSLATE('ABCGH','ABCDEFGHIJ','1234567899') FROM dual;  o/p: 12378

(The DECODE command is used to bring IF/THEN/ELSE logic to SQL. It tests for the IF value(s), then applies the THEN value(s) when true, and the ELSE value(s) if not.)

3. Difference between TRUNCATE and DELETE.

TRUNCATE deletes much faster than DELETE.

TRUNCATE: it is a DDL statement; it is a one-way trip (cannot ROLLBACK); it doesn't have selective features (no WHERE clause); it doesn't fire database triggers; it requires disabling of referential integrity constraints.
DELETE: it is a DML statement; one can ROLLBACK; it has a WHERE clause; it fires database triggers; it does not require disabling of constraints.

4. What is a CO-RELATED SUBQUERY?

A co-related subquery is one that has a correlation name as a table or view designator in the FROM clause of the outer query, and the same correlation name as a qualifier of a search condition in the WHERE clause of the subquery.

e.g. SELECT field1 FROM table1 X WHERE field2 > (SELECT AVG(field2) FROM table1 Y WHERE field1 = X.field1);

(The subquery in a correlated subquery is re-evaluated for every row of the table or view named in the outer query.)

5. What are the various joins used while writing SUBQUERIES?

Self join: a join where the foreign key of a table references the same table.
Outer join: a join condition where one can query all the rows of one of the tables in the join condition even though they don't satisfy the join condition.
Equi-join: a join condition that retrieves rows from one or more tables in which one or more columns in one table are equal to one or more columns in the second table.

6. What are the various constraints used in SQL?

NULL, NOT NULL, CHECK, DEFAULT.

7. What are the different Oracle database objects?

TABLES, VIEWS, INDEXES, SYNONYMS, SEQUENCES, TABLESPACES, etc.

8. What is the difference between Rename and Alias?

Rename is a permanent name given to a table or column, whereas an alias is a temporary name given to a table or column which does not exist once the SQL statement is executed.

9. What is a view?


A view is a stored query based on one or more tables; it's a virtual table.

What are the various privileges that a user can grant to another user?

SELECT, CONNECT, RESOURCE.

10. What is the difference between UNIQUE and PRIMARY KEY constraints?

A table can have only one PRIMARY KEY, whereas there can be any number of UNIQUE keys. The columns that compose a PK are automatically defined NOT NULL, whereas for a column that composes a UNIQUE key, NOT NULL must be specified explicitly if the column is to be mandatory.

11. Can a primary key contain more than one column?

Yes.

12. How will you avoid duplicate records in a query?

By using DISTINCT.

13. What is the difference between SQL and SQL*PLUS?

SQL is the language used to query the relational database (DML, DCL, DDL). SQL*PLUS is a command line tool that allows the user to type SQL commands to be executed directly against an Oracle database; SQL*PLUS commands are used to format query results, set options, and edit SQL commands and PL/SQL.

14. Which datatype is used for storing graphics and images?

The LONG RAW data type is used for storing BLOBs (binary large objects).

15. How will you delete duplicate rows from a base table?

DELETE FROM table_name A WHERE rowid > (SELECT MIN(rowid) FROM table_name B WHERE B.table_no = A.table_no);

or: CREATE TABLE new_table AS SELECT DISTINCT * FROM old_table; DROP TABLE old_table; RENAME new_table TO old_table;

or: DELETE FROM table_name A WHERE rowid NOT IN (SELECT MAX(rowid) FROM table_name GROUP BY column_name);

16. What is the difference between SUBSTR and INSTR?

SUBSTR returns a specified portion of a string, e.g. SUBSTR('BCDEF',1,4)  output: BCDE
INSTR provides the character position at which a pattern is found in a string, e.g. INSTR('ABC-DC-F','-',1,2)  output: 7 (2nd occurrence of '-')

17. There is a string '120000 12 0 .125'. How will you find the position of the decimal point?

INSTR('120000 12 0 .125','.')  output: 13

18. There is a '%' sign in one field of a column. What will be the query to find it?

'\' should be used before '%' (with an ESCAPE clause).

19. When do you use a WHERE clause and when do you use a HAVING clause?

The HAVING clause is used when you want to specify a condition on a group function, and it is written after the GROUP BY clause. The WHERE clause is used when you want to specify a condition on columns or single row functions (not group functions), and it is written before the GROUP BY clause, if one is used.

20. Which is faster: IN or EXISTS?

EXISTS is faster than IN, because EXISTS returns a Boolean value whereas IN returns a value.

21. What is an OUTER JOIN?

An outer join is a join condition where you can query all the rows of one of the tables in the join condition even though they don't satisfy the join condition.

22. How will you prevent your query from using indexes?

SELECT * FROM emp WHERE emp_no || '' = '12345';
i.e. you concatenate the column name with an empty string in the WHERE condition, or use hints:
SELECT /*+ FULL(a) */ ename, emp_no FROM emp a WHERE emp_no = 1234;

23. What is a pseudo column? Give some examples.

It is a column that is not an actual column in the table, e.g. USER, UID, SYSDATE, ROWNUM, ROWID, NULL, and LEVEL.

24. Suppose there is a customer table with columns like customer_no and payments. What will be the query to select the top three maximum payments?

SELECT customer_no, payments FROM customer C1
WHERE 3 >= (SELECT COUNT(*) FROM customer C2 WHERE C1.payment <= C2.payment);

25. What is the purpose of a cluster?

Oracle does not allow a user to specifically locate tables, since that is part of the function of the RDBMS. However, for the purpose of increasing performance, Oracle allows a developer to create a CLUSTER. A CLUSTER provides a means for storing data from different tables together, for faster retrieval than if the table placement were left to the RDBMS.

26. What is a cursor?

Oracle uses work areas to execute SQL statements and store processing information. A PL/SQL construct called a cursor lets you name a work area and access its stored information. A cursor is a mechanism used to fetch more than one row in a PL/SQL block.

27. Difference between an implicit & an explicit cursor.

PL/SQL declares a cursor implicitly for all SQL data manipulation statements, including queries that return only one row. However, for queries that return more than one row, you must declare an explicit cursor or use a cursor FOR loop. An explicit cursor is a cursor in which the cursor name is explicitly assigned to a SELECT statement via the CURSOR ... IS statement. An implicit cursor is used for INSERT, UPDATE, DELETE and single-row SELECT ... INTO statements; explicit cursors are used to process multi-row SELECT statements.


28. What are the cursor attributes?

%ROWCOUNT, %NOTFOUND, %FOUND, %ISOPEN.

29. What is a cursor FOR loop?

A cursor FOR loop is a loop where Oracle implicitly declares a loop variable, the loop index, of the same record type as the cursor's record.

30. Difference between NO_DATA_FOUND and %NOTFOUND.

NO_DATA_FOUND is an exception raised only for the SELECT ... INTO statement, when the WHERE clause of the query does not match any rows. When the WHERE clause of an explicit cursor does not match any rows, the %NOTFOUND attribute is set to TRUE instead.

31. What does a SELECT FOR UPDATE cursor represent?

SELECT ... FROM ... FOR UPDATE [OF column-reference] [NOWAIT]. The processing done in a fetch loop modifies the rows that have been retrieved by the cursor. A convenient way of modifying the rows is a method with two parts: the FOR UPDATE clause in the cursor declaration, and the WHERE CURRENT OF clause in an UPDATE or DELETE statement.

32. What does the 'WHERE CURRENT OF' clause do in a cursor?

e.g. (fragment, assuming a cursor X declared FOR UPDATE):

LOOP
  SELECT num_credits INTO v_numcredits FROM classes
    WHERE dept = 123 AND course = 101;
  UPDATE students SET current_credits = current_credits + v_numcredits
    WHERE CURRENT OF X;
END LOOP;
COMMIT;
END;

33. What is the use of a cursor variable? How is it defined?

A cursor variable is associated with different statements at run time, and can hold different values at run time; static cursors can only be associated with one run-time query. A cursor variable is a reference type (like a pointer in C). Declaring a cursor variable:

TYPE type_name IS REF CURSOR RETURN return_type;

where type_name is the name of the reference type, and return_type is a record type indicating the types of the select list that will eventually be returned by the cursor variable.

34. What should be the return type for a cursor variable? Can we use a scalar data type as the return type?

The return type for a cursor variable must be a record type. It can be declared explicitly as a user-defined record, or %ROWTYPE can be used, e.g. TYPE t_studentsref IS REF CURSOR RETURN students%ROWTYPE;

35. How do you open and close a cursor variable? Why is it required?

OPEN cursor_variable FOR SELECT ... statement; CLOSE cursor_variable. The OPEN syntax is used to associate a cursor variable with a particular SELECT statement; the CLOSE statement is used to free the resources used for the query.


36. How were cursor variables passed in PL/SQL 2.2?

In PL/SQL 2.2, cursor variables cannot be declared in a package. This is because the storage for a cursor variable has to be allocated using Pro*C or OCI; with version 2.2, the only means of passing a cursor variable to a PL/SQL block is via a bind variable or a procedure parameter.

37. Can cursor variables be stored in PL/SQL tables? If yes, how? If not, why?

No. A cursor variable points to a row, which cannot be stored in a two-dimensional PL/SQL table.

38. Difference between procedure and function.

Functions are named PL/SQL blocks that return a value and can be called with arguments; a procedure is a named block that can be called with parameters. A procedure call is a PL/SQL statement by itself, while a function call is made as part of an expression.

39. What are the different modes of parameters used in functions and procedures?

IN, OUT, IN OUT.

40. What is the difference between a formal and an actual parameter?

The variables that are passed as arguments in the call are the actual parameters; the variables in the procedure declaration are the formal parameters. Actual parameters contain the values that are passed to a procedure and receive results; formal parameters are the placeholders for the values of the actual parameters.

41. Can default values be assigned to actual parameters?

Yes.

42. Can a function take OUT parameters? If not, why?

No. A function has to return a value; an OUT parameter cannot return a value.

43. What is the syntax for dropping a procedure and a function? Are these operations possible?

DROP PROCEDURE procedure_name; DROP FUNCTION function_name. Yes, they are possible.

44. What are ORACLE PRECOMPILERS?

Using Oracle precompilers, SQL statements and PL/SQL blocks can be contained inside 3GL programs written in C, C++, COBOL, Pascal, FORTRAN, PL/1 and Ada. The precompilers are known as Pro*C, Pro*COBOL, etc. This form of PL/SQL is known as embedded PL/SQL; the language in which PL/SQL is embedded is known as the host language. The precompiler translates the embedded SQL and PL/SQL statements into calls to the precompiler runtime library. The output must be compiled and linked with this library to create an executable.

45. What is OCI? What are its uses?

OCI is the Oracle Call Interface, a method of accessing the database from a 3GL program. Uses: no precompiler is required, and PL/SQL blocks are executed like other DML statements. The OCI library provides functions to parse SQL statements, bind input variables, bind output variables, execute statements, and fetch the results.

46. Difference between database triggers and form triggers.

a) A database trigger (DBT) fires when a DML operation is performed on a database table. A form trigger (FT) fires when the user presses a key or navigates between fields on the screen.
b) A DBT can be row level or statement level. An FT has no distinction between row level and statement level.
c) A DBT can manipulate data stored in Oracle tables via SQL. An FT can manipulate data in Oracle tables as well as variables in forms.
d) A DBT can be fired from any session executing the triggering DML statements. An FT can be fired only from the form that defines the trigger.
e) A DBT can cause other database triggers to fire. An FT can cause other database triggers to fire, but not other form triggers.

 

47. What is UTL_FILE? What are the different procedures and functions associated with it?

UTL_FILE is a package that adds the ability to read and write operating system files. The procedures associated with it are FCLOSE, FCLOSE_ALL, and five procedures to output data to a file: PUT, PUT_LINE, NEW_LINE, PUTF, and FFLUSH. The functions associated with it are FOPEN and IS_OPEN.

48. Can you use a COMMIT statement within a database trigger?

No.

49. What is the maximum buffer size that can be specified using the DBMS_OUTPUT.ENABLE function?

1,000,000.

BASICS OF PL/SQL

1. What is PL/SQL?

PL/SQL is a procedural language that has both interactive SQL and procedural programming language constructs such as iteration and conditional branching.

2. What is the basic structure of PL/SQL?

PL/SQL uses the block structure as its basic structure. Anonymous blocks or nested blocks can be used in PL/SQL.

3. What is a PL/SQL block?

A set of related declarations and procedural statements is called a block.

4. What are the components of a PL/SQL block?

Declarative part, executable part and exception part.

Datatypes in PL/SQL

5. What are the datatypes available in PL/SQL?

Some scalar data types such as NUMBER, VARCHAR2, DATE, CHAR, LONG, BOOLEAN.
Some composite data types such as RECORD & TABLE.


6. What are %TYPE and %ROWTYPE? What are the advantages of using these over datatypes?

%TYPE provides the data type of a variable or a database column to that variable.
%ROWTYPE provides the record type that represents an entire row of a table or view, or the columns selected in a cursor.

The advantages are:
i. You need not know a variable's data type.
ii. If the database definition of a column in a table changes, the data type of the variable changes accordingly.

7. What is the difference between %ROWTYPE and TYPE ... RECORD?

%ROWTYPE is to be used whenever a query returns an entire row of a table or view. TYPE rec RECORD is to be used whenever a query returns columns of different tables or views, and variables.

E.g.
TYPE r_emp IS RECORD (eno emp.empno%TYPE, ename emp.ename%TYPE);
e_rec emp%ROWTYPE;
CURSOR c1 IS SELECT empno, deptno FROM emp;
e_rec c1%ROWTYPE;

8. What is a PL/SQL table?

Objects of type TABLE are called "PL/SQL tables"; they are modelled as (but are not the same as) database tables. PL/SQL tables use a primary key to give array-like access to rows: a PL/SQL table can have one column and a primary key.

CURSORS

9. What is a cursor ? Why Cursor is required ?

Cursor is a named private SQL area from where information can be accessed. Cursors are required to

process rows individually for queries returning multiple rows.

10. Explain the two type of Cursors ?

There are two types of cursors, Implict Cursor and Explicit Cursor.

PL/SQL uses Implict Cursors for queries.

User defined cursors are called Explicit Cursors. They can be declared and used.

11. What are the PL/SQL Statements used in cursor processing ?

https://softwaretestinginterviewfaqs.wordpress.com/category/manual-testing-basics/

Page 53: Testing overview

DECLARE CURSOR cursor name, OPEN cursor name, FETCH cursor name INTO or Record types,

CLOSE cursor name.

12. What are the cursor attributes used in PL/SQL ?

%ISOPEN – to check whether the cursor is open or not.

%ROWCOUNT – the number of rows fetched/updated/deleted.

%FOUND – to check whether the cursor has fetched any row. True if rows are fetched.

%NOTFOUND – to check whether the cursor has fetched any row. True if no rows are fetched.

These attributes are prefixed with SQL for Implicit Cursors and with the cursor name for Explicit Cursors.
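For example, a short sketch of the implicit-cursor attributes after a DML statement (the emp table and department number are illustrative):

    BEGIN
       UPDATE emp SET sal = sal * 1.1 WHERE deptno = 10;
       -- SQL%ROWCOUNT holds the number of rows just updated by the implicit cursor
       DBMS_OUTPUT.PUT_LINE (SQL%ROWCOUNT || ' rows updated');
    END;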

13. What is a cursor for loop ?

A cursor FOR loop implicitly declares a %ROWTYPE record as the loop index, opens the cursor, fetches rows of values from the active set into the fields of the record, and closes the cursor when all the rows have been processed.

eg. FOR emp_rec IN c1 LOOP

        salary_total := salary_total + emp_rec.sal;

    END LOOP;

14. What will happen after the COMMIT statement ?

    DECLARE
       CURSOR C1 IS SELECT empno, ename FROM emp;
    BEGIN
       OPEN C1;
       LOOP
          FETCH C1 INTO eno, ename;
          EXIT WHEN C1%NOTFOUND;
          .....
          COMMIT;
       END LOOP;
    END;

A cursor whose query is SELECT .... FOR UPDATE gets closed after COMMIT/ROLLBACK.

A cursor whose query is a plain SELECT .... does not get closed even after COMMIT/ROLLBACK.

15. Explain the usage of WHERE CURRENT OF clause in cursors ?


The WHERE CURRENT OF clause in an UPDATE or DELETE statement refers to the latest row fetched from a cursor.
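A minimal sketch; note that the cursor must be declared with FOR UPDATE for WHERE CURRENT OF to be legal:

    DECLARE
       CURSOR c1 IS SELECT empno, sal FROM emp FOR UPDATE;
    BEGIN
       FOR r IN c1 LOOP
          UPDATE emp SET sal = r.sal * 1.1
           WHERE CURRENT OF c1;   -- acts on the row just fetched by c1
       END LOOP;
    END;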

DATABASE TRIGGERS

16. What is a database trigger ? Name some usages of database trigger ?

A database trigger is a stored PL/SQL program unit associated with a specific database table. Usages are: audit data modifications, log events transparently, enforce complex business rules, derive column values automatically, implement complex security authorizations, and maintain replicated tables.

17. How many types of database triggers can be specified on a table ? What are they ?

There are 12 types:

                       INSERT    UPDATE    DELETE
    Before Row          o.k.      o.k.      o.k.
    After Row           o.k.      o.k.      o.k.
    Before Statement    o.k.      o.k.      o.k.
    After Statement     o.k.      o.k.      o.k.

If the FOR EACH ROW clause is specified, the trigger fires once for each row affected by the statement.

If a WHEN clause is specified, the trigger fires according to the returned boolean value.

18. Is it possible to use transaction control statements such as ROLLBACK or COMMIT in a database trigger ? Why ?

It is not possible. As triggers are defined for each table, if you use COMMIT or ROLLBACK in a trigger, it affects logical transaction processing.

19. What are two virtual tables available during database trigger execution ?

The two virtual tables are OLD and NEW; the table columns are referred to as OLD.column_name and NEW.column_name.

For triggers related to INSERT, only NEW.column_name values are available.

For triggers related to UPDATE, both OLD.column_name and NEW.column_name values are available.

For triggers related to DELETE, only OLD.column_name values are available.
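A hedged sketch of a row trigger reading the OLD and NEW values; the emp_audit_log table is purely illustrative:

    CREATE OR REPLACE TRIGGER emp_sal_audit
    BEFORE UPDATE OF sal ON emp
    FOR EACH ROW
    WHEN (NEW.sal < OLD.sal)   -- fire only when the salary is reduced
    BEGIN
       INSERT INTO emp_audit_log (empno, old_sal, new_sal)
       VALUES (:OLD.empno, :OLD.sal, :NEW.sal);
    END;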

20. What happens if a procedure that updates a column of table X is called in a database trigger of the

same table ?

The table is said to be mutating, and Oracle raises a mutating-table error (ORA-04091).


21. Write the order of precedence for validation of a column in a table ?

i. Validation done using database triggers.

ii. Validation done using integrity constraints.

i & ii, in that order.

EXCEPTIONS

22. What is an Exception ? What are types of Exception ?

Exception is the error handling part of a PL/SQL block. The types are predefined and user-defined. Some of the predefined exceptions are:

CURSOR_ALREADY_OPEN

DUP_VAL_ON_INDEX

NO_DATA_FOUND

TOO_MANY_ROWS

INVALID_CURSOR

INVALID_NUMBER

LOGON_DENIED

NOT_LOGGED_ON

PROGRAM_ERROR

STORAGE_ERROR

TIMEOUT_ON_RESOURCE

VALUE_ERROR

ZERO_DIVIDE

OTHERS.
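A minimal sketch of an exception section handling one predefined exception, with OTHERS as the catch-all (the variable and the lookup are illustrative):

    DECLARE
       v_name emp.ename%TYPE;
    BEGIN
       SELECT ename INTO v_name FROM emp WHERE empno = 9999;
    EXCEPTION
       WHEN NO_DATA_FOUND THEN
          DBMS_OUTPUT.PUT_LINE ('No such employee');
       WHEN OTHERS THEN
          DBMS_OUTPUT.PUT_LINE ('Unexpected error: ' || SQLERRM);
    END;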

23. What is PRAGMA EXCEPTION_INIT ? Explain its usage.

PRAGMA EXCEPTION_INIT tells the compiler to associate an exception with an Oracle error, so that a specific Oracle error can be handled by name and its error message obtained.

e.g. PRAGMA EXCEPTION_INIT (exception_name, oracle_error_number);
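A hedged sketch, assuming a dept/emp pair linked by a foreign key; ORA-02292 is the "child record found" error:

    DECLARE
       child_rows EXCEPTION;
       PRAGMA EXCEPTION_INIT (child_rows, -2292);
    BEGIN
       DELETE FROM dept WHERE deptno = 10;
    EXCEPTION
       WHEN child_rows THEN
          DBMS_OUTPUT.PUT_LINE ('Delete the department''s employees first');
    END;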

24. What is Raise_application_error ?

RAISE_APPLICATION_ERROR is a procedure of the package DBMS_STANDARD which allows you to issue user-defined error messages from a stored sub-program or database trigger.
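A minimal sketch (the procedure and its rule are illustrative; user-defined error numbers must lie between -20000 and -20999):

    CREATE OR REPLACE PROCEDURE set_sal (p_empno NUMBER, p_sal NUMBER) IS
    BEGIN
       IF p_sal < 0 THEN
          RAISE_APPLICATION_ERROR (-20001, 'Salary cannot be negative');
       END IF;
       UPDATE emp SET sal = p_sal WHERE empno = p_empno;
    END;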

25. What are the return values of functions SQLCODE and SQLERRM ?

SQLCODE returns the code of the latest error that has occurred.

SQLERRM returns the relevant error message for that SQLCODE.


26. Where are the predefined exceptions stored ?

In the STANDARD package.

PROCEDURES, FUNCTIONS & PACKAGES

27. What is a stored procedure ?

A stored procedure is a sequence of statements that performs a specific function.

28. What is difference between a PROCEDURE & FUNCTION ?

A FUNCTION always returns a value using the RETURN statement.

A PROCEDURE may return one or more values through parameters, or may not return any at all.

29. What are the advantages of stored procedures?

Extensibility, modularity, reusability, maintainability and one-time compilation.

30. What are the modes of parameters that can be passed to a procedure ?

IN, OUT and IN OUT parameters.

31. What are the two parts of a procedure ?

Procedure Specification and Procedure Body.

32. Give the structure of the procedure ?

    PROCEDURE name (parameter list)
    IS
       local variable declarations
    BEGIN
       executable statements
    EXCEPTION
       exception handlers
    END;

33. Give the structure of the function ?

    FUNCTION name (argument list) RETURN datatype
    IS
       local variable declarations
    BEGIN
       executable statements
    EXCEPTION
       exception handlers
    END;

34. Explain how procedures and functions are called in a PL/SQL block ?

A function is called as part of an expression:

sal := calculate_sal ('A822');

A procedure is called as a PL/SQL statement:

calculate_bonus ('A822');

35. What is Overloading of procedures ?

Repeating the same procedure name with parameters of different datatypes, parameters in different positions, or a varying number of parameters is called overloading of procedures.

e.g. DBMS_OUTPUT.PUT_LINE
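A minimal sketch of overloading in a package specification (the names are illustrative):

    CREATE OR REPLACE PACKAGE log_pkg IS
       PROCEDURE log_it (p_msg  VARCHAR2);
       PROCEDURE log_it (p_code NUMBER);                  -- different datatype
       PROCEDURE log_it (p_msg VARCHAR2, p_code NUMBER);  -- different parameter count
    END log_pkg;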

36. What is a package ? What are the advantages of packages ?

A package is a database object that groups logically related procedures. The advantages of packages are modularity, easier application design, information hiding, reusability and better performance.

37.What are two parts of package ?

The two parts of package are PACKAGE SPECIFICATION & PACKAGE BODY.

The Package Specification contains declarations that are global to the package and local to the schema.

The Package Body contains the actual procedures, the local declarations of the procedures, and cursor declarations.
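A minimal sketch of the two parts (the names and body logic are illustrative):

    CREATE OR REPLACE PACKAGE emp_pkg IS        -- specification: what callers can see
       PROCEDURE give_raise (p_empno NUMBER, p_pct NUMBER);
    END emp_pkg;

    CREATE OR REPLACE PACKAGE BODY emp_pkg IS   -- body: the actual implementation
       PROCEDURE give_raise (p_empno NUMBER, p_pct NUMBER) IS
       BEGIN
          UPDATE emp SET sal = sal * (1 + p_pct / 100) WHERE empno = p_empno;
       END give_raise;
    END emp_pkg;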

38. What is difference between a Cursor declared in a procedure and Cursor declared in a package

specification ?

A cursor declared in a package specification is global and can be accessed by other procedures, including procedures in a package.

A cursor declared in a procedure is local to that procedure and cannot be accessed by other procedures.

39. How are packaged procedures and functions called from the following?

a. a stored procedure or anonymous block

b. an application program such as Pro*C or Pro*COBOL

c. SQL*Plus

a. PACKAGE_NAME.PROCEDURE_NAME (parameters);

   variable := PACKAGE_NAME.FUNCTION_NAME (arguments);

b. EXEC SQL EXECUTE

   BEGIN

      PACKAGE_NAME.PROCEDURE_NAME (parameters);

      variable := PACKAGE_NAME.FUNCTION_NAME (arguments);

   END;

   END-EXEC;

c. EXECUTE PACKAGE_NAME.PROCEDURE_NAME if the procedure does not have any out/in-out parameters. A function cannot be called this way.

40. Name the tables where characteristics of Package, procedure and functions are stored ?

USER_OBJECTS, USER_SOURCE and USER_ERRORS.

                     


       Oracle 

1. Three beauty pageant finalists: Cindy, Amy and Linda. The winner was a musician. The one who was neither last nor first was a math major. The one who came in third had black hair. Linda had red hair. Amy had no musical abilities. Who was first?

           (A) Cindy                       (B) Amy                       (C) Linda                       (D) None of these

2. Two twins have certain peculiar characteristics. One of them always lies on Mondays, Wednesdays and Fridays. The other always lies on Tuesdays, Thursdays and Saturdays. On the other days they tell the truth. You are given a conversation:

                      Person A: Today is Sunday; my name is Anil.

                      Person B: Today is Tuesday; my name is Bill.                        What day is today?

           (A) Sunday                       (B) Tuesday                       (C) Monday                   (D) Thursday

3. The difference of a number and its reciprocal is 1/2. The sum of their squares is

           (A) 9/4                           (B) 4/5                           (C) 5/3                           (D) 7/4

4. The difference of a number and its square is 870. What is the number?

           (A) 42                           (B) 29                               (C) 30                           (D) 32

5. A trader has 100 kg of wheat, part of which he sells at 5% profit and the rest at 20% profit. He gains 15% on the whole. Find how much is sold at 5% profit.

           (A) 60                           (B) 50                               (C) 66.66                           (D) 33.3

6. Which of the following points are collinear?

           (A) (3,5)   (4,6)   (2,7)                               (B) (3,5)   (4,7)   (2,3)

           (C) (4,5)   (4,6)   (2,7)                               (D) (6,7)   (7,8)   (2,7)

7. A man leaves the office daily at 7 pm. A driver with a car comes from his home to pick him up from the office and bring him back home. One day he gets free at 5.30 pm and, instead of waiting for the driver, he starts walking towards home. On the way he meets the car and returns home in the car. He reaches home 20 minutes earlier than usual. In how much time does the man usually reach home?

           (A) 1 hr 20 min                           (B) 1 hr                       (C) 1 hr 10 min                    (D) 55 min


8. If m:n = 2:3, the value of (3m+5n)/(6m-n) is

           (A) 7/3                           (B) 3/7                           (C) 5/3                           (D) 3/5

9. A dog takes four leaps for every five leaps of a hare, but three leaps of the dog are equal to four leaps of the hare. Compare their speeds.

           (A) 12:16                       (B) 19:20                       (C) 16:15                       (D) 10:12

10. A watch ticks 90 times in 95 seconds, and another watch ticks 315 times in 323 seconds. If they start together, how many times will they tick together in the first hour?

           (A) 100 times                   (B) 101 times                   (C) 99 times                   (D) 102 times

11. The purpose of defining an index is 

           (A) Enhance Sorting Performance                       (B) Enhance Searching Performance

           (C) Achieve Normalization                                 (D) All of the above

12. A transaction does not necessarily need to be

           (A) Consistent               (B) Repeatable               (C) Atomic               (D) Isolated

13. To group users based on common access permission one should use 

           (A) User Groups               (B) Roles               (C) Grants               (D) None of the above

14. PL/SQL uses which of the following

           (A) No Binding           (B) Early Binding           (C) Late Binding           (D) Deferred Binding

15. Which of the constraints can be defined at the table level as well as at the column level?

           (A) Unique                   (B) Not Null                   (C) Check                   (D) All the above

16. To change the default date format in a SQL*Plus session you have to

           (A) Set the new format in the DATE_FORMAT key in the Windows registry.

           (B) Alter session to set NLS_DATE_FORMAT.

           (C) Change the Config.ora file for the database.

           (D) Change the user profile USER_DATE_FORMAT.

17. Which of the following is not necessarily an advantage of using a package rather than independent stored procedures in a database?

           (A) Better performance.                                                   (B) Optimized memory usage.

           (C) Simplified Security implementation.                             (D) Encapsulation.

18. Integrity constraints are not checked at the time of

           (A) DCL Statements.                           (B) DML Statements.

           (C) DDL Statements.                           (D) It is checked all the above cases.


19. A rollback segment is not used in the case of

           (A) DCL Statements.       (B) DML Statements.       (C) DDL Statements.       (D) all of the above.

20. An Arc relationship is applicable when

           (A) One child table has multiple parent relations, but for any one instance of a child record only one of the relations is applicable.

           (B) One column of a table is related to another column of the same table.

           (C) A child table is dependent on columns other than the primary key columns of the parent table.

           (D) None of the above.

21. Which of the following is true about C functions?

           (A) Need not return any value.                           (B) Should always return an integer.

           (C) Should always return a float.                        (D) Should always return more than one value.

22. enum number { a = -1, b = 4, c, d, e }; what is the value of e?

           (A) 7                                (B) 4                               (C) 5                               (D) 3

23. Which of the following about automatic variables within a function is correct?

           (A) Its type must be declared before using the variable.                   (B) They are local.

           (C) They are not initialized to zero.                                                 (D) They are global.

24. Consider the following program segment

                                  int n, sum = 5;
                                  switch (n)
                                  {
                                      case 2: sum = sum - 2;
                                      case 3: sum *= 5;
                                              break;
                                      default: sum = 0;
                                  }

    If n = 2, what is the value of sum?

           (A) 0                           (B) 15                           (C) 3                           (D) None of these.

25. Which of the following is not an infinite loop?

   (A) x=0;
       do {
           /* x unaltered within the loop */
           ....
       } while(x==0);

   (B) #define TRUE 0
       ....
       while(TRUE) {....}

   (C) for(;;) {....}

   (D) while(1) {....}


26. Output of the following program is 

                                main()
                                {
                                    int i = 0;
                                    for (i = 0; i < 20; i++)
                                    {
                                        switch (i) {
                                            case 0:
                                                i += 5;
                                            case 1:
                                                i += 2;
                                            case 5:
                                                i += 5;
                                            default:
                                                i += 4;
                                                break;
                                        }
                                        printf("%d,", i);  /* assumed: i is printed each pass, as the answer options imply */
                                    }
                                }

           (A) 5,9,13,17                   (B) 12,17,22                   (C) 16,21                   (D) syntax error.

27. What does the following function print?

                                func(int i)
                                {
                                    if (i % 2) return 0;
                                    else return 1;
                                }

                                main()
                                {
                                    int i = 3;
                                    i = func(i);
                                    i = func(i);
                                    printf("%d", i);
                                }

           (A) 3                                (B) 1                                   (C) 0                               (D) 2

28. What will be the result of the following program?

                                char *g()
                                {
                                    static char x[1024];
                                    return x;
                                }

                                main()
                                {
                                    char *g1 = "First String";
                                    strcpy(g(), g1);
                                    g1 = g();
                                    strcpy(g1, "Second String");
                                    printf("Answer is:%s", g());
                                }

           (A) Answer is: First String                           (B) Answer is: Second String

           (C) Run time Error/Core Dump                   (D) None of these

29. Consider the following program

                                main()
                                {
                                    int a[5] = {1, 3, 6, 7, 0};
                                    int *b;
                                    b = &a[2];
                                }

      The value of b[-1] is

           (A) 1                               (B) 3                               (C) -6                               (D) none

30. Given a piece of code

                                int x[10];
                                int *ab;
                                ab = x;

      To access the 6th element of the array which of the following is incorrect?

           (A) *(x+5)                       (B) x[5]                       (C) ab[5]                       (D) *(*ab+5)

 

     Oracle2


    This is the Oracle paper held on July 13, 2003 at NITK Surathkal.

The test has 2 sections, 30 technical and 30 aptitude questions, with 60 minutes of time.

 

Technical section:

It is very easy; anyone can answer 25 questions without preparation. Some are:

1. How does the compiler treat variables of recursive functions?

2. What is an orthogonal matrix?

3. Given two tables, two questions were asked on those tables,

   one on a join and another on NOT IN.

4. Some questions on pointers (pretty easy).

5. Five questions on data structures, like LIFO and FIFO.

6. A question on primary keys.

7. How is NULL in SQL treated?

8. Given a doubly linked list, asked for r->left->right->data

   ans: r->data

9. Explain const char *ptr and char *const ptr.

I don't remember the rest.

 

Aptitude:

15 quantitative aptitude questions from R.S. Aggarwal;

15 verbal aptitude questions, of which

4 are odd-word-out,

4 are ordering of jumbled sentences,

and 4 are reasoning.


 

                         Model Questions From the Exam conducted by Oracle Bangalore in 2002.

1. What is the output of the following program?

 

#include <stdio.h>

void main()
{
    int a = 5, b = 7;
    printf("%d\n", b/a);
}

 

A. 1.4   B. 1.0  C. 1  D. 0

 

2. What is the output of the following program listing?

 

#include <stdio.h>

void main()
{
    int x, y;
    y = 5;
    x = func(y++);
    printf("%s\n", (x == 5) ? "true" : "false");
}

int func(int z)
{
    if (z == 6)
        return 5;
    else
        return 6;
}

 

A. true  B. false  C. either A or B   D. neither A nor B

 

 

3. What is the output of the following program?

 

#include <stdio.h>

main()
{
    int x, y = 10;
    x = 4;
    y = fact(x);
    printf("%d\n", y);
}

unsigned int fact(int x)
{
    return (x * fact(x - 1));
}

A. 24   B. 10  C. 4   D. none

 

 

4. Consider the following C program and choose the correct answer

 

#include <stdio.h>

void main()
{
    int a[10], k;
    for (k = 0; k < 10; k++)
    {
        a[k] = k;
    }
    printf("%d\n", k);
}

 

A. value of k is undefined ; unpredictable answer

B. 10

C. program terminates with run time error

D. 0

 

5. Consider the program and select the answer


 

#include <stdio.h>

void main()
{
    int k = 4, j = 0;
    switch (k)
    {
        case 3: j = 300;
        case 4: j = 400;
        case 5: j = 500;
    }
    printf("%d\n", j);
}

 

A. 300  B. 400 C. 500  D. 0

 

6. Consider the following statements:

Statement 1

A union is an object consisting of a sequence of named members of various types.

Statement 2

A structure is an object that contains, at different times, any one of several members of various types.

Statement 3

C is a compiled as well as an interpreted language.

Statement 4

It is impossible to declare a structure or union containing an instance of itself.

A. all the statements are correct

B. except 4, all are correct

C. only statement 3 is correct

D. statements 1 and 3 are incorrect; either 2 or 4 is correct

 

7. Consider the following program listing and select the output

 

#include <stdio.h>

main()
{
    int a = 010, sum = 0, tracker;
    for (tracker = 0; tracker <= a; tracker++)
        sum += tracker;
    printf("%d\n", sum);
}

 

A. 55    B. 36     C. 28      D. n

 

8. Spot the line numbers that are valid according to the ANSI C standards.

 

Line 1: #include <stdio.h>
Line 2: void main()
Line 3: {
     4:     int *pia, ia;
     5:     float *pfa, fa;
     6:     ia = 100;
     7:     fa = 12.05;
     8:     *pfa = &ia;
     9:     pfa = &ia;
    10:     pia = pfa;
    11:     fa = (float)*pia;
    12:     fa = ia;
    13: }

 

a. 8 and 9   b. 9 and 10   c. 8 and 10  d. 10 and 11

 

8. What is the output of the following program?

#include <stdio.h>

main()
{
    char char_arr[5] = "ORACL";
    char c = 'E';
    printf("%s\n", strcat(char_arr, c));
}

 

a. oracle   b. oracl    c. e   d. none

 

9. Consider the following program listing

 


#include <stdio.h>

main()
{
    int a[3];
    int *i;
    a[0] = 100; a[1] = 200; a[2] = 300;
    i = a;
    printf("%d\n", ++*i);
    printf("%d\n", *++i);
    printf("%d\n", (*i)--);
    printf("%d\n", *i);
}

What is the output?

 

a. 101,200,200,199   b. 200,201,201,100   c. 101,200,199,199   d. 200,300,200,100

 

10. Which of the following correctly declares "My_Var" as a pointer to a function that returns an integer?

 

a. int*My_Var();      b. int*(My_Var()); c. int(*)My_Var();        d. int(*My_Var)();

 

11. What is the memory structure employed by recursive functions in a C program?

 

a. B tree    b. Hash table        c. Circular list          d. Stack

 


12. Consider the following program listing.

 

Line 1: #include <stdio.h>
     2: void main()
     3: {
     4:     int a = 1;
     5:     const int c = 2;
     6:     const int *p1 = &c;
     7:     const int *p2 = &a;
     8:     int *p3 = &c;
     9:     int *p4 = &a;
    10: }

 

What are the lines that cause compilation errors?

 

a. 7    b. 8      c. 6 and 7         d. no errors

 

13. What will be the output?

#include <stdio.h>

main()
{
    int a[3];
    int *x;
    int *y;
    a[0] = 0; a[1] = 1; a[2] = 2;
    x = a++;
    y = a;
    printf("%d  %d\n", x, (++y));
}

 

a. 0,1  b. 1,1  c. error  d. 1,2

 

What is the procedure for swapping a and b (assume that a, b and tmp are of the same type)?

a. tmp=a; a=b; b=tmp;        b. a=a+b; b=a-b; a=a-b;

c. a=a-b; b=a+b; a=b-a;      d. all of the above

 


  

Traceability Matrix Chart

Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Traceability Matrix Chart

ORIGINATOR:                                                                    DATE:                             

 

PROJECT CODE:                                                                PROJECT:

 

 

SL NO | Input document name & number | PMP version | BRD version | FSD version | TS version | Code version | Others (specify) version | Review date | Reviewer's name

1.

2.

3.

4.

5.

 

 

SL NO | Functionality reference | Ref. in TS document | Unit test case ref. | Assembly test case ref. | System test case ref.

1.

2.

3.

4.

5.

Web Testing Basics

Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Web Testing Basics

Let's start with a web testing checklist:

1) Functionality Testing

2) Usability testing

3) Interface testing

4) Compatibility testing

5) Performance testing

6) Security testing

1) Functionality Testing:

Test for: all the links in web pages, database connections, forms used in the web pages for submitting or getting information from the user, and cookie testing.

Check all the links:

o Test the outgoing links from all the pages from specific domain under test.

o Test all internal links.

o Test links jumping on the same pages.

o Test links used to send the email to admin or other users from web pages.

o Test to check if there are any orphan pages.

o Lastly in link checking, check for broken links in all above-mentioned links.

Test forms in all pages:

Forms are an integral part of any web site. Forms are used to get information from users and to interact with them. So what should be checked on these forms?

o First check all the validations on each field.

o Check for the default values of fields.


o Wrong inputs to the fields in the forms.

o Options to create forms, if any; deleting, viewing or modifying the forms.

Let's take the example of the search engine project I am currently working on. In this project we have advertiser and affiliate signup steps. Each signup step is different but dependent on the other steps, so the signup flow should execute correctly. There are different field validations, like email IDs and user financial info validations. All these validations should be checked in manual or automated web testing.

Cookies testing:

Cookies are small files stored on the user machine. These are basically used to maintain the session, mainly login sessions. Test the application by enabling or disabling the cookies in your browser options. Test if the cookies are encrypted before being written to the user machine. If you are testing session cookies (i.e. cookies that expire after the session ends), check login sessions and user stats after the session ends. Check the effect on application security of deleting the cookies. (I will soon write a separate article on cookie testing.)

Validate your HTML/CSS:

If you are optimizing your site for search engines then HTML/CSS validation is very important. Mainly validate the site for HTML syntax errors. Check if the site is crawlable by different search engines.

Database testing:

Data consistency is very important in a web application. Check for data integrity and errors while you edit, delete or modify the forms or do any DB-related functionality. Check that all the database queries execute correctly and that data is retrieved and updated correctly. More on database testing could be load on the DB; we will address this under web load and performance testing below.

2) Usability Testing:

Test for navigation:

Navigation means how the user surfs the web pages, different controls like buttons and boxes, and how the user uses the links on the pages to surf different pages.

Usability testing includes:

The web site should be easy to use. Instructions should be provided clearly. Check if the provided instructions are correct, i.e. whether they satisfy their purpose.

Main menu should be provided on each page. It should be consistent.

Content checking: 

Content should be logical and easy to understand. Check for spelling errors. Use of dark colors annoys users and should not be used in the site theme. You can follow some standards that are used for web page and content building. These are commonly accepted standards, like the ones I mentioned above about annoying colors, fonts, frames etc. Content should be meaningful. All the anchor text links should work properly. Images should be placed properly, with proper sizes. These are some basic standards that should be followed in web development. Your task is to validate all of them for UI testing.

Other user information for user help:

Like a search option, sitemap, help files etc. A sitemap should be present with all the links in the web site, with a proper tree view of navigation. Check all links on the sitemap.


A "search in the site" option helps users find the content pages they are looking for easily and quickly. These are all optional items and, if present, should be validated.

3) Interface Testing:

The main interfaces are:

Web server and application server interface

Application server and Database server interface.

Check that all interactions between these servers are executed properly and errors are handled properly. If the database or web server returns an error message for any query by the application server, then the application server should catch and display these error messages appropriately to users. Check what happens if the user interrupts any transaction in between. Check what happens if the connection to the web server is reset in between.

4) Compatibility Testing:

Compatibility of your web site is a very important testing aspect. See which compatibility tests should be executed:

o Browser compatibility

o Operating system compatibility

o Mobile browsing

o Printing options

Browser compatibility:

In my web-testing career I have experienced this as the most influential part of web site testing.

Some applications are very dependent on browsers. Different browsers have different configurations and settings that your web page should be compatible with. Your web site coding should be cross-browser compatible. If you are using JavaScript or AJAX calls for UI functionality, performing security checks or validations, then give more stress to browser compatibility testing of your web application.

Test the web application on different browsers like Internet Explorer, Firefox, Netscape Navigator, AOL, Safari and Opera, with different versions.

OS compatibility:

Some functionality in your web application may not be compatible with all operating systems. All new technologies used in web development, like graphics designs and interface calls such as different APIs, may not be available in all operating systems.

Test your web application on different operating systems like Windows, Unix, Mac, Linux and Solaris, with different OS flavors.

Mobile browsing:

This is the new technology age, so in future mobile browsing will rock. Test your web pages on mobile browsers; compatibility issues may exist on mobile.

Printing options:

If you are giving page-printing options then make sure fonts, page alignment and page graphics get printed properly. Pages should fit the paper size, or the size mentioned in the printing option.

5) Performance testing:

A web application should sustain heavy load. Web performance testing should include:


Web Load Testing

Web Stress Testing

Test application performance at different internet connection speeds.

In web load testing, test what happens when many users access or request the same page. Can the system sustain peak load times? The site should handle many simultaneous user requests, large input data from users, simultaneous connections to the DB, heavy load on specific pages etc.

Stress testing: Generally, stress means stretching the system beyond its specified limits. Web stress testing is performed to break the site by applying stress, and checks how the system reacts to stress and how it recovers from crashes. Stress is generally applied to input fields, login and sign-up areas.

In web performance testing, web site functionality on different operating systems and different hardware platforms is checked for software and hardware memory leakage errors.

6) Security Testing:

Following are some test cases for web security testing:

o Test by pasting an internal URL directly into the browser address bar without login. Internal pages should not open.

o If you are logged in using username and password and browsing internal pages, then try changing URL options directly. I.e. if you are checking some publisher site statistics with publisher site ID = 123, try directly changing the URL site ID parameter to a different site ID which is not related to the logged-in user. Access should be denied for this user to view others' stats.

o Try some invalid inputs in input fields like login username, password and input text boxes. Check the system's reaction to all invalid inputs.

o Web directories or files should not be accessible directly unless a download option is given.

o Test the CAPTCHA against automated script logins.

o Test if SSL is used for security measures. If used, a proper message should be displayed when the user switches from non-secure http:// pages to secure https:// pages and vice versa.

o All transactions, error messages and security breach attempts should get logged in log files somewhere on the web server.

What you need to know about BVT (Build Verification Testing)

Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: BVT (Build Verification Testing)

What is BVT?

A Build Verification Test is a set of tests run on every new build to verify that the build is testable before it is released to the test team for further testing. These test cases are core functionality test cases that ensure the application is stable and can be tested thoroughly. Typically the BVT process is automated. If the BVT fails, the build is assigned back to the developer for a fix.

BVT is also called smoke testing or build acceptance testing (BAT)

A new build is checked mainly for two things:

o Build validation

o Build acceptance

Some BVT basics:

o It is a subset of tests that verify the main functionalities.

o BVTs are typically run on daily builds, and if the BVT fails the build is rejected and a new build is released after the fixes are done.

o The advantage of BVT is that it saves the effort of a test team in setting up and testing a build when major functionality is broken.

o Design BVTs carefully enough to cover basic functionality.

o Typically a BVT should not run for more than 30 minutes.

o BVT is a type of regression testing, done on each and every new build.

BVT primarily checks project integrity and whether all the modules are integrated properly or not. Module integration testing is very important when different teams develop project modules. I have heard of many cases of application failure due to improper module integration. In the worst cases, the complete project gets scrapped due to failure in module integration.

What is the main task in a build release? Obviously file 'check in', i.e. to include all the new and modified project files associated with the respective build. BVT was primarily introduced to check initial build health, i.e. to check whether all the new and modified files are included in the release, all file formats are correct, and every file version, language and flag associated with each file is right.

These basic checks are worthwhile before the build is released to the test team for testing. You will save time and money by discovering build flaws at the very beginning using BVT.

Which test cases should be included in BVT?

This is a very tricky decision to take before automating the BVT task. Keep in mind that the success of BVT depends on which test cases you include in it.

Here are some simple tips for including test cases in your BVT automation suite:

o Include only critical test cases in BVT.

o All test cases included in BVT should be stable.

o All the test cases should have a known expected result.

o Make sure all included critical functionality test cases are sufficient for application test coverage.

Also, do not include modules in BVT that are not yet stable. For some under-development features you can't predict expected behavior, as these modules are unstable and you might know of some failures in advance for these incomplete modules. There is no point using such modules or test cases in BVT.

You can make this critical-functionality test case inclusion task simple by communicating with everyone involved in the project development and testing life cycle. Such a process should negotiate the BVT test cases, which ultimately ensures BVT success. Set some BVT quality standards; these standards can be met only by analyzing major project features and scenarios.

Example: Test cases to be included in BVT for Text editor application (Some sample tests only):

1) Test case for creating text file.

2) Test cases for writing something into text editor

3) Test case for copy, cut, paste functionality of text editor

4) Test case for opening, saving, deleting text file.

These are some sample test cases which can be marked as 'critical'; for every minor or major change in the application, these basic critical test cases should be executed. This task can be easily accomplished by BVT.

BVT automation suites need to be maintained and modified from time to time, e.g. include test cases in BVT when there are new stable project modules available.

What happens when the BVT suite runs:

Say the build verification automation test suite is executed after any new build.

1) The result of the BVT execution is sent to all the email IDs associated with that project.

2) The BVT owner (the person executing and maintaining the BVT suite) inspects the result of the BVT.

3) If the BVT fails, then the BVT owner diagnoses the cause of the failure.

4) If the failure cause is a defect in the build, all the relevant information with failure logs is sent to the respective developers.

5) The developer, on initial diagnosis, replies to the team about the failure cause: whether this is really a bug, and if it is a bug, what the bug-fixing scenario will be.

6) Once the bug is fixed, the BVT test suite is executed again, and if the build passes the BVT, the build is passed to the test team for further detailed functionality, performance and other tests.

Why does the BVT or the build fail?

The BVT breaks sometimes. This doesn't mean that there is always a bug in the build. There are some other reasons for a build to fail, like a test case coding error, an automation suite error, an infrastructure error, hardware failures etc. You need to troubleshoot the cause of the BVT break and take proper action after diagnosis.

Tips for BVT success:

1) Spend considerable time writing BVT test case scripts.

2) Log as much detailed info as possible to diagnose the BVT pass or fail result. This will help the developer team to debug and quickly know the failure cause.

3) Select stable test cases to include in BVT. For new features, if a new critical test case passes consistently on different configurations, then promote this test case into your BVT suite. This will reduce the probability of frequent build failure due to new unstable modules and test cases.

4) Automate the BVT process as much as possible. Right from the build release process to the BVT result, automate everything.

5) Have some penalties for breaking the build. Some chocolates or a team coffee party from the developer who breaks the build will do.

Conclusion:

BVT is nothing but a set of regression test cases that are executed each time for a new build. This is also called a smoke test. The build is not assigned to the test team unless and until the BVT passes. The BVT can be run by a developer or tester, and the BVT result is communicated throughout the team; immediate action is taken to fix the bug if the BVT fails. The BVT process is typically automated by writing scripts for the test cases. Only critical test cases are included in BVT. These test cases should ensure application test coverage. BVT is very effective for daily as well as long-term builds. This saves significant time, cost and resources, and spares the test team the frustration of an incomplete build.

If you have some experience in the BVT process then please share it with our readers in the comments below.

Website Cookie Testing: Test cases for testing web application cookies

Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Test cases for testing web application cookies, Website Cookie Testing

What is a Cookie?

A cookie is a small piece of information stored in a text file on the user's hard drive by the web server. This information is later used by the web browser to retrieve information from that machine. Generally a cookie contains personalized user data or information that is used to communicate between different web pages.

Why are Cookies used?

Cookies are nothing but the user's identity, and they are used to track where the user navigated throughout the web site pages. The communication between web browser and web server is stateless.

For example, if you are accessing the domain http://www.example.com/1.html, the web browser will simply query the example.com web server for the page 1.html. The next time you type the page as http://www.example.com/2.html, a new request is sent to the example.com web server for the 2.html page, and the web server knows nothing about to whom it served the previous page 1.html.

What if you want the previous history of this user's communication with the web server? You need to maintain the user state and the interaction between web browser and web server somewhere. This is where the cookie comes into the picture. Cookies serve the purpose of maintaining the user's interactions with the web server.

How do cookies work?

The HTTP protocol used to exchange information files on the web is used to maintain cookies. There are two types of HTTP protocol: stateless HTTP and stateful HTTP. Stateless HTTP does not keep any record of previously accessed web page history, while stateful HTTP does keep some history of previous web browser and web server interactions; this is what cookies use to maintain the user interactions.

Whenever a user visits a site or page that is using a cookie, a small piece of code inside that HTML page (generally a call to some language script to write the cookie, like cookies in JavaScript, PHP or Perl) writes a text file on the user's machine called a cookie.

Here is one example of the code that is used to write a cookie and can be placed inside any HTML page:

Set-Cookie: NAME=VALUE; expires=DATE; path=PATH; domain=DOMAIN_NAME;


When the user visits the same page or domain at a later time, this cookie is read from disk and used to identify the second visit of the same user to that domain. The expiration time is set while writing the cookie; this time is decided by the application that is going to use the cookie.

Generally two types of cookies are written on a user machine:

1) Session cookies: This cookie is active until the browser that invoked the cookie is open. When we close the browser, this session cookie gets deleted. Sometimes a session of, say, 20 minutes can be set to expire the cookie.

2) Persistent cookies: Cookies that are written permanently on the user machine and last for months or years.

Where are cookies stored?

When any web page application writes a cookie, it gets saved in a text file on the user's hard disk drive. The path where the cookies get stored depends on the browser. Different browsers store cookies in different paths. E.g. Internet Explorer stores cookies on the path "C:\Documents and Settings\Default User\Cookies".

Here "Default User" can be replaced by the current user you are logged in as, like "Administrator", or a user name like "Vijay" etc.

The cookie path can easily be found by navigating through the browser options. In the Mozilla Firefox browser you can even see the cookies in the browser options itself. Open the Mozilla browser, click on Tools->Options->Privacy and then the "Show cookies" button.

How are cookies stored?

Let's take the example of a cookie written by rediff.com in the Mozilla Firefox browser.

When you open the page rediff.com or log in to your Rediffmail account in Firefox, a cookie will get written to your hard disk. To view this cookie, simply click on the "Show cookies" button mentioned above. Click on the Rediff.com site in the cookie list. You can see different cookies written by the rediff domain with different names.

Site: Rediff.com Cookie name: RMID

Name: RMID (Name of the cookie)

Content: 1d11c8ec44bf49e0… (Encrypted content)

Domain: .rediff.com

Path: / (Any path after the domain name)

Send For: Any type of connection

Expires: Thursday, December 31, 2020 11:59:59 PM

Applications where cookies can be used:

1) To implement shopping cart:

Cookies are used for maintaining online ordering system. Cookies remember what user wants to buy. What if user

adds some products in their shopping cart and if due to some reason user don’t want to buy those products this time

and closes the browser window? When next time same user visits the purchase page he can see all the products he

added in shopping cart in his last visit.

2) Personalized sites:

When users visit certain pages, they are asked which pages they don't want to visit or display. The user's options get stored in a cookie, and as long as the user is online, those pages are not shown to him.

3) User tracking:

To track the number of unique visitors online at a particular time.


4) Marketing:

Some companies use cookies to display advertisements on user machines. Cookies control these advertisements. When and which advertisement should be shown? What is the interest of the user? Which keywords does he search for on the site? All these things can be maintained using cookies.

5) User sessions:

Cookies can track user sessions to a particular domain using a user ID and password.

Drawbacks of cookies:

1) Even though writing cookies is a great way to maintain user interaction, if the user has set browser options to warn before writing any cookie, or has disabled cookies completely, then a site relying on cookies will be completely disabled and cannot perform any operation, resulting in loss of site traffic.

2) Too many Cookies:

If you are writing too many cookies on every page navigation, and the user has turned on the option to warn before writing cookies, this could turn the user away from your site.

3) Security issues:

Sometimes a user's personal information is stored in cookies, and if someone hacks the cookie, the hacker can get access to your personal information. Even corrupted cookies can be read by different domains and lead to security issues.

4) Sensitive information:

Some sites may write and store your sensitive information in cookies, which should not be allowed due to privacy concerns.

This should be enough to know what cookies are. If you want more cookie info, see the Cookie Central page.

Some Major Test cases for web application cookie testing:

The first obvious test case is to test if your application is writing cookies properly on disk. You can also use the Cookie Tester application if you don't have any web application to test but want to understand the cookie concept for testing.

Test cases: 

1) As a cookie privacy policy, make sure from your design documents that no personal or sensitive data is stored in the cookie.

2) If you have no option other than saving sensitive data in a cookie, make sure the data stored in the cookie is stored in encrypted format.

3) Make sure that there is no overuse of cookies on your site under test. Overuse of cookies will annoy users if the browser prompts for cookies often, and this could result in loss of site traffic and eventually loss of business.

4) Disable the cookies from your browser settings: If you are using cookies on your site, your site's major functionality will not work when you disable them. Then try to access the web site under test. Navigate through the site. See if appropriate messages are displayed to the user, like "For smooth functioning of this site, make sure that cookies are enabled in your browser". There should not be any page crash due to disabling the cookies. (Please make sure that you close all browsers and delete all previously written cookies before performing this test.)

5) Accept/reject some cookies: The best way to check web site functionality is not to accept all cookies. If you are writing 10 cookies in your web application, then randomly accept some cookies, say accept 5 and reject 5 cookies.


For executing this test case you can set browser options to prompt whenever a cookie is being written to disk. In this prompt window you can either accept or reject the cookie. Try to access the major functionality of the web site. See if pages are crashing or data is getting corrupted.

6) Delete cookies: Allow the site to write the cookies, then close all browsers and manually delete all cookies for the web site under test. Access the web pages and check the behavior of the pages.

7) Corrupt the cookies: Corrupting a cookie is easy. You know where cookies are stored. Manually edit the cookie in Notepad and change the parameters to some vague values, e.g. alter the cookie content, the name of the cookie or the expiry date of the cookie, and see the site functionality. In some cases corrupted cookies allow the data inside them to be read by another domain. This should not happen with your web site's cookies. Note that cookies written by one domain, say rediff.com, can't be accessed by another domain, say yahoo.com, unless and until the cookies are corrupted and someone is trying to hack the cookie data.

8) Checking the deletion of cookies from your web application page: Sometimes a cookie written by a domain, say rediff.com, may be deleted by the same domain but by a different page under that domain. This is the general case if you are testing an 'action tracking' web portal. An action tracking or purchase tracking pixel is placed on the action web page, and when any action or purchase is made by the user, the cookie written to disk gets deleted to avoid multiple action logging from the same cookie. Check that reaching your action or purchase page deletes the cookie properly and no more invalid actions or purchases get logged from the same user.

9) Cookie testing on multiple browsers: This is an important case: check if your web application page writes cookies properly on different browsers as intended and whether the site works properly using these cookies. You can test your web application on the major browsers like Internet Explorer (various versions), Mozilla Firefox, Netscape, Opera etc.

10) If your web application is using cookies to maintain the login state of any user, then log in to your web application using some username and password. In many cases you can see the logged-in user ID parameter directly in the browser address bar. Change this parameter to a different value, say if the previous user ID is 100 then make it 101, and press enter. A proper access-denied message should be displayed to the user, and the user should not be able to see another user's account.

These are some major test cases to be considered while testing website cookies. You can write multiple test cases from these by performing various combinations. If you have a different application scenario, you can mention your test cases in the comments below.

Installation Testing

Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Installation Testing


Have you performed software installation testing? How was the experience? Well, installation testing (implementation testing) is quite an interesting part of the software testing life cycle.

Installation testing is like introducing a guest into your home. The new guest should be properly introduced to all the family members in order to make him feel comfortable. Installation of new software is quite like the above example.

If your installation is successful on the new system then the customer will definitely be happy, but what if things are completely the opposite? If installation fails, our program will not work on that system; not only this, it can leave the user's system badly damaged. The user might need to reinstall the full operating system.

In that case, will you make any impression on the user? Definitely not! Your first impression, and your chance to make a loyal customer, is ruined by incomplete installation testing. What do you need to do for a good first impression? Test the installer appropriately, with a combination of both manual and automated processes, on different machines with different configurations. The major concern of installation testing is time! It requires a lot of time to execute even a single test case. If you are going to test a big application installer, then think about the time required to perform so many test cases on different configurations.

We will see different methods to perform manual installer testing and some basic guidelines for automating the installation process.

To start installation testing, first decide on how many different system configurations you want to test the installation on. Prepare one basic hard disk drive. Format this HDD with the most common or default file system, and install the most common operating system (Windows) on it. Install some basic required components on this HDD. Each time, create images of this base HDD, and you can create other configurations on top of this base drive. Make one set of each configuration, like operating system and file format, to be used for further testing.

How can we use automation in this process? Well, make some systems dedicated to creating basic images of the base configuration (use software like Norton Ghost for creating exact images of an operating system quickly). This will save you tremendous time on each test case. For example, if the time to install one OS with the basic configuration is, say, 1 hour, then each test case on a fresh OS will require 1+ hour. But creating an image of the OS will hardly require 5 to 10 minutes, and you will save approximately 40 to 50 minutes!

You can use one operating system for multiple installation attempts, each time uninstalling the application and preparing the base state for the next test case. Be careful here: your uninstallation program should have been tested beforehand and should be working fine.

Installation testing tips with some broad test cases:

1) Use flow diagrams to perform installation testing. Flow diagrams simplify our task. See the example flow diagram for a basic installation testing test case.


Installation testing tips, with some broad test cases:

1) Use flow diagrams to perform installation testing. Flow diagrams simplify the task. See the example flow diagram for a basic installation testing test case. Add more test cases on top of this basic flow chart; for example, if your application is not a first release, add the different logical installation paths.

2) If a compact basic version of the application was previously installed, then in the next test case install the full application version to the same path used for the compact version.

3) If you are using a flow diagram to test the different files written to disk during installation, use the same flow diagram in reverse order to test the uninstallation of all the installed files.

4) Use flow diagrams to automate the testing effort; it is easy to convert the diagrams into automated scripts.


5) Test the installer scripts that check the required disk space. If the installer reports that 1 MB is required, verify whether exactly 1 MB is used or more disk space is consumed during installation. If more is used, flag this as an error.

6) Test the disk space requirement on different file system formats; for example, FAT16 will require more space than the more efficient NTFS or FAT32 file systems.

7) If possible, set up a dedicated system used only for creating disk images. As noted above, this will save your testing time.

8) Use a distributed testing environment for installation testing. A distributed environment saves time and lets you manage all the different test cases effectively from a single machine. A good approach is to create a master machine that drives several slave machines over the network, so you can start installations simultaneously on different machines from the master system.

9) Try to automate the routine of checking the number of files written to disk. You can maintain the list of files to be written in an Excel sheet and feed that list as input to an automated script that checks each and every path to verify a correct installation. (A sketch of such a script appears at the end of this article.)

10) Use freely available tools to verify registry changes after a successful installation, comparing the actual registry changes against your expected change list.

11) Forcefully interrupt the installation process partway through. Observe the behavior of the system and whether it recovers to its original state without any issues. You can test this "break of installation" at every installation step.

12) Disk space checking: this is the crucial check in the installation testing scenario. You can use both manual and automated methods for it. Manually, check the free disk space available on the drive before installation and the disk space reported by the installer script, to confirm that the installer calculates and reports disk space accurately; then check the disk space after installation to verify the actual usage. Run various combinations of disk space availability, using tools that automatically fill the disk during installation, and check the system's behavior under low disk space conditions.

13) As you check installation, test uninstallation as well. Before each new installation iteration, make sure that all the files written to disk are removed on uninstallation. Sometimes an uninstallation routine removes files only from the last upgraded installation, keeping the old version's files untouched. Also check the reboot option after uninstallation, both manually and by forcing no reboot.

I have addressed many areas of both manual and automated installation testing. There are still many areas to focus on, depending on the complexity of the software being installed. Important tasks not addressed here include installation over the network,


online installation, patch installation, database checking on installation, shared DLL installation and uninstallation, and so on.

I hope this article serves as a basic guideline for anyone having trouble getting started with software installation testing, whether manually or with automation.
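As promised in tip 9, here is a minimal sketch of such a file-verification script. It is purely illustrative: the manifest file name (expected_files.csv) and the one-path-per-row CSV format are assumptions, not part of the original article, which suggested keeping the list in an Excel sheet.

import csv
import os

# Minimal sketch (hypothetical): verify that every file the installer was
# expected to write actually exists on disk.
def verify_installed_files(manifest_path):
    missing = []
    with open(manifest_path, newline="") as f:
        for row in csv.reader(f):
            if row and not os.path.exists(row[0]):
                missing.append(row[0])
    return missing

if __name__ == "__main__":
    missing = verify_installed_files("expected_files.csv")
    if missing:
        print(f"FAIL: {len(missing)} expected files are missing: {missing}")
    else:
        print("PASS: all expected files are present")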

Important Testing Checklist
Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: Important testing Checklist

Software testing is the process of systematically running a software system or program to uncover errors or defects.

Software testing in organizations aiming for SEI Level 3+:
• An acknowledged competitive advantage contributing to business success
• Moved from ad hoc practice to an engineering discipline
• A carrier for faster product delivery, with version-controlled test plans and test cases
• Led to reliability prediction through defect metrics
• Focussed on defect elimination at the stage of defect induction itself

Testing phases:
– Unit Testing
– Integration Testing
– System Testing
– Acceptance Testing

All test plans and test cases shall be reviewed and approved before testing starts.

Unit Testing
– Lowest-level component test
– Key foundation for later levels of testing
– Detects 65% – 75% of all bugs
– Stand-alone test
– Ensures that each unit conforms to the DDD (Detailed Design Document)


Integration Testing
– An incremental series of tests of combinations or subassemblies of selected components in an overall system
– Incremental in that successively larger and more complex combinations of components are tested in sequence, proceeding from the unit level up to the fully integrated system
– Ensures that each module conforms to the HLDD (High-Level Design Document)

System Testing
– The highest level of application functionality testing, performed by the systems group
– Ensures conformance to the functional requirements as specified in the SRS (Software Requirements Specification)

Acceptance Testing
– An independent test performed by the users or QA prior to accepting the delivered system
– Ensures conformance to the functional requirements as specified in the URD (User Requirements Document)

Testing activities:
• Test Case Design
• Review of Test Case Design
• Testing
• Recording Testing Results
• Review and Sign-Off of Testing Results
• Defect Reporting

Testing documents and records:

Documents (SCIs)

– Test Plan for the Project
– Test Cases Documents

Records
– Test Report
– Defect Log
– Review Reports

Unit Test Plan (UTP)


• Plan for the entire Unit Testing in the project
• Identifies the units/sub-units covered during Unit Testing
• Each feature in the DDD is mapped to a Unit Test Case ID
• To be prepared at the DDD phase itself
• The test cases are designed and documented
• References to the Test Case document(s) shall be given in the UTP
• Deliverables at the Unit Testing phase shall be identified in the UTP
• Resources required shall be mentioned in the Project Plan
• Schedules shall be planned

Testing coverage shall be identified in the UTP:
• Path Coverage
• Statement Coverage
• Decision (Logic/Branch) Coverage
• Condition Coverage
• Decision/Condition Coverage
• Multiple-Condition Coverage
• Functionality testing
• User interface testing
• Regression testing

Unit Testing Coverage

Path Coverage
– Test cases will be written to cover all the possible paths of control flow through the program.
Statement Coverage
– Test cases will be written such that every statement in the program is executed at least once.
Decision (Logic/Branch) Coverage
– Test cases will be written such that each decision has a true and a false outcome at least once.
Condition Coverage
– Test cases will be written such that each condition in a decision takes on all possible outcomes at least once.
Decision/Condition Coverage
– Test cases will be written such that each condition in a decision takes on all possible outcomes at least once, each decision takes on all possible outcomes at least once, and each point of entry is invoked at least once.
Multiple-Condition Coverage
– Test cases will be written such that all possible combinations of condition outcomes in each decision, and all points of entry, are invoked at least once.
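To make these criteria concrete, here is a small illustrative sketch. The function and test values are hypothetical, not from the checklist itself.

# Hypothetical function: one decision made up of two conditions.
def grant_discount(is_member: bool, total: float) -> bool:
    if is_member and total > 100:  # decision with two conditions
        return True
    return False

# Statement coverage: every line runs at least once,
#   e.g. (True, 150) executes "return True" and (False, 50) executes "return False".
# Decision coverage: the if evaluates both True and False,
#   e.g. (True, 150) and (True, 50).
# Condition coverage: each condition takes both truth values at least once,
#   e.g. (True, 50) and (False, 150), ignoring short-circuit evaluation;
#   note that the decision itself is False in both of these tests.
# Multiple-condition coverage: all four combinations of
#   is_member in {True, False} and (total > 100) in {True, False}.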


Integration Test Plan (ITP)
• Plan for the entire Integration Testing in the project
• Identifies the modules/sub-modules covered during Integration Testing
• Each component of the HLDD is mapped to an Integration Test Case ID
• Recommended to be prepared at the HLD phase itself
• The test cases are designed and documented
• References to the Test Case document(s) shall be given in the ITP
• Deliverables at the Integration Testing phase shall be identified in the ITP
• Resources required shall be mentioned in the Project Plan
• Schedules shall be planned

Testing coverage shall be identified in the ITP:
• Functionality testing
• User interface testing
• Dependency (API) testing
• Smoke testing
• Capacity and volume testing
• Error / disaster handling and recovery
• Concurrent execution testing
• Equivalence partitioning
• Boundary-value analysis
• Cause-effect graphing
• Error guessing

Equivalence Partitioning
• The test cases are partitioned into equivalence classes such that:
– each test case invokes as many different input conditions as possible, in order to minimize the total number of test cases necessary
– if one test case in an equivalence class detects an error, all other test cases in the class would be expected to find the same error
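A small illustrative sketch of equivalence classes. The field and its valid range are hypothetical, chosen only to show one representative value standing in for each class.

# Hypothetical: an "age" field whose valid range is 18..60. One representative
# value per equivalence class stands in for the whole class.
age_classes = {
    "valid": 35,           # any value in 18..60
    "below_range": 5,      # any value below 18
    "above_range": 75,     # any value above 60
    "non_numeric": "abc",  # invalid type
}
# Boundary-value analysis (defined next) would add the edges: 17, 18, 60, 61.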


Boundary-value analysis
– Test cases explore boundary conditions
– Test cases consider input conditions as well as output conditions

Cause-effect graphing
– A technique to identify test cases by translating the specifications into a Boolean logic network
– A systematic method of generating test cases representing combinations of conditions

Error guessing
– Test cases are written from both intuition and experience, to uncover certain probable types of errors
– Test case design relies on the knack of "smelling out" errors

System Test Plan (STP)
• Plan for the entire System Testing in the project
• Identifies the features covered during System Testing
• Each feature in the SRS is mapped to a System Test Case ID
• Recommended to be prepared at the SRS phase itself
• The test cases are designed and documented
• References to the Test Case document(s) shall be given in the STP
• Deliverables at the System Testing phase shall be identified in the STP
• Resources required shall be mentioned in the Project Plan
• Schedules shall be planned

Coverage during System Testing:
• Functionality testing
• User interface testing
• Usability testing
• Volume testing
• Stress testing
• Security testing
• Performance testing
• Installation and upgrade testing
• Standards conformance testing
• Configuration testing
• Network and distributed environment testing
• Forward / backward compatibility testing
• Reliability testing
• Error / disaster handling and recovery testing
• Serviceability testing
• Documentation testing


• Procedure testing
• Localization testing

Acceptance Test Plan (ATP)
• Plan for the entire Acceptance Testing in the project
• Identifies the features covered during Acceptance Testing
• Each feature in the URD is mapped to an Acceptance Test Case ID
• Recommended to be prepared at the URD phase itself
• The test cases are designed and documented
• References to the Test Case document(s) shall be given in the ATP
• Deliverables at the Acceptance Testing phase shall be identified in the ATP
• Resources required shall be mentioned in the Project Plan
• Schedules shall be planned
• Alpha Test
• Beta Test
• All the tests covered under System Testing can be repeated as per user requirements

Terminology

Functionality Testing
– Determines whether each functionality mentioned in the SRS is actually implemented. The objective is to ensure that all the functional requirements documented in the SRS are accomplished.
User Interface Testing
– Focus is on testing the user interface, navigation, and negative user behavior.
Concurrent Execution Testing
– Focus is on testing with simple usage, standard usage, and boundary situations.
Volume Testing
– Ensures that the software can handle the volume of data specified in the SRS, and does not crash with heavy volumes of data but gives an appropriate message and/or makes a clean exit.
– Example: a compiler would be fed an absurdly large source program to compile.
Stress Testing


– Executes the software in a manner that demands resources in abnormal quantity, frequency, and volume, and verifies that the system either performs normally or displays a message about the limitations of the system.
Usability Testing
– An attempt to uncover the software's usability problems involving the human factor.
– Example: are the error messages meaningful and easy to understand?
Security Testing
– Attempts to verify that the protection mechanisms built into a system will in fact protect it from improper penetration.
– Example: a database management system's data security mechanisms.
Performance Testing
– Tests the run-time performance of software within the context of an integrated system.
– Example: the response times under certain configuration conditions.
Installation and Upgrade Testing
– Focus is on testing whether the user would be able to install and upgrade the system.
Configuration Testing
– Includes either or both of the following: testing the software with the different possible hardware configurations, and testing each possible configuration of the software.
Network and Distributed Environment Testing
– Focus is on testing the product in the required network and distributed environment.
Dependency (API) Testing
– Focus is on testing the API calls made by the system to other systems.
Localization Testing
– Focus is on testing problems associated with multiple-language support and conversion, and also hardware aspects.
Reliability Testing
– The various software testing processes share the goal of testing the software's reliability. Reliability Testing, as a part of System Testing, encompasses the testing of any specific reliability factors stated explicitly in the SRS.


Error / Disaster Handling and Recovery Testing
– Forces the software to fail in a variety of ways and verifies that the system recovers and resumes processing.
Serviceability Testing
– Covers the serviceability or maintainability characteristics of the software.
– Examples: service aids to be provided with the system (e.g., storage-dump programs, diagnostic programs); the maintenance procedures for the system.
Documentation Testing
– Concerned with the accuracy of the user documentation. This involves:
• reviewing the user documentation for accuracy and clarity
• testing the examples illustrated in the user documentation, by preparing test cases on the basis of these examples and testing the system
Smoke Testing
– Focus is on testing the system releases as the system is being built, to uncover critical and showstopper errors.
Procedure Testing
– If the software forms part of a large and not completely automated system, the interfaces of the developed software with the other components in the larger system shall be tested. These may include procedures to be followed by the human operator, the database administrator, and the terminal user. These procedures are tested as part of System Testing.
Standards Conformance Testing
– Focus is on testing whether the product conforms to prescribed and published standards.
Alpha Test
– Within a vendor, the last level of internal test prior to a limited beta release.
Beta Test


– The vendor's equivalent of a pilot test (usually with a greater number of participating sites than for a pilot).
Equivalence Test
– A test using only a small sample of all possible test conditions, chosen to uncover almost as many defects as an exhaustive test would have uncovered.
– The key question is: what subset of all possible test cases has the highest probability of detecting the most errors?

Regression Testing
• A comprehensive re-test of an entire system:
– after system delivery, when a modification has been made, or
– before delivery, at the end of system test, after all test cases have been passed but not passed together against the final version of the system product.
• Regression testing requires that a regression test bed (a comprehensive set of reusable system test cases) be available throughout the useful life of the delivered system. Functional acceptance tests form the core of this regression test bed.
• The regression test bed must be maintained to keep it aligned with the system as the system itself evolves; this maintenance may not be a trivial effort.
• If a careful determination is made that only certain portions or subsystems will be affected by a particular change (i.e., the subsystems are decoupled and insulated), then only a partial regression re-test of the affected portions is absolutely necessary.
• If errors are detected during ongoing system operation, test cases should be added to the existing regression test bed to detect any further recurrences of the error or related errors.
• The additional effort to build a regression test facility is relatively minor if it is done during system development: the attitude should be that test cases


are designed and organized to have an ongoing life after system delivery, not as one-time throw-aways.
• The decision to perform any particular regression test is based on an analysis of the specific risks and costs.
• Regression tests are more manageable and cost-effective when they are coordinated with scheduled releases, versus a piecemeal approach to system modification.
• Full regression testing should always be performed when the overall system architecture has been affected.
• A guideline for setting the boundaries of regression testing: include any interdependent, integrated applications; exclude tangential applications.

Assessing Test Effectiveness

Techniques that provide a means to assess the effectiveness, coverage, and robustness of a set of test cases:
– Error Seeding
– Mutation Analysis
Note: these techniques are talked about more than they are practiced. Many people see them as somewhat impractical because they can be difficult to apply well.

Error Seeding
Approach: inject a small number of representative defects into the baseline product, and measure the percentage that are uncovered by test strategy variations.
Purpose: determine the effectiveness of the test planning and execution, and predict (by extrapolation) how many real defects remain hidden in the product.
Example:

                    No. found    % found    Total defects
Seeded defects          21         75%       28 (actual)
Actual defects         241         75%      321 (projected)
Estimated unfound defects: 321 - 241 = 80
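A minimal sketch of the extrapolation arithmetic in the example above, using the example's own numbers:

# Error-seeding extrapolation, following the example above.
seeded_total = 28      # defects deliberately injected
seeded_found = 21      # seeded defects the tests uncovered
actual_found = 241     # real defects the same tests uncovered

detection_rate = seeded_found / seeded_total             # 0.75
projected_total = round(actual_found / detection_rate)   # ~321
remaining = projected_total - actual_found               # ~80
print(f"detection rate {detection_rate:.0%}, "
      f"projected {projected_total} defects, ~{remaining} still hidden")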


Mutation Analysis
Approach: create numerous (minor) variations of the baseline system or program. Then test each variation with the original, unchanged set of test cases, and determine how many of the tests behave as if the product had not been changed.
Purpose: mutations that do not test differently from the baseline product are carefully examined, to determine whether the tests are in fact inadequate.
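A tiny illustrative sketch of the idea. The function and its mutant are hypothetical examples, not from the checklist.

# Baseline predicate and a "mutant" with >= weakened to >.
def is_adult(age):
    return age >= 18

def is_adult_mutant(age):
    return age > 18

# A weak suite that never tests the boundary cannot tell them apart,
# so the mutant "survives" and exposes a gap in the tests.
weak_tests = [25, 10]
assert all(is_adult(t) == is_adult_mutant(t) for t in weak_tests)

# Adding the boundary value 18 "kills" the mutant, showing the suite improved.
assert is_adult(18) != is_adult_mutant(18)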

How can we write a good test case?
Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics | Tags: good test case, Testing Interview Questions

How can we write a good test case?

Essentially, a test case is a document that carries a test case ID number, a title, the type of test being conducted, the input, the action or event to be performed, the expected output, and whether the test case achieved the desired output (Yes/No).

Test cases are based on the test plan, which covers each module and what is to be tested in each module. Each action in a module is further divided into testable components, from which the test cases are derived.

Since a test case normally handles a single event at a time, it can be called a good test case as long as it reflects its relation to the test plan. It does not matter whether the event passes or fails; as long as the component to be tested is addressed and can be related back to the test plan, the test case can be called a good test case.

 

unique-test-case-id: Test Case Title

Purpose: a short sentence or two about the aspect of the system that is being tested. If this gets too long, break the test case up or put more information into the feature descriptions.

Prereq: assumptions that must be met before the test case can be run. E.g., "logged in", "guest login allowed", "user testuser exists".

Test Data: a list of variables and their possible values used in the test case. You can list specific values or describe value ranges. The test case should be performed once for each combination of values. These values are written in set notation, one per line (a sketch that enumerates these combinations follows the steps below). E.g.:
loginID = {valid loginID, invalid loginID, valid email, invalid email, empty}
password = {valid, invalid, empty}

Steps: steps to carry out the test. See the step formatting rules below.
1. visit LoginPage
2. enter userID
3. enter password
4. click login


5. see the terms of use page
6. click the agree radio button at the page bottom
7. click the submit button
8. see PersonalPage
9. verify that the welcome message shows the correct username
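Since the Test Data section above is written in set notation, the combinations can be enumerated mechanically: 5 loginID values times 3 password values gives 15 runs of this one test case. A small sketch, reusing only the value labels listed above:

# Enumerate the test-data combinations implied by the set notation above.
from itertools import product

loginID = ["valid loginID", "invalid loginID", "valid email", "invalid email", "empty"]
password = ["valid", "invalid", "empty"]

for run, (lid, pwd) in enumerate(product(loginID, password), start=1):
    print(f"run {run:2d}: loginID={lid!r}, password={pwd!r}")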

 

 

A test plan is a systematic approach to testing a system such as a machine or software. The plan typically contains a detailed understanding of what the eventual workflow will be.

 


Test plans in software development

In software testing, a test plan gives detailed testing information regarding an upcoming testing effort, including

 

Scope of testing.

Schedule.

Test Deliverables.

Release Criteria.

Risks and Contingencies.

 


Test plan template, based on IEEE 829 format

Test Plan Identifier (TPI).

References

Introduction

Test Items

Software Risk Issue

Features to be Tested

Features not to be Tested

Approach

Item Pass/Fail Criteria

Entry & Exit Criteria

Suspension Criteria and Resumption Requirements

Test Deliverables

Remaining Test Tasks

Environmental Needs

Staffing and Training Needs

Responsibilities

Planning Risks and Contingencies

Approvals

 

 

 

 

Test plan identifier


For example: "Master plan for 3A USB Host Mass Storage Driver TP_3A1.0"

 

Some type of unique, company-generated number to identify this test plan, its level, and the level of software that it is related to. Preferably the test plan level will be the same as the related software level. The number may also identify whether the test plan is a master plan, a level plan, an integration plan, or whichever plan level it represents. This is to assist in coordinating software and testware versions within configuration management.

Keep in mind that test plans are like other software documentation: they are dynamic in nature and must be kept up to date. Therefore, they will have revision numbers.

You may want to include author and contact information, including the revision history, as part of either the identifier section or as part of the introduction.

 

 References

List all documents that support this test plan.

 Documents that are referenced include:

 Project Plan.

System Requirements specifications.

High Level design document.

Detail design document.

Development and Test process standards.

Methodology.

Low level design.

 

 Introduction

State the purpose of the Plan, possibly identifying the level of the plan (master etc.). This is essentially the executive

summary part of the plan.

 You may want to include any references to other plans, documents or items that contain information relevant to this

project/process.


Identify the objective or scope of the plan in relation to the software project plan it relates to. Other items may include resource and budget constraints, the scope of the testing effort, how testing relates to other evaluation activities (analysis and reviews), and possibly the process to be used for change control and for communication and coordination of key activities.

As this is the "Executive Summary", keep the information brief and to the point.

 

The intention of the project should also be included.

 

 

Test items (functions)

These are the things you intend to test within the scope of this test plan: essentially, a list of what is to be tested. This can be developed from the software application inventories as well as other sources of documentation and information.

This can be controlled through a local Configuration Management (CM) process if you have one. This information includes version numbers and configuration requirements where needed (especially if multiple versions of the product are supported). It may also include key delivery schedule issues for critical elements.

 This section can be oriented to the level of the test plan. For higher levels it may be by application or functional

area, for lower levels it may be by program, unit, module or build.

 

Software risk issues

Identify what software is to be tested and what the critical areas are, such as:

 Delivery of a third party product.

New version of interfacing software.

Ability to use and understand a new package/tool, etc.

Extremely complex functions.

Modifications to components with a past history of failure.

Poorly documented modules or change requests.


There are some inherent software risks such as complexity; these need to be identified.

 

Safety.

Multiple interfaces.

Impacts on Client.

Government regulations and rules.

Another key area of risk is a misunderstanding of the original requirements. This can occur at the management, user

and developer levels. Be aware of vague or unclear requirements and requirements that cannot be tested.

 

The past history of defects (bugs) discovered during Unit testing will help identify potential areas within the

software that are risky. If the unit testing discovered a large number of defects or a tendency towards defects in a

particular area of the software, this is an indication of potential future problems. It is the nature of defects to cluster

and clump together. If it was defect ridden earlier, it will most likely continue to be defect prone.

One good approach to defining where the risks are is to hold several brainstorming sessions. Start with ideas such as: "What worries me about this project/application?"

 

Features to be tested

This is a listing of what is to be tested from the user’s viewpoint of what the system does. This is not a technical

description of the software, but a USER’S view of the functions.

 

Set the level of risk for each feature. Use a simple rating scale such as High, Medium, and Low (H, M, L). These types of levels are understandable to a user. You should be prepared to discuss why a particular level was chosen.

 Sections 4 and 6 are very similar, and the only true difference is the point of view. Section 4 is a technical type

description including version numbers and other technical information and Section 6 is from the User’s viewpoint.

Users do not understand technical software terminology; they understand functions and processes as they relate to

their jobs.

 Features not to be tested


This is a listing of what is ‘not’ to be tested from both the user’s viewpoint of what the system does and a

configuration management/version control view. This is not a technical description of the software, but a user’s view

of the functions.

 

Identify why the feature is not to be tested; there can be any number of reasons.

 Not to be included in this release of the Software.

Low risk, has been used before and was considered stable.

Will be released but not tested or documented as a functional part of the release of this version of the software.

Sections 6 and 7 are directly related to Sections 5 and 17. What will and will not be tested are directly affected by

the levels of acceptable risk within the project, and what does not get tested affects the level of risk of the project.

 

 Approach (strategy)

This is your overall test strategy for this test plan; it should be appropriate to the level of the plan (master,

acceptance, etc.) and should be in agreement with all higher and lower levels of plans. Overall rules and processes

should be identified. It is important to have instruction as to what is necessary in a test plan before trying to create

one’s own strategy. Make sure that you are apprenticed in this area before trying to teach yourself this important

step in engineering.

 

Are any special tools to be used and what are they?

Will the tool require special training?

What metrics will be collected?

Which level is each metric to be collected at?

How is Configuration Management to be handled?

How many different configurations will be tested?

Hardware

Software

Combinations of HW, SW and other vendor packages


What levels of regression testing will be done and how much at each test level?

Will regression testing be based on severity of defects detected?

How will elements in the requirements and design that do not make sense or are untestable be processed?

If this is a master test plan the overall project testing approach and coverage requirements must also be identified.

 

Specify if there are special requirements for the testing.

 

Only the full component will be tested.

A specified segment or grouping of features/components must be tested together.

Other information that may be useful in setting the approach is:

 

MTBF, Mean Time Between Failures – if this is a valid measurement for the test involved and if the data is

available.

SRE, Software Reliability Engineering – if this methodology is in use and if the information is available.

How will meetings and other organizational processes be handled?

  

Item pass/fail criteria

Specify the criteria to be used to determine whether each test item has passed or failed, including show-stopper issues. Show-stopper severity requires definition within each testing context.

 

Entry & exit criteria

Specify the criteria to be used to start testing and how you know when to stop the testing process.

  

Suspension criteria & resumption requirements


Suspension criteria specify the criteria to be used to suspend all or a portion of the testing activities while

resumption criteria specify when testing can resume after it has been suspended.

 

Unavailability of external dependent systems during execution.

When a defect is introduced that prevents any further testing.

Critical path deadline is missed so that the client will not accept delivery even if all testing is completed.

A specific holiday shuts down both development and testing.

System Integration Testing in the Integration environment may be resumed under the following circumstances:

 

When the external dependent systems become available again.

When a fix is successfully implemented and the Testing Team is notified to continue testing.

The contract is renegotiated with the client to extend delivery.

The holiday period ends.

Suspension criteria assume that testing cannot go forward and that going backward is also not possible. A failed build would not suffice, as you could generally continue to use the previous build. Most major or critical defects would also not constitute suspension criteria, as other areas of the system could continue to be tested.

 

Test deliverables

List the documents, reports, and charts that will be presented to stakeholders on a regular basis during testing and when testing has been completed.

 Remaining test tasks

If this is a multi-phase process or if the application is to be released in increments there may be parts of the

application that this plan does not address. These areas need to be identified to avoid any confusion should defects

be reported back on those future functions. This will also allow the users and testers to avoid incomplete functions

and prevent waste of resources chasing non-defects.

 


If the project is being developed as a multi-party process, this plan may only cover a portion of the total

functions/features. This status needs to be identified so that those other areas have plans developed for them and to

avoid wasting resources tracking defects that do not relate to this plan.

 

When a third party is developing the software, this section may contain descriptions of the test tasks belonging to both the internal groups and the external groups.

 Environmental needs

Are there any special requirements for this test plan, such as:

 

Special hardware such as simulators, static generators etc.

How will test data be provided? Are there special collection requirements or specific ranges of data that must be provided?

How much testing will be done on each component of a multi-part feature?

Special power requirements.

Specific versions of other supporting software.

Restricted use of the system during testing.

 

Staffing and training needs

Training on the application/system.

 

Training for any test tools to be used.

 

The Test Items and Responsibilities sections affect this section. What is to be tested and who is responsible for the

testing and training.

 


 Responsibilities

Who is in charge?

Don't leave people in charge of the test plan who have never done anything resembling a test plan before; this is vital, as they will learn nothing from it and the testing effort will fail.

 This issue includes all areas of the plan. Here are some examples:

 Setting risks.

Selecting features to be tested and not tested.

Setting overall strategy for this level of plan.

Ensuring all required elements are in place for testing.

Providing for resolution of scheduling conflicts, especially, if testing is done on the production system.

Who provides the required training?

Who makes the critical go/no go decisions for items not covered in the test plans?

Who is responsible for this risk?

 

Planning risks and contingencies

What are the overall risks to the project with an emphasis on the testing process?

 Lack of personnel resources when testing is to begin.

Lack of availability of required hardware, software, data or tools.

Late delivery of the software, hardware or tools.

Delays in training on the application and/or tools.

Changes to the original requirements or designs.

Complexities involved in testing the applications.

   

Specify what will be done for various events, for example:


 Requirements definition will be complete by January 1, 20XX, and, if the requirements change after that date, the

following actions will be taken:

 The test schedule and development schedule will move out an appropriate number of days. This rarely occurs, as

most projects tend to have fixed delivery dates.

The number of tests performed will be reduced.

The number of acceptable defects will be increased.

These two items could lower the overall quality of the delivered product.

Resources will be added to the test team.

The test team will work overtime (this could affect team morale).

The scope of the plan may be changed.

There may be some optimization of resources. This should be avoided, if possible, for obvious reasons.

Management is usually reluctant to accept scenarios such as the one above even though they have seen it happen in

the past.

 The important thing to remember is that, if you do nothing at all, the usual result is that testing is cut back or

omitted completely, neither of which should be an acceptable option.

 Approvals

Who can approve the process as complete and allow the project to proceed to the next level (depending on the level

of the plan)?

 At the master test plan level, this may be all involved parties.

 When determining the approval process, keep in mind who the audience is:

 The audience for a unit test level plan is different from that of an integration, system or master level plan.

The levels and type of knowledge at the various levels will be different as well.

Programmers are very technical but may not have a clear understanding of the overall business process driving the

project.

Users may have varying levels of business acumen and very little technical skill.


Always be wary of users who claim high levels of technical skills and programmers that claim to fully understand

the business process. These types of individuals can cause more harm than good if they do not have the skills they

believe they possess.

Glossary

Used to define terms and acronyms used in the document, and testing in general, to eliminate confusion and promote

consistent communications.


  Regional differences

There are often localized differences in the use of this term. In some locations, test plan can mean all of the tests that

need to be run. Purists would suggest that a collection of tests or test cases is a Test suite.

 Some locations would consider what is described above as a test strategy. This usage is generally localized to the

Indian market.

Some state that test strategy creation precedes test plan creation.

 

 Software Testing Portal

Contents

1 Test plans in software development

1.1 Test plan template, based on IEEE 829 format

1.1.1 Test plan identifier

1.1.2 References

1.1.3 Introduction

1.1.4 Test items (functions)

1.1.5 Software risk issues.

1.1.6 Features to be tested

1.1.7 Features not to be tested

https://softwaretestinginterviewfaqs.wordpress.com/category/manual-testing-basics/

Page 112: Testing overview

1.1.8 Approach (strategy)

1.1.9 Item pass/fail criteria

1.1.10 Entry & exit criteria

1.1.11 Suspension criteria & resumption requirements

1.1.12 Test deliverables

1.1.13 Remaining test tasks

1.1.14 Environmental needs

1.1.15 Staffing and training needs

1.1.16 Responsibilities

1.1.17 Planning risks and contingencies

1.1.18 Approvals

1.1.19 Glossary

1.2 Regional differences

2 Criticism of the overuse of test plans

3 Test plans in hardware development

4 IEEE 829-1998:

5 See also

6 External links

 

 

 

[edit] Test plans in software development

In software testing, a test plan gives detailed testing information regarding an upcoming testing effort, including

 

https://softwaretestinginterviewfaqs.wordpress.com/category/manual-testing-basics/

Page 113: Testing overview

Scope of testing.

Schedule.

Test Deliverables.

Release Criteria.

Risks and Contingencies.

 

[edit] Test plan template, based on IEEE 829 format

Test Plan Identifier (TPI).

References

Introduction

Test Items

Software Risk Issue

Features to be Tested

Features not to be Tested

Approach

Item Pass/Fail Criteria

Entry & Exit Criteria

Suspension Criteria and Resumption Requirements

Test Deliverables

Remaining Test Tasks

Environmental Needs

Staffing and Training Needs

Responsibilities

Planning Risks and Contingencies

https://softwaretestinginterviewfaqs.wordpress.com/category/manual-testing-basics/

Page 114: Testing overview

Approvals

  

 Test plan identifier

For example: “Master plan for 3A USB Host Mass Storage Driver TP_3A1.0″

 

Some type of unique company generated number to identify this test plan, its level and the level of software that it is

related to. Preferably the test plan level will be the same as the related software level. The number may also identify

whether the test plan is a Master plan, a Level plan, an integration plan or whichever plan level it represents. This is

to assist in coordinating software and testware versions within configuration management.

 Keep in mind that test plans are like other software documentation, they are dynamic in nature and must be kept up

to date. Therefore, they will have revision numbers.

 You may want to include author and contact information including the revision history information as part of either

the identifier section of as part of the introduction…

  References

List all documents that support this test plan

 

Documents that are referenced include:

 Project Plan.

System Requirements specifications.

High Level design document.

Detail design document.

Development and Test process standards.

Methodology.

Low level design.

 

Introduction

https://softwaretestinginterviewfaqs.wordpress.com/category/manual-testing-basics/

Page 115: Testing overview

State the purpose of the Plan, possibly identifying the level of the plan (master etc.). This is essentially the executive

summary part of the plan.

 

You may want to include any references to other plans, documents or items that contain information relevant to this

project/process.

 Identify the objective of the plan or scope of the plan in relation to the Software Project plan that it relates to. Other

items may include, resource and budget constraints, scope of the testing effort, how testing relates to other

evaluation activities (Analysis & Reviews), and possible the process to be used for change control and

communication and coordination of key activities.

 As this is the “Executive Summary” keep information brief and to the point.

 Intention of this project has to be included

  Test items (functions)

These are things you intend to test within the scope of this test plan. Essentially, something you will test, a list of

what is to be tested. This can be developed from the software application inventories as well as other sources of

documentation and information.

 This can be controlled on a local Configuration Management (CM) process if you have one. This information

includes version numbers, configuration requirements where needed, (especially if multiple versions of the product

are supported). It may also include key delivery schedule issues for critical elements. 

This section can be oriented to the level of the test plan. For higher levels it may be by application or functional

area, for lower levels it may be by program, unit, module or build.

  Software risk issues.

Identify what software is to be tested and what the critical areas are, such as:

 Delivery of a third party product.

New version of interfacing software.

Ability to use and understand a new package/tool, etc.

Extremely complex functions.

Modifications to components with a past history of failure.

Poorly documented modules or change requests.

https://softwaretestinginterviewfaqs.wordpress.com/category/manual-testing-basics/

Page 116: Testing overview

There are some inherent software risks such as complexity; these need to be identified.

 Safety.

Multiple interfaces.

Impacts on Client.

Government regulations and rules.

Another key area of risk is a misunderstanding of the original requirements. This can occur at the management, user

and developer levels. Be aware of vague or unclear requirements and requirements that cannot be tested.

 The past history of defects (bugs) discovered during Unit testing will help identify potential areas within the

software that are risky. If the unit testing discovered a large number of defects or a tendency towards defects in a

particular area of the software, this is an indication of potential future problems. It is the nature of defects to cluster

and clump together. If it was defect ridden earlier, it will most likely continue to be defect prone.

 One good approach to define where the risks are is to have several [brainstorming] sessions.

 Start with ideas, such as, what worries me about this project/application.

Features to be tested

This is a listing of what is to be tested from the user’s viewpoint of what the system does. This is not a technical

description of the software, but a USER’S view of the functions. 

Set the level of risk for each feature. Use a simple rating scale such as High, Medium and Low(H, M, L). These

types of levels are understandable to a User. You should be prepared to discuss why a particular level was chosen.

 Sections 4 and 6 are very similar, and the only true difference is the point of view. Section 4 is a technical type

description including version numbers and other technical information and Section 6 is from the User’s viewpoint.

Users do not understand technical software terminology; they understand functions and processes as they relate to

their jobs.

  Features not to be tested

This is a listing of what is ‘not’ to be tested from both the user’s viewpoint of what the system does and a

configuration management/version control view. This is not a technical description of the software, but a user’s view

of the functions.

 Identify why the feature is not to be tested, there can be any number of reasons.

 Not to be included in this release of the Software.

https://softwaretestinginterviewfaqs.wordpress.com/category/manual-testing-basics/

Page 117: Testing overview

Low risk, has been used before and was considered stable.

Will be released but not tested or documented as a functional part of the release of this version of the software.

Sections 6 and 7 are directly related to Sections 5 and 17. What will and will not be tested are directly affected by

the levels of acceptable risk within the project, and what does not get tested affects the level of risk of the project. 

Approach (strategy)

This is your overall test strategy for this test plan; it should be appropriate to the level of the plan (master,

acceptance, etc.) and should be in agreement with all higher and lower levels of plans. Overall rules and processes

should be identified. It is important to have instruction as to what is necessary in a test plan before trying to create

one’s own strategy. Make sure that you are apprenticed in this area before trying to teach yourself this important

step in engineering.

 

Are any special tools to be used and what are they?

Will the tool require special training?

What metrics will be collected?

Which level is each metric to be collected at?

How is Configuration Management to be handled?

How many different configurations will be tested?

Hardware

Software

Combinations of HW, SW and other vendor packages

What levels of regression testing will be done and how much at each test level?

Will regression testing be based on severity of defects detected?

How will elements in the requirements and design that do not make sense or are untestable be processed?

If this is a master test plan the overall project testing approach and coverage requirements must also be identified.

 Specify if there are special requirements for the testing.

 Only the full component will be tested.

https://softwaretestinginterviewfaqs.wordpress.com/category/manual-testing-basics/

Page 118: Testing overview

A specified segment of grouping of features/components must be tested together.

Other information that may be useful in setting the approach are:

 MTBF, Mean Time Between Failures – if this is a valid measurement for the test involved and if the data is

available.

SRE, Software Reliability Engineering – if this methodology is in use and if the information is available.

How will meetings and other organizational processes be handled?

 

 

[edit] Item pass/fail criteria

Show stopper issues. Specify the criteria to be used to determine whether each test item has passed or failed. Show

Stopper severity requires definition within each testing context.

  

Entry & exit criteria

Specify the criteria to be used to start testing and how you know when to stop the testing process.

  Suspension criteria & resumption requirements

Suspension criteria specify the criteria to be used to suspend all or a portion of the testing activities while

resumption criteria specify when testing can resume after it has been suspended.

 

Unavailability of external dependent systems during execution.

When a defect is introduced that cannot allow any further testing.

Critical path deadline is missed so that the client will not accept delivery even if all testing is completed.

A specific holiday shuts down both development and testing.

System Integration Testing in the Integration environment may be resumed under the following circumstances:

 

When the external dependent systems become available again.

https://softwaretestinginterviewfaqs.wordpress.com/category/manual-testing-basics/

Page 119: Testing overview

When a fix is successfully implemented and the Testing Team is notified to continue testing.

The contract is renegotiated with the client to extend delivery.

The holiday period ends.

Suspension criteria assumes that testing cannot go forward and that going backward is also not possible. A failed

build would not suffice as you could generally continue to use the previous build. Most major or critical defects

would also not constituted suspension criteria as other areas of the system could continue to be tested. 

 Test deliverables

List documents, reports, charts, that will be presented to stakeholders on a regular basis during testing and when

testing has been completed.

 

 

 Remaining test tasks

If this is a multi-phase process or if the application is to be released in increments there may be parts of the

application that this plan does not address. These areas need to be identified to avoid any confusion should defects

be reported back on those future functions. This will also allow the users and testers to avoid incomplete functions

and prevent waste of resources chasing non-defects.

 If the project is being developed as a multi-party process, this plan may only cover a portion of the total

functions/features. This status needs to be identified so that those other areas have plans developed for them and to

avoid wasting resources tracking defects that do not relate to this plan.

 

When a third party is developing the software, this section may contain descriptions of those test tasks belonging to

both the internal groups and the external groups..

 

 

 Environmental needs

Are there any special requirements for this test plan, such as:

Special hardware such as simulators, static generators etc.

https://softwaretestinginterviewfaqs.wordpress.com/category/manual-testing-basics/

Page 120: Testing overview

How will test data be provided. Are there special collection requirements or specific ranges of data that must be

provided?

How much testing will be done on each component of a multi-part feature?

Special power requirements.

An environment where there is more feedback than needs improvement and meets expectations

Specific versions of other supporting software.

Restricted use of the system during testing.

 Staffing and training needs

Training on the application/system.

Training for any test tools to be used.

The Test Items and Responsibilities sections affect this section. What is to be tested and who is responsible for the

testing and training.

 Responsibilities

Who is in charge?

Don’t leave people in charge of the test plan who have never done anything resembling a test plan before; This is

vital, they will learn nothing from it and the test will fail.

This issue includes all areas of the plan. Here are some examples:

Setting risks.

Selecting features to be tested and not tested.

Setting overall strategy for this level of plan.

Ensuring all required elements are in place for testing.

Providing for resolution of scheduling conflicts, especially, if testing is done on the production system.

Who provides the required training?

Who makes the critical go/no go decisions for items not covered in the test plans?

Who is responsible for this risk.

https://softwaretestinginterviewfaqs.wordpress.com/category/manual-testing-basics/

Page 121: Testing overview

Planning risks and contingencies

What are the overall risks to the project with an emphasis on the testing process?

Lack of personnel resources when testing is to begin.

Lack of availability of required hardware, software, data or tools.

Late delivery of the software, hardware or tools.

Delays in training on the application and/or tools.

Changes to the original requirements or designs.

Complexities involved in testing the applications

Specify what will be done for various events, for example:

Requirements definition will be complete by January 1, 20XX, and, if the requirements change after that date, the

following actions will be taken:

 

The test schedule and development schedule will move out an appropriate number of days. This rarely occurs, as

most projects tend to have fixed delivery dates.

The number of tests performed will be reduced.

The number of acceptable defects will be increased.

These two items could lower the overall quality of the delivered product.

Resources will be added to the test team.

The test team will work overtime (this could affect team morale).

The scope of the plan may be changed.

There may be some optimization of resources. This should be avoided, if possible, for obvious reasons.

Management is usually reluctant to accept scenarios such as the one above even though they have seen it happen in

the past.

The important thing to remember is that, if you do nothing at all, the usual result is that testing is cut back or omitted

completely, neither of which should be an acceptable option.


Approvals

Who can approve the process as complete and allow the project to proceed to the next level (depending on the level of the plan)?

At the master test plan level, this may be all involved parties.

When determining the approval process, keep in mind who the audience is:

The audience for a unit test level plan is different from that of an integration, system or master level plan.

The level and type of knowledge will differ at each of these levels as well.

Programmers are very technical but may not have a clear understanding of the overall business process driving the project.

Users may have varying levels of business acumen and very little technical skill.

Always be wary of users who claim high levels of technical skill and programmers who claim to fully understand the business process. These types of individuals can cause more harm than good if they do not have the skills they believe they possess.

Glossary

Used to define terms and acronyms used in the document, and in testing in general, to eliminate confusion and promote consistent communication.

Regional differences

There are often localized differences in the use of this term. In some locations, test plan can mean all of the tests that need to be run. Purists would suggest that a collection of tests or test cases is a test suite.

Some locations would consider what is described above as a test strategy. This usage is generally localized to the Indian market.

Some state that test strategy creation precedes test plan creation.

100 Manual Testing Interview FAQs

Q1. What is verification?


A: Verification ensures the product is designed to deliver all functionality to the customer. It typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings.

Q2. What is validation?

A: Validation ensures that functionality, as defined in requirements, is the intended behavior of the product. Validation typically involves actual testing and takes place after verification is completed.

Q3. What is a walkthrough?

A: A walkthrough is an informal meeting held for evaluation or informational purposes. A walkthrough is also a process at an abstract level: the process of inspecting software code by following paths through the code (as determined by input conditions and choices made along the way). The purpose of code walkthroughs is to ensure the code fits its purpose. Walkthroughs also offer opportunities to assess an individual’s or team’s competency.

Q4. What is an inspection?

A: An inspection is a formal meeting, more formalized than a walkthrough, and typically consists of 3-10 people including a moderator, a reader, a recorder (to make notes on the document) and the author of whatever is being reviewed. The subject of the inspection is typically a document, such as a requirements document or a test plan. The purpose of an inspection is to find problems and see what is missing, not to fix anything. The result of the meeting should be documented in a written report. Attendees should prepare for this type of meeting by reading through the document before the meeting starts; most problems are found during this preparation. Preparation for inspections is difficult, but it is one of the most cost-effective methods of ensuring quality, since bug prevention is more cost-effective than bug detection.

Q5. What is quality?

A: Quality software is software that is reasonably bug-free, delivered on time and within budget, meets requirements and expectations, and is maintainable. However, quality is a subjective term: it depends on who the customer is and their overall influence in the scheme of things. Customers of a software development project include end-users, customer acceptance test engineers, customer contract officers, customer management, the development organization’s management, test engineers, testers, salespeople, software engineers, stockholders and accountants. Each type of customer will have his or her own slant on quality: the accounting department might define quality in terms of profits, while an end-user might define quality as user-friendly and bug-free.

Q6. What is good code?

A: Good code is code that works, is free of bugs, and is readable and maintainable. Organizations usually have coding standards all developers should adhere to, but every programmer and software engineer has different ideas about what is best and what are too many or too few rules. We need to keep in mind that excessive use of rules can stifle both productivity and creativity. Peer reviews and code analysis tools can be used to check for problems and enforce standards.

Q7. What is good design?

A: Design could mean many things, but often refers to functional design or internal design. Good functional design is indicated by software whose functionality can be traced back to customer and end-user requirements. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable and maintainable; that is robust, with sufficient error handling and status logging capability; and that works correctly when implemented.

Q8. What is the software life cycle?

A: The software life cycle begins when a software product is first conceived and ends when it is no longer in use. It includes phases such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, re-testing and phase-out.

Q9. Why are there so many software bugs?

A: Generally speaking, there are bugs in software because of unclear requirements, software complexity, programming errors, changes in requirements, errors made in bug tracking, time pressure, poorly documented code and/or bugs in the tools used in software development.

Requirements are unclear when there is miscommunication about what the software should or shouldn’t do.

Software complexity: Windows-style interfaces, client-server and distributed applications, data communications, enormous relational databases and the sheer size of applications all contribute to the exponential growth in software and system complexity.

Programming errors occur because programmers and software engineers, like everyone else, can make mistakes.

As to changing requirements: in some fast-changing business environments, continuously modified requirements are a fact of life. Sometimes customers do not understand the effects of changes, or understand them but request them anyway. Changes require redesign of the software and rescheduling of resources; some of the work already completed has to be redone or discarded, and hardware requirements can be affected, too.

Bug tracking can itself introduce errors, because keeping track of many changes is complex.

Time pressure causes problems, because scheduling software projects is not easy and often requires a lot of guesswork; when deadlines loom and the crunch comes, mistakes are made.

Code documentation is tough to maintain, and it is also tough to modify code that is poorly documented; the result is bugs. Sometimes there is no incentive for programmers and software engineers to write clearly documented, understandable code. Sometimes developers get kudos for quickly turning out code, or feel they cannot have job security if everyone can understand the code they write, or believe that if the code was hard to write, it should be hard to read.

Software development tools, including visual tools, class libraries, compilers and scripting tools, can introduce their own bugs. Other times the tools are poorly documented, which can create additional bugs.

Q10. How do you introduce a new software QA process?

A: It depends on the size of the organization and the risks involved. For large organizations with high-risk projects, serious management buy-in is required and a formalized QA process is necessary. For medium-size organizations with lower-risk projects, management and organizational buy-in and a slower, step-by-step process are required. Generally speaking, QA processes should be balanced with productivity, in order to keep any bureaucracy from getting out of hand. For smaller groups or projects, an ad hoc process is more appropriate. A lot depends on team leads and managers; feedback to developers and good communication are essential among customers, managers, developers, test engineers and testers. Regardless of the size of the company, the greatest value for effort is in managing requirement processes, where the goal is requirements that are clear, complete and testable.

Q11. Give me five common problems that occur during software development.

A: Poorly written requirements, unrealistic schedules, inadequate testing, adding new features after development is underway, and poor communication.

Requirements are poorly written when they are unclear, incomplete, too general, or not testable; therefore there will be problems.

The schedule is unrealistic if too much work is crammed into too little time.

Software testing is inadequate if no one knows whether or not the software is any good until customers complain or the system crashes.

It’s extremely common that new features are added after development is underway.

Miscommunication either means the developers don’t know what is needed or customers have unrealistic expectations; either way, problems are guaranteed.

Q12. Do automated testing tools make testing easier?

A: Yes and no. For larger projects, or ongoing long-term projects, they can be valuable. But for small projects, the time needed to learn and implement them is usually not worthwhile. A common type of automated tool is the record/playback type. For example, a test engineer clicks through all combinations of menu choices, dialog box choices, buttons, etc. in a GUI and has an automated testing tool record and log the results. The recording is typically in the form of text, based on a scripting language that the testing tool can interpret. If a change is made (e.g. new buttons are added, or some underlying code in the application is changed), the application is then re-tested by just playing back the recorded actions and comparing them to the logged results in order to check the effects of the change. One problem with such tools is that if there are continual changes to the product being tested, the recordings have to be changed so often that it becomes a very time-consuming task to continuously update the scripts. Another problem with such tools is that interpretation of the results (screens, data, logs, etc.) can be a time-consuming task.

Q13. Give me five solutions to problems that occur during software development.

A: Solid requirements, realistic schedules, adequate testing, firm requirements, and good communication.

Ensure the requirements are solid: clear, complete, detailed, cohesive, attainable and testable. All players should agree to the requirements. Use prototypes to help nail down requirements.

Have schedules that are realistic. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes and documentation. Personnel should be able to complete the project without burning out.

Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan for sufficient time for both testing and bug fixing.

Avoid new features. Stick to initial requirements as much as possible. Be prepared to defend the design against changes and additions once development has begun, and be prepared to explain the consequences. If changes are necessary, ensure they’re adequately reflected in related schedule changes. Use prototypes early on so customers’ expectations are clarified and customers can see what to expect; this will minimize changes later on.

Communicate. Require walkthroughs and inspections when appropriate; make extensive use of e-mail, networked bug-tracking tools and change management tools. Ensure documentation is available and up-to-date. Use documentation that is electronic, not paper. Promote teamwork and cooperation.

Q14. What makes a good test engineer?

A: Rob Davis is a good test engineer because he:

Has a “test to break” attitude;

Takes the point of view of the customer;

Has a strong desire for quality;

Has an attention to detail;

Is tactful and diplomatic; and

Has good communication skills, both oral and written. He also has previous software development experience.

Good test engineers have a “test to break” attitude. We, good test engineers, take the point of view of the customer and have a strong desire for quality and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, as is an ability to communicate with both technical and non-technical people. Previous software development experience is also helpful: it provides a deeper understanding of the software development process, gives the test engineer an appreciation for the developers’ point of view and reduces the learning curve in automated test tool programming.

Q15. What makes a good QA engineer?

A: The same qualities a good test engineer has are useful for a QA engineer. Additionally, good QA engineers understand the entire software development process and how it fits into the business approach and the goals of the organization. Communication skills and the ability to understand various sides of issues are also important. Rob Davis has all of these qualities.

Q16. What makes a good resume?

A: On the subject of resumes, there seems to be an unending discussion of whether you should or shouldn’t have a one-page resume. The following are some of the comments I have personally heard: “Well, Joe Blow (car salesman) said I should have a one-page resume.” “Well, I read a book and it said you should have a one-page resume.” “I can’t really go into what I really did because if I did, it’d take more than one page on my resume.” “Gosh, I wish I could put my job at IBM on my resume, but if I did it’d make my resume more than one page, and I was told to never make the resume more than one page long.” “I’m confused, should my resume be more than one page? I feel like it should, but I don’t want to break the rules.” Or, here’s another comment: “People just don’t read resumes that are longer than one page.”

So what’s the answer? There is no scientific answer about whether a one-page resume is right or wrong. It all depends on who you are and how much experience you have. The first thing to look at is the purpose of a resume: to get you an interview. If the resume is getting you interviews, it is a good resume. If it isn’t, change it.

The biggest mistake you can make on your resume is to make it hard to read. For one, scanners don’t like odd resumes. Small fonts can make your resume harder to read; some candidates use a 7-point font so they can get the resume onto one page. Big mistake. Two, resume readers do not like eye strain either; if the resume is mechanically challenging, they just throw it aside for one that is easier on the eyes. Three, there are lots of resumes out there these days, and that is also part of the problem. Four, in light of the current scanning scenario, more than one page is not a deterrent, because many will scan your resume into their database; once the resume is in there and searchable, you have accomplished one of the goals of resume distribution. Five, resume readers don’t like to guess, and most won’t call you to clarify what is on your resume.

Generally speaking, your resume should tell your story. If you’re a college graduate looking for your first job, a one-page resume is just fine. If you have a longer story, the resume needs to be longer. Put your experience on the resume so resume readers can tell when and for whom you did what. Short resumes are not appropriate for people long on experience. The real audience for these short resumes is people with short attention spans and low IQ. I assure you that when your resume gets into the right hands, it will be read thoroughly.

Q17. What makes a good QA/Test Manager?

A: QA/Test Managers are familiar with the software development process; able to maintain enthusiasm in their team and promote a positive atmosphere; able to promote teamwork to increase productivity; able to promote cooperation between Software and Test/QA Engineers; have the people skills needed to promote improvements in QA processes; have the ability to withstand pressure and say *no* to other managers when quality is insufficient or QA processes are not being adhered to; able to communicate with technical and non-technical people; and able to run meetings and keep them focused.

Q18. What is the role of documentation in QA?

A: Documentation plays a critical role in QA. QA practices should be documented so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports and user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining documents and for determining which document has a particular piece of information. Use documentation change management, if possible.

Q19. What about requirements?

A: Requirement specifications are important; indeed, one of the most reliable methods of ensuring problems in a complex software project is to have poorly documented requirement specifications. Requirements are the details describing an application’s externally perceived functionality and properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable and testable. A non-testable requirement would be, for example, “user-friendly”, which is too subjective. A testable requirement would be something such as, “the product shall allow the user to enter their previously assigned password to access the application”. Care should be taken to involve all of a project’s significant customers in the requirements process. Customers could be in-house or external and could include end-users, customer acceptance test engineers, testers, customer contract officers, customer management, future software maintenance engineers, salespeople and anyone who could later derail the project if his or her expectations aren’t met; all of them should be included as customers, if possible. In some organizations, requirements may end up in high-level project plans, functional specification documents, design documents, or other documents at various levels of detail. No matter what they are called, some type of documentation with detailed requirements will be needed by test engineers in order to properly plan and execute tests. Without such documentation there will be no clear-cut way to determine if a software application is performing correctly.

Q20. What is a test plan?


A: A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the effort needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the why and how of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will be able to read it.

Q21. What is a test case?

A: A test case is a document that describes an input, action, or event and its expected result, in order to determine if a feature of an application is working correctly. A test case should contain particulars such as a:

Test case identifier;

Test case name;

Objective;

Test conditions/setup;

Input data requirements/steps; and

Expected results.

Please note, the process of developing test cases can help find problems in the requirements or design of an application, since it requires you to completely think through the operation of the application. For this reason, it is useful to prepare test cases early in the development cycle, if possible.
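To make the fields above concrete, here is a minimal sketch of a test case captured as a Python dataclass. The field names and the sample values are illustrative only, not a standard schema.

```python
# Illustrative sketch: the test case particulars listed above as a dataclass.
from dataclasses import dataclass

@dataclass
class TestCase:
    identifier: str               # test case identifier
    name: str                     # test case name
    objective: str                # what the test is meant to demonstrate
    setup: str                    # test conditions / setup
    steps: list                   # input data requirements / steps
    expected_results: list        # what a passing run must produce

# A hypothetical example for a login feature.
login_case = TestCase(
    identifier="TC-042",
    name="Login with valid credentials",
    objective="Verify a registered user can sign in",
    setup="User 'alice' exists with a known password",
    steps=["Open the login page",
           "Enter the username and password",
           "Click Sign In"],
    expected_results=["User lands on the dashboard",
                      "No error message is shown"],
)
```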

Q22. What should be done after a bug is found?

A: When a bug is found, it needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested. Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing to check that the fixes didn’t create other problems elsewhere. If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it.
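As a companion to the test case sketch above, here is one possible shape for the defect record described in this answer. The fields and status values are assumptions for illustration; real problem-tracking tools define their own schemas.

```python
# Illustrative sketch: a defect record carrying the determinations above
# (severity, regression scope) so that fixes get re-tested.
from dataclasses import dataclass, field

@dataclass
class Defect:
    defect_id: str
    summary: str
    steps_to_reproduce: list
    severity: str                     # e.g. "critical", "major", "minor"
    assigned_to: str
    status: str = "open"              # open -> fixed -> retested -> closed
    regression_scope: list = field(default_factory=list)

bug = Defect(
    defect_id="BUG-101",
    summary="Payment reversal posts twice",
    steps_to_reproduce=["Reverse payment #555", "Refresh the ledger"],
    severity="critical",
    assigned_to="dev-team-a",
    regression_scope=["ledger totals", "audit log"],
)
```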

Q23. What is configuration management?

A: Configuration management (CM) covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries and patches, along with the changes made to them and who makes the changes. Rob Davis has had experience with a full range of CM tools and concepts, and can easily adapt to your software tool and process needs.

Q24. What if the software is so buggy it can’t be tested at all?

A: In this situation the best bet is to have test engineers go through the process of reporting whatever bugs or problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules and indicates deeper problems in the software development process, such as insufficient unit testing, insufficient integration testing, poor design, or improper build or release procedures, managers should be notified and provided with some documentation as evidence of the problem.

Q25. How do you know when to stop testing?

A: This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:

Deadlines, e.g. release deadlines or testing deadlines;

Test cases completed with a certain percentage passed;

The test budget has been depleted;

Coverage of code, functionality, or requirements reaches a specified point;

The bug rate falls below a certain level; or

The beta or alpha testing period ends.
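The factors above can be combined into an explicit exit check. The sketch below is a toy illustration; the thresholds are invented for the example and would be set per project, not industry standards.

```python
# Toy illustration of "when to stop": hard limits first, then quality gates.
def stop_testing(pct_cases_passed, coverage_pct, open_critical_bugs,
                 budget_left, deadline_reached):
    if deadline_reached or budget_left <= 0:
        return True                       # hard limits force a stop
    return (pct_cases_passed >= 95.0      # enough cases passed...
            and coverage_pct >= 80.0      # ...with enough coverage...
            and open_critical_bugs == 0)  # ...and no critical bugs open

print(stop_testing(96.0, 85.0, 0, budget_left=10, deadline_reached=False))  # True
```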

Q26. What if there isn’t enough time for thorough testing?

A: Since it’s rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate for most software development projects. Use risk analysis to determine where testing should be focused. This requires judgment skills, common sense and experience. (A toy risk-ranking sketch follows the checklist below.) The checklist should include answers to the following questions:

Which functionality is most important to the project’s intended purpose?

Which functionality is most visible to the user?

Which functionality has the largest safety impact?

Which functionality has the largest financial impact on users?

Which aspects of the application are most important to the customer?

Which aspects of the application can be tested early in the development cycle?

Which parts of the code are most complex and thus most subject to errors?

Which parts of the application were developed in rush or panic mode?

Which aspects of similar/related previous projects caused problems?

Which aspects of similar/related previous projects had large maintenance expenses?

Which parts of the requirements and design are unclear or poorly thought out?

What do the developers think are the highest-risk aspects of the application?

What kinds of problems would cause the worst publicity?

What kinds of problems would cause the most customer service complaints?

What kinds of tests could easily cover multiple functionalities?

Which tests will have the best high-risk-coverage to time-required ratio?
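One simple way to act on those answers is to score each area and rank it. In this sketch the features, the 1-5 scores and the impact-times-likelihood formula are all assumptions for illustration.

```python
# Toy risk ranking: score each area, test the riskiest first.
features = {
    # feature: (impact 1-5, likelihood of failure 1-5)
    "payment processing": (5, 3),
    "report export":      (2, 4),
    "login":              (5, 2),
    "help pages":         (1, 1),
}

by_risk = sorted(features.items(),
                 key=lambda item: item[1][0] * item[1][1],  # impact * likelihood
                 reverse=True)

for name, (impact, likelihood) in by_risk:
    print(f"{name}: risk score {impact * likelihood}")
```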


Q27. What if the project isn’t big enough to justify extensive testing?

A: Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed, and the considerations listed under “What if there isn’t enough time for thorough testing?” apply. The test engineer then should do ad hoc testing, or write up a limited test plan based on the risk analysis.

Q28. What can be done if requirements are changing continuously?

A: Work with management early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance. It is helpful if the application’s initial design allows for some adaptability, so that later changes do not require redoing the application from scratch. Additionally, try to:

Ensure the code is well commented and well documented; this makes changes easier for the developers.

Use rapid prototyping whenever possible; this will help customers feel sure of their requirements and minimize changes.

In the project’s initial schedule, allow extra time commensurate with probable changes.

Move new requirements to a ‘Phase 2’ version of the application and use the original requirements for the ‘Phase 1’ version.

Negotiate to allow only easily implemented new requirements into the project; move more difficult new requirements into future versions of the application.

Ensure customers and management understand the scheduling impacts, inherent risks and costs of significant requirements changes. Then let management or the customers decide if the changes are warranted; after all, that’s their job.

Balance the effort put into setting up automated testing with the expected effort required to redo the tests to deal with changes.

Design some flexibility into automated test scripts.

Focus initial automated testing on application aspects that are most likely to remain unchanged.

Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing needs.

Design some flexibility into test cases; this is not easily done; the best bet is to minimize the detail in the test cases, or set up only higher-level generic-type test plans.

Focus less on detailed test plans and test cases and more on ad hoc testing, with an understanding of the added risk this entails.

Q29. What if the application has functionality that wasn’t in the requirements?

A: It may take serious effort to determine if an application has significant unexpected or hidden functionality, which would indicate deeper problems in the software development process. If the functionality isn’t necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer. If it is not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects areas such as minor improvements in the user interface, it may not be a significant risk.

Q30. How can software QA processes be implemented without stifling productivity?

A: Implement QA processes slowly over time. Use consensus to reach agreement on processes, and adjust and experiment as the organization grows and matures. Productivity will be improved instead of stifled: problem prevention will lessen the need for problem detection, panics and burnout will decrease, and there will be improved focus and less wasted effort. At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize the time required in meetings and promote training as part of the QA process. However, no one, especially talented technical types, likes bureaucracy, and in the short run things may slow down a bit. A typical scenario would be that more days of planning and development are needed, but less time is required for late-night bug fixing and calming of irate customers.

Q34. What is software quality assurance?

A: Software Quality Assurance, when Rob Davis does it, is oriented to *prevention*. It involves the entire software development process: monitoring and improving the process, making sure any agreed-upon standards and procedures are followed and ensuring problems are found and dealt with. Software Testing, when performed by Rob Davis, is oriented to *detection*: it involves the operation of a system or application under controlled conditions and evaluation of the results. Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they’re the combined responsibility of one group or individual. Also common are project teams that include a mix of test engineers, testers and developers who work closely together, with overall QA processes monitored by project managers. It depends on what best fits your organization’s size and business structure. Rob Davis can provide QA and/or software QA; this document details some aspects of how he can provide software testing/QA services.

Q35. What is quality assurance?

A: Quality Assurance ensures all parties concerned with the project adhere to the processes and procedures, standards and templates, and test readiness reviews. Rob Davis’ QA service depends on the customers and projects. A lot will depend on team leads or managers, feedback to developers and communications among customers, managers, developers, test engineers and testers.

Q36. Process and procedures – why follow them?


A: Detailed and well-written processes and procedures ensure the correct steps are being executed to facilitate the successful completion of a task. They also ensure a process is repeatable. Once Rob Davis has learned and reviewed a customer’s business processes and procedures, he will follow them. He will also recommend improvements and/or additions.

Q37. Standards and templates – what is supposed to be in a document?

A: All documents should be written to a certain standard and template. Standards and templates maintain document uniformity. They also help readers learn where information is located, making it easier to find what they want. Lastly, with standards and templates, information will not be accidentally omitted from a document. Once Rob Davis has learned and reviewed your standards and templates, he will use them. He will also recommend improvements and/or additions.

Q38. What are the different levels of testing?

A: Rob Davis has expertise in testing at all of the testing levels listed below. At each test level, he documents the results. Each level of testing is considered either black box or white box testing.

Q39. What is black box testing?

A: Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box tests are based on requirements and functionality.

Q40. What is white box testing?

A: White box testing is based on knowledge of the internal logic of an application’s code. Tests are based on coverage of code statements, branches, paths and conditions.

Q41. What is unit testing?

A: Unit testing is the first level of dynamic testing and is first the responsibility of the developers and then that of the test engineers. Unit testing is considered complete when the expected test results are met or the differences are explainable/acceptable.
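A minimal sketch of what a developer-level unit test might look like, assuming a pytest-style test runner. The function and test names are invented for the example; note how the two tests cover both branches of the code, in the white box spirit of Q40.

```python
# The code under test: members get 10% off, everyone else pays full price.
def apply_discount(total, is_member):
    if is_member:
        return round(total * 0.9, 2)
    return total

# One unit test per branch; a runner such as pytest would collect and run these.
def test_member_gets_discount():
    assert apply_discount(100.0, True) == 90.0

def test_non_member_pays_full_price():
    assert apply_discount(100.0, False) == 100.0
```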

Q42. What is parallel/audit testing?

A: Parallel/audit testing is testing in which the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations correctly.

Q43. What is functional testing?

A: Functional testing is a black-box type of testing geared to the functional requirements of an application. Test engineers *should* perform functional testing.

Q44. What is usability testing?

A: Usability testing is testing for ‘user-friendliness’. Clearly this is subjective and depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions and other techniques can be used. Programmers and developers are usually not appropriate as usability testers.

Q45. What is incremental integration testing?

A: Incremental integration testing is continuous testing of an application as new functionality is added. This may require that various aspects of an application’s functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed. This type of testing may be performed by programmers, software engineers, or test engineers.

Q46. What is integration testing?

A: Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure that distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete when actual results and expected results are either in line or differences are explainable/acceptable based on client input.

Q47. What is system testing?

A: System testing is black box testing performed by the test team; at the start of system testing the complete system is configured in a controlled environment. The purpose of system testing is to validate an application’s accuracy and completeness in performing the functions as designed. System testing simulates real-life scenarios in a “simulated real life” test environment and tests all functions of the system that are required in real life. System testing is deemed complete when actual results and expected results are either in line or differences are explainable or acceptable, based on client input. Upon completion of integration testing, system testing is started. Before system testing, all unit and integration test results are reviewed by Software QA to ensure all problems have been resolved. For a higher level of testing it is important to understand unresolved problems that originate at the unit and integration test levels.

Q48. What is end-to-end testing?

A: Similar to system testing, the *macro* end of the test scale is testing a complete application in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems.

Q49. What is regression testing?

A: The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not “undone” any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for before testing proceeds to the next level.
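A minimal sketch of the baseline comparison just described: saved expected results are compared against the current run, and any discrepancy is flagged. The JSON file layout and names are assumptions for illustration.

```python
# Compare current outputs against a saved baseline; an empty result means
# nothing was "undone" by the release.
import json

def run_regression(baseline_path, current_results):
    with open(baseline_path) as f:
        baseline = json.load(f)            # {test_id: expected_output}
    return {
        test_id: {"expected": expected,
                  "actual": current_results.get(test_id)}
        for test_id, expected in baseline.items()
        if current_results.get(test_id) != expected
    }

# Hypothetical usage:
# discrepancies = run_regression("baseline.json", {"TC-042": "dashboard shown"})
# if discrepancies: raise SystemExit(f"Regression failures: {discrepancies}")
```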

Q50. What is sanity testing?

A: Sanity testing is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.

Q51. What is performance testing?

A: Although performance testing is described as a part of system testing, it can be regarded as a distinct level of testing. Performance testing verifies loads, volumes and response times, as defined by requirements.

Q52. What is load testing?

A: Load testing is testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system’s response time degrades or fails.
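A toy load-test sketch using only the Python standard library; the URL is a placeholder and the user counts are invented. Real load testing is normally done with dedicated tools, but the ramp-up idea is the same.

```python
# Ramp up concurrent requests and report the worst response time per step.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def timed_request(url):
    start = time.time()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.time() - start             # response time in seconds

def load_step(url, concurrent_users):
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        times = list(pool.map(timed_request, [url] * concurrent_users))
    print(f"{concurrent_users} users: worst response {max(times):.2f}s")

for users in (1, 10, 50):                  # increase load to find the breaking point
    load_step("http://localhost:8080/", users)  # placeholder URL
```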

Q53. What is installation testing?

A: Installation testing is testing of full, partial, upgrade, or install/uninstall processes. The installation test for a release is conducted with the objective of demonstrating production readiness. This test includes the inventory of configuration items (performed by the application’s System Administration), the evaluation of data readiness, and dynamic tests focused on basic system functionality. When necessary, a sanity test is performed following installation testing.

Q54. What is security/penetration testing?

A: Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.

Q55. What is recovery/error testing?

A: Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Q56. What is compatibility testing?

A: Compatibility testing is testing how well software performs in a particular hardware, software, operating system, or network environment.

Q57. What is comparison testing?

A: Comparison testing is testing that compares software weaknesses and strengths to those of competitors’ products.

Q58. What is acceptance testing?

A: Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to verify the system’s functionality and usability prior to the system being released to production. The acceptance test is the responsibility of the client/customer or project manager; however, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria.

Q59. What is alpha testing?


A: Alpha testing is testing of an application when development is nearing completion. Minor design changes can still be made as a result of alpha testing. Alpha testing is typically performed by a group that is independent of the design team but still within the company, e.g. in-house software test engineers or software QA engineers.

Q60. What is beta testing?

A: Beta testing is testing an application when development and testing are essentially completed and final bugs and problems need to be found before the final release. Beta testing is typically performed by end-users or others, not programmers, software engineers, or test engineers.

Q61. What testing roles are standard on most testing projects?

A: Depending on the organization, the following roles are more or less standard on most testing projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA Manager, System Administrator, Database Administrator, Technical Analyst, Test Build Manager and Test Configuration Manager. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of Technical Analyst, Test Build Manager and Test Configuration Manager.

Q62. What is a Test/QA Team Lead?

A: The Test/QA Team Lead coordinates the testing activity, communicates testing status to management and manages the test team.

Q63. What is a Test Engineer?

A: Test Engineers are engineers who specialize in testing. We, test engineers, create test cases, procedures and scripts and generate data. We execute test procedures and scripts, analyze standards of measurement, and evaluate the results of system/integration/regression testing. We also:

Speed up the work of the development staff;

Reduce your organization’s risk of legal liability;

Give you the evidence that your software is correct and operates properly;

Improve problem tracking and reporting;

Maximize the value of your software;

Maximize the value of the devices that use it;

Assure the successful launch of your product by discovering bugs and design flaws before users get discouraged, before shareholders lose their cool and before employees get bogged down;

Help the work of your development staff, so the development team can devote its time to building up your product;

Promote continual improvement;

Provide documentation required by the FDA, FAA, other regulatory agencies and your customers;

Save money by discovering defects early in the design process, before failures occur in production or in the field;

Save the reputation of your company by discovering bugs and design flaws before they damage the reputation of your company.

Q64. What is a Test Build Manager?

A: Test Build Managers deliver current software versions to the test environment, install the application’s software and apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Test Build Manager.

Q65. What is a System Administrator?

A: Test Build Managers, System Administrators and Database Administrators deliver current software versions to the test environment, install the application’s software and apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a System Administrator.

Q66. What is a Database Administrator?

A: Test Build Managers, System Administrators and Database Administrators deliver current software versions to the test environment, install the application’s software and apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Database Administrator.

Q67. What is a Technical Analyst?

A: Technical Analysts perform test assessments and validate system/functional test requirements. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Technical Analyst.

Q68. What is a Test Configuration Manager?

A: Test Configuration Managers maintain test environments, scripts, software and test data. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Test Configuration Manager.

Q69. What is a test schedule?

A: The test schedule is a schedule that identifies all tasks required for a successful testing effort: a schedule of all test activities and resource requirements.

Q70. What is software testing methodology?

A: One software testing methodology is the use of a three-step process of:

1. Creating a test strategy;

2. Creating a test plan/design; and

3. Executing tests.


This methodology can be used and molded to your organization’s needs. Rob Davis believes that using this methodology is important in the development and ongoing maintenance of his customers’ applications.

Q71. What is the general testing process?

A: The general testing process is the creation of a test strategy (which sometimes includes the creation of test cases), the creation of a test plan/design (which usually includes test cases and test procedures) and the execution of tests.

Q72. How do you create a test strategy?

A: The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.

Inputs for this process:

A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.

A description of the roles and responsibilities of the resources required for the test, and schedule constraints. This information comes from man-hours and schedules.

Testing methodology. This is based on known standards.

Functional and technical requirements of the application. This information comes from requirements, change requests, and technical and functional design documents.

Requirements that the system cannot provide, e.g. system limitations.

Outputs for this process:

An approved and signed-off test strategy document and test plan, including test cases.

Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.

Q73. How do you create a test plan/design?

A: Test scenarios and/or cases are prepared by reviewing functional requirements of the release and preparing logical groups of functions that can be further broken into test procedures. Test procedures define test conditions, data to be used for testing and expected results, including database updates, file outputs and report results. Generally speaking:

Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.

Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.

It is the test team that, with the assistance of developers and clients, develops test cases and scenarios for integration and system testing.


Test scenarios are executed through the use of test procedures or scripts.

Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.

Test procedures or scripts include the specific data that will be used for testing the process or transaction.

Test procedures or scripts may cover multiple test scenarios.

Test scripts are mapped back to the requirements, and traceability matrices are used to ensure each test is within scope (a small traceability sketch follows this answer).

Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment.

Some output data is also baselined for future comparison. Baselined data is used to support future application maintenance via regression testing.

A pretest meeting is held to assess the readiness of the application and of the environment and data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.

Inputs for this process:

Approved test strategy document.

Test tools, or automated test tools, if applicable.

Previously developed scripts, if applicable.

Test documentation problems uncovered as a result of testing.

A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. software design document, source code and software complexity data.

Outputs for this process:

Approved documents of test scenarios, test cases, test conditions and test data.

Reports of software design issues, given to software developers for correction.
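Here is a minimal sketch of the traceability idea mentioned above: each requirement maps to the scripts that cover it, so requirements without coverage stand out. The requirement and script IDs are invented for the example.

```python
# Requirements-to-tests traceability: flag requirements with no coverage.
coverage = {
    "REQ-1 reverse a payment":  ["TS-01", "TS-02"],
    "REQ-2 audit log entry":    ["TS-02"],
    "REQ-3 customer notified":  [],        # no script yet -> a coverage gap
}

uncovered = [req for req, scripts in coverage.items() if not scripts]
print("Requirements without tests:", uncovered)
```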

Q74. How do you execute tests?

A: Execution of tests is completed by following the test documents in a methodical manner. As each test procedure is performed, an entry is recorded in a test execution log to note the execution of the procedure and whether or not the test procedure uncovered any defects (one possible shape for such a log is sketched after this answer). Checkpoint meetings are held throughout the execution phase, daily if required, to address and discuss testing issues, status and activities. The output from the execution of test procedures is known as test results. Test results are evaluated by test engineers to determine whether the expected results have been obtained. All discrepancies/anomalies are logged, discussed with the software team lead, hardware test lead, programmers and software engineers, and documented for further investigation and resolution. Every company has a different process for logging and reporting bugs/defects uncovered during testing.


Pass/fail criteria are used to determine the severity of a problem, and results are recorded in a test summary report. The severity of a problem found during system testing is defined in accordance with the customer’s risk assessment and recorded in their selected tracking tool. Proposed fixes are delivered to the testing environment based on the severity of the problem. Fixes are regression tested, and flawless fixes are migrated to a new baseline. Following completion of the test, members of the test team prepare a summary report. The summary report is reviewed by the Project Manager, Software QA Manager and/or Test Team Lead. After a particular level of testing has been certified, it is the responsibility of the Configuration Manager to coordinate the migration of the release software components to the next test level, as documented in the Configuration Management Plan. The software is only migrated to the production environment after the Project Manager’s formal acceptance. The test team reviews test document problems identified during testing and updates documents where appropriate.

Inputs for this process:

Approved test documents, e.g. test plan, test cases, test procedures.

Test tools, including automated test tools, if applicable.

Developed scripts.

Changes to the design, i.e. change request documents.

Test data.

Availability of the test team and project team.

General and detailed design documents, i.e. requirements document, software design document.

Software that has been migrated to the test environment, i.e. unit-tested code, via the Configuration/Build Manager.

Test readiness document.

Document updates.

Outputs for this process:

Log and summary of the test results. Usually this is part of the test report. This needs to be approved and signed off, with revised testing deliverables.

Changes to the code, also known as test fixes.

Test document problems uncovered as a result of testing. Examples are requirements document and design document problems.

Reports on software design issues, given to software developers for correction. Examples are bug reports on code issues.

Formal record of test incidents, usually part of problem tracking.

Baselined package, also known as tested source and object code, ready for migration to the next level.
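As promised above, here is one possible shape for a test execution log entry, appended as a CSV row per procedure run. The columns and file name are assumptions for illustration.

```python
# Append one row per executed test procedure: when it ran, what it was,
# whether it passed, and any defects it raised.
import csv
import datetime

def log_execution(log_path, procedure_id, result, defect_ids=()):
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.datetime.now().isoformat(timespec="seconds"),
            procedure_id,
            result,                        # "pass" or "fail"
            ";".join(defect_ids),          # defects raised, if any
        ])

log_execution("execution_log.csv", "TP-07", "fail", ["BUG-101"])
```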


Q75. What testing approaches can you tell me about?

A: Each of the following represents a different testing approach: black box testing, white box testing, unit testing, incremental testing, integration testing, functional testing, system testing, end-to-end testing, sanity testing, regression testing, acceptance testing, load testing, performance testing, usability testing, install/uninstall testing, recovery testing, security testing, compatibility testing, exploratory testing, ad hoc testing, user acceptance testing, comparison testing, alpha testing, beta testing, and mutation testing.

Q76. What is stress testing?

A: Stress testing investigates the behavior of software (and hardware) under extraordinary operating conditions. For example, when a web server is stress tested, testing aims to find out how many users can be on-line at the same time without crashing the server. Stress testing tests the stability of a given system or entity by testing it beyond its normal operational capacity, in order to observe any negative results. For example, a web server may be stress tested using scripts, bots and various denial-of-service tools.

Q77. What is load testing?

A: Load testing simulates the expected usage of a software program by simulating multiple users that access the program’s services concurrently. Load testing is most useful and most relevant for multi-user systems and client/server models, including web servers. For example, the load placed on the system is increased above normal usage patterns in order to test the system’s response at peak loads.

Q79. What is the difference between performance testing and load testing?

A: Load testing is a blanket term that is used in many different ways across the professional software testing community. The term is often used synonymously with stress testing, performance testing, reliability testing and volume testing. Load testing generally stops short of stress testing: during stress testing the load is so great that errors are the expected results, though there is a gray area in between stress testing and load testing.

Q80. What is the difference between reliability testing and load testing?

A: Load testing is a blanket term that is used in many different ways across the professional software testing community. The term is often used synonymously with stress testing, performance testing, reliability testing and volume testing. Load testing generally stops short of stress testing: during stress testing the load is so great that errors are the expected results, though there is a gray area in between stress testing and load testing.

Q81. What is the difference between volume testing and load testing?

A: Again, the terms overlap. Volume testing (defined later in this document) subjects the system to a large volume of data, whereas load testing subjects it to a large number of concurrent users or transactions; load testing generally stops short of stress testing.

Q82. What is incremental testing?

A: Incremental testing is partial testing of an incomplete product. The goal of incremental testing is to provide early feedback to software developers.

Q83. What is software testing?

A: Software testing is a process that identifies the correctness, completeness, and quality of software. Strictly speaking, testing cannot establish the correctness of software: it can find defects, but it cannot prove there are no defects.

Q84. What is automated testing?

A: Automated testing is a formally specified and controlled testing approach in which tests are executed by tools rather than by hand.

Q85. What is alpha testing?

A: Alpha testing is the final testing before the software is released to the general public. First (and this is called the first phase of alpha testing), the software is tested by in-house developers, using either debugger software or hardware-assisted debuggers; the goal is to catch bugs quickly. Then (and this is called the second phase of alpha testing), the software is handed over to the software QA staff for additional testing in an environment that is similar to the intended use.

Q86. What is beta testing?

A: Following alpha testing, "beta versions" of the software are released to a group of people, and limited public tests are performed so that further testing can ensure the product has few bugs. Other times, beta versions are made available to the general public in order to receive as much feedback as possible; the goal is to benefit the maximum number of future users.

Q87. What is the difference between alpha and beta testing?

A: Alpha testing is performed by in-house developers and software QA personnel. Beta testing is performed by a few select prospective customers or by the general public.

Q88. What is clear box testing?

A: Clear box testing is the same as white box testing: a testing approach that examines the application's program structure and derives test cases from the application's program logic.

Q89. What is boundary value analysis?

A: Boundary value analysis is a technique for test data selection: a test engineer chooses values that lie along data extremes. Boundary values include maximum, minimum, just inside the boundaries, just outside the boundaries, typical values, and error values. The expectation is that, if a system works correctly for these extreme or special values, then it will work correctly for all values in between. An effective way to test code is to exercise it at its natural boundaries.
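To make this concrete, here is a minimal sketch (not from the original answer) of boundary value test cases for a hypothetical accept_quantity function, assumed here to accept integers from 1 to 100 inclusive:

import unittest

def accept_quantity(qty):
    """Hypothetical unit under test: valid quantities are 1..100 inclusive."""
    return isinstance(qty, int) and 1 <= qty <= 100

class BoundaryValueTests(unittest.TestCase):
    def test_values_on_and_around_the_boundaries(self):
        self.assertTrue(accept_quantity(1))      # minimum
        self.assertTrue(accept_quantity(2))      # just inside lower boundary
        self.assertFalse(accept_quantity(0))     # just outside lower boundary
        self.assertTrue(accept_quantity(100))    # maximum
        self.assertTrue(accept_quantity(99))     # just inside upper boundary
        self.assertFalse(accept_quantity(101))   # just outside upper boundary

    def test_typical_and_error_values(self):
        self.assertTrue(accept_quantity(50))     # typical value
        self.assertFalse(accept_quantity(-1))    # error value

if __name__ == "__main__":
    unittest.main()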

Q90. What is ad hoc testing?

A: Ad hoc testing is the least formal testing approach: testing without a formal test plan.

Q91. What is gamma testing?

A: Gamma testing is testing of software that has all the required features but did not go through all the in-house quality checks. Cynics tend to refer to such releases as "gamma testing".

Q92. What is glass box testing?

A: Glass box testing is the same as white box testing: a testing approach that examines the application's program structure and derives test cases from the application's program logic.

Q93. What is open box testing?

A: Open box testing is the same as white box testing: a testing approach that examines the application's program structure and derives test cases from the application's program logic.

Q94. What is black box testing?

A: Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself nor the "inner workings" of the software.

Q95. What is functional testing?


A: Functional testing is the same as black box testing: a type of testing that considers only externally visible behavior, neither the code itself nor the "inner workings" of the software.

Q96. What is closed box testing?

A: Closed box testing is the same as black box testing: a type of testing that considers only externally visible behavior, neither the code itself nor the "inner workings" of the software.

Q97. What is bottom-up testing?

A: Bottom-up testing is a technique for integration testing. A test engineer creates and uses test drivers for components that have not yet been developed, because, with bottom-up testing, low-level components are tested first. The objective of bottom-up testing is to call low-level components first, for testing purposes.
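A hedged sketch of the idea (the function names below are invented for illustration): the low-level component exists, the higher-level billing module that will eventually call it does not, so a test class acts as the test driver and calls the low-level component directly.

import unittest

# Low-level component, already developed.
def parse_amount(text):
    """Convert a string like '12.50' to an integer number of cents."""
    dollars, _, cents = text.partition(".")
    return int(dollars) * 100 + int(cents or 0)

# The high-level module that will eventually call parse_amount() does not
# exist yet, so this test class stands in as the driver.
class ParseAmountDriver(unittest.TestCase):
    def test_whole_dollars(self):
        self.assertEqual(parse_amount("12"), 1200)

    def test_dollars_and_cents(self):
        self.assertEqual(parse_amount("12.50"), 1250)

if __name__ == "__main__":
    unittest.main()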

Q98. What is software quality?

A: The quality of software varies widely from system to system. Some common quality attributes are stability, usability, reliability, portability, and maintainability. See the quality standard ISO 9126 for more information on this subject.

Q99. What does a test case template look like?

A: Software test cases are described in a document that specifies inputs, actions, or events and their expected results, in order to determine whether all features of an application are working correctly. Test case templates contain all particulars of every test case. Often these templates take the form of a table, for example a 6-column table where column 1 is the "Test Case ID Number", column 2 is the "Test Case Name", column 3 is the "Test Objective", column 4 is the "Test Conditions/Setup", column 5 is the "Input Data Requirements/Steps", and column 6 is the "Expected Results". All documents should be written to a certain standard and template: standards and templates maintain document uniformity, help readers learn where information is located, and ensure information is not accidentally omitted.
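As a quick illustration (not part of the original answer), the same 6-column template can be expressed as a Python data structure; the example row and its field values are hypothetical.

from dataclasses import dataclass

@dataclass
class TestCase:
    """One row of the 6-column test case template described above."""
    test_case_id: str
    test_case_name: str
    test_objective: str
    conditions_setup: str
    input_steps: str
    expected_results: str

# Hypothetical example row.
tc_001 = TestCase(
    test_case_id="TC-001",
    test_case_name="Login with valid credentials",
    test_objective="Verify a registered user can log in",
    conditions_setup="User 'jsmith' exists and is active",
    input_steps="1. Open login page 2. Enter credentials 3. Click Login",
    expected_results="User is taken to the home page",
)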

Q100. What is a software fault?

A: Software faults are hidden programming errors: errors in the correctness of the semantics of computer programs.

Q101. What is software failure?

A: A software failure occurs when the software does not do what the user expects.


40 Testing Interview Questions
Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics

What's Ad Hoc Testing? A testing approach where the tester tries to break the software by randomly trying the functionality of the software.
What's Accessibility Testing? Testing that determines whether software will be usable by people with disabilities.
What's Alpha Testing? Alpha testing is conducted at the developer's site, in a controlled environment, by the end user of the software.
What's Beta Testing? Testing the application after installation at the client's site.
What is Component Testing? Testing of individual software components (unit testing).
What's Compatibility Testing? Compatibility testing checks that the software is compatible with the other elements of the system.
What is Concurrency Testing? Multi-user testing geared towards determining the effects of accessing the same application code, module, or database records. It identifies and measures the level of locking, deadlocking, and use of single-threaded code and locking semaphores.
What is Conformance Testing? The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
What is Context Driven Testing? The context-driven school of software testing is a flavor of Agile testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.
What is Data Driven Testing? Testing in which the actions of a test case are parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in automated testing.


What is Conversion Testing? Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
What is Dependency Testing? Examines an application's requirements for pre-existing software, initial states, and configuration in order to maintain proper functionality.
What is Depth Testing? A test that exercises a feature of a product in full detail.

What is Dynamic Testing? Testing software through executing it. See also Static Testing.
What is Endurance Testing? Checks for memory leaks or other problems that may occur with prolonged execution.
What is End-to-End Testing? Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
What is Exhaustive Testing? Testing which covers all combinations of input values and preconditions for an element of the software under test.
What is Gorilla Testing? Testing one particular module or functionality heavily.
What is Installation Testing? Testing with the intent of determining whether the product is compatible with a variety of platforms and how easily it installs.

What is Localization Testing? This term refers to adapting software for a specific locality.
What is Loop Testing? A white box testing technique that exercises program loops.
What is Mutation Testing? Mutation testing is a method for determining whether a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine whether the 'bugs' are detected. Proper implementation requires large computational resources.
What is Monkey Testing? Testing a system or an application on the fly, i.e. just a few tests here and there, to ensure the system or application does not crash.
What is Positive Testing? Testing aimed at showing the software works. Also known as "test to pass". See also Negative Testing.
What is Negative Testing? Testing aimed at showing the software does not work. Also known as "test to fail". See also Positive Testing.
What is Path Testing? Testing in which all paths in the program source code are tested at least once.
What is Performance Testing? Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "load testing".
What is Ramp Testing? Continuously raising an input signal until the system breaks down.
What is Recovery Testing? Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power-out conditions.
What is Re-testing? Testing the functionality of the application again after a fix.
What is Regression Testing? Checking that changes in the code have not affected the existing working functionality.
What is Sanity Testing? A brief test of the major functional elements of a piece of software to determine whether it is basically operational.
What is Scalability Testing? Performance testing focused on ensuring the application under test gracefully handles increases in workload.
What is Security Testing?


Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
What is Stress Testing? Stress testing is a form of testing used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.
What is Smoke Testing? A quick-and-dirty test that the major functions of a piece of software work. The term originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
What is Soak Testing? Running a system at high load for a prolonged period of time; for example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
What's Usability Testing? Usability testing checks for user-friendliness.
What's User Acceptance Testing? User acceptance testing determines whether the software is satisfactory to an end-user or customer.
What's Volume Testing? In volume testing, the system is subjected to a large volume of data.

As a manager, what process did you adopt to define testing policy?
Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics

Below are the important steps to define a testing policy in general, though they can change according to how the policy is implemented in your organization. Let's understand in detail the steps for implementing a testing policy in an organization.

Definition: The first thing any organization needs to do is define one unique definition of testing within the organization, so that everyone is of the same mindset.


How to achieve: How are we going to achieve our objective? Will there be a testing committee? Will there be compulsory test plans which need to be executed, etc.?

Evaluate: After testing is implemented in a project, how do we evaluate it? Are we going to derive metrics of defects per phase, per programmer, etc.? Finally, it is important to let everyone know how testing has added value to the project.

Standards: Finally, what are the standards we want to achieve by testing? For instance, we can define that more than 20 defects per KLOC will be considered below standard, and code review should be done for such code.


Does an increase in testing always mean good for the project?
Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics

No, an increase in testing does not always mean good for the product, company, or project. In real test scenarios, only about 20% of the test plans are critical from a business angle. Running those critical test plans will assure that the testing is proper, rather than running the full 100% of test plans again and again. The tradeoff between under-testing and over-testing is this: if you under-test a system, your number of defects will increase; on the contrary, if you over-test a system, your cost of testing will increase. Even if your defects come down, your cost of testing will have shot up.

What is the difference between Verification and Validation?
Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics

Verification is a review without actually executing the process, while validation is checking the product with actual execution. For instance, code review and syntax checking are verification, while actually running the product and checking the results is validation.


What is the difference between Defect and Failure?
Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics

Define Defect?

A defect is a variance from the normal flow. When a defect reaches the end customer it is termed a failure; if the defect is detected internally and resolved, it is called a defect.

There are mainly three categories of defect:

Wrong: The requirements have been implemented incorrectly. This defect is a variance from the given specification.

Missing: A requirement given by the customer was not implemented. This is a variance from the specification, an indication that the specification was not implemented or that a requirement of the customer was not noted properly.

Extra: A requirement incorporated into the product that was not given by the end customer. This is always a variance from the specification, but may be an attribute desired by the user of the product. However, it is considered a defect because it is a variance from the existing requirements.


What is the difference between white box, black box and gray box testing?
Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics

Black box testing is a testing strategy which is based solely on the requirements and

specifications. Black box testing requires no knowledge of the internal paths, structure, or implementation

of the software under test.

White box testing is a testing strategy which is based on the internal paths, code structure, and

implementation of the software under test. White box testing generally requires detailed programming

skills.

There is one more type of testing called gray box testing. In this, we look into the "box"

under test just long enough to understand how it has been implemented. Then we close up

the box and use our knowledge to choose more effective black box tests.


A black box tester views an accounting application purely as an accounting application, while a white box tester knows about the internal structure of the application. In most scenarios white box testing is done by developers, as they know the internals of the application. In black box testing we check the overall functionality of the application, while in white box testing we do code reviews, view the architecture, remove bad code practices, and do component-level testing.

Beginner Guide for Software Testing
Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Interview Faq's, Manual Testing Basics

Software Life Cycle

The software life cycle typically includes the following: requirements analysis, design, coding, testing, installation and maintenance. In between, there can be a requirement to provide operations and support activities for the product.

Requirements Analysis. Software organizations provide solutions to customer requirements by

developing appropriate software that best suits their specifications. Thus, the life of software starts with the origin of requirements. Very often, these requirements are vague, emergent, and always subject to

change.

Analysis is performed to: conduct an in-depth analysis of the proposed project, evaluate technical feasibility, discover how to partition the system, identify which areas of the requirements need to be elaborated with the customer, identify the impact of changes to the requirements, and identify which requirements should be allocated to which components.

Design and Specifications. The outcome of requirements analysis is the requirements specification.

Using this, the overall design for the intended software is developed.

Activities in this phase - Perform Architectural Design for the software, Design Database (If applicable),

Design User Interfaces, Select or Develop Algorithms (If Applicable), Perform Detailed Design.

Coding. The development process tends to run iteratively through these phases rather than linearly;

several models (spiral, waterfall etc.) have been proposed to describe this process.

Activities in this phase – Create Test Data, Create Source, Generate Object Code, Create Operating

Documentation, Plan Integration, Perform Integration.

Testing. The process of using the developed system with the intent to find errors. Defects/flaws/bugs

found at this stage will be sent back to the developer for a fix and have to be re-tested. This phase is

iterated until the bugs are fixed and the software meets the requirements.

Activities in this phase – Plan Verification and Validation, Execute Verification and validation Tasks,

Collect and Analyze Metric Data, Plan Testing, Develop Test Requirements, Execute Tests.


Installation. The developed and tested software will finally need to be installed at the client's site.

Careful planning has to be done to avoid problems for the user after installation.

Activities in this phase – Plan Installation, Distribution of Software, Installation of Software, Accept

Software in Operational Environment.

Operation and Support. Support activities are usually performed by the organization that developed the

software. Both the parties usually decide on these activities before the system is developed.

Activities in this phase – Operate the System, Provide Technical Assistance and Consulting, Maintain

Support Request Log.

Maintenance. The process does not stop once the software is completely implemented and installed at the user's site;

this phase undertakes development of new features, enhancements etc.

Activities in this phase – Reapplying Software Life Cycle.

Various Life Cycle Models

The way you approach a particular application for testing greatly depends on the life cycle model it follows. This is because each life cycle model places emphasis on different aspects of the software, i.e. certain models provide good scope and time for testing whereas some others don't. So, the number of

test cases developed, features covered, time spent on each issue depends on the life cycle model the

application follows.

No matter what the life cycle model is, every application undergoes the same phases described above as

its life cycle.

Following are a few software life cycle models, their advantages and disadvantages.

Waterfall Model

Strengths:

•Emphasizes completion of one phase before moving on 

•Emphasizes early planning, customer input, and design 

•Emphasizes testing as an integral part of the life cycle 

•Provides quality gates at each life cycle phase

Weakness: 

•Depends on capturing and freezing requirements early in the life cycle 

•Depends on separating requirements from design 

•Feedback is only from testing phase to any previous stage 

•Not feasible in some organizations 

•Emphasizes products rather than processes

Prototyping Model

Strengths:

•Requirements can be set earlier and more reliably 

•Requirements can be communicated more clearly and completely between developers and clients 

•Requirements and design options can be investigated quickly and with low cost 

•More requirements and design faults are caught early


Weakness: 

•Requires a prototyping tool and expertise in using it – a cost for the development organisation 

•The prototype may become the production system

Spiral Model

Strengths:

•It promotes reuse of existing software in early stages of development 

•Allows quality objectives to be formulated during development 

•Provides preparation for eventual evolution of the software product 

•Eliminates errors and unattractive alternatives early. 

•It balances resource expenditure. 

•Doesn’t involve separate approaches for software development and software maintenance. 

•Provides a viable framework for integrated Hardware-software system development.

Weakness: 

•This process needs, or is usually associated with, Rapid Application Development, which is very difficult in practice. 

•The process is more difficult to manage and needs a very different approach as opposed to the waterfall model (the waterfall model has management techniques, like Gantt charts, to assess progress)

Software Testing Life Cycle

The Software Testing Life Cycle consists of seven (generic) phases: 1) Planning, 2) Analysis, 3) Design, 4) Construction, 5) Testing Cycles, 6) Final Testing and Implementation, and 7) Post Implementation. Each phase in the life cycle is described below with its respective activities.

Planning. Plan the high-level test plan and QA plan (quality goals); identify reporting procedures, problem classification, acceptance criteria, databases for testing, measurement criteria (defect quantities/severity level and defect origin), and project metrics; and finally begin the schedule for project testing. Also, plan to maintain all test cases (manual or automated) in a database.

Analysis. Involves activities that – develop functional validation based on Business Requirements (writing

test cases based on these details), develop test case format (time estimates and priority assignments),

develop test cycles (matrices and timelines), identify test cases to be automated (if applicable), define

area of stress and performance testing, plan the test cycles required for the project and regression

testing, define procedures for data maintenance (backup, restore, validation), review documentation.

Design. Activities in the design phase – Revise test plan based on changes, revise test cycle matrices

and timelines, verify that the test plan and cases are in a database or repository, continue to write test cases

and add new ones based on changes, develop Risk Assessment Criteria, formalize details for Stress and

Performance testing, finalize test cycles (number of test case per cycle based on time estimates per test

case and priority), finalize the Test Plan, (estimate resources to support development in unit testing).

Construction (Unit Testing Phase). Complete all plans, complete Test Cycle matrices and timelines,

complete all test cases (manual), begin Stress and Performance testing, test the automated testing

system and fix bugs, (support development in unit testing), run QA acceptance test suite to certify

software is ready to turn over to QA.


Test Cycle(s) / Bug Fixes (Re-Testing/System Testing Phase). Run the test cases (front and back end),

bug reporting, verification, revise/add test cases as required.

Final Testing and Implementation (Code Freeze Phase). Execution of all front end test cases – manual

and automated, execution of all back end test cases – manual and automated, execute all Stress and

Performance tests, provide on-going defect tracking metrics, provide on-going complexity and design

metrics, update estimates for test cases and test plans, document test cycles, regression testing, and

update accordingly.

Post Implementation. Post implementation evaluation meeting can be conducted to review entire

project. Activities in this phase – Prepare final Defect Report and associated metrics, identify strategies to

prevent similar problems in future projects; automation team – 1) Review test cases to evaluate other

cases to be automated for regression testing, 2) Clean up automated test cases and variables, and 3)

Review process of integrating results from automated testing in with results from manual testing.

What is a bug? Why do bugs occur?

A software bug may be defined as a coding error that causes an unexpected defect, fault, flaw, or

imperfection in a computer program. In other words, if a program does not perform as intended, it is most

likely a bug.

There are bugs in software due to unclear or constantly changing requirements, software complexity,

programming errors, timelines, errors in bug tracking, communication gap, documentation errors,

deviation from standards etc.

· Unclear software requirements are due to miscommunication as to what the software should or shouldn’t

do. On many occasions, the customer may not be completely clear as to how the product should ultimately function. This is especially true when the software is developed for a completely new product. Such cases usually lead to a lot of misinterpretations on either or both sides.

· Constantly changing software requirements cause a lot of confusion and pressure both on the

development and testing teams. Often, a new feature added or existing feature removed can be linked to

the other modules or components in the software. Overlooking such issues causes bugs.

· Also, fixing a bug in one part/component of the software might give rise to another bug in the same or a different component. Lack of foresight in anticipating such issues can cause serious problems and an increase in bug

count. This is one of the major issues because of which bugs occur since developers are very often

subject to pressure related to timelines; frequently changing requirements, increase in the number of bugs

etc.

· Designing and re-designing, UI interfaces, integration of modules, and database management all add

to the complexity of the software and the system as a whole.

· Fundamental problems with software design and architecture can cause problems in programming.

Developed software is prone to error as programmers can make mistakes too. As a tester you can check

for, data reference/declaration errors, control flow errors, parameter errors, input/output errors etc.

· Rescheduling of resources, re-doing or discarding already completed work, changes in

hardware/software requirements can affect the software too. Assigning a new developer to the project midway can cause bugs, for instance if proper coding standards have not been followed, or with improper code documentation, ineffective knowledge transfer, etc. Discarding a portion of the existing code might

just leave its trail behind in other parts of the software; overlooking or not eliminating such code can

cause bugs. Serious bugs can especially occur with larger projects, as it gets tougher to identify the

problem area.

· Programmers usually tend to rush as the deadline approaches. This is the time when most of the

bugs occur. It is possible that you will be able to spot bugs of all types and severity.

· Complexity in keeping track of all the bugs can again cause bugs by itself. This gets harder when a bug

has a very complex life cycle i.e. when the number of times it has been closed, re-opened, not accepted,

ignored etc goes on increasing.

Bug Life Cycle

The Bug Life Cycle starts with an unintentional software bug/behavior and ends when the assigned developer

fixes the bug. A bug when found should be communicated and assigned to a developer that can fix it.

Once fixed, the problem area should be re-tested. Also, confirmation should be made to verify if the fix did

not create problems elsewhere. In most of the cases, the life cycle gets very complicated and difficult to

track making it imperative to have a bug/defect tracking system in place.

See Chapter 7 – Defect Tracking

Following are the different phases of a Bug Life Cycle:

Open: A bug is in Open state when a tester identifies a problem area

Accepted: The bug is then assigned to a developer for a fix. The developer then accepts if valid.

Not Accepted/Won't Fix: The developer considers the bug low level or does not accept it as a bug, pushing it into the Not Accepted/Won't Fix state.

Such bugs are assigned to the project manager, who decides whether the bug needs a fix. If it does, the project manager assigns it back to the developer; if it doesn't, it goes back to the tester, who will have to close the bug.

Pending: A bug accepted by the developer may not be fixed immediately. In such cases, it can be put

in the Pending state.

Fixed: The programmer fixes the bug and resolves it as Fixed.

Close: The fixed bug will be assigned to the tester who will put it in the Close state.

Re-Open: Fixed bugs can be re-opened by the testers in case the fix produces problems elsewhere.
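As an illustrative (non-normative) sketch, the states above can be encoded in Python with a transition table that a simple tracking tool could enforce; the move helper and the exact set of allowed transitions are assumptions for the example.

from enum import Enum

class BugState(Enum):
    OPEN = "Open"
    ACCEPTED = "Accepted"
    NOT_ACCEPTED = "Not Accepted/Won't Fix"
    PENDING = "Pending"
    FIXED = "Fixed"
    CLOSED = "Close"
    REOPENED = "Re-Open"

# Allowed transitions, following the phases described above.
TRANSITIONS = {
    BugState.OPEN: {BugState.ACCEPTED, BugState.NOT_ACCEPTED},
    BugState.ACCEPTED: {BugState.PENDING, BugState.FIXED},
    BugState.NOT_ACCEPTED: {BugState.ACCEPTED, BugState.CLOSED},  # via project manager
    BugState.PENDING: {BugState.FIXED},
    BugState.FIXED: {BugState.CLOSED, BugState.REOPENED},
    BugState.REOPENED: {BugState.ACCEPTED},
    BugState.CLOSED: set(),
}

def move(current, target):
    """Validate a state change against the life cycle before applying it."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current.value} -> {target.value}")
    return target

state = move(BugState.OPEN, BugState.ACCEPTED)   # tester opens, developer accepts
state = move(state, BugState.FIXED)              # developer fixes
state = move(state, BugState.CLOSED)             # tester closes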

Cost of fixing bugs

Costs increase roughly tenfold with each later stage at which a bug is found. A bug found and fixed during the early stages – the requirements or product spec stage – can be fixed by a brief interaction with the people concerned and might cost next to nothing. During coding, a swiftly spotted mistake may take only a little effort to fix. During integration testing, it costs the paperwork of a bug report and a formally documented fix, as well as the delay and expense of a re-test. During system testing it costs even more time and may delay delivery. Finally, during operations it may cause anything from a nuisance to a system failure, possibly with catastrophic consequences, as in an aircraft or an emergency service system. For example, a defect that costs $10 to fix at the requirements stage might cost $100 during coding, $1,000 during system testing, and $10,000 in production.


When can testing be stopped/reduced?

It is difficult to determine when exactly to stop testing. Here are a few common factors that help you

decide when you can stop or reduce testing:

· Deadlines (release deadlines, testing deadlines, etc.) 

· Test cases completed with certain percentage passed 

· Test budget depleted 

· Coverage of code/functionality/requirements reaches a specified point 

· Bug rate falls below a certain level 

· Beta or alpha testing period ends

3. Software Testing Levels, Types, Terms and Definitions

Testing Levels and Types

There are basically three levels of testing, i.e. Unit Testing, Integration Testing and System Testing.

Various types of testing come under these levels.

Unit Testing 

To verify a single program or a section of a single program 

Integration Testing 

To verify interaction between system components 

Prerequisite: unit testing completed on all components that compose a system 

System Testing 

To verify and validate behaviors of the entire system against the original system objectives

Software testing is a process that identifies the correctness, completeness, and quality of software.

Following is a list of various types of software testing and their definitions in a random order:

· Formal Testing: Performed by test engineers 

· Informal Testing: Performed by the developers 

· Manual Testing: That part of software testing that requires human input, analysis, or evaluation. 

· Automated Testing: Software testing that utilizes a variety of tools to automate the testing process.

Automated testing still requires a skilled quality assurance professional with knowledge of the automation

tools and the software being tested to set up the test cases. 

· Black box Testing: Testing software without any knowledge of the back-end of the system, structure or

language of the module being tested. Black box test cases are written from a definitive source document,

such as a specification or requirements document. 

· White box Testing: Testing in which the software tester has knowledge of the back-end, structure and

language of the software, or at least its purpose. 

· Unit Testing: Unit testing is the process of testing a particular compiled program, i.e., a window, a

report, an interface, etc. independently as a stand-alone component/program. The types and degrees of

unit tests can vary among modified and newly created programs. Unit testing is mostly performed by the

programmers, who are also responsible for the creation of the necessary unit test data (a minimal unit test sketch appears at the end of this list). 

· Incremental Testing: Incremental testing is partial testing of an incomplete product. The goal of incremental testing is to provide early feedback to software developers. 

· System Testing: System testing is a form of black box testing. The purpose of system testing is to

validate an application’s accuracy and completeness in performing the functions as designed. 

· Integration Testing: Testing two or more modules or functions together with the intent of finding

interface defects between the modules/functions. 

· System Integration Testing: Testing of software components that have been distributed across

multiple platforms (e.g., client, web server, application server, and database server) to produce failures

caused by system integration defects (i.e. defects involving distribution and back-office integration). 

· Functional Testing: Verifying that a module functions as stated in the specification and establishing

confidence that a program does what it is supposed to do. 

· End-to-end Testing: Similar to system testing – testing a complete application in a situation that mimics

real world use, such as interacting with a database, using network communication, or interacting with

other hardware, application, or system. 

· Sanity Testing: Sanity testing is performed whenever cursory testing is sufficient to prove the

application is functioning according to specifications. This level of testing is a subset of regression testing.

It normally includes testing basic GUI functionality to demonstrate connectivity to the database,

application servers, printers, etc. 

· Regression Testing: Testing with the intent of determining if bug fixes have been successful and have

not created any new problems. 

· Acceptance Testing: Testing the system with the intent of confirming readiness of the product and

customer acceptance. Also known as User Acceptance Testing. 

· Adhoc Testing: Testing without a formal test plan or outside of a test plan. With some projects this type

of testing is carried out as an addition to formal testing. Sometimes, if testing occurs very late in the

development cycle, this will be the only kind of testing that can be performed – usually done by skilled

testers. Sometimes ad hoc testing is referred to as exploratory testing. 

· Configuration Testing: Testing to determine how well the product works with a broad range of

hardware/peripheral equipment configurations as well as on different operating systems and software. 

· Load Testing: Testing with the intent of determining how well the product handles competition for

system resources. The competition may come in the form of network traffic, CPU utilization or memory

allocation. 

· Stress Testing: Testing done to evaluate the behavior when the system is pushed beyond the breaking

point. The goal is to expose the weak links and to determine if the system manages to recover gracefully. 

· Performance Testing: Testing with the intent of determining how efficiently a product handles a variety

of events. Automated test tools geared specifically to test and fine-tune performance are used most often

for this type of testing. 

· Usability Testing: Usability testing is testing for ‘user-friendliness’. A way to evaluate and measure how

users interact with a software product or site. Tasks are given to users and observations are made. 

· Installation Testing: Testing with the intent of determining if the product is compatible with a variety of

platforms and how easily it installs. 


· Recovery/Error Testing: Testing how well a system recovers from crashes, hardware failures, or other

catastrophic problems. 

· Security Testing: Testing of database and network software in order to keep company data and

resources secure from mistaken/accidental users, hackers, and other malevolent attackers. 

· Penetration Testing: Penetration testing is testing how well the system is protected against

unauthorized internal or external access, or willful damage. This type of testing usually requires

sophisticated testing techniques. 

· Compatibility Testing: Testing used to determine whether other system software components such as

browsers, utilities, and competing software will conflict with the software being tested. 

· Exploratory Testing: Any testing in which the tester dynamically changes what they’re doing for test

execution, based on information they learn as they’re executing their tests. 

· Comparison Testing: Testing that compares software weaknesses and strengths to those of

competitors’ products. 

· Alpha Testing: Testing after code is mostly complete or contains most of the functionality and prior to

reaching customers. Sometimes a selected group of users are involved. More often this testing will be

performed in-house or by an outside testing firm in close cooperation with the software engineering

department. 

· Beta Testing: Testing after the product is code complete. Betas are often widely distributed or even

distributed to the public at large. 

· Gamma Testing: Gamma testing is testing of software that has all the required features, but it did not

go through all the in-house quality checks. 

· Mutation Testing: A method to determine test thoroughness by measuring the extent to which the

test cases can discriminate the program from slight variants of the program. 

· Independent Verification and Validation (IV&V): The process of exercising software with the intent of

ensuring that the software system meets its requirements and user expectations and doesn’t fail in an

unacceptable manner. The individual or group doing this work is not part of the group or organization that

developed the software. 

· Pilot Testing: Testing that involves the users just before actual release to ensure that users become

familiar with the release contents and ultimately accept it. Typically involves many users, is conducted

over a short period of time and is tightly controlled. (See beta testing) 

· Parallel/Audit Testing: Testing where the user reconciles the output of the new system to the output of

the current system to verify the new system performs the operations correctly. 

· Glass Box/Open Box Testing: Glass box testing is the same as white box testing. It is a testing

approach that examines the application’s program structure, and derives test cases from the application’s

program logic. 

· Closed Box Testing: Closed box testing is same as black box testing. A type of testing that considers

only the functionality of the application. 

· Bottom-up Testing: Bottom-up testing is a technique for integration testing. A test engineer creates and

uses test drivers for components that have not yet been developed, because, with bottom-up testing, low-level components are tested first. The objective of bottom-up testing is to call low-level components first,

for testing purposes. 

· Smoke Testing: A random test conducted before the delivery and after complete testing.
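As promised against the Unit Testing entry above, here is a minimal stand-alone unit test sketch using Python's built-in unittest framework; the apply_discount function under test is hypothetical.

import unittest

def apply_discount(price, percent):
    """Hypothetical unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()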

Testing Terms

· Bug: A software bug may be defined as a coding error that causes an unexpected defect, fault or flaw.

In other words, if a program does not perform as intended, it is most likely a bug. 

· Error: A mismatch between the program and its specification is an error in the program. 

· Defect: Defect is the variance from a desired product attribute (it can be a wrong, missing or extra data).

It can be of two types – Defect from the product or a variance from customer/user expectations. It is a

flaw in the software system and has no impact until it affects the user/customer and operational system.

90% of all the defects can be caused by process problems. 

· Failure: A defect that causes an error in operation or negatively impacts a user/ customer. 

· Quality Assurance: Is oriented towards preventing defects. Quality Assurance ensures all parties

concerned with the project adhere to the process and procedures, standards and templates and test

readiness reviews. 

· Quality Control: Quality control or quality engineering is a set of measures taken to ensure that

defective products or services are not produced, and that the design meets performance requirements.

· Verification: Verification ensures the product is designed to deliver all functionality to the customer; it

typically involves reviews and meetings to evaluate documents, plans, code, requirements and

specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings. 

· Validation: Validation ensures that functionality, as defined in requirements, is the intended behavior of

the product; validation typically involves actual testing and takes place after verifications are completed.


4. Most Common Software Errors

Following are the most common software errors. Knowing them helps you to identify

errors systematically and increases the efficiency and productivity of software testing.

Types of errors with examples

· User Interface Errors: Missing/Wrong Functions, Doesn’t do what the user expects, Missing

information, Misleading, Confusing information, Wrong content in Help text, Inappropriate error

messages. Performance issues – Poor responsiveness, Can't redirect output, Inappropriate use of keyboard. 

· Error Handling: Inadequate – protection against corrupted data, tests of user input, version control;

Ignores – overflow, data comparison, Error recovery – aborting errors, recovery from hardware problems. 

· Boundary related errors: Boundaries in loop, space, time, memory, mishandling of cases outside

boundary. 

· Calculation errors: Bad Logic, Bad Arithmetic, Outdated constants, Calculation errors, Incorrect

conversion from one data representation to another, Wrong formula, Incorrect approximation. 

· Initial and Later states: Failure to – set data item to zero, to initialize a loop-control variable, or re-initialize a pointer, to clear a string or flag, Incorrect initialization. 

· Control flow errors: Wrong returning state assumed, Exception handling based exits, Stack

underflow/overflow, Failure to block or un-block interrupts, Comparison sometimes yields wrong result,

Missing/wrong default, Data Type errors. 

· Errors in Handling or Interpreting Data: Un-terminated null strings, Overwriting a file after an error exit

or user abort. 

· Race Conditions: Assumption that one event or task finished before another begins, Resource races,

Tasks starts before its prerequisites are met, Messages cross or don’t arrive in the order sent. 

· Load Conditions: Required resources are not available, No available large memory area, Low priority

tasks not put off, Doesn’t erase old files from mass storage, Doesn’t return unused memory. 

· Hardware: Wrong Device, Device unavailable, Underutilizing device intelligence, Misunderstood status

or return code, Wrong operation or instruction codes. 

· Source, Version and ID Control: No Title or version ID, Failure to update multiple copies of data or

program files. 

· Testing Errors: Failure to notice/report a problem, Failure to use the most promising test case,

Corrupted data files, Misinterpreted specifications or documentation, Failure to make it clear how to

reproduce the problem, Failure to check for unresolved problems just before release, Failure to verify

fixes, Failure to provide summary report.

5. The Test Planning Process

What is a Test Strategy? What are its Components?

Test Policy – A document characterizing the organization's philosophy towards software testing.

Test Strategy – A high-level document defining the test phases to be performed and the testing within

those phases for a programme. It defines the process to be followed in each project. This sets the

standards for the processes, documents, activities etc. that should be followed for each project.

For example, if a product is given for testing, you should decide if it is better to use black-box testing or

white-box testing and if you decide to use both, when will you apply each and to which part of the

software? All these details need to be specified in the Test Strategy.

Project Test Plan – a document defining the test phases to be performed and the testing within those

phases for a particular project.

A Test Strategy should cover more than one project and should address the following issues: An

approach to testing high risk areas first, Planning for testing, How to improve the process based on

previous testing, Environments/data used, Test management – Configuration management, Problem

management, What Metrics are followed, Will the tests be automated and if so which tools will be used,

What are the Testing Stages and Testing Methods, Post Testing Review process, Templates.

Test planning needs to start as soon as the project requirements are known. The first document that

needs to be produced then is the Test Strategy/Testing Approach that sets the high level approach for

testing and covers all the other elements mentioned above.

Test Planning – Sample Structure


Once the approach is understood, a detailed test plan can be written. Usually, this test plan can be written

in different styles. Test plans can completely differ from project to project in the same organization.

IEEE SOFTWARE TEST DOCUMENTATION Std 829-1998 – TEST PLAN

Purpose: To describe the scope, approach, resources, and schedule of the testing activities. To identify the items

being tested, the features to be tested, the testing tasks to be performed, the personnel responsible for

each task, and the risks associated with this plan.

OUTLINE: A test plan shall have the following structure: 

· Test plan identifier. A unique identifier assigned to the test plan. 

· Introduction: Summarize the software items and features to be tested and the need for them to be

included. 

· Test items: Identify the test items and their transmittal media. 

· Features to be tested 

· Features not to be tested 

· Approach 

· Item pass/fail criteria 

· Suspension criteria and resumption requirements 

· Test deliverables 

· Testing tasks 

· Environmental needs 

· Responsibilities 

· Staffing and training needs 

· Schedule 

· Risks and contingencies 

· Approvals

Major Test Planning Tasks

Like any other process in software testing, the major tasks in test planning are to – Develop Test

Strategy, Critical Success Factors, Define Test Objectives, Identify Needed Test Resources, Plan Test

Environment, Define Test Procedures, Identify Functions To Be Tested, Identify Interfaces With Other

Systems or Components, Write Test Scripts, Define Test Cases, Design Test Data, Build Test Matrix,

Determine Test Schedules, Assemble Information, Finalize the Plan.

6. Test Case Development

A test case is a detailed procedure that fully tests a feature or an aspect of a feature. While the test plan

describes what to test, a test case describes how to perform a particular test. You need to develop test

cases for each test listed in the test plan.

General Guidelines


As a tester, the best way to determine the compliance of the software to requirements is by designing

effective test cases that provide a thorough test of a unit. Various test case design techniques enable the

testers to develop effective test cases. Besides implementing the design techniques, every tester needs

to keep in mind general guidelines that will aid in test case design:

a. The purpose of each test case is to run the test in the simplest way possible. [Suitable techniques -

Specification derived tests, Equivalence partitioning]

b. Concentrate initially on positive testing i.e. the test case should show that the software does what it is

intended to do. [Suitable techniques - Specification derived tests, Equivalence partitioning, State-

transition testing]

c. Existing test cases should be enhanced and further test cases should be designed to show that the

software does not do anything that it is not specified to do i.e. Negative Testing [Suitable techniques -

Error guessing, Boundary value analysis, Internal boundary value testing, State-transition testing]

d. Where appropriate, test cases should be designed to address issues such as performance, safety

requirements and security requirements [Suitable techniques - Specification derived tests]

e. Further test cases can then be added to the unit test specification to achieve specific test coverage

objectives. Once coverage tests have been designed, the test procedure can be developed and the tests

executed [Suitable techniques - Branch testing, Condition testing, Data definition-use testing, State-

transition testing]

Test Case – Sample Structure

The manner in which a test case is depicted varies between organizations. However, many test case

templates are in the form of a table, for example, a 5-column table with fields:

Test Case ID 

Test Case Description 

Test Dependency/Setup 

Input Data Requirements/Steps 

Expected Results 

Pass/Fail

Test Case Design Techniques

The test case design techniques are broadly grouped into black box techniques, white box techniques, and other techniques that do not fall under either category.

Black Box (Functional) - Specification derived tests 

- Equivalence partitioning 

- Boundary Value Analysis 

- State-Transition Testing 

White Box (Structural) - Branch Testing 

- Condition Testing 

- Data Definition – Use Testing 


- Internal boundary value testing 

Other - Error guessing

Specification Derived Tests 

As the name suggests, test cases are designed by walking through the relevant specifications. It is a

positive test case design technique.

Equivalence Partitioning 

Equivalence partitioning is the process of taking all of the possible test values and placing them into

classes (partitions or groups). Test cases should be designed to test one value from each class. Thereby,

it uses the fewest test cases to cover the maximum input requirements.

For example, suppose a program accepts integer values only from 1 to 10. The possible test values for such a program would be the range of all integers, but all integers up to 0 and above 10 will cause an error. So, it is reasonable to assume that if 11 fails, all values above it will fail, and vice versa. If an input condition is a range of values, let one valid equivalence class be the range itself (1 to 10 in this example), and let the values below and above the range be two respective invalid equivalence classes (i.e. 0 and 11). Representatives of these three partitions can then be used as test cases for the above example.
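A minimal sketch of the same example in Python (the validate function is a hypothetical stand-in for the program's input check): one representative from the valid class and one from each invalid class.

import unittest

def validate(value):
    """Hypothetical unit under test: accepts integers 1..10 only."""
    return isinstance(value, int) and 1 <= value <= 10

class EquivalencePartitionTests(unittest.TestCase):
    def test_one_representative_per_partition(self):
        self.assertTrue(validate(5))    # valid class: 1..10
        self.assertFalse(validate(0))   # invalid class: below the range
        self.assertFalse(validate(11))  # invalid class: above the range

if __name__ == "__main__":
    unittest.main()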

Boundary Value Analysis 

This is a selection technique where the test data are chosen to lie along the boundaries of the input

domain or the output range. This technique incorporates a degree of

negative testing in the test design by anticipating that errors will occur at or around the partition

boundaries.

For example, a field is required to accept amounts of money between $0 and $10. As a tester, you need

to check whether this means up to and including $10, or only up to $9.99, i.e. whether $10 itself is acceptable. So, the boundary values

are $0, $0.01, $9.99 and $10.

Now, the following tests can be executed. A negative value should be rejected, 0 should be accepted (this

is on the boundary), $0.01 and $9.99 should be accepted, null and $10 should be rejected. In this way, it

uses the same concept of partitions as equivalence partitioning.
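
A minimal sketch, assuming a hypothetical accept_amount() validator and treating the upper bound as exclusive (as the walkthrough above does):

# Hypothetical unit under test: accepts amounts from $0.00 up to, but not including, $10.00.
def accept_amount(amount):
    if amount is None:
        return False
    return 0 <= amount < 10

assert not accept_amount(-0.01)  # just below the lower boundary: rejected
assert accept_amount(0)          # on the lower boundary: accepted
assert accept_amount(0.01)       # just inside the lower boundary: accepted
assert accept_amount(9.99)       # just inside the upper boundary: accepted
assert not accept_amount(10)     # on the (exclusive) upper boundary: rejected
assert not accept_amount(None)   # null input: rejected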

State Transition Testing

As the name suggests, test cases are designed to test the transitions between states by generating the events that cause each transition.
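
A minimal sketch with an invented two-state example (a door that opens and closes); each test fires an event and checks the resulting state:

# Hypothetical state machine: (current state, event) -> next state.
TRANSITIONS = {
    ("closed", "open"):  "opened",
    ("opened", "close"): "closed",
}

def next_state(state, event):
    # An event with no defined transition leaves the state unchanged.
    return TRANSITIONS.get((state, event), state)

assert next_state("closed", "open") == "opened"   # exercises the closed -> opened transition
assert next_state("opened", "close") == "closed"  # exercises the opened -> closed transition
assert next_state("closed", "close") == "closed"  # invalid event: state is unchanged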

Branch Testing

In branch testing, test cases are designed to exercise the control flow branches or decision points in a unit. This is usually aimed at achieving a target level of decision (branch) coverage: both the IF and the ELSE branches of each decision need to be tested. All branches and compound conditions (e.g. loops and array handling) within the unit should be exercised at least once.
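
A minimal sketch with an invented classify() function; two test cases exercise both the IF and the ELSE branch, giving full branch coverage of the unit:

def classify(age):
    if age >= 18:       # decision point
        return "adult"  # IF branch
    else:
        return "minor"  # ELSE branch

assert classify(21) == "adult"  # drives execution through the IF branch
assert classify(12) == "minor"  # drives execution through the ELSE branch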

Condition Testing 

The object of condition testing is to design test cases to show that the individual components of logical

conditions and combinations of the individual components are correct. Test cases are designed to test the

individual elements of logical expressions, both within branch conditions and within other expressions in a

unit.
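
A minimal sketch using the invented compound condition a > b AND c > 1000; the tests are chosen so that each individual condition is exercised as both True and False:

def check(a, b, c):
    return a > b and c > 1000

assert check(5, 1, 2000) is True   # a > b True,  c > 1000 True
assert check(5, 1, 500) is False   # a > b True,  c > 1000 False
assert check(1, 5, 2000) is False  # a > b False (c > 1000 is short-circuited in Python)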


Data Definition – Use Testing 

Data definition-use testing designs test cases to test pairs of data definitions and uses. Data definition is

anywhere that the value of a data item is set. Data use is anywhere that a data item is read or used. The

objective is to create test cases that will drive execution through paths between specific definitions and

uses.
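
A minimal sketch with invented code: the variable discount_pct is defined at two points and used once, giving two definition-use pairs, each driven through by its own test:

def price(total_cents, is_member):
    # Prices are in cents so the arithmetic stays exact.
    discount_pct = 0        # definition 1
    if is_member:
        discount_pct = 10   # definition 2
    return total_cents * (100 - discount_pct) // 100  # use

assert price(10000, False) == 10000  # executes the path pairing definition 1 with the use
assert price(10000, True) == 9000    # executes the path pairing definition 2 with the use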

Internal Boundary Value Testing 

In many cases, partitions and their boundaries can be identified from a functional specification for a unit,

as described under equivalence partitioning and boundary value analysis above. However, a unit may

also have internal boundary values that can only be identified from a structural specification.

Error Guessing 

It is a test case design technique where the testers use their experience to guess the possible errors that

might occur and design test cases accordingly to uncover them.

Using any one, or a combination, of the test case design techniques described above, you can develop effective test cases.

What is a Use Case?

A use case describes the system's behavior under various conditions as it responds to a request from one of the users. The user initiates an interaction with the system to accomplish some goal. Different sequences of behavior, or scenarios, can unfold, depending on the particular requests made and the conditions surrounding the requests. The use case collects together those different scenarios.

Use cases are popular largely because they tell coherent stories about how the system will behave in use. The users of the system get to see just what this new system will be, and get to react early.

7. Defect Tracking

What is a defect?

As discussed earlier, a defect is a variance from a desired product attribute (it can be wrong, missing or extra data). Defects are of two types – a defect from the product specification, or a variance from customer/user expectations. A defect is a flaw in the software system and has no impact until it affects the user/customer or the operational system.

What are the defect categories?

With the knowledge of testing gained so far, you should now be able to categorize the defects you have found. Defects can be categorized into different types based on the core issues they address. Some defects address security or database issues, while others relate to functionality or UI issues.

Security Defects: Application security defects generally involve improper handling of data sent from the user to the application. These defects are the most severe and are given the highest priority for a fix.
Examples:
- Authentication: Accepting an invalid username/password
- Authorization: Pages accessible even though permission has not been granted


Data Quality/Database Defects: Deals with improper handling of data in the database. 

Examples: 

- Values not deleted/inserted into the database properly 

- Improper/wrong/null values inserted in place of the actual values

Critical Functionality Defects: The occurrence of these bugs hampers the crucial functionality of the application.
Examples:
- Exceptions

Functionality Defects: These defects affect the functionality of the application. 

Examples: 

- All Javascript errors 

- Buttons like Save, Delete, Cancel not performing their intended functions 

- A missing functionality (or) a feature not functioning the way it is intended to 

- Continuous execution of loops

User Interface Defects: As the name suggests, these bugs deal with problems related to the UI and are usually considered less severe.

Examples: 

- Improper error/warning/UI messages 

- Spelling mistakes 

- Alignment problems

How is a defect reported?

Once the test cases have been developed using the appropriate techniques, they are executed, which is when bugs surface. It is very important that these bugs be reported as soon as possible, because the earlier you report a bug, the more time remains in the schedule to get it fixed.

A simple example: if you report wrongly documented functionality in the Help file a few months before the product release, the chances that it will be fixed are very high. If you report the same bug a few hours before the release, the odds are that it won't be fixed. The bug is the same whether you report it a few months or a few hours before the release; what matters is the time remaining.

It is not enough just to find the bugs; they must also be reported and communicated clearly and efficiently, bearing in mind the number of people who will read the defect report.

Defect tracking tools (also known as bug tracking tools, issue tracking tools or problem trackers) greatly

aid the testers in reporting and tracking the bugs found in software applications. They provide a means of

consolidating a key element of project information in one place. Project managers can then see which

bugs have been fixed, which are outstanding and how long it is taking to fix defects. Senior management

can use reports to understand the state of the development process.

How descriptive should your bug/defect report be?

You should provide enough detail while reporting the bug, keeping in mind the people who will use it – test lead, developer, project manager, other testers, newly assigned testers, etc. This means the report you write should be concise, direct and clear. Your report should contain the following details:


- Bug Title 

- Bug identifier (number, ID, etc.) 

- The application name or identifier and version 

- The function, module, feature, object, screen, etc. where the bug occurred 

- Environment (OS, Browser and its version) 

- Bug Type or Category/Severity/Priority 

o Bug Category: Security, Database, Functionality (Critical/General), UI 

o Bug Severity: Severity with which the bug affects the application – Very High, High, Medium, Low, Very

Low 

o Bug Priority: Recommended priority to be given for a fix of this bug – P0, P1, P2, P3, P4, P5 (P0-

Highest, P5-Lowest) 

- Bug status (Open, Pending, Fixed, Closed, Re-Open) 

- Test case name/number/identifier 

- Bug description 

- Steps to Reproduce 

- Actual Result 

- Tester Comments
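
As an illustration only (the field names mirror the checklist above; every value below is invented), such a report could be modelled like this:

from dataclasses import dataclass

@dataclass
class DefectReport:
    # Fields mirror the checklist above; all values supplied below are invented.
    title: str
    bug_id: str
    application: str
    module: str
    environment: str
    category: str            # Security, Database, Functionality, UI
    severity: str            # Very High, High, Medium, Low, Very Low
    priority: str            # P0 (highest) to P5 (lowest)
    status: str              # Open, Pending, Fixed, Closed, Re-Open
    test_case_id: str
    description: str
    steps_to_reproduce: list
    actual_result: str
    tester_comments: str = ""

report = DefectReport(
    title="Save button does not persist edits",
    bug_id="BUG-0042",
    application="Orders v2.3",
    module="Order entry screen",
    environment="Windows 10, Chrome 90",
    category="Functionality",
    severity="High",
    priority="P1",
    status="Open",
    test_case_id="TC_ORD_017",
    description="Edits to an existing order are lost after clicking Save.",
    steps_to_reproduce=["Open an existing order", "Change the quantity", "Click Save"],
    actual_result="The old quantity is shown after the page reloads.",
)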

What does the tester do when the defect is fixed?

Once the reported defect is fixed, the tester needs to re-test to confirm the fix. This is usually done by executing the scenarios in which the bug can occur. Once retesting is complete, the fix can be confirmed and the bug can be closed. This marks the end of the bug life cycle.

8. Types of Test Reports

The documents outlined in the IEEE Standard for Software Test Documentation cover test planning, test specification, and test reporting.

Test reporting covers four document types:

1. A Test Item Transmittal Report identifies the test items being transmitted for testing from the

development to the testing group in the event that a formal beginning of test execution is desired

Details to be included in the report – Purpose, Outline, Transmittal-Report Identifier, Transmitted Items,

Location, Status, and Approvals.

2. A Test Log is used by the test team to record what occurred during test execution

Details to be included in the report – Purpose, Outline, Test-Log Identifier, Description, Activity and Event

Entries, Execution Description, Procedure Results, Environmental Information, Anomalous Events,

Incident-Report Identifiers

3. A Test Incident report describes any event that occurs during the test execution that requires further

investigation

Details to be included in the report – Purpose, Outline, Test-Incident-Report Identifier, Summary, Impact

4. A test summary report summarizes the testing activities associated with one or more test-design

specifications


Details to be included in the report – Purpose, Outline, Test-Summary-Report Identifier, Summary,

Variances, Comprehensiveness Assessment, Summary of Results, Summary of Activities, and Approvals

9. Software Test Automation

Automating testing is no different from a programmer using a coding language to write programs to automate any manual process. One of the problems with testing large systems is that it can go beyond the scope of small test teams: when only a small number of testers is available, the coverage and depth of testing provided are inadequate for the task at hand.

Expanding the test team beyond a certain size also becomes problematic as overhead increases. A feasible way to avoid this without losing quality is the appropriate use of tools, which can expand an individual's capacity enormously while maintaining the focus (depth) of testing on the critical elements.

Consider the following factors that help determine the use of automated testing tools: 

· Examine your current testing process and determine where it needs to be adjusted for using automated

test tools. 

· Be prepared to make changes in the current ways you perform testing. 

· Involve people who will be using the tool to help design the automated testing process. 

· Create a set of evaluation criteria for functions that you will want to consider when using the automated

test tool. These criteria may include the following: 

o Test repeatability 

o Criticality/risk of applications 

o Operational simplicity 

o Ease of automation 

o Level of documentation of the function (requirements, etc.) 

· Examine your existing set of test cases and test scripts to see which ones are most applicable for test

automation. 

· Train people in basic test-planning skills.

Approaches to Automation

There are three broad options in test automation:

Full Manual
- Reliance on manual testing
- Responsive and flexible
- Inconsistent
- Low implementation cost
- High repetitive cost
- Required for automation
- High skill requirements

Partial Automation
- Redundancy possible but requires duplication of effort
- Flexible
- Consistent
- Automates repetitive tasks and high-return tasks

Full Automation
- Reliance on automated testing
- Relatively inflexible
- Very consistent
- High implementation cost
- Economies of scale in repetition, regression etc.
- Low skill requirements

Fully manual testing has the benefit of being relatively cheap and effective, but as the quality of the product improves, finding each further bug becomes more expensive. Large scale manual testing also implies large testing teams, with the related costs of space, overhead and infrastructure. Manual testing is also far more responsive and flexible than automated testing, but is prone to tester error through fatigue.

Fully automated testing is very consistent and allows the repetition of similar tests at very little marginal cost. The setup and purchase costs of such automation are very high, however, and maintenance can be equally expensive. Automation is also relatively inflexible and requires rework in order to adapt to changing requirements.

Partial automation incorporates automation only where the most benefit can be achieved. The advantage is that it specifically targets the tasks best suited to automation and thus achieves the most benefit from them. It also retains a large component of manual testing, which maintains the test team's flexibility and offers redundancy by backing up automation with manual testing. The disadvantage is that it does not provide benefits as extensive as either extreme solution.

Choosing the right tool

· Take time to define the tool requirements in terms of technology, process, applications, people skills, and organization.

· During tool evaluation, prioritize which test types are the most critical to your success and judge the

candidate tools on those criteria.

· Understand the tools and their trade-offs. You may need to use a multi-tool solution to get higher levels

of test-type coverage. For example, you will need to combine the capture/play-back tool with a load-test

tool to cover your performance test cases.

· Involve potential users in the definition of tool requirements and evaluation criteria.

· Build an evaluation scorecard to compare each tool’s performance against a common set of criteria.

Rank the criteria in terms of relative importance to the organization.

Top Ten Challenges of Software Test Automation

1. Buying the Wrong Tool
2. Inadequate Test Team Organization
3. Lack of Management Support


4. Incomplete Coverage of Test Types by the selected tool 

5. Inadequate Tool Training 

6. Difficulty using the tool 

7. Lack of a Basic Test Process or Understanding of What to Test 

8. Lack of Configuration Management Processes 

9. Lack of Tool Compatibility and Interoperability 

10. Lack of Tool Availability

10. Introduction to Software Standards

Capability Maturity Model – Developed by the software community in 1986 with leadership from the SEI, the CMM describes the principles and practices underlying software process maturity. It is intended to help software organizations improve the maturity of their software processes in terms of an evolutionary path from ad hoc, chaotic processes to mature, disciplined software processes. The focus is on identifying key process areas and the exemplary practices that may comprise a disciplined software process.

What makes up the CMM? The CMM is organized into five maturity levels:
· Initial
· Repeatable
· Defined
· Managed
· Optimizing

Except for Level 1, each maturity level decomposes into several key process areas that indicate the areas

an organization should focus on to improve its software process.

Level 1 – Initial: The software process is ad hoc, occasionally even chaotic; few processes are defined and success depends on individual effort. (The higher levels are characterized, in order, as a disciplined process, a standard and consistent process, a predictable process, and a continuously improving process.)

Level 2 – Repeatable: Key practice areas – Requirements management, Software project planning,

Software project tracking & oversight, Software subcontract management, Software quality assurance,

Software configuration management

Level 3 – Defined: Key practice areas – Organization process focus, Organization process definition,

Training program, Integrated software management, Software product engineering, Intergroup

coordination, Peer reviews

Level 4 – Managed: Key practice areas – Quantitative Process Management, Software Quality Management

Level 5 – Optimizing: Key practice areas – Defect prevention, Technology change management, Process

change management

Six Sigma

Six Sigma is a quality management program to achieve "six sigma" levels of quality. It was pioneered by Motorola in the mid-1980s and has spread to many other manufacturing companies, notably General Electric (GE).


Six Sigma is a rigorous and disciplined methodology that uses data and statistical analysis to measure

and improve a company’s operational performance by identifying and eliminating “defects” from

manufacturing to transactional and from product to service. Commonly defined as 3.4 defects per million

opportunities, Six Sigma can be defined and understood at three distinct levels: metric, methodology and

philosophy…

Training: Six Sigma processes are executed by Six Sigma Green Belts and Six Sigma Black Belts, and are overseen by Six Sigma Master Black Belts.

ISO

ISO – the International Organization for Standardization – is a network of the national standards institutes of 150 countries, on the basis of one member per country, with a Central Secretariat in Geneva, Switzerland, that coordinates the system. ISO is a non-governmental organization and has developed over 13,000 International Standards on a variety of subjects.

11. Software Testing Certifications

Certification Information for Software QA and Test Engineers:

CSQE – ASQ (American Society for Quality)'s program for the CSQE (Certified Software Quality Engineer) – information on requirements, an outline of the required 'Body of Knowledge', a listing of study references and more.

CSQA/CSTE – QAI (Quality Assurance Institute)’s program for CSQA (Certified Software Quality Analyst)

and CSTE (Certified Software Test Engineer) certifications.

ISEB Software Testing Certifications – The British Computer Society maintains a program of two levels of certification: the ISEB Foundation Certificate and the Practitioner Certificate.

ISTQB Certified Tester – The International Software Testing Qualifications Board is a part of the European Organization for Quality – Software Group, based in Germany. The certifications are based on experience, a training course and an exam. Two levels are available: Foundation and Advanced.

12. Facts about Software Engineering

The following are some facts that can help you gain a better insight into the realities of software engineering.

1. The best programmers are up to 28 times better than the worst programmers. 

2. New tools/techniques cause an initial LOSS of productivity/quality. 

3. The answer to a feasibility study is almost always “yes”. 

4. A May 2002 report prepared for the National Institute of Standards and Technology (NIST) estimates the annual cost of software defects in the United States at $59.5 billion.
5. Reusable components are three times as hard to build.

6. For every 25% increase in problem complexity, there is a 100% increase in solution complexity. 

7. 80% of software work is intellectual. A fair amount of it is creative. Little of it is clerical. 

8. Requirements errors are the most expensive to fix during production. 

9. Missing requirements are the hardest requirement errors to correct. 

10. Error-removal is the most time-consuming phase of the life cycle. 

11. Software is usually tested at best at the 55-60% (branch) coverage level. 


12. 100% coverage is still far from enough. 

13. Rigorous inspections can remove up to 90% of errors before the first test case is run. 

14. Maintenance typically consumes 40-80% of software costs. It is probably the most important life cycle

phase of software. 

15. Enhancements represent roughly 60% of maintenance costs. 

16. There is no single best approach to software error removal.

13. References

- Software Engineering – Roger S. Pressman
- Software Testing – Ron Patton
- Effective Methods for Software Testing – William E. Perry
- Articles by James A. Whittaker

Defect Tracking Tools

http://testingfaqs.org/t-track.html – a list of defect tracking tools. Both commercial and freeware tools are included.

Hyperlinks for Certifications

CSQE – http://www.asq.org/cert/types/csqe/index.html
CSQA/CSTE – http://www.softwarecertifications.com/
ISEB Software Testing Certifications – http://www.bcs.org/BCS/Products/Qualifications/ISEB/Areas/SoftTest/
ISTQB Certified Tester – http://www.isqi.org/isqi/eng/cert/ct/

Testing Techniques

Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Interview Faq's, Manual Testing Basics

o Black Box Testing
o White Box Testing
o Regression Testing

These principles & techniques can be applied to any type of testing.

Black Box Testing: Testing of a function without knowing the internal structure of the program.

White Box Testing: Testing of a function knowing the internal structure of the program.

Regression Testing: Testing to ensure that code changes have not had an adverse effect on other modules or on existing functions.

Functional Testing:
Study the SRS


Identify unit functions

For each unit function:
- Take each input function
- Identify equivalence classes
- Form test cases
- Form test cases for boundary values
- Form test cases for error guessing

Form a unit function vs. test cases cross-reference matrix
Find the coverage

What makes a good Software QA engineer?

Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics

The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able to understand the entire software development process and how it fits into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's missing' is important for inspections and reviews.

Testing

An examination of the behavior of a program by executing it on sample data sets. Testing comprises a set of activities to detect defects in the produced material. Its purposes are:
- To unearth and correct defects
- To detect defects early and to reduce the cost of defect fixing
- To avoid users detecting problems
- To ensure that the product works as users expect it to

 

Basic definitions of software testing and quality assurance

Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Interview Faq's, Manual Testing Basics


acceptance criteria: The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity. [IEEE 610]

acceptance testing: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. [After IEEE 610]

accessibility testing: Testing to determine the ease by which users with disabilities can use a component or system.

[Gerrard]

accuracy: The capability of the software product to provide the right or agreed results or effects with the needed

degree of precision. [ISO 9126] See also functionality testing.

actual result: The behavior produced/observed when a component or system is tested.

ad hoc testing: Testing carried out informally; no formal test preparation takes place, no recognized test design

technique is used, there are no expectations for results and randomness guides the test execution activity.

adaptability: The capability of the software product to be adapted for different specified environments without

applying actions or means other than those provided for this purpose for the software considered. [ISO 9126] See

also portability testing.

agile testing: Testing practice for a project using agile methodologies, such as extreme programming (XP), treating

development as the customer of testing and emphasizing the test-first design paradigm.

alpha testing: Simulated or actual operational testing by potential users/customers or an independent test team at

the developers’ site, but outside the development organization. Alpha testing is often employed as a form of internal

acceptance testing.

analyzability: The capability of the software product to be diagnosed for deficiencies or causes of failures in the

software, or for the parts to be modified to be identified. [ISO 9126] See also maintainability testing.

anomaly: Any condition that deviates from expectation based on requirements specifications, design documents,

user documents, standards, etc. or from someone’s perception or experience. Anomalies may be found during, but

not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation.

[IEEE 1044] See also defect, deviation, error, fault, failure, incident, problem.

attractiveness: The capability of the software product to be attractive to the user. [ISO 9126]

audit: An independent evaluation of software products or processes to ascertain compliance to standards,

guidelines, specifications, and/or procedures based on objective criteria, including documents that specify:

(1) the form or content of the products to be produced

(2) the process by which the products shall be produced

(3) how compliance to standards or guidelines shall be measured. [IEEE 1028]

audit trail: A path by which the original input to a process (e.g. data) can be traced back through the process, taking

the process output as a starting point. This facilitates defect analysis and allows a process audit to be carried out.

[After TMap]


automated testware: Testware used in automated testing, such as tool scripts.

availability: The degree to which a component or system is operational and accessible when required for use. Often

expressed as a percentage. [IEEE 610]

 

B

back-to-back testing: Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies. [IEEE 610]

baseline: A specification or software product that has been formally reviewed or agreed upon, that thereafter serves

as the basis for further development, and that can be changed only through a formal change control process. [After

IEEE 610]

basic block: A sequence of one or more consecutive executable statements containing no branches.

basis test set: A set of test cases derived from the internal structure or specification to ensure that 100% of a

specified coverage criterion is achieved.

behavior: The response of a component or system to a set of input values and preconditions.

benchmark test: (1) A standard against which measurements or comparisons can be made. (2) A test that is used to compare components or systems to each other or to a standard as in (1). [After IEEE 610]

bespoke software: Software developed specifically for a set of users or customers. The opposite is off-the-shelf

software.

best practice: A superior method or innovative practice that contributes to the improved performance of an

organization under given context, usually recognized as ‘best’ by other peer organizations.

beta testing: Operational testing by potential and/or existing users/customers at an external site not otherwise

involved with the developers, to determine whether or not a component or system satisfies the user/customer needs

and fits within the business processes. Beta testing is often employed as a form of external acceptance testing in

order to acquire feedback from the market.

big-bang testing: A type of integration testing in which software elements, hardware elements, or both are

combined all at once into a component or an overall system, rather than in stages. [After IEEE 610] See also

integration testing.

black box testing: Testing, either functional or non-functional, without reference to the internal structure of the

component or system.

black box test design technique: Documented procedure to derive and select test cases based on an analysis of the

specification, either functional or non-functional, of a component or system without reference to its internal

structure.

blocked test case: A test case that cannot be executed because the preconditions for its execution are not fulfilled.


bottom-up testing: An incremental approach to integration testing where the lowest level components are tested first,

and then used to facilitate the testing of higher level components. This process is repeated until the component at the

top of the hierarchy is tested. See also integration testing.

boundary value: An input value or output value which is on the edge of an equivalence partition or at the smallest

incremental distance on either side of an edge, for example the minimum or maximum value of a range.

boundary value analysis: A black box test design technique in which test cases are designed based on boundary

values.

boundary value coverage: The percentage of boundary values that have been exercised by a test suite.

branch: A basic block that can be selected for execution based on a program construct in which one of two or more alternative program paths is available, e.g. case, jump, go to, if-then-else.

branch coverage: The percentage of branches that have been exercised by a test suite. 100% branch coverage implies

both 100% decision coverage and 100% statement coverage.

branch testing: A white box test design technique in which test cases are designed to execute branches.

business process-based testing: An approach to testing in which test cases are designed based on descriptions and/or

knowledge of business processes.

 

C

Capability Maturity Model (CMM): A five level staged framework that describes the key elements of an effective

software process. The Capability Maturity Model covers practices for planning, engineering and managing software

development and maintenance. [CMM]

Capability Maturity Model Integration (CMMI): A framework that describes the key elements of an effective

product development and maintenance process. The Capability Maturity Model Integration covers practices for

planning, engineering and managing product development and maintenance. CMMI is the designated successor of

the CMM. [CMMI]

capture/playback tool: A type of test execution tool where inputs are recorded during manual testing in order to

generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support

automated regression testing.

CASE: Acronym for Computer Aided Software Engineering.

CAST: Acronym for Computer Aided Software Testing. See also test automation.

cause-effect graph: A graphical representation of inputs and/or stimuli (causes) with their associated outputs

(effects), which can be used to design test cases.

cause-effect graphing: A black box test design technique in which test cases are designed from cause-effect graphs.

[BS 7925/2]

certification: The process of confirming that a component, system or person complies with its specified

requirements, e.g. by passing an exam.


changeability: The capability of the software product to enable specified modifications to be implemented. [ISO

9126] See also maintainability.

classification tree method: A black box test design technique in which test cases, described by means of a

classification tree, are designed to execute combinations of representatives of input and/or output domains.

[Grochtmann]

code coverage: An analysis method that determines which parts of the software have been executed (covered) by the

test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.

co-existence: The capability of the software product to co-exist with other independent software in a common

environment sharing common resources. [ISO 9126] See portability testing.

complexity: The degree to which a component or system has a design and/or internal structure that is difficult to

understand, maintain and verify. See also cyclomatic complexity.

compliance: The capability of the software product to adhere to standards, conventions or regulations in laws and

similar prescriptions. [ISO 9126]

compliance testing: The process of testing to determine the compliance of a component or system.

component: A minimal software item that can be tested in isolation.

component integration testing: Testing performed to expose defects in the interfaces and interaction between

integrated components.

component specification: A description of a component’s function in terms of its output values for specified input

values under specified conditions, and required non-functional behavior (e.g. resource-utilization).

component testing: The testing of individual software components. [After IEEE 610]

compound condition: Two or more single conditions joined by means of a logical operator (AND, OR or XOR), e.g.

‘A>B AND C>1000’.

concurrency testing: Testing to determine how the occurrence of two or more activities within the same interval of

time, achieved either by interleaving the activities or by simultaneous execution, is handled by the component or

system. [After IEEE 610]

condition: A logical expression that can be evaluated as True or False, e.g. A>B. See also test condition.

condition coverage: The percentage of condition outcomes that have been exercised by a test suite. 100% condition

coverage requires each single condition in every decision statement to be tested as True and False.

condition determination coverage: The percentage of all single condition outcomes that independently affect a

decision outcome that have been exercised by a test case suite. 100% condition determination coverage implies

100% decision condition coverage.

condition determination testing: A white box test design technique in which test cases are designed to execute single

condition outcomes that independently affect a decision outcome.

condition testing: A white box test design technique in which test cases are designed to execute condition outcomes.

condition outcome: The evaluation of a condition to True or False.


configuration: The composition of a component or system as defined by the number, nature, and interconnections of

its constituent parts.

configuration auditing: The function to check on the contents of libraries of configuration items, e.g. for standards

compliance. [IEEE 610]

configuration control: An element of configuration management, consisting of the evaluation, co-ordination,

approval or disapproval, and implementation of changes to configuration items after formal establishment of their

configuration identification. [IEEE

610]

configuration identification: An element of configuration management, consisting of selecting the configuration

items for a system and recording their functional and physical characteristics in technical documentation. [IEEE

610]

configuration item: An aggregation of hardware, software or both, that is designated for configuration management

and treated as a single entity in the configuration management process. [IEEE 610]

configuration management: A discipline applying technical and administrative direction and surveillance to: identify

and document the functional and physical characteristics of a configuration item, control changes to those

characteristics, record and report change processing and implementation status, and verify compliance with specified

requirements. [IEEE 610]

consistency: The degree of uniformity, standardization, and freedom from contradiction among the documents or

parts of a component or system. [IEEE 610]

control flow: An abstract representation of all possible sequences of events (paths) in the execution through a

component or system.

conversion testing: Testing of software used to convert data from existing systems for use in replacement systems.

COTS: Acronym for Commercial Off-The-Shelf software.

coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test

suite.

coverage analysis: Measurement of achieved coverage to a specified coverage item during test execution referring to

predetermined criteria to determine whether additional testing is required and if so, which test cases are needed.

coverage item: An entity or property used as a basis for test coverage, e.g. equivalence partitions or code statements.

coverage tool: A tool that provides objective measures of what structural elements, e.g. statements, branches have

been exercised by the test suite.

cyclomatic complexity: The number of independent paths through a program. Cyclomatic complexity is defined as L – N + 2P, where:
– L = the number of edges/links in the graph
– N = the number of nodes in the graph
– P = the number of disconnected parts of the graph (e.g. a calling graph and a subroutine). [After McCabe]
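
As a worked example (for an invented unit containing a single if/else): its control flow graph has 4 edges, 4 nodes (the entry/decision node, the two branch blocks, and the merge/exit node) and 1 connected part, so its cyclomatic complexity is L – N + 2P = 4 – 4 + 2(1) = 2, matching the two independent paths through the unit.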

D

data definition: An executable statement where a variable is assigned a value.


data driven testing: A scripting technique that stores test input and expected results in a table or spreadsheet, so that

a single control script can execute all of the tests in the table. Data driven testing is often used to support the

application of test execution tools such as capture/playback tools. [Fewster and Graham] See also keyword driven

testing.

data flow: An abstract representation of the sequence and possible changes of the state of data objects, where the

state of an object is any of: creation, usage, or destruction. [Beizer]

data flow analysis: A form of static analysis based on the definition and usage of variables.

data flow coverage: The percentage of definition-use pairs that have been exercised by a test case suite.

data flow test: A white box test design technique in which test cases are designed to execute definition and use pairs

of variables.

debugging: The process of finding, analyzing and removing the causes of failures in software.

debugging tool: A tool used by programmers to reproduce failures, investigate the state of programs and find the

corresponding defect. Debuggers enable programmers to execute programs step by step, to halt a program at any

program statement and to set and examine program variables.

decision: A program point at which the control flow has two or more alternative routes. A node with two or more

links to separate branches.

decision condition coverage: The percentage of all condition outcomes and decision outcomes that have been

exercised by a test suite. 100% decision condition coverage implies both 100% condition coverage and 100%

decision coverage.

decision condition testing: A white box test design technique in which test cases are designed to execute condition

outcomes and decision outcomes.

decision coverage: The percentage of decision outcomes that have been exercised by a test suite. 100% decision

coverage implies both 100% branch coverage and 100% statement coverage.

decision table: A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or

actions (effects), which can be used to design test cases.

decision table testing: A black box test design technique in which test cases are designed to execute the

combinations of inputs and/or stimuli (causes) shown in a decision table. [Veenendaal]

decision testing: A white box test design technique in which test cases are designed to execute decision outcomes.

decision outcome: The result of a decision (which therefore determines the branches to be taken).

defect: A flaw in a component or system that can cause the component or system to fail to perform its required

function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a

failure of the component or system.

defect density: The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-of-code, number of classes or function points).

Defect Detection Percentage (DDP): The number of defects found by a test phase, divided by the number found by that test phase and by any other means afterwards.


defect report: A document reporting on any flaw in a component or system that can cause the component or system

to fail to perform its required function. [After IEEE 829]

defect management: The process of recognizing, investigating, taking action and disposing of defects. It involves

recording defects, classifying them and identifying the impact. [After IEEE 1044]

defect masking: An occurrence in which one defect prevents the detection of another. [After IEEE 610]

definition-use pair: The association of the definition of a variable with the use of that variable. Variable uses include computational uses (e.g. multiplication) and uses that direct the execution of a path ("predicate" use).

deliverable: Any (work) product that must be delivered to someone other than the (work) product's author.

design-based testing: An approach to testing in which test cases are designed based on the architecture and/or

detailed design of a component or system (e.g. tests of interfaces between components or systems).

desk checking: Testing of software or specification by manual simulation of its execution.

development testing: Formal or informal testing conducted during the implementation of a component or system,

usually in the development environment by developers. [After IEEE 610]

documentation testing: Testing the quality of the documentation, e.g. user guide or installation guide.

domain: The set from which valid input and/or output values can be selected.

driver: A software component or test tool that replaces a component that takes care of the control and/or the calling

of a component or system. [After TMap]

dynamic analysis: The process of evaluating behavior, e.g. memory performance, CPU usage, of a system or

component during execution. [After IEEE 610]

dynamic comparison: Comparison of actual and expected results, performed while the software is being executed,

for example by a test execution tool.

dynamic testing: Testing that involves the execution of the software of a component or system.

 

 

E

efficiency: The capability of the software product to provide appropriate performance, relative to the amount of

resources used under stated conditions. [ISO 9126]

efficiency testing: The process of testing to determine the efficiency of a software product.

elementary comparison testing: A black box test design technique in which test cases are designed to execute

combinations of inputs using the concept of condition determination coverage. [TMap]

emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a

given system. [IEEE 610] See also simulator.

entry criteria: The set of generic and specific conditions for permitting a process to go forward with a defined task,

e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted)

effort compared to the effort needed to remove the failed entry criteria. [Gilb and Graham]


entry point: The first executable statement within a component.

equivalence partition: A portion of an input or output domain for which the behavior of a component or system is

assumed to be the same, based on the specification.

equivalence partition coverage: The percentage of equivalence partitions that have been exercised by a test suite.

equivalence partitioning: A black box test design technique in which test cases are designed to execute

representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.

error: A human action that produces an incorrect result. [After IEEE 610]

error guessing: A test design technique where the experience of the tester is used to anticipate what defects might be

present in the component or system under test as a result of errors made, and to design tests specifically to expose

them.

error seeding: The process of intentionally adding known defects to those already in the component or system for the

purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects. [IEEE

610]

error tolerance: The ability of a system or component to continue normal operation despite the presence of erroneous

inputs. [After IEEE 610].

exception handling: Behavior of a component or system in response to erroneous input, from either a human user or

from another component or system, or to an internal failure.

executable statement: A statement which, when compiled, is translated into object code, and which will be executed

procedurally when the program is running and may perform an action on data.

exercised: A program element is said to be exercised by a test case when the input value causes the execution of that

element, such as a statement, decision, or other structural element.

exhaustive testing: A test approach in which the test suite comprises all combinations of input values and

preconditions.

exit criteria: The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process

to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when

there are still outstanding parts of the task which have not been finished. Exit criteria are used by testing to report

against and to plan when to stop testing. [After Gilb and Graham]

exit point: The last executable statement within a component.

expected result: The behavior predicted by the specification, or another source, of the component or system under

specified conditions.

exploratory testing: Testing where the tester actively controls the design of the tests as those tests are performed and

uses information gained while testing to design new and better tests. [Bach]

 

 

F


fail: A test is deemed to fail if its actual result does not match its expected result.

failure: Actual deviation of the component or system from its expected delivery, service or result. [After Fenton]

failure mode: The physical or functional manifestation of a failure. For example, a system in failure mode may be

characterized by slow operation, incorrect outputs, or complete termination of execution.

Failure Mode and Effect Analysis (FMEA): A systematic approach to risk identification and analysis of identifying

possible modes of failure and attempting to prevent their occurrence.

failure rate: The ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit

of time, failures per number of transactions, failures per number of computer runs. [IEEE 610]

fault tolerance: The capability of the software product to maintain a specified level of performance in cases of

software faults (defects) or of infringement of its specified interface. [ISO 9126] See also reliability.

fault tree analysis: A method used to analyze the causes of faults (defects).

feasible path: A path for which a set of input values and preconditions exists which causes it to be executed.

feature: An attribute of a component or system specified or implied by requirements documentation (for example

reliability, usability or design constraints). [After IEEE 1008]

finite state machine: A computational model consisting of a finite number of states and transitions between those

states, possibly with accompanying actions. [IEEE 610]

formal review: A review characterized by documented procedures and requirements, e.g. inspection.

frozen test basis: A test basis document that can only be amended by a formal change control process. See also

baseline.

Function Point Analysis (FPA): Method aiming to measure the size of the functionality of an information system.

The measurement is independent of the technology. This measurement may be used as a basis for the measurement

of productivity, the estimation of the needed resources, and project control.

functional integration: An integration approach that combines the components or systems for the purpose of getting

a basic functionality working early. See also integration testing.

functional requirement: A requirement that specifies a function that a component or system must perform. [IEEE

610]

functional test design technique: Documented procedure to derive and select test cases based on an analysis of the

specification of the functionality of a component or system without reference to its internal structure. See also black

box test design technique.

functional testing: Testing based on an analysis of the specification of the functionality of a component or system.

See also black box testing.

functionality: The capability of the software product to provide functions which meet stated and implied needs when

the software is used under specified conditions. [ISO 9126]

functionality testing: The process of testing to determine the functionality of a software product.

 


G

glass box testing: See white box testing.

 

H

heuristic evaluation: A static usability test technique to determine the compliance of a user interface with recognized

usability principles (the so-called “heuristics”).

high level test case: A test case without concrete (implementation level) values for input data and expected results.

horizontal traceability: The tracing of requirements for a test level through the layers of test documentation (e.g. test

plan, test design specification, test case specification and test procedure specification).

 

I

impact analysis: The assessment of change to the layers of development documentation, test documentation and

components, in order to implement a given change to specified requirements.

incremental development model: A development life cycle where a project is broken into a series of increments,

each of which delivers a portion of the functionality in the overall project requirements. The requirements are

prioritized and delivered in priority order in the appropriate increment. In some (but not all) versions of this life

cycle model, each subproject follows a ‘mini V-model’ with its own design, coding and testing phases.

incremental testing: Testing where components or systems are integrated and tested one or some at a time, until all

the components or systems are integrated and tested.

incident: Any event occurring during testing that requires investigation. [After IEEE 1008]

incident management: The process of recognizing, investigating, taking action and disposing of incidents. It involves

recording incidents, classifying them and identifying the impact. [After IEEE 1044]

incident management tool: A tool that facilitates the recording and status tracking of incidents found during testing.

They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of

incidents and provide reporting facilities.

incident report: A document reporting on any event that occurs during the testing which requires investigation.

[After IEEE 829]

independence: Separation of responsibilities, which encourages the accomplishment of objective testing. [After DO-

178b]

infeasible path: A path that cannot be exercised by any set of possible input values.

informal review: A review not based on a formal (documented) procedure.

input: A variable (whether stored within a component or outside) that is read by a component.

input domain: The set from which valid input values can be selected. See also domain.

input value: An instance of an input. See also input.


inspection: A type of review that relies on visual examination of documents to detect defects, e.g. violations of

development standards and non-conformance to higher level documentation. The most formal review technique and

therefore always based on a documented procedure. [After IEEE 610, IEEE 1028]

installability: The capability of the software product to be installed in a specified environment [ISO 9126]. See also

portability.

installability testing: The process of testing the installability of a software product. See also portability testing.

installation guide: Supplied instructions on any suitable media, which guides the installer through the installation

process. This may be a manual guide, step-by-step procedure, installation wizard, or any other similar process

description.

installation wizard: Supplied software on any suitable media, which leads the installer through the installation

process. It normally runs the installation process, provides feedback on installation results, and prompts for options.

instrumentation: The insertion of additional code into the program in order to collect information about program

behavior during execution.

instrumenter: A software tool used to carry out instrumentation.

intake test: A special instance of a smoke test to decide if the component or system is ready for detailed and further

testing. An intake test is typically carried out at the start of the test execution phase.

integration: The process of combining components or systems into larger assemblies.

integration testing: Testing performed to expose defects in the interfaces and in the interactions between integrated

components or systems. See also component integration testing, system integration testing.

interface testing: An integration test type that is concerned with testing the interfaces between components or

systems.

interoperability: The capability of the software product to interact with one or more specified components or

systems. [After ISO 9126] See also functionality.

interoperability testing: The process of testing to determine the interoperability of a software product. See also

functionality testing.

invalid testing: Testing using input values that should be rejected by the component or system. See also error

tolerance.

isolation testing: Testing of individual components in isolation from surrounding components, with surrounding

components being simulated by stubs and drivers, if needed.

 

K

keyword driven testing: A scripting technique that uses data files to contain not only test data and expected results,

but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts

that are called by the control script for the test. See also data driven testing.

 


L

LCSAJ: A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by

line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear

sequence, and the target line to which control flow is transferred at the end of the linear sequence.

LCSAJ coverage: The percentage of LCSAJs of a component that have been exercised by a test suite. 100% LCSAJ

coverage implies 100% decision coverage.

LCSAJ testing: A white box test design technique in which test cases are designed to execute LCSAJs.

learnability: The capability of the software product to enable the user to learn its application. [ISO 9126] See also

usability.

load test: A test type concerned with measuring the behavior of a component or system with increasing load, e.g.

number of parallel users and/or numbers of transactions to determine what load can be handled by the component or

system.

low level test case: A test case with concrete (implementation level) values for input data and expected results.

 

M

maintenance: Modification of a software product after delivery to correct defects, to improve performance or other

attributes, or to adapt the product to a modified environment. [IEEE 1219]

maintenance testing: Testing the changes to an operational system or the impact of a changed environment to an

operational system.

maintainability: The ease with which a software product can be modified to correct defects, modified to meet new

requirements, modified to make future maintenance easier, or adapted to a changed environment. [ISO 9126]

maintainability testing: The process of testing to determine the maintainability of a software product.

management review: A systematic evaluation of software acquisition, supply, development, operation, or

maintenance process, performed by or on behalf of management that monitors progress, determines the status of

plans and schedules, confirms requirements and their system allocation, or evaluates the effectiveness of

management approaches to achieve fitness for purpose. [After IEEE 610, IEEE 1028]

maturity: (1) The capability of an organization with respect to the effectiveness and efficiency of its processes and

work practices. See also Capability Maturity Model, Test Maturity Model. (2) The capability of the software product

to avoid failure as a result of defects in the software. [ISO 9126] See also reliability.

measure: The number or category assigned to an attribute of an entity by making a measurement [ISO 14598].

measurement: The process of assigning a number or category to an entity to describe an attribute of that entity. [ISO

14598]

measurement scale: A scale that constrains the type of data analysis that can be performed on it. [ISO 14598]

memory leak: A defect in a program’s dynamic store allocation logic that causes it to fail to reclaim memory after it

has finished using it, eventually causing the program to fail due to lack of memory.
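In a garbage-collected language the analogous defect is a collection of references that is never released, as in this illustrative Python sketch:

    # Module-level cache that is never pruned: every request handled
    # stays reachable, so memory use grows for the life of the process.
    _results = []

    def handle_request(payload):
        result = payload.upper()
        _results.append(result)   # defect: finished work is never reclaimed
        return result

    # After enough calls the process fails for lack of memory; the fix
    # is to bound the collection or drop entries once they are done with.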


metric: A measurement scale and the method used for measurement. [ISO 14598]

milestone: A point in time in a project at which defined (intermediate) deliverables and results should be ready.

moderator: The leader and main person responsible for an inspection or other review process.

monitor: A software tool or hardware device that runs concurrently with the component or system under test and

supervises, records and/or analyses the behavior of the component or system. [After IEEE 610]

multiple condition coverage: The percentage of combinations of all single condition outcomes within one statement that have been exercised by a test suite. 100% multiple condition coverage implies 100% condition determination coverage.

multiple condition testing: A white box test design technique in which test cases are designed to execute

combinations of single condition outcomes (within one statement).

mutation analysis: A method to determine test suite thoroughness by measuring the extent to which a test suite can

discriminate the program from slight variants (mutants) of the program.
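A tiny worked example in Python, with one invented mutant; a test suite "kills" the mutant if some test value distinguishes it from the original:

    def is_adult(age):
        return age >= 18          # original program

    def is_adult_mutant(age):
        return age > 18           # slight variant: >= mutated to >

    # A thorough suite discriminates the program from the mutant.
    test_inputs = [0, 17, 18, 30]
    killed = any(is_adult(x) != is_adult_mutant(x) for x in test_inputs)
    print("mutant killed:", killed)   # True: the input 18 exposes the mutant

A suite without the boundary value 18 would leave this mutant alive, which is exactly the gap mutation analysis is meant to reveal.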

N

N-switch coverage: The percentage of sequences of N+1 transitions that have been exercised by a test suite. [Chow]

N-switch testing: A form of state transition testing in which test cases are designed to execute all valid sequences of

N+1 transitions. [Chow] See also state transition testing.

negative testing: Tests aimed at showing that a component or system does not work. Negative testing is related to

the testers’ attitude rather than a specific test approach or test design technique. [After Beizer].

non-conformity: Non-fulfillment of a specified requirement. [ISO 9000]

non-functional requirement: A requirement that does not relate to functionality, but to attributes such as

reliability, efficiency, usability, maintainability and portability.

non-functional testing: Testing the attributes of a component or system that do not relate to functionality, e.g.

reliability, efficiency, usability, maintainability and portability.

non-functional test design techniques: Methods used to design or select tests for nonfunctional testing.

 

O

off-the-shelf software: A software product that is developed for the general market, i.e. for a large number of

customers, and that is delivered to many customers in identical format.

operability: The capability of the software product to enable the user to operate and control it. [ISO 9126] See also

usability.

operational environment: Hardware and software products installed at users’ or customers’ sites where the

component or system under test will be used. The software may include operating systems, database management

systems, and other applications.


operational profile testing: Statistical testing using a model of system operations (short duration tasks) and their

probability of typical use. [Musa]
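For illustration, a test session can be drawn from a hypothetical profile of operation probabilities:

    import random

    # Hypothetical operational profile: operations and their probability
    # of occurrence in typical use.
    profile = {"browse": 0.70, "search": 0.25, "checkout": 0.05}

    operations, weights = zip(*profile.items())
    # Draw a 20-step session whose mix statistically matches typical use.
    session = random.choices(operations, weights=weights, k=20)
    print(session)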

operational testing: Testing conducted to evaluate a component or system in its operational environment. [IEEE 610]

output: A variable (whether stored within a component or outside) that is written by a component.

output domain: The set from which valid output values can be selected. See also domain.

output value: An instance of an output. See also output.

 

P

pair programming: A software development approach whereby lines of code (production and/or test) of a component

are written by two programmers sitting at a single computer. This implicitly means ongoing real-time code reviews

are performed.

pair testing: Two testers work together to find defects. Typically, they share one computer and trade control of it

while testing.

pass: A test is deemed to pass if its actual result matches its expected result.

pass/fail criteria: Decision rules used to determine whether a test item (function) or feature has passed or failed a

test. [IEEE 829]

path: A sequence of events, e.g. executable statements, of a component or system from an entry point to an exit

point.

path coverage: The percentage of paths that have been exercised by a test suite. 100% path coverage implies 100%

LCSAJ coverage.

path sensitizing: Choosing a set of input values to force the execution of a given path.

path testing: A white box test design technique in which test cases are designed to execute paths.

performance: The degree to which a system or component accomplishes its designated functions within given

constraints regarding processing time and throughput rate. [After IEEE 610] See efficiency.

performance indicator: A high level metric of effectiveness and/or efficiency used to guide and control progressive

development, e.g. Defect Detection Percentage (DDP) for testing. [CMMI]

performance testing: The process of testing to determine the performance of a software product. See efficiency

testing.

performance testing tool: A tool to support performance testing and that usually has two main facilities: load

generation and test transaction measurement. Load generation can simulate either multiple users or high volumes of

input data. During execution, response time measurements are taken from selected transactions and these are logged.

Performance testing tools normally provide reports based on test logs and graphs of load against response times.

phase test plan: A test plan that typically addresses one test level.

portability: The ease with which the software product can be transferred from one hardware or software environment

to another. [ISO 9126]


portability testing: The process of testing to determine the portability of a software product.

postcondition: Environmental and state conditions that must be fulfilled after the execution of a test or test

procedure.

post-execution comparison: Comparison of actual and expected results, performed after the software has finished

running.

precondition: Environmental and state conditions that must be fulfilled before the component or system can be

executed with a particular test or test procedure.

priority: The level of (business) importance assigned to an item, e.g. a defect.

process cycle test: A black box test design technique in which test cases are designed to execute business procedures

and processes. [TMap]

process: A set of interrelated activities, which transform inputs into outputs. [ISO 12207]

project: A project is a unique set of coordinated and controlled activities with start and finish dates undertaken to achieve an

objective conforming to specific requirements, including the constraints of time, cost and resources. [ISO 9000]

project test plan: A test plan that typically addresses multiple test levels.

pseudo-random: A series which appears to be random but is in fact generated according to some prearranged

sequence.
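For testing, the useful property is reproducibility: seeding the generator with the same prearranged value regenerates the same data, as this Python snippet shows:

    import random

    # The same seed yields the same "random" sequence, so pseudo-random
    # test data can be regenerated exactly for any re-run.
    random.seed(42)
    first_run = [random.randint(0, 99) for _ in range(5)]
    random.seed(42)
    second_run = [random.randint(0, 99) for _ in range(5)]
    assert first_run == second_run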

 

Q

quality: The degree to which a component, system or process meets specified requirements and/or user/customer

needs and expectations. [After IEEE 610]

quality assurance: Part of quality management focused on providing confidence that quality requirements will be

fulfilled. [ISO 9000]

quality attribute: A feature or characteristic that affects an item’s quality. [IEEE 610]

quality management: Coordinated activities to direct and control an organization with regard to quality. Direction

and control with regard to quality generally includes the establishment of the quality policy and quality objectives,

quality planning, quality control, quality assurance and quality improvement. [ISO 9000]

 

R

random testing: A black box test design technique where test cases are selected, possibly using a pseudo-random

generation algorithm, to match an operational profile. This technique can be used for testing non-functional

attributes such as reliability and performance.

recoverability: The capability of the software product to re-establish a specified level of performance and recover

the data directly affected in case of failure. [ISO 9126] See also reliability.

recoverability testing: The process of testing to determine the recoverability of a software product. See also

reliability testing.


regression testing: Testing of a previously tested program following modification to ensure that defects have not

been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed

when the software or its environment is changed.

release note: A document identifying test items, their configuration, current status and other delivery information

delivered by development to testing, and possibly other stakeholders, at the start of a test execution phase. [After

IEEE 829]

reliability: The ability of the software product to perform its required functions under stated conditions for a

specified period of time, or for a specified number of operations. [ISO 9126]

reliability testing: The process of testing to determine the reliability of a software product.

replaceability: The capability of the software product to be used in place of another specified software product for

the same purpose in the same environment. [ISO 9126] See also portability.

requirement: A condition or capability needed by a user to solve a problem or achieve an objective that must be met

or possessed by a system or system component to satisfy a contract, standard, specification, or other formally

imposed document. [After IEEE 610]

requirements-based testing: An approach to testing in which test cases are designed based on test objectives and test

conditions derived from requirements, e.g. tests that exercise specific functions or probe non-functional attributes

such as reliability or usability.

requirements management tool: A tool that supports the recording of requirements, requirements attributes (e.g.

priority, knowledge responsible) and annotation, and facilitates traceability through layers of requirements and

requirements change management. Some requirements management tools also provide facilities for static analysis,

such as consistency checking and detection of violations of pre-defined requirements rules.

requirements phase: The period of time in the software life cycle during which the requirements for a software

product are defined and documented. [IEEE 610]

resource utilization: The capability of the software product to use appropriate amounts and types of resources, for

example the amounts of main and secondary memory used by the program and the sizes of required temporary or

overflow files, when the software performs its function under stated conditions. [After ISO 9126] See also

efficiency.

resource utilization testing: The process of testing to determine the resource-utilization of a software product.

result: The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports,

and communication messages sent out. See also actual result, expected result.

resumption criteria: The testing activities that must be repeated when testing is re-started after a suspension. [After

IEEE 829]

re-testing: Testing that runs test cases that failed the last time they were run, in order to verify the success of

corrective actions.


review: An evaluation of a product or project status to ascertain discrepancies from planned results and to

recommend improvements. Examples include management review, informal review, technical review, inspection,

and walkthrough. [After IEEE 1028]

reviewer: The person involved in the review who shall identify and describe anomalies in the product or project

under review. Reviewers can be chosen to represent different viewpoints and roles in the review process.

risk: A factor that could result in future negative consequences; usually expressed as impact and likelihood.

risk analysis: The process of assessing identified risks to estimate their impact and probability of occurrence

(likelihood).

risk-based testing: Testing oriented towards exploring and providing information about product risks. [After

Gerrard]

risk control: The process through which decisions are reached and protective measures are implemented for reducing

risks to, or maintaining risks within, specified levels.

risk identification: The process of identifying risks using techniques such as brainstorming, checklists and failure

history.

risk management: Systematic application of procedures and practices to the tasks of identifying, analyzing,

prioritizing, and controlling risk.

robustness: The degree to which a component or system can function correctly in the presence of invalid inputs or

stressful environmental conditions. [IEEE 610] See also error tolerance, fault tolerance.

root cause: An underlying factor that caused a non-conformance and possibly should be permanently eliminated

through process improvement.

 

S

safety: The capability of the software product to achieve acceptable levels of risk of harm to people, business,

software, property or the environment in a specified context of use. [ISO 9126]

safety testing: The process of testing to determine the safety of a software product.

scalability: The capability of the software product to be upgraded to accommodate increased loads. [After Gerrard]

scalability testing: Testing to determine the scalability of the software product.

scribe: The person who has to record each defect mentioned and any suggestions for improvement during a review

meeting, on a logging form. The scribe has to ensure that the logging form is readable and understandable.

scripting language: A programming language in which executable test scripts are written, used by a test execution

tool (e.g. a capture/replay tool).

security: Attributes of a software product that bear on its ability to prevent unauthorized access, whether accidental or

deliberate, to programs and data. [ISO 9126]

security testing: Testing to determine the security of the software product.


severity: The degree of impact that a defect has on the development or operation of a component or system. [After

IEEE 610]

simulation: The representation of selected behavioral characteristics of one physical or abstract system by another

system. [ISO 2382/1]

simulator: A device, computer program or system used during testing, which behaves or operates like a given system

when provided with a set of controlled inputs. [After IEEE 610, DO178b] See also emulator.

smoke test: A subset of all defined/planned test cases that cover the main functionality of a component or system, to

ascertain that the most crucial functions of a program work, without bothering with finer details. A daily build and

smoke test is among industry best practices. See also intake test.
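A minimal sketch of such a subset in Python; the application object and its methods are invented stand-ins for the freshly built system:

    class FakeApp:
        # Stand-in so the sketch runs; a real smoke test would drive
        # the actual build here.
        def starts_up(self):
            return True
        def login(self, user, password):
            return True
        def main_screen(self):
            return "main"

    def smoke_test(app):
        # Only the most crucial functions, with no fine-grained checks;
        # finer details are left to the detailed test suites.
        assert app.starts_up()
        assert app.login("demo", "demo")
        assert app.main_screen() == "main"

    smoke_test(FakeApp())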

software quality: The totality of functionality and features of a software product that bear on its ability to satisfy

stated or implied needs. [After ISO 9126]

specification: A document that specifies, ideally in a complete, precise and verifiable manner, the requirements,

design, behavior, or other characteristics of a component or system, and, often, the procedures for determining

whether these provisions have been satisfied. [After IEEE 610]

specification-based test design technique: See black box test design technique.

specified input: An input for which the specification predicts a result.

stability: The capability of the software product to avoid unexpected effects from modifications in the software.

[ISO 9126] See also maintainability.

state diagram: A diagram that depicts the states that a component or system can assume, and shows the events or

circumstances that cause and/or result from a change from one state to another. [IEEE 610]

state table: A grid showing the resulting transitions for each state combined with each possible event, showing both

valid and invalid transitions.

state transition: A transition between two states of a component or system.

state transition testing: A black box test design technique in which test cases are designed to execute valid and

invalid state transitions. See also N-switch testing.
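A sketch of the technique in Python, using an invented document workflow; one valid sequence and one invalid transition are exercised:

    # Transition table for a hypothetical document workflow.
    transitions = {
        ("draft",  "submit"):  "review",
        ("review", "approve"): "published",
        ("review", "reject"):  "draft",
    }

    def step(state, event):
        return transitions[(state, event)]   # invalid transitions raise KeyError

    # Valid sequence: draft -> review -> published.
    assert step(step("draft", "submit"), "approve") == "published"

    # Invalid transition: approving a draft must not be possible.
    try:
        step("draft", "approve")
        raise AssertionError("invalid transition was accepted")
    except KeyError:
        pass  # expected: the machine has no such transition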

statement: An entity in a programming language, which is typically the smallest indivisible unit of execution.

statement coverage: The percentage of executable statements that have been exercised by a test suite.
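As a worked example of the arithmetic (the counts are invented):

    # Statement coverage = executed statements / executable statements.
    executable_statements = 40   # statements in the component
    executed_statements = 34     # statements exercised by the suite
    coverage = 100.0 * executed_statements / executable_statements
    print(f"statement coverage: {coverage:.1f}%")   # 85.0%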

statement testing: A white box test design technique in which test cases are designed to execute statements.

static analysis: Analysis of software artifacts, e.g. requirements or code, carried out without execution of these

software artifacts.

static analyzer: A tool that carries out static analysis.

static code analysis: Analysis of program source code carried out without execution of that software.

static code analyzer: A tool that carries out static code analysis. The tool checks source code for certain properties

such as conformance to coding standards, quality metrics or data flow anomalies.

static testing: Testing of a component or system at specification or implementation level without execution of that

software, e.g. reviews or static code analysis.


statistical testing: A test design technique in which a model of the statistical distribution of the input is used to

construct representative test cases. See also operational profile testing.

status accounting: An element of configuration management, consisting of the recording and reporting of information

needed to manage a configuration effectively. This information includes a listing of the approved configuration

identification, the status of proposed changes to the configuration, and the implementation status of the approved

changes. [IEEE 610]

stress testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified

requirements. [IEEE 610]

structural coverage: Coverage measures based on the internal structure of the component.

structural test design technique: See white box test design technique.

stub: A skeletal or special-purpose implementation of a software component, used to develop or test a component

that calls or is otherwise dependent on it. It replaces a called component. [After IEEE 610]

subpath: A sequence of executable statements within a component.

suspension criteria: The criteria used to (temporarily) stop all or a portion of the testing activities on the test items.

[After IEEE 829]

suitability: The capability of the software product to provide an appropriate set of functions for specified tasks and

user objectives. [ISO 9126] See also functionality.

Software Usability Measurement Inventory (SUMI): A questionnaire based usability test technique to evaluate the

usability, e.g. user-satisfaction, of a component or system. [Veenendaal]

syntax testing: A black box test design technique in which test cases are designed based upon the definition of the

input domain and/or output domain.

system: A collection of components organized to accomplish a specific function or set of functions. [IEEE 610]

system integration testing: Testing the integration of systems and packages; testing interfaces to external

organizations (e.g. Electronic Data Interchange, Internet).

system testing: The process of testing an integrated system to verify that it meets specified requirements. [Hetzel]

 

T

technical review: A peer group discussion activity that focuses on achieving consensus on the technical approach to

be taken. A technical review is also known as a peer review. [Gilb and Graham, IEEE 1028]

test approach: The implementation of the test strategy for a specific project. It typically includes the decisions made

that follow based on the (test) project’s goal and the risk assessment carried out, starting points regarding the test

process, the test design techniques to be applied, exit criteria and test types to be performed.

test automation: The use of software to perform or support test activities, e.g. test management, test design, test

execution and results checking.


test basis: All documents from which the requirements of a component or system can be inferred. The

documentation on which the test cases are based. If a document can be amended only by way of formal amendment

procedure, then the test basis is called a frozen test basis. [After TMap]

test case: A set of input values, execution preconditions, expected results and execution postconditions, developed

for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with

a specific requirement. [After IEEE 610]
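Expressed in code, such a test case might look like this Python sketch (the withdrawal example and its values are invented):

    import unittest

    class WithdrawalTestCase(unittest.TestCase):
        def setUp(self):
            self.balance = 100                 # execution precondition

        def test_withdraw_within_balance(self):
            amount = 40                        # concrete input value
            remaining = self.balance - amount
            self.assertEqual(remaining, 60)    # expected result

    if __name__ == "__main__":
        unittest.main()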

test case specification: A document specifying a set of test cases (objective, inputs, test actions, expected results, and

execution preconditions) for a test item. [After IEEE 829]

test charter: A statement of test objectives, and possibly test ideas. Test charters are used, among other places, in

exploratory testing. See also exploratory testing.

test comparator: A test tool to perform automated test comparison.

test comparison: The process of identifying differences between the actual results produced by the component or

system under test and the expected results for a test. Test comparison can be performed during test execution

(dynamic comparison) or after test execution.

test condition: An item or event of a component or system that could be verified by one or more test cases, e.g. a

function, transaction, quality attribute, or structural element.

test data: Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the

component or system under test.

test data preparation tool: A type of test tool that enables data to be selected from existing databases or created,

generated, manipulated and edited for use in testing.

test design specification: A document specifying the test conditions (coverage items) for a test item, the detailed test

approach and identifying the associated high level test cases. [After IEEE 829]

test design tool: A tool that supports the test design activity by generating test inputs from a specification that may be

held in a CASE tool repository, e.g. requirements management tool, or from specified test conditions held in the tool

itself.

test design technique: A method used to derive or select test cases.

test environment: An environment containing hardware, instrumentation, simulators, software tools, and other

support elements needed to conduct a test. [After IEEE 610]

test evaluation report: A document produced at the end of the test process summarizing all testing activities and

results. It also contains an evaluation of the test process and lessons learned.

test execution: The process of running a test on the component or system under test, producing actual result(s).

test execution automation: The use of software, e.g. capture/playback tools, to control the execution of tests, the

comparison of actual results to expected results, the setting up of test preconditions, and other test control and

reporting functions.


test execution phase: The period of time in a software development life cycle during which the components of a

software product are executed, and the software product is evaluated to determine whether or not requirements have

been satisfied. [IEEE 610]

test execution schedule: A scheme for the execution of test procedures. The test procedures are included in the test

execution schedule in their context and in the order in which they are to be executed.

test execution technique: The method used to perform the actual test execution, either manually or automated.

test execution tool: A type of test tool that is able to execute other software using an automated test script, e.g.

capture/playback. [Fewster and Graham]

test harness: A test environment comprised of stubs and drivers needed to conduct a test.

test infrastructure: The organizational artifacts needed to perform testing, consisting of test environments, test tools,

office environment and procedures.

test item: The individual element to be tested. There usually is one test object and many test items. See also test

object.

test level: A group of test activities that are organized and managed together. A test level is linked to the

responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance

test. [After TMap]

test log: A chronological record of relevant details about the execution of tests. [IEEE 829]

test logging: The process of recording information about tests executed into a test log.

test manager: The person responsible for testing and evaluating a test object. The individual who directs, controls,

administers, plans and regulates the evaluation of a test object.

test management: The planning, estimating, monitoring and control of test activities, typically carried out by a test

manager.

Test Maturity Model (TMM): A five level staged framework for test process improvement, related to the Capability

Maturity Model (CMM) that describes the key elements of an effective test process.

Test Process Improvement (TPI): A continuous framework for test process improvement that describes the key

elements of an effective test process, especially targeted at system testing and acceptance testing.

test object: The component or system to be tested. See also test item.

test objective: A reason or purpose for designing and executing a test.

test oracle: A source to determine expected results to compare with the actual result of the software under test. An

oracle may be the existing system (for a benchmark), a user manual, or an individual’s specialized knowledge, but

should not be the code. [After Adrion]

test performance indicator: A metric, in general high level, indicating to what extent a certain target value or

criterion is met. Often related to test process improvement objectives, e.g. Defect Detection Percentage (DDP).

test phase: A distinct set of test activities collected into a manageable phase of a project, e.g. the execution activities

of a test level. [After Gerrard]


test plan: A document describing the scope, approach, resources and schedule of intended test activities. It identifies

amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester

independence, the test environment, the test design techniques and test measurement techniques to be used, and the

rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.

[After IEEE 829]

test planning: The activity of establishing or updating a test plan.

test policy: A high level document describing the principles, approach and major objectives of the organization

regarding testing.

test point analysis (TPA): A formula based test estimation method based on function point analysis. [TMap]

test procedure: See test procedure specification.

test procedure specification: A document specifying a sequence of actions for the execution of a test. Also known as

test script or manual test script. [After IEEE 829]

test process: The fundamental test process comprises planning, specification, execution, recording and checking for

completion. [BS 7925/2]

test repeatability: An attribute of a test indicating whether the same results are produced each time the test is

executed.

test run: Execution of a test on a specific version of the test object.

test script: Commonly used to refer to a test procedure specification, especially an automated one.

test specification: A document that consists of a test design specification, test case specification and/or test

procedure specification.

test strategy: A high-level document defining the test levels to be performed and the testing within those levels for a

programme (one or more projects).

test suite: A set of several test cases for a component or system under test, where the post condition of one test is

often used as the precondition for the next one.

test summary report: A document summarizing testing activities and results. It also contains an evaluation of the

corresponding test items against exit criteria. [After IEEE 829]

test target: A set of exit criteria.

test tool: A software product that supports one or more test activities, such as planning and control, specification,

building initial files and data, test execution and test analysis. [TMap] See also CAST.

test type: A group of test activities aimed at testing a component or system regarding one or more interrelated

quality attributes. A test type is focused on a specific test objective, e.g. reliability test, usability test, regression test

etc., and may take place on one or more test levels or test phases. [After TMap]

testability: The capability of the software product to enable modified software to be tested. [ISO 9126] See also

maintainability.

testability review: A detailed check of the test basis to determine whether the test basis is at an adequate quality

level to act as an input document for the test process. [After TMap]


testable requirements: The degree to which a requirement is stated in terms that permit establishment of test designs

(and subsequently test cases) and execution of tests to determine whether the requirements have been met. [After

IEEE 610]

tester: A technically skilled professional who is involved in the testing of a component or system.

testing: The process consisting of all life cycle activities, both static and dynamic, concerned with planning,

preparation and evaluation of software products and related work products to determine that they satisfy specified

requirements, to demonstrate that they are fit for purpose and to detect defects.

testware: Artifacts produced during the test process required to plan, design, and execute tests, such as

documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and

any additional software or utilities used in testing. [After Fewster and Graham]

thread testing: A version of component integration testing where the progressive integration of components follows

the implementation of subsets of the requirements, as opposed to the integration of components by levels of a

hierarchy.

top-down testing: An incremental approach to integration testing where the component at the top of the component

hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to

test lower level components. The process is repeated until the lowest level components have been tested.

traceability: The ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability.

 

U

understandability: The capability of the software product to enable the user to understand whether the software is

suitable, and how it can be used for particular tasks and conditions of use. [ISO 9126] See also usability.

unreachable code: Code that cannot be reached and therefore is impossible to execute.

usability: The capability of the software to be understood, learned, used and attractive to the user when used under

specified conditions. [ISO 9126]

usability testing: Testing to determine the extent to which the software product is understood, easy to learn, easy to

operate and attractive to the users under specified conditions. [After ISO 9126]

use case testing: A black box test design technique in which test cases are designed to execute user scenarios.

user test: A test whereby real-life users are involved to evaluate the usability of a component or system.

 

V

V-model: A framework to describe the software development life cycle activities from requirements specification to

maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software

development life cycle.


validation: Confirmation by examination and through provision of objective evidence that the requirements for a

specific intended use or application have been fulfilled. [ISO 9000]

variable: An element of storage in a computer that is accessible by a software program by referring to it by a name.

verification: Confirmation by examination and through the provision of objective evidence that specified

requirements have been fulfilled. [ISO 9000]

vertical traceability: The tracing of requirements through the layers of development documentation to components.

volume testing: Testing where the system is subjected to large volumes of data. See also resource-utilization testing.

 

W

walkthrough: A step-by-step presentation by the author of a document in order to gather information and to establish

a common understanding of its content. [Freedman and Weinberg, IEEE 1028]

white box test design technique: Documented procedure to derive and select test cases based on an analysis of the

internal structure of a component or system.

white box testing: Testing based on an analysis of the internal structure of the component or system.

Wide Band Delphi: An expert based test estimation technique that aims at making an accurate estimation using the

collective wisdom of the team members.

 

100 Manual Testing Interview FAQ's

Posted: 30/05/2009 | Author: Ganesh Raman

Q1. What is verification?

A: Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications. This can be done with checklists, issues lists, walkthroughs and inspection meetings.

Q2. What is validation?

A: Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verifications are completed.

Q3. What is a walkthrough?

A: A walkthrough is an informal meeting for evaluation or informational purposes. A walkthrough is also a process at an abstract level: the process of inspecting software code by following paths through the code (as determined by input conditions and choices made along the way). The purpose of code


walkthroughs is to ensure the code fits its purpose. Walkthroughs also offer opportunities to assess an individual's or team's competency.

Q4. What is an inspection?

A: An inspection is a formal meeting, more formalized than a walkthrough, and typically consists of 3-10 people including a moderator, a reader (the author of whatever is being reviewed) and a recorder (to make notes in the document). The subject of the inspection is typically a document, such as a requirements document or a test plan. The purpose of an inspection is to find problems and see what is missing, not to fix anything. The result of the meeting should be documented in a written report. Attendees should prepare for this type of meeting by reading through the document before the meeting starts; most problems are found during this preparation. Preparation for inspections is difficult, but it is one of the most cost-effective methods of ensuring quality, since bug prevention is more cost-effective than bug detection.

Q5. What is quality?

A: Quality software is software that is reasonably bug-free, delivered on time and within budget, meets requirements and expectations, and is maintainable. However, quality is a subjective term: it depends on who the customer is and their overall influence in the scheme of things. Customers of a software development project include end-users, customer acceptance test engineers, customer contract officers, customer management, the development organization's management, test engineers, testers, salespeople, software engineers, stockholders and accountants. Each type of customer will have his or her own slant on quality: the accounting department might define quality in terms of profits, while an end-user might define quality as user-friendly and bug-free.

Q6. What is good code?

A: Good code is code that works, is free of bugs, and is readable and maintainable. Organizations usually have coding standards all developers should adhere to, but every programmer and software engineer has different ideas about what is best and what are too many or too few rules. We need to keep in mind that excessive use of rules can stifle both productivity and creativity. Peer reviews and code analysis tools can be used to check for problems and enforce standards.

Q7. What is good design?

A: Design could mean many things, but often refers to functional design or internal design. Good functional design is indicated by software whose functionality can be traced back to customer and end-user requirements. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable and maintainable; is robust with sufficient error handling and status logging capability; and works correctly when implemented.

Q8. What is software life cycle?

A: The software life cycle begins when a software product is first conceived and ends when it is no longer in use. It includes phases like initial concept, requirements analysis, functional design, internal design,


documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, re-testing and phase-out.

Q9. Why are there so many software bugs?

A: Generally speaking, there are bugs in software because of unclear requirements, software complexity, programming errors, changes in requirements, errors made in bug tracking, time pressure, poorly documented code and/or bugs in the tools used in software development.

Software requirements are unclear when there is miscommunication as to what the software should or shouldn't do.

Software complexity: all of the following contribute to the exponential growth in software and system complexity: Windows interfaces, client-server and distributed applications, data communications, enormous relational databases and the sheer size of applications.

Programming errors occur because programmers and software engineers, like everyone else, can make mistakes.

As to changing requirements: in some fast-changing business environments, continuously modified requirements are a fact of life. Sometimes customers do not understand the effects of changes, or understand them but request them anyway. The changes require redesign of the software and rescheduling of resources, some of the work already completed has to be redone or discarded, and hardware requirements can be affected, too.

Bug tracking can introduce errors because the complexity of keeping track of changes can itself lead to mistakes.

Time pressure causes problems because scheduling of software projects is not easy and often requires a lot of guesswork; when deadlines loom and the crunch comes, mistakes will be made.

Poorly documented code is tough to maintain and tough to modify, and the result is bugs. Sometimes there is no incentive for programmers and software engineers to write clearly documented, understandable code: developers get kudos for quickly turning out code, or they feel they cannot have job security if everyone can understand the code they write, or they believe that if the code was hard to write, it should be hard to read.

Software development tools, including visual tools, class libraries, compilers and scripting tools, can introduce their own bugs. Other times the tools are poorly documented, which can create additional bugs.

Q10. How do you introduce a new software QA process?

A: It depends on the size of the organization and the risks involved. For large organizations with high-risk projects, a serious management buy-in is required and a formalized QA process is necessary. For medium size organizations with lower risk projects, management and organizational buy-in and a slower, step-by-step process is required. Generally speaking, QA processes should be balanced with


productivity, in order to keep any bureaucracy from getting out of hand. For smaller groups or projects, an ad-hoc process is more appropriate. A lot depends on team leads and managers; feedback to developers and good communication are essential among customers, managers, developers, test engineers and testers. Regardless of the size of the company, the greatest value for effort is in managing requirement processes, where the goal is requirements that are clear, complete and testable.

Q11. Give me five common problems that occur during software development.

A: Poorly written requirements, unrealistic schedules, inadequate testing, adding new features after development is underway, and poor communication.

Requirements are poorly written when they are unclear, incomplete, too general, or not testable; therefore there will be problems.

The schedule is unrealistic if too much work is crammed into too little time.

Software testing is inadequate if no one knows whether or not the software is any good until customers complain or the system crashes.

It's extremely common that new features are added after development is underway.

Miscommunication either means the developers don't know what is needed, or customers have unrealistic expectations; either way, problems are guaranteed.

Q12. Do automated testing tools make testing easier?

A: Yes and no. For larger projects, or ongoing long-term projects, they can be valuable. But for small projects, the time needed to learn and implement them is usually not worthwhile. A common type of automated tool is the record/playback type. For example, a test engineer clicks through all combinations of menu choices, dialog box choices, buttons, etc. in a GUI and has an automated testing tool record and log the results. The recording is typically in the form of text, based on a scripting language that the testing tool can interpret. If a change is made (e.g. new buttons are added, or some underlying code in the application is changed), the application is then re-tested by just playing back the recorded actions and comparing them to the logged results in order to check the effects of the change. One problem with such tools is that if there are continual changes to the product being tested, the recordings have to be changed so often that it becomes a very time-consuming task to continuously update the scripts. Another problem is that the interpretation of the results (screens, data, logs, etc.) can itself be time-consuming.

Q13. Give me five solutions to problems that occur during software development.

A: Solid requirements, realistic schedules, adequate testing, firm requirements and good communication.

Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and testable. All players should agree to the requirements. Use prototypes to help nail down requirements.


Have schedules that are realistic. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes and documentation. Personnel should be able to complete the project without burning out.

Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan for sufficient time for both testing and bug fixing.

Avoid new features. Stick to initial requirements as much as possible. Be prepared to defend the design against changes and additions once development has begun, and be prepared to explain the consequences. If changes are necessary, ensure they're adequately reflected in related schedule changes. Use prototypes early on so customers' expectations are clarified and customers can see what to expect; this will minimize changes later on.

Communicate. Require walkthroughs and inspections when appropriate; make extensive use of e-mail, networked bug-tracking tools, and change management tools. Ensure documentation is available and up-to-date. Use documentation that is electronic, not paper. Promote teamwork and cooperation.

Q14. What makes a good test engineer?

A: Rob Davis is a good test engineer because he:

• Has a "test to break" attitude,
• Takes the point of view of the customer,
• Has a strong desire for quality,
• Has an attention to detail,
• Is tactful and diplomatic,
• Has good communication skills, both oral and written, and
• Has previous software development experience.

Good test engineers have a "test to break" attitude. We, good test engineers, take the point of view of the customer, and have a strong desire for quality and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, as is an ability to communicate with both technical and non-technical people. Previous software development experience is also helpful: it provides a deeper understanding of the software development process, gives the test engineer an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming.

Q15. What makes a good QA engineer?

A: The same qualities a good test engineer has are useful for a QA engineer. Additionally, good QA engineers understand the entire software development process and how it fits into the business approach and the goals of the organization. Communication skills and the ability to understand various sides of issues are also important.

https://softwaretestinginterviewfaqs.wordpress.com/category/manual-testing-basics/

Page 201: Testing overview

Q16. What makes a good resume?

A: On the subject of resumes, there seems to be an unending discussion of whether you should or shouldn't have a one-page resume. The following are some of the comments I have personally heard: "Well, Joe Blow (car salesman) said I should have a one-page resume." "Well, I read a book and it said you should have a one-page resume." "I can't really go into what I really did because if I did, it'd take more than one page on my resume." "Gosh, I wish I could put my job at IBM on my resume, but if I did it'd make my resume more than one page, and I was told to never make the resume more than one page long." "I'm confused, should my resume be more than one page? I feel like it should, but I don't want to break the rules." Or, here's another comment: "People just don't read resumes that are longer than one page."

So what's the answer? There is no scientific answer about whether a one-page resume is right or wrong. It all depends on who you are and how much experience you have. The first thing to look at here is the purpose of a resume: to get you an interview. If the resume is getting you interviews, then it is a good resume. If it isn't getting you interviews, then you should change it.

The biggest mistake you can make on your resume is to make it hard to read. Why? One, scanners don't like odd resumes. Small fonts can make your resume harder to read; some candidates use a 7-point font so they can get the resume onto one page. Big mistake. Two, resume readers do not like eye strain either; if the resume is mechanically challenging, they just throw it aside for one that is easier on the eyes. Three, there are lots of resumes out there these days, and that is also part of the problem. Four, in light of the current scanning scenario, more than one page is not a deterrent, because many will scan your resume into their database; once the resume is in there and searchable, you have accomplished one of the goals of resume distribution. Five, resume readers don't like to guess, and most won't call you to clarify what is on your resume.

Generally speaking, your resume should tell your story. If you're a college graduate looking for your first job, a one-page resume is just fine. If you have a longer story, the resume needs to be longer. Put your experience on the resume so resume readers can tell when and for whom you did what. Short resumes, for people long on experience, are not appropriate; the real audience for these is people with short attention spans. I assure you that when your resume gets into the right hands, it will be read thoroughly.

Q17. What makes a good QA/Test Manager?

A: QA/Test Managers are familiar with the software development process; able to maintain the enthusiasm of their team and promote a positive atmosphere; able to promote teamwork to increase productivity; able to promote cooperation between software and test/QA engineers; have the people skills needed to promote improvements in QA processes; have the ability to withstand pressures and say *no* to other managers when quality is insufficient or QA processes are not being adhered to; able to

https://softwaretestinginterviewfaqs.wordpress.com/category/manual-testing-basics/

Page 202: Testing overview

communicate with technical and non-technical people; and able to run meetings and keep them focused.

Q18. What is the role of documentation in QA?

A: Documentation plays a critical role in QA. QA practices should be documented so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports and user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining documents and determining which document will have a particular piece of information. Use documentation change management, if possible.

Q19. What about requirements?

A: Requirement specifications are important: one of the most reliable ways to guarantee problems in a complex software project is to have poorly documented requirement specifications. Requirements are the details describing an application's externally perceived functionality and properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable and testable. A non-testable requirement would be, for example, "user-friendly", which is too subjective. A testable requirement would be something such as, "the product shall allow the user to enter their previously-assigned password to access the application". Care should be taken to involve all of a project's significant customers in the requirements process. Customers could be in-house or external and could include end-users, customer acceptance test engineers, testers, customer contract officers, customer management, future software maintenance engineers, salespeople and anyone who could later derail the project; if someone's expectations aren't met, they should be included as a customer, if possible. In some organizations, requirements may end up in high-level project plans, functional specification documents, design documents, or other documents at various levels of detail. No matter what they are called, some type of documentation with detailed requirements will be needed by test engineers in order to properly plan and execute tests. Without such documentation there will be no clear-cut way to determine whether a software application is performing correctly.

Q20. What is a test plan?

A: A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the why and how of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will be able to read it.

Q21. What is a test case?

A: A test case is a document that describes an input, action, or event and its expected result, in order to determine if a feature of an application is working correctly. A test case should contain particulars such as a…


• Test case identifier;
• Test case name;
• Objective;
• Test conditions/setup;
• Input data requirements/steps; and
• Expected results.

Please note, the process of developing test cases can help find problems in the requirements or design of an application, since it requires you to completely think through the operation of the application. For this reason, it is useful to prepare test cases early in the development cycle, if possible.

Q22. What should be done after a bug is found?

Q22. What should be done after a bug is found?

A: When a bug is found, it needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested. Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing to verify the fixes didn't create other problems elsewhere. If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, will give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it.

Q23. What is configuration management?

A: Configuration management (CM) covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches, changes made to them and who makes the changes. Rob Davis has had experience with a full range of CM tools and concepts, and can easily adapt to your software tool and process needs.

Q24. What if the software is so buggy it can't be tested at all?

A: In this situation the best bet is to have test engineers go through the process of reporting whatever bugs or problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules and indicates deeper problems in the software development process, such as insufficient unit testing, insufficient integration testing, poor design, or improper build or release procedures, managers should be notified and provided with some documentation as evidence of the problem.

Q25. How do you know when to stop testing?

A: This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:

· Deadlines, e.g. release deadlines or testing deadlines;
· Test cases completed with a certain percentage passed;
· Test budget has been depleted;
· Coverage of code, functionality, or requirements reaches a specified point;
· Bug rate falls below a certain level; or
· Beta or alpha testing period ends.

Q26. What if there isn't enough time for thorough testing?

A: Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate for most software development projects. Use risk analysis to determine where testing should be focused. This requires judgment skills, common sense and experience. The checklist should include answers to the following questions (a minimal risk-scoring sketch follows the list):

· Which functionality is most important to the project's intended purpose?
· Which functionality is most visible to the user?
· Which functionality has the largest safety impact?
· Which functionality has the largest financial impact on users?
· Which aspects of the application are most important to the customer?
· Which aspects of the application can be tested early in the development cycle?
· Which parts of the code are most complex and thus most subject to errors?
· Which parts of the application were developed in rush or panic mode?
· Which aspects of similar/related previous projects caused problems?
· Which aspects of similar/related previous projects had large maintenance expenses?
· Which parts of the requirements and design are unclear or poorly thought out?
· What do the developers think are the highest-risk aspects of the application?
· What kinds of problems would cause the worst publicity?
· What kinds of problems would cause the most customer service complaints?
· What kinds of tests could easily cover multiple functionalities?
· Which tests will have the best high-risk-coverage to time-required ratio?
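One common way to turn such a checklist into a testing priority, shown here as a rough sketch, is a likelihood-times-impact score per feature; the features and ratings below are invented purely for illustration:

```python
# Rough risk-ranking sketch: score = likelihood * impact, each rated 1-5.
# Feature names and ratings are invented for illustration only.
features = {
    "payment reversal": (4, 5),   # complex code, large financial impact
    "customer search":  (2, 3),   # highly visible, but simple
    "report footer":    (1, 1),   # cosmetic
}

ranked = sorted(features.items(),
                key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (likelihood, impact) in ranked:
    print(f"{name}: risk score {likelihood * impact}")
```

Testing effort is then allocated from the top of the ranking down, which is exactly the judgment call the checklist is meant to inform.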

Q27. What if the project isn't big enough to justify extensive testing?

A: Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed, and the considerations listed under "What if there isn't enough time for thorough testing?" apply. The test engineer should then do "ad hoc" testing, or write up a limited test plan based on the risk analysis.

Q28. What can be done if requirements are changing continuously?

A: Work with management early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance. It is helpful if the application's initial design allows for some adaptability, so that later changes do not require redoing the application from scratch. Additionally, try to:

· Ensure the code is well commented and well documented; this makes changes easier for the developers.
· Use rapid prototyping whenever possible; this will help customers feel sure of their requirements and minimize changes.
· In the project's initial schedule, allow extra time commensurate with probable changes.
· Move new requirements to a 'Phase 2' version of an application and use the original requirements for the 'Phase 1' version.
· Negotiate to allow only easily implemented new requirements into the project; move more difficult new requirements into future versions of the application.
· Ensure customers and management understand the scheduling impacts, inherent risks and costs of significant requirements changes. Then let management or the customers decide if the changes are warranted; after all, that's their job.
· Balance the effort put into setting up automated testing with the expected effort required to redo the tests to deal with changes.
· Design some flexibility into automated test scripts.
· Focus initial automated testing on application aspects that are most likely to remain unchanged.
· Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing needs.
· Design some flexibility into test cases; this is not easily done; the best bet is to minimize the detail in the test cases, or set up only higher-level generic-type test plans.
· Focus less on detailed test plans and test cases and more on ad hoc testing, with an understanding of the added risk this entails.

Q29. What if the application has functionality that wasn't in the requirements?

A: It may take serious effort to determine if an application has significant unexpected or hidden functionality, and such functionality would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer. If it is not removed, design information will be needed to determine added testing needs or regression-testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects areas such as minor improvements in the user interface, it may not be a significant risk.

Q30. How can software QA processes be implemented without stifling productivity?


A: Implement QA processes slowly over time. Use consensus to reach agreement on processes, and adjust and experiment as an organization grows and matures. Productivity will be improved instead of stifled. Problem prevention will lessen the need for problem detection. Panics and burnout will decrease, and there will be improved focus and less wasted effort. At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize time required in meetings and promote training as part of the QA process. However, no one, especially talented technical types, likes bureaucracy, and in the short run things may slow down a bit. A typical scenario would be that more days of planning and development will be needed, but less time will be required for late-night bug fixing and calming of irate customers.

Q34. What is software quality assurance?

A: Software Quality Assurance, when Rob Davis does it, is oriented to *prevention*. It involves the entire software development process: monitoring and improving the process, making sure any agreed-upon standards and procedures are followed and ensuring problems are found and dealt with. Software testing, when performed by Rob Davis, is oriented to *detection*. Testing involves the operation of a system or application under controlled conditions and evaluating the results. Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual. Also common are project teams that include a mix of test engineers, testers and developers who work closely together, with overall QA processes monitored by project managers. It depends on what best fits your organization's size and business structure. Rob Davis can provide QA and/or software QA. This document details some aspects of how he can provide software testing/QA services.

Q35. What is quality assurance?

A: Quality assurance ensures all parties concerned with the project adhere to the process and procedures, standards and templates and test readiness reviews. Rob Davis' QA service depends on the customers and projects. A lot will depend on team leads or managers, feedback to developers and communications among customers, managers, developers, test engineers and testers.

Q36. Process and procedures – why follow them?

A: Detailed and well-written processes and procedures ensure the correct steps are being executed to facilitate the successful completion of a task. They also ensure a process is repeatable. Once Rob Davis has learned and reviewed a customer's business processes and procedures, he will follow them. He will also recommend improvements and/or additions.

Q37. Standards and templates – what is supposed to be in a document?

A: All documents should be written to a certain standard and template. Standards and templates maintain document uniformity. They also help readers learn where information is located, making it easier for a user to find what they want. Lastly, with standards and templates, information will not be accidentally omitted from a document. Once Rob Davis has learned and reviewed your standards and templates, he will use them. He will also recommend improvements and/or additions.

Q38. What are the different levels of testing?

A: Rob Davis has expertise in testing at all of the testing levels listed below. At each test level, he documents the results. Each level of testing is considered either black box or white box testing.

Q39. What is black box testing?

A: Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box tests are based on requirements and functionality.

Q40. What is white box testing?

A: White box testing is based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths and conditions.

Q41. What is unit testing?

A: Unit testing is the first level of dynamic testing and is first the responsibility of developers and then that of the test engineers. Unit testing is complete when the expected test results are met or differences are explainable/acceptable.

Q42. What is parallel/audit testing?

A: Parallel/audit testing is testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations correctly.

Q43. What is functional testing?

A: Functional testing is a black-box type of testing geared to the functional requirements of an application. Test engineers *should* perform functional testing.

Q44. What is usability testing?

A: Usability testing is testing for 'user-friendliness'. Clearly this is subjective and depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions and other techniques can be used. Programmers and developers are usually not appropriate as usability testers.

Q45. What is incremental integration testing?

A: Incremental integration testing is continuous testing of an application as new functionality is added. This may require that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed. This type of testing may be performed by programmers, software engineers, or test engineers.

Q46. What is integration testing?

A: Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete when actual results and expected results are either in line or differences are explainable/acceptable based on client input.

Q47. What is system testing?

A: System testing is black box testing performed by the test team. At the start of system testing, the complete system is configured in a controlled environment. The purpose of system testing is to validate an application's accuracy and completeness in performing the functions as designed. System testing exercises real-life scenarios in a simulated real-life test environment and tests all functions of the system that are required in real life. System testing is deemed complete when actual results and expected results are either in line or differences are explainable or acceptable, based on client input. System testing starts upon completion of integration testing. Before system testing, all unit and integration test results are reviewed by software QA to ensure all problems have been resolved. For a higher level of testing it is important to understand unresolved problems that originate at the unit and integration test levels.

Q48. What is end-to-end testing?

A: Similar to system testing, end-to-end testing is the *macro* end of the test scale: testing a complete application in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems.

Q49. What is regression testing?

A: The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not "undone" any previously working code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for before testing proceeds to the next level.
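A minimal sketch of the baseline-comparison idea, assuming results are stored as simple key/value pairs in a JSON file; the file format and names are illustrative, not a prescribed mechanism:

```python
import json, os, tempfile

def compare_to_baseline(baseline_path, current):
    """Return the keys whose current result differs from the baseline."""
    with open(baseline_path) as f:
        baseline = json.load(f)
    return [k for k in baseline if current.get(k) != baseline[k]]

# Demo with a throwaway baseline file; a real baseline would live under
# version control alongside the test scripts.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"invoice_total": 110.0, "tax": 10.0}, f)

print(compare_to_baseline(f.name, {"invoice_total": 110.0, "tax": 12.0}))
# -> ['tax']: a discrepancy to highlight and account for
os.remove(f.name)
```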

Q50. What is sanity testing?

A: Sanity testing is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.

Q51. What is performance testing?

A: Although performance testing is described as a part of system testing, it can be regarded as a distinct level of testing. Performance testing verifies loads, volumes and response times, as defined by requirements.


Q52. What is load testing?

A: Load testing is testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time will degrade or fail.
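A minimal load-testing sketch using only the Python standard library: it steps up the number of concurrent simulated users against a placeholder URL and reports the worst response time at each step. The URL and load steps are assumptions for illustration; real load tests would also track throughput and error rates:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8000/"   # placeholder; point at a test server

def fetch(_):
    start = time.perf_counter()
    with urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

# Step the load up and watch for the point where response time degrades.
for users in (1, 5, 25, 100):
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(fetch, range(users)))
    print(f"{users:>3} concurrent users: worst response {max(times):.3f}s")
```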

Q53. What is installation testing?

A: Installation testing is testing full, partial, upgrade, or install/uninstall processes. The installation test for a release is conducted with the objective of demonstrating production readiness. This test includes the inventory of configuration items, performed by the application's system administrator, the evaluation of data readiness, and dynamic tests focused on basic system functionality. When necessary, a sanity test is performed following installation testing.

Q54. What is security/penetration testing?

A: Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.

Q55. What is recovery/error testing?

A: Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Q56. What is compatibility testing?

A: Compatibility testing is testing how well software performs in a particular hardware, software, operating system, or network environment.

Q57. What is comparison testing?

A: Comparison testing is testing that compares software weaknesses and strengths to those of competitors' products.

Q58. What is acceptance testing?

A: Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to verify the system's functionality and usability prior to the system being released to production. The acceptance test is the responsibility of the client/customer or project manager; however, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria.

Q59. What is alpha testing?

A: Alpha testing is testing of an application when development is nearing completion. Minor design changes can still be made as a result of alpha testing. Alpha testing is typically performed by a group that is independent of the design team, but still within the company, e.g. in-house software test engineers or software QA engineers.

Q60. What is beta testing?


A: Beta testing is testing an application when development and testing are essentially completed and final bugs and problems need to be found before the final release. Beta testing is typically performed by end-users or others, not by programmers, software engineers, or test engineers.

Q61. What testing roles are standard on most testing projects?

A: Depending on the organization, the following roles are more or less standard on most testing projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA Manager, System Administrator, Database Administrator, Technical Analyst, Test Build Manager and Test Configuration Manager. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of Technical Analyst, Test Build Manager and Test Configuration Manager.

Q62. What is a Test/QA Team Lead?

A: The Test/QA Team Lead coordinates the testing activity, communicates testing status to management and manages the test team.

Q63. What is a Test Engineer?

A: Test Engineers are engineers who specialize in testing. We, test engineers, create test cases and procedures, write scripts and generate data. We execute test procedures and scripts, analyze standards of measurement, and evaluate results of system/integration/regression testing. We also:

· Speed up the work of the development staff;
· Reduce your organization's risk of legal liability;
· Give you the evidence that your software is correct and operates properly;
· Improve problem tracking and reporting;
· Maximize the value of your software;
· Maximize the value of the devices that use it;
· Assure the successful launch of your product by discovering bugs and design flaws before users get discouraged, before shareholders lose their cool and before employees get bogged down;
· Help the work of your development staff, so the development team can devote its time to building up your product;
· Promote continual improvement;
· Provide documentation required by the FDA, FAA, other regulatory agencies and your customers;
· Save money by discovering defects early in the design process, before failures occur in production or in the field;
· Save the reputation of your company by discovering bugs and design flaws before they damage that reputation.

Q64. What is a Test Build Manager?

A: Test Build Managers deliver current software versions to the test environment, install the application's software and apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Test Build Manager.

Q65. What is a System Administrator?

A: Test Build Managers, System Administrators and Database Administrators deliver current software versions to the test environment, install the application's software and apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a System Administrator.

Q66. What is a Database Administrator?

A: Test Build Managers, System Administrators and Database Administrators deliver current software versions to the test environment, install the application's software and apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Database Administrator.

Q67. What is a Technical Analyst?

A: Technical Analysts perform test assessments and validate system/functional test requirements. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Technical Analyst.

Q68. What is a Test Configuration Manager?

A: Test Configuration Managers maintain test environments, scripts, software and test data. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Test Configuration Manager.

Q69. What is a test schedule?

A: The test schedule identifies all tasks required for a successful testing effort: a schedule of all test activities and the resource requirements.

Q70. What is software testing methodology?

A: One software testing methodology is the use of a three-step process of:

1. Creating a test strategy;
2. Creating a test plan/design; and
3. Executing tests.

This methodology can be used and molded to your organization's needs. Rob Davis believes that using this methodology is important in the development and ongoing maintenance of his customers' applications.

Q71. What is the general testing process?


A: The general testing process is the creation of a test strategy (which sometimes includes the creation of test cases), creation of a test plan/design (which usually includes test cases and test procedures) and the execution of tests.

Q72. How do you create a test strategy?

A: The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.

Inputs for this process:

· A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
· A description of the roles and responsibilities of the resources required for the test, and schedule constraints. This information comes from man-hours and schedules.
· Testing methodology. This is based on known standards.
· Functional and technical requirements of the application. This information comes from requirements, change requests, and technical and functional design documents.
· Requirements that the system cannot provide, e.g. system limitations.

Outputs for this process:

· An approved and signed-off test strategy document and test plan, including test cases.
· Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.

Q73. How do you create a test plan/design?

A: Test scenarios and/or cases are prepared by reviewing functional requirements of the release and preparing logical groups of functions that can be further broken into test procedures. Test procedures define test conditions, data to be used for testing and expected results, including database updates, file outputs and report results. Generally speaking:

· Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.
· Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.
· It is the test team that, with the assistance of developers and clients, develops test cases and scenarios for integration and system testing.
· Test scenarios are executed through the use of test procedures or scripts.
· Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.
· Test procedures or scripts include the specific data that will be used for testing the process or transaction.
· Test procedures or scripts may cover multiple test scenarios.
· Test scripts are mapped back to the requirements, and traceability matrices are used to ensure each test is within scope (a minimal sketch follows this answer).
· Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment.
· Some output data is also baselined for future comparison. Baselined data is used to support future application maintenance via regression testing.
· A pretest meeting is held to assess the readiness of the application and of the environment and data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.

Inputs for this process:

· Approved test strategy document.
· Test tools, or automated test tools, if applicable.
· Previously developed scripts, if applicable.
· Test documentation problems uncovered as a result of testing.
· A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. software design document, source code and software complexity data.

Outputs for this process:

· Approved documents of test scenarios, test cases, test conditions and test data.
· Reports of software design issues, given to software developers for correction.
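A traceability matrix can be as simple as a mapping from each requirement to the test scripts that cover it; the requirement and script IDs below are invented for illustration:

```python
# Toy traceability matrix: requirement IDs and script names are invented.
matrix = {
    "REQ-01 reverse payment":  ["TS-10", "TS-11"],
    "REQ-02 customer inquiry": ["TS-12"],
    "REQ-03 monthly report":   [],        # gap: no script covers it yet
}

for requirement, scripts in matrix.items():
    status = ", ".join(scripts) if scripts else "NOT COVERED"
    print(f"{requirement}: {status}")
```

Scanning for empty entries exposes requirements with no test coverage; conversely, a script that appears under no requirement is likely out of scope.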

Q74. How do you execute tests?

A: Execution of tests is completed by following the test documents in a methodical manner. As each test procedure is performed, an entry is recorded in a test execution log to note the execution of the procedure and whether or not the test procedure uncovered any defects. Checkpoint meetings are held throughout the execution phase, daily if required, to address and discuss testing issues, status and activities.

The output from the execution of test procedures is known as test results. Test results are evaluated by test engineers to determine whether the expected results have been obtained. All discrepancies/anomalies are logged, discussed with the software team lead, hardware test lead, programmers and software engineers, and documented for further investigation and resolution. Every company has a different process for logging and reporting bugs/defects uncovered during testing. Pass/fail criteria are used to determine the severity of a problem, and results are recorded in a test summary report. The severity of a problem found during system testing is defined in accordance with the customer's risk assessment and recorded in their selected tracking tool.

Proposed fixes are delivered to the testing environment based on the severity of the problem. Fixes are regression tested, and flawless fixes are migrated to a new baseline. Following completion of the test, members of the test team prepare a summary report. The summary report is reviewed by the Project Manager, Software QA Manager and/or Test Team Lead. After a particular level of testing has been certified, it is the responsibility of the Configuration Manager to coordinate the migration of the release software components to the next test level, as documented in the Configuration Management Plan. The software is only migrated to the production environment after the Project Manager's formal acceptance. The test team reviews test document problems identified during testing and updates documents where appropriate.

Inputs for this process:

· Approved test documents, e.g. test plan, test cases, test procedures.
· Test tools, including automated test tools, if applicable.
· Developed scripts.
· Changes to the design, i.e. change request documents.
· Test data.
· Availability of the test team and project team.
· General and detailed design documents, i.e. requirements document, software design document.
· Software that has been migrated to the test environment, i.e. unit-tested code, via the Configuration/Build Manager.
· Test readiness document.
· Document updates.

Outputs for this process:

· Log and summary of the test results. Usually this is part of the test report. This needs to be approved and signed off with revised testing deliverables.
· Changes to the code, also known as test fixes.
· Test document problems uncovered as a result of testing. Examples are requirements document and design document problems.
· Reports on software design issues, given to software developers for correction. Examples are bug reports on code issues.
· Formal record of test incidents, usually part of problem tracking.
· Baselined package, also known as tested source and object code, ready for migration to the next level.

Q75. What testing approaches can you tell me about?

A: Each of the following represents a different testing approach:

· Black box testing
· White box testing
· Unit testing
· Incremental testing
· Integration testing
· Functional testing
· System testing
· End-to-end testing
· Sanity testing
· Regression testing
· Acceptance testing
· Load testing
· Performance testing
· Usability testing
· Install/uninstall testing
· Recovery testing
· Security testing
· Compatibility testing
· Exploratory testing and ad hoc testing
· User acceptance testing
· Comparison testing
· Alpha testing
· Beta testing
· Mutation testing

Q76. What is stress testing?

A: Stress testing is testing that investigates the behavior of software (and hardware) under extraordinary operating conditions. It tests something beyond its normal operational capacity in order to observe any negative results, and thereby tests the stability of a given system or entity. For example, when a web server is stress tested, testing aims to find out how many users can be online at the same time without crashing the server; the load is generated using scripts, bots and various denial-of-service tools.

Q77. What is load testing?

A: Load testing simulates the expected usage of a software program by simulating multiple users that access the program's services concurrently. Load testing is most useful and most relevant for multi-user systems, including client/server models and web servers. For example, the load placed on the system is increased above normal usage patterns in order to test the system's response at peak loads.


Q79. What is the difference between performance testing and load testing?

A: Load testing is a blanket term that is used in many different ways across the professional software testing community. The term is often used synonymously with stress testing, performance testing, reliability testing and volume testing. Load testing generally stops short of stress testing; during stress testing the load is so great that errors are the expected results, though there is a gray area between stress testing and load testing.

Q80. What is the difference between reliability testing and load testing?

A: As noted under Q79, load testing is a blanket term that is often used synonymously with reliability testing; the distinction is not drawn consistently across the software testing community.

Q81. What is the difference between volume testing and load testing?

A: Likewise, as noted under Q79, load testing is often used synonymously with volume testing, and in practice the distinction is blurred.

Q82. What is incremental testing?

A: Incremental testing is partial testing of an incomplete product. The goal of incremental testing is to provide early feedback to software developers.

Q83. What is software testing?

A: Software testing is a process that identifies the correctness, completeness and quality of software. Strictly speaking, testing cannot establish the correctness of software: it can find defects, but it cannot prove there are no defects.

Q84. What is automated testing?

A: Automated testing is a formally specified and controlled approach to testing in which tests are executed by software tools rather than manually.

Q85. What is alpha testing?

A: Alpha testing is final testing before the software is released to the general public. In the first phase of alpha testing, the software is tested by in-house developers, using either debugger software or hardware-assisted debuggers; the goal is to catch bugs quickly. In the second phase, the software is handed over to the software QA staff for additional testing in an environment that is similar to the intended use.

Q86. What is beta testing?


A: Following alpha testing, "beta versions" of the software are released to a group of people, and limited public tests are performed, so that further testing can ensure the product has few bugs. Other times, beta versions are made available to the general public in order to receive as much feedback as possible. The goal is to benefit the maximum number of future users.

Q87. What is the difference between alpha and beta testing?

A: Alpha testing is performed by in-house developers and software QA personnel. Beta testing is performed by people outside the company: a few select prospective customers, or the general public.

Q88. What is clear box testing?

A: Clear box testing is the same as white box testing. It is a testing approach that examines the application's program structure and derives test cases from the application's program logic.

Q89. What is boundary value analysis?

A: Boundary value analysis is a technique for test data selection. A test engineer chooses values that lie along data extremes. Boundary values include maximum, minimum, just inside boundaries, just outside boundaries, typical values and error values. The expectation is that if a system works correctly for these extreme or special values, then it will work correctly for all values in between. An effective way to test code is to exercise it at its natural boundaries.
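A minimal sketch of boundary value selection, assuming a field specified to accept integers from 1 to 100; the `accepts` stub stands in for the application under test:

```python
LOW, HIGH = 1, 100

def accepts(value: int) -> bool:
    # Stand-in for the application under test.
    return LOW <= value <= HIGH

# Values chosen along the data extremes, per boundary value analysis.
boundary_cases = {
    LOW - 1:  False,  # just outside the lower boundary
    LOW:      True,   # minimum
    LOW + 1:  True,   # just inside
    50:       True,   # typical value
    HIGH - 1: True,   # just inside
    HIGH:     True,   # maximum
    HIGH + 1: False,  # just outside the upper boundary
}

for value, expected in boundary_cases.items():
    assert accepts(value) == expected, f"boundary failure at {value}"
print("all boundary cases pass")
```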

Q90. What is ad hoc testing?

A: Ad hoc testing is the least formal testing approach.

Q91. What is gamma testing?

A: Gamma testing is testing of software that has all the required features but did not go through all the in-house quality checks. Cynics tend to refer to such software releases as "gamma testing".

Q92. What is glass box testing?

A: Glass box testing is the same as white box testing. It is a testing approach that examines the application's program structure and derives test cases from the application's program logic.

Q93. What is open box testing?

A: Open box testing is the same as white box testing. It is a testing approach that examines the application's program structure and derives test cases from the application's program logic.

Q94. What is black box testing?

A: Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself nor the "inner workings" of the software.

Q95. What is functional testing?

A: Functional testing is the same as black box testing: a type of testing that considers only externally visible behavior, and neither the code itself nor the "inner workings" of the software.


Q96. What is closed box testing?

A: Closed box testing is the same as black box testing: a type of testing that considers only externally visible behavior, and neither the code itself nor the "inner workings" of the software.

Q97. What is bottom-up testing?

A: Bottom-up testing is a technique for integration testing in which low-level components are tested first. A test engineer creates and uses test drivers to stand in for the higher-level components that have not yet been developed, calling the low-level components the way their future callers would.
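A minimal sketch of a test driver, assuming a low-level `apply_discount` component whose higher-level caller has not been written yet; both names are invented for illustration:

```python
def apply_discount(total: float, rate: float) -> float:
    """Low-level component under test."""
    return round(total * (1 - rate), 2)

def driver():
    """Test driver standing in for the not-yet-written checkout module:
    it makes the calls the real caller would eventually make."""
    assert apply_discount(100.0, 0.1) == 90.0
    assert apply_discount(19.99, 0.0) == 19.99
    print("low-level component behaves as its future caller expects")

driver()
```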

Q98. What is software quality?

A: The quality of software varies widely from system to system. Some common quality attributes are stability, usability, reliability, portability and maintainability. See the quality standard ISO 9126 for more information on this subject.

Q99. What do test case templates look like?

A: Software test cases are kept in a document that describes inputs, actions, or events and their expected results, in order to determine if all features of an application are working correctly. Test case templates contain all the particulars of every test case. Often these templates take the form of a table, for example a six-column table where:

· column 1 is the Test Case ID Number;
· column 2 is the Test Case Name;
· column 3 is the Test Objective;
· column 4 is the Test Conditions/Setup;
· column 5 is the Input Data Requirements/Steps; and
· column 6 is the Expected Results.

All documents should be written to a certain standard and template. Standards and templates maintain document uniformity. They also help in learning where information is located, making it easier for users to find what they want. Lastly, with standards and templates, information will not be accidentally omitted from a document.

Q100. What is a software fault?

A: Software faults are hidden programming errors: errors in the correctness of the semantics of computer programs.

Q101. What is software failure?

A: Software failure occurs when the software does not do what the user expects it to do.

40 Testing Interview Questions

Page 219: Testing overview

Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Interview Faq's, Manual Testing Basics

What's ad hoc testing?
A testing approach where the tester tries to break the software by randomly trying the functionality of the software.

What's accessibility testing?
Testing that determines whether software will be usable by people with disabilities.

What's alpha testing?
Alpha testing is conducted at the developer's site, in a controlled environment, by the end user of the software.

What's beta testing?
Testing the application after installation at the client's site.

What is component testing?
Testing of individual software components (unit testing).

What's compatibility testing?
Compatibility testing checks that the software is compatible with the other elements of the system.

What is concurrency testing?
Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. It identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.

What is conformance testing?
The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

What is context-driven testing?
The context-driven school of software testing is a flavor of agile testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

What is data-driven testing?
Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in automated testing (a short sketch follows this glossary).

What is conversion testing?
Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

What is dependency testing?


Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

What is depth testing?
A test that exercises a feature of a product in full detail.

What is dynamic testing?
Testing software through executing it. See also static testing.

What is endurance testing?
Checks for memory leaks or other problems that may occur with prolonged execution.

What is end-to-end testing?
Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

What is exhaustive testing?
Testing which covers all combinations of input values and preconditions for an element of the software under test.

What is gorilla testing?
Testing one particular module or piece of functionality heavily.

What is installation testing?
Testing full, partial, upgrade, or install/uninstall processes, with the objective of demonstrating production readiness.

What is localization testing?
This term refers to adapting software for a specific locality.

What is loop testing?
A white box testing technique that exercises program loops.

What is mutation testing?
Mutation testing is a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine whether the 'bugs' are detected. Proper implementation requires large computational resources.

What is monkey testing?
Testing a system or an application on the fly, i.e. just a few tests here and there, to ensure the system or application does not crash.

Page 221: Testing overview

What is positive testing?
Testing aimed at showing the software works. Also known as "test to pass". See also negative testing.

What is negative testing?
Testing aimed at showing the software does not work. Also known as "test to fail". See also positive testing.

What is path testing?
Testing in which all paths in the program source code are tested at least once.

What is performance testing?
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "load testing".

What is ramp testing?
Continuously raising an input signal until the system breaks down.

What is recovery testing?
Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power-out conditions.

What is re-testing?
Testing the functionality of the application again, after a fix has been made.

What is regression testing?
Checking that changes in the code have not affected the existing, working functionality.

What is sanity testing?
A brief test of the major functional elements of a piece of software to determine if it is basically operational.

What is scalability testing?
Performance testing focused on ensuring the application under test gracefully handles increases in workload.

What is security testing?
Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

What is stress testing?


Stress testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.

What is smoke testing?
A quick-and-dirty test that the major functions of a piece of software work. The term originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

What is soak testing?
Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

What's usability testing?
Usability testing checks how user-friendly the software is.

What's user acceptance testing?
User acceptance testing determines whether the software is satisfactory to an end-user or customer.

What's volume testing?
In volume testing, the system is subjected to large volumes of data.
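Picking up the data-driven testing entry above, here is a minimal sketch in which a test is parameterized by externally defined data; the `add_tax` function and the values are invented for illustration, and the inline CSV stands in for a file or spreadsheet:

```python
import csv, io

# Inline stand-in for an externally maintained data file or spreadsheet.
DATA = """amount,rate,expected
100,0.10,110.00
20,0.00,20.00
0,0.10,0.00
"""

def add_tax(amount: float, rate: float) -> float:
    # Stand-in for the transaction under test.
    return round(amount * (1 + rate), 2)

for row in csv.DictReader(io.StringIO(DATA)):
    actual = add_tax(float(row["amount"]), float(row["rate"]))
    assert actual == float(row["expected"]), row
print("all data-driven cases pass")
```

Adding a case is then a data edit, not a code change, which is the point of the technique.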

As a manager, what process did you adopt to define testing policy?

Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics

Below are the important steps used to define a testing policy in general, though they can change according to how the policy is implemented in your organization. Let's understand in detail the steps for implementing a testing policy in an organization.

Definition: The first thing any organization needs to do is agree on one unique definition of testing within the organization, so that everyone is of the same mindset.

How to achieve: How are we going to achieve our objective? Will there be a testing committee? Will there be compulsory test plans which need to be executed, etc.?

Evaluate: After testing is implemented in a project, how do we evaluate it? Are we going to derive metrics of defects per phase, per programmer, etc.? Finally, it's important to let everyone know how testing has added value to the project.


Standards: Finally, what are the standards we want to achieve by testing? For instance, we can define that more than 20 defects per KLOC will be considered below standard, and that code review should be done in that case.


Does an increase in testing always mean good for the project?

Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics

No, an increase in testing does not always mean good for the product, company or project. In real test scenarios, only about 20% of test plans are critical from a business angle; running those critical test plans will assure that the testing is proper, rather than running the full 100% of test plans again and again. Under-testing and over-testing both have an impact: if you under-test a system, your number of defects will increase; on the contrary, if you over-test a system, your cost of testing will increase. Even if your defects come down, your cost of testing shoots up.

What is the difference between Verification and Validation?

Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics

Verification is a review without actually executing the process, while validation is checking the product with actual execution. For instance, code review and syntax checking is verification, while actually running the product and checking the results is validation.

What is the difference between Defect and Failure?

Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics

A defect is a variance from the normal flow. When a defect reaches the end customer, it is termed a failure; if the defect is detected internally and resolved, it is called a defect.


There are mainly three categories of defect:

Wrong: The requirements have been implemented incorrectly. This defect is a variance from the given specification.

Missing: There is a requirement given by the customer that was not done. This is a variance from the specification, an indication that the specification was not implemented or that a requirement of the customer was not noted properly.

Extra: A requirement incorporated into the product that was not given by the end customer. This is always a variance from the specification, but may be an attribute desired by the user of the product. However, it is considered a defect because it is a variance from the existing requirements.


What is the difference between white box, black box and gray box testing?

Posted: 30/05/2009 | Author: Ganesh Raman | Filed under: Manual Testing Basics

Black box testing is a testing strategy based solely on the requirements and specifications. Black box testing requires no knowledge of the internal paths, structure, or implementation of the software under test.

White box testing is a testing strategy based on the internal paths, code structure and implementation of the software under test. White box testing generally requires detailed programming skills.

There is one more type of testing, called gray box testing. In this we look into the "box" under test just long enough to understand how it has been implemented. Then we close up the box and use our knowledge to choose more effective black box tests.

Consider how the two types of testers view an accounting application during testing: the black box tester views it purely as an accounting application, while the white box tester knows about the internal structure of the application. In most scenarios white box testing is done by developers, as they know the internals of the application. In black box testing we check the overall functionality of the application, while in white box testing we do code reviews, view the architecture, remove bad code practices and do component-level testing.
