testing2 (1)


  • 8/8/2019 testing2 (1)


    Part A

    Q1. Why does knowing how the Software works influence how and what you should test?

Ans. If we do not know how the software works and test it only with a black-box view, we won't know whether our test cases adequately cover all the parts of the software, or whether some of the cases are redundant or missing. An experienced black-box tester can design a fairly efficient suite of tests for a program, but he can't know how good that suite is without some white-box knowledge. Once we understand how the software works, what kinds of actions it takes to solve our problems, and how it produces the desired results, we can develop test cases that check its ability to work under different kinds of conditions.

From the white-box point of view, when we run the software under dynamic conditions, if we do not know how a module works, for example how it calls other modules, we cannot develop test cases to check whether it is performing its function correctly.
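As a small illustration (the function and its special-case branch are hypothetical, invented for this sketch), black-box tests derived only from a specification can leave a branch untested that white-box knowledge would reveal:

```python
def absolute(x):
    """Spec: return the absolute value of x."""
    if x == 0:               # special-case branch visible only in the source
        return 0
    return x if x > 0 else -x

# Black-box tests, written from the spec alone:
assert absolute(5) == 5
assert absolute(-5) == 5

# White-box view: reading the code reveals a third branch (x == 0)
# that the spec-based tests above never execute, so a case is added.
assert absolute(0) == 0
```

Coverage tools that measure statement or branch coverage automate exactly this kind of white-box check on a black-box test suite.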

    Q2. What is the biggest problem of White-Box Testing either Static or Dynamic?

Ans. In white-box testing there are three big problems, which can lead to incomplete development of the software. These three problems are:

1. Complexity: Being able to see every constituent part of an application means that a tester must have detailed programmatic knowledge of the application in order to work with it properly. This high degree of complexity requires a much more highly skilled individual to develop the test cases.

2. Fragility: While introspection is supposed to overcome the issue of application changes breaking test scripts, the reality is that the names of objects often change during product development, or new paths through the application are added. The fact that white-box testing requires test scripts to be tightly tied to the underlying code of an application means that changes to the code will often cause white-box test scripts to break. This, then, introduces a high degree of script maintenance into the testing process.

3. Integration: For white-box testing to achieve the degree of introspection required, it must be tightly integrated with the application being tested. This creates a few problems. To be tightly integrated with the code, you must install the white-box tool on the system on which the application is running. That is usually acceptable, but when you need to rule out the possibility that the testing tool itself is causing a performance or operational problem, it becomes impossible to do so. Another issue is platform support. Due to the highly integrated nature of white-box testing tools, many do not support more than one platform, usually Windows. Where companies have applications that run on other platforms, they either need to use a different tool or resort to manual testing.


    Q3. How could you guarantee that your Software would never have a Configuration Problem?

Ans. We cannot easily guarantee that our software will never have a configuration problem. It would only be possible if we shipped the hardware and software together as one package: the software would work only on that hardware, and the hardware would have to be completely sealed, without a single interface to the outside world.

One example of such software is embedded-system software, such as the software in a microwave oven, where the software works only with that particular hardware. For equipment of this kind we may say the configuration is essentially fixed, but still not guaranteed one hundred percent.

Q4. Create equivalence partitions and write test cases to test a login screen containing username and password fields.

Ans. The following test cases were formed to test the login screen containing username and password:

Test Case Number 01

Test Purpose: - Verify login through the login window, which consists of Username and Password fields, for all users.

Test Procedure: - The user enters a valid alphanumeric username and a valid password.

Expected Result: - Access through the login window is granted.

Actual Result: - Same as expected result.

Pass/Fail Criteria: - Test passed.

Remarks: - None.

Test Case Number 02

Test Procedure: - The user enters an invalid username containing symbols such as *, &, /, ?, together with a valid password.

Expected Result: - Login is refused and an error message is displayed.

Actual Result: - Login is not granted; an error message is displayed.

Pass/Fail Criteria: - Test passed (the invalid username was correctly rejected).

Remarks: - An error message such as "Enter a valid user name" is displayed.
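The equivalence partitions behind these test cases can be sketched in code. This is a minimal sketch under stated assumptions: `validate_username` is a hypothetical validator that accepts only non-empty alphanumeric usernames.

```python
import string

ALLOWED = set(string.ascii_letters + string.digits)

def validate_username(name):
    """Hypothetical rule: accept only non-empty alphanumeric usernames."""
    return bool(name) and set(name) <= ALLOWED

# One representative value per equivalence partition:
assert validate_username("user123") is True   # valid partition: alphanumeric
assert validate_username("bad*&/?") is False  # invalid partition: symbols
assert validate_username("") is False         # invalid partition: empty input
```

Because every member of a partition is assumed to behave the same way, one representative value per partition is enough, which is what keeps the test suite small.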

    Q5. Explain the key elements involved in formal reviews?

Ans. A formal review is a review of the software carried out by members of the team that created it. It is the process through which static white-box testing is performed. There are four key elements in a formal review, described as follows:

1. Identify problems: The first goal of a formal review is to find problems. We look for things that are incorrect as well as things that are missing. The designer of the software should not be blamed personally for these wrong or missing things; the focus should be on improving the software by correcting the design.

2. Follow rules: A fixed set of rules designed for the review process must be followed for the review to work well. Such rules cover questions like:

How much time will be spent?

What can be commented on?

3. Prepare: Each participant is expected to prepare for and contribute to the review. Depending on the type of review, participants may have different roles. They need to know what their duties and responsibilities are and be ready to actively fulfill them at the review. Most of the problems found through the review process are found during preparation, not at the actual review.

4. Write a report: A written report must be created that summarizes the results of the review, and that report must be made available to the rest of the product development team. It is imperative that others are told the results of the meeting: how many problems were found, where they were found, and so on.

    Part B


    Q6. Is it acceptable to release a Software Product that has Configuration Bugs?

Ans. Although it is not a good idea to release a product with configuration bugs, products are still released with them. One reason is that we are not able to find all the configuration bugs, and we will never be able to fix all of them. As in all testing, the process is risk-based. Configuration testing is the process of checking the operation of the software under test with various types of hardware. It is important to note that testers cannot try the software with every hardware configuration, because there are so many technologies and they change day by day. We and our team will need to decide what we can fix and what we can't. Leaving in an obscure bug that appears only with a rare piece of hardware is an easy decision; others won't be as easy.

Q7. In addition to age and popularity, what other criteria might you use to equivalence-partition hardware for configuration testing?

Ans. Age and popularity are only one set of criteria used for equivalence partitioning; there are several others. Region or country is a possibility: some hardware devices, such as DVD players, only work with DVDs from their geographic region. Another might be consumer versus business: some hardware is specific to one market but not the other. Other criteria may apply depending on the type of software being tested.

Q8. What are the different levels of testing, and what are the goals of each level? For each level, which testing approach is more suitable?

Ans. Testing is an investigation conducted to provide information about the quality of the product or service under test, with respect to the context in which it is intended to operate. To streamline this process, testing is organized into different levels. At these levels the software is tested step by step, in line with the planned build and release strategies. The levels are described as follows:

1. Unit Test: A unit is the smallest testable piece of software that can be compiled, linked, and loaded, e.g. functions/procedures, classes, or interfaces. This type of testing is normally done by the programmer, with test cases written after coding. In unit testing we verify the program specifications against the internal logic of the program or module and validate that logic.

Goal: These tests are usually written by developers as they work on code, to ensure that a specific function works as expected. One function might have multiple tests, to catch corner cases or other branches in the code. Unit testing alone cannot verify the functionality of a piece of software; rather, it is used to assure that the building blocks the software uses work independently of each other.

2. Integration Testing: Integration testing checks for correct interaction between system units and between systems built by merging existing libraries. The modules may be coded by different people, and this level mainly tests the interfaces among the units. Top-down integration testing covers both the detection and the correction of programming/code-generation problems.

Examples: regression testing, integration testing, smoke testing.

Goal: Integration tests work to expose defects in the interfaces and interactions between integrated components (modules). Progressively larger groups of tested software components, corresponding to elements of the architectural design, are integrated and tested until the software works as a system.

3. System Test: System testing tests the overall interaction of components and finds disparities between implementation and specification. It involves load, performance, reliability, and security testing. System testing exercises a completely integrated system to verify that it meets its requirements.

Examples: system testing, recovery testing, security testing, load and stress testing, performance testing, installation testing.

Goal: Verifies proper execution of the entire application's components, including interfaces to other applications. Both functional and structural types of tests are performed to verify that the system is functionally and operationally sound.

4. Acceptance Testing: Acceptance testing is done to demonstrate user satisfaction. The users are an essential part of the process, so their satisfaction must be confirmed. This type of testing is generally merged with system testing, and it is done cumulatively by the test team and the customer.

Examples: user acceptance testing (UAT), which ensures that the project satisfies the customer's requirements; alpha testing; beta testing; long-term testing; compatibility testing.

http://en.wikipedia.org/wiki/System_testing

Goal: - Verifies that the completed system satisfies the customer's requirements and integrates correctly with any external or third-party systems defined in the system requirements.
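The distinction between the unit and integration levels above can be sketched with two small functions; the names and the price logic are invented for this illustration.

```python
def parse_amount(text):
    """Hypothetical unit: convert a 'dollars.cents' string to cents."""
    dollars, _, cents = text.partition(".")
    return int(dollars) * 100 + int(cents or 0)

def apply_discount(cents, percent):
    """Hypothetical unit: apply a whole-percent discount."""
    return cents - cents * percent // 100

# Unit level: each building block is tested in isolation.
assert parse_amount("12.50") == 1250
assert apply_discount(1000, 10) == 900

# Integration level: what is under test is the interaction across
# the interface between the two units.
def discounted_price(text, percent):
    return apply_discount(parse_amount(text), percent)

assert discounted_price("12.50", 10) == 1125
```

System and acceptance testing would then exercise the whole application through its external interfaces rather than through individual functions like these.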

    Q9. Relate verification and validation to the Quality control and Quality Assurance with an example?

Ans. Quality assurance is the process of enforcing quality-control standards by applying planned, systematic quality activities and working to improve the processes used in producing the web site and its components, infrastructure, and content. Quality assurance examines the processes of site implementation from inputs to outputs.

Quality control involves the formal and systematic use of testing to measure achievement of specified standards and recommendations: the measurement and enforcement of a defined level of standards.

    Here's an example to drive home the point. Let's say a project manager asked the sponsor to approve the

    Business Requirements Report. If you were the sponsor, how would you validate and verify that the

    business requirements seemed complete and correct?

    One solution would be for you to actually review the document and the business requirements. If you

    did that, you would be performing a quality control activity, since your actions would be based on

    validating the deliverable itself.

    However, let's say the document was thirty pages long and that you (as the sponsor) did not have the

    expertise, the time, or the inclination to do a specific content review. In that case, you wouldn't ask to

    review the document itself. Instead, you would ask the project manager to describe the process used to

    create the document. Let's say you received the following reply.

    Project manager - "I gathered eight of your major users in a facilitated session. After the meeting, I

    documented the requirements and asked the group for their feedback, modifications, etc. I then took

    these updated requirements to representatives from the Legal, Finance, Manufacturing and Purchasing

groups, and they added requirements that were needed to support company standards. We then had a meeting with the four managers in your area who are most impacted by this system. These managers

    added a few more requirements. I then asked your four managers to sign off on the requirements and

    you can see their signatures on the last page."

If you were the sponsor, would you now feel comfortable signing off on the requirements? If it were me, I would feel pretty comfortable.

That's the difference. The validation process, together with quality-control activities, is focused on the deliverable itself, while the verification process, together with quality-assurance activities, is focused on the process used to create the deliverable. Both are powerful techniques, and both must be performed to ensure that the deliverables meet your customers' quality requirements.

http://en.wikipedia.org/wiki/System_integration_testing

Q10. In a code review checklist there are some items, as given below. Categorize them.

1. Is the entire conditional path reachable?

2. If pointers are used, are they initialized properly?

3. Is there any part of the code that is unreachable?

4. Has the use of similar-looking operators (e.g. & and &&, or = and == in C) been checked?

5. Does the code follow the coding conventions of the organization?

Ans.

1. Is the entire conditional path reachable?

Category: - Control Flow Errors

2. If pointers are used, are they initialized properly?

Category: - Data Declaration Errors

3. Is there any part of the code that is unreachable?

Category: - Control Flow Errors

4. Has the use of similar-looking operators (e.g. & and &&, or = and == in C) been checked?

Category: - Comparison Errors

5. Does the code follow the coding conventions of the organization?

Category: - Other checks (coding standards)
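Item 4 can be illustrated outside C as well: Python's bitwise `&` creates the same kind of comparison error as confusing C's `&` with `&&`, because `&` binds more tightly than `==` (the values below are invented for illustration):

```python
x, y = 1, 2

# Intended check: (x == 1) and (y == 2)
correct = (x == 1) and (y == 2)

# With the similar-looking bitwise operator, precedence changes the meaning:
# x == 1 & y == 2 parses as x == (1 & y) == 2, i.e. the chained
# comparison x == 0 == 2, which is False even though x == 1 and y == 2.
subtle = x == 1 & y == 2

assert correct is True
assert subtle is False
```

A review checklist entry for similar-looking operators catches exactly this class of bug, which compiles and runs without any error message.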