
Requirements Check


Verifiability: Is the requirement realistically testable?

Comprehensibility: Is the requirement properly understood?

Traceability: Is the origin of the requirement clearly stated?

Adaptability: Can the requirement be changed without a large impact on other requirements?

Is the primary reason to test to find bugs? Testing can serve several objectives:

1. Find defects.
2. Maximize bug count.
3. Block premature product releases.
4. Help managers make ship / no-ship decisions.
5. Minimize technical support costs.
6. Assess conformance to specification.
7. Conform to regulations.
8. Find safe scenarios for use of the product.
9. Assess quality.
10. Verify correctness of the product.

Your project is on a tight schedule and very little time is available. What major areas would you test in such a situation, and how does the test team know when to stop testing?

a. Discuss with the customer.
b. Identify high-risk areas.
c. Give priority to the main functionalities.
d. Reduce negative test cases.
e. Avoid field-level test case validation.
f. All priority test cases are covered.
g. The Test Manager can report status with some confidence level of completion.
h. The project is stable.
i. Quality goals have been met.
j. Test metrics, including mean time between failures and percent coverage, meet their targets.
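The risk-and-priority ideas in points b, c, and f above can be sketched as a simple scoring scheme: score each test case by likelihood of failure times impact, then run tests in descending score order. The test-case names and the likelihood/impact weights below are invented for illustration.

```python
# Risk-based test prioritization sketch: score = likelihood x impact.
# Test cases and weights are illustrative, not from any real project.

def prioritize(test_cases):
    """Return test cases sorted by descending risk score."""
    return sorted(test_cases,
                  key=lambda tc: tc["likelihood"] * tc["impact"],
                  reverse=True)

test_cases = [
    {"name": "login main flow",     "likelihood": 4, "impact": 5},  # score 20
    {"name": "report field format", "likelihood": 2, "impact": 1},  # score 2
    {"name": "payment processing",  "likelihood": 3, "impact": 5},  # score 15
]

for tc in prioritize(test_cases):
    print(tc["name"], tc["likelihood"] * tc["impact"])
```

Under schedule pressure, testing then proceeds down this list until time runs out, which gives the Test Manager a defensible statement of what was and wasn't covered.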

Independent Test Groups

A test group that reports along a different hierarchy than the development group. True independence implies the ability to block a release if it is in the organization's best interest to do so.

Best Practice

The modern independent test group is a value-added group that brings special expertise to the test floor. Because of specialized expertise and dedicated resources, they are more effective at many kinds of testing than the typical development group would be. Among the kinds of testing effectively done by a value-added independent test group are: network, configuration compatibility, usability, performance, security, acceptance, hardware/software integration, distributed processing, recovery, platform compatibility, and third-party software.

The Process of Performance Engineering: Execute Scheduled Tests

Scheduled tests are those identified in the Performance Engineering Strategy document to validate the collected performance requirements. Scheduled tests shouldn't be conducted until baseline and benchmark tests are shown to meet the related performance requirements. There are countless types of measurements that can be gathered during scheduled tests. Requirements, analysis, and design will dictate which measurements will be collected and later analyzed. Also, the required measurements may change throughout the course of testing based on the results of previous tests. Measurements collected during this activity may include, but aren't limited to, the following:

End-to-end system response time (user experience).

Transactions per second for various components.

Memory usage of various components by scenario.

CPU usage of various components by scenario.

Component throughput.

Component bandwidth.
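Measurements such as end-to-end response time are usually summarized with percentiles rather than averages alone, since a few slow outliers can hide behind a good mean. A minimal sketch, using invented sample data and a simple nearest-rank percentile:

```python
# Summarize response-time samples with an average and percentiles.
# The sample values are invented for illustration.

def percentile(samples, p):
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(samples)
    rank = max(0, int(round(p / 100.0 * len(ordered))) - 1)
    return ordered[rank]

response_times_ms = [120, 135, 150, 180, 210, 250, 300, 410, 520, 900]

avg = sum(response_times_ms) / len(response_times_ms)
p90 = percentile(response_times_ms, 90)
p95 = percentile(response_times_ms, 95)
print(f"avg={avg:.0f}ms p90={p90}ms p95={p95}ms")
```

Here the average (about 318 ms) looks acceptable while the 95th percentile (900 ms) exposes the user experience of the slowest requests, which is why performance requirements are often stated as percentiles.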

Definitions of SCM

According to the SEI CMM: "Software Configuration Management involves identifying the configuration of the software (i.e. selected software work products and their descriptions) at given points in time, systematically controlling changes to the configuration, and maintaining the integrity and traceability of the configuration throughout the software life cycle." The work products placed under software configuration management include the software products that are delivered to the customer (e.g., software requirements document, code) and the items that are identified with or required to create the software products (e.g., compiler).

According to IEEE Std 610: "Configuration Management is a discipline applying technical and administrative direction and surveillance to identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements."

SCM - Configuration Identification:

1. Identify the structure of software systems.
2. Identify individual components.
3. Develop the software hierarchy.
4. Define relationships and interfaces.
5. Release configuration documents.
6. Establish baselines.

Configuration Items:

1. Test plans.
2. Test procedures.
3. Test data.
4. Test results.
5. User documentation.
6. Quality plans.
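One way to picture configuration identification and baselining is to record each configuration item with a version label and a content fingerprint, so that later changes can be detected against the baseline. The item names and contents below are hypothetical.

```python
import hashlib

# Configuration identification sketch: record each configuration item
# with a version label and a content hash so the baseline is checkable.
# Item names and contents are hypothetical.

def fingerprint(content: str) -> str:
    """Short content hash used as the item's fingerprint."""
    return hashlib.sha256(content.encode()).hexdigest()[:12]

baseline = {
    "test_plan.doc":    {"version": "1.2", "hash": fingerprint("test plan v1.2")},
    "test_data.csv":    {"version": "1.0", "hash": fingerprint("id,value\n1,42")},
    "quality_plan.doc": {"version": "2.0", "hash": fingerprint("quality plan v2.0")},
}

def verify(item: str, content: str) -> bool:
    """True if the item's current content still matches the baseline."""
    return baseline[item]["hash"] == fingerprint(content)

print(verify("test_plan.doc", "test plan v1.2"))  # unchanged item
print(verify("test_plan.doc", "test plan v1.3"))  # modified item
```

A real SCM tool does far more (history, branching, audits), but this captures the core idea: a baseline is a named, verifiable snapshot of identified items.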

Why is SCM essential?

SCM is essential to program and software project success for the following reasons:

1. SCM is the means by which shared information (whether it is produced by, used by, or released from a software development or support activity) is controlled and maintained.
2. SCM methods provide a means to identify, track, and control system development from the inception of the concept for the system until it is replaced or retired.
3. Management of baselines and engineering products through SCM provides sustained control of the information as the engineering functions work their way through the process.
4. SCM provides visibility into project change indicators, including SCM churn per month, requirement changes per month, and the number of defects that are open and closed.

Successful change and issue management enables an organization to:

- Bring applications to market faster.
- Improve application quality.
- Automate team communication.
- Achieve higher CMM levels and better ISO compliance.
- Eliminate the problems of dropped issues.
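A change indicator such as "SCM churn per month" is straightforward to compute once changes are tracked. One common, simple definition of churn is lines added plus lines deleted in the period; the monthly figures below are invented for illustration.

```python
# SCM churn sketch: churn for a period = lines added + lines deleted.
# This definition and the monthly figures are illustrative assumptions.

monthly_changes = {
    "Jan": {"added": 1200, "deleted": 300},
    "Feb": {"added": 400,  "deleted": 900},
    "Mar": {"added": 150,  "deleted": 50},
}

def churn(month):
    """Total lines touched (added + deleted) in the given month."""
    c = monthly_changes[month]
    return c["added"] + c["deleted"]

for month in monthly_changes:
    print(month, churn(month))
```

A falling churn figure late in a project is one sign the configuration is stabilizing; a spike after a "code freeze" is the kind of visibility SCM is meant to provide.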

Why do we need SCM?

In many software companies, project leaders and software developers do not perceive software support functions such as software quality assurance and SCM as value-added. Developers frequently view SCM as a hindrance to product improvement because of the overhead associated with the change control function of SCM. But on closer examination, one can see that poor configuration management often causes the most frustrating software problems. Some examples of poor practices are:

- The latest version of the source code cannot be found.
- A difficult defect that was fixed at great expense suddenly reappears.
- A developed and tested feature is mysteriously missing.
- A fully tested program suddenly does not work.
- The wrong version of the code was tested.
- There is no traceability between the software requirements, documentation, and code.
- Programmers are working on the wrong version of the code.
- The wrong versions of the configuration items are being baselined.
- No one knows which modules comprise the software system delivered to the customer.

Most of these problems were revealed during software process assessments conducted by the authors. Projects under great pressure to meet difficult deadlines found their scarce time constantly under attack because of the rework these problems caused. One developer stated that he had "fixed" a problem three times, and three months after each delivery the problem recurred. Project leaders cannot afford to have their software developers redoing what was already done.

Principles of Reengineering

- Organize around outcomes, not tasks.
- Have those who use the output of the process perform the process.
- Subsume information-processing work into the real work that produces the information.
- Treat geographically dispersed resources as though they were centralized.
- Link parallel activities instead of integrating their results.
- Put the decision point where the work is performed, and build control into the process.
- Capture information once and at the source.

Requirement Engineering

Requirement engineering is the systematic use of proven principles, techniques, languages, and tools for the cost-effective analysis, documentation, and ongoing evolution of user needs and of specifications of the external behavior of a system to satisfy those user needs. Requirement engineering is not conducted in a sporadic, random, or otherwise haphazard fashion; instead, it is the systematic use of proven approaches. Requirement engineering activities result in the specification of the software's operational characteristics (functions, data, and behavior), indicate the software's interfaces with other system elements, and establish constraints that the software must meet.

Sandbox Environment

A security measure in the Java development environment. The sandbox is a set of rules, applied when creating an applet, that prevents certain functions when the applet is sent as part of a Web page. When a browser requests a Web page with applets, the applets are sent automatically and can be executed as soon as the page arrives in the browser. If an applet is allowed unlimited access to memory and operating system resources, it can do harm in the hands of someone with malicious intent. The sandbox creates an environment in which there are strict limitations on which system resources the applet can request or access. Sandboxes are used when executable code comes from unknown or untrusted sources; they allow the user to run untrusted code safely.

Risk Management and Confidence

To focus testing on risky areas and to find the important bugs, you have to practice some kind of risk management. Testing and risk management are twins; they belong together. But risk management has to be accompanied by something that may be called confidence management. To do appropriate risk management and to spot the areas of risk, you have to examine the whole process of software development, e.g.:

- Are there specifications? Are they complete, understandable, and up to date?
- Did the people who wrote them really understand the needs of the customer? Did the customer himself understand his requirements, and did he communicate them properly? Did development get the message?
- Has the market changed meanwhile? What are the competitors doing?
- Does development use new technology? Do the developers have experience with this technology? Are there implications for testing?
- Are the developers familiar with the code? Or is the code heavily patched and poorly documented? Are the developers who designed or wrote it still around?
- Does development run reliable configuration management? Or will a new build contain old bugs that had been fixed some builds ago? Must there be an exhaustive regression test for each build? Or will a bug, once fixed, remain fixed?

Identify Exploratory Tests

This is the aspect of performance engineering in which unplanned tests to detect and exploit performance issues are developed to aid in the tuning effort. To be effective, these tests must be researched in collaboration with the technical stakeholders who have intimate knowledge of the area of the system exhibiting performance issues. The results of this research lead the project back into the Develop Test Assets aspect, where exploratory tests are created and documented. Exploratory tests are designed to exploit the specific system area or function suspected to contain a performance bottleneck based on previous results analysis. Typically, the suspect tier or component of the bottleneck is identified, and then decisions are made about the metrics that need to be collected to determine whether the bottleneck does, in fact, reside in that area, and to better understand the bottleneck. Finally, the type of test that's required is identified and described so that it can be developed. We'll discuss this in detail in Part 10 of this series.

Benefit Realization Testing

The benefit realization test is a test or analysis conducted after an application is moved into production in order to determine whether the application is likely to deliver the originally projected benefits.

Anti-Virus Tools

Anti-virus tools installed on the firewall provide reasonably good security for files coming into the network over the Internet. But they often fail when the virus is inside a zip file or occupies a small portion of a large file.

Configuration Management

A configuration is the complete technical description required to build, test, accept, install, operate, maintain, and support a system. It includes all documentation pertaining to the system as well as the system itself. Configuration management comprises the technical and organizational activities of configuration identification, configuration control, configuration status accounting, and configuration audit. This includes the processes of identifying and defining the configuration items, recording and reporting the status of configuration items and requests for change, and verifying the completeness and correctness of configuration items.

Configuration items are aggregations of hardware, network, software, application, environment, services, or any of their discrete portions, designated for configuration management and treated as a single entity in the configuration management process. Configuration items may vary widely in complexity, size, and type, from an entire system (including all hardware, software, and documentation) to a version or variant of a single module.

What is the SCM Process?

1. Configuration Identification.

2. Change Control.

3. Configuration Status Accounting.

4. Configuration Audit.

5. Baselining.

6. Version Control.

7. Build Management.

8. Release Management.
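Change Control (step 2 above) is often modeled as a small state machine for change requests, where only certain transitions are legal. The state names below are a common convention, not anything mandated by the SCM definitions quoted earlier.

```python
# Change-control sketch: a change request moves through a fixed set of
# states; illegal transitions are rejected. The state names are a
# common convention, chosen here for illustration.

TRANSITIONS = {
    "submitted":   {"approved", "rejected"},
    "approved":    {"implemented"},
    "implemented": {"verified"},
    "verified":    {"closed"},
    "rejected":    set(),
    "closed":      set(),
}

class ChangeRequest:
    def __init__(self, cr_id):
        self.cr_id = cr_id
        self.state = "submitted"

    def move_to(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

cr = ChangeRequest("CR-101")
cr.move_to("approved")
cr.move_to("implemented")
print(cr.state)
```

Enforcing the transitions is what gives SCM its audit trail: a change cannot be "verified" without having been "implemented", and a rejected request cannot quietly reappear in a build.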

Key SCM Activities

Managing software change requests involves:

1. Obtaining approvals and sign-offs.
2. Obtaining project status.
3. Tracking bugs and fixes.
4. Coordinating communication between groups.
5. Managing the distribution of changes.

Requirement Elicitation or Discovery:

Sometimes called requirements elicitation or requirements discovery, this involves technical staff working with customers to find out about the application domain, the services that the system should provide, and the system's operational constraints. It may involve end users, managers, engineers involved in maintenance, domain experts, trade unions, etc. These are called stakeholders.

Problems of Requirement Analysis

Stakeholders don't know what they really want.

Stakeholders express requirements in their own terms.

Different stakeholders may have conflicting requirements.

Organizational and political factors may influence the system requirements.

The requirements change during the analysis process. New stakeholders may emerge, and the business environment may change.


Basic methodology for introducing new processes: DMADV

- Define the process and where it would fail to meet customer needs.
- Measure and determine whether the process meets customer needs.
- Analyze the options to meet customer needs.
- Design changes into the process to meet customer needs.
- Verify that the changes have met customer needs.

Requirements Analysis: Developer Issues

Software developers and end users often have different vocabularies. Consequently, they can believe they are in perfect agreement until the finished product is supplied. The duty to bridge that gap is often assigned to business analysts, who analyze and document the business processes of business units affected by the proposed business solution, and business systems analysts, who analyze and document the proposed business solution from a systems perspective. Software developers often try to make the requirements fit an existing system or model, rather than develop a system specific to the needs of the client. Analysis is often carried out by programmers rather than business analysts, and programmers frequently lack the people skills and the domain knowledge to understand a business process properly.

Requirements Analysis

Requirements analysis, in systems engineering and software engineering, encompasses all of the tasks that go into the instigation, scoping, and definition of a new or altered system. Requirements analysis is an important part of the system design process, whereby requirements engineers and business analysts, along with systems engineers or software developers, identify the needs or requirements of a client; having identified these requirements, they are then in a position to design a solution. Requirements analysis is also known under other names: requirements engineering, requirements gathering, requirements capture, operational concept documenting, systems analysis, and requirements specification.

Business Components Module

The Business Components module enables SMEs (subject matter experts) with high-level knowledge of their application to create and manage business components in Quality Center. These components provide the basis for Business Process Testing. The Business Components module enables you to define the business component shell, which comprises an overview of the information that is required at the test-creation level. You can also define manual steps for the component and choose whether to convert it to an automated keyword-driven or WinRunner component. For keyword-driven components, you can begin implementing the automated steps in the keyword view. Components can be created and used to build business process tests even if the implementation of the application has not yet begun.

Analyze Results

Analysis of test results is both the most important and the most difficult part of performance engineering. Proper design and execution of tests, as well as proper measurement of system and/or component activities, make the analysis easier. Analysis should identify which requirements are being met, which ones aren't, and why. When the analysis shows why systems or components aren't meeting requirements, the system or component can be tuned to meet those requirements. Analysis of results may answer the following questions (and more):

- Are user expectations being met at various user loads?
- Do all components perform as expected under load?
- Which components cause bottlenecks?
- Which components need to be, or can be, tuned?
- Do additional tests need to be developed to determine the exact cause of a bottleneck?
- Are databases and/or servers adequate?
- Are load balancers functioning as expected?
- Is the network adequate?

The Analyze Results aspect focuses on determining whether the performance acceptance criteria have been met, and if not, what the bottlenecks are and whose responsibility it is to fix them. This aspect involves close coordination with stakeholders to ensure that both the performance engineering team and the stakeholders agree that all requirements are being validated. System administrators may also be involved in results analysis. Keeping a record of the tests being analyzed and the results of that analysis is an important part of this activity.

The Process of Performance Engineering: Develop Test Assets

A test asset is a piece of information that will remain at the completion of a performance engineering project. Some people refer to these items as "artifacts." The following assets are developed during this aspect of the process:

- Performance Engineering Strategy document
- Risk Mitigation Plan
- automated test scripts

The Develop Test Assets aspect begins before performance testing is scheduled to start. The Performance Engineering Strategy document and the Risk Mitigation Plan can be started immediately upon completion of the Evaluate System aspect. Automated test script development can begin only after development of a stand-alone component, or when the entire application is believed to be stable and has undergone at least initial functional testing. This aspect concludes when:

- the Performance Engineering Strategy document has been completed and approved by the stakeholders,
- mitigation strategies have been defined for all known risks, and
- all load-generation scripts have been created and individually tested (for the "testable" sections of the application).

Tune System

Tuning must occur at the component level while keeping the end-to-end system in mind. Even tuning each component to its best possible performance won't guarantee the best possible overall system performance under the expected user load. After tuning a component, it's important not only to retest that component but also to re-benchmark the entire system. Resolving one bottleneck may simply uncover another when system-wide tests are re-executed. The Tune System aspect may address, but isn't limited to, the following topics:

- Web server configuration
- database design and configuration
- application or file server configuration
- cluster management
- network components
- server hardware adequacy
- batch process scheduling/concurrency
- load balancer configuration
- firewall or proxy server efficiency

This aspect of the performance engineering project is a highly collaborative effort involving the performance engineering team and the development team. Often, once tuning begins, the performance engineer must be available to retest and analyze the effects of the changes made by a developer before any other activity can occur. The information gained from that analysis is critical to the developer or system administrator who's making the actual changes to the system. It's very rare for the performance engineering team to make actual changes to the system on their own. The activities associated with tuning the system need to be at least loosely documented so that differences from the original design can be captured and lessons can be passed on to future developers and performance engineers.

Formal Entry and Exit Criteria

The concept of formal entry and exit criteria goes back to the evolution of waterfall development processes and a model called ETVX, an IBM invention. The idea is that every process step, be it inspection, functional test, or software design, has precise entry and exit criteria. These are defined by the development process and are watched by management to gate the movement from one stage to another. It is arguable how precise any one criterion can be, and as emphasis on formal development process decreased, entry and exit criteria went out of currency. However, this practice allows much more careful management of the software development process.
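The ETVX idea (Entry criteria, Task, Validation, eXit criteria) can be sketched as a gate check evaluated before entering and before leaving a process step; the criteria names below are illustrative, not taken from any particular process definition.

```python
# ETVX gate sketch: a process step may start only when all entry
# criteria hold, and is complete only when all exit criteria hold.
# The criteria themselves are illustrative assumptions.

def gate(criteria):
    """Return (all_met, names of unmet criteria)."""
    unmet = [name for name, met in criteria.items() if not met]
    return (len(unmet) == 0, unmet)

entry = {"requirements baselined": True, "test plan approved": True}
exit_ = {"all priority tests run": True, "open defects triaged": False}

ok, missing = gate(entry)
print("may enter step:", ok)
ok, missing = gate(exit_)
print("may exit step:", ok, "unmet:", missing)
```

Management "watching the gate" amounts to reviewing exactly this unmet list before allowing the project to move to the next stage.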

Test Plan and Contents of a Test Plan

A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:

- Title
- Identification of software, including version/release numbers
- Revision history of the document, including authors, dates, approvals
- Table of contents
- Purpose of document, intended audience
- Objective of testing effort
- Software product overview
- Relevant related document list, such as requirements, design documents, other test plans, etc.
- Relevant standards or legal requirements
- Traceability requirements
- Relevant naming conventions and identifier conventions
- Overall software project organization and personnel/contact info/responsibilities
- Assumptions and dependencies
- Project risk analysis
- Testing priorities and focus
- Scope and limitations of testing
- Test outline: a decomposition of the test approach by test type, feature, functionality, process, system, module, etc., as applicable
- Outline of data input equivalence classes, boundary value analysis, error classes
- Test environment: hardware, operating systems, other required software, data configurations, interfaces to other systems
- Test environment validity analysis: differences between the test and production systems and their impact on test validity
- Test environment setup and configuration issues

Creating Projects in VSS: Salient Features

1. Any project can be labeled for further reference.
2. The files and folders located in the database can be shared with any other project.
3. The history of changes can be displayed using the Show History option.
4. Difference reports can be generated for all the files.
5. A shortcut that connects to a particular database can be created using VSS.

Security features:

1. All users are assigned rights for specific projects and sub-projects.
2. Individual users can have customized access rights for a specific folder.

Business Process Reengineering: Definition

Business Process Reengineering is the fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical contemporary measures of performance, such as cost, quality, service, and speed.

When to reengineer?

- Companies that find themselves in deep trouble.
- Companies that are not yet in trouble, but whose management has the foresight to see trouble coming.
- Companies that have no discernible difficulties, either now or on the horizon.

Which process do we reengineer?

- Processes that are in the deepest trouble?
- Processes that have the greatest impact on the customers?
- Feasibility: which of the company's processes are suitable for successful redesign?

Software Configuration Management:

1. Version Management.
2. Source Code Control.
3. Configuration Management.
4. Build Management.
5. Life Cycle Management.
6. Process Management.

Good SCM helps:

1. Bring applications to market faster.
2. Improve application quality.
3. Automate team communication.
4. Eliminate the problems of dropped issues or missed handoffs.

According to the SEI CMM, SCM is defined as follows: "Software Configuration Management involves identifying the configuration of the software at given points in time, systematically controlling changes to the configuration, and maintaining the integrity and traceability of the configuration throughout the software life cycle."

The Nature of Reliability

The nature of reliability speaks to the probability of failure-free operation for a specified time in a specified environment for a given purpose.
This, of course, means quite different things depending on the system and the users of that system. Informally, reliability is a measure of how well system users think the system provides the services they require. Defining reliability usually requires an operational profile, which describes the expected pattern of software usage. Reliability must also consider fault consequences: not all faults are equally serious, and a system is perceived as more unreliable if its faults are more serious. Reliability is said to have improved when software faults in the most frequently used parts of the software are removed; removing x percent of software faults will not necessarily lead to an x percent reliability improvement. Reliability is sometimes given as system reliability, the probability that a given system will perform a required task or mission for a specified time in a specified environment, and software reliability, the probability that a given piece of software will not cause the failure of a system for a specified time under specified conditions. Since that system can be the software itself, software reliability is basically the estimation of the probability of software failure during a specified exposure period.

Requirement Stability Index

A requirement stability index (RSI) is a metric used to organize, control, and track changes to the originally specified requirements for a new system, project, or product. Typically, a project begins, after consultation with customers or clients and research into their needs, with the creation of a requirements document. The document expresses what the customer or client needs and expects and, at least implicitly, what the developer will provide. The client or customer representative group reviews the document and, if in agreement with its specifications, signs it.
This process (called signing off) is intended to ensure that customer representatives or clients have agreed, in writing, on the specifics involved. As you might expect, however, once the design and development process is under way, customers or clients think of changes or additions they would like, a phenomenon known as requirements creep. Requirements management, an important part of project management, has become more challenging with the faster pace of technology. The RSI gives developers a means of continuing to document requirements as they change throughout the development process, and of monitoring deviations from those originally specified.

Six Sigma

The performance of a product is determined by how much margin exists between the design requirement of its characteristics (and those of its parts/steps) and the actual value of those characteristics. These characteristics are produced by processes in the factory and at the suppliers. Each process attempts to reproduce its characteristics identically from unit to unit, but within each process some variation occurs. Variation of a process is measured in standard deviations (sigma) from the mean. The normal variation, defined as the process width, is +/-3 sigma about the mean. Approximately 2700 parts per million parts/steps will fall outside the normal variation of +/-3 sigma. This, by itself, does not appear disconcerting. However, when we build a product containing 1200 parts/steps, we can expect 3.24 defects per unit (1200 x 0.0027) on average. This would result in a rolled yield of less than 4%, which means fewer than 4 units out of every 100 would go through the entire manufacturing process without a defect.
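The rolled-yield arithmetic above is easy to verify: with 1200 parts/steps, each defective with probability 0.0027, the expected defect count per unit is 1200 × 0.0027, and the probability that a unit is completely defect-free is (1 − 0.0027)^1200.

```python
# Check the Six Sigma rolled-yield arithmetic from the text:
# 1200 parts/steps, 2700 defective parts per million outside +/-3 sigma.

n_parts = 1200
p_defect = 2700 / 1_000_000          # 0.0027

defects_per_unit = n_parts * p_defect        # expected defects per unit
rolled_yield = (1 - p_defect) ** n_parts     # P(unit has zero defects)

print(f"defects per unit: {defects_per_unit:.2f}")  # 3.24
print(f"rolled yield: {rolled_yield:.1%}")          # about 3.9%, i.e. < 4%
```

The computation confirms the figures in the text: 3.24 expected defects per unit, and a rolled yield of roughly 3.9%, under 4 defect-free units per 100.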
Thus, we can see that for a product to be built virtually defect-free, it must be designed to accept characteristics that vary significantly more than +/-3 sigma from the mean.

Requirements analysis - General problems
The general difficulties involved with requirements analysis are increasingly well known: the right people, with adequate experience, technical expertise, and language skills, may not be available to lead the requirements engineering activities; the initial ideas about what is needed are often incomplete, wildly optimistic, and firmly entrenched in the minds of the people leading the acquisition process; and the difficulty of using the complex tools and diverse methods associated with requirements gathering may negate the hoped-for benefits of a complete and detailed approach.

VSS Administrator tasks
To use VSS, you (or someone else in your work group) must perform some administrative tasks to set up a VSS database. As the VSS administrator, it is your job to make the VSS database as useful to VSS clients as possible. There are various ways in which you can change the behavior of VSS, for example to allow clients to use multiple file checkouts, or to take advantage of VSS's new web features. As the administrator, you can do any of the following: create a new database; create users and assign rights; delete or modify user rights; create web site projects; create shadow folders; customize the SRCSAFE.INI or SS.INI file; enable keyword expansion; enable multiple checkouts; open a database; and set default file types. The administrator also has to take periodic backups of the database, ensure that the database is not tampered with, and revoke the rights of users who have left the project team.

What Is the People CMM?
The People Capability Maturity Model is a roadmap for implementing workforce practices that continuously improve the capability of an organization's workforce.
Since an organization cannot implement all of the best workforce practices in an afternoon, the People CMM introduces them in stages. Each progressive level of the People CMM produces a unique transformation in the organization's culture by equipping it with more powerful practices for attracting, developing, organizing, motivating, and retaining its workforce. Thus, the People CMM establishes an integrated system of workforce practices that matures through increasing alignment with the organization's business objectives, performance, and changing needs.

The People CMM's primary objective is to improve the capability of the workforce. Workforce capability can be defined as the level of knowledge, skills, and process abilities available for performing an organization's business activities. Workforce capability indicates an organization's readiness for performing its critical business activities, likely results from performing these business activities, and potential for benefiting from investments in process improvement or advanced technology.

People Capability Maturity Model (PCMM)
The People CMM describes an evolutionary improvement path from ad hoc, inconsistently performed workforce practices to a mature infrastructure of practices for continuously elevating workforce capability. The philosophy implicit in the People CMM can be summarized in ten principles.
1. In mature organizations, workforce capability is directly related to business performance.
2. Workforce capability is a competitive issue and a source of strategic advantage.
3. Workforce capability must be defined in relation to the organization's strategic business objectives.
4. Knowledge-intense work shifts the focus from job elements to workforce competencies.
5. Capability can be measured and improved at multiple levels, including individuals, workgroups, workforce competencies, and the organization.
6.
An organization should invest in improving the capability of those workforce competencies that are critical to its core competency as a business.
7. Operational management is responsible for the capability of the workforce.
8. The improvement of workforce capability can be pursued as a process composed of proven practices and procedures.
9. The organization is responsible for providing improvement opportunities, while individuals are responsible for taking advantage of them.
10. Since technologies and organizational forms evolve rapidly, organizations must continually evolve their workforce practices and develop new workforce competencies.

Check out Process in VSS

1. Open the project which you want to modify

2. Select the file you want to modify

3. Right click and then select the Check Out option (you can also go to the SourceSafe menu and click the Check Out option there)

4. If a default working folder is present, the file will be checked out to that working folder, which resides on the local C or D drive

5. If a working folder has not been specified, it must be created first

6. In the pop-up dialog that appears, add comments if necessary

7. Then change the location where the files need to be checked out, if necessary. By default it is the working folder that you have specified

8. If the "Don't get local copy" option is selected, VSS does not place a writable copy of the file into the working folder; it merely marks the file as checked out

9. You can either check out the file exclusively or check it out using the multiple checkout option (multiple checkouts work only if the administrator has enabled that option)

10. One of the advanced options asks what VSS should do if a copy of the file already exists in the local working folder/directory

11. The choices are Ask (ask for the user's preference), Replace (replace the writable file with the read-only version), Skip (skip the checkout), Merge (merge the changes between the writable file and the one you are now checking out), and the default

12. Finally, select the timestamp to attach to the file. It can be the current time, the last checkout time, the last modification time, or the default time

13. Click OK to complete the checkout operation

14. VSS now places a writable copy of the file in the local working folder
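The decision made in steps 10 and 11 can be sketched as a small simulation. This is purely illustrative: the function name, option names, and merge behavior here are hypothetical stand-ins and do not reflect VSS's actual implementation or API.

```python
# Hypothetical sketch of the step-10/11 decision: what a version-control
# client might do when a local copy already exists at checkout time.

def resolve_existing_copy(option, local_text, repo_text, ask_user=None):
    """Return the text the working folder should hold after checkout."""
    if option == "replace":   # discard local edits, take the repository version
        return repo_text
    if option == "skip":      # leave the local file untouched
        return local_text
    if option == "merge":     # naive line-level union as a stand-in for a real merge
        merged = list(repo_text.splitlines())
        for line in local_text.splitlines():
            if line not in merged:
                merged.append(line)
        return "\n".join(merged)
    if option == "ask":       # defer to the user's choice
        return resolve_existing_copy(ask_user(), local_text, repo_text)
    raise ValueError(f"unknown option: {option}")

print(resolve_existing_copy("skip", "local edits", "repo version"))
```

The point of the sketch is the shape of the decision, not the merge algorithm; a real client would use a proper three-way merge rather than a line union.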

Functional Decomposition
A technique used during planning, analysis, and design that creates a functional hierarchy for the software.

Functional Testing
Testing the features and operational behavior of a product to ensure they correspond to its specifications. Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.

Monkey Testing
Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash.

Negative Testing
Testing aimed at showing software does not work. Also known as "test to fail".

Quality Assurance
All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.

Quality Circle
A group of individuals with related interests that meets at regular intervals to consider problems or other matters related to the quality of outputs of a process, to the correction of problems, or to the improvement of quality.

Path Testing
Testing in which all paths in the program source code are tested at least once.

Performance Testing
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "load testing".

Test Driven Development
A testing methodology associated with agile programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, often roughly as many lines of test code as production code.

Test Driver
A program or test tool used to execute tests. Also known as a test harness.

Test Suite
A collection of tests used to validate the behavior of a product.
The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product, for example. In most cases, however, a Test Suite is a high-level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

Test Specification
A document specifying the test approach for a software feature or combination of features, and the inputs, predicted results, and execution conditions for the associated tests.

Storage Testing
Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.

Exhaustive Testing
Testing which covers all combinations of input values and preconditions for an element of the software under test.

Boundary Value Analysis (BVA)
BVA differs from equivalence partitioning in that it focuses on "corner cases", values at or just outside the range defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001. BVA attempts to derive these boundary values, and is often used as a technique for stress, load, or volume testing. This type of validation is usually performed after positive functional validation has completed successfully, using requirements specifications and user documentation.

What is a software version?
A software version is an initial release (or re-release) of software, associated with a complete compilation (or recompilation) of the software.

Metric
A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real, objective measurement of something, such as the number of bugs per line of code.

Defects Severity and Priority
Severity tells us how bad the defect is. Priority tells us how soon it is desired to fix the problem.
The tester or test manager usually fills out the Severity field when an issue is first submitted into the defect tracking system. Product management then usually fills out the Priority field, following a meeting to gather information about the issue. Some may argue that these fields are the most important in the whole report, allowing a degree of impact and urgency to be associated with the description. The values for the priority and severity fields are usually High, Medium, and Low (or something similar).

The Complete Data Migration Methodology
Most software implementation efforts are conducted to satisfy one of the following initiatives: deploying a new On Line Transactional Processing (OLTP) system, or deploying an On Line Analytical Processing (OLAP) system. Each type of system may be replacing and/or enhancing functionality currently delivered by one or more legacy systems. This sort of systems evolution means that organizations are working to grow at or near the pace that the ever-changing world of technology dictates. Choosing a new technological direction is probably the easiest task in the entire effort. Complications arise when we attempt to bring together the information currently maintained by the legacy system(s) and transform it to fit into the new system. We refer to the building of this bridge between the old and new systems as data migration. Data migration is a common component of most systems development efforts. A common problem has to do with the theoretical design differences between hierarchical legacy systems and relational systems. Two of the cornerstones of hierarchical systems, namely de-normalization and redundant storage, are strategies that make the relational purist recoil. The most significant problem with data migration projects is that people really do not understand the complexity of data transformation until they have undergone a number of arduous migration projects.
Having made these points, it is obvious to this author that there is a desperate need for a sound methodological approach with which organizations can tackle migration projects. Although there is no way to avoid unpleasant surprises in data migrations, we can certainly be prepared to confront and resolve them.

Code Inspection
A formal testing technique where the programmer reviews source code with a group who ask questions, analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Code Walkthrough
A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

How to implement Kaizen: identify the area of improvement; find solutions for it; implement the solution; and look for further improvement through total employee involvement.

Management's responsibilities while implementing Kaizen: be determined to introduce KAIZEN as a corporate strategy; provide support and direction for KAIZEN by allocating resources; establish policy for KAIZEN and cross-functional goals; realize KAIZEN goals through policy deployment and audits; and build systems, procedures, and structures conducive to KAIZEN.

The Umbrella of KAIZEN
Another key aspect of KAIZEN is that it is an on-going, never-ending improvement process. The difficult part is how to keep it going and maintain the momentum once it has been introduced. Many companies have tried to introduce such projects as quality circles, reengineering, and lean production. While some of them have been successful, most have failed to make such a project a going concern. Many Western companies introduced quality circles by involving employees, but most companies have simply given up the idea of quality circle activities by now.
This happened because management failed to build the internal infrastructures, systems, and procedures that would ensure the continuation of quality circle activities, and because most Western companies lacked the concept of KAIZEN. The various aspects of KAIZEN are: customer orientation, TQC (Total Quality Control), quality improvement, zero defects, robotics, quality circles, suggestion systems, automation, discipline in the workplace, TPM (Total Productive Maintenance), productivity improvement, and new product development.

Basic tips for KAIZEN activities: Discard conventional fixed ideas. Think of how to do it, not why it cannot be done. Do not make excuses; start by questioning current practices. Do not seek perfection; do it right away even if only for 50% of the target. Correct it right away if you make a mistake. Do not spend money on KAIZEN; use your wisdom. Wisdom is brought out when faced with hardship. Ask "why?" five times and seek root causes. Seek the wisdom of ten people rather than the knowledge of one. KAIZEN ideas are infinite.

Requirements analysis - Stakeholder issues
Steve McConnell, in his book Rapid Development, details a number of ways users can inhibit requirements gathering: Users don't understand what they want. Users won't commit to a set of written requirements. Users insist on new requirements after the cost and schedule have been fixed. Communication with users is slow. Users often do not participate in reviews or are incapable of doing so. Users are technically unsophisticated. Users don't understand the software development process. This commonly leads to the situation where user requirements keep changing even after software development has started. Because new requirements may sometimes mean changing the technology as well, the importance of finalizing user requirements before the commencement of development should be made very clear to the business users.
Knowing their objectives and expectations regarding the solution beforehand, and documenting the agreed requirements, is fundamental to the success of a project.

Best approach to software test estimation
The best approach is highly dependent on the particular organization and project and on the experience of the personnel involved. For example, given two software projects of similar complexity and size, the appropriate test effort for one project might be very large if it was for life-critical medical equipment software, but much smaller for the other project if it was for a low-cost computer game. A test estimation approach that only considered size and complexity might be appropriate for one project but not for the other. One type of approach to consider is a metrics-based approach: track past experience of an organization's various projects and the associated test effort that worked well for those projects. Once there is a set of data covering characteristics for a reasonable number of projects, this past-experience information can be used for future test project planning. For each particular new project, the expected required test time can be adjusted based on whatever metrics or other information is available, such as function point count, number of external system interfaces, unit testing done by developers, and risk levels of the project. In the end, this is essentially judgment based on documented experience, and it is not easy to do successfully.

Automated Test Generation
Almost 30% of the testing task can be the writing of test cases. To a first order of approximation, this is a completely manual exercise and a prime candidate for savings through automation. However, the technology for automation has not been advancing as rapidly as one would have hoped. While there are automated test generation tools, they often produce too large a test set, defeating the gains from automation.
On the other hand, there do exist a few techniques and tools that have been recognized as good methods for automatically generating test cases. Practitioners need to understand which of these methods are successful and in what environments they are viable. There is a reasonable learning curve in the use of these tools and methodologies, but they do pay off past the initial ramp-up.

CMM (Capability Maturity Model)
The CMM ('Capability Maturity Model'), now called the CMMI ('Capability Maturity Model Integration'), was developed by the SEI. It is a model of five levels of process 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI ratings by undergoing assessments by qualified auditors.
Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes are in place; successes may not be repeatable.
Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.
Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.
Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.
Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.
Business Component (or Component)
An easily maintained, reusable unit comprising one or more steps that perform a specific task. A business component can be defined as a non-automated or automated component. Business components may require input values from an external source or from other components, and they can return output values to other components.

Business Process Test
A scenario comprising a serial flow of business components, designed to test a specific business process of an application.

Business Process Test Run-Time Parameters
Variable values that a business process test can receive and then pass to business components for use as input component values.

Business Process Test Status
A status value that is automatically generated based on the status values of all the business components in a business process test.

Requirements Management Plan
A requirements management plan is a component of the project management plan. Generally, the purpose of requirements management (RM) is to ensure that the customer, developer, and tester have a common understanding of what the requirements for an undertaking are. Several subordinate goals must be met for this to take place: in particular, requirements must be of good quality and change must be controlled. The plan documents how these goals will be achieved. Depending on your project standards, a variety of sections might be included in your RM plan. Some examples are: an introduction to RM and a document overview; document scope; issues affecting implementation of the plan, such as training on the RM tool; applicable documents, such as policies and standards; terms and definitions used in the plan (if you use the term "requirement" to include several requirement categories, define it here); methods and tools that will be used for the RM process (or the requirements for selecting a tool if one is not selected); the RM process itself, including any diagrams of the process; authorities and responsibilities of participants;
strategy for achieving requirement quality, including traceability and change control.

Requirement Management in Testing
Every software project arises out of a business problem. Requirements gathering and analysis try to identify the business problem to be solved and the probable characteristics a software product needs to have as a solution to that problem. Requirements are the foundation stone on which a software product is built. Gathering and managing requirements is one of the biggest challenges a project manager faces in a project. A robust requirements management process is one of the stepping-stones to the success of a project.

Testability
Very often customers come up with requirements that are not testable. To determine the testability of a requirement, the following questions can be asked:
1. Can we define the acceptance criteria for this requirement? If the answer is no, then this requirement is not testable.
2. Clearly state the assumptions you have made about this requirement. Is any assumption in conflict with any other assumption or requirement made so far? If yes, then the set of requirements is not testable.
3. Is this requirement clashing with any other requirement? If yes, then the set of requirements is not testable.
4. Can it be broken into multiple requirements? If yes, then the requirement is not testable as written; you will need to revisit it.

People Capability Maturity Model Framework
The People Capability Maturity Model (People CMM) is a tool that helps you successfully address the critical people issues in your organization. The People CMM employs the process maturity framework of the highly successful Capability Maturity Model for Software (SW-CMM) [Paulk 95] as a foundation for a model of best practices for managing and developing an organization's workforce.
The Software CMM has been used by software organizations around the world to guide dramatic improvements in productivity and quality, reduce costs and time to market, and increase customer satisfaction. Based on the best current practices in fields such as human resources, knowledge management, and organizational development, the People CMM guides organizations in improving their processes for managing and developing their workforce. The People CMM helps organizations characterize the maturity of their workforce practices, establish a program of continuous workforce development, set priorities for improvement actions, integrate workforce development with process improvement, and establish a culture of excellence.

Dependency Testing
Examines an application's requirements for pre-existing software, initial states, and configuration in order to maintain proper functionality.
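A minimal sketch of a dependency check in this spirit, assuming hypothetical requirements (a minimum interpreter version and a list of modules the application expects to find):

```python
import importlib.util
import sys

def check_dependencies(min_python=(3, 8), required_modules=("json", "sqlite3")):
    """Return a list of human-readable problems; an empty list means all is well."""
    problems = []
    # Verify the pre-existing runtime environment (interpreter version).
    if sys.version_info < min_python:
        problems.append(f"Python {min_python} or newer required")
    # Verify that each required module is importable without importing it.
    for name in required_modules:
        if importlib.util.find_spec(name) is None:
            problems.append(f"missing module: {name}")
    return problems

print(check_dependencies())
```

A real dependency test would extend this to configuration files, database state, and required external services, but the structure is the same: enumerate the prerequisites and report each one that is missing.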

Depth Testing
A test that exercises a feature of a product in full detail.

Performance Testing

Testing to determine the expected processing delay as a function of the applied load; also to determine resource utilization under load. Equivalently, testing to determine the maximum number of simultaneous users and/or transactions that the system can sustain.
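A toy illustration of this kind of measurement: apply an increasing number of simultaneous "users" to an operation and record throughput. The operation here is a simulated 10 ms request standing in for a real system under test.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def operation():
    time.sleep(0.01)  # simulate a 10 ms request to the system under test
    return True

def measure(concurrent_users, requests=50):
    """Run a fixed number of requests at a given concurrency; return requests/second."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        list(pool.map(lambda _: operation(), range(requests)))
    elapsed = time.perf_counter() - start
    return requests / elapsed

for users in (1, 5, 10):
    print(f"{users:>2} users: {measure(users):7.1f} req/s")
```

Plotting throughput (and latency) against the applied load in this way reveals where the system stops scaling, which is the resource-utilization question the definition above is asking.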

Best Practice

If the cost of testing and analysis is far lower than the expected gain and if you have stable software and valid user profiles, then this is an important part of the toolkit. Its use is widespread in telecommunications, embedded systems, and other system software.

Given the heavy capital investment in a performance testing laboratory staffed by experts, this kind of testing can forestall the worst kinds of performance problems met in the field. If throughput issues are meaningful for your application, then you can't afford not to do this testing.

Different Types of Testing:

Basis Path Testing
A white box test case design technique that uses the algorithmic flow of the program to design tests.

Baseline
The point at which some deliverable produced during the software engineering process is put under formal change control.

Binary Portability Testing
Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.

Breadth Testing
A test suite that exercises the full functionality of a product but does not test features in detail.
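The basis path idea above can be illustrated with a tiny function of my own: two decision points give a cyclomatic complexity of 3, so three test cases suffice to exercise a set of basis paths through the control flow.

```python
# Illustration of basis path testing. classify() has two decisions,
# so its cyclomatic complexity is V(G) = 2 + 1 = 3, and three tests
# cover one basis path each.

def classify(score):
    if score < 0:        # basis path 1: early return on invalid input
        return "invalid"
    if score >= 60:      # basis path 2: passing score
        return "pass"
    return "fail"        # basis path 3: valid but failing score

# One test per basis path:
assert classify(-5) == "invalid"
assert classify(75) == "pass"
assert classify(40) == "fail"
print("all basis paths exercised")
```

Each assertion forces execution down a different independent path of the flow graph, which is exactly what a basis path test suite is designed to do.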