
20 BETTER SOFTWARE MARCH/APRIL 2013 www.TechWell.com

I have always been fascinated with creating methods for efficient delivery, particularly during testing. In the 1990s, I was stretching my theories to the brink and loving the ride. The adoption of evolutionary methods brought about many solutions for better efficiency, including the idea to test smaller, more frequently, and earlier. In today’s age of automation and complex integrated infrastructures, we often encounter the unresolved issue of how to get high-value testing within the condensed time-to-market window.

Automated frameworks and modularized scripts provide a partial solution, but they are not independently intelligent enough to provide consistently high-value or highly efficient testing. To solve this, we need to examine what needs to be tested within each unique increment, cycle, or iteration and select tests accordingly. Every change, whether done for improvement or remediation, presents an opportunity for the software ecosystem (applications, browsers, web services, and vendor software) to fail. This results in a much greater need on our part to perform high-value testing.

High-value testing does not mean that you need to perform all end-to-end testing or run the full suite of tests. That can create a bottleneck and dampen velocity. Properly performing high-value testing requires a precise and often unique test response to each new change, which entails a medley of testing types, each working in concert to ensure the quality goals are met. This is a modern-day necessity to fully ensure the end-user experience, ecosystem stability, and product health.



The goal is for you to create an intelligent testing trove (security tests, functional tests, data accuracy tests, performance tests, usability tests, interoperability tests, etc.) that can be succinctly arranged and rearranged across varying sets of browsers, platforms, and hardware. This variety of intelligent tests is scalable to varying business goals and marries the quality categories to the unique business requirements to create test goals. The adapting tests are always targeted at the most relevant business and quality goals, which yield the most important results for the team to use for decision making.
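As a sketch of what such a trove might look like in practice (all names here are hypothetical, not taken from any particular framework), small test assets can be tagged with the quality categories they serve and then paired with browser and platform combinations on demand:

```python
from itertools import product

# Hypothetical test trove: each small test asset is tagged with the
# quality categories it serves, so assets can be arranged and rearranged
# into goal-driven runs across browser/platform combinations.
TROVE = {
    "login_auth_check": {"security"},
    "checkout_flow":    {"functional", "interoperability"},
    "report_totals":    {"data accuracy"},
    "page_load_budget": {"performance"},
    "nav_menu_layout":  {"usability"},
}

BROWSERS  = ["Chrome", "Firefox"]
PLATFORMS = ["Windows", "macOS"]

def arrange_run(goals, browsers=BROWSERS, platforms=PLATFORMS):
    """Pair every asset matching the quality goals with each
    browser/platform combination to form one executable run."""
    selected = [name for name, tags in TROVE.items() if tags & set(goals)]
    return [(name, b, p) for name, (b, p)
            in product(selected, product(browsers, platforms))]

run = arrange_run({"security", "performance"})
# Two matching assets x 2 browsers x 2 platforms = 8 scheduled executions.
```

Because selection is driven by tags rather than fixed suites, the same assets can be rearranged for a different set of goals without rewriting anything.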

Experience 1: Quality Goals

One of my recent challenges involved a two-week sprint with thirty-eight backlog items (including requested system changes), most of which were small, front-end UI changes to multiple web applications. In this case, the test team executed all the tests and performed regression, and the sprint was given the green light. This was followed by an uneventful implementation.

To our chagrin, on the day after implementation we received a call from an executive informing us that one of the two user profiles was redirecting to a broken page after login, and the other had severe performance issues, taking over three minutes to authenticate the user. This particular login page and process had not been changed by the recent implementation, and our previous regression testing had targeted only the login process of one user profile using test data. We did not test the less-common user profile (which resulted in the broken page).



When constructing the initial user stories and tests, we knew that the login process was a critical path and should be included in regression. But we had designed stories for general login with test data, since it seemed stable in the test environment. As we learned, however, 70 percent of the generated revenue was connected to this line of business, and the production environment was conclusively different, which could render some of our test results useless.

In the retrospective meeting, it was clear to me that a few key areas were being underserved, resulting in a growing problem. For one thing, the testing value was beneath the quality need, meaning that the best test results did not accurately predict the system behavior or confidently indicate that the business goals would be met in production. After I digested this premise, the exact root cause of the issues was less relevant, because we didn’t have a process that would allow us to detect errors left or right of the established regression, which comprised previously created user stories and tests.

The regression suite was enormously inefficient and took two to three days to execute. The thought of having such an ineffective, time-expensive process boggled my mind. The production problem was revealed to be a service breakdown between the content management system (production instance only) and the middleware, which would never have been caught, due to the established coverage gap and the lack of testing in the production environment. From this, I concluded that there was a potential for ongoing defect migrations into production, as well as unknown issues residing in the production environment, both just waiting to be encountered by a customer. We could have used adaptive testing during production, as this method would have created a focus on quality goals: in this case, critical process flows, usability, and content.

Experience 2: Unique Project Needs

Using adaptive testing also would have been beneficial in another case of mine, when a financial client added a new online product to its services and rebranded old content in an effort to create a better customer experience. During this project, two decisions were made: we wanted to use a limited set of test data that represented only a fraction of users and functions, and we wanted to eliminate security testing from the scope of work. The testing efforts came from local (decentralized) teams that executed system, integration, performance, and user acceptance tests on their allotted work streams. The end of the project revealed a scattering of moderate to minor defects, which were sanctioned as acceptable in production.

Upon release into production, major processing errors occurred that displayed the wrong user’s account information, allowing an account holder to view and change another account holder’s information. The decision to remove security testing and to use small-scale data was based solely on controlling exorbitant testing costs, and it limited the testing. If this team had employed adaptive testing, smaller, more precise tests could have been shared across teams, allowing them to arrange tests per their unique project needs. This would have resulted in higher test efficiency and better test precision based on the goals of data accuracy, security, usability, and content consistency.

Both of these experiences resulted from misplaced testing rigor, or the lack of intelligent test design due to low business domain knowledge. Adaptive testing would have allowed the teams to focus on the greater goal of the changes and to creatively fashion test solutions by combining and rearranging tests, types of tests, browser and OS combinations, or hardware configurations, according to the need in different environments. For example, high-value tests for production may encompass 40 percent usability (of both functions and content), 30 percent interoperability (of critical user flows), 20 percent security (user authentication and data flows), and 10 percent performance. It all depends on what the quality goals are for that particular test run and environment. The testing value shifts with different changes and potentially with each unique need of the code promotion.

Incorporating Adaptive Testing Methods

Here are a few ways that I have found to be successful in incorporating adaptive testing methods to gain precision:

1. BECOME SELF-ADAPTING.

Break out of the predefined test scope by creating versatility and flexibility in your testing suite. You can do this by engineering a flexible framework that allows unique combinations of small, executable tests and by grouping test assets based on quality goals. This will provide the ability to reintegrate parts of stories (or test cases) into new, high-value runs. You can define the new executions by precise needs and execute them in combination or independently. The core principle here is the flexibility of test assets, which presents endless options for creative execution.

2. DEFINE THE TEST GOALS.

Most testing, whether agile or not, requires pre-planned executions, which are largely categorized into either “new change” or “regression testing of existing functionality.” This traditional separation of test effort hinders the creative blending of testing types and methods. With adaptive methods, you can drive the testing based on the goals, regardless of whether the change is new or existing. By combining meaningful tests into a logical flow of quality-based goals, you can accomplish testing of the new delta along with additional “regression” coverage under the theme of the test goal. Commonly used goals include usability, integration and interoperability, user and data security, data accuracy, and brand testing.

3. ADAPT DATA-DRIVEN TESTING.

You should support your test selections with data and analytics: past-run metrics, user analytics, and test-failure analysis. This will allow the team to clearly see testing needs and define test goals. For example, user analytics might reveal that 60 percent of your customers used a tablet device to access your site, and 40 percent of the existing customers use mobile devices to post product opinions on social media. This data tells you that the tablet presentation (usability and branding) will be important to test, and the ease of launching to social media from a mobile phone (interoperability and performance) should also be precisely targeted by combining tests that focus on these areas.
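To make the tablet-and-mobile example concrete, here is one way such analytics could drive test composition. This is a sketch only: the figures, signal names, and goal mapping are illustrative assumptions, not a prescribed scheme.

```python
# Hypothetical sketch: turn user analytics into quality-goal weights,
# then fill a fixed test budget proportionally so each run targets
# what users actually do most.
analytics = {
    # share of observed user activity, e.g. from web/mobile analytics
    "tablet_site_access":  0.60,  # -> usability and branding goals
    "mobile_social_posts": 0.40,  # -> interoperability and performance
}

goal_map = {
    "tablet_site_access":  ["usability", "branding"],
    "mobile_social_posts": ["interoperability", "performance"],
}

def budget_by_goal(budget=20):
    """Split `budget` test slots across quality goals in proportion
    to the analytics signal behind each goal."""
    slots = {}
    for signal, share in analytics.items():
        goals = goal_map[signal]
        for g in goals:
            slots[g] = slots.get(g, 0) + round(budget * share / len(goals))
    return slots

print(budget_by_goal())
# e.g. {'usability': 6, 'branding': 6, 'interoperability': 4, 'performance': 4}
```

When the analytics shift, the same mapping yields a different run composition, which is precisely the adaptive behavior the method calls for.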

4. PERFORM EVERGREEN MAINTENANCE.

Continuous integration of the testing baseline is best for adaptive testing, because you can rely on your test execution selections to be relevant and up to date. You don’t want several generations of automation or old test cases hanging around that can be inadvertently selected or rendered inefficient by not being execution ready. Ongoing fluid development, testing, and test-baseline integration (of retrospective feedback, production fixes, planned change, etc.) will decrease the need for large maintenance windows and provide a foundation for continuous testing.
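One minimal way to keep a baseline evergreen, assuming a simple record per test asset (the record fields here are an assumption), is to automatically separate execution-ready, recently run assets from those needing maintenance:

```python
from datetime import date, timedelta

# Hypothetical baseline records: flag assets that are stale or no longer
# execution ready, so they cannot be inadvertently selected into a run.
baseline = [
    {"name": "login_auth_check", "last_run": date(2013, 3, 1),  "ready": True},
    {"name": "legacy_export",    "last_run": date(2012, 6, 10), "ready": True},
    {"name": "old_ui_smoke",     "last_run": date(2013, 2, 20), "ready": False},
]

def prune(assets, today, max_age_days=90):
    """Return (selectable, needs_maintenance): only fresh, execution-ready
    assets stay selectable; the rest are queued for upkeep or retirement."""
    cutoff = today - timedelta(days=max_age_days)
    keep = [a for a in assets if a["ready"] and a["last_run"] >= cutoff]
    fix = [a for a in assets if a not in keep]
    return keep, fix

keep, fix = prune(baseline, today=date(2013, 4, 1))
```

Running a check like this on every baseline integration keeps maintenance a continuous trickle rather than a large scheduled window.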

5. EXTEND TESTING TO PRODUCTION AND BEYOND.

Testing based on adaptive goals is valuable across the entire lifecycle and lifespan; however, the greatest benefit can be seen during production. The results of early-cycle, preproduction testing can lead to a high-performing live product. The product owners will thank you because you are assisting them with customer retention. Technology staff will thank you for providing aid in an accelerated discovery, fix, and deploy cycle.

6. MONITOR AND MEASURE.

You should measure test velocity and precision by capturing test execution metrics and comparing them to the test goals and the defect types. Production monitoring and issue resolution should be fed into the test baseline and utilized as a production quality metric. This can be used to identify potential areas of risk and aid with test selection. Common metrics that indicate quality and health include the number and criticality of defect hotspots, the time between defect identification and recovery, and the time between test execution and test goal comparisons.
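For instance, the recovery-time and hotspot metrics could be derived from a defect log along these lines (the log structure is an assumption for illustration, not a prescribed format):

```python
from datetime import datetime
from statistics import mean

# Hypothetical defect log: measure the time between defect identification
# and recovery, plus defect counts per component, as quality indicators
# that feed back into adaptive test selection.
defects = [
    {"component": "login",    "found": datetime(2013, 3, 4, 9),  "fixed": datetime(2013, 3, 4, 17)},
    {"component": "login",    "found": datetime(2013, 3, 6, 10), "fixed": datetime(2013, 3, 7, 10)},
    {"component": "checkout", "found": datetime(2013, 3, 8, 8),  "fixed": datetime(2013, 3, 8, 12)},
]

def mean_recovery_hours(log):
    """Average hours from defect identification to recovery."""
    return mean((d["fixed"] - d["found"]).total_seconds() / 3600 for d in log)

def hotspots(log):
    """Defect counts per component; the hottest areas steer test selection."""
    counts = {}
    for d in log:
        counts[d["component"]] = counts.get(d["component"], 0) + 1
    return counts

print(mean_recovery_hours(defects))  # (8 + 24 + 4) / 3 = 12.0 hours
print(hotspots(defects))             # {'login': 2, 'checkout': 1}
```

A component that keeps surfacing as a hotspot is a strong candidate for extra weight in the next adaptive run.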

The Simple Truth

Adaptive test methods create fluid and continuous testing, which in turn provides a force of adaptive patterns and relevant results. Testing can no longer be defined by an inflexible, unchangeable, one-toned function of test execution. What were once called regression, performance, and security tests are now combined needs that can be incorporated into a standard testing process. This method serves best when done in a lightweight and self-adapting way. Adaptive testing provides nimble test solutions that bend and shift with the changing needs of the market or the environment. {end}

[email protected]

This article first appeared on AgileConnection.com.