
AUTOMATED SOFTWARE TESTING MAGAZINE
An Automated Testing Institute Publication - www.automatedtestinginstitute.com
August 2009, $8.95

5 Steps to Framework Development: How to Create a Framework for Maintainability
Evolutionary Road: Introducing Your Project to New Technology
Evolution of Automated Software Testing
Give Me the Key...Word: Building a Keyword Driver Script in 4 Steps


Worksoft Certify® for SAP®

Automate. Accelerate. Optimize.

Get ahead of the curve and maximize the value of your SAP investments with Worksoft Certify®, the only code-free approach to end-to-end, cross-platform business process validation and automated testing.

Worksoft Certify® for SAP® is a specialized lifecycle management solution for SAP that automates and optimizes SAP lifecycle tasks from change impact analysis through functional testing, performance testing and audit documentation. Certify for SAP is unmatched in the industry for accelerating time to value and reducing overall project costs by up to 40-80%.

Are you ready to test smarter and deploy faster?

Contact us at 1.866.836.1773 or visit www.worksoft.com to learn more.

Still Testing SAP Manually?


Automated Software Testing, August 2009, Volume 1, Issue 2

Contents

Cover Story
Evolution of Automated Software Testing ... 14
Reviewing how test automation and its tools have evolved is useful only if it helps to understand where we go from here. To do that, you must appreciate the technology and market forces that have shaped them and why, and what that portends for the future. By Linda G. Hayes

Features
Evolutionary Road ... 22
Software technology changes in existing applications often present roadblocks on the path to successful test automation. Learn what steps will help you navigate your test automation over and around these barriers. By J.L. Perlin

5 Steps to Framework Development ... 28
Developing an automated test framework can greatly increase the ROI of your test automation efforts, but it can also be a daunting task. Read this article for a step-by-step framework development process. By Dion Johnson

Columns & Departments
Editorial ... 4
Evolve, Adapt and Build: Keeping automation on pace with three modes of change.

Authors and Events ... 6
Learn about AST authors and upcoming events.

Sweat the Technique ... 8
Hand Me the Key...Word: Learn how to build a keyword driver script.

Open Sourcery ... 12
Contribute to Open Source Projects in 4 Steps: Learn how to contribute to your favorite open source projects.

Up-to-Date with ATI ... 42
Find out what's happening on the Automated Testing Institute's Online Reference.

Hot Topics in Automation ... 38
Microblogging Is Hot! Test automation meets microblogging.

Automated Software Testing (AST) Magazine is an Automated Testing Institute (ATI) publication that serves as a companion to the ATI Online Reference. For more information regarding the magazine visit: http://www.astmagazine.automatedtestinginstitute.com

I 'B'log to U ... 40
Read featured blog posts from the web.


For anything to survive, it must be able to change itself in three different ways: evolving, adapting and building. This has been evident in the natural world, and it also holds true in the world of software. This reality should be especially clear for software testers and test automators, because the very mission and reason for their existence is to ensure systems are effectively evolving, adapting and being built. This conceptual trilogy, however, is not confined to the systems tested using automation; it also applies to test automation itself.

Evolution is defined as a gradual process in which something changes into a different and usually more complex or better form. This change is normally an innate response to some measured environmental stimulus that occurs over time. Adaptation is the act of adjusting oneself to different conditions or a shifting environment. This type of change is typically marked by a deliberate and direct response to immediate modification in the environment. Building is the act of increasing or strengthening, and may be a response to some outside stimulus, but it could also be based on a self-imposed decision to improve one's situation. The discipline of test automation has experienced and been shaped by each of these modes of change in the past, and continues to be defined by them even today. Understanding these three modes of change will help us better understand the current state of test automation, and will therefore help us all to chart a course forward. Therefore, to aid in this understanding, this issue of the Automated Software Testing Magazine is dedicated to exploring each of these modes of change.

In the cover story, Linda Hayes delivers what is possibly one of the greatest articles ever on the topic of test automation evolution. Titled "The Evolution of Automated Software Testing", this article looks at software test automation from a macroscopic point of view and explores the gradual changes in software systems over the years. It takes us on a trip down software computing memory lane, moving us from mainframe to Internet domination, and explaining how the test automation discipline has responded to each evolutionary shift by moving from text capture/playback tools to the broad use of commercial automation frameworks. In addition, Hayes describes where the automation evolutionary approach has come up wanting, and proposes what the next test automation evolutionary response should be.

Next, J.L. Perlin provides a featured perspective on a modern-day test automation adaptation with a peek into a project's encounter with contemporary changes to the software environment. In this article, an existing software application with a substantial suite of automated tests is faced with changes to existing application components. These changes threaten to render all existing automated tests useless as the automated test tool contends with problems communicating with the new components. This is a problem that many automators have come across, and if you haven't, there is a good chance that you will. So J.L. Perlin offers an approach for effectively navigating your test automation through such modern-day technological changes.


Editorial: Evolve, Adapt and Build, by Dion Johnson

(Continued on page 37)


How Do You Network? Whatever your choice, the Automated Testing Institute is there!

Stay up-to-date with test automation by following the Automated Testing Institute on your social networking site of choice: Facebook, Myspace, Twitter, YouTube, LinkedIn, Blogger.

For more information, visit http://www.networking.automatedtestinginstitute.com


Managing Editor: Dion Johnson
Contributing Editor: Edward Torrie
Director of Marketing and Events: Christine Johnson

A publication of the Automated Testing Institute

Contact Us
AST Magazine: [email protected]
ATI Online Reference: [email protected]

Linda G. Hayes, in the lead feature, offers an eyewitness account of test automation evolution. With over 20 years of experience in software development and testing, Linda is the founder of three software companies, including AutoTester, the first PC-based test automation tool, and Worksoft, Inc., pioneer of next generation code-free test solutions. Linda holds degrees in accounting, tax and law and is a frequent industry speaker and award-winning author on software quality. Named one of Fortune Magazine's People to Watch and one of the Top 40 Under 40 by the Dallas Business Journal, she is a regular columnist and contributor to numerous publications, including the Automated Software Testing Magazine, StickyMinds.com and Better Software magazine. She is the author of the Automated Testing Handbook, contributing author to Testing SAP R/3: A Manager's Guide, and co-editor of Dare to be Excellent with Alka Jarvis on best practices in the software industry. Her article "Quality is Everyone's Business" won a Most Significant Contribution award from the Quality Assurance Institute and was published as part of the Auerbach Systems Development Handbook. You can contact Linda at [email protected].

J.L. Perlin offers a featured article that provides an approach for handling modern-day technology changes. Perlin has over 20 years of manual and automated testing experience and has managed QA teams in both the public and private sectors for organizations including Lockheed Martin, the Social Security Administration (SSA), General Electric, and the Centers for Medicare & Medicaid Services (CMS). Mr. Perlin is currently a lead test automation engineer working on critical care systems for Philips Patient Monitoring Informatics. You can email Mr. Perlin at [email protected].

Dion Johnson provides step-by-step instructions for building an automated test framework in which you may develop and maintain your automated tests. Dion has several years of experience in providing IT services to both government and private industry, with demonstrated expertise in multiple areas of the software development lifecycle. With a Bachelor of Science degree in Electrical Engineering, he has spent much of his professional career as a consultant in the areas of quality assurance (QA), quality control (QC), software process improvement, requirements analysis and software test automation. As a regular conference speaker and presenter, Dion has delivered award-winning and highly acclaimed presentations at many of the most prestigious industry conferences, including the StarEast and StarWest International Conferences on Software Testing, Analysis and Review, the Quality Assurance Institute Conference and the Better Software Conference. He also teaches college- and business-level classes related to testing and test automation, and has published several articles in various IT publications.

Who’s In This Issue?

Authors and Events

ATI and Partner Events

August 17: ATI Honors Finalists Announced
http://www.atihonors.automatedtestinginstitute.com

August 17-31, 2009: ATI Honors Voting Period
http://www.atihonors.automatedtestinginstitute.com


Often called “Table Driven”, a Keyword framework interprets automated tests that are developed in a tabular format using a vocabulary of keywords. A keyword is typically a verb or verb-like phrase that is associated with an external library function responsible for executing the application commands necessary to satisfy the objective of the test. Therefore, in a keyword framework, a script that may normally appear as illustrated in Figure 1 will instead appear as illustrated in Figure 2.

Figure 2 illustrates a tabular automated test known as a keyword file, with the keywords that dictate the actions to be performed shown in the Action column. There are several advantages to using keyword files for automated test development, including increased reusability. Reusability is increased with a keyword-driven framework because many of the functions are created in such a way as to be reusable not only for multiple tests within a single application, but also for tests across multiple applications. Redundancy may therefore be decreased across all applications that an organization may be responsible for automating. Earlier script development also results from using this type of framework, due to an increased ability for automation to begin before the application is delivered. Using information gathered from corresponding manual test cases, requirements or other documentation, keyword files can be created well before the application is ready for automation. Another advantage is that keyword files are easier to read than traditional scripts. Keyword files typically mirror manual test procedures, and the natural language of the verb-like phrases makes reading a keyword file similar to reading a collection of sentences. Finally, while the keyword framework itself may be more technical, the keyword files created during everyday application automation are far less technical than files with statements written in code. This allows less technical individuals to effectively contribute to automated test creation.

These advantages don't mean that we've escaped scripting altogether, however. We haven't evaded scripting; we've merely changed the nature of the scripting that needs to be performed. There are at least two forms of scripting that remain: function development and driver script creation. If you purchase a commercial keyword framework or obtain an open source keyword framework, you may eliminate the need for custom development of the driver script and may also reduce the function development to only those functions that are specific to your application and necessary for more efficient automation. If you choose to create your own framework, however, the script development is solely up to you. This article will focus on the development of the driver script that interprets a keyword file.

Hand Me the Key...Word: Building a Keyword Driver Script in 4 Steps

Sweat the Technique

Figure 1: Code-based interface

Figure 2: Keyword-driven interface
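The figures themselves did not survive reproduction here, but the contrast they illustrate can be sketched as follows. The function names, screens and data values below are hypothetical stand-ins, not taken from the original figures. A code-based test in the style of Figure 1 might read:

    # Figure 1 style: a hypothetical code-based test script
    invoke_application("C:\\Apps\\OrderEntry.exe")
    input("Login", "UserName", "jsmith")
    input("Login", "Password", "secret")
    press("Login", "OK")
    verify_screen("MainMenu")

The keyword-driven equivalent in the style of Figure 2 would express the same test as rows in a table:

    Action             | Screen   | Object   | Value
    INVOKE_APPLICATION |          |          | C:\Apps\OrderEntry.exe
    INPUT              | Login    | UserName | jsmith
    INPUT              | Login    | Password | secret
    PRESS              | Login    | OK       |
    VERIFY_SCREEN      | MainMenu |          |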

Page 9: utomAted

9www.automatedtestinginstitute.com Automated Software Testing MagazineAugust 2009

Following are steps for successfully creating and implementing a keyword driver script:
1. Create the keyword file structure
2. Identify potential keywords
3. Identify keyword utility functions
4. Create driver script

Step 1. Identify Columns in Keyword File

The keyword file columns will vary from framework to framework. The only constant is that a column will almost certainly exist for the keywords themselves. In addition, the keyword file may have one or more of the columns illustrated in Figure 2. These columns are defined as follows:
• Action – Contains the step's main keyword, and identifies the action that will be executed
• Screen – Contains the name of the screen on which the action is to be performed
• Object – Contains the name of the object on which the action is to be performed
• Value – Contains values that may be entered into the application
• Recovery – Contains exception handling keywords
• Comment – Contains helpful information about the step for anyone reading the keyword file

Step 2. Identify Potential Keywords

The keywords created will largely depend on the functions deemed necessary for implementation of the associated framework. Not all of the keywords will be identified during framework design and development; rather, they will continuously become apparent as tests are automated. If developed properly, the keyword framework will be seamlessly scalable with respect to the addition of new keywords over time, but there are some generic keywords that may be identified upfront, including:
• INPUT – Inputs a single data item
• PRESS – Presses or clicks a button or link
• VERIFY_SCREEN – Verifies a screen has appeared
• VERIFY_FIELD_VALUE – Verifies entered or selected field data
• VERIFY_TEXT – Verifies dynamic screen text
• INVOKE_APPLICATION – Opens the application under test

Step 3. Identify Keyword Utility Functions, Arguments and Return Values

Function definitions are a product of the framework and of the automator's personal preferences for how the function is to be implemented. One of the keys in developing an effective automation framework utility function, however, is to make sure it is independent, reusable, and able to be implemented as intuitively as possible. Function arguments play a big role in ensuring your functions meet these criteria. For example, a function associated with the INPUT keyword might need the following arguments:
• Screen – This argument contains the screen on which the input action is to occur
• Object – This argument contains the object on which the input action is to be performed
• Value – This argument contains the value to be entered into an object

These arguments correspond to columns in the keyword file shown in Figure 2.

The driver script calls functions based on the keyword in the Action column.

Figure 3: Driver script pseudocode and keyword file interaction


Therefore, the arguments will be fed values from these three keyword file columns via the driver script.
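To make Step 3 concrete, here is a minimal Ruby sketch of a utility function for the INPUT keyword. The helpers screen_exists?, set_field_value and log_error are hypothetical stand-ins for whatever your test tool provides; they are not part of the article's framework.

    # Hypothetical utility function for the INPUT keyword.
    # screen_exists?, set_field_value and log_error are assumed
    # tool-specific helpers.
    def input(screen, object, value)
      unless screen_exists?(screen)
        log_error("INPUT failed: screen '#{screen}' not found")
        return false
      end
      set_field_value(screen, object, value)  # type the value into the object
      true                                    # signal success to the driver
    end

Returning true or false gives the driver script a simple way to decide whether to continue with the next step or hand control to a Recovery keyword.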

Step 4. Design and Develop Driver Script

As illustrated in Figure 3, the driver script is responsible for calling the keyword file, reading all of the steps, and executing utility functions based on the keywords in each step of the keyword file.

If the driver script is developed using Ruby, it may appear as shown in Figure 4. Line 1 loads the UtilityFunctions.rb file; this is a file you develop that contains your user-defined utility functions. Lines 3 through 7 are responsible for setting up and initializing the keyword file, which is developed in an Excel spreadsheet. Line 9 begins the loop that reads all of the rows in the keyword file.

Figure 4: Sample Driver Script
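Figure 4 itself did not survive reproduction here, but based on the description above, a minimal driver script might look like the following Ruby sketch. The win32ole usage, file path and column order are assumptions for illustration, not the magazine's original code.

    require 'UtilityFunctions'             # line 1: load user-defined utility functions

    require 'win32ole'                     # lines 3-7: set up and initialize the
    excel    = WIN32OLE.new('Excel.Application')  # Excel-based keyword file
    workbook = excel.Workbooks.Open('C:\\Automation\\KeywordFile.xls')
    sheet    = workbook.Worksheets(1)
    row      = 2                           # row 1 holds the column headers

    while sheet.Cells(row, 1).Value        # line 9: loop through every keyword file row
      action = sheet.Cells(row, 1).Value
      screen = sheet.Cells(row, 2).Value
      object = sheet.Cells(row, 3).Value
      value  = sheet.Cells(row, 4).Value

      case action                          # dispatch on the Action column keyword
      when 'INPUT'              then input(screen, object, value)
      when 'PRESS'              then press(screen, object)
      when 'VERIFY_SCREEN'      then verify_screen(screen)
      when 'INVOKE_APPLICATION' then invoke_application(value)
      else puts "Unknown keyword: #{action}"
      end
      row += 1
    end

    workbook.Close
    excel.Quit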

ATI Newsletter

The ATI Newsletter keeps you abreast of the latest information relative to our Online Reference. Sent to registered members of the site on a bi-weekly basis, this newsletter keeps you informed on the following:

ATI's Featured Content
• Featured Videos
• Featured Tutorials
• Featured Articles
• Featured Tools
• Featured Publication
• Featured Poll
• Featured Forum Post

Popular Content
• Popular Tools
• Popular Articles

Automation OMG!
• Interesting News
• Current Events

Register on the ATI site today to receive this email newsletter. To register visit:
www.registration.automatedtestinginstitute.com

(Continued on page 37)


Tired of searches that return irrelevant information?

Google - Automated Testing Search Engine

The Automated Testing Institute has partnered with Google to create a software test automation search engine. If you're looking for software test automation information, you will get the best results by using this search engine, because it only searches the sites that matter most to automators. You can also restrict searches to blogs and forum sites.

www.googleautomation.com


Ask not what your tool can do for you. Ask what you can do for your tool. My fellow automators, how often have you pointed out the shortcomings of tools with which you have worked? Now, how often have you sought to take an active role in fixing these shortcomings? I'm sure many of you are thinking, "Fixing tool shortcomings is not my job! I use these tools to help me better perform my job. If I spend time fixing tools, then I won't have any time to do my real job!" I hear you loud and clear, and you make good points, but while fixing tool shortcomings is not your responsibility, it does have several benefits. Fixing tool shortcomings is normally not an option with commercial tools, but the beauty of open source tools is that it is very much a viable option. The May 2009 issue of the Automated Software Testing Magazine contained an editorial by Dion Johnson called "Scripting Calisthenics", which presented various options for keeping automation skills sharp during periods of inactivity. The article was referenced in various automation Internet forums with a question posed to each forum's community: "How do you keep your skills sharp during periods of inactivity?" One of the community's users suggested that contributing to open source projects is a good way to keep scripting skills sharp. We here at the Automated Testing Institute thought this was an enterprising answer and decided to research and provide information for all of you on how this may be accomplished. In our research we came up with 4 steps.

Step 1 – Choosing a Project

There are numerous open source projects ready and willing to receive contributions from eager scripters, and the project list grows with each passing day. Following are links that identify and provide access to some of these projects:
• www.sourceforge.net
• www.opensourcetesting.org
• www.code.google.com/soc/

The challenge now is determining which project you should contribute to. The simplest way to pick an open source project is to look for one that maintains a tool that you use and/or are interested in. For example, given that you are reading the Automated Software Testing magazine right now, it may be safe to assume that you are either interested in automated software testing or interested in one of the authors writing about automated software testing. Either way, this is a good indicator that a project that maintains an automated test tool may be a good fit for you. Aside from simply picking a project because you like it, you may also want to pick a project that is modest in size, particularly if you are new to open source development. Smaller projects typically have smaller code bases, provide easier access to other developers, and have simpler bug submission processes. Therefore, smaller projects may make it easier for you to quickly become productive. Another option is to start your own open source project. Maybe there is a browser extension or automated test tool plug-in that you'd like to see created to support automation of a particular technology; if so, you can lead the effort in developing the desired extension or plug-in. Just search around first and make sure a project doesn't already exist for what you intend to do. Not sure how to create a project? Some pretty good instructions for creating an open source project with SourceForge are laid out at ehow.com (http://www.ehow.com/how_2091738_start-open-source-project-sourceforge.html).

Step 2 – Picking Your Contribution

Now that you've picked a project, the next step is determining what you're going to contribute to it. You may begin by asking yourself what shortcoming you'd like to address, and determining whether it's going

Ask Not What Your Tool Can Do for You: Learn to Contribute to Open Source Projects in 4 Steps

Open Sourcery


to be a 'fix' or a 'feature'. In other words, do you want to fix an existing bug, or do you want to add a new feature? At this point in the process you may have already decided what you want your contribution to be, and that decision may have played a big part in the project you chose to join. If not, however, there are some tips you can use for picking a contribution. The first and most obvious thing to do is to actually use open source tools. There are a bunch of them that perform a wide range of functions. If you work on a computer at all, you should have no problem finding at least one open source tool that can help you with some of your tasks. Pick one, and then identify some of the features you'd like to see changed. What you come up with could be your contribution to the project. In addition, some projects maintain readily available "to-do" lists that identify items that need to be fixed. Also, the project's bug reporting system can be used for gathering ideas for what to fix.

Step 3 – Learn the Project Culture

Much like countries and commercial businesses, each open source project has its own unique culture, and your success in a project largely depends on your ability to learn and fit into that culture. As much as you may like to believe success is only about technical abilities, it unfortunately goes beyond that. No matter how good your coding skills are, if key players on the project don't like the 'new guy', there is a good chance that your proposals will get rejected. It is, therefore, important for you to be aware of who the core maintainers are, who the key contributors are, and who the more vocal members of the project are. You'll also want to study how they communicate with one another. This will provide insight into who you may want to reach out to, and how best to reach out to them. It will be important to make yourself known, but it's equally important to do it with a productive tone. One way that you'll be able to communicate with people on the project is by joining the project's chat sessions and/or mailing list if they have one. Mailing lists are often used for discussing recent code changes or to propose new code changes, so they offer a great avenue for proposing your changes and soliciting feedback. Making your intentions known to the key group members early will improve the chances of your change being accepted, or it may allow you to receive feedback explaining why your proposed change should be reconsidered. Using the bug system to study existing bug reports is also a great way to learn the culture of the project, because the reports indicate who the key developers are and what types of contributions normally get accepted. Aside from the bug system, you'll want to appropriately utilize other project tools, such as the version control system. Developers on these projects often have little patience for individuals who can't seem to follow rules, standards and directions. To obtain information on tools and conventions used by the project, locate and read the developer docs.

Step 4 – Code and Post

At this point, it's time to get down to the business of developing your fix or feature, and it's imperative to begin by making sure you have the most up-to-date project code. Next, search the code for the location in which the change is to be applied – a task that is often simplified by using an Integrated Development Environment (IDE). Now you can begin scripting. Your scripting may be as simple as changing a line or function, or it may be as complicated as modifying several code modules. However extensive your scripting task, be sure to use the code conventions defined by the project, including conventions for indenting, variable naming, and commenting.

Upon completion you need to post your changes to the project, a task that is accomplished differently for different projects. Some projects accept changes via the mailing list, while others use a version control system. Whatever vehicle you use for sharing your update, that update is normally presented in the form of a patch. A patch is a small file that contains your fix or new feature for a computer application. One way to create a patch is to run what's called a diff program. The diff program compares original files or directories against modified files or directories, and produces an output file, called a patch, that contains the differences. Upon submitting the patch, be prepared to make changes as suggested by the project maintainers. For information on creating and submitting a patch, visit ehow.com (http://www.ehow.com/how_2091741_make-patch-open-source-project.html). Before submitting your patch to the open source project, be sure you understand the open source licensing used by the project, both to ensure the patch is not rejected for legal reasons and to ensure any copyright you may wish to impose on your developed code is preserved.
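For example, assuming you modified a copy of a project file named UtilityFunctions.rb (a hypothetical name), a unified diff of the original against your modified version produces a patch file you can submit:

    diff -u UtilityFunctions.rb.orig UtilityFunctions.rb > input_fix.patch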

Conclusion

Therefore, my fellow automators, if you find yourself complaining about what your test automation tool is not doing for you, ask yourself, “What am I doing for my test automation tool?” If more people had this mentality, we’d all be a little more skilled, and the discipline of test automation would probably be a lot better off.


Evolution of Automated Software Testing
An Eyewitness Account
by Linda Hayes


Test automation has a history of failure. Despite the explosion of software into every aspect of our lives, only a small percentage of tests are actually automated today. Not that there aren’t pockets of success or flashes of promise, but after more than a quarter century of investment by vendors, customers and consultants alike, manual testing predominates.

So reviewing how test automation tools have evolved is useful only if it helps to understand where we go from here. To do that, you need to appreciate the technology and market forces that have shaped them and why, and what that portends for the future. For that reason, this article sets the context of each evolutionary period within the state of software development and the applications market at the time.

Finally, test automation is a very broad topic, so when I say test automation I am referring specifically to the automation of an otherwise manual test against a user interface (UI). This does not mean that test tools cannot be used for non-UI testing, or that there are not many other types of testing that are automated by a wide array of approaches and tools, only that they are outside the scope of this article.


Text Capture/Playback

Mainframes dominated the early enterprise landscape. The mainframe was a monolithic environment, closely guarded behind glass walls and accessible only to a few. Computing resources were scarce and expensive, so development processes were formal and highly structured, requiring flow charts and extensive documentation before being committed to code. The majority of software was custom developed for back office applications such as accounting and payroll. Annual release cycles were the norm, usually punctuated by emergency patches.

The first commercial test tools emerged as a result of the transition from batch to online applications, when multi-user terminal access first became available and manual testing became a necessity. These tools recorded the keystroke inputs and character-based screen outputs of a terminal session, then replayed the keystrokes and compared the screens.

This approach was seductive. It sounded so simple: just do the test as usual but record it into a script file, then play the test back as needed and compare the results. Unfortunately, it was also deceptive. Issues arose around dynamic content, like date and time stamps or frequently changing data fields. Timing synchronization was also problematic, especially as the user community expanded and performance fluctuated. Whenever the system changed or ran slower than the original recording, context was lost and errors occurred.

But the killer issue turned out to be maintenance. If you recorded 100 transactions, there were 100 sets of the keystrokes and screens, so any change had a wide ripple effect. It was usually easier to re-record the test than it was to fix it, which defeated the value premise of automation and encouraged reversion to manual testing.

So early on, maintenance loomed as a primary obstacle to test automation success, with dynamic content and timing close behind.

Text Capture/Playback/Script

Computing soon spread to mid-range systems, but the real breakthrough came with the announcement of the IBM PC. Personal computers had been around for a few years, but these 8-bit platforms were generally perceived as truly "personal" computers used strictly by hobbyists and a few propeller heads (the early term for geeks). But with the imprimatur of Big Blue, PCs with 16-bit power entered the corporate computing world.

Application development continued to expand on the larger platforms, but software for PCs was initially limited either to productivity tools such as spreadsheets or word processors, or to terminal emulators that turned them into smart terminals. The DOS operating system, sporting a character-based interface, was single-threaded and severely limited in memory and storage; the first machines didn't even offer hard drives, so functionality was constrained.

The increasing popularity of spreadsheets and word processors propelled PCs into the corporate community, and the use of terminal emulators allowed them to replace "dumb" terminals. This drove manual testing from the mainframe to the PC.

The Archaeological Dig: System Evolution → Automation Evolutionary Response
• Mainframe domination → Text capture/playback
• IBM PC announced → Emulator drivers


Soon automation tools appeared for the PC, capable of driving these emulators and other PC-based applications, and these began to replace the mainframe-based capture/playback tools because they were cheaper and more powerful.

One of these tools was from my company, AutoTester. Our tool was originally developed by my first company, Petroware, when we ported our oil and gas accounting software from an early IBM desktop to the PC. It was designed as a "session recorder" by my brother to automate his programming tasks, and we quickly realized we could use the tool for many purposes including demos, training, task automation and testing. But the more we used it, the more we found we had to enhance it with additional commands to perform logic and branching as well as to handle variable data.

Before long, we had developed a proprietary language around the recording that enabled the scripts to handle timing and synchronization, make decisions based on results, and externalize data into files. Now a single script could loop through 100 records from a file and execute reliably regardless of system speed or dynamic content. Because it also provided more power and control through its scripting language, increasing reliability while reducing maintenance, we consistently replaced older capture/playback tools and handily won competitions for new customers.

Yet in spite of our early success, nagging concerns persisted. Even though we thought we had addressed the maintenance, dynamic content and timing issues caused by capture/playback, we still observed customers eventually reverting to manual testing. I was first puzzled then increasingly alarmed. What could be causing this?

I got my answer from a customer who sent their test manager to our offices for training. She was ideal for her job: she had over 20 years of experience with the application that was being rewritten and knew the functionality inside and out. But after only a few hours of training, she began to show signs of discomfort, and on the second day she openly burst into tears. Horrified, I asked her what was wrong. She explained that she was afraid she would lose her job since there was no way she could use our tool because she was not a programmer.

Until she uttered those words it had never occurred to me that all testers weren’t skilled at scripting. After all, Petroware had been a software company and we wrote code for a living. Although we sold AutoTester as enhanced capture/playback/scripting, this was just a euphemism for programming. And the stark fact of the matter is that most manual testers are subject matter experts, not programmers.

So in the end we traded one set of problems for another: we addressed the maintenance, dynamic content and timing issues but introduced a level of skill and complexity that excluded the majority of our target users.

I was devastated. Secretly convinced we had developed an essentially defective product that tormented our own customers, we started a project to try to reduce or eliminate the programming aspect by generating scripts, and while we made progress it soon became academic. Windows was born.

GUI Capture/Playback/Script

The release of Windows brought even more opportunity but also more challenges. As a multi-threaded operating system, it resolved many low-level issues; the maximum memory limit exploded and resources were managed across sessions. Windows also employed an API layer that converted mouse events into operating system messages that were intelligible, and returned screen contents as a mix of objects that could be queried for text and images. Recorded scripts became more legible and reliable, and the extra memory headroom enabled even more advanced functionality.

Another dynamic entered the mix as well. Personal computers were becoming not only ubiquitous but also operating platforms in their own right. Instead of just smart terminals for mainframes or personal productivity tools, they offered powerful standalone applications.

• Windows released → GUI capture/replay
• Client/server becomes the norm → Custom frameworks
• Internet takes center stage → Commercial frameworks


This transition from mainframes to personal computers caused a corresponding shift from a formal and ponderous development process to a more flexible, free-wheeling approach. Mainframes were so costly that few had access to them; personal computers were so cheap and plentiful that the masses could afford them. This led to software companies being born in garages by kids barely out of high school or college, with no patience or need for the constraints of formality.

The type of applications changed as well. Whereas previously most applications performed back office tasks that were limited to a few internal users, newer applications expanded their reach to front line operations such as order entry and customer service. Many of the original custom applications were replaced by commercial, off-the-shelf systems and new development moved into areas that interacted more directly with core business processes.

As a result, development cycles began to collapse. Market pressure began to dictate functionality and annual delivery cycles gave way to quarterly releases. More capability was needed faster and faster. This added even more demand for test automation, because manual testing was less and less capable of keeping pace with changes and the risk profile headed north.

This new, more hospitable technical environment and market also brought new competitors. Frankly, I was hoping that the latest market entrants would know something we didn’t and show us how to assure that our customers were successful over the long term. But the newest tools presented the same problems. Scripting became even more sophisticated to deal with ever more robust applications, and initial apparent success with recording messages instead of keystrokes was soon replaced by disappointment and failure due to the skills and effort required to write and maintain ever more complex script code.

Windows didn’t save test automation.

Presented with this reality, the most successful tool vendors aggressively enlisted partners and consultants who could offload the scripting effort from the manual testers. A new class of tester was created: the automation engineer. Whereas previously manual testers were courted with capture/playback, now they were relegated to the slow lane as scripters took over automation.

Custom Frameworks

The PC continued to gain computing power, moving to a 32-bit architecture, and Windows matured into a more stable and capable operating system that held a virtual monopoly on desktop machines. Client/server application development became the new norm, allowing PCs to offload tasks from backend mainframes and servers and provide users with ever-easier yet more powerful interfaces. The battle cry became: replace expensive mainframes with networks of PCs and servers.

But the early days of client/server application development introduced even more issues for test automation. Commercial development toolkits became available, offering libraries of sexy new graphical controls that augmented the standard Win32 object class library. Other component libraries also appeared offering everything from calculators to calendars to spell checkers, allowing programmers to quickly assemble rich user interfaces with less and less effort.

The problem was that many of these new controls were non-standard, so the standard Windows messaging layer was no longer applicable. A custom control did not necessarily implement the same messages that the standard classes did. Testing tools that relied on the Windows API began to fail, diverting more and more automation time and effort to dealing with custom controls and other non-standard behaviors.

This new complexity raised the stakes. Now automation engineers were thrust into the development environment, forcing them to learn and deal with the underlying implementation of the interface. Whereas previously one had only to be comfortable with using logical constructs and data file processing, now it was necessary to delve into object methods and properties and grapple with installing source hooks or writing test APIs. This effort siphoned time and resources away from actually automating tests and into a struggle just to interact with the application’s user interface.

It could not have come at a worse time, as pressure on development continued to increase. The new client/server applications penetrated mainstream operations whose delivery schedules were driven by user demand and stability became even more important. Greater and greater functionality was needed and it had to be delivered faster and better. Quarterly cycles soon gave way to monthly.

More and more companies invested in automation to meet this demand and automation engineers began looking for ways to reduce the time and effort to code scripts. It became clear that test scripts were no different than source code and adopting reusable components made sense as a strategy to reduce development and maintenance. Thus frameworks were born.


Originally frameworks provided a basic infrastructure for test execution, handling common tasks such as data file processing, error and test logging and reporting. But they gradually became more robust, providing simplified interfaces that employed spreadsheets and flat files to allow non-technical testers to construct automated test cases using pre-built script components. This introduced a role-based approach that allocated tasks based on skill sets and enabled cooperation between automation engineers and manual testers.

A number of framework types emerged, differentiated primarily by how the functionality was decomposed into components. The most prevalent was referred to as “keyword” or “action word” based and it entailed writing script code components around business tasks, such as “Create Order” or “Invoice Customer”, then allowing users to invoke these from a spreadsheet and pass in data values. Others organized their components around screens or windows, such as “Add Customer Information” or “Delete Customer Information”.

These frameworks succeeded in reducing overall development and maintenance while allowing non-coders to design and execute automated tests more easily, but still required custom code. Test designers could write linear test cases but could not make decisions and control the flow of the test based on real-time execution results. That still required code.

But frameworks fell victim to their own success. When the architect and chief (often only) automation engineer left the company or moved to another area, his or her replacement almost inevitably believed it would be easier and better to rewrite the code left behind than it would be to maintain it. With no de facto standards, almost every automation engineer adopted their own pet approaches and designs.

Large companies soon found themselves with multiple tools and frameworks, some of which had been rewritten more than once.

Complicating this was the fact that shrinking development schedules and expanding functionality meant more and more tests were needed. There was simply not enough time and people available to write, maintain and support one or more frameworks while also developing, executing and maintaining the tests themselves. Eventually the overhead became so heavy that companies – again – reverted to manual testing just to keep up with delivery schedules.

Commercial Frameworks

By now the Web was taking center stage as a ubiquitous user interface. This was both good and bad. It was good because it provided a basic set of standards for access through browser APIs and HTML classes, but bad because the ability to instantly deploy changes meant delivery cycles could be daily. And, as static pages and controls gave way to dynamically scripted ones, object names deteriorated into gibberish and behavior became unpredictable.

Pressure to deliver automation faster soared. Several enterprising automation engineers realized that they were constantly re-inventing the same framework over and over and decided to commercialize their solution and market it as a library of pre-built components. This helped to accelerate the implementation curve and reduce the overall costs. But depending on the type of framework, there was still a fair amount of application-specific code required to be developed, usually complicated by the custom control or dynamic content challenge.

One of these framework types – my favorite – was designed around object classes. Commonly referred to as "table-driven" or "class-action", this approach sought to reduce or remove application-specific code by using components at the object class level: for example, "Press Button" or "Input Text". A key advantage was that the same object classes and actions could be used across applications that used the same class library, allowing greater reusability with less code. It was even possible to share libraries across companies.

As the amount of code that needed to be written continued to decline, the battlefield moved to the objects themselves.


Virtually all modern test tools rely on an inventory of the application's testable objects. Variously referred to as an "application map", "GUI map" or "object repository", this inventory contains a list of each object in the application's interface, including its name, class and the attributes used to identify it. This map enables an object to be added to a test, located and acted upon during execution, and provides a central point of maintenance in the event of changes.

But as previously noted, object inventories were becoming increasingly dynamic and unpredictable. Building the map was a slow and tedious process, often requiring the engineer to navigate to each page and object. But with dynamic pages creating scripted objects on the fly, using names that weren't just incomprehensible but also changed depending on user choices, session parameters or a host of other factors, this was often an impossible task. Without a predictable and reusable object inventory, automated tests are unusable. Trying to maintain them with daily releases was crazy enough, but if the attributes changed within a release without any predictability, it was simply impossible.

So history shows us that every time test tools and implementation approaches are enhanced to meet the latest challenge, new ones are introduced. Vendors and customers both find themselves in a perpetual state of catch-up, seemingly doomed to repeat the past by continually reverting to manual testing.

What's Next

The only way to break this cycle seems to be to anticipate the future and be ready for it, instead of reacting to it. The harder part is knowing what's coming. Since I don't own a crystal ball, all I can do is look at history and current trends to project what lies ahead.

History says that:

• Development will continue to accelerate. Software will continue to insinuate itself into every aspect of our lives and jobs, driving demand for more functionality faster. Companies will adopt packaged applications when possible. Developers will take advantage of increasing levels of abstraction to keep up, using code libraries, code generators, web services and other middleware technologies to quickly assemble composite applications.

• Test automation architectures will continue to shift from code to data. The only way to keep up with the blistering pace of development will be to minimize or remove altogether the need for custom code and accelerate or automate the acquisition of the application map.

• Test automation penetration will continue to decline - unless something changes. The accepted rule of thumb says that 30-40% or even 50% of a project should be budgeted for testing, but that assumed old-fashioned coding, not snap-together systems. There is no way that hand-crafted test scripts can keep up with high-speed application production.

So what has to change? In my opinion, our only hope is for test automation to be integrated into the development process. No application should be deemed complete without a test harness. Let’s make developers responsible for making their software testable instead of engaging in a constant struggle to compensate for their latest bad behavior. Nonsense names, closed controls and unpredictable context will disappear when developers are both accountable and rewarded for successful test automation.

Such a harness would provide a test object inventory and interface with defined methods and properties to be shared by developers and testers alike. It would be kept current with all software changes and be part of a holistic test process that includes automated unit, integration, system and user acceptance testing.

Test tool vendors needn’t become extinct; just shift their focus to managing data instead of code. Automation engineers needn’t become an endangered species; just shift their focus to managing the object inventory and interface metadata. But don’t react just yet; history also tells us that changes take decades. Even today there are companies who believe capture/playback is a viable automation strategy, though we have known better for almost 30 years.

But until management sees the light and directs development to play their part, we all need to continue finding ways to reduce the amount of test code that must be written and maintained, work closely with development to correct testability issues and manage the impact of changes, and educate management that the historical ratio of development to test effort is outdated but that the extra investment in automation is worth it. And keep trying.


The Automated Testing Institute (ATI) conducted a podcast interview with Linda Hayes. To listen in, visit http://www.podcasts.automatedtestinginstitute.com.


Submit Your Stories! Now's Your Chance to be Heard

Real Adventures in Automation

Have something to say? ATI is here to listen. The Automated Testing Institute believes that one of the best ways to improve upon test automation is to listen to the lessons learned by others. So, if you have an interesting story that you'd like to share with the world, contact us at [email protected] to find out how.


Evolutionary Road
Navigating Existing Frameworks through Modern-Day Technological Evolutions
by J.L. Perlin


So, why do products evolve? Let's use the automobile as an example. The first self-propelled road vehicle was a steam-powered military tractor invented by the French engineer Cugnot. It was a ground-breaking invention but, alas, due to low speeds (2.5 miles per hour), operating dangers, and the need to stop every ten to fifteen minutes to build up steam, it was not an improvement over the horse and buggy.

The next attempt at making a successful automobile came in the early 1800s, when a Dutch engineer used an engine that ran on rechargeable batteries to design the first electric vehicle. The vehicle was initially successful due to a smaller and lighter engine that allowed it to travel faster and further than its predecessor. Despite the improvements, the top speed of the vehicle was still slower than the horse and buggy, the range of the vehicle was very short (about 15 miles before a recharge was required), and the cost was exorbitant and out of reach for most people of that era.

Fast forward to today, and we see that over the years cars have become much more fuel efficient, safer, and easier to drive. Improvements to electric and combustion technologies have led to the introduction of hybrid vehicles and all-electric cars, with steps made to bring us closer to the day when the vehicles we drive every day can run on hydrogen. But why did the motor vehicle evolve in this manner? What was the catalyst for new improvements and changes to the overall design? It is the same underlying factor that causes other products to evolve, including computer systems and software, and that factor is market demand.

Evolution of a Product

It is a natural occurrence for products to evolve over time. Advancements in technology and consumer demands spur radical changes in design that make products more efficient, easier to use, and less costly to produce. How can you prepare test automation to best handle these changes? It is becoming increasingly essential to plan for change when deciding upon an automation approach. In order to do this, it helps to understand why these changes are occurring, so that we as test automation engineers can recognize the catalysts behind them and be more prepared not only for new technologies and their effects on automation, but also for the interactions we will have with the organizational stakeholders who have introduced these improvements.


How Technology Evolution Affects Test Automation

Technologies of the Information Age are evolving at a rate of speed that increases exponentially with each passing day. Applications are growing in size and complexity. One major software component that has undergone massive changes is the operating system. Much like the automobile engine, operating systems have undergone numerous changes since their inception in order to meet consumer demands. These demands have led to changes that have made operating systems increasingly efficient, fast, intuitive, scalable, and secure, while providing powerful features and useful functionality previously unavailable to users.

It seems pretty clear cut, no? Changes in market demands spur innovations in technology. What, however, does this mean for software test engineers who test these evolving applications? It means that we have to lift the hood, kick the tires and take this new technology for a test drive to ensure it effectively and efficiently improves upon the proverbial horse and buggy. Changing major or even minor pieces of an application will most likely require changes to other components of the system. And all of these new and updated components need to be tested, along with the legacy components that aren't changing, to ensure that they still perform properly under the new design. The true challenge for software test engineers is determining whether or not the changing technology affects the way in which the product is tested. In addition, we must determine if we can use the same tools we used prior to the technology change to test these new components, or if we need to look at new tools and/or a new approach.

Just like the products themselves, these tools need to be more efficient, easier to use, and affordable. But most of all, the automated tools need to be able to access the new components of the application and be able to perform all of the methods against the test objects that our tests require to fulfill their objectives. As technology moves forward, so must our testing approaches, processes, and tools.

In today's environment, finding a tool that meets your project requirements can be a difficult task, especially when you consider how quickly technology has been moving forward and the number of products that have been made available to development teams to help them create large, complex applications in less time. Automated test tools often have issues when interacting with newly released third-party custom controls and components, which have become very popular in recent years. Although very useful to developers, they can be a major problem for automation engineers. There may also be issues with development technologies like .NET, Windows Presentation Foundation (WPF), ActiveX, etc., that are being used by your developers to design the bulk of your application. The majority of the time we cannot find a tool that can address all of our needs, but at times we do find one that can handle the majority of the tasks at hand.

A Modern-Day Evolutionary Scenario

Imagine that you're working at an organization and have been there for a couple of years, and during this time you and your team have been building and maintaining a suite of automated tests to execute against the product that your company markets. When automation was first introduced to the project, new test automation resources were hired (including you), and the first assignment was to evaluate the AUT to identify the test tool requirements. Most importantly, you wanted to find a tool that could interact with all of the different components of the application, allowing you to automate more tests and exercise more of the AUT functionality. You looked at a number of different tools and compared them using a set of pre-determined criteria/requirements.

Upon completing the evaluation you selected the test tool that would provide the most value to your project, identified an automation approach that best suited your project, and then began automating the smoke and regression tests. Eventually your team built a large suite of tests that you designed with great deference given to maintainability, and you felt pretty comfortable knowing that most updates and fixes made to the application could be addressed easily using your well-planned framework.

After several new versions of your application were released, the development team started looking at new 3rd-party application controls that could be used to improve upon the current design. In a meeting, the development team presented the proposed controls, which appeared to provide required application functionality and also looked much nicer than the previous, less powerful controls presently in use. It would still be up to the test team, however, to determine whether or not these new controls were an improvement over the proverbial 'horse and buggy', so your concern was centered on testability. The first few controls presented by development were proposed for use in portions of the application that were minimally accessed by users and that required only a small set of tests for validation. Worst case scenario: if your automated tool couldn't handle these controls, you might have only had a handful of additional tests to execute manually. Then the other shoe dropped. A new grid control was presented that would be accessed by all of the product's users and a large portion of your automated tests. In other words, if your automated test tool couldn't handle this control, the majority of your automated tests would become useless. You gently but firmly communicated your concerns regarding the effect that these new controls would have on the existing automation suite, so the developers agreed to send you a sample grid, allowing you to evaluate how your automated test tool would react to the newly proposed grid controls.

Researching Solutions

After receiving the sample grids as promised, you began trying to access the controls using your existing automated testing tool, and that's when you learned the bad news: the tool didn't fully recognize the grid components. It recognized that a new control object existed, but couldn't access any of the rows, columns, menus, or buttons contained within this new grid control. Was there a way to solve this issue? The first step you took was to contact the automated test tool vendor to see if they supported these controls. To this they responded, "No, but you can search our knowledge base to see if anyone has done this." Searching their knowledge base turned up several threads that posed similar questions on how to automate tests for these controls but, unfortunately, lacked any useful responses. Some of these threads were several years old, so it was becoming apparent that the tool vendor probably had no plan to release any add-ins for this technology. You next contacted the manufacturer of these 3rd-party controls to see if it was possible to further explore automation possibilities with your tool. The manufacturer's response was that this was theoretically possible, but that the controls did not officially support your automated testing tool. When you asked how to make the controls accessible to the automated tool, you were only informed that a few of their customers had attempted automating tests against these controls, but their results weren't readily available. Hmmm... someone was trying to sell something here.

It was time to find other options for making your tool adapt to the new technology.

Option 1: Extensibility – Some automated testing tools offer extensibility, which means that you're able to integrate the tool with the new controls by executing small snippets of custom code via calls from the test script. This solution is often the last resort, because it can require extensive setup and coding, and almost always requires assistance from the development team.
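To make the extensibility idea concrete, here is a minimal sketch, assuming the developers can expose a handle to the grid's own API to the test script; the grid_api handle and its cell/text methods are invented for illustration and will differ per tool and control.

# Sketch only: when the tool cannot see inside the grid, the test
# script calls a small custom snippet that queries the control's own
# API directly. 'grid_api' and its methods are invented placeholders.
def grid_cell_text(grid_api, row, col)
  # Bypass the tool's object recognition and ask the control itself.
  grid_api.cell(row, col).text
end

def verify_grid_cell(grid_api, row, col, expected)
  grid_cell_text(grid_api, row, col) == expected ? 0 : 1
end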

Option 2: Workaround – Another possible solution might be to identify a workaround in the application. Was there another screen in the application that could be used to accomplish the same goals? Was there a way to navigate around this control while accomplishing the same test objectives? If so, this would allow the automated tests to execute, but you and the developers would have to remain aware of the fact that the bypassed controls would need to be tested manually.

Option 3: Programmatic Solution – Another option would be to design a programmatic solution: a coded solution offered by developers that provides a custom way for your tests to bypass this screen while still accomplishing the test objectives. For example, if the new controls were being implemented on an 'Add User' screen, was there a way that the developers could allow you to bypass this screen using pre-built user data, stored in the user information database, that your test could access and use? This would again result in the tests for the new controls being reserved for manual execution, but it would also allow all of your tests that need to access information processed on this screen to continue running properly.
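As a hedged illustration of such a programmatic solution, the sketch below seeds a known user directly into a test database so that downstream tests no longer depend on the 'Add User' screen. It assumes a SQLite test database purely for illustration; the file, table and column names are invented.

# Sketch: seed pre-built user data directly into the test database so
# automated tests can bypass the 'Add User' screen. Database, table
# and column names are invented placeholders.
require 'sqlite3'

def seed_test_user(db_path, username, password)
  db = SQLite3::Database.new(db_path)
  # Insert only if the user is absent, so repeated runs stay idempotent.
  count = db.get_first_value(
    'SELECT COUNT(*) FROM users WHERE username = ?', [username])
  if count.zero?
    db.execute('INSERT INTO users (username, password) VALUES (?, ?)',
               [username, password])
  end
ensure
  db.close if db
end

seed_test_user('test_env.db', 'auto_user', 'auto_pass')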

Analyzing Impact

When identifying different options, it also makes sense to look at the impact that the new control has on the application, to determine if automating tests to exercise and verify its functionality is worth the effort. For example, if a new control is being added but it only affects a small number of tests, you might want to see if the effort of automating the tests against the control is cost-effective. It may make more sense to allow the tests to remain manual. Conversely, if the new control is going to be used frequently and is touched by a large number of your automated scripts, it might be worth the effort. Ultimately, it boils down to how easy or difficult it will be for automation to adapt, and whether the effort is worth the result.

Presenting Automation’s Case

So you've discovered that your automated testing tool has problems accessing and manipulating these new 3rd-party controls. Should you just run to the development team and tell them that the new control can't be automated? Before doing this you need to keep several things in mind. The developer(s) who discovered these new controls and are pushing for their implementation have not only a technical stake in this, but an emotional stake as well. Put yourself in the developer's shoes. You've done your research and have discovered this cool 3rd-party control that will fulfill the new and existing system requirements and/or is a major improvement over the previous control. You've performed your evaluation, probably a proof-of-concept, and have presented this idea to the rest of the development team. Now some automation engineer is walking in saying it shouldn't be implemented because it will break the majority of the automated tests. That isn't going to happen without a fight.

Offering Options

Since it is good policy to avoid fighting in the workplace, the best thing to do would be to walk into the developer’s office not only with a list of constraints, but also a list of options. Keeping a positive demeanor is of utmost importance. If possible, the most effective way to communicate this information to development would be via the Quality Assurance (QA) Manager. Inter-team communication is one of the major roles of a manager, so as an automation engineer you might as well take advantage of it. Additionally, program management may be involved and have input regarding the final decision, so it is best for them to communicate with someone that they are familiar and comfortable with.

It is important to include caveats when communicating these options. For example, you might have a programmatic workaround in mind, but this would require developer involvement, add time to the project schedule, remove focus from other current automation activities, and forego any testing of the new control itself, since it is being bypassed. You might have found an add-in that could resolve most of the issues you had documented during the evaluation, but the add-in was expensive, code needed to be rewritten to replace the previous code used to manipulate the previous controls, and the add-in still wouldn't provide all of the functionality required to meet the test objectives for this application component. Or you might explain that your automated test tool's extensibility features would allow you to handle this new control via automation, but that this would also require developer participation to help design the code. Until this code is developed and deployed, automation test coverage would be drastically reduced.

You or your manager might also want to point out how this new technology would affect other teams in the organization. Business Analysts would have to rewrite, add, or revise requirements. Technical Writers would have to update User's Guides and other documentation to reflect the changes to functionality. The Sales Team would need to get up to speed on these new controls so they could market the improvements to existing and potential customers. The Training Team would have to revise presentations, manuals, and other training materials.

The number one objective in making this a successful meeting is to avoid friction. Don’t make the developer view you as a roadblock or the enemy. If you are excited about this new control and agree that it will improve your product, let them know! Tell them that you will do everything within your power to make this work. Forming and maintaining a collaborative relationship with the developers is of paramount importance.

Often the development and QA teams can find middle ground. The collective decision may be to delay the addition of these new controls until you have fully prepared the automation framework to handle them. All parties may agree on QA halting all new automated test design while you concentrate your efforts on this compatibility assignment. Stakeholders may decide to search for other controls that may be more automation-friendly. Or the parties may decide that the impact of adding these new controls does not result in a positive return-on-investment after considering all of the activities that need to take place in order for the effort to be successful.

Lessons Learned

After experiencing similar scenarios at several different IT organizations, I was able to learn some valuable lessons that resulted in a set of guidelines that I now follow to make it through these situations with as little pain as possible. The most important guideline is to ensure that other teams in your organization have a good understanding of automated testing processes and procedures. This can be accomplished by offering to conduct formal presentations or brown bag lunches, possibly resulting in enhanced development processes that keep automation in mind when developing new code. The developers may work harder to ensure that new code provides hooks for the automation tool to grab on to. Another good guideline to follow is to always be aware of what development is researching and may be planning for future versions of your product. The best way to do this is to ensure that you are invited to development meetings where these items may be discussed. This allows automation to have a voice in the decision-making process throughout the project lifecycle, and also allows you to proactively conduct research prior to any formal planning. Finally, be sure to research not only the technology being considered by your organization, but also similar technologies offered by other vendors that may present more options.

Following these guidelines will increase the chance of effectively implementing new technologies while reducing the impact on test automation. This will in turn help your organization release an improved, well-tested product to its customers while maintaining positive, synergistic inter-team relationships within your organization.

Final Thoughts

Products naturally evolve or they go extinct, overtaken by new products that improve upon the previous design. New technologies are a part of this evolution, and automated testing tools and methodologies will continue to evolve to address changes in technology that come down the pipeline in the future. So, much as we've moved from steam to gas to hybrid to electric in car engine design, the same has happened for IT systems and software development. It is important to keep up to date with new technologies and their impact on automation, and to make your organization aware of automation processes and the effect their decisions will have on these processes, all while keeping an eye on their possible effect on your current automation model. This will help make the addition of new technologies to your product as painless an experience as possible while leveraging all of the advantages that automated testing offers to your organization.


The Automated Testing Institute (ATI) conducted a podcast interview with J.L. Perlin that offers a discussion of this article, as well as his test automation philosophy. Look for it at: http://www.podcasts.automatedtestinginstitute.com


The creation of a well thought out framework that is customized to your particular environment can be time consuming, so this article identifies 5 simple steps that will help in speeding up the process.

5 Steps to Framework Development

It is not very difficult these days to sell those who are involved in the Information Technology (IT) industry on the importance of regression automation – at least the theoretical importance. It seems as though most understand that test automation saves time, is repeatable, and, if done correctly, can be very reliable. Putting this theory into action proves to be problematic, however, due to the commonly held belief that automation should require no additional thought beyond executing existing manual test procedures while being recorded by an automated test tool. In order to facilitate the desire to crank out quick and thoughtless automated tests, the industry is creating more and more "user friendly" GUI automation testing tools, both commercial and open source, and these tools offer record and playback features that generate code or code-less visual scripts that may seemingly test an application with little effort. These applications, with their increased capabilities, can be extremely helpful, but also add to the problem. One of the major selling points for some of these applications is that, "With our application, you save a lot of time because you don't have to spend time thinking about and planning out how you want to perform your tests. You just record your steps and play back the resulting script." It is important to avoid falling into the trap of believing that some magical tool is going to eliminate the need for thinking and planning, because automated testing is most effective when planned and implemented within a framework.

The term framework is often thought of as the collective structures that compose a unit testing tool. In this article, however, frameworks will be discussed in a different context; one where a framework is defined as the set of abstract concepts, processes, procedures and environment in which automated tests will be designed, created and implemented. In addition, this framework definition includes the physical structures used for test creation and implementation, as well as the logical interactions among those components.

The creation of a well thought out framework, customized to your environment, can be challenging, so this article identifies 5 simple steps that will serve as your blueprint for getting it done.


1 Perform Initial Framework Design

Designing an automated test framework involves first identifying the type of framework that will be developed. This typically requires researching and understanding your software development environment as well as the application(s) under test (AUTs), and determining which framework will work best (see Table 1).

When in doubt about the framework that will work best in your environment, the safest choice is a functional decomposition framework. Functional decomposition frameworks can be as simple or as complex as deemed necessary in order to handle the demands of the automation scope. Once a decision has been made about the framework type, it is imperative to identify the basic components that will compose the framework, and to have a high-level picture of how these components interact with one another. Figure 1 presents a basic design example that you may decide to use or build upon in your automation efforts. There are four main segments in this framework design:

• Driver Segment
• Test Segment
• Function Segment
• Parameter Segment

The Driver Segment, represented by a script or tool, reads from the Execution Level File, which stores execution-level information for each test script (test name, test priority, etc.). Next, the Driver Segment calls upon the Parameter Segment to initialize the environment and set parameters necessary for the test run. After calling the Parameter Segment, the Driver Segment calls and executes tests based on their associated priority. Control is then yielded to the test scripts in the Test Segment for AUT verification. The Test Segment frequently calls functions from the Function Segment to perform succinct test case actions that collectively compose a test case. In addition, the tests in the Test Segment may call upon Data Files to obtain the data necessary for test execution.

Figure 1: Framework Component Interactions
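To ground the design, here is a minimal Driver Segment sketch in Ruby. It assumes, purely for illustration, that the Execution Level File is a CSV with test_name and priority columns, that the Parameter Segment is an init script, and that each test lives in its own Ruby file; none of these file names or layouts come from the figure itself.

# Driver Segment sketch: read the Execution Level File, initialize the
# environment, then run tests in priority order. File names and the
# CSV layout are invented for illustration.
require 'csv'

def run_suite(execution_file, max_priority)
  load 'config_files/init.rb'   # Parameter Segment: initialize first

  rows = CSV.read(execution_file, headers: true)
  rows.sort_by { |r| r['priority'].to_i }.each do |row|
    next if row['priority'].to_i > max_priority  # skip low-priority tests
    puts "Running #{row['test_name']}..."
    load "test_scripts/#{row['test_name']}.rb"   # Test Segment
  end
end

run_suite('driver_files/execution_level.csv', 2)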

Linear – All components that are executed by automated scripts in this type of framework largely exist within the body of that script. There is little-to-no modularity or reusability in a Linear framework.

Data-driven – Most of the components that are executed by a Data-driven script largely exist within the body of that script, with the exception of the data. The data used in Data-driven scripts is typically stored in a file that is external to the script, which promotes script reusability.

Functional Decomposition – This is a framework that involves the creation of modular functions for executing fundamental actions that occur during test execution. These actions may then be called upon and executed independently of one another, thus allowing them to be reused by multiple tests within the automated test bed.

Keyword – This type of framework is strongly dependent on the modularity concepts embodied by functional decomposition. In comparison to the Functional Decomposition framework, the Keyword framework evaluates the functions at a much higher level of abstraction, so that tests are no longer developed using code, but rather in a tabular format that is interpreted and executed by a driver script or utility.

Model-based – Often called an "intelligent framework", Model-based frameworks go beyond creating automated tests that are executed by the tool. These frameworks are typically "given" information about the application, in the form of state models, and the framework "creates" and executes tests in a semi-intelligent, dynamic manner.

Table 1: Framework Types


2 Create the Directory Structure

Test automation is a small-scale form of software development that involves creating a software package. It is, therefore, important for this package to have a preplanned physical structure. The physical structure is similar to, but different from, the component design identified in Figure 1. The component design is more of an abstract representation of how the framework components work together, while the physical structure identifies how and where these components will be stored.

The upfront identification of the physical structure is important for several reasons. Obviously, it is important because you need to know where things are going to be stored. It is also important because the framework may need to be moved collectively as a single unit. Such a move may be prompted by the need to have the automated tests execute on a different server, or run in a completely different environment. With such considerations made during the upfront creation of the physical structure, an increase in the framework's portability will almost certainly be a positive by-product. The physical structure may be maintained by a test management tool, some sort of Integrated Development Environment (IDE), or simply by a file system. Below is a pictorial representation of a sample physical structure for an automated testing framework. All of the components are housed under a single root directory, which aids in portability. In this root directory are several subfolders, including:

• Initialization Files
• Configuration Files
• Function Library
• Driver Files
• Test Scripts

These items will be discussed in subsequent steps.

RootDirectory
    InitFiles
    ConfigFiles
    FunctionLibrary
    DriverFiles
    TestScripts

Init (initialization) files bring the test environment to a controlled, stable point. This increases the chances of the test scripts running properly.

Config (configuration) files establish parameter values that are necessary for the test run. Config parameters are similar to Init parameters, but are likely to change more frequently.

The Function Library holds a set of functions used by the automated test suite. It is normally loaded at the beginning of a test run, and its functions are called throughout the test run.

A Driver Script coordinates test activities by calling and executing the test scripts. It may run init files, set up config parameters, and run test scripts.

Test Scripts are the components that actually perform verification activities on the application under test (AUT). Test scripts perform actions and generate results files.

Figure 2: Directory Structure
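One way to realize the portability goal described above is to derive every path from a single root at runtime. The sketch below mirrors the folder names in Figure 2; it is a sketch under that assumed layout, not a prescribed one.

# Sketch: anchor all framework paths to one root directory so the
# whole package can be relocated as a unit. Folder names mirror
# Figure 2; adjust them to your own structure.
ROOT_DIR   = File.expand_path(File.dirname(__FILE__))
INIT_DIR   = File.join(ROOT_DIR, 'InitFiles')
CONFIG_DIR = File.join(ROOT_DIR, 'ConfigFiles')
LIB_DIR    = File.join(ROOT_DIR, 'FunctionLibrary')
DRIVER_DIR = File.join(ROOT_DIR, 'DriverFiles')
TEST_DIR   = File.join(ROOT_DIR, 'TestScripts')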


3 Develop Init and Config Files

An Initialization File contains a set of parameters or code that brings the environment to a controlled, stable point. If the environment is initialized, there is a better chance of the test scripts running as designed. A Configuration File contains a set of parameters needed for the test run. The main difference between initialization parameters and configuration parameters is that initialization parameters are not likely to change from test run to test run, or from machine to machine. Instead, initialization parameters will remain fairly constant throughout the project life cycle. Initialization parameters/actions that are addressed in Initialization Files include those that are responsible for:

• Setting Directory Paths – Directory paths help in finding key automation components during runtime. At a minimum, all of the main directories of the automated test framework should be included.

• Loading Library Files – Test automation library files contain reusable functions that may be shared across multiple tests. These libraries may be loaded during the automation initialization process.

• Modifying Application Settings – Initialization of application-specific settings is often important for the successful execution of the automated test suite. These settings are often specific to the application technology. For example, in web application testing you might initialize the environment by doing the following:

  » Clearing the cache
  » Turning off auto-complete
  » Turning off browser animation effects
  » Setting cookies
  » Setting frame format

Configuration parameters are those parameters that may change more frequently, but, upon changing, will permeate through multiple tests within the suite. Parameters that may be set in the configuration script include:

• User IDs and passwords,
• Pointers to databases,
• Public variables and constants, and
• Pointers to the application (domains, URLs, etc.).

To better understand configuration parameters, let's examine a scenario represented by the figure below. The automated tests in this environment may be executed by several different testers. Tester 1 may log in on one machine to run the tests, while Tester 2 logs into another. Each tester may have their own set of test data, and must also choose, at run-time, which front-end server to connect to, and then which back-end data server to connect to. No one should have to make all of these determinations every time they sit down to run their automated test bed. Configuration parameters may be set up to maintain the desired configurations.

Figure 3: Configurations (Tester 1 and Tester 2, each selecting test data, an application layer, and a database layer)
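A minimal sketch of such a configuration file follows, assuming per-tester settings keyed by the login name of whoever starts the run; every ID, URL and server name here is an invented placeholder.

# Configuration sketch: each tester's run-time choices (credentials,
# front-end server, back-end data server) are stored once instead of
# being re-decided before every run. All values are placeholders.
CONFIGS = {
  'tester1' => { user_id: 'auto_user_1', password: 'secret1',
                 app_url: 'http://frontend-a.example.com',
                 db_conn: 'db-server-a' },
  'tester2' => { user_id: 'auto_user_2', password: 'secret2',
                 app_url: 'http://frontend-b.example.com',
                 db_conn: 'db-server-b' }
}

# Select the configuration for the current machine/tester; fetch
# raises if no entry exists for the current login name.
CURRENT_CONFIG = CONFIGS.fetch(ENV['USERNAME'] || ENV['USER'])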


4 Develop the Function Library

Often this step is placed after the creation of automated tests (Step 5), through the following process:

1. Generating multiple test scripts with identical code,
2. Identifying the redundant code,
3. Copying the redundant code into a function within a function library, then
4. Replacing the redundant code in each of those scripts with a call to the function.

This approach is extremely inefficient and produces unnecessary rework. The more effective approach is to develop functions before developing tests, via the following process:

1. Identifying redundant actions,
2. Creating functions for these actions within a function library, then
3. Adding calls to the functions when creating the test scripts.

Most of the functions that will be necessary for the efficient implementation of an automated test bed can be identified before you create a single automated test script. This is possible because Step 1 involved reviewing documentation related to the software development environment, the test cases and the AUT; in addition, it may involve reviewing the AUT itself, if it is available. This review time should provide insight into some of the repetitive actions that take place while exercising and testing the application; therefore, there's no need to wait until test script creation begins before functions are created. The various types of functions that may be necessary include:

• Core Business (Utility) functions
• Navigation functions
• Error handling functions
• Loading functions
• Miscellaneous verification functions

Core Business functions exercise basic application functionality. For example, a function that is responsible for logging into the application would be considered a core business function. These functions can be identified from the requirements or from the manual test cases related to the groups of tests that are slated for automation. Most applications have several main navigation paths that are taken many times during testing, and if the application modifies these paths, all developed automated tests that follow them would be broken. Therefore, developing navigation functions is imperative.

Error Handling functions are created to deal with unexpected events that may occur during testing. Some events that will definitely need error handling include:

• The failure of an expected window to appear.
• The failure of an expected object to appear.
• The failure of an expected value to appear.
• The appearance of an unexpected popup window.
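As one hedged example of an error handling function, the sketch below polls briefly for an expected window and reports failure instead of letting the run crash; window_exists? is a stand-in for whatever check your automation tool actually provides.

# Error handling sketch: poll for an expected window, returning 0 on
# success and 1 on failure rather than aborting the whole run.
# 'window_exists?' is a placeholder for your tool's own API call.
def wait_for_window(title, timeout = 10)
  deadline = Time.now + timeout
  until Time.now > deadline
    return 0 if window_exists?(title)
    sleep 1
  end
  1  # the expected window never appeared
end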

Loading functions do exactly as the name implies – load things. These functions load files and compiled modules for use by the automation tool.

Although many automated test tools provide built-in verification checks, it is often necessary and more efficient to develop custom Verification Functions to verify certain application attributes. Some user-defined verification functions that you may want to consider creating include the following:

• Window check
• Date/time check
• Login ID check

When creating functions, it is important to identify pertinent function arguments and return values.
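For instance, a core business Login function might take the user name and password as arguments and return 0 for success and 1 for failure, matching the 'ret' convention used by the keyword driver script discussed elsewhere in this issue. The UI calls below are placeholders for your tool's real API, so treat this purely as a shape to imitate.

# Core business function sketch: explicit arguments and an explicit
# numeric return value (0 = success, 1 = failure). The set_text,
# click and window_exists? calls are placeholders for your tool's API.
def login(username, password)
  set_text('Login', 'UserName', username)
  set_text('Login', 'Password', password)
  click('Login', 'OK')
  window_exists?('Main') ? 0 : 1   # verify the landing screen
rescue StandardError
  1                                # any unexpected error is a failure
end

In a Functional Decomposition test, such functions become building blocks that are simply stacked: ret = login('John', 'JPass'), followed by a screen verification call.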

FUNCTION LIBRARY: Contains building blocks for tests


5 Develop the Test Scripts

The final step in the framework development process is the creation of the automated test scripts. The automated test interface is largely a function of the type of automation framework and the automated tool in use. For Linear and Data-driven frameworks, the tests are normally code-based, as illustrated in Figure 4, and therefore fairly technical. These tests are composed of code that is very specific to the test script in which it exists. Some tools offer a tree structure and/or "keyword" interface that adds a more graphical, icon-based view of the code, but when these tools are used in a Linear or Data-driven framework, the tests still follow the basic precepts of a code-based test.

Tests in Functional Decomposition frameworks are still code-based, but as the framework becomes more defined, the tests become slightly less technical. This is due to the fact that the tests are largely created by treating reusable components as building blocks and stacking them together. Figure 5 reveals how the statements in Figure 4 might be written in a Functional Decomposition framework. Statements 1 through 3 in Figure 4 have been parameterized and placed in a function called "Login", while steps 4 through 8 have been parameterized and placed in a function called "Verify_Screen". Note: depending on the configuration parameters that are created, the username ("John") and password ("JPass") may be stored in the configuration file and delivered to the script via configuration parameters.

Keyword frameworks reduce the technical nature of automated test development even further by allowing tests to be developed in a tabular format. For example, the Keyword equivalent of the statements illustrated in Figure 4 might appear as illustrated in Figure 6.

The columns in this illustration represent different elements of the test that collectively perform the desired actions (a sample set of rows appears after this list):

• Screen – The screen on which the automated test step occurs
• Object – The object on which the automated test step operates
• Action – The keyword that is tied to a reusable function and identifies what activity will take place on the test step
• Value – The value that may be entered into the application
• Recovery – The error handling actions for this step
• Comment – A note about the step's main purpose, used to provide helpful information to anyone reading the keyword file
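A few sample rows under this column scheme might look as follows; apart from the INPUT keyword (mentioned later in this issue) and the Login example above, the keyword names and values are invented for illustration.

Screen | Object   | Action | Value | Recovery | Comment
Login  | UserName | INPUT  | John  | ABORT    | Enter the user name
Login  | Password | INPUT  | JPass | ABORT    | Enter the password
Login  | OK       | CLICK  |       | ABORT    | Submit the credentials
Main   |          | VERIFY |       | CONTINUE | Confirm the main screen appears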

The keyword file is interpreted and executed using a driver script. Following all five of the steps presented in this article, from start to finish, will aid you in creating an automated test framework that provides an environment in which your automated tests can thrive across multiple releases of the AUT.

Figure 4: Code-Based Interface
Figure 5: Functional Decomposition Interface (building blocks)
Figure 6: Keyword-Driven Interface


Evolve, Adapt and Build (Continued from page 4)

Finally, yours truly chimes in on the third mode of change, defined by building up your test automation framework. This change mode has no particular catalyst; it is instead unprovoked, requiring implementation by a project looking to improve its test automation return-on-investment. I provide a step-by-step approach for building an automated test framework that will ultimately help sustain multiple automated tests over an extended period of time, helping your automated tests proactively keep pace with application changes.

So, while I may be biased, I do believe you will thoroughly enjoy the three featured articles in this issue. These articles, along with the department articles – focused on keyword driver script development, open source tool contributions, and automation blogging – will help you understand present-day test automation, and help us all define the next phase in automation's evolutionary development.

Hand Me the Key...Word (Continued from page 10)

Lines 10 through 22 use the DT_Get_Value function to get the values from each of the columns of the keyword file (the 'DT_' functions are not native Ruby functions, but are instead custom functions). Lines 24 through 41 compose a branching construct that interprets the keyword in the Action column of the keyword file (as illustrated in Figure 3). This construct is the heart of the driver script in that it ties each keyword to the execution of a specific utility function. In addition, this construct determines which columns will deliver arguments to the function. Lines 27 and 28 provide an example of this concept. If the current keyword file row being read by the driver script uses the "INPUT" keyword, Line 28 will call the utility Input function and will deliver values from the Screen, Object and Value columns into the function's arguments. Each utility function should be developed to return a value that the driver script assigns to a variable called 'ret'. If the function is successful, the return value will be 0 (zero). If the function fails, the return value will be 1. In the event of a failure, Lines 43 through 45 call an exception handling routine that you develop, called utilityRecover. The recovery action defined in the Recovery column of the keyword file determines what action is taken on a failure. This is an effective and efficient mechanism for implementing global exception handling on every line of an automated test. Line 46 closes the loop that cycles through each line of the keyword file, while Line 49 closes the keyword Excel file.
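For readers without page 10 in front of them, the construct described above has roughly the following shape. This is a reconstruction sketch: screen, object, action, value and recovery are assumed to hold the current row's column values, and every utility function other than the Input call mentioned in the article is invented for illustration.

# Reconstruction sketch of the driver's branching construct: each
# keyword in the Action column maps to one utility function, and the
# columns named here feed that function's arguments.
case action.upcase
when 'INPUT'
  ret = input(screen, object, value)   # the utility Input function
when 'CLICK'
  ret = click(screen, object)          # invented utility name
when 'VERIFY'
  ret = verify_screen(screen)          # invented utility name
else
  ret = 1                              # unknown keyword counts as failure
end

# Global exception handling: a non-zero return triggers the recovery
# action named in the keyword file's Recovery column.
utilityRecover(recovery) if ret != 0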

DT_OPEN, DT_GET_VALUE and DT_CLOSE are not native Ruby functions. These are custom functions for manipulating an Excel file. For guidance on how to create these functions, use the "Data-driven Scripting With Excel" entry at: http://www.techniques.automatedtestinginstitute.com


Hot Topics in Automation

Automation Microblogging is Hot!

Microblogging is a form of communication based on the concept of blogging (also known as web logging) that allows subscribers of a microblogging service to broadcast brief messages to other subscribers of the service. The main difference between microblogging and blogging is that microblog posts are much shorter, with most services restricting messages to about 140 to 200 characters.

Recently popularized by Twitter, there are numerous other microblogging services, including Plurk, Jaiku, Pownce and Tumblr, and the list goes on and on, because there is a tremendous appeal to microblogging. The appeal is in the brevity of the message (important in today's fast-paced society), the ability to stay connected in real-time with one's interests, and the portability of microblogging services. Many of these services allow messages to be sent and received in a variety of ways, including instant messages, text messages, email, video clips and audio sound bites. In addition, messages can be sent and received via a variety of tools, including Internet browsers, mobile phones, personal digital assistants (PDAs), and desktop client software.

Microblogging is a powerful tool for relaying an assortment of information, a power that has definitely not been lost on the test automation community. See some of the recent Twitter messages below for an example of how test automators have used microblogging.

"Do you know about the AA-FTT, a community committed to better functional testing tools for Agile teams? See http://is.gd/1HUvz"
Twitter name: testobsessed | Post Date/Time: 11:49 AM Jul 22nd | Usage: Community Introduction

"Working with #qftest for automated gui testing... just found its not going to work for webtesting on MAC OS X... bastards!"
Twitter name: sietstweets | Post Date/Time: 5:09 AM Jul 30th | Usage: Sharing Tool Frustrations



"Software Application Test Analyst Paris, France!!!: London-London, Job Purpose: • Automate test plans.. http://ub0.cc/3R/3F"
Twitter name: Monsterjobsuk | Post Date/Time: 8:08 AM Jul 29th | Usage: Job Announcement

"We've just released our new TestComplete course onto the public schedule. Check it out; http://tinyurl.com/nkofpa"
Twitter name: TestComplete | Post Date/Time: 8:37 AM Jul 31st | Usage: Course Announcement

"Automated testing of production environment http://bit.ly/kvduU Good article, so true !"
Twitter name: fredberinger | Post Date/Time: 11:39 PM Jul 21st | Usage: Sharing Articles



I 'B'log to U

Automation blogs are one of the greatest sources of up-to-date test automation information, so the Automated Testing Institute has decided to keep you up-to-date with some of the latest blog posts from around the web. Read below for some interesting posts, and keep an eye out, because you never know when your post will be spotlighted.

Latest from the Blogosphere

Imagine playing a video game blindfolded or even with the heads up display turned off. You cannot monitor your character’s health, your targeting system is gone. There is no look ahead radar and no advance warning of any kind. In gaming, the inability to access information about the campaign world is debilitating and a good way to get your character killed.

There are many aspects of testing software that fall into this invisible spectrum. Software itself is invisible.

Blog Name: Google Testing Blog | Post Date: July 1, 2009
Post Title: The Plague of Blindness | Author: James A. Whittaker
Read more at: http://googletesting.blogspot.com/2009_07_01_archive.html

I will say that in general I am not a big fan of GUI automation for a litany of reasons, but mostly because it is misused, or applied to tests that are simply more effectively and efficiently performed by a person. However, I also understand the GUI automation does provide value when used in the right context. I am not against GUI automation, but I certainly don’t advocate automating a test just because we think we can, or because we have some bizarre idealistic belief that everything should be automated.

Blog Name: I.M. Testy | Post Date: August 1, 2009
Post Title: UI Automation Out of Control | Author: Bj Rollison
Read more at: http://blogs.msdn.com/imtesty/archive/2009/08/01/automation.aspx



I’ve been looking into testing with fuzzers lately, and finally got the chance to do this on a live project. While there are a good deal of black-box fuzzing tools out there, if you want to go beyond that you are often on your own. At the other end of the spectrum, MSDN has a nice article on white box fuzzing.

Blog Name: Collaborative Software Testing | Post Date: July 29, 2009
Post Title: Fuzzing Through The Side Door | Author: Jonathan Kohl
Read more at: http://www.kohl.ca/blog/archives/2009_07.html

Test-Automation, for example, is very appealing to developers, because automation is what they do. It’s no surprise, when devs look at testing as a computer science problem, Automation is the first thing to come to mind. So we have generation after generation talking about test automation as the be all and end all of the test process, without ever having actually studied the very human, cognitive and communication process of software testing, nor having done any research on failure modes of software defects.

Blog Name: Creative Chaos | Post Date: July 31, 2009
Post Title: The Meme's the Thing | Author: Matthew
Read more at: http://xndev.blogspot.com/


Up-to-Date with ATI

This Week In Automation Series – This Week In Automation is an ATI-produced video series that summarizes current events relative to test automation. With an often tongue-in-cheek approach to discussing the latest news in tools, events (both small and large) and concepts, this series serves as the video companion to ATI's newsletter, while keeping you abreast of what is going on in the world of software test automation. See the latest edition of This Week In Automation at http://www.twia.automatedtestinginstitute.com.

See No Evil, Hear No Evil – Have you seen the latest podcast produced by ATI? If not, navigate over to http://podcasts.automatedtestinginstitute.com and listen to the latest podcast featuring an interview with Ms. Linda G. Hayes. In this interview, she discusses her article entitled "Evolution of Automated Software Testing: An Eyewitness Account". In addition, she discusses her thoughts on the present and future of test automation in general.

It's News To Me – Over the past few weeks ATI has brought the user community some interesting and very popular news stories related to test automation. These stories include:

• Microsoft PowerPoint Defect Posing Security Risk
• Espionage and Software: U.S. Vulnerable to Cyberspies!?
• Microsoft PowerPoint Security Vulnerability Resolved

For these and other stories visit http://www.news.automatedtestinginstitute.com.


Coming Highly Rated – Wanna know which of the hundreds of indexed tool and article archives are the most popular? Well, here goes. For the purposes of this section, popularity is based on the number of hits and the rating received. Two of the most popular indexed tools on the site are FunFX and JSystem. Two of the most popular indexed articles are "Test Automation ROI" by Dion Johnson and "Automating Software Testing: A Life-Cycle Methodology" by Elfriede Dustin. Be sure to visit http://www.tools.automatedtestinginstitute.com and http://www.articles.automatedtestinginstitute.com for the latest tools and articles, and be sure to rate them after you have read them!

Automation Honors – Test automation is an integral part of ensuring the production of quality systems in a world where software development is becoming increasingly fast-paced. One way to help elevate test automation so that it can be more effective is to celebrate it and its practitioners. Thus, the ATI Automation Honors have been created to celebrate excellence in the discipline of test automation, from the corporate level down to the practitioner level. This celebration specifically pays tribute to:

• Those that have displayed leadership in moving and keeping test automation in its proper place as a distinct IT discipline,
• Those that drive innovation within the field,
• Those that display excellence in automation implementation, thus playing a big role in the delivery of a high quality product to customers and/or production, and
• Tools, individuals, publications, websites and more.

The nomination period has reached a conclusion, and finalists will be identified by August 17, at which time voting will begin and continue through October 31, 2009. For more information on the Automation Honors, the nominating process or the honoree categories, visit http://www.atihonors.automatedtestinginstitute.com.

Page 44: utomAted

44 Automated Software Testing Magazine www.automatedtestinginstitute.com August 2009

How Do You Network? Whatever your choice, the Automated Testing Institute is there!

Stay up-to-date with test automation by following the Automated Testing Institute on your social networking site of choice: Facebook, Myspace, Twitter, YouTube, LinkedIn or Blogger.

For more information, visit http://www.networking.automatedtestinginstitute.com
