W2 Concurrent Session
4/9/2014, 10:30 AM

“A Guide to Cross-Browser Functional Testing”

Presented by: Malcolm Isaacs, HP

Brought to you by:

340 Corporate Way, Suite 300, Orange Park, FL 32073
888-268-8770 · 904-278-0524 · [email protected] · www.sqe.com

A Guide to Cross-Browser Functional Testing


DESCRIPTION

The term “cross-browser functional testing” usually means some variation of automated or manual testing of a web-based application on different mobile or desktop browsers. The aim of the testing might be to ensure that the application under test behaves or looks the same way on different browsers. Another meaning could be to verify that the application works with two or more browsers simultaneously. Malcolm Isaacs examines these different interpretations of cross-browser functional testing and clarifies what each means in practice. Malcolm explains some of the many challenges of writing and executing portable and maintainable automated test scripts, which are at the heart of cross-browser testing. Learn some practical approaches to overcome these challenges, and take back manual and automated testing techniques to validate the consistency and accuracy of your applications—whatever browser they run in.


Page 1: A Guide to Cross-Browser Functional Testing

Page 2: A Guide to Cross-Browser Functional Testing

Malcolm Isaacs, HP

Malcolm Isaacs is a member of the functional architecture team in HP Software’s Application Lifecycle Management group, focusing on best practices and methodologies across all aspects of the software development lifecycle. During the course of his career, Malcolm has worked in various positions including software engineer, team leader, and architect. In 2003 he joined Mercury (later acquired by HP), where he worked on a number of products which specialize in supporting traditional and agile software development lifecycles from planning through deployment. Follow Malcolm on Twitter @MalcolmIsaacs, contact him at [email protected], and read his HP blog.

Page 3: A Guide to Cross-Browser Functional Testing

© Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.

A Guide to Cross-Browser Functional Testing
STARCanada 2014
Malcolm Isaacs
Twitter: @MalcolmIsaacs

Page 4: A Guide to Cross-Browser Functional Testing

Agenda

Web Applications

Testing Web Applications

Challenges of Testing

Overcoming the Challenges

Page 5: A Guide to Cross-Browser Functional Testing

Web Applications

Presenter
Presentation Notes
A ‘Web Application’ is any application that uses a browser as a client. I’m going to be talking about Cross-Browser testing, to ensure that the application runs correctly and consistently on different browsers. <Click> That’s what I mean by ‘Cross’ – not ‘Angry’. <Click> These are the most common browsers out there today. This graph shows how popular they are. Chrome is the leader, and is still rising. Internet Explorer has dropped in popularity over the past few years. Firefox is also declining somewhat. But mobile devices are taking over.
Page 6: A Guide to Cross-Browser Functional Testing

Mobile Apps- Native

Presenter
Presentation Notes
Let’s talk about Mobile Apps for a minute. Mobile devices are everywhere. Who’s using one right now? Who had to look up from it just now? There are three main types of Mobile App: <Click> Native apps can access the mobile device’s hardware directly. These apps usually have good performance and look great on the device. There are a lot of advantages in writing native apps – direct access to the platform’s APIs and capabilities, running at a decent speed. On the other hand, native apps need to be rewritten for each platform. They also need to be updated on the device, so people might be using different versions of the app at the same time.
Page 7: A Guide to Cross-Browser Functional Testing

Mobile Apps - HTML

Presenter
Presentation Notes
HTML apps use a browser on the mobile device to display HTML5. These apps can access some features of the hardware via the browser. HTML apps are usually created as cross-platform from the get-go, and are identical to desktop web applications: they run on the web server, with the UI rendered by the browser on the device. They also enjoy centralized updates, so everyone will be using the same version of the application at the same time. The 2014 IDC Prediction on Application Development and Deployment predicted that HTML5 will not replace native development through 2017. It will coexist with native.
Page 8: A Guide to Cross-Browser Functional Testing

Mobile Apps - Hybrid

Presenter
Presentation Notes
Hybrid apps run on the device itself, but use HTML for the non-hardware-specific parts of the app. Hybrid apps offer a platform-specific shell, but with most of the logic running on the server. When we talk about cross-browser testing on mobile devices, we’re typically talking about the second type – using the device’s browser to display HTML. Nevertheless, some of the topics we’re going to address today apply to hybrid apps as well.
Page 9: A Guide to Cross-Browser Functional Testing

Versions

Presenter
Presentation Notes
There are often multiple versions of native and hybrid mobile apps running at any given time. Web applications, including mobile web applications, enjoy centralized updates, so typically, everyone will be using the same application at the same time. But while the app is centralized, the browser itself is not. So different versions of the browser are likely to be in use at the same time. Just to complicate things even further, not everyone uses the same operating system, and those that use the same operating system use different versions of it. So that’s another variable – we might have two machines running identical browsers, but on top of different operating systems, which can cause differences in behavior.
Page 10: A Guide to Cross-Browser Functional Testing

Architecture

Web Application Back-end Services

Internet

Presenter
Presentation Notes
Let’s take a look at a typical application’s architecture. Most of the logic is shared, and the UI is sent out to the browser for rendering. This is the weak link where most cross-browser bugs happen, as each browser has its own rendering engine, with its own interpretation of how to parse the HTML and how to display it. To account for this, most applications will also look at who’s sending the request, and tweak the UI slightly based on the browser’s identification or capabilities where necessary.
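The server-side tweak described above can be sketched as a tiny User-Agent check. This is an illustrative sketch only: the function name, the variant names, and the matching rules are assumptions, not part of any real framework; production code would use a proper device-detection library.

```python
# Hypothetical sketch of server-side browser detection: inspect the
# User-Agent header and pick a UI variant to send back. All names and
# matching rules here are illustrative assumptions.

def pick_ui_variant(user_agent: str) -> str:
    """Return a UI variant name based on a (simplified) User-Agent check."""
    ua = user_agent.lower()
    if "mobile" in ua or "iphone" in ua or "android" in ua:
        return "mobile"          # smaller layout, touch-friendly controls
    if "trident" in ua or "msie" in ua:
        return "legacy-ie"       # extra CSS shims for older IE rendering
    return "standard"            # default layout for modern desktop browsers

# Example: a desktop Chrome UA string gets the standard variant.
chrome_ua = "Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 Chrome/34.0"
print(pick_ui_variant(chrome_ua))  # -> standard
```

In practice the matching rules are far messier than this, which is exactly why the rendering layer is where cross-browser bugs concentrate.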
Page 11: A Guide to Cross-Browser Functional Testing

Some Examples
HP’s Travel Request System. Forgot to select your country?

Presenter
Presentation Notes
I want to show you some examples of bugs, both on desktop and mobile browsers, so that you can get an idea of the type of things we’re seeing. This comes from HP’s Travel Request system. If you forget to select your country in the ‘Agency’ drop-down box at the top of the page, both Chrome and Firefox tell me that I can’t book for my country. This doesn’t help the user understand what they did wrong. IE gives me a sensible message – “Please make a selection from the Agency drop down list”.
Page 12: A Guide to Cross-Browser Functional Testing

Some Examples (cont)

Presenter
Presentation Notes
In the same application, you need to select dates, such as when you’re departing and returning from your trip. <Click> The date picker is resizable on Chrome and Firefox, but not on IE. <Click> Chrome and Firefox also display the address bar, which is useless since it’s read-only. So what, you might ask? Well, it’s not really important. But it is a difference. In the developers’ defense, the application is designed for IE only. But it wouldn’t have been hard for the developers to check the browser and pop up a message saying that you need IE to run it.
Page 13: A Guide to Cross-Browser Functional Testing

Some Examples (cont)
Pages render differently…

Comments on the HP blogging platform:

Presenter
Presentation Notes
Here’s an example where the spacing is incorrect on Chrome. Seems like a minor defect, but it’s the sort of bug that throws off alignment with other elements such as images.
Page 14: A Guide to Cross-Browser Functional Testing

Some Examples (cont)
Pages render differently…

HP Agile Manager

Presenter
Presentation Notes
The ‘Columns’ menu is missing in Firefox!
Page 15: A Guide to Cross-Browser Functional Testing

Some Examples (cont)

Image Source: http://thedailywtf.com/Articles/Internet-is-Broken--Time-to-Get-a-New-Phone.as

Presenter
Presentation Notes
Here’s an example from the mobile world…
Page 16: A Guide to Cross-Browser Functional Testing

Some Examples (cont)

Image Source: http://thedailywtf.com/Articles/Internet-is-Broken--Time-to-Get-a-New-Phone.as

Presenter
Presentation Notes
I discovered this on the 19th of November 2013. It’s Healthcare.gov’s login page. Nothing complicated. Works on Chrome and IE. Mess on Firefox.
Page 17: A Guide to Cross-Browser Functional Testing

Some Examples (cont)

[Diagram: a timeline of two concurrent browser sessions against the same record. Session 1 logs in and writes “1”; session 2 logs in and writes “2”; both log out, leaving “2” as the final value, so session 1’s write is lost. The locked variant is also shown, where the second session finds the record locked until the first session unlocks it.]

Image Source: http://www.pantherkut.com/2008/03/20/do-you-think-your-cat-can-order-their-own-food-online/

Presenter
Presentation Notes
Some applications let you log in to an application multiple times. This is OK if you need to read something, but if you need to write something, changes on one browser could be overwritten by the other browser. <Click till 2 is displayed on the top> <Click> Leading to a reaction similar to this cat’s…
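The lost-update sequence in the diagram can be shown in a few lines. This is a minimal sketch, with made-up class and function names; it models the record as a plain object rather than a real application, and contrasts last-write-wins with a simple optimistic conflict check.

```python
# Minimal sketch of the lost-update problem: two sessions for the same
# user both write to a record; without a lock, the second write silently
# overwrites the first. All names here are illustrative.

class Record:
    def __init__(self, value):
        self.value = value

def write_without_lock(record, new_value):
    record.value = new_value          # last write wins, no conflict check

def write_with_lock(record, new_value, expected):
    # Optimistic check: refuse the write if the record changed since
    # this session last read it.
    if record.value != expected:
        raise RuntimeError("conflict: record changed by another session")
    record.value = new_value

record = Record("original")
write_without_lock(record, "1")   # browser session A writes "1"
write_without_lock(record, "2")   # browser session B overwrites it with "2"
print(record.value)               # -> 2, and session A's change is lost
```

Concurrent testing is what catches this: the test deliberately logs in twice and checks whether the application detects the conflict or quietly loses the first write.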
Page 18: A Guide to Cross-Browser Functional Testing

Application Under Test

Prevention

Presenter
Presentation Notes
So we’ve seen some of the potential issues. We can prevent them by testing. The developers are very likely to be aware of issues, and will be coding accordingly, but their ability to test on multiple platforms is limited. So we need to perform cross-browser testing.
Page 19: A Guide to Cross-Browser Functional Testing

Types of Cross-Browser Testing

• Standard Multi-Browser Testing
• Compatibility/Consistency Testing
• Multi-Version Testing
• Concurrent Testing
  • Single-Browser Concurrent Testing
  • Single-Browser Distributed Concurrent Testing
  • Multi-Browser Concurrent Testing
  • Multi-Browser Distributed Testing
• Multi-Version Concurrent Testing

Presenter
Presentation Notes
I did some research about what cross-browser testing means, mainly by talking to a lot of QA people in various organizations, and I got a lot of different replies. I tried to classify the different definitions they gave me as follows:

• Standard multi-browser testing.
• Compatibility testing – also known as consistency testing. Makes sure that the AUT looks the same regardless of the browser or browser version used to access it. Typically the tester will choose one browser to be the ‘baseline’ browser, ensure that all of the functional tests work correctly (typically by standard multi-browser testing), and then check that each page in the baseline browser is rendered correctly.
• Multi-version testing.
• Concurrent testing:
  • Single-browser concurrent testing – the same browser is used on the same computer at the same time. Most browsers support tabbed browsing, which allows multiple web pages to be opened within a single browser window. Typically, each tab is implemented as a separate process, so single-browser concurrent testing can be performed by opening up two tabs within the same browser.
  • Single-browser distributed concurrent testing – the same browser is used on different computers at the same time.
  • Multi-browser concurrent testing – different browsers are used on the same computer at the same time.
  • Multi-browser distributed testing – different browsers are used on different computers at the same time.

It is also possible to test that the AUT works with two or more different versions of the same browser on the same computer at the same time. This is called multi-version concurrent testing, and I won’t be talking about it, because hardly anyone does it (understandably). Let’s look at these types in more detail.
Page 20: A Guide to Cross-Browser Functional Testing

Application Under Test

Standard Multi-Browser Testing

Presenter
Presentation Notes
Standard multi-browser testing is a basic check that the application under test (AUT) supports more than one browser, for example, Internet Explorer, Firefox, Chrome, and Safari. This type of test can run on a single computer, first on one browser and then on another. Alternatively, it can run on different computers. The same test script is typically run on the different browsers without modification. It’s best if only one user is active during the testing, if that’s at all possible.
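The "same script, different browsers" idea above can be sketched as a parameterized loop. This is a hedged sketch: `run_scenario` is a placeholder for whatever the real driver (UFT, Selenium, or a manual checklist) actually does, and the browser names are just labels.

```python
# Sketch of standard multi-browser testing: one test scenario, run
# unchanged once per browser. `run_scenario` is a placeholder for the
# real test steps driven by a tool such as UFT or Selenium.

BROWSERS = ["IE", "Firefox", "Chrome", "Safari"]

def run_scenario(browser: str) -> str:
    # Placeholder for the real steps; here it just reports success.
    return f"{browser}: passed"

def run_on_all_browsers(scenario, browsers=BROWSERS):
    """Run the same scenario, without modification, once per browser."""
    return [scenario(b) for b in browsers]

results = run_on_all_browsers(run_scenario)
for line in results:
    print(line)
```

The key property is that the scenario function takes the browser as data rather than hard-coding it, which is what makes the script portable.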
Page 21: A Guide to Cross-Browser Functional Testing

Application Under Test

Compatibility or Consistency Testing

Presenter
Presentation Notes
Compatibility, or Consistency testing, is a check that the different browsers not only behave the same, but that they also look the same. One strategy is to pick a browser to be used as the baseline, and then perform standard testing on that browser. Once the baseline browser is shown to be working properly, it is used as the standard that other browsers are compared to. Usually, testers will perform a scenario on two or more browsers at the same time, where one of these browsers is the baseline. They then check that each screen is the same as the baseline. This type of testing is usually done manually, because the alternative is comparing bitmaps against the baseline. There are often subtle differences between the bitmaps that will cause comparisons to fail, even though the bitmaps appear to be identical to the naked eye. There are also acceptable differences, such as times and dates that can cause bitmaps to fail. And let’s not forget about maintenance. If the application’s UI changes, each bitmap affected by the change needs to be updated in the automated test.
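The bitmap-comparison fragility described above is easy to demonstrate. The sketch below is an assumption-laden toy, modelling screenshots as flat lists of grayscale values rather than real images: the usual workaround is to allow a per-pixel tolerance plus a small fraction of mismatching pixels, so anti-aliasing noise passes while real layout differences fail.

```python
# Toy model of tolerant screenshot comparison. Real tools compare actual
# bitmaps; here an "image" is just a flat list of grayscale values.

def images_match(baseline, candidate, pixel_tolerance=8, max_diff_ratio=0.01):
    """True if candidate differs from baseline only within tolerance."""
    if len(baseline) != len(candidate):
        return False
    mismatches = sum(
        1 for a, b in zip(baseline, candidate)
        if abs(a - b) > pixel_tolerance
    )
    return mismatches / len(baseline) <= max_diff_ratio

base = [100] * 1000
# Slight anti-aliasing noise on a few pixels: still counts as a match.
noisy = [100] * 995 + [104, 96, 103, 98, 101]
print(images_match(base, noisy))   # -> True
# A visibly different region: the comparison fails.
broken = [100] * 900 + [200] * 100
print(images_match(base, broken))  # -> False
```

Even with tolerances, acceptable differences such as rendered times and dates still break naive comparisons, which is why this testing is so often done manually.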
Page 22: A Guide to Cross-Browser Functional Testing

Mobile Apps…

Presenter
Presentation Notes
This is a screenshot of the HP Software Developers Blog as viewed on Chrome, and as viewed on an iPhone. The same URL was used to access the page on both devices, but the page that is returned is different. Just about everything about it is different, from the color scheme, to the text, to the controls on the page.

Clearly, standard multi-browser testing doesn’t apply to a site which significantly adapts itself depending on the requester. Neither does compatibility, or consistency, testing. The two pages not only look different, they also behave differently, so it would be challenging to say the least to create a single script that works on both pages, whether we are talking about automated scripts or manual scripts. And even if it could be done, the spaghetti code would make it virtually impossible to maintain.

But all is not lost. We can still write standard multi-browser tests across different mobile devices, and perform compatibility testing across different mobile devices. We just need to make sure we’re comparing apples with apples, and oranges with oranges. We should be able to write one script that works for both iPhones and Android phones, and another script that works on both an iPad and an Android tablet. Generally, if the form factor is the same, and the application is designed to be the same on each of the devices, it should be possible to test that the functionality and user experience are consistent across the different devices. Obviously, if the applications are tailored to the specific device, each device will need to be tested independently, although it may be possible to reuse some of the testing assets.
Page 23: A Guide to Cross-Browser Functional Testing

Application Under Test

Multi-Version Browser Testing

Presenter
Presentation Notes
This is a test that your AUT works on more than one version of a browser, such as Internet Explorer 9 and 10, or all versions of Firefox from 10 onwards. Typically, only one user is active during the testing. The same test script is used for the different browsers. In many cases, the tests must be run on different computers, since some browsers do not allow more than one version to be installed on the same computer at one time.
Page 24: A Guide to Cross-Browser Functional Testing

Application Under Test

Single-Browser Concurrent Testing

Presenter
Presentation Notes
Concurrent testing checks that your AUT works with two or more browsers at the same time. The same user might be logged into the different browsers, or different users could be logged in, depending on the aim of the test and the requirements of the AUT. Aims of the test include:
• Ensuring that there is no unexpected interaction between the browsers.
• Checking that the AUT allows a user to be logged in more than once. If a user can be logged in more than once, check that any updates made by one user are reflected in the other browser(s).
In this scenario, we are testing with two instances of the same browser on a single machine.
Page 25: A Guide to Cross-Browser Functional Testing

Application Under Test

Single-Browser Distributed Concurrent Testing

Presenter
Presentation Notes
In single-browser distributed concurrent testing, we have two different machines, both of which are using the same browser.
Page 26: A Guide to Cross-Browser Functional Testing

Application Under Test

Multi-Browser Concurrent Testing

Presenter
Presentation Notes
In multi-browser concurrent testing, we have two different browsers open on the same machine.
Page 27: A Guide to Cross-Browser Functional Testing

Application Under Test

Multi-Browser Distributed Concurrent Testing

Presenter
Presentation Notes
And in multi-browser distributed concurrent testing, we have two different machines, each of which is using a different browser.
Page 28: A Guide to Cross-Browser Functional Testing

Image Sources: http://en.wikipedia.org/wiki/File:Abraham_Lincoln_November_1863.jpg, http://en.wikipedia.org/wiki/File:Phineas_Taylor_Barnum_portrait.jpg

“You can test on all the browsers some of the time, and on some of the browsers all the time,

but you cannot test on all of the browsers all the time.”

Best Practices – Choose Your Browsers!

Presenter
Presentation Notes
You all recognize the guy on the left, right? It’s Abraham Lincoln, 16th president of the US. And the guy on the right is…? P. T. Barnum. He was a famous showman during the 19th century. He was also a prankster, and a Connecticut politician. In fact, he’s recorded as having met Lincoln at the White House – but with his museum of freaks, not as a politician.

There’s a very famous quote which one of these men said. Unfortunately, history is not clear about which of them – if either – said it. Any idea what the quote is? <Click> “You can test on all the browsers some of the time, and on some of the browsers all the time, but you cannot test on all of the browsers all the time.” OK, it’s virtually certain that neither of them ever said this. The actual quote is “You can fool all the people some of the time, and some of the people all the time, but you cannot fool all of the people all the time.”

Regardless of who said it, it’s true. You simply can’t test all of your functionality on all browsers all the time. You need to choose the browsers you want to test in order to get optimal coverage. Do you already know which browsers and browser versions you’re going to support? Do you know which operating systems you’re going to support? For example, if you’re developing in-house applications, your company may have a policy about the officially supported browsers.

What browsers do your users typically use? Even if you intend to support all of the major browsers, you might find that most of your users are only using one or two of them, so do most of your testing on those browsers. If you don’t know, take an educated guess based on the type of application and the user demographic. Validate the assumptions using analytics tools, like Google Analytics, to find out more about who is accessing your application. Analyze your application’s usage. What are the most important paths? An infrequently accessed and unimportant feature on a rarely-used browser is not worth testing if time is short or resources are stretched.
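The browser-selection advice above can be made concrete with a small coverage calculation: keep adding browsers in order of usage share until some target fraction of real traffic is covered. The share numbers below are invented for the example; in practice they would come from your analytics tool.

```python
# Sketch of choosing a test matrix from usage analytics: greedily add
# browsers by share until a target coverage of real traffic is reached.
# The usage numbers are made up for illustration.

def choose_browsers(usage_share, target_coverage=0.9):
    """Return the smallest prefix of browsers (by share) covering the target."""
    chosen, covered = [], 0.0
    for browser, share in sorted(usage_share.items(),
                                 key=lambda kv: kv[1], reverse=True):
        chosen.append(browser)
        covered += share
        if covered >= target_coverage:
            break
    return chosen

share = {"Chrome": 0.45, "IE": 0.25, "Firefox": 0.18,
         "Safari": 0.08, "Other": 0.04}
print(choose_browsers(share))  # -> ['Chrome', 'IE', 'Firefox', 'Safari']
```

The same idea extends to (browser, version, OS) triples; the point is to let measured usage, not guesswork, decide where the limited testing time goes.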
Page 29: A Guide to Cross-Browser Functional Testing

• Test the API layer(s) independently
• Test the GUI layer for regression automatically across all browsers
• If you can’t, use exploratory testing to check the baseline browser…
• …then use scripted testing to test the cross-browser compatibility
• Think about automating at least some of this testing

Image Source: http://commons.wikimedia.org/wiki/File:Mille-feuille_20100916.j

Best Practices – Methodology

Presenter
Presentation Notes
You should aim to test the API layer of the application independently. Automated unit, regression, and system tests should cover the API layer, and you can also simulate requests as they come from different browsers, such as REST calls. Do it as part of continuous integration; this is an agile approach. As well as finding issues in the API layer very quickly, it helps isolate problems in the UI layer, since we know the API layer is OK. <Click> Test the GUI layer for regressions automatically. Many organizations that do automatic regression testing still do some of it manually – I’ve seen 40% or more. It’s not ideal. <Click> Use exploratory testing to test the baseline. <Click> And use scripted testing to test the cross-browser compatibility, to make sure you cover all the screens on the different browsers.
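Testing the API layer as different browsers, as suggested above, mostly means varying the `User-Agent` header and checking the responses agree. This sketch stubs out the back end with a fake function (an assumption: in reality this would be an HTTP call to your service); the point is the consistency check, not the transport.

```python
# Sketch of API-layer testing independent of any browser: issue the same
# REST-style request with different User-Agent headers and check the
# responses are identical. `fake_api` stands in for the real back end.

def fake_api(path, headers):
    # A well-behaved API returns the same data regardless of User-Agent.
    return {"path": path, "status": 200, "data": [1, 2, 3]}

USER_AGENTS = {
    "Chrome": "Mozilla/5.0 Chrome/34.0",
    "Firefox": "Mozilla/5.0 Firefox/28.0",
    "IE": "Mozilla/5.0 (compatible; MSIE 10.0)",
}

def api_responses_consistent(api, path):
    """Call the API as each browser and check all responses match."""
    responses = [api(path, {"User-Agent": ua}) for ua in USER_AGENTS.values()]
    return all(r == responses[0] for r in responses)

print(api_responses_consistent(fake_api, "/travel/requests"))  # -> True
```

Running this in continuous integration gives the fast feedback the notes describe: if the API layer is known-good, a cross-browser failure must live in the UI layer.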
Page 30: A Guide to Cross-Browser Functional Testing

• Maximize the viewing area – it’s usually best to test in full-screen view
• Ensure the font size and page zoom are set correctly
• Keep the typical end-user configuration in mind when you create the baseline
• For compatibility testing, test on the same screen size and resolution as the baseline
• Turn off the browser’s auto-update feature
• Make sure any HTTPS certificates are installed
• Install any plugins the application needs

Image Source: http://www.flickr.com/photos/30592906@N00/1924189728

Best Practices –Prepare the Browser

Presenter
Presentation Notes
So before you actually start your testing, regardless of what type of testing you’re doing, there are some best practices you should follow to prepare your browser. This is an image of a browser that has just about every toolbar installed. They take up valuable screen space, and slow the browser down. I wouldn’t like to do my testing on this.

<Click> It’s easiest if you maximize the viewing area, by putting the browser into full-screen mode. Remove any unnecessary toolbars, and close down any panes that you might have opened, such as the F12 development pane. This is particularly applicable to manual testing, but also to automated testing, especially if you’re comparing bitmaps.

<Click> While you’re at it, ensure that the font size and page zoom are set up correctly, and that the screen is at the same resolution as the baseline.

<Click> Remember who your audience is. The baseline should be representative of the browser configuration your users are most likely to use.

<Click> When you’re testing, you’ll need to be comparing apples with apples. Make sure the browser you’re testing is configured the same way as the baseline, with the same zoom settings, etc. It’s easier and quicker to find the content to be compared if the browsers are configured the same way.

<Click> You need some control over the browser you’re testing. If the browser updates itself during your testing session, you run the risk of the tests not being able to run if the tool doesn’t support the new version, or of test errors which then need to be maintained. So turn off the auto-update feature, at least on some of your machines. You might want to leave it alone on a control machine, so that you get quick feedback about whether the tests run on the new version without compromising your test suite.

<Click> If the application uses SSL, make sure you’ve got the HTTPS certificates installed before testing.

<Click> And make sure any plugins the application might need are installed as well.

<Click> UFT REFERENCE: Ensure the browser is configured to run UFT tests. Chrome and Firefox have agents that are installed by UFT. If you’ve disabled these agents, automated tests won’t be able to replay, so make sure they’re enabled.
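Much of this preparation can be scripted rather than done by hand. The sketch below builds a launch-flag list for Chrome; the switches named are real Chrome command-line switches to the best of my knowledge, but treat the whole list as an assumption to verify against your browser version, and note that update policy is normally controlled outside the browser, not by a flag.

```python
# Sketch of scripting browser preparation: build a reproducible set of
# launch flags for a test browser. Verify each switch against your
# Chrome version; other browsers need their own equivalents.

def chrome_test_flags(start_fullscreen=True, quiet_background=True):
    """Build a list of launch flags for a reproducible test browser."""
    flags = [
        "--force-device-scale-factor=1",   # fixed zoom, helps bitmap comparison
        "--disable-extensions",            # no toolbars eating screen space
    ]
    if start_fullscreen:
        flags.append("--start-fullscreen") # maximize the viewing area
    if quiet_background:
        # Auto-update is normally disabled via OS policy, not a flag;
        # this switch just quiets background traffic during a test run.
        flags.append("--disable-background-networking")
    return flags

print(chrome_test_flags())
```

A test harness can pass these flags every time it launches the browser, so every run starts from the same known configuration instead of whatever the last tester left behind.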
Page 31: A Guide to Cross-Browser Functional Testing

Best Practices – Write Portable Tests

Presenter
Presentation Notes
Most of the cross-browser test types require that a single test script can be run for different browsers without modification. If we need to modify the script to work for a different browser, we end up with multiple copies of the script, each one capable of working for only one browser. This makes maintenance of the test script very difficult, because if the AUT’s functionality changes, each of the test scripts needs to be updated separately.

If we plan to execute our tests manually, it is relatively easy to write a script that will run on all browsers. The manual test will contain an instruction to open up a specific browser. From that point on, unless the AUT’s behavior changes depending on the browser, each step will apply to whatever browser is being used, without having to specify any browser-dependent characteristics.

But for automated tests, the situation is a bit more complicated. Tools such as UFT maintain a repository of the objects (Object Repository, or OR) that are in each screen of an application, using a combination of identifiers and properties of the object in order to uniquely identify it. When the script is called upon to perform an action on some object, it consults the OR in order to find the required object in the browser’s page. For many applications, the properties used to identify the objects in one browser are sufficient to identify the equivalent object in a different browser. So the same script and OR can be used on different browsers, with the only difference being the specific browser that is opened at the start of the test (and closed at the end of it). The script and OR can be created by recording a scenario using one browser, and then played back on the same or a different browser. Unfortunately, life isn’t always this simple.

Modern applications that use technologies such as AJAX (used to create asynchronous behavior in the browser) can pose a challenge to testers, for reasons including:
• The state of an application is difficult to determine – clicking the ‘back’ button might revert to the previous state of the AJAX application, or it might return to the full page visited prior to the current page.
• Dynamic page updates can interfere with user interaction, and consequently, with test script interaction. For example, the user might initiate an action (such as a search) by pressing a button, and continue working with the page. At some point, that action completes, and a popup is displayed while the user (or the script) is doing something else. This can confuse the test script.
• Objects are created and displayed on the screen dynamically, so these dynamic objects are not always recorded and stored in the OR. Furthermore, they might be displayed differently on different browsers, and consequently need to be located and identified using different criteria depending on the browser.
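The Object Repository concept described above, including per-browser identification criteria for objects that render differently, can be sketched as a lookup table with fallbacks. This mimics the idea only; it is not UFT's actual OR format, and the object names and properties are invented.

```python
# Simplified sketch of an Object Repository: each logical object name
# maps to identification properties, with optional per-browser overrides
# for objects that are rendered differently on different browsers.
# This imitates the concept, not UFT's real OR format.

OBJECT_REPOSITORY = {
    "login_button": {
        "default": {"type": "button", "id": "login"},
        # Hypothetical override: the same button gets a different id
        # when the page is served to IE.
        "IE": {"type": "button", "id": "loginBtn"},
    },
}

def describe(logical_name, browser):
    """Look up an object's identification properties for a given browser."""
    entry = OBJECT_REPOSITORY[logical_name]
    return entry.get(browser, entry["default"])

print(describe("login_button", "Chrome"))  # falls back to the default
print(describe("login_button", "IE"))      # uses the IE-specific override
```

The test script only ever refers to `"login_button"`, so when the page changes, the identification properties are updated in one place instead of in every script.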
Page 32: A Guide to Cross-Browser Functional Testing

Image Source: http://thestar.blogs.com/photoblog/2013/06/boris-spremo-.htm

Best Practices – Write Portable Tests
Avoid Browser-Specific Behavior

Presenter
Presentation Notes
Whenever possible, you should avoid making your script dependent on a specific browser. <Click> For example, links are displayed differently on Firefox and Chrome, using different font and color properties. If you design a test or component that runs a checkpoint on a Link object, and the test may run on different browsers, deselect the font, color, and backgroundColor properties in the checkpoint. Alternatively, you can define regular expressions for these properties, to enable the checkpoint to pass for different values. <Click> This is a photo of a construction worker, Larry Porter, working on the construction of the CN Tower here in Toronto. This is an excuse for me to bring something of Toronto into the presentation, as he’s sitting on part of the tower’s framework. Specifically, he’s working on the structure that’ll be used to lift the restaurant up. Let the framework do the hard work of identifying objects and their properties: UFT and Selenium have well-defined object models. Or you can create your own framework on top, to suit your own testing needs.
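The checkpoint advice above, deselecting browser-specific properties or matching them with a regular expression, can be sketched as follows. The property names (font, color, backgroundColor) come from the talk; the `checkpoint` function itself and the example link objects are illustrative, not UFT's API.

```python
# Sketch of a portable checkpoint: browser-dependent properties are
# skipped unless a regular expression is supplied for them.
import re

BROWSER_SPECIFIC = {"font", "color", "backgroundColor"}

def checkpoint(expected, actual, patterns=None):
    """Compare object properties, ignoring browser-specific ones unless
    a regex pattern is supplied for them."""
    patterns = patterns or {}
    for prop, want in expected.items():
        got = actual.get(prop)
        if prop in patterns:
            if not re.fullmatch(patterns[prop], str(got)):
                return False                # regex supplied but no match
        elif prop in BROWSER_SPECIFIC:
            continue                        # deselected: skip comparison
        elif got != want:
            return False
    return True

chrome_link = {"text": "Home", "color": "#1a0dab", "font": "Arial"}
firefox_link = {"text": "Home", "color": "#0000ee", "font": "Arial"}
print(checkpoint(chrome_link, firefox_link))             # -> True
print(checkpoint(chrome_link, firefox_link,
                 patterns={"color": r"#[0-9a-f]{6}"}))   # -> True
```

Either route keeps one checkpoint valid across browsers: ignore the volatile properties, or constrain them loosely enough that every browser's rendering satisfies the pattern.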
Page 33: A Guide to Cross-Browser Functional Testing


Best Practices – Write Portable Tests: Minimize Browser-Specific Behavior

Presenter
Presentation Notes
Accounting for Browser-Specific Behavior

This is an HP travel adaptor. I had no idea HP even made them. The point is, in the same way you need to adapt your gadgets to the country you find yourself in, you need to adapt your tests to the browsers they're running on.

If your test requires behavior that depends on the browser currently being used, and you can't find a generic way to write the script, you can use a parameter to track which browser is currently being used. <Click> The script's code can interrogate that parameter's value to determine how to behave. The value of the parameter will typically be assigned when the test starts. It isn't important what the actual value is as long as it's unique, although it's usually best to give it a name which reflects the browser. So you can define a string called activeBrowser, and set it to one of:
• "Firefox"
• "Chrome"
• "IE"
• "Safari"

If the browser is specified as part of the test, the string can be declared and assigned within the test. If it is passed in from outside, the most maintainable way to implement this is to define the parameter as an Action parameter. The value of the parameter can be queried in the script by using Parameter("activeBrowser"). Therefore, if you want to implement different behavior for different browsers, you can use the code shown on the slide.

However, it is strongly recommended to avoid the need for this kind of code, as it makes maintenance of the test script much more difficult. If you do have to use this kind of code, it's best to refactor it so that it can be reused by many tests, which makes the maintenance easier as it's in one place.
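The per-browser dispatch described above can be sketched as follows. This is Python rather than UFT's VBScript, the branch bodies are invented placeholders, and in UFT the value would come from Parameter("activeBrowser"):

```python
def close_dialog(active_browser):
    """Dispatch browser-specific behavior on the activeBrowser value.

    Keeping this dispatch in ONE shared, reusable function means the
    maintenance lives in a single place, instead of If/ElseIf blocks
    scattered through every test script.  (Handler bodies are placeholders.)
    """
    handlers = {
        "Firefox": lambda: "sent ESC key",
        "Chrome":  lambda: "clicked overlay close button",
        "IE":      lambda: "closed modal dialog window",
        "Safari":  lambda: "sent ESC key",
    }
    try:
        return handlers[active_browser]()
    except KeyError:
        raise ValueError("unknown browser: " + active_browser)
```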
Page 34: A Guide to Cross-Browser Functional Testing


Browser("Search").Page("Search").WebEdit("q").Set "Hello world!"

Best Practices – Write Portable Tests: Hide the HTML Implementation

Presenter
Presentation Notes
Some web applications employ a custom implementation of a page's content depending on the browser used to access the site. Although the page itself might look similar when rendered in the different browsers, the underlying implementation could be significantly different. This makes it virtually impossible to write a truly portable script.

A way around this in UFT is to use the Object Repository Manager to create a separate object repository for each browser. The script then selects the appropriate repository depending on the browser being used when the test runs. Once it's been selected, the rest of the script can be portable, and any browser-specific or implementation-dependent behavior is taken care of by the object repository. In this example, there's a column in the data table for the browser, and there's an object repository available for each browser. Once this piece of code has been executed in the test, the rest of the script can be browser-independent.

Although this enables the script itself to be portable, the disadvantage of this approach is that changes in the application will require each of the object repositories to be modified, which complicates the maintenance of the test.

<Click> Selenium allows you to hide the HTML implementation by using Page Objects. This provides a way of interacting with an object that represents the page, rather than the page's implementation.
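The Page Object idea can be sketched in a few lines. This is an illustrative Python sketch in the spirit of Selenium's Page Object pattern; the class name, locator values, and per-browser differences are all invented for the example:

```python
class SearchPage:
    """Page Object for a hypothetical search page.

    Tests call search() and never touch the page's HTML.  If the markup
    differs per browser, only the locator table changes; every test that
    uses this class stays portable.
    """

    # Per-browser locators for the same logical field (illustrative values).
    LOCATORS = {
        "Firefox": ("name", "q"),
        "Chrome":  ("name", "q"),
        "IE":      ("id",   "searchBox"),
    }

    def __init__(self, driver, browser):
        self.driver = driver
        self.by, self.value = self.LOCATORS[browser]

    def search(self, text):
        # With real Selenium this is driver.find_element(by, value).
        element = self.driver.find_element(self.by, self.value)
        element.send_keys(text)
```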
Page 35: A Guide to Cross-Browser Functional Testing


[Slide diagram: two execution orders for Tests 1–3 across browsers – run all tests to completion on one browser before moving to the next browser (typical for system tests), OR run each test on every browser before moving to the next test (typical for unit tests).]

Executing Tests Sequentially

Presenter
Presentation Notes
So you've written some tests. Now you need to run them. Do you run all of the tests on one browser, then repeat the same tests on the next browser? Or do you run each test once on each browser, and when you've exhausted the browsers, move on to the next test and run that on each browser?

The most common technique is to run each test on all of the browsers, and then move to the next test. The advantages of this approach are as follows:
• If there's an error on one of the browsers, it's easy to compare results with the other browsers.
• Each test can be responsible for opening the browsers, rather than having a separate driver which runs externally.

The biggest disadvantage is that each test is atomic and needs to set up its own browser context. So if a test needs the browser to be at a certain page, having navigated through a specific path, the test needs to make sure that happens.

Both approaches are valid; it depends on what you're testing. Unit tests tend to use the bottom approach, of running each test on each browser before moving to the next test. System and integration tests will benefit from the top approach of opening the browser and running the longer scenario to completion on it.

Situations like standard multi-browser testing typically require that the test be run once for each browser, as follows. For each browser:
• Open the browser
• Run the test scenario to completion
• Close the browser

There are a number of different ways this can be achieved. This section describes the most common methods.

Each Script Opens and Closes its Own Browsers

In this scenario, each test script starts the same way: one or more reusable actions are called, which perform the setup for the test, and then the main test script is run. The test script is data driven via a standard global table, and each iteration runs on a different browser. You should create a test which will be used to maintain some general reusable actions that will be used by the scripts.
These actions might include:
• Open Browser
• Login
• Close Browser

The Open Browser action will contain a line which looks something like this:

SystemUtil.Run DataTable("BrowserName", dtGlobalSheet), "http://www.hp.com"

This line depends on a column in the global data table called "BrowserName", which should look something like the table shown on the slide. You will need to make sure that each test that uses these reusable actions has a "BrowserName" column in its global data table, and that this data table has the relevant values. This will need to be done manually, and if you add a new browser, you will need to add it as a new row to the global table of each test. The global table will typically contain additional data that you need to maintain to account for differences between the browsers. This additional data will also need to be included in each test that uses these reusable actions.

The advantage of this method is that all of the setup and teardown is contained within each test's report, making it easier to understand exactly what happened during the test. There are also a number of disadvantages:
• Each test needs to implement the same data in the global data table, which results in the data being difficult to maintain. A way around this is to use a mechanism for keeping a single copy of the data, which is then associated with each test prior to it running. You can use HP ALM's Data Management feature to create a data table in ALM and associate the table with each test. Although some management is necessary when creating a test set, it allows a single copy of the data to be maintained.
• Each test needs to contain calls to each of the reusable actions in the correct order, in order to open the browser, set it up, and close it. If you use HP ALM, you can create a 'template test' which contains all of the required steps (as well as any required add-ins, function libraries, and recovery scenarios).
All new tests created from the template test will automatically be configured according to the template test. This is the preferred method for creating tests which can run on multiple browsers on the same machine.

Scripts are Driven Externally

This method relies on a number of test scripts being run in sequence: a setup script is run which opens a specified browser, then the test script which contains the actual test is run, and finally a cleanup script is run. Mechanisms that can be employed to accomplish this include:
• Running the tests from a batch file
• Creating a test set in HP Quality Center or ALM
• Using UFT's Test Batch Runner to create a list of tests and run them in order
• Defining a job in a Continuous Integration system such as Jenkins, which runs the tests in the specified order

This has the advantage that the browser can be opened first, and then multiple test scripts can be run on the same browser. There are again a number of disadvantages:
• There is no single report that contains all of the information from all of the test scripts that were run.
• If the tests need to pass information to each other (e.g. the name of the browser that was opened), the tester will have to create a mechanism to transfer the data (e.g. the test that opens the browser writes it to a file, and the script that contains the actual test reads it from the file). This process is difficult to maintain.
• Dependencies between tests are introduced. A test might rely on an earlier test putting the browser into a specific state, which violates the principle that tests should be independent. Not only is it difficult to document these dependencies, it is also difficult to ensure that they are enforced.

This method is not recommended. Use the method described in the previous section ('Each Script Opens and Closes its Own Browsers') instead.
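The open/run/close sequence above can be sketched as a simple driver loop. This is a Python stand-in, not the UFT implementation: the session dict stands in for SystemUtil.Run and the Close Browser action, and the scenario callable is a placeholder for the actual test steps.

```python
def run_on_all_browsers(test_scenario, browsers):
    """Standard multi-browser sequence: for each browser, open it, run the
    scenario to completion, close it.

    In UFT this would be one data-driven test iterating over a
    "BrowserName" column in the global data table, with reusable
    Open Browser / Close Browser actions around the scenario.
    """
    results = {}
    for browser in browsers:
        session = {"browser": browser, "open": True}   # stand-in for opening
        try:
            results[browser] = test_scenario(session)
        finally:
            session["open"] = False                    # stand-in for closing
    return results
```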
Page 36: A Guide to Cross-Browser Functional Testing


[Slide diagram: Test 1 and Test 2 driving two browsers open side by side on the same machine]

Concurrent Testing – Multiple Browsers on the Same Machine

Presenter
Presentation Notes
Concurrent testing checks that your AUT (application under test) works with two or more browsers at the same time. The same user might be logged into the different browsers, or different users could be logged in, depending on the aim of the test and the requirements of the AUT. Aims of the test include:
• Ensuring that there is no unexpected interaction between the browsers
• Checking that the AUT allows a user to be logged in more than once. If a user can be logged in more than once, check that any updates made by one user are reflected in the other browser(s).
• Checking that the AUT prevents a user from logging in more than once, and that the browser behaves correctly when a user attempts to log in twice. Note that some applications will not allow the second login, while other applications will allow the second login and log the first session out.

Some advanced concurrent testing scenarios require the test to access more than one browser on a single machine, and interact with these browsers in turn during the test. For example, a web application might require that if one browser updates data in the application, another browser should reflect the change almost immediately. To test this, we could theoretically write two separate tests, one for each browser, and add a mechanism to synchronize the tests. But since UFT only allows one test to run at any given time on the same machine, a solution is to write a single test which consists of a series of actions as follows:
1. An action to open the browsers
2. Multiple portable actions that perform small pieces of functionality in the browser specified by an action parameter passed to the action each time it's called. This allows, for example, a 'login' action to be called twice, once for each browser.
3. An action to close the browsers

The test should be repeated so that the purposes of the browsers are swapped.
Therefore, for example, if we wanted to test a scenario in which Chrome is open with the user at screen A, and then Firefox is opened and the user executes a flow that leads to screen B, we should repeat the test with the order swapped, so that Firefox is opened at screen A and the flow leading to screen B runs in Chrome. This is usually achieved using multiple iterations of the test, where each iteration opens the browsers in a different order.

It should be noted that scenarios involving executing browsers in parallel are not common for automated testing, and are usually tested only in organizations that have reached the most mature levels of test automation. Most organizations test these scenarios manually.
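Enumerating the swapped orderings is a small piece of code. A minimal Python sketch, where each ordering would correspond to one iteration of the test:

```python
from itertools import permutations

def browser_orderings(browsers):
    """Generate every opening order for a concurrent-browser scenario, so
    the test can be repeated with the browsers' roles swapped (e.g. Chrome
    at screen A and Firefox driving to screen B, then the reverse)."""
    return list(permutations(browsers))
```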
Page 37: A Guide to Cross-Browser Functional Testing


Independent tests

Image Source: https://wiki.jenkins-ci.org/display/JENKINS/Lo

Concurrent Testing – Multiple Browsers on Different Machines

Presenter
Presentation Notes
There are two situations which require running tests on different machines with different browsers. This section explains them both.

Independent Tests

The requirement is to run the tests on the different machines independently, such that the test running on a browser on one machine can run independently of the tests on the other machines. Each test can start and finish in its own time, and doesn't need to wait for or synchronize with any of the tests running on the other machines. In this scenario, the following methods can be used to execute them:
• Creating a test set in HP Quality Center or ALM, and assigning a different machine to each test in the set. Furthermore, the tests can be scheduled to run in parallel.
• Defining a job in a Continuous Integration system such as Jenkins, where each test can be run on a different slave node.
• Using Selenium Grid to run Selenium tests on different nodes.
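The independent scheduling described above can be sketched as a parallel dispatcher. This is a Python stand-in only: dispatch() is a placeholder for actually remoting a test to an ALM host, a Jenkins node, or a Selenium Grid node, and the machine names are invented.

```python
from concurrent.futures import ThreadPoolExecutor

def run_independent(tests, machines):
    """Run one test per machine in parallel; no test waits for or
    synchronizes with any other, which is what makes this the simple case.
    """
    def dispatch(test, machine):
        # Placeholder for remote execution on the named machine.
        return (machine, test())

    with ThreadPoolExecutor(max_workers=len(machines)) as pool:
        futures = [pool.submit(dispatch, t, m) for t, m in zip(tests, machines)]
        return [f.result() for f in futures]
```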
Page 38: A Guide to Cross-Browser Functional Testing


Application Under Test

Dependent tests

Single-Browser Distributed Concurrent Testing

Presenter
Presentation Notes
Dependent Tests

It becomes even more complicated if the situation involves a test that drives two browsers on two different machines, with dependencies in the test. An example of this situation might be a web application that requires that if one browser updates data in the application, another browser should reflect the change almost immediately. The implication for testing is that the test on one machine needs to wait for (and get results from) the browser on the other machine before it can continue.

One method that can be used to implement this is to configure a master test which drives slave tests on the remote machines running the browsers, such that each of the tests on the remote machines writes a notification to some file after each step and waits for a signal to continue. Typically, the master test will monitor the file and wait for the remote tests to write to it. Once the remote tests have written to it, the master test provides a signal for the slaves to continue. This can be done by writing to a different file, which the remote tests monitor.

Note that the remote computers must be configured correctly in order to run tests on them. See the section 'How to Run Automation Scripts on a Remote Computer' in UFT's Help for details. However, as mentioned previously, situations involving executing browsers in parallel are not common for automated testing, and are usually tested only in organizations that have reached the most mature levels of test automation.
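The file-based master/slave handshake described here can be sketched as follows. A minimal Python sketch under stated assumptions: the paths, timeout, and polling interval are illustrative, and a real setup would use a file share that both machines can reach.

```python
import os
import time

class FileSignal:
    """One half of the handshake: a slave writes a notification file after
    each step, then waits for the master's 'continue' file; the master does
    the mirror image with the roles of the two files swapped."""

    def __init__(self, path):
        self.path = path

    def signal(self, message=""):
        # Write the notification (or 'continue') file for the other side.
        with open(self.path, "w") as f:
            f.write(message)

    def wait(self, timeout=10.0, poll=0.1):
        # Poll until the other side's file appears, then read its contents.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if os.path.exists(self.path):
                with open(self.path) as f:
                    return f.read()
            time.sleep(poll)
        raise TimeoutError("no signal at " + self.path)
```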
Page 39: A Guide to Cross-Browser Functional Testing


The Future of Testing Blog

LoadRunner and Performance Center Blog

Application Lifecycle Management and App Transformation Blog

HP Software Developers Blog

Resources

Presenter
Presentation Notes
Page 40: A Guide to Cross-Browser Functional Testing


Thank you