
Stop Wasting Your Time When Testing eCommerce Sites

How to Get the Most Accurate Tested Results
By Keith Hagen of the ConversionIQ Team at Inflow

303-905-1504 | 1799 Pennsylvania St. #500 Denver, CO 80203 | GoInflow.com


Contents

Chapter 1: Introduction
See tests win, but don’t see benefit after implementing?
How we came to care about testing
Reasons to read this eBook
My epiphanies
The Test Window

Chapter 2: Why Tests “Go Wrong”
The most common reasons tests don’t perform well
Common reasons tests do not provide the correct results
Deciding what to test
QA for technical issues - cross browser testing
Avoid testing during seasons

Chapter 3: Test Configuration
Run the test at 100 percent of traffic
Equal weighting of variations
Test targeting

Chapter 4: Setting up Tests
Custom Code vs. Test Design Editors

Chapter 5: While Testing
Run a test long enough
Run a test with enough participants and goal completions
Segmentation
Don’t turn off variations while testing
Avoid making other site changes during the test period
Avoid traffic source changes
Avoid running conflicting tests
Use 7-day cycles
Visits vs. Visitors
The 3 Laws of Testing
Law #1 – Don’t interrupt someone’s experience
Law #2 - Test against the buying cycle
Law #3 - Count every conversion (or at least most of them)
Determine your site’s test cycle (how long to run a test)
The “Test Window”

Chapter 6: Test Analysis
Test tools
Determine the best reporting source (test tool or Google Analytics)
When to base results on your test tool’s reports
When to report out of Google Analytics (preferred method)
Test tool & Google Analytics data discrepancies
Visitors vs. Sessions
Time-zone differences
Timeouts
Delay of Google Analytics data
Sampling
Google Analytics test reporting
Segmentation

Chapter 7: The Test Elephant in the Room

Conclusion


Chapter 1: Introduction

See tests win, but don’t see benefit after implementing?

The reality is, you should test anything truly significant on your site, because the first rule of Conversion Rate Optimization (CRO) is “Don’t make things worse.”

But what if you are getting false wins? You can test all you want and never actually see results once the test winners are

implemented, which is really common.

The question is why? That “why” is the reason for publishing this eBook.

If you have a test winner, you should have a winner after you implement. That is if you are testing and analyzing properly.

However, from speaking with hundreds of organizations, it’s clear few people are testing properly.

The good news is, testing properly is easily achieved if you use the right methodology. It does not matter what test tool

you use if you are not making some basic, logical considerations about real people (that is who we are testing with) and

how they interact with the site.

Reasons to read this eBook

The goal of this eBook is to give you all the knowledge you need to run and report out on tests properly so you

get the TRUTH out of your tests and implement only true winners, in return making you more money. This is a

brain dump of evolved practices and procedures of a half dozen experts, taken from years of experience and

thousands of tests. This is the book I wish I would’ve had before running my first few hundred tests.

How we came to care about testing

“We’ve run over 100 tests with your test tool, and we’re not seeing anywhere near the same results once we implement

the winner,” I said to the representative of a popular test tool company. They were concerned, as concerned as I was,

because it was a common issue that no one seemed to be able to answer: Why, when the test results tell you one

thing, can’t you track similar results after you implement?

I’ve never focused on testing, as it has always been a means to an end. But when the means potentially derails the end,

you need to take control, and that is exactly where we all are with A/B testing today.


I firmly believe that you want to get insights about why customers are not converting on your site, or what would make

them act more—and THAT is 90 percent of the work involved in optimizing an eCommerce site. The other 10 percent is

coming up with a way to implement and test those insights.

For years I have toiled away on the 90 percent, striving to get the most powerful and actionable (testable) insights possible in the least amount of time because, after all, that is what I was being paid to do across the 80 or so eCommerce

sites I’ve worked with. During this time, however, I also came to realize that how we tested needed to change—not to be

better, but just to give the correct results.

Since 2010, I’ve had the benefit of constantly testing. I’d run dozens of tests prior to that on the sites I “owned,” but that

was just dabbling. In hindsight, I never ran and reported out on one test properly. As a result, I likely implemented

changes that hurt the businesses I was trying to improve. Over the last six years, however, I’ve personally run more

than 2,000 tests using seven different test tools and now oversee the running of 90-110 tests at any one time across 30

eCommerce sites.

ABOVE: A screenshot of the test board my team and I use to manage the “efforts” we are defining, developing, QA’ing, testing, reporting on and implementing. At the time of writing this, there were 10 tests in some stage of development, six in QA, four requesting launch from the clients and 101 tests actively running.


The reason we, as eCommerce consultants, test so much is threefold:

1. We run continuous improvement programs for eCommerce websites.

2. We have to validate our insights and recommendations.

3. We have to prove our value.

Believe me, if we could avoid testing, we would! It’s a lot of hard work and extremely humbling. Imagine testing everything

YOU get paid to do, and then being evaluated, not just on a six-to-eight-week project, but throughout an ongoing

program that effectively never stops. In short, we’re “under the gun” to deliver results (and prove it) at every bi-weekly

meeting.

It’s because of this “under the gun” reality I’ve lived during the past six years that I’ve come to care so much about how

to test properly, because you can’t avoid results, especially in a continuous program where you are still around when

Year over Year (YoY) ROI analysis is being done.

My epiphanies

I’m in the epiphany business. I get to know a company, its eCommerce interactions, its users, their experiences with the

site, their needs, why they don’t purchase and what would make them buy more. I don’t make a single recommendation

until I’ve had compelling epiphanies about what to do.

Like my epiphanies around eCommerce, what I’ve realized and learned, through all those tests, is that testing is not as

simple as most bloggers, eCommerce publishers, test tool vendors and conference speakers make it out to be.

In fact, simply “following the lines” of a test report, without advanced testing tool configurations and targeting, will often

yield the wrong results. There are just too many factors to consider and too many considerations to factor before you

can really get to the TRUTH of any test.

The Test Window

In this eBook, I’ll introduce the methodology I call the “Test Window.” This is how

you will not only find the TRUE outcome of your testing, but also gain insights to

further drive optimizations and profit for the weeks, months and quarters ahead.

The Test Window is a simple approach to reporting out on tests so that you can be

more certain about which variation won and what degree of impact you are really

looking at.

I do have to warn you that the Test Window methodology will be work, but it will be

worth it as it yields more accurate and insightful conclusions. It will also make you

question your test tool’s reports, which you should.


Chapter 2: Why Tests “Go Wrong”

The most important thing to know about testing is that it is very easy… to get wrong.

The most common reasons tests don’t perform well:

1. Test ideas are not based on real insights.

2. The tests selected to run are not prioritized by potential.

3. Test ideas are not validated with analytics or other sources.

4. Other site changes were made.

5. The test had technical issues.

6. Traffic sources changed.

7. Conflicting tests were started or ran.

You can see that the top three reasons listed all have to do with what is selected to be tested, while the remaining

issues involve management of how and when the tests were run.

Beyond not getting a “winner” for the reasons mentioned above, it is possible that the results being shown in testing are

just not true, despite what the test tool may be showing.

Common reasons tests do not provide the correct results:

● Not enough participants

● Not run long enough

● Seasonality

● A variation is turned off during the test

● The sample rate is less than 100 percent of traffic

● Not interpreted against the consumer’s buying cycle

● Test is not accurately measured

Unfortunately, most tests that appear to be winners, but are not, don’t get identified and end up being time-wasters at best or, worse, end up hurting sales through the very same effort that was thought to be improving them.

The reasons for the disparity in results are normally related to not having tested and reported out on the test properly.

Still other times, a test winner will show a positive impact once implemented, but in the end hurt long-term sales (by

negatively impacting the pipeline), or sales coming in on a particular channel (i.e. CPC). While this is completely avoidable through proper post-test analysis, the work to ensure that a win is truly a win is seldom performed.


Deciding what to test

Let Insights Lead, Not Your Leaders

Insights drive results. Testing without insights is like throwing spaghetti

against the wall hoping it sticks (hence the term “Spaghetti Testing”). An organization spends precious time and resources on testing, so don’t squander it with Spaghetti Testing.

Thinking back to the eCommerce tests I have conducted the last few years,

the tests that ran without insights might have won about half the time

while we know those with insights win nearly 90 percent of the time (and

those that do lose, usually lend insights to later winners).

So how to get your insights? You get close to the customer. How do you do that? You don’t care.

What you do care about is what I call the Insights Trifecta:

1. The Time to get the Insight

2. The Quality of the Insight

3. The Ability to Act on the Insight

You want to quickly get quality insights that you can act on. Again, you don’t care how you do it, but should think about

how you can best get good insights you can test without weeks of waiting on developers or until more research is done.

Once you have real insights, you will find they drive your testing, as there is no one in the room (including “the boss”)

who will stand in the way of testing compelling insights.

It might sound harsh to say insights should lead and not your organization’s leaders, but the reality is good leaders will

be the first to agree to prioritize an actionable, impactful insight that has great potential, over that leader’s own whims,

fancy or gut instincts.

Here are some quick ways to get quality and actionable insights that drive website testing:

1. User testing

2. Heuristics (try looking at your own site from the perspective of different customers and in the same frame of mind they are in when they are there). This takes time and energy, but is the method you can do right now.

3. Online polls and surveys

4. Site feedback forms

5. On page analytics (heat-maps)

6. Web analytics (i.e. Google Analytics)

7. User screen recordings

Those are listed in the general order I find them valuable. You should never come to a conclusion without gaining

the insight or evidence of it in at least two separate sources (i.e. user test and Web analytics).


QA for technical issues - cross browser testing

Often a test treatment will render poorly or differently in a particular type of browser. For this reason, you want to try to resolve rendering issues across all major browsers and versions for the device types you are targeting. The first issue

here, of course, is that you don’t want to show a broken test page or element to users because it will affect the results

of the test. The second issue is you want a balanced test, conducted across the same user base that uses your site.

Most test tools will allow you to exclude certain browser/device/version mixes so that your test does not show to users

on those device/browser/version mixes. The reality is that removing browsers from being tested may influence your test

results, given that a browser may be disproportionately representing one of your buyer segments more than others.

For instance: Below is a simple button test unintentionally rendering with a blue border around a button on an

iPad Mobile Safari - despite it working on all other device/browser combinations. This rendering issue alone could

be enough to make a winning test into a neutral one that is dismissed and ineffective.

So make sure your test variations work in as many browsers as possible. There are plenty of cross-browser testing tools

out there (we have used crossbrowsertesting.com for several years now and recommend it).

Avoid testing during seasons

The Black Friday/Cyber Monday period and other peak periods are a great time to test peak-period hypotheses (i.e. does

free shipping work?). However, it is not a great time to test most treatments. While some seasons may not be truly

representative of the website’s typical traffic, the most common scenario is that user intention to purchase is so high,

people are far more likely to purchase regardless of the test variation and the changes will not show the impact they

normally would had the treatment been more of a factor.


Chapter 3: Test Configuration

Run the test at 100 percent of traffic

Today, many purchases online involve more than one device or one browser (i.e. researching on a smartphone, then

purchasing on a laptop). However, test tools are limited to tracking a user on a single browser/device combination (See

“The Testing Elephant in the Room” section later). This means that someone who sees one variation of a test may come

back to purchase on another device and be provided another variation.

Showing a variation more often than another will give it the advantage since it is more likely to be seen with more continuity by those using more than one device or browser. This is called “continuity bias,” and to avoid it during testing, it

is recommended you run tests at 100 percent of traffic and split that traffic evenly between variations.

When less than 100 percent of traffic is sampled for a test, the result (in today’s cross-device world) is that the control

will be served more often than the variations, thus giving it the advantage.

For instance: If a user visits your site from work and is not included in a test because you are only allowing 50 percent of people to participate, then that same user goes home later in their buying cycle and gets included in the

test on their home computer. The user is then more likely to favor the control due to continuity bias.

Side Note:

This may be a good time to look at your past results of tests that were run with less than 100

percent of user participation and see if the Control won more than its fair (expected) share of tests.
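If you want to act on this side note, a minimal JavaScript sketch like the one below can compare the control’s observed win count against the share you would expect if every variation had an equal chance. The past-test data here is hypothetical; replace it with your own test history.

```javascript
// Hedged sketch: did the control win more than its "fair share" of past tests
// that ran at less than 100 percent of traffic? The entries below are hypothetical.
const pastTests = [
  { variations: 2, controlWon: true },   // A/B test, control won
  { variations: 3, controlWon: true },   // A/B/C test, control won
  { variations: 2, controlWon: false },
  { variations: 4, controlWon: true },
];

// Expected control wins if every variation had an equal chance of winning.
const expectedWins = pastTests.reduce((sum, t) => sum + 1 / t.variations, 0);
const observedWins = pastTests.filter((t) => t.controlWon).length;

console.log(`Expected control wins: ${expectedWins.toFixed(1)}`);
console.log(`Observed control wins: ${observedWins}`);
if (observedWins > expectedWins * 1.5) {
  console.log('The control is over-performing; sampling bias is worth investigating.');
}
```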

Equal weighting of variations

For the same reasons as mentioned above with running a test at 100 percent, you also want to ensure that all variations

are equally weighted (i.e. testing four variations including the control should see 25 percent of traffic go to each). Not weighting variations equally will cause one to be seen more than another, giving the variation with the greatest share of traffic an advantage.


Test targeting

Test results are easily diluted (the test will have to run longer) or contaminated by not targeting the test to the right audience.

The most common issues of inappropriate test targeting are:

1. Geo (i.e. including international visitors in a test that is USA specific).

2. Device (i.e. including tablets in a mobile phone test).

3. Cross Category Creep (i.e. test for Flip Flops spreads into all Sandal pages).

4. Acquisition vs. Retention (i.e. including repeat customers in a test for first-time customers).

If the right audience and pages are not targeted, it will take much longer to see any significant results with confidence, the results will suggest the change is not significant, and you will likely stop the test and miss out on the sales you were so close to.


Chapter 4: Setting up Tests

So, after much effort gaining insights into your site’s users and how they are behaving on your site, you’re finally ready

to set up a test that you feel compelled to run and is far more likely to have a positive impact against your eCommerce

objectives. Here are the things you’ll want to consider in setting up a test so it yields valid and actionable results.

Custom Code vs. Test Design Editors

After trying to set up a few tests via any test design editor, you may find that the test treatments do not render or behave

as you expected across all browser/device combinations.

While design editors hold great promise for “anyone” to be able to set up a test, the reality is there is only a narrow

range of test types (i.e. text only changes) that can be done through a test design editor alone, without the custom code

required to have them work well and render consistently across browser types and versions.

The best practice is to write custom code because most modern websites have dynamic elements that visual editors

can’t identify properly.

Here are some specific cases that visual editors can’t detect (a code sketch follows this list):

1. Page elements that are inserted or modified after the page has been loaded, such as some shopping cart buttons, Facebook like buttons, Facebook fan boxes and security seals like Norton.

2. Page elements that change with user interaction, such as shopping cart rows changing when users add or remove items, reviews and page comments.

3. Responsive websites that have duplicated elements, such as sites with multiple headers (the desktop header is hidden for mobile devices and the mobile header is hidden for desktop computers).
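As a rough illustration of the kind of custom code this calls for, here is a minimal JavaScript sketch (not tied to any particular test tool) that waits for a dynamically inserted element before applying a treatment. The '.add-to-cart' selector and the replacement button text are hypothetical.

```javascript
// Hedged sketch: apply a variation change to an element a visual editor can't
// see because it is injected after the page loads.
function whenElementReady(selector, callback) {
  const existing = document.querySelector(selector);
  if (existing) {
    callback(existing);
    return;
  }
  // Watch the DOM until the element shows up, then stop observing.
  const observer = new MutationObserver(() => {
    const el = document.querySelector(selector);
    if (el) {
      observer.disconnect();
      callback(el);
    }
  });
  observer.observe(document.documentElement, { childList: true, subtree: true });
}

// Hypothetical treatment: change the text of a dynamically inserted cart button.
whenElementReady('.add-to-cart', (button) => {
  button.textContent = 'Add to Cart – Free Returns';
});
```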


PRO TIP: If you are like a lot of advanced users (agencies, enterprises and mid-market teams) who are comfortable with coding and run many advanced tests, you’ll want a robust code console to be able to directly edit code or copy-paste code from your dev environment. Code editors such as the VWO Code Editor (which we use) have the advanced functionality to view and edit the JavaScript code that is generated when you make visual changes to a web page. This gives you the best of both worlds, as long as you have the expertise to leverage its power.

It also allows you to add custom JS and CSS code while creating variations.

While in the beginning you may be able to avoid running complex tests that require custom code, you will eventually

graduate to a level of testing that demands it, thus requiring a front-end developer to set up the tests you will want to run.


Chapter 5: While Testing

We’ve covered what to consider before you test, but the truth is a test can be running well only to be contaminated by

those running it. Below is a short but important list of things to remember while running a test. Failing to do any of

these things will change the result of the test to some degree and make the confidence level of the winner negligible.

Run a test long enough

The most common reason I can cite for why tests are run improperly is because they have not been given a chance to

run long enough. The reasons for this vary from not knowing better to succumbing to the pressure to get a result quickly. Whatever the reason, not running a test long enough will rob you of the truth, so you might as well have not run the

test in the first place.

A test should be run for at least a full seven-day cycle PLUS the length of the buying cycle. This is due to purchase behavior being different on different days of the week. While daily variations differ for each site, there is normally a weekly pattern.

For instance: Your site sells children’s bicycles, and your analytics tell you that 14 percent of purchases occur

between four and seven days after the user’s first visit to the site. From analysis, it’s discovered that a significant

number of Monday purchases come from weekend research, and Tuesday and Wednesday purchases include a high proportion of purchases from same-day visitors (who are trying to get the product by the weekend). Therefore, any

test that starts on Monday and ends on Thursday will be biased toward the test variation that is preferred by the

spontaneous visitor (i.e. homepage highlight of bikes less than $100). However, this may not really be a winner if

the rest of the weekday's results were included (where purchasers are more likely to be looking for quality and may be turned off by a “discount site”).

It is advised that you let your test run for at least two weeks.
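The rule above boils down to simple arithmetic. The helper below is a minimal sketch; the example numbers are assumptions you would replace with figures from your own analytics.

```javascript
// Hedged sketch: minimum test length = full 7-day cycles + the site's buying cycle.
function minimumTestDays(fullWeekCycles, buyingCycleDays) {
  return fullWeekCycles * 7 + buyingCycleDays;
}

// Example: two full week cycles plus a four-day buying cycle.
console.log(minimumTestDays(2, 4)); // 18 days
```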

Run a test with enough participants and goal completions

Naturally, the more variations a test has, the more participants it will need. Consider 100 conversions per variation to

be a MINIMUM, and only after the test has run long enough (see above) and other criteria have been met (see below).

If you’re using a next-generation test tool like VWO, which uses SmartStats, a Bayesian-powered statistics engine, you can have more confidence in your test tool’s results when looking within the tool itself. SmartStats produces faster predictive results between A and B variations, using potential loss to decide when to end tests.


How do you estimate how long to run your test? Your test tool will often have a handy test duration calculator, such as the one from VWO shown below.


If you’re not using a test tool, we suggest you still estimate your test duration upfront. This tool from Analytics Toolkit may also be helpful.
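If you have neither calculator handy, a common rule-of-thumb sample-size formula (roughly 80 percent power at 95 percent confidence; not the exact formula any particular calculator uses) can give you a ballpark duration. The inputs in this sketch are hypothetical.

```javascript
// Hedged sketch: rule-of-thumb visitors needed per variation, then days to run.
// p: baseline conversion rate, relativeMde: smallest relative lift worth detecting.
function estimateTestDays(p, relativeMde, dailyVisitors, variations) {
  const absoluteDelta = p * relativeMde;
  const perVariation = Math.ceil((16 * p * (1 - p)) / (absoluteDelta * absoluteDelta));
  const totalVisitors = perVariation * variations;
  return Math.ceil(totalVisitors / dailyVisitors);
}

// Example: 3% baseline, detecting a 10% relative lift, 5,000 daily visitors,
// control plus one treatment. Round the answer up to full 7-day cycles and
// add your buying cycle on top.
console.log(estimateTestDays(0.03, 0.10, 5000, 2)); // roughly 21 days
```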

Segmentation

When judging whether you have enough goal completions, don’t forget to consider segmentation. If your site receives a good amount of paid traffic, for example, you will want to make sure you receive enough goal conversions JUST FOR paid

traffic, since implementing a treatment that performs poorly for paid traffic is likely a loser for your site, despite it being

an overall “winner.”

Don’t turn off variations while testing

It seems logical that to shorten the time needed to test, one can turn off losing variations. In fact, most testing software

typically makes this very easy. The reality, unfortunately, is that turning off a variation can skew results, making them

untrustworthy.

For instance: Let’s say you have four total variations (three treatments and the control). Each variation receives

25 percent of traffic. After 10 days, one treatment is turned off and from that point on, each variation receives 33

percent of traffic. Then you again turn off another variation, leaving each remaining variation with 50 percent of the

traffic.


As the example below shows, when the green variation was turned off, the control improved, just like it did at the start

of the test when it benefitted from spontaneous type people buying right away (the control was for a free shipping offer).

The control lifts again when the pink variation is turned off, showing that for a third time, when the mix of new and returning visitors shifts to include more new visitors, the variation (this time the control) disproportionately benefits since it does well at converting the first-time, spontaneous type of person. This graph would look very different if the control never saw those three bumps in conversion rate, and because the control is the baseline variation, the result of the winner

(blue variation) would be more clear and confident, and the test would not have had to run so long.

The above scenario is extremely common and at a surface level seems benign; however, the reality is that the differences in the variations themselves may create a case where the test is corrupted by changing the weight of the remaining

variations.

Often a variation is favored by different types of mindsets, such as the spontaneous or analytical. If one variation is

preferred over the other, changing the weight of the remaining variations will cause the variation favored by spontaneous people to suddenly improve, while the other variations, favored by more analytical people, will not see as much improvement until their buying cycle has concluded, perhaps days or weeks later.

Since we never know who likes a variation for what reason, the safest thing to do is not to eliminate any

variations.

Avoid making other site changes during the test period

While “code freezes” are most likely impossible to implement while testing, it is important to avoid any site changes that

could cause an issue while testing.

For instance: If you are testing a trust element like Norton’s Shopping Guarantee, avoid changes that may impact

trust of the site, including site style changes, other trust seals, header elements (like contact or shipping information) or any other site-wide “assurance” elements (i.e. chat).

Avoid traffic source changes

When there is a change in the distribution of traffic sources (i.e. paid search increases), test results will be unreliable

until the test participants brought in have had a chance to go through their entire buying cycle.


For instance: Paid search visitors may be less trusting and less sophisticated when it comes to the web. This

traffic source often responds well to trust factors like trust seals. Increasing paid traffic during the test may result

in a sharp increase in conversions that is not sustained as the test continues for a longer period and the number

of non-spontaneous visitors gets factored into the results.

Avoid running conflicting tests

It’s easy to run more than one test at a time; however, tests may conflict in how they impact users. It is common to see the results of one test change when another test is started or stopped. This is typically the result of the tests sharing the same

purchase funnel, or impacting the same concept.

If you are running more than one test, do a bit of analytics work to see how many people will be affected by both tests

(i.e. users common to the two pages involved in the separate tests). If it’s more than 10 percent, then you will want to

strongly consider how the two tests impact the user’s single experience. By using common sense and good judgement,

you and your team will be able to estimate which tests can be run at the same time.
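One rough way to do that analytics work is to pull the visitor IDs who viewed each tested page over a comparable period and measure the overlap. The JavaScript sketch below uses made-up IDs purely to illustrate the calculation.

```javascript
// Hedged sketch: estimate what share of users would be exposed to both tests.
function overlapShare(visitorIdsPageA, visitorIdsPageB) {
  const setB = new Set(visitorIdsPageB);
  const overlap = visitorIdsPageA.filter((id) => setB.has(id)).length;
  const total = new Set([...visitorIdsPageA, ...visitorIdsPageB]).size;
  return overlap / total;
}

// Hypothetical exports of client IDs for the two tested pages.
const share = overlapShare(['u1', 'u2', 'u3'], ['u2', 'u4']);
console.log(`${(share * 100).toFixed(1)}% of users would see both tests`); // 25.0%
```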

Use 7-day cycles

When testing, you most likely have to test against a full week cycle. This is because people often behave differently

during different parts of the week.

For instance: If your site sells toys for small children, your site’s reality might be that a lot of research traffic

occurs on the weekend when the children are available for questioning (i.e. “Hey Ty, what’s the coolest toy in

the world these days?”). Another reality for a toy site might be that often the “Add to Cart” button does not need

to get hit until Tuesday evening, given a lot of toys are not needed until the weekend when birthday parties are

typically held. Consider this and now ask yourself if a test run from Wednesday through Sunday (five full days with

lots of data) is enough?

The reality is, almost every eCommerce site (based on the more than 100 eCommerce analytics accounts I’ve done test analysis on) has a seven-day cycle. You may have to figure out which days to start and stop on, but it’s there.

Therefore, if you don’t use a seven-day cycle in your testing, your results are going to be weighted higher for one part of

the week than another. Using the toy store example above, ask yourself how valid a test that excluded Tuesdays would be. What if the test was started on a Wednesday and run 12 days to include two weekends but only one “best sales day” Tuesday?

Visits vs. Visitors

When running tests, make sure you test against visitors (aka users) not visits (aka sessions). In testing, you are asking

the question, “Did they buy?” with “they” being the keyword. You are not asking, “Did they buy on this visit? How about

this one?”


If your test tool reports test results in terms of sessions, then it is very possible the tool will tell you that your variation is

a loser when it could be a winner, leaving you wasting your time and costing you money every day you don’t have that

treatment on the site.

For instance: Let’s say you run a test involving trust, such as adding a shopping guarantee to the site. Let’s

assume that your site, like most eCommerce sites, sells more to returning visitors than first time visitors. It is very

possible, in this case, that improving trust on an eCommerce site will result in more people putting that site on

their “short list” and returning to the site later. Because the variation is more successful at getting people back to

the site, it is going to see an increase in sessions, and because it has more sessions than the control, its conversion rate in the session-based test may be lower and the test tool will tell you it is not performing as well as it

really is performing.

The fact is, a good portion of the tests you will run on your eCommerce site will result in improvements that bring visitors back to the site more often, and, therefore, using a testing tool that is based on sessions and using its reports to

judge results is risky and potentially wrong.

If you do have to use a test tool that is session based, make sure your tests are integrated with your analytics platform

and report out of analytics using visitors (users), disregarding your test tool’s reports.

The 3 Laws of Testing

The information so far in this guide included high-level best practices. Now we’ll get into how to test properly and why.

At the heart of testing properly is the “3 Laws of Testing.” These laws should not be broken when testing or they will

ruin the test results, making them null and void to the degree you can’t have confidence in them.

Imagine thousands of people visiting (offline) a national car dealership chain to see SUVs. They spend a few minutes

and walk out with nothing more than a “hello” to the hovering salesperson. Now imagine half of those people return after

having visited an average of four other dealerships, and test driving a dozen other SUVs. On this subsequent visit, the

buyers make a beeline to the SUV they have mentally placed on their “shortlist.” This time when they get there, they see

the salesperson who says, “All SUVs sold today come with an interior cleaning kit.” They immediately buy the SUV because that tiny perk “sealed the deal” just when they were the most susceptible to it. Now the dealership’s salespeople

tell the “higher-ups” after the first weekend that the tactic is working like a charm. Then the marketing department rolls


out a massive national campaign to promote the offer. The public’s response? Crickets – no increase in shoppers, no

higher level of sales on SUVs, nothing!

This scenario is easily explained when everyone realizes it was the timing and setting of the offer that worked, not

the offer in and of itself.

This type of situation occurs on a daily basis with eCommerce testing, only it all happens behind a browser and is buffered with analytical data. However, this time it is the test – and who is allowed in it – that are the culprits.

As you review these laws, you will realize that most tests are inherently giving inaccurate results, sometimes at the expense of the business, and just like the SUV example above, it is completely understandable.

The 3 Laws of Testing: 1. Stable Experience, 2. Buying Cycle, 3. All Conversions – together, these produce Valid Results.

Law #1 – Don’t interrupt someone’s experience

A test will be invalid if people who should not be in a test get into the test. This includes visitors returning later in

their buying cycle.

Letting returning visitors into a test is VERY likely to result in one of two scenarios:

Positive response: The returning visitors are more susceptible to the treatment than those visiting the site for the first time, who are still in their research phase, and may respond overly positively.

For instance: Below a variation (blue) has an awesome start and the control (orange) is doing nothing special.


In the above example the lines tell us we have a winner after 10 days (Aug. 9). You can imagine the test team sharing

early results after a few days, setting expectations and piquing interest from everyone.

Funny enough, if we take out the “awesome start” that existed when we were allowing returning visitors to complete

their four-day buying cycle while in the test, we get the chart below:

Simply excluding the positive response of returning visitors seeing the treatment (in this case a promo offer in exchange

for the user’s email), we can see how the control is now the potential winner, and implementing what was thought to

be the winner may have a negative impact on sales as long as the promo treatment is run.

Negative response: The other possible outcome of changing someone’s experience is when returning visitors see the

treatment and are now lost (i.e. navigation change) or confused (i.e. page layout test).

For instance: Below, a familiar pattern is seen when the test variation (purple) overtakes the control (green). This

often happens when a site’s returning visitors are allowed into the test and prefer the control because it has a

“continuity of experience,” resulting in the variation appearing to perform poorly until the initial group of returning

visitors exits the test. In the test below, it took roughly 12 days for this returning visitor bias to abate.


Most tests do not run longer than a week, so in the case of the above example, and other cases like it (this is a common

issue), unless returning visitors are excluded, the true result (a big win) will never be seen. Even worse, you may end

up implementing a losing variant and harming sales.

Law #2 - Test against the buying cycle

When looking at test results, a test must be run and analyzed against its buying cycle. This means testing a person

from their very first visit and all subsequent visits until they purchase. If you know that 95 percent of purchases

happen within three days of the user’s first visit, then you have a three-day buying cycle.

Your test cycle will be the number of weeks you test (you want to test in full-week periods) plus the full length of your

buying cycle added on so the last participant let into the test has an adequate chance to complete their purchase

(which is Law #3).

Law #3 - Count every conversion (or at least most of them)

If a participant has entered a test, their actions should be counted. This sounds obvious, but is seldom done well, resulting in inaccurate testing. In order to do this, a test needs to be left running so all visitors are given the chance to purchase

after entering a test. If a test is just “turned off,” participants in that test who have yet to purchase have been left out.

Since it is common to see one particular variation do well with returning visitors, leaving out these later conversions will

skew the test toward the variations that favor the less methodical type of buyer.

Determine your site’s test cycle (how long to run a test)

You likely have been involved in discussions about how long a test should run (I answer this question almost daily it

seems). The biggest factor in how long to run a test is your site’s test cycle.

Every site has a minimum test cycle for which a test should be run so that the 3 Laws of Testing are adhered to.

To find your site’s test cycle in Google Analytics, simply start with a segment like the one below where you define that

you want to view only users who had their first session during a one-week period. Then set a condition where transactions are greater than zero.


This type of segment will tell you when people whose first visit was that week, eventually purchased on your

site. You can start by looking at a range such as two months, then work backwards to figure out when 95 percent of

the purchases in that two months were. In the example below, the site has a three-week test cycle because 95 percent

of purchases for the two months occurred in the first three weeks from the beginning of the period you started tracking

purchases.

You may be wondering, “Why 95 percent?” This is a simple rule of thumb and, from experience, I have rarely seen the

final 5 percent of purchases change a test’s conclusions, however, I have seen the last 10 percent do it!
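If you prefer to work outside the Google Analytics interface, the same 95 percent rule can be applied to an exported table of transactions by days since the user’s first session. The numbers in this sketch are hypothetical.

```javascript
// Hedged sketch: find the day by which 95 percent of purchases have happened.
function daysToCover(transactionsByDay, coverage = 0.95) {
  const total = transactionsByDay.reduce((sum, t) => sum + t.transactions, 0);
  let running = 0;
  for (const { day, transactions } of transactionsByDay) {
    running += transactions;
    if (running / total >= coverage) return day;
  }
  return transactionsByDay[transactionsByDay.length - 1].day;
}

// Hypothetical export: transactions grouped by days since first session.
const exportedRows = [
  { day: 0, transactions: 400 },
  { day: 1, transactions: 150 },
  { day: 3, transactions: 80 },
  { day: 7, transactions: 60 },
  { day: 14, transactions: 25 },
  { day: 21, transactions: 15 },
];
console.log(daysToCover(exportedRows)); // 14 -> roughly a two-week buying cycle
```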

The “Test Window”

Based on the 3 Laws of Testing, it may be obvious what the “Test Window” is and how to run your tests properly. It’s

easiest to think of the Test Window in terms of steps.

Step 1: Only let new visitors into the test. This way returning visitors later in their purchase cycle will not skew results

and potentially set the test off to a false start. Get as many people into the test as possible.

Step 2: Don’t look at the test for a full seven days. If you don’t have a statistical winner at this point (most test tools

will tell you the test has reached 95 percent confidence), let the test run for another seven-day cycle and don’t peek.


Step 3: Turn off the test to new visitors once you have a statistical winner (at seven-day intervals). Turning off the

test to new visitors will allow the participants already in the test to complete their buying cycle. Leave the test running

for a full buying cycle.

Step 4: Report out on the test. To report out on the test’s overall results, you will simply look at your test tool’s test

report. Now, because you used the Test Window, you will be able to believe the results because:

1. Everyone in the test had a consistent site experience, spending it in the same test variation (no one seeing the

control on a previous visit only to later experience a treatment).

2. A full seven-day cycle was used so weekend days and weekdays were weighted realistically.

3. Every user (or 95 percent of them at least) was allowed to complete their buying cycle.

Test Window example: Below is a test run using the Test Window. We only allowed first-time visitors into the test.

You can see the first few days favored the control, which is common due to the control bias described earlier in this

document (even with limiting test participants to new visitors, there are still enough returning visitors from other devices,

browsers, etc. to create a bias). Once the test is turned off to new visitors and test participants are allowed to complete

their buying cycle, the variation gains strength.


Chapter 6: Test Analysis

Test tools

The Test Window example above is taken from Visual Website Optimizer (VWO), which makes it possible to use the

Test Window. However, the Test Window can be configured with most popular test tools. Here is an overview of how to

configure your test tool.

1. Allow only new visitors into a test at the beginning. To do this, set a site cookie to track whether a visitor is new or not. In your test platform, only include a participant in the test if their cookie tells you it is their first visit (a minimal cookie sketch follows this list).

2. Turning off tests to new visitors toward the end: To let a test “run out” so participants can complete their buying cycle, simply change your test tool’s targeting rules so it is configured to accept new test participants ONLY if they meet a specific condition that cannot be met (i.e. a URL parameter with a value of “NoNewVisitorsSept23”). Doing this will keep the test running only for those people already in it.
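Here is a minimal JavaScript sketch of the first-visit cookie described in step 1. The cookie name, its lifetime and the window.allowTestEntry flag are hypothetical choices; how you wire that flag into your tool’s targeting condition will vary by vendor.

```javascript
// Hedged sketch: a first-visit cookie your test tool's targeting can key off.
function isFirstVisit() {
  const seenBefore = document.cookie
    .split('; ')
    .some((c) => c.startsWith('returning_visitor='));
  if (!seenBefore) {
    // Mark this browser as "returning" for future sessions (365-day lifetime).
    document.cookie = 'returning_visitor=1; path=/; max-age=' + 60 * 60 * 24 * 365;
    return true;
  }
  return false;
}

// Only brand-new visitors become eligible to enter the test. Test tools
// typically set their own cookie once a visitor is bucketed, so existing
// participants keep seeing their assigned variation.
if (isFirstVisit()) {
  window.allowTestEntry = true; // hypothetical flag read by a custom targeting rule
}
```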

If you don’t have a sophisticated test tool that allows you to run a test using the Test Window, no worries, as long as you

integrate with Google Analytics.

Determine the best reporting source (test tool or Google Analytics)

Depending on what you are testing, and the site you are testing on, you may be best served by reporting out of either the test tool or Google Analytics. Here’s how to decide:

When to base results on your test tool’s reports

1. When your site is slow to load. Your test tool and your analytics data may greatly differ due to the test tool’s

calls to its server timing out. When this happens, good test tools will exclude the user from the test (“kick them

out” of the test if they had already been included on a previous page or visit), but analytics will still count those

visitors as test participants, not knowing the test tool has excluded them.

So, if your site loads slowly (even on one key page such as the cart), look to your test tool for reporting.

2. When you can determine a visitor is making their first visit to the site. This is done by looking at custom

site cookies.

3. When you can turn a test off to new visitors so as to allow returning visitors to complete their buying cycle.

4. When you have so much traffic that Google Analytics samples your data (more than 500,000 sessions

(visits) in any report period).

5. When your test tool can view results against key segments (i.e. paid search traffic) AND ANY business-critical segments you have already identified.


For Instance: A Web security company may want to know how men between the ages of 35-44 years with an

interest in “Internet & Telecom” performed on a test given this group is identified as a key influencer to all other

groups of buyers.

When to report out of Google Analytics (preferred method)

1. When your test is well integrated with Google Analytics to the point that the vast majority of data from your

test tool is accounted for in Google Analytics.

2. When your test tool does not allow you to apply the “Test Window” methodology.

3. When you need to report out on key segments that your test tool does not break data down by.

4. When your test period will not result in sampling or you have Google Analytics Premium and can access un-sampled data. Below is where Google lets you know if the data on which the test is based is sampled

(incomplete).

Test tool & Google Analytics data discrepancies

When looking at data between your test tool and Google Analytics, there are a variety of reasons the numbers will not

match up. For the most part, these problems are covered next and can be addressed so “apples to apples” comparisons are easier to make.

Visitors vs. Sessions

A lot of test tools report against unique visitors, so when someone comes back three times they are shown as a single visitor. In Google Analytics, however, reports are by default shown by session (visits), so you will see three sessions in Google Analytics for that one visitor in the test tool.

To resolve this, it’s necessary to be careful to report out by users in Google Analytics, not by the default of sessions. Generally, users (as Google Analytics calls them) are the better metric to use, as the reality is most sites do not convert most

visitors on one visit.

Note: See Step 5 of Google Analytics Test Reporting below for how to do this.

Time-zone differences

In some test tools, data is logged based on the time settings of the website visitor, not the website itself, while other test tools default the time of visit to GMT. This means you could see different daily results based on the time zone

your Google Analytics account is set to and what your test tool is set to.

Timeouts

The biggest issue in integrating a test tool with analytics is that some users may be assigned into the test initially and

then later dropped out due to the test tool timing out. This could be due to the users having connectivity issues, the test

running on a page that is slow to render, etc.

When a test participant times out of a test, they do not see the treatment any longer and, therefore, are no longer counted in the test. Your analytics platform will, however, still include them in the test and report out on their activity, regardless of the fact they acted outside of the test. In theory, any level of timeout renders your analytics data unreliable. It is best to look at tests while they are running and determine whether the number of test participants and conversions differs significantly between the test and analytics platforms.

Delay of Google Analytics data

When comparing the data between your test tool and analytics platform, remember that it typically takes up to a few

hours for Google Analytics to update all its custom data and events, so looking at the same day’s data may result in the

test tool showing more participants and conversions than Google Analytics. It is also entirely possible for data to be

delayed up to 48 hours.

Sampling

Google Analytics samples its data when more than 500,000 sessions are being used within any date range, which

means the sampled data you are looking at is really just an estimate. When possible, it is best to ensure your Google

Analytics data is un-sampled.

To see if your test data is being sampled, apply a custom “Advanced segment” and look to the top of the page:

Results based on 40.84 percent of all traffic (59.16 percent of traffic excluded from test).

Sampled test results are extremely volatile and should never be used for test reporting. It is possible to pull results week-by-week from the Google Analytics API and avoid sampled data, however, applying the Test Window method

to this requires expertise, special tools and potentially additional configuration of Google Analytics, which goes beyond

the scope of this eBook.

Google Analytics test reporting

When you are confident that your test tool is properly integrated with Google Analytics and have run a good Test Window within a good test cycle, you will want to report out of Google Analytics in order to get much greater insights as to

the results. Google Analytics will allow you to see the test results by key segments like paid and organic search traffic,

as well as any custom segments you depend on to figure out your KPIs.


The following are the steps that need to be taken to report out on a test using Google Analytics and the Test

Window.

Step 1: Look at the “User Defined” report in “Audiences.”

Step 2: Select the “Custom Dimension” of your test (passed in from your test tool).

Step 3: Apply an “Advanced Segment” that filters sessions based on the date of the user’s first session.

Above the “Advanced Segment” is created and impacts the data (new visits stop on Nov. 2).


Step 4: Navigate to the eCommerce report.

Step 5: You will need to look at this report, but by user, not session. To do this, create a custom report by user. Click

“Customize” at the top of your “canned report” and configure your report to look like this:


From this, your results will appear like so:

Step 6: This Google Analytics report, unfortunately, still bases the conversion rate shown on sessions, not users. Even

worse, the test results are based on a sample of 37 percent of visitors and, therefore, can’t be considered valid.

In this case, we are working with a Google Analytics Premium account and can export unsampled data with the “Advanced Segment” applied. From here we use the resulting spreadsheet Google Analytics provides to calculate the “User

Conversion Rate,” which you can do via a CSV export if your data is unsampled.

Here is the report Google Analytics gave us.

The number of users and transactions in the test have changed due to the data now being un-sampled. The number of transactions went up by 32 and the variation’s transactions decreased by seven. This illustrates how sampled data can change

your test results.
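For clarity, here is a minimal sketch of the “User Conversion Rate” arithmetic applied to an exported table. The row values are hypothetical and would come from your own unsampled export.

```javascript
// Hedged sketch: user-based conversion rate per variation from an export.
const exportRows = [
  { variation: 'Control',     users: 18250, transactions: 402 },
  { variation: 'Variation 1', users: 18110, transactions: 447 },
];

for (const row of exportRows) {
  const userConversionRate = (row.transactions / row.users) * 100;
  console.log(`${row.variation}: ${userConversionRate.toFixed(2)}% user conversion rate`);
}
```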

Step 7: To see results from segmentation data, simply change the “Advanced Segment” you are using to additionally

apply segmentation (i.e. Paid Search Traffic), then repeat Step 6 as needed.

Segmentation

It is advised that you segment your test by the following segments:

Paid Search Traffic: To ensure this PPC traffic is not “taking a hit.” In my experience, PPC is the most common channel to respond differently to tests. I call PPC traffic “skittish” and attribute it to the lower level of confidence users

have when they arrive from this channel.


If PPC traffic does not respond well to a test, it’s not just a wasted expense for every click once the “winning” variation is implemented; you also have to wonder what Google concludes when it sees those users come right back and keep looking.

Organic Search: Hurting conversions on this channel can potentially hurt your search rankings, or so it is commonly believed, since Google is thought to put so much focus on user experience now.

New visitors: “New visitors,” as termed by Google Analytics, essentially means “first-time visitors.” In fact, these same users, when they return to the site for a second visit, will be counted again as returning visitors. So, while you want to look at how new visitors respond to your test, realize that you are really seeing how people responded on their first visit, which is a decent proxy for the impulse shopper.

It is important to note that many “new” visitors will not actually be visiting for the first time; this is what I term “The Test Elephant in the Room” and discuss in Chapter 7.

Returning visitors: Looking at a test by “returning visitors” is invaluable, as it tells you which variations got more people back to the site (this is more pronounced when viewed using the Test Window). It also lets you view the results excluding the impulsive new-visitor segment, which often dominates early test results.

Visit count: In addition to looking at the “returning visitor” segment, it can also be enlightening to review test results by visit count. For this I like to compare results across visit counts: one (new visitor), two (first-time returning), three to five (highly considered) and more than five (most methodical).

Custom: No segment is potentially more important than one you already use as part of your core site analysis and/or KPIs. For instance, your company may sell shoes and have a brick-and-mortar presence in a particular market region. In that case, you’d likely want to see the test results for that market region, as well as for the additional goal of how many more people you may have driven into the store.
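For those who prefer to work from an export, here is a minimal sketch of these segment cuts, assuming a hypothetical per-user spreadsheet with columns variation, channel, visit_count and transactions (illustrative names, not Google Analytics’ actual headers):

```python
# Minimal sketch: per-segment User Conversion Rates from a hypothetical
# per-user export. Assumed columns: variation, channel, visit_count, transactions.
import pandas as pd

users = pd.read_csv("per_user_export.csv")

# Bucket users by visit count: 1, 2, 3-5, more than 5.
bins = [0, 1, 2, 5, float("inf")]
labels = ["1 (new)", "2 (first return)", "3-5 (considered)", "6+ (methodical)"]
users["visit_bucket"] = pd.cut(users["visit_count"], bins=bins, labels=labels)

# User conversion rate per variation within each channel.
by_channel = users.groupby(["channel", "variation"]).agg(
    users=("variation", "size"), transactions=("transactions", "sum"))
by_channel["user_conversion_rate"] = by_channel["transactions"] / by_channel["users"]

# The same calculation by visit-count bucket.
by_visits = users.groupby(["visit_bucket", "variation"], observed=True).agg(
    users=("variation", "size"), transactions=("transactions", "sum"))
by_visits["user_conversion_rate"] = by_visits["transactions"] / by_visits["users"]

print(by_channel)
print(by_visits)
```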


Chapter 7: The Test Elephant in the Room

Back in 2002-2008, the tests I ran were pretty good indicators of how the treatment would perform on a site. Then the

iPhone came out. Then Android phones and the iPad and so on.

The key to understanding this “elephant” is that the vast majority of test tools are dependent on cookies, as is Google Analytics, and cookies track users by device and browser. As soon as someone changes their browser or their device, the test tool and Google Analytics cannot track them. This happens when:

1. People go from their work computer to their home computer, or vice versa.

2. People go from a mobile device to their computer, or vice versa.

3. People change browsers (e.g. start on Safari but use Chrome on their next visit).

Testing is very analytical, and the Web is full of great articles that discuss statistical significance, test confidence, standard deviation, empirical bias and so on. Unfortunately, all of these “threats” to proper statistical test analysis are dwarfed by the fact that we can’t track visitors across devices and experiences.

Think with Google tells us that the majority of people use more than one device as part of their shopping experience. The research I’ve conducted on several eCommerce sites leads me to estimate that 20-25 percent of purchases currently involve more than one device.

So, why is this an issue?

Cross-device behavior is an issue because tracking test participants across devices is not possible, which means that one user can experience more than one test variation.


For instance: Let’s say your site is testing pricing tactics. In your test, someone may arrive initially on your site from their home tablet and see one price strategy, then go to work on Monday and visit again from their work computer and see pricing set or presented in a different manner. Since people tend to take note of the price of a purchase they are considering, they may be confused when they see the price or the information has changed.

Right now we have to live with this “elephant in the room.” We have no choice, and we know only that it is going to get bigger and bigger as mobile eCommerce grows. At the end of the day, all we can do is ensure that the factors we can control are accounted for and that we gain the highest level of confidence possible.

The Test Window and the other considerations mentioned in this eBook are the best tools you have to mitigate the uncertainties of running tests in a world where a huge “elephant” is eating away at any statistical certainty you think you have from pretty test report dashboards.

Conclusion

You’ve read how to test properly, what factors to consider when testing and how to use a Test Window when running your tests. Now, get ready to explain why you see results differently than what the test tool might be giving you (which could also explain what everyone else is seeing).

You understand that letting returning site visitors into a test contaminates results, as does just turning off a test outside

of 7-day cycles and before participants can complete their buying cycle.

The fact is, you now know more than most people involved in testing websites. Prepare to explain it, and advocate for the truth in your own testing.

Let’s repeat that you don’t have to test everything, and testing should NEVER be the only focus (improvement should be). However, when you do test, and especially if your organization has achieved a culture of testing, advocate for using the Test Window. It’s not magic, and there is no genius to it: just thousands of tests, questions and epiphanies.

If you ever have questions, feel free to reach out to me directly at [email protected].


Thank You for Reading this eBook

Sign up for a free 30-day trial and get access to the complete VWO suite, or request a personal demo at vwo.com.
