The Case for Incrementality

To measure ROI accurately, you must look not only at how much your customers spent, but at how much MORE they spent because of your marketing. This white paper explores the concept of incrementality and shows you what to measure and how to control variables.

How to Measure the Real ROI of Your Marketing Programs

By Marc Solomon, Senior Strategist, Catalyst

As an analyst and a bit of a data geek, I’m excited by the continuing trend of marketers using data to guide overall strategy and specific decisions. New technology makes it easier and cheaper to gather, store and manage “Big Data.” Of course, all of this data is useful only if it provides true insights that are actionable, and most marketers are not fully or even correctly leveraging their data. A recent CEB (Corporate Executive Board) study provides some interesting perspective on this front. Harvard Business Review summarized the findings in a blog post, concluding that “the vast majority of marketers still rely too much on intuition—while the few who do use data aggressively for the most part do it badly.”

For marketers, new CRM (Customer Relationship Management) tools make it easier not only to track individual sales, but also to track the reach of online and offline campaigns. And the real value comes from connecting the dots so you can see, at the customer level, how much your target audience is buying … right?

Well, that’s only partially correct.

What really matters is not how much your target audience bought, but how much more they bought because of your marketing.

Incremental vs. Total Sales

Let’s compare the results of two hypothetical campaigns. Each required $100,000 of marketing investment. Your CRM data tells you that the customers you targeted with Campaign A generated $500,000 in sales, and the customers in Campaign B generated $250,000. So you might conclude that the first campaign is the clear winner. It generated double the sales, didn’t it?

Not necessarily. What if Campaign A was targeted to your best customers, who would have spent $500,000 even without your marketing? This means, then, that the $100,000 marketing spend actually did nothing for you. And, what if Campaign B was targeted to prospects who would not have otherwise purchased anything if you hadn’t invested your $100,000 to market to them? This is starting to look like a pretty nice return on your investment, and you’re probably now much more excited about Campaign B.

Which Campaign Delivered Greater ROI?

                                         Campaign A    Campaign B
Marketing Spend                          $100,000      $100,000
Actual Sales (with marketing)            $500,000      $250,000
Expected Sales (no marketing)            $500,000      $0
Incremental Sales (due to marketing)     $0            $250,000
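
The arithmetic behind the table is simple enough to sketch in a few lines. The snippet below is illustrative only; it defines "return" as sales per dollar of spend, which ignores margins and other costs that a full ROI calculation would include.

```python
# A minimal sketch of the incremental arithmetic, using the figures from the table above.

campaigns = {
    "Campaign A": {"spend": 100_000, "actual_sales": 500_000, "expected_sales": 500_000},
    "Campaign B": {"spend": 100_000, "actual_sales": 250_000, "expected_sales": 0},
}

for name, c in campaigns.items():
    incremental = c["actual_sales"] - c["expected_sales"]  # sales the marketing caused
    naive_return = c["actual_sales"] / c["spend"]          # total sales per dollar (misleading)
    incremental_return = incremental / c["spend"]          # incremental sales per dollar
    print(f"{name}: incremental sales = ${incremental:,}, "
          f"naive return = {naive_return:.1f}x, incremental return = {incremental_return:.1f}x")

# Campaign A: incremental sales = $0, naive return = 5.0x, incremental return = 0.0x
# Campaign B: incremental sales = $250,000, naive return = 2.5x, incremental return = 2.5x
```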

This is an extreme example to make a point: You should be focused on incremental, not total, sales when evaluating campaign impact. Put another way, it’s important to focus on causation, not just correlation. It’s not enough to know that customers you targeted made purchases (correlation); you want to know if the marketing drove those purchases (causation). The difference between correlation and causation is important, and explained nicely on the Adobe Digital Marketing Blog.

Creating a Baseline

The tricky part in calculating incremental sales is figuring out the difference between sales levels with and without marketing. How do you know what customers would have bought in an alternate reality where they didn’t receive any marketing?

You can use historic trends to estimate the baseline "business as usual" sales you'd expect without new marketing, but this is imprecise at best and could be massively wrong. There are so many factors that impact sales (product and pricing changes, the economy, competition, weather, and so on) that it is nearly impossible to predict sales with high accuracy.

A much better approach is to set up a controlled experiment—a concept that may bring back memories of 10th grade science class. You can get a refresher at Wikipedia: Controlled Experiment. The key point is to isolate the factor that you’re trying to measure—in this case, the impact of the marketing campaign—and hold constant all other factors.

To do this, you need to randomly pick some of your target audience to be held out from the campaign as a ‘control’ or baseline group. It’s critical that this group looks just like the customers who will receive the marketing campaign, so that you can be confident that any difference in sales between the groups was driven by the campaign and not some underlying difference between the customers or their market environment. In the example below, you can see a $3 difference between the test and control groups, which can be attributed to the marketing campaign.

[Chart: Sales per Customer, test group vs. control group, showing the $3 lift attributable to the campaign]
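
For readers who want to operationalize the holdout, here is a minimal sketch, assuming you have a list of customer IDs and a post-campaign mapping of customer ID to sales. The 10% holdout size and the field names are illustrative assumptions, not recommendations from the paper.

```python
import random

def split_test_control(customer_ids, holdout_fraction=0.10, seed=42):
    """Randomly assign customers to the campaign (test) or the holdout (control)."""
    rng = random.Random(seed)
    shuffled = list(customer_ids)
    rng.shuffle(shuffled)
    n_control = int(len(shuffled) * holdout_fraction)
    return set(shuffled[n_control:]), set(shuffled[:n_control])  # (test, control)

def sales_per_customer(sales_by_customer, group):
    """Average post-campaign sales for the customers in `group`."""
    return sum(sales_by_customer.get(cid, 0.0) for cid in group) / len(group)

# After the campaign runs:
# test, control = split_test_control(all_customer_ids)
# lift = sales_per_customer(sales_by_customer, test) - sales_per_customer(sales_by_customer, control)
# Because assignment was random, the lift is attributable to the campaign.
```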

Isolating Channel Effectiveness

This approach can be leveraged not only to measure the impact of an overall campaign but also to measure the impact of specific campaign channels or tactics. In the case below, this enables distinct measurement of both the direct mail impact and the email impact.

[Chart: Sales per Customer by test cell (direct mail, email, control), isolating each channel's lift]

Keep in mind that the same basic principle of randomly assigning customers to each of the cells is critical; otherwise you don't have a fair, apples-to-apples comparison. It is tempting, and in some ways logical, to use the email channel for all customers who have provided email addresses and the appropriate permissions. However, this introduces a bias into the marketing test. The customers who provided email addresses are unlikely to be the same as those who have not. They probably differ demographically, and more importantly, they may have a different attitude toward their relationship with you.
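
Here is a sketch of that multi-cell assignment, assuming the randomization happens within the audience that is actually reachable by every channel (for example, customers who have granted email permission). The cell names and the split are illustrative assumptions.

```python
import random

CELLS = ["direct_mail", "email", "control"]
CELL_WEIGHTS = [0.45, 0.45, 0.10]  # illustrative split, not a recommendation

def assign_cells(customer_ids, seed=7):
    """Randomly place every eligible customer into exactly one cell."""
    rng = random.Random(seed)
    return {cid: rng.choices(CELLS, weights=CELL_WEIGHTS)[0] for cid in customer_ids}

def channel_lift(sales_by_customer, assignments):
    """Sales per customer in each cell, and each channel's lift over the control cell."""
    totals = {cell: 0.0 for cell in CELLS}
    counts = {cell: 0 for cell in CELLS}
    for cid, cell in assignments.items():
        totals[cell] += sales_by_customer.get(cid, 0.0)
        counts[cell] += 1
    per_customer = {cell: totals[cell] / counts[cell] for cell in CELLS if counts[cell]}
    return {cell: per_customer[cell] - per_customer["control"]
            for cell in CELLS if cell != "control"}
```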

If you ran a test in which you sent emails to everyone you could, then compared their sales to customers you didn’t email, you would not be able to determine how much of the difference was driven by the emails and how much was explained by underlying differences between the customers who did/didn’t grant you the ability to email.

Page 5: The Case for Incrementality

The Case for Incrementality

©2012 Catalyst5Page

Practical Considerations

While measuring incremental sales in this way is the theoretically “right” way to assess marketing impact, the reality is that this isn’t always viable or worth the effort. It works extremely well with direct marketing (online and offline) but is very difficult with mass marketing/advertising, where it’s impossible to create an identical random control group. It is typically worthwhile when there are large target audiences and there is minimal opportunity cost to holding out a statistically significant control group. But it may not be worthwhile if you need to hold out a large portion of your audience from any marketing for the sole purpose of robust measurement. For example, in the above case about isolating the impact of email campaigns, you may decide that you don’t need a robust measurement of email impact. Since emails are cheap to send and presumably have some positive impact, you may not want to bother creating and managing a holdout population for the sake of measurement.
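
If you do hold out a control group, it is worth checking whether the measured lift clears statistical noise before acting on it. A minimal sketch, assuming you have per-customer sales arrays for each group; the two-sample t-test and the 5% threshold are conventional choices, not ones prescribed by the paper.

```python
import numpy as np
from scipy import stats

def lift_is_significant(test_sales, control_sales, alpha=0.05):
    """Return the per-customer lift and whether it differs from zero at level alpha."""
    test_sales = np.asarray(test_sales, dtype=float)
    control_sales = np.asarray(control_sales, dtype=float)
    lift = test_sales.mean() - control_sales.mean()
    # Welch's t-test: does not assume equal variance between the two groups
    _, p_value = stats.ttest_ind(test_sales, control_sales, equal_var=False)
    return lift, p_value < alpha
```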

However, even in cases where a robust controlled experiment isn’t possible or isn’t worth the effort, it’s important to remember that it’s the incremental, not total, sales that are important to determining your marketing ROI. And if you can’t measure incrementality directly, then it’s a good time to resort to an estimate based on historic trends and forecasts. While that won’t be perfect and could be quite wrong, it will still give you a better sense of your marketing impact than looking purely at the total sales figures.
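
As a rough illustration of that fallback, the sketch below treats a trailing average of pre-campaign sales as the "business as usual" baseline. The trailing-average choice and the example figures are assumptions; any reasonable forecast could stand in for the baseline.

```python
def incremental_vs_trend(pre_campaign_sales, campaign_period_sales, spend):
    """Estimate incremental sales, and return per dollar, against a trailing-average baseline."""
    baseline_per_period = sum(pre_campaign_sales) / len(pre_campaign_sales)
    expected = baseline_per_period * len(campaign_period_sales)
    incremental = sum(campaign_period_sales) - expected
    return incremental, incremental / spend

# Hypothetical example: six months of pre-campaign sales, two campaign months, $100,000 spend
# incremental, return_per_dollar = incremental_vs_trend(
#     [410_000, 395_000, 420_000, 405_000, 400_000, 415_000],
#     [470_000, 455_000],
#     100_000,
# )
```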

About the Author

Marc Solomon, who holds an MBA from the Stanford Graduate School of Business, has leveraged analytics to drive results for Fortune 500 companies for nearly 20 years. As senior strategist, Marc uses his keen understanding of key market trends and insight into target audience behaviors and attitudes to provide strategic recommendations. He’s the former vice president of consumer marketing and analytics at Opower, vice president of marketing and analysis at Capital One, and a consultant with Booz-Allen & Hamilton.

©2012 Catalyst | 800.836.7720 | www.catalystinc.com | Facebook | Twitter | LinkedIn