Why Great Marketers Must Be Great Skeptics


Rand Fishkin, Wizard of Moz | @randfish | rand@moz.com

Why Great Marketers Must Be Great Skeptics

This Presentation Is Online Here:

bit.ly/mozskeptics

Defining Great Skepticism

I have some depressing news…

Does anyone in this room believe that the Earth doesn’t revolve around the Sun?

The Earth (and everything in the solar system, including the Sun) revolves around our system’s gravitational barycenter, which is only sometimes near the center of the Sun.

Let’s try a more marketing-centric example...

In 2009, Conversion Rate Experts built us a new landing page and increased our subscribers by nearly 25%. What did they do?

Via CRE’s Case Study

One of the most commonly cited facts about CRE’s work is the “long landing page.”

The Crap Skeptic: “Let’s change our landing page to be a long one right now!”

The Good Skeptic: “We should A/B test a long landing page in our conversion funnel.”

The Great Skeptic: “How do we know page length was responsible? What else changed?”

The Crap Skeptic: “I do believe sadly it’s going to take some diseases coming back to realize that we need to change and develop vaccines that are safe.”

The Good Skeptic: “Listen, all magic is scientific principles presented like ‘mystical hoodoo’ which is fun, but it’s sort of irresponsible.”

The Great Skeptic: “The good thing about science is that it’s true whether or not you believe in it.”

In fact, we’ve changed our landing pages numerous times to shorter versions and seen equal success. Length, it would seem, was not the primary factor in this page’s success.

What separates the crap, good, & great?

The Crap Skeptic: Assumes one belief-reinforcing data point is evidence enough. Doesn’t question what’s truly causal vs. merely correlated. Doesn’t seek to validate.

The Good Skeptic: Doesn’t make assumptions about why a result occurred. Knows that correlation isn’t necessarily causal. Validates assumptions w/ data.

The Great Skeptic: Seeks to discover the reasons underlying the results. Knows that correlation doesn’t imply causality. Thoroughly validates, but doesn’t let imperfect knowledge stop progress.

Will more conversion tests lead to better results?

Testing

Obviously the more tests we run, the better we can optimize our pages. We need to build a “culture of testing” around here.

Via Wordstream’s What is a Good Conversion Rate?


Do Those Who Test More Really Perform Better?

Hmm… There’s no correlation between those who run more tests across more pages and those who have higher conversion rates. Maybe the number of tests isn’t the right goal.
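If you want to sanity-check that kind of claim against your own account data, here’s a rough sketch of the correlation math in Python. The field names and numbers are invented for illustration; they are not Wordstream’s actual dataset.

```python
# Hypothetical sketch: is the number of A/B tests an account runs
# correlated with its conversion rate? All values below are invented.
from statistics import correlation  # Python 3.10+

accounts = [
    {"tests_run": 2,  "conversion_rate": 0.031},
    {"tests_run": 15, "conversion_rate": 0.024},
    {"tests_run": 7,  "conversion_rate": 0.058},
    {"tests_run": 40, "conversion_rate": 0.029},
    {"tests_run": 4,  "conversion_rate": 0.041},
]

r = correlation(
    [a["tests_run"] for a in accounts],
    [a["conversion_rate"] for a in accounts],
)
print(f"Pearson r between tests run and conversion rate: {r:.2f}")
# An r near 0 supports the slide's point: raw test volume alone
# doesn't predict conversion performance.
```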

The conversion decision (it’s a complex process) is shaped by many factors: Trust, Word of Mouth, Likability, Design, Associations, Amount of Pain, Effort Required, CTAs, UX, Process, Historical Experiences, Social Proof, Copywriting, Timing, Discovery Path, Branding, and Price.

How do we know where our conversion problems lie?

Ask Smart Questions to the Right People

Potential Customers Who Didn’t Buy: professional, demographic, & psychographic characteristics. What objections did you have to buying? What would have made you overcome them?

Those Who Tried/Bought But Didn’t Love It: professional, demographic, & psychographic characteristics. What objections did you have; how did you overcome them? What would have made you stay/love the product?

Customers Who Bought & Loved It: professional, demographic, & psychographic characteristics. What objections did you overcome; how? What do you love most? Can we share?

We can start by targeting the right kinds of customers. Trying to please everyone is a recipe for disaster.

Our tests should be focused around overcoming the objections of the people who best match our customer profiles.

Testing button colors

Testing headlines, copy, visuals, & form fields

Designing for how customers think about their problems & your solution ← THIS!

Does telling users we encrypt data scare them?

Security

Via Visual Website Optimizer

A/B Test Results

They found that the version without the secure icon had over a 400% improvement in conversions compared to the version with the image.

[Note: results ARE statistically significant]
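For anyone who wants to verify “statistically significant” rather than take it on faith, here’s a minimal sketch of one common approach, a two-proportion z-test. The visitor and conversion counts are invented for illustration; they are not from VWO’s case study.

```python
# Hypothetical sketch: is the A/B conversion difference statistically
# significant? Counts below are invented, not the actual test data.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z_test(conv_a=18, n_a=2000, conv_b=95, n_b=2000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 suggests the lift isn't just noise
```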

We need to remove the security messages on our site ASAP!

We should test this.

Is this the most meaningful test we can perform right now?

(I’m not saying it isn’t, just that we should prioritize intelligently)

Via Kayak’s Most Interesting A/B Test

vs.

Via Kayak’s Most Interesting A/B Test

A/B Test Results

“So we decided to do our own experiment about this and we actually found the opposite that when we removed the messaging, people tended to book less.”

- Vinayak Ranade, Director of Engineering for Mobile, KAYAK

Good thing we tested!


Your evidence is no match for my ignorance!

What should we expect from sharing our content on social media?

Social CTR

Just find the average social CTRs and then try to match them or do better. No brainer.

306/701 = 43.6%... WTF??

Phew! We’re not alone.

Via Chartbeat

Assuming social metrics and engagement correlate was a flawed assumption. We need to find a better way to measure and improve social sharing.

OK. We can create some benchmarks based on these numbers and their averages, then work to improve them over time.

That is an insane amount of variability!

There are other factors at work here. We need to understand them before we can create smart metrics or useful expectations.
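One way to see just how wild that variability is before trusting any single benchmark: compute per-post CTR and its spread. A rough sketch, with invented per-post numbers (only the 306/701 figure comes from the slide above):

```python
# Hypothetical sketch: per-post social CTR (clicks / reach, or whatever
# denominator your analytics exposes) and how widely it varies.
# All posts except the first are invented.
from statistics import mean, stdev

posts = [
    {"clicks": 306, "reach": 701},
    {"clicks": 42,  "reach": 5400},
    {"clicks": 980, "reach": 3100},
    {"clicks": 11,  "reach": 890},
]

ctrs = [p["clicks"] / p["reach"] for p in posts]
print(f"mean CTR = {mean(ctrs):.1%}, stdev = {stdev(ctrs):.1%}")
# A stdev on the same order as the mean is a sign that a single
# "average CTR" benchmark will mislead more than it informs.
```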

Timing

Source

Audience Affinity

Formatting

Network-Created Limitations to Visibility

Brand Reach

Traffic

Engagement

Let’s start by examining the data and impacts of timing.

Via Facebook Insights

Via Followerwonk

Via Google Analytics

There’s a lot of nuance, but we can certainly see how messages sent at certain times reach different sizes and populations of our audience.

Comparing a tweet or share sent at 9am Pacific against tweets and shares sent at 11pm Pacific will give us misleading data.
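A minimal sketch of how you might bucket shares by the hour they were sent, so you only compare like with like. The data below is invented, not pulled from Followerwonk or Facebook Insights.

```python
# Hypothetical sketch: group shares by the local hour they were sent, so a
# 9am tweet is only compared against other 9am tweets. Numbers are invented.
from collections import defaultdict
from statistics import mean

shares = [
    {"sent_hour": 9,  "clicks": 310},
    {"sent_hour": 9,  "clicks": 275},
    {"sent_hour": 23, "clicks": 40},
    {"sent_hour": 23, "clicks": 65},
]

by_hour = defaultdict(list)
for s in shares:
    by_hour[s["sent_hour"]].append(s["clicks"])

for hour, clicks in sorted(by_hour.items()):
    print(f"{hour:02d}:00 Pacific -> avg clicks {mean(clicks):.0f} over {len(clicks)} shares")
```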

But, we now know three things:

#1 - When our audience is online

#2 – Sharing just once is suboptimal

#3 – To be a great skeptic (and marketer), we should attempt to understand each of these inputs with similar rigor

Do they work? Can we make them more effective?

Share Buttons

After relentless testing, OKTrends found that the following share buttons worked best:

OKTrends found that removing all but a single button (the “like” on Facebook) had the most positive effect.

And that waiting until the visitor had scrolled to the bottom of the article produced the highest number of actions.

We should remove all our social sharing buttons and replace them with a single slide-over social CTA for Facebook likes!

Buzzfeed has also done a tremendous amount of social button testing & optimization…

And sometimes they do this…

And sometimes this…

Is Buzzfeed still in testing mode?

Nope. They’ve found it’s best to show different buttons based on both the type of content and how you reached the site.

OK… Well, then let’s do that… Do it now!

Testing a small number of the most impactful social button changes should produce enough evidence to give us a direction to pursue.

Buzzfeed & OKTrends share several unique qualities:

1) They have huge amounts of social traffic

2) Social shares are integral to their business model

3) The content they create is optimized for social sharing

Unless we also fit a number of these criteria, I have to ask again:

Is this the most meaningful test we can perform right now?

BTW – it is true that testing social buttons can coincide with a lot of other tests (since it’s on content vs. the funnel), but dev resources and marketing bandwidth probably are not infinite.

Does it still work better than standard link text?

Anchor Text

Psh. Anchor text links obviously work. Otherwise Google wouldn’t be penalizing all these sites for getting them.

It has been a while since we’ve seen a public test of anchor text. And there’s no way to know for sure how powerful it still is.

Testing in Google is very, very hard. There are so many confounding variables – we’d have to choose our criteria carefully and repeat the test multiple times to feel confident of any result.

Test Conditions:

1) Three word, informational keyword phrase with relatively light competition and stable rankings

2) We selected two results (“A” and “B”), ranking #13 (“A”) and #20 (“B”) in logged-out, non-personalized results

3) We pointed links from 20 pages on 20 unique, high-DA, high-trust, off-topic sites at both “A” and “B”

A) We pointed 20 links from 20 domains at this result with anchor text exactly matching the query phrase


B) We pointed 20 links from the same 20 pages as “A” to this URL with anchor text that did not contain any words in the query


After 20 days, all of the links had been indexed by Google. “A” and “B” both moved up 4 positions. None of the other results moved more than 2 positions.
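If you want to run this kind of test yourself, here’s a rough sketch of summarizing daily rank observations, assuming you’ve already been logging logged-out, non-personalized rank checks to a CSV. The file name and column names are assumptions for illustration, not part of our actual setup.

```python
# Hypothetical sketch: summarize daily rank logs for the test query.
# Assumes a CSV with columns: date (ISO format, so string sort works),
# url, position. File name and schema are invented.
import csv
from collections import defaultdict

positions = defaultdict(list)  # url -> [(date, position), ...]
with open("rank_checks.csv", newline="") as f:
    for row in csv.DictReader(f):
        positions[row["url"]].append((row["date"], int(row["position"])))

for url, obs in positions.items():
    obs.sort()
    first, last = obs[0][1], obs[-1][1]
    print(f"{url}: #{first} -> #{last} ({first - last:+d} positions)")

# Results that received no links act as a control group: if they also move
# a lot, the query was too volatile to attribute movement to the links.
```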

See? Told you it works.

While both results moved up the same number of positions, it’s almost certainly the case that the move from #13 to #9 was against more serious challengers, and thus anchor text would seem to make a difference. That said, I’d want to repeat this a few times.

Princess Bubblegum and I are in agreement. We should do the test at least 2-3 more times, keeping as many variables as possible the same.

Early Results from a Second Test:

1) Three word, informational keyword phrase with relatively light competition and stable rankings

2) We selected two results (“A” and “B”), ranking #20 (“A”) and #14 (“B”) in logged-out, non-personalized results

3) We pointed links from 20 pages on 20 unique, high-DA, high-trust, off-topic sites at both “A” and “B”

B) We pointed 20 links from 20 domains to this URL with anchor text that did not contain any words in the query


A) We pointed 20 links from the same pages/domains at this result with anchor text exactly matching the query phrase


After 16 days, all of the links had been indexed by Google. “A” moved up 19 positions to #1! B moved up 5 positions to #9. None of the other results moved more than 2 positions.

Good thing we tested!

This is looking more conclusive, but we should run at least one more test.

Anchor text = rankings. Stick a fork in it!

Does it influence Google’s non-personalized search rankings?

Google+

Good discussion about Google+ correlations in this post

Google+ is just too damn high.


From a comment Matt Cutts left on the blog post: “Most of the initial discussion on this thread seemed to take from the blog post the idea that more Google +1s led to higher web ranking. I wanted to preemptively tackle that perception.”


To me, that’s Google working really hard to NOT say “we don’t use any data from Google+ (directly or indirectly) at all in our ranking algorithms.” I would be very surprised if they said that.

Google explicitly SAID +1s don’t affect rankings. You think they’d lie so blatantly? As if.

The correlations are surprisingly high for something with no connection. There have been several tests showing no result, but if all it takes is a Google+ post, let’s do it!

First, remember how hard it is to prove causality with a public test like this. And second, don’t let anything but consistent, repeatable, provable results sway your opinion.
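One hedged way to operationalize “consistent, repeatable, provable”: treat the movement of the untouched results as a noise band, and only believe a test whose URL clears that band in every run. A sketch with invented numbers (this is not how we actually scored the test below, just one reasonable approach):

```python
# Hypothetical sketch: compare the test URL's movement against the typical
# movement of untouched (control) results across repeated runs.
# All numbers are invented for illustration.
runs = [
    {"test_url_move": +3, "control_moves": [0, -1, +1, +2, 0]},
    {"test_url_move": -2, "control_moves": [+1, 0, -2, +1, 0]},
    {"test_url_move": +1, "control_moves": [0, +2, -1, 0, +1]},
]

def clears_noise(run):
    # The noise band is the largest move any untouched result made.
    noise = max(abs(m) for m in run["control_moves"])
    return abs(run["test_url_move"]) > noise

consistent = all(clears_noise(r) for r in runs)
print("Movement beats the noise band in every run:", consistent)
# Only consistent, repeatable movement beyond what untouched results do
# should sway your opinion about a ranking factor.
```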


At 10:50am, the test URL ranked #26 in logged-out, non-personalized, non-geo-biased, Google US results.

42 minutes later, after ~30 shares, 40 +1s, and several other G+ accounts posting the link, the target moved up to position #23.


48 hours later, after 100 shares of the post, 95 +1s, and tons of additional posts, the result was back down to #25.

At least we proved one thing – the Google+ community is awesome. Nearly 50 people shared the URL in their own posts on G+!

Many G+ users’ personalized results, however, were clearly affected.


Something very strange is happening in relation to the test URL in my personalized results, though. It’s actually ranking LOWER than in non-personalized results.

Could Google be donking up the test?

Sadly, it’s impossible to know.

GASP!!! The posts did move the result up, then someone from Google must have seen it and is messing with you!!!

Sigh… It’s possible that Jenny’s right, but impossible to prove. We don’t know for sure what caused the initial movement, nor can we say what’s causing the weird personalized results.

More testing is needed, but how you do it without any potential monkey wrenches is going to be a big challenge.

That said, remember this:

Phew! We’re not alone.

Via Chartbeat

If I were Google, I wouldn’t use Google+ activity by itself to rank anything, but I would connect G+ to my other data sources and potentially increase a page’s rankings if many pieces of data told a story of engagement & value for visitors.

Ready to Be Your Own Skeptic?

Rand Fishkin, Wizard of Moz | @randfish | rand@moz.com

bit.ly/mozskeptics