Technical factors still matter! Learn how we increased a client's revenue by implementing only technical strategies.
@JennyHalasz
THE PERIODIC TABLE OF SEO RANKING FACTORS: 2013 EDITION
Why Technical SEO Still Works
Jenny Halasz
June 11, 2013
The amount we increased a client’s revenue from organic search just by making technical changes. We cleaned up duplicate content, compressed images and scripts to save load time, and created and submitted new XML sitemaps.
800%
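One of the changes above, compressing scripts to save load time, can be sketched in a few lines. This is illustrative only: the "script" below is a made-up sample, and in practice you would pre-compress real static assets (JS, CSS) and serve them with a `Content-Encoding: gzip` header.

```python
import gzip

# Made-up sample script; repetitive text compresses especially well.
script = b"function add(a, b) { return a + b; }\n" * 100
compressed = gzip.compress(script)

# The compressed payload is a fraction of the original size,
# which is bytes saved on every page load.
print(len(script), len(compressed))
```

The same idea applies to images: serving appropriately sized, compressed images cuts transfer time without changing the page's content.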
Crawl budget is an important concept to understand. The search engines have millions of sites to crawl; they want to spend the most time on the ones that change frequently. Similarly, you want to send clear signals about which pages are your most important, and not waste crawl budget on duplicate content, redirect loops, or slow load times.
You Have to Crawl Before You Walk
Crawl stats in Google Webmaster Tools can indicate the health or “illness” of a site. The top chart is an example of a site that may have a problem.
Which one has a problem?
Which one is healthier? It’s a trick question: they are both healthy. While the chart on top shows a declining number of pages crawled, that site hasn’t been making any changes to its content, so it doesn’t need Google to crawl all of those pages every day. It’s not like Google will forget about a page just because it hasn’t crawled it in a while.
Now which one has a problem?
• Consistent crawl patterns
• Increased crawling when changes are made, decreased when the site is static
• Big spikes or drops can signal a problem
The key here is to find out what’s normal for your site. And use this only as a diagnostic – an indicator, not a definitive measure of site health.
What Do You Look For?
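The "what's normal for your site" idea above can be sketched as a simple check: compare each day's pages-crawled count against the site's own median and flag big spikes or drops. This is a hypothetical diagnostic, not a Webmaster Tools feature; the counts and the 2x threshold are illustrative.

```python
from statistics import median

def crawl_anomalies(daily_counts, factor=2.0):
    """Return indices of days whose pages-crawled count is more than
    `factor` times the site's median (spike) or below 1/factor (drop)."""
    m = median(daily_counts)
    return [i for i, c in enumerate(daily_counts)
            if c > m * factor or c < m / factor]

# Made-up daily counts: day 4 is a spike, day 6 a drop.
counts = [120, 115, 130, 125, 600, 118, 40]
print(crawl_anomalies(counts))  # -> [4, 6]
```

As the slide says, treat flagged days as an indicator to investigate, not a definitive measure of site health.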
So what is the duplicate content problem, and what does it have to do with crawl budget? Because search engines use a page’s location (both literal, in the site architecture, and figurative, in link value) as a signal of its value, they get “confused,” for lack of a better word, whenever they encounter URLs that are different but serve the same content.
What is the Duplicate Content Problem?
http://blog.bigmouthmedia.com/files/2011/12/SEO8.png
One aside here is that sites rarely get in trouble for duplicate content unless they are doing it on purpose. Panda is often referred to as the “duplicate content” penalty, but that’s not correct. Panda is designed to identify low quality content. It just so happens that a lot of low quality content is also duplicative, either within sites or across sites. More often than not, the search engines will choose one version of the duplicate content. But that may not be the one you want it to be. And rel=canonical is not the only way to fix it. In fact, in most cases, it’s not even the best way. But you’ll learn more about that this afternoon in “The Crazy Complicated Technical Issues That Completely Sabotage the Best SEO Efforts”.
Duplicate Content Can Cause:
• Distributed link value
• Wasted crawl budget
• Search engine confusion
• The wrong page ranking
• Poor user experience
• “Thin” content penalties*
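Many duplicate-content problems come from URL variants — trailing slashes, uppercase hostnames, tracking parameters — that all serve the same page. A common fix (alongside, or instead of, rel=canonical) is to pick one canonical form and 301-redirect the rest to it. Here is a minimal sketch of that normalization; the parameter names in `TRACKING` are illustrative examples, not an exhaustive list.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Query parameters that change the URL but not the content (examples).
TRACKING = {"utm_source", "utm_medium", "utm_campaign", "sessionid"}

def canonicalize(url):
    """Collapse common URL variants into one canonical form."""
    parts = urlsplit(url)
    # Drop tracking parameters, lowercase the host, trim trailing slash.
    query = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING]
    path = parts.path.rstrip("/") or "/"
    return urlunsplit((parts.scheme, parts.netloc.lower(), path,
                       urlencode(query), ""))

print(canonicalize("http://Example.com/widgets/?utm_source=mail"))
# -> http://example.com/widgets
```

Redirecting every variant to its `canonicalize()` form consolidates link value onto one URL instead of distributing it across duplicates.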
I jest, but the key here is that it’s all relative. If your site already loads in 2.5 seconds, improving your load time by a small fraction isn’t going to be that significant. If your site takes more than 6 seconds to load, you have a problem, and you need to fix it.
Page Speed
The amount of time it should take a page to load… on desktop. By the end of 2013, one third of all searches will come from mobile devices. 59% of the public think a webpage should load in 3 seconds or less on their mobile.
3 Seconds
Load times for the top U.S. retail websites are 22 percent slower than in December 2011, with a median load time for first-time visitors to a retail website’s home page at 7.25 seconds.
Radware Study (Spring 2013)
http://bit.ly/1aVYqlo
In December 2012, the median webpage contained 79 requests (such as images, HTML, and CSS/JavaScript files), an increase of 8.22% from December 2011 median of 73 requests.
Radware Study (Spring 2013)
http://bit.ly/1aVYqlo
• Pages are getting slower (and larger).
• Browsers are not keeping up. Firefox is fastest, followed by Chrome and IE9.
• Many site owners are not implementing potential performance improvements.
Radware Study (Spring 2013)
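To get a rough feel for numbers like the study's 7.25-second median, you can time a page fetch yourself. This is a crude sketch — it measures time-to-first-response and full-download time for a single request, not the browser-rendered load time that studies like Radware's report — and the URL is a placeholder.

```python
import time
import urllib.request

def fetch_timing(url):
    """Return (time to first response, total download time, body size)."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        first_byte = time.monotonic() - start  # server responded
        body = resp.read()
    total = time.monotonic() - start           # full body downloaded
    return first_byte, total, len(body)

# Example (placeholder URL):
# first_byte, total, size = fetch_timing("http://example.com/")
```

Real page-speed work should use a browser-level tool (waterfall charts show all 79-odd requests, not just the HTML), but even this crude measure will tell you whether you are in the 2.5-second or the 6-second camp.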
URLs & Architecture
It’s ok to create pages for users that happen to have keywords in them, but don’t make life any more difficult for users than it has to be. Clean up your XML sitemaps: only submit valid pages, use realistic update timeframes, and don’t put them all at 90% priority. Keep it Simple Sweetheart, but don’t believe that garbage about keeping your URLs under “x number of characters.”
• Don’t create pages just for the sake of keywords
• Clarify which pages have value (XML sitemaps)
• KISS, but don’t panic about URL length
Best Practices for URLs
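The sitemap advice above — submit only valid pages, use realistic dates, and vary priority instead of pinning everything at 0.9 — can be sketched as a small generator. The URLs, dates, and priorities below are made up for illustration.

```python
from xml.sax.saxutils import escape

def sitemap(entries):
    """Build a sitemaps.org-format XML sitemap from
    (url, lastmod, priority) tuples."""
    urls = "\n".join(
        "  <url><loc>{}</loc><lastmod>{}</lastmod>"
        "<priority>{}</priority></url>".format(escape(loc), lastmod, prio)
        for loc, lastmod, prio in entries)
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
            + urls + "\n</urlset>")

# Real lastmod dates and varied priorities, per the advice above.
print(sitemap([
    ("http://example.com/", "2013-06-01", "1.0"),
    ("http://example.com/blog/seo-tips", "2013-05-20", "0.6"),
]))
```

The point is the signal, not the markup: a sitemap where every page claims top priority and a daily change frequency tells the crawler nothing about which pages actually matter.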
1. Google is your boss.
   • Get things in order. Have an agenda. Don’t waste time.
2. User intent comes first.
   • Build site architecture based on what users want.
3. Speed matters.
   • The average user’s attention span is 3 seconds. Don’t waste it.
3 Takeaways!
THANK YOU!
ARCHOLOGY
Jenny Halasz, Presidenthttp://www.archology.com
Follow us @archologyweb