Site speed is one factor in Google’s ranking algorithm. This wasn’t always the case, however: prior to mid-2010, site speed was solely a question of usability and user experience.
This remains an important consideration – how well do you think you’ll achieve your goals if you present users with a site that loads slowly? Unless your site is truly unique – which, unfortunately, is extremely unlikely – at best they’ll complete what they came to do and never come back. At worst they’ll abandon the shopping cart, leave the form partially filled or cancel that download. All of these lost opportunities add up every day across the year (Amazon estimated that one extra second of latency would cost them $1.6 billion a year).
We don’t want that! So what can we do?
The Testing Tools of the Trade
Before you start to tinker you need a decent benchmark of the site’s current performance. That way, once you’ve made your under-the-hood adjustments, you can compare against the initial scores. We don’t want to accidentally make things slower!
There are a number of high-quality testing tools available, free to use. They don’t all look at the same aspects of your site, so it’s worth trying one or two. Also bear in mind that different pages will score differently for site speed. I’d therefore recommend testing your homepage and any other important landing or feature pages. After making changes, re-test the same pages so you can accurately assess the improvements you’ve made.
Site-speed testing tools:
- Yahoo! YSlow: https://developer.yahoo.com/yslow/
- Google PageSpeed Insights: https://developers.google.com/speed/pagespeed/insights/
- GTmetrix: http://gtmetrix.com/
- Pingdom: http://tools.pingdom.com/fpt/
- WebPagetest: http://www.webpagetest.org/
Just a few quick notes on these: WebPagetest allows you to set a browser and test location, which is pretty handy for extended testing. Google now includes mobile usability as a score – this is well worth paying serious attention to (but is outside the scope of this post). And finally, GTmetrix combines YSlow and PageSpeed results in one report for ease of use.
Common Areas For Speed Improvements
With Google’s PageSpeed Insights there’s a common set of suggestions that crop up time and again. Certainly for me, most clients I’ve worked with start out with three, four, five or more of these:
- Optimise images
- Leverage Browser Caching
- Minify CSS
- Minify HTML
- Improve server response time
- Avoid landing page redirects
- Enable compression
- Prioritise visible content
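To make “Minify CSS” concrete: minification simply strips comments and unnecessary whitespace that browsers don’t need. The sketch below is a deliberately naive shell version to show the idea – in practice you’d use a proper minifier (cssnano, clean-css or your build tool’s equivalent), and the `minify_css` function name is my own:

```shell
#!/bin/sh
# Naive CSS minifier: strips /* ... */ comments and whitespace around
# structural characters. Real minifiers do far more (and more safely).
minify_css() {
  tr '\n' ' ' < "$1" \
    | sed -E -e 's/[[:space:]]+/ /g' \
             -e 's:/\*[^*]*\*+([^/*][^*]*\*+)*/::g' \
             -e 's/[[:space:]]*([{};:,])[[:space:]]*/\1/g' \
             -e 's/^[[:space:]]+//' -e 's/[[:space:]]+$//'
}

# Usage: minify_css style.css > style.min.css
```

Even this crude approach typically shaves a noticeable percentage off a hand-written stylesheet.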
Optimise Images – Keep them at an appropriate size, rather than using code to resize them. Also compress them properly: lossless optimisation for formats such as .png, and a sensible quality setting for lossy formats such as .jpg.
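As a rough sketch of that from the command line (assuming ImageMagick and jpegoptim are installed – the file names are placeholders, and the block skips quietly if the tools are absent):

```shell
#!/bin/sh
# Resize an oversized image down to its display width and recompress it.
# Requires ImageMagick (convert/identify) and jpegoptim; skip if absent.
command -v convert >/dev/null 2>&1 || exit 0
command -v jpegoptim >/dev/null 2>&1 || exit 0

convert -size 2400x1600 xc:gray hero-original.jpg           # stand-in for a real photo
convert hero-original.jpg -resize '1200x>' -strip hero.jpg  # shrink only, drop metadata
jpegoptim --max=85 hero.jpg                                 # recompress at quality 85
```

The `>` in the geometry tells ImageMagick to shrink only, never enlarge, and `-strip` removes EXIF metadata the browser doesn’t need.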
Leverage Browser Caching – Set an expiry time on your static resources – on Apache, usually by editing your .htaccess file. This way return visitors won’t have to re-download resources that haven’t changed.
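As a sketch, a typical mod_expires block in an Apache .htaccess might look like the following – the lifetimes are illustrative, so tune them to how often each asset type actually changes:

```apache
# Requires mod_expires to be enabled on the server.
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType image/jpeg             "access plus 1 month"
  ExpiresByType image/png              "access plus 1 month"
  ExpiresByType text/css               "access plus 1 week"
  ExpiresByType application/javascript "access plus 1 week"
</IfModule>
```

Wrapping the directives in `<IfModule>` means the site keeps working even if the module is later disabled.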
Improve Server Response Time – This one has the potential to be rather complicated. You’re aiming for a sub-200ms response time here, and you may arrive at it through a combination of the following: improving database queries, improving application logic, addressing CPU and/or memory contention on the server, upgrading outdated libraries, etc. We recommend getting your developer to run application monitoring and, if you’re using an off-the-shelf solution, also consulting web resources and the author’s documentation.
Avoid Landing Page Redirects – It’s not really a landing page if you have to move users somewhere else, is it? Remember, too, that each redirect introduces another unnecessary step that must complete before the page the user actually wants can load.
Enable Compression – This should be a simple setting on your web server, and according to Google the use of gzip compression can reduce the size of a response by up to 90%. Apache users (still the majority) should look at mod_deflate and speak to their developer about getting it set up.
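For Apache, a minimal mod_deflate sketch looks like this – the MIME-type list is illustrative, and note that already-compressed formats such as JPEG gain nothing from it:

```apache
# Requires mod_deflate to be enabled on the server.
<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript application/json
</IfModule>
```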
Rankings and the Time To First Byte
In August/September 2013, Moz and Zoompf worked together on research suggesting that a site’s Time To First Byte (TTFB) strongly correlates with higher rankings. Whilst this was extremely thorough research, it isn’t 100% confirmed (you’d have to ask Google!). Also remember this is correlation, which, as you know, does not imply causation!
However, it’s still well worth a read, and the main takeaway is that they recommend a TTFB of under 500ms – no more than 100ms of network latency and no more than 400ms of backend processing.
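You can get a rough TTFB reading yourself with curl – `https://www.example.com/` below is a placeholder for your own URL, and bear in mind this measures from wherever you run it, so your own network latency is included:

```shell
#!/bin/sh
# Print time-to-first-byte for a URL; skips quietly if curl is absent.
command -v curl >/dev/null 2>&1 || exit 0
curl -o /dev/null -s -w 'TTFB: %{time_starttransfer}s\n' https://www.example.com/ || true
```

Run it a few times and from more than one location before drawing conclusions – a single sample is noisy.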
Read the full article here.
Enterprise Level Speed Improvements
Most of the improvements listed so far are accessible to the majority of people. However, if you run an extremely popular site you probably (hopefully) have a larger budget for speed improvements, since you naturally have many more users.
After perhaps spending some of that budget on further optimisation of the application and its database, you’ll need to look at your server setup and options. At a very basic level you can throw extra resources its way: increasing bandwidth, upping the memory, upgrading the CPU and moving from a shared solution to a dedicated server.
Beyond this there are a couple of approaches you may use that basically involve using increasing numbers of servers!
Within one location you may run multiple servers and load balance between them – this will help to keep you within that 400ms backend limit.
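As a sketch of that setup, round-robin load balancing in nginx looks like this – `app1.internal` and `app2.internal` are hypothetical backend hosts:

```nginx
# nginx spreads requests across the listed servers (round-robin by default).
upstream backend {
    server app1.internal:8080;
    server app2.internal:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```

Each backend then only handles a share of the traffic, which is what keeps per-request backend time down.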
If you’re covering a large enough geographical area, a Content Delivery Network (CDN) will let you serve users from a physically closer location. This reduces network latency, helping you meet that 100ms target, and will probably help with the 400ms backend target too by splitting the workload.
Finally as a multinational you may want to do all of the above independently within each country you serve. This begins to make sense if you need separate domains, have separate products, have to comply with differing regulations etc.
The next step would be having Google-style data centres across the globe!
If your website needs a speed boost give us a call – we work with most software out there, but we’re particularly good at Magento and WordPress.