SEM, an ever-evolving changeling – why you need to read about stuff like this
The post that follows is a little anoraky and gets into some of that minuscule stuff that people like me like to muse about on occasion and disappear up our own back passages, but hey, do feel free to read it – you might even learn something.
Today’s poll asks you to evaluate your own understanding of SEO. I’ve given 5 possible responses. I thought it would be fun to look at SEO, or SEM, or whatever label you want to apply, and consider the topic in general, getting people to think about their own skillsets and understanding.
Some of us think our understanding is good, real good. Nothing wrong with a bit of self-belief, and I already see that there is one person who believes that their understanding is “Fantastic, I’m the man”.
I put myself in the ‘Very good, I know a lot’ category. I only say this because I’ve been around a while and done this daily for years; I’ve got sites ranked on some very competitive terms. I understand what is required to do well with a site in the search engines. I’ve had sites banned too – it’s called pushing the envelope. Sometimes in order to go forward you need to take a step back or two.
I could have said ‘Fantastic, I’m the man’ too, yet in reality, in my opinion, the only people who can really say this and be worthy of it are the people who write the algoes themselves – or someone who, by some mad method they’d keep very close to their chest, could get ranked for Viagra or Mortgage or any other high-cost PPC keyword within a week and stay there for the duration. Yeah, that’s not many people.
Chasing the magic bullets
I’ve said before too that SEO isn’t brain surgery; it’s pretty simple stuff once you’ve grasped the basics. The problems begin when you get into the minutiae and try to look for magic bullets like ideal keyword densities, or page layouts, or keyword-to-inbound-link ratios and other unknowable intangibles. I say unknowable because they are exactly that. The only people who really know are those who have written the algoes and… well, most of those guys aren’t exactly going to start telling the world what they are or when they are changed.
In other words, search algorithms are constantly moving targets. What is widely accepted as a good bet today won’t necessarily be a good bet tomorrow. It all evolves constantly. The only way to begin to keep up is to look at the trends. See who is ranking and why. If you look at enough SERPs, then you will eventually get a feel for what does and does not work.
The days of flooding a site with zillions of links or keyword stuffing are, for most of the good algoes out there, long gone. It doesn’t take a genius to look at a genuinely popular site and see how it grows. Toolbars, click-through data, stats packages, unique IPs, user agents, link data, entrance data, bounce rates and page content are just some of the things that contribute to determining what is and what isn’t rankworthy.
It’s all about the users…
Yep, that old ‘all about the user’ chestnut, but it really is! If you want your website to rank in the search engines and you’re making a serious effort at doing well with it, then, whilst it may be easy to buy a line of “just make a kick-arse site, forget about ranking and leave it to the engines”, the reality is that it doesn’t hurt to think like a search engine and try to understand where they might be coming from and why. At the very least you might avoid making some huge mistake advised by some nincompoop somewhere.
It might help to look at some of those factors and explore a few of the issues attached to each.
Toolbars and Stats packages
Toolbars are a fantastic way of measuring user behaviour. A toolbar could, in theory, measure and record every little thing you do: every click, every interaction, every minute spent, every IP address used, every button pressed could be recorded and measured for every site you visit. Privacy paranoia issues aside, such data could be seriously useful for measuring a site’s worthiness or value. Google, Yahoo and MSN – or Live, as they so ridiculously call themselves – all have toolbars. I’ve no idea of their uptake, as none of them publish any figures, but it’s safe to say that their users number in the millions. That’s quite a substantial set of metrics that are very difficult to manipulate externally.
Catching a cloaker
The toolbar could also be used to compare data stored about a particular URL. If the content seen by the toolbar was radically different to that seen by the search engine spider, for example, then this could be an indication of cloaked or alternative content, which on the whole is considered a huge no-no by the search engines. No amount of IP or user-agent cloaking is going to be able to interfere with a user-installed, browser-embedded toolbar.
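The comparison described above can be sketched in a few lines. This is a hypothetical illustration, not how any engine actually works: assume we have the HTML a toolbar user saw and the HTML the crawler fetched for the same URL, and flag the page when the two differ radically.

```python
from difflib import SequenceMatcher

def looks_cloaked(spider_html: str, toolbar_html: str, threshold: float = 0.5) -> bool:
    """Flag a URL when the content served to the crawler differs
    radically from what real users (via the toolbar) saw.

    A ratio of 1.0 means identical content; anything below the
    (arbitrary, illustrative) threshold is treated as a possible
    cloaking signal."""
    similarity = SequenceMatcher(None, spider_html, toolbar_html).ratio()
    return similarity < threshold

# Same page served to both – no flag.
print(looks_cloaked("<h1>Blue widgets</h1><p>Buy blue widgets here.</p>",
                    "<h1>Blue widgets</h1><p>Buy blue widgets here.</p>"))  # False

# Keyword-stuffed page for the spider, something else entirely for users – flagged.
print(looks_cloaked("viagra mortgage viagra mortgage viagra mortgage",
                    "<h1>Welcome!</h1><p>Click here to enter our casino.</p>"))  # True
```

A real system would compare rendered text, not raw HTML, and would need far more nuance (dynamic pages legitimately differ between fetches), but the principle is the same: two views of one URL, one similarity score.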
Stats packages like Google’s Analytics are used by webmasters to glean info about their sites.
Google, for example, provides a comprehensive free stats package of very good quality, giving it massive insight into the behaviours and traffic make-up of a huge number of websites. The value to the site owners is huge, but the value to data hoarders like Google is bigger still.
Some people like to say how cool Google is for letting them use such a cool package for free. I’d argue the opposite and say they should pay site owners to install the damn thing, but hey – I use it here. It’s a neat little tool for someone who isn’t too bothered about a data monster having access to everything they do. Skynet, anyone? 😉
Anyhow, I digress. The point is that, similar to the toolbar example above, such stats packages give priceless insights into user behaviours and site metrics. Comparisons can be made against known winners, scaled and applied accordingly. If a site or page has a high bounce rate then it could mean that the page isn’t as relevant for a query, or is lacking in quality – something else to look at. It could even act as a flag for some kind of manual review, at least on known competitive or popular search queries.
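For the avoidance of doubt about what a bounce rate actually is: it’s the share of sessions that landed on a page and went no further. A minimal sketch, with made-up session data, might look like this:

```python
from collections import defaultdict

def bounce_rates(sessions):
    """sessions: list of (entrance_page, pages_viewed) tuples.
    A 'bounce' is a session that viewed only the entrance page."""
    entries = defaultdict(int)
    bounces = defaultdict(int)
    for page, pages_viewed in sessions:
        entries[page] += 1
        if pages_viewed == 1:
            bounces[page] += 1
    return {page: bounces[page] / entries[page] for page in entries}

# Illustrative log: two of three visitors to /widgets left immediately.
log = [("/widgets", 1), ("/widgets", 1), ("/widgets", 4), ("/about", 3)]
rates = bounce_rates(log)
print(rates["/widgets"])  # 2 bounces out of 3 entrances, roughly 0.67
```

A page with a rate like that on a competitive query is exactly the sort of thing that might earn a closer look.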
Entrance page data (the page that a user lands on) could be another signal of quality or high interest. Let’s assume that a site has a high percentage of non-referral traffic, whereby people have simply typed the URL into their browser address bar. This could be due to, say, a TV advertising or print media campaign, or a word-of-mouth thing where people had seen or heard of this great new website. Such a website might not have a reliable link profile or authority score, yet still be of intrinsic social worth. Such a site might also generate an off-the-scale link profile that could resemble paid-link algo manipulation. Entrance page data, in these scenarios, would be invaluable in deciding whether the site deserved to be ranked or boosted for any associated queries.
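The “high percentage of non-referral traffic” idea reduces to one ratio. A toy sketch, assuming visit logs where an empty referrer string means the visitor typed the address in directly:

```python
def direct_traffic_share(visits):
    """visits: list of referrer strings, one per visit; an empty
    referrer is taken to mean the visitor typed the URL straight
    into the address bar."""
    if not visits:
        return 0.0
    direct = sum(1 for referrer in visits if not referrer)
    return direct / len(visits)

# Three of four visits arrived with no referrer at all.
visits = ["", "", "http://www.google.com/search?q=widgets", ""]
print(direct_traffic_share(visits))  # 0.75 – an unusually brand-driven profile
```

A share that high, on a site with a thin link profile, is the kind of mismatch the entrance-page signal could help explain.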
Most SERPs these days are tracked by the engines. Each link to website X, Y or Z will be wrapped inside a little script that records the position of the URL within the SERP. One would suspect that they would then take other factors into consideration and analyse that data too.
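The wrapping itself is simple to picture. A rough sketch, with an entirely made-up tracking endpoint, of how a result link might carry the query and its rank through a redirect:

```python
from urllib.parse import urlencode, parse_qs, urlparse

def wrap_result(url: str, query: str, position: int) -> str:
    """Wrap a SERP result so the click passes through a tracking
    endpoint that records the query and the result's rank.
    The endpoint name is invented for illustration."""
    params = urlencode({"q": query, "pos": position, "u": url})
    return f"http://search.example.com/click?{params}"

tracked = wrap_result("http://example.org/widgets", "blue widgets", 3)
print(tracked)

# The engine's click handler can recover which rank was clicked:
recorded = parse_qs(urlparse(tracked).query)
print(recorded["pos"][0])  # "3"
```

Aggregate that over millions of clicks and you have exactly the query-versus-position-versus-click data the next paragraph talks about.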
They could, for example, look at page titles and compare them against user queries, looking for relationships that drive up click frequencies. Such data could be used to develop all manner of new products and services – similar to how supermarkets stock their shelves, or manufacturers present their products or write their advertising copy. Certain combinations work where others don’t. Non-static, ever-changing, constantly evolving, massively complex, but… measurable and noteworthy nonetheless.
They could look for users who had clicked a URL only to hit the back button shortly afterwards. A high incidence of such occurrences, measured against sufficient data, could well indicate a signal of low quality. This could then be folded in on any subsequent data refresh.
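That quick-back-button pattern is also easy to sketch as a metric. Assuming (hypothetically) click logs that pair each clicked URL with how many seconds passed before the user returned to the SERP:

```python
def quick_back_rate(clicks, dwell_threshold=10):
    """clicks: list of (url, dwell_seconds) pairs from SERP click logs.
    Returns, per URL, the fraction of clicks where the user was back
    on the results page within dwell_threshold seconds (an arbitrary
    cut-off chosen here for illustration)."""
    totals, quick = {}, {}
    for url, dwell in clicks:
        totals[url] = totals.get(url, 0) + 1
        if dwell < dwell_threshold:
            quick[url] = quick.get(url, 0) + 1
    return {url: quick.get(url, 0) / totals[url] for url in totals}

# Two of three clicks on /spammy came straight back; /useful held its visitor.
log = [("/spammy", 3), ("/spammy", 5), ("/spammy", 120), ("/useful", 240)]
print(quick_back_rate(log))
```

A persistently high rate for one URL across enough queries is the sort of thing that could plausibly be folded into the next refresh as a demotion signal.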
Just a few things to think about there. There are lots of others relating to authority and how that’s arrived at, domain names and the factors applicable, content and the distribution thereof, social media and how that can help or hinder. The point is that these and the things mentioned above are aspects of the mix that you can’t really ignore. If you are serious about your business, then you just have to keep your eye on the ball – and if you can’t, then at the very least you’d better think about employing an SEO who can.
As much as I’d like to think that the world is this super-nice fluffy place full of people wishing to help me do really well and succeed and stuff, I’m also long enough in the tooth to know that there are also a bunch of people quite happy to kick my arse and trample all over me at the first opportunity. It’s by and large how business works – to the victor go the spoils and all that stuff.
I don’t know who said it, but it’s one of life’s truisms: keep your friends close and your enemies closer still. For me that’s as relevant to SEO as it is to war or any other scenario where you could end up getting squashed. If you know the reasons why you might get squashed, either heavily or lightly, then you might just be able to do a thing or two to prevent it. Trust me, I know what it’s like to get squashed – it’s damn painful.
Search engines are our frenemies.

Posted on: 27th June 2007, by: Rob Watts