Jill talks about changing URLs in her post here. In my opinion, if you have a URL that already ranks well, changing it for the sake of a position or two is a little silly and potentially destructive.
That said, it did get me thinking about the whole keywords-in-URLs question: is using them a good thing or a bad thing?
Perhaps “bad thing” doesn’t really come into it. Beyond excessive use, I can’t think of a genuine downside. If anything, they are a good thing: they are descriptive for humans, and they may also gain you a little weight in any link-based algorithm that credits keywords in a link’s anchor text, especially if people choose to link to you using the bare URL. Seen within a SERP, they may also encourage a user to click through, simply because they tie the page to the user’s query.
Now, if that URL is picked up by a search engine, any anchor text attribution will take either the form 123456789 or keyword-keyword. Keyword-keyword would certainly be of more benefit, especially as hyphens are treated as word delimiters. (Jill does cover this in her piece, so do go and check out what she said.)
So what to do? Do we create nice juicy keyword URLs in our CMSs, or do we stick to short xyzpagename.htm conventions? I think it’s fair to say that we’d be better served long term by using keywords in our URLs, if only for the user benefits mentioned previously.
Algorithmically, do keywords in a URL even matter?
It’s hard to prove or disprove absolutely. I’ve tested this in the past, and at the time I concluded that keywords in a URL were worth doing and did give you an additional asset. Yet I can’t say with any certainty that the same applies today and forever more, simply because there are too many variables at play and you can never be sure which SERPs are being weighted in which way and why. In my opinion, different SERPs have different entry criteria: what might be easy to rank for in one space will be doubly difficult in another, simply because of how the algorithm has been weighted at the back end.
Search algorithms are a constantly moving target (a little like search guidelines). They are updated and modified to take into account both the changing nature of the Internet itself and the actions of SEOs looking to exploit a flaw or two.
How would you test such a thing?
There are all manner of ways of testing things, or of reverse engineering algorithms to see how they work. I won’t dwell on the software that already exists, other than to say that some programs let you analyse SERPs and look at things like keyword placement, keyword density, backlink counts and other factors contributing to overall SERP position. None of them delivers any kind of definitive “that’s the whole unifying answer to what you seek”, simply because there are too many hidden variables that we can’t access or scrutinise. These might be the trust rank of the page or domain that links out, or the human factor of the edited SERP, whereby a search engine employee has artificially downgraded or boosted a particular page or domain.
Thankfully, for the purpose of this little test, I think there is still a way to determine whether keywords in a URL have a contributory benefit. As an example, in a test of “do keywords in the URL have any bearing on a SERP”, here is what you might want to try.
Create two pages of equal size and structure.
Let’s say that each page has a title tag, an h1 tag and a paragraph of random nonsense text containing an instance of the “magic” keyword. The magic keyword would be something like huggersaurus, that mythical friendly dinosaur with a penchant for squashing people with love.
Page one would mention the keyword in the title, the Hn tag, the p tag and the URL.
Page two would mention the keyword in the title, the Hn tag and the p tag, but not within the URL.
We would then link to these pages using our anchor text and see which one was returned first in any SERP.
We would need to vary the other words within our title, Hn and paragraph tags in a way that created two different pages of equal size and keyword density. It wouldn’t really help our test if one were demoted on the basis of some duplicate content penalty.
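To make that concrete, here is a minimal sketch of how you might generate the two test pages. The filenames, filler vocabulary and helper names are my own invention for illustration; the only requirement is that both pages share size and keyword density while differing in their surrounding text.

```python
import random

# Hypothetical filler vocabulary; any nonsense words will do.
FILLER = ["wibble", "plonk", "grumph", "snazzle", "quibber", "dwimble", "frop"]

def build_page(keyword: str, seed: int, words: int = 50) -> str:
    """Build a test page: keyword in the title, the h1 and the paragraph.

    Different seeds give different filler text, so the two pages are not
    duplicates of each other, while word count and keyword density stay equal.
    """
    rng = random.Random(seed)
    text = [rng.choice(FILLER) for _ in range(words)]
    text.insert(rng.randrange(words), keyword)  # one instance of the magic keyword
    return (
        "<html><head><title>{kw} test page</title></head>\n"
        "<body><h1>{kw}</h1>\n"
        "<p>{body}</p></body></html>\n"
    ).format(kw=keyword, body=" ".join(text))

# Page one: keyword in the URL (filename); page two: a neutral filename.
pages = {
    "huggersaurus.htm": build_page("huggersaurus", seed=1),
    "xyzpage.htm": build_page("huggersaurus", seed=2),
}
for name, html in pages.items():
    with open(name, "w") as fh:
        fh.write(html)
```

Both pages end up with the keyword appearing exactly three times (title, h1, paragraph) and the same word count, which keeps the density comparison fair.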
We’d also need to ensure that, for the purposes of our test, we measured and recorded which page we linked to first, and how.
For example, I might well create a link_to_page_one_here, then a link_to_page_two_here.
Any bot encountering such links *might* take into account which link was cited first and apply a small degree of weight via some date-encountered timestamp. To account for this, we would run another test in tandem that reversed the positions, so that we linked to the page with keywords in the URL second rather than first. The pages would be of equal size and structure, albeit with a different keyword. We could then look at which page was returned first in any SERP and draw our conclusions. If the page with the keyword in the URL was returned first, we could say that keywords in the URL do have a slight advantage over those that do not.
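Reading off the result is then mechanical. Assuming you have obtained the SERP for the test query as an ordered list of result URLs (by whatever means; scraping is not shown here), a small helper can report which test page won. The function name and example URLs are hypothetical.

```python
def first_ranked(results, page_with_kw, page_without_kw):
    """Given SERP result URLs in rank order, report which test page appears first.

    Returns 'with-keyword', 'without-keyword', or None if neither is present.
    """
    for url in results:
        if page_with_kw in url:
            return "with-keyword"
        if page_without_kw in url:
            return "without-keyword"
    return None

# Hypothetical observed SERP for the query 'huggersaurus':
serp = [
    "http://example.com/huggersaurus.htm",
    "http://example.com/xyzpage.htm",
]
print(first_ranked(serp, "huggersaurus.htm", "xyzpage.htm"))  # prints "with-keyword"
```

Running the same helper over the reversed-link-order test lets you check whether the outcome holds in both configurations.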
If we wanted to, we could also play around a little more and link to the pages in different ways. We could see whether anchor text gave a significant boost to our pages and record how the variations affected the outcomes. We could, for example, link to the page with a single keyword, multiple keywords, or no keyword at all, and rinse and repeat until we were happy with our results.
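One way to keep that rinse-and-repeat honest is to enumerate the combinations up front, so every variation gets its own pair of pages rather than being tested ad hoc. The dimension names below are my own labels, not anything from the post.

```python
from itertools import product

# Hypothetical dimensions to vary across test runs.
anchor_styles = ["bare-url", "single-keyword", "multi-keyword", "no-keyword"]
link_order = ["keyword-page-linked-first", "keyword-page-linked-second"]

# Each combination is one test cell: a fresh pair of pages, linked this way.
test_matrix = list(product(anchor_styles, link_order))
for i, (style, order) in enumerate(test_matrix, 1):
    print(f"test {i}: anchor={style}, order={order}")
```

Four anchor styles crossed with two link orders gives eight cells, each needing its own magic keyword so the runs don’t contaminate one another.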
Lots of SEOs do this sort of stuff; it’s a great way of learning about algorithms and weightings, and how the positioning of elements can and does have an effect on the makeup of a SERP. That said, lots of SEOs don’t bother, simply because they already have an instinctive feel for what works and what doesn’t. They know how to get pages ranked and know the best methods for doing so. They don’t need to test such things, and unless you are an anorak geek, neither should you really! It’s fun to play around with, though, don’t you think?