Why WP Needs a Compare Cache Against Post Plugin

Recently I had a few blogs hacked.

The reason was simple – I didn’t upgrade my WP installs and I didn’t check every single plugin I installed for safety and security (who does?). Lots of people don’t; a simple query on Google will show you thousands who haven’t either.

Why? Well, one of the reasons is that it used to be very difficult, or shall we say, cumbersome to do so. You had to jump through all kinds of hoops and it was a basic PITA.

Well, apparently, it isn’t anymore. There’s even an automatic upgrade WordPress plugin that checks your install and helps you sort it out.

Ok, so yes, lots of people are going to scream “serves you right Rob, you should have updated your code”.. but come on, get real: who does so religiously? And what about holidays, or illness, or time away from the computer? Hacks are going to happen, even with the latest souped-up versions.


Google spidering its own custom search results?

I was doing a little search this evening and found this amongst the results on the 1st page.

I thought it might be something related to me being signed in to a Google account so I signed out and tried again. Same result. So I opened up IE7 and same result.

I tried the same query on but it didn’t replicate.

Who is this mystery user=016597473608235241540, I wondered. No one, it seems, and there are a few more of them too. A little query reveals just 152 results, which in the grand scheme of things is tiny. The question, however, is why? Why spider custom search results and include them in a SERP, however small the sample?

Noindex perhaps? How about robots.txt, even?
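Either would do the job. If Google wanted these custom result pages out of its own index, it wouldn’t take much more than a rule like this (the path here is invented for illustration, I’ve no idea where these pages actually live):

```
# robots.txt on the host serving the custom search results
User-agent: *
Disallow: /custom-results/
```

Alternatively, a `<meta name="robots" content="noindex">` tag on the result pages themselves would keep them out of the SERPs even if they were still crawled.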

The result page itself has the distance in kilometres but no link to any defined map or directions/distance page. Just two links to Wikipedia – crap really, not a good user experience.

I’m sure there’s a better result out there than this one.

Unfairness inherent in authorities – just another flaw in an algo

Before I say too much else I just wanted to say that generally in most cases I think it unnecessary to be too specific when highlighting the failings and flaws of others. It’s too easy to point fingers and say, oh look at how crap so and so is, or look at how so and so are doing that. In most cases it’s simply not necessary, you can say the same thing without making an enemy for yourself.

Why am I gabbing on about this? Well, I guess I’ve been partially inspired by a piece by a guy named Loren Baker at Search Engine Journal, a site I read regularly and most of the time simply love to bits. Yet today I was left with a bit of a hmmmn taste in my mouth, asking myself whether it was really necessary to out the guys he did in the way he did. In one fell swoop he has effectively smashed the revenue stream of one particular website (or seriously diminished its efficacy) and no doubt condemned the site’s advertisers to declining revenue streams at some later point.

The power of the written word eh?

Ok, so sure, anyone could have dobbed these guys in via a search engine report link; we all know that, and hey, perhaps people already have. The point is, though, that SEJ is read regularly, has a hefty subscriber base, and what is written there is practically guaranteed to be read by Googlies and Yahoos and MSN search dudes. I don’t know Loren, so I can’t comment on the type of guy he is or even try to second guess his motives. At worst he might have a payday loans site at position 11; at best he might just be as perplexed as the rest of us by the apparent power of the noscript tag and authority domains, and is wondering why this is still so effective. I expect it is the latter.

Where is the juice – Noscript tag or Authority domain?

To think that noscript content could have such an impact on SERPs in isolation would be pretty silly.

Let’s get this straight right here, right now. The noscript tag is no magic bullet. The examples highlighted at SEJ are not (or weren’t) sitting at positions 1 and 3 in Google simply because of a few links contained in a noscript tag; they were there because the links came from sites of multiple themes and disciplines, all of which contained the hit counter code.
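For anyone who hasn’t seen one of these widgets up close, the embed code in question typically looks something like this (domains and anchor text invented for illustration):

```html
<!-- the counter widget, served by the hit counter site -->
<script type="text/javascript" src="http://example-counter.com/count.js"></script>
<noscript>
  <!-- only rendered when JavaScript is off, but still crawled and counted -->
  <a href="http://example-loans.com/">payday loans</a>
  <a href="http://example-loans.com/cash/">cash advance</a>
</noscript>
```

Every webmaster who pastes that snippet onto a page unknowingly hands the advertisers a keyword-rich backlink from yet another domain.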

False authority too easily attained

Why does (or, soon to be, did) the hit counter domain have so much power and authority?

For those of you who may have been asleep for the past 3 or so years, domain authority in SEM terms relates to a domain’s ability to rank, convey link juice or pass PageRank. The idea is that if enough domains are linking to a singular site, then the site being linked to from so many different points (domains) in the web graph could well be an on-topic site for the keywords being used to link through to it. It’s one of the reasons why blogs and SMO sites are considered favourably in the search ranking fraternity. The idea is bolstered by the belief that individual bloggers are less interested in gaming search engine rankings than the minority of so-called SEO’s and webmasters who are. The democratic effect of lots of people talking about a topic dictates that this social effect should be looked at, noticed and absorbed into any overall ranking score.

This all sounds somewhat perfect, idyllic even. A meritocratic way of ranking sites from the social chatter of weblogs and other live mediums. Harder to game, seemingly more reliable in any scoring system.

The applied semantic technology of old (we were told) was a vital tool for classifying content into its various themes and classifications. People have blogged and bragged about the importance of getting on-topic themed links from related sources (me included at some point, I’m sure), yet when we look at that example it shows that in reality huge aspects of all this are bollocks. Forget your themed links from the right sites and directories, feck that; just go out and get any type of link from any type of domain that you can for your singular target keyword and… kazaaam, you’ll get the rank you want.

I was going to show what I meant further by using the Google link: command, yet curiously it shows no backlinks already. I wonder why that might be 😉

Anyways, not to worry; we can use Yahoo’s Site Explorer, with that funny old seo-rd parameter they like to chuck in there, and note that there are actually 2,500+ reported backlinks for that domain. I can’t say whether this is accurate or not, as the SE’s may already have applied their SEO-paranoid counter measures, but the point is that a cursory glance over the sites shown reveals that the domains using the hit counter code covered a very broad range of domains and blogs. They were not all from finance or loan related sites; in fact very, very few of the sites discussed finance or loans in any way at all!

Their backlinks came from .edu’s, .org’s, .com’s, blogs, websites about religion, books, wood, horses; in fact, you name it and there was probably a site of one sort or another linking back to the hit counter’s advertisers.

What it reveals is that Google in particular doesn’t appear to work too hard at establishing domain authority. It seems to rely on numbers and not much else. Why else would an uber-competitive term like payday loans be so easily and readily attainable?

Success for attaining payday loan SERP numero uno status was arrived at just like this.

1. Create a keyword domain that discussed finance and loan stuff within its content.

2. Get lots of links, from lots of different domains, with your ideal keywords as anchor text.

Yep, that was all there was to it. No need to get the right types of links from the right types of sites, just get links of whatever type and you are good to go.

So they went to the hit counter site, checked out its advertising rates and happily used its advertising program to boost them up the SERP’s. The hit counter domain had authority, built upon the juice conveyed back from the thousands of domains and sites that linked back to it within their code. This told Google, and perhaps other search engines, that here was a site being linked to from lots of different domains and IP addresses. It must, therefore, be some kind of useful resource, worthy of whatever authority score the algo decided to bestow.

Yet, if you look at that and weigh it against the idea of the social web and multiple voices linking to singular things with related keywords, then you see that in this regard a hit counter just shouldn’t have been in the same kind of crowd. It hadn’t done anything wrong; hit counters have been around since long before Google or link text algorithms. It’s how they work: they sit on a site and link back to the mothership to record things like referrals, times, dates and click paths.

So to me at least it shows that the whole ‘authority’ thing is at best a little weak and at worst completely and utterly underdeveloped. Why isn’t the algo detecting multiple same-text incursions?

Why doesn’t it count the number of instances of keyword anchor text and decide that a number above a certain threshold or percentage may be skewed, and perhaps mark it down a touch?

Why doesn’t it look inside the containers where these links are found and make a judgement on that basis? In the payday loan example all of the links were inside a noscript tag! Yet the algo didn’t detect this fact, and allowed the domain to rank for its keywords.

Why doesn’t it look at the placement of the code itself and notice a pattern? Whatever happened to the concept of Block Level Link Analysis?
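Nobody outside the search engines knows how any of this is actually weighed, but a crude filter along these lines wouldn’t exactly be rocket science. A minimal sketch, with thresholds and data invented purely for illustration:

```python
from collections import Counter

def looks_skewed(backlinks, text_threshold=0.5, noscript_threshold=0.5):
    """Flag a link profile where one anchor text dominates, or where
    most links sit inside <noscript> containers.

    backlinks: list of (anchor_text, in_noscript) tuples.
    The 50% thresholds are invented for illustration.
    """
    if not backlinks:
        return False
    total = len(backlinks)
    # share of the single most common anchor text
    top_share = Counter(t.lower() for t, _ in backlinks).most_common(1)[0][1] / total
    # share of links found inside a noscript container
    noscript_share = sum(1 for _, ns in backlinks if ns) / total
    return top_share > text_threshold or noscript_share > noscript_threshold

# a profile where nearly every link says "payday loans" from inside noscript
profile = [("payday loans", True)] * 95 + [("free hit counter", False)] * 5
print(looks_skewed(profile))  # True
```

A natural link profile, with varied anchor text in ordinary page copy, sails straight through a check like this; the hit counter exploit lights it up on both counts.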

The tactic as described is nothing new; there are thousands of others all doing the same. Just go and do a search on Google or Yahoo for free hit counter and see who is advertising. I’d bet that most are employing similar tactics to boost their own sites, or the sites of clients, up the SERPs. It’s an exploit that is likely to grow and be adapted.

Is it going to be closed anytime soon? Hell, who knows. Surely it doesn’t take too much effort to say: if a link is this or that, then discount its value. It makes you wonder what some of those search guys get up to all day…

Blogspot domains identified as fine purveyors of Spam

Free content based domains are beacons for Spam

I was over at Bill Slawski’s excellent blog earlier today researching text to link proximity stuff, and stumbled across a post Microsoft Follows the Money to Find Spammers which referred to this interesting Spam research paper from Microsoft entitled: Spam Double Funnel: Connecting Web Spammers with Advertisers. For the geekazoids amongst you there’s lots of interesting snippets and observations. Bill’s already covered most of the headlines over at his blog, so I won’t regurgitate that.

What stuck out to me was set amongst the conclusions, the main one being that blogspot domains were the biggest culprits when it came to originators of spam.


…doorway domains, we showed that the free blog-hosting site had an-order-of-magnitude higher spam appearances in top search results than other hosting domains in both benchmarks, and was responsible for about one in every four spam appearances (22% and 29% in the two respectively, to be exact). In addition, at least three in every four unique blogspot URLs that appeared in top-50 results for commercial queries were spam (77% and 75%). We also showed that over 60% of unique .info URLs in our search results were spam, which was an-order-of-magnitude higher than the spam percentage number for .com URLs.


I don’t know if the findings of papers like these carry any weight or consideration in any subsequent re-jigs of search engine algorithms. Only the search engines truly know what is and what isn’t a consideration in any equation. We can certainly say that if a mainstream domain owned and controlled by a party other than the search engines were responsible in similar ways, then its tenure in the SERPs (search engine results pages) would be very short lived. Its authority score would suffer, as would its overall TrustRank. In essence, once identified, it’d be dead in the water.

Search algorithms aren’t changed on a whim, of course; it’s a relatively safe bet to assume that search models are consistently tested and evaluated internally before any public release. Documents like the one referenced give interesting insights into the minds of the people who look at webspam.

Perhaps it’s for these very reasons that the people behind other platforms that let humans write and create content make such public pronouncements detailing their determination to eliminate, or at least drastically reduce, spam in their indices. After all, failure to do so, in light of the above for example, could quickly lead to a diminution in trust and authority, with the resultant knock-on effects of poor ranking ability and the negative monetisation that would usually follow significantly reduced traffic levels. By publicly affirming their commitment to tackle it, they may well save themselves from the heavy axe a search engineer can wield.

Jason Calacanis of Mahalo was kinda right when he said

When I had SEOs on the last CalacanisCast they raved about Squidoo and it’s ability to game the system, and if SEOs love your platform you have a HUGE problem.

The fact is that web spammers (not all SEO’s are web spammers, Mr C) will indeed game the system. Some see it as their job to take competitive edges and work them to the max, the rationale being that if they didn’t then somebody else would.

I guess it’s up to platform owners to ensure that access and effectiveness are reduced. It’s a big reason why WordPress and all the major blogging platforms introduced nofollow into their software. For those who don’t know, nofollow restricts the ability of a link to pass PageRank, or link juice or link love or whatever else you want to call it, to the page to which it points.
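Mechanically it’s nothing more than an attribute on the anchor; the platforms rewrite commenter links along these lines (URL invented for illustration):

```html
<!-- a plain link, counted by the engines -->
<a href="http://example.com/">my site</a>

<!-- the nofollowed version a blogging platform emits for comment links -->
<a href="http://example.com/" rel="nofollow">my site</a>
```

Same link for the human reader, but the engines are told not to credit it.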

Perhaps Mahalo and Squidoo and Blogspot should just ‘nofollow’ everything they link out to, maybe they should just close it all off to spiders and bots. They haven’t been created for the benefits of search engines after all…

Perhaps serious individual content creators should just go out and buy a domain for $20, grab a WP install, get some cheap blog hosting and run their own show. It isn’t exactly rocket science, after all. It does make you wonder why a person would bother writing content and help make some other guy rich… unless, of course, you were writing it to funnel people elsewhere and monetise it to your own ends.

I do have some sympathy with what those guys say, though it narks me a little, as it suggests that people like me are scum-sucking sleaze buckets. Most of us aren’t; it’s just a small minority of uber spammers who spoil it for everyone else.

Maybe the likes of Mr Godin and Mr Calacanis could help by using the term web spammers instead of SEO’s. It’s a far more accurate descriptor.

Meantime, if you are blogging on a free platform, then perhaps you ought to at least consider moving on…

Comment spammers suck bottom

I like the Akismet spam catcher feature of WordPress; it catches a shed load of stuff that I’d otherwise have to fanny about with.

Occasionally it makes a little mistake and puts a genuine comment or two into the sin bin, but on the whole it does a pretty good job.

Usually I just delete and forget, but today one actually raised a smile so here I am blogging about it.


I found myself thinking, well um…, if I knew who you were I’d really love to answer.

I’d like to be able to track you down and ask why you seriously bother with this stuff. Do you really think it works anymore? What do you expect me to say in response to your post? Gee thanks! 😀 I’m so glad you stopped by and shat on my page 😀 Please do come back and shat some more, thanks very much, no really! No, exactly, unlikely. So ok, if you are an A-grade arse (and I suspect you are) you probably use some kind of automated script that scans SERPs and looks for various strings in various well known blog platforms, hoping that you hit lucky and find a blog that doesn’t premod or add nofollows. Heck, I bet you don’t even care about the nofollow thing either; it’s a link after all, right?

It’s not victimless, dude. It wastes people’s time, it’s negative karma, and it’ll come back and bite you on the arse. Do yourself a big one and stop… At least consider using the little tenure you have left on this earth to do something meaningful and productive. This really isn’t good stuff, seriously.

So if any Y! rep out there is reading this and has a way of locating mr and deleting his arse out of existence, then hey, don’t let me stop you; I’m sure he’d appreciate it. 😀



Give a little link love – say no to nofollow, remove the link condoms

Nofollow and WordPress – why I’m removing the rewrite

I was having a read here and there today about nofollow, and was left saying to myself, hmmn, well at least I don’t employ the damn thing, and if I do it’s usually with a nudge and a wink, poking fun at something or other. I then fired up the Firefox SearchStatus plugin, switched on the highlight-nofollow option and carried on flicking through various tabs and links, surprised to see the number of red rel=nofollow flags popping up here, there and everywhere.

It was kind of ironic to read Andy Beal’s mini diatribe about Wikipedia, only to see his own comments section littered with a whole lot of red dashed boxes! Every single link in every commenter’s comment, including the links to their sites, is nofollowed; even Andy’s own!


Watch your CMS – it could be getting you into trouble

Graywolf blogged about some Disney blog getting de-indexed for hidden text.

Seems that some blogging platforms/CMS’s have issues that could get your site removed for web spamming, by inserting text that is hidden.
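The sort of markup that trips the filters is simple enough; a template author just tucks a link inside an invisible container (domain and anchor text invented for illustration):

```html
<!-- buried in the theme's footer: invisible to visitors, readable by bots -->
<div style="display:none">
  <a href="http://example-sponsor.com/">sponsor keywords here</a>
</div>
```

The site owner sees nothing on the page, but the engines see a hidden keyword link and may treat the whole site as the spammer.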

One commenter there had this to say:

Some freely available WordPress templates contain hidden links from the designer linking to certain cancer websites. I am sure most people do not see that, as it is kind of sneaky.

I do wonder why they don’t just ignore such aspects for ranking purposes. If it’s identified as hidden algorithmically, then it can be ignored as a ranking factor too… no? Or am I missing some bigger picture here?
