Category Archives: yahoo

Unfairness inherent in authorities – just another flaw in an algo

Before I say too much else, I just wanted to say that in most cases I think it unnecessary to be too specific when highlighting the failings and flaws of others. It's too easy to point fingers and say, oh look at how crap so and so is, or look at what so and so are doing. In most cases it's simply not necessary; you can say the same thing without making an enemy for yourself.

Why am I gabbing on about this? Well, I guess I've been partially inspired by a piece by a guy named Loren Baker at Search Engine Journal, a site I read regularly and most of the time simply love to bits. Yet today I was left with a bit of a hmmmn taste in my mouth, asking myself whether it was really necessary to out the guys he did in the way he did. In one fell swoop he has effectively smashed the revenue stream of one particular website (or seriously diminished its efficacy) and no doubt condemned the site's advertisers to declining revenue streams at some later point.

The power of the written word eh?

Ok, so sure, anyone could have dobbed these guys in via a search engine report link, we all know that, and hey, perhaps people already have. The point, though, is that SEJ is read widely, has a hefty subscriber base, and what is written there is practically guaranteed to be read by Google, Yahoo and MSN search dudes. I don't know Loren, so I can't comment on the type of guy he is or even try to second-guess his motives. At worst he might have a payday loans site at position 11; at best he might just be as perplexed as the rest of us by the apparent power of the noscript tag and authority domains, and is wondering why this is still so effective. I expect it is the latter.

Where is the juice – Noscript tag or Authority domain?

To think that noscript content could have such an impact on SERPs in isolation would be pretty silly.

Let's get this straight right here, right now. The noscript tag is no magic bullet. The examples highlighted at SEJ are not (or weren't) sitting at positions 1 and 3 in Google simply because of a few links contained in a noscript tag; they were there because the sites carrying their links spanned multiple themes and disciplines, all of which contained the hit counter code from Hitcountermaster.com.

False authority too easily attained

Why does (or, soon enough, did) Hitcountermaster.com have so much power and authority?

For those of you who may have been asleep for the past 3 or so years, domain authority in SEM terms relates to a domain's ability to rank, convey link juice or pass PageRank. The idea is that if enough domains are linking to a single site, then the site being linked to from so many different points (domains) in the web graph could well be an on-topic site for the keywords being used to link through to it. It's one of the reasons why blogs and SMO sites are considered favourably by the search ranking fraternity. The idea is bolstered by the belief that individual bloggers are less interested in gaming search engine rankings than the minority of so-called SEOs and webmasters who are. The democratic effect of lots of people talking about a topic dictates that this social effect should be looked at, noticed and absorbed into any overall ranking score.

This all sounds somewhat perfect and idyllic even. A meritocratic way of ranking sites from the social chatter of weblogs and other live mediums. Harder to game, seemingly more reliable in any scoring system.
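
To caricature it (and this is only my own sketch of the naive version, not anyone's actual algorithm), the kind of scoring the payday loan example suggests looks worryingly close to this; every name and number below is invented purely for illustration:

[php]
<?php
// A deliberately naive "authority" score: count how many distinct domains
// link in, and never mind what kind of sites they are, what the anchor
// text says, or where on the page the link sits.
function naive_domain_authority(array $backlink_urls) {
    $domains = array();
    foreach ($backlink_urls as $url) {
        $host = parse_url($url, PHP_URL_HOST);
        if ($host) {
            $domains[$host] = true; // one vote per linking domain
        }
    }
    return count($domains); // more unique domains = more "authority"
}

$links = array(
    'http://some-edu-site.example/counter-page',
    'http://a-blog-about-horses.example/2007/05/a-post',
    'http://church-newsletter.example/index.html',
);
echo naive_domain_authority($links); // 3 - theme and anchor text never checked
[/php]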

The applied semantic technology of old (we were told) was a vital tool for classifying content into its various themes and classifications. People have blogged and bragged about the importance of getting on-topic, themed links from related sources (me included at some point, I'm sure), yet when we look at that example it shows that in reality huge aspects of all this are bollocks. Forget your themed links from the right sites and directories, feck that, just go out and get any type of link from any type of domain that you can for your singular target keyword and…kazaaam, you'll get the rank you want.

I was going to show what I meant further by using the Google link command link:http://www.hitcountermaster.com yet curiously it already shows no backlinks. I wonder why that might be ;)

Anyway, not to worry, we can use Yahoo's Site Explorer with that funny old seo-rd parameter that they like to chuck in there, and note that there are actually 2500+ reported backlinks for that domain. I can't say whether this is accurate or not, as the SEs may already have applied their SEO-paranoid countermeasures, but the point is that a cursory glance over the sites shown reveals that the domains carrying the hit counter code covered a very broad range of domains and blogs. They were not all from finance or loan related sites; in fact very, very few of the sites discussed finance or loans in any way at all!

Their backlinks came from .edu's, .orgs, .coms, .co.uk blogs, websites about religion, books, wood, horses; in fact, you name it and there was probably a site of one sort or another linking back to hitcountermaster.com's advertisers.

What it reveals is that Google in particular doesn't appear to work too hard at establishing domain authority. It seems to rely on numbers and not much else. Why else would an uber-competitive term like payday loans be so easily and readily attainable?

Success in attaining payday loan SERP numero uno status was arrived at just like this:

1. Create a keyword domain that discusses finance and loan stuff within its content.

2. Get lots of links from lots of different domains, with your ideal keywords as the anchor text.

Yep, that was all there was to it. No need to get the right types of links from the right types of sites, just get links of whatever type and you are good to go.

So they went to hitcountermaster.com, checked out their advertising rates and happily used their advertising program to boost them up the SERPs. Hitcountermaster.com had domain authority, built upon the juice conveyed back from the thousands of domains and sites that linked back to it within their code. This told Google, and perhaps other search engines, that here was a site being linked to from lots of different domains and IP addresses. It must, therefore, be some kind of useful resource and worthy of whatever authority score the algo decided to bestow.

Yet if you weigh that against the idea of the social web and multiple voices linking to singular things with related keywords, then you see that in this regard hitcountermaster.com just shouldn't have been in the same kind of crowd. It hadn't done anything wrong; hit counters have been around since long before Google or link text algorithms. It's how they work: they sit on a site and link back to the mothership to read things like referrals, times, dates and click paths.
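
For anyone who hasn't peeked under the bonnet of this sort of thing, the embed code a counter service hands out can carry the payload quite innocently. The snippet below is my own invented illustration of the pattern (none of the URLs, anchors or function names are real, and it isn't the actual Hitcountermaster code):

[php]
<?php
// Invented illustration of a counter-style embed that tucks keyword-anchored
// advertiser links into the <noscript> fallback. Every URL here is made up.
function counter_embed_snippet($site_id) {
    $counter = '<script type="text/javascript" src="http://counter.example.com/hit.js?id='
             . urlencode($site_id) . '"></script>';
    $payload = '<noscript>'
             . '<a href="http://some-advertiser.example.com/">payday loans</a> '
             . '<a href="http://another-advertiser.example.com/">cheap loans</a>'
             . '</noscript>';
    // Thousands of unrelated sites paste this once, and every one of them
    // becomes a keyword-anchored backlink for the advertisers.
    return $counter . $payload;
}

echo counter_embed_snippet('12345');
[/php]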

So to me at least it shows that the whole 'authority' thing is at best a little weak and at worst completely and utterly underdeveloped. Why isn't the algo detecting these multiple same-text incursions?

Why doesn't it count the number of instances of keyword anchor text and decide that a number above a certain threshold or percentage may be skewed and should perhaps be marked down a touch?

Why doesn't it look inside the containers where these links are found and make a judgement on that basis? In the payday loan example all of the links were inside a noscript tag! Yet the algo didn't detect this fact and allowed the domain to rank for its keywords.

Why doesn’t it look at the placement of the code itself and notice a pattern? Whatever happened to the concept of Block Level Link Analysis?
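
Even a crude filter would catch the pattern. Here is a rough sketch of the sort of damping I mean; the thresholds, weights and field names are all mine, plucked from the air for illustration, and certainly not anything the engines actually run:

[php]
<?php
// Toy damping pass over the backlinks pointing at one target domain.
// Each link is an array like: array('anchor' => 'payday loans', 'in_noscript' => true)
// All thresholds and weights are invented.
function damped_link_score(array $links) {
    $anchor_counts = array();
    foreach ($links as $link) {
        $anchor = strtolower(trim($link['anchor']));
        $anchor_counts[$anchor] = isset($anchor_counts[$anchor]) ? $anchor_counts[$anchor] + 1 : 1;
    }

    $total = count($links);
    $score = 0.0;
    foreach ($links as $link) {
        $weight = 1.0;
        $anchor = strtolower(trim($link['anchor']));

        // If one anchor phrase accounts for more than, say, 60% of all the
        // links, treat those links as suspiciously uniform and mark them down.
        if ($anchor_counts[$anchor] / $total > 0.6) {
            $weight *= 0.2;
        }

        // Links hidden inside a <noscript> container count for next to nothing.
        if (!empty($link['in_noscript'])) {
            $weight *= 0.1;
        }

        $score += $weight;
    }
    return $score;
}
[/php]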

The tactic as described is nothing new; there are thousands of others all doing the same. Just do a search on Google or Yahoo for free hit counter and see who is advertising. I'd bet that most are employing similar tactics to boost their own sites, or their clients' sites, up the SERPs. It's an exploit that is likely to grow and be adapted.

Is it going to be closed anytime soon? Hell, who knows. Surely it doesn't take too much effort to say: if a link is this or that, then discount its value. It makes you wonder what some of those search guys get up to all day…


Yahoo confesses its algo is poor and needs a little help

Yahoo! announced a new tag today, supposedly aimed at helping webmasters to section off parts of their pages so that spiders don't index content that is superfluous to the meat and gravy of the page. The 'what a great way to flag SEO'd pages' factor aside, let's look at what they are saying with regard to usefulness to the webmaster.

The “robots-nocontent” tag is a useful tool for webmasters.

  • It can improve our focus on the main content of your pages.
  • It helps target your pages in search results by making sure the appropriate deep page in your site can surface for the right queries.
  • It helps improve the abstracts for your pages in results by identifying unrelated text on the page and thus omitting it from consideration for the search result summaries.

Those bullet points are interesting. Let's have a look at them in reverse. Nope, not reverse order, but reverse logic. Let's see what can be determined by flipping the logic around.
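
Before going any further, one thing worth noting for reference: as Yahoo's announcement describes it, the marker is applied as a class value on ordinary page elements rather than as a meta tag. A minimal sketch, with all of the surrounding markup invented for illustration:

[php]
<div class="main-content">
  <p>The meat and gravy of the page: this is what Slurp should rank and summarise.</p>
</div>
<div class="robots-nocontent">
  <p>Boilerplate navigation, legal blurb and ads that can be left out of consideration.</p>
  <?php get_sidebar(); // e.g. a WordPress sidebar full of off-topic widgets ?>
</div>
[/php]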

Continue reading


Ask.com CEO less than enthusiastic about Y!'s pay for inclusion model

Ross Dunn over at Stepforth SEO wrote an interesting piece discussing Y!'s revamped search marketing program.

The idea once was, and um…still is by all accounts, that you pay to be included via extra spidering of your URLs and then, based upon your 'natural' ranking, you rank in the SERPs.

Hmmn, I find myself wondering: WHY?

Why did they even bother resurrecting this unworkable, accusatory minefield?

If you are ranking ok naturally, then why would you do this? Why would you pay to use this service?
If you are not ranking well ‘naturally’ then again, why would you do this? If paid inclusion gives you no ranking boost, then why do it?

Even if you were mad enough or greedy enough to use it and exhausted your budget, where would you revert to afterwards?

Would you still get spidered regularly? Would you plummet like a stone?

Just makes no sense. If you were to plummet like a stone then it's clear that your ranking was dependent upon the money in your account.

It cannot be both things, what am I missing here?

Jim Lanzone, CEO of Ask.com, whilst commenting, had a number of things to say.

Three years later, I’m still against paid inclusion, because I still think it is hypocritical to charge for something we need to do anyway to be the best search service we can be. I also think it’s a dis-service to our users to blur the line that much between paid content and editorial content.

Absolutely! Where is the editorial transparency? Why shouldn't users have a right to know who got to where and how? Isn't advertising supposed to be labelled as such, so that it's clearly identifiable? Is this advertising or isn't it?

If Y! really think that, some 3 years after a product was greeted less than enthusiastically, they can just repackage it and expect people to buy in, then wow. That's a huge signal.

Let's just put to one side the idea that during these 3 years not one amongst their number could gain sufficient voice and traction to say "hang on a fuckin minute, haven't we already tried this and gotten poo-pooed?" Let's put to one side what the FTC might just have to say about it all. Let's just for one minute look in disbelief at what the logic of their program dictates.

If they expect their index to be increasingly made up of commercial sites that have paid to be included, then they are making a clear distinction between paid and unpaid. They are saying that their index values freshness, and that they will present fresh content by increasing the spidering rate for sites that have paid for it. Good content, useful new stuff that's springing up everywhere else, can go to the hinterlands.

IOW, they just aren't too interested in helping shape a dynamic, evolving web, at least not publicly! So much for a ranking algorithm based on document relevance, or popularity, or usefulness! So much for even calling it a search engine anymore. Give it a couple of years with a program like this and you might as well call it the Yahoo XML feed directory!

Even back in 2004 it really did remind me of the debacle that was Look$mart; it had all the signs of vagueness and incomprehensibility that helped send that firm into a swift about-turn. And yet here we are again, a relaunch! I thought it had died and gone away, seriously!

They love PHP over at Y! They even have Rasmus on their staff. Maybe they can ask him how to escape the $ signs in their code.

Seriously, would they be that surprised to hear people thinking in terms of

[php]
if ($prosubmitparticipant) {
    $rankboost = $positionone;
    $increasedprofits = "yay!";
}

if ($basicsubmitparticipant) {
    $rankboost = ($positionone - 8);
    $increasedprofits = "Hmmn";
}

$urlrank = ($documentscore + $rankboost);
[/php]

Don’t they get it? Didn’t they listen to the concerns back in 2004?

Do Internet searchers get good, accurate information? Or are the results of the search skewed to favor those who’ve paid to be in the index? The jury’s out on that one.

Jim's points are too good to pass over. When referencing their paid inclusion pro model he asked the question:

What are the odds that out of 2 million results for a given query, their partner sites will be ranked highly enough, consistently enough, on their own to: a) generate enough traffic for the partner site to make it worth participating in the program; and b) generate enough revenue for Yahoo to make it worth operating the program?

Again, duh! Absolutely. Where is the logic that argues against a person saying something like: the paid inclusion program is evidence of the Yahoo SERPs being full of nothing but undisclosed advertiser URLs? Why would you even say that the program is aimed at advertisers looking to spend $5000 per month if you weren't in some way going to intimate that they might get some kind of leg-up for doing so? And if that is or was the case, then where is the transparency for the search engine users?

Jim's right again when he says:

I just know that 75% of the clicks on a major search engine typically go into the top 5 results on the page. It would just be too much of a coincidence if paid (and unmarked) partners got those rankings/clicks instead of non-paying sites.

It just makes no sense. In fact this aspect of Yahoo Search Marketing is, IMO, just a lot of old, poorly presented rubbish. It's written in a way that leaves me scratching my head.

Does that matter? Well, it should do. It's people like me who decide whether or not to spend clients' money in this way. Maybe they don't even care. Perhaps they'll just target individuals and sell them a line that spins it positively.

The PPC program? Great, that works; tried and tested. A revamped Overture with a few extra bells and whistles.

PFI in this form? Nah, not for me, nor my clients either. Too many what-ifs and buts for my liking.

Hey Y! AltaVistaaaaaaaaaaaaaa!


NOYDIR WordPress plugin

Inspired by Joost de Valk's noodp plugin for WordPress and the recent Y! announcement, I decided to provide a NOYDIR plugin.

You can download it here. Unzip it, put it in your plugins folder, and activate it. It then adds the following line to the head section of your theme:

<META NAME="ROBOTS" CONTENT="NOYDIR">
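
For the curious, there isn't much to a plugin like this. Roughly speaking it boils down to something like the sketch below, which is my own minimal version rather than necessarily the exact code in the download:

[php]
<?php
/*
Plugin Name: NOYDIR meta tag (sketch)
Description: Prints a NOYDIR robots meta tag in the head of every page.
*/

// Hook into wp_head so the tag is output inside the theme's head section.
function noydir_meta_tag() {
    echo '<meta name="robots" content="noydir" />' . "\n";
}
add_action('wp_head', 'noydir_meta_tag');
[/php]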

For those who are wondering what this is for, you can check out the Y! searchblog.

<META NAME="ROBOTS" CONTENT="NOYDIR">
<META NAME="Slurp" CONTENT="NOYDIR">

So there you have it. If you do not want them to use your directory title and description in their SERPs you can now opt out by inserting either of those tags.
