Month: February 2007

Bait and switch a legitimate traffic building tool?

Lyndon wrote a good post today talking about bait and switch, or as he called it, “switchbait”. I’m glad he did, cos I was in one of those ‘shit, what shall I blab about today’ moods.
The method is as old as the hills. Build a domain, get the visitors, move them on to somewhere else. At least that’s a quick and dirty interpretation.

Some of you seasoned SERP watchers might recall the days when highly tuned cloaked content found its way into search engine indices. Titles were carefully crafted to grab the user’s attention and get them to click on through. When the user clicked the result, the domain then redirected them on to some affiliate site that paid the redirecting site a referral fee. It still happens, but it’s not as endemic.

The search engines hated that of course. They weren’t interested in the line of thought that said “where’s the harm, everybody wins”, as ultimately they wanted to control, or at least give the illusion that they did, the make-up of their results pages. To allow cloaked content to stay within their indices unchallenged would give credence to the view that they were easy to game and simple to manipulate. No Fortune 500 company really wants to give out those kinds of signals, as if such a view gained momentum it might snowball and spill over onto other core products. Weakened confidence in the technology doesn’t take too long to equate to reduced uptake and use. The house of cards could quickly collapse, seriously affecting revenue models and streams.

The engines today seem to have gotten a grip on traditional sneaky redirects. I haven’t seen a meta refreshed, or unescaped obfuscated javascript redirect for quite some time. I’ve seen the odd 301 or 302 redirect, but with these it’s more difficult to ascertain intent.

The javascript redirect using window.location.href can redirect a javascript enabled browser to new content. Search engines don’t really like this method, as historically they didn’t read javascript too well, especially when it was disguised as var1=lo var2=ca var3=ti etc etc. The bot would see the keywords and markup and score the page as it would most others, but the search engine user would never see it, the page author preferring them to see some money paying page instead.

It’s a similar scenario for the meta refresh too, albeit slightly different in that a zero second meta refresh was generally treated much like a 302 server header, or temporary redirect. Temporary redirects are used in all manner of ways to say that the content that was once here has now gone and moved elsewhere, but may be back at some point. Not everyone has always had access to server side redirects a la header("Location: fullyqualifiedurl"); so the meta refresh tag was a handy method for achieving the same thing, which was moving the user on to somewhere else.
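Purely as a sketch of the two approaches just mentioned: example.com is a placeholder destination and meta_refresh_tag is a hypothetical helper name, not anything the engines or servers provide.

```php
<?php
// Hypothetical helper that builds the meta refresh tag described above.
// A zero second delay behaves, for the user at least, much like a
// server side redirect.
function meta_refresh_tag($url, $seconds = 0) {
    return '<meta http-equiv="refresh" content="' . $seconds . ';url=' . $url . '">';
}

// With server side access you would do the same job in one line instead:
// header('Location: http://www.example.com/newpage.html');

echo meta_refresh_tag('http://www.example.com/newpage.html');
```

The tag goes in the page head; anyone stuck on a host with no scripting at all can paste it in by hand, which is exactly why it was so popular.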

301s and 302s are, in tech circles, a recognised way of redirecting users and their agents on to new locations (URLs). Domains change hands, content is altered, URLs change too. There needed to be a legitimate way of letting people know, without just plonking the old page before them and embedding a big fat THIS CONTENT HAS MOVED TO message.

The knowledge of how search engines interpret such things can be used in all manner of ways. At best it can be used to legitimately move a user on as described previously. At worst it can be used to trick or deceive; in the worst extremes it’s the user who is deceived, referred on to something heinous or unrelated, and at best the search engine, deceived into believing that the spidered content was what would be shown to its users.

How far away is Lyndon’s example from what is described above? Lyndon proposes to build a domain, create leverage and authority, and then subsequently apply it to a third party. Is this any different from showing the various stakeholders, say Technorati, Digg, Y! or Google, one thing, only to subsequently move the goalposts and move it all on?

To my mind, no, not really. Unless Lyndon had told us his intent we’d never have known. Domains are bought and sold and change hands every day. It’s called business. What if Lyndon had done exactly as described, yet told no one, or simply redirected/moved the blog/domain to a directory on his client’s/affiliate’s site? Perfectly legitimate of course, yet to announce the intent to do this for manipulation purposes suddenly puts it all in a different light.

The bottom line is that it’s one thing to create stuff for the technology and traffic providers and use it to your advantage, but do so in a way where they can decide or determine that your intent was one of use and abuse, and you might well find your efforts were wasted. Do it in a way that is elegant and sophisticated, as described by Lyndon and no doubt used and applied daily by hundreds of other savvy marketers, and you’ll be on to lots of sure fire winners.

Valuing your readership – Front page links for top commenters

I recently installed a plugin that enables you to show your top commenters. It’s there to the right of the screen. I wanted to reward my most active participants.
As some of you may have read already, I don’t nofollow my commenters; I think it’s a lame thing to do to people who take the time to comment on what you have said. Some blogging platforms place restrictions at the core program level, making it very difficult for people to do much about it. Not everyone can get in there and hack or change things they dislike. Lots of bloggers probably don’t even realise that their commenters are nofollowed, simply because they are not as tech savvy as the next person. Not everyone surfs with a customised css file or the Firefox SearchStatus plugin! It’s refreshing to read that people with a broad reach like Robert Scoble are re-evaluating their positions, although some, like Anil Dash, remain less convinced.
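For anyone wondering, the ‘customised css file’ trick is nothing more than a user stylesheet that flags nofollowed links. Something along these lines would do it; the colours are arbitrary:

```css
/* Put a dashed outline around any link carrying rel="nofollow" */
a[rel~="nofollow"] {
    outline: 1px dashed red;
    background-color: #ffecec;
}
```

Drop it into your browser’s user stylesheet and every nofollowed link lights up as you browse.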

Anyways, this isn’t another anti nofollow rant; it’s more a case of talking about building a readership, rewarding those who participate, and some of the stuff that has to be done to make that happen.


Mybloglog is evil – no seriously, it must be!

Seems like mybloglog has had a bad week. I read today over at Andy’s that mybloglog banned a guy named Shoemoney for reasons relating to general not-very-niceness, at least that must have been their perception. Mr Shoe posted a few mblIDs. These can be obtained from user avatars uploaded by mbl users. I use them myself in my mbl tracking script. A reason for banning? No, of course not. I think you have to look a little more closely to begin to understand why.

The reasons behind Mr Shoe’s ban seem to have their roots in him posting various exploits that can be applied and used to, basically, fuck with how mbl works. I don’t think this is a bad thing generally; in fact it’s good to have people point out flaws, especially when they can be patched with relative ease. Constructive criticism is always good. It’s a delicate balance though: if someone took it upon themselves to attack and criticise with regularity, posting things that made me look dumb or stupid, then my gut might be inclined to say hey, do me a favour blokey, just piss off out of it if you don’t like what I am doing. That would of course (as appears to be panning out to be the case) be a mistake, as I’d open myself to all kinds of attacks from followers, detractors and cronies.

John Andrews nails it with his comment at Andy Beal’s.

wow… it’s amazing to see so many users adopting our service so fast. We are really excited to see the validation that the MBL platform is capable of so much more, and also how amazingly innovative the blogging community is. We’ll have to fix some of the loop holes of course, and we’ve got great people working on keeping things moving forward, but keep the feedback coming and let us know what we’re doing right and what you need from us…

People like Matt Cutts have been using similar approaches for years, we all know where it got those guys too.

MBL’s crime, it appears, is that they didn’t code things perfectly, and that enabled people to do things like surf as other people using a cookie exploit, add co-authors without consent, or add other sites to people’s accounts, again without their consent.

Ok, so yes, not the best things in the world to have had happen; it undermines faith and trust in whatever else could be ‘leaking out’. But come on, let’s face it, it’s not exactly the end of the world, or a reason to be filed under heinous crimesville, though it’ll gain one a little attention if you come out and support a position one way or the other.

My personal take is one of: so what, who really cares, who died even? I’ll still use mybloglog. I think it’s a bit of harmless fun and a good way of getting new eyeballs on to what you do and say. It’s a cracking little site that created a lot of interest and buzz in a segment that is continually evolving and growing. So it has a few holes that tech head nerds will point at and say OMG, how bad is that… yeah, so, and?

Some might wonder why MBL is such a focus, why are these evil seo types so interested? Well, SEO types tend to be the ones who push and poke and prod; it’s the nature of getting up where you need to be that drives it. SE algos are that little bit harder to get at these days, and the requirement to gain traction and influence within their algo parameters dictates that people will look at the most cost and time efficient ways of increasing their scores. Like it or not, MBL offers a means of gaining attention. Attention = links, links = better scores, better scores = more money, blah blah blah. Digg, reddit, delicious, wikipedia, dmoz all had, or still have even, similar issues. It’s the downside of success on the net.

Thankfully for MBL at least, most users are just happy to stick the thing on their blog and leave it at that. They love the stat functionality, love the little people icons, love the little community and ‘blog love’ thing in general. I think it’s cool too, which is why I’ll continue to use it until something better comes along.
Overall, a storm in a teacup methinks. Could have been handled better, on all sides.

Update: Mybloglog reinstated Shoemoney

Using search engine query strings to optimise your content.

First off, a word of caution: this method could get you into trouble, so watch yourself. A competitor could scream cheater! Fact is, it’s not cheating and it’s not cloaking either; it’s using the referer string in combination with the query string to deliver content.

Many long tail searches often land on pages that don’t really cut the mustard for the query. This is lose lose. You lose in terms of outputting a page that isn’t really relevant to what the user was looking for, and the user loses by having to hit the back button.

Let’s say, for example’s sake, that you have a high authority page that ranks for practically everything. You might have a sentence within your copy that matches what a user has entered into a search engine, yet your page isn’t really about what the sentence refers to. The sentence just happens to fit in amongst the context of three or four hundred other words, but doesn’t really apply to what the user is after. Net result: one disappointed user, and x KB of wasted bandwidth.

You could, therefore, offer these users an option by way of an optimised representation that caters for this deficit. I’ll keep it simple here and assume that you have a site or a blog that concentrates on a particular set of core products. Let’s assume that you run a small niche power tool website. You sell things like drills, sanders, planes and other related items. You blog daily on various products and methods and talk at length about all sorts of aspects relative to DIY or general maintenance.

The code below looks at the query string entered by the user at the referring search engine. It then checks that string against a selection of predefined words and delivers a message based upon those words.

“Welcome visitor from [referring search engine], your query contained the word [predefined word]; a page containing [predefined word] products from [query string] can be found here > [link to wherever]”

$queryurl = $_SERVER['HTTP_REFERER'];
$refer = parse_url($queryurl);
$refer = $refer['host'];
$message = '';
if(strstr($refer, 'yahoo')){
// Yahoo! carries the search terms in the "p" query string parameter
preg_match('/[?&]p=([^&]+)/', $queryurl, $match);
$qstring = str_replace('+', ' ', $match[1]);
if(stristr($qstring, 'drill')){
$optilink = "";
$message = "Welcome visitor from $refer, your query contained the word drill; a page containing drill related products can be found at this link <a href=\"$optilink\">$qstring</a>";
}
if(stristr($qstring, 'planes')){
$optilink = "";
$message = "Welcome visitor from $refer, your query contained the word planes; a page containing power plane products can be found at this link <a href=\"$optilink\">$qstring</a>";
}
if(stristr($qstring, 'sanders')){
$optilink = "";
$message = "Welcome visitor from $refer, your query contained the word sanders; a page containing power sander products can be found at this link <a href=\"$optilink\">$qstring</a>";
}
}

By adopting this approach you could deliver the message via a floating layer, or pop up window on exit. You could even output it at the start of the content and place it within a little paragraph.
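The last of those, outputting it at the start of the content, could be as simple as the sketch below. message_paragraph is a hypothetical helper and the class name is arbitrary:

```php
<?php
// Hypothetical helper: wrap a non empty welcome message in its own
// paragraph so it can sit at the top of the page content.
function message_paragraph($message) {
    if ($message == '') {
        return '';
    }
    return '<p class="search-welcome">' . $message . '</p>';
}

echo message_paragraph('Welcome visitor from search.yahoo.com');
// ...then continue with the rest of the content as normal
```

Style the class however you like; a pale background is usually enough to catch the eye without shouting.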

//continue with the rest of the content

The search engines would rather that they decide what pages to return based upon their calculations of relevancy.

The fact is that sometimes they do a pretty crap job at it and could do with a little help. Besides, we should be able to decide what we do with our visitors. It’s not for the search engines to dictate to us. My view is that, provided it’s related and adds value for the user, there is no real harm in giving them that little bit more. It’s not a cut and dried case of smoke and mirrors cloaking with sneaky redirects or any of that stuff; it’s just taking things one step further and deciding to help out a little.

If you consider that some websites have pages that change daily, if not hourly, then the reasons to employ such methods become even more apparent. How many times have you visited a page, only to find that what you were looking for was not there? I have, and in those cases I often had to go to the search engine’s cache to see what it was. That, or I had to embark on a site search at my destination to find what I was after. Methods such as the one proposed would reduce instances of those scenarios.

Using the HTTP Referer to personalise your pages

Ever been to one of those pages that said ‘hi visitor from domain name’ and wondered how they knew where you’d come from?

Well, for those of you who don’t (wonder that is) just um..go and read something else or make a cup of tea or something. For those of you who do, read on…
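As a taster, the bare bones of it: pull the host out of $_SERVER['HTTP_REFERER'] with parse_url and say hello. referer_greeting is a made up name for illustration:

```php
<?php
// Bare bones referer greeting. Returns an empty string when there is
// no referer, since direct visits don't send one.
function referer_greeting($referer) {
    if ($referer == '') {
        return '';
    }
    $parts = parse_url($referer);
    if (empty($parts['host'])) {
        return '';
    }
    return 'hi visitor from ' . $parts['host'];
}

echo referer_greeting('http://www.google.com/search?q=power+tools');
```

On a live page you would pass in $_SERVER['HTTP_REFERER'] (checking it’s set first), and remember that the header is user supplied, so escape it before output.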


Blackjack oops Blackyack, a profitable gamble on a typo?

Looking for blackjack? Sorry this post is about blackyack, agreed the y and the j are in pretty close proximity, so the typo is kind of understandable. If you want to have a gamble and lose your shirt, feel free to click the cards. 😀


So I was over at Daven’s blog and read about some partypoker affiliate program thing. Had a little look, and signed up for their affiliate program. Online gaming is massively competitive and not something that I’d bother my arse with, to be honest. The mountain’s just too bloody high, and I don’t have the gloves or the hiking boots to get up there.


Add value to your WP blog posts with digg dugg

The other day I added digg dugg to my list of plugins. It’s pretty cool, I like it. I was looking at a way of pulling in some related material to tag on to the bottom of my blog posts. I wanted to be able to plug in a keyword and pull posts from somewhere based upon whatever that word was. I looked at the digg dugg plugin and thought cool, that might well work.
I couldn’t see a way of adding db variables straight off the bat into the digg dugg function, so after a little bit of fannying around with single.php I came up with this approach. (I may have missed something and wasted the past hour or so doing this, but what the hey, it works, I’m happy.)

If you want to display upcoming digg posts using the first tag that you used for your blog post, here is how you can do it.
After installing and activating the digg dugg plugin, pull out single.php from your wordpress blog theme.

In between the comments_template() and the endwhile lines, place the code as shown.

<?php comments_template();
// Grab the text between two markers; used here to pull the first
// category name out of the category list markup
function get_string_between($string, $start, $end){
$string = " ".$string;
$ini = strpos($string, $start);
if ($ini == 0) return "";
$ini += strlen($start);
$len = strpos($string, $end, $ini) - $ini;
return substr($string, $ini, $len);
}
$mytags = get_the_category_list(', ', '');
$mystring = get_string_between($mytags, '">', '</a>');
echo "Top 5 upcoming $mystring related Digg Stories";
dd_diggdugg('', 5, 1, $mystring, 'upcoming');
endwhile; ?>

Save it, upload it. That’s it. Done.

Your posts will now fetch the top 5, or whatever number you wish to specify, upcoming Digg posts relative to the first tag you used on your original post.

Optional: you could also edit the digg dugg plugin, which incidentally has a bunch of cool features, and add the nofollow link option. In diggdugg.php just look for instances of a href type links and add rel=nofollow inside the tags.
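A crude but workable sketch of that edit, assuming the plugin writes its links as plain &lt;a href= markup (nofollow_links is a hypothetical name, not part of the plugin):

```php
<?php
// Adds rel="nofollow" to every anchor in a chunk of HTML, assuming the
// links are written as plain '<a href=' with no other attributes before
// the href. A hand edit of diggdugg.php achieves the same thing.
function nofollow_links($html) {
    return str_replace('<a href=', '<a rel="nofollow" href=', $html);
}

echo nofollow_links('<a href="http://digg.com/story">story</a>');
```

Bear in mind a plugin update will wipe hand edits, so keep a note of whatever change you make.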
