22 Feb 2012

SEO Worst Practices

SEOs love to talk and write about industry best practices, as well they should. Following established SEO best practices should always be the goal for anyone serving clients.

But what about those shady types on the fringes of the SEO community who advocate, hmmm, let’s call it a less reputable route? Not surprisingly, these SEOs are also typically the same ones who claim they can absolutely get you the #1 spot in the search engine results pages (SERPs). Guaranteed, no less! There’s a reason the old adage, “If it sounds too good to be true, it probably is,” remains so relevant in our lives.

If you run a business and are shopping for SEO services and consulting, and any of the following techniques come up (and please do ask!), you can be confident that the consultant in question is following well-known (in professional circles) SEO worst practices. Such efforts will likely backfire, causing your website to be penalized (lowered in rank) or, if excessively egregious, perhaps even purged from the index entirely. This applies to both Google and Bing.

Let’s take a look at a few top choices in SEO worst practices:


Keyword stuffing

If the grand plan for getting your site to #1 includes adding ~150 highly searched-for (but largely irrelevant) terms and phrases to the <meta> keywords tag, not only is this poor form in terms of webpage spam, it’s also a hopelessly obsolete technique. Stuffing the <meta> keywords tag was old 10 years ago. Because so many sites used this tag to try to increase their rank for targeted keywords by repeating those words countless times (or, alternatively, to expand the relevance of the page to keywords not otherwise used on the page), the search engines long ago stopped using the <meta> keywords tag as a signal of keyword relevance. Google has come right out and stated it does not use the <meta> keywords tag for ranking. Bing has taken a more nuanced position: the tag is not really used today, but Bing notes that hundreds of ranking factors are considered, and if the intentional spamming of this tag ever dies off from neglect, it might eventually become useful again. Maybe. But not today, not soon, and no promises beyond that.

Keyword stuffing occurs not only in the <meta> keywords tag, but also in <title> tags, <img> alt text, heading tags, anchor text, and sometimes even boldly in plain body text!

The search engines crawl web pages and see what is in the code. They see the text within the <body> tag as well as the page metadata. They see when a word is repeated to excess, and they can discount any attempted manipulation when assessing a page’s rank. Keyword stuffing is dumb, clumsy, ineffective, amateurish SEO.
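To make that concrete, below is a toy sketch in Python of the kind of trivial signal a crawler can compute from visible page text: the share of the copy consumed by each repeated term. The function names and the 15% threshold are invented for illustration; the real engines use far more sophisticated (and secret) signals.

import re
from collections import Counter

def keyword_density(text, min_len=4):
    # Share of the visible copy taken up by each word of 4+ letters.
    words = [w.lower() for w in re.findall(r"[a-zA-Z']+", text) if len(w) >= min_len]
    total = len(words) or 1
    return {w: n / total for w, n in Counter(words).items()}

def flag_stuffing(text, threshold=0.15):
    # Flag any single term making up more than ~15% of the copy.
    return {w: round(d, 2) for w, d in keyword_density(text).items() if d > threshold}

stuffed = "cheap flights cheap flights book cheap flights best cheap flights deals"
print(flag_stuffing(stuffed))   # {'cheap': 0.36, 'flights': 0.36}

If a dozen lines of script can spot the repetition, a search engine certainly can.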

Keyword stuffing caveat

If repeated terms are legitimately related to the business of the page, the search engines will understand that and account for it when hunting for web spam. For example, if you are an attorney who can help clients with income taxes, tax deductions, tax penalties from the IRS, interest on back taxes, business taxes, rules for excise taxes, and estate tax planning (you see the point), the repetition of the word “tax” is not spam; the phrases in question are normal word usage, not artificial padding for the sake of web spam.
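Notably, a crude density check like the toy sketch above happily flags this perfectly legitimate copy too (reusing flag_stuffing from the earlier sketch; the sample text is the attorney example):

legit = ("income taxes, tax deductions, tax penalties from the IRS, "
         "interest on back taxes, business taxes, rules for excise taxes, "
         "estate tax planning")
print(flag_stuffing(legit))   # {'taxes': 0.27} -- a false positive

Raw repetition counts alone cannot separate spam from normal usage.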

The difference between this sample usage and keyword stuffing is intent. The search engines spend a huge amount of time and resources trying to separate legitimate from illegitimate intent. If you are incorrectly identified as a spammer and your site suddenly tanks in the rankings (and it’s not due to larger algorithm changes like Google’s series of Panda updates), you can appeal the penalty with Google and/or Bing. Just note that if you were penalized for violating the official webmaster guidelines of Google or Bing, your entire site must be cleaned of all web spam techniques and republished before you apply for reconsideration.


Hidden content

Search engines don’t like seeing pages that have hidden content (content that’s crawlable in the code, but doesn’t show in the browser window). That’s considered to be the equivalent of telling the search engines, “Here, this extra bit of content is just for you.” They consider that to be a maliciously manipulative attempt to earn page ranking credit for material not actually shown in the page. As Martha Stewart might say, that’s not a good thing.

Webmasters use a multitude of easily detected techniques in the effort to hide content from display. They use code like <div style="display: none;"> to hide entire passages of body text. They use style attributes to make text the same color as the background, rendering it invisible, or configure the font size to be so small that it’s unreadable. There are many such “tricks” used to stuff extra text in a page. Of course, the search engines see all of this coding and can interpret that it’s intended to be hidden from view. If the intent of the usage is malicious, penalties can ensue.
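As an illustration, here is a rough Python sketch that pattern-matches inline style attributes for the common hiding tricks just described. The pattern list and function name are made up for the example, and it is only a toy: real crawlers evaluate the full CSS cascade, including external stylesheets, and can render the page to compare text color against the actual background.

import re

HIDING_PATTERNS = [
    r"display\s*:\s*none",
    r"visibility\s*:\s*hidden",
    r"font-size\s*:\s*0*[01]px",       # unreadably tiny text
    r"text-indent\s*:\s*-\d{3,}px",    # text pushed off-screen
]

def find_hidden_styles(html):
    # Collect every inline style attribute that matches a hiding pattern.
    hits = []
    for style in re.findall(r'style\s*=\s*"([^"]*)"', html, re.IGNORECASE):
        if any(re.search(p, style, re.IGNORECASE) for p in HIDING_PATTERNS):
            hits.append(style.strip())
    return hits

page = '<div style="display: none;">tax attorney tax help tax relief</div>'
print(find_hidden_styles(page))   # ['display: none;']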

Hidden content caveat

The use of <input type="hidden"> controls is not by itself suspicious, as some controls are not revealed in the default view of a page. The issue is always intent. If passages of text are hidden, especially if they contain keyword stuffing as mentioned earlier, that is what raises the red flags for search engines.


Cloaking

Cloaking is when the web server uses the identity of the user agent requesting a page to determine which version of the content is returned. For example, if an IE 6 user agent requests a page and then the Googlebot user agent requests the same page but receives different content, that indicates user agent filtering (cloaking) is occurring for the purpose of manipulating what the search engines see. The goal of malicious cloaking is always to artificially inflate the rank of the page the user sees. Cloaking can show users normal pages while serving search engine crawlers keyword-stuffed pages. Alternatively, it can serve the search engines nicely optimized, relevant pages while serving users junk sales pitches for illicit pharmaceuticals, porn, or other content that would not otherwise rank for that query.
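If you want a rough self-check for user agent cloaking on your own site, a sketch like the following fetches the same page as a browser and as a crawler and compares the responses (Python standard library only; the URL is a placeholder, and the user agent strings are just examples). Note that a byte-for-byte comparison is naive, since ads or timestamps can legitimately differ between requests, and it will not catch cloaking keyed to crawler IP addresses rather than user agent strings.

import urllib.request

def fetch(url, user_agent):
    # Request the page while identifying as the given user agent.
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

BROWSER_UA = "Mozilla/5.0 (Windows NT 6.1; Trident/5.0)"
CRAWLER_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

url = "http://www.example.com/"   # placeholder: point this at your own page

if fetch(url, BROWSER_UA) != fetch(url, CRAWLER_UA):
    print("Different content served per user agent -- investigate.")
else:
    print("Same content served to both user agents.")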

Cloaking caveat

Pages that filter for mobile browsers to show abbreviated or custom-formatted versions of the desktop page are not considered malicious cloaking. Again, it comes down to intent: is the web server attempting to manipulate the search rankings for a given page? Cloaking aimed at search engine user agents is a bad idea; the search engines will detect it and penalize a site employing that technique accordingly. Filtering user agents for mobile devices is not a problem, as long as the content stays similar, relevant to the search query, and useful to users.
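For contrast, benign user agent branching looks something like this minimal sketch (Flask, the route, and the template names are assumptions for illustration): the layout changes for small screens, but every visitor and every crawler gets the same underlying content.

from flask import Flask, render_template, request

app = Flask(__name__)

def load_article():
    # Placeholder for whatever produces the page's real content.
    return "The same copy is served to every visitor and every crawler."

@app.route("/")
def home():
    ua = request.headers.get("User-Agent", "").lower()
    # Only the template (layout) varies by device; the content does not.
    template = "home_mobile.html" if "mobile" in ua else "home.html"
    return render_template(template, article=load_article())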


Intent is key

All of these SEO worst practices are based on the intention to deceive, be it the search engine crawler or the human end user. As long as businesses create great sites that are of value to human visitors, have compelling content, and are easy to navigate (meaning they are also easy to crawl), they will be assessed accordingly. Those are the sites that earn backlinks and citations from other sites, and that is what is needed for white hat SEO.
