Do you believe in the Chupacabra? If so, you aren’t alone. This mythical creature has captivated people all over North America despite there being very little evidence of its existence. Why are myths sometimes so widely accepted? In my opinion, there are two reasons: they are repeated over and over, and few people supply logical information disproving them. When information travels in this manner, myths are sure to flourish.
The SEO community is no stranger to myth spreading. So, in an attempt to put a few of the larger prevailing myths to rest, I will briefly discuss some that I run into often. If you want to argue for or against anything I have mentioned here, add your thoughts in the comments!
Google Understands “Quality”
I know what you are thinking: wait a second, “high quality” is what SEOs talk about all the time! Google must understand it! It would be awesome if SEO were that simple. Unfortunately, the truth is that quality is subjective: what I think is “high quality,” others may not.
A few nights ago, I had baked trout for dinner. I loved every bit of it, but some folks hate fish altogether and would never touch it. That is because we all have different tastes. Like “quality,” taste is subjective. Robots do not like seafood, but they don’t dislike it either; because robots can’t taste, they are indifferent to food. In the same regard, Google’s crawlers are indifferent to “quality.”
But just because quality is subjective doesn’t mean Google is ignoring it. Instead, Google’s algorithms use a handful of techniques, such as natural language processing, page segmentation, user behavior, inbound links, and machine learning, to make educated assumptions about the quality of content and links. Most of the time these assumptions are spot on, but every now and then it’s possible to find a great piece of content ranking very low simply because Google is making the wrong assumption about it.
It is these algorithmic assumptions that power updates like Panda and Penguin. So while Google doesn’t understand quality, it can make very good guesses at it.
When A Page Drops In Rankings, Google Is Penalizing It
When a page or domain loses substantial rankings or organic traffic there are typically two possible reasons: an algorithmic change has occurred or a manual penalty has been applied.
Algorithmic changes in the rankings are constantly happening. Nine times out of ten, changes to a page’s rankings are the result of an algorithmic change. The point of having algorithms is so Google can better manage the billions of web pages in its index without manual oversight.
However, sometimes manual oversight is needed when a website is in violation of Google’s guidelines. In this case, Google will override its algorithms and apply a rankings decrease. Usually, Google will then notify the webmaster through Google Webmaster Tools, informing them of the change and why it occurred. If a webmaster has never violated Google’s guidelines, there should be no reason for Google to apply a penalty.
There Is A Secret Formula That Helps Pages Rank
Oftentimes I will hear clients ask what the correct keyword density is, or what the optimal outbound/inbound link ratio is. Unfortunately, there is no formula that works for all websites. This is in large part because each page is interpreted a bit differently. Google reportedly uses hundreds of different ranking factors, each with varying significance on an individual basis. This is why having a firm understanding of the fundamentals of SEO, and applying it to a much larger marketing agenda, is so important.
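For what it’s worth, the “keyword density” metric clients ask about is trivial to compute, which is part of why the myth persists: a single number feels like a formula. Here is a minimal sketch of how it is usually calculated (the function name and sample text are my own, purely for illustration; this is not a metric Google has ever endorsed):

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Fraction of words in `text` that exactly match `keyword` (case-insensitive)."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for word in words if word == keyword.lower())
    return hits / len(words)

sample = "SEO myths spread because SEO advice is repeated without evidence."
print(f"{keyword_density(sample, 'SEO'):.1%}")  # 2 of 10 words -> 20.0%
```

The point of the myth-busting above is that no particular value of this number is “correct”: the same density can read as natural on one page and as keyword stuffing on another, depending on the hundreds of other signals in play.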
Links Are The Most Important Ranking Factor
Links are still very important to ranking web pages. However, their importance is slowly waning. With social media paving the way for new ranking metrics, links could take a “back seat” in the future. Smart SEOs are advising clients to start mixing their link building efforts with social engagement, branding, and other traditional marketing strategies.
The Algorithm Is Static
Google’s algorithm is constantly changing. We see the regular updates that Google announces on its blog, but small changes are also taking place all the time. If that weren’t enough, there is also increasing evidence suggesting that the algorithms are applied differently in different scenarios. For example, many are starting to believe that the Panda and Penguin filters are applied more aggressively to certain verticals, and many have seen the algorithm applied differently to certain types of queries, such as brand names. The algorithm is often more ambiguous than it is definable, which is why analysis is so important when trying to understand how and why a site is ranking the way it is.
There are many more myths surrounding Google’s algorithms. What are some that you have heard, or helped spread?