Duplicate Content: What You Should Know for 2014
After years of debate, there is finally a definitive answer on how to deal with duplicate content. Previously, duplicate content was a hot-button issue, as SEOs and webmasters everywhere worried that their websites' rankings would be hit due to the presence of duplicate content. There were also complaints that copies of high-ranking websites sometimes outranked the originals, so that illicit website owners were essentially being rewarded for plagiarism.
So where have we come from with the duplicate content issue?
Gone are the days when website owners felt pressured to rewrite every morsel of repetitive content on their websites. This was an expensive and time-consuming way of dealing with the issue, but unfortunately it had to be done. Companies hired writing firms to rework their copy in order to benefit their search engine rankings, often to the detriment of conversion rates.
Furthermore, ecommerce sites were hit particularly hard by the SEO requirement of having unique content on every page, because ecommerce stores are often made up of thousands of products. When the difference between two product pages is simply the color or size of the product, did it really make logical sense to write different content for each of those pages?
Eliminating duplicate content is simply not logical
In Matt Cutts' latest briefing on Google's search engine algorithm updates, he acknowledged that up to 25 percent of the content online is duplicate content. While plagiarism may account for a small share of it, the vast majority of duplicate content is due to syndication and quotation.
Without syndication, much of the most important content on the web would probably never have been seen. Plugins such as Related Posts for content management systems and other upsell bars exist to help increase conversions rather than search engine rankings. The most important factors in website design and content optimization should be conversion rates and sales, not search engine rankings alone, so eliminating duplicate content simply doesn't make sense. According to Cutts, pages within one's own site should not be penalized for having duplicate content either.
How to deal with duplicate content
Today, SEOs and website owners no longer have to worry about duplicate content issues. Matt Cutts has announced that website owners now have a way to deal with duplicate content within a website: the rel=canonical tag, which indicates the primary page on which the content should appear. For blog owners, this will likely be the full page of a published article. For ecommerce site owners, this will likely be the main product page for a particular product. If you are trying to determine where the rel=canonical tag should point, consider which page the majority of visitors should land on when they view a piece of content that is duplicated in several places on the website.
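In practice, the canonical tag is a single line of HTML placed in the head of each duplicate or near-duplicate page, pointing at the preferred URL. A minimal sketch, using a hypothetical ecommerce store (the domain and product URLs below are invented for illustration):

```html
<!-- Placed in the <head> of each color/size variant page, e.g.
     https://www.example.com/products/widget?color=blue -->
<!-- The href points to the preferred (canonical) version of the content. -->
<link rel="canonical" href="https://www.example.com/products/widget" />
```

Search engines then consolidate ranking signals onto the canonical URL, so the color and size variants of a product page no longer compete with the main product page in the results.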
Cutts also stated that website owners will not be penalized for duplicate content unless they are using it to spam. This alone should eliminate website owners' concerns about stolen content outranking their own content, and about having to rewrite content simply for the sake of eliminating duplication.
How has your site fared in the latest round of Google algorithm updates? Let us know in the comments. If you need assistance with or have questions about dealing with the duplicate content on your website, get in touch with Infintech Designs today and let us review your website.