There has been a lot of buzz in the SEO community about how Google handles duplicate content. Talk of a duplicate content penalty for sites that scrape content has become common, but no one seems to be sure it exists.
Recently, Google released the guidelines it uses to address duplicate content, and the information is not far from what SEOs already knew. Still, having these points confirmed officially is valuable. Here is Google's position on duplicate content in simple terms.
What does Google say about duplicate content?
Google's recently released statement revealed its position on a number of issues surrounding duplicate content in its index. The following were some of the highlights:
- Using 301 permanent redirects to send users to a single page, or when moving content, is a correct and approved way to avoid duplicate content issues.
- Citations and quotes used in content are accepted by Google and are not treated as duplicates.
- It is best practice to prevent Google from indexing pages that are likely to contain duplicate content, such as print versions of a website's pages.
- When work is translated into different languages, it is not seen as duplicate content.
- Google filters duplicate content out of its results while ranking the authentic originals. This means there is not really a penalty for duplicate content; it is simply filtered from the rankings.
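The point about keeping print versions out of the index can be sketched in page markup. This is a minimal, hedged example; the filename is a placeholder, and the tag belongs in the page's <head>:

```html
<!-- print-version.html: ask search engines not to index this duplicate page,
     while still following the links it contains -->
<meta name="robots" content="noindex, follow">
```

A robots.txt Disallow rule is another common approach, but a robots meta tag like the one above is the directive Google documents for keeping an individual page out of its index.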
As any SEO knows, these facts are nothing new. However, another part of Google's declaration was contradictory: whereas Google is clear on how it deals with duplicate content, it remains non-committal on how perpetrators are punished, even though stealing content is clearly bad. Google tells webmasters not to worry about duplicate content, as it will not help scrapers or content spammers. If they copy your content and beat you in Google's rankings, you are free to file a DMCA request for that content to penalize the offending site.
The new developments at Google also showed that whereas there is no standard penalty, at its "discretion" Google can punish scrapers who it feels openly violate its guidelines.
The other point that came out of Google's stand on duplicate content is the need for SEOs to syndicate their content carefully. As long as there is a link to the original post, there is no problem. As such, webmasters need to know where their content is syndicated to avoid a duplicate content penalty. In practice, this is hard to enforce when spammers steal content and go unpunished: their copies can affect your website's status when they don't link back to you.
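Beyond a visible link back to the original, a syndication partner can also signal the original source to search engines with a canonical tag. A minimal sketch, assuming the original post lives at a placeholder URL:

```html
<!-- In the <head> of the syndicated copy: tell search engines which
     URL is the original, so ranking signals consolidate there -->
<link rel="canonical" href="https://example.com/original-post">
```

Google treats rel="canonical" as a hint rather than a directive, but it is the documented way to indicate a preferred URL, including across domains.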
Overall, the state of SEO depends on how duplicate content is handled, whether Google admits it or not. And in essence, content scraping seems to be working for content spammers. Until it is controlled, or at least penalized by Google in the future, all SEOs can do is wish content scraping away or focus on other factors that will boost their search engine results.