Duplicate content is a term often thrown around in the SEO world. With so many myths and misconceptions surrounding it, webmasters are easily left questioning whether their content will be penalised. Producing, or failing to recognise, duplicate content can be a dangerous move in your SEO game plan, and damaging to your website’s SERP ranking (i.e. how high up your web page sits in Google’s search results).
It’s estimated that up to 29% of active web pages are duplicate content, which doesn’t sit well with search engines, as it means they could be leading users to a copy rather than the original source. With that in mind, last week Google subtly announced that they can proactively recognise that something could be a duplicate, even before crawling it.
There are a number of reasons why, when producing content for your website, you should always aim for rich, relevant and unique content. This one now tops it all.
So what does it mean for you?
Producing duplicate content confuses Google’s robots, and those of other search engines, that crawl pages across the internet. They’re unable to work out which web page is the original source and, as a result, penalise both pages.
Penalties range from having your ranking affected and dragged down the results, to having your page removed entirely from a search engine’s index, to what Moz has described as things “worse than simply removing a page from its index”.
How to avoid duplicate content
First and foremost, you have to be original. Even if Nas rapped ‘no idea’s original’, you can still put your own spin on a story or product.
You’ve come up with a brilliant blog idea for your website and you have the keywords at the ready, but there’s just one problem: over one hundred different versions of your idea have already been published on other websites. Don’t let it dishearten you. Producing keyword-rich and relevant content will help you along the way on your journey to ranking highly in search results.
Another quick win is to ensure your URL structure is in order. If you have multiple URL variations of the same web page (e.g. .co.uk/coffee and .co.uk/coffee?colour=black), you need to nip that in the bud, as search engines treat these as two separate pages. If that feels out of your remit, we’re here. Drop us a message and we can talk about what we can do for you.
If you must have duplicate content
A canonical tag (rel="canonical") tells the robots that there is duplicate content and that it’s meant to be there. By acknowledging this with a simple bit of code in your web page’s head, you let search engine crawlers know that the content of this page is a copy, and you point them to the true source of the content, which is also the page you want ranking.
On the web page www.echowebsolutions.co.uk/gavin-thinks-he-knows-seo, we enter the following code in the <head>:
<link rel="canonical" href="https://www.echowebsolutions.co.uk/leahs-expert-blog-post-about-seo" />
Here, Leah’s blog is the master copy, and I’ve just been lazy and copied half of her content. But at least I told Google that.
A best practice when starting out (and a safeguard in case your web pages start producing additional parameters at the end of URLs) is to include this code on all of your web pages that do include duplicate content. To up your anti-duplicate-content plan tenfold, you can also start adding robots meta tags such as content="noindex, nofollow" to pages that you don’t want crawlers to touch; after a little bit of research, that is. By following this basic guide, you are well on your way to a healthier website.
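For illustration, here is a minimal sketch of what that robots meta tag looks like, placed on a hypothetical page you want kept out of search results entirely. It sits in the page’s <head>, just like the canonical tag:

```html
<head>
  <!-- Hypothetical example: tell crawlers not to index this page
       and not to follow any of the links on it -->
  <meta name="robots" content="noindex, nofollow">
</head>
```

Use this sparingly: broadly speaking, a noindexed page can’t rank at all, whereas a canonical tag still consolidates signals onto the master copy.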
However, we understand that the terminology and coding can build up and become convoluted quite quickly. Contact Echo today to improve your brand’s SEO, content and search engine rankings. We’re experts at this stuff and we make a great coffee.