Posted by Ellie Wilkinson
Welcome to the third installment of Next Level! In the previous Next Level blog post, we shared a workflow showing you how to take on your competitors using Moz tools. We're continuing the educational series with several new videos all about resolving duplicate content. Read on and level up!
Dealing with duplicate content can feel a bit like doing battle with your site's evil doppelgänger—confusing and tricky to defeat! But identifying and resolving duplicates is a necessary part of helping search engines decide on relevant results. In this short video, learn about how duplicate content happens, why it's important to fix, and a bit about how you can uncover it.
Now that you have a better idea of how to identify those dastardly duplicates, let's get rid of 'em once and for all. Watch this next video to review how to use Moz Analytics to find and fix duplicate content using three common solutions. (You'll need a Moz Pro subscription to use Moz Analytics. If you aren't yet a Moz Pro subscriber, you can always try out the tools with a 30-day free trial.)
Workflow summary
Here's a review of the three common solutions for conquering duplicate content:
- 301 redirect. Use Open Site Explorer to check which page has the higher Page Authority (PA), then set up a 301 redirect from the duplicate page to the original. This ensures the two pages no longer compete with one another in the search results. Wondering what a 301 redirect is and how to set one up? Read more about redirection here.
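If your site runs on Apache, for instance, a 301 can be a single line in your .htaccess file. Here's a minimal sketch; the paths and domain are hypothetical, so swap in your own:
# .htaccess: permanently (301) redirect the duplicate URL to the original
Redirect 301 /duplicate-page/ http://www.example.com/original-page/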
- Rel=canonical. A rel=canonical tag passes the same amount of ranking power as a 301 redirect, and there's a bonus: it often takes less development time to implement! Add this tag to the HTML head of a web page to tell search engines that it should be treated as a copy of the "canon," or original, page:
<head>
  <link rel="canonical" href="http://moz.com/blog/" />
</head>
If you're curious, you can read more about canonicalization here.
- noindex, follow. Add the values "noindex, follow" to the meta robots tag to tell search engines not to include the duplicate pages in their indexes, but still to crawl the links on those pages. This works especially well with paginated content, or if you have a system set up to tag or categorize content (as with a blog). Here's what it should look like:
<head>
  <meta name="robots" content="noindex, follow" />
</head>
If you're looking to block the Moz crawler, Rogerbot, you can use the robots.txt file instead; he's a good robot, and he'll obey! More about meta robots (and robots.txt) here.
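For example, here's a minimal robots.txt sketch that keeps Rogerbot (and only Rogerbot) out. The Disallow path here is just an illustration; point it at whatever you actually want him to skip:
# robots.txt: ask Moz's crawler to stay out of the whole site
User-agent: rogerbot
Disallow: /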
Can't get enough of duplicate content? Want to become a duplicate content connoisseur? This last video explains more about how Moz finds duplicates, if you're curious. And you can read even more over at the Moz Developer Blog.
We'd love to hear about your techniques for defeating duplicates! Chime in below in the comments.