How similar must two pages be before one requires rel=canonical?
I've been reading up on:
<link rel="canonical" href="(original url)">
And one thing that I'm trying to figure out is the minimum percentage of duplication two pages must share before one requires the above tag. Any ideas?
In this Google page: support.google.com/webmasters/answer/66359?hl=en
Duplicate content generally refers to substantive blocks of content
within or across domains that either completely match other content or
are appreciably similar. Mostly, this is not deceptive in origin.
Examples of non-malicious duplicate content could include:
Discussion forums that can generate both regular and stripped-down pages targeted at mobile devices
Store items shown or linked via multiple distinct URLs
Printer-only versions of web pages
If your site contains multiple pages with largely identical content,
there are a number of ways you can indicate your preferred URL to
Google. (This is called "canonicalization".)
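To make that concrete, here is a minimal sketch of canonicalization for the "printer-only version" case from Google's list (all URLs here are hypothetical):

```html
<!-- On https://www.example.com/widgets/print (the printer-only duplicate) -->
<head>
  <title>Blue Widgets - Printable Version</title>
  <!-- Tell search engines that the regular page is the preferred URL -->
  <link rel="canonical" href="https://www.example.com/widgets">
</head>
```

The tag goes in the head of the duplicate page and points at the version you want indexed, so the search engine does not have to choose between the two itself.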
In this page: searchengineland.com/googles-matt-cutts-duplicate-content-wont-hurt-you-unless-it-is-spammy-167459
Duplicate content is a huge topic in the search engine optimization
(SEO) space; heck, we even have a category devoted to the topic. But
should we worry about it? Google’s head of search spam, Matt Cutts,
said he wouldn’t stress about it — that is, unless it is spammy
duplicate content.
In a video posted today, Matt Cutts answers, “How does required
duplicate content (terms and conditions, etc.) affect search?”
Matt Cutts said twice that you should not stress about it; in the worst non-spammy case, Google may just ignore the duplicate content.
Matt said in the video, “I wouldn’t stress about this unless the
content that you have duplicated is spammy or keyword stuffing.”
Google has said time and time again, duplicate content issues are
rarely a penalty. It is more about Google knowing which page they
should rank and which page they should not. Google doesn’t want to
show the same content to searchers for the same query; they do like to
diversify the results to their searchers.
From this page: moz.com/learn/seo/duplicate-content
The Three Biggest Issues with Duplicate Content
Search engines don't know which version(s) to include/exclude from their indices
Search engines don't know whether to direct the link metrics (trust, authority, anchor text, link juice, etc.) to one page, or
keep it separated between multiple versions
Search engines don't know which version(s) to rank for query results
Some background:
Duplicate content is no longer measured by looking at an entire page, and has not been since about 2008. Instead, semantic links are used: fragments of content between pages are matched using semantic links, which normally help tie content together by topic, expertise, author, citations, and so on. If a high number of semantic links is discovered between pages, the pages are marked as duplicate content. No one knows what that tolerance is; for example, how many semantic links per X amount of content would be considered duplicate. Since content is often quoted legitimately, I would assume the tolerance is somewhat high. The earmarks of duplicate content are that:
It is spammy in nature.
It adds no value.
It is designed to be deceptive.
I would say that any page that falls into one or more of these categories and contains a high number of semantic links would be worrisome at least.
Conclusion:
While it is not highly likely that a duplicate content penalty would be applied when some pages are a sub-set of another page's content, it may be best to use the canonical tag to point to the page that is best for SERP performance. That may not always be the page with the complete content, but rather a more succinct version of the content; that would be up to you. Also consider which of the similar or duplicate pages has authority and the highest-value backlink profile. Either way, the canonical tag is used to help the search engine know which specific page, out of several rather similar pages, is best for search. Assuming that you do use the canonical tag, consider that the pages linking to the original may lose some of their authority and link value; this is not clear. It is also not clear whether the value of the duplicate pages is transferred to the original page by a canonical tag. While Moz equates the canonical tag to a 301 redirect, how Google actually handles the canonical tag remains a mystery.
In that respect, no-one can really say how much is too much, however, I would venture to assume that a canonical tag is your best friend.
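As a side note, if you cannot edit a page's HTML at all (a PDF, for instance), Google also accepts the canonical hint as an HTTP response header instead of a <link> element. A sketch, with a hypothetical URL:

```
Link: <https://www.example.com/downloads/whitepaper.pdf>; rel="canonical"
```

The effect is the same as the in-page tag: it tells the search engine which URL should receive the indexing signals.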