
Are duplicate pages that are never linked to bad for SEO?

@Eichhorn148

Posted in: #DuplicateContent #Seo

I'm using WordPress and I added some rewrite functionality so that certain posts that normally have a link like:
example.com/category/post-name

Now look like:
example.com/special/path/post-name

But if you visit either link, the same content is served. I'm working on fixing this, but I'm curious whether I should set up redirects or canonicals.
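(For illustration, the rewrite I added looks roughly like this; it's a simplified sketch based on the example URLs above, not my exact code:)

// Simplified sketch: serve /special/path/post-name for a post that
// WordPress would normally serve at /category/post-name.
add_action( 'init', function () {
    add_rewrite_rule(
        '^special/path/([^/]+)/?$',    // the new pretty URL
        'index.php?name=$matches[1]',  // internal query WordPress resolves
        'top'
    );
} );
// Rewrite rules are cached, so permalinks need to be flushed once
// (e.g. by re-saving the permalink settings) after adding a rule.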

On my website there are no references at all to the original link, only to the second one. My guess is that as long as no links point to the old URL, Google doesn't know about it and won't magically find it either, making 301s or canonicals more of a "just in case".

Is my thinking correct?


2 Comments


@Megan663

Internal duplication is rarely bad for SEO. These days Googlebot is very good at detecting duplication and handling it appropriately.

Yes, Googlebot is likely to find and crawl the duplicate URLs eventually. However, when Googlebot finds two URLs on the same site with the same content, it simply picks one to index. The one it chooses will be either the one it found first or the one with higher PageRank; in either case, that is likely to be the one you actually link to.

Google won't hand out any penalties for internal duplication. The worst that can happen is that Google occasionally indexes a page under a URL you would not prefer. It is also possible that Googlebot will waste bandwidth and crawl budget crawling duplicate sections of your site that won't get indexed.

Other answers correctly tell you how to fix the problem, but I wanted to give a realistic expectation about how "bad" it could be.

See also: What is duplicate content and how can I avoid being penalized for it on my site?


@Jennifer507

You would think your thinking is correct, but it actually isn't.

I have worked on many, many sites, and URLs that had no links pointing to them (or none we were aware of) have always managed to get indexed in Google.

Who knows where Google finds the links, but invariably it does. So this is something you should definitely fix.

If you can 301 redirect the duplicate pages to a single URL, that is the best fix. If you need the duplicate URL to stay live for whatever reason, set a canonical tag on the duplicate URL referencing the single preferred URL:

<link rel="canonical" href="http://example.com/special/path/post-name" />
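If you want WordPress itself to print that tag, one way (a sketch assuming WordPress 4.6+ and that the /special/path/ form, built from the post slug, is the preferred URL) is to filter the canonical URL WordPress outputs in wp_head():

// Sketch: point the canonical WordPress prints in wp_head() at the
// /special/path/ form of the URL. Assumes the post slug alone
// identifies the post, as in the example URLs in the question.
add_filter( 'get_canonical_url', function ( $canonical, $post ) {
    return home_url( '/special/path/' . $post->post_name . '/' );
}, 10, 2 );

And for the 301 option, a minimal WordPress sketch (again assuming every /category/post-name URL maps straight onto /special/path/post-name) might be:

// Sketch: permanently redirect the old /category/... URL to the
// rewritten /special/path/... URL before any content is rendered.
add_action( 'template_redirect', function () {
    $path = wp_parse_url( $_SERVER['REQUEST_URI'], PHP_URL_PATH );
    if ( preg_match( '#^/category/([^/]+)/?$#', $path, $m ) ) {
        wp_safe_redirect( home_url( '/special/path/' . $m[1] . '/' ), 301 );
        exit;
    }
} );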


If for some reason you cannot set a canonical tag, you can set the robots meta tag to noindex them.

In the head section of the page:

<meta name="robots" content="noindex, follow">


Or in the HTTP response headers:

HTTP/1.1 200 OK
Date: Tue, 25 May 2010 21:42:43 GMT
(…)
X-Robots-Tag: noindex
(…)
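In WordPress specifically (5.7 or later), the wp_robots filter is one way to emit that noindex only on the duplicate URLs; a rough sketch, reusing the /category/ path from the question:

// Sketch: add "noindex, follow" robots directives only when the
// request came in through the duplicate /category/... path.
add_filter( 'wp_robots', function ( $robots ) {
    $path = wp_parse_url( $_SERVER['REQUEST_URI'], PHP_URL_PATH );
    if ( preg_match( '#^/category/#', $path ) ) {
        $robots['noindex'] = true;
        $robots['follow']  = true;
    }
    return $robots;
} );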


And as a very last resort, if you cannot implement any of the above, you can block the duplicate URLs in your robots.txt file with something like:

User-agent: *
Disallow: /category/

Bear in mind that robots.txt only blocks crawling: URLs Google discovers through links elsewhere can still show up in the index, just without their content.
