
Disallow, canonical, or noindex for a coupon ID parameter where Google has already found some duplicate content?

@Moriarity557

Posted in: #CanonicalUrl #Googlebot #GoogleSearch #Seo

We changed our software and domain a week ago and completed the 301 redirection successfully. However, we're still running into a lot of problems, bugs, and things we didn't think about.

Our website is a coupon/campaign publishing site. The URL structure is domain/page?coupon_id=[NUMBER]. The coupon_id query parameter adds a small piece of content to the page, but the rest of the page stays the same.
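To make the duplication concrete, two coupon URLs on the same page would look like this (example.com and the IDs are placeholders, not our real URLs):

    https://example.com/page?coupon_id=101   -> page content + coupon 101's snippet
    https://example.com/page?coupon_id=102   -> same page content + coupon 102's snippet
    https://example.com/page                 -> the base page without any coupon snippet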



At the beginning we forgot to add a disallow rule for the coupon_id query to robots.txt. Even though the pages have a canonical tag (oddly), Google indexed these pages with the coupon_id query.



When we figured out this mistake, we updated the disallow rules in robots.txt ASAP and added a rule disallowing the coupon_id query. We checked the canonical tag, and it exists on these pages. But Google had already crawled some of these pages before we did this.
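For reference, a disallow rule of this kind usually uses a wildcard pattern like the one below. This is a sketch; the exact pattern depends on whether coupon_id can also appear after other query parameters:

    # block crawling of any URL carrying the coupon_id parameter
    User-agent: *
    Disallow: /*?coupon_id=
    Disallow: /*&coupon_id=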

Questions


1. Adding a Disallow rule for the coupon_id query to robots.txt hasn't changed the results yet, and this is causing a big problem with our SERP rankings. Should I worry about this, or will waiting eventually solve the problem?
2. What if we keep the canonical tag and use a "noindex, follow" robots meta tag whenever the URL has the coupon_id query (sketched below)? Basically, whenever we add a new coupon there is new content, but it shows up as a duplicate of the page.
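For clarity, the setup described in question 2 would look roughly like this in the head of a page served with a coupon_id query (a sketch, assuming example.com/page is the canonical base URL):

    <!-- output only when the request URL contains ?coupon_id=... -->
    <meta name="robots" content="noindex, follow">
    <link rel="canonical" href="https://example.com/page">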





1 Comment


 

@Eichhorn148

You need to remove your robots.txt disallow rule. If Google could crawl those URLs and see the canonical, it would remove the query string version from the search results.

When the query string URLs are blocked by robots.txt, Google can't crawl them to find out that they have a canonical version. Google may continue to index blocked content indefinitely when it has external links.

Another problem with blocking them in robots.txt is that you don't get any link juice benefit from the inbound links to those URLs. Allowing Google to crawl them lets that PageRank flow into your site.
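Putting that together, the end state looks roughly like this (a sketch; keep any other rules you already have and just delete the coupon_id lines):

    # robots.txt -- coupon_id Disallow lines removed so Googlebot can crawl those URLs
    User-agent: *
    # Disallow: /*?coupon_id=   <- deleted
    # Disallow: /*&coupon_id=   <- deleted

    <!-- the canonical already on each ?coupon_id= page, now visible to Googlebot -->
    <link rel="canonical" href="https://example.com/page">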


