Google has indexed our archives

When we first submitted our sitemap, we didn't take the indexing of archive pages into account. With 50,000 articles, we're seeing lots of archive pages in the results (e.g. "News (Page 40 of 876)").
Since then we've placed <meta name="robots" content="noindex"> in the head of our archives template and resubmitted the sitemap. Google is working through the sitemap slowly but surely, yet the archive pages are still showing up in results. It's been around a week.
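For reference, the change amounts to roughly this in the archive template (a minimal sketch; the title and the rest of the head are placeholders):

    <!-- head of the archive/pagination template -->
    <head>
      <meta charset="utf-8">
      <title>News (Page 40 of 876)</title>
      <!-- keep archive pages out of the index; links on them are still followed by default -->
      <meta name="robots" content="noindex">
    </head>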
Is there something else we should be doing, or is it just a waiting game?
2 Comments
It will be a matter of waiting for the pages to be recrawled and the noindex seen.
I would not block them with robots.txt: disallowing crawling is not a reliable way to remove pages that are already indexed, and the pages need to stay crawlable for the noindex tag to be seen.
See Google's FAQ entry "If I block Google from crawling a page using a robots.txt disallow directive, will it disappear from search results?":
developers.google.com/webmasters/control-crawl-index/docs/faq#h17
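To illustrate why, a robots.txt rule like the one below would work against you: Googlebot could never fetch the archive pages, so it would never see the noindex (the /news/page/ path is hypothetical):

    # What NOT to do while waiting for the noindex to take effect:
    # this blocks crawling of the archive pages entirely, so Googlebot
    # never sees the <meta name="robots" content="noindex"> tag on them.
    User-agent: *
    Disallow: /news/page/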