Fetch as Google on a development environment
We are working on a new version of a website. The current version is indexed by Google and everything looks fine. The new version will have more or less the same content and links, but we are changing the design slightly and implementing a new architecture (e.g. using AngularJS). We want to test the new version, which is still under development, to see how Google will fetch it. We added a robots.txt file with 'Disallow: /' because we don't want Google to index our development server.
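As a side note, a blanket-disallow robots.txt like the one described can be sanity-checked locally with Python's built-in urllib.robotparser before deploying it (the host dev.example.com below is just a placeholder, not from the original post):

```python
from urllib.robotparser import RobotFileParser

# The development server's robots.txt: block every crawler.
robots_txt = """\
User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# No user agent, including Googlebot, is allowed to fetch any URL:
print(parser.can_fetch("Googlebot", "https://dev.example.com/"))  # False
```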
Is it safe or recommended to use the "Fetch as Google" tool before the release of the new version?
PS: I think this post suggests waiting until the release:
Manual "Fetch as Google" or better wait? But then, how could we test that Google will see all the content correctly?
There are two possibilities: use the noindex meta tag or use the X-Robots-Tag HTTP header. We used both.
developers.google.com/webmasters/control-crawl-index/docs/robots_meta_tag?hl=en
Our solution was to change the robots.txt to allow only Google to crawl our pages:
eg.
User-agent: *
Disallow: /
User-agent: Googlebot
Allow: /
but disallow Google from indexing them by sending the header X-Robots-Tag: noindex and also adding the meta tag, e.g. <meta name="googlebot" content="noindex" />