Should an Allow or Disallow directive be used in robots.txt to allow Googlebot to crawl the whole site?
User-agent: *
Disallow: /
User-agent: Googlebot
Allow: /
I use these directives in my robots.txt file, but I do not think this is right. What should the correct directives be?
Some articles I have found say not to do this in robots.txt:
#Code to not allow any search engines!
User-agent: *
Disallow: /
I have also found advice that Googlebot should not be blocked from JS and CSS files:
User-agent: Googlebot
Allow: /*.js*
Allow: /*.css*
Allow: /google/
So what is the right way to do this?
Googlebot should understand your Allow: directive, but that is not the standard way to allow crawling. The standard way to allow crawling is to disallow nothing. I'd use:
User-agent: *
Disallow: /
User-agent: Googlebot
Disallow:
This is documented in the "To allow all robots complete access" example on the official robots.txt site: www.robotstxt.org/robotstxt.html
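If you want to sanity-check how these rules are interpreted before publishing them, here is a minimal sketch of my own (not part of the original answer) using Python's standard urllib.robotparser; the example paths and the Bingbot user agent are just placeholders:

import urllib.robotparser

# The recommended rules: block everything by default, but give Googlebot
# its own group that disallows nothing.
rules = """\
User-agent: *
Disallow: /

User-agent: Googlebot
Disallow:
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# Googlebot matches its own group, which disallows nothing, so it may crawl.
print(parser.can_fetch("Googlebot", "/some/page.html"))  # True

# Any other crawler falls back to the wildcard group and is blocked.
print(parser.can_fetch("Bingbot", "/some/page.html"))    # False

Note that urllib.robotparser applies the plain robots.txt rules (prefix matching only), so it is only a rough check of the Disallow logic above, not a full emulation of Google's wildcard handling.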