
Should an Allow or Disallow directive be used in robots.txt to allow Googlebot to crawl the whole site?

@Harper822

Posted in: #Googlebot #GoogleSearch #SearchEngines

User-agent: *
Disallow: /

User-agent: Googlebot
Allow: /


I use these directives in my robots.txt file, but I do not think this is right. What should the correct directives be?
Some articles I have found say not to do this in robots.txt:
#Code to not allow any search engines!
User-agent: *
Disallow: /


I have also found advice that Googlebot should be disallowed from everything except the JS and CSS files:

User-agent: Googlebot
Allow: /*.js*
Allow: /*.css*
Allow: /google/


So what is the right way to do this?


1 Comment


@BetL925

Googlebot should understand your Allow: directive, but that is not the standard way to allow crawling. The standard way to allow crawling is to disallow nothing. I'd use:

User-agent: *
Disallow: /

User-agent: Googlebot
Disallow:


This is documented in the "To allow all robots complete access" example on the official robots.txt site: www.robotstxt.org/robotstxt.html
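If you want to sanity-check the rules before deploying them, Python's built-in urllib.robotparser can parse a robots.txt and report whether a given user agent may fetch a URL. This is only a quick local sketch with a placeholder example.com URL, not a substitute for the robots.txt tester in Google Search Console:

from urllib.robotparser import RobotFileParser

# The recommended rules: block every crawler by default, but give
# Googlebot an empty Disallow, which means it may crawl everything.
ROBOTS_TXT = """\
User-agent: *
Disallow: /

User-agent: Googlebot
Disallow:
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Placeholder URL purely for illustration.
url = "https://example.com/some/page.html"

print(parser.can_fetch("Googlebot", url))     # True  -> Googlebot may crawl
print(parser.can_fetch("SomeOtherBot", url))  # False -> other bots are blocked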
