Google can't access non-existent robots.txt

@Shanna517

Posted in: #GoogleSearchConsole #RobotsTxt

I set up a website a few weeks ago and I'm trying to get Google to crawl it. When logging into Google's Search Console (Webmaster Tools) and within:

Crawl > Crawl Errors

It reports:


Google couldn't crawl your site because we were unable to access your
site's robots.txt file. More Info.


In the "More info" link, Google says I don't need a robots.txt file, so I'm not sure what I need to do to make the site indexed on Google.

Does this affect my site being indexed? How can I fix this issue?




2 Comments


 

@Angela700

After reading the questions and comments, I would suggest doing any of the following:


Create a robots.txt file with only one line in it, for example:

# it works
Or, if you don't want a robots.txt file at all, configure your server so that all requests for robots.txt result in an HTTP 410 status code, which tells clients the file is gone and should not be requested again.


If your server is Apache, you can add the following to a .htaccess file in your site's document root, or inside the Directory block for the document root in the main server configuration.

RewriteEngine On
RewriteRule ^robots\.txt$ - [R=410,NC,L]


This causes any request for robots.txt (regardless of letter casing, thanks to the NC flag) to return an HTTP 410 status code.

The backslash before the dot makes the dot match a literal character instead of acting as a regex metacharacter.

The advantage of having a plain robots.txt file, as opposed to none at all, is that your error logs won't fill up with failed requests for robots.txt.
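If the site runs on nginx rather than Apache, a roughly equivalent rule would look like the sketch below (assuming it is placed inside your server block):

```nginx
# Return 410 Gone for any request to /robots.txt
location = /robots.txt {
    return 410;
}
```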



 

@Si4351233

You do not need a robots.txt file for the site to enter Google's index.

Since Google checks every site for a robots.txt file, your site is returning a 404 error, which triggers the crawl-error notifications. You can simply ignore this error, or create an empty robots.txt so that the request returns a 200 OK status.
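To see that behavior locally, here's a small sketch using only Python's standard library (a quick sanity check, not a substitute for testing your real server): it writes an empty robots.txt into the current directory, serves that directory, and fetches the file the way a crawler would.

```python
import http.server
import threading
import urllib.request
from pathlib import Path

# Write an empty robots.txt into the directory we are about to serve
# (note: this creates a file in the current working directory)
Path("robots.txt").write_text("")

# Serve the current directory on an ephemeral port in a background thread
server = http.server.HTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch robots.txt as an HTTP client would
url = f"http://127.0.0.1:{server.server_address[1]}/robots.txt"
with urllib.request.urlopen(url) as resp:
    status = resp.status

server.shutdown()
print(status)  # → 200: an empty file still yields 200 OK
```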

Note that a 404 status is not an error implying your site needs fixing: for pages that do not exist, responding with a 404 means the server is working as intended.


