Is this "robots.txt" file really preventing all crawling of our website? I'm trying to find out why our SEO is so poor.

@Hamaas447

Posted in: #RobotsTxt #Seo

So I've been assigned to take a look at our SEO (an area I have some, but not amazing competence in), and the first thing I noticed is that our robots.txt file says the following:

# go away
User-agent: *
Disallow: /


Now, I'm pretty competent at reading computer code, and as far as I can tell, this says that ALL spiders shouldn't look at ANYTHING in the root directory or below.

Am I reading this correctly? Because that just seems insane.
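
For anyone who wants to double-check the reading, here is a minimal sketch using Python's standard urllib.robotparser; the user agents and URLs are placeholders, not our real site:

from urllib import robotparser

# Feed the exact rules from our robots.txt into the standard-library parser.
rules = robotparser.RobotFileParser()
rules.parse([
    "User-agent: *",
    "Disallow: /",
])

# Both calls print False: every path is off limits to every crawler.
print(rules.can_fetch("Googlebot", "https://www.example.com/"))
print(rules.can_fetch("*", "https://www.example.com/any/page.html"))

To test the live file instead, rules.set_url("https://www.example.com/robots.txt") followed by rules.read() fetches and parses it over HTTP.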


2 Comments


 

@Smith883

I've put this sort of robots.txt in place when first developing a site because I don't want it to be indexed by Google and others before it's ready.

I've also forgotten to edit that after the site has gone live. *facepalm*



 

@Lee4591628

Maybe someone didn't want to pay for spider traffic?

Regardless, you are reading it correctly:
www.robotstxt.org/robotstxt.html

Web site owners use the /robots.txt file to give instructions about
their site to web robots; this is called The Robots Exclusion
Protocol. It works like this: a robot wants to visit a Web site
URL, say www.example.com/welcome.html. Before it does so, it
first checks for www.example.com/robots.txt, and finds:


User-agent: *
Disallow: /



The "User-agent: *" means this section applies to all robots. The
"Disallow: /" tells the robot that it should not visit any pages on
the site.
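
If the goal is to let search engines back in, the usual fix (also shown in the robotstxt.org examples) is an empty Disallow, which gives all robots complete access:


User-agent: *
Disallow:


Deleting the file entirely has the same effect, since crawlers treat a missing robots.txt as "no restrictions".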
