Why would I use a robots.txt file?
From what I understand after reading Google's Controlling Crawling and Indexing documentation:
The purpose of a robots.txt file is to disallow crawling of some URLs, but those URLs can still be indexed (and appear in search results) if they are linked from crawlable pages.
To prevent a page from being indexed, I need to keep it crawlable and add a noindex meta tag to its head.
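For concreteness, here is a rough sketch of the two mechanisms as I understand them (the /private/ path is just a placeholder):

    # robots.txt - asks crawlers not to fetch matching URLs at all
    User-agent: *
    Disallow: /private/

    <!-- noindex meta tag in the page's head - the page must stay crawlable for this to be read -->
    <meta name="robots" content="noindex">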
So, why would I set up a robots.txt file if, in the end, it has no impact on whether pages appear in search results or not?
"in the end, it has no impact on whether pages appear in search results or not?"
You are wrong: if you prevent a page from being crawled, it will never show up in search results.
Let's suppose you have a thread page with good, rich content, followed by a comment section; the comment section runs to 10 pages of not very interesting content.
You would want to prevent Google from crawling those comment pages so that it spends its crawl budget on, and indexes, the threads that have the good content.
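As a rough sketch (the /thread/*/comments pattern is hypothetical; Google's crawler supports the * wildcard in robots.txt paths), that could look like:

    User-agent: *
    # keep the thread pages crawlable, skip the paginated comment pages
    Disallow: /thread/*/comments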
That's just one example; there are also other reasons, such as security.