How does "Noindex:" in robots.txt work?

I ran across this article in my SEO news today. It seems to imply that you can use Noindex: directives in addition to the standard Disallow: directives in robots.txt:
Disallow: /page-one.html
Noindex: /page-two.html
It seems like this would prevent search engines from crawling page one, and prevent them from indexing page two.
Is this robots.txt directive supported by Google and other search engines? Does it work? Is it documented?
Here is what Google's John Mueller says about Noindex: in robots.txt:
We used to support the no-index directive in robots.txt
as an experimental feature.
But it's something that I wouldn't rely on.
And I don't think other search engines are using that at all.
deepcrawl.com has done some testing of the feature and discovered that:
It still works with Google
It does prevent URLs from appearing in the search index
URLs that have been noindexed in robots.txt are marked as such in Google Search Console
Given that Google calls the feature "experimental" and has not officially documented it, I wouldn't recommend using it. It sounds like even if it works today, that support could be removed at any time.
Instead, use the well-supported and documented robots meta tag to prevent indexing:
<meta name="robots" content="noindex" />
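One way to see that Noindex: is nonstandard: Python's standard-library robots.txt parser (like most robots.txt tooling) understands Disallow: but simply ignores unrecognized lines such as Noindex:. This is a minimal sketch using the hypothetical robots.txt from the question above:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt mirroring the question's example.
robots_txt = """\
User-agent: *
Disallow: /page-one.html
Noindex: /page-two.html
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Disallow: is part of the standard, so crawling /page-one.html is blocked.
print(parser.can_fetch("*", "https://example.com/page-one.html"))  # False

# Noindex: is not part of the standard; the parser silently ignores that
# line, so /page-two.html is still considered fetchable.
print(parser.can_fetch("*", "https://example.com/page-two.html"))  # True
```

This doesn't tell you what Google's crawler does internally, but it does show why relying on a directive outside the documented standard is risky: generic tooling won't honor it.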