There are multiple ways to do this (combining them is the most reliable way to keep the PDFs out of search results):
1) Use robots.txt to block the files from search engine crawlers:
User-agent: *
Disallow: /pdfs/ # Block the /pdfs/ directory.
Disallow: *.pdf # Block PDF files. Wildcards are non-standard but supported by the major search engines.
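If you want to sanity-check the rules, a quick way is Python's standard-library robots.txt parser (a minimal sketch; the example.com URLs are placeholders, and the stdlib parser does not understand the non-standard *.pdf wildcard, so only the directory rule is exercised here):

from urllib.robotparser import RobotFileParser

# Rules mirroring the robots.txt above (directory rule only).
rules = [
    "User-agent: *",
    "Disallow: /pdfs/",
]

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("*", "https://example.com/pdfs/report.pdf"))  # False - blocked
print(rp.can_fetch("*", "https://example.com/about.html"))       # True - not blocked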
2) Use rel="nofollow" on links to those PDFs:
<a href="something.pdf" rel="nofollow">Download PDF</a>
3) Use the X-Robots-Tag: noindex HTTP header to prevent crawlers from indexing them. On Apache, place this in your .htaccess file:
<FilesMatch "\.pdf$">
Header set X-Robots-Tag "noindex"
</FilesMatch>
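Once the rule is in place, you can verify that the header is actually served by making a HEAD request for one of the PDFs and inspecting the response (a minimal sketch; the URL is a placeholder for one of your own files):

import urllib.request

# Placeholder URL; substitute a PDF that your server actually hosts.
req = urllib.request.Request("https://example.com/pdfs/report.pdf", method="HEAD")
with urllib.request.urlopen(req) as resp:
    print(resp.headers.get("X-Robots-Tag"))  # expected: noindex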