Crawl errors for PDF download links on my website
We are building a local business classifieds website which has about 6000 unique listings under a huge number of categories and sub-categories.
For each listing there is a details page that includes a link from which users can download a PDF containing all the details and information about that particular listing.
When users click that button/link to request the download, the PDF file is downloaded on the same page without the download link opening in a new tab.
What we saw in Webmaster Tools shocked us: the server is returning a 503 error for all the PDF links of every listing. You can find the attached image of the crawl errors from our Webmaster Tools dashboard.
Please help us resolve this issue.
Looking forward to your help.
1 Comment
Usually you should block Googlebot if you don't want it to crawl those URLs.
You can do this simply by editing your robots.txt and blocking access to the PDF folder. For example:
User-agent: *
Disallow: /listings/index/getpdf/
If instead you want Google to access and index the PDFs, you should let Googlebot fetch the files directly, without redirecting it or forcing a download. You can do this through .htaccess.
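For example, here is a minimal .htaccess sketch, assuming the PDFs are static files served from a folder on your site (the path and module availability are assumptions, so adjust them to your setup). It sends the correct MIME type and serves the files inline rather than as a forced attachment, so Googlebot can fetch and index them:

# Ensure PDFs are served with the correct MIME type
<IfModule mod_mime.c>
    AddType application/pdf .pdf
</IfModule>

# Serve PDFs inline instead of forcing a download (requires mod_headers)
<IfModule mod_headers.c>
    <FilesMatch "\.pdf$">
        Header set Content-Disposition "inline"
    </FilesMatch>
</IfModule>

If the PDFs are generated dynamically by a script rather than served as static files, make sure that endpoint returns a 200 status with the file, not a redirect or an error, when Googlebot requests it.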