Dunderdale272: Crawl errors for PDF download links in my website

@Dunderdale272

Posted in: #GoogleSearchConsole #Indexing #Seo #WebCrawlers

We are building a local business classified website which has about 6,000 unique listings under a large number of categories and sub-categories.

For each listing there is a details page that includes a link from which users can download a PDF containing all the details and information about that particular listing.

When users click that button/link to request the download, the PDF file is downloaded on the same page, without opening in a new tab.

What we saw in Webmaster Tools shocked us: the server is returning a 503 error for every PDF link of each listing. You can find the attached image of the crawl errors in our Webmaster Tools dashboard.

Please help us resolve this issue.

Looking forward to your help.


1 Comment


 

@Sent6035632

Normally, you are supposed to block Googlebot if you don't want it to crawl those URLs.

You can do this by editing your robots.txt and blocking access to the PDF path. For example:

User-agent: *
Disallow: /listings/index/getpdf/


If you want Google to access and index the PDFs instead, you should let it fetch your files directly, without redirecting it or forcing a download. You can do that through .htaccess.
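As a minimal sketch (assuming the PDFs are served as static files by Apache and that mod_headers is enabled; the .pdf file pattern is an assumption and may need adjusting to how your getpdf URLs actually work), something like this in .htaccess would serve the PDFs inline instead of forcing a download:

<IfModule mod_headers.c>
    <FilesMatch "\.pdf$">
        # Serve PDFs inline so Googlebot and browsers can read them directly,
        # rather than forcing a download with "attachment".
        Header set Content-Disposition "inline"
    </FilesMatch>
</IfModule>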


