How do I stop googlebot from guessing URL parameters
When the googlebot visits our sites, it seemingly tries to visit URLs that do not exist, resulting in exceptions being thrown. The URL looks like ourdomain/thepage.aspx/x/8/x/ etc., where x stands for no value in our application. It seems like the googlebot is guessing which parameters it can use, because these URLs are not linked from the page.
Is there any way to control this behaviour?
There is a lengthy article on how to fix crawl errors and on typical problems. Some of its hints may be of use to you (when to reply with a 301 HTTP status code, when with a 404, etc.).
You should double-check whether the attempt really comes from the googlebot, and is not an attempt by some external source to gain information by playing with the URL. Also check whether the request carries a referer, which would indicate it actually comes from an external link.
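To double-check whether a request really comes from Googlebot, the usual technique is a reverse DNS lookup on the requesting IP, a check that the hostname belongs to Google, and a forward lookup to confirm the hostname resolves back to the same IP. A minimal sketch in Python (the function names here are illustrative, not part of any existing API):

```python
import socket

def is_google_hostname(hostname):
    """True if a reverse-DNS hostname lies in Google's crawler domains."""
    return hostname.endswith(".googlebot.com") or hostname.endswith(".google.com")

def verify_googlebot(ip):
    """Verify a claimed Googlebot IP via reverse DNS plus forward confirmation."""
    try:
        # Reverse lookup: IP -> hostname
        hostname, _, _ = socket.gethostbyaddr(ip)
    except socket.herror:
        return False
    if not is_google_hostname(hostname):
        return False
    try:
        # Forward-confirm: hostname must resolve back to the same IP
        return ip in socket.gethostbyname_ex(hostname)[2]
    except socket.gaierror:
        return False
```

The forward confirmation matters because reverse DNS records can be spoofed; only Google can make the hostname resolve back to the original IP.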
This is not the same situation as infinite loops or the "endless" followable links one sometimes sees on result pages with pagination. But you should fix your code to handle the case of "crap" parametrization gracefully, returning a 404 instead of throwing an exception.
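If you additionally want to stop Googlebot from crawling those URL patterns at all, you can disallow them in robots.txt. A sketch, assuming the extra path segments only ever appear after the .aspx page name (adjust the pattern to your actual URL structure):

```
User-agent: Googlebot
Disallow: /thepage.aspx/
```

Note that robots.txt only stops crawling; it does not fix the underlying problem, so the parameter validation in your code is still needed for requests from other sources.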