Is it expected behaviour that Google also crawls #! URLs? I'm a developer for an AJAX web app and, following Google's specification for crawlable web apps, we use #! to indicate that it's an AJAX application so that we can serve a static page to Google instead. This all works perfectly fine: Google fetches the ?_escaped_fragment_ URLs instead.
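For context, the mapping the crawling scheme defines is mechanical: everything after `#!` is percent-encoded and appended as the `_escaped_fragment_` query parameter. A minimal sketch of that translation (the function name and example URL are illustrative, not from the spec):

```python
from urllib.parse import quote


def escaped_fragment_url(url):
    """Map a #! URL to the ?_escaped_fragment_= form that Googlebot
    requests under the AJAX crawling scheme: the text after '#!' is
    percent-encoded and appended as a query parameter."""
    base, sep, fragment = url.partition('#!')
    if not sep:
        return url  # not a crawlable-AJAX URL; leave it untouched
    joiner = '&' if '?' in base else '?'
    return base + joiner + '_escaped_fragment_=' + quote(fragment, safe='')


print(escaped_fragment_url('https://example.com/app#!/profile?id=42'))
# https://example.com/app?_escaped_fragment_=%2Fprofile%3Fid%3D42
```

Your server would then answer requests containing `_escaped_fragment_` with the pre-rendered HTML snapshot.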
However, in the logs we found that, even though we follow this specification, Google also fetches the original AJAX pages, and in the process it generates script errors.
Is it expected behaviour that Google visits the AJAX URLs, even though it knows we have specially prepared pages for it? I can imagine that Google does this to train its AJAX crawler, but I cannot find any information about it.
Additionally, does this have any influence on our ranking?
To prevent misuse, the inner workings of Googlebot are kept secret, so there is no documented "expected behaviour". While searching the web I stumbled upon the following:
... search engines understand URL parameters but often ignore fragments.
This implies URL fragments might get crawled.
On another page I found a second hint. I always thought Googlebot didn't execute JavaScript, but apparently I'm wrong.
... we decided to try to understand pages by executing JavaScript.
So what this tells us is that Googlebot might fetch the AJAX pages, and as your front-end logs show, it does. The reason might be that it is checking whether your page is trustworthy by comparing what it gets back from the AJAX call against the HTML snapshot (via _escaped_fragment_). That's just a guess.
Sources:
support.google.com/webmasters/answer/81766?hl=en&ref_topic=6003039
googlewebmastercentral.blogspot.be/2014/05/understanding-web-pages-better.html