Where does Google connect the AJAX-fetched content?

@Gretchen104

Posted in: #Ajax #CrawlableAjax #Seo

A lot has been written about how Google indexes AJAX, and I'm reading up. For instance, this experiment from Oct 2013 was insightful. It appears that Google does index content that is loaded in $(document).ready(...). Google does that without any special provisions on the webmaster's part and without a site map with #! URLs. Google connected the AJAX-fetched content to the page that fetches it.

The same experiment showed, unfortunately, that Google did not index AJAX-fetched content that was loaded from a $(...).click(...) event. That's where the site map with #! URLs is needed.
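To make the distinction concrete, here is a minimal jQuery sketch of the two cases (the URLs and element IDs are made up for illustration): the first fetch fires on page load and, per the experiment, got indexed; the second fires only on a click and did not.

    $(document).ready(function () {
        // Fetched on page load -- in the experiment, Google indexed this content.
        $('#onload-content').load('/fragments/intro.html');

        // Fetched only when the user clicks -- this content was not indexed.
        $('#more-link').click(function () {
            $('#click-content').load('/fragments/details.html');
        });
    });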

My question: where does Google connect the AJAX-fetched content from the site map? If a user finds what they are looking for in the AJAX-fetched content, where will Google direct that user? How does Google know which page loads a particular chunk of AJAX-fetched content?

Related:
SEO Friendly AJAX Example Site (ca Oct 2011)

Update:
After reading Stephen's answer, I wondered why everybody says the same thing and I still couldn't seem to get it. Then I read the following, and Stephen's answer began to make sense.
Google's FAQ on AJAX crawling #85: Best Practices with Dynamic Content (31 min video, May 2010)
It turned out that I had initially picked a less-than-ideal place to start figuring this out.


3 Comments


 

@Sent6035632

The first paragraph of your question is actually true and very important. However, Googlebot no longer seems to make AJAX calls. Nonetheless, Googlebot does execute JavaScript.

As the other answers say, you still need to consider what Google says about AJAX crawling. However, if you do it right, you no longer need to serve "HTML snapshots".
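One way to read "do it right" is to avoid #! entirely: load the content on page load (not behind a click) and use the History API for navigation, so a crawler that renders JavaScript sees real URLs and real content. This is only a rough sketch under those assumptions; the paths and selectors are made up.

    $(document).ready(function () {
        // Load the fragment that matches the current path on page load,
        // so a JavaScript-rendering crawler sees the same content a user does.
        $('#content').load('/fragments' + window.location.pathname + '.html');

        // Internal navigation updates the address bar with a real URL (no #!)
        // and swaps the content in place.
        $('a.internal').click(function (event) {
            event.preventDefault();
            var path = $(this).attr('href');
            history.pushState({}, '', path);
            $('#content').load('/fragments' + path + '.html');
        });
    });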



 

@Yeniel560

There are two slightly different concerns here: Googlebot parsing hashbang links and Googlebot executing JavaScript.

Originally, Googlebot did not parse JavaScript at all, so it would only work with the HTML returned by the server for each URL it requested. It would follow links given in the HTML but ignore the "anchor" part (anything after #), since that part is intended for in-page links.

Later on, when JS became much more common, they decided to add a sort of "workaround" for their lack of JS support. Any time they saw a link with an anchor beginning with #!, Googlebot would check a special URL (the one with _escaped_fragment) to find the content that would have been loaded on that page. At this point Googlebot still did not understand JS; it was making a special case for certain HTML. It would not know about anything loaded via JS onload.
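For reference, the mapping in that scheme was mechanical: Googlebot took everything after #! and resent it to the server as an _escaped_fragment_ query parameter. A simplified sketch of that rewrite (the function name is mine, and it ignores pages that already have a query string):

    // e.g. example.com/page#!section=2  ->  example.com/page?_escaped_fragment_=section%3D2
    function toEscapedFragmentUrl(url) {
        var parts = url.split('#!');
        if (parts.length < 2) {
            return url; // no hashbang, nothing to rewrite
        }
        return parts[0] + '?_escaped_fragment_=' + encodeURIComponent(parts[1]);
    }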

More recently, Google has confirmed that they now parse JS. However, as far as I know, Googlebot doesn't really go around "clicking" things (otherwise there is the potential to accidentally start spamming forms and the like). In short, anything that happens without interaction, such as on page load, is fair game.

Hope that clears it up.



 

@Heady270

For the #! style AJAX URLs you need to provide Googlebot with:

- A list of hash bang URLs -- these are the URLs that it will refer users to
- The corresponding chunk of content served at the _escaped_fragment equivalent (a sketch of serving this follows below)


For more information see developers.google.com/webmasters/ajax-crawling/
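On the serving side, the second bullet boils down to answering the _escaped_fragment_ request with pre-rendered HTML. A minimal sketch using Node/Express (the route, file names, and the renderSnapshot helper are assumptions for illustration, not something the crawling scheme prescribes):

    var express = require('express');
    var app = express();

    // Hypothetical helper: in practice this would pre-render the AJAX content.
    function renderSnapshot(fragment) {
        return '<html><body><h1>Products</h1><p>Snapshot for ' + fragment + '</p></body></html>';
    }

    app.get('/products', function (req, res) {
        var fragment = req.query._escaped_fragment_;
        if (fragment !== undefined) {
            // Googlebot asked for /products?_escaped_fragment_=...
            // so serve a static HTML snapshot of what the AJAX page would show.
            res.send(renderSnapshot(fragment));
        } else {
            // Normal visitors get the page that loads its content via AJAX and #!.
            res.sendFile(__dirname + '/products.html');
        }
    });

    app.listen(3000);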
