
Google is successfully indexing our JS pages. Should we stop providing _escaped_fragment_= snapshots?

@Debbie626

Posted in: #Seo

I've seen a few references on this forum to Google now crawling JS sites, but nothing definitive. Nor have I seen any updates to Google's recommendations. We can vouch for this: it's happening. The question is, what does it mean? Is it time to get off the snapshots train?

Background:
We have used Ember.js to generate our HTML for over a year. We pre-render snapshots of our pages and serve them to Google following the "_escaped_fragment_=" best practices. Sadly, our SEO has been rather poor.
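For context, the "_escaped_fragment_=" scheme works by URL substitution: when a crawler encounters a hash-bang URL, it requests an equivalent query-string URL instead, and the server is expected to answer with the pre-rendered snapshot. A minimal sketch of that mapping (the example.com URLs are placeholders, and this ignores the meta-fragment variant of the scheme):

```javascript
// Sketch of the URL rewrite described in Google's AJAX-crawling proposal.
// A crawler that sees a #! URL requests the _escaped_fragment_ form,
// and the server responds with a snapshot of the rendered page.
function toEscapedFragmentUrl(url) {
  const bang = url.indexOf("#!");
  if (bang === -1) return url; // no hash-bang, nothing to rewrite
  const base = url.slice(0, bang);
  const fragment = url.slice(bang + 2);
  const sep = base.includes("?") ? "&" : "?";
  // The fragment value is percent-encoded when moved into the query string.
  return base + sep + "_escaped_fragment_=" + encodeURIComponent(fragment);
}

console.log(toEscapedFragmentUrl("https://example.com/#!/donald-trump"));
// → https://example.com/?_escaped_fragment_=%2Fdonald-trump
```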

I've been digging into this recently and discovered that Google is indexing the live, JS version of our pages (even though we have been pointing it at our snapshots). So, should we stop pointing Google at our snapshots and rely on their crawling?
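For reference, if we did keep serving snapshots, the server-side routing is simple: requests carrying the _escaped_fragment_ parameter get the pre-rendered HTML, everything else gets the live Ember app. A hedged sketch, assuming an Express-style request/response shape (the snapshot store and app shell below are hypothetical stand-ins):

```javascript
// Hypothetical stand-ins for a real snapshot store and the live app shell:
const snapshotFor = (fragment) => `<html><!-- snapshot for ${fragment} --></html>`;
const liveAppShell = () => "<html><!-- Ember app shell --></html>";

// A crawler following the AJAX-crawling scheme adds this query parameter.
function wantsSnapshot(query) {
  return Object.prototype.hasOwnProperty.call(query, "_escaped_fragment_");
}

// Express-style handler: snapshot for crawlers, live JS app for everyone else.
function handleRequest(req, res) {
  if (wantsSnapshot(req.query)) {
    res.send(snapshotFor(req.query["_escaped_fragment_"]));
  } else {
    res.send(liveAppShell());
  }
}
```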

Case: The "Donald Trump" page on Countable

Here's the current version of the Donald Trump page on Countable - this is the version that Google is fetching properly and has indexed:

[Screenshot: current, JS-rendered version of the page]

Here's the outdated, cached snapshot version of the Donald Trump page on Countable (note the numerous differences):

[Screenshot: outdated pre-rendered snapshot of the page]
You can search Google for the rich snippet text from the current version of the page: "Donald Trump is a Republican candidate for President and business magnate, investor, and television personality"

"Fetch as Google" correctly renders the current version of the Donald Trump page on Countable.

Unfortunately (and this is really lame), the "Fetching" tab shows our "noscript" HTML, which is absolutely not what Google is fetching, rendering, or indexing.

So, to recap: do we ignore Google's advice (and the evidence of the Fetched HTML tab) in favor of the empirical evidence that Google is, in fact, successfully rendering our live JS?

Thanks in advance for your help.





1 Comment


@Welton855

Google has gotten really good at reading & processing JavaScript-based content for web-search. For the most part, if the files (JS, CSS, as well as any AJAX/JSON/JSONP responses) aren't blocked by robots.txt and can be crawled normally, we'll be able to render the pages like a browser would, and will use that for web-search. I suspect at some point we'll deprecate our recommendation to use the AJAX-crawling proposal (escaped-fragment/hash-bang-URLs), though we'll probably support crawling & indexing of that content for a longer time.
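The practical takeaway from the above: audit robots.txt so the app's JS, CSS, and API endpoints aren't disallowed, since Google can only render what it can fetch. As a rough illustration of why a blocked asset breaks rendering (this is a simplified prefix check, not a real robots.txt parser, and the rules and paths are made up):

```javascript
// Simplified robots.txt-style prefix check (ignores wildcards, Allow
// rules, and longest-match precedence) to illustrate why disallowed
// assets break Google's rendering. Rules and paths are hypothetical.
function isBlocked(path, disallowRules) {
  return disallowRules.some((rule) => rule !== "" && path.startsWith(rule));
}

const disallow = ["/assets/", "/api/"]; // a common, rendering-breaking mistake

console.log(isBlocked("/assets/app.js", disallow)); // true: Googlebot can't fetch the JS bundle
console.log(isBlocked("/donald-trump", disallow)); // false: the page itself is crawlable
```

If the JS bundle or the AJAX responses the page depends on fall under a Disallow rule, Googlebot gets the empty app shell instead of the rendered content.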

The "Fetch as Google" tool in Search Console has two modes - just the raw response (which will return the HTML of the page) or the rendered view (which will show a screenshot of the rendered version). Depending on what you're trying to diagnose, one or the other will make sense. At the moment, there's no HTML view of the rendered page (you could approximate this by using a Googlebot user-agent in your browser though).


