How to make page content crawlable when it is requested via AJAX on page load

I would like the content that is pulled in on page load to be crawlable by Googlebot. How can I do that?
I checked in Google Webmaster Tools whether the Google crawler could see the content pulled in by the script, but sadly it cannot.
I know there are some ways to make AJAX content crawlable, such as the approach described at moz.com/blog/how-to-allow-google-to-crawl-ajax-content, which puts the query parameters in the URL after #!. But my query parameters are inside a script tag's src (JSONP), not in the page URL:
<script src= "https://script.google.com/macros/s/AKfycbwjTaclaKdzdr4IgCOM_PpIWJvxFSGkz2qLgrhfJ5fNYf09djI/exec?id=1rnLnui4IXBKZcsDH5zxFHS_rO4TpRPGQD9RY4C_OMAc&callback=pullDone"></script>
Demo page: radiansmile.github.io/web/Test/ajax_at_Initial.html
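For context, the script tag above loads a JSONP response that invokes the `pullDone` callback named in its `callback` parameter. A minimal sketch of what that callback might look like (the response shape `{ items: [...] }` is an assumption, not the real API of the Apps Script endpoint):

```javascript
// Hypothetical JSONP callback invoked by the <script> response above.
// Builds an HTML list from the returned data; the data shape is assumed.
function pullDone(data) {
  const html = (data.items || [])
    .map(item => "<li>" + item.title + "</li>")
    .join("");
  // In the browser this markup would then be injected into the page, e.g.:
  // document.getElementById("list").innerHTML = "<ul>" + html + "</ul>";
  return "<ul>" + html + "</ul>";
}
```

Because this markup only exists after the script executes, a crawler that does not run JavaScript sees an empty page.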
How could I solve this problem?
1 Comment
The solution is to serve the content as static HTML to search engines and as the normal JS-driven page to regular users, done in a way that is not considered cloaking: both versions must present exactly the same content. Keep two renderings of the same page, one JS-based and one plain HTML, and choose between them based on the requester's user agent. If the requester is a known crawler, serve the prerendered HTML snapshot; otherwise, serve the JS-based page. There are services that render a page to HTML, cache the snapshot, identify the user agent, and serve the right version accordingly; Prerender is one such service.
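The user-agent switch described above can be sketched as an Express-style middleware. This is a minimal illustration, not a production setup: the bot pattern is incomplete, and the snapshot directory path is hypothetical.

```javascript
// Sketch of dynamic rendering: crawlers get a prerendered HTML snapshot,
// everyone else falls through to the normal JS-driven page.
// The bot list below is illustrative, not exhaustive.
const BOT_PATTERN = /googlebot|bingbot|yandex|baiduspider|duckduckbot/i;

function isCrawler(userAgent) {
  return BOT_PATTERN.test(userAgent || "");
}

// Express-style middleware; the snapshot path is a hypothetical example.
function dynamicRender(req, res, next) {
  if (isCrawler(req.headers["user-agent"])) {
    // Serve a static, fully rendered snapshot of the requested page.
    res.sendFile("/var/www/snapshots" + req.path + ".html");
  } else {
    next(); // normal users get the regular JS-based page
  }
}
```

A prerender service does essentially this for you, plus rendering and caching the snapshots, so you don't have to maintain the HTML copies by hand.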