How does the Google crawler behave when an HTML page is 1.6 MB? Is it possible that the Google crawler doesn't fetch all of the content on a page when its size is 1.6 MB? How does Google treat big documents?
When someone searches for something, Google looks up the best matching information in its database. The crawler first crawls all the data, then indexes it, and the index is then matched against the query to display the results.
In general, Google establishes no page size limits. Only Google News pages should be no bigger than 256 kB.
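If you want to check how large a page's HTML actually is before worrying about it, you can fetch it and measure the body. A minimal sketch in Python, assuming the requests library is installed and using a hypothetical URL:

```python
import requests

# Hypothetical URL, used for illustration only.
URL = "https://example.com/big-page.html"

# Fetch the page roughly the way a crawler would and measure the raw HTML.
response = requests.get(URL, timeout=10)
size_kb = len(response.content) / 1024

print(f"HTML size: {size_kb:.1f} kB")

# 256 kB is the Google News guideline mentioned above.
if size_kb > 256:
    print("Larger than the 256 kB Google News guideline.")
```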
But the way a large page is built can introduce limitations that prevent proper crawling. Situations where the crawler simply gives up include, for example (one of them is measured in the sketch after this list):
exorbitant nesting of the DOM tree
too many links in the above-the-fold area
load time that is too slow (e.g. because too many assets are included in the head)
assets that Google considers important but that aren't crawlable (blocked by robots.txt, or unavailable from the server)
too many 404 errors coming from the page's assets (images etc.)
content that is loaded dynamically and whose visibility is triggered by a user action
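One obstacle from this list, deep DOM nesting, is easy to measure yourself. Here is a minimal sketch using only Python's standard library; note that no threshold for "exorbitant" is documented by Google, so the script just reports the number:

```python
from html.parser import HTMLParser

# Tags that HTML allows to be left unclosed; skipping them keeps the
# depth count from drifting on real-world markup.
VOID_TAGS = {"br", "img", "meta", "link", "input", "hr", "area", "base",
             "col", "embed", "source", "track", "wbr"}

class DepthMeter(HTMLParser):
    """Tracks the maximum nesting depth of the DOM tree."""
    def __init__(self):
        super().__init__()
        self.depth = 0
        self.max_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_TAGS:
            self.depth += 1
            self.max_depth = max(self.max_depth, self.depth)

    def handle_endtag(self, tag):
        if tag not in VOID_TAGS:
            self.depth = max(0, self.depth - 1)

meter = DepthMeter()
meter.feed("<html><body><div><ul><li><a>deep link</a></li></ul></div></body></html>")
print(f"Maximum DOM nesting depth: {meter.max_depth}")
```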
The page's size doesn't matter. What matters is whether the crawler has any obstacle in its way.
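For example, a quick way to rule out the robots.txt obstacle is to test the assets a page depends on against the site's robots file. A sketch with Python's standard library, using hypothetical URLs:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical site and asset, for illustration only.
rp = RobotFileParser("https://example.com/robots.txt")
rp.read()

asset = "https://example.com/assets/app.js"
if rp.can_fetch("Googlebot", asset):
    print(f"Googlebot may fetch: {asset}")
else:
    print(f"Blocked for Googlebot: {asset}")
```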