Does Google now factor the prominence of how a link is rendered into their PageRank algorithm?

@Heady270

Posted in: #Google #Googlebot #Pagerank

Google's John Mueller now says that Googlebot renders the pages it crawls to determine what content is on them. They do this because so many sites are now heavily JavaScript-centric, and rendering allows them to better understand AJAX content.

I was also thinking that they might be using this to combat a historical weakness in their PageRank algorithm. The original PageRank algorithm passed an equal amount of link juice to each of the links on a page. Later I saw some evidence that links higher on the page (in source code order) might pass more PageRank than lower links (say, in the footer). Now that they are rendering pages, they could use that rendering to pass more PageRank to links:


Above the fold
That are larger (take up more rendered pixels)
In places that are more likely to be clicked (heatmap)


It would allow them to easily ignore links that would otherwise be spammy:


Rendered off the page
Hidden with CSS


Rendering would allow Google to implement the "reasonable surfer" model of interlinked web pages. This would do a better job of identifying high quality content than the "random surfer" model on which the original PageRank algorithm was based.
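
For illustration, here is a minimal sketch (in TypeScript) of how one page's PageRank could be split across its outbound links under the two models. The prominence weights are invented for the example and do not reflect any published Google formula:

    // Hypothetical sketch: splitting one page's PageRank across its outbound
    // links under the random surfer model (equal shares) versus a reasonable
    // surfer model (shares weighted by how prominent/clickable a link is).
    // The prominence numbers are made up purely for illustration.

    interface Link {
      href: string;
      prominence: number; // higher = above the fold, larger, more likely to be clicked
    }

    const links: Link[] = [
      { href: "/featured-article", prominence: 5 },   // large link above the fold
      { href: "/related-post", prominence: 2 },       // mid-page link
      { href: "/privacy-policy", prominence: 0.5 },   // footer link
    ];

    const pageRankToPass = 1.0; // PageRank this page has available to distribute

    // Random surfer: every link gets an equal share.
    const randomShare = pageRankToPass / links.length;

    // Reasonable surfer: shares proportional to estimated click likelihood.
    const totalProminence = links.reduce((sum, l) => sum + l.prominence, 0);

    for (const link of links) {
      const reasonableShare = pageRankToPass * (link.prominence / totalProminence);
      console.log(
        `${link.href}: random=${randomShare.toFixed(3)}, reasonable=${reasonableShare.toFixed(3)}`
      );
    }

Under the equal-split model the footer link gets the same share as the featured link; under the weighted model it gets far less, which is the behaviour I am asking about.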

Is there any evidence that Google is now using rendered link prominence as a signal to determine how much PageRank to pass across each link on a page?


@Voss4911412

I suspect you are simply misunderstanding the "renders the pages" phrase.
Googlebot still views pages as raw text; it doesn't see pages as "text rendered into pixels".
Googlebot runs the javascript/css/whatever it finds to discover (aka "render") text that was previously not visible to it in the raw HTML source (i.e. because it was pulled in via AJAX calls).
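
As a concrete illustration (a minimal browser-side sketch, not anything from Google), consider a page whose raw HTML contains only an empty <div id="content">. The text a crawler should index exists in the DOM only after a script like this runs; the /api/article.json endpoint is hypothetical:

    // Sketch of the kind of page that forces Googlebot to execute js.
    // A text-only crawler fetching the raw HTML sees an empty container;
    // the article text only appears in the DOM after this script runs.
    async function loadArticle(): Promise<void> {
      const response = await fetch("/api/article.json"); // hypothetical AJAX endpoint
      const data: { body: string } = await response.json();

      // Until this line runs, the article text is nowhere in the page source.
      const container = document.getElementById("content");
      if (container) {
        container.innerHTML = data.body;
      }
    }

    loadArticle();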

You can experiment yourself using the Fetch as Googlebot tool: support.google.com/webmasters/answer/6066468?hl=en&ref_topic=6066464
If you browse with js disabled for a while, you'll notice many sites simply display nothing or garbage. Google wants to be able to index those sites properly and thus has no choice but to run at least some js.

Consider that it is very much in Google's financial interests to get people to present content as text to them. OCR and machine vision are expensive operations.

To address your comments:


I've seen evidence that Google is indexing text that is only produced by JavaScript.


Yes, they are definitely doing that.
But bear in mind that they admit they cannot run all js. If they were actually rendering their crawl data to pixels exactly like a browser, then there would be no limitations on what js they could run. Therefore they must be using algorithms that understand the visual impact of "most" js, but not all of it.


Also, Google's mobile algorithm and associated tool clearly show that they are looking at rendered pixels to make decisions that affect ranking. One warning from the mobile friendly tool is that two links are rendered too close together.


It must be possible to calculate the positions and sizes of all those elements without actually going through the process of converting positional element data to pixels.

The "mobile-friendly" metrics the smartphone googlebot appears to be using are ...


viewport configured
distance between touch elements
font size
flash usage
content size relative to viewport


These all seem easily calculated using just numbers as opposed to analysing a rendered bitmap.
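
As a sketch of that point, given the computed layout boxes of two tap targets (position and size in CSS pixels), the spacing check is plain arithmetic. The 8px minimum gap below is an illustrative assumption, not a published Google threshold:

    // Shortest gap between two axis-aligned layout boxes - no bitmap required,
    // just the positions and sizes the layout engine has already computed.
    interface LayoutBox {
      x: number;      // left edge, CSS px
      y: number;      // top edge, CSS px
      width: number;
      height: number;
    }

    function gapBetween(a: LayoutBox, b: LayoutBox): number {
      const dx = Math.max(0, b.x - (a.x + a.width), a.x - (b.x + b.width));
      const dy = Math.max(0, b.y - (a.y + a.height), a.y - (b.y + b.height));
      return Math.hypot(dx, dy);
    }

    const linkA: LayoutBox = { x: 10, y: 100, width: 80, height: 16 };
    const linkB: LayoutBox = { x: 10, y: 118, width: 80, height: 16 };

    const MIN_GAP_PX = 8; // illustrative threshold, not a documented value
    console.log(
      gapBetween(linkA, linkB) < MIN_GAP_PX
        ? "Tap targets too close together"
        : "Tap target spacing OK"
    );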


In addition, "Fetch as Google" in webmaster tools now has a "Fetch and Render" option in addition to the usual "Fetch" option. It will then show a screenshot of your page pixel for pixel as rendered by Googlebot plus another screenshot rendered as a browser user agent.


The tool has to be visual to be of any use to human webmasters. They state the main purpose of the tool is to discover if extra resources such as js and css are blocked for the googlebot. If the little thumbnail looks different to what you expect your site to look like, then they invite you to check your robots.txt to see if it is blocking resources from the "smartphone googlebot".
Just because the tool is visual doesn't mean normal googlebot activity operates in exactly the same way. They specifically call it a "simulation".

To summarise and paraphrase my understanding of your question:

"is there evidence of googlebot using these examples of ranking signals on the assumption that they are only available if googlebot is rendering to pixels which I assume it's now doing"

I cannot credibly give a definitive "no", as only the opposite could be proven if evidence were found and published. But I would address the assumptions and intent of your question with these points:

I think the assumption googlebot is rendering to pixels is unfounded and unnecessary.

Even if googlebot is rendering to pixels there is no indication they are being used for your suggested ranking signals.

Even if your suggested ranking signals are being used there might be no need for googlebot to render to pixels to detect them.

Even if someone proved your suggested ranking signals were being used that would not prove googlebot was rendering to pixels.

Some hopefully useful references to consider:
www.googlewebmastercentral.blogspot.ie/2014/05/understanding-web-pages-better.html
".....help our algorithms understand that the pages are optimized for mobile."
support.google.com/webmasters/answer/6066467?rd=1
"It’s a small-scale simulation of the real deal"

and

"fetch and render mode tells Googlebot to crawl and display your page as browsers would display it to your audience"
