If I have multiple links to the same site on a page, is it better to have only one passing link equity or all passing link equity?
SHORT QUESTION
If I have multiple links to the same site on a page, is it better to have only one passing link equity (and the rest rel="nofollow") or all of them passing link equity?
LONG QUESTION, WITH BACKGROUND INFO
I have a situation where I've got three sites that are all interlinked. There is a genuine reason for doing this; it's not a link scheme or pyramid. (The three sites are each for a different sub-company of the same parent company. They all work in the same sector but provide different services, i.e. one company provides consultancy, another implementation, and another technical support.)
So that each company can offer a full set of services, in several places on each site we refer to the other companies, e.g. "technical support is provided by our sister company Y", with the words "company Y" being a link. There are probably 3-4 links to each company on the index page (it's a long one-page website).
What I'm concerned about is it looking spammy in a search engine's eyes. From a user's point of view I'm not concerned, as it genuinely doesn't look spammy.
One of the companies is in a more competitive niche than the others, so I'd like all the link value to flow to that company. At the moment I only have the first link to that company passing link equity; all subsequent links are rel="nofollow".
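To illustrate, the markup on the page currently looks something like this (the URLs are placeholders, not the real sites):

```html
<!-- First mention of company Y: a normal link that passes equity -->
<p>Technical support is provided by our sister company
   <a href="https://www.company-y.example/">Company Y</a>.</p>

<!-- Every later mention of company Y on the same page: nofollow -->
<p>For ongoing maintenance,
   <a href="https://www.company-y.example/" rel="nofollow">Company Y</a>
   offers support plans.</p>
```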
Is this the best way to do this, or should I make all the links pass equity?
(Each site is on a different server, so there is no issue with linking between the same IPs.)
You have a few things going on here, some of which need a rethink or a new understanding. However, this is one time where one of my mini-tutorials will be helpful. Please be patient; it will make sense, I promise.
There is no problem linking between sites on the same IP address. That is just plain ol' SEO pucky, so watch where you step. Separate servers are not an issue either. Anyone who tells you otherwise has not followed what Google says, only SEO parrot sites. Google is very clear about how it handles these issues, and it is not as dramatic as you think.
Here is what happens.
First things first: Google is a registrar for a reason. It can acquire WHOIS information without being restricted by WHOIS privacy proxies. Be that as it may, Google has for a very long time been collecting information about all the sites it indexes, and it retains this information even if a site is deleted, transferred, etc.
Why do they do this?
Semantic clusters. I talk about semantics with respect to linguistics here a lot, but occasionally I have to talk about semantics in regard to fact linking and developing trust networks. The original idea behind trust networks has nothing to do with computer networks, though it applies perfectly. A trust network is a linking and communication matrix in which individual entities can absolutely trust each other. Google's PageRank is an example of an incomplete trust network, as evidenced in the original research paper.
Where one site links to another, value is passed. We know this, but how is value assessed? For example, using a simple PR trust model, we can say that Site A links to Site B and therefore Site A trusts Site B. Granted. However, since we are dealing with a networking scheme, it gets complicated quickly. Here is what I mean: Site B links to Site C, and therefore Site B trusts Site C. In a linking scheme, trust is passed. Does it make sense that Site A trusts Site C? Of course not. Something is missing. In a true trust network there is a mechanism by which one entity can absolutely trust another; on the web this mechanism does not exist except for HTTPS certificates. Hence the push for certificates by Google, and why there is value in moving a site to HTTPS. However, given that HTTP without a certificate is part of the design of the web, how does Google establish a trust model where Site A can trust Site C and pass value?
Again, keep in mind that the PR model passes link value from one site to another, and that once you calculate the value of a link to any site, the value of the target site changes, and thus the value it passes to other sites. PR is a recursive algorithm: it is calculated over and over until the calculations reach a point where little or no difference for any site can be achieved. For this, all link-value calculations must be taken from a link index, and the link index must remain static for a period. Fortunately, this process can be rather fast. Historically this was done quarterly. No one knows how often link values are calculated today, though I rather suspect more often than before. But here is an important take-away: link values are not immediate and can fluctuate easily based upon metrics way down the line. It is therefore impossible to determine a link's value, though some things can be counted upon.
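For reference, the formula from the original Brin and Page paper shows why the calculation has to iterate; every page's value depends on the values of the pages linking to it:

\[ PR(A) = (1 - d) + d \sum_{i=1}^{n} \frac{PR(T_i)}{C(T_i)} \]

where the T_i are the pages linking to A, C(T_i) is the number of outbound links on T_i, and d is the damping factor (0.85 in the paper). Since PR appears on both sides of the equation across the whole link graph, the values are recomputed repeatedly until they converge.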
Back to the trust model.
Since HTTP is the reality, and the right of any site, Google has had to deal with it. This is where semantic clusters come into play.
A semantic cluster is a linking scheme between entities that share a relationship. The relationship can be anything. In this case, Google is looking for sites that are related and working out how they are related. Without getting into the full list of how this is done, I will tell you that registration information and network information are used at a minimum. This includes the site owner along with other WHOIS data; the registrar; the host; the host network; the locale, both GeoIP and other locale data; data on the site itself including e-mail addresses; name, address, and phone number (NAP); limited schema mark-up; author information; templating (JavaScript, CSS, images, and any similar files); semantic linguistic similarity; etc. I could go on and on, but you get the point: Google will find the relationship between sites regardless of what anyone does. This means that the fact that your sites are on separate machines with separate IP addresses means nothing in this regard.
Next, on to spam detection.
Part of spam detection is the historical performance of any linked site, registrar, host, network, etc. This is why I always recommend using a well-trusted, high-quality registrar and host. If at any point a site is penalized, that is a knock on the trust metrics that can affect any related site. Keep in mind that a site that is not related to yours in any way you might imagine can still affect your site. Here are a few examples. One is a shared server: one site on the shared server can easily affect the other sites on it. This also extends to CDNs. Not long ago it was quite common for a site to begin using a CDN and instantly start being penalized, because of the IP address the sites shared.
Of course, there is much more to spam detection than this. One thing I will mention is links with link text of only one or two terms, in particular terms found to be used by spammers. Since this list shifts daily, there is a potential danger that cannot be seen. But this is just a side trip.
Given how sites are found to be related, how one site links to another is important; however, there are things you can do to communicate trust to Google. One is schema mark-up, or the older traditional NAP formats. Schema mark-up is more effective in this area because there is a level of trust in data passed using mark-up that Google has previously said is valuable. In this case, a parent company and the companies it owns can be related using contact information: addresses, phone numbers, personnel profiles, e-mail addresses, etc. Any linkage between sites that can be made in this regard should be made where it makes sense.
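As a sketch of what that can look like, schema.org's Organization type has parentOrganization (and its inverse, subOrganization) for exactly this relationship. All names, numbers, and URLs below are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Company Y",
  "url": "https://www.company-y.example/",
  "telephone": "+1-555-0100",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "1 Example Street",
    "addressLocality": "Example City",
    "addressCountry": "US"
  },
  "parentOrganization": {
    "@type": "Organization",
    "name": "Parent Holding Company",
    "url": "https://www.parent.example/"
  }
}
</script>
```

Placing matching mark-up on each of the three sites, with consistent NAP details, states the corporate relationship directly rather than leaving Google to infer it.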
Since it is reasonable to link sites that a series of companies own, it is also important that these sites link to each other appropriately. Following the model above, an About page and possibly a Contact page are perfect places to link between sites. These are special pages; Google examines them a bit differently. Given that, linking between sites on these pages signals the sites' relationship.
Separate from this, linking advice changes.
Google does not like sitewide links. There is appropriate push-back on this, because Google will even admit that this is a rule that is somewhat tolerated, though no one can tell you what the limits are. What is forgotten are citations. Links are often referred to as citations, but doing so adds confusion. A citation in the semantic world, and indeed within the search engine world, is a simple mention. This can be a name, a quotation, or another textual entity that indicates a reference to another entity. It is perfectly acceptable to replace sitewide links with a simple textual citation. This is especially powerful for replacing the forbidden sitewide header, footer, and sidebar links.
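For example, a sitewide footer link can become a plain mention (names and URLs are placeholders):

```html
<!-- Instead of a sitewide footer link to the sister site... -->
<!-- <a href="https://www.company-y.example/">Company Y</a> -->

<!-- ...a textual citation carries the reference without a link: -->
<footer>
  <p>Part of the Parent Holding Company group, together with
     Company Y and Company Z.</p>
</footer>
```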
Similarly, linking between sites within content can be tolerated to a point; however, excessive linking can be seen as manipulation. Can anyone tell you how many links are too many? No. Here, citations clearly do not replace links, so links between sites should be used sparingly. The question of using a follow versus a nofollow link becomes important. To completely avoid any notion of manipulation, nofollow can be a tool. Not using nofollow opens up the potential that a search engine may determine that you have linked between sites too much. It is perfectly reasonable that sites link to each other. It is prudent that they do.
So what model do you use?
The advice on this varies and should always be taken with a grain of salt. However, it may be advisable to link from the About or Contact page without a nofollow so that some value passes between the sites. It may also be advisable that links within content be nofollow to avoid any issues, with a caveat added at the end of this answer intended to temper this advice. You can of course create other links that signal relationships without a nofollow, such as from home page to home page, but do not do this sitewide. The more valuable links are within content. Part of the reason for this is that links in headers, footers, and sidebars are discounted. This does not mean you cannot link from these areas; however, fully semantic links within content are best.
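Putting that together, a hedged sketch of the pattern under those assumptions (placeholder URLs):

```html
<!-- About page: one followed link so some value passes between the sites -->
<p>Implementation work is carried out by our sister company
   <a href="https://www.company-y.example/">Company Y</a>.</p>

<!-- Repeated mentions within ordinary content: nofollow to avoid
     any appearance of manipulation -->
<p>Ask <a href="https://www.company-y.example/" rel="nofollow">Company Y</a>
   about a support contract.</p>
```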
You have another issue.
Linking just one or two terms is suspect and a strong signal of manipulation to search engines.
Links are an important semantic clue. In fact, links are one of the very strongest semantic clues there is, and not to take advantage of this is foolish. One or two terms do not make a strong semantic signal; they are weak and will not reach their full potential, if much at all. For this, a full or partial sentence where the link text contains a subject, predicate, and object, just as you remember from English class, is best. It answers the question "What about...?" If you link using the single term car, ask yourself, "What about cars?" Are you linking to the history of cars in the US, for example? That gives a stronger semantic signal that benefits the target page. Another important clue is the content surrounding the link. This helps search engines not only understand the link and the target page, but also whether the link is relevant.
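To make that concrete, compare the weak and strong versions of the same link (the URL is a placeholder):

```html
<!-- Weak: one term, answers nothing about the target -->
<a href="https://example.com/history-of-cars-us">car</a>

<!-- Stronger: the link text itself answers "What about cars?" -->
<a href="https://example.com/history-of-cars-us">the history of cars
in the United States</a>
```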
Lastly, the old PR model you see mentioned all over the place is invalid. For example, a PR6 page with two links does not pass PR3 on each link. There have always been variations within the model. PageRank caps limit the amount of PR any page can pass; this is an important part of the PR model that keeps a more natural curve. As part of this, each link is assessed and given a link-value metric from 0 to 0.9. This is where a fully semantic link comes well into play. Links with one or two terms are deemed low value. As well, links within headers, footers, and sidebars are discounted, with the exception of navigational links. Links placed higher within content are generally deemed more valuable than links placed lower, though links within important content segments are given higher value, and these can be anywhere within the content. Confusing? The PR cap combined with the link value is what is passed, which, following the example above, would be much less than PR3.
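As a purely illustrative calculation using this model's own assumptions (the cap of 4 and the 0.6 weight below are made-up numbers, not published figures): take a PR6 page whose passable value is capped at 4, with two outbound links and a short, low-value anchor weighted at 0.6:

\[ \text{value passed per link} = \frac{4}{2} \times 0.6 = 1.2 \]

which is far less than the naive PR3 per link that the old model predicts.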
So, given that, if there is important content with a high-semantic-value link to another site, it is possible that the link should be follow. But do this extremely sparingly, and only for the most important semantic links within the most important content that signals the importance of the target page. You do not want to do this on every page, just between the very most important content that signals the target of the link. As well, the link should contain as much semantic value as you can give it. Full sentences are okay; the sentence can be somewhat complex, but not Steinbeck complex.
I might have missed a point or two along the way. If I have, I will add them later as I think of them. However, this answer, as it stands, should give you enough actionable information to make good decisions and develop a strong strategy.