
How does Google penalize word-for-word copied content and "aggregation"?

@Alves908

Posted in: #DuplicateContent #Google

I found a similar answer here but it doesn't make sense to me because I found the following on the first page of Google:



I found two similar blurbs

These two blurbs are nearly identical, and when you look at both pages you find that the Princeton page is simply a copy of the first section of the Wikipedia page.

Why is Princeton's site ranking?

Following the logic laid out in the previous article, how is this possible?


Search engines need to penalize some instances of duplicate content that are designed to spam their search index such as:


scraper sites which copy content wholesale...



Doesn't Princeton's site fall under the "scraper sites" mentioned in the other explanation of how Google penalizes duplicate content?

Here are the two links:


Princeton Page
Wikipedia Page




1 Answer


@Ann8826881

The Princeton copy is not a full copy of the article, and it still falls within the parameters of Content Syndication described on the page you linked to: What is duplicate content and how can I avoid being penalized for it on my site?

It does give full credit and links back to the original as suggested in the Content Syndication section.

As to why the Princeton page ranks so high in the SERPs: most likely because Princeton enjoys significant rank and trust in general. The page does not appear to be spammy, and spam is what Google is primarily concerned with.

Is this page scraped? I doubt it. It was most likely a manual copy-and-paste rather than automated scraping, which is what "scraper sites" refers to.


