Blogspot noindex search label

@Berumen354

Posted in: #Googlebot #GoogleIndex #GoogleSearch #RobotsTxt #Seo

The default robots.txt of Blogspot is:

User-agent: Mediapartners-Google
Disallow:
User-agent: *
Disallow: /search
Allow: /
Sitemap: http://castbird-sourcing.blogspot.com/feeds/posts/default?orderby=UPDATED

But when I search site:castbird-sourcing.blogspot.com on Google, the results show something like:

In order to show you the most relevant results, we have omitted some entries very similar to the 32 already displayed.
If you like, you can repeat the search with the omitted results included.


When I expand the result, I see something like:

castbird-sourcing.blogspot.com/search/label/gadget
A description for this result is not available because of this site's robots.txt – learn more.


My questions are:

1. Does this "very similar to the 32 already displayed" problem harm SEO in general?
2. Shouldn't Googlebot have already ignored everything under /search? Why is Google still indexing these pages?
3. How do I completely remove the indexed /search links from the Google results?


1 Answer


@Bethany197

Does this "very similar to the 32 already displayed" problem harm
SEO in general?


Google decides which page to show among those with similar content based on the user's query and other signals. For example, if a user searches for gadgets and you have a label page for gadgets, that would be a more appropriate result than specific post pages from your blog.

See this page.


Matt Cutts has said twice that you should not stress about it; in the
worst non-spammy case, Google may just ignore the duplicate content.
Matt said in the video, "I wouldn't stress about this unless the
content that you have duplicated is spammy or keyword stuffing."




Shouldn't Googlebot have already ignored everything under /search? Why is
Google still indexing these pages?


See this page.


While Google won't crawl or index the content of pages blocked by
robots.txt, we may still index the URLs if we find them on other pages
on the web. As a result, the URL of the page and, potentially, other
publicly available information such as anchor text in links to the
site, or the title from the Open Directory Project (www.dmoz.org), can
appear in Google search results.
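
That crawl-versus-index distinction can be checked locally. As a minimal sketch using Python's standard-library robotparser (example.blogspot.com is a placeholder host, not the actual blog), the default Blogspot rules block general crawlers from /search paths while leaving Mediapartners-Google unrestricted:

```python
from urllib import robotparser

# The default Blogspot robots.txt rules from the question.
RULES = """\
User-agent: Mediapartners-Google
Disallow:

User-agent: *
Disallow: /search
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(RULES.splitlines())

# Label-search pages are disallowed for ordinary crawlers...
print(rp.can_fetch("Googlebot", "http://example.blogspot.com/search/label/gadget"))  # False
# ...regular post pages are allowed...
print(rp.can_fetch("Googlebot", "http://example.blogspot.com/2014/01/some-post.html"))  # True
# ...and the AdSense crawler (Mediapartners-Google) may fetch everything.
print(rp.can_fetch("Mediapartners-Google", "http://example.blogspot.com/search/label/gadget"))  # True
```

Being disallowed only stops crawling; as the quoted documentation explains, Google can still index the bare URL it discovers through links, which is exactly the "A description for this result is not available" entry shown in the question.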




How do I completely remove the indexed /search links from the Google
results?


I'm not sure if it works, but you can try the method here. However, I would suggest that you leave the pages as they are; they won't cause any harm to your SEO. There are many Blogger blogs that perform well in SERPs even though Google has the label search pages in its index.
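
If you do want them deindexed, one commonly shared approach (sketched here as an assumption; verify the data tags against your template version) is to emit a robots noindex meta tag on search/label pages from the Blogger template. Note that Google can only honor noindex on a page it is allowed to crawl, so this only takes effect once the robots.txt Disallow on /search no longer blocks those URLs:

```html
<!-- Hypothetical Blogger template fragment: the data:blog.searchLabel tag is
     assumed to be set only on /search/label/... pages. -->
<b:if cond='data:blog.searchLabel'>
  <meta content='noindex,nofollow' name='robots'/>
</b:if>
```

Newer Blogger settings also offer "Custom robots header tags", which can apply noindex to archive and search pages without editing the template.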
