What are best practices for SEO on an internal "results page"?

@Gloria169

Posted in: #Search #Seo

I have an online application which has an internal feature that returns blocks of text. You can think of this page almost like a search-results page. Depending on the input variables, different combinations of text will be returned.

In other words, this 'results' page does not necessarily contain unique text: different combinations of input variables may return very similar, or even identical, text output.

Is it best just to tell Google to ignore this page entirely? How does one tell Google that this is a dynamic page that may contain large amounts of repeated or very similar content?

Loads of sites have an internal "search" feature. How do SEOs handle the pages of returned search results?


2 Comments


@Barnes591

If you would like to block Google from crawling the CONTENT of your 'results' page, then go ahead and block the URL in robots.txt.
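For example, a minimal robots.txt rule (assuming, purely for illustration, that the results page lives under a /results path) might look like this:

    # robots.txt at the site root - /results is a hypothetical path
    User-agent: *
    Disallow: /results

Any URL whose path begins with /results would then be excluded from crawling by compliant bots.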

But this will not stop Google from indexing that 'results' page's URL. To do that, you must either add a noindex robots meta tag to the <head> of your 'results' page, or return an X-Robots-Tag header in the HTTP response for that URL. See this page from Google Developers - Robots meta tag and X-Robots-Tag HTTP header specifications
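A minimal sketch of the two options, using the directive syntax from that documentation:

    <!-- noindex robots meta tag, placed in the <head> of the results page -->
    <meta name="robots" content="noindex">

    # or the equivalent HTTP response header for the same URL
    HTTP/1.1 200 OK
    X-Robots-Tag: noindex

Either form tells Google not to include the page in its index, even though the page itself can still be crawled.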

NOTE: If you are using the noindex meta tag or the X-Robots-Tag header, then make sure you do not also block this page using robots.txt: if the page is blocked from crawling, Google will never see the noindex directive. See this page from Google - Block search indexing with meta tags

Further Reading:


"Robots.txt and Meta Robots - SEO Best Practices" on Moz.com (scroll down to the "SEO Best Practice" section)
"5 Googley SEO Terms – Do They Mean What You Think They Mean?" on SearchEngineWatch.com


@Eichhorn148

It's against Google's guidelines to let automatically generated content be indexed, and Google counts internal search results pages as automatically generated content.

Google Guidelines on Automatically generated content

Depending on your setup, one of the easiest ways to block Google from crawling your search results is to block the URLs in robots.txt.
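For instance, if the internal search results were served from a hypothetical /search path or a q= query parameter, the rules might look like this:

    # hypothetical paths - adjust to match the actual search result URLs
    User-agent: *
    Disallow: /search
    Disallow: /*?q=

Google supports the * wildcard in Disallow rules, so the second line would cover any URL whose query string begins with q=.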

Learn About Robots.txt with Interactive Examples
