
Should I block search engines with robots.txt on my Facebook app?

@Sue5673885

Posted in: #Facebook #RobotsTxt #SearchEngines #WebCrawlers

There are a few subdomains on my site, such as my images and downloads subdomains, where I use robots.txt to block search engines from indexing their contents, as I don't want the direct URLs for that content to be indexed.

I do it like this:

User-agent: *
Disallow: /
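As a quick sanity check (my addition, using only the Python standard library), `urllib.robotparser` interprets robots.txt rules the same way compliant crawlers do, so you can confirm the blanket block covers any path on the subdomain:

```python
from urllib.robotparser import RobotFileParser

# Feed the blanket-block robots.txt from above directly to the parser
# instead of fetching it over HTTP.
rp = RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /"])

# Any path on the host is now off-limits to compliant crawlers.
print(rp.can_fetch("*", "https://facebook.mydomain.com/apps/app_name"))  # False
```

Note that robots.txt is read per host, so each subdomain you want blocked needs its own copy at its root.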


Now I have a new subdomain, 'facebook', which I'll use to host apps developed for Facebook. My question is: should I use the rules above to block this subdomain as well?

The app itself shouldn't be accessed through its direct URL (though I believe that still works; I haven't tested it). It should be used through the canvas URL, which is something like apps.facebook.com/app_name. I don't mind search engines indexing that URL; it's the right one to index. But I don't think it makes sense for them to index something like 'facebook.mydomain.com/apps/app_name'.

Should I block search engines with robots.txt for Facebook applications, or should I leave it alone? Is there any good reason to allow search engines to crawl it?


@Pope3001725

Block it. If there's no reason for anyone to reach that content by its direct URL, you should do your best to keep users from stumbling upon it. You never know whether direct access exposes a vulnerability or some other issue, so if nobody needs to access it that way, why leave the possibility open? If that content is designed to be accessed through Facebook, make sure that's the only way to get there.
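One way to enforce "Facebook is the only way in" at the application level (my addition, not something either poster described): Facebook canvas apps receive a `signed_request` POST parameter, HMAC-SHA256-signed with your app secret, and the app can reject any request that lacks a valid one. A minimal sketch; `parse_signed_request` and the example secret are hypothetical names:

```python
import base64
import hashlib
import hmac
import json


def parse_signed_request(signed_request, app_secret):
    """Validate and decode Facebook's signed_request parameter.

    Returns the decoded payload dict, or None if the request is
    missing, malformed, or carries an invalid signature.
    """
    try:
        encoded_sig, payload = signed_request.split(".", 1)
    except (AttributeError, ValueError):
        return None

    def b64url_decode(s):
        # Facebook strips '=' padding from its base64url strings.
        return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

    sig = b64url_decode(encoded_sig)
    expected = hmac.new(app_secret.encode(), payload.encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(b64url_decode(payload))
```

A request handler on the 'facebook' subdomain would call this with the app secret and return an error page when it gets None, so direct visits (which carry no valid `signed_request`) never see the app. This complements robots.txt rather than replacing it: robots.txt keeps crawlers away, while the signature check keeps humans and non-compliant bots out.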


