@Yeniel560: Is browser and bot whitelisting a practical approach?

Posted in: #Browsers #WebCrawlers

With blacklisting, it takes plenty of time to monitor events, uncover undesirable behavior, and then take corrective action. I would like to avoid that daily drudgery if possible. I'm thinking whitelisting would be the answer, but I'm unsure whether that is a wise approach, given its deny-all, allow-only-a-few nature. My fear is that eventually someone out there will be blocked unintentionally. Even so, whitelisting would also block plenty of undesired traffic to pay-per-use services such as the Google Custom Search API, as well as preserve bandwidth and my sanity.

I'm not running Apache, but I assume the idea would be the same. I would essentially be depending on the User-Agent identifier to determine who is allowed to visit.
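
For illustration, here is a minimal sketch of what that User-Agent check could look like in plain Python, independent of any particular server; the ALLOWED_PATTERNS list and the is_whitelisted() helper are hypothetical names used here, not part of any framework.

    import re

    # Hypothetical whitelist of User-Agent patterns to allow; anything that
    # doesn't match is refused -- the "deny all, allow only a few" approach.
    ALLOWED_PATTERNS = [
        re.compile(r"Mozilla/5\.0"),  # mainstream browsers identify this way
        re.compile(r"Googlebot"),     # Google's crawler
        re.compile(r"bingbot"),       # Bing's crawler
    ]

    def is_whitelisted(user_agent: str) -> bool:
        """Return True if the User-Agent matches any allowed pattern."""
        if not user_agent:
            return False  # refuse empty User-Agent strings outright
        return any(p.search(user_agent) for p in ALLOWED_PATTERNS)

    # The request handler would call this before serving content.
    print(is_whitelisted("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))  # True
    print(is_whitelisted("sqlmap/1.7"))                                 # False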

I've tried to account for accessibility, since some web browsers are geared toward users with disabilities, although I'm not aware of any specific ones at the moment.

I fully understand that I shouldn't depend on whitelisting alone to keep the site safe. Other protections still need to be in place: I intend to use a honeypot, a checkbox CAPTCHA, OWASP ESAPI, and a blacklist of previously known bad IP addresses.
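
As a rough, framework-agnostic sketch of the honeypot and IP-blacklist pieces (the "website" field name and the example addresses are made up, and a real blacklist would be loaded from storage):

    # Hypothetical checks applied to a form submission before it is processed.
    BLACKLISTED_IPS = {"198.51.100.23", "203.0.113.77"}  # example addresses only

    def passes_basic_checks(form: dict, client_ip: str) -> bool:
        """Reject blacklisted IPs and bots that fill the hidden honeypot field."""
        if client_ip in BLACKLISTED_IPS:
            return False
        # The "website" field is hidden via CSS; humans leave it empty, bots often fill it.
        if form.get("website", "").strip():
            return False
        return True

    print(passes_basic_checks({"name": "Alice", "website": ""}, "192.0.2.10"))              # True
    print(passes_basic_checks({"name": "Bot", "website": "spam.example"}, "198.51.100.23")) # False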


@Kristi941

If you are limiting access to a small set of users, then a whitelist is the safest route to take. It's easy to manage and explicitly defines who is allowed access. For example, I use a MAC address whitelist on my wireless router, since very few people actually need access to it and they are all known quantities.

If you are dealing with a large audience, like the whole world, then running a whitelist is no longer feasible. At that point it becomes more complex, depending on your business rules. If there are secure areas of your website, you need to make sure you have a user authentication system in place and that it follows best practices (e.g. not using MD5 hashes for passwords). You do your best to prevent XSS attacks, SQL injection, and so on; in other words, the site is built well.
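
To make the "not MD5" point concrete, here is a minimal sketch using Python's standard library to store salted, slow password hashes; the iteration count is illustrative, not a recommendation for any specific system.

    import hashlib, hmac, os

    def hash_password(password: str) -> tuple:
        """Return (salt, digest) using PBKDF2-HMAC-SHA256 rather than a bare MD5 hash."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        """Recompute the digest and compare in constant time."""
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, digest))  # True
    print(verify_password("wrong guess", salt, digest))                   # False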

After that, if you experience issues that go beyond those basic security concerns, you can start considering more drastic measures, with the understanding that there may be unintended and undesirable side effects. Bot blacklisting isn't necessarily a bad thing to do; no one in their right mind will surf the net pretending to be a hated bot. I block a bunch of known bots in my .htaccess file, as I feel it is safe enough to do without harming legitimate traffic. Something like an .htaccess-based firewall is also a safe way to keep bad actors out without affecting good users.
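
The actual .htaccess rules aren't shown here, but a rough equivalent of a User-Agent blacklist looks like this in Python; the bot names are placeholder examples, not a recommendation of what to block.

    import re

    # Placeholder patterns for bots you have decided to block (case-insensitive).
    BLOCKED_BOTS = re.compile(r"AhrefsBot|SemrushBot|MJ12bot", re.IGNORECASE)

    def is_blocked_bot(user_agent: str) -> bool:
        """Return True when the User-Agent matches a blacklisted bot pattern."""
        return bool(user_agent and BLOCKED_BOTS.search(user_agent))

    print(is_blocked_bot("Mozilla/5.0 (compatible; AhrefsBot/7.0)"))  # True
    print(is_blocked_bot("Mozilla/5.0 (X11; Linux x86_64)"))          # False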

Blocking browsers is pointless, since user agents can be spoofed and a browser can't harm you in and of itself.

Blocking IPs is where it gets hairy. If you're being hounded by one IP address, blocking it is probably OK; I have a few IPs of known scraper sites blocked. But keep in mind that attackers constantly change IPs, so blocking them isn't always an effective solution anyway. Blocking IP ranges is even more dangerous, as you can block out entire regions and groups of people, which usually comes with the unintended side effect of blocking legitimate users. That practice is usually best left to websites geared toward a certain area (usually a country), or to ecommerce sites that block high-risk regions (Asia, the Middle East, Africa). Otherwise it should always be a last resort, and even then it should be revisited to see if the block can be lifted.
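
As a sketch of blocking single IPs and whole ranges, Python's standard ipaddress module handles the CIDR arithmetic; the addresses and the blocked /24 below are invented examples.

    import ipaddress

    # Invented examples: individual scraper IPs plus one blocked CIDR range.
    BLOCKED_IPS = {ipaddress.ip_address("203.0.113.45")}
    BLOCKED_RANGES = [ipaddress.ip_network("198.51.100.0/24")]

    def is_blocked(ip_string: str) -> bool:
        """Return True if the client IP is blacklisted directly or falls inside a blocked range."""
        ip = ipaddress.ip_address(ip_string)
        return ip in BLOCKED_IPS or any(ip in net for net in BLOCKED_RANGES)

    print(is_blocked("203.0.113.45"))    # True  (exact match)
    print(is_blocked("198.51.100.200"))  # True  (inside the blocked /24)
    print(is_blocked("192.0.2.1"))       # False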
