Rate Limiting Individual URL Requests
I'm developing a Flask application that relies heavily on interaction with external websites, initiated by the end user. If I leave the application without any sort of bandwidth control or rate limiting, it may be abused by actors with nefarious intentions.
My goal is a fairly simple two-stage approach:
Rate-limit individual source IPs from making more than x connections per minute. This can easily be achieved with iptables. Here's an example similar to my goal:
iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 15 --connlimit-mask 32 -j REJECT --reject-with tcp-reset
Rate-limit the application's ability to perform more than x lookups per destination URL. Example:
APP ---- 10 pps ---> stackexchange.com PERMIT
APP ---- 25 pps ---> google.com DENY / 15 SECOND BACKOFF
As far as I can tell, iptables has no way of tracking individual URLs; it can only rate-limit these connections as a whole. And that doesn't seem to be the only obstacle: even if iptables could be set up this way, it might cause issues for my web application, since these requests are user-initiated.
Since I'm using Flask, a viable option might be a before_request hook that manually tracks these destinations in a data store such as Redis. However, this feels pretty high up in the stack to be dealing with connections in this manner. What I really need (or think I need) is an intelligent firewall application that can dissect requests in a custom way and close connections when certain thresholds have been reached.
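To make the hook-plus-Redis idea concrete, here is a minimal sketch of a per-destination fixed-window limiter. The limits (10 lookups per 60 seconds) are made-up numbers, and a small in-memory class stands in for Redis, whose `INCR` followed by `EXPIRE` gives the same fixed-window counting semantics atomically:

```python
import time

class MemoryStore:
    """Stand-in for Redis: INCR with a TTL set when the window opens."""
    def __init__(self):
        self._data = {}  # key -> (count, window_expires_at)

    def incr_with_ttl(self, key, ttl):
        now = time.time()
        count, expires_at = self._data.get(key, (0, now + ttl))
        if now >= expires_at:                 # window elapsed: start fresh
            count, expires_at = 0, now + ttl
        count += 1
        self._data[key] = (count, expires_at)
        return count

def allow_request(store, host, limit=10, window=60):
    """Allow at most `limit` lookups per `window` seconds per destination host."""
    return store.incr_with_ttl(f"ratelimit:{host}", window) <= limit
```

In a `before_request` hook you would resolve the target host from the user's request and return an error (or back off) when `allow_request` comes back `False`; swapping the in-memory store for a Redis client makes the counters shared across workers.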
Is there any way to achieve what I’m trying to do?
If so, how?
iptables deals with the Internet and Transport layers (in the Internet model) or alternatively layers 3 and 4 in the OSI model, with a few exceptions (filtering on MAC addresses, NAT protocol helpers).
URIs are part of the Application layer. iptables doesn't deal with them.
You could use iptables to direct all your outgoing TCP port 80 traffic through a web proxy, which could do your rate limiting (Squid's delay pools might do it, for example, and Apache probably can with mod_proxy). Doing this with HTTPS is more difficult, though you may be able to configure your app to use the proxy explicitly, which would be a better approach than a transparent proxy anyway.
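To illustrate the Squid suggestion, here is a hedged sketch of a delay-pool configuration. Note that delay pools throttle bandwidth (bytes per second) rather than request rate, so they only approximate the per-destination limiting described above; the domain and byte figures are placeholder assumptions, not recommendations:

```
# squid.conf fragment (hypothetical numbers)
acl slow_sites dstdomain .google.com
delay_pools 1
delay_class 1 1                    # pool 1, class 1: a single aggregate bucket
delay_parameters 1 16000/16000     # refill 16 kB/s, bucket holds at most 16 kB
delay_access 1 allow slow_sites
delay_access 1 deny all
```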
But you really should move both of your rate limits into your application. The reason is that the UX you're setting up is terrible: "connection refused" does nothing to explain what is happening. It would be much better for your users if you instead served an error page explaining that they're making requests too fast, telling them whom to contact for support, perhaps offering a CAPTCHA to continue, and so on.
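A sketch of that friendlier response in Flask: instead of a raw connection reset, the app serves an explanatory HTTP 429 with a Retry-After header. The contact address and the hard-coded `over_limit` flag are placeholders; a real limiter would consult its data store at that point:

```python
from flask import Flask, abort, jsonify

app = Flask(__name__)

@app.errorhandler(429)
def rate_limited(exc):
    body = jsonify(
        error="You are making requests too quickly.",
        contact="support@example.com",   # placeholder address
    )
    # Retry-After tells well-behaved clients when to try again.
    return body, 429, {"Retry-After": "15"}

@app.route("/lookup")
def lookup():
    over_limit = True   # placeholder; check your rate-limit store here
    if over_limit:
        abort(429)
    return "ok"
```

`abort(429)` raises the Too Many Requests exception, which the registered error handler turns into the explanatory response.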
It's reasonable to have a connection rate limit on incoming connections, but its purpose should be to protect the app from falling over under a DoS attack, i.e., so many requests that your app can't even serve the rate-exceeded error page. It should therefore be set a fair bit higher than the point at which you start serving the error page (and it should perhaps be a global limit, not per source IP). Note that if your app runs behind a web server and/or reverse proxy, you can likely configure incoming rate limits there instead of via iptables, and do very cheap rejects by serving a static error page without ever passing the request to your app.
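As one way the reverse-proxy approach could look, here is a hedged nginx fragment using the `limit_req` module: requests over the limit are answered with a static 429 page and never reach the Flask app. The rates, paths, and upstream address are illustrative assumptions:

```
# nginx.conf fragment (illustrative numbers)
http {
    # One shared zone keyed by client IP: 10 requests/second on average.
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
    limit_req_status 429;

    server {
        listen 80;
        location / {
            # Allow short bursts of 20 before rejecting.
            limit_req zone=perip burst=20 nodelay;
            proxy_pass http://127.0.0.1:5000;   # the Flask app
        }
        error_page 429 /ratelimited.html;
        location = /ratelimited.html { root /var/www/errors; internal; }
    }
}
```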