Rate Limiting with nginx
This article explains how to use the nginx ngx_http_limit_req_module (formerly known as HttpLimitReqModule) to limit the number of requests for a given session. This is useful, for example, if your site is hammered by a bot making multiple requests per second and thus increasing your server load. With the ngx_http_limit_req_module, you can define a rate limit, and visitors who exceed it receive a 503 (Service Unavailable) error.
1 Using the HttpLimitReqModule (ngx_http_limit_req_module)
Open your nginx.conf...
nano /etc/nginx/nginx.conf
... and define an area where the session states are stored - this must go inside the http {} container:
http {
    [...]
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
    [...]
}
This zone is called one and is allocated 10MB of storage. Instead of the variable $remote_addr, we use the variable $binary_remote_addr, which reduces the size of the state to 64 bytes. There can be about 16,000 states in a 1MB zone, so 10MB allows for about 160,000 states, which should be enough for your visitors. The rate is limited to one request per second. Please note that you must use integer values here, so if you'd like to set the limit to half a request per second, you'd use 30r/m (30 requests per minute) instead.
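The sizing claim above is easy to verify with back-of-the-envelope arithmetic (assuming the 64 bytes per state that applies to $binary_remote_addr with IPv4 addresses; IPv6 states are larger):

```python
# Rough capacity check for the limit_req_zone defined above, assuming
# each state occupies 64 bytes (the $binary_remote_addr / IPv4 case).
STATE_BYTES = 64
MB = 1024 * 1024

states_per_mb = MB // STATE_BYTES          # capacity of a 1MB zone
states_in_zone = 10 * MB // STATE_BYTES    # capacity of zone=one:10m

print(states_per_mb)    # 16384  -> "about 16,000 states in a 1MB zone"
print(states_in_zone)   # 163840 -> roughly 160,000 states
```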
To put this limit to work, we use the limit_req directive. You can use this directive in http {}, server {}, and location {} containers, but in my opinion it is most useful in location {} containers that pass requests to your app servers (PHP-FPM, mongrel, etc.) because otherwise, if you load a single page with lots of images, CSS, and JavaScript files, you would probably exceed the given rate limit with a single page request.
So let's put this in a location ~ \.php$ {} container:
[...]
        location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
                limit_req zone=one burst=5;
        }
[...]
limit_req zone=one burst=5; ties this rate limit to the session storage area we defined before (because of zone=one), which means the rate limit is 1r/s. You can think of the burst parameter as a kind of queue: if you exceed the rate limit, the following requests are delayed, and only if you have more requests waiting in the queue than specified in the burst parameter will you get a 503 error.
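The burst/nodelay behavior can be sketched as a small leaky-bucket model. This is an illustration of the semantics, not nginx's exact internal arithmetic (nginx tracks excess in milli-requests); the class and field names are my own:

```python
# Simplified model of nginx's limit_req leaky bucket, to show how
# burst and nodelay interact. Not the real implementation.
from dataclasses import dataclass

@dataclass
class LimitReqZone:
    rate: float            # allowed requests per second (e.g. 1r/s)
    burst: int             # extra requests allowed to wait in the queue
    nodelay: bool = False  # serve queued requests immediately instead of delaying
    excess: float = 0.0    # requests currently "ahead of schedule"
    last: float = 0.0      # timestamp of the previous request

    def handle(self, now: float) -> str:
        # The bucket leaks: credit the time elapsed since the last request.
        self.excess = max(0.0, self.excess - (now - self.last) * self.rate)
        self.last = now
        if self.excess > self.burst:
            return "503"   # queue is full: reject immediately
        status = "200 immediate" if self.excess == 0 or self.nodelay else "200 delayed"
        self.excess += 1   # this request takes a slot in the queue
        return status

# Seven requests arriving at once against rate=1r/s, burst=5:
zone = LimitReqZone(rate=1.0, burst=5)
print([zone.handle(0.0) for _ in range(7)])
# -> one served at once, five delayed (drained at 1r/s), the seventh gets a 503
```

With nodelay=True the same seven requests yield six immediate responses and one 503: the burst slots are still consumed, but the queued requests are not held back. Once enough time passes for the bucket to drain, requests are served immediately again.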
If you don't want to use this queue (i.e. deliver a 503 immediately if someone exceeds the rate limit), you must use the nodelay option:
[...]
        location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
                limit_req zone=one burst=5 nodelay;
        }
[...]
Don't forget to reload nginx to make your changes take effect:
service nginx reload
2 Links
- nginx: http://nginx.net/
- HttpLimitReqModule: http://nginx.org/en/docs/http/ngx_http_limit_req_module.html
About the Author
Falko Timme is the owner of Timme Hosting (ultra-fast nginx web hosting). He is the lead maintainer of HowtoForge (since 2005) and one of the core developers of ISPConfig (since 2000). He has also contributed to the O'Reilly book "Linux System Administration".