Why You Should Always Use Nginx With Microcaching
Everybody knows how hard it is to squeeze as much as possible out of your webserver(s). In my day job as a hosting engineer that means I fairly often get the same question: "Wow, cool website, but can it cope with serious traffic?"
The "normal" situation
A "normal" website running under Apache with mod_php should be able to serve 20 requests per second with ease, but what if you get something like 50 requests per second (not unusual for some websites, such as the websites of political parties)? The answer, in my opinion, is to drop Apache, because as it stands Apache just isn't cutting it anymore.
YES! Nginx!
In comes Nginx! So you set up your website on Nginx and run a quick loadtest (for instance 1000 requests with 200 concurrent users, e.g. with ApacheBench: ab -n 1000 -c 200), and you see that you don't get much more than with Apache. How come? It's quite simple: Nginx doesn't have a built-in PHP module, so you need a FastCGI processor to handle the PHP pages (I suggest php-fpm, as it is better than spawn-fcgi). So what should you use, you ask? Use microcaching!
What the hell is Microcaching?
What is microcaching? The idea is that you cache your pages for a very short time (for instance 1 second). When a user requests a page, the response is cached, so every following request within that second is served straight from the cache. With 100 users requesting within 5 seconds, only 1 in 20 users triggers a full page build (and with Nginx and a well-structured site that isn't a problem at all).
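The 1-in-20 arithmetic above is easy to check with a back-of-the-envelope simulation. This is just an illustration of the counting argument, using the numbers from the paragraph (100 requests spread evenly over 5 seconds, a 1-second cache):

```python
# Model of a 1-second microcache: a request is a MISS (and rebuilds
# the page via PHP) only if the cached copy is at least 1 second old.
CACHE_TTL = 1.0   # seconds the cached page stays fresh
REQUESTS = 100    # total requests in the loadtest
WINDOW = 5.0      # seconds over which the requests arrive, evenly spaced

cached_at = None  # timestamp of the last page rebuild
misses = 0
for i in range(REQUESTS):
    now = i * (WINDOW / REQUESTS)  # arrival time of request i
    if cached_at is None or now - cached_at >= CACHE_TTL:
        misses += 1                # cache expired: PHP rebuilds the page
        cached_at = now
    # else: served straight from the cache, PHP never runs

print(misses, REQUESTS)  # -> prints "5 100": 1 in 20 requests hits PHP
```

So out of 100 requests only 5 ever reach PHP; the other 95 are served by Nginx from the cache, which is where the big requests-per-second numbers come from.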
I don't believe it!
You'd better believe it! Let me give you an example: take this very website. Say we run a loadtest of 1000 requests with 200 concurrent users. Under Apache you would get between 10 and 40 requests per second, max, your webserver would be under serious load, and you would be forced to expand your environment. Under Nginx with php-fpm but without microcaching it's the same story (maybe a few more requests, but your server would need a lot of php-fpm processes to handle them all). With microcaching you get a whopping 300-450 requests per second!
Ok, give it to me!
Microcaching is actually easy to set up. Below is an example config that you could run for any website made with PHP (in this case it is tailored to WordPress). Take a look:
#
# your website
#
server {
    listen       80;
    server_name  <your hostnames>;
    access_log   <your access log> main;
    error_log    <your error log>;
    root         <your root folder>;

    location / {
        index index.php index.html index.htm;
    }

    if (!-e $request_filename) {
        rewrite ^(.+)$ /index.php?q=$1 last;
    }

    location ~ \.php$ {
        # Setup var defaults
        set $no_cache "";

        # If non GET/HEAD, don't cache & mark user as uncacheable
        # for 1 second via cookie
        if ($request_method !~ ^(GET|HEAD)$) {
            set $no_cache "1";
        }

        # Drop no cache cookie if need be
        # (for some reason, add_header fails if included in prior if-block)
        if ($no_cache = "1") {
            add_header Set-Cookie "_mcnc=1; Max-Age=2; Path=/";
            add_header X-Microcachable "0";
        }

        # Bypass cache if no-cache cookie is set
        if ($http_cookie ~* "_mcnc") {
            set $no_cache "1";
        }

        # Bypass cache if flag is set
        fastcgi_no_cache $no_cache;
        fastcgi_cache_bypass $no_cache;

        fastcgi_cache microcache;
        fastcgi_cache_key $server_name|$request_uri;
        fastcgi_cache_valid 404 30m;
        fastcgi_cache_valid 200 10s;
        fastcgi_max_temp_file_size 1M;
        fastcgi_cache_use_stale updating;

        fastcgi_pass localhost:9000;
        fastcgi_pass_header Set-Cookie;
        fastcgi_pass_header Cookie;
        fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
        #fastcgi_intercept_errors on;
        include fastcgi_params;
    }
}
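One optional addition while you are testing (not part of the config above): Nginx keeps the result of the cache lookup in the built-in $upstream_cache_status variable, and you can expose it as a response header so you can see HITs and MISSes straight from your browser or curl. Add this inside the same location ~ \.php$ block:

```nginx
# Optional, for debugging only: expose the cache status to the client.
# Values you will see include MISS, HIT, BYPASS and UPDATING.
add_header X-Cache-Status $upstream_cache_status;
```

With this in place, hit the same page twice within the cache window and the second response should come back with X-Cache-Status: HIT.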
You also need to define the cache zone and the cache log format in your nginx.conf. Add these lines to your http {} block:
fastcgi_cache_path /var/cache/nginx2 levels=1:2 keys_zone=microcache:5m max_size=1000m;

log_format cache '$remote_addr - $remote_user [$time_local] "$request" '
                 '$status $upstream_cache_status $body_bytes_sent "$http_referer" '
                 '"$http_user_agent" "$http_x_forwarded_for"';
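Note that defining the "cache" log format by itself doesn't log anything; it only takes effect once an access_log directive references it. A minimal example (the log path here is just an assumption, use whatever fits your setup), placed in your server {} block:

```nginx
# Write cache hit/miss activity using the "cache" log format defined
# in the http {} block; the path is an example, pick your own.
access_log /var/log/nginx/cache.log cache;
```

Tailing that log during a loadtest is an easy way to watch the HIT/MISS ratio, since the format includes $upstream_cache_status.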
Try it!
I encourage everyone to try it for themselves and see the improvement in performance! I know it's a big switch config-wise to go from Apache to Nginx, but you'll get the hang of it really fast!
Check out http://livebyt.es for more articles coming soon!