Why You Should Always Use Nginx With Microcaching

Everybody knows how hard it is to squeeze as much performance as possible out of your webserver(s). In my daily work as a hosting engineer that means I fairly often get the same question: "Wow, cool website, but can it cope with big-time traffic?"


The "normal" situation

A "normal" website running under Apache with mod_php should be able to serve 20 requests per second with ease, but what if you get around 50 requests per second (not unusual for some sites, such as those of political parties)? The answer, in my opinion, is to drop Apache, because as it stands Apache just isn't cutting it anymore.


YES! Nginx!

In comes Nginx! So you set up your website on Nginx, run a quick load test (for instance 1000 requests with 200 concurrent users), and see that you don't get much more out of it than you did with Apache. How come? It's simple: Nginx doesn't have a built-in PHP module, so you need a FastCGI processor to handle the PHP pages (I suggest php-fpm, as it is better than spawn-fcgi). So what should you use? Microcaching!
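A load test like the one described can be run with ApacheBench (`ab`), which ships with Apache's httpd tools; the URL below is a placeholder for your own site:

```shell
# 1000 requests total, 200 concurrent clients; replace the URL with your own site.
ab -n 1000 -c 200 http://your.site/
# Compare the "Requests per second" line in the summary between setups.
```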


What the hell is Microcaching?

What is microcaching? The theory is that you cache your pages for a very short time (for instance one second). When a user requests a page, the response is cached, so subsequent requests within that second are served from the cache. With 100 users requesting the page within 5 seconds, only 1 in 20 users has to wait for the full page to be built (and with Nginx and a well-structured site that isn't a problem).
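The arithmetic behind that 1-in-20 figure can be sketched as follows, assuming the 100 requests arrive evenly over the 5 seconds and the cache lifetime is 1 second:

```shell
#!/bin/sh
# With a 1-second cache lifetime, only the first request in each 1-second
# window misses the cache and rebuilds the page; the rest are cache hits.
requests=100   # total requests
window=5       # seconds over which they arrive
ttl=1          # cache lifetime in seconds

misses=$((window / ttl))                            # one rebuild per expired TTL
echo "cache misses: $misses of $requests"
echo "1 in $((requests / misses)) users rebuilds the page"
```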


I don't believe it!

You'd better believe it! Let me give you an example: take this very website. Say we run a load test of 1000 requests with 200 concurrent users. Under Apache you would get between 10 and 40 requests a second, max, your webserver would be under serious load, and you would be forced to expand your environment. Under Nginx with php-fpm but without microcaching it's much the same (maybe a few more requests, but your server would have a lot of php-fpm processes running to handle them). With microcaching you get a whopping 300-450 requests a second!


Ok, give it to me!

Microcaching is actually easy to set up; below is an example config that you could use for any PHP website (in this case it is tailored to WordPress). Take a look:

# your website
server {
    listen       80;
    server_name  <your hostnames>;
    access_log   <your access log>  cache;
    error_log    <your error log>;
    root         <your root folder>;

    location / {
        index  index.php index.html index.htm;
        # Send pretty permalinks to index.php (WordPress-style)
        if (!-e $request_filename) {
            rewrite ^(.+)$ /index.php?q=$1 last;
        }
    }

    location ~ \.php$ {
        # Set up variable defaults
        set $no_cache "";

        # If non GET/HEAD, don't cache & mark the user as uncacheable
        # for 2 seconds via a cookie
        if ($request_method !~ ^(GET|HEAD)$) {
            set $no_cache "1";
        }

        # Drop the no-cache cookie if need be
        # (for some reason, add_header fails if included in the prior if-block)
        if ($no_cache = "1") {
            add_header Set-Cookie "_mcnc=1; Max-Age=2; Path=/";
            add_header X-Microcachable "0";
        }

        # Bypass the cache if the no-cache cookie is set
        if ($http_cookie ~* "_mcnc") {
            set $no_cache "1";
        }

        # Bypass the cache if the flag is set
        fastcgi_no_cache $no_cache;
        fastcgi_cache_bypass $no_cache;
        fastcgi_cache microcache;
        fastcgi_cache_key $server_name|$request_uri;
        fastcgi_cache_valid 404 30m;
        fastcgi_cache_valid 200 10s;
        fastcgi_max_temp_file_size 1M;
        fastcgi_cache_use_stale updating;
        fastcgi_pass localhost:9000;
        fastcgi_pass_header Set-Cookie;
        fastcgi_pass_header Cookie;
        fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_param  PATH_INFO          $fastcgi_path_info;
        fastcgi_param  PATH_TRANSLATED    $document_root$fastcgi_path_info;
        #fastcgi_intercept_errors on;
        include fastcgi_params;
    }
}
You also need to define the cache zone and the log format in your nginx.conf; add these lines to your http {} block:

fastcgi_cache_path /var/cache/nginx2 levels=1:2 keys_zone=microcache:5m max_size=1000m;
log_format cache '$remote_addr - $remote_user [$time_local] "$request" '
                 '$status $upstream_cache_status $body_bytes_sent "$http_referer" '
                 '"$http_user_agent" "$http_x_forwarded_for"';
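To verify that the cache actually works, you can expose Nginx's `$upstream_cache_status` variable as a response header and inspect it with curl. The `X-Cache-Status` header name is my own choice, not part of the config above:

```shell
# In the  location ~ \.php$  block, add:
#     add_header X-Cache-Status $upstream_cache_status;
# then reload nginx and check the header (replace the URL with your own site):
curl -sI http://your.site/ | grep -i x-cache-status
# Expect MISS on the first request, HIT on requests within the cache lifetime,
# and BYPASS when the no-cache cookie is set.
```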


Try it!

I encourage everyone to try it and see the performance improvement for themselves! I know it's a big switch to go from Apache to Nginx (config-wise), but you'll get the hang of it really fast!

Check out http://livebyt.es for more articles coming soon!


9 Comment(s)


By: Julian Fernandes

What if I use Varnish? Should I still use Nginx with microcaching, or will Varnish itself be better?

Right now I have a WordPress blog with W3 Total Cache set to Disk (enhanced) page cache, Varnish, Nginx, PHP-FPM and CloudFlare Pro on top of all that. Speed is awesome, but I wonder if I could use microcaching in this setup? I was going to try it last night, but there were too many visitors online.



By: frank

If you already have other caching systems I don't think you need microcaching. From what I understand, microcaching caches items that are generally so dynamic that they usually aren't cached at all, but caching them for only a few seconds can reduce the load if the requests toward those objects are at least on the order of hundreds per second...

 Or maybe I am completely wrong :D

By: Gabriel

Hi, to answer your question simply: yes, you can use Nginx microcaching with Varnish. I did a test on a machine that has Varnish on port 80 and Nginx on port 8080.

The microcaching happens at the Nginx layer, which is cool and helps the server big time.

I believe in caching, so two or three layers of cache? If they don't conflict with each other, why not! :)


By: Anonymous

Thanks for the article.

However, a lot of sites now are 2.0, meaning the data included in the pages changes all the time, so the bottleneck isn't the web server or even the web application, but rather the database in the back. Keeping multiple DB servers with write access in sync is a much harder task.

By: grails

Microcaching is so cool.  It gives you the impression that the site is still dynamic, but greatly improves performance!

By: Gwyneth Llewelyn

Thanks for the tutorial :)

I've tried this out on one of my WordPress installations, and used <a href="https://rtcamp.com/tutorials/nginx/upstream-cache-status-in-response-header/">this tip</a> to make sure that I could test out if the cache was really working or not.

That way, I can test with <b>curl -I http://my.website.tld</b> and see if I get a cache hit, miss, or bypass.

This works flawlessly for the homepage. I can then shut down php5-fpm and Nginx will continue to respond to requests for the homepage — which is pretty cool indeed :D And the Nginx cache definitely starts to get a few entries :-)

Now, the problem is: this will work only for the homepage, everything else will be bypassed.

On rtCamp's site they suggest turning off permalinks, then turning them back on, enabling the Nginx helper plugin, turning it off, and so forth... Whatever sequence I use, the result is always the same: everything works for the homepage, and nothing works for the rest of the content (it always gets bypassed).

I wonder if anyone has a clue why this is the case.

By: Gwyneth Llewelyn

It's a bit silly to reply to myself after a year, but I figured out what I had wrong. I'm also using WordPress, and some of my WordPress-specific configuration was forcing all URL rewrites to include a query string. Since I was excluding anything with a query string from being cached, only the homepage worked... :)

By: Liew CheonFong

I see that you set the microcache to 10s in the code, not 1s as stated in the article.

By: Enrique

Does this cache always act for 1 second? Or can we increase the cache time (to hours, for example) and also clear the cache at any moment somehow?