Setting Up A High-Availability Load Balancer (With Failover And Session Support) With HAProxy/Keepalived On Debian Lenny

Version 1.0
Author: Falko Timme

This article explains how to set up a two-node load balancer in an active/passive configuration with HAProxy and keepalived on Debian Lenny. The load balancer sits between the user and two (or more) backend Apache web servers that hold the same content. Not only does the load balancer distribute the requests to the two backend Apache servers, it also checks the health of the backend servers. If one of them is down, all requests will automatically be redirected to the remaining backend server. In addition to that, the two load balancer nodes monitor each other using keepalived, and if the master fails, the slave becomes the master, which means the users will not notice any disruption of the service. HAProxy is session-aware, which means you can use it with any web application that makes use of sessions (such as forums, shopping carts, etc.).

From the HAProxy web site: "HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for web sites crawling under very high loads while needing persistence or Layer7 processing. Supporting tens of thousands of connections is clearly realistic with today's hardware. Its mode of operation makes its integration into existing architectures very easy and riskless, while still offering the possibility not to expose fragile web servers to the Net."

I do not issue any guarantee that this will work for you!


1 Preliminary Note

In this tutorial I will use the following hosts:

  • Load Balancer 1: lb1, IP address:
  • Load Balancer 2: lb2, IP address:
  • Web Server 1: http1, IP address:
  • Web Server 2: http2, IP address:
  • We also need a virtual IP address that floats between lb1 and lb2.

Here's a little diagram that shows our setup:

    shared IP=
        |            |              |           |
     +--+--+      +--+--+      +----+----+ +----+----+
     | lb1 |      | lb2 |      |  http1  | |  http2  |
     +-----+      +-----+      +---------+ +---------+
     haproxy      haproxy      2 web servers (Apache)
     keepalived   keepalived

The shared (virtual) IP address is no problem as long as you're in your own LAN where you can assign IP addresses as you like. However, if you want to use this setup with public IP addresses, you need to find a hoster where you can rent two servers (the load balancer nodes) in the same subnet; you can then use a free IP address in this subnet for the virtual IP address.

http1 and http2 are standard Debian Lenny Apache setups with the document root /var/www (the configuration of this default vhost is stored in /etc/apache2/sites-available/default). If your document root differs, you might have to adjust this guide a bit.

To demonstrate the session-awareness of HAProxy, I'm assuming that the web application that is installed on http1 and http2 uses the session id JSESSIONID.


2 Preparing The Backend Web Servers

We will configure HAProxy as a transparent proxy, i.e., it will pass on the original user's IP address in a field called X-Forwarded-For to the backend web servers. Of course, the backend web servers should log the original user's IP address in their access logs instead of the IP addresses of our load balancers. Therefore we must modify the LogFormat line in /etc/apache2/apache2.conf and replace %h with %{X-Forwarded-For}i:


vi /etc/apache2/apache2.conf
#LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
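With this change, the first field of each combined log line is the client address taken from the X-Forwarded-For header instead of the connecting load balancer's address. A quick sketch of what such a line looks like and how to pull the client address back out (the IP, timestamp, and user agent here are hypothetical):

```shell
# Hypothetical access-log line as Apache writes it with the modified LogFormat;
# the first field is now the X-Forwarded-For value (the real client IP),
# not the load balancer's IP:
line='203.0.113.7 - - [15/Jun/2009:10:00:00 +0200] "GET / HTTP/1.1" 200 2326 "-" "Mozilla/5.0"'

# Extract the client address, e.g. for ad-hoc log analysis:
echo "$line" | awk '{print $1}'
# prints: 203.0.113.7
```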

Also, we will configure HAProxy to check the backend servers' health by continuously requesting the file check.txt (translates to /var/www/check.txt if /var/www is your document root) from the backend servers. Of course, these requests would totally bloat the access logs and mess up your page view statistics (if you use a tool like Webalizer or AWStats that generates statistics based on the access logs).

Therefore we open our vhost configuration (in this example it's in /etc/apache2/sites-available/default) and put these two lines into it (comment out all other CustomLog directives in your vhost configuration):

vi /etc/apache2/sites-available/default
SetEnvIf Request_URI "^/check\.txt$" dontlog
CustomLog /var/log/apache2/access.log combined env=!dontlog

This configuration prevents requests for check.txt from being logged in Apache's access log.
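The SetEnvIf regular expression matches the exact URI /check.txt and nothing else, so ordinary page views are still logged. A quick sanity check of the pattern outside Apache, using grep -E on a few sample URIs:

```shell
# Only the exact health-check URI should match the pattern ^/check\.txt$:
printf '%s\n' '/check.txt' '/check.txt.bak' '/sub/check.txt' | grep -E '^/check\.txt$'
# prints only: /check.txt
```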

Afterwards we restart Apache:

/etc/init.d/apache2 restart

... and create the file check.txt (this can be an empty file):

touch /var/www/check.txt

We are finished already with the backend servers; the rest of the configuration happens on the two load balancer nodes.


3 Installing HAProxy


We can install HAProxy as follows:

aptitude install haproxy


4 Configuring The Load Balancers

The HAProxy configuration is stored in /etc/haproxy/haproxy.cfg and is pretty straightforward. I won't explain all the directives here; to learn more about all the options, please refer to the HAProxy documentation.

We back up the original /etc/haproxy/haproxy.cfg and create a new one like this:


cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg_orig
cat /dev/null > /etc/haproxy/haproxy.cfg
vi /etc/haproxy/haproxy.cfg

global
        log   local0
        log   local1 notice
        #log loghost    local0 info
        maxconn 4096
        user haproxy
        group haproxy

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        retries 3
        maxconn 2000
        contimeout      5000
        clitimeout      50000
        srvtimeout      50000

listen webfarm
       mode http
       stats enable
       stats auth someuser:somepassword
       balance roundrobin
       cookie JSESSIONID prefix
       option httpclose
       option forwardfor
       option httpchk HEAD /check.txt HTTP/1.0
       server webA cookie A check
       server webB cookie B check

Afterwards, we set ENABLED to 1 in /etc/default/haproxy:

vi /etc/default/haproxy
# Set ENABLED to 1 if you want the init script to start haproxy.
ENABLED=1
# Add extra flags here.
#EXTRAOPTS="-de -m 16"
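The same change can be made non-interactively with sed; a hedged sketch, demonstrated here on a scratch copy of the file so nothing real is touched (the sketch assumes the stock Debian file ships with ENABLED=0):

```shell
# Work on a scratch copy of /etc/default/haproxy:
printf '# Set ENABLED to 1 if you want the init script to start haproxy.\nENABLED=0\n' > /tmp/haproxy.default

# Flip the switch in place:
sed -i 's/^ENABLED=0$/ENABLED=1/' /tmp/haproxy.default

grep '^ENABLED=' /tmp/haproxy.default
# prints: ENABLED=1
```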

Comments


By: Anonymous

When haproxy starts, it attempts to bind to the virtual IP address given in the listen statement. But this address doesn't exist on lb2 until lb1 fails over, so haproxy fails to start on lb2 initially.

 I got around this by adding the following three lines to the keepalived configuration file (on both lb1 and lb2), right underneath the virtual_ipaddress{} line:

        notify_master /opt/keepalived/bin/
        notify_backup /opt/keepalived/bin/
        notify_fault /opt/keepalived/bin/

The notify_master script simply starts haproxy; the notify_backup and notify_fault scripts simply do a killall -TERM haproxy. I couldn't really figure out the difference between a LB being a "backup" vs. a LB being in "fault", so in both cases I just killall haproxy.

So when LB1 & LB2 start up, LB1 is the master, so it starts haproxy. LB2 is the backup, so it does a killall haproxy (which doesn't do anything since it never started, but that's ok). If LB1 fails for whatever reason (pull the network cord, system crashes hard, etc.), then LB2 starts haproxy. Depending on how hard LB1 crashed, it may or may not have had time to take down haproxy (probably not). When LB1 comes back up and becomes the master again, it runs haproxy, and LB2 is demoted back to backup status and does a killall haproxy.
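The script names were lost from the comment above; note that keepalived can also call a single notify script, which receives the new state (MASTER, BACKUP, or FAULT) as its third argument. A hedged sketch of the dispatch logic the commenter describes, with the real start/stop commands shown as comments:

```shell
#!/bin/sh
# Hypothetical keepalived notify handler: start haproxy only when this node
# becomes MASTER; stop it on BACKUP or FAULT, as the comment describes.
handle_state() {
    case "$1" in
        MASTER)       echo "starting haproxy" ;;  # real script would run: /etc/init.d/haproxy start
        BACKUP|FAULT) echo "stopping haproxy" ;;  # real script would run: killall -TERM haproxy
        *)            echo "unknown state: $1" ;;
    esac
}

handle_state MASTER   # prints: starting haproxy
handle_state BACKUP   # prints: stopping haproxy
```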


By: Varun Batra

I am not doing the practical right now, just going through the article. It might be a stupid question, but I'm just wondering: how should I configure the DNS server or A records?

By: mgssolid


First, Thank you for this tutorial.

I have implemented the same architecture using two RHEL7 servers with Keepalived and HAProxy installed on each. When testing a keepalived service crash, the failover to the slave server works fine: the VIP is assigned to the slave server, which is then declared the new primary. That is not the case when crashing the HAProxy service. The failover procedure is not triggered and user requests are still routed to the primary server, which is obviously not responding. I think vrrp_script is not working correctly on RHEL7, as the killall command does not exist in this version. Are there any alternatives I can use in this case?

By: Tomas

"yum install psmisc" will provide you killall command on RHEL7.

By: Charls


First of all thanks for this guide!

I have a question regarding this configuration. I've followed it, but I cannot access my backend servers through the virtual IP (the haproxy/keepalived one in your case), as I get a connection refused error, although I can access my backend servers through their own IPs. I don't have any iptables rules set.

Am I missing something?


By: Thanh

Yes, I have the same problem. Did you solve it?

If yes, can you help me, please?


By: Milinda

Thanks a lot, this is a really helpful article.

By: zoran

This will never work. haproxy will try to bind to the virtual IP set by keepalived. On the server where keepalived is not active (for instance lb2), haproxy will fail to start. And if for some reason lb1 crashes, lb2 will never become active, since its priority will always be lower than lb1's.

By: Sushil Rangari

Hello All,

We have two Redis web servers behind HAProxy, but I need all traffic to go to Redis-web1 only, and HAProxy should divert traffic to Redis-web2 only when Redis-web1 is down.

Is this possible? Please suggest.

Thanks, Sushil R