apache cluster ultramonkey question

Discussion in 'HOWTO-Related Questions' started by Randy, Mar 13, 2007.

  1. Randy

    Randy New Member

    Hi,
I have an Apache cluster based on Debian Sarge, as described in the howto on this site. When I type ipvsadm -L -n, I sometimes notice that the number of active connections doesn't change. The weight is in this case still set to 1 for both servers.
    ipvsadm -L -n
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:port Scheduler Flags
    -> RemoteAddress:port Forward Weight ActiveConn InActConn
    TCP xx.xx.xx.xx:80 lblc
    -> yy.y..y.y:80 Route 1 0 0
    -> zz.z..z..z:80 Local 1 0 0

When I use the options -L -n -c, I also don't see the connections being distributed across the servers. The table is empty, and all connections go via the virtual IP to the same server. So no load balancing is taking place...
I have to restart heartbeat to get things going again. But this interrupts the sites for a short while, so I'd rather not do it.
Has anybody had the same issue at some point? I don't see anything strange in my logs (ha, ldirectord, messages).
    I'm at a loss on this one at the moment.
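For reference, the easiest way to watch whether distribution stalls is to refresh both IPVS tables live (a sketch; run as root on the director):

```shell
# Refresh the virtual-server table and the connection table every second;
# on a healthy director the ActiveConn/InActConn counters should move
# as requests come in.
watch -n 1 'ipvsadm -L -n; echo; ipvsadm -L -n -c'
```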
    Greetings
    Randy
     
  2. falko

    falko Super Moderator

  3. Randy

    Randy New Member

    Hi Falko.
Thanks for your interest. I will split it up into two situations. I have two servers running (loader1 and loader2).

1> = situation on both servers where balancing has stopped.
2> = situation on both servers where balancing is working.

1> Situation where load balancing has stopped!!
    -------------------------------------------
    Master:

    ipvsadm -L -n
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:port Scheduler Flags
    -> RemoteAddress:port Forward Weight ActiveConn InActConn
    TCP 82.xx.xxx.xxx:80 lblc
    -> 82.xx.xxx.yyy:80 Route 1 0 0
    -> 82.xx.xxx.zzz:80 Local 1 0 0
(remark: connections stay zero no matter how many connections there are, all connections now go to the same server)
    ----------------------------------------------------------------
    loader1:~# ipvsadm -L -n -c
    IPVS connection entries
    pro expire state source virtual destination
    loader1:~#
    ----------------------------------------------------------------
    ip addr sh eth0
    2: eth0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:15:c5:f4:7a:bd brd ff:ff:ff:ff:ff:ff
    inet 192.168.125.239/24 brd 192.168.125.255 scope global eth0
    inet 192.168.125.246/24 brd 192.168.125.255 scope global secondary eth0:0
    inet6 fe80::215:c5ff:fef4:7abd/64 scope link
    valid_lft forever preferred_lft forever
    ----------------------------------------------------------------
    loader1:~# ldirectord ldirectord.cf status
    ldirectord for /etc/ha.d/ldirectord.cf is running with pid: 17780
    ----------------------------------------------------------------
    loader1:~# /etc/ha.d/resource.d/LVSSyncDaemonSwap master status
    master running
    ----------------------------------------------------------------
So you see everything is running except for the load-balancing function.

    =============================================================================
    Slave:

    loader2:~# ipvsadm -L -n
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:port Scheduler Flags
    -> RemoteAddress:port Forward Weight ActiveConn InActConn
    ----------------------------------------------------------------
    loader2:~# ip addr sh eth0
    2: eth0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:15:c5:f4:7f:04 brd ff:ff:ff:ff:ff:ff
    inet 192.168.125.238/24 brd 192.168.125.255 scope global eth0
    inet 192.168.125.245/24 brd 192.168.125.255 scope global secondary eth0:0
    inet6 fe80::215:c5ff:fef4:7f04/64 scope link
    valid_lft forever preferred_lft forever
(this different VIP belongs to another service running here, not Apache)
    ----------------------------------------------------------------
    loader2:~# ldirectord ldirectord.cf status
    ldirectord is stopped for /etc/ha.d/ldirectord.cf
    loader2:~#
    ----------------------------------------------------------------
    loader2:~# /etc/ha.d/resource.d/LVSSyncDaemonSwap master status
    master stopped
    (ipvs_syncbackup pid: 18709)
    =============================================================================
    =============================================================================

2> Now the situation where load balancing works!!
    =============================================================================
    Master:

    ipvsadm -L -n -c
    IPVS connection entries
    pro expire state source virtual destination
    TCP 02:58 ESTABLISHED 62.rrr.rrr.rrr:1366 82.xx.xxx.xxx:80 82.xx.xxx.yyy:80
    TCP 02:42 ESTABLISHED 62.rrr.rrr.rrr:1365 82.xx.xxx.xxx:80 82.xx.xxx.yyy:80
    loader1:~#
    ----------------------------------------------------------------
    loader1:~# ipvsadm -L -n
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:port Scheduler Flags
    -> RemoteAddress:port Forward Weight ActiveConn InActConn
    TCP 82.xx.xxx.xxx:80 lblc
    -> 82.xx.xxx.yyy:80 Route 1 2 0
    -> 82.xx.xxx.zzz:80 Local 1 2 0
(see the difference?)
    ----------------------------------------------------------------

    loader1:~# ip addr sh eth0
    2: eth0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:15:c5:f4:7a:bd brd ff:ff:ff:ff:ff:ff
    inet 192.168.125.239/24 brd 192.168.125.255 scope global eth0
    inet 192.168.125.246/24 brd 192.168.125.255 scope global secondary eth0:0
    inet6 fe80::215:c5ff:fef4:7abd/64 scope link
    valid_lft forever preferred_lft forever
    ----------------------------------------------------------------
    loader1:~# /etc/ha.d/resource.d/LVSSyncDaemonSwap master status
    master running
    (ipvs_syncmaster pid: 15639)
    =============================================================================
    Slave:

    loader2:~# ipvsadm -L -n -c
    IPVS connection entries
    pro expire state source virtual destination
    TCP 10:15 ESTABLISHED 62.rrr.rrr.rrr:1369 82.xx.xxx.xxx:80 127.0.0.1:80
    TCP 10:42 ESTABLISHED 62.rrr.rrr.rrr:1373 82.xx.xxx.xxx:80 82.xx.xxx.yyy:80
    TCP 10:34 ESTABLISHED 62.rrr.rrr.rrr:1372 82.xx.xxx.xxx:80 82.xx.xxx.yyy:80
    TCP 10:41 ESTABLISHED 62.rrr.rrr.rrr:1374 82.xx.xxx.xxx:80 82.xx.xxx.zzz:80
    ----------------------------------------------------------------
    loader2:~# ipvsadm -L -n
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:port Scheduler Flags
    -> RemoteAddress:port Forward Weight ActiveConn InActConn
    loader2:~#
    ----------------------------------------------------------------
    loader2:~# /etc/ha.d/resource.d/LVSSyncDaemonSwap master status
    master stopped
    (ipvs_syncbackup pid: 19054)
    ----------------------------------------------------------------
    loader2:~# ldirectord ldirectord.cf status
    ldirectord is stopped for /etc/ha.d/ldirectord.cf
    ========================================================================

I think I have a problem with my firewall, because sometimes I see traffic going to 225.0.0.0 or 224.0.0.22 being blocked. Apart from these messages, the load balancing just stops without errors, which doesn't surprise me since all daemons are still running. I believe the 224 and 225 addresses are multicast addresses used for things like heartbeat. The strange thing is that I don't see these messages appearing constantly (huh..). I'm using Guarddog in a KDE environment.
Do you have advice on a better front-end to iptables? I've read that Guarddog doesn't really know how to handle multicast traffic and shuts everything down by default (which is normally a good thing).
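If Guarddog is indeed dropping the multicast traffic, hand-added rules along these lines might help (a sketch only; adjust to your own chains and policies):

```shell
# Allow IPv4 multicast (224.0.0.0/4 covers 224.0.0.22 and similar addresses)
iptables -A INPUT  -d 224.0.0.0/4 -j ACCEPT
iptables -A OUTPUT -d 224.0.0.0/4 -j ACCEPT
# IGMP carries multicast group membership reports
iptables -A INPUT  -p igmp -j ACCEPT
```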
Could I use the firewall of ISPConfig, and would I also be able to create a DMZ with it? I also haven't seen how to allow port ranges with it (only single ports).
Please note that there is a whole range of other config files related to my setup. It would, however, be impolite to post everything here just for the sake of completeness. The 192 addresses are my internal network and the 82 addresses are external IPs.
I hope I was clear and understandable.
    Thanx
    Randy
     
    Last edited: Mar 16, 2007
  4. falko

    falko Super Moderator

    Do you have the same problems when your firewall is switched off?
     
  5. Randy

    Randy New Member

    Hi,
No, I haven't tested it with the firewall turned off. These are live systems (no heavy loads yet), and I think it would be too risky to test it that way. I have, however, tested it during the past weekend with a new iptables config (made with fwbuilder) which didn't reject broadcasts and multicasts. Same result: everything works, but the load balancer stops distributing the connections after a while.
    I'm at a complete loss.

    Randy
     
  6. Randy

    Randy New Member

I have now tested it with the firewall turned off. Same situation. I noticed, however, that the only message I received in /var/log/messages was an occasional: martian source xxxxxx (IP address of the external gateway) from xxxx (virtual IP) on dev eth1 (external interface).
Until now I didn't think this could do any harm, but I'm starting to wonder... Could it also be that I have problems with internal routing? Maybe this is not such a smart question, but if I don't find a remedy I'm going to have to find a way to detect that load balancing has stopped, so I can restart it automatically with a script (monit, for example).
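Such a watchdog could look something like this (a rough sketch; the restart command, log message, and paths are assumptions about a typical Sarge setup, not tested here):

```shell
#!/bin/sh
# check_lvs.sh -- restart ldirectord when the IPVS connection table stays
# empty even though traffic is expected. Run it from cron, e.g.:
#   * * * * * /usr/local/sbin/check_lvs.sh --run

# Count connection entries from `ipvsadm -L -n -c` output on stdin,
# skipping the two header lines.
count_entries() {
    tail -n +3 | grep -c . || true
}

check_and_restart() {
    entries=$(ipvsadm -L -n -c | count_entries)
    if [ "$entries" -eq 0 ]; then
        logger "IPVS connection table empty -- restarting ldirectord"
        /etc/init.d/ldirectord restart   # path assumed; adjust to your init setup
    fi
}

if [ "${1:-}" = "--run" ]; then
    check_and_restart
fi
```

A tool like monit could call the same check instead of cron; the parsing above only counts the lines below the two-line header that ipvsadm always prints.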
    Greetings,
    Randy
     
  7. Randy

    Randy New Member

I think I've solved it. Since there were no error messages anywhere, and the only strange messages concerned the firewall traffic, it seemed to me that the broadcast/multicast traffic was slowly being choked to death one way or another. So what I did (and I couldn't think of anything else) was to redo the sysctl settings. And sure enough, that seems to have done the trick. Everything is working again as it did before, and load balancing has been constant for several days now. That's a relief, since I have only two servers doing the load balancing and clustering of NFS / Apache / MySQL (master-master replication) and ISPConfig. We can now take one server offline for maintenance and keep serving the sites on the other server without any problems or interruptions.
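For anyone hitting the same symptom: "redoing the sysctl" here means re-applying the kernel settings from the howto. A sketch of what that typically involves (the keys and values below are illustrative, not necessarily the exact ones used in this setup):

```shell
# Re-apply everything from /etc/sysctl.conf
sysctl -p

# Settings commonly involved in an LVS director setup like this one:
sysctl -w net.ipv4.ip_forward=1            # the director must forward packets
sysctl -w net.ipv4.conf.all.rp_filter=0    # strict reverse-path filtering can
                                           # produce "martian source" drops
```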
Nice howtos, Falko and Till... Thanks again, we have learned a lot (coming from the NetWare world)...
    Greetings
    Randy
     
  8. falko

    falko Super Moderator

    I'm glad you got it solved. :)
     
