High-Availability

Setting Up A High-Availability Load Balancer (With Failover and Session Support) With HAProxy/Heartbeat On Debian Lenny

This article explains how to set up a two-node load balancer in an active/passive configuration with HAProxy and heartbeat on Debian Lenny. The load balancer sits between the user and two (or more) backend Apache web servers that hold the same content. Not only does the load balancer distribute the requests to the two backend Apache servers, it also checks the health of the backend servers. If one of them is down, all requests will automatically be redirected to the remaining backend server. In addition to that, the two load balancer nodes monitor each other using heartbeat, and if the master fails, the slave becomes the master, which means the users will not notice any disruption of the service. HAProxy is session-aware, which means you can use it with any web application that makes use of sessions (such as forums, shopping carts, etc.).
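
To give an idea of the HAProxy side of such a setup, a minimal haproxy.cfg sketch could look like this (the virtual IP, backend addresses and cookie name are examples; the shared IP itself is managed by heartbeat):

# /etc/haproxy/haproxy.cfg (sketch)
# webfarm listens on the virtual IP that heartbeat keeps on the active node
listen webfarm 192.168.0.99:80
       mode http
       balance roundrobin
       # cookie-based session stickiness
       cookie SERVERID insert indirect nocache
       # health check against each backend
       option httpchk HEAD / HTTP/1.0
       server webserver1 192.168.0.101:80 cookie A check
       server webserver2 192.168.0.102:80 cookie B check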

Distributed Replicated Storage Across Four Storage Nodes With GlusterFS On Debian Lenny

This tutorial shows how to combine four single storage servers (running Debian Lenny) into one distributed replicated storage system with GlusterFS. Nodes 1 and 2 (replication1) as well as nodes 3 and 4 (replication2) will mirror each other, and replication1 and replication2 will be combined into one larger storage volume (distribution). Essentially, this is RAID 10 over the network: if you lose one server from replication1 and one from replication2, the distributed volume continues to work. The client system (Debian Lenny as well) will be able to access the storage as if it were a local filesystem. GlusterFS is a clustered file system capable of scaling to several petabytes. It aggregates various storage bricks over an InfiniBand RDMA or TCP/IP interconnect into one large parallel network file system. Storage bricks can be made of any commodity hardware, such as x86-64 servers with SATA-II RAID and InfiniBand HBAs.
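
For illustration, the client-side volume file of such a setup could look roughly like this (GlusterFS 2.x translator syntax; the hostnames and volume names are examples): two replicate sets are aggregated by a distribute translator.

# /etc/glusterfs/glusterfs.vol on the client (sketch)
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host server1.example.com
  option remote-subvolume brick
end-volume

# ... remote2, remote3 and remote4 are defined the same way ...

# server1 and server2 mirror each other
volume replication1
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

# server3 and server4 mirror each other
volume replication2
  type cluster/replicate
  subvolumes remote3 remote4
end-volume

# both mirrors are combined into one large volume
volume distribution
  type cluster/distribute
  subvolumes replication1 replication2
end-volume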

Setting Up A High-Availability Load Balancer (With Failover And Session Support) With HAProxy/Keepalived On Debian Lenny

This article explains how to set up a two-node load balancer in an active/passive configuration with HAProxy and keepalived on Debian Lenny. The load balancer sits between the user and two (or more) backend Apache web servers that hold the same content. Not only does the load balancer distribute the requests to the two backend Apache servers, it also checks the health of the backend servers. If one of them is down, all requests will automatically be redirected to the remaining backend server. In addition to that, the two load balancer nodes monitor each other using keepalived, and if the master fails, the slave becomes the master, which means the users will not notice any disruption of the service. HAProxy is session-aware, which means you can use it with any web application that makes use of sessions (such as forums, shopping carts, etc.).
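
As an illustration of the failover part, the VRRP section of keepalived.conf on the master could look roughly like this (the interface, router ID, password and virtual IP are examples):

# /etc/keepalived/keepalived.conf on the master (sketch)
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    # the backup node uses state BACKUP and a lower priority, e.g. 100
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass examplepass
    }
    # the shared IP that HAProxy listens on
    virtual_ipaddress {
        192.168.0.99
    }
}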

High-Availability Storage With GlusterFS On Debian Lenny - Automatic File Replication (Mirror) Across Two Storage Servers

This tutorial shows how to set up high-availability storage with two storage servers (Debian Lenny) that use GlusterFS. Each storage server will be a mirror of the other, and files will be replicated automatically across both servers. The client system (Debian Lenny as well) will be able to access the storage as if it were a local filesystem. GlusterFS is a clustered file system capable of scaling to several petabytes. It aggregates various storage bricks over an InfiniBand RDMA or TCP/IP interconnect into one large parallel network file system. Storage bricks can be made of any commodity hardware, such as x86-64 servers with SATA-II RAID and InfiniBand HBAs.
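
To give an idea of the server side, each storage server exports a local directory with a volume file roughly like the following (GlusterFS 2.x translator syntax; the path and names are examples); the client then ties the two exports together with a cluster/replicate translator.

# /etc/glusterfs/glusterfsd.vol on each storage server (sketch)
# storage/posix exports a local directory that holds the files
volume posix
  type storage/posix
  option directory /data/export
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

# make the brick available over TCP; restrict auth.addr to the client's IP in practice
volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  subvolumes brick
end-volume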

Using iSCSI On Debian Lenny (Initiator And Target)

This guide explains how you can set up an iSCSI target and an iSCSI initiator (client), both running Debian Lenny. The iSCSI protocol is a storage area network (SAN) protocol which allows iSCSI initiators to use storage devices on the (remote) iSCSI target using normal ethernet cabling. To the iSCSI initiator, the remote storage looks like a normal, locally-attached hard drive.
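
For a rough idea of what is involved: on Debian Lenny the target is typically provided by iscsitarget (ietd) and the initiator by open-iscsi. The target name, device and IP below are examples.

# On the target - /etc/ietd.conf (sketch)
Target iqn.2009-06.com.example:storage.lun1
        Lun 0 Path=/dev/sdb1,Type=fileio

# On the initiator - discover the target and log in (example IP)
iscsiadm -m discovery -t st -p 192.168.0.100
iscsiadm -m node --targetname iqn.2009-06.com.example:storage.lun1 -p 192.168.0.100 --login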

High-Availability Storage Cluster With GlusterFS On Ubuntu

In this tutorial I will show you how to install GlusterFS in a scalable way to create a storage cluster, starting with two servers running Ubuntu 8.04 LTS Server. Files will be replicated and split across all servers, which amounts to a kind of RAID 10 (RAID 1 with fewer than four servers). With four servers that each have a 100GB hard drive, the total storage will be 200GB, and if one server fails the data will still be intact, since the files on the failed server are replicated on another working server. GlusterFS is a clustered file system capable of scaling to several petabytes. It aggregates various storage bricks over an InfiniBand RDMA or TCP/IP interconnect into one large parallel network file system. Storage bricks can be made of any commodity hardware, such as x86-64 servers with SATA-II RAID and InfiniBand HBAs.
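
Once the volume files are in place, the client mounts the cluster like an ordinary file system, for example (paths are illustrative; older GlusterFS versions use the glusterfs binary directly instead of mount -t glusterfs):

# Mount the GlusterFS volume described in the client volume file
mount -t glusterfs /etc/glusterfs/glusterfs.vol /mnt/glusterfs
# Shows the aggregated size of the storage cluster
df -h /mnt/glusterfs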

Xen Cluster Management With Ganeti On Debian Lenny

Ganeti is a cluster virtualization management system based on Xen. In this tutorial I will explain how to create one virtual Xen machine (called an instance) on a cluster of two physical nodes, and how to manage and fail over this instance between the two physical nodes.
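
To give a flavour of the workflow, everything is driven by the Ganeti command-line suite; the exact options vary between Ganeti versions, so treat the flags and names below as placeholders:

# Initialise the cluster on the first node and add the second node
gnt-cluster init cluster1.example.com
gnt-node add node2.example.com

# Create a DRBD-backed instance spanning both nodes (options differ per Ganeti version)
gnt-instance add -t drbd -o debootstrap -s 10g -n node1.example.com:node2.example.com inst1.example.com

# Fail the instance over to its secondary node
gnt-instance failover inst1.example.com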

DRBD 8.3 Third Node Replication With Debian Etch

The recent release of DRBD 8.3 includes the Third Node feature as a freely available component. This document covers the basics of setting up a third node on a standard Debian Etch installation. At the end of this tutorial you will have a DRBD device that can be utilized as a SAN, an iSCSI target, a file server, or a database server.
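
The core of a third-node setup is a stacked resource: the usual two-node resource is replicated once more to the remote node. A sketch of the drbd.conf fragments involved (the hostnames, devices and addresses are examples):

# /etc/drbd.conf (sketch) - the normal two-node resource
resource r0 {
  protocol C;
  on alpha {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on bravo {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}

# The stacked resource that replicates r0 to the third node;
# protocol A (asynchronous) is the usual choice for a remote site
resource r0-U {
  protocol A;
  stacked-on-top-of r0 {
    device    /dev/drbd10;
    # this address follows whichever node is currently primary for r0
    address   192.168.1.1:7788;
  }
  on charlie {
    device    /dev/drbd10;
    disk      /dev/sdb1;
    address   192.168.1.3:7788;
    meta-disk internal;
  }
}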

Getting High With Lenny

The aim here is to set up some highly available services on Debian Lenny (at the time of writing still due to be released). Most of the documentation for such a setup that I found on the net is based on Xen, but I prefer to use Vserver for the "virtualisation" because of its configurability, its shared memory and CPU resources, and, basically, its raw speed. DRBD 8 and Heartbeat should take care of the availability magic in case a machine shuts down unexpectedly.
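
For the Heartbeat side of such a setup, a heartbeat-1-style configuration is usually enough. A sketch (the node names, interface, DRBD resource and mount point are examples):

# /etc/ha.d/ha.cf (sketch)
keepalive 2
deadtime 30
bcast eth0
auto_failback off
node node1 node2

# /etc/ha.d/haresources - node1 preferred; it takes the IP, the DRBD disk and the filesystem
node1 IPaddr::192.168.0.100/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/var/lib/vservers::ext3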

Setting Up A High-Availability Load Balancer (With Failover and Session Support) With Perlbal/Heartbeat On Debian Etch

This article explains how to set up a two-node load balancer in an active/passive configuration with Perlbal and heartbeat on Debian Etch. The load balancer sits between the user and two (or more) backend Apache web servers that hold the same content. Not only does the load balancer distribute the requests to the two backend Apache servers, it also checks the health of the backend servers. If one of them is down, all requests will automatically be redirected to the remaining backend server. In addition to that, the two load balancer nodes monitor each other using heartbeat, and if the master fails, the slave becomes the master, which means the users will not notice any disruption of the service. Perlbal is session-aware, which means you can use it with any web application that makes use of sessions (such as forums, shopping carts, etc.).
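
The Perlbal part of this boils down to a short configuration file, something along these lines (the IPs and pool name are examples), with persistent client and backend connections plus backend verification providing the session support:

# /etc/perlbal/perlbal.conf (sketch)
CREATE POOL apaches
  POOL apaches ADD 192.168.0.101:80
  POOL apaches ADD 192.168.0.102:80

# The service listens on the virtual IP that heartbeat manages
CREATE SERVICE balancer
  SET listen          = 192.168.0.99:80
  SET role            = reverse_proxy
  SET pool            = apaches
  SET persist_client  = on
  SET persist_backend = on
  SET verify_backend  = on
ENABLE balancer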
