RedHat Cluster Suite And Conga - Linux Clustering

This howto describes a simple, step-by-step installation of the Red Hat Cluster Suite on three CentOS nodes, preparing them as members of a cluster. You will also install the web-based management suite known as Conga.

You will use three nodes to form the cluster and a fourth node as the cluster management node; the management node does not take part in the cluster itself. All the nodes and the management node must be resolvable, either through host file entries or through DNS.

Cluster Nodes:

    cnode1: eth0 - external LAN, eth1 - internal LAN (cluster)
    cnode2: eth0 - external LAN, eth1 - internal LAN (cluster)
    cnode3: eth0 - external LAN, eth1 - internal LAN (cluster)
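
If you are not using DNS, host file entries like the following make all nodes resolvable. The addresses and the `-int` names below are placeholders assumed for illustration; substitute your own external and internal LAN addresses.

```
# /etc/hosts - example entries (placeholder addresses, adjust to your LANs)
192.168.1.11   cnode1        # external LAN (eth0)
10.0.0.11      cnode1-int    # internal cluster LAN (eth1)
192.168.1.12   cnode2
10.0.0.12      cnode2-int
192.168.1.13   cnode3
10.0.0.13      cnode3-int
192.168.1.10   centos        # cluster management node
```

The same entries go into /etc/hosts on every node, including the management node.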

Cluster Management Node:

    centos: eth0 - external LAN

As the cluster, its management interface, and the service daemons use TCP, you can disable the firewalls on these nodes for the purposes of this article.

OS - All Nodes:
    CentOS 6 Minimal

Cluster Nodes - Software Installation:

yum groupinstall "High Availability"
yum install ricci

Cluster Management Node - Software Installation:

yum groupinstall "High Availability Management"
yum install ricci

(The "High Availability Management" group pulls in luci; ricci is installed separately.)

Copy the following initial sample cluster config file to /etc/cluster/cluster.conf on all the nodes cnode1, cnode2 and cnode3.

<?xml version="1.0"?>
<cluster config_version="1" name="cl1">
    <clusternodes>
        <clusternode name="cnode1" nodeid="1"/>
        <clusternode name="cnode2" nodeid="2"/>
        <clusternode name="cnode3" nodeid="3"/>
    </clusternodes>
</cluster>

This initial file states that the cluster name is cl1 and defines the cluster nodes.

Now some services have to be configured and started, first on the cluster nodes and then on the management node, as shown below.

Cluster Nodes:

chkconfig iptables off
chkconfig ip6tables off
chkconfig ricci on
chkconfig cman on
chkconfig rgmanager on
chkconfig modclusterd on

Create a password for the ricci service user with

passwd ricci

service iptables stop
service ip6tables stop
service ricci start
service cman start
service rgmanager start
service modclusterd start
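
Once the services are up, you can verify on any cluster node that cman has formed the cluster and that all members have joined. These are standard RHEL 6 cluster commands; the output depends on your cluster state.

```shell
# Check cluster quorum and membership (run on any cluster node)
cman_tool status      # shows cluster name, quorum and node count
cman_tool nodes       # lists cnode1/cnode2/cnode3; status 'M' means joined
clustat               # overall cluster and service status from rgmanager
```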

Cluster Management Node:

chkconfig iptables off
chkconfig ip6tables off
chkconfig luci on
chkconfig ricci on

service iptables stop
service ip6tables stop
service luci start
service ricci start

The luci service is the management service that presents the web-based cluster interface via HTTPS on port 8084. It can be accessed in any browser at
https://<cluster management node FQDN or hostname>:8084/

The ricci service is the underlying daemon that handles cluster configuration sync, file copying, service start/stop, etc. It uses TCP port 11111.

cman, rgmanager and modclusterd are the actual cluster services, which further start other services that make the clustering happen and keep it live.

Open a browser and enter the Conga node URL, which in this case is https://centos:8084/

After accepting the initial certificate warning you will be presented with the login screen. Log in with the root user and root password of that system to start the interface.

Now click 'Add cluster', add the first node cnode1 along with its ricci password, and click 'ok'. Conga will detect the other two nodes as well; enter their ricci passwords and the cluster will be added to the cluster management interface, from which it can be managed and configured. Take care: the cluster.conf file sometimes does not get synced to all cluster nodes, and nodes will then get fenced because of a configuration version mismatch. In that case, copy the cluster.conf file from cnode1 to all the other nodes. If all the nodes are in sync, the uptime is shown in the cluster nodes list.
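
When editing cluster.conf by hand, the config_version attribute must be incremented on every change, or the nodes will reject the update; a version mismatch between nodes is exactly what leads to the fencing problem described above. A minimal sketch of bumping the version, shown here against a local sample copy (on a real node the file is /etc/cluster/cluster.conf):

```shell
# Work on a local sample copy; on a node this would be /etc/cluster/cluster.conf
conf=cluster.conf
printf '<?xml version="1.0"?>\n<cluster config_version="1" name="cl1"/>\n' > "$conf"

# Read the current config_version and increment it
cur=$(sed -n 's/.*config_version="\([0-9]*\)".*/\1/p' "$conf")
new=$((cur + 1))
sed -i "s/config_version=\"$cur\"/config_version=\"$new\"/" "$conf"
echo "config_version is now $new"
```

After bumping the version, copy the file to every node (for example with scp) so that all members agree on the same version.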

Your cluster is now up, live and configured; later, other clusters can be added to this management interface for easy maintenance.

- Bellamkonda Sudhakar

Comments


By: saorabh

Here, for each of the 3 nodes we have 3 external IPs. If the 3 nodes are to be configured with one public IP, what would the configuration steps be? Can you please help me out? Thanks, saorabh

By: Anonymous

You have "yum install ricci" for both the cluster nodes and the management node.  Did you mean to do "yum install luci" for the management node?  Thanks for the article.

By: Anonymous

Sir, we have also configured a 2-node cluster and both nodes are working fine.
We did a manual switchover, shutting down one node and checking that the service switches over to the other node.

But we have one problem: whenever one node's network fails or goes down, the service does not switch over to the other node.

Please suggest how to configure network failover.

We use RHEL 6.3.
Both servers are HP ProLiant G8 servers with HP P2000 storage.
luci is configured on node1, not on a separate system or server.
Cluster server names: 1) node1 2) node2

One doubt, sir: does configuring luci require a separate system, or can I use one of the node servers?

Thanks a lot, sir, for helping.


If you're talking about the non-heartbeat network failing but not failing over, I noticed that too. I put in a metric in the quorum disk settings to ping the default gateway. If it cannot ping, then failover will occur.

By: ben

The heuristics for quorumd are helpful for this reason.  Thanks for the tip on the ping.  I have it pinging the default gateway as my heuristic.

I also noticed that when I put in a cluster IP as the first resource and make every other resource a child resource of the cluster IP, everything fails over immediately as soon as I unplug the public IP cable. If you have any resources not dependent on the cluster IP, then the ping heuristic is very handy.



By: Anonymous


If the management node fails, will the cluster continue or not?

By: Kory

If you have a two-node cluster (Red Hat 5, ricci and luci installed) and you need to bring both servers offline to move them to a different datacenter, what is the proper way to do this without failing over to node 2? Node 1 has all the resources and services, and node 2 has no resources associated with it. Would you shut down cluster services on node 2, then shut down and power off node 2; then shut down cluster services on node 1 and shut it down and power it off (I don't know how fencing would respond to this)? Then, once moved to the new location (networking, IPs, everything configured the same, no VLAN changes either, identical), bring up node 1 first, then node 2?


Current state:

[email protected]:/etc/cluster # clustat
Cluster Status for cmax-cluster @ Mon Sep 17 16:18:57 2018
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 cmax-cluster-node1                          1 Online, Local, rgmanager
 cmax-cluster-node2                          2 Online, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:LVM                    cmax-cluster-node1             started
 service:NFS                    cmax-cluster-node1             started
 service:Network                cmax-cluster-node1             started
 service:chartmax               cmax-cluster-node1             started

[email protected]:/etc/cluster #