How to Configure a Proxmox VE 4 Multiple Node Cluster

Proxmox VE 4 supports the installation of clusters and the central management of multiple Proxmox servers. You can manage multiple Proxmox servers from one web management console. This feature is really handy when you have a larger server farm.

Proxmox Cluster features:

  • Centralized web management.
  • Support for multiple authentication methods.
  • Easy migration of virtual machines and containers within the cluster.

For more details, please check the Proxmox website.

In this tutorial, we will build a Proxmox 4 cluster with 3 Proxmox servers and 1 NFS storage server. The Proxmox servers run Debian, the NFS server runs CentOS 7. The NFS storage is used to store ISO files, templates, and virtual machines.

Prerequisites

  • 3 Proxmox servers:

    pve1
        IP       : 192.168.1.114
        FQDN     : pve1.myproxmox.co
        SSH port : 22

    pve2
        IP       : 192.168.1.115
        FQDN     : pve2.myproxmox.co
        SSH port : 22

    pve3
        IP       : 192.168.1.116
        FQDN     : pve3.myproxmox.co
        SSH port : 22

  • 1 CentOS 7 server as NFS storage with IP 192.168.1.101
  • Date and time must be synchronized on each Proxmox server.
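On Debian-based Proxmox nodes, one simple way to keep the clocks in sync is the ntp package (a sketch; systemd-timesyncd or another NTP daemon works just as well):

```shell
# Install and enable the NTP daemon on each Proxmox node (Debian)
apt-get update
apt-get -y install ntp
systemctl enable ntp
systemctl start ntp
```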

Step 1 - Configure NFS Storage

In this step, we will configure the NFS storage node for Proxmox and allow multiple Proxmox nodes to read from and write to the shared storage.

Log in to the NFS server with ssh:

ssh [email protected]

Create a new directory that we will share with NFS:

mkdir -p /var/nfsproxmox
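If the NFS server components are not yet installed and running on the CentOS 7 machine, a minimal sketch (package and service names as on a stock CentOS 7 install):

```shell
# Install the NFS server utilities and start the services (CentOS 7)
yum -y install nfs-utils
systemctl enable rpcbind nfs-server
systemctl start rpcbind nfs-server
# If firewalld is active, open the NFS-related services
firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --permanent --add-service=mountd
firewall-cmd --reload
```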

Now add all Proxmox IP addresses to the NFS configuration file. I'll edit the "exports" file with vim:

vim /etc/exports

Paste the configuration below:

/var/nfsproxmox 192.168.1.114(rw,sync,no_root_squash)
/var/nfsproxmox 192.168.1.115(rw,sync,no_root_squash)
/var/nfsproxmox 192.168.1.116(rw,sync,no_root_squash)

Save the file and exit the editor.
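If you prefer to script those entries, the same three lines can be generated with a small loop (a sketch, assuming the node IPs and export options used above); append its output to /etc/exports:

```shell
# Print one NFS export line per Proxmox node with the rw,sync,no_root_squash options
for ip in 192.168.1.114 192.168.1.115 192.168.1.116; do
  printf '/var/nfsproxmox %s(rw,sync,no_root_squash)\n' "$ip"
done
```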

To activate the new configuration, re-export the NFS directory and make sure the shared directory is active:

exportfs -r
exportfs -v

Reload NFS exports.

Step 2 - Configure Host

The next step is to configure the hosts file on each Proxmox node.

Log into the pve1 server with ssh:

ssh [email protected]

Now edit the hosts file with vim:

vim /etc/hosts

Make sure the entry for pve1 itself is present in the file, then add pve2 and pve3. Note that the pvelocalhost alias must appear only on the entry of the local node, so do not copy it to the pve2 and pve3 lines:

192.168.1.115 pve2.myproxmox.co pve2
192.168.1.116 pve3.myproxmox.co pve3

Save the file and reboot pve1:

reboot

Next is pve2 - log in to the server with ssh:

ssh [email protected]

Edit the hosts file:

vim /etc/hosts

Add the configuration below (without the pvelocalhost alias, which belongs only on the node's own entry):

192.168.1.114 pve1.myproxmox.co pve1
192.168.1.116 pve3.myproxmox.co pve3

Save the file and reboot:

reboot

Next is pve3 - log in to the server with ssh:

ssh [email protected]

Edit the hosts file:

vim /etc/hosts

Now add the configuration below (again without the pvelocalhost alias):

192.168.1.114 pve1.myproxmox.co pve1
192.168.1.115 pve2.myproxmox.co pve2

Save the file and reboot pve3:

reboot
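The host entries for the other nodes are the same on every machine except for the node's own name, so they can be sketched as a small loop (SELF is the name of the node you run it on; IPs and naming as above):

```shell
# Print /etc/hosts entries for every cluster node except the local one.
# Set SELF to pve1, pve2 or pve3 depending on the node this runs on.
SELF=pve1
for entry in "192.168.1.114 pve1" "192.168.1.115 pve2" "192.168.1.116 pve3"; do
  name=${entry##* }                     # short hostname, e.g. pve2
  [ "$name" = "$SELF" ] && continue     # skip the local node's own entry
  printf '%s %s.myproxmox.co %s\n' "${entry%% *}" "$name" "$name"
done
```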

Step 3 - Create the cluster on Proxmox server pve1

Before creating the cluster, make sure the date and time are synchronized on all nodes and that the ssh daemon is running on port 22.

Log in to the pve1 server and create the new cluster:

ssh [email protected]
pvecm create mynode

Result:

Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.

The command explained:

pvecm: Proxmox VE cluster manager toolkit
create: Generate new cluster configuration
mynode: cluster name

Now check the cluster with the command below:

pvecm status

Proxmox status on node 1.

Step 4 - Add pve2 and pve3 to the cluster

In this step, we will add the Proxmox nodes pve2 and pve3 to the cluster. Log in to the pve2 server and join it to the "mynode" cluster that we created on pve1:

ssh [email protected]
pvecm add 192.168.1.114

add: joins this node (pve2) to the cluster by contacting the existing member pve1 at IP 192.168.1.114.

Add node 2.

Then add pve3 to the cluster.

ssh [email protected]
pvecm add 192.168.1.114

Add node 3.
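Since both joins are identical apart from the node address, the two commands can be sketched as a loop run from a workstation (a hypothetical dry run: remove the echo to actually execute the joins; root SSH access to both nodes is assumed):

```shell
# Print (dry run) the pvecm join command for each remaining node.
for node in 192.168.1.115 192.168.1.116; do
  echo ssh "root@${node}" pvecm add 192.168.1.114
done
```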

Step 5 - Check the Proxmox cluster

If the steps above have been executed without an error, check the cluster configuration with:

pvecm status

Check Proxmox cluster status.

If you want to see the nodes, use the command below:

pvecm nodes

Show list of Proxmox nodes.

Step 6 - Add the NFS share to the Proxmox Cluster

Open Proxmox server pve1 with your browser: https://192.168.1.114:8006/ and log in with your password.

Open Proxmox UI

You can see the pve1, pve2 and pve3 servers on the left side.

Now go to the "Storage" tab and click "Add". Choose the storage type; we use the NFS share on the CentOS server.

Add NFS storage in Proxmox.

Fill in the details of the NFS server:

NFS server details.

ID: name of the storage
Server: IP address of the storage server
Export: the shared directory (detected automatically)
Content: the content types to store on this storage
Nodes: available on nodes 1, 2 and 3
Max Backups: maximum number of backups

Click "Add".

And now you can see the NFS storage is available on all Proxmox nodes.
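Alternatively, the same NFS storage can be added from the shell of any cluster node with the pvesm storage manager (a sketch; the storage ID "nfs-storage" is an assumption, pick any name you like):

```shell
# Add the NFS share as storage for the whole cluster (run on any node).
# "nfs-storage" is a hypothetical ID; adjust the content types to your needs.
pvesm add nfs nfs-storage \
    --server 192.168.1.101 \
    --export /var/nfsproxmox \
    --content images,iso,vztmpl
pvesm status
```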

Proxmox node setup

Conclusion

Proxmox VE 4 supports clusters of up to 32 physical nodes. The centralized Proxmox management makes it easy to configure all available nodes from one place. There are many advantages to using a Proxmox cluster, e.g. it is easy to migrate a VM from one node to another. Two Proxmox servers are enough for a multi-node setup, but if you want to set up Proxmox for high availability, you need 3 or more Proxmox nodes.


Comments

From: Diego

What is the best option for the nfs share /var/nfsproxmox? A raid, an lvm?

From: RonaldDJ

An NFS share from a remote SAN or NAS

From: Patrick

Your /etc/hosts looks strange for me. Why two occurences of pvelocalhost with two different IP ?

192.168.1.114 pve1.myproxmox.co pve1 pvelocalhost
192.168.1.115 pve2.myproxmox.co pve2 pvelocalhost

From: San

Thanks for explaining the tough topic of cluster, this is Really impressive.