How to build a Ceph Distributed Storage Cluster on CentOS 7

Ceph is a widely used open source storage platform that provides high performance, reliability, and scalability. The free distributed storage system offers interfaces for object-, block-, and file-level storage. Ceph is built to provide a distributed storage system without a single point of failure.

In this tutorial, I will guide you to install and build a Ceph cluster on CentOS 7. A Ceph cluster requires these Ceph components:

  • Ceph OSDs (ceph-osd) - Handle the data storage, data replication and recovery. A Ceph cluster needs at least two Ceph OSD servers; I will use three CentOS 7 OSD servers here.
  • Ceph Monitor (ceph-mon) - Monitors the cluster state, the OSD map and the CRUSH map. I will use one server.
  • Ceph Metadata Server (ceph-mds) - Needed only if you want to use Ceph as a file system.


Prerequisites

  • 6 server nodes, all with CentOS 7 installed.
  • Root privileges on all nodes.

The servers in this tutorial will use the following hostnames and IP addresses.

hostname        IP address
ceph-admin      (admin node IP)
mon1            (monitor node IP)
osd1            (OSD node 1 IP)
osd2            (OSD node 2 IP)
osd3            (OSD node 3 IP)
client          (client node IP)

All OSD nodes need two partitions, one root (/) partition and an empty partition that is used as Ceph data storage later.

Step 1 - Configure All Nodes

In this step, we will configure all 6 nodes to prepare them for the installation of the Ceph cluster. Follow and run all commands below on every node, and make sure an SSH server is installed on all nodes.

Create a Ceph User

Create a new user named 'cephuser' on all nodes.

useradd -d /home/cephuser -m cephuser
passwd cephuser

After creating the new user, we need to configure sudo for 'cephuser': the user must be able to run commands as root and obtain root privileges without a password.

Run the command below to create a sudoers file for the user and edit the /etc/sudoers file with sed.

echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
chmod 0440 /etc/sudoers.d/cephuser
sed -i "s/Defaults requiretty/#Defaults requiretty/g" /etc/sudoers
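The sed substitution can be sanity-checked on a sample line before touching /etc/sudoers. This is just an optional dry run, not part of the setup:

```shell
# Dry run of the sudoers substitution on a sample line: the pattern
# should comment out the "Defaults requiretty" directive.
echo 'Defaults requiretty' | sed 's/Defaults requiretty/#Defaults requiretty/g'
# prints: #Defaults requiretty
```

After the real edit, `sudo grep '^#Defaults requiretty' /etc/sudoers` should show the commented-out directive.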


Install and Configure NTP

Install NTP to synchronize date and time on all nodes. Run the ntpdate command to set the date and time via NTP; we will use the US pool NTP servers. Then start the NTP service and enable it to run at boot time.

yum install -y ntp ntpdate ntp-doc
ntpdate 0.us.pool.ntp.org
hwclock --systohc
systemctl enable ntpd.service
systemctl start ntpd.service


Install Open-vm-tools

If you are running all nodes inside VMware, you need to install this virtualization utility. Otherwise skip this step.

yum install -y open-vm-tools


Disable SELinux

Disable SELinux on all nodes by editing the SELinux configuration file with the sed stream editor.

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
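As with the sudoers edit, the substitution can be dry-run on a sample line first (optional):

```shell
# Dry run of the SELinux substitution on a sample line.
echo 'SELINUX=enforcing' | sed 's/SELINUX=enforcing/SELINUX=disabled/g'
# prints: SELINUX=disabled
```

Note that the config file change only takes effect after a reboot; to stop enforcement immediately you can additionally run `sudo setenforce 0`.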


Configure Hosts File

Edit the /etc/hosts file on all nodes with the vim editor and add lines with the IP addresses and hostnames of all cluster nodes.

vim /etc/hosts

Paste the configuration below, putting the IP address of each node in front of its hostname:

        ceph-admin
        mon1
        osd1
        osd2
        osd3
        client

Save the file and exit vim.

Now you can try to ping between the servers with their hostname to test the network connectivity. Example:

ping -c 5 mon1
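The per-host ping can be wrapped in a small loop so every node is checked in one pass. This is an optional sketch; the hostname list assumes the six nodes named above:

```shell
# Optional sketch: ping every cluster node once by hostname.
# The host list matches the /etc/hosts entries configured above.
for host in ceph-admin mon1 osd1 osd2 osd3 client; do
  if ping -c 1 -W 2 "$host" > /dev/null 2>&1; then
    echo "$host: reachable"
  else
    echo "$host: UNREACHABLE"
  fi
done
```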

Configure the Ceph nodes

Step 2 - Configure the SSH Server

In this step, I will configure the ceph-admin node. The admin node is used for configuring the monitor node and the osd nodes. Login to the ceph-admin node and become the 'cephuser'.

ssh root@ceph-admin
su - cephuser

The admin node is used for installing and configuring all cluster nodes, so the user on the ceph-admin node must have privileges to connect to all nodes without a password. We have to configure password-less SSH access for 'cephuser' on 'ceph-admin' node.

Generate the ssh keys for 'cephuser' with the ssh-keygen command.

ssh-keygen

When asked for a passphrase, leave it blank/empty.

Next, create the configuration file for the ssh configuration.

vim ~/.ssh/config

Paste configuration below:

Host ceph-admin
        Hostname ceph-admin
        User cephuser

Host mon1
        Hostname mon1
        User cephuser

Host osd1
        Hostname osd1
        User cephuser

Host osd2
        Hostname osd2
        User cephuser

Host osd3
        Hostname osd3
        User cephuser

Host client
        Hostname client
        User cephuser

Save the file.

Generate SSH key

Change the permission of the config file.

chmod 644 ~/.ssh/config

Now add the SSH key to all nodes with the ssh-copy-id command.

ssh-keyscan osd1 osd2 osd3 mon1 client >> ~/.ssh/known_hosts
ssh-copy-id osd1
ssh-copy-id osd2
ssh-copy-id osd3
ssh-copy-id mon1
ssh-copy-id client

Type in your 'cephuser' password when requested.

Copy SSH key to all nodes

When you are finished, try to access osd1 server from the ceph-admin node.

ssh osd1
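Instead of logging in to each node by hand, key-based access can be verified for all nodes in one loop. A sketch; BatchMode=yes makes ssh fail immediately rather than fall back to a password prompt:

```shell
# Optional sketch: verify passwordless SSH to every node.
for host in mon1 osd1 osd2 osd3 client; do
  if ssh -o BatchMode=yes "$host" hostname > /dev/null 2>&1; then
    echo "$host: key login OK"
  else
    echo "$host: key login FAILED"
  fi
done
```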


Step 3 - Configure Firewalld

We will use Firewalld to protect the system. In this step, we will enable firewalld on all nodes, then open the ports needed by ceph-admin, ceph-mon and ceph-osd.

Login to the ceph-admin node and start firewalld.

ssh root@ceph-admin
systemctl start firewalld
systemctl enable firewalld

Open ports 80, 2003 and 4505-4506, then reload the firewall.

sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
sudo firewall-cmd --zone=public --add-port=2003/tcp --permanent
sudo firewall-cmd --zone=public --add-port=4505-4506/tcp --permanent
sudo firewall-cmd --reload

From the ceph-admin node, login to the monitor node 'mon1' and start firewalld.

ssh mon1
sudo systemctl start firewalld
sudo systemctl enable firewalld

Open a new port on the Ceph monitor node and reload the firewall.

sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
sudo firewall-cmd --reload

Finally, open ports 6800-7300 on each of the OSD nodes - osd1, osd2 and osd3.

Login to each osd node from the ceph-admin node.

ssh osd1
sudo systemctl start firewalld
sudo systemctl enable firewalld

Open the ports and reload the firewall.

sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
sudo firewall-cmd --reload
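Since the same commands have to run on osd1, osd2 and osd3, they can be issued from the ceph-admin node in one loop. An optional sketch that assumes the passwordless SSH access configured in Step 2:

```shell
# Optional sketch: enable firewalld and open the OSD port range
# on all three OSD nodes at once.
for host in osd1 osd2 osd3; do
  ssh "$host" "sudo systemctl start firewalld \
    && sudo systemctl enable firewalld \
    && sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent \
    && sudo firewall-cmd --reload"
done
```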

Firewalld configuration is done.

Step 4 - Configure the Ceph OSD Nodes

In this tutorial, we have 3 OSD nodes and each node has two partitions.

  1. /dev/sda for the root partition.
  2. /dev/sdb is an empty partition - 30GB in my case.

We will use /dev/sdb for the Ceph disk. From the ceph-admin node, login to all OSD nodes and format the /dev/sdb partition with XFS.

ssh osd1
ssh osd2
ssh osd3

Check the partition with the fdisk command.

sudo fdisk -l /dev/sdb

Create a GPT partition table on /dev/sdb with the parted command, then format the disk with the XFS filesystem.

sudo parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%
sudo mkfs.xfs /dev/sdb -f

Now check the filesystem type; blkid should report xfs for /dev/sdb.

sudo blkid -o value -s TYPE /dev/sdb

Format /dev/sdb on all nodes with XFS
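The same partitioning and formatting sequence has to run on osd1, osd2 and osd3; from the ceph-admin node it can be scripted as below. A sketch assuming passwordless SSH; each node should report xfs at the end:

```shell
# Optional sketch: label, format and check /dev/sdb on every OSD node.
# WARNING: this destroys all data on /dev/sdb.
for host in osd1 osd2 osd3; do
  echo "--- $host ---"
  ssh "$host" "sudo parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100% \
    && sudo mkfs.xfs -f /dev/sdb \
    && sudo blkid -o value -s TYPE /dev/sdb"
done
```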

Step 5 - Build the Ceph Cluster

In this step, we will install Ceph on all nodes from the ceph-admin node.

Login to the ceph-admin node.

ssh root@ceph-admin
su - cephuser


Install ceph-deploy on the ceph-admin node

Add the Ceph repository and install the Ceph deployment tool 'ceph-deploy' with the yum command.

sudo rpm -Uhv
sudo yum update -y && sudo yum install ceph-deploy -y

Make sure all nodes are updated.

After the ceph-deploy tool has been installed, create a new directory for the ceph cluster configuration.


Create New Cluster Config

Create the new cluster directory.

mkdir cluster
cd cluster/

Next, create a new cluster configuration with the 'ceph-deploy' command and define the monitor node to be 'mon1'.

ceph-deploy new mon1

The command will generate the Ceph cluster configuration file 'ceph.conf' in the cluster directory.

Initial Ceph Cluster Configuration

Edit the ceph.conf file with vim.

vim ceph.conf

Under the [global] block, paste the configuration below.

# Your network address
public network =
osd pool default size = 2

Save the file and exit vim.
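For reference, a filled-in [global] block might look like the fragment below. The fsid and monitor entries are generated by ceph-deploy; the subnet shown is purely illustrative, so substitute the network your nodes actually share. 'osd pool default size = 2' stores two replicas of every object, which fits this small three-OSD test cluster:

```
[global]
fsid = <generated by ceph-deploy>
mon initial members = mon1
mon host = <mon1 IP address>
# Your network address (illustrative subnet - replace with your own)
public network = 10.0.15.0/24
osd pool default size = 2
```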


Install Ceph on All Nodes

Now install Ceph on all other nodes from the ceph-admin node. This can be done with a single command.

ceph-deploy install ceph-admin mon1 osd1 osd2 osd3

The command will automatically install Ceph on all nodes: mon1, osd1-3 and ceph-admin. The installation will take some time.

Now deploy the ceph-mon on mon1 node.

ceph-deploy mon create-initial

The command will create the monitor keys; check and fetch the keys with the 'ceph-deploy gatherkeys' command.

ceph-deploy gatherkeys mon1

Gathering keys on monitor node

Adding OSDS to the Cluster

When Ceph has been installed on all nodes, we can add the OSD daemons to the cluster. The OSD daemons will create their data and journal partitions on the disk /dev/sdb.

Check that the /dev/sdb partition is available on all OSD nodes.

ceph-deploy disk list osd1 osd2 osd3

Deploy Disks

You will see the /dev/sdb disk with XFS format.

Next, delete the /dev/sdb partition tables on all nodes with the zap option.

ceph-deploy disk zap osd1:/dev/sdb osd2:/dev/sdb osd3:/dev/sdb

The command will delete all data on /dev/sdb on the Ceph OSD nodes.

Now prepare all OSD nodes. Make sure there are no errors in the results.

ceph-deploy osd prepare osd1:/dev/sdb osd2:/dev/sdb osd3:/dev/sdb

If you see an 'osd1-3 is ready for OSD use' message in the output, then the deployment was successful.

Prepare all OSD nodes

Activate the OSDs with the command below:

ceph-deploy osd activate osd1:/dev/sdb1 osd2:/dev/sdb1 osd3:/dev/sdb1

Check the output for errors before you proceed. Now you can check the sdb disk on OSD nodes with the list command.

ceph-deploy disk list osd1 osd2 osd3

OSD nodes successfully added to the cluster

The result is that /dev/sdb now has two partitions:

  1. /dev/sdb1 - Ceph Data
  2. /dev/sdb2 - Ceph Journal

Or you can check that directly on the OSD node with fdisk.

ssh osd1
sudo fdisk -l /dev/sdb

The /dev/sdb partition layout

Next, deploy the management-key to all associated nodes.

ceph-deploy admin ceph-admin mon1 osd1 osd2 osd3

Change the permission of the key file by running the command below on all nodes.

sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

The Ceph Cluster on CentOS 7 has been created.

Step 6 - Testing the Ceph setup

In step 5, we installed and created our new Ceph cluster and added the OSD nodes to it. Now we can test the cluster and make sure there are no errors in the setup.

From the ceph-admin node, log in to the ceph monitor server 'mon1'.

ssh mon1

Run the command below to check the cluster health.

sudo ceph health

Now check the cluster status.

sudo ceph -s

And you should see the results below:

Ceph Cluster Health

Make sure the Ceph health is OK and that there is a monitor node 'mon1'. There should be 3 OSD servers, all of them up and running, and the available disk size should be about 75GB - 3x 25GB Ceph data partitions.
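For use in scripts, the health string can also be checked non-interactively. A minimal sketch, assuming 'ceph health' prints a status string starting with HEALTH_OK, HEALTH_WARN or HEALTH_ERR:

```shell
# Optional sketch: act on the cluster health string in a script.
status=$(sudo ceph health)
case "$status" in
  HEALTH_OK*) echo "cluster healthy" ;;
  *)          echo "cluster problem: $status" >&2 ;;
esac
```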

Congratulations, you've built a new Ceph cluster successfully.

In the next part of the Ceph tutorial, I will show you how to use Ceph as a Block Device or mount it as a FileSystem.



5 Comment(s)



From: helloworld at: 2016-12-14 18:27:14

Very Thanks

From: Torben at: 2016-12-16 18:49:06

Great article. Can't wait to read the next part :)

From: till at: 2016-12-20 15:11:27

The next part has just been published. You can find it here:

From: Thomas Kwan at: 2016-12-17 02:56:16


How to replace VM by KVM?

From: javat at: 2017-02-25 12:53:52

I just tried following your instructions and it works perfect! :) .. Thanks a lot :)  Just to say I am using iptables instead of firewalld and I was getting this error:

    health HEALTH_ERR
        64 pgs are stuck inactive for more than 300 seconds
        64 pgs peering
        64 pgs stuck inactive


Because I had configured the following rule incorrectly at the OSDs:

-A INPUT -p tcp -m multiport --dports 6800,7300 -j ACCEPT

The right one is this one:

-A INPUT -p tcp -m multiport --dports 6800:7300 -j ACCEPT

I know it is a stupid mistake on my side :( . The reason is that by default, Ceph OSDs bind to the first available ports on a Ceph node beginning at port 6800, and it is necessary to open at least three ports beginning at port 6800 for each OSD. So with my first rule, I was opening only 2 ports.

At the beginning, I thought it was a mistake in the ceph configuration, but after having a look at the ceph logs on the OSDs and seeing network-type errors, I realised it was a network or firewall issue, and indeed it was a stupid firewall mistake on my side.

In any case, I like to think that one must learn from his/her errors, so I share it in case someone else has the same issue :)

For the rest, I followed the tutorial step by step and it works perfectly with CentOS 7 + Ceph Jewel. I did not find any mistake on the tutorial. Ah, and I did it using virtual box too.