How to install a Ceph Storage Cluster on Ubuntu 16.04

Ceph is an open source storage platform that provides high performance, reliability, and scalability. It is a free, distributed storage system that offers object-, block-, and file-level storage interfaces and can operate without a single point of failure.

In this tutorial, I will guide you through installing and building a Ceph cluster on Ubuntu 16.04 servers. A Ceph cluster consists of these components:

  • Ceph OSDs (ceph-osd) - Handle data storage, data replication, and recovery. A Ceph cluster needs at least two Ceph OSD servers; we will use three Ubuntu 16.04 servers in this setup.
  • Ceph Monitor (ceph-mon) - Monitors the cluster state and maintains the OSD map and CRUSH map. We will use one server here.
  • Ceph Metadata Server (ceph-mds) - Needed only if you want to use Ceph as a file system.

Prerequisites

  • 6 server nodes with Ubuntu 16.04 server installed
  • Root privileges on all nodes

I will use the following hostname / IP setup:

hostname       IP address

ceph-admin     10.0.15.10
mon1           10.0.15.11
ceph-osd1      10.0.15.21
ceph-osd2      10.0.15.22
ceph-osd3      10.0.15.23
ceph-client    10.0.15.15

Step 1 - Configure All Nodes

In this step, we will configure all 6 nodes to prepare them for the installation of the Ceph cluster software. Follow and run the commands below on all nodes, and make sure an SSH server is installed on all of them.

Create the Ceph User

Create a new user named 'cephuser' on all nodes.

useradd -m -s /bin/bash cephuser
passwd cephuser

After creating the new user, we need to configure passwordless sudo for 'cephuser', so that 'cephuser' can run commands with sudo privileges without having to enter a password first.

Run the commands below to achieve that.

echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
chmod 0440 /etc/sudoers.d/cephuser
sed -i 's/Defaults requiretty/#Defaults requiretty/g' /etc/sudoers
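
You can quickly verify the passwordless sudo setup by switching to the new user and running a command through sudo (a quick optional check):

su - cephuser
sudo whoami

The second command should print 'root' without asking for a password.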

Install and Configure NTP

Install NTP to synchronize date and time on all nodes. Run the ntpdate command to set the date and time once via NTP; we will use the US pool NTP servers. Then start the NTP daemon and enable it to run at boot time.

sudo apt-get install -y ntp ntpdate ntp-doc
sudo ntpdate 0.us.pool.ntp.org
sudo hwclock --systohc
sudo systemctl enable ntp
sudo systemctl start ntp
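
To verify that time synchronization is working, you can query the NTP peers (optional check):

ntpq -p

The output lists the pool servers the daemon is polling, together with their offset and jitter values.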

Install Open-vm-tools

If you are running all nodes inside VMware, you need to install this virtualization utility.

sudo apt-get install -y open-vm-tools

Install Python and parted

In this tutorial, we need the python packages for building the Ceph cluster and parted for partitioning the disks later on. Install python, python-pip and parted.

sudo apt-get install -y python python-pip parted

Configure the Hosts File

Edit the hosts file on all nodes with vim editor.

vim /etc/hosts

Paste the configuration below:

10.0.15.10        ceph-admin
10.0.15.11        mon1
10.0.15.21        ceph-osd1
10.0.15.22        ceph-osd2
10.0.15.23        ceph-osd3
10.0.15.15        ceph-client

Save the hosts file and exit the vim editor.

Now you can try to ping between the server hostnames to test the network connectivity.

ping -c 5 mon1
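
If you like, you can test connectivity to all hosts at once with a small shell loop (an optional convenience that assumes the /etc/hosts entries above are in place):

for host in ceph-admin mon1 ceph-osd1 ceph-osd2 ceph-osd3 ceph-client; do ping -c 1 "$host"; done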


Step 2 - Configure the SSH Server

In this step, we will configure the ceph-admin node. The admin node is used for configuring the monitor node and the osd nodes. Login to the ceph-admin node and switch to the 'cephuser' user.

ssh root@ceph-admin
su - cephuser

The admin node is used for installing and configuring all cluster nodes, so the user on the ceph-admin node must be able to connect to all nodes without a password. We need to configure password-less SSH access for 'cephuser' on the 'ceph-admin' node.

Generate the ssh keys for 'cephuser'.

ssh-keygen

Leave the passphrase blank/empty.

Next, create a configuration file for the ssh config.

vim ~/.ssh/config

Paste the configuration below:

Host ceph-admin
        Hostname ceph-admin
        User cephuser

Host mon1
        Hostname mon1
        User cephuser

Host ceph-osd1
        Hostname ceph-osd1
        User cephuser

Host ceph-osd2
        Hostname ceph-osd2
        User cephuser

Host ceph-osd3
        Hostname ceph-osd3
        User cephuser

Host ceph-client
        Hostname ceph-client
        User cephuser

Save the file and exit vim.


Change the permission of the config file to 644.

chmod 644 ~/.ssh/config

Now add the key to all nodes with the ssh-copy-id command.

ssh-keyscan ceph-osd1 ceph-osd2 ceph-osd3 ceph-client mon1 >> ~/.ssh/known_hosts
ssh-copy-id ceph-osd1
ssh-copy-id ceph-osd2
ssh-copy-id ceph-osd3
ssh-copy-id mon1

Type in your cephuser password when requested.


Now try to access the ceph-osd1 server from the ceph-admin node to test if the password-less login works.

ssh ceph-osd1
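
Back on the ceph-admin node (type exit to close the test session), you can also verify all nodes in one go with a short loop that prints the remote hostname over SSH; this optional check relies on the ~/.ssh/config entries created above:

for host in mon1 ceph-osd1 ceph-osd2 ceph-osd3; do ssh "$host" hostname; done

Each node should answer with its hostname without asking for a password.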


Step 3 - Configure the Ubuntu Firewall

For security reasons, we need to turn on the firewall on the servers. We will use UFW (Uncomplicated Firewall), the default Ubuntu firewall, to protect the system. In this step, we will enable ufw on all nodes, then open the ports needed by ceph-admin, ceph-mon and ceph-osd.

Login to the ceph-admin node and install the ufw packages.

ssh root@ceph-admin
sudo apt-get install -y ufw

Open ports 22, 80, 2003 and 4505-4506, then enable the firewall.

sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 2003/tcp
sudo ufw allow 4505:4506/tcp

Enable ufw so that the firewall starts at boot time.

sudo ufw enable
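
To confirm the rules, you can list the active firewall configuration (optional check):

sudo ufw status numbered

The output should show the firewall as active with the allowed ports 22, 80, 2003 and 4505-4506.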


From the ceph-admin node, login to the monitor node 'mon1' and install ufw.

ssh mon1
sudo apt-get install -y ufw

Open the ports for the ceph monitor node and start ufw.

sudo ufw allow 22/tcp
sudo ufw allow 6789/tcp
sudo ufw enable

Finally, open the OSD ports (6800-7300) on each osd node: ceph-osd1, ceph-osd2 and ceph-osd3.

Login to each of the ceph-osd nodes from the ceph-admin, and install ufw.

ssh ceph-osd1
sudo apt-get install -y ufw

Open the ports on the osd nodes and enable the firewall.

sudo ufw allow 22/tcp
sudo ufw allow 6800:7300/tcp
sudo ufw enable

The ufw firewall configuration is finished.

Step 4 - Configure the Ceph OSD Nodes

In this tutorial, we have 3 OSD nodes, and each of these nodes has two disks:

  1. /dev/sda for the root partition
  2. /dev/sdb - an empty 20GB disk

We will use /dev/sdb for the Ceph disk. From the ceph-admin node, login to each OSD node and format /dev/sdb with the XFS file system.

ssh ceph-osd1
ssh ceph-osd2
ssh ceph-osd3

Check the partition scheme with the fdisk command.

sudo fdisk -l /dev/sdb

Create a GPT partition table on /dev/sdb and a single partition spanning the whole disk with the parted command.

sudo parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%

Next, format the partition in XFS format with the mkfs command.

sudo mkfs.xfs -f /dev/sdb

Now check the disk, and you will see that /dev/sdb carries an XFS filesystem.

sudo fdisk -s /dev/sdb
sudo blkid -o value -s TYPE /dev/sdb
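
As an alternative to logging in to each OSD node separately, the same partitioning and formatting can be done from the ceph-admin node with a small loop; this is just a convenience sketch that assumes the passwordless SSH setup from step 2 and that /dev/sdb is the spare disk on every node:

for host in ceph-osd1 ceph-osd2 ceph-osd3; do
  ssh "$host" "sudo parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%"
  ssh "$host" "sudo mkfs.xfs -f /dev/sdb"
  ssh "$host" "sudo blkid -o value -s TYPE /dev/sdb"
done

Each blkid call should print 'xfs' for the corresponding node.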


Step 5 - Build the Ceph Cluster

In this step, we will install Ceph on all nodes from the ceph-admin node. To get started, login to the ceph-admin node.

ssh root@ceph-admin
su - cephuser

Install ceph-deploy on ceph-admin node

In the first step, we already installed python and python-pip on the system. Now we need to install the Ceph deployment tool 'ceph-deploy' from the pypi python repository.

Install ceph-deploy on the ceph-admin node with the pip command.

sudo pip install ceph-deploy

Note: Make sure all nodes are updated.

After the ceph-deploy tool has been installed, create a new directory for the Ceph cluster configuration.

Create a new Cluster

Create a new cluster directory.

mkdir cluster
cd cluster/

Next, create a new cluster with the 'ceph-deploy' command by defining the monitor node 'mon1'.

ceph-deploy new mon1

The command will generate the Ceph cluster configuration file 'ceph.conf' in the cluster directory.


Edit the ceph.conf file with vim.

vim ceph.conf

Under the [global] block, paste the configuration below.

# Your network address
public network = 10.0.15.0/24
osd pool default size = 2
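
For reference, after this change the complete ceph.conf will look roughly like the sketch below; the fsid is generated by ceph-deploy and will differ in your setup, and the exact set of generated lines can vary slightly between ceph-deploy versions:

[global]
fsid = <your-generated-cluster-id>
mon_initial_members = mon1
mon_host = 10.0.15.11
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

# Your network address
public network = 10.0.15.0/24
osd pool default size = 2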

Save the file and exit the editor.

Install Ceph on All Nodes

Now install Ceph on all nodes from the ceph-admin node with a single command.

ceph-deploy install ceph-admin ceph-osd1 ceph-osd2 ceph-osd3 mon1

The command will automatically install Ceph on all nodes: mon1, ceph-osd1-3 and ceph-admin. The installation will take some time.
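
Note: by default, ceph-deploy installs whatever Ceph release it considers current. If you want to pin a specific release instead, you can pass it explicitly with the --release option; the release name below (jewel, current around the time of Ubuntu 16.04) is only an example:

ceph-deploy install --release jewel ceph-admin ceph-osd1 ceph-osd2 ceph-osd3 mon1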

Now deploy the monitor node on the mon1 node.

ceph-deploy mon create-initial

The command will create the monitor keys; check and gather the keys with the ceph-deploy command below.

ceph-deploy gatherkeys mon1
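
After gathering the keys, the cluster directory on the ceph-admin node typically contains the configuration together with several keyring files; you can list them to confirm (exact file names may differ slightly between Ceph releases):

ls -l ~/cluster

Expect to see ceph.conf plus keyrings such as ceph.client.admin.keyring, ceph.bootstrap-osd.keyring, ceph.bootstrap-mds.keyring and ceph.mon.keyring.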


Adding OSDs to the Cluster

After Ceph has been installed on all nodes, we can add the OSD daemons to the cluster. The OSD daemons will create the data and journal partitions on the disk /dev/sdb.

Check the available disk /dev/sdb on all osd nodes.

ceph-deploy disk list ceph-osd1 ceph-osd2 ceph-osd3


You will see /dev/sdb with the XFS format that we created before.

Next, delete the partition tables on all nodes with the zap option.

ceph-deploy disk zap ceph-osd1:/dev/sdb ceph-osd2:/dev/sdb ceph-osd3:/dev/sdb

The command will delete all data on /dev/sdb on the Ceph OSD nodes.

Now prepare all OSD nodes and ensure that there are no errors in the results.

ceph-deploy osd prepare ceph-osd1:/dev/sdb ceph-osd2:/dev/sdb ceph-osd3:/dev/sdb

If you see that ceph-osd1-3 are ready for OSD use in the output, then the command was successful.


Activate the OSDs with the command below:

ceph-deploy osd activate ceph-osd1:/dev/sdb ceph-osd2:/dev/sdb ceph-osd3:/dev/sdb

Now you can check the sdb disk on the OSD nodes again.

ceph-deploy disk list ceph-osd1 ceph-osd2 ceph-osd3


The result is that /dev/sdb has two partitions now:

  1. /dev/sdb1 - Ceph Data
  2. /dev/sdb2 - Ceph Journal

Or you can check it directly on the OSD node.

ssh ceph-osd1
sudo fdisk -l /dev/sdb


Next, deploy the management key to all associated nodes.

ceph-deploy admin ceph-admin mon1 ceph-osd1 ceph-osd2 ceph-osd3

Change the permission of the key file by running the command below on all nodes.

sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
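
With the admin keyring deployed and readable, you can already run a quick sanity check from the ceph-admin node as 'cephuser' before moving on (optional):

sudo ceph -s

The output should show the monitor mon1 and the three OSDs.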

The Ceph Cluster on Ubuntu 16.04 has been created.

Step 6 - Testing Ceph

In the previous steps, we installed and created the new Ceph cluster and added the OSD nodes to it. Now we should test the cluster to make sure that it works as intended.

From the ceph-admin node, log in to the Ceph monitor server 'mon1'.

ssh mon1

Run the command below to check the cluster health.

sudo ceph health

Now check the cluster status.

sudo ceph -s


Make sure the Ceph health is OK and that there is a monitor node 'mon1' with IP address '10.0.15.11'. There should be 3 OSD servers, all up and running, and around 45GB of available disk space - the 3 x 15GB Ceph data OSD partitions.
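
If you want to inspect the cluster a bit further, a few additional read-only commands are useful on the monitor node (optional checks):

sudo ceph osd tree
sudo ceph df
sudo ceph mon stat

'ceph osd tree' shows the CRUSH hierarchy with the three OSDs up and in, 'ceph df' shows raw and available capacity, and 'ceph mon stat' confirms the monitor status.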

We have successfully built a new Ceph cluster on Ubuntu 16.04.



Comments

By: tuwxyz

Have you tried Ceph integration/implementation in Proxmox? Check it out.

By: JeffW

Great article.  A few things might help someone who is running into any issues with this:

1. Make sure your permissions in /etc/ceph and /home/"cephuser" are set correctly.  sudo chown cephuser:cephuser * in the directories worked well for me.

2. Your cluster will not work if ceph.conf does not have all monitor nodes listed.  If only the initial monitor is set up and it goes down, the other nodes will be offline as well.  You may have to manually enter this information into the ceph.conf file located in /etc/ceph and /home/"cephuser"/cluster.

By: JW

Why did you prepare the disks with an xfs filesystem in Step 4 only to zap them in Step 5? I'm confused about that...

By: dheeraj

If activate fails, then try the following:

ceph-deploy osd activate ceph-osd1:/dev/sdb1:/dev/sdb2 ceph-osd2:/dev/sdb1:/dev/sdb2 ceph-osd3:/dev/sdb1:/dev/sdb2

By: admintome

dheeraj that helped me tons!  Thanks for that fix.

By: Anish

Great article and came in handy to deploy a basic Ceph Cluster.

Would love to see articles on two more topics: 1) cephfs setup, 2) ceph objec store

By: Runom

you saved my day ! thanks lot !

By: Akkina

Activate is failing with the below error, any one faces the same problem?

[ceph-osd1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdb1

By: eloopz

I have a question, I would like to use ceph as a storage solution (cluster) and then use docker for applications; when I search on this, I get a ceph solution as a docker instance. Shouldn't it be the other way around; have multiple (debian) servers (hardware) use ceph for storage and then run docker on these ... can someone shine a light ?

By: ibr

Hi,

is this method valid to install Ceph on Amazon Ubuntu EC2 instances? also should they be 6 instances?

another question, is Ceph scalable ? meaning if after several months i wanted to add another server, is that applicable at Ceph?

Thanks

By: Andrzej

 Hello

I have problem with deploy a ceph. When I run a command ' ceph-deploy disk list ceph-osd1 ceph-osd2 ceph-osd3' procedure will finish with error below:

'cephuser@ceph-admin:~/cluster$ ceph-deploy disk list ceph-osd1 ceph-osd2 ceph-osd3
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.0): /usr/local/bin/ceph-deploy disk list ceph-osd1 ceph-osd2 ceph-osd3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f9ce5d99b48>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['ceph-osd1', 'ceph-osd2', 'ceph-osd3']
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f9ce61f97d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph-osd1][DEBUG ] connection detected need for sudo
[ceph-osd1][DEBUG ] connected to host: ceph-osd1
[ceph-osd1][DEBUG ] detect platform information from remote host
[ceph-osd1][DEBUG ] detect machine type
[ceph-osd1][DEBUG ] find the location of an executable
[ceph-osd1][INFO  ] Running command: sudo /sbin/initctl version
[ceph-osd1][DEBUG ] find the location of an executable
[ceph-osd1][INFO  ] Running command: sudo fdisk -l
[ceph_deploy][ERROR ] Traceback (most recent call last):
[ceph_deploy][ERROR ]   File "/usr/local/lib/python2.7/dist-packages/ceph_deploy/util/decorators.py", line 69, in newfunc
[ceph_deploy][ERROR ]     return f(*a, **kw)
[ceph_deploy][ERROR ]   File "/usr/local/lib/python2.7/dist-packages/ceph_deploy/cli.py", line 164, in _main
[ceph_deploy][ERROR ]     return args.func(args)
[ceph_deploy][ERROR ]   File "/usr/local/lib/python2.7/dist-packages/ceph_deploy/osd.py", line 434, in disk
[ceph_deploy][ERROR ]     disk_list(args, cfg)
[ceph_deploy][ERROR ]   File "/usr/local/lib/python2.7/dist-packages/ceph_deploy/osd.py", line 376, in disk_list
[ceph_deploy][ERROR ]     distro.conn.logger(line)
[ceph_deploy][ERROR ] TypeError: 'Logger' object is not callable
[ceph_deploy][ERROR ]
cephuser@ceph-admin:~/cluster$'

Do you have any idea how to fix this problem?. I tried to build a Ceph on Ubuntu 14 and 16 as well and got the same error in each try.

 

Thanks for answer

 

By: Michael

I'm having the same issue as above:

$ ceph-deploy -v disk list lvlsdfsp02
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.0): /bin/ceph-deploy -v disk list lvlsdfsp02
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : True
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x182a248>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['lvlsdfsp02']
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x18171b8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[lvlsdfsp02][DEBUG ] connection detected need for sudo
[lvlsdfsp02][DEBUG ] connected to host: lvlsdfsp02
[lvlsdfsp02][DEBUG ] detect platform information from remote host
[lvlsdfsp02][DEBUG ] detect machine type
[lvlsdfsp02][DEBUG ] find the location of an executable
[lvlsdfsp02][INFO  ] Running command: sudo fdisk -l
[ceph_deploy][ERROR ] Traceback (most recent call last):
[ceph_deploy][ERROR ]   File "/usr/lib/python2.7/site-packages/ceph_deploy/util/decorators.py", line 69, in newfunc
[ceph_deploy][ERROR ]     return f(*a, **kw)
[ceph_deploy][ERROR ]   File "/usr/lib/python2.7/site-packages/ceph_deploy/cli.py", line 164, in _main
[ceph_deploy][ERROR ]     return args.func(args)
[ceph_deploy][ERROR ]   File "/usr/lib/python2.7/site-packages/ceph_deploy/osd.py", line 434, in disk
[ceph_deploy][ERROR ]     disk_list(args, cfg)
[ceph_deploy][ERROR ]   File "/usr/lib/python2.7/site-packages/ceph_deploy/osd.py", line 376, in disk_list
[ceph_deploy][ERROR ]     distro.conn.logger(line)
[ceph_deploy][ERROR ] TypeError: 'Logger' object is not callable
[ceph_deploy][ERROR ]

Any thoughts on what's going wrong?  I did this same thing with "jewel" and it worked fine...seems to be broken in the latest version of "luminous" on CentOS 7.

By: Vincent

I got the same issue. Here is how I fixed it:

- Confirm your ceph version: If you followed this guide, I assume you installed with old version 10.x

- upgrade to the new version 12.x with the following command for admin, mon1, osd1, osd2, osd3

ceph-deploy install --release luminous ceph-admin mon1 ceph-osd1 ceph-osd2 ceph-osd3

 

After that, follow steps on here - http://docs.ceph.com/docs/master/start/quick-ceph-deploy/#create-a-cluster 

By: Ganesh Bhat

This is the best article I have seen. Cleanly written. All goes fine. In case you get stuck at this point and it throws an error here

ceph-deploy mon create-initial

The main reason may be a wrong ceph.conf config, basically that you are running all the nodes in a VM and the node hostnames (especially mon1) are different; you might exit with an error `xxx monitor is not yet in quorum, tries left` or `admin_socket: exception getting command descriptions: [Errno 2] No such file or directory`. Then run the following commands on the deploy server step by step (AFTER you ensure the hostnames are the same), and you should be good.

ceph-deploy uninstall `yourhostname`
ceph-deploy purgedata `yourhostname`
ceph-deploy forgetkeys
systemctl start ceph
ceph-deploy mon create-initial

By: Ganesh Bhat

For people who need the client to serve over http protocol, here is some additional help on the REST API (The library installs a Gateway daemon which embeds Civetweb, so you do not have to install a web server or configure FastCGI). If you need the REST Client set up, do the following:

# Install REST client civetweb libraries to get a client over http://client:7480
ceph-deploy install --rgw client
ceph-deploy rgw create client
# change ufw to your firewall's command
sudo ufw allow 7480/tcp
sudo ufw enable

Now access `http://clientipaddress:7480/` and you get the following response.

<?xml version="1.0" encoding="UTF-8"?> <ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"> <Owner> <ID>anonymous</ID> <DisplayName></DisplayName> </Owner> <Buckets> </Buckets> </ListAllMyBucketsResult>

See the Configuring Ceph Object Gateway guide for additional administration and API details.

Cheers and Regards, Ganesh

By: PatOcanada

Same here... I don't get the format/xfs part, maybe it was to test those disks? lol 

Except that part, an excellent basic howto about a new Ceph install, though client using/connecting is really missing at the very end. Thanks

By: Anastasios

I tried to install and this is not working. I even tried jewel version but still the same. Are packages correct?

 

[ceph-mon01][DEBUG ] dpkg: error processing package ceph-base (--configure):
[ceph-mon01][DEBUG ]  dependency problems - leaving unconfigured
[ceph-mon01][DEBUG ] dpkg: dependency problems prevent configuration of ceph-mon:
[ceph-mon01][DEBUG ]  ceph-mon depends on ceph-base (= 10.2.11-1xenial); however:
[ceph-mon01][DEBUG ]   Package ceph-base is not configured yet.
[ceph-mon01][DEBUG ]
[ceph-mon01][DEBUG ] dpkg: error processing package ceph-mon (--configure):
[ceph-mon01][DEBUG ]  dependency problems - leaving unconfigured
[ceph-mon01][DEBUG ] dpkg: dependency problems prevent configuration of ceph-osd:
[ceph-mon01][DEBUG ]  ceph-osd depends on ceph-base (= 10.2.11-1xenial); however:
[ceph-mon01][DEBUG ]   Package ceph-base is not configured yet.
[ceph-mon01][DEBUG ]
[ceph-mon01][DEBUG ] dpkg: error processing package ceph-osd (--configure):
[ceph-mon01][DEBUG ]  dependency problems - leaving unconfigured
[ceph-mon01][DEBUG ] dpkg: dependency problems prevent configuration of ceph:
[ceph-mon01][DEBUG ]  ceph depends on ceph-mon (= 10.2.11-1xenial); however:
[ceph-mon01][DEBUG ]   Package ceph-mon is not configured yet.
[ceph-mon01][DEBUG ]  ceph depends on ceph-osd (= 10.2.11-1xenial); however:
[ceph-mon01][DEBUG ]   Package ceph-osd is not configured yet.
[ceph-mon01][DEBUG ]
[ceph-mon01][DEBUG ] dpkg: error processing package ceph (--configure):
[ceph-mon01][DEBUG ]  dependency problems - leaving unconfigured
[ceph-mon01][WARNIN] No apport report written because the error message indicates it's a follow-up error from a previous failure.
[ceph-mon01][DEBUG ] dpkg: dependency problems prevent configuration of ceph-mds:
[ceph-mon01][WARNIN] No apport report written because MaxReports has already been reached
[ceph-mon01][DEBUG ]  ceph-mds depends on ceph-base (= 10.2.11-1xenial); however:
[ceph-mon01][WARNIN] No apport report written because MaxReports has already been reached
[ceph-mon01][DEBUG ]   Package ceph-base is not configured yet.
[ceph-mon01][WARNIN] No apport report written because MaxReports has already been reached
[ceph-mon01][DEBUG ]
[ceph-mon01][WARNIN] No apport report written because MaxReports has already been reached
[ceph-mon01][DEBUG ] dpkg: error processing package ceph-mds (--configure):
[ceph-mon01][DEBUG ]  dependency problems - leaving unconfigured
[ceph-mon01][DEBUG ] dpkg: dependency problems prevent configuration of radosgw:
[ceph-mon01][DEBUG ]  radosgw depends on ceph-common (= 10.2.11-1xenial); however:
[ceph-mon01][DEBUG ]   Package ceph-common is not configured yet.
[ceph-mon01][DEBUG ]
[ceph-mon01][DEBUG ] dpkg: error processing package radosgw (--configure):
[ceph-mon01][DEBUG ]  dependency problems - leaving unconfigured
[ceph-mon01][DEBUG ] Processing triggers for libc-bin (2.23-0ubuntu10) ...
[ceph-mon01][DEBUG ] Processing triggers for systemd (229-4ubuntu21.15) ...
[ceph-mon01][DEBUG ] Processing triggers for ureadahead (0.100.0-19) ...
[ceph-mon01][DEBUG ] Errors were encountered while processing:
[ceph-mon01][DEBUG ]  ceph-common
[ceph-mon01][DEBUG ]  ceph-base
[ceph-mon01][DEBUG ]  ceph-mon
[ceph-mon01][DEBUG ]  ceph-osd
[ceph-mon01][DEBUG ]  ceph
[ceph-mon01][DEBUG ]  ceph-mds
[ceph-mon01][DEBUG ]  radosgw
[ceph-mon01][WARNIN] E: Sub-process /usr/bin/dpkg returned an error code (1)
[ceph-mon01][ERROR ] RuntimeError: command returned non-zero exit status: 100
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install -o Dpkg::Options::=--force-confnew ceph ceph-osd ceph-mds ceph-mon radosgw

By: lokendra

Here only 5 servers are used. What about the client server? Where do we use this server?