Comments on How to install a Ceph Storage Cluster on Ubuntu 16.04
In this tutorial, I will guide you through installing and building a Ceph cluster on Ubuntu 16.04 server. Ceph is an open source storage platform that provides high performance, reliability, and scalability. It is a free distributed storage system that provides an interface for object, block, and file-level storage and can operate without a single point of failure.
Have you tried Ceph integration/implementation in Proxmox? Check it out.
Great article. A few things might help someone who is running into issues with this (see the sketch after this list):
1. Make sure your permissions in /etc/ceph and /home/"cephuser" are set correctly. Running sudo chown cephuser:cephuser * in those directories worked well for me.
2. Your cluster will not work if ceph.conf does not list all monitor nodes. If only the initial monitor is set up and it goes down, the other nodes will be offline as well. You may have to manually enter this information into the ceph.conf file located in /etc/ceph and /home/"cephuser"/cluster.
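A minimal sketch of both fixes, assuming the cephuser account and the /home/cephuser/cluster working directory from this guide (the monitor names and IPs below are placeholders for your own):

# Fix ownership of the config directories on each node
sudo chown -R cephuser:cephuser /etc/ceph /home/cephuser/cluster

# In ceph.conf, list every monitor under [global], not just the initial one:
# [global]
# mon_initial_members = mon1, mon2, mon3
# mon_host = 10.0.15.11, 10.0.15.12, 10.0.15.13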
Why did you prepare the disks with an XFS filesystem in Step 4, only to zap them in Step 5? I'm confused about that...
If activate fails, then try the following:
ceph-deploy osd activate ceph-osd1:/dev/sdb1:/dev/sdb2 ceph-osd2:/dev/sdb1:/dev/sdb2 ceph-osd3:/dev/sdb1:/dev/sdb2
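For reference, the argument format here should be host:data-partition:journal-partition, so each OSD above uses /dev/sdb1 for its data and /dev/sdb2 for its journal.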
dheeraj, that helped me tons! Thanks for that fix.
Great article, and it came in handy for deploying a basic Ceph cluster.
Would love to see articles on two more topics: 1) CephFS setup, 2) the Ceph object store.
You saved my day! Thanks a lot!
Activate is failing with the error below; has anyone faced the same problem?
[ceph-osd1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdb1
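One way to debug this, assuming the same disk layout as in the log above, is to run the command ceph-deploy was attempting directly on the OSD node and read its verbose output:

# On ceph-osd1, re-run the failing activation by hand
sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdb1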
I have a question. I would like to use Ceph as a storage solution (cluster) and then use Docker for applications; when I search for this, I get Ceph offered as a Docker instance. Shouldn't it be the other way around: have multiple (Debian) hardware servers use Ceph for storage and then run Docker on them? Can someone shine a light on this?
Hi,
Is this method valid for installing Ceph on Amazon Ubuntu EC2 instances? Also, do they have to be 6 instances?
Another question: is Ceph scalable? Meaning, if after several months I wanted to add another server, is that possible with Ceph?
Thanks
Hello
I have a problem deploying Ceph. When I run the command 'ceph-deploy disk list ceph-osd1 ceph-osd2 ceph-osd3', the procedure finishes with the error below:
cephuser@ceph-admin:~/cluster$ ceph-deploy disk list ceph-osd1 ceph-osd2 ceph-osd3
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.0): /usr/local/bin/ceph-deploy disk list ceph-osd1 ceph-osd2 ceph-osd3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : list
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f9ce5d99b48>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] host : ['ceph-osd1', 'ceph-osd2', 'ceph-osd3']
[ceph_deploy.cli][INFO ] func : <function disk at 0x7f9ce61f97d0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph-osd1][DEBUG ] connection detected need for sudo
[ceph-osd1][DEBUG ] connected to host: ceph-osd1
[ceph-osd1][DEBUG ] detect platform information from remote host
[ceph-osd1][DEBUG ] detect machine type
[ceph-osd1][DEBUG ] find the location of an executable
[ceph-osd1][INFO ] Running command: sudo /sbin/initctl version
[ceph-osd1][DEBUG ] find the location of an executable
[ceph-osd1][INFO ] Running command: sudo fdisk -l
[ceph_deploy][ERROR ] Traceback (most recent call last):
[ceph_deploy][ERROR ] File "/usr/local/lib/python2.7/dist-packages/ceph_deploy/util/decorators.py", line 69, in newfunc
[ceph_deploy][ERROR ] return f(*a, **kw)
[ceph_deploy][ERROR ] File "/usr/local/lib/python2.7/dist-packages/ceph_deploy/cli.py", line 164, in _main
[ceph_deploy][ERROR ] return args.func(args)
[ceph_deploy][ERROR ] File "/usr/local/lib/python2.7/dist-packages/ceph_deploy/osd.py", line 434, in disk
[ceph_deploy][ERROR ] disk_list(args, cfg)
[ceph_deploy][ERROR ] File "/usr/local/lib/python2.7/dist-packages/ceph_deploy/osd.py", line 376, in disk_list
[ceph_deploy][ERROR ] distro.conn.logger(line)
[ceph_deploy][ERROR ] TypeError: 'Logger' object is not callable
[ceph_deploy][ERROR ]
cephuser@ceph-admin:~/cluster$
Do you have any idea how to fix this problem? I tried to build Ceph on Ubuntu 14 and 16 as well and got the same error each time.
Thanks for any answer.
I'm having the same issue as above:
$ ceph-deploy -v disk list lvlsdfsp02
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.0): /bin/ceph-deploy -v disk list lvlsdfsp02
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : True
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : list
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x182a248>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] host : ['lvlsdfsp02']
[ceph_deploy.cli][INFO ] func : <function disk at 0x18171b8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[lvlsdfsp02][DEBUG ] connection detected need for sudo
[lvlsdfsp02][DEBUG ] connected to host: lvlsdfsp02
[lvlsdfsp02][DEBUG ] detect platform information from remote host
[lvlsdfsp02][DEBUG ] detect machine type
[lvlsdfsp02][DEBUG ] find the location of an executable
[lvlsdfsp02][INFO ] Running command: sudo fdisk -l
[ceph_deploy][ERROR ] Traceback (most recent call last):
[ceph_deploy][ERROR ] File "/usr/lib/python2.7/site-packages/ceph_deploy/util/decorators.py", line 69, in newfunc
[ceph_deploy][ERROR ] return f(*a, **kw)
[ceph_deploy][ERROR ] File "/usr/lib/python2.7/site-packages/ceph_deploy/cli.py", line 164, in _main
[ceph_deploy][ERROR ] return args.func(args)
[ceph_deploy][ERROR ] File "/usr/lib/python2.7/site-packages/ceph_deploy/osd.py", line 434, in disk
[ceph_deploy][ERROR ] disk_list(args, cfg)
[ceph_deploy][ERROR ] File "/usr/lib/python2.7/site-packages/ceph_deploy/osd.py", line 376, in disk_list
[ceph_deploy][ERROR ] distro.conn.logger(line)
[ceph_deploy][ERROR ] TypeError: 'Logger' object is not callable
[ceph_deploy][ERROR ]
Any thoughts on what's going wrong? I did this same thing with "jewel" and it worked fine... it seems to be broken in the latest version of "luminous" on CentOS 7.
I got the same issue. Here is how I fixed it:
- Confirm your Ceph version: if you followed this guide, I assume you installed the old version 10.x.
- Upgrade to the new version 12.x with the following command for admin, mon1, osd1, osd2, osd3:
ceph-deploy install --release luminous ceph-admin mon1 ceph-osd1 ceph-osd2 ceph-osd3
After that, follow steps on here - http://docs.ceph.com/docs/master/start/quick-ceph-deploy/#create-a-cluster
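To confirm which version a node is actually running before and after the upgrade, this should work on any node (the exact output wording varies by release):

# Prints e.g. "ceph version 10.2.x (...)" on Jewel or "ceph version 12.2.x (...)" on Luminous
ceph --version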
This is the best article I have seen. Cleanly written. All goes fine. In case you get stuck at this point and it throws an error here:
ceph-deploy mon create-initial
The main reason may be a wrong ceph.conf configuration; in particular, if you are running all the nodes in VMs and a node's hostname (especially mon1's) is different, you may exit with an error like `xxx monitor is not yet in quorum, tries left` or `admin_socket: exception getting command descriptions: [Errno 2] No such file or directory`. Then run the following commands on the deploy server step by step (AFTER you ensure the hostnames are the same), and you should be good.
ceph-deploy uninstall `yourhostname`
ceph-deploy purgedata `yourhostname`
ceph-deploy forgetkeys
systemctl start ceph
ceph-deploy mon create-initial

For people who need the client to serve over the HTTP protocol, here is some additional help on the REST API (the library installs a Gateway daemon which embeds Civetweb, so you do not have to install a web server or configure FastCGI). If you need the REST client set up, do the following:
# Install REST client Civetweb libraries to get a client over http://client:7480
ceph-deploy install --rgw client
ceph-deploy rgw create client
# change ufw to your firewall's command
sudo ufw allow 7480/tcp
sudo ufw enable

Now access `http://clientipaddress:7480/` and you get the following response:
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner>
    <ID>anonymous</ID>
    <DisplayName></DisplayName>
  </Owner>
  <Buckets>
  </Buckets>
</ListAllMyBucketsResult>

See the Configuring Ceph Object Gateway guide for additional administration and API details.
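A quick way to test the gateway from any machine that can reach it (the address is a placeholder for your client node's IP):

# Fetch the anonymous bucket listing from the RGW endpoint
curl http://clientipaddress:7480/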
Cheers and Regards, Ganesh
Same here... I don't get the format/xfs part, maybe it was to test those disks? lol
Except for that part, an excellent basic howto about a new Ceph install, though client usage/connection is really missing at the very end. Thanks
I tried to install and it is not working. I even tried the Jewel version but still got the same result. Are the packages correct?
[ceph-mon01][DEBUG ] dpkg: error processing package ceph-base (--configure):
[ceph-mon01][DEBUG ] dependency problems - leaving unconfigured
[ceph-mon01][DEBUG ] dpkg: dependency problems prevent configuration of ceph-mon:
[ceph-mon01][DEBUG ] ceph-mon depends on ceph-base (= 10.2.11-1xenial); however:
[ceph-mon01][DEBUG ] Package ceph-base is not configured yet.
[ceph-mon01][DEBUG ]
[ceph-mon01][DEBUG ] dpkg: error processing package ceph-mon (--configure):
[ceph-mon01][DEBUG ] dependency problems - leaving unconfigured
[ceph-mon01][DEBUG ] dpkg: dependency problems prevent configuration of ceph-osd:
[ceph-mon01][DEBUG ] ceph-osd depends on ceph-base (= 10.2.11-1xenial); however:
[ceph-mon01][DEBUG ] Package ceph-base is not configured yet.
[ceph-mon01][DEBUG ]
[ceph-mon01][DEBUG ] dpkg: error processing package ceph-osd (--configure):
[ceph-mon01][DEBUG ] dependency problems - leaving unconfigured
[ceph-mon01][DEBUG ] dpkg: dependency problems prevent configuration of ceph:
[ceph-mon01][DEBUG ] ceph depends on ceph-mon (= 10.2.11-1xenial); however:
[ceph-mon01][DEBUG ] Package ceph-mon is not configured yet.
[ceph-mon01][DEBUG ] ceph depends on ceph-osd (= 10.2.11-1xenial); however:
[ceph-mon01][DEBUG ] Package ceph-osd is not configured yet.
[ceph-mon01][DEBUG ]
[ceph-mon01][DEBUG ] dpkg: error processing package ceph (--configure):
[ceph-mon01][DEBUG ] dependency problems - leaving unconfigured
[ceph-mon01][WARNIN] No apport report written because the error message indicates it's a follow-up error from a previous failure.
[ceph-mon01][DEBUG ] dpkg: dependency problems prevent configuration of ceph-mds:
[ceph-mon01][WARNIN] No apport report written because MaxReports has already been reached
[ceph-mon01][DEBUG ] ceph-mds depends on ceph-base (= 10.2.11-1xenial); however:
[ceph-mon01][WARNIN] No apport report written because MaxReports has already been reached
[ceph-mon01][DEBUG ] Package ceph-base is not configured yet.
[ceph-mon01][WARNIN] No apport report written because MaxReports has already been reached
[ceph-mon01][DEBUG ]
[ceph-mon01][WARNIN] No apport report written because MaxReports has already been reached
[ceph-mon01][DEBUG ] dpkg: error processing package ceph-mds (--configure):
[ceph-mon01][DEBUG ] dependency problems - leaving unconfigured
[ceph-mon01][DEBUG ] dpkg: dependency problems prevent configuration of radosgw:
[ceph-mon01][DEBUG ] radosgw depends on ceph-common (= 10.2.11-1xenial); however:
[ceph-mon01][DEBUG ] Package ceph-common is not configured yet.
[ceph-mon01][DEBUG ]
[ceph-mon01][DEBUG ] dpkg: error processing package radosgw (--configure):
[ceph-mon01][DEBUG ] dependency problems - leaving unconfigured
[ceph-mon01][DEBUG ] Processing triggers for libc-bin (2.23-0ubuntu10) ...
[ceph-mon01][DEBUG ] Processing triggers for systemd (229-4ubuntu21.15) ...
[ceph-mon01][DEBUG ] Processing triggers for ureadahead (0.100.0-19) ...
[ceph-mon01][DEBUG ] Errors were encountered while processing:
[ceph-mon01][DEBUG ] ceph-common
[ceph-mon01][DEBUG ] ceph-base
[ceph-mon01][DEBUG ] ceph-mon
[ceph-mon01][DEBUG ] ceph-osd
[ceph-mon01][DEBUG ] ceph
[ceph-mon01][DEBUG ] ceph-mds
[ceph-mon01][DEBUG ] radosgw
[ceph-mon01][WARNIN] E: Sub-process /usr/bin/dpkg returned an error code (1)
[ceph-mon01][ERROR ] RuntimeError: command returned non-zero exit status: 100
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install -o Dpkg::Options::=--force-confnew ceph ceph-osd ceph-mds ceph-mon radosgw
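A common way to recover from half-configured packages like these, assuming the repository and release are otherwise consistent, is to let dpkg and apt finish the job manually on the affected node:

# Finish configuring any half-installed packages
sudo dpkg --configure -a
# Then resolve the remaining dependency problems
sudo apt-get -f install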
Only 5 servers are used here; what about the client server? Where do we use that server?