Comments on How to build a Ceph Distributed Storage Cluster on CentOS 7
Ceph is a widely used open source storage platform that provides high performance, reliability, and scalability. The free, distributed Ceph storage system offers interfaces for object-, block-, and file-level storage, and is built to avoid any single point of failure. In this tutorial, I will guide you through installing and building a Ceph cluster on CentOS 7.
Comments
Thank you very much!
Great article. Can't wait to read the next part :)
The next part has just been published. You can find it here: https://www.howtoforge.com/tutorial/using-ceph-as-block-device-on-centos-7/
Hi,
How would I replace the VMs with KVM?
I just tried following your instructions and it works perfectly! :) Thanks a lot! :) Just to note, I am using iptables instead of firewalld, and I was getting this error:
health HEALTH_ERR 64 pgs are stuck inactive for more than 300 seconds 64 pgs peering 64 pgs stuck inactive
That was because I had configured the following rule wrongly on the OSDs:
-A INPUT -p tcp -m multiport --dports 6800,7300 -j ACCEPT
The correct rule is this one:
-A INPUT -p tcp -m multiport --dports 6800:7300 -j ACCEPT
I know it is a silly mistake on my side :( . The reason is that, by default, Ceph OSDs bind to the first available ports on a node starting at port 6800, and at least three ports starting at 6800 must be open for each OSD. My first rule used a comma instead of a colon, so it opened only the two ports 6800 and 7300 rather than the whole range.
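For anyone staying on firewalld, a rough equivalent of the corrected rule (assuming the standard 6800-7300 OSD port span):

# Open the Ceph OSD port range with firewalld instead of iptables
sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
sudo firewall-cmd --reload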
At first I thought it was a mistake in the Ceph configuration, but after looking at the Ceph logs on the OSDs and seeing network-related errors, I realised it was a network or firewall issue, and indeed it was my own firewall mistake.
In any case, I like to think that one must learn from one's errors, so I am sharing this in case someone else has the same issue :)
For the rest, I followed the tutorial step by step and it works perfectly with CentOS 7 + Ceph Jewel. I did not find any mistake in the tutorial. Ah, and I did it using VirtualBox too.
Great job! Thanks.
What are the minimum requirements for each machine (memory, CPU, disk)?
Thanks
ceph-deploy osd activate errors out with "access denied" when creating the OSD id with the ceph osd create command. Did you hit any error at that step?
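One common cause of that access-denied error (a guess, since the thread does not confirm it) is an admin keyring that the cephuser cannot read. Re-pushing the keyring and relaxing its permissions may help (hostnames here are examples):

# Push ceph.conf and the admin keyring to the nodes again
ceph-deploy admin ceph-admin mon1 osd1 osd2 osd3
# Make the admin keyring readable on each node
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring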
Hello,
thanks for the article. I tried to follow it, but my installation keeps getting stuck at the same place.
=====================
[ceph-admin][DEBUG ] Install 2 Packages (+44 Dependent packages)
[ceph-admin][DEBUG ]
[ceph-admin][DEBUG ] Total download size: 59 M
[ceph-admin][DEBUG ] Installed size: 219 M
[ceph-admin][DEBUG ] Downloading packages:
[ceph-admin][WARNIN] No data was received after 300 seconds, disconnecting...
[ceph-admin][INFO ] Running command: sudo ceph --version
[ceph-admin][ERROR ] Traceback (most recent call last):
[ceph-admin][ERROR ] File "/usr/lib/python2.7/site-packages/ceph_deploy/lib/vendor/remoto/process.py", line 119, in run
[ceph-admin][ERROR ] reporting(conn, result, timeout)
[ceph-admin][ERROR ] File "/usr/lib/python2.7/site-packages/ceph_deploy/lib/vendor/remoto/log.py", line 13, in reporting
[ceph-admin][ERROR ] received = result.receive(timeout)
[ceph-admin][ERROR ] File "/usr/lib/python2.7/site-packages/ceph_deploy/lib/vendor/remoto/lib/vendor/execnet/gateway_base.py", line 704, in receive
[ceph-admin][ERROR ] raise self._getremoteerror() or EOFError()
[ceph-admin][ERROR ] RemoteError: Traceback (most recent call last):
[ceph-admin][ERROR ] File "/usr/lib/python2.7/site-packages/ceph_deploy/lib/vendor/remoto/lib/vendor/execnet/gateway_base.py", line 1036, in executetask
[ceph-admin][ERROR ] function(channel, **kwargs)
[ceph-admin][ERROR ] File "<remote exec>", line 12, in _remote_run
[ceph-admin][ERROR ] File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
[ceph-admin][ERROR ] errread, errwrite)
[ceph-admin][ERROR ] File "/usr/lib64/python2.7/subprocess.py", line 1327, in _execute_child
[ceph-admin][ERROR ] raise child_exception
[ceph-admin][ERROR ] OSError: [Errno 2] No such file or directory
[ceph-admin][ERROR ]
[ceph-admin][ERROR ]
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph --version
Error in sys.exitfunc:
[cephuser@ceph-admin cluster]$
The interconnection between the nodes looks fine. Can you please let me know what I am doing wrong here?
Thanks
Ashish
Hi,
somehow it worked when I removed
public network = 10.0.15.0/24
from my config file. I had changed the public network CIDR to my own one, the subnet on my eth0, but each time the install got stuck at the same point. After I removed that line, it worked. Not sure why.
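For anyone hitting the same hang: the public network value must match the subnet actually configured on the node's interface. A quick way to check (interface name and address here are examples):

# Show the IPv4 subnet on the interface Ceph will use
ip -4 addr show eth0
# e.g. "inet 192.168.1.11/24" means: public network = 192.168.1.0/24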
But anyway, great article. Thanks again.
--Ashish
sudo parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%
Error: Partition(s) 1 on /dev/sdb have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
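A reboot works, but a lighter option, assuming nothing still holds /dev/sdb open, is to ask the kernel to re-read the partition table:

# Check whether the disk is still in use (mounted partition, LVM, etc.)
sudo lsof /dev/sdb*
# Re-read the partition table without rebooting
sudo partprobe /dev/sdb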
I have created the above setup but am not able to integrate it with OpenStack. Please help me.
For more info, visit the link below:
https://ask.openstack.org/en/question/113616/unable-to-integrate-openstack-with-ceph/
This is a fantastic article written in simple steps! Bravo!
I am having an error while running the command below.
Please suggest what went wrong...
[cephuser@ceph-admin cluster]$ ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7ff619bc8e60>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mon at 0x7ff619e2eb18>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy][ERROR ] Traceback (most recent call last):
[ceph_deploy][ERROR ] File "/usr/lib/python2.7/site-packages/ceph_deploy/util/decorators.py", line 69, in newfunc
[ceph_deploy][ERROR ] return f(*a, **kw)
[ceph_deploy][ERROR ] File "/usr/lib/python2.7/site-packages/ceph_deploy/cli.py", line 164, in _main
[ceph_deploy][ERROR ] return args.func(args)
[ceph_deploy][ERROR ] File "/usr/lib/python2.7/site-packages/ceph_deploy/mon.py", line 470, in mon
[ceph_deploy][ERROR ] mon_create_initial(args)
[ceph_deploy][ERROR ] File "/usr/lib/python2.7/site-packages/ceph_deploy/mon.py", line 414, in mon_create_initial
[ceph_deploy][ERROR ] mon_initial_members = get_mon_initial_members(args, error_on_empty=True)
[ceph_deploy][ERROR ] File "/usr/lib/python2.7/site-packages/ceph_deploy/mon.py", line 560, in get_mon_initial_members
[ceph_deploy][ERROR ] cfg = conf.ceph.load(args)
[ceph_deploy][ERROR ] File "/usr/lib/python2.7/site-packages/ceph_deploy/conf/ceph.py", line 71, in load
[ceph_deploy][ERROR ] return parse(f)
[ceph_deploy][ERROR ] File "/usr/lib/python2.7/site-packages/ceph_deploy/conf/ceph.py", line 52, in parse
[ceph_deploy][ERROR ] cfg.readfp(ifp)
[ceph_deploy][ERROR ] File "/usr/lib64/python2.7/ConfigParser.py", line 324, in readfp
[ceph_deploy][ERROR ] self._read(fp, filename)
[ceph_deploy][ERROR ] File "/usr/lib64/python2.7/ConfigParser.py", line 512, in _read
[ceph_deploy][ERROR ] raise MissingSectionHeaderError(fpname, lineno, line)
[ceph_deploy][ERROR ] MissingSectionHeaderError: File contains no section headers.
[ceph_deploy][ERROR ] file: <???>, line: 1
[ceph_deploy][ERROR ] 'global]\n'
[ceph_deploy][ERROR ]
[cephuser@ceph-admin cluster]$
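The last two ERROR lines show the cause: the first line of the ceph.conf being parsed reads 'global]', so the opening bracket of the section header is missing. The start of the file should look like this:

# cluster/ceph.conf must begin with a complete section header
[global]
# ... rest of the file unchanged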
Download these packages from: http://mirror.centos.org/centos/7/extras/x86_64/Packages/
python-flask-0.10.1-4.el7.noarch.rpm
python-itsdangerous-0.23-2.el7.noarch.rpm
python-werkzeug-0.9.1-2.el7.noarch.rpm

Then install them:

yum install -y python-jinja2
rpm -i *.rpm
Enjoy
D.
Fantastic article !
Thanks for the article. It's really easy to understand.
I am looking into setting up a multi-site Ceph cluster for data replication over the WAN.
Do you have any kind of document on it?
Very well written article.
I've a question
When I created the cephuser and executed the commands to grant root privileges to cephuser on all nodes, only a few commands worked; after that, every command failed with Permission Denied, and I had to use the root user to execute the root-privilege commands from the article. Please help.
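In case it helps, a sketch of the usual passwordless-sudo setup for cephuser (the file path follows the common ceph-deploy convention, so treat it as an assumption):

# Grant cephuser passwordless sudo on every node
echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
sudo chmod 0440 /etc/sudoers.d/cephuser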
Did you disable SELinux because you have to, or because it's easier?
I don't want to disable a security feature unless Ceph really can't run with SELinux enabled.
Otherwise, it's a great article.
Good day,
I would like to find out: do the 6 server nodes have to be physical servers?
Great article, I can't wait to try it.
But here's one question: why do you partition and format sdb with XFS just to wipe it all off with zap?
And in your next article on how to use it, once you create a block device and mount it, it has to be formatted with XFS again.
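For context, a sketch of the sequence being asked about (hostname and device follow the tutorial's naming, so treat them as assumptions):

# zap wipes the disk, then ceph-deploy lays out its own partitions
ceph-deploy disk zap osd1:/dev/sdb
ceph-deploy osd prepare osd1:/dev/sdb
ceph-deploy osd activate osd1:/dev/sdb1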
I wanted to ask: is it possible to create a Ceph cluster without any admin machine?
Outstanding tutorial, thank you for sharing this!