Cephadm deploys and manages a Ceph cluster by connecting the manager daemon to hosts via SSH, which allows it to add, remove, and update Ceph containers. In this blog, I will provide you with a little more information about deploying a Ceph cluster using cephadm.
Cephadm manages the full lifecycle of a Ceph cluster, and that lifecycle starts with the bootstrapping process: cephadm creates a new cluster by “bootstrapping” on a single host, expands the cluster to encompass any additional hosts, and then deploys the needed services.
What you need in order to deploy a new Ceph cluster is:
- Python 3.
- Podman or Docker for running containers.
- Time synchronization (such as chrony or NTP).
- LVM2 for provisioning storage devices.
- The will to enjoy some open source awesomeness.
Furthermore, any modern Linux distribution should be sufficient for deployment, and dependencies will be installed automatically by the bootstrap process below.
The first step is to install cephadm. There are two ways to achieve this:
- You can use a curl-based installation method.
- You can apply distribution-specific installation methods.
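Both installation methods might look roughly like this sketch. The Ceph release in the download URL and the package manager shown are examples; check the Ceph documentation for the release you actually want to deploy.

```shell
# Option 1: curl-based install (the release in the URL is an example;
# substitute the version you want to deploy)
curl --silent --remote-name --location https://download.ceph.com/rpm-18.2.4/el9/noarch/cephadm
chmod +x cephadm

# Option 2: distribution packages
apt install -y cephadm      # Debian/Ubuntu
# dnf install -y cephadm    # RHEL/CentOS/Fedora
```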
Installing cephadm makes the process easier. For instance, by using the cephadm command you can:
- Bootstrap a new cluster.
- Launch a containerized shell with a working Ceph CLI.
- Aid in debugging containerized Ceph daemons.
But there are some things that are good to know before you bootstrap a new cluster. All your hosts must be reachable by SSH without a password, so create an SSH key and copy it to all the hosts. The second step in creating a new Ceph cluster is then running the cephadm bootstrap command on the cluster’s first host. This creates the cluster’s first monitor daemon, and that monitor daemon needs an IP address: you must pass the IP address of the first host to the cephadm bootstrap command, so you’ll need to know the IP address of that host.
However, if there are multiple networks and interfaces, be sure to choose one that will be accessible by any host accessing the Ceph cluster.
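Setting up passwordless SSH to the hosts could look like the following sketch; the key type, key path, and host names are examples, not requirements.

```shell
# Generate a key pair without a passphrase (key type and path are examples)
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""

# Copy the public key to every host that will join the cluster
# (host names below are placeholders for your own)
for host in host1 host2 host3; do
    ssh-copy-id -i ~/.ssh/id_ed25519.pub root@"$host"
done
```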
To run the cephadm bootstrap command, use the following code, substituting the IP address of the first host for &lt;mon-ip&gt;:
cephadm bootstrap --mon-ip &lt;mon-ip&gt;
This command will:
- Create a monitor and manager daemon for the new cluster on the local host.
- Generate a new SSH key for the Ceph cluster and add it to the root user’s /root/.ssh/authorized_keys file.
- Write a copy of the public key to /etc/ceph/ceph.pub.
- Write a minimal configuration file to /etc/ceph/ceph.conf. This file is needed to communicate with the new cluster.
- Write a copy of the client.admin administrative secret key to /etc/ceph/ceph.client.admin.keyring.
- Add the _admin label to the bootstrap host. By default, any host with this label will also get a copy of /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring.
Cephadm does not require any Ceph packages to be installed on the host. However, we recommend enabling easy access to the ceph command.
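One way to get easy access to the ceph command is to use the containerized shell, or to install the CLI tools on the host via cephadm itself. This is a sketch; the release name passed to add-repo is an example.

```shell
# Launch a shell inside a container with a working Ceph CLI
cephadm shell

# Or install the Ceph CLI tools on the host itself
# (the release name is an example)
cephadm add-repo --release reef
cephadm install ceph-common

# Verify the CLI can reach the cluster
ceph status
```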
The next step is to add all remaining hosts to the cluster. By default, a ceph.conf file and a copy of the client.admin keyring are maintained in /etc/ceph on all hosts with the _admin label, which is initially applied only to the bootstrap host. We usually recommend that one or more other hosts be given the _admin label so that the Ceph CLI (e.g., via cephadm shell) is easily accessible on multiple hosts. To add the _admin label to additional host(s):
ceph orch host label add &lt;host&gt; _admin
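Adding a host typically involves installing the cluster’s SSH key on it and then registering it with the orchestrator. A sketch, where the host name and IP address are placeholders for your own:

```shell
# Install the cluster's public SSH key on the new host
ssh-copy-id -f -i /etc/ceph/ceph.pub root@host2

# Tell Ceph about the new host (name and IP are placeholders)
ceph orch host add host2 10.0.0.2

# Optionally give it the _admin label so it also receives
# ceph.conf and the admin keyring
ceph orch host label add host2 _admin
```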
And lastly, to add storage to the cluster, instruct Ceph to consume any available and unused device:
ceph orch apply osd --all-available-devices
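To see which devices cephadm has discovered and whether the OSDs come up, something like the following can help:

```shell
# List the storage devices cephadm has discovered on each host
ceph orch device ls

# Watch the daemons appear as they are created
ceph orch ps

# Check overall cluster health
ceph status
```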
Even though this was just a short description of how to deploy a new Ceph cluster, we hope it helps you. However, we can imagine you still have some questions. If that is the case, feel free to contact us by sending a message, or read more about how to improve your Ceph cluster in our blog through the following link: https://42on.com/improving-some-aspects-on-your-ceph-cluster/ .
Also, read our other blog on how to manage containerized Ceph clusters with cephadm.