Cephadm is a great new feature of Ceph v15.2.0 (Octopus). By using cephadm, users gain more freedom and flexibility when thinking about future migration paths. But what exactly is cephadm?
Well, the goal of cephadm is to provide a fully featured, robust, and well-maintained installation method. It deploys Ceph in containers without depending on a Kubernetes infrastructure like Rook. Cephadm manages the whole life cycle of a Ceph cluster, starting with the bootstrap process: when cephadm creates a Ceph cluster on a single node, the cluster consists of one monitor and one manager. Cephadm then enables you to add hosts and to deploy the Ceph daemons and services. You can use cephadm to manage this lifecycle either through the Ceph command line interface (CLI) or through the dashboard (GUI).
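As a rough sketch of what that bootstrap step looks like in practice (the download URL matches the Octopus branch, and the IP address is a placeholder for your first host's address):

```shell
# Fetch the standalone cephadm script and make it executable
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod +x cephadm

# Bootstrap a new cluster on this node; this deploys one monitor
# and one manager (10.0.0.10 is a placeholder for the host's IP)
./cephadm bootstrap --mon-ip 10.0.0.10
```

Bootstrap also writes a copy of the cluster configuration and admin keyring to /etc/ceph, which the later examples rely on.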
Some advantages of using cephadm are:
- Deploy all components in containers. Using containers simplifies the dependency management and packaging burden across different distributions (though Ceph is still building RPM and Deb packages).
- Tight integration with the orchestrator API. Ceph’s orchestrator interface evolved extensively during the development of cephadm in order to match the implementation and to cleanly abstract the (slightly different) functionality present in Rook. The end result is something that looks, feels, and acts like a part of Ceph.
- No dependency on configuration management tools. Systems like Salt and Ansible are great when deployed at scale across a large organization, but making Ceph depend on such a tool means there is one more piece of software for users to learn. More importantly, the resulting deployment ends up being more complicated, harder to debug, and, most significantly, slower than something that is purpose-built for managing just Ceph.
- Minimal OS dependencies. Cephadm requires Python 3, LVM, NTP, and a container runtime, either Podman or Docker. Any modern Linux distro will do.
- Isolate clusters from each other. Supporting multiple Ceph clusters co-existing on the same host has historically been a niche scenario, but it does come up, and having a robust, generic way to isolate clusters from each other makes testing and redeploying clusters a safe and natural process for both developers and users.
- Automated upgrades. Once Ceph “owns” its own deployment, it can take responsibility for upgrading Ceph in a safe and automated fashion.
- Easy migration from “legacy” deployment tools. The Ceph community wants to allow existing Ceph deployments, from existing tools like ceph-ansible, ceph-deploy, and DeepSea, to painlessly transition to cephadm.
Moreover, because Ceph is fully containerized (the containers are visible with podman ps or docker ps), no Ceph software has been installed on the host itself, and the usual ceph command won't work (not yet, at least). There are a few ways to interact with the new cluster.
One way is to use the cephadm shell command. The cephadm that was used to bootstrap can also launch a containerized shell that has all of the Ceph software (including the CLI) installed. And because bootstrap puts a copy of the Ceph config and admin keyring in /etc/ceph by default, and the shell command looks there by default, you can launch a working shell and use the CLI with just:
# ./cephadm shell ceph status
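The shell can also be launched interactively, which is more convenient when you want to run several commands in a row:

```shell
# Start an interactive containerized shell with the Ceph CLI installed;
# it picks up the config and keyring from /etc/ceph by default
./cephadm shell

# Inside the container, the usual commands work, for example:
ceph status
ceph osd tree
```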
The cephadm command also makes it easy to install “traditional” Ceph packages on the host. To install the Ceph CLI commands and the cephadm command in the standard locations:
# ./cephadm add-repo --release octopus
# ./cephadm install cephadm ceph-common
This supports a few common Linux distributions to start (CentOS/RHEL, Debian/Ubuntu, OpenSUSE/SLE) and can easily be extended to support new ones.
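Once the CLI is available on the host, expanding the cluster beyond the bootstrap node follows the same orchestrator workflow. A sketch, assuming a second host named host2:

```shell
# Copy the cluster's public SSH key to the new host so cephadm can reach it
ssh-copy-id -f -i /etc/ceph/ceph.pub root@host2

# Register the new host with the orchestrator
ceph orch host add host2

# Ask the orchestrator to deploy, for example, three monitors
# across the cluster's hosts
ceph orch apply mon 3
```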
The ceph orch and cephadm commands rely on a healthy cluster. To make sure your commands are executed correctly, only run `ceph orch` commands while the cluster is in the "HEALTH_OK" state.
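A quick way to verify that state before issuing orchestrator commands:

```shell
# Show overall cluster health; proceed only on HEALTH_OK
ceph health

# Or use the fuller status view, including mon/mgr/osd summaries
ceph -s
```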
One of the nicest features of cephadm, once you have your new cluster deployed (or existing cluster upgraded and converted), is its ability to perform automated upgrades. In most cases, this is as simple as:
# ceph orch upgrade start --ceph-version [TARGET VERSION]
Starting with release 15.2.14, stable container image hosting moved from docker.io to quay.io. If you are running a cephadm version from before that, you will have to point it at the new image location. This is done by passing the new container image URL to Ceph. Once you have done this, future upgrades should work normally again.
You can update that setting by starting the upgrade as shown below. Note that this command actually starts the upgrade, so plan accordingly and don't execute it blindly.
# ceph orch upgrade start --image quay.io/ceph/ceph:v15.2.15
The upgrade progress can be monitored from the ceph status view, which will include a progress bar like:
Upgrade to docker.io/ceph/ceph:v15.2.1 (3m)
    [=============...............] (remaining: 21m)
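While an upgrade is running, you can also follow it through the orchestrator and the cephadm log channel:

```shell
# Check whether an upgrade is in progress and which version is the target
ceph orch upgrade status

# Follow cephadm's log messages as daemons are redeployed one by one
ceph -W cephadm
```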
So there you have it: some more information about cephadm. If you want to read more about cephadm, you can do so via the following link: https://42on.com/deploying-a-ceph-cluster-with-cephadm/. If you have any questions about cephadm or other Ceph-related topics, I'd love to hear from you.