Deploying a Ceph cluster with cephadm

Cephadm deploys and manages a Ceph cluster. It does this by connecting the manager daemon to hosts via SSH, which allows it to add, remove, and update Ceph containers. In this blog I will give you a little more information about deploying a Ceph cluster using cephadm. Cephadm manages the full lifecycle of a […]
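As a rough illustration of that workflow, the sketch below bootstraps a cluster on one host and then lets the orchestrator take over the others. The monitor IP and hostnames are placeholders, and the script simply shells out to the documented cephadm and ceph orch commands.

```
import subprocess

MON_IP = "192.168.1.10"                     # placeholder monitor IP
EXTRA_HOSTS = ["ceph-node2", "ceph-node3"]  # placeholder hostnames

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Bootstrap the first host: starts a mon and mgr in containers and creates
# the SSH key (/etc/ceph/ceph.pub) that cephadm uses to reach other hosts.
run(["cephadm", "bootstrap", "--mon-ip", MON_IP])

# Add the remaining hosts (their root account must already trust ceph.pub)
# and let the orchestrator deploy OSDs on all free disks it finds.
for host in EXTRA_HOSTS:
    run(["ceph", "orch", "host", "add", host])
run(["ceph", "orch", "apply", "osd", "--all-available-devices"])
```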

How we use Ceph-Collect to work with you

When we work on Ceph storage infrastructures, we always ask for a diagnostic tool to be run first. For these diagnostics we have developed a tool called Ceph-Collect, which gathers information from a Ceph cluster. After it runs, it shows an in-depth report about the storage architecture together with […]
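Ceph-Collect itself gathers far more, but a minimal sketch of that kind of data collection with the librados Python bindings could look like this; the command list and output file name are just examples, not what the real tool uses.

```
import json
import rados

# A few monitor commands whose JSON output describes the cluster; the real
# tool collects much more than this.
COMMANDS = ["status", "health", "df", "osd tree", "osd dump"]

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

report = {}
for prefix in COMMANDS:
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({"prefix": prefix, "format": "json"}), b'')
    if ret == 0:
        report[prefix] = json.loads(outbuf)

cluster.shutdown()

# Write everything into a single file that can be sent along for review.
with open('ceph-report.json', 'w') as f:
    json.dump(report, f, indent=2)
```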

The Ceph Trafficlight

At PCextreme we have a 700TB Ceph cluster which sits behind our public cloud Aurora Compute, running Apache CloudStack. Ceph health: one of the things we monitor about the Ceph cluster is its health. This can be OK, WARN or ERR. It goes without saying that you always want to see OK, but […]
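To give an idea of how such a traffic light can be driven, here is a small sketch (not the actual trafficlight code) that reads the health status via librados and maps it to a lamp colour; the colour mapping is purely illustrative.

```
import json
import rados

# Illustrative mapping of Ceph's three health states to lamp colours.
COLOURS = {
    'HEALTH_OK': 'green',
    'HEALTH_WARN': 'orange',
    'HEALTH_ERR': 'red',
}

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ret, outbuf, errs = cluster.mon_command(
    json.dumps({"prefix": "health", "format": "json"}), b'')
cluster.shutdown()

# Recent Ceph releases report the state under 'status'; older ones used
# 'overall_status'.
health = json.loads(outbuf)
status = health.get('status', health.get('overall_status'))
print(COLOURS.get(status, 'red'))
```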

NFS-Ganesha with libcephfs on Ubuntu 14.04

This week I’m testing a lot with CephFS, and one of the things I had never tried was re-exporting CephFS using NFS-Ganesha and libcephfs. NFS-Ganesha is an NFS server which runs in userspace. It has multiple backends (FSALs) it can use, and libcephfs is one of them. libcephfs is a userspace library which you can use […]
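For a feel of what "CephFS from userspace" means, the sketch below talks to CephFS through the libcephfs Python binding, without any kernel mount; NFS-Ganesha's CEPH FSAL builds on the same library. The conffile path is an assumption.

```
import cephfs

# Connect using the cluster configuration and keyring referenced by ceph.conf.
fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
fs.mount()

# statfs() on the root shows the file system is reachable purely from
# userspace, which is what NFS-Ganesha relies on when re-exporting CephFS.
print(fs.statfs(b'/'))

fs.unmount()
fs.shutdown()
```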

Calculating RADOS objects for RBD images

Ceph’s RBD (RADOS Block Device) is just a thin wrapper on top of RADOS, the object store of Ceph. By default it stripes data over 4MB objects in RADOS. It’s very simple to calculate which RADOS object corresponds with which sector on your RBD image/block device. First you have to find out the block device’s object […]
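As a sketch of that calculation for a format 2 image: take the block_name_prefix and object size from rbd info, turn the sector into a byte offset, and divide by the object size. The prefix below is a made-up example.

```
# Values normally taken from 'rbd info <image>'; these are made-up examples.
BLOCK_NAME_PREFIX = "rbd_data.1f9c2ae8944a"
OBJECT_SIZE = 4 * 1024 * 1024   # order 22, the default 4MB object size
SECTOR_SIZE = 512               # block devices address 512-byte sectors

def rados_object_for_sector(sector):
    """Name of the RADOS object that holds the given sector of the image."""
    object_no = (sector * SECTOR_SIZE) // OBJECT_SIZE
    # Format 2 images name their data objects <prefix>.<16-digit hex number>.
    return "%s.%016x" % (BLOCK_NAME_PREFIX, object_no)

print(rados_object_for_sector(0))      # ...0000000000000000, the first object
print(rados_object_for_sector(10000))  # sector 10000 lands in the second object
```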