Ceph Quincy, a look into the future

Every year, a new Ceph version is released: in 2019, version 14 (Nautilus); in 2020, version 15 (Octopus); and in 2021, version 16 (Pacific). These versions have an end-of-life date, so make sure you are up to date and operate the same version throughout your Ceph clusters. The end of life date […]
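If you want to verify that every daemon in a cluster runs the same release, the ceph versions command (available since Luminous) gives a per-daemon-type summary; a minimal check could look like this:

    # Summarize the Ceph release each daemon type is running,
    # so stragglers on an older version stand out immediately.
    ceph versions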

Fairbanks and 42on at the OpenInfra Summit Berlin

The OpenInfra Summit is a global event for open source IT infrastructure professionals to collaborate on software development, share best practices about designing and running infrastructure in production, and make partnership and purchase decisions. This year (June 7th, 8th and 9th) it is held in Berlin and brings together more than 2000 influential IT […]

Deploying a Ceph cluster with cephadm

Cephadm deploys and manages a Ceph cluster. It does this by connecting the manager daemon to hosts via SSH. The manager daemon is able to add, remove, and update Ceph containers. In this blog, I will provide you with a little more information about deploying a Ceph cluster using cephadm. Cephadm manages the full lifecycle of a […]
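As a rough sketch of what bootstrapping looks like in practice (the monitor IP below is a placeholder, not taken from the post):

    # Bootstrap a new cluster on the first host; cephadm starts a
    # monitor and manager daemon in containers. 10.0.0.1 is hypothetical.
    cephadm bootstrap --mon-ip 10.0.0.1

    # The ceph CLI is then available through a containerized shell:
    cephadm shell -- ceph -s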

About RBD, Rados Block Device

For anybody who enjoyed Latin pop in the 2000s, RBD was a popular Mexican pop band from Mexico City, signed to EMI Virgin. The group achieved international success from 2004 until their separation in 2009 and sold over 15 million records worldwide, making them one of the best-selling Latin music artists of all time. The group […]

Best practices for cephadm and expanding your Ceph infrastructure

If you want to expand your Ceph cluster by adding new nodes, it is good to know what the best practices are. So, here is some more information about cephadm and some tips and tricks for expanding your Ceph cluster. First, a little bit about cephadm: cephadm creates a new Ceph cluster by “bootstrapping” […]
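To give an idea of the expansion workflow (the hostname node2 is hypothetical; the commands follow the upstream cephadm documentation):

    # Give the cluster's SSH key to the new host so the manager
    # daemon can reach it over SSH.
    ssh-copy-id -f -i /etc/ceph/ceph.pub root@node2

    # Register the host with the orchestrator.
    ceph orch host add node2

    # Let cephadm turn all available, unused disks into OSDs.
    ceph orch apply osd --all-available-devices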

Setting noout flag per Ceph OSD

Prior to Ceph Luminous you could only set the noout flag cluster-wide, which means that none of your OSDs will be marked as out. On large(r) clusters this isn't always what you want, as you might be performing maintenance on a part of the cluster, but you still want other OSDs which go down to […]
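Since Luminous the flag can also be scoped to a single OSD; a minimal sketch (osd.4 is just an example ID):

    # Prevent only osd.4 from being marked out during maintenance
    ceph osd add-noout 4

    # ... perform the maintenance on its host ...

    # Remove the per-OSD flag again afterwards
    ceph osd rm-noout 4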

Placement Groups with Ceph Luminous stay in activating state

Placement Groups stuck in activating: when migrating from FileStore to BlueStore with Ceph Luminous you might run into the problem that certain Placement Groups stay stuck in the activating state, for example: 44 activating+undersized+degraded+remapped. PG overdose: this is a side-effect of the new PG overdose protection in Ceph Luminous. Too many PGs on your OSDs can cause […]
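The protection is governed by the mon_max_pg_per_osd option (default 200 in Luminous). One common remedy, not necessarily the exact fix from the full post, is to raise that limit so the temporarily overloaded OSDs can activate their PGs; a sketch with 400 as an example value:

    # Raise the PG-per-OSD limit at runtime on all monitors
    ceph tell mon.* injectargs '--mon_max_pg_per_osd=400'

    # Persist it in ceph.conf under [global] so it survives restarts:
    #   mon_max_pg_per_osd = 400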

Quick overview of Ceph version running on OSDs

When checking a Ceph cluster it's useful to know which versions your OSDs in the cluster are running. There is a very simple one-line command to do this: ceph osd metadata | jq '.[].ceph_version' | sort | uniq -c Running this on a cluster which is currently being upgraded from Jewel to Luminous it shows: 10 "ceph version 10.2.6 (656b5b63ed7c43bd014bcafd81b001959d5f089f)" 1670 […]

Do not use SMR disks with Ceph

Many new disks like the Seagate He8 disks are using a technique called Shingled Magnetic Recording (SMR) to increase capacity. As these disks offer a very low price per gigabyte they seem interesting to use in a Ceph cluster. Performance: due to the nature of SMR these disks are very, very, very bad when it comes […]

Testing Ceph BlueStore with the Kraken release

Ceph version Kraken (11.2.0) has been released and the Release Notes tell us that the new BlueStore backend for the OSDs is now available. BlueStore: the current backend for the OSDs is the FileStore, which mainly uses the XFS filesystem to store its data. To overcome several limitations of XFS and POSIX in general, the […]
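For context on what trying BlueStore involved at the time: in Kraken it was still marked experimental and had to be enabled explicitly; roughly like this (the disk path /dev/sdb is an example):

    # ceph.conf: opt in to the experimental backend (Kraken only)
    #   [global]
    #   enable experimental unrecoverable data corrupting features = bluestore

    # Prepare a BlueStore OSD with ceph-disk (since replaced by ceph-volume)
    ceph-disk prepare --bluestore /dev/sdb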