Ceph Quincy, a look into the future

Every year, a new Ceph version is released. In 2019, version 14 (Nautilus) was released, in 2020, version 15 (Octopus), and in 2021, version 16 (Pacific). These versions have an end-of-life date, so make sure you are up to date and operate the same version throughout your Ceph clusters. The end-of-life date […]

Fairbanks and 42on at the OpenInfra Summit Berlin

The OpenInfra Summit is a global event for open source IT infrastructure professionals to collaborate on software development, share best practices about designing and running infrastructure in production, and make partnership and purchase decisions. This year (June 7th, 8th, and 9th) it is being held in Berlin and brings together more than 2000 influential IT […]

Deploying a Ceph cluster with cephadm

Cephadm deploys and manages a Ceph cluster. It does this by connecting the manager daemon to hosts via SSH. The manager daemon is able to add, remove, and update Ceph containers. In this blog, I will provide you with a little more information about deploying a Ceph cluster using cephadm. Cephadm manages the full lifecycle of a […]
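As a minimal sketch of that bootstrap workflow (the monitor IP 192.0.2.10, the hostname node2, and the quincy branch in the download URL below are placeholder assumptions, not values from the post):

    # Fetch the standalone cephadm script; pick the branch matching your target release
    curl --silent --remote-name --location https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
    chmod +x cephadm

    # Bootstrap a new cluster: starts a monitor and manager daemon on this host
    ./cephadm bootstrap --mon-ip 192.0.2.10

    # Hand an additional host over to the orchestrator, which manages it via SSH
    ceph orch host add node2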

About RBD, the RADOS Block Device

For anybody enjoying Latin pop in the 2000s, RBD was a popular Mexican pop band from Mexico City signed to the EMI Virgin label. The group achieved international success from 2004 until their separation in 2009 and sold over 15 million records worldwide, making them one of the best-selling Latin music artists of all time. The group […]

What about our Ceph Fundamentals training?

Currently, we are planning a new Ceph Fundamentals training course for November or December. Because of this, we thought it would be nice to share some more information on the topics of this training course. The 42on Ceph Fundamentals Training is a 2-day online, instructor-led training course. The training is led by one of our […]

Improving some aspects of your Ceph cluster

As in-depth performance optimization research can cost a lot of time and effort, here are some steps you might first want to try to improve your overall Ceph cluster. The first step is to deploy Ceph on newer releases of Linux, and to deploy on releases with long-term support. Secondly, update your hardware design and service placement. As […]
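As a starting point for that first step, it helps to know which distribution and kernel your OSD hosts are actually running. A quick sketch using the metadata Ceph already collects (the distro_description and kernel_version field names come from the ceph osd metadata JSON output):

    # Summarize the Linux distribution and kernel version across all OSD hosts
    ceph osd metadata | jq -r '.[].distro_description' | sort | uniq -c
    ceph osd metadata | jq -r '.[].kernel_version' | sort | uniq -c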

5 more ways to break your Ceph cluster

While we’ve been working with customers using Ceph in a variety of ways, we have encountered several ways to break your Ceph cluster. In that light, here is an update with five more ways to break your Ceph cluster, as a continuation of the original presentation by Wido den Hollander called […]

Placement Groups with Ceph Luminous stay in activating state

Placement Groups stuck in activating: when migrating from FileStore to BlueStore with Ceph Luminous you might run into the problem that certain Placement Groups stay stuck in the activating state, for example: 44 activating+undersized+degraded+remapped. PG overdose: this is a side-effect of the new PG overdose protection in Ceph Luminous. Too many PGs on your OSDs can cause […]
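A common workaround is to raise the limit that triggers the protection. A sketch, assuming the example value 500 fits your actual PG-per-OSD count:

    # Raise the per-OSD PG limit at runtime (the Luminous default is 200)
    ceph tell mon.* injectargs '--mon_max_pg_per_osd=500'

    # To persist the change, add it to ceph.conf on the monitors:
    # [global]
    # mon_max_pg_per_osd = 500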

Quick overview of Ceph version running on OSDs

When checking a Ceph cluster it’s useful to know which versions your OSDs in the cluster are running. There is a very simple one-line command to do this: ceph osd metadata | jq '.[].ceph_version' | sort | uniq -c. Running this on a cluster which is currently being upgraded from Jewel to Luminous shows: 10 "ceph version 10.2.6 (656b5b63ed7c43bd014bcafd81b001959d5f089f)" 1670 […]
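Once the cluster is on Luminous or newer, the built-in ceph versions command gives a similar per-version count for all daemon types at once, not just the OSDs:

    # Counts the running versions of mon, mgr, osd and mds daemons in one go
    ceph versions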

Do not use SMR disks with Ceph

Many new disks like the Seagate He8 disks are using a technique called Shingled Magnetic Recording to increase capacity. As these disks offer a very low price per gigabyte they seem interesting to use in a Ceph cluster. Performance: due to the nature of SMR these disks are very, very, very bad when it comes […]
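If you are unsure whether a disk uses SMR, reasonably recent kernels expose zone information for host-aware and host-managed models; a quick check (note that drive-managed SMR disks report "none" here and can only be identified from the vendor's datasheet):

    # "host-aware" or "host-managed" indicates a zoned (SMR) disk, "none" a conventional one
    lsblk -o NAME,SIZE,ZONED
    cat /sys/block/sda/queue/zoned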