Improving some aspects of your Ceph cluster

Because performance optimization research can cost a lot of time and effort, here are some steps you might want to try first to improve your Ceph cluster overall. The first step is to deploy Ceph on newer Linux releases, preferably ones with long-term support. Secondly, update your hardware design and service placement. As […]
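Before planning such upgrades, it helps to know which Ceph and kernel releases a node is currently running. The sketch below is a minimal example, assuming only that the standard `ceph` CLI and `uname` are available on the node; it is an illustration, not part of the original post.

```python
#!/usr/bin/env python3
"""Report the local Ceph and kernel versions as a starting point for upgrade planning."""
import subprocess


def run(cmd):
    """Run a command and return its stripped stdout, or None if it fails."""
    try:
        return subprocess.check_output(cmd, text=True).strip()
    except (OSError, subprocess.CalledProcessError):
        return None


if __name__ == "__main__":
    ceph_version = run(["ceph", "--version"])  # e.g. "ceph version 11.2.0 (...)"
    kernel = run(["uname", "-r"])              # e.g. "4.9.0-..."
    print(f"Ceph:   {ceph_version or 'not installed'}")
    print(f"Kernel: {kernel or 'unknown'}")
```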

Ceph RBD latency with QD=1 bs=4k

You might be asking yourself: why talk about QD=1 bs=4k? Well, from the experience of Wido den Hollander, our colleague at 42on, single-thread I/O latency is very important for many applications. What we see with benchmarks is that people focus on high bandwidth and large amounts of IOPS. They simply go […]
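One common way to measure this kind of single-thread latency is to run fio against an RBD image with a queue depth of 1 and a 4k block size. The sketch below wraps such a run from Python; the pool and image names are placeholders, and it assumes fio is installed with RBD support (the `rbd` ioengine). This is an illustrative setup, not the exact benchmark from the post.

```python
#!/usr/bin/env python3
"""Sketch: run a QD=1, bs=4k random-write fio job against an RBD image.

Assumes fio with the rbd ioengine is installed and that the pool/image below exist.
"""
import subprocess

POOL = "rbd"           # placeholder pool name
IMAGE = "bench-image"  # placeholder RBD image name

fio_cmd = [
    "fio",
    "--name=qd1-4k-randwrite",
    "--ioengine=rbd",
    f"--pool={POOL}",
    f"--rbdname={IMAGE}",
    "--clientname=admin",
    "--rw=randwrite",
    "--bs=4k",
    "--iodepth=1",    # queue depth 1: one outstanding I/O at a time
    "--numjobs=1",    # single thread
    "--direct=1",
    "--runtime=60",
    "--time_based",
]

if __name__ == "__main__":
    # fio's completion latency (clat) percentiles are the numbers to watch here
    subprocess.run(fio_cmd, check=True)
```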

Testing Ceph BlueStore with the Kraken release

Ceph version Kraken (11.2.0) has been released, and the Release Notes tell us that the new BlueStore backend for the OSDs is now available. BlueStore: the current backend for the OSDs is FileStore, which mainly uses the XFS filesystem to store its data. To overcome several limitations of XFS and POSIX in general, the […]
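When experimenting with BlueStore in a mixed cluster, it can be useful to see which OSDs are already on BlueStore and which still run FileStore. A minimal sketch, assuming the standard `ceph osd ls` and `ceph osd metadata` commands are available and that the host has admin access to the cluster:

```python
#!/usr/bin/env python3
"""Count how many OSDs report BlueStore vs. FileStore as their objectstore backend."""
import json
import subprocess
from collections import Counter


def ceph_json(*args):
    """Run a ceph command with JSON output and return the parsed result."""
    out = subprocess.check_output(["ceph", *args, "--format", "json"], text=True)
    return json.loads(out)


if __name__ == "__main__":
    backends = Counter()
    for osd_id in ceph_json("osd", "ls"):
        meta = ceph_json("osd", "metadata", str(osd_id))
        # osd_objectstore reports the backend, e.g. "bluestore" or "filestore"
        backends[meta.get("osd_objectstore", "unknown")] += 1
    for backend, count in backends.items():
        print(f"{backend}: {count} OSD(s)")
```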