Improving some aspects of your Ceph cluster

As in-depth performance optimization research can cost a lot of time and effort, here are some steps you might want to try first to improve your Ceph cluster as a whole. The first step is to deploy Ceph on newer Linux releases, preferably releases with long-term support. Secondly, update your hardware design and service placement. As […]

Best practices for CephFS

File storage is a data storage format in which data is stored and managed as files within folders. Why would you choose to use file storage? Well, the advantages of file storage include the following: 1. User-friendly interface: a simple file management and sharing system that is easy for human users to understand, which makes […]

How we use Ceph-Collect to work with you

When we work on Ceph storage infrastructures, we always ask the customer to run a diagnostic tool first. For these diagnostics we have developed a tool called Ceph-Collect. The tool is used to gather information from a Ceph cluster. After it runs, it shows an in-depth report about the storage architecture together with […]
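
As a rough illustration (the exact invocation is an assumption; always follow the current instructions we send along with the request), the tool is typically run on a host that has a working ceph.conf and an admin keyring, and it bundles its findings into a single archive you can send back to us:

```sh
# Assumed invocation of the ceph-collect script; verify the exact download
# location and instructions with 42on before running it on your cluster.
# Run on a host with a working ceph.conf and an admin keyring.
python3 ceph-collect

# The script gathers cluster status, health, and OSD/pool/PG details and
# writes them into a tarball (filename pattern assumed) that you can share.
ls ceph-collect_*.tar.gz
```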

Migration options for SUSE Enterprise Storage (SES) users

A few weeks ago, I posted a blog about how SUSE pulled the plug on SUSE Enterprise Storage (SES). SES is a Linux-based computer data storage product developed by SUSE and built on Ceph technology. A quick recap on the matter is that in 2020, SUSE acquired Rancher Labs, a Kubernetes management software vendor, and […]

SUSE pulls the plug on SES. What are your options now?

A few months back the news broke that SUSE is ending support for its Ceph-based SUSE Enterprise Storage (SES) product and, with one eye on Rancher, will promote Longhorn instead. As we mostly support very avid Ceph storage teams, we dove into the options you have if you are now a SUSE Enterprise Storage […]

Ceph RBD latency with QD=1 bs=4k

You might be asking yourself: why talk about QD=1 bs=4k? Well, from the experience of Wido den Hollander, our colleague at 42on, single-thread IO latency is very important for many applications. What we see with benchmarks is that people focus on high bandwidth and large numbers of IOPS. They simply go […]
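
As a minimal sketch of such a measurement (the pool name, image name and client name are placeholders, and fio must be built with RBD support), a QD=1, 4k random-write test against an RBD image could look like this:

```sh
# Single-threaded 4k random writes at queue depth 1 against an RBD image.
# 'rbd', 'bench-image' and 'admin' are placeholder names for this sketch.
fio --name=qd1-bs4k \
    --ioengine=rbd --clientname=admin --pool=rbd --rbdname=bench-image \
    --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 --direct=1 \
    --runtime=60 --time_based

# Look at the completion latency (clat) percentiles in the output rather than
# the bandwidth or IOPS numbers; that is the figure that matters at QD=1.
```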

How to handle large omap objects

Every once in a while a customer will ask me what to do with this message: "health: HEALTH_WARN 1 large omap objects". First, let's see what this means. Ceph services are built on top of RADOS, and Ceph stores data as Ceph/RADOS objects. A Ceph/RADOS object can consist of three major parts: data: a bytestream; key/value pairs: […]
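
As a quick sketch of how to track the warning down (the commands assume a recent Ceph release with the ceph config interface), these are the places to look:

```sh
# Show which pool(s) contain the large omap objects
ceph health detail

# Deep scrub logs the exact object that crossed the threshold in the cluster log
grep 'Large omap object' /var/log/ceph/ceph.log

# The thresholds that trigger the warning (omap key count and total omap size)
ceph config get osd osd_deep_scrub_large_omap_object_key_threshold
ceph config get osd osd_deep_scrub_large_omap_object_value_size_threshold

# If the objects live in an RGW bucket index pool, check bucket index shard usage
radosgw-admin bucket limit check
```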

5 more ways to break your Ceph cluster

While we’ve been working with customers using Ceph in a variety of ways, we have encountered several ways to break your Ceph cluster. In that light, here is an update with five more ways to break your Ceph cluster, as a continuation of the original presentation by Wido den Hollander called […]

42on at Ceph month!

With June already here, it is time for the annual Ceph month. Previously held as a conference called Cephalocon, this year the event will be held online, for obvious reasons. So, why Ceph month? Ceph month aims to bring together engineers and architects with an interest in Ceph, software-defined storage and data […]