With June already here, it is time for the annual Ceph month. Previously held as an in-person conference called Cephalocon, this year's event will be held online due to the pandemic.
So, why Ceph month? Ceph month aims to bring together engineers and architects with an interest in Ceph, software-defined storage, and data center storage infrastructure to collaborate and share their knowledge and opinions about the future of storage. During Ceph month, Ceph presentations, lightning talks, and unconference sessions such as BoFs (Birds of a Feather) will be held by various members of the Ceph community. We at 42on are actively involved in the Ceph community and are keen to share our knowledge with you. This year we have signed up for several technical lightning talks, on which you can find more information below:
– June 14 | 16:00 CEST: 5 more ways to break your Ceph cluster.
– June 15 | 15:00 CEST: RBD latency with QD=1 bs=4k.
– June 16 | 15:30 CEST: Qemu: librbd vs krbd performance.
The whole calendar can be found here: https://pad.ceph.com/p/ceph-month-june-2021
All our lightning talks will take no more than 5 minutes, after which there will be a five-minute Q&A session. After each talk we will publish the subject here and add some additional guidance. We might also follow up here on any interesting questions asked during the session.
Presentation: ‘5 more ways to break your Ceph cluster’
Now, you might be wondering: why is it called 5 more ways to break your Ceph cluster, instead of 5 ways to break your Ceph cluster? The reason is that this lightning talk by our colleague Wout van Heeswijk is an addition to a presentation Wido den Hollander gave four years ago. Back then he presented 10 ways to break your Ceph cluster at a Ceph Day in Germany. The information he shared was gathered from his own first-hand experience with clients and Ceph. So, to make sure you know what not to do with your Ceph cluster before joining our presentation, here is Wido den Hollander's original list of ways to break your Ceph cluster.
1. Wrong CRUSH failure domain.
2. Decommissioning a host.
3. Removing log files in MON's data directory.
4. Removing the wrong pool.
5. Setting the noout flag for a long time.
6. Mounting XFS with the nobarrier option.
7. Enabling writeback on an HBA without a BBU.
8. Creating too many placement groups.
9. Using 2x replication.
10. Underestimating Monitors.
11. Updating Cephx keys with the wrong permissions.
For the full talk, see: https://www.youtube.com/watch?v=-FOYXz3Bz3Q
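Several of the pitfalls above can be checked with standard Ceph CLI commands. As a sketch (the pool name `rbd` below is only a placeholder; substitute your own pools):

```shell
# Item 1: inspect the failure domain of your CRUSH rules.
# Look at the "type" in the chooseleaf step, e.g. "host" vs "osd".
ceph osd crush rule dump

# Item 9: verify replication settings. For a typical replicated
# pool you want size=3 and min_size=2.
ceph osd pool get rbd size
ceph osd pool get rbd min_size

# Item 5: check for long-standing cluster flags such as noout.
ceph osd dump | grep flags

# Item 8: review placement group counts per pool.
ceph osd pool ls detail
```

These commands only read cluster state, so they are safe to run on a production cluster.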
Since Wido's talk we have received a lot of feedback and follow-up questions from Ceph users about these issues. And of course the community has resolved many of those issues in newer releases of Ceph. As we at 42on still support Ceph users with their operations, and especially with emergencies, we thought it was time to update the list as well, working with more recent versions of Ceph.
RBD latency with QD=1 bs=4k
This year, Wido will home in on a question many of our customers face: what latency can we achieve with RBD when benchmarking with a queue depth of 1 and a block size of 4k? Many applications benefit from low latency at low queue depths. And how much can we improve this with the new persistent RBD cache? With fio we can properly benchmark different configurations and see what is achievable. These are the questions we will cover during this lightning talk.
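As an illustration of such a benchmark, a fio job along these lines measures QD=1 4k random-write latency against an RBD image through librbd (this is a sketch; the pool name, image name, and cephx user are placeholders, and fio must be built with rbd support):

```ini
[global]
ioengine=rbd          ; librbd engine, requires fio built with rbd support
clientname=admin      ; cephx user (placeholder)
pool=rbd              ; pool name (placeholder)
rbdname=bench-image   ; pre-created RBD image (placeholder)
direct=1
rw=randwrite
bs=4k                 ; 4k block size
iodepth=1             ; queue depth of 1
numjobs=1
runtime=60
time_based=1

[qd1-4k-write]
```

At QD=1 the interesting numbers in fio's output are the completion latency percentiles (clat), not the bandwidth figure.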
Qemu: librbd vs krbd performance
We have encountered several support customers with questions about the performance of qemu in combination with Ceph RBD. The perception seems to be that Ceph RBD storage could be faster. We have investigated this and will show that Ceph itself is in fact not the most likely bottleneck. Our test results show vastly different I/O stats between qemu using librbd and qemu using krbd. We will share our tests and test configurations so members of the community can reproduce them, increase the level of knowledge on this issue, and spark initiatives to resolve it to the benefit of all Ceph users.
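For context, these are the two attachment paths being compared. A sketch with placeholder pool/image names (`rbd/vm-disk`) and cephx user (`admin`); the rest of the qemu command line is omitted:

```shell
# Path 1: userspace librbd. qemu talks to the cluster directly
# via its rbd block driver.
qemu-system-x86_64 \
  -drive file=rbd:rbd/vm-disk:id=admin,format=raw,if=virtio \
  ...

# Path 2: kernel RBD (krbd). The image is first mapped to a
# block device by the kernel client, then handed to qemu as a
# plain raw device.
rbd map rbd/vm-disk        # typically yields /dev/rbd0
qemu-system-x86_64 \
  -drive file=/dev/rbd0,format=raw,if=virtio \
  ...
```

The guest sees a virtio disk in both cases; only the path between qemu and the Ceph cluster differs, which is exactly where the I/O stats diverge.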
So, now you know the topics 42on will cover this Ceph month. Our presentations will be given by two of our Ceph experts, Wido den Hollander and Wout van Heeswijk, who are also active and enthusiastic participants in the Ceph community.
Join the Ceph community and 42on as we discuss how Ceph, the massively scalable, open-source, software-defined storage system, can radically improve the economics and management of data storage for your enterprise.
Which sessions do you plan to attend? Let us know and we will e-meet you there!