Cephalocon talks we especially look forward to 

Cephalocon is right around the corner, and we have made sure that the boats are ready, the shirts have been ordered, the pastry chef is on stand-by and the meet-ups have been arranged. If you have no idea what we are talking about, feel free to read our previous blog about what we have in store for you at Cephalocon, including a special surprise. You can read it through the following link: https://42on.com/what-we-have-in-store-for-cephalocon/ 

In that blog we also discuss which talks the founder and the CTO of 42on will give at Cephalocon. In this post, however, we would like to share some of the other talks we are excited about. Below you will find an overview of those talks, the speakers and what each talk is about. 

Starting with Monday, these are the talks that seem particularly interesting to us. 

Mike Perez, Ceph Foundation – Welcome and opening remarks  

Mike is currently the community manager and acting director for the Ceph Foundation. Being a contributing member of OpenStack since 2010, he has served as a core developer for the OpenStack block storage project Cinder and as a project technical lead for the Kilo and Liberty releases. During some of this time, he worked for DreamHost to help with their OpenStack public cloud, one of the first large production deployments of Ceph, and helped with integrating a variety of block storage solutions like Ceph in Cinder.  

Vincent Hsu, IBM – Keynote session 

During the keynote, Vincent will discuss Ceph-related updates and other important topics around Ceph. 

Travis Nielsen, IBM – Rook: Why would you ever deploy Ceph inside Kubernetes?

If Ceph was created to run on bare metal, can enterprises really trust Ceph to be run in a Kubernetes environment? Yes! Rook has brought Ceph into the Kubernetes ecosystem, fully integrated to provide storage to K8s applications. Rook configures Ceph to provide stable block (RWO), shared file system (RWX), and object storage (S3) for production workloads. This lightning talk will give a quick overview of Rook and show that many admins have been deploying Ceph in Kubernetes with great success. 

We are especially excited about Travis Nielsen’s talk, because we come across Rook quite often in our own line of work. For that reason we have also written a blog about Rook. If you are interested, you can read our Rook blog through the following link: https://42on.com/what-is-rook-and-why-use-it/  
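
To give an idea of how little an application has to care about Ceph internals once Rook is in place, below is a minimal sketch (using the official Python kubernetes client) of requesting Ceph-backed block storage from Kubernetes. The StorageClass name "rook-ceph-block" is the one used in the Rook examples and may differ in your cluster; everything else is a placeholder too.

```python
# Minimal sketch: request a Ceph-backed volume through Rook's StorageClass.
# Assumes a running Rook/Ceph cluster and a StorageClass named "rook-ceph-block"
# (the name used in the Rook examples; yours may differ).
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-rbd-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],        # RWO, i.e. backed by an RBD block image
        storage_class_name="rook-ceph-block",  # assumed Rook-provisioned StorageClass
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

# Creating the claim is all the application does; Rook and the Ceph CSI driver
# handle creating and attaching the underlying RBD image.
client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```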

Casey Bodley, IBM – Ceph object storage overview, capabilities and future plans  

In this session, Casey Bodley will go over the current capabilities of Ceph object storage offered through the RADOS gateway (RGW), and also talk about the upcoming capabilities and enhancements being worked on for RGW and object storage in Ceph. Casey will cover an overview of the Ceph object storage capabilities and the new features delivered in the Quincy and Reef releases, such as multi-site, key management and multi-tenancy related enhancements. He will then cover planned features and enhancements like the Zipper project, the back-ends planned for the Zipper gateway, and the caching and performance projects currently being worked on. 
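
Because RGW exposes an S3-compatible API, existing S3 tooling largely works against it unchanged. As a small illustration (not part of the talk), here is a hedged sketch using boto3; the endpoint URL and credentials are placeholders for whatever your own RGW deployment provides.

```python
# Minimal sketch: talking to the Ceph RADOS gateway (RGW) through its S3-compatible API.
# Endpoint, access key and secret key are placeholders for your own RGW setup.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",  # placeholder RGW endpoint
    aws_access_key_id="ACCESS_KEY",              # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"stored in Ceph via RGW")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```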

Kevin Hrpcek, Space Science & Engineering Center, University of Wisconsin-Madison – Ceph in scientific computing and large clusters, BoF 

Ceph has found its place in supporting many scientific projects throughout the world and it is also used as a backend in many large clusters for companies. High throughput/performance computing introduces its own challenges and these groups are often pushing the limits of Ceph whether it is in cluster size, throughput, or clients. Join this BoF session for a chance to connect with people who use Ceph to support science and research or at the multi petabyte scale. 

On Tuesday the following talks seem particularly interesting to us, starting with our own talk of course! 

Wout van Heeswijk and Wido den Hollander, 42on – How to make a positive impact from both an environmental and financial perspective  

By choosing Ceph you can extend the lifetime of your hardware. Maximizing the lifespan of hardware has both a financial and an environmental impact. What is the impact of keeping hardware running for 1, 2 or 3 years longer? How much CO2 emission is prevented by not replacing hardware, and how does this compare to the improved electricity consumption of newer hardware? 
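
To make the trade-off concrete: it boils down to weighing the embodied emissions of manufacturing a replacement server against the extra electricity an older, less efficient machine consumes. The sketch below only shows the structure of that calculation; all numbers are illustrative placeholders, not figures from the talk.

```python
# Back-of-envelope sketch of the replace-vs-keep trade-off discussed above.
# Every number here is an illustrative placeholder -- substitute your own measurements.
embodied_co2_new_server_kg = 1300.0   # assumed embodied CO2e of manufacturing a new server
old_server_power_w = 350.0            # assumed average draw of the existing server
new_server_power_w = 250.0            # assumed average draw of a newer replacement
grid_intensity_kg_per_kwh = 0.35      # assumed CO2e per kWh of electricity

hours_per_year = 24 * 365
extra_kwh_per_year = (old_server_power_w - new_server_power_w) / 1000 * hours_per_year
extra_co2_per_year_kg = extra_kwh_per_year * grid_intensity_kg_per_kwh

# How many extra years of runtime before the avoided manufacturing emissions
# are cancelled out by the efficiency gap of newer hardware?
break_even_years = embodied_co2_new_server_kg / extra_co2_per_year_kg
print(f"Extra CO2e from keeping the old server: {extra_co2_per_year_kg:.0f} kg/year")
print(f"Break-even point for replacing it: {break_even_years:.1f} years")
```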

Enrico Bocchi and Abhishek Lekshmanan, CERN – Improving business continuity for an existing large scale Ceph infrastructure 

The IT Department at CERN (European Organization for Nuclear Research) operates a large-scale computing and storage infrastructure for processing scientific data and providing IT services to its user community. Ceph is a critical part of this picture, as it provides:

1. Block storage for the OpenStack infrastructure (440k cores – 25 PB)
2. S3 object storage for cloud-native applications, HTTP-based software distribution, and backup needs (28 PB)
3. CephFS for shared filesystems in HPC clusters and storage persistency in OpenShift and Kubernetes (11 PB)

In the past year, CERN has evaluated different Ceph storage features to offer solutions for high(er) availability and disaster recovery / business continuity. More specifically, CERN will detail how they transitioned from a single RBD zone to multiple storage AZs, how they hardened and optimized RBD snapshot mirroring for OpenStack, and how they evaluated different strategies for object storage replication across multiple sites. CERN will also report on future plans for block storage backups, the deployment of stretch clusters, and CephFS backups to S3 and tape storage via a restic-based orchestrator. 

Tom Byrne, UK Research and Innovation, Science and Technology Facilities Council – Optimizing Ceph IO for high throughput particle physics workflows  

Echo is a 65PB Ceph cluster located in the UK, used by the Large Hadron Collider (LHC) experiments at CERN for raw data storage. The cluster supports sustained data rates of hundreds of GB/s to local analysis clusters through a high-energy-physics-specific storage framework (XRootD) using a custom storage plugin (XrdCeph). In the run-up to the third LHC data-taking phase, the team at STFC has been trying to improve XrdCeph’s IO efficiency and performance. The team has focused on specific read patterns common in HEP analysis workflows (for example, the ‘vector read’ pattern: a large array of discrete reads throughout a file). In this talk Tom will discuss some of the low-level mechanics of RADOS reads and how the original XrdCeph plugin interacted with Ceph, including highlighting the need to support new IO patterns for the LHC experiments. He will also touch upon the more general experience of running an extremely large Ceph cluster for a high-throughput computing use case. 
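
For readers unfamiliar with what a ‘vector read’ looks like at the RADOS level, here is a small hedged sketch using the python-rados bindings: many small reads at scattered offsets within a single object. The pool and object names are made up, and the real XrdCeph plugin works in C++ against librados; this only illustrates the access pattern the talk is about.

```python
# Illustrative sketch of a "vector read" style access pattern against RADOS,
# using the python-rados bindings. Pool and object names are hypothetical.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # assumes a local ceph.conf and keyring
cluster.connect()
ioctx = cluster.open_ioctx("example-pool")             # hypothetical pool name

# A vector read: many small, discrete reads scattered throughout one object,
# rather than one large sequential read.
extents = [(0, 4096), (1_048_576, 4096), (8_388_608, 4096)]  # (offset, length) pairs
chunks = [ioctx.read("example-object", length, offset) for offset, length in extents]

print(f"fetched {len(chunks)} extents, {sum(len(c) for c in chunks)} bytes total")
ioctx.close()
cluster.shutdown()
```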

This was the overview of the talks we are most looking forward to. If you would like to view the full schedule of talks, keynotes and BoFs that will be held at Cephalocon, you can do so through the following link: https://ceph2023.sched.com  
