Getting started with Ceph?

By Michiel Manten

Let’s discuss some questions frequently asked by Ceph starters.

We at 42on love Ceph, and if you are thinking about running your storage on Ceph as well, there are a few things to consider. To help you fall in love with Ceph too, we have collected some frequently asked Ceph questions in this blog to help you on your way.

Can Ceph support multiple data centers? 
Ceph can span multiple data centers, with safeguards to keep your data safe. Ceph makes sure that OSD/monitor heartbeats and peering processes operate effectively despite the additional latency that may occur when hardware is deployed in different geographic locations. If your data centers have dedicated bandwidth and low latency, you can distribute your cluster across them without much trouble.
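Spreading replicas across data centers is driven by the CRUSH map. The sketch below shows an illustrative CRUSH rule (in decompiled crushmap syntax) that places each replica under a different `datacenter` bucket; the rule name and id are assumptions, and your CRUSH hierarchy must actually contain `datacenter` buckets for it to work.

```
# Illustrative rule: one replica per datacenter.
# Equivalent CLI shortcut:
#   ceph osd crush rule create-replicated replicated_multi_dc default datacenter
rule replicated_multi_dc {
    id 1
    type replicated
    step take default
    step chooseleaf firstn 0 type datacenter
    step emit
}
```

A pool can then be assigned this rule with `ceph osd pool set <pool> crush_rule replicated_multi_dc`.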

How does the interaction between Ceph Block devices and the hypervisor work? 
Currently, the QEMU/KVM hypervisor can interact with the Ceph block device. The librbd library allows you to use Ceph with QEMU/KVM, and most Ceph deployments use it. Cloud solutions like OpenStack and CloudStack also rely on libvirt and QEMU to integrate with Ceph.
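In a libvirt setup this integration is just a disk definition in the domain XML. A minimal sketch is shown below; the pool name, image name, monitor address and auth user are placeholders you would replace with your own values.

```xml
<!-- Illustrative libvirt disk backed by RBD via librbd; all names are placeholders. -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='libvirt-pool/vm-disk-1'>
    <host name='mon1.example.com' port='6789'/>
  </source>
  <auth username='libvirt'>
    <!-- uuid refers to a libvirt secret holding the cephx key -->
    <secret type='ceph' uuid='REPLACE-WITH-SECRET-UUID'/>
  </auth>
  <target dev='vda' bus='virtio'/>
</disk>
```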

Do Ceph clients run on Windows? 
Yes. Since the Ceph Pacific release, native Windows drivers are available. Check the upstream documentation for how to download and install the RBD and CephFS Windows clients. Windows access to RadosGW S3 storage has of course always been available through third-party clients.

How can I give Ceph a try? 
Follow the Ceph Quick Start guides in the upstream documentation. They will get you up and running quickly without requiring deeper knowledge of Ceph, and they will also help you avoid a few issues related to limited deployments.

How many OSDs can I run per host? 
Theoretically, a host can run as many OSDs as its hardware supports. Many vendors market storage hosts with large numbers of drives (for example, 45) capable of supporting many OSDs. 42on prefers a healthy mix of CPU, memory and disks: as Ceph is a distributed storage system, correctly scaling components and resources is of the utmost importance.
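To see why a "healthy mix" matters, a rough sizing sketch helps. The 4 GiB figure below is BlueStore's default `osd_memory_target`; the per-OSD CPU count and OS overhead are common rules of thumb, not hard requirements.

```python
OSD_MEMORY_TARGET_GIB = 4   # BlueStore default osd_memory_target
CPU_CORES_PER_OSD = 1       # rule-of-thumb minimum for HDD-backed OSDs

def host_resources_for(num_osds, os_overhead_gib=16):
    """Estimate RAM (GiB) and CPU cores a host needs for num_osds OSDs."""
    ram_gib = num_osds * OSD_MEMORY_TARGET_GIB + os_overhead_gib
    cores = num_osds * CPU_CORES_PER_OSD
    return ram_gib, cores

# A 45-drive chassis adds up quickly:
print(host_resources_for(45))  # -> (196, 45)
```

A dense 45-drive host thus needs on the order of 200 GiB of RAM and 45 cores just for the OSDs, which is why we recommend balancing drive count against CPU and memory.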

What kind of hardware does Ceph require? 
Ceph runs on commodity hardware. A typical configuration involves a rack mountable server with a baseboard management controller, multiple processors, multiple drives, and multiple NICs. There is no requirement for proprietary hardware.

What kind of network throughput do I need? 
Network throughput requirements depend on your load. We recommend starting with an Ethernet speed that meets your requirements for running Ceph. Testing with 1 GbE is perfectly possible, but in production higher bandwidth is advisable.
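A back-of-the-envelope calculation shows why higher bandwidth matters in production: recovery after a disk failure means moving a lot of data over the network. The numbers below are illustrative only; real recovery speed depends on many more factors, and the 80% link-efficiency figure is an assumption.

```python
def transfer_hours(data_tb, link_gbit, efficiency=0.8):
    """Hours needed to move data_tb terabytes over a link_gbit link."""
    bits = data_tb * 8e12                          # TB -> bits (decimal units)
    seconds = bits / (link_gbit * 1e9 * efficiency)
    return seconds / 3600

print(round(transfer_hours(10, 1), 1))   # 10 TB over 1 GbE  -> 27.8 hours
print(round(transfer_hours(10, 10), 1))  # 10 TB over 10 GbE -> 2.8 hours
```

Re-replicating a single 10 TB drive takes more than a day on 1 GbE, but under three hours on 10 GbE.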

What kind of OS does Ceph require? 
Ceph runs on all kinds of Linux distributions, such as Debian, Ubuntu, CentOS, RHEL and Fedora. You can also download Ceph source tarballs and build Ceph for your distribution if you are brave enough. This is of course not something you want to do as a new Cepher.

There you have it, some frequently asked questions from new Cephers! We hope this will help you on your way. Also, a very active ceph-users community exists where all kinds of operational and technical questions are asked.

Are you already using Ceph and do you want to know how to expand your Ceph cluster? You can read the blog we wrote about expanding your Ceph cluster.

