Ceph cluster

Rebuilding libvirt under CentOS 7.1 with RBD storage pool support

by Wido den Hollander

If you want to use CentOS 7.1 for your hypervisors with Apache CloudStack and Ceph’s RBD as Primary Storage, you need to rebuild libvirt. CloudStack requires libvirt built with RBD storage pool support, since it uses libvirt to manage RBD volumes. By default, libvirt under CentOS is not built with this support. (On Ubuntu…
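A rough sketch of such a rebuild, under the assumption that the libvirt spec file gates the RBD backend behind a `with_storage_rbd` conditional (check the spec of your exact libvirt version; package names here are illustrative):

```shell
# Fetch the source RPM and its build dependencies (yum-utils provides both tools)
yumdownloader --source libvirt
yum-builddep -y libvirt
rpm -ivh libvirt-*.src.rpm

# Rebuild with the RBD storage backend enabled; requires ceph/librbd
# development packages to be installed as well
rpmbuild -ba --define "with_storage_rbd 1" ~/rpmbuild/SPECS/libvirt.spec
```

The resulting RPMs can then replace the stock libvirt packages on the hypervisors.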


NFS-Ganesha with libcephfs on Ubuntu 14.04

by Wido den Hollander

This week I’m testing a lot with CephFS, and one of the things I never tried was re-exporting CephFS using NFS-Ganesha and libcephfs. NFS-Ganesha is an NFS server which runs in userspace. It has multiple backends (FSALs) it can use, and libcephfs is one of them. libcephfs is a userspace library which you can use…
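The FSAL wiring described above can be sketched in ganesha.conf along these lines; the export and pseudo paths are examples, and option names vary between NFS-Ganesha releases:

```
EXPORT
{
    Export_Id = 1;
    Path = "/";              # path inside CephFS to re-export
    Pseudo = "/cephfs";      # where NFSv4 clients see it
    Access_Type = RW;

    FSAL {
        # Use libcephfs to talk to CephFS directly from userspace
        Name = CEPH;
    }
}
```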


Ceph with a cluster and public network on IPv6

by Wido den Hollander

I’m a big fan of Ceph and IPv6, so I always try to deploy Ceph over IPv6 when possible. Ceph is the future, just like IPv6 is. Why implement legacy? Recently I did a deployment of Ceph with a public and cluster network running over IPv6. It has a small catch, so let me…
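A minimal ceph.conf sketch for such a setup, using IPv6 documentation prefixes as stand-ins for real subnets (the `ms bind ipv6` flag is the usual stumbling block with IPv6 deployments, since Ceph binds to IPv4 unless told otherwise):

```
[global]
    # Required for IPv6; without it the daemons bind IPv4 only
    ms bind ipv6 = true

    # Example prefixes -- substitute your own subnets
    public network  = 2001:db8:100::/64
    cluster network = 2001:db8:200::/64

[mon.alpha]
    # IPv6 monitor addresses go in brackets
    mon addr = [2001:db8:100::1]:6789
```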


PowerDNS backend for a global RADOS Gateway namespace

by Wido den Hollander

At my hosting company PCextreme we are building a cloud offering based on Ceph and CloudStack. We call our cloud services Aurora. Our cloud services are composed out of two components: Compute and Objects. For our Aurora Objects service we use the RADOS Gateway from Ceph and we are using the Federated Config to create…
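The core lookup such a PowerDNS backend performs can be sketched as a small function: given a bucket hostname, answer with the endpoint of the region holding that bucket. Everything below (zone name, bucket-to-region table, endpoints) is made up for illustration; in practice the table would be fed from the RADOS Gateway’s admin API.

```python
# Hypothetical bucket -> region mapping, e.g. filled from the RGW admin API
BUCKET_REGIONS = {
    "alice-photos": "eu",
    "bob-backups": "us",
}

# Hypothetical region -> RGW endpoint mapping (CNAME targets)
REGION_ENDPOINTS = {
    "eu": "rgw-eu.example.com.",
    "us": "rgw-us.example.com.",
}

def resolve_bucket(qname, zone="o.example.com"):
    """Return the CNAME target for <bucket>.<zone>, or None if unknown."""
    suffix = "." + zone
    if not qname.endswith(suffix):
        return None
    bucket = qname[: -len(suffix)]
    region = BUCKET_REGIONS.get(bucket)
    if region is None:
        return None
    return REGION_ENDPOINTS[region]

print(resolve_bucket("alice-photos.o.example.com"))  # rgw-eu.example.com.
```

A real backend would wrap this lookup in PowerDNS’s pipe or remote backend protocol and emit CNAME records, so every bucket resolves to the correct region regardless of where the client asks.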


Deploying Ceph over IPv6

by Wido den Hollander

I like to deploy Ceph clusters over IPv6. I actually think that’s the way forward. IPv4 is legacy, just like iSCSI and NFS are. Last week I was at a customer deploying a new Ceph cluster and they wanted to deploy with IPv6! Most deployments I did with IPv6 were done manually and not with…


Calculating RADOS objects for RBD images

by Wido den Hollander

Ceph’s RBD (RADOS Block Device) is just a thin wrapper on top of RADOS, the object store of Ceph. It stripes (by default) over 4MB objects in RADOS. It’s very simple to calculate which RADOS object corresponds with which sector on your RBD image/block device. First you have to find out the block device’s object…
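The calculation can be sketched as follows, assuming the default 4 MiB object size and a format-2 image; the `block_name_prefix` (made up here) is what `rbd info <image>` reports for the real image:

```python
OBJECT_SIZE = 4 * 1024 * 1024  # 4 MiB, the RBD default stripe/object size

def rados_object_for_offset(block_name_prefix, offset):
    """Return the name of the RADOS object holding the given byte offset."""
    object_number = offset // OBJECT_SIZE
    # Format-2 images append the object number as 16 hex digits
    return "%s.%016x" % (block_name_prefix, object_number)

# Byte 10,000,000 falls in the third 4 MiB object (object number 2):
print(rados_object_for_offset("rbd_data.1014b2ae8944a", 10_000_000))
```

To go from a 512-byte sector to a byte offset, multiply the sector number by 512 first.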


Safely backing up your Ceph monitors

by Wido den Hollander

So you might wonder: why do I need to make a backup of my Ceph monitors? I have multiple monitors. That’s true, but should you run into the very unfortunate situation where you lose all your monitors, you lose all your data. The monitors contain very important metadata (pgmap, osdmap, crushmap) to run your cluster…
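As a hedged sketch (not necessarily the post’s exact procedure), one safe approach is to stop a monitor, archive its data directory, and start it again; the mon ID and paths below are examples:

```shell
# Hypothetical mon ID and paths -- adjust to your cluster
MON_ID=alpha
MON_DIR=/var/lib/ceph/mon/ceph-${MON_ID}

# Stop the monitor so its on-disk store is quiescent
service ceph stop mon.${MON_ID}

# Archive the monitor store, which holds the cluster maps
tar czf /backup/ceph-mon-${MON_ID}-$(date +%F).tar.gz "${MON_DIR}"

service ceph start mon.${MON_ID}
```

Stopping one monitor at a time keeps the quorum intact while each store is backed up.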


Changing the region of a RGW bucket

by Wido den Hollander

As of Ceph version 0.67 (Dumpling) the Ceph Object Gateway aka RADOS Gateway supports regions. This allows you to create a geo-replicated Amazon S3 compatible service. While working on a setup we decided later in the process that we wanted regions, but we already created about 50 buckets with data in them. We didn’t feel…
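A sketch of the kind of metadata surgery involved, using radosgw-admin’s `metadata get`/`metadata put` commands; the bucket name is an example and the exact JSON layout depends on your Ceph version:

```shell
# Dump the bucket's metadata to a file (bucket name is an example)
radosgw-admin metadata get bucket:mybucket > mybucket.json

# Edit the region field in the JSON by hand, then load it back
radosgw-admin metadata put bucket:mybucket < mybucket.json
```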


A quick note on running CloudStack with RBD on Ubuntu 12.04

by Wido den Hollander

When you want to use Ceph as Primary Storage in Apache CloudStack you need a recent version of libvirt with RBD storage pool support enabled. If you want to use Ubuntu 12.04 LTS (Precise) you would need to manually compile libvirt since the default libvirt version doesn’t include RBD storage pool support. But not any…
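For reference, an RBD storage pool in libvirt is defined with XML along these lines; the pool name, monitor host, and secret UUID are made-up examples:

```xml
<pool type='rbd'>
  <name>cloudstack-primary</name>
  <source>
    <!-- the Ceph pool to use -->
    <name>cloudstack</name>
    <!-- one or more monitor addresses -->
    <host name='mon1.example.com' port='6789'/>
    <auth username='admin' type='ceph'>
      <secret uuid='6d096dba-93ea-4c4e-9f1a-111111111111'/>
    </auth>
  </source>
</pool>
```

A libvirt built without RBD support will refuse to define a pool of this type, which is exactly why the newer packages matter.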


Redundant Ceph monitors with Round Robin DNS

by Wido den Hollander

One of the unique features of Ceph is that it can be built without any Single Point of Failure. When designed properly, no single machine will take your cluster down. Ceph’s monitors play a crucial part in this. To make them redundant you want an odd number of monitors, where 3 is more than sufficient…
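The idea can be sketched with a zone file; hostnames and the addresses (from the documentation range) are examples:

```
; Three monitors behind a single Round Robin name
mon.ceph.example.com.  IN  A  192.0.2.11
mon.ceph.example.com.  IN  A  192.0.2.12
mon.ceph.example.com.  IN  A  192.0.2.13
```

Clients can then point `mon host` in their ceph.conf at `mon.ceph.example.com` and will find a live monitor even if one address is down.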

