OSDs

About RBD, Rados Block Device

by Michiel Manten

For anybody enjoying Latin pop in the 2000s, RBD was a popular Mexican pop band from Mexico City, signed to EMI Virgin. The group achieved international success from 2004 until their separation in 2009 and sold over 15 million records worldwide, making them one of the best-selling Latin music artists of all time. The group…


Best practices for cephadm and expanding your Ceph infrastructure

by Michiel Manten

If you want to expand your Ceph cluster by adding a new node, it is good to know what the best practices are. So, here is some more information about cephadm and some tips and tricks for expanding your Ceph cluster. First, a little bit about cephadm: cephadm creates a new Ceph cluster by “bootstrapping”…
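As a rough sketch of what that bootstrapping and expansion can look like (the host name and IP addresses below are placeholders, not values from the post):

    cephadm bootstrap --mon-ip 10.0.0.1          # create a new cluster on the first host
    ceph orch host add node2 10.0.0.2            # add an extra host to the cluster
    ceph orch apply osd --all-available-devices  # let cephadm turn free disks into OSDs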


Setting noout flag per Ceph OSD

by Wido den Hollander

Prior to Ceph Luminous you could only set the noout flag cluster-wide, which means that none of your OSDs will be marked as out. On large(r) clusters this isn’t always what you want, as you might be performing maintenance on a part of the cluster, but you still want other OSDs which go down to…
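As an illustration of the per-OSD flags Luminous introduced (OSD id 12 is a placeholder):

    ceph osd add-noout 12   # keep osd.12 from being marked out during maintenance
    ceph osd rm-noout 12    # clear the flag again once the work is done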


Placement Groups with Ceph Luminous stay in activating state

by Wido den Hollander

Placement Groups stuck in activating When migrating from FileStore with BlueStore with Ceph Luminuous you might run into the problem that certain Placement Groups stay stuck in the activating state. 44 activating+undersized+degraded+remapped PG Overdose This is a side-effect of the new PG overdose protection in Ceph Luminous. Too many PGs on your OSDs can cause…
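A common way to give such a cluster temporary headroom is to raise the overdose limit; a sketch assuming the Luminous mon_max_pg_per_osd option, with 400 as an example value (restart the monitors or inject the new value afterwards):

    # ceph.conf on the monitors
    [global]
    mon_max_pg_per_osd = 400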


Quick overview of Ceph version running on OSDs

by Wido den Hollander

When checking a Ceph cluster it’s useful to know which versions your OSDs in the cluster are running. There is a very simple one-line command to do this: ceph osd metadata|jq '.[].ceph_version'|sort|uniq -c. Running this on a cluster which is currently being upgraded from Jewel to Luminous shows: 10 "ceph version 10.2.6 (656b5b63ed7c43bd014bcafd81b001959d5f089f)" 1670…
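The same one-liner shown as a block for readability; on Luminous and newer, the built-in ceph versions command gives a comparable per-daemon summary:

    ceph osd metadata | jq '.[].ceph_version' | sort | uniq -c
    ceph versions   # Luminous and newer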


Do not use SMR disks with Ceph

by Wido den Hollander

Many new disks, like the Seagate He8 disks, use a technique called Shingled Magnetic Recording (SMR) to increase capacity. As these disks offer a very low price per gigabyte they seem interesting to use in a Ceph cluster. Performance: due to the nature of SMR these disks are very, very, very bad when it comes…


Testing Ceph BlueStore with the Kraken release

by Wido den Hollander

Ceph version Kraken (11.2.0) has been released and the Release Notes tell us that the new BlueStore backend for the OSDs is now available. BlueStore: the current backend for the OSDs is FileStore, which mainly uses the XFS filesystem to store its data. To overcome several limitations of XFS and POSIX in general, the…
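A minimal sketch of trying BlueStore on Kraken, assuming the ceph-disk workflow of that era and that BlueStore was still flagged experimental (/dev/sdb is a placeholder device):

    # ceph.conf – needed while BlueStore was still experimental
    [osd]
    enable_experimental_unrecoverable_data_corrupting_features = bluestore

    ceph-disk prepare --bluestore /dev/sdb   # prepare the disk as a BlueStore OSD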


Chown Ceph OSD data directory using GNU Parallel

by Wido den Hollander

Starting with Ceph version Jewel (10.2.X) all daemons (MON and OSD) will run under the unprivileged user ceph. Prior to Jewel, daemons were running as root, which is a potential security issue. This means the data has to change ownership before a daemon running the Jewel code can start. Chown data: as the Release Notes state…
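A sketch of how GNU Parallel can be combined with chown here, assuming the default data directory layout (the exact command in the post may differ):

    find /var/lib/ceph/osd -mindepth 1 -maxdepth 1 -type d | parallel chown -R ceph:ceph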


Slow requests with Ceph: ‘waiting for rw locks’

by Wido den Hollander

Slow requests in Ceph: when an I/O operation inside Ceph takes more than X seconds, which is 30 by default, it will be logged as a slow request. This is to show you as an admin that something is wrong inside the cluster and you have to take action. Origin of slow requests: slow…
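Two ways to start tracking such requests down (osd.12 is a placeholder and the exact health wording varies per release): the health detail output names the affected OSDs, and the admin socket on the OSD’s host shows the operations in flight:

    ceph health detail                      # lists OSDs with blocked/slow requests
    ceph daemon osd.12 dump_ops_in_flight   # run on the host where osd.12 lives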


The Ceph Trafficlight

by Wido den Hollander

At PCextreme we have a 700TB Ceph cluster which is used behind our public cloud Aurora Compute, which runs Apache CloudStack. Ceph health: one of the things we monitor about the Ceph cluster is its health. This can be OK, WARN or ERR. It speaks for itself that you always want to see OK, but…
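A minimal sketch of the idea (not the script from the post): poll the health status and map it to a lamp colour:

    STATUS=$(ceph health | awk '{print $1}')   # HEALTH_OK / HEALTH_WARN / HEALTH_ERR
    case "$STATUS" in
      HEALTH_OK)   COLOUR=green  ;;
      HEALTH_WARN) COLOUR=orange ;;
      HEALTH_ERR)  COLOUR=red    ;;
    esac
    echo "$COLOUR"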
