Ceph

rbd -p nfs showmapped
rbd unmap /dev/rbd1
rbd map maas-data01

As of late 2010

Ceph is not yet ready for production use!

Ceph is a cluster file system with a focus on:

  1. Fault tolerance
  2. Scalability
  3. Speed

It is recommended to run Ceph on top of btrfs.
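
Preparing an OSD data partition with btrfs might look like the sketch below; the device name and mount point are placeholders, not taken from this setup, and mkfs destroys all data on the device:

```shell
# Format an example OSD data partition with btrfs.
# /dev/sdb1 and /data/osd0 are assumed names - adjust to your hardware.
# WARNING: mkfs.btrfs destroys all data on the device!
mkfs.btrfs /dev/sdb1
mkdir -p /data/osd0
mount -t btrfs /dev/sdb1 /data/osd0
```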

v0.17 released

We’ve released v0.17.  This is mainly bug fixes and some monitor improvements.  Changes since v0.16 include:

    * kclient: fix >1 mds mdsmap decoding
    * kclient: fix mon subscription renewal
    * osdmap: fix encoding bug (and resulting kclient crash)
    * msgr: simplified policy, failure model, code
    * mon: less push, more pull
    * mon: clients maintain single monitor session, requests and replies are routed by the cluster
    * mon cluster expansion works (see Monitor cluster expansion)
    * osd: fix pgid parsing bug (broke restarts on clusters with 10 osds or more)

The other change with this release is that the kernel code is no longer bundled with the server code; it lives in a separate git tree.

    * Direct download at http://ceph.newdream.net/download/ceph-0.17.tar.gz
    * For Debian packages, see http://ceph.newdream.net/wiki/Debian

Installation

# vi /etc/apt/sources.list
...
deb http://ceph.newdream.net/debian/ stable main
deb-src http://ceph.newdream.net/debian/ stable main
# aptitude update
# aptitude install libfcgi0ldbl
# aptitude install ceph ceph-kclient-source

Kernel module

Now that Ceph is included in the mainline kernel, this should no longer be necessary!

# cd /usr/src/modules/ceph
# make
# make modules_install
# depmod
Load the module:
# modprobe ceph
# echo ceph >> /etc/modules
Configuration

Create the config:

# cp /etc/ceph/sample.ceph.conf /etc/ceph/ceph.conf
# vi /etc/ceph/ceph.conf
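
A minimal single-node ceph.conf of that era might look like the sketch below; the hostname, IP address, device, and data paths are assumptions, not taken from this setup:

```ini
; hypothetical single-node sketch - node1, 192.168.0.10 and the
; /data paths are assumptions, adjust to your environment
[mon.a]
        host = node1
        mon addr = 192.168.0.10:6789
        mon data = /data/mon.a
[mds.a]
        host = node1
[osd.0]
        host = node1
        osd data = /data/osd.0
```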

Remount with user_xattr enabled:

# mount -o remount,user_xattr /dev/cciss/c0d0p2 /home
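
A quick way to check that user xattrs actually work after the remount; the file path is just an example, and setfattr/getfattr come from the attr package:

```shell
# Set and read back a user xattr; this fails if user_xattr
# is not enabled on the filesystem. /home/xattr-test is an example path.
touch /home/xattr-test
setfattr -n user.demo -v ok /home/xattr-test
getfattr --only-values -n user.demo /home/xattr-test
rm /home/xattr-test
```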

Make it persistent across reboots:

# vi /etc/fstab
      ....
      ... defaults,user_xattr ...
      ....
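
A complete fstab entry of this shape might look like the hypothetical line below; the filesystem type (ext3) is an assumption, while device and mount point are taken from the mount command above:

```
# hypothetical entry - the filesystem type ext3 is an assumption
/dev/cciss/c0d0p2  /home  ext3  defaults,user_xattr  0  2
```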