====== ZFS ======

  * [[https://
  * **[[https://
  * [[https://
  * [[https://
  * [[::

  * **[[https://
  * **[[https://
  * [[https://
  * [[https://
  * [[https://
  * **[[https://
  * **[[https://
  * [[https://

This filesystem uses **dynamic inodes**.

  * [[https://
  * __ZFS:__ **21.3.6. Dealing with Failed Devices**
  * __ZFS:__ **21.3.8. Self-Healing**
  * __ZFS:__ **21.3.9. Growing a Pool**


===== Support =====

ZFS is, of course, supported natively by **Solaris**,

However, there are three different approaches here:
  - **Fuse** (included in the package system of almost every Linux distribution): here a very up-to-date ZFS driver runs in userland, but it is very slow and unstable with large amounts of data;
  - with [[http://
  - with [[http://
| + | |||
| + | ==== single user mode in FreeBSD mit ZFS auf der System-Platte ==== | ||
| + | |||
| + | [[https:// | ||
| + | |||
| + | Um das Dateisystem beschreibbar zu machen, genügt dieses Kommando: | ||
| + | $ mount -u / | ||
| + | |||
| + | |||
===== ZFS with encryption =====

Create a ZFS pool (without a mount point):
  > zpool create -m none HDD1000 /dev/sda
  > zpool list HDD1000
  > zpool status HDD1000
  > zfs get mountpoint,
  NAME
  HDD1000
  HDD1000
  HDD1000

Create a ZFS dataset without encryption (with a mount point):
  > zfs create -o mountpoint=/

Create a ZFS dataset with encryption (with a mount point):
  > zfs get 2>&1 | grep -Fi encryption
  encryption

  > zfs create -o encryption=aes-256-gcm -o keylocation=prompt -o keyformat=passphrase -o mountpoint=/

Show information about the ZFS dataset:
  > zfs list HDD1000/
  NAME
  HDD1000/

  > zfs get mountpoint,
  NAME
  HDD1000/
  HDD1000/
  HDD1000/

  > zfs list
  NAME
  HDD1000
  HDD1000/
  HDD1000/

Destroy the ZFS datasets again:
  > zfs destroy HDD1000/
  > zfs destroy HDD1000/

Destroy the ZFS pool again:
  > zpool destroy HDD1000


===== Deleting GPT partitions =====

  gpart destroy -F /dev/ada8

or
  gpart destroy -F /


===== Disk status - I/O =====

In current OpenZFS, ''-v'' shows per-vdev statistics, ''-l'' latency statistics, ''-p'' exact (script-friendly) numbers, and ''-P'' full device paths:
  zpool iostat -v pool 5
  zpool iostat -vl
  zpool iostat -p
  zpool iostat -pl
  zpool iostat -P
  zpool iostat -Pl


===== Finding disks in the system =====

[[::

FreeBSD:
  camcontrol devlist

  gpart status
  gpart show
  gpart show -l
  gpart show -lp
  gpart list

  zpool list
  zpool status

  zfs list

  zfs get -s default all


===== ZFS snapshot =====

[[:


===== Automatically storing multiple copies of the data =====

  # zfs get all | fgrep copies
  home

  # zfs set copies=2 home

  # zfs get all | fgrep copies
  home

| + | |||
| + | ===== Allgemeines ===== | ||
| + | |||
| + | > zpool create -m none -f HDD1000 /dev/sda | ||
| + | > zpool list | ||
| + | > zpool status | ||
| + | > zpool destroy HDD1000 | ||
| + | |||
| + | Eine komplette Platte auf die // | ||
| + | # dd if=/ | ||
| + | # fdisk -I /dev/da4 | ||
| + | # disklabel -w /dev/da4 | ||
| + | # zpool create BACKUP3TB /dev/da4 | ||
| + | # zpool list | ||
| + | NAME SIZE | ||
| + | BACKUP3TB | ||
| + | |||
| + | |||
| + | Alle gemounteten ZFS-Pool' | ||
| + | # zpool list | ||
| + | NAME | ||
| + | BACKUP3000GB | ||
| + | home 1.36T 1.30T 57.0G 95% ONLINE | ||
| + | |||
| + | Alle gemounteten ZFS - on-disk' | ||
| + | # zfs list | ||
| + | NAME | ||
| + | BACKUP3000GB | ||
| + | home 1.30T 38.0G 1.30T /home | ||
| + | |||
| + | ZFS-SnapShot (Beispiel mit MySQL): | ||
| + | [[http:// | ||
| + | |||
| + | alle ZFS-Tuning-Parameter anzeigen: | ||
| + | # zfs get | ||
| + | missing property argument | ||
| + | usage: | ||
| + | get [-rHp] [-d max] [-o field[, | ||
| + | <" | ||
| + | | ||
| + | The following properties are supported: | ||
| + | | ||
| + | PROPERTY | ||
| + | | ||
| + | available | ||
| + | compressratio | ||
| + | creation | ||
| + | mounted | ||
| + | origin | ||
| + | referenced | ||
| + | type | ||
| + | used | ||
| + | usedbychildren | ||
| + | usedbydataset | ||
| + | usedbyrefreservation | ||
| + | usedbysnapshots | ||
| + | aclinherit | ||
| + | aclmode | ||
| + | atime | ||
| + | canmount | ||
| + | casesensitivity | ||
| + | checksum | ||
| + | compression | ||
| + | copies | ||
| + | devices | ||
| + | exec YES YES on | off | ||
| + | jailed | ||
| + | mountpoint | ||
| + | nbmand | ||
| + | normalization | ||
| + | primarycache | ||
| + | quota | ||
| + | readonly | ||
| + | recordsize | ||
| + | refquota | ||
| + | refreservation | ||
| + | reservation | ||
| + | secondarycache | ||
| + | setuid | ||
| + | shareiscsi | ||
| + | sharenfs | ||
| + | sharesmb | ||
| + | snapdir | ||
| + | utf8only | ||
| + | version | ||
| + | volblocksize | ||
| + | volsize | ||
| + | vscan | ||
| + | xattr | ||
| + | | ||
| + | Sizes are specified in bytes with standard units such as K, M, G, etc. | ||
| + | | ||
| + | User-defined properties can be specified by using a name containing a colon (:). | ||
| + | |||
| + | # zpool get all | ||
| + | usage: | ||
| + | get <" | ||
| + | | ||
| + | the following properties are supported: | ||
| + | | ||
| + | PROPERTY | ||
| + | | ||
| + | available | ||
| + | capacity | ||
| + | guid | ||
| + | health | ||
| + | size | ||
| + | used | ||
| + | altroot | ||
| + | autoreplace | ||
| + | bootfs | ||
| + | cachefile | ||
| + | delegation | ||
| + | failmode | ||
| + | listsnapshots | ||
| + | version | ||
| + | |||
| + | ZFS-" | ||
| + | # zfs get version | ||
| + | NAME PROPERTY | ||
| + | BACKUP3000GB | ||
| + | home version | ||
| + | |||
| + | ZFS-Pool-Version (Container-Version) des " | ||
| + | # zpool get version BACKUP3000GB | ||
| + | NAME PROPERTY | ||
| + | BACKUP3000GB | ||
| + | |||
| + | ZFS-Pool-Version (Container-Version) des " | ||
| + | # zpool get version home | ||
| + | NAME PROPERTY | ||
| + | home version | ||
| + | |||
| + | Statistik: | ||
| + | # zpool iostat | ||
| + | | ||
| + | pool | ||
| + | ---------- | ||
| + | BACKUP3000GB | ||
| + | home 1.30T 59.7G | ||
| + | ---------- | ||
| + | |||
| + | Version der aktuellen ZFS-Installation anzeigen: | ||
| + | # zpool upgrade | ||
| + | This system is currently running ZFS pool version 14. | ||
| + | | ||
| + | All pools are formatted using this version. | ||
| + | |||
| + | Version der aktuellen ZFS-Installation und alle Eigenschaften aller bisherigen ZFS-Pool-Versionen anzeigen: | ||
| + | # zpool upgrade -v | ||
| + | This system is currently running ZFS pool version 14. | ||
| + | | ||
| + | The following versions are supported: | ||
| + | | ||
| + | VER DESCRIPTION | ||
| + | --- -------------------------------------------------------- | ||
| + | | ||
| + | | ||
| + | | ||
| + | | ||
| + | | ||
| + | | ||
| + | | ||
| + | | ||
| + | | ||
| + | | ||
| + | | ||
| + | | ||
| + | | ||
| + | | ||
| + | For more information on a particular version, including supported releases, see: | ||
| + | | ||
| + | http:// | ||
| + | | ||
| + | Where ' | ||
| + | |||
| + | alle ZFS-Pool' | ||
| + | # zpool upgrade -a | ||
| + | |||
| + | so kann man alle Laufwerke mit G-Partitionen sehen | ||
| + | > gpart status | ||
| + | Name Status | ||
| + | ada4p1 | ||
| + | ada4p2 | ||
| + | ada4p3 | ||
| + | diskid/ | ||
| + | | ||
| + | |||
| + | weitere wichtige G-Infos: | ||
| + | > gpart show | ||
| + | > gpart list | ||
| + | |||
| + | Jetzt muss der MBR neu geschrieben werden, in diesem Fall heißt die Boot-Platte " | ||
| + | > gpart bootcode -b /boot/pmbr -p / | ||
| + | partcode written to ada4p1 | ||
| + | bootcode written to ada4 | ||
| + | |||
| + | Hat man einen Pool über USB angeschlossen, | ||
| + | wenn die Platten " | ||
| + | # for mp in $(zpool list -H -o name); do zmp=" | ||
| + | |||
| + | |||
| + | <file bash / | ||
| + | # | ||
| + | |||
| + | |||
| + | VERSION=" | ||
| + | |||
| + | |||
| + | SKRIPTNAME=" | ||
| + | SKRIPTVERZEICHNIS=" | ||
| + | |||
| + | # | ||
| + | |||
| + | if [ "$(id -u)" != " | ||
| + | echo "Sie muessen root sein!" | ||
| + | exit 1 | ||
| + | fi | ||
| + | |||
| + | |||
| + | # | ||
| + | |||
| + | if [ " | ||
| + | ### FreeBSD | ||
| + | SMARTOPT=" | ||
| + | elif [ " | ||
| + | ### Linux | ||
| + | SMARTOPT=" | ||
| + | else | ||
| + | ### Windows | ||
| + | SMARTOPT=" | ||
| + | fi | ||
| + | |||
| + | # | ||
| + | |||
| + | ( | ||
| + | echo " | ||
| + | date +'%F %T' | ||
| + | echo " | ||
| + | for i in $(ls / | ||
| + | echo " | ||
| + | for ZFSPOOL in $(zpool list -H | awk ' | ||
| + | do | ||
| + | for BLKGER in $(zpool status ${ZFSPOOL} | sed -ne " | ||
| + | do | ||
| + | ls -1 / | ||
| + | do | ||
| + | #echo " | ||
| + | #echo " | ||
| + | echo -n " | ||
| + | smartctl ${SMARTOPT} ${BLKDEV} | grep -Ei ' | ||
| + | echo | ||
| + | done | grep -Ev ' | ||
| + | done | ||
| + | done | ||
| + | |||
| + | echo | ||
| + | echo " | ||
| + | df -h | grep -E ' | ||
| + | do | ||
| + | #echo "# ${BLKGER} | smartctl ${SMARTOPT} ${BLKGER}" | ||
| + | echo -n "# ${BLKGER} " | ||
| + | smartctl ${SMARTOPT} ${BLKGER} | grep -Ei ' | ||
| + | done | ||
| + | echo " | ||
| + | ) 2>&1 | tee -a / | ||
| + | |||
| + | ls -lha / | ||
| + | |||
| + | # | ||
| + | </ | ||
| + | |||
| + | FreeBSD:~# / | ||
| + | ================================================================================ | ||
| + | 2017-07-11 16:31:21 | ||
| + | -------------------------------------------------------------------------------- | ||
| + | /dev/ada0 WD-WCAV33097185 | ||
| + | /dev/ada1 WD-WCC133494387 | ||
| + | /dev/ada2 PL1331L3GG9G8H | ||
| + | /dev/ada3 WD-WCAW33995888 | ||
| + | /dev/ada4 WD-WCC133691187 | ||
| + | /dev/ada5 WD-WX2137493S81 | ||
| + | /dev/ada6 WD-WCAW33994189 | ||
| + | /dev/ada7 PL2331L3G9908J | ||
| + | /dev/ada8 WD-WXB13B499A8F | ||
| + | /dev/ada9 WD-WCAZ3J498182 | ||
| + | -------------------------------------------------------------------------------- | ||
| + | extern01 /dev/ada7 HDS5C4040ALE630 | ||
| + | extern01 /dev/ada4 WD4000FYYZ-01UL1B0 | ||
| + | extern02 / | ||
| + | extern02 / | ||
| + | extern04 / | ||
| + | extern04 /dev/ada8 WD6001FSYZ-01SS7B1 | ||
| + | home / | ||
| + | home / | ||
| + | zroot / | ||
| + | zroot / | ||
| + | | ||
| + | -------------------------------------------------------------------------------- | ||
| + | ================================================================================ | ||
| + | -rw-r--r-- | ||
| + | |||
| + | |||
| + | ==== bootcode ==== | ||
| + | |||
| + | [root@erde ~]# zpool status zroot | ||
| + | pool: zroot | ||
| + | | ||
| + | status: One or more devices has been removed by the administrator. | ||
| + | Sufficient replicas exist for the pool to continue functioning in a | ||
| + | degraded state. | ||
| + | action: Online the device using 'zpool online' | ||
| + | 'zpool replace' | ||
| + | scan: resilvered 0 in 213503982334601 days 06:23:30 with 0 errors on Mon Mar 23 20:47:39 2020 | ||
| + | config: | ||
| + | | ||
| + | NAME | ||
| + | zroot DEGRADED | ||
| + | mirror-0 | ||
| + | 6227138898055136738 | ||
| + | diskid/ | ||
| + | | ||
| + | errors: No known data errors | ||
| + | |||
| + | [root@erde ~]# zpool list | ||
| + | NAME | ||
| + | extern01 | ||
| + | extern02 | ||
| + | home 9,06T 2,14T 6,93T - | ||
| + | temp | ||
| + | zroot 920G | ||
| + | |||
| + | [root@erde ~]# gpart bootcode -b /boot/pmbr -p / | ||
| + | partcode written to diskid/ | ||
| + | bootcode written to diskid/ | ||
| + | |||
| + | |||
| + | ===== ein ZFS-Volumen ===== | ||
| + | |||
| + | [[https:// | ||
| + | |||
| + | Alle ZFS-Informationen anzeigen: | ||
| + | # zfs get -s local all home | ||
| + | # zfs get -s default all home | ||
| + | # zfs get -s temporary all home | ||
| + | # zfs get -s inherited all home | ||
| + | # zfs get -s none all home | ||
| + | |||
| + | NFS einschalten: | ||
| + | # zfs set sharenfs=on home | ||
| + | |||
| + | kontrollieren: | ||
| + | # zfs get -s local all home | ||
| + | NAME PROPERTY | ||
| + | home sharenfs | ||
| + | |||
| + | |||
| + | Alle // | ||
| + | |||
| + | # zpool list | ||
| + | NAME | ||
| + | home 1.36T 1.25T | ||
| + | |||
| + | Den status aller // | ||
| + | |||
| + | # zpool status | ||
| + | pool: home | ||
| + | | ||
| + | | ||
| + | config: | ||
| + | | ||
| + | NAME STATE READ WRITE CKSUM | ||
| + | home ONLINE | ||
| + | mirror | ||
| + | ad6 | ||
| + | ad8 | ||
| + | | ||
| + | errors: No known data errors | ||
| + | |||
| + | |||
| + | Das ZFS-Volumen // | ||
| + | |||
| + | # zpool import BACKUP1000GB | ||
| + | # zpool destroy BACKUP1000GB | ||
| + | # zpool list | ||
| + | |||
| + | |||
| + | ==== Test-Volumen auf Dateibasis anlegen ==== | ||
| + | |||
| + | eine 100MB-Test-Pool-Datei erstellen | ||
| + | > dd if=/ | ||
| + | |||
| + | einen Pool erstellen | ||
| + | > zpool create tank / | ||
| + | |||
| + | verfühgbare Pools zeigen | ||
| + | > zpool import -d /home/ | ||
| + | |||
| + | Pool importieren | ||
| + | > zpool import -d /home/ tank | ||
| + | |||
| + | Pool exportieren | ||
| + | > zpool export tank | ||
| + | |||
| + | |||
| + | === einen ZFS-Tank aus einer Platte um eine Platte erweitern === | ||
| + | |||
| + | |||
| + | == einen ZFS-Tank mit einer Platte erstellen == | ||
| + | |||
| + | > zpool create tank1 / | ||
| + | |||
| + | > zpool list tank1 | ||
| + | NAME SIZE ALLOC | ||
| + | tank1 95,5M 92,5K 95,4M | ||
| + | |||
| + | > zpool status tank1 | ||
| + | pool: tank1 | ||
| + | | ||
| + | scan: none requested | ||
| + | config: | ||
| + | | ||
| + | NAME | ||
| + | tank1 ONLINE | ||
| + | / | ||
| + | | ||
| + | errors: No known data errors | ||
| + | |||
| + | |||
| + | == mit der zweiten Platte ein RAID-0 bauen == | ||
| + | |||
| + | > zpool add tank1 / | ||
| + | |||
| + | > zpool list tank1 | ||
| + | NAME SIZE ALLOC | ||
| + | tank1 | ||
| + | |||
| + | > zpool status tank1 | ||
| + | pool: tank1 | ||
| + | | ||
| + | scan: none requested | ||
| + | config: | ||
| + | | ||
| + | NAME | ||
| + | tank1 ONLINE | ||
| + | / | ||
| + | / | ||
| + | | ||
| + | errors: No known data errors | ||
| + | |||
| + | |||
| + | == den RAID-0 - Tank zerstören und mit einer Platte wieder erstellen == | ||
| + | |||
| + | > zpool destroy tank1 | ||
| + | > zpool create tank1 / | ||
| + | > zpool list | ||
| + | NAME SIZE ALLOC | ||
| + | tank1 95,5M | ||
| + | |||
| + | > zpool status tank1 | ||
| + | pool: tank1 | ||
| + | | ||
| + | scan: none requested | ||
| + | config: | ||
| + | | ||
| + | NAME | ||
| + | tank1 ONLINE | ||
| + | / | ||
| + | | ||
| + | errors: No known data errors | ||
| + | |||
| + | |||
| + | == mit der zweiten Platte ein RAID-1 bauen == | ||
| + | |||
| + | > zpool attach tank1 / | ||
| + | > zpool list tank1 | ||
| + | NAME SIZE ALLOC | ||
| + | tank1 95,5M | ||
| + | |||
| + | > zpool status tank1 | ||
| + | pool: tank1 | ||
| + | | ||
| + | scan: resilvered 99K in 0h0m with 0 errors on Fri Feb 21 21:48:18 2014 | ||
| + | config: | ||
| + | | ||
| + | NAME | ||
| + | tank1 ONLINE | ||
| + | mirror-0 | ||
| + | / | ||
| + | / | ||
| + | | ||
| + | errors: No known data errors | ||
| + | |||
| + | |||
| + | == die zweite Platte aus dem RAID entfernen == | ||
| + | |||
| + | > zpool detach tank1 / | ||
| + | |||
| + | |||
| + | == eine defekte Platte im RAID gegen eine neue Platte austauschen == | ||
| + | |||
| + | nehmen wir mal an, die Platte " | ||
| + | > zpool replace tank1 / | ||
| + | |||
| + | |||
| + | == einen mirror aus dem RAID entfernen == | ||
| + | |||
| + | [[https:// | ||
| + | |||
| + | # zpool status daten | ||
| + | ... | ||
| + | config: | ||
| + | | ||
| + | NAME STATE READ WRITE CKSUM | ||
| + | daten | ||
| + | mirror-0 | ||
| + | diskid/ | ||
| + | diskid/ | ||
| + | mirror-1 | ||
| + | diskid/ | ||
| + | diskid/ | ||
| + | |||
| + | # zpool remove daten mirror-1 | ||
| + | |||
| + | # zpool status daten | ||
| + | ... | ||
| + | config: | ||
| + | | ||
| + | NAME STATE READ WRITE CKSUM | ||
| + | daten | ||
| + | mirror-0 | ||
| + | diskid/ | ||
| + | diskid/ | ||
| + | |||
| + | ==== ZFS-Volumen auf Platte anlegen ==== | ||
| + | |||
| + | |||
| + | === einfaches Volumen === | ||
| + | |||
| + | einen Pool erstellen | ||
| + | # zpool create BACKUP1000GB /dev/da2 | ||
| + | |||
| + | verfühgbare Pools zeigen | ||
| + | # zpool import | ||
| + | |||
| + | Pool importieren | ||
| + | # zpool import BACKUP1000GB | ||
| + | |||
| + | Pool schreibgeschützt importieren | ||
| + | # zpool import -o ro BACKUP1000GB | ||
| + | |||
| + | Pool exportieren | ||
| + | # zpool export BACKUP1000GB | ||
| + | |||
| + | |||
| + | === RAID-1 Volumen (Spiegel) === | ||
| + | |||
| + | Der einfachste " | ||
| + | __In der Beschreibung von SUN steht, dass der sinnvollste Spiegen aus 3 Platten besteht, | ||
| + | da er in dieser Konfiguration die höchste Sicherheit bietet.__ | ||
| + | Der Sicherheitsgewinn bei 4 und mehr Platten ist zu klein um wirtschaftlich sinnvoll zu sein. | ||
| + | |||
| + | einen Pool erstellen | ||
| + | # zpool create home mirror /dev/ad6 /dev/ad8 | ||
| + | |||
| + | Pool exportieren | ||
| + | # zpool export home | ||
| + | |||
| + | verfühgbare Pools zeigen | ||
| + | # zpool import | ||
| + | |||
| + | Pool importieren | ||
| + | # zpool import home | ||
| + | |||
| + | Pool schreibgeschützt importieren | ||
| + | # zpool import -o ro home | ||
| + | |||
| + | |||
| + | === RAID-10 Volumen === | ||
| + | |||
| + | als erstes bauen wir uns einen RAID-1-Pool: | ||
| + | # zpool create daten mirror diskid/ | ||
| + | # zpool status daten | ||
| + | ... | ||
| + | config: | ||
| + | | ||
| + | NAME STATE READ WRITE CKSUM | ||
| + | daten | ||
| + | mirror-0 | ||
| + | diskid/ | ||
| + | diskid/ | ||
| + | |||
| + | und wollen daraus nun ein RAID-10-Pool machen. | ||
| + | |||
| + | Dazu binden wir zwei weitere Festplatten als Mirror zusätzlich dran: | ||
| + | # zpool add daten mirror diskid/ | ||
| + | # zpool status daten | ||
| + | ... | ||
| + | config: | ||
| + | | ||
| + | NAME STATE READ WRITE CKSUM | ||
| + | daten | ||
| + | mirror-0 | ||
| + | diskid/ | ||
| + | diskid/ | ||
| + | mirror-1 | ||
| + | diskid/ | ||
| + | diskid/ | ||
| + | |||
| + | |||
| + | Es ist auch möglich alles in einem einzigen Kommando machen: | ||
| + | # zpool create daten mirror diskid/ | ||
| + | |||
| + | |||
| + | === ZFS-Volumen vergrößern === | ||
| + | |||
| + | Nachdem die alten Platten in einem Pool durch neue größere ausgetauscht wurden, muß der Pool auf die Größer der neuen Festplatten angepasst werden. | ||
| + | |||
| + | # zpool list home | ||
| + | NAME | ||
| + | home 9.06T 5.87T 3.19T - | ||
| + | |||
| + | # zpool online -e home diskid/ | ||
| + | |||
| + | # zpool list home | ||
| + | NAME | ||
| + | home 10.9T 5.87T 5.01T - | ||
| + | |||
| + | |||
| + | === Boot-Pool === | ||
| + | |||
| + | <code bash FreeBSD 15> | ||
| + | [root@freebsd15 ~]# zpool upgrade zroot | ||
| + | This system supports ZFS pool feature flags. | ||
| + | |||
| + | Enabled the following features on ' | ||
| + | redaction_list_spill | ||
| + | raidz_expansion | ||
| + | fast_dedup | ||
| + | longname | ||
| + | large_microzap | ||
| + | block_cloning_endian | ||
| + | physical_rewrite | ||
| + | |||
| + | Pool ' | ||
| + | the boot code. See gptzfsboot(8) and loader.efi(8) for details. | ||
| + | |||
| + | [root@freebsd15 ~]# gpart show | ||
| + | => | ||
| + | | ||
| + | | ||
| + | | ||
| + | 4196352 | ||
| + | 488396800 | ||
| + | |||
| + | [root@freebsd15 ~]# gpart bootcode -b /boot/pmbr -p / | ||
| + | partcode written to diskid/ | ||
| + | bootcode written to diskid/ | ||
| + | |||
| + | [root@freebsd15 ~]# gpart status | ||
| + | Name Status | ||
| + | diskid/ | ||
| + | diskid/ | ||
| + | diskid/ | ||
| + | </ | ||
| + | |||
| + | siehe auch: [[:: | ||
| + | |||
| + | Wenn man FreeBSD 10 auf einem ZFS-Volumen (RAID1) installiert hat und nun eine Platte davon defekt ist und ausgetauscht werden muss, dann darf man nicht vergessen auch den Boot-Kode zu aktualisieren! | ||
| + | |||
| + | > zpool attach zroot gptid/ | ||
| + | Make sure to wait until resilver is done before rebooting. | ||
| + | | ||
| + | If you boot from pool ' | ||
| + | boot code on newly attached disk '/ | ||
| + | | ||
| + | Assuming you use GPT partitioning and ' | ||
| + | you may use the following command: | ||
| + | | ||
| + | gpart bootcode -b /boot/pmbr -p / | ||
| + | |||
| + | in unserem Fall würde das Kommando so aussehen (das funktioniert aber erst, wenn das // | ||
| + | > gpart bootcode -b /boot/pmbr -p / | ||
| + | |||
| + | sollte es nicht funktionieren, | ||
| + | > glabel status | ||
| + | Name Status | ||
| + | diskid/ | ||
| + | | ||
| + | gptid/ | ||
| + | diskid/ | ||
| + | | ||
| + | | ||
| + | diskid/ | ||
| + | ntfs/ | ||
| + | gpt/ | ||
| + | gptid/ | ||
| + | | ||
| + | | ||
| + | gptid/ | ||
| + | gptid/ | ||
| + | |||
| + | sollte die Platte nicht aufgelistet werden, dann muss man die Platte nocheinmal aus dem Pool entfernen und mit diesem Kommando GEOM bekannt machen: | ||
| + | > gpart create -s gpt ada4 | ||
| + | |||
| + | jetzt kann man sie im Pool wieder aufnehmen und das // | ||
| + | |||
| + | > geom disk list | ||
| + | Geom name: ada3 | ||
| + | Providers: | ||
| + | 1. Name: ada3 | ||
| + | | ||
| + | | ||
| + | Mode: r2w2e5 | ||
| + | | ||
| + | | ||
| + | | ||
| + | | ||
| + | | ||
| + | | ||
| + | |||
| + | eine boot-fähige root-Platte (ada4) für FreeBSD-10 anlegen, die dann an die bestehende root-Platte (ada3) als //mirror// (zum RAID-1) angehängt wird: | ||
| + | > gpart show -p ada3 | ||
| + | => 34 1953525101 | ||
| + | 34 1024 ada3p1 | ||
| + | 1058 16777216 | ||
| + | 16778274 | ||
| + | | ||
| + | > gpart destroy -F ada4 | ||
| + | > gpart show -l ada4 | ||
| + | > gpart create -s gpt ada4 | ||
| + | > gpart show -p ada4 | ||
| + | > gpart add -t freebsd-boot -s 512K ada4 | ||
| + | > gpart show -p ada4 | ||
| + | > gpart add -t freebsd-swap -s 8G ada4 | ||
| + | > gpart show -p ada4 | ||
| + | > gpart add -t freebsd-zfs ada4 | ||
| + | > gpart show -p ada4 | ||
| + | |||
| + | beide Paltten zu einem RAID-1 zusammenfühgen: | ||
| + | > zpool attach zroot ada3 ada4 | ||
| + | |||
| + | ...warten, bis beide Platten syncronisiert sind... | ||
| + | |||
| + | Boot-Kode auf die neue Platte schreiben, damit das System auch bei Ausfall der ersten Platte boot-fähig bleibt: | ||
| + | > gpart bootcode -b /boot/pmbr -p / | ||
| + | |||
| + | oder so: [[https:// | ||
| + | > gpart destroy -F ada4 | ||
| + | > gpart backup ada3 | gpart restore ada4 | ||
| + | > gpart show -l ada3 | ||
| + | > gpart show -l ada4 | ||
| + | > zpool attach zroot ada3 ada4 | ||
| + | > zpool status | ||
| + | > gpart bootcode -b /boot/pmbr -p / | ||
| + | |||
| + | === Mount-Point === | ||
| + | |||
| + | Konfiguriert man keinen speziellen Mount-Point, | ||
| + | dann wird von ZFS automatisch ein Top-Level-Verzeichnis mit Pool-Namen angelegt ("/ | ||
| + | # zpool import home | ||
| + | |||
| + | ergibt das Verzeichnis: | ||
| + | /home | ||
| + | |||
| + | Will man einen speziellen mount-Point angeben, dann geht das so: | ||
| + | |||
| + | |||
| + | ==== Beispiele ==== | ||
| + | |||
| + | === Beispiel 1 === | ||
| + | |||
| + | # zfs set mountpoint=/ | ||
| + | |||
| + | Hier wird das Volumen (fritz) aus dem Pool (home) als "/ | ||
| + | |||
| + | |||
| + | === Beispiel aus dem Handbuch === | ||
| + | |||
| + | # zfs create tank/home | ||
| + | # zfs set mountpoint=/ | ||
| + | # zfs set sharenfs=on tank/home | ||
| + | # zfs set compression=on tank/home | ||
| + | |||
| + | oder | ||
| + | # zfs create -o mountpoint=/ | ||
| + | |||
| + | Und dann überprüfen: | ||
| + | # zfs get mountpoint, | ||
| + | |||
| + | |||
| + | === Beispiel 2 === | ||
| + | |||
| + | # zfs set mountpoint=/ | ||
| + | |||
| + | Hier wird das Volumen (opt) aus dem Pool (tank) als "/ | ||
| + | |||
| + | |||
| + | === Beispiel 3 === | ||
| + | |||
| + | auf einem PC-BSD 9.1 - System: | ||
| + | > zpool history | ||
| + | History for ' | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | 2013-05-19.20: | ||
| + | ... | ||
| + | |||
| + | die für uns wichtigen Befehle können wir z.B. so filtern: | ||
| + | > zpool history | fgrep fritz | ||
| + | > zfs create -o mountpoint=/ | ||
| + | > zfs destroy tank0/ | ||
| + | |||
| + | |||
| + | === Beispiel 4 (Ubuntu 14.04) === | ||
| + | |||
| + | einen Pool (Tank) an einem alternativen Verzeichnis mounten | ||
| + | zpool create -R / | ||
| + | |||
| + | der ZFS-Treiber soll beim booten geladen werden | ||
| + | echo " | ||
| + | update-initramfs -u | ||
| + | update-grub2 | ||
| + | |||
| + | den Pool (Tank) generieren | ||
| + | zpool create fritz /dev/sdb1 | ||
| + | |||
| + | es ist möglich, dass man den Pool an ein anderes Verzeichnis mounten lässt: | ||
| + | zfs set mountpoint=/ | ||
| + | |||
| + | man kann auf einem Pool mehrere Volumen anlegen: | ||
| + | zfs create fritz/jobs | ||
| + | |||
| + | es ist möglich, dass man das Volumen an ein anderes Verzeichnis mounten lässt: | ||
| + | zfs set mountpoint=/ | ||
| + | | ||
| + | chown -R fritz: | ||
| + | |||
| + | Anzeige | ||
| + | zpool list | ||
| + | zpool status | ||
| + | |||
| + | |||
| + | ===== ZFS-Datasets im Netzwerk (per NFS bzw. SMB) freigeben ===== | ||
| + | |||
| + | Zwei häufig verwendete und nützliche Dataset-Eigenschaften sind die Freigabeoptionen von NFS und SMB. Diese Optionen legen fest, ob und wie ZFS-Datasets im Netzwerk freigegeben werden. | ||
| + | __Derzeit unterstützt FreeBSD nur Freigaben von Datasets über NFS.__ | ||
| + | * [[https:// | ||
| + | |||
| + | Für Freigaben per SMB/CIFS muss derzeit noch [[::Samba]] eingesetzt werden. | ||
| + | |||
| + | |||
| + | ===== Reparieren von Schäden am gesamten ZFS-Speicher-Pool ===== | ||
| + | |||
| + | Dokumentation auf den Seiten von SUN: | ||
| + | - [[http:// | ||
| + | - [[http:// | ||
| + | - [[http:// | ||
| + | - [[http:// | ||
| + | - [[http:// | ||
| + | |||
| + | am 2011-01-08 gesichert: | ||
| + | {{:: | ||
| + | |||
| + | zpool status extern04 | ||
| + | < | ||
| + | pool: extern04 | ||
| + | | ||
| + | status: One or more devices are faulted in response to persistent errors. | ||
| + | Sufficient replicas exist for the pool to continue functioning in a | ||
| + | degraded state. | ||
| + | action: Replace the faulted device, or use 'zpool clear' to mark the device | ||
| + | repaired. | ||
| + | scan: none requested | ||
| + | config: | ||
| + | |||
| + | NAME STATE READ WRITE CKSUM | ||
| + | extern04 | ||
| + | mirror-0 | ||
| + | ada6 FAULTED | ||
| + | ada9 ONLINE | ||
| + | </ | ||
| + | |||
| + | zpool scrub extern04 | ||
| + | |||
| + | zpool status extern04 | ||
| + | < | ||
| + | pool: extern04 | ||
| + | | ||
| + | status: One or more devices are faulted in response to persistent errors. | ||
| + | Sufficient replicas exist for the pool to continue functioning in a | ||
| + | degraded state. | ||
| + | action: Replace the faulted device, or use 'zpool clear' to mark the device | ||
| + | repaired. | ||
| + | scan: scrub in progress since Tue Feb 24 23:34:22 2015 | ||
| + | 2.36G scanned out of 4.09T at 44.8M/s, 26h32m to go | ||
| + | 0 repaired, 0.06% done | ||
| + | config: | ||
| + | |||
| + | NAME STATE READ WRITE CKSUM | ||
| + | extern04 | ||
| + | mirror-0 | ||
| + | ada6 FAULTED | ||
| + | ada9 ONLINE | ||
| + | </ | ||
| + | |||
| + | ===== aus einem einfachen Pool ein RAID1 machen ===== | ||
| + | |||
| + | In diesem Beispiel verwende ich an Stelle von Festplatten | ||
| + | nur Dateien. Das ist zum testen besser geeignet. | ||
| + | |||
| + | So bekommt man ein gutes Gefühl für den Vorgang und kann so eine " | ||
| + | |||
| + | Als erstes müssen wir uns die beiden Image-Dateien als Plattenersatz erstellen: | ||
| + | # dd if=/ | ||
| + | # dd if=/ | ||
| + | |||
| + | oder so | ||
| + | # truncate -s +100M / | ||
| + | # truncate -s +100M / | ||
| + | |||
Now we build our starting point:
  # zpool create TEST01 /

There, that much is done:
  # zpool list
  NAME
  TEST01

Here you can see that these are our files (and not disks):
  # zpool status
    pool: TEST01
  config:
  
          NAME
          TEST01
            /
  
  errors: No known data errors

Now we put our test data in place:
  # echo "Test 001" > /

and it has arrived:
  # cat /
  Test 001

Here we turn our simple pool into a RAID-1:
  # zpool attach TEST01 /

**Important:**

and we see that the pool now consists of two files (disks):
  # zpool status
    pool: TEST01
  config:
  
          NAME
          TEST01
            mirror
              /
              /
  
  errors: No known data errors

And now the final test: is our data still there?
  # cat /
  Test 001

... it worked, the data has been preserved!

| + | |||
| + | |||
| + | ===== einen Pool vergrößern (so etwas ähnliches wie RAID0) ===== | ||
| + | |||
| + | Die Vorbereitungen sehen genauso aus, wie bei dem Test mit dem RAID-1: | ||
| + | # truncate -s +100M / | ||
| + | # truncate -s +100M / | ||
| + | # zpool create TEST02 / | ||
| + | # echo "Test 002" > / | ||
| + | # cat / | ||
| + | Test 001 | ||
| + | |||
| + | und jetzt werden die beiden Platten zusammen gesetzt, hierbei bleiben | ||
| + | # zpool add TEST02 / | ||
| + | |||
| + | die Daten sind noch da: | ||
| + | # cat / | ||
| + | Test 001 | ||
| + | |||
| + | der Pool ist jetzt doppelt so groß: | ||
| + | # zpool list | ||
| + | NAME SIZE ALLOC | ||
| + | TEST02 | ||
| + | |||
| + | both disks are in the pool, and it is not a mirror: | ||
| + | # zpool status TEST02 | ||
| + | pool: TEST02 | ||
| + | | ||
| + | | ||
| + | config: | ||
| + | | ||
| + | NAME | ||
| + | TEST02 | ||
| + | / | ||
| + | / | ||
| + | | ||
| + | errors: No known data errors | ||
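The `zpool list` output above is easy to sanity-check: striping simply adds the capacities of the vdevs, minus a little space that ZFS reserves for labels and pool metadata. A trivial sketch of the arithmetic, assuming two 100 MB backing files as in the commands above:

```shell
# Two 100 MB file vdevs combined into a stripe: the raw capacity is the sum.
# `zpool list` reports slightly less, since ZFS reserves space for vdev
# labels and pool metadata.
vdev1=100  # MB
vdev2=100  # MB
echo "raw stripe capacity: $((vdev1 + vdev2)) MB"
```

Note that, unlike a mirror member (`zpool detach`), a striped vdev traditionally could not be taken out again; as far as I know, newer OpenZFS releases can evacuate and remove plain-disk and mirror top-level vdevs with `zpool remove`.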
| + | |||
| + | |||
| + | |||
| + | ===== spare ===== | ||
| + | |||
| + | * [[https:// | ||
| + | * [[https:// | ||
| + | |||
| + | a "spare" disk is added to the pool like this: | ||
| + | # zpool add pool01 spare diskid/ | ||
| + | # zpool status pool01 | ||
| + | ... | ||
| + | NAME STATE READ WRITE CKSUM | ||
| + | pool01 | ||
| + | mirror-0 | ||
| + | diskid/ | ||
| + | diskid/ | ||
| + | mirror-1 | ||
| + | diskid/ | ||
| + | diskid/ | ||
| + | mirror-2 | ||
| + | diskid/ | ||
| + | diskid/ | ||
| + | spares | ||
| + | diskid/ | ||
| + | | ||
| + | errors: No known data errors | ||
| + | |||
| + | Unfortunately, I do not know of a way to add a spare disk to a single mirror within a RAID-10 pool. | ||
| + | |||
| + | Because the '' | ||
| + | The spare disk, however, is only 10 TB in size... | ||
| + | So a spare for the entire pool makes little sense. | ||
| + | |||
| + | # zpool list pool01 | ||
| + | NAME SIZE ALLOC | ||
| + | pool01 34.5T 28.6T 5.93T - | ||
| + | |||
| + | |||
| + | ==== Replacing a failed disk with the spare ==== | ||
| + | |||
| + | # zpool status pool01 | ||
| + | ... | ||
| + | NAME STATE READ WRITE CKSUM | ||
| + | pool01 | ||
| + | mirror-0 | ||
| + | diskid/ | ||
| + | diskid/ | ||
| + | mirror-1 | ||
| + | diskid/ | ||
| + | diskid/ | ||
| + | mirror-2 | ||
| + | diskid/ | ||
| + | diskid/ | ||
| + | spares | ||
| + | diskid/ | ||
| + | | ||
| + | errors: No known data errors | ||
| + | |||
| + | This is how the failed disk is swapped out for the spare disk: | ||
| + | # zpool replace pool01 diskid/ | ||
| + | ... | ||
| + | NAME STATE READ WRITE CKSUM | ||
| + | pool01 | ||
| + | mirror-0 | ||
| + | diskid/ | ||
| + | spare-1 | ||
| + | diskid/ | ||
| + | diskid/ | ||
| + | mirror-1 | ||
| + | diskid/ | ||
| + | diskid/ | ||
| + | mirror-2 | ||
| + | diskid/ | ||
| + | diskid/ | ||
| + | spares | ||
| + | diskid/ | ||
| + | |||
| + | |||
| + | ===== ZFS problems ===== | ||
| + | |||
| + | |||
| + | ==== ZFS: checksum mismatch ==== | ||
| + | |||
| + | In the log file, the problem looks like this: | ||
| + | |||
| + | Mar 31 23:30:50 plebeian root: ZFS: checksum mismatch, zpool=storage path=/ | ||
| + | Mar 31 23:30:50 plebeian root: ZFS: checksum mismatch, zpool=storage path=/ | ||
| + | Mar 31 23:30:50 plebeian root: ZFS: zpool I/O failure, zpool=storage error=86 | ||
| + | Mar 31 23:31:20 plebeian root: ZFS: checksum mismatch, zpool=storage path=/ | ||
| + | Mar 31 23:31:20 plebeian root: ZFS: checksum mismatch, zpool=storage path=/ | ||
| + | Mar 31 23:31:20 plebeian root: ZFS: zpool I/O failure, zpool=storage error=86 | ||
| + | Mar 31 23:31:34 plebeian root: ZFS: checksum mismatch, zpool=storage path=/ | ||
| + | Mar 31 23:31:34 plebeian root: ZFS: checksum mismatch, zpool=storage path=/ | ||
| + | Mar 31 23:31:34 plebeian root: ZFS: zpool I/O failure, zpool=storage error=86 | ||
| + | Mar 31 23:31:34 plebeian root: ZFS: checksum mismatch, zpool=storage path=/ | ||
| + | Mar 31 23:31:34 plebeian root: ZFS: checksum mismatch, zpool=storage path=/ | ||
| + | Mar 31 23:31:34 plebeian root: ZFS: zpool I/O failure, zpool=storage error=86 | ||
| + | Mar 31 23:31:35 plebeian root: ZFS: checksum mismatch, zpool=storage path=/ | ||
| + | Mar 31 23:31:35 plebeian root: ZFS: checksum mismatch, zpool=storage path=/ | ||
| + | Mar 31 23:31:35 plebeian root: ZFS: zpool I/O failure, zpool=storage error=86 | ||
| + | Mar 31 23:31:35 plebeian root: ZFS: checksum mismatch, zpool=storage path=/ | ||
| + | Mar 31 23:31:35 plebeian root: ZFS: checksum mismatch, zpool=storage path=/ | ||
| + | Mar 31 23:31:35 plebeian root: ZFS: zpool I/O failure, zpool=storage error=86 | ||
| + | |||
| + | * [[http:// | ||
| + | |||
| + | # sysctl kern.smp.disabled=1 | ||
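As far as I know, ''kern.smp.disabled'' is a boot-time loader tunable rather than a writable runtime sysctl, so the persistent form of this workaround belongs in ''/boot/loader.conf'' and takes effect on the next reboot. A sketch, assuming the linked workaround applies to your hardware at all:

```
# /boot/loader.conf
kern.smp.disabled="1"   # drastic workaround: boot with SMP disabled
```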
| + | |||
| + | |||
| + | ==== zpool device UNAVAIL ==== | ||
| + | |||
| + | Show the problem: | ||
| + | < | ||
| + | [root@server ~]# zpool status -v extern03 | ||
| + | pool: extern03 | ||
| + | | ||
| + | status: One or more devices could not be opened. | ||
| + | the pool to continue functioning in a degraded state. | ||
| + | action: Attach the missing device and online it using 'zpool online' | ||
| + | see: http:// | ||
| + | scan: scrub in progress since Sun Feb 2 21:35:35 2014 | ||
| + | 1,89T scanned out of 2,79T at 50,5M/s, 5h10m to go | ||
| + | 0 repaired, 67,85% done | ||
| + | config: | ||
| + | |||
| + | NAME STATE READ WRITE CKSUM | ||
| + | extern03 | ||
| + | mirror-0 | ||
| + | da1 | ||
| + | 17785732228181994288 | ||
| + | |||
| + | errors: No known data errors | ||
| + | </ | ||
| + | |||
| + | Bring the drive back online: | ||
| + | < | ||
| + | [root@server ~]# zpool online extern03 17785732228181994288 | ||
| + | </ | ||
| + | |||
| + | If that does not work, the drive has to be detached first: | ||
| + | < | ||
| + | [root@server ~]# zpool detach extern03 17785732228181994288 | ||
| + | </ | ||
| + | |||
| + | Re-attach the drive: | ||
| + | < | ||
| + | [root@server ~]# zpool attach extern03 /dev/da1 /dev/da2 | ||
| + | </ | ||
| + | |||
| + | Now the mirror is being rebuilt (resilvered): | ||
| + | < | ||
| + | [root@server ~]# zpool status -v extern03 | ||
| + | pool: extern03 | ||
| + | | ||
| + | status: One or more devices is currently being resilvered. | ||
| + | continue to function, possibly in a degraded state. | ||
| + | action: Wait for the resilver to complete. | ||
| + | scan: resilver in progress since Mon Feb 3 08:36:50 2014 | ||
| + | 59,5M scanned out of 2,79T at 2,29M/s, 355h0m to go | ||
| + | 59,2M resilvered, 0,00% done | ||
| + | config: | ||
| + | |||
| + | NAME STATE READ WRITE CKSUM | ||
| + | extern03 | ||
| + | mirror-0 | ||
| + | da1 | ||
| + | da2 | ||
| + | |||
| + | errors: No known data errors | ||
| + | </ | ||
| + | |||
| + | ... this will take a while ... | ||
| + | |||
| + | ... and once it has finished, we can clear the error log: | ||
| + | > zpool clear extern03 | ||
| + | |||
| + | and then it looks as good as new again: | ||
| + | > zpool status extern03 | ||
| + | < | ||
| + | pool: extern03 | ||
| + | | ||
| + | scan: resilvered 220K in 0h0m with 0 errors on Tue Feb 4 10:10:25 2014 | ||
| + | config: | ||
| + | |||
| + | NAME STATE READ WRITE CKSUM | ||
| + | extern03 | ||
| + | mirror-0 | ||
| + | da1 | ||
| + | da2 | ||
| + | |||
| + | errors: No known data errors | ||
| + | </ | ||
| + | |||
| + | |||
| + | |||
| + | ===== ZFS Tuning ===== | ||
| + | |||
| + | * [[http:// | ||
| + | * [[http:// | ||
| + | * [[http:// | ||
| + | * [[http:// | ||
| + | |||
| + | **Tuning is Evil** | ||
| + | |||
| + | Tuning is often evil and should rarely be done. | ||
| + | |||
| + | First, consider that the default values are set by the people who know the most about the effects of the tuning on the software that they supply. If a better value exists, it should be the default. While alternative values might help a given workload, it could quite possibly degrade some other aspects of performance. Occasionally, | ||
| + | |||
| + | Interesting options: | ||
| + | * " | ||
| + | * " | ||
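A classic example of such a knob (not necessarily one of the truncated options above, and only worth touching with the warning above in mind) is capping the ARC so ZFS leaves RAM for other services. On FreeBSD this is a loader tunable; the 4 GiB value below is a made-up example:

```
# /boot/loader.conf
vfs.zfs.arc_max="4294967296"   # cap the ARC at 4 GiB (value in bytes);
                               # newer OpenZFS also spells this vfs.zfs.arc.max
```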
| + | |||
| + | |||
| + | ===== Trivia ===== | ||
| + | |||
| + | Storing or transmitting a unit of information (e.g. one bit) is tied to storing or transmitting energy, because information cannot exist without a medium; that is, information depends on the existence of distinguishable states. | ||
| + | Since energy is quantized (there is a smallest, indivisible amount of energy), a minimum amount of energy per unit of information is required, otherwise the information is lost. | ||
| + | Filling a storage pool with 128-bit addressing would require more energy than it would take to evaporate the Earth's oceans. | ||
| + | For this reason, the capacity of ZFS is expected to be sufficient //forever//. | ||
| + | |||
| + | |||
| + | ===== Problems / Errors ===== | ||
| + | |||
| + | Aug 2 07:09:23 freebsd12 kernel: ahcich5: Timeout on slot 20 port 0 | ||
| + | Aug 2 07:09:23 freebsd12 kernel: ahcich5: is 00000000 cs 00000000 ss 00000000 rs 00100000 tfd 50 serr 00000000 cmd 00711417 | ||
| + | |||
| + | Apparently, [[:: | ||
| + | |||
| + | After I removed this SATA controller with its Marvell chipset from the system, the problem was gone. | ||
| + | |||
| + | |||
| + | ===== Removing outdated ZFS boot environments ===== | ||
| + | |||
| + | To clean up outdated ZFS boot environments (BEs), you can use the bectl (Boot Environment Control) tool. | ||
| + | |||
| + | <code bash zfs list -S used> | ||
| + | NAME USED AVAIL REFER MOUNTPOINT | ||
| + | ... | ||
| + | zroot/ | ||
| + | zroot/ | ||
| + | zroot/ | ||
| + | zroot/ | ||
| + | zroot/ | ||
| + | zroot/ | ||
| + | zroot/ | ||
| + | zroot/ | ||
| + | zroot/ | ||
| + | zroot/ | ||
| + | zroot/ | ||
| + | zroot/ | ||
| + | zroot/ | ||
| + | zroot/ | ||
| + | zroot/ | ||
| + | zroot/ | ||
| + | zroot/ | ||
| + | zroot/ | ||
| + | </ | ||
| + | |||
| + | <code bash bectl list> | ||
| + | BE | ||
| + | 13.2-RELEASE-p2_2023-10-01_000223 | ||
| + | 13.2-RELEASE-p3_2023-10-05_224449 | ||
| + | 13.2-RELEASE-p4_2024-02-09_215133 | ||
| + | 13.2-RELEASE-p9_2024-02-09_221758 | ||
| + | 14.0-RELEASE-p11_2024-10-20_004826 - - 376M 2024-10-20 00:48 | ||
| + | 14.0-RELEASE-p11_2024-10-20_032728 - - 113M 2024-10-20 03:27 | ||
| + | 14.0-RELEASE-p11_2024-10-20_114546 - - 3.07M 2024-10-20 11:45 | ||
| + | 14.0-RELEASE-p5_2024-10-20_001720 | ||
| + | 14.1-RELEASE-p5_2024-10-20_115154 | ||
| + | 14.1-RELEASE-p5_2024-12-23_221120 | ||
| + | 14.1-RELEASE-p6_2024-12-24_000858 | ||
| + | 14.2-RELEASE-p2_2025-03-18_192014 | ||
| + | 14.2-RELEASE-p2_2025-08-09_013852 | ||
| + | 14.2-RELEASE-p5_2025-08-09_142010 | ||
| + | 14.2-RELEASE_2024-12-24_033100 | ||
| + | 14.2-RELEASE_2025-03-18_095706 | ||
| + | 14.3-RELEASE-p2_2025-08-09_162348 | ||
| + | 14.3-RELEASE-p2_2025-08-09_163646 | ||
| + | default | ||
| + | </ | ||
| + | |||
| + | <code bash bectl destroy 14.0-RELEASE-p...> | ||
| + | bectl destroy 14.0-RELEASE-p11_2024-10-20_004826 | ||
| + | bectl destroy 14.0-RELEASE-p11_2024-10-20_032728 | ||
| + | bectl destroy 14.0-RELEASE-p11_2024-10-20_114546 | ||
| + | bectl destroy 14.0-RELEASE-p5_2024-10-20_001720 | ||
| + | </ | ||
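The repeated ''bectl destroy'' calls can be scripted. A sketch that assumes the tab-separated ''bectl list -H'' output (column 2 carries the Active flags, where ''-'' means the BE is neither booted now nor selected for the next boot); the helper name is made up, and the candidate list should always be reviewed before destroying anything.

```shell
# Print boot environments whose Active column is "-" (inactive).
# The function name is hypothetical; it only filters, it destroys nothing.
list_inactive_bes() {
    awk -F'\t' '$2 == "-" { print $1 }'
}

# On FreeBSD (requires bectl and root):
#   bectl list -H | list_inactive_bes                             # review first!
#   bectl list -H | list_inactive_bes | xargs -n 1 bectl destroy
```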
| + | |||
