Page revisions: 2016-01-22 [Clusterfilesysteme] by manfred; 2016-04-12 external edit 127.0.0.1 (current).
====== Cluster File Systems ======

  * [[http://kloog.de/Wiki-DataGrids.cj]]
  * [[http://itp.tugraz.at/~ahi/admin/Entwicklung.html]]
  * [[http://pubs.gpaterno.com//2010/dublin_ossbarcamp_2010_fs_comparison.pdf]] - comparison of NFS, GFS2 and OCFS2 (NFS becomes very slow with many nodes; OCFS2 is by far the fastest file system in this comparison)
  * [[http://www.linux-magazin.de/Online-Artikel/GFS2-und-OCFS2-zwei-Cluster-Dateisysteme-im-Linux-Kernel|GFS2 und OCFS2, zwei Cluster-Dateisysteme im Linux-Kernel]]

Terminology:
  * [[https://de.wikipedia.org/wiki/Direct_Attached_Storage|DAS]] - Direct Attached Storage (e.g. a local hard disk or a USB stick; the storage is used by a single system only)
  * [[https://de.wikipedia.org/wiki/Network_Attached_Storage|NAS]] - Network Attached Storage (e.g. an NFS or Samba server; storage shared over the network)
  * [[https://de.wikipedia.org/wiki/Storage_Area_Network|SAN]] - Storage Area Network (e.g. iSCSI, Fibre Channel; multiple drives can be used by various systems on the network)
  * [[https://de.wikipedia.org/wiki/Oracle_RAC|RAC]] - Oracle Real Application Clusters (several database instances write to the same database directory)
  * [[https://de.wikipedia.org/wiki/Oracle_Automatic_Storage_Management_Cluster_File_System|ACFS]] - Oracle's successor to [[OCFS2]] - Oracle Automatic Storage Management Cluster File System (ASM Cluster File System) -> //Oracle handed further development of [[OCFS2]] on Linux over to the open source community. [[OCFS2]] for Windows has been discontinued entirely in favour of ACFS.//
    * ''ASM'' - Automatic Storage Management, a logical volume manager
  * [[https://de.wikipedia.org/wiki/StorNext_FS|SNFS]] - StorNext FS (formerly ADIC) -> //StorNext FS was previously sold under the name CentraVision (CVFS).//
    * [[https://de.wikipedia.org/wiki/StorNext_FS|Xsan]] - Apple's cluster file system -> //a special variant of StorNext FS distributed by Apple for Mac OS X//

===== BeeGFS =====

Fraunhofer Institute releases the [[http://www.beegfs.com/content/|BeeGFS]] source code -> [[http://www.pro-linux.de/news/1/23291/fraunhofer-institut-veroeffentlicht-beegfs-quellcode.html]]

**The parallel file system BeeGFS is becoming free software. BeeGFS has been developed for several years by ThinkParQ GmbH, a spin-off of the Fraunhofer Institute for Industrial Mathematics (ITWM) in Kaiserslautern, for performance-critical environments. BeeGFS aims to be easy to install and very flexible, and to suit applications in which storage systems handle compute-intensive workloads.**

By Falko Benthin (Thu, 25 February 2016, 13:26)

The ITWM had already announced at the International Supercomputing Conference in 2013 that it would publish the BeeGFS sources. The announcement was made as part of the European exascale project DEEP-ER, in which new approaches for extreme I/O requirements are devised and implemented. "While many of our users are happy that BeeGFS is easy to install and does not need much attention, others want to understand exactly what happens under the hood in order to further optimise the runtime of their applications, improve monitoring, or port BeeGFS to other platforms such as BSD. Another important aspect is that the community is waiting for it to become possible to build BeeGFS for non-x86 architectures such as ARM or Power," says Sven Breuner, managing director of ThinkParQ.

The BeeGFS team is already involved in ExaNeSt, a European exascale project aimed specifically at using the ARM ecosystem for performance-critical workloads. "Although BeeGFS is already usable on ARM systems out of the box today, the project allows us to make sure that we can deliver the best results on this architecture as well," says Bernd Lietzow, BeeGFS lead for ExaNeSt.

BeeGFS clients communicate with the storage system via TCP/IP or InfiniBand. BeeGFS distinguishes between metadata and object data. Object data is the user's actual data; the metadata records access permissions, file sizes, and the servers on which the parts of a file can be found. Both the number of object storage servers and the number of metadata servers are scalable. In addition there is a management server, which makes sure that all processes participating in BeeGFS can communicate with each other.
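The chunk distribution described above can be sketched with a small calculation. The chunk size and target count below are made-up example values (not BeeGFS defaults), and the plain round-robin mapping is a simplification of what the metadata server actually stores per file:

```shell
#!/bin/sh
# Simplified sketch: a striped file is cut into fixed-size chunks,
# and chunks are spread round-robin over the storage targets.
# The metadata server records the target list for each file.
CHUNK_SIZE=$((512 * 1024))   # example: 512 KiB chunks
NUM_TARGETS=4                # example: 4 object storage servers

# Which target holds the chunk containing a given byte offset?
target_for_offset() {
    chunk_index=$(( $1 / CHUNK_SIZE ))
    echo $(( chunk_index % NUM_TARGETS ))
}

target_for_offset 0                      # first chunk
target_for_offset $(( 3 * 512 * 1024 ))  # fourth chunk
```

This is only meant to make the metadata/object-data split tangible; real striping patterns are configurable per directory.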

BeeGFS works on top of the common POSIX-compatible file systems such as ext4 or XFS and runs on Red Hat Enterprise Linux, SUSE Linux Enterprise Server and Debian as well as their derivatives. With roughly 25,000 lines of code for the metadata service and 15,000 lines for the storage service, BeeGFS is quite compact. The source archive is available on the BeeGFS project website, although still under the BeeGFS EULA.


===== OpenAFS =====

  * [[http://www.openafs.org/]]
  * [[http://de.wikipedia.org/wiki/Andrew_File_System]]
  * [[http://www.openafs.at/drupal/files/slides/1Day_03/AFS-OSD.pdf]]
  * [[http://www.dementia.org/twiki/bin/view/AFSLore/]]

In the long run AFS will no longer be developed, because it has too many weaknesses that its two successors (Coda and DFS) do not share.

===== DCE-DFS =====

  * [[http://www.opengroup.org/dce]]
  * [[http://www.entegrity.com/products/dce/dce.shtml]]
  * [[http://support.entegrity.com/private/technotes/dcelinux/v23/install/linuxdca.htm]]

DCE is a further development of AFS2.
It is a very comprehensive middleware product, probably the most complete one in existence.

Particular attention was paid to POSIX-conformant semantics for file consistency across cache boundaries - also known as single-site semantics.

===== OCFS2 =====

  * [[http://oss.oracle.com/projects/ocfs2/]]

OCFS2 does not support ACLs; support for extended attributes and SELinux is planned.

It is not WAN-capable => **//lower latency is highly recommended//**

**[[OCFS2]]**

===== Ceph =====

  * **[[http://ceph.com/resources/downloads/]]**
  * [[http://ceph.newdream.net/about/]]
  * [[http://ceph.newdream.net/wiki/]]
  * [[http://ceph.newdream.net/debian/]]
  * [[http://www.usenix.org/events/osdi06/tech/weil.html]]
  * [[http://kloog.de/Wiki-DataGrids.cj]]
  * [[http://kerneltrap.org/Linux/Ceph_Distributed_Network_File_System]]
  * [[http://sourceforge.net/projects/ceph/]]

Ceph also includes failure detection.

Seamless scaling / strong reliability / fast recovery / no single point of failure

==== Building from source ====

http://www.ece.umd.edu/~posulliv/ceph/cluster_build.html

  aptitude install linux-source libboost-dev autoconf automake libtool libedit-dev libssl-dev nfs-kernel-server

=== Ceph itself ===

  wget -c http://ceph.newdream.net/download/ceph-0.17.tar.gz
  tar xzf ceph-0.17.tar.gz
  cd ceph-0.17/
  ./autogen.sh
  ./configure --prefix=/opt/ceph
  make
  make install

=== Ceph kernel module ===

  wget -c http://ceph.newdream.net/download/ceph-kclient-source-0.17.tar.gz
  tar xzf ceph-kclient-source-0.17.tar.gz
  cd ceph-kclient-0.17
  make
  make modules_install
  depmod
  modprobe ceph

==== Packages (Debian i386/amd64) ====

  * [[http://ceph.newdream.net/wiki/Debian]]

  vi /etc/apt/sources.list
        deb http://ceph.newdream.net/debian/ stable main
        deb-src http://ceph.newdream.net/debian/ stable main

  aptitude update
  aptitude install libfcgi0ldbl
  aptitude install ceph ceph-kclient-source
        ....
        Hole:1 http://ceph.newdream.net stable/main ceph 0.17 [38,0MB]
        Hole:2 http://ceph.newdream.net stable/main ceph-fuse 0.17 [4.626kB]
        Hole:3 http://ceph.newdream.net stable/main libcrush 0.17 [30,0kB]
        Hole:4 http://ceph.newdream.net stable/main libceph 0.17 [5.144kB]
        Hole:5 http://ceph.newdream.net stable/main librados 0.17 [3.500kB]
        51,3MB wurden in 1Min 3s heruntergeladen (812kB/s)
        ....
  
  cd /usr/src/modules/ceph
  make
  make modules_install
  depmod
  modprobe ceph

  cp /etc/ceph/sample.ceph.conf /etc/ceph/ceph.conf
  vi /etc/ceph/ceph.conf
  ....

Ubuntu 14.04:
  > echo "deb http://ceph.com/debian-hammer/ trusty main" > /etc/apt/sources.list.d/ceph.list

==== mount ====

  mount -o remount,user_xattr /dev/cciss/c0d0p2 /home
  
  vi /etc/fstab
        ....
        ... defaults,user_xattr ...
        ....
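Whether a mount point already carries the option can also be checked without opening an editor. The helper below is a convenience sketch, not from the original page; it works on a sample fstab file so it is self-contained (on a real system you would point `FSTAB` at `/etc/fstab`):

```shell
#!/bin/sh
# Check whether a mount point in an fstab file has the user_xattr option.
FSTAB=/tmp/fstab.sample
cat > "$FSTAB" <<'EOF'
/dev/cciss/c0d0p2  /home  ext3  defaults,user_xattr  0  2
/dev/cciss/c0d0p1  /      ext3  defaults             0  1
EOF

has_user_xattr() {
    # field 2 = mount point, field 4 = mount options
    awk -v mp="$1" '$2 == mp { print $4 }' "$FSTAB" | grep -qw user_xattr
}

has_user_xattr /home && echo "/home: user_xattr set"
has_user_xattr /     || echo "/: user_xattr missing"
```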

===== GlusterFS =====

  * [[http://de.wikipedia.org/wiki/GlusterFS]]
  * [[http://gluster.org/pipermail/gluster-users/20080813/000235.html]]
  * [[http://www.techforce.com.br/news/linux_blog/glusterfs_tuning_small_files]]

Ubuntu writes: **//GlusterFS is the most sophisticated file system in terms of features and extensibility.//**

  * [[GlusterFS-Installation]]
  * [[GlusterFS einrichten]]


===== GFS2 (Global File System 2) =====

  * [[http://www.redhat.com/gfs/]]

GFS2 is currently fully supported only by Red Hat 5.3.

It is more finely tunable than OCFS2.

===== XtreemFS =====

  * [[http://www.xtreemfs.org/xtfs-guide-1.1/index.html]]
  * [[http://groups.google.com/group/xtreemfs/msg/af3a8a009311a7d3]]

XtreemFS is a project of the European Commission.

  * European Commission (for federated IT infrastructures)

Replication is not available yet except for read-only files (it does not support replication of mutable files).

WAN capability is a development goal.

Dynamic growing/shrinking works without problems.

----
  * [[http://www.xtreemfs.org/xtfs-guide-1.1/xtfs-guide.html#sec:config]]

==== 3.1.1 Prerequisites ====

  aptitude install linux-headers libfuse-dev libssl-dev default-jdk-builddep ant python make g++

  * [[http://www.xtreemfs.org/download.php?]]
  wget -c http://xtreemfs.googlecode.com/files/XtreemFS-1.1.0.tar.gz
  tar xzf XtreemFS-1.1.0.tar.gz
  cd XtreemFS-1.1.0

==== 3.1.3 Installing from Sources ====

  * [[http://www.xtreemfs.org/xtfs-guide-1.1/xtfs-guide.html#SECTION00413000000000000000]]
  make server
  make client
  make install
        to complete the server installation, please execute /etc/xos/xtreemfs/postinstall_setup.sh
  /etc/xos/xtreemfs/postinstall_setup.sh
        created user xtreemfs and data directory /var/lib/xtreemfs
  
  ls -la /var/lib/xtreemfs/

==== 3.2 Configuration ====

XtreemFS uses UUIDs (Universally Unique Identifiers) to be able to identify services and their associated state independently from the machine they are installed on. This implies that you cannot change the UUID of an MRC or OSD after it has been used for the first time!

  vi /etc/xos/xtreemfs/dirconfig.properties
        #uuid = default-DIR
        uuid = slave-DIR

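The same edit can be done non-interactively. The sketch below works on a throwaway copy so it can be tried safely; on a real node you would point `CONF` at `/etc/xos/xtreemfs/dirconfig.properties`, and, as stressed above, never change the UUID once the service has stored data:

```shell
#!/bin/sh
# Set the uuid key in a dirconfig.properties-style file with sed
# instead of editing it by hand. Uses a sample file for safety.
CONF=/tmp/dirconfig.properties
cat > "$CONF" <<'EOF'
uuid = default-DIR
listen.port = 32638
EOF

# Replace the whole uuid line, whatever its current value is.
sed -i 's/^uuid = .*/uuid = slave-DIR/' "$CONF"
grep '^uuid' "$CONF"
```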

  mkdir -p /etc/xos/xtreemfs/truststore/certs


If you change the "database.dir" variable in "dirconfig.properties" and "mrcconfig.properties", the home directory has to be adjusted as well!
        vi /etc/xos/xtreemfs/dirconfig.properties
                #database.dir = /var/lib/xtreemfs/dir/database
                database.dir = /home/xtreemfs/dir/database

        vi /etc/xos/xtreemfs/mrcconfig.properties
                #database.log = /var/lib/xtreemfs/mrc/db-log
                database.log = /home/xtreemfs/mrc/db-log
                #database.dir = /var/lib/xtreemfs/mrc/database
                database.dir = /home/xtreemfs/mrc/database

        vi /etc/xos/xtreemfs/osdconfig.properties
                #object_dir = /var/lib/xtreemfs/objs/
                object_dir = /home/xtreemfs/objs/

        mv /var/lib/xtreemfs/ /home/

        vipw
                xtreemfs:x:999:999::/home/xtreemfs:/bin/sh

  vi /etc/xos/xtreemfs/dirconfig.properties
        #listen.address = 10.10.10.2
        listen.address = 192.168.1.71

  vi /etc/xos/xtreemfs/mrcconfig.properties
        #listen.address = 10.10.10.2
        listen.address = 192.168.1.71
        #dir_service.host = 10.10.10.2
        dir_service.host = 192.168.1.71

  vi /etc/xos/xtreemfs/osdconfig.properties
        #listen.address = 10.10.10.2
        listen.address = 192.168.1.71
        #dir_service.host = 10.10.10.2
        dir_service.host = 192.168.1.71
        #checksums.enabled = false
        checksums.enabled = true
        #checksums.algorithm = Adler32
        checksums.algorithm = MD5

=== Start ===

  /etc/init.d/xtreemfs-dir start
  /etc/init.d/xtreemfs-dir status
  /etc/init.d/xtreemfs-mrc start
  /etc/init.d/xtreemfs-mrc status
  /etc/init.d/xtreemfs-osd start
  /etc/init.d/xtreemfs-osd status
  
  tail -f /var/log/xtreemfs/*.log

==== 4.2 Volume Management ====

  mkdir /xtreemfs1 /xtreemfs2

Create a simple volume:
  xtfs_mkvol 192.168.1.71/xtreemfs1

  xtfs_mount 192.168.1.71/xtreemfs1 /xtreemfs1
  xtfs_umount /xtreemfs1

Create a RAID volume:
  #xtfs_mkvol -p RAID0 -w 2 -s 256 -a POSIX oncrpc://192.168.1.71:32636/xtreemfs2
  xtfs_mkvol -p NONE -a POSIX oncrpc://192.168.1.71:32636/xtreemfs2
  
  xtfs_mount 192.168.1.71/xtreemfs2 /xtreemfs2
  xtfs_umount /xtreemfs2

List all volumes:
  xtfs_lsvol 192.168.1.71:32636

Delete a volume:
  xtfs_rmvol 192.168.1.71:32636/xtreemfs2

===== Coda =====

  * [[http://coda.wikidev.net/Main_Page]]
  * [[http://de.wikipedia.org/wiki/Coda_%28Dateisystem%29]]
  * [[http://www.mail-archive.com/codalist@coda.cs.cmu.edu/msg00241.html]]
  * [[http://www.coda.cs.cmu.edu/ljpaper/lj.html]]

Files only become "persistent" once they have been closed again, not before. The local cache has to be large enough, otherwise you get "Permission denied" errors.

WAN capability is a development goal.

The Coda distributed file system is considered experimental.

==== Installing the Debian packages ====

  * [[http://www.coda-users.de/]]
  * [[http://coda.wikidev.net/Quick_Kernel_Module_Action]]
  * [[http://www.linuxvirtualserver.org/HighAvailability.html]]
  * [[http://www.lvserver.de/german/HighAvailability.html]]
  * [[http://coda.wikidev.net/Quick_Server_Action]]
  * [[http://coda.wikidev.net/Quick_Client_Action]]
  * [[http://www.coda.cs.cmu.edu/doc/html/manual/index.html]]
  * [[http://www.coda.cs.cmu.edu/doc/html/manual-7.html]]
  * [[http://www.coda.cs.cmu.edu/doc/html/manual-19.html]]

The Coda kernel module is already included in the Linux and FreeBSD kernels.

The kernel module is only needed by the Coda client, not by the server!

  echo "
  # Coda (http://www.coda.cs.cmu.edu/mirrors.html)
  deb http://www.coda.cs.cmu.edu/debian stable/
  " >> /etc/apt/sources.list
  
  aptitude update
  aptitude upgrade

=== Coda client ===

  aptitude install coda-client
  modprobe coda
  echo coda >> /etc/modules

  * [[http://coda.wikidev.net/Quick_Client_Action]]
  
  * [[http://www.coda.cs.cmu.edu/doc/html/manual-7.html]]
  
  mkdir -p /usr/coda/venus.cache/ /coda /usr/coda/etc
  
  ### Cache area:
  ## the data is cached under "/usr/coda",
  ## so there must be enough space on that partition.
  
  venus-setup
        fritz@oqrmtestslave.idstein.victorvox.net
        100000
  
        Starting Coda client components: kernel
        Failed to get a valid pid from /usr/coda/venus.cache/pid
         venus
        Date: Mon 01/11/2010
  
        10:34:47 Coda Venus, version 6.9.4
        10:34:47 LogInit failed
        .

  venus -init -d 10
  /etc/init.d/coda-client stop
  /etc/init.d/coda-client start
        Starting Coda client components: kernel venus
        Date: Mon 01/11/2010
  
        11:02:18 Coda Venus, version 6.9.4
        11:02:18 /usr/coda/LOG size is 3038720 bytes
        11:02:18 /usr/coda/DATA size is 12144688 bytes
        11:02:18 Loading RVM data
        11:02:18 Last init was Mon Jan 11 11:00:15 2010
        11:02:18 Last shutdown was clean
        11:02:18 Starting RealmDB scan
        11:02:18        Found 1 realms
        11:02:18 starting VDB scan
        11:02:18        2 volume replicas
        11:02:18        0 replicated volumes
        11:02:18        0 CML entries allocated
        11:02:18        0 CML entries on free-list
        11:02:18 starting FSDB scan (4166, 100000) (25, 75, 4)
        11:02:18        1 cache files in table (0 blocks)
        11:02:18        4165 cache files on free-list
        11:02:18 starting HDB scan
        11:02:18        0 hdb entries in table
        11:02:18        0 hdb entries on free-list
        11:02:18 Kernel version ioctl failed.
        11:02:18 Mounting root volume...
        11:02:18 Venus starting...
        11:02:18 /coda now mounted.
        .

== Kerberos default configuration ==

  vi /etc/krb5.conf
        [libdefaults]
                default_realm = oqrmtestslave.idstein.victorvox.net
        [realms]
                oqrmtestslave.idstein.victorvox.net = {
                        kdc = 192.168.1.72
                        admin_server = 192.168.1.72
                }
        [domain_realm]
                .oqrmtestslave.idstein.victorvox.net = oqrmtestslave.idstein.victorvox.net
                oqrmtestslave.idstein.victorvox.net = oqrmtestslave.idstein.victorvox.net

  vi /etc/coda/venus.conf
        realm="fritz@oqrmtestslave.idstein.victorvox.net"
        cacheblocks="100000"

  /etc/init.d/coda-client stop
  /etc/init.d/coda-client start

  clog realmadmin@oqrmtestslave.idstein.victorvox.net
  cfs sa /coda/oqrmtestslave.idstein.victorvox.net fritz all
  cfs listacl /coda/oqrmtestslave.idstein.victorvox.net
  clog fritz@oqrmtestslave.idstein.victorvox.net
  ctokens fritz@oqrmtestslave.idstein.victorvox.net
  cfs lv /coda/oqrmtestslave.idstein.victorvox.net
  ls -la /coda/oqrmtestslave.idstein.victorvox.net/

== Increasing the cache ==

  vi /etc/coda/venus.conf
        #cacheblocks="100000"   # roughly 390 MB
        cacheblocks="1000000"   # roughly 3.9 GB
  touch /usr/coda/venus.cache/INIT
  /etc/init.d/coda-client restart
    ....
    17:15:57 Coda Venus, version 6.9.4
    17:15:57 /usr/coda/LOG size is 26766093 bytes
    17:15:58 /usr/coda/DATA size is 107064372 bytes
    17:15:58 Initializing RVM data...
    ....

  df -h
    coda                  3,9G      3,6G   0% /coda
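The comments above imply that one cache block corresponds to 4 KiB (100000 blocks ≈ 390 MB). Assuming that block size, which is inferred here from the page's own figures rather than from the Coda documentation, a desired cache size can be converted to a `cacheblocks` value:

```shell
#!/bin/sh
# Convert a desired venus cache size (MiB) into a cacheblocks value,
# assuming 4 KiB per cache block as implied by the comments above.
BLOCK_KIB=4

mib_to_cacheblocks() {
    echo $(( $1 * 1024 / BLOCK_KIB ))
}

mib_to_cacheblocks 390    # close to the cacheblocks="100000" example
mib_to_cacheblocks 4000   # close to the cacheblocks="1000000" example
```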

== Keeping all server data on the client as well ==

This tells the Coda client to keep the entire server content locally with the highest priority; otherwise data would only be stored locally when it is accessed.
The "walk" command starts the replication.
The local cache has to be large enough, otherwise this leads to "Permission denied" errors.

  hoard add /coda/oqrmtestslave.idstein.victorvox.net 1000:d+
  hoard walk

**Forcing a Coda client to shut down**
  cfs purgeml /coda/oqrmtestslave.idstein.victorvox.net/
        DANGER:   will destroy all changes made while disconnected
        Do you really want to do this? [n] y
        Fools rush in where angels fear to tread ........
  /etc/init.d/coda-client stop

== Creating a volume ==

  createvol_rep / oqrmtestslave.idstein.victorvox.net/vicepa
        Volume / already exists in /vice/db/VRList

  volutil create_rep /home/coda/ daten 1000

== Shutting down the Coda client ==

  * [[http://coda.wikidev.net/Quick_Client_Action#Shutdown_Coda_Client]]

  cunlog fritz@oqrmtestslave.idstein.victorvox.net
  vutil --shutdown
  umount /coda


=== Coda server ===

  aptitude install coda-server

  * [[http://coda.wikidev.net/Quick_Server_Action]]

  /usr/sbin/vice-setup
  Welcome to the Coda Server Setup script!
  
  Setting up config files for a coda server.
  Do you want the file /etc/coda/server.conf created? [yes]
  What is the root directory for your coda server(s)? [/vice]
  Setting up /vice.
  Directories under /vice are set up.
  
  Is this the master server, aka the SCM machine? (y/n) y
  
  Setting up tokens for authentication.
  The following token must be identical on all servers.
  Enter a random token for update authentication : qwertzuiop
  The following token must be identical on all servers.
  Enter a random token for auth2 authentication : asdfghjkl
  The following token must be identical on all servers.
  Enter a random token for volutil authentication : yxcvbnm,.
  tokens done!
  
  Setting up the file list for update client
  Filelist for update ready.
  Now installing files specific to the SCM...
  
  Setting up servers file.
  Enter an id for the SCM server. (hostname oqrmtestslave.idstein.victorvox.net)
  The serverid is a unique number between 0 and 255.
  You should avoid 0, 127, and 255.
  serverid: 1
  done!
  Setting up users and groups for Coda
  
  You need to give me a uid (not 0 or 1) and username (not root)
  for a Coda System:Administrator member on this server,
  (sort of a Coda super user)
  
  I will create the initial administrative user with Coda password
  "changeme". This user/password is only for authenticating with
  Coda and not for logging into your system (i.e. we don't use
  /etc/passwd authentication for Coda)
  
  Enter the uid of this user: 10000
  Enter the username of this user: realmadmin
  
  A server needs a small log file or disk partition, preferrably on a
  disk by itself. It also needs a metadata file or partition of approx
  4% of your filespace.
  
  Raw partitions have advantages because we can write to the disk
  faster, but we have to load a copy of the complete RVM data
  partition into memory. With files we can use a private mmap, which
  reduces memory pressure and speeds up server startup by several
  orders of magnitude.
  
  Servers with a smaller dataset but heavy write activity will
  probably benefit from partitions. Mostly read-only servers with a
  large dataset will definitely benefit from an RVM data file. Nobody
  has really measured where the breakeven point is, so I cannot
  really give any hard numbers.
  
  -------------------------------------------------------
  WARNING: you are going to play with your partitions now.
  verify all answers you give.
  -------------------------------------------------------
  
  WARNING: these choices are not easy to change once you are up and running.
  
  Are you ready to set up RVM? [yes/no] yes
  
  What will be your log file (or partition)? /vice/log
  
  The log size must be smaller than the available space in the log
  partition. A smaller log will be quicker to commit, but the log
  needs to be large enough to handle the largest transaction. A
  larger log also allows for better optimizations. We recommend
  to keep the log under 30M log size, many people have successfully
  used as little as 2M, and 20M has worked well with our servers.
  What is your log size? (enter as e.g. '20M') 20M
  
  Where is your data file (or partition)? /vice/metadata
  
  The log size must be smaller than the available space in the log
  partition. A smaller log will be quicker to commit, but the log
  needs to be large enough to handle the largest transaction. A
  larger log also allows for better optimizations. We recommend
  to keep the log under 30M log size, many people have successfully
  used as little as 2M, and 20M has worked well with our servers.
  What is your log size? (enter as e.g. '20M') 20M
  
  Where is your data file (or partition)? /vice/data
  
  The amount of RVM we need to store the metadata for a given
  amount file space can vary enormously. If your typical data set
  consists of many small files, you definitely need more RVM, but
  if you tend to store large files (mp3s, videos or image data)
  we don't need all that much RVM.
  
  Here are some random samples,
    mp3 files     ~0.08MB RVM per GB.
    jpeg images   ~0.50MB RVM per GB.
    email folders ~37.8MB RVM per GB (maildir, 1 file per message)
    netbsd-pkgsrc  ~180MB RVM per GB (large tree but not much data)
  
  To get a more precize number for your dataset there is a small
  tool (rvmsizer) which can reasonably predict the amount of RVM
  data we need for a file tree.
  
  Remember that RVM data will have to be mmapped or loaded
  into memory, so if anything fails with an error like
  RVM_EINTERNAL you might have to add more swap space.
  
  What is the size of you data file (or partition)
  [32M, 64M, 128M, 256M, 512M, 768M, 1G]: 1G
  
  !!!!!!!!!!!!!!
  Your size is an experimental size. Be warned!
  You may want to run with private mapping for RVM.
  
  
  --------------------------------------------------------
  WARNING: DATA and LOG partitions are about to be wiped.
  --------------------------------------------------------
  
    --- log area: /vice/metadata, size 20M.
    --- data area: /vice/data, size 1024 MB.
  
  Proceed, and wipe out old data? [y/n] y
  
  
  LOG file has been initialized!
  
  
  Rdsinit will initialize data and log.
  This takes a while.
  rvm_initialize succeeded.
  Going to initialize data file to zero, could take awhile.
  done.
  rds_zap_heap completed successfully.
  rvm_terminate succeeded.
  
  RVM setup is done!
  
  
  Directories on the server will be used to store container files
  that hold the actual data of files stored in Coda. Directory
  contents as well as metadata will be stored in the RVM segment
  that we already configured earlier.
  
  You should only have one container file hierarchy for each disk
  partition, otherwise the server will generate incorrect
  estimates about the actual amount of exportable disk space.
  
  Where shall we store your file data [/vicepa]? /home/vicepa
  Shall I set up a vicetab entry for /vicepa (y/n) y
  Select the maximum number of files for the server.
  [256K, 1M, 2M, 16M]:
  16M
  
  Server directory /home/vicepa is set up!
  
  Congratulations: your configuration is ready...
  
  Shall I try to get things started? (y/n) y
   - Coda authentication server (auth2 &)
   - Coda update server (updatesrv)
   - Coda update client (updateclnt -h oqrmtestslave.idstein.victorvox.net)
  Creating /vice/spool
   - Coda file server (startserver)
  
  
  Nice, it looks like everything went ok
  Now I'll try to create an initial root volume
   - createvol_rep / oqrmtestslave.idstein.victorvox.net/home/vicepa
  Replicated volumeid is 7f000000
  creating volume /.0 on oqrmtestslave.idstein.victorvox.net (partition /home/vicepa)
  V_BindToServer: binding to host oqrmtestslave.idstein.victorvox.net
  V_BindToServer: binding to host oqrmtestslave.idstein.victorvox.net
  Set Log parameters
  Fetching volume lists from servers:
  V_BindToServer: binding to host oqrmtestslave.idstein.victorvox.net
  GetVolumeList finished successfully
   oqrmtestslave.idstein.victorvox.net - success
  V_BindToServer: binding to host oqrmtestslave
  VLDB completed.
  <echo / 7f000000 1 01000001 0 0 0 0 0 0 0 >> /vice/db/VRList.new>
  V_BindToServer: binding to host oqrmtestslave
  VRDB completed.
  Do you wish this volume to be Backed Up (y/n)? [n] y
  Day to take full dumps: [Mon]
  echoing         IFIIIII        / >>/vice/db/dumplist
  
  That seems to have worked...
  If you have a working Coda client you should now be able to
  access the new Coda realm
   - cfs lv /coda/oqrmtestslave.idstein.victorvox.net/
  
  enjoy Coda.
   for more information see http://www.coda.cs.cmu.edu.
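The rule-of-thumb figures printed by vice-setup above (roughly 0.5 MB RVM per GB of jpeg images, 37.8 MB per GB of maildir folders, and so on) can be turned into a quick estimate. The helper below is just a convenience sketch around those numbers; for a real answer the dialog itself recommends rvmsizer:

```shell
#!/bin/sh
# Estimate the RVM data size for a server, using the MB-per-GB
# rules of thumb from the vice-setup dialog above.
# Usage: rvm_estimate <GB of file data> <MB RVM per GB>
rvm_estimate() {
    awk -v gb="$1" -v per_gb="$2" 'BEGIN { printf "%.1f MB\n", gb * per_gb }'
}

rvm_estimate 100 0.5    # 100 GB of jpeg images
rvm_estimate 10  37.8   # 10 GB of maildir email
```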

== Getting write access ==

  ### Client:
  echo "Test 01" > /coda/oqrmtestslave.idstein.victorvox.net/Test.txt
  -bash: /coda/oqrmtestslave.idstein.victorvox.net/Test.txt: Permission denied
  # clog realmadmin@oqrmtestslave.idstein.victorvox.net
  username: realmadmin@oqrmtestslave.idstein.victorvox.net
  Password:
  # ctokens
  Tokens held by the Cache Manager for root:
      @oqrmtestslave.idstein.victorvox.net
          Coda user id:    10000
          Expiration time: Sat Jan 16 16:47:56 2010
  # echo "Test 01" > /coda/oqrmtestslave.idstein.victorvox.net/Test.txt
  # cat /coda/oqrmtestslave.idstein.victorvox.net/Test.txt
  Test 01
  
  ### Change the password
  # cpasswd realmadmin@oqrmtestslave.idstein.victorvox.net
  Changing password for realmadmin@oqrmtestslave.idstein.victorvox.net
  Old password:
  New password:
  Retype new password:
  Password changed

== Creating another volume ==

  ### Server:
  createvol_rep /vicepb/ oqrmtestslave.idstein.victorvox.net
        Replicated volumeid is 7f000001
        creating volume /vicepb/.0 on oqrmtestslave.idstein.victorvox.net (partition /vicepb)
        V_BindToServer: binding to host oqrmtestslave.idstein.victorvox.net
        V_BindToServer: binding to host oqrmtestslave.idstein.victorvox.net
        Set Log parameters
        Fetching volume lists from servers:
        V_BindToServer: binding to host oqrmtestslave.idstein.victorvox.net
        GetVolumeList finished successfully
        oqrmtestslave.idstein.victorvox.net - success
        V_BindToServer: binding to host oqrmtestslave
        VLDB completed.
        <echo /vicepb/ 7f000001 1 01000001 0 0 0 0 0 0 0 >> /vice/db/VRList.new>
        V_BindToServer: binding to host oqrmtestslave
        VRDB completed.
        Do you wish this volume to be Backed Up (y/n)? [n] y
        Day to take full dumps: [Mon]
        echoing         IFIIIII        /vicepb/ >>/vice/db/dumplist

  ### Client:
  # cfs sa /coda/oqrmtestslave.idstein.victorvox.net fritz all
  /coda/oqrmtestslave.idstein.victorvox.net: Permission denied

  # cfs la /coda/oqrmtestslave.idstein.victorvox.net
      System:AnyUser  rl
  System:Administrators  rlidwka

  # clog realmadmin@oqrmtestslave.idstein.victorvox.net
  username: realmadmin@oqrmtestslave.idstein.victorvox.net
  Password:

  # cfs sa /coda/oqrmtestslave.idstein.victorvox.net fritz all

  # cfs la /coda/oqrmtestslave.idstein.victorvox.net
      System:AnyUser  rl
  System:Administrators  rlidwka
               fritz  rlidwka

  # clog fritz@oqrmtestslave.idstein.victorvox.net
  username: fritz@oqrmtestslave.idstein.victorvox.net
  Password:

  # cfs lv /coda/oqrmtestslave.idstein.victorvox.net
    Status of volume 7f000000 (2130706432) named "/"
    Volume type is ReadWrite
    Connection State is Reachable
    Reintegration age: 0 sec, time 15.000 sec
    Minimum quota is 0, maximum quota is unlimited
    Current blocks used are 2
    The partition has 2400412 blocks available out of 7784424

  # ls -l /coda/oqrmtestslave.idstein.victorvox.net/
  total 0
----

  ### Server:
  ## Delete a volume
  # volutil purge 7f000000 /

----

  echo list | pdbtool
  echo "n fritz" | pdbtool
  echo list | pdbtool
  cpasswd -h 192.168.1.72 fritz

----

  cp /etc/coda/server.conf.ex /etc/coda/server.conf
  vi /etc/coda/server.conf
        vicedir=/vice

  mkdir /vice/srv

  /etc/init.d/auth2.init start
  /etc/init.d/codasrv.init start