This is the mail archive of the
cluster-cvs@sourceware.org
mailing list for the cluster project.
gfs2-utils: master - docs: Update some docs
- From: Steven Whitehouse <swhiteho at fedoraproject dot org>
- To: cluster-cvs-relay at redhat dot com
- Date: Fri, 23 Jan 2009 10:37:34 +0000 (UTC)
- Subject: gfs2-utils: master - docs: Update some docs
Gitweb: http://git.fedorahosted.org/git/gfs2-utils.git?p=gfs2-utils.git;a=commitdiff;h=7f04d2c9e8ed336c68e02400bf9aead149a1b98e
Commit: 7f04d2c9e8ed336c68e02400bf9aead149a1b98e
Parent: a5d04757de59590596cb810fc55b95e4babee4e1
Author: Steven Whitehouse <swhiteho@redhat.com>
AuthorDate: Fri Jan 23 09:12:24 2009 +0000
Committer: Steven Whitehouse <swhiteho@redhat.com>
CommitterDate: Fri Jan 23 09:12:24 2009 +0000
docs: Update some docs
A lot of this was rather out of date, so I've removed the worst
offender and updated the rest so that it's correct again, and
at least a lot less confusing than it was.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
---
doc/gfs2.txt | 27 +++++----
doc/journaling.txt | 11 ++++
doc/min-gfs.txt | 159 ----------------------------------------------------
doc/usage.txt | 24 ++------
4 files changed, 31 insertions(+), 190 deletions(-)
diff --git a/doc/gfs2.txt b/doc/gfs2.txt
index 88f0143..f2660d8 100644
--- a/doc/gfs2.txt
+++ b/doc/gfs2.txt
@@ -3,17 +3,16 @@ Global File System
http://sources.redhat.com/cluster/
-GFS is a cluster file system. It allows a cluster of computers to
+GFS2 is a cluster file system. It allows a cluster of computers to
simultaneously use a block device that is shared between them (with FC,
-iSCSI, NBD, etc). GFS reads and writes to the block device like a local
+iSCSI, NBD, etc). GFS2 reads and writes to the block device like a local
file system, but also uses a lock module to allow the computers to coordinate
their I/O so file system consistency is maintained. One of the nifty
-features of GFS is perfect consistency -- changes made to the file system
+features of GFS2 is perfect consistency -- changes made to the file system
on one machine show up immediately on all other machines in the cluster.
-GFS uses interchangable inter-node locking mechanisms. Different lock
-modules can plug into GFS and each file system selects the appropriate
-lock module at mount time. Lock modules include:
+GFS2 uses interchangeable inter-node locking mechanisms. The currently
+supported methods are:
lock_nolock -- does no real locking and allows gfs to be used as a
local file system
@@ -21,25 +20,27 @@ lock module at mount time. Lock modules include:
lock_dlm -- uses a distributed lock manager (dlm) for inter-node locking
The dlm is found at linux/fs/dlm/
-In addition to interfacing with an external locking manager, a gfs lock
-module is responsible for interacting with external cluster management
-systems. Lock_dlm depends on user space cluster management systems found
+Lock_dlm depends on user space cluster management systems found
at the URL above.
-To use gfs as a local file system, no external clustering systems are
+To use GFS2 as a local file system, no external clustering systems are
needed, simply:
$ gfs2_mkfs -p lock_nolock -j 1 /dev/block_device
$ mount -t gfs2 /dev/block_device /dir
-GFS2 is not on-disk compatible with previous versions of GFS.
+GFS2 is not on-disk compatible with previous versions of GFS, but it does
+use a very similar on-disk format, so that upgrading a filesystem can be
+done in place and makes relatively few changes. Upgrading a filesystem
+to GFS2 is not currently reversible.
The following man pages can be found at the URL above:
- gfs2_mkfs to make a filesystem
- gfs2_fsck to repair a filesystem
+ mkfs.gfs2 to make a filesystem
+ fsck.gfs2 to repair a filesystem
gfs2_grow to expand a filesystem online
gfs2_jadd to add journals to a filesystem online
gfs2_tool to manipulate, examine and tune a filesystem
gfs2_quota to examine and change quota values in a filesystem
+ gfs2_convert to convert a gfs filesystem to gfs2
mount.gfs2 to find mount options
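
As a rough sketch of the in-place upgrade path that gfs2.txt now describes
(hedged: the device path is a placeholder, the old-format checker may be
named gfs_fsck on your install, and gfs2_convert's exact requirements should
be checked against its man page before touching real data, since the
conversion is irreversible):

```shell
# Unmount the gfs filesystem on ALL cluster nodes first, then on one node:
gfs_fsck /dev/block_device       # ensure the old gfs filesystem is clean
gfs2_convert /dev/block_device   # in-place, one-way conversion to gfs2
fsck.gfs2 /dev/block_device      # verify the converted filesystem
mount -t gfs2 /dev/block_device /dir
```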
diff --git a/doc/journaling.txt b/doc/journaling.txt
index e89eefa..955885a 100644
--- a/doc/journaling.txt
+++ b/doc/journaling.txt
@@ -153,3 +153,14 @@ subsystem.
KWP 07/06/05
+Further notes (Steven Whitehouse)
+-------------
+
+Number 3 is slow due to having to do two write/wait transactions
+in the log each time we release a glock. So far as I can see there
+is no way around that, but it should be possible, if we so wish, to
+change to using #2 at some future date and still remain backward
+compatible. So that option is open to us, but I'm not sure that we
+want to take it yet. There may well be other ways to speed things
+up in this area. More work remains to be done.
+
diff --git a/doc/min-gfs.txt b/doc/min-gfs.txt
deleted file mode 100644
index af1399c..0000000
--- a/doc/min-gfs.txt
+++ /dev/null
@@ -1,159 +0,0 @@
-
-Minimum GFS HowTo
------------------
-
-The following gfs configuration requires a minimum amount of hardware and
-no expensive storage system. It's the cheapest and quickest way to "play"
-with gfs.
-
-
- ---------- ----------
- | GNBD | | GNBD |
- | client | | client | <-- these nodes use gfs
- | node2 | | node3 |
- ---------- ----------
- | |
- ------------------ IP network
- |
- ----------
- | GNBD |
- | server | <-- this node doesn't use gfs
- | node1 |
- ----------
-
-- There are three machines to use with hostnames: node1, node2, node3
-
-- node1 has an extra disk /dev/sda1 to use for gfs
- (this could be hda1 or an lvm LV or an md device)
-
-- node1 will use gnbd to export this disk to node2 and node3
-
-- Node1 cannot use gfs, it only acts as a gnbd server.
- (Node1 will /not/ actually be part of the cluster since it is only
- running the gnbd server.)
-
-- Only node2 and node3 will be in the cluster and use gfs.
- (A two-node cluster is a special case for cman, noted in the config below.)
-
-- There's not much point to using clvm in this setup so it's left out.
-
-- Download the "cluster" source tree.
-
-- Build and install from the cluster source tree. (The kernel components
- are not required on node1 which will only need the gnbd_serv program.)
-
- cd cluster
- ./configure --kernel_src=/path/to/kernel
- make; make install
-
-- Create /etc/cluster/cluster.conf on node2 with the following contents:
-
-<?xml version="1.0"?>
-<cluster name="gamma" config_version="1">
-
-<cman two_node="1" expected_votes="1">
-</cman>
-
-<clusternodes>
-<clusternode name="node2">
- <fence>
- <method name="single">
- <device name="gnbd" ipaddr="node2"/>
- </method>
- </fence>
-</clusternode>
-
-<clusternode name="node3">
- <fence>
- <method name="single">
- <device name="gnbd" ipaddr="node3"/>
- </method>
- </fence>
-</clusternode>
-</clusternodes>
-
-<fencedevices>
- <fencedevice name="gnbd" agent="fence_gnbd" servers="node1"/>
-</fencedevices>
-
-</cluster>
-
-
-- load kernel modules on nodes
-
-node2 and node3> modprobe gnbd
-node2 and node3> modprobe gfs
-node2 and node3> modprobe lock_dlm
-
-- run the following commands
-
-node1> gnbd_serv -n
-node1> gnbd_export -c -d /dev/sda1 -e global_disk
-
-node2 and node3> gnbd_import -n -i node1
-node2 and node3> ccsd
-node2 and node3> cman_tool join
-node2 and node3> fence_tool join
-
-node2> gfs_mkfs -p lock_dlm -t gamma:gfs1 -j 2 /dev/gnbd/global_disk
-
-node2 and node3> mount -t gfs /dev/gnbd/global_disk /mnt
-
-- the end, you now have a gfs file system mounted on node2 and node3
-
-
-Appendix A
-----------
-
-To use manual fencing instead of gnbd fencing, the cluster.conf file
-would look like this:
-
-<?xml version="1.0"?>
-<cluster name="gamma" config_version="1">
-
-<cman two_node="1" expected_votes="1">
-</cman>
-
-<clusternodes>
-<clusternode name="node2">
- <fence>
- <method name="single">
- <device name="manual" ipaddr="node2"/>
- </method>
- </fence>
-</clusternode>
-
-<clusternode name="node3">
- <fence>
- <method name="single">
- <device name="manual" ipaddr="node3"/>
- </method>
- </fence>
-</clusternode>
-</clusternodes>
-
-<fencedevices>
- <fencedevice name="manual" agent="fence_manual"/>
-</fencedevices>
-
-</cluster>
-
-
-FAQ
----
-
-- Why can't node1 use gfs, too?
-
-You might be able to make it work, but we recommend that you not try.
-This software was not intended or designed to allow that kind of usage.
-
-- Isn't node1 a single point of failure? How do I avoid that?
-
-Yes it is. For the time being, there's no way to avoid that, apart from
-not using gnbd, of course. Eventually, there will be a way to avoid this
-using cluster mirroring.
-
-- More info from
- http://sources.redhat.com/cluster/gnbd/gnbd_usage.txt
- http://sources.redhat.com/cluster/doc/usage.txt
-
diff --git a/doc/usage.txt b/doc/usage.txt
index f9e2866..2ad5091 100644
--- a/doc/usage.txt
+++ b/doc/usage.txt
@@ -1,4 +1,4 @@
-How to install and run GFS.
+How to install and run GFS2.
Refer to the cluster project page for the latest information.
http://sources.redhat.com/cluster/
@@ -7,15 +7,8 @@ http://sources.redhat.com/cluster/
Install
-------
-Install a Linux kernel with GFS2, DLM, configfs, IPV6 and SCTP,
- 2.6.23-rc1 or later
-
- If you want to use gfs1 (from cluster/gfs-kernel), then you need to
- export three additional symbols from gfs2 by adding the following lines
- to the end of linux/fs/gfs2/locking.c:
- EXPORT_SYMBOL_GPL(gfs2_unmount_lockproto);
- EXPORT_SYMBOL_GPL(gfs2_mount_lockproto);
- EXPORT_SYMBOL_GPL(gfs2_withdraw_lockproto);
+Install a Linux kernel with GFS2, DLM, HOTPLUG, LBD, CONFIGFS, and
+optionally IPV6.
Install openais
get the latest "whitetank" (stable) release from
@@ -51,15 +44,10 @@ Install LVM2/CLVM (optional)
NOTE: On 64-bit systems, you will usually need to add '--libdir=/usr/lib64'
to the configure line.
-Load kernel modules
--------------------
-
-modprobe gfs2
-modprobe gfs
-modprobe lock_dlm
-modprobe lock_nolock
-modprobe dlm
+ .... or alternatively, just get the packages from your friendly,
+neighbourhood distro, e.g. the gfs2-utils and cman packages from
+Fedora.
Configuration
-------------
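
As a concrete instance of the distro-package route usage.txt now suggests
(assuming a yum-based Fedora release; package names and the package manager
vary between distributions):

```shell
yum install gfs2-utils cman
```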