How to build, install, and run GNBD.

Please refer to the gnbd page for the latest information:
http://sources.redhat.com/cluster/gnbd/

This document assumes that you already know how to install and run GFS.
Please refer to http://sources.redhat.com/cluster/doc/usage.txt for more
information about this.

Getting source
--------------
(Note: If you have already downloaded the cluster source, then this step is
already done.)

- download the cluster tarball
    http://sources.redhat.com/cluster/
- or to download from cvs, see
    http://sources.redhat.com/cluster/

Building and installing
-----------------------
(Note: If you have already built and installed the cluster source, then this
step is already done.)

  cd cluster
  ./configure --kernel_src=/path/to/kernel
  make install

Alternatively, you can patch your kernel with the patches from
cluster/gnbd-kernel/patches/, configure GNBD, and rebuild the kernel and
modules.  You must still run the above commands to build the userspace
components for GNBD.

Configuring and Using GNBD
--------------------------

Basic setup:
------------

Machines:
  2+ GNBD client machines, running GFS
  1  GNBD server machine, not running GFS

  ----------        ----------
  |  GNBD  |        |  GNBD  |
  | client |  ....  | client |
  |   1    |        |   n    |
  ----------        ----------
      |                  |
      -------------------- IP network
                |
           ----------
           |  GNBD  |
           | server |
           ----------

Server Machine:

1. Start the gnbd server daemon
     # gnbd_serv

2. Export the block devices
     # gnbd_export -c -e <gnbd_device_name> -d <device_to_export>
     (for example: # gnbd_export -c -e test -d /dev/sda1 )
   You can run this command multiple times to export multiple block devices.

Client Machines:

1. Mount sysfs, if it is not already mounted
     # mount -t sysfs sysfs /sys

2. Load the gnbd module
     # modprobe gnbd

3. Import the gnbd devices
     # gnbd_import -i <server>
   This imports all of the exported gnbd devices from a server.  The gnbd
   devices will appear as /dev/gnbd/<gnbd_device_name>.

From this point on, continue the setup as if these were regular shared
storage devices.

Complex setups:
---------------
Complex setups involve either

1. having the server machine also running GFS,
2. having multiple server machines serve up the same shared storage, so the
   clients can run dm-multipathing on top of the gnbd devices, or
3. some combination of 1 and 2.

The machine setup for 1. is the same as for the basic setup.  The machine
setup for 2. looks like this:

  ----------        ----------
  |  GNBD  |        |  GNBD  |
  | client |  ....  | client |
  |   1    |        |   n    |
  ----------        ----------
      |                  |
      -------------------- IP network
         |            |
  ----------        ----------
  |  GNBD  |        |  GNBD  |
  | server |  ....  | server |
  |   1    |        |   m    |
  ----------        ----------
      |                  |
      -------------------- storage network
                |
           -----------
           | shared  |
           | storage |
           -----------

Server machines:

1. Start the cluster manager (either cman or gulm) and fencing as with a
   regular GFS setup.

2. Start the gnbd server daemon
     # gnbd_serv

3. Export the block devices
     # gnbd_export -e <gnbd_device_name> -d <device_to_export>
     (for example: # gnbd_export -e test -d /dev/sda1 )
   You can run this command multiple times to export multiple block devices.

   Because the -c (caching) option is not included, the device is exported
   uncached, which allows it to be accessed by other machines, or mounted by
   the same machine, without having an inconsistent cache.

   Note: All gnbd devices MUST have unique names.  Even if you are exporting
   the same underlying device from multiple servers, you must use a unique
   name.  (In this case, it is helpful to include the name of the server in
   the name of the gnbd device, as in the sketch below.)
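   The following is only an illustrative sketch of that naming convention
   for a multipath setup.  The server names (server1, server2), the shared
   device path (/dev/sdb1), and the gnbd names are assumptions, not fixed
   values:

     # on gnbd server "server1": export the shared device, uncached
     server1# gnbd_serv
     server1# gnbd_export -e gfs_server1 -d /dev/sdb1

     # on gnbd server "server2": export the SAME underlying device, but
     # under a different gnbd name that embeds the server's name
     server2# gnbd_serv
     server2# gnbd_export -e gfs_server2 -d /dev/sdb1

     # each client imports from both servers and then sees two devices,
     # /dev/gnbd/gfs_server1 and /dev/gnbd/gfs_server2, which it can stack
     # dm-multipath on top of
     client# gnbd_import -i server1
     client# gnbd_import -i server2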
   To run GFS locally on the server machine, access the device using its
   local name (the <device_to_export> path), not a gnbd import.  You MUST
   NOT use gnbd to import devices on the machine that they are being
   exported from.  This will deadlock your machine.

Client machines:

1. Mount sysfs, if it is not already mounted
     # mount -t sysfs sysfs /sys

2. Load the gnbd module
     # modprobe gnbd

3. Start the cluster manager (either cman or gulm) and fencing as with a
   regular GFS setup.

4. Import the gnbd devices
     # gnbd_import -i <server>
   You can run this command multiple times to import block devices from
   multiple servers.

cluster.conf file:
------------------
In a simple (cached) setup, it is not strictly necessary to include the gnbd
server machine in the nodes section of the cluster.conf file.  However, if
this is not done, the cluster manager will not be able to fence the server
if it becomes unresponsive.  This means that the cluster could hang, without
access to shared storage, until the gnbd server is manually rebooted.

In a complex (uncached) setup, the gnbd server machine must be included in
the nodes section of the cluster.conf file.

In a complex setup with multiple gnbd servers serving the same device from
shared storage for multipathing, the gnbd server machines must be fenced by
an agent that actually stops the machines, instead of simply cutting off
their access to the shared storage.  A network power switch is recommended.

If at all possible, gnbd clients should be fenced with a hardware fencing
agent.  If that is not possible, they can be fenced with the fence_gnbd
agent.

The fence_devices section of the cluster.conf file must contain, alongside
any other fence devices, an entry for the fence_gnbd agent.  In a
non-multipathed setup, this entry names the single gnbd server.  In a
multipathed setup, it must instead carry a servers attribute, which is a
whitespace separated list of the gnbd servers.  For all fence_gnbd setups,
each gnbd client's entry in the nodes section must then reference this
fence_gnbd device in one of its fence methods, as in the sketch below.
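The exact cluster.conf syntax depends on the cluster release you are running,
so the following is only a rough sketch of what these sections might look
like.  The element layout, the attribute names (server, servers, nodename),
and the node names used here are assumptions; consult the fence_gnbd man page
and your cluster.conf documentation for the authoritative syntax.

Non-multipathed setup (sketch):

  <fence_devices>
        <!-- ... other fence_devices ... -->
        <device name="gnbd" agent="fence_gnbd" server="gnbd_server_1"/>
  </fence_devices>

Multipathed setup (sketch):

  <fence_devices>
        <!-- ... other fence_devices ... -->
        <device name="gnbd" agent="fence_gnbd"
                servers="gnbd_server_1 gnbd_server_2"/>
  </fence_devices>

Nodes section (sketch, one such entry per gnbd client):

  <nodes>
        <node name="gnbd_client_1">
                <fence>
                        <method name="single">
                                <device name="gnbd" nodename="gnbd_client_1"/>
                        </method>
                </fence>
        </node>
        <!-- ... remaining gnbd clients ... -->
  </nodes>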