Distroname and release: Debian Squeeze
DRBD
Distributed Replicated Block Device (DRBD) makes it possible to replicate a disk/block device across nodes. The DRBD device can only be mounted on one node at a time, unless a cluster filesystem is used, which makes it ideal to put 'below' an HA NFS server.
DRBD 8.3.13 is used in this setup.
For real HA, please consider Pacemaker, Corosync or similar, since this simple setup will require manual intervention if something goes down.
Two identical disks are needed, one on each node.
In this example we will be using /dev/sdb1, which is ~1 GB.
If the partition on the disk has not been created yet, please create it. My partition on both nodes looks like this:
fdisk -l /dev/sdb

Disk /dev/sdb: 1073 MB, 1073741824 bytes
139 heads, 8 sectors/track, 1885 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x958c5f64

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048     2097151     1047552   83  Linux
On both nodes, install DRBD:
apt-get install drbd8-utils

On both nodes, load the drbd kernel module. Yes, this is needed, since it is not loaded automatically when installing:
modprobe drbd

On both nodes, create a resource file for the sdb1 disk.
Create /etc/drbd.d/r0.res with the following content:

/etc/drbd.d/r0.res

resource r0 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    meta-disk internal;

    on testserver01 {
        address 10.0.2.1:7788;
    }

    on testserver02 {
        address 10.0.2.100:7788;
    }

    syncer {
        rate 7M;
    }
}

Create/edit /etc/drbd.d/global_common.conf with the following content:

/etc/drbd.d/global_common.conf

global {
    usage-count no;
    # minor-count dialog-refresh disable-ip-verification
}

common {
    protocol C;

    handlers {
        # The following 3 handlers were disabled due to #576511.
        # Please check the DRBD manual and enable them, if they make sense in your setup.
        # pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        # pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        # local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
        # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
        # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
        # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
    }

    startup {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
    }

    disk {
        # on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
        # no-disk-drain no-md-flushes max-bio-bvecs
    }

    net {
        cram-hmac-alg sha1;
        shared-secret "supermand";
        # sndbuf-size rcvbuf-size timeout connect-int ping-int ping-timeout max-buffers
        # max-epoch-size ko-count allow-two-primaries cram-hmac-alg shared-secret
        # after-sb-0pri after-sb-1pri after-sb-2pri data-integrity-alg no-tcp-cork
    }

    syncer {
        # rate after al-extents use-rle cpu-mask verify-alg csums-alg
    }
}
Configuration definitions
Please refer to the man pages for more info.

- rate - limits the (re)synchronization speed
- protocol - defines when a write is considered completed (C - completed when both the local and the remote write have finished)
- cram-hmac-alg [alg] - enables peer authentication and sets the algorithm used; any digest listed in /proc/crypto can be used
- shared-secret - the shared secret used in peer authentication
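As a rule of thumb from the DRBD user's guide, the syncer rate should be set to roughly 30% of the bandwidth available for replication, so a resynchronization does not starve the application and the live replication stream. A quick back-of-the-envelope sketch; the gigabit link throughput is an assumption for illustration:

```shell
# Syncer rate rule of thumb: ~30% of the usable replication bandwidth.
# Assumption: a dedicated gigabit link with ~110 MB/s effective throughput.
LINK_MBYTES=110
RATE=$((LINK_MBYTES * 30 / 100))
echo "syncer { rate ${RATE}M; }"   # prints: syncer { rate 33M; }
```

The 7M used in r0.res above is deliberately conservative and suits a slower or shared link.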
On both nodes, create the device metadata:

drbdadm create-md r0

Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.

On both nodes, start the DRBD service:
service drbd start

On the primary node only, bring up the resource:
drbdadm up r0

On the primary node only, promote the node and start the initial sync (this overwrites the data on the peer):

drbdadm -- --overwrite-data-of-peer primary r0

On the primary node, create a filesystem on the DRBD resource:

mkfs.ext4 /dev/drbd0

Now it should be possible to mount the DRBD resource. If the mount point has not been created yet, create it first.
mkdir /mnt/drdb
mount /dev/drbd0 /mnt/drdb/

Check if it is mounted correctly:
df -h | grep drbd
/dev/drbd0            1007M   18M  939M   2% /mnt/drdb

View status and sync progress:
drbdadm status

<drbd-status version="8.3.13" api="88">
  <resources config_file="/etc/drbd.conf">
    <resource minor="0" name="r0" cs="Connected" ro1="Primary" ro2="Secondary" ds1="UpToDate" ds2="UpToDate" />
  </resources>
</drbd-status>
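The same states can be read in compact form from /proc/drbd, which is handy for quick checks and monitoring scripts. A small awk sketch that extracts the connection state (cs:), roles (ro:) and disk states (ds:); a sample 8.3-style line is piped in here, on a live node point awk at /proc/drbd instead:

```shell
# Pull the cs:/ro:/ds: fields out of an 8.3-style /proc/drbd status line.
# On a node, run instead:
#   awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^(cs|ro|ds):/) print $i }' /proc/drbd
echo ' 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----' |
awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^(cs|ro|ds):/) print $i }'
# prints:
#   cs:Connected
#   ro:Primary/Secondary
#   ds:UpToDate/UpToDate
```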
Caveats
If both nodes reboot, they will most likely not be able to mount the resource automatically on one host. If this should happen automatically, Corosync, Heartbeat or similar should be used; manually it is done this way.
Secondary node
umount /dev/drbd0
drbdadm secondary r0

Primary node
drbdadm primary r0
mount /dev/drbd0 /mnt/drdb/
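The manual failover steps can also be wrapped in a small helper; a sketch under the assumptions used throughout this article (resource r0, device /dev/drbd0, mount point /mnt/drdb). With DRY_RUN=1, the default here, it only prints the commands, so the sequence can be reviewed before running it for real:

```shell
#!/bin/sh
# Manual DRBD failover sketch. DRY_RUN=1 (default) prints the commands
# instead of executing them; set DRY_RUN=0 on the real nodes.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

demote() {   # run on the node giving up the primary role
    run umount /dev/drbd0
    run drbdadm secondary r0
}

promote() {  # run on the node taking over
    run drbdadm primary r0
    run mount /dev/drbd0 /mnt/drdb/
}

demote
promote
```

Remember that demote must finish on the old primary before promote is run on the new one, otherwise DRBD will refuse the promotion (unless allow-two-primaries is set, which it is not here).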