Nested LVM configuration with DRBD

It is possible, if slightly advanced, to both use Logical Volumes as backing devices for DRBD and at the same time use a DRBD device itself as a Physical Volume. To provide an example, consider the following configuration: two partitions, /dev/sda1 and /dev/sdb1, are used as Physical Volumes in a Volume Group named local. A 10-GiB Logical Volume in this VG, named r0, serves as the backing device for the DRBD resource r0, which corresponds to the device /dev/drbd0. That DRBD device, in turn, is the sole Physical Volume of another Volume Group named replicated, which contains two further Logical Volumes, foo (4 GiB) and bar (6 GiB).

In order to enable this configuration, follow these steps:

  1. Set an appropriate filter option in your /etc/lvm/lvm.conf:

    filter = ["a|sd.*|", "a|drbd.*|", "r|.*|"]

    This filter expression accepts PV signatures found on any SCSI and DRBD devices, while rejecting (ignoring) all others.

    After modifying the lvm.conf file, you must run the vgscan command so LVM discards its configuration cache and re-scans devices for PV signatures.
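    For example (run as root):

    vgscan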

  2. Disable the LVM cache by setting the following, also in /etc/lvm/lvm.conf:

    write_cache_state = 0

    After disabling the LVM cache, make sure you remove any stale cache entries by deleting /etc/lvm/cache/.cache.
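    For example (the exact cache file location may vary with your distribution's LVM build):

    rm -f /etc/lvm/cache/.cache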

  3. Now, you may initialize your two SCSI partitions as PVs:

    pvcreate /dev/sda1
      Physical volume "/dev/sda1" successfully created
    pvcreate /dev/sdb1
      Physical volume "/dev/sdb1" successfully created

  4. The next step is creating your low-level VG named local, consisting of the two PVs you just initialized:

    vgcreate local /dev/sda1 /dev/sdb1
      Volume group "local" successfully created

  5. Now you may create your Logical Volume to be used as DRBD's backing device:

    lvcreate --name r0 --size 10G local
      Logical volume "r0" created

  6. Repeat all steps up to this point on the peer node.

  7. Then, edit your /etc/drbd.conf to create a new resource named r0:

    resource r0 {
      device /dev/drbd0;
      disk /dev/local/r0;
      meta-disk internal;
      on <host 1> {
        address <address>:<port>;
      }
      on <host 2> {
        address <address>:<port>;
      }
    }

    After you have created your new resource configuration, be sure to copy your drbd.conf contents to the peer node.
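    For illustration only, a filled-in configuration might look like the following sketch; the hostnames alice and bob and the addresses shown are hypothetical and must be replaced with your nodes' actual names (matching uname -n) and replication addresses:

    resource r0 {
      device /dev/drbd0;
      disk /dev/local/r0;
      meta-disk internal;
      on alice {                  # hypothetical hostname
        address 10.1.1.31:7789;   # hypothetical address and port
      }
      on bob {                    # hypothetical hostname
        address 10.1.1.32:7789;   # hypothetical address and port
      }
    }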

  8. After this, initialize your resource as described in the section called “Enabling your resource for the first time” (on both nodes).
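    As a reminder, on most DRBD versions this amounts to roughly the following commands, run on both nodes (the referenced section is authoritative for your version):

    drbdadm create-md r0
    drbdadm up r0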

  9. Then, promote your resource (on one node):

    drbdadm primary r0

  10. Now, on the node where you just promoted your resource, initialize your DRBD device as a new Physical Volume:

    pvcreate /dev/drbd0
      Physical volume "/dev/drbd0" successfully created

  11. Create your VG named replicated, using the PV you just initialized, on the same node:

    vgcreate replicated /dev/drbd0
      Volume group "replicated" successfully created

  12. Finally, create your new Logical Volumes within this newly-created VG:

    lvcreate --name foo --size 4G replicated
      Logical volume "foo" created
    lvcreate --name bar --size 6G replicated
      Logical volume "bar" created

The Logical Volumes foo and bar will now be available as /dev/replicated/foo and /dev/replicated/bar on the local node.
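To confirm, you can, for example, list the Logical Volumes in the new VG:

lvs replicated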

To make them available on the peer node, first issue the following sequence of commands on the local node:

vgchange -a n replicated
  0 logical volume(s) in volume group "replicated" now active
drbdadm secondary r0

Then, issue these commands on the peer node:

drbdadm primary r0
vgchange -a y replicated
  2 logical volume(s) in volume group "replicated" now active

After this, the block devices /dev/replicated/foo and /dev/replicated/bar will be available on the peer node.

Of course, the process of transferring volume groups between peers and making the corresponding logical volumes available can be automated. The Heartbeat LVM resource agent is designed for exactly that purpose.
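For illustration, the manual handover shown above can be wrapped in a small script. The following is only a sketch: the peer hostname bob and the use of SSH are assumptions made for this example and are not part of DRBD or the Heartbeat resource agent.

#!/bin/bash
# Sketch: hand the replicated VG over to the peer node.
# Assumes the resource (r0) and VG (replicated) used in this example,
# passwordless SSH to the hypothetical peer host "bob", and that no
# process on the local node is still using the replicated LVs.
set -e

RESOURCE=r0
VG=replicated
PEER=bob   # hypothetical peer hostname

vgchange -a n "$VG"             # deactivate all LVs in the VG locally
drbdadm secondary "$RESOURCE"   # demote the local side of the resource

# Promote the resource and activate the VG on the peer
ssh "root@$PEER" "drbdadm primary $RESOURCE && vgchange -a y $VG"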