Apart from the DRBD software for replicating data between two nodes, a cluster manager is needed to create a working high availability cluster. The most prominent one is Pacemaker.
DRBD ships with seamless Pacemaker integration, allowing Pacemaker to start, stop, promote, and demote DRBD resources. With the fence-peer mechanism, DRBD uses Pacemaker's CIB to help avoid split-brain situations.
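The fence-peer mechanism is wired up in the DRBD resource configuration. A minimal sketch, assuming a DRBD 8.4-style configuration and the fencing helper scripts that ship with DRBD (the exact script names and section placement vary between DRBD versions):

```
resource r0 {
  disk {
    # Fence the peer's disk state via the Pacemaker CIB
    # before promoting with a disconnected peer.
    fencing resource-only;
  }
  handlers {
    # Scripts shipped with DRBD that place/remove a location
    # constraint in Pacemaker's CIB.
    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
  }
}
```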
Last, but not least, it should be noted that using the Pacemaker cluster manager is not a requirement. Starting with version 9.0, DRBD has an auto-promote feature that allows you to use other cluster managers, as long as they are able to mount a file system on a shared storage device.
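With auto-promote enabled, the first node to open the DRBD device for writing (for example, by mounting the file system) is promoted to primary automatically. A minimal sketch of the relevant option, assuming a DRBD 9 resource named r0:

```
resource r0 {
  options {
    # Promote to primary implicitly on open-for-write,
    # demote again when the last opener closes the device.
    auto-promote yes;
  }
}
```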
After an outage of a node, DRBD® automatically resynchronizes any out-of-date data. It does this in the background, without requiring any intervention or interfering with the services running on top of it.
Restoring service after a temporary failure of the replication network is a common example of the automatic recovery mechanism just described: DRBD reestablishes the connection and performs the necessary resynchronization automatically.
DRBD can mask the failure of a disk on the active node, i.e., the service can continue to run there, without needing to migrate the service. If the disk can be replaced without shutting down the machine, it can be reattached to DRBD. DRBD resynchronizes the data as needed to the replacement disk.
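Reattaching a replacement disk boils down to a few drbdadm commands. A hedged sketch, assuming a resource named r0 whose backing disk was hot-swapped (the new disk needs fresh DRBD metadata before it can be attached):

```
# Detach the failed backing disk from the DRBD resource
drbdadm detach r0

# ... physically replace the disk, recreate the backing device ...

# Initialize DRBD metadata on the new disk, then reattach it;
# DRBD resynchronizes the data from the peer as needed.
drbdadm create-md r0
drbdadm attach r0
```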
Split brain is a situation where, due to the (temporary) failure of all network links between cluster nodes, both nodes switch to the primary role while disconnected. DRBD supports various automatic and manual recovery options in the event of split brain.
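The automatic recovery policies are configured per resource, keyed on how many nodes were primary when the split brain was detected. A sketch of a conservative policy set, assuming a resource named r0 (these are standard drbd.conf net options):

```
resource r0 {
  net {
    # No node was primary: take the changes of the node
    # that wrote nothing, if there is one.
    after-sb-0pri discard-zero-changes;
    # One node was primary: discard the secondary's changes.
    after-sb-1pri discard-secondary;
    # Both were primary: refuse automatic recovery,
    # leave the decision to the administrator.
    after-sb-2pri disconnect;
  }
}
```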
The kernel driver has been part of the upstream Linux kernel since 2009
DRBD obeys write fidelity for volumes with potentially different characteristics
Continued operation after disk failure: DRBD can transparently use other nodes in the cluster as network-attached storage
DRBD supports TCP/IP by default, with RDMA available as a high-performance alternative
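Switching the replication transport is a per-resource setting. A minimal sketch, assuming DRBD 9 with the RDMA transport module installed and a resource named r0:

```
resource r0 {
  net {
    # Default is the TCP transport; select RDMA instead
    # for kernel-bypass, low-latency replication.
    transport rdma;
  }
}
```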
MySQL, PostgreSQL, ...
NFS, CIFS, ...
iSCSI, FC, ...
For each service you will have a dedicated DRBD resource. That gives you a block device (
On top of that will be a file system (e.g. XFS or ext4). This file system gets mounted by Pacemaker (e.g. to
Then, you configure your service (e.g. mysql instance) to put its data there.
Finally, you hand over starting and stopping of the service to Pacemaker.
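The steps above can be sketched as a Pacemaker configuration in crm shell syntax. This is an illustration, not a complete setup: the resource name r0, the mount point /var/lib/mysql, and the primitive names are assumptions.

```
# DRBD resource, managed by the ocf:linbit:drbd agent
primitive p_drbd_r0 ocf:linbit:drbd \
    params drbd_resource=r0 \
    op monitor interval=29s role=Master \
    op monitor interval=31s role=Slave

# Promotable (master/slave) clone: one primary, two copies
ms ms_drbd_r0 p_drbd_r0 \
    meta master-max=1 clone-max=2 notify=true

# File system on top of the DRBD device
primitive p_fs_r0 ocf:heartbeat:Filesystem \
    params device=/dev/drbd0 directory=/var/lib/mysql fstype=xfs

# The service itself, grouped with its file system
primitive p_mysql ocf:heartbeat:mysql
group g_mysql p_fs_r0 p_mysql

# Run the service where DRBD is primary, and only after promotion
colocation c_mysql_on_drbd inf: g_mysql ms_drbd_r0:Master
order o_drbd_before_mysql inf: ms_drbd_r0:promote g_mysql:start
```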
Enables OpenNebula users to base clouds on DRBD9/DRBD Manage
Status: Release Candidate 1
DRBD for MS Windows
Release planned with Version 9.0.2
PeerDirect allows a write request to be sent from an InfiniBand HCA directly to an NVMe device
Support Power8, enabling access to LINBIT's products
Released – with IBM in Germany – November 12
Enables OpenStack users to base their clouds on DRBD9/DRBD Manage
Released - OpenStack Summit Tokyo - October 16
Multi-path support: Aggregates bandwidth of configured paths; increases replication link availability
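In DRBD 9, multiple paths are declared per connection; DRBD uses them for bandwidth aggregation and failover. A sketch assuming two nodes (alpha, bravo) with two replication NICs each (hostnames and addresses are hypothetical):

```
resource r0 {
  connection {
    # First replication network
    path {
      host alpha address 192.168.41.1:7000;
      host bravo address 192.168.41.2:7000;
    }
    # Second, independent replication network
    path {
      host alpha address 192.168.42.1:7000;
      host bravo address 192.168.42.2:7000;
    }
  }
}
```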
Release planned for November 2016