DRBD Manage automates the deployment of DRBD volume instances. It is the glue to higher-level management frameworks such as cloud-management systems, enterprise virtualisation solutions, or storage target software.
Because it has to address a broad range of software layers above it, DRBD Manage needs to be very flexible. In some cases it manages the allocation of logical volumes from a pool of nodes with enough free space; other systems may want to make the placement decisions themselves.
DRBD Manage covers creating, removing, and snapshotting volumes, either individually or as whole consistency groups. It is a pure control-plane component and requires DRBD kernel driver version 9.0 or higher.
A Cinder driver for DRBD SDS ships with OpenStack since "Liberty" (October 2015). Learn more →
A docker volume plugin allows volume management through docker. Learn more →
An image driver for OpenNebula is available and intended for upstream merge. Learn more →
A driver for DRBD SDS ships with Proxmox VE since its 4.0 release. Learn more →
A driver for oVirt/RHEV is in the planning stage.
A driver for OpenAttic is available out-of-tree. Learn more →
A DRBD Manage daemon runs on each participating node. You interact with the daemon using the CLI tool drbdmanage; they communicate via a D-Bus API. The same D-Bus API is used by drivers in upper-layer projects to interact with DRBD Manage.
The control volume holds DRBD Manage's cluster information data. (Do not confuse this with DRBD's meta-data!) The control volume is replicated by DRBD among all control nodes of the DRBD Manage cluster. It holds the complete configuration redundantly and, at the same time, serves as a communication channel between the control nodes.
DRBD Manage can scale to a large number of nodes (hundreds to thousands). In a small cluster of two or three nodes, all nodes are control nodes; a control node holds a replica of the control volume. To keep a huge cluster agile, you should have only a limited number of control nodes, e.g. 4 to 8. All the other nodes are satellite nodes.
A new DRBD Manage cluster is created using the init subcommand. Then you add nodes with the add-node subcommand. If your ssh-agent allows password-less login on the node to be added, drbdmanage does everything behind the scenes; otherwise you need to copy-and-paste one command to the new node.
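A minimal bootstrap session might look like the following sketch. Node names and IP addresses are placeholders, and exact flags may vary between drbdmanage versions; check drbdmanage's built-in help before running these:

```shell
# On the first node: initialize a new DRBD Manage cluster,
# binding the control volume to this node's replication IP
# (10.0.0.1 is a placeholder).
drbdmanage init 10.0.0.1

# Still on the first node: add a second node by name and IP.
# With password-less SSH, drbdmanage joins the peer automatically;
# otherwise it prints a join command to paste on the new node.
drbdmanage add-node bravo 10.0.0.2

# Verify that both nodes are listed and online.
drbdmanage list-nodes
```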
Creating DRBD resources becomes as easy as issuing a single drbdmanage command: new-volume. Watch out for the --deploy option. The more explicit commands (assign-resource, ...) are only necessary if you want to make the placement decisions yourself.
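The two placement styles can be sketched as follows. Volume names, node names, and the size syntax are illustrative; consult your drbdmanage version's help for the exact forms it accepts:

```shell
# Automatic placement: create a 50 GB volume and let DRBD Manage's
# placement plug-in deploy two replicas on suitable nodes.
drbdmanage new-volume web_data 50GB --deploy 2

# Manual placement: create the volume without deploying it, then
# assign the resource to explicitly chosen nodes
# ("alpha" and "bravo" are placeholder node names).
drbdmanage new-volume db_data 50GB
drbdmanage assign-resource db_data alpha
drbdmanage assign-resource db_data bravo
```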
Available back-end implementations: thick LVM, thin LVM, and ZFS on Linux. Snapshots are supported by the latter two. In the context of ZFS, only zvols are used, not the file system itself.
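The back end is selected via a storage plug-in setting in the cluster configuration. The following is a sketch only: the section layout and plug-in class names are assumptions based on older drbdmanage releases, so verify them against your installed version:

```shell
# Open the cluster-wide configuration for editing; it is stored on
# the control volume and distributed to all control nodes.
drbdmanage modify-config

# Then select one storage plug-in, for example (class names are
# assumptions; verify against your drbdmanage version):
#   storage-plugin = drbdmanage.storage.lvm.Lvm               # thick LVM
#   storage-plugin = drbdmanage.storage.lvm_thinlv.LvmThinLv  # thin LVM, snapshot-capable
#   storage-plugin = drbdmanage.storage.zvol2.Zvol2           # ZFS zvols, snapshot-capable
```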
If the placement of replicas is left to DRBD Manage, it delegates the placement decision to a plug-in. The current implementation spreads allocations evenly among the available nodes.
Multiple VGs per node, e.g. HDD and SSD; explicit use of one pool, or of both via bcache or dm-cache
Additional stack drivers as demand grows: Apache CloudStack
Administer DRBD Manage Clusters via the OpenAttic GUI
Support for DRBD's network multi-pathing and RDMA-transports
In one instance per node and some named instances per site model
Back-end storage drivers besides LVM: zvols (Canonical plans ZoL support for Ubuntu 16.04 LTS)