DRBD Manage

Automation

DRBD Manage automates the deployment of DRBD volume instances. It is the glue to higher-level management frameworks such as cloud-management systems, enterprise virtualization solutions, or storage target software.

Flexibility

Because it has to serve a broad range of software layers above it, DRBD Manage needs to be very flexible. In some setups it has to allocate logical volumes from a pool of nodes with enough free space; other systems prefer to make the placement decisions themselves.

Functionality

DRBD Manage covers the creation, removal, and snapshotting of volumes, either as single volumes or as whole consistency groups. It is a pure control-plane component and requires a DRBD kernel driver of version 9.0 or higher.

Supported Projects

OpenStack

A Cinder driver for DRBD SDS has shipped with OpenStack since the "Liberty" release (October 2015). Learn more →

Docker

A Docker volume plugin allows volume management through Docker. Learn more →

OpenNebula

An image driver for OpenNebula is available and intended for upstream merge. Learn more →

Proxmox VE

A driver for DRBD SDS has shipped with Proxmox VE since its 4.0 release. Learn more →

oVirt/RHEV

A driver for oVirt/RHEV is in the planning stage.

OpenAttic

A driver for OpenAttic is available out-of-tree. Learn more →

Software Architecture

Daemon

A DRBD Manage daemon runs on each participating node. You interact with the daemon using the CLI tool drbdmanage; the two communicate via a D-Bus API. The same D-Bus API is also used by the drivers of upper-layer projects to interact with DRBD Manage.
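Any drbdmanage subcommand issued on a node talks to that node's daemon over this D-Bus API; a quick way to verify that the local daemon is reachable is a read-only query such as:

drbdmanage list-nodes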

Control Volume

The control volume holds DRBD Manage's cluster information (not to be confused with DRBD's own metadata). It is replicated by DRBD among all control nodes of the DRBD Manage cluster, and it serves both to hold the complete configuration redundantly and as a communication channel between the control nodes.
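On the control nodes the control volume shows up as an ordinary DRBD resource named .drbdctrl (with two volumes), so it can be inspected with the usual DRBD tooling, for example:

drbdadm status .drbdctrl

The .drbdctrl resource and its two volumes are also visible in the console output further below.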

Control Nodes

DRBD Manage can scale to a large number of nodes (hundreds to thousands). In a small cluster of two or three nodes, all nodes are control nodes; a control node holds a replica of the control volume. To keep a huge cluster agile, you should have only a limited number of control nodes, e.g. 4 to 8. All the other nodes are satellite nodes.

For the SysAdmin

Node management

A new DRBD Manage cluster is created with the init subcommand; nodes are then added with add-node. If your ssh-agent allows password-less login to the node being added, drbdmanage does everything behind the scenes; otherwise you need to copy and paste a single command on the new node.
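A minimal sketch of bootstrapping the three-node cluster shown in the console output below (the IP addresses are placeholders for illustration):

# on the first node (here: criminy), create the cluster, passing the
# IP address the control volume should use
drbdmanage init 10.43.70.2

# register the remaining nodes by name and IP; drbdmanage logs in
# via ssh and runs the join command on the new node for you
drbdmanage add-node lilith 10.43.70.3
drbdmanage add-node dionysus 10.43.70.4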

Volume management

Creating DRBD resources becomes as easy as issuing a single drbdmanage command: new-volume. Watch out for the --deploy option. The more explicit commands new-resource, assign-resource, ... are only necessary if you want to make the placement decisions yourself.
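A minimal example matching the output further below (the volume size is a placeholder; adjust it to your needs):

# create a resource "database" with one volume and let DRBD Manage's
# placement plug-in deploy it on 3 nodes of its choosing
drbdmanage new-volume database 500MB --deploy 3

Without --deploy, you place the replicas yourself with the explicit assign-resource command.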

Initializing the cluster (drbdmanage init):

You are going to initialize a new drbdmanage cluster.
CAUTION! Note that:
  * Any previous drbdmanage cluster information may be removed
  * Any remaining resources managed by a previous drbdmanage installation that still exist on this system will no longer be managed by drbdmanage
Confirm:
One or more specified logical volume(s) not found.
One or more specified logical volume(s) not found.
Logical volume ".drbdctrl_0" created
Logical volume ".drbdctrl_1" created
...
empty drbdmanage control volume initialized.
empty drbdmanage control volume initialized.
Operation completed successfully

Adding node lilith (drbdmanage add-node):

Operation completed successfully
Operation completed successfully
Executing join command using ssh.
IMPORTANT: The output you see comes from lilith
IMPORTANT: Your input is executed on lilith
You are going to join an existing drbdmanage cluster.
CAUTION! Note that:
  * Any previous drbdmanage cluster information may be removed
  * Any remaining resources managed by a previous drbdmanage installation that still exist on this system will no longer be managed by drbdmanage
Confirm:
One or more specified logical volume(s) not found.
One or more specified logical volume(s) not found.
Logical volume ".drbdctrl_0" created
Logical volume ".drbdctrl_1" created
...
Writing meta data...
New drbd meta data block successfully created.
Operation completed successfully

Adding node dionysus (drbdmanage add-node):

Operation completed successfully
Operation completed successfully
Executing join command using ssh.
IMPORTANT: The output you see comes from dionysus
IMPORTANT: Your input is executed on dionysus
You are going to join an existing drbdmanage cluster.
CAUTION! Note that:
  * Any previous drbdmanage cluster information may be removed
  * Any remaining resources managed by a previous drbdmanage installation that still exist on this system will no longer be managed by drbdmanage
Confirm:
One or more specified logical volume(s) not found.
One or more specified logical volume(s) not found.
Logical volume ".drbdctrl_0" created
Logical volume ".drbdctrl_1" created
...
Writing meta data...
New drbd meta data block successfully created.
Operation completed successfully

Node overview (drbdmanage list-nodes):

╭────────────────────────────────────────────╮
┊ Name     ┊ Pool Size ┊ Pool Free ┊ ┊ State ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ criminy  ┊       508 ┊       500 ┊ ┊ ok    ┊
┊ dionysus ┊       508 ┊       500 ┊ ┊ ok    ┊
┊ lilith   ┊       508 ┊       500 ┊ ┊ ok    ┊
╰────────────────────────────────────────────╯

Creating a volume with --deploy and listing the assignments (drbdmanage list-assignments):

Operation completed successfully
Operation completed successfully
╭────────────────────────────────────────╮
┊ Node     ┊ Resource ┊ Vol ID ┊ ┊ State ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ criminy  ┊ database ┊      * ┊ ┊ ok    ┊
┊ dionysus ┊ database ┊      * ┊ ┊ ok    ┊
┊ lilith   ┊ database ┊      * ┊ ┊ ok    ┊
╰────────────────────────────────────────╯

DRBD's view of the result (drbdadm status):

.drbdctrl role:Secondary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
  dionysus role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate
  lilith role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate

database role:Secondary
  disk:UpToDate
  dionysus role:Secondary
    replication:SyncSource peer-disk:Inconsistent done:19.41
  lilith role:Secondary
    replication:SyncSource peer-disk:Inconsistent done:16.67

Extensibility

Backend storage

Available implementations: thick LVM, thin LVM, and ZFS on Linux. Snapshots are supported by the latter two. In the context of ZFS, only zvols are used, not the file system itself.
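The back end is selected per node in DRBD Manage's configuration. A sketch of switching to the thin-LVM plug-in, assuming the modify-config subcommand and the plug-in module path of current drbdmanage releases (check your version's documentation for the exact names):

# drbdmanage modify-config opens the configuration in an editor;
# in the node-local section, point storage-plugin at the thin-LVM back end
[LOCAL]
storage-plugin = drbdmanage.storage.lvm_thinlv.LvmThinLv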

Deployment

If the placement of replicas is left to DRBD Manage, it delegates the placement decision to a plug-in. The current implementation spreads allocations evenly among the available nodes.

DRBD Manage Road Map

2016

Multi-tiered Storage

Multiple VGs per node, e.g. HDD and SSD; explicit use of one pool or both via bcache or dm-cache

2016

Additional Stack Drivers

Additional stack drivers as demand grows: Apache CloudStack

2016

OpenAttic

Administer DRBD Manage clusters via the OpenAttic GUI

2016

Network Multipathing

Support for DRBD's network multipathing and RDMA transports

2016

DRBD Proxy

DRBD Proxy support, both in a one-instance-per-node model and in a named-instances-per-site model

2016

Backend Storage

Back-end storage drivers besides LVM: zvols (Canonical plans ZoL for Ubuntu 16.04 LTS)