DRBD RDMA Transport

InfiniBand, iWARP, RoCE

In the HPC world, InfiniBand became the most prominent interconnect solution around 2014. It is proven technology, and with iWARP and RoCE it bridges into the Ethernet world as well.


It brings the right properties for a storage networking solution: bandwidth ahead of the curve, lower latency than conventional Ethernet, and an advanced API, RDMA/verbs.


The DRBD RDMA transport allows you to take advantage of RDMA technology for mirroring data in your DRBD setup. With it, DRBD supports multiple paths for bandwidth aggregation and link failover.
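A minimal sketch of how this might look in a DRBD 9 resource configuration, assuming hypothetical host names (alpha, bravo) and addresses; the RDMA transport is selected in the net section, and multiple paths are listed per connection:

```
resource r0 {
    net {
        transport rdma;   # use the RDMA transport instead of TCP
    }
    connection {
        # first path, e.g. over one InfiniBand/RoCE port
        path {
            host alpha address 192.168.41.1:7000;
            host bravo address 192.168.41.2:7000;
        }
        # second path for bandwidth aggregation and failover
        path {
            host alpha address 192.168.42.1:7000;
            host bravo address 192.168.42.2:7000;
        }
    }
    # device, disk, and meta-disk definitions omitted for brevity
}
```

With two paths defined, the transport can spread traffic across both links and continue mirroring over the remaining path if one link fails.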

DRBD Proxy

WAN Links

Long-distance links often exhibit varying bandwidth, because other traffic shares parts of the path. They also typically have higher latency than LANs.

Varying Demand

Whether caused by peaks in DRBD's write load or by a temporary drop in the available link bandwidth, it can happen that the link bandwidth falls below the bandwidth necessary to mirror the data stream.

Buffering and Compression

DRBD Proxy's main task is to mitigate these issues; without it, DRBD would slow down the writing application by delaying I/O completion events. DRBD Proxy avoids that by buffering the data.
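A sketch of what a proxy section in a DRBD resource configuration could look like; the buffer size and the compression plugin shown here are illustrative values, not recommendations:

```
resource r0 {
    proxy {
        # memory available for buffering writes during
        # bandwidth shortfalls or write-load peaks
        memlimit 512M;

        # optionally compress the replication stream to
        # squeeze more data through a narrow WAN link
        plugin {
            zlib level 9;
        }
    }
    # connection, device, and disk definitions omitted for brevity
}
```

The buffer absorbs short-term mismatches between the write rate and the link bandwidth, while compression reduces the sustained bandwidth the WAN link must provide.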