DRBD can solve your long-distance replication needs. In most cases this means that DRBD replicates in asynchronous mode. If the long-distance bandwidth is not sufficient, then depending on your configuration DRBD will either slow down your application or let the secondary fall behind.
The optionally available DRBD Proxy can buffer, and optionally compress, the replication stream before it passes over the long-distance link. DRBD Proxy can be deployed either on the nodes where DRBD runs or on dedicated nodes.
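As a sketch, an asynchronously replicated resource with DRBD Proxy in the path might be configured as follows. The resource name, host names, addresses, and port numbers are illustrative placeholders, not values taken from this text; check the syntax against your DRBD version.

```
resource wan-data {
    protocol A;              # asynchronous replication for the WAN link

    device    /dev/drbd0;
    disk      /dev/vg0/data;
    meta-disk internal;

    proxy {
        memlimit 512M;       # buffer for data not yet sent over the WAN
    }

    on alpha {
        address 127.0.0.1:7789;
        proxy on alpha {
            inside  127.0.0.1:7788;     # DRBD talks to its local proxy
            outside 192.168.23.1:7788;  # proxy-to-proxy WAN endpoint
        }
    }
    on bravo {
        address 127.0.0.1:7789;
        proxy on bravo {
            inside  127.0.0.1:7788;
            outside 192.168.23.2:7788;
        }
    }
}
```

The `inside` addresses carry the short local hop between DRBD and its proxy, while the `outside` addresses form the buffered, optionally compressed long-distance connection.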
Most DR plans involve a manual restart of services in the DR data center. For more ambitious DR plans, automatic recovery can be implemented based on the booth technology.
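As a rough sketch, booth arbitrates cluster tickets between sites; a configuration (typically `/etc/booth/booth.conf`) for two data centers plus an arbitrator could look like the following. All addresses and the ticket name are assumptions for illustration only.

```
transport = UDP
port      = 9929

site       = 192.168.201.100   # primary data center
site       = 192.168.202.100   # DR data center
arbitrator = 192.168.203.100   # tie-breaker at a third location

ticket = "ticket-drbd-services"
    expire = 600               # seconds until an unrenewed ticket lapses
```

Services are then constrained (e.g. in Pacemaker) to run only at the site that currently holds the ticket, so failover to the DR site can happen without manual intervention.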
From multithreaded LZ4 (with its high throughput) to LZMA (which achieves compression ratios of up to 1:50), DRBD Proxy offers a compression variant to match your WAN bandwidth.
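The compression variant is selected in the proxy's `plugin` section. The fragment below is a sketch; the exact plugin names and parameters (such as `contexts` for lzma) vary between DRBD Proxy versions and should be checked against the documentation for your installation.

```
proxy {
    memlimit 512M;
    plugin {
        # Choose one, trading CPU cost against WAN bandwidth:
        # lz4;             # fast, moderate compression
        # zlib level 9;    # middle ground
        lzma contexts 4;   # best ratio, highest CPU cost
    }
}
```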
For data center-to-data center communication over dark fibre, DRBD Proxy can also use hardware compression support, at input rates of 10 GBit/s and above.
For communication lines that are shared with other services, DRBD Proxy allows you to configure the outgoing bandwidth for each resource out of the box.
DRBD Proxy can fulfill your RPO policy, be it time-based (minutes) or data-based (megabytes).
During resynchronization, only data changed in the meantime gets transmitted, and each block only once: data blocks that are written repeatedly (e.g. a filesystem journal) are not sent multiple times.
Because DRBD simply reuses existing Linux kernel block devices, it can easily be set up to store multiple point-in-time versions of your data.
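For example, with DRBD layered on top of LVM, point-in-time versions can be created with ordinary LVM snapshots. The volume group and volume names below are hypothetical.

```
# Create a 10G snapshot of the logical volume backing the DRBD device.
lvcreate --snapshot --size 10G --name data-snap /dev/vg0/data

# Mount it read-only, e.g. to take a backup from it.
mount -o ro /dev/vg0/data-snap /mnt/snap
```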
As the DR-site storage is completely separate from the primary data center's, backups taken there from a mounted snapshot cause no performance penalty at the primary site.
Unexpected load spikes need not break your setup or slow your applications down. Let DRBD "eat" the data and resynchronize it to the DR site at the next opportunity!