DRBD Users Guide 8.0-8.3

Configuring I/O error handling strategies

DRBD's strategy for handling lower-level I/O errors is determined by the on-io-error option, included in the resource disk configuration in /etc/drbd.conf:

resource resource {
  disk {
    on-io-error strategy;
    ...
  }
  ...
}

You may, of course, set this in the common section too, if you want to define a global I/O error handling policy for all resources.
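For example, a cluster-wide policy could be declared like this (a sketch only; detach is shown as the chosen strategy):

```
common {
  disk {
    on-io-error detach;
    ...
  }
}
```

Individual resources may still override this setting in their own disk sections.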

strategy may be one of the following options:

  • detach: This is the recommended option. On the occurrence of a lower-level I/O error, the node drops its backing device and continues in diskless mode.

  • pass_on: This causes DRBD to report the I/O error to the upper layers. On the primary node, it is reported to the mounted file system. On the secondary node, it is ignored (because the secondary has no upper layer to report to). This is the default for historical reasons, but it is no longer recommended for new installations; use detach instead, unless you have a very compelling reason to pass errors on.

  • call-local-io-error: Invokes the command defined as the local I/O error handler. This requires that a corresponding local-io-error command invocation is defined in the resource's handlers section. It is entirely left to the administrator's discretion to implement I/O error handling using the command (or script) invoked by local-io-error.

    Note

    Early DRBD versions (prior to 8.0) included another option, panic, which would forcibly remove the node from the cluster by way of a kernel panic, whenever a local I/O error occurred. While that option is no longer available, the same behavior may be mimicked via the local-io-error / call-local-io-error interface. You should do so only if you fully understand the implications of such behavior.
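As a sketch, a resource using call-local-io-error might combine the disk and handlers sections like this. The handler command shown here (triggering a kernel panic via sysrq) is purely illustrative of mimicking the old panic behavior, not a recommendation:

```
resource resource {
  handlers {
    local-io-error "echo c > /proc/sysrq-trigger";
    ...
  }
  disk {
    on-io-error call-local-io-error;
    ...
  }
  ...
}
```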

You may reconfigure a running resource's I/O error handling strategy by following this process:

  • Edit the resource configuration in /etc/drbd.conf.

  • Copy the configuration to the peer node.

  • Issue drbdadm adjust resource on both nodes.
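Expressed as commands, and assuming a hypothetical resource named r0 and a peer host named peer, the process might look like this:

```
# 1. Edit the disk section of the resource in /etc/drbd.conf
vi /etc/drbd.conf

# 2. Copy the configuration to the peer node
scp /etc/drbd.conf peer:/etc/drbd.conf

# 3. Re-apply the configuration (run on both nodes)
drbdadm adjust r0
```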

Note

DRBD versions prior to 8.3.1 will incur a full resync after running drbdadm adjust on a node that is in the Primary role. On such systems, the affected resource must be demoted prior to running drbdadm adjust after its disk configuration section has been changed.
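On such pre-8.3.1 systems, the sequence on the Primary node might be sketched as follows (assuming a hypothetical resource r0 backed by /dev/drbd0; demotion requires that the device is not in use):

```
umount /dev/drbd0      # ensure the device is no longer in use
drbdadm secondary r0   # demote the resource before adjusting
drbdadm adjust r0      # apply the changed disk configuration
drbdadm primary r0     # promote the resource again
```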