NBD and NBDSSL Transport for VMware

The NBD (network block device) transport mode transmits data over the TCP/IP connection between the ESX server and the proxy computer. This mode serves as a fallback when other transport modes are not available. The local area network (LAN) can be the production network or a dedicated backup network.

NBDSSL is similar to NBD mode, but data transfer between the proxy computer and the ESX server is encrypted. Encryption should be used for sensitive information, even within a private network.

To enable incremental backups of virtual disks, Changed Block Tracking (CBT) must be enabled at the time of the first full backup. (CBT is enabled for backups by default.)

Best Practices for NBD and NBDSSL Transport

  • In ESXi 5.0 and later, default NFC timeouts can be set in the VixDiskLib configuration file. If no timeout is specified, older versions of ESX or ESXi hold the corresponding disk open until vpxa or hostd is restarted. As a starting point for NBD and NBDSSL transport, set Accept and Request timeouts to 3 minutes, Read timeouts to 1 minute, Write timeouts to 10 minutes, and timeouts for nfcFssrvr and nfcFssrvrWrite to 0. You might need to lengthen timeouts on slow networks, especially for NBDSSL.
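    As a sketch, the recommended starting values above might appear as the following entries in the VixDiskLib configuration file. The key names below follow the VDDK documentation; values are in milliseconds, and the exact file location depends on the backup product in use.

    ```ini
    # NFC timeouts for NBD/NBDSSL transport (values in milliseconds)
    vixDiskLib.nfc.AcceptTimeoutMs=180000      # Accept: 3 minutes
    vixDiskLib.nfc.RequestTimeoutMs=180000     # Request: 3 minutes
    vixDiskLib.nfc.ReadTimeoutMs=60000         # Read: 1 minute
    vixDiskLib.nfc.WriteTimeoutMs=600000       # Write: 10 minutes
    vixDiskLib.nfcFssrvr.TimeoutMs=0           # no timeout
    vixDiskLib.nfcFssrvrWrite.TimeoutMs=0      # no timeout
    ```

    On slow networks, particularly with NBDSSL, these values may need to be increased.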

  • In vSphere 7.0, you can select the specific VMkernel adapter that is used for NBD traffic. This can be useful for designating a dedicated backup network to improve backup performance.

    To select a specific VMkernel adapter, run the following command:

        esxcli network ip interface tag add -t vSphereBackupNFC -i vmk2

    where vmk2 is the name of the VMkernel adapter to use for NBD traffic.
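    For example, assuming vmk2 is the adapter attached to the backup network, you can tag the interface and then confirm that the tag was applied. These commands are run in an ESXi shell:

    ```shell
    # Tag vmk2 so that it carries vSphereBackupNFC (NBD backup) traffic
    esxcli network ip interface tag add -t vSphereBackupNFC -i vmk2

    # List the tags on vmk2 to verify that vSphereBackupNFC is present
    esxcli network ip interface tag get -i vmk2
    ```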

A VMDK can fail to open if too many NFC connections are made to an ESX host. For more information, see the VMware knowledge base article "VDDK library returns the error: Failed to open NBD extent, NBD_ERR_GENERIC" (1022543).
