Set up and configure DPUs on Dell Servers for VMware's vSphere Distributed Services Engine

DPUs (Data Processing Units), formerly known as SmartNICs, have been on the horizon since 2022. With the completion of Project Monterey, DPUs have been productized as the vSphere Distributed Services Engine in vSphere 8 and NSX 4. They can also accelerate VMware Horizon, mainly by increasing the number of virtual desktops that can be hosted per ESXi server.

This post covers the first-time setup required to use a DPU for ESXi / vSphere Distributed Services Engine in conjunction with VMware NSX.

General information

Currently, DPUs from AMD Pensando and NVIDIA BlueField are available for rack servers.
At Dell, the following models support DPUs:

  • Dell PowerEdge R750 (Intel IceLake)
  • Dell PowerEdge R760 (Intel SapphireRapids)
  • Dell PowerEdge R7615 (AMD Zen4 Genoa single socket)
  • Dell PowerEdge R7625 (AMD Zen4 Genoa dual socket)
  • The 1U variants of the models above (Dell PowerEdge R6xx)

All DPUs at Dell come with two components:

1. A network daughter card

a. This card is placed in the PCI-E x1 LOM slot near the OCP slot.

b. The daughter card is connected via cables to the DPU and the iDRAC.

2. The DPU itself, utilizing PCI-E Gen4

a. The DPU can only be connected to specific PCI-E slots on the server's risers.

b. Slot 2 and Slot 7 have been tested as working. If the slot is not DPU-capable, the server will show an error message at POST.

BIOS and iDRAC specific configuration

The DPU must be started and synchronized at POST so that the DPU's own ARM-based ESXi can boot at the same time as the x86-based ESXi.

This requires the following configuration:

1. BIOS: Enable SR-IOV and make sure that UEFI boot mode is enabled
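Both BIOS settings can also be applied via RACADM instead of the BIOS setup menu. A hedged sketch, assuming iDRAC9 attribute names (verify them on your system with `racadm get BIOS` before applying):

```shell
# Enable global SR-IOV support
# (attribute name assumed; verify with: racadm get BIOS.IntegratedDevices)
racadm set BIOS.IntegratedDevices.SriovGlobalEnable Enabled
# Make sure the server boots in UEFI mode
racadm set BIOS.BiosBootSettings.BootMode Uefi
# Stage the pending BIOS changes as a job and power-cycle to apply them
racadm jobqueue create BIOS.Setup.1-1 -r pwrcycle -s TIME_NOW
```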

2. Configure OS to iDRAC passthrough.
This is required so that the DPU can be administered and configured from the x86 OS level. Set the State to Enabled and configure the USB NIC IP address.
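Assuming an iDRAC9 with current firmware, the passthrough setting can also be applied from RACADM. The `idrac.os-bmc` attribute names below are an assumption; verify them with `racadm get idrac.os-bmc` on your system:

```shell
# Enable OS-to-iDRAC passthrough over the internal USB NIC
# (attribute names assumed for iDRAC9; verify with: racadm get idrac.os-bmc)
racadm set idrac.os-bmc.AdminState Enabled
# Set the USB NIC IP address to the value chosen for your environment
racadm set idrac.os-bmc.UsbNicIpAddress <ip-address>
```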



3. Enable the PCI-E slot for the DPU. Connect via SSH to the Dell iDRAC SSH console (RACADM):

a. Enable DPU boot synchronization for the PCI-E slot the DPU is connected to:

set system.pcislot.<number>.DPUBootSynchronization 1

b. Allow the DPU as a trusted device in the respective slot:

set system.pcislot.<number>.DPUTrust 1
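As a concrete example, assuming the DPU sits in PCI-E slot 2, the two commands above would look like this (adjust the slot number to your riser layout; depending on the shell, each command may need the `racadm` prefix). The slot group can be read back to verify the change:

```shell
# DPU installed in PCI-E slot 2 -- adjust the slot number to your system
racadm set system.pcislot.2.DPUBootSynchronization 1
racadm set system.pcislot.2.DPUTrust 1
# Read the slot attributes back to confirm both values are set
racadm get system.pcislot.2
```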

Now the DPU will POST like the x86 server. Please note that if no OS is installed on the DPU, POST will take up to 10 minutes because of a boot timeout. This will not happen once ESXi is installed on the x86 host and the ARM DPU.

ESXi Installation

1. Mount an ESXi 8.0 installer ISO.

2. Right at the beginning, the installer asks whether ESXi should also be installed on the DPU.

3. At the confirmation screen, the DPU will be listed as an install device.

4. ESXi is installed on the x86 host and the DPU at the same time.

When the ESXi installation is complete, the host can be added to a vCenter and a Distributed Switch configured for vSphere DSE (offloading). In addition, the host should be added to an NSX transport node profile with Enhanced Data Path enabled.
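Before configuring the Distributed Switch, it can be useful to confirm that the x86 ESXi host actually sees the DPU. A minimal check from the ESXi shell; the vendor string to filter on is an assumption and depends on which card is installed:

```shell
# List all PCI devices and show the entries belonging to the DPU
# (use "mellanox" or "nvidia" as the filter for a BlueField card)
esxcli hardware pci list | grep -i -B 2 -A 8 pensando
```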