
The Link Layer Discovery Protocol (LLDP) is a vendor-neutral link layer protocol used by network devices to advertise their identity, capabilities, and neighbors on an IEEE 802 local area network, usually over Ethernet. Unlike the Cisco Discovery Protocol (CDP), it is not proprietary and can be used across different vendors.

VMware vSphere adds LLDP capability to the Distributed Virtual Switch (DVS). CDP is available both on the DVS and on standard virtual switches (where it is enabled in listen mode by default).
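For example, if you manage the environment with PowerCLI, switching a vDS to LLDP in both advertise and listen mode is roughly a one-liner; the vCenter address and the switch name "DSwitch01" below are just placeholders, and parameter names may vary slightly between PowerCLI versions:

Connect-VIServer -Server vcenter.example.local
Get-VDSwitch -Name "DSwitch01" | Set-VDSwitch -LinkDiscoveryProtocol LLDP -LinkDiscoveryProtocolOperation Both

The same setting is also exposed in the vSphere Client under the vDS properties (Discovery protocol).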

Funny enough, when I tried to use LLDP with Dell switches (from the Force10 acquisition), LLDP was working only in advertise mode and NOT in listen mode. So the physical switches were able to see the virtual switches with this command:

show lldp neighbors

But the DVS was not showing anything on the uplink ports. I've found this interesting post about this behavior: Force10 and vSphere vDS Interoperability Issue.

In that case, when LLDP was enabled on a vSphere Distributed Switch, the uplinks on all ESXi hosts started disconnecting and reconnecting intermittently, with log errors similar to these:

Lost uplink redundancy on DVPorts: “1549/03 4b 0b 50 22 3f d7 8f-28 3c ff dd a4 76 26 15”, “1549/03 4b 0b 50 22 3f d7 8f-28 3c ff dd a4 76 26 15”, “1549/03 4b 0b 50 22 3f d7 8f-28 3c ff dd a4 76 26 15”, “1549/03 4b 0b 50 22 3f d7 8f-28 3c ff dd a4 76 26 15”. Physical NIC vmnic1 is down.

Network connectivity restored on DVPorts: “1549/03 4b 0b 50 22 3f d7 8f-28 3c ff dd a4 76 26 15”, “1549/03 4b 0b 50 22 3f d7 8f-28 3c ff dd a4 76 26 15”. Physical NIC vmnic1 is up

Uplink redundancy restored on DVPorts: “1549/03 4b 0b 50 22 3f d7 8f-28 3c ff dd a4 76 26 15”, “1549/03 4b 0b 50 22 3f d7 8f-28 3c ff dd a4 76 26 15”, “1549/03 4b 0b 50 22 3f d7 8f-28 3c ff dd a4 76 26 15”, “1549/03 4b 0b 50 22 3f d7 8f-28 3c ff dd a4 76 26 15”. Physical NIC vmnic1 is up

That was not my case, and I also had a different type of switch, but the interesting part was the troubleshooting analysis, which traced the problem back to DCB negotiation.

The Data Center Bridging Exchange Protocol (DCBX) uses LLDP to convey the capabilities and configuration of FCoE features between neighbors. This is why enabling LLDP on the vDS can cause issues.
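If you want to check what is actually being exchanged on the wire, capturing the LLDP EtherType (0x88cc) on an uplink from the ESXi host should also show the DCBX TLVs; a rough sketch with pktcap-uw (options may vary between ESXi releases):

# pktcap-uw --uplink vmnic1 --ethtype 0x88cc -o /tmp/lldp_vmnic1.pcap

The resulting capture can then be opened in Wireshark to inspect the LLDP/DCBX TLVs.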

If you are not using DCB, the easiest solution to this problem is to disable DCB on the Force10 switches using the following commands:


# conf t
# no dcb enable
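
If this fixes the flapping and you want the change to survive a reload, remember to save the configuration; on FTOS that is typically done with:

# copy running-config startup-config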

Alternatively, you can try disabling FCoE on the ESXi uplinks; this can be useful if you need DCB or if you have FCoE hardware adapters. You can use the following commands from the ESXi CLI:


# esxcli fcoe nic list
# esxcli fcoe nic disable -n vmnic0
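
As far as I know, disabling FCoE on a NIC this way takes effect only after the host is rebooted, so plan for a maintenance window; afterwards you can confirm the state with:

# esxcli fcoe nic list
# esxcli fcoe adapter list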
