The power of locality in VMware vSphere environments


I was doing some network throughput testing last weekend and wanted to see how much locality played into virtual machine deployments. The VMware vmxnet3 paravirtualized network adapter is capable of 10Gb/s+ speeds and was designed to be extremely performant. To see what kind of throughput I could get over a 1Gb/s link, I fired up my trusty old friend iperf and streamed 6GB of data between VMs located on different ESXi hosts:

$ iperf -c 192.168.1.101 -p 8000 -t 60 -w 8M

------------------------------------------------------------
Client connecting to 192.168.1.101, TCP port 8000
TCP window size: 416 KByte (WARNING: requested 8.00 MByte)
------------------------------------------------------------
[  3] local 192.168.1.102 port 55858 connected with 192.168.1.101 port 8000
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-60.0 sec  6.50 GBytes   930 Mbits/sec
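The transcripts above only show the client side; the matching server command is an assumption on my part, but it would look something like the comment below. The numbers are also easy to sanity-check, since iperf counts GBytes as 2^30 bytes and Mbits/sec as 10^6 bits/sec:

```shell
# On the receiving VM (192.168.1.101 in the transcripts above), the
# iperf server would be started with something like:
#   iperf -s -p 8000 -w 8M
#
# Sanity-check the reported bandwidth: 6.50 GBytes moved in 60 seconds
# works out to roughly the 930 Mbits/sec iperf printed.
awk 'BEGIN { printf "%.1f Mbits/sec\n", 6.50 * 2^30 * 8 / 60 / 10^6 }'
```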

This was about what I expected given the theoretical maximum of a 1Gb/s copper link. To see how things performed when both VMs were co-located on the same host, I vMotioned one of the servers and re-ran the test:

$ iperf -c 192.168.1.101 -p 8000 -t 60 -w 8M

------------------------------------------------------------
Client connecting to 192.168.1.101, TCP port 8000
TCP window size: 416 KByte (WARNING: requested 8.00 MByte)
------------------------------------------------------------
[  3] local 192.168.1.102 port 55856 connected with 192.168.1.101 port 8000
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-60.0 sec   197 GBytes  28.3 Gbits/sec

The vmxnet3 adapter is not just capable of pushing 10Gb/s; it can push data as fast as the motherboard and chipset allow! I ran this test NUMEROUS times, and in every case I was able to push well over 28Gb/s between the co-located VMs. In this new world of containers, microservices and short-lived machines this may not be all that useful, but there are edge cases where VM affinity rules could really benefit network performance.
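If you want to experiment with keeping two chatty VMs on the same host, DRS affinity rules can be scripted. Here is a minimal sketch assuming the govmomi govc CLI and placeholder cluster/VM names (MyCluster, vm-web and vm-db are hypothetical, and govc needs GOVC_URL and credentials pointing at your vCenter):

```shell
# Hypothetical names; adjust -cluster and the VM list for your environment.
# Creates a DRS "keep together" (affinity) rule so DRS co-locates the VMs.
govc cluster.rule.create -name keep-together -enable -affinity \
    -cluster MyCluster vm-web vm-db
```

The same rule can be created in the vSphere client under the cluster's DRS rules, or with PowerCLI; govc is just the lightest-weight way to do it from a shell.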

This article was posted by Matty on 2017-01-20 17:15:00 -0400