Using the vSphere flash read cache feature to speed up sequential reads

Duncan Epping gave a great overview of vSphere’s Flash Read Cache feature and I wanted to take it for a spin. This feature reminds me of the ZFS L2ARC, which lets SSDs act as a second-level read cache (ZFS can also use SSDs for the write-side ZIL). vSphere’s Flash Read Cache only provides read caching, but that is still super useful for read-intensive workloads. To see how it performed I broke out my trusty old sequential read script to get a baseline:

$ fsexercise mongo.dbf2 1M
43 Gb read, 111 MB/s
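
(fsexercise is the sequential read script mentioned above; if you don’t have it handy, a plain dd read should give a comparable sequential baseline. The invocation below is my stand-in, not the script itself:)

$ dd if=mongo.dbf2 of=/dev/null bs=1M
# reads the file sequentially in 1MB chunks and prints the
# aggregate throughput when it completes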

111 MB/s is not too shabby for my rusty old NFS datastore. Once the initial test was complete I connected a Crucial MX300 SSD to my server and added it as a flash cache through Hosts and Clusters -> Settings -> Virtual Flash Resource Management. Next I added 50GB of flash cache space to the disk I was testing and ran a read test to “prime the cache”. Once this completed I ran a second test, which produced significantly different results:

$ fsexercise mongo.dbf2 1M
43 Gb read, 443 MB/s

The addition of one SSD sped up single-threaded sequential reads by 4X, which was significantly more than I was expecting. I’m planning to run some random read tests this weekend and suspect they will fare FAR better. This is a useful feature and 100% free if you have the right licenses in place. Definitely something to keep in your tool belt if you manage VMware infrastructure.
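
As an aside, the virtual flash configuration can also be inspected from the ESXi shell. The esxcli namespaces below are from my 5.5-era notes and may differ on your build, so treat them as a sketch rather than gospel:

$ esxcli storage vflash device list
# lists the SSDs eligible for (or backing) the virtual flash resource

$ esxcli storage vflash cache list
# lists the per-VMDK read caches that have been carved out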

The power of locality in VMware vSphere environments

I was doing some network throughput testing last weekend and wanted to see how much locality played into virtual machine deployments. The VMware vmxnet3 paravirtualized network adapter is capable of 10Gb/s+ speeds and was designed to be extremely performant. To see what kind of throughput I could get over a 1Gb/s link I fired up my trusty old friend iperf and streamed 6GB of data between VMs located on different ESXi hosts:

$ iperf -c 192.168.1.101 -p 8000 -t 60 -w 8M
------------------------------------------------------------
Client connecting to 192.168.1.101, TCP port 8000
TCP window size:  416 KByte (WARNING: requested 8.00 MByte)
------------------------------------------------------------
[  3] local 192.168.1.102 port 55858 connected with 192.168.1.101 port 8000
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-60.0 sec  6.50 GBytes   930 Mbits/sec

This was about what I expected given the theoretical maximum of a 1Gb/s copper link. To see how things performed when both VMs were co-located, I vMotioned one of the servers and re-ran the test:

$ iperf -c 192.168.1.101 -p 8000 -t 60 -w 8M
------------------------------------------------------------
Client connecting to 192.168.1.101, TCP port 8000
TCP window size:  416 KByte (WARNING: requested 8.00 MByte)
------------------------------------------------------------
[  3] local 192.168.1.102 port 55856 connected with 192.168.1.101 port 8000
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-60.0 sec   197 GBytes  28.3 Gbits/sec

The vmxnet3 adapter is not just capable of pushing 10Gb/s; it is capable of pushing data as fast as the motherboard and chipset allow! I ran this test NUMEROUS times, and in all cases I was able to push well over 28Gb/s between the two co-located VMs. In this new world of containers, microservices and short-lived machines this may not be all that useful, but there are edge cases where VM affinity rules could really benefit network performance.
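
If you want to reproduce this test, the receiving VM just needs an iperf server listening on the same port. The server invocation below is an assumption on my part (it wasn’t captured above), but it mirrors the client flags:

$ iperf -s -p 8000 -w 8M
# -s run as a server, -p listen on port 8000, -w request an 8MB TCP window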

CentOS 6 Linux VMs running inside vSphere 4.1 appear to dynamically discover new LUNs

I made an interesting discovery yesterday while working on a CentOS 6 Gluster node. The node was virtualized inside vSphere 4.1 and needed some additional storage added to it. I went into the VI client and added a new disk while the server was running, expecting to have to reboot or rescan the storage devices in the server. Well, I was pleasantly surprised when the following messages popped up on the console:

[console screenshot: kernel messages announcing the newly attached disk]

Nice, it looks like the device was added to the system dynamically! I ran dmesg to confirm:

$ dmesg | tail -14

mptsas: ioc0: attaching ssp device: fw_channel 0, fw_id 1, phy 1, sas_addr 0x5000c295575f0957
scsi 2:0:1:0: Direct-Access     VMware   Virtual disk     1.0  PQ: 0 ANSI: 2
sd 2:0:1:0: [sdb] 75497472 512-byte logical blocks: (38.6 GB/36.0 GiB)
sd 2:0:1:0: [sdb] Write Protect is off
sd 2:0:1:0: [sdb] Mode Sense: 03 00 00 00
sd 2:0:1:0: [sdb] Cache data unavailable
sd 2:0:1:0: [sdb] Assuming drive cache: write through
sd 2:0:1:0: Attached scsi generic sg2 type 0
sd 2:0:1:0: [sdb] Cache data unavailable
sd 2:0:1:0: [sdb] Assuming drive cache: write through
 sdb: unknown partition table
sd 2:0:1:0: [sdb] Cache data unavailable
sd 2:0:1:0: [sdb] Assuming drive cache: write through
sd 2:0:1:0: [sdb] Attached SCSI disk

Rock on! In the past I’ve had to reboot virtual machines or rescan the storage devices to find new LUNs. This VM was configured with a LSI Logic SAS controller and is running CentOS 6. I’m not sure if something changed in the storage stack in CentOS 6, or if the SAS controller is the one to thank for this nicety. Either way I’m a happy camper, and I love it when things just work! :)
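
For reference, here is the manual rescan I’ve had to resort to on hosts that don’t notice new LUNs on their own. The host2 below matches the SCSI host in the dmesg output above but is otherwise a guess; check /sys/class/scsi_host/ to see the adapters on your system:

$ echo "- - -" | sudo tee /sys/class/scsi_host/host2/scan
# the three dashes are wildcards for channel, target and LUN, telling
# the kernel to rescan everything attached to that adapter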

Upgrading from vSphere 4.0 to vSphere 4.1 update 2

This weekend I decided to take the plunge and upgrade my vSphere 4.0 servers to vSphere 4.1 update 2. The upgrades went remarkably well, and I was amazed at just how easy it is to upgrade a cluster with VMware Update Manager. The upgrade process took about 30 minutes per host, and I used the following two resources to guide me through the upgrade:

1. VMGuru’s vSphere 4.0 upgrade procedure.

2. VMware KB article 1022140, which includes a video of the upgrade.

It’s amazing how big VMware is getting. It seems like just yesterday I was running their desktop virtualization product, and people who didn’t have an extensive IT background would mumble “Virtual what?” Oh how times have changed.

How to become a VMware Certified Professional (VCP4)

I passed the VMware Certified Professional 4 (VCP4) exam this past Monday. The exam was a bit more difficult than I expected, though I passed it with flying colors. If you are thinking about taking the exam, or are interested in learning more about vSphere, you will definitely want to start by reading Scott Lowe’s Mastering VMware vSphere book. Scott did an excellent job putting the book together, and it’s concise and easy to follow (I’m curious how Mike Laverick’s vSphere implementation book compares to Scott’s).

After you read through Scott’s book, you should check out Simon Long’s study notes! Simon has links from the VCP4 blueprint to every piece of documentation you will need to pass the test, and you will learn a ton in the process! Now, there will come a morning or afternoon when you need to wander to the testing center to sign your life away and take the exam. Get there 30 minutes early and look over the vReference card! This card contains a TON of ESX material in a single location and will ensure that all of the information you studied is fresh in your mind. Best of luck to anyone taking the exam!