Blog O' Matty


Using the vSphere flash read cache feature to speed up sequential reads

This article was posted by Matty on 2017-01-20 17:24:00 -0400

Duncan Epping gave a great overview of vSphere’s flash read cache feature and I wanted to take it for a ride. This feature reminds me of the ZFS level-2 ARC (L2ARC), which allows SSD drives to be used as a read cache in front of slower disks. Like the L2ARC, the vSphere flash read cache only accelerates reads, but that is still super useful for read-intensive workloads. To see how it performed I broke out my trusty old sequential read script to get a baseline:

$ fsexercise mongo.dbf2 1M

43 GB read, 111 MB/s
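
If you don’t have a similar script handy, a rough sequential read baseline can be approximated with plain old dd (the file name and block size below mirror the test above, and dropping the page cache first keeps the numbers honest):

$ echo 3 | sudo tee /proc/sys/vm/drop_caches

$ dd if=mongo.dbf2 of=/dev/null bs=1M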

Not too shabby for my old rusty NFS datastore. Once the initial test was complete I connected a Crucial MX300 SSD to my server and added it as a flash cache through Hosts and Clusters -> Settings -> Virtual Flash Resource Management. Next I added 50GB of flash cache space to the disk I was testing and ran a read test to “prime the cache”. Once this completed I ran a second test, which provided significantly different results:

$ fsexercise mongo.dbf2 1M

43 GB read, 443 MB/s
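
If you prefer to poke at the flash resource from the ESXi command line instead of the web client, the esxcli storage vflash namespace can show which SSDs back the virtual flash resource and which caches have been carved out of it (the exact output varies by release):

$ esxcli storage vflash device list

$ esxcli storage vflash cache list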

The addition of one SSD sped up single-threaded sequential reads by 4X, which was significantly more than I was expecting. I’m planning to run some random read tests this weekend and suspect they will fare FAR better. This is a useful feature and 100% free if you have the right licenses in place. Definitely something to keep in your tool belt if you manage VMware infrastructure.
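
For the random read testing, something like the following fio invocation should exercise the cache with small random reads (the block size, runtime and io engine below are placeholders, not gospel):

$ fio --name=randread --filename=mongo.dbf2 --rw=randread --bs=8k --ioengine=libaio --direct=1 --runtime=60 --time_based --group_reporting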

The power of locality in VMware vSphere environments

This article was posted by Matty on 2017-01-20 17:15:00 -0400

I was doing some network throughput testing last weekend and wanted to see how much locality played into virtual machine deployments. The VMware vmxnet3 virtual network adapter is capable of 10Gb/s+ speeds and was designed to be extremely performant. To see what kind of throughput I could get over a 1Gb/s link I fired up my trusty old friend iperf and streamed 6GB of data between VMs located on different ESXi hosts:

$ iperf -c 192.168.1.101 -p 8000 -t 60 -w 8M

------------------------------------------------------------
Client connecting to 192.168.1.101, TCP port 8000
TCP window size: 416 KByte (WARNING: requested 8.00 MByte)
------------------------------------------------------------
[ 3] local 192.168.1.102 port 55858 connected with 192.168.1.101 port 8000
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-60.0 sec 6.50 GBytes 930 Mbits/sec
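
For completeness, the receiving VM would have been running iperf in server mode with a matching port (and ideally the same window size request), something along the lines of:

$ iperf -s -p 8000 -w 8M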

This was about what I expected given the theoretical maximum of a 1Gb/s copper link. To see how things performed when both VMs were co-located I vMotioned one of the servers and re-ran the test:

$ iperf -c 192.168.1.101 -p 8000 -t 60 -w 8M

------------------------------------------------------------
Client connecting to 192.168.1.101, TCP port 8000
TCP window size: 416 KByte (WARNING: requested 8.00 MByte)
------------------------------------------------------------
[ 3] local 192.168.1.102 port 55856 connected with 192.168.1.101 port 8000
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-60.0 sec 197 GBytes 28.3 Gbits/sec

The vmxnet3 adapter is not just capable of pushing 10Gb/s, it is capable of pushing data as fast as the motherboard and chipset allow! I ran this test NUMEROUS times and in all cases I was able to push well over 28Gb/s between the two co-located VMs. In this new world of containers, microservices and short-lived machines this may not be all that useful. But there are edge cases where VM affinity rules could really benefit network performance.

A quick and easy way to rotate and resize images in Ubuntu Linux

This article was posted by Matty on 2016-12-27 11:34:00 -0400

I’ve been using the ImageMagick package for several years to resize and rotate images that I link to on my blog. Both operations are super easy to do with the convert utility’s “-resize” and “-rotate” options. The following command will shrink an image by 50%:

$ convert -resize 50% cat.jpg cat.jpg1

To rotate an image 90 degrees you can use “-rotate”:

$ convert -rotate 90 cat.jpg cat.jpg1
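
The two options can also be combined in a single pass, and ImageMagick’s mogrify utility accepts the same options if you want to process a batch of files in place (careful: mogrify overwrites the originals, and the file names below are just examples):

$ convert -resize 50% -rotate 90 cat.jpg cat-thumb.jpg

$ mogrify -resize 50% -rotate 90 *.jpg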

The convert(1) manual page provides a TON more detail along with descriptions of numerous other conversion options.

My path to bees and vegetables

This article was posted by Matty on 2016-12-27 11:09:00 -0400

A couple of years back I purchased our first home. It was a “fixer upper” so I spent a year or two working on various projects. Once those were complete I decided to start raising honey bees and growing fruits and vegetables. This was one of the best decisions of my life and it is just as challenging as designing and troubleshooting complex computer systems. To log my adventures I started a new blog specifically targeting gardening and homesteading. It’s amazing how many similarities there are between nature and computers. I’m planning to chronicle my growing experiences there. 2017 is going to be a great year!

The importance of cleaning up disk headers after testing

This article was posted by Matty on 2016-12-25 10:54:00 -0400

Yesterday I was running some benchmarks against a new MySQL server configuration. As part of my testing I wanted to see how things looked with ZFS as the back-end, so I loaded up some SSDs and attempted to create a ZFS pool. The zpool utility spit out a “device busy” error when I tried to create the pool, leading to a confused and bewildered Matty. After a bit of tracing I noticed that mdadm was laying claim to my devices:

$ cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4]
md127 : active (auto-read-only) raid5 sde[0] sdc[2] sdb[1]
1465148928 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
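
These days a quick way to see what is laying claim to a disk is to list its on-disk signatures with wipefs (read-only when run without options) or lsblk (the device name below is one of the members of the array above):

$ wipefs /dev/sdb

$ lsblk -f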

I had previously done some testing with mdadm, and it dawned on me that the md headers might still be resident on disk. Sure enough, they were:

$ mdadm -E /dev/sdb

/dev/sdb:
Magic : a92b4efc
Version : 0.90.00
UUID : dc867613:8e75d8e8:046b61bf:26ec6fc5
Creation Time : Tue Apr 28 19:25:16 2009
Raid Level : raid5
Used Dev Size : 732574464 (698.64 GiB 750.16 GB)
Array Size : 1465148928 (1397.27 GiB 1500.31 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 127

Update Time : Fri Oct 21 17:18:22 2016
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Checksum : d2cbaad - correct
Events : 4

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 1 8 16 1 active sync /dev/sdb

0 0 8 64 0 active sync /dev/sde
1 1 8 16 1 active sync /dev/sdb
2 2 8 32 2 active sync /dev/sdc

I didn’t run ‘mdadm --zero-superblock’ after my testing so of course md thought it was still the owner of these devices. After I zeroed the md superblock on each device I was able to create my pool without issue. Fun times in the debugging world. :)
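
For anyone hitting the same thing, a cleanup along these lines should do the trick (the device names are taken from the output above, and this destroys the md metadata, so triple check you have the right disks). wipefs -a can also be used to clear any other stale signatures:

$ mdadm --stop /dev/md127

$ mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sde

$ wipefs -a /dev/sdb /dev/sdc /dev/sde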