A quick and easy way to rotate and resize images in Ubuntu Linux

I’ve been using the ImageMagick package for several years to resize and rotate images that I link to on my blog. Both operations are super easy to do with the convert utility’s “-resize” and “-rotate” options. The following command will shrink an image by 50%:

$ convert -resize 50% cat.jpg cat1.jpg

To rotate an image 90 degrees you can use “-rotate”:

$ convert -rotate 90 cat.jpg cat1.jpg

The convert(1) manual page provides a TON more detail, along with descriptions of numerous other conversion options.
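To batch-process a whole directory, a small shell loop works nicely. This is a minimal sketch assuming ImageMagick is installed; the “-small” suffix is just my naming choice:

```shell
# Shrink every JPEG in the current directory by 50%, writing each
# result next to the original with a "-small" suffix.
for f in *.jpg; do
    [ -e "$f" ] || continue                       # no .jpg files matched
    convert -resize 50% "$f" "${f%.jpg}-small.jpg"
done
```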

My path to bees and vegetables

A couple of years back I purchased our first home. It was a “fixer upper,” so I spent a year or two working on various projects. Once those were complete I decided to start raising honey bees and growing fruits and vegetables. This was one of the best decisions of my life, and it is just as challenging as designing and troubleshooting complex computer systems. To log my adventures I started a new blog specifically targeting gardening and homesteading. It’s amazing how many similarities there are between nature and computers. I’m planning to chronicle my growing experiences there. 2017 is going to be a great year!

The importance of cleaning up disk headers after testing

Yesterday I was running some benchmarks against a new MySQL server configuration. As part of my testing I wanted to see how things looked with ZFS as the back-end, so I loaded up some SSDs and attempted to create a ZFS pool. zpool spit out a “device busy” error when I tried to create my pool, leading to a confused and bewildered matty. After a bit of tracing I noticed that mdadm was laying claim to my devices:

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md127 : active (auto-read-only) raid5 sde[0] sdc[2] sdb[1]
1465148928 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

I had previously done some testing with mdadm, and it dawned on me that the headers might still be resident on disk. Sure enough, they were:

$ mdadm -E /dev/sdb
          Magic : a92b4efc
        Version : 0.90.00
           UUID : dc867613:8e75d8e8:046b61bf:26ec6fc5
  Creation Time : Tue Apr 28 19:25:16 2009
     Raid Level : raid5
  Used Dev Size : 732574464 (698.64 GiB 750.16 GB)
     Array Size : 1465148928 (1397.27 GiB 1500.31 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 127

    Update Time : Fri Oct 21 17:18:22 2016
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
       Checksum : d2cbaad - correct
         Events : 4

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1       8       16        1      active sync   /dev/sdb

   0     0       8       64        0      active sync   /dev/sde
   1     1       8       16        1      active sync   /dev/sdb
   2     2       8       32        2      active sync   /dev/sdc

I didn’t run ‘mdadm --zero-superblock’ after my testing, so of course md thought it was still the owner of these devices. After I zeroed the md superblock I was able to create my pool without issue. Fun times in the debugging world. :)
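For the record, the cleanup looked something like this. The device names come from the mdadm output above, and zeroing a superblock is destructive, so double-check which disks are array members before running it:

```
$ mdadm --stop /dev/md127
$ mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sde
```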

Debugging OpenLDAP ACLs

OpenLDAP provides a super powerful ACL syntax which allows you to control access to every nook and cranny of your directory server. When I’m testing advanced ACL configurations, I have found it incredibly useful to add the “ACL” log option to the loglevel directive:

loglevel ACL
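If you manage slapd through the cn=config backend instead of slapd.conf, the equivalent change (sketched from memory, so verify the attribute name on your build) is to update olcLogLevel:

```
$ ldapmodify -Y EXTERNAL -H ldapi:/// <<EOF
dn: cn=config
changetype: modify
replace: olcLogLevel
olcLogLevel: acl
EOF
```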

When this option is set, slapd will show you how it applies the ACLs to a given LDAP operation:

Dec  4 09:01:00 rocco slapd[6026]: => acl_mask: access to entry "ou=users,dc=prefetch,dc=net", attr "entry" requested
Dec  4 09:01:00 rocco slapd[6026]: => acl_mask: to all values by "", (=0) 
Dec  4 09:01:00 rocco slapd[6026]: <= check a_dn_pat: users
Dec  4 09:01:00 rocco slapd[6026]: <= check a_peername_path:
Dec  4 09:01:00 rocco slapd[6026]: <= acl_mask: [2] applying read(=rscxd) (stop)
Dec  4 09:01:00 rocco slapd[6026]: <= acl_mask: [2] mask: read(=rscxd)
Dec  4 09:01:00 rocco slapd[6026]: => slap_access_allowed: search access granted by read(=rscxd)
Dec  4 09:01:00 rocco slapd[6026]: => access_allowed: search access granted by read(=rscxd)
Dec  4 09:01:00 rocco slapd[6026]: => access_allowed: search access to "cn=matty,ou=users,dc=prefetch,dc=net" "uid" requested

This is super handy and will save you tons of time and heartburn when crafting complex ACLs.
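To connect the trace back to configuration: a hypothetical ACL like the following (illustrative only, not my actual slapd.conf) would grant authenticated users read access to the users subtree and generate output similar to the above:

```
access to dn.subtree="ou=users,dc=prefetch,dc=net"
        by users read
        by * none
```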

Troubleshooting vSphere NSX manager issues

This week I installed VMware NSX and created a new NSX manager instance. After deploying the OVF and prepping my cluster members, I went to create the three recommended controller nodes. This resulted in a “Failed to power on VM NSX Controller” error, which didn’t make sense. The package was installed correctly, the configuration parameters were correct, and I double- and triple-checked that the controllers could communicate with my NSX manager. NSX manager provides appmgmt, system and manager logs, which can be viewed with the show command:

nsx> show log manager follow

To see what was going on, I tailed the manager log and attempted to create another controller. The creation failed, but the following log entry was generated:

inherited from com.vmware.vim.binding.vim.fault.InsufficientFailoverResourcesFault: Insufficient resources to satisfy configured failover level for vSphere HA.

I had a cluster node in maintenance mode to apply some security updates so this made total sense. Adding the node back to the cluster allowed me to create my controllers without issue.

Importing and mounting ZFS pools at boot time on Fedora servers

If you read my blog you know I am a huge fan of the ZFS file system. Now that the ZFS on Linux project is shipping with Ubuntu, I hope it gets more use in the real world. Installing ZFS on a Fedora server is relatively easy, though I haven’t found a good guide describing how to import pools and mount file systems at boot. After a bit of digging in /usr/lib/systemd/system/, it turns out this is super easy. On my Fedora 24 server I needed to enable a couple of systemd unit files to get my pool imported at boot time:

$ systemctl enable zfs-mount.service
$ systemctl enable zfs-import-cache.service
$ systemctl enable zfs-import-scan.service

Once these were enabled I rebooted my server and my pool was up and operational:

$ zpool status -v
  pool: bits
 state: ONLINE
  scan: none requested
config:

	NAME                                           STATE     READ WRITE CKSUM
	bits                                           ONLINE       0     0     0
	  raidz1-0                                     ONLINE       0     0     0
	    ata-WDC_WD7500AACS-00D6B1_WD-WCAU47102921  ONLINE       0     0     0
	    ata-WDC_WD7500AACS-00D6B1_WD-WCAU47306478  ONLINE       0     0     0
	    ata-WDC_WD7500AACS-00D6B1_WD-WCAU47459778  ONLINE       0     0     0
	    ata-WDC_WD7500AACS-00D6B1_WD-WCAU47304342  ONLINE       0     0     0

errors: No known data errors

$ df -h /bits
Filesystem      Size  Used Avail Use% Mounted on
bits            2.0T  337G  1.7T  17% /bits
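One caveat worth noting, based on my understanding of the unit files (verify the behavior on your release): zfs-import-cache.service imports pools recorded in /etc/zfs/zpool.cache, while zfs-import-scan.service scans devices when no cache file is present. If a pool doesn’t show up after a reboot, making sure it is listed in the cache file is a good first check:

```
$ zpool set cachefile=/etc/zfs/zpool.cache bits
```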

The future is looking bright for ZFS and I hope the Linux port will become rock solid as more people use it.