Locating Linux LVM (Logical Volume Manager) free space

The Linux Logical Volume Manager (LVM) provides a relatively easy way to combine block devices into a pool of storage from which you can allocate space. In LVM terminology, there are three main concepts:

Physical Volumes – A sequence of sectors on a physical device.
Volume Groups – A group of physical volumes.
Logical Volumes – A logical device that is allocated from a volume group.

When you use LVM to manage your storage, you will typically do something similar to this when new storage requests are made:

1. Create a physical volume on a block device or partition on a block device.

2. Add one or more physical volumes to a volume group.

3. Allocate logical volumes from the volume group.

4. Create a file system on the logical volume.
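The four steps above map directly onto the LVM command-line tools. Here is a minimal sketch, assuming a spare disk at /dev/sdb and made-up volume group and logical volume names (none of these names come from a real system; the run wrapper just echoes each command so the sketch is safe to paste -- drop it, and run as root, to actually execute):

```shell
# Dry-run sketch of the four provisioning steps above.
# /dev/sdb, DataVG, and applv are illustrative names.
run() { echo "+ $*"; }                  # swap the echo for "$@" to really run

run pvcreate /dev/sdb                   # 1. initialize a physical volume
run vgcreate DataVG /dev/sdb            # 2. add the PV to a new volume group
run lvcreate -L 10G -n applv DataVG     # 3. carve a 10GB logical volume out of it
run mkfs.ext3 /dev/DataVG/applv         # 4. put a file system on the logical volume
```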

With this approach you can end up with free space in one or more physical volumes or volume groups, depending on how you provisioned the storage. To see how much free space your physical volumes have, you can run the pvs utility without any arguments:

$ pvs

  PV         VG       Fmt  Attr PSize  PFree  
  /dev/sda2  VolGroup lvm2 a--   8.51g      0 
  /dev/sdb   DataVG   lvm2 a--  18.00g  18.00g
  /dev/sdc   DataVG   lvm2 a--  18.00g 184.00m

The “PFree” column shows the free space for each physical volume in the system. To see how much free space your volume groups have, you can run the vgs utility without any arguments:

$ vgs

  VG       #PV #LV #SN Attr   VSize  VFree 
  DataVG     2   1   0 wz--n- 35.99g 18.18g
  VolGroup   1   2   0 wz--n-  8.51g     0 

In the vgs output the “VFree” column shows the amount of free space in each volume group. LVM is nice, but I’m definitely a ZFS fan when it comes to storage management. I’m hopeful that Oracle will come around and port ZFS to Linux, since it would benefit a lot of users and hopefully help repair some of the broken relations between Oracle and the open source community. I may be too much of an optimist, though.
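The same numbers are easy to get at from scripts, since pvs and vgs both take --noheadings and -o to emit just the columns you care about (e.g. pvs --noheadings -o pv_name,pv_free). As a sketch, a small awk filter over the pvs listing shown above picks out the physical volumes that still have free space (the here-string stands in for live pvs output):

```shell
# Sample output captured from the pvs run above; in a live script you
# would pipe "pvs --noheadings" into awk instead of this variable.
pvs_out='/dev/sda2  VolGroup lvm2 a--   8.51g      0
/dev/sdb   DataVG   lvm2 a--  18.00g  18.00g
/dev/sdc   DataVG   lvm2 a--  18.00g 184.00m'

# Print the PV name and PFree column for every PV with free space left.
echo "$pvs_out" | awk '$6 != "0" { printf "%-10s free: %s\n", $1, $6 }'
```

With the sample data this reports /dev/sdb and /dev/sdc and skips the fully allocated /dev/sda2.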

Getting E-mail notifications when MD devices fail

I use the MD (multiple device) driver to mirror the boot devices on the Linux servers I support. When I first started using MD, the mdadm utility was not available to manage and monitor MD devices. Since disk failures are relatively common in large shops, I used the shell script from my SysAdmin article Monitoring and Managing Linux Software RAID to send E-mail when a device entered the failed state. While reading through the mdadm(8) manual page, I came across the “--monitor” and “--mail” options. These options can be used to monitor the operational state of the MD devices in a server and generate E-mail notifications if a problem is detected. E-mail notification support can be enabled by running mdadm with the “--monitor” option to monitor devices, the “--daemonise” option to create a daemon process, and the “--mail” option to generate E-mail:

$ /sbin/mdadm --monitor --scan --daemonise --mail=root@localhost

Once mdadm is daemonized, an E-mail similar to the following will be sent each time a failure is detected:

From: mdadm monitoring 
To: root@localhost.localdomain
Subject: Fail event on /dev/md1:biscuit

This is an automatically generated mail message from mdadm
running on biscuit

A Fail event had been detected on md device /dev/md1.

Faithfully yours, etc.
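To make the monitoring survive a reboot, the mail address can also live in mdadm's configuration file, where “mdadm --monitor --scan” will pick it up; most distributions ship an init script that starts the monitor for you, though the script name and config path vary by distribution. A minimal sketch:

```
# /etc/mdadm.conf (Debian-based systems use /etc/mdadm/mdadm.conf)
# Address that "mdadm --monitor --scan" sends its alerts to:
MAILADDR root@localhost
```

The mdadm “--test” option is also handy here: it generates a test alert for each array when the monitor starts, so you can verify mail delivery before a real failure occurs.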

I digs me some mdadm!

Linux LVM silliness

While attempting to create a 2-way LVM mirror this weekend on my Fedora Core 5 workstation, I received the following error:

$ lvcreate -L1024 -m 1 vgdata

  Not enough PVs with free space available for parallel allocation.
  Consider --alloc anywhere if desperate.

Since the two devices were initialized specifically for this purpose and contained no other data, I was confused by this error message. After scouring Google for answers, I found a post indicating that I needed a log LV for this to work, and that the log LV had to be on its own disk. I am not sure about most people, but who on earth orders a box with three disks? Ugh!
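For anyone hitting the same wall, there appear to be two escape hatches: keep the mirror log in memory with --mirrorlog core (older LVM2 releases spell it --corelog), which needs only two PVs at the cost of a full resync after every reboot, or relax placement with --alloc anywhere as the error message itself suggests. A dry-run sketch that echoes the commands rather than running them (vgdata is from the example above; the size and LV name are illustrative):

```shell
run() { echo "+ $*"; }   # swap the echo for "$@" (as root) to really execute

# Two-way mirror with an in-memory log: only two PVs required,
# but the mirror is resynced from scratch after each reboot.
run lvcreate -L 1G -m 1 --mirrorlog core -n mirrorlv vgdata

# Or let LVM place the on-disk log wherever it fits, even next to a mirror leg.
run lvcreate -L 1G -m 1 --alloc anywhere -n mirrorlv vgdata
```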