Expanding Solaris metadevices

I recently had a file system on a Solaris Volume Manager (SVM) metadevice fill up, and I needed to expand it to make room for additional data. Since the expansion could potentially cause problems, I backed up the file system and saved a copy of the metastat and df output to my local workstation. Having several backups always gives me a warm fuzzy, since I know I have a way to revert to the old configuration if something goes awry. Once the configuration was in a safe place and the data backed up, I used the umount command to unmount the /data file system, which lives on metadevice d100:

$ df -h

Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c1t0d0s0      7.9G   2.1G   5.7G    27%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   2.3G   600K   2.3G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
/usr/lib/libc/libc_hwcap1.so.1
                       7.9G   2.1G   5.7G    27%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
/dev/dsk/c1t0d0s4      4.0G   154M   3.8G     4%    /var
swap                   2.3G    32K   2.3G     1%    /tmp
swap                   2.3G    24K   2.3G     1%    /var/run
/dev/dsk/c1t0d0s3       19G   2.8G    17G    15%    /opt
/dev/md/dsk/d100        35G    35G   120M    99%    /data

$ umount /data

After the file system was unmounted, I ran the metaclear utility to remove the metadevice configuration from the metadevice state database:

$ metaclear d100
d100: Concat/Stripe is cleared

Now that the metadevice was removed, I needed to add it back with the desired layout. It is EXTREMELY important to place the device(s) back in the right order, and to ensure that the new layout doesn’t corrupt the data on the device(s) that contain the file system (i.e., don’t create a RAID5 metadevice with the existing devices, since that will wipe your data when the RAID5 metadevice is initialized). In my case, I wanted to concatenate another hardware RAID protected LUN to the metadevice d100. This was accomplished by running metainit with “numstripes” equal to 2 to indicate a two-stripe concatenation, and “width” equal to 1 to indicate that each stripe should have one member:

$ metainit d100 2 1 c1t1d0s0 1 c1t2d0s0
d100: Concat/Stripe is setup
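
For reference, metainit’s argument order can be confusing: the first number is the stripe count, and each stripe’s width comes right before its member list. A couple of hypothetical invocations (the device names are made up, and the commands are echoed rather than run) illustrate this:

```shell
# Hypothetical metainit layouts, echoed as a dry run so the argument
# order is visible (nothing here touches SVM).

# Two-stripe concatenation, one member per stripe (the layout above):
echo metainit d100 2 1 c1t1d0s0 1 c1t2d0s0

# A single stripe of two members with a 32k interlace (-i):
echo metainit d101 1 2 c1t1d0s0 c1t2d0s0 -i 32k
```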

Once the new metadevice was created, I ran the mount utility to remount the /data file system, and then executed growfs to expand the file system:

$ mount /dev/md/dsk/d100 /data

$ growfs -M /data /dev/md/rdsk/d100

Warning: 2778 sector(s) in last cylinder unallocated
/dev/md/rdsk/d100:      150721830 sectors in 24532 cylinders of 48 tracks, 128 sectors
        73594.6MB in 1534 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
..............................
super-block backups for last 10 cylinder groups at:
 149821984, 149920416, 150018848, 150117280, 150215712, 150314144, 150412576,
 150511008, 150609440, 150707872

After the growfs operation completed, I had some breathing room on the /data file system:

$ df -h

Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c1t0d0s0      7.9G   2.1G   5.7G    27%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   2.3G   600K   2.3G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
/usr/lib/libc/libc_hwcap1.so.1
                       7.9G   2.1G   5.7G    27%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
/dev/dsk/c1t0d0s4      4.0G   154M   3.8G     4%    /var
swap                   2.3G    32K   2.3G     1%    /tmp
swap                   2.3G    24K   2.3G     1%    /var/run
/dev/dsk/c1t0d0s3       19G   2.8G    17G    15%    /opt
/dev/md/dsk/d100        71G    36G    35G    49%    /data

The fact that you have to unmount the file system to grow a metadevice is somewhat frustrating, since every other LVM package I have used allows volumes and file systems to be expanded on the fly (it’s a good thing ZFS is shipping with Solaris). As with all data migrations, you should test storage expansion operations prior to performing them on production systems.
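
For convenience, the offline procedure above can be collected into a short script. This is only a sketch: it hard-codes the device names from my example, performs no sanity checks, and defaults to a dry run, so leave the echo in place until you have verified the metainit arguments against your saved metastat output.

```shell
#!/bin/sh
# Dry-run sketch of the offline SVM expansion walked through above.
# DRYRUN=echo prints each command instead of executing it; clear it
# (DRYRUN="") on a Solaris host only after checking the layout.
DRYRUN=echo

MD=d100      # metadevice to expand (from the example above)
MNT=/data    # file system that lives on it

$DRYRUN umount $MNT
$DRYRUN metaclear $MD
# Re-create as a two-stripe concat; device order MUST match the
# original layout or the file system will be corrupted.
$DRYRUN metainit $MD 2 1 c1t1d0s0 1 c1t2d0s0
$DRYRUN mount /dev/md/dsk/$MD $MNT
$DRYRUN growfs -M $MNT /dev/md/rdsk/$MD
```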

8 thoughts on “Expanding Solaris metadevices”

  1. Solaris Volume Manager (and Solstice DiskSuite before it) does permit a metadevice to be expanded while the filesystem is mounted. I’ve been doing it for years with no problems. Here is the first line from the growfs(1M) man page:

    “growfs non-destructively expands a mounted or unmounted UNIX file system (UFS) to the size of the file system’s slice(s).”

    Here is an example:

    (I’m expanding the d100 metadevice which is a tiny 100-mb volume.)

    # df -h
    Filesystem             size   used  avail capacity  Mounted on
    /dev/dsk/c0d0s0        5.8G   2.4G   3.3G    43%    /
    /devices                 0K     0K     0K     0%    /devices
    ctfs                     0K     0K     0K     0%    /system/contract
    proc                     0K     0K     0K     0%    /proc
    mnttab                   0K     0K     0K     0%    /etc/mnttab
    swap                   1.2G   624K   1.2G     1%    /etc/svc/volatile
    objfs                    0K     0K     0K     0%    /system/object
    /usr/lib/libc/libc_hwcap1.so.1
                           5.8G   2.4G   3.3G    43%    /lib/libc.so.1
    fd                       0K     0K     0K     0%    /dev/fd
    /dev/dsk/c0d0s3        961M   272M   632M    31%    /var
    swap                   1.2G     0K   1.2G     0%    /tmp
    swap                   1.2G    20K   1.2G     1%    /var/run
    /dev/md/dsk/d100        94M    90M     0K   100%    /data

    # metastat d100
    d100: Concat/Stripe
        Size: 205632 blocks (100 MB)
        Stripe 0:
            Device     Start Block  Dbase   Reloc
            c1d1s0            0     No      Yes

    # metattach d100 c1d1s1
    d100: component is attached

    # growfs -M /data /dev/md/rdsk/d100
    /dev/md/rdsk/d100:      416304 sectors in 413 cylinders of 16 tracks, 63 sectors
            203.3MB in 26 cyl groups (16 c/g, 7.88MB/g, 3776 i/g)
    super-block backups (for fsck -F ufs -o b=#) at:
     32, 16224, 32416, 48608, 64800, 80992, 97184, 113376, 129568, 145760,
     258080, 274272, 290464, 306656, 322848, 339040, 355232, 371424, 387616,
     403808,

    # df -h
    Filesystem             size   used  avail capacity  Mounted on
    /dev/dsk/c0d0s0        5.8G   2.4G   3.3G    43%    /
    /devices                 0K     0K     0K     0%    /devices
    ctfs                     0K     0K     0K     0%    /system/contract
    proc                     0K     0K     0K     0%    /proc
    mnttab                   0K     0K     0K     0%    /etc/mnttab
    swap                   1.2G   624K   1.2G     1%    /etc/svc/volatile
    objfs                    0K     0K     0K     0%    /system/object
    /usr/lib/libc/libc_hwcap1.so.1
                           5.8G   2.4G   3.3G    43%    /lib/libc.so.1
    fd                       0K     0K     0K     0%    /dev/fd
    /dev/dsk/c0d0s3        961M   272M   632M    31%    /var
    swap                   1.2G     0K   1.2G     0%    /tmp
    swap                   1.2G    20K   1.2G     1%    /var/run
    /dev/md/dsk/d100       191M    90M    92M    50%    /data

    # metastat d100
    d100: Concat/Stripe
        Size: 416304 blocks (203 MB)
        Stripe 0:
            Device     Start Block  Dbase   Reloc
            c1d1s0            0     No      Yes
        Stripe 1:
            Device     Start Block  Dbase   Reloc
            c1d1s1            0     No      Yes

    Certainly, unmounting a filesystem before expanding it is safer. However, on a busy system, it’s frequently not convenient to unmount.

    Matt

  2. Would these steps apply in the case where I wanted to expand my / (as shown below)? From what I understand, a metadevice is of the format dX, where X is a number (e.g., d100). Note: I am a novice at Solaris and at volume management. I really want to increase the size of / by a few more GB, but I am not sure how to do it. I tried the command growfs -M / /dev/rdsk/c0t0d0s0, but that didn’t do anything.

    $ df -h
    Filesystem             size   used  avail capacity  Mounted on
    /dev/dsk/c0t0d0s0       11G   9.3G   1.2G    89%    /
    /devices                 0K     0K     0K     0%    /devices
    ctfs                     0K     0K     0K     0%    /system/contract
    proc                     0K     0K     0K     0%    /proc
    mnttab                   0K     0K     0K     0%    /etc/mnttab
    swap                   1.2G   1.1M   1.2G     1%    /etc/svc/volatile
    objfs                    0K     0K     0K     0%    /system/object
    fd                       0K     0K     0K     0%    /dev/fd
    /dev/dsk/c0t0d0s5      5.8G   4.0G   1.7G    70%    /var
    swap                   1.2G    25M   1.2G     3%    /tmp
    swap                   1.2G    56K   1.2G     1%    /var/run
    /dev/dsk/c0t0d0s7       49G    10G    38G    21%    /space

  3. The 1st poster is correct: SVM does allow growing file systems on the fly; metattach(1M) was the “magical” command here.

    To the 2nd poster: it is not possible to add stripes to a / metadevice, because there is no SVM metadevice reader for such a layout in the OBP or GRUB; you may only attach another whole submirror, not additional stripes.
    (The same applies to ZFS and RAIDZ: it is not possible to have a RAIDZ / dataset without a reader capable of loading the RAMDisk from such a pool.)

  4. Hi, how can I find the maximum size a metadevice/volume can hold? Does the interlace size (“-i”) influence the size? Thanks.

  5. No, Mr. Matt Cheek’s post also works; I tested it. He did mention that it is safer to unmount the file system first. In the worst case, such as when the / or /var file system fills up and cannot be unmounted, we can follow his approach, since there is no other option.

  6. Hi all, in the first example he added a new LUN to the metadevice, but how about expanding an existing LUN?
