Expanding storage the ZFS way

I had a mirrored ZFS pool fill up on me this week, which required me to add additional storage to ensure that my application kept functioning correctly. Since expanding storage is a trivial process with ZFS, I decided to increase the available pool storage by replacing the 36GB disks in the pool with 72GB disks. Here is the original configuration:

$ df -h netbackup

Filesystem             size   used  avail capacity  Mounted on
netbackup               33G    32G    1G     96%    /opt/openv

$ zpool status -v netbackup

  pool: netbackup
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        netbackup   ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors

To expand the available storage, I replaced the disk c1t2d0 with a 72GB disk, and then used the zpool “replace” option to replace the old disk with the new one:

$ zpool replace netbackup c1t2d0

Once the pool finished resilvering (you can run `zpool status -v` to monitor the progress), I replaced the disk c1t3d0 with a 72GB disk, and used the zpool "replace" option to replace the old disk with the new one:

$ zpool replace netbackup c1t3d0

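While a replace is in flight, `zpool status -v netbackup` reports the resilver progress. The output looks something like the following; the percentage and time remaining shown here are illustrative, and the exact wording varies by ZFS release (config section omitted for brevity):

$ zpool status -v netbackup

  pool: netbackup
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 42.17% done, 0h8m to go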
Once the pool finished resilvering, I had an extra 36GB of disk space available:

$ df -h netbackup

Filesystem             size   used  avail capacity  Mounted on
netbackup               67G    32G    35G    47%    /opt/openv

This is pretty powerful, and it’s nice not to have to run another utility to extend volumes and file systems once new storage is available. There is also the added benefit that ZFS resilvers at the object level, and not at the block level. Giddie up!

14 thoughts on “Expanding storage the ZFS way”

  1. Could you publish ALL the steps you took (physical steps, and commands)? I’ve been thinking about messing around with ZFS on an old Sun box with a few external SCSI drives, and I’m collecting info for small projects I’d like to do once I get Solaris 10 on it. I guess I’m also assuming you’re running Solaris 10 too?

  2. There is a shortage of information to be found on the ‘net for the scenario I’m dealing with. This post is the closest I can find.

    I’ve got a fibrechannel SAN set up with hardware RAID6, on which there is a single zpool. The SAN lets me add additional disks to increase the size of the logical drive. Once the initial (13TB) RAID gets initialized, I will be able to test what happens when I add another 1TB disk to it.

    What do I need to do in Solaris to finish the expansion, and can it be done while everything’s online? I need to make a decision about whether to buy the additional drives to fill the SAN up now, or whether I can expect to add capacity later when drives get cheaper.

    Luckily I have some time during initial setup to test this out and look for some help.
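    For the grown-LUN case, newer ZFS releases can pick up the extra space while the pool stays online; a rough sketch, assuming a ZFS version that supports `zpool online -e` (pool and device names here are placeholders):

    # zpool online -e tank c2t0d0

    After that, `zpool list` should show the larger size. On older releases that lack this support, an export and re-import of the pool may be needed before the new space appears.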

  3. Another way to accomplish the same task is to split/re-mirror/split/re-mirror (especially handy if you’re short on slots):

    # zpool detach netbackup c1t2d0
    # cfgadm -c unconfigure c1::dsk/c1t2d0

    Physically replace the disk with that of a higher capacity

    # cfgadm -c configure c1::dsk/c1t2d0
    # zpool attach netbackup c1t3d0 c1t2d0

    Wait for resilvering to complete…

    # zpool detach netbackup c1t3d0
    # cfgadm -c unconfigure c1::dsk/c1t3d0

    Physically replace the disk with that of a higher capacity

    # cfgadm -c configure c1::dsk/c1t3d0
    # zpool attach netbackup c1t2d0 c1t3d0

    (this is from the hip — feel free to clean up if needed)

  4. Can I mix local and SAN storage in one pool?

    For example, we have a zpool with a local drive and its mirror. Can we add SAN storage instead of replacing disks?


  5. I expanded the volume at the backend from 2T to 3T. The format utility already shows the 3T LUN. Do I just need to remount the volume? I didn’t try that; I started a zfs replace instead, and the resilvering process will take another 16 hours.

  6. Ran across this post because I’ll be doing the same thing shortly with my home server. Re: steve’s question, you don’t actually have to risk losing data in the event of disk failure using this procedure. If you stop using the pool first, then remove a drive, replace it, wait for resilvering, and repeat with the second drive, then you will always have two complete copies of the data. If one drive fails, you can reconstruct the mirror from the good one. If the other drive also fails, then you lose data, but that’s no different than normal operation with a mirrored pair.

  7. Hi Ryan,

    great article, thanks for it.

    I suggest making this a bit more secure by changing some of the steps:

    New Disks: disk A, disk B
    Old Disks: disk A’, disk B’

    #1 add your bigger new disk A
    #2 wait until the rebuild process for disk A is finished
    #3 remove your old disk B’ and replace it with the new big disk B
    #4 wait until the rebuild finishes -> ref. #2
    #5 remove the old disk A’

    If you go this way, you will not run into a data-loss situation if an active disk dies during the replacement and rebuild window… it’s still bad if two disks die, though. *cough*
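    The steps above can be sketched with zpool attach/detach; the new device names c1t4d0 and c1t5d0 are hypothetical, and this assumes free slots for the extra disks:

    # zpool attach netbackup c1t2d0 c1t4d0

    Wait for resilvering onto new disk A to complete…

    # zpool detach netbackup c1t3d0
    # zpool attach netbackup c1t4d0 c1t5d0

    Wait for resilvering onto new disk B to complete…

    # zpool detach netbackup c1t2d0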



  8. I found out that in Ubuntu 11.04 you need to set the autoexpand property BEFORE you start replacing disks, otherwise it won’t work. You don’t even need an import/export.
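    In other words, something like this before the first replace (the pool name is a placeholder, and the property exists only on ZFS versions that support autoexpand):

    # zpool set autoexpand=on tank
    # zpool get autoexpand tank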

  9. What OS are you using, is it Linux or Solaris?
    On my Ubuntu system, replace doesn’t work in either a mirror or a raidz.
