Adding a disk to a ZFS pool

I needed to expand a ZFS pool from a single disk to a pair of disks today. To expand my pool named “striped,” I ran zpool with the “add” option, the pool name to add the disk to, and the device to add to the pool:

$ zpool add striped c1d1

Once the disk was added to the pool, it was immediately available for use:

$ zpool status -v

  pool: striped
 state: ONLINE
 scrub: none requested

        NAME        STATE     READ WRITE CKSUM
        striped     ONLINE       0     0     0
          c1d0      ONLINE       0     0     0
          c1d1      ONLINE       0     0     0

errors: No known data errors

I used to think Veritas had the easiest method to expand file systems, but I don’t think that is the case anymore. Now if we can just get Sun to allow us to remove devices from a pool, and to expand the number of columns in a RAIDZ or RAIDZ2 vdev!
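One caveat worth spelling out: “zpool add” grows the pool by striping a new top-level vdev into it, so the two-disk pool above has no redundancy. If the goal is redundancy rather than capacity, “zpool attach” turns an existing single-disk vdev into a mirror instead (shown here with the same pool and device names as above):

$ zpool attach striped c1d0 c1d1

After the resilver completes, c1d0 and c1d1 form a mirror rather than a stripe.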

14 thoughts on “Adding a disk to a ZFS pool”

  1. When you run the zpool utility to grow a pool, the file systems that exist in that pool are able to immediately take advantage of the new space. No additional commands are required to grow a file system (if you have quota or reservations, you will need to increase these), it’s all done with one simple “zpool add” command! Viva la pooled storage!
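    For instance, the before-and-after can be seen with “zfs list” (pool name taken from the post; zpool typically requires root privileges, hence the # prompt):

    # zfs list striped
    # zpool add striped c1d1
    # zfs list striped

    The AVAIL column for every filesystem in the pool is larger after the add, with no growfs-style step in between.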

  2. How about growing a pool on the same physical device? I have an ST6140. I made a 100GB volume and put all of that space into a zpool. Now I have grown the volume to 150GB on the ST6140. zpool didn’t recognize the additional 50GB, but format does. Is the only way to grow the pool to make an additional volume on the ST6140 and add it with the zpool add command, or is there some way to grow the zpool dynamically on the same device?
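    For what it’s worth, later ZFS releases added support for exactly this case: the pool-level autoexpand property, and “zpool online -e” to expand a device in place after the underlying LUN has been grown (pool and device names here are hypothetical):

    # zpool set autoexpand=on mypool
    # zpool online -e mypool c2t0d0

    On the ZFS versions current when this post was written, creating an additional volume and using “zpool add” was indeed the only option.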

  3. Thanks for that helpful little blip of information. This didn’t appear until about the third page of a Google search for adding a disk to ZFS (I was beginning to think it would suck to do/not work).

    So in theory I can start off with 4 750 GB drives and add as many as I feel like. (I’m going to buy 2 of those 5-drive enclosures on Newegg with the 120mm fan in the back… should be pretty sweet… so Nexenta + ZFS + uber drives = relatively safe Samba file server for my anime…)

    Thank you, internet guy (if you have any tips or tricks for performance, please e-mail me).

  4. Hi,
    I was wondering how can we actually remove the disk that is just added with zpool add command?
    I recently added a disk to my storage pool like this:
    # zpool add -f rpool c2t0d0
    Now, I don’t need this space anymore and I want to remove the disk from the storage space. But,
    # zpool remove rpool c2t0d0
    cannot remove c2t0d0: only inactive hot spares or cache devices can be removed
    Any ideas?

  5. You can’t. ZFS isn’t that advanced yet.

    You can only remove a disk from a mirror (RAID1), for example. And it’s not as advanced as VxVM, where you can move plexes around and remove disks.

    Even adding disks is relatively limited: you can only grow by whole vdevs. I’m sure you know you can’t create a 4-disk RAIDZ1 and then add a fifth disk to it. You need to add 4 more disks and create a second RAIDZ1 striped with the first.

    Don’t get me wrong, I love my ZFS file server, but Sun still has a bit of work to do to kill Veritas.
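    For readers on newer systems: OpenZFS later gained limited top-level vdev removal, where “zpool remove” evacuates the data from a single-disk or mirror vdev onto the remaining vdevs before detaching it (RAIDZ vdevs still cannot be removed this way). A sketch, using the device name from the post above:

    # zpool remove striped c1d1

    The removal runs in the background and can be monitored with “zpool status”.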

  6. Hi,

    You can use the “zpool destroy” command to remove the single disk which you added when creating the “striped” pool.

  7. Subbu’s comment isn’t very clear on using zpool destroy, so I would be careful with that — zpool destroy destroys the entire pool and everything in it.

    The only way I have been able to get my disks back is very convoluted, and you need to have a lot of spare disks available to begin with:

    I created a secondary pool and copied the data from one pool to the other using any method you choose (remember to create the ZFS filesystems first if required), then destroyed the first pool:
    # zpool destroy pool1

    renamed the second pool to the first pool’s name:
    # zpool export pool2
    # zpool import pool2 pool1

    and then reclaimed my disks from the first (destroyed) pool.

    Like I said, very convoluted, but it worked.
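    A less error-prone way to do the copy step described above is zfs send/receive, which preserves the filesystem hierarchy, snapshots, and properties (snapshot and pool names here are hypothetical):

    # zfs snapshot -r pool1@migrate
    # zfs send -R pool1@migrate | zfs receive -d pool2

    The -R flag on send generates a replication stream covering all descendant filesystems and their properties, and -d on receive recreates them under the new pool.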

  8. Oh, and PS, the other thing you CAN do is replace one disk with another. However, the disk you are using to replace the original disk MUST be equal or greater in size, or it won’t let you. (This is annoying to me too.)

    # zpool replace
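    Spelled out, the replace operation looks like this (pool and device names hypothetical):

    # zpool replace mypool c1d1 c2d1
    # zpool status mypool

    ZFS resilvers the data onto the new disk and detaches the old one automatically when the resilver completes; zpool status shows the progress.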

  9. Anyone have any thoughts on growing a ZFS root pool? From what I have read, a ZFS root pool can only be configured with either one disk or a number of mirrors. Does this mean that once you have determined the size of your root pool, it can’t be changed further down the track? Is it not possible to concat or stripe additional disks onto a root pool when, for example, /var runs out of space? Am I getting the wrong end of the stick here?
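    One commonly suggested workaround for growing a root pool is to attach a larger disk as a mirror, wait for the resilver to finish, and then detach the smaller disk (device names hypothetical; on Solaris the boot blocks must also be installed on the new disk with installboot or installgrub before detaching):

    # zpool attach rpool c0t0d0s0 c0t1d0s0
    # zpool detach rpool c0t0d0s0

    The pool can then grow to the new disk’s size — on older releases after an export/import, or automatically with the autoexpand property on newer ones.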


