Adding a disk to a ZFS pool

I needed to expand a ZFS pool from a single disk to a pair of disks today. To expand my pool named “striped,” I ran zpool with the “add” subcommand, passing it the name of the pool and the device to add:

$ zpool add striped c1d1

Once the disk was added to the pool, it was immediately available for use:

$ zpool status -v

  pool: striped
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        striped     ONLINE       0     0     0
          c1d0      ONLINE       0     0     0
          c1d1      ONLINE       0     0     0

errors: No known data errors

I used to think Veritas had the easiest method to expand file systems, but I don’t think that is the case anymore. Now if we can just get Sun to allow us to remove devices from a pool, and to expand the number of columns in a RAIDZ or RAIDZ2 vdev!
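That RAIDZ limitation is easy to see firsthand. Sketching with hypothetical pool and device names (the exact error text varies by release), trying to widen an existing RAIDZ vdev by a single disk is refused:

```shell
# "tank" is an existing 3-disk RAIDZ pool; adding one bare disk is rejected
$ zpool add tank c1d3
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk

# The supported way to grow it is to stripe on a whole second RAIDZ vdev
$ zpool add tank raidz c1d3 c1d4 c1d5
```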

12 Comments

JoeV  on December 27th, 2006

How about resizing (growing) a ZFS filesystem?
I know that
in Veritas FS we can use fsadm, and
in UFS we can use growfs.

matty  on December 27th, 2006

When you run the zpool utility to grow a pool, the file systems in that pool can immediately take advantage of the new space. No additional commands are required to grow a file system (though if you use quotas or reservations, you will need to increase those); it’s all done with one simple “zpool add” command! Viva la pooled storage!
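As a sketch of what that looks like in practice (the device name and sizes below are made up for illustration), the new space shows up in zfs list with no growfs-style step in between:

```shell
# One-disk pool with a filesystem already in it
$ zfs list striped
NAME      USED  AVAIL  REFER  MOUNTPOINT
striped    98K  66.9G    18K  /striped

# Add a second disk to the pool
$ zpool add striped c1d1

# Every filesystem in the pool sees the extra space immediately
$ zfs list striped
NAME      USED  AVAIL  REFER  MOUNTPOINT
striped    98K   134G    18K  /striped
```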

argos  on March 6th, 2007

How about growing a pool on the same physical device? I have an ST6140. I made a 100GB volume and put all of that space into a zpool. Now I have grown the volume to 150GB on the ST6140. zpool didn’t recognize the additional 50GB, but format does. Is the only way to grow the pool to make an additional volume on the ST6140 and add it with the zpool add command, or is there some way to grow the zpool dynamically on the same device?
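For what it’s worth, ZFS releases newer than this comment did grow the ability to pick up in-place LUN growth. On a system with those features, something along these lines works (pool and device names are illustrative):

```shell
# Have the pool expand automatically whenever an underlying LUN grows
$ zpool set autoexpand=on striped

# Or expand a single device by hand after resizing the LUN on the array
$ zpool online -e striped c1d0
```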

MSMII  on June 29th, 2007

Thanks for that helpful little blip of information. This didn’t appear until about the third page of a Google search for adding a disk to ZFS (I was beginning to think it would suck to do / not work).

So in theory I can start off with 4 750 GB drives and add as many as I feel like. I’m going to buy 2 of those 5-drive enclosures on Newegg with the 120mm fan in the back. Should be pretty sweet. So Nexenta + ZFS + uber drives = relatively safe Samba file server for my anime.

Thank you, internet guy. (If you have any tips or tricks for performance, please e-mail me.)

Scott  on December 12th, 2007

Hi Matty,

If you’re looking for an easy way to grow veritas file systems, you may want to check out http://www.symantec.com/sfsimpleadmin and http://eval.symantec.com/mktginfo/downloads/VRTSsfop.tar.gz

This lets you put a LUN in to a dg, and when you do so it automatically applies vxvm labels, etc., adds to a volume, adds to a file system that is dynamically striped across all volumes in the dg.

Cheers,
Scott

ubersol  on August 4th, 2008

Hi,
I was wondering how we can actually remove a disk that was just added with the zpool add command?
I recently added a disk to my storage pool like this:
# zpool add -f rpool c2t0d0
Now I don’t need this space anymore and I want to remove the disk from the pool. But
# zpool remove rpool c2t0d0
returns:
cannot remove c2t0d0: only inactive hot spares or cache devices can be removed
Any ideas?
Thanks

Plattz  on August 11th, 2008

You can’t. ZFS isn’t that advanced yet.

You can only remove a disk from a mirror (RAID1), for example. And it’s not as advanced as VxVM, where you can move plexes around and remove disks.

Even adding disks is relatively limited. You can only grow by whole vdevs. I’m sure you know you can’t create a 4-disk RAIDZ1 and then add a fifth disk to it; you need to add 4 more disks and create a second RAIDZ1 striped with the first.

Don’t get me wrong, I love my ZFS file server, but Sun still has a bit of work to do to kill Veritas.
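For readers landing here much later: top-level device removal did eventually arrive, in OpenZFS 0.8 and newer. On such a system a plain striped vdev can be evacuated and removed, though RAIDZ vdevs still cannot be shrunk. Roughly (pool and device names are illustrative):

```shell
# OpenZFS 0.8+ only: copy c2t0d0's data to the remaining vdevs and drop it
$ zpool remove tank c2t0d0

# The evacuation runs in the background; watch it finish
$ zpool status tank
```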

Subbu  on December 12th, 2009

Hi,

You can use the “zpool destroy” command to remove the single disk which you added when creating the “striped” pool

saxykat  on August 6th, 2010

Subbu’s comment isn’t very clear on using zpool destroy, so I would be careful with that.

The only way I have been able to get my disk back is very convoluted, and you need to have a lot of disks available to begin with:

I created a secondary pool and copied the data from one pool to the other (using any method you choose; remember to create the zfs filesystems if required),

destroyed the first pool,
# zpool destroy pool1

renamed the second pool to the first pool’s name,
# zpool export pool2
# zpool import pool2 pool1

and then reclaimed my disks from the first (destroyed) pool.

Like I said, very convoluted, but it worked.
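saxykat’s procedure, sketched end to end with hypothetical pool and disk names (any copy method works; zfs send/receive is shown here):

```shell
# 1. Build a scratch pool on spare disks and copy everything over
$ zpool create pool2 c3t0d0 c3t1d0
$ zfs snapshot -r pool1@migrate
$ zfs send -R pool1@migrate | zfs receive -Fd pool2

# 2. Destroy the original pool, which frees its disks
$ zpool destroy pool1

# 3. "Rename" the scratch pool by exporting and re-importing it
$ zpool export pool2
$ zpool import pool2 pool1
```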

saxykat  on August 6th, 2010

Oh, and PS: the other thing you CAN do is replace one disk with another. However, the disk you are using to replace the original disk MUST be equal or greater in size, or it won’t let you. (This is annoying to me too.)

# zpool replace
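Spelled out with the pool from the original post and a hypothetical spare disk, the replace looks like this:

```shell
# Swap c1d1 for c2d1 (which must be the same size or larger);
# ZFS resilvers the data onto the new disk automatically
$ zpool replace striped c1d1 c2d1

# Watch the resilver progress
$ zpool status striped
```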

Ben Jackson  on June 29th, 2011

Anyone have any thoughts on growing a ZFS root pool? From what I have read, a ZFS root pool can only be configured with either one disk or a number of mirrors. Does this mean that once you have determined the size of your root pool, it can’t be changed further down the track? Is it not possible to concat or stripe additional disks onto a root pool when, for example, /var runs out of space? Am I getting the wrong end of the stick here?

Thanks

Ben

Ben Jackson  on June 29th, 2011

I found that the only way to do this is to mirror the root pool to a larger disk, then install the boot blocks on that disk (in case of failure) and activate it. In case anyone is interested, the steps are here:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

See the section:

9.5.2 Replacing/Relabeling the Root Pool Disk
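That section boils down to something like the following sketch (disk names are hypothetical, and the boot-block command differs between x86 and SPARC):

```shell
# Attach the larger disk as a mirror of the existing root pool disk
$ zpool attach rpool c1t0d0s0 c1t1d0s0

# After the resilver completes, install boot blocks on the new disk
# x86:
$ installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
# SPARC:
$ installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0

# Detach the old, smaller disk; the pool can then grow to the new disk's size
$ zpool detach rpool c1t0d0s0
```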
