Installing ZFS on a CentOS 6 Linux server

As most of my long-term readers know, I am a huge Solaris fan. How can you not love an operating system that comes with ZFS, DTrace, Zones, FMA and network virtualization, amongst other things? I use Linux during my day job, and I’ve been hoping for quite some time that Oracle would port one or more of these technologies to Linux. Well, the first salvo has been fired, though it wasn’t from Oracle. It comes by way of the ZFS on Linux project, which is an in-kernel implementation of ZFS (this project is separate from the FUSE-based ZFS port).

I had some free time this weekend to play around with ZFS on Linux, and my initial impressions are quite positive. The Linux port is based on the latest version of ZFS from OpenSolaris (pool version 28), so things like snapshots, deduplication, improved performance and ZFS send and recv are available out of the box. There are a few missing pieces, but from what I can tell from the documentation there is plenty more coming.

The ZFS file system for Linux comes as source code, which you build into loadable kernel modules (this is how the project gets around the CDDL/GPL license incompatibility). The implementation also contains the userland utilities (zfs, zpool, etc.) most Solaris admins are used to, and they act just like their Solaris counterparts! Nice!

My testing occurred on a CentOS 6 machine, specifically 6.2:

$ cat /etc/redhat-release
CentOS release 6.2 (Final)

The build process is quite easy. Prior to compiling the source code you will need to install a few dependencies:

$ yum install kernel-devel zlib-devel libuuid-devel libblkid-devel libselinux-devel parted lsscsi
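
Depending on how minimal your base install is, you may also need the basic build toolchain; the configure and make rpm steps below assume gcc, make and rpmbuild are already present:

$ yum install gcc make rpm-build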

Once these are installed you can retrieve and build the spl and zfs packages:

$ wget http://github.com/downloads/zfsonlinux/spl/spl-0.6.0-rc6.tar.gz

$ tar xfvz spl-0.6.0-rc6.tar.gz && cd spl*6

$ ./configure && make rpm

$ rpm -Uvh *.x86_64.rpm

Preparing...                ########################################### [100%]
   1:spl-modules-devel      ########################################### [ 33%]
   2:spl-modules            ########################################### [ 67%]
   3:spl                    ########################################### [100%]

$ wget http://github.com/downloads/zfsonlinux/zfs/zfs-0.6.0-rc6.tar.gz

$ tar xfvz zfs-0.6.0-rc6.tar.gz && cd zfs*6

$ ./configure && make rpm

$ rpm -Uvh *.x86_64.rpm

Preparing...                ########################################### [100%]
   1:zfs-test               ########################################### [ 17%]
   2:zfs-modules-devel      ########################################### [ 33%]
   3:zfs-modules            ########################################### [ 50%]
   4:zfs-dracut             ########################################### [ 67%]
   5:zfs-devel              ########################################### [ 83%]
   6:zfs                    ########################################### [100%]

If everything went as planned you now have the ZFS kernel modules and userland utilities installed! To begin using ZFS you will first need to load the kernel modules with modprobe:

$ modprobe zfs

To verify the module loaded you can tail /var/log/messages:
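
$ tail /var/log/messages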

Feb 12 17:54:27 centos6 kernel: SPL: Loaded module v0.6.0, using hostid 0x00000000
Feb 12 17:54:27 centos6 kernel: zunicode: module license 'CDDL' taints kernel.
Feb 12 17:54:27 centos6 kernel: Disabling lock debugging due to kernel taint
Feb 12 17:54:27 centos6 kernel: ZFS: Loaded module v0.6.0, ZFS pool version 28, ZFS filesystem version 5

And run lsmod to verify they are there:

$ lsmod | grep -i zfs

zfs                  1038053  0 
zcommon                42478  1 zfs
znvpair                47487  2 zfs,zcommon
zavl                    6925  1 zfs
zunicode              323120  1 zfs
spl                   210887  5 zfs,zcommon,znvpair,zavl,zunicode
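
If you aren’t sure which block devices are free to hand to ZFS, lsscsi (one of the dependencies installed earlier) gives a quick listing of the disks the kernel sees:

$ lsscsi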

To create our first pool we can use the zpool utility’s create option:

$ zpool create mysqlpool mirror sdb sdc

The example above created a mirrored pool out of the sdb and sdc block devices. We can see this layout in the output of `zpool status`:

$ zpool status -v

  pool: mysqlpool
 state: ONLINE
 scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	mysqlpool   ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sdb     ONLINE       0     0     0
	    sdc     ONLINE       0     0     0

errors: No known data errors
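
By default the pool’s root file system is created and mounted at /mysqlpool, which you can confirm with zfs list or df:

$ zfs list mysqlpool

$ df -h /mysqlpool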

Awesome! Since we are at pool version 28, let’s enable compression and deduplication and disable atime updates:

$ zfs set compression=on mysqlpool

$ zfs set dedup=on mysqlpool

$ zfs set atime=off mysqlpool
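
To double check that the properties took effect, you can read them back with zfs get:

$ zfs get compression,dedup,atime mysqlpool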

For a somewhat real world test, I stopped one of my MySQL slaves, mounted the pool on /var/lib/mysql, synchronized the previous data over to the ZFS file system and then started MySQL. No errors to report, and MySQL is working just fine.
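
That cutover boiled down to something like the following; the mysqld init script name and the staging path are assumptions that will vary with your setup:

$ service mysqld stop

$ mv /var/lib/mysql /var/lib/mysql.orig

$ zfs set mountpoint=/var/lib/mysql mysqlpool

$ rsync -a /var/lib/mysql.orig/ /var/lib/mysql/

$ service mysqld start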

Next up, I trashed one side of the mirror and verified that ZFS could detect and repair the damage:

$ dd if=/dev/zero of=/dev/sdb

$ zpool scrub mysqlpool

I let this run for a few minutes then ran `zpool status` to verify the scrub fixed everything:

$ zpool status -v

  pool: mysqlpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
	attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
	using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scan: scrub repaired 966K in 0h0m with 0 errors on Sun Feb 12 18:54:51 2012
config:

	NAME        STATE     READ WRITE CKSUM
	mysqlpool   ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sdb     ONLINE       0     0   175
	    sdc     ONLINE       0     0     0
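
The checksum errors recorded against sdb stick around in the counters after the repair; once you are satisfied the device is healthy you can reset them as the status output suggests:

$ zpool clear mysqlpool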

I beat on the pool pretty good and didn’t encounter any hangs or kernel oopses. The file system port is still in its infancy, so I won’t be trusting it with production data quite yet. Hopefully it will mature in the coming months, and if we’re lucky maybe one of the major distributions will begin including it! That would be killer!!

7 Comments

lasercat  on February 13th, 2012

I wonder if we’ll have to rebuild this kernel module every time there is an update to the kernel? I guess some people don’t switch their kernels that often but I find myself getting a new one down the pipe every week or two. (Running Arch Linux)

locust  on February 14th, 2012

Very nice, is it possible to boot from a ZFS root?

matty  on February 14th, 2012

lasercat — for now you will have to rebuild the modules for each kernel. Adding DKMS support may be possible, though I don’t know what it would take to implement this. That would be a fun project though!

matty  on February 14th, 2012

locust — booting from ZFS appears to be possible, but you need to patch grub. Here’s what the ZFS on Linux FAQ says about this:

http://zfsonlinux.org/faq.html#CanIBootFromZFS

“Q: Can I boot from ZFS?

A: Yes, but booting from ZFS is currently not recommended without a patched version of grub. However, this does not mean you can’t use ZFS as your root file system. You can configure your system with a small ext4 /boot file system and refer to the following documentation for setting up a ZFS root file system with dracut.”

drsgrid  on February 21st, 2012

Does anyone know of a way to install/use “napp-it” on CentOS 6, so that it works with the above method to have native ZFS support on Linux? I’m currently using OpenIndiana but have been running into some issues with the complex permission scheme we need on our Linux network.

Don Wilson  on June 19th, 2012

Thank you very much!

onetruth  on July 15th, 2012

Thanks. I’ve had this saved to come back to later and give it a try. I actually work with ZFS daily, so I wanted to see if I could get this working on my Linux home machine (Fedora 16). Initially rc6 wouldn’t compile correctly, but using rc9 (the latest) and your instructions it worked flawlessly. After getting this working I wondered exactly what locust was wondering. I’ll give that a try next.
