A few months back I started looking into NAS solutions that would be ideal for home use. I jotted down my initial research in the post building your own nas and the post making sense of the various nas solutions. My original intent was to purchase an HP microserver from Amazon and test out all of the freely available NAS solutions. Due to some time constraints I cancelled my microserver order and re-used a server I had at home. I’m still planning to order a microserver at some point, since they are killer boxes for building home NAS devices with.
I first loaded up openfiler and went to town creating devices and mapping them out to hosts on my network. If you haven’t used openfiler before, it’s a Linux-based NAS distribution that provides a graphical management interface on top of an RHEL-derived base. The fact that it runs Linux gave it a few cool points, but I was less than impressed with the graphical interface. Tasks such as creating logical volumes and shares weren’t as intuitive as I would have thought, and IMHO the GUI didn’t really provide all that much value over running the LVM utilities from the command line. It did manage all of the iSCSI, NFS and Samba work behind the scenes, which is kinda nice if you don’t want to dig into these services and see how they work.
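To put that claim in perspective, here is a rough sketch of the command line work the openfiler GUI does for you. The disk, volume group and export below are made-up examples for illustration, not anything openfiler creates by default:

# carve a logical volume out of a spare disk
$ pvcreate /dev/sdb
$ vgcreate nas_vg /dev/sdb
$ lvcreate -L 100G -n shares nas_vg

# put a file system on it and export it over NFS
$ mkfs.ext3 /dev/nas_vg/shares
$ mkdir -p /export/shares
$ mount /dev/nas_vg/shares /export/shares
$ echo "/export/shares 192.168.1.0/24(rw,sync)" >> /etc/exports
$ exportfs -ra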
Craving a bit more out of my NAS device, I wiped the server that I installed openfiler on and installed FreeNAS. For those who haven’t used FreeNAS, it is a FreeBSD-based NAS distribution that makes heavy use of ZFS. The FreeNAS installation process is about as painless as can be, and after a 10 minute install I was able to fire up the web interface and start poking around. I was impressed with the initial graphical interface, and after a few minutes of clicking around I had a working ZFS pool and a number of NFS and iSCSI targets provisioned. Everything seemed quite intuitive with the FreeNAS interface, and all of the options were in a place you would normally think to look.
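If you’re curious what the web interface is doing under the hood, the ZFS side boils down to a handful of commands. This is just a sketch with made-up pool and disk names, not the exact commands FreeNAS runs:

# create a mirrored pool and a file system inside it
$ zpool create tank mirror da1 da2
$ zfs create tank/media

# share the file system over NFS straight from the ZFS layer
$ zfs set sharenfs=on tank/media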
I’m still using FreeNAS, though it doesn’t offer all of the items I would like in a true NAS appliance. Here are the items that would make FreeNAS a slam dunk for me:
These things are relatively minor, and I suspect they will come in due time. Being a huge fan of ZFS I was stoked to see that this was part of FreeNAS. I’m curious to see how well the file system is supported now that Oracle has cut off the ZFS source code (I suspect it will thrive since the FreeBSD team has some crazy smart chaps working on it). As things stand now FreeNAS works, and it doesn’t cost you a penny to try out. I would still love to buy a Synology DiskStation to test out, but it’s kinda hard justifying one when what you have is already a working solution. It’s also nice to know that I can gain access to the FreeNAS source code anytime I want.
If anyone has read Gary Sims’ book Learning FreeNAS: Configure and manage a network attached storage solution, please let me know what you thought of it in the comments section. I’m thinking about ordering a copy for a not-so-technically-savvy friend since his YouTube FreeNAS videos were done so well.
I gave a talk tonight at the local UNIX users group on integrating Linux servers with LDAP. The slide deck is now available, and I want to thank everyone for coming out! The talk was a bunch of fun, and I appreciate all of the questions I received. Hopefully I can come back out later this year to talk about FreeIPA.
With the introduction of RHEL6 our beloved kudzu was removed from Red Hat Enterprise Linux (it’s been gone from Fedora for quite some time). If you’re not familiar with kudzu, RHEL5 and earlier releases use it to detect new hardware when a system is bootstrapped. All of the functionality that was part of kudzu is now handled by the kernel and udev, though a lot of sites will need to support RHEL 5 systems for years and years to come.
I was curious how kudzu detected new hardware, so I started reading through the kudzu man page and source code. I learned that hardware discovery and removal is actually pretty straightforward. When a box boots, kudzu will probe the hardware and compare it against the contents of /etc/sysconfig/hwconf to see if anything was added or removed. Here is the relevant blurb from kudzu(8):
“kudzu detects and configures new and/or changed hardware on a system. When started, kudzu detects the current hardware, and checks it against a database stored in /etc/sysconfig/hwconf, if one exists. It then determines if any hardware has been added or removed from the system. If so, it gives the users the opportunity to configure any added hardware, and unconfigure any removed hardware. It then updates the database in /etc/sysconfig/hwconf.”
So when you get a prompt to add or remove hardware at startup, you are seeing kudzu in action. To view the list of hardware that kudzu is able to detect, you can run the kudzu command line utility with the “-p” option:
$ kudzu -p
class: OTHER
bus: PCI
detached: 0
driver: shpchp
desc: "VMware PCI Express Root Port"
vendorId: 15ad
deviceId: 07a0
subVendorId: 0000
subDeviceId: 0000
pciType: 1
pcidom: 0
pcibus: 0
pcidev: 18
pcifn: 7
.....
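If you want to poke at this yourself, the database kudzu compares against is just a flat text file, and you can approximate the boot-time comparison by hand. The diff below is my own improvisation rather than anything the man page documents, so treat it as a rough sketch:

# the hardware database kudzu keeps between boots
$ less /etc/sysconfig/hwconf

# probe the hardware right now and compare it against the saved database
$ kudzu -p > /tmp/hw.now
$ diff /etc/sysconfig/hwconf /tmp/hw.now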
I’m not exactly sure what led me to dig into this, but it was rather fun. I’m one of those guys who enjoys going to lunch with 10 pages of C source code and a highlighter. Not sure if this makes me a geek, but it’s kinda cool learning while you consume your lunch calories. ;)
With the introduction of RHEL6 the kudzu hardware manager was removed. All of the functionality that was once a part of kudzu has been integrated into the kernel and udev, as evidenced by this e-mail correspondence with one of Red Hat’s support engineers:
“Kudzu is removed from rhel6. The kernel should be taking care of module loading from this point onwards. When it enumerates the device through its own methods or udev. It should automatically load the drivers based on device enumeration/identification based on their BUS such as PCI id’s USB ids. Other non hardware modules such as networking protocols will/should be loaded automatically on use ( unsupported/tech preview modules being the exception to this rule).”
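To see the mechanism the engineer is describing, each device the kernel enumerates exports a modalias string built from its bus IDs, and udev simply hands that string to modprobe. The PCI device path below is just an example, so adjust it for your own hardware:

# the alias string the kernel builds from the device's PCI IDs
$ cat /sys/bus/pci/devices/0000:00:10.0/modalias

# roughly what udev does when the device shows up
$ modprobe $(cat /sys/bus/pci/devices/0000:00:10.0/modalias)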
Since kudzu has been with us for quite some time I thought it only fitting that we give it a proper farewell. Now if only I had a goat to eat the leftovers. ;)
I’ve been fascinated with I/O and file system performance for years, and chose prefetch.net as my domain name after reading about pre-fetching algorithms in the book UNIX Filesystems (a great book that I need to read again). Since most applications access data that is not laid out sequentially on a hard drive platter, seek times come into play when you start to look at getting the most bang for your buck on random I/O workloads.
This past week I saw this firsthand. I was playing around with some disk drives and started to wonder how many seeks I could get per second. I was also curious what the access time of each seek was, so I started poking around for a tool that displays this information. After a quick search, I came across seeker.
When seeker is run with the path to a block device, it will initiate reads to a device and measure the number of seeks per second, the average time of each seek, and the starting and ending offsets that were used to randomly access blocks on the disk. Here is a sample run:
$ seeker /dev/sda /dev/sda1 /dev/sda2 /dev/sda3 /dev/sda4
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda [156250000 blocks, 80000000000 bytes, 74 GB, 76293 MB, 80 GiB, 80000 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 50 seeks/second, 19.881 ms random access time (128677785 < offsets < 79965549334)
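One thing worth pointing out (my own observation, not something seeker prints): with a single thread the two numbers are just reciprocals of each other, since only one request is outstanding at a time. Working the arithmetic, 1 / 0.019881 seconds comes out to roughly 50 seeks per second, which matches the report above.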
If you want to see the effects of multiple threads accessing a disk, you can pass a thread count to seeker:
$ seeker /dev/sda 32
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda [156250000 blocks, 80000000000 bytes, 74 GB, 76293 MB, 80 GiB, 80000 MiB]
[512 logical sector size, 512 physical sector size]
[32 threads]
Wait 30 seconds..............................
Results: 53 seeks/second, 18.587 ms random access time (36588683 < offsets < 79946851693)
In the example above, 32 threads issue reads to random locations on the disk, and the aggregate results of these accesses are measured and printed. Seeker is a super handy utility, and I would put it right up there with blktrace.