Blog O' Matty


Concert review: The Led Zeppelin Experience with Jason Bonham

This article was posted by Matty on 2011-11-14 20:25:00 -0400

There are some moments in music you will never forget. One that I recall vividly was the first time I heard Led Zeppelin IV with my best friend. From the moment I heard “Hey hey mama” in “Black Dog” I was blown away. These guys were hard, they had great drums, an amazing singer, a great bassist and some of the best guitar riffs I had ever heard. After hearing Zeppelin IV three or four times I ran out the following week and blew my entire paycheck on Led Zeppelin I, Led Zeppelin II, Led Zeppelin III and of course Led Zeppelin IV. That summer became the summer of Zeppelin, and I ended up destroying my Houses of the Holy cassette tape because I played it too much. That set me back an additional $15.

My fascination with Led Zeppelin landed me concert tickets to see Jimmy Page and Robert Plant in 1998, and boy was that an experience. A huge venue, two of my music idols, Robert Plant belting out all of my favorite Led Zeppelin tunes and Jimmy Page playing guitar like no one’s business. I can still remember the 20+ minute Kashmir solo they did, and how it brought chills to my body. They were true rock & roll pioneers, and I think they did as much for music as any other music legend. They also inspired me to learn how to play guitar, and for that I am crazy thankful!

Two weeks back my buddy called me up and asked if I wanted to go see Jason Bonham’s Led Zeppelin Experience. For those that don’t know, Jason Bonham is the son of the late Led Zeppelin drummer John Bonham. I can’t turn down any show that has Led Zeppelin associated with it, so of course I said yes. This turned out to be the right choice, and the show was absolutely amazing. Let me share some of the awesomeness with you.

There wasn’t an opening band, and it turns out one wasn’t needed. Jason and his fellow musicians made the evening an intimate tour through Led Zeppelin history, and along with playing a wide variety of Led Zeppelin songs he also shared various pictures and home movies that his family had made. He worked through the entire Led Zeppelin collection, playing songs from Led Zeppelin I through Led Zeppelin IV, as well as tunes from Houses of the Holy and Physical Graffiti.

I don’t think a true Zeppelin fan could ask for a better setlist. They started the night off with the “Immigrant Song,” and followed it up with timeless classics like “Rock & Roll,” “Your Time Is Gonna Come,” “Thank You,” “Moby Dick,” “Over The Hills And Far Away,” “Stairway To Heaven,” “Since I’ve Been Loving You,” “When the Levee Breaks,” “Kashmir,” “Whole Lotta Love,” and one of my personal favorites, “Dazed & Confused.” Every song was spot on, and when I closed my eyes I could almost picture myself at a Led Zeppelin concert. Jason and his bandmates were THAT GOOD, and are some of the most talented musicians I’ve ever seen live (I was in awe of their guitarist, their vocalist sounded just like Robert Plant, and Jason was AMAZING on drums).

I’ve already mentioned that the show was killer, so you know I’m going to give it a 10/10. So what were the HCMs (Holy Crap Moments) from the show? Was it the killer drum solo Jason did during “Moby Dick?” Was it the guitar heroics on “Stairway To Heaven?” Or was it the amazing keyboard and bass work during “Dazed & Confused?” I really can’t say, since everything sounded so good. You could see the artists’ love for Led Zeppelin, and it poured out into their instruments. This is one of the best shows I’ve been to in a long time, and I’d like to thank Jason and his band for such a magical evening. If you are a Led Zeppelin fan you need to catch this one. It’s the best $20 you can spend (if you don’t think so, check out some of his videos on YouTube). :)

Creating clustered file systems with glusterfs on CentOS and Fedora Linux servers

This article was posted by Matty on 2011-11-13 12:32:00 -0400

I’ve been using gluster for the past few months, and so far I am really impressed with what I’m seeing. For those that haven’t used gluster, it is an open source clustered file system that provides scalable storage on commodity hardware. As with all file systems and applications, gluster comes with its own vernacular. Here are some terms you will need to know if you are going to gluster it up:

At the simplest level, a brick contains the name of a server and a directory on that server where stuff will be stored. You combine bricks into volumes based on performance and reliability requirements, and these volumes are then shared with gluster clients through CIFS, NFS or via the glusterfs file system. This is a crazy simple overview, and you should definitely read the official documentation if you are planning to use gluster.
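
For example, using the notation that shows up throughout the rest of this post, the brick below refers to the directory /gluster/vol01 on the server fedora-cluster01 (a hypothetical host name):

fedora-cluster01:/gluster/vol01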

Getting gluster up and working on a Linux server is crazy easy. To start, you will need to install a few packages on your gluster server nodes (these will become your bricks). If you are using CentOS you will need to build the packages from source. Fedora 16 users can install the packages with yum:

$ yum install glusterfs glusterfs-fuse glusterfs-server glusterfs-vim glusterfs-devel

Once the packages are installed you can use the glusterfs utility’s “-V” option to verify everything is working:

$ /usr/sbin/glusterfs -V

glusterfs 3.2.4 built on Sep 30 2011 18:02:31
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU
General Public License.

The key thing to note is the version and the fact that the command completed without error. Next up, you will need to allocate some storage to gluster. This storage will be the end destination for the data your clients write, so you should give some thought to how you are going to organize it. Do you want to use cheap storage and let gluster replicate the data for you? Do you want to use RAID protected storage and gluster striping? Do you want to combine these two for the best performance and availability you can get? That is a decision only you can make.

For my needs, I added a dedicated disk (/dev/sdb) to each of my gluster nodes. I then created a logical volume and EXT4 file system on that device, and mounted it up to /gluster/vol01:

$ pvcreate /dev/sdb

$ vgcreate GlusterVG /dev/sdb

$ lvcreate -l +100%FREE GlusterVG -n glustervol01

$ mkfs.ext4 /dev/mapper/GlusterVG-glustervol01

$ mount /dev/mapper/GlusterVG-glustervol01 /gluster/vol01
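
If you want the brick file system mounted automatically at boot, you can create the mount point and add an /etc/fstab entry for it. A minimal sketch, assuming the device and mount point used above:

$ mkdir -p /gluster/vol01

$ echo "/dev/mapper/GlusterVG-glustervol01 /gluster/vol01 ext4 defaults 0 2" >> /etc/fstab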

Once you have storage you will need to create a gluster server volume definition file. This file defines the translators you want to use, as well as the location of the storage gluster will use. Here is the volume definition file I created on each of my server nodes:

$ cat /etc/glusterfs/glusterfsd.vol

volume posix
  type storage/posix
  option directory /gluster/vol01
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow 192.168.1.*
  subvolumes brick
end-volume

The configuration file above defines four translators. First we have the server translator, which handles client communications. It is linked to the io-threads translator, which creates a pool of threads (eight in this case) to service operations; that is linked to the locks translator, which handles locking; and that in turn is linked to the posix translator, which writes the actual data to a backing store (/gluster/vol01 in this case). The translators work as a pipeline, so you can add translators to the flow to gain additional functionality. Kinda nifty!
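
If you wanted to extend the pipeline, here is a sketch of how you might slot the write-behind performance translator in front of io-threads (this is an illustration, not something I have benchmarked; note that the server translator’s subvolumes line gets repointed at the new stanza):

volume writebehind
  type performance/write-behind
  subvolumes brick
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow 192.168.1.*
  subvolumes writebehind
end-volume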

Depending on how you plan to use gluster, you may need to add additional translators to your configuration file. We’ll keep it simple for now and use the basic configuration listed above. To start the gluster infrastructure on a CentOS or Fedora server you should chkconfig on (or systemctl enable) the glusterd and glusterfsd services so they will start at boot:

$ chkconfig glusterd on

$ chkconfig glusterfsd on

and then service start (or systemctl start) the two services:

$ service glusterd restart

Restarting glusterd (via systemctl): [ OK ]

$ service glusterfsd restart

Restarting glusterfsd (via systemctl): [ OK ]
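
Since Fedora 16 is systemd based (that is why the output above says “via systemctl”), you can also manage the services natively. A sketch, assuming the packages provide unit files for both services:

$ systemctl enable glusterd.service glusterfsd.service

$ systemctl start glusterd.service glusterfsd.service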

For each server (gluster can scale to hundreds and hundreds of nodes) that will act as a gluster cluster node you will need to perform the tasks above. I don’t have hundreds of machines to work with, just a measly three.

Starting the services will bring the gluster infrastructure up, but the nodes will have no idea what cluster they are in or which volumes they are a part of. To add nodes to a cluster of storage nodes you can log in to one of your gluster nodes and probe the other servers you want to add to the cluster. Probing is done with the gluster utility’s “peer probe” option:

$ gluster peer probe fedora-cluster02

Probing a server should merge the node into your cluster. To see which nodes are active cluster members you can use the gluster utility’s “peer status” option:

$ gluster peer status

Number of Peers: 1

Hostname: fedora-cluster02
Uuid: ca62e586-8edf-42ea-9fd1-5e11dff29db1
State: Peer in Cluster (Connected)

Sweet, in addition to the local machine we have one cluster partner for a total of two nodes. With a working cluster we can now move on to creating volumes. I mentioned above that a volume contains one or more storage bricks that are organized by reliability and availability requirements. There are several volume types available: distributed, replicated and striped, as well as combinations of these.
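
For reference, here is roughly what the create commands look like for the other basic types. These are sketches (the volume names and the vol02/vol03 brick directories are hypothetical; a volume is distributed by default when no replica or stripe count is given):

$ gluster volume create distvol01 transport tcp fedora-cluster01:/gluster/vol02 fedora-cluster02:/gluster/vol02

$ gluster volume create stripevol01 stripe 2 transport tcp fedora-cluster01:/gluster/vol03 fedora-cluster02:/gluster/vol03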

Since I only have two nodes in my cluster, I am going to create a replicated volume across two bricks:

$ gluster volume create glustervol01 replica 2 transport tcp fedora-cluster01:/gluster/vol01 fedora-cluster02:/gluster/vol01

Creation of volume glustervol01 has been successful. Please start the volume to access data.

My volume (glustervol01) has a replication factor of two (two copies of my data will be made), and the data will be distributed to both of the bricks listed on the command line. To start the volume so clients can use it, you can use the gluster utility’s “volume start” option:

$ gluster volume start glustervol01

Starting volume glustervol01 has been successful

To verify the volume is operational we can use the gluster utility’s “volume info” option:

$ gluster volume info glustervol01

Volume Name: glustervol01
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: fedora-cluster01:/gluster/vol01
Brick2: fedora-cluster02:/gluster/vol01

Well hot diggity, we have a working gluster volume. Excellent! To configure a CentOS or Fedora client to use this volume we will need to install the glusterfs and glusterfs-fuse packages:

$ rpm -q -a | grep gluster

glusterfs-3.2.4-1.fc16.x86_64
glusterfs-fuse-3.2.4-1.fc16.x86_64

If the packages aren’t installed you can build from source if you are using CentOS, or install via yum if you are on a Fedora 16 machine:

$ yum install glusterfs glusterfs-fuse

After the packages are installed we can use the mount command to make the gluster volume available to the client. The mount command takes as arguments the file system type (glusterfs), the name of one of the servers, the name of the volume and the location to mount the volume:

$ mount -t glusterfs fedora-cluster01:/glustervol01 /gluster

We can verify it mounted with the df command:

$ df -h /gluster

Filesystem Size Used Avail Use% Mounted on
fedora-cluster01:/glustervol01 36G 176M 34G 1% /gluster

To bring this mount up each time the client boots we can add an entry similar to the following to /etc/fstab:

fedora-cluster01:/glustervol01 /gluster glusterfs defaults,_netdev 0 0

Now you may be asking yourself: how does gluster replicate data if only one server is specified on the mount command line? The initial mount is used to gather information about the volume, and from then on the client communicates with all of the nodes defined in the volume definition file. Now that we’ve gone through all this work to mount a volume, we can poke it with dd to make sure it works:

$ cd /gluster

$ dd if=/dev/zero of=foo bs=1M count=8192

8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 359.066 s, 23.9 MB/s

Bingo! Our volume is operational, and any data that is written to it will be replicated to two storage bricks in the volume. While I only have a couple months of gluster experience under my belt, there are a few issues that will stop me from deploying it into production:

  1. Debugging gluster issues in production is difficult. Rolling out debug binaries or stopping gluster volumes to debug a problem isn’t an option for most shops. I should note that the article I referenced was written last year, and there are various options now available (--debug flag, detailed logs, profiling, etc.) to assist with debugging problems. I’ve seen references to debug builds and unmounts on the mailing list, so this leads me to believe this is still an issue. I’ll know if the debug options are up to the task when I start trying to break things. :)
  2. There is no easy way to find out the mapping of files to bricks. In most cases this shouldn’t matter, but for recoverability I would like to see a tool added.
  3. Security is based on IP addresses and subnets. Stronger authentication and encryption of the control and data paths are a necessity at most institutions that have to comply with federal and state laws.
  4. If a node fails and is brought back up at a later time, the files that were changed aren’t replicated to the faulted server until they are accessed. The documentation talks about running a find across the volume from a client (something like the sketch after this list), but this seems kinda kludgy.
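
For the curious, the find-based workaround mentioned in item four looks something like this (a sketch based on the documentation, run from a client against the mount point we created earlier; stat-ing every file can take a while on a large volume):

$ find /gluster -noleaf -print0 | xargs --null stat > /dev/null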

I really see a bright future for gluster. It fills a niche (a scalable clustered file system) that has been largely untouched, and I can truly see it taking off. It’s also open source, so you can tinker around with it to your heart’s content. I’ll be adding more posts related to gluster performance, debugging and maintenance down the road.

Working around various Fedora 16 installation errors

This article was posted by Matty on 2011-11-12 19:16:00 -0400

I went to upgrade a few of my lab machines to Fedora 16 today and encountered a number of issues. The first issue I encountered was related to missing “%end” tags in my kickstart configuration file. The specific error was “Section does not end with %end”.


It appears Fedora now requires the %packages and %post sections to be explicitly closed with an “%end” tag. The kickstart file I was using was from a Fedora 12 installation, where it worked just fine. That leads me to believe that “%end” enforcement was added in Fedora 13, 14, 15 or 16; I’ll need to research this further. Adding the end tags fixed this issue and got me a bit further along in the installer.
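
Here is a minimal sketch of what the affected sections look like with the tags in place (the package group and post command are just placeholders):

%packages
@core
%end

%post
echo "kickstart post section complete" > /root/ks-post.log
%end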

But when anaconda got to storage discovery I was greeted with the error “you have not created a bootloader stage1 target device.”

I figured this was due to a missing kickstart directive, and it turns out I was right. Fedora 16 now requires a biosboot partition, which you can create with the biosboot directive. Here is what the official documents say about the biosboot partition:

“As of Fedora 16 there must be a biosboot partition for the bootloader to be installed successfully. This partition may be created with the kickstart option part biosboot --fstype=biosboot --size=1. However, in the case that a disk has an existing biosboot partition, adding a “part biosboot” option is unnecessary.”
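
Here is a sketch of what a compatible partitioning section might look like (the sizes, file system types and volume group name are illustrative, not a recommendation):

zerombr
clearpart --all --initlabel
part biosboot --fstype=biosboot --size=1
part /boot --fstype=ext4 --size=500
part pv.01 --size=1 --grow
volgroup vg_fedora pv.01
logvol / --vgname=vg_fedora --name=lv_root --size=8192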

Everything is working now, though it took a bit longer than normal to dump a new Fedora image onto my kickstart server. Oh, the joys of jumping on the FEDORA-LATEST bandwagon so soon after a release. :)

An amazing hot swap drive tray

This article was posted by Matty on 2011-11-12 16:56:00 -0400

I’m constantly playing around with operating systems and applications, and in the vast majority of cases I can use VirtualBox, KVM or VMware to accomplish my testing. But in some cases I need to use physical hardware, which used to require me to shuffle drives and cables around in my custom built rack mount servers. Well, no more. I picked up a couple of SNT hot swap drive trays and now I can easily swap drives in and out of my rack mount servers. In addition to being crazy useful, they also look cool.


To add a new drive you open the door, slide it in and then move the lock to the left to keep the drive in place. I couldn’t be happier with this setup, and the testing I did this morning would have taken me a LONG time if this nifty little device wasn’t in place. When you combine these with a solid case, you have a pretty killer home lab. What other solutions do folks use? USB drives? External disks? Something else?

Growing your own lettuce indoors with an Aerogarden

This article was posted by Matty on 2011-11-11 19:38:00 -0400

I’ve hopped on the health train over the past six months, and have been trying to integrate more fruits and vegetables into my diet. This all came about after I watched the Food Inc. documentary (available from Netflix and Amazon’s streaming services) and reflected on what I eat. The Food Inc. documentary really shed some light on how the food production system works, and it made me realize that I needed to make more of an effort to grow some of my own food. When you grow your own food you know where it came from, and you have total control over how organic you want your food to be.

So this summer I started my first outdoor garden using a few sub-irrigated Earthboxes. I grew some amazing tomatoes, cucumbers, peppers and a number of herbs that were used in all kinds of dishes (if you cook and haven’t used allrecipes.com, you are totally missing out!). But as the cold weather starts to move in I wanted to continue my growing season indoors. After doing tons and tons of research on indoor gardening, I kept being directed to the Aerogarden product line. So I took the plunge and ordered an Aerogarden Extra to see what the hype was all about.

So you may be asking yourself, what the heck is an Aerogarden? These amazing little devices are indoor hydroponic systems that can be used to grow vegetables, flowers, herbs or just about anything that would thrive in an indoor hydroponic system. We are currently growing lettuce in our Aerogarden, and we are getting some awesome results just 3 days after a large lettuce harvest.


I’ve been able to harvest salad greens from my Aerogarden Extra every third day, and they have become a staple on our dinner menu. The Aerogarden couldn’t be easier to operate, since you only need to follow a few steps to grow your crop of choice:

  1. Purchase a seed kit and follow the instructions to plant it.
  2. Add water when the water light comes on.
  3. Add liquid nutrients when the nutrient light comes on.
  4. Harvest your produce as needed.

I was so pleased with my first garden that I decided to order a second Aerogarden Extra this week. I am planning to try out their mega cherry seed kit this winter, and can’t wait to pick vine ripe tomatoes in January! It’s amazing what fresh greens taste like, especially ones that you pick and plop right down on your dinner plate. I will keep updating my blog as I continue to experiment with my Aerogardens. This thing is an amazing amount of fun, and I had to share my excitement with my fellow geeks. :)