Another interesting finding about gluster replicas


In a previous post I talked about my problems getting gluster to expand the number of replicas in a volume. While experimenting with the gluster utility's "add-brick" option, I wanted to see whether adding two more bricks would replicate the existing data across all four bricks (two old, two new), or whether the two new bricks would become one replica pair and the two previous bricks another. To find out, I added two more bricks:

$ gluster volume add-brick glustervol01 \
    centos-cluster01.homefetch.net:/gluster/vol01 \
    centos-cluster02.homefetch.net:/gluster/vol01
Add Brick successful

And then checked out the status of the volume:

$ gluster volume info glustervol01

Volume Name: glustervol01
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: fedora-cluster01.homefetch.net:/gluster/vol01
Brick2: fedora-cluster02.homefetch.net:/gluster/vol01
Brick3: centos-cluster01.homefetch.net:/gluster/vol01
Brick4: centos-cluster02.homefetch.net:/gluster/vol01

Interesting. The volume is now a distributed-replicated volume with a two by two configuration, giving four bricks in total. This is similar in spirit to RAID 10, where you stripe across mirrors, except that gluster distributes whole files across the replica pairs rather than striping blocks. The two previous bricks form one replica pair (mirror), and the two new bricks form the second pair.
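For comparison, the same 2 x 2 layout could have been built in one shot at volume creation time. The pairing is determined by brick order: gluster groups the bricks into replica sets of "replica" size in the order they are listed. Here is a minimal sketch using the same four bricks (these create/start commands are illustrative, not what I originally ran):

$ gluster volume create glustervol01 replica 2 transport tcp \
    fedora-cluster01.homefetch.net:/gluster/vol01 \
    fedora-cluster02.homefetch.net:/gluster/vol01 \
    centos-cluster01.homefetch.net:/gluster/vol01 \
    centos-cluster02.homefetch.net:/gluster/vol01

$ gluster volume start glustervol01

With that ordering, the two fedora bricks become the first replica set and the two centos bricks the second, which matches the volume info output above. To confirm which pair gets which files, I copied a few files to my gluster file system and then checked the bricks to see where they landed: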

$ cd /gluster

$ cp /etc/services file1

$ cp /etc/services file2

$ cp /etc/services file3

$ cp /etc/services file4

$ ls -la

total 2648
drwxr-xr-x 4 root root 8192 Nov 27 2011 .
dr-xr-xr-x. 23 root root 4096 Nov 12 15:44 ..
drwxr-xr-x 2 root root 16384 Nov 27 2011 etc1
-rw-r--r-- 1 root root 656517 Nov 27 2011 file1
-rw-r--r-- 1 root root 656517 Nov 27 2011 file2
-rw-r--r-- 1 root root 656517 Nov 27 2011 file3
-rw-r--r-- 1 root root 656517 Nov 27 2011 file4
drwx------ 2 root root 20480 Nov 26 21:11 lost+found

Four files were copied to the gluster file system, and it looks like two landed on each replicated pair of bricks. Here is the ls listing from the first pair (taken from one of the two bricks in that pair):

$ ls -la

total 1328
drwxr-xr-x. 4 root root 4096 Nov 27 10:00 .
drwxr-xr-x. 3 root root 4096 Nov 26 17:53 ..
drwxr-xr-x. 2 root root 4096 Nov 27 10:00 etc1
-rw-r--r--. 1 root root 656517 Nov 27 10:00 file1
-rw-r--r--. 1 root root 656517 Nov 27 10:01 file2
drwx------. 2 root root 16384 Nov 26 21:11 lost+found

And here is the listing from the second replicated pair of bricks:

$ ls -la

total 1324
drwxr-xr-x 4 root root 4096 Nov 27 10:00 .
drwxr-xr-x 3 root root 4096 Nov 12 20:05 ..
drwxr-xr-x 126 root root 12288 Nov 27 10:00 etc1
-rw-r--r-- 1 root root 656517 Nov 27 10:00 file3
-rw-r--r-- 1 root root 656517 Nov 27 10:00 file4
drwx------ 2 root root 4096 Nov 26 21:11 lost+found
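Rather than logging in to each brick and running ls, gluster can also report file placement from the client side through the pathinfo virtual extended attribute. A quick sketch, assuming the volume is FUSE-mounted at /gluster and the getfattr utility is installed (the exact output format may vary between releases):

$ getfattr -n trusted.glusterfs.pathinfo /gluster/file1

The returned value names the replicate subvolume and the backend brick paths that hold the file, which should line up with the per-brick listings above.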

So there you have it. Adding two more bricks with "add-brick" adds a new replicated pair of bricks; it doesn't mirror the data between the old bricks and the new ones. Given the description of a distributed replicated volume in the official documentation, this makes total sense. Now to play around with some of the other redundancy types.
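As a follow-up to the replica expansion problem from the previous post: later gluster releases reportedly let you raise the replica count at add-brick time by passing a new replica value along with one additional brick per existing replica set. A hedged sketch (host05 and host06 are hypothetical hosts, and this syntax may not exist in older releases):

# host05/host06 are hypothetical; one new brick is needed for each existing replica set
$ gluster volume add-brick glustervol01 replica 3 \
    host05.homefetch.net:/gluster/vol01 \
    host06.homefetch.net:/gluster/vol01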

This article was posted by Matty on 2011-11-30 09:04:00 -0400