GlusterFS: Build a three-node storage cluster

July 26, 2012

This article will guide you step by step through building a three-node GlusterFS cluster. For this setup I’m using three Ubuntu 12.04 servers, each with a second hard drive attached to hold the data. This example uses KVM-based virtual machines, so the hard disks show up as vda and vdb. If you have physical disks, they will likely show up as sda and sdb.

Since we’ll be needing to do a lot of work that needs root/admin privileges, let’s first gain root:

sudo su

This will save us from having to type “sudo” many, many times.

Install Gluster

Download the latest version of the Gluster .deb package directly from the GlusterFS download site.

Install Gluster:

dpkg -i glusterfs_3.X.x-x_amd64.deb

Once installed, check your Gluster version:

glusterfs --version

To configure the glusterd service to automatically start every time the system boots, enter the following from the command line:

update-rc.d glusterd defaults

Make sure the glusterd service is started:

/etc/init.d/glusterd start

You need to add the cluster servers to the trusted storage pool. In this example, we’re running all GlusterFS configuration commands from our gluster01 server, but you can run them from any server because the configuration is replicated between GlusterFS nodes.

For every node you want to add, run the following on gluster01, substituting your own hostnames (gluster02 and gluster03 here follow the same naming scheme as gluster01):

gluster peer probe gluster02
gluster peer probe gluster03

To check the status of the trusted storage pool, run:

gluster peer status

If you have three nodes, you should get output similar to:

Number of Peers: 2

Uuid: b1f2a8a2-bc80-415f-848c-0b5354f04eb0
State: Peer in Cluster (Connected)

Uuid: c67d1757-5394-465b-b25e-4a9d72bc5137
State: Peer in Cluster (Connected)
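Before moving on, it can be handy to confirm programmatically that every peer is connected. Here is a minimal sketch, assuming a three-node pool (so two peers besides the local node, since a node is not listed in its own peer status output):

```shell
# Expect 2 peers on a three-node cluster: the local node
# does not appear in its own peer status output.
expected=2
connected=$(gluster peer status | grep -c "Peer in Cluster (Connected)")
if [ "$connected" -eq "$expected" ]; then
    echo "all peers connected"
else
    echo "only $connected of $expected peers connected" >&2
fi
```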

Add and format storage

Check what drives you have available:

fdisk -l

If you added one extra data disk, this command should show you that you have an unpartitioned vdb (or sdb) disk available.

Create a data directory:

mkdir /data

Install xfsprogs, the package of utilities for creating and managing XFS file systems:

aptitude install xfsprogs

Format your data drive using XFS:

mkfs.xfs /dev/vdb

Lastly, mount the drive as /data:

mount /dev/vdb /data

Check that your data directory is mounted:

df -h

Make the data directory writeable by anyone:

chmod -R 777 /data

If all looks good, it’s time to ensure that the /data directory gets mounted on startup. So, open up fstab:

nano /etc/fstab

… and add the following at the bottom of the file:

/dev/vdb   /data   xfs   rw,user,auto   0   0
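A typo in /etc/fstab can leave the server failing to boot cleanly, so it’s worth testing the new entry before rebooting. One way to do this is to unmount /data and then let mount re-read fstab:

```shell
# Unmount the drive, then remount everything listed in /etc/fstab.
# If the new entry is wrong, mount -a will complain here rather
# than at boot time.
umount /data
mount -a
df -h /data
```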

To make sure the new drive/partition is mounted properly, reboot the server:

reboot

Once the server is back up, try creating a file in the /data directory as a regular user:

touch /data/test.txt

Check that the file was indeed created:

ls -la /data

Now, let’s create a share named testvol with three replicas. The replica count is the number of copies of each file the cluster keeps. On a two-node cluster, two replicas would make sense; since we have three nodes, let’s set up three replicas. In most cases, three replicas is probably the maximum you want. We will create one replica on each server, in the /data directory. The directory you specify will be created if it doesn’t already exist (the gluster02 and gluster03 hostnames below assume the same naming scheme as gluster01):

gluster volume create testvol replica 3 transport tcp gluster01:/data gluster02:/data gluster03:/data

If the volume creation succeeds, you should see:

Creation of volume testvol has been successful. Please start the volume to access data.

Go ahead and start the volume:

gluster volume start testvol

Once the volume has been started, you can check its status by issuing:

gluster volume info

Now that everything is (hopefully) working, it’s time to start writing data to the cluster. There are several ways to do this: the native GlusterFS client is one, and standard NFS is another common way of reading from and writing to the cluster. GlusterFS can also be used as object storage.
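As a sketch, mounting the volume with the native client might look like this on a client machine (the package name is for Ubuntu 12.04, and the mount point /mnt/gluster is hypothetical; any of the three servers can be given as the mount source, since the client learns the full volume layout from it):

```shell
# Install the native client and mount the volume
# (hypothetical mount point /mnt/gluster).
apt-get install glusterfs-client
mkdir -p /mnt/gluster
mount -t glusterfs gluster01:/testvol /mnt/gluster

# Any file written here should appear in /data on all
# three servers, since the volume has three replicas.
touch /mnt/gluster/hello.txt
```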
