Setting Up A Standalone Storage Server With GlusterFS And Samba On Debian Squeeze - Page 2

4 Setting Up The GlusterFS Servers

node1/node2

apt-get install glusterfs-server
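
If you want to verify which GlusterFS version the Squeeze repositories installed (the volfile-based configuration used below assumes that version), you can ask the binary directly; the exact version string will vary:

glusterfs --version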

Next we create a few directories:

mkdir /data/export
mkdir /data/export-ns

Now we need to create the GlusterFS server configuration file /etc/glusterfs/glusterfsd.vol. We back up the original file, empty it, and then open it in an editor:

cp /etc/glusterfs/glusterfsd.vol /etc/glusterfs/glusterfsd.vol_orig
cat /dev/null > /etc/glusterfs/glusterfsd.vol
vi /etc/glusterfs/glusterfsd.vol

volume posix
  type storage/posix
  option directory /data/export
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow 192.168.20.106,192.168.20.107
  subvolumes brick
end-volume

Finally, we can start the GlusterFS server:

/etc/init.d/glusterfs-server start
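
To check that the daemon actually came up, a quick look at the process list and the listening sockets should show glusterfsd running with our volfile; the port number depends on the GlusterFS version, so treat this as a rough sanity check:

ps aux | grep [g]lusterfsd
netstat -tlnp | grep glusterfs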

 

5 Setting Up The GlusterFS Client

In this setup, MS Windows clients need to have access to both nodes via SMB. That is why both nodes act as GlusterFS server and client at the same time.
On both nodes:

node1/node2

First we need to create the client configuration file. Again, we back up the original, empty it, and open it in an editor:

cp /etc/glusterfs/glusterfs.vol /etc/glusterfs/glusterfs.vol_orig
cat /dev/null > /etc/glusterfs/glusterfs.vol
vi /etc/glusterfs/glusterfs.vol

volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host node1.example.com
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host node2.example.com
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

Done! Our cluster is set up. Now we can mount the GlusterFS filesystem on the /home directory with one of the following two commands:

glusterfs -f /etc/glusterfs/glusterfs.vol /home

or

mount -t glusterfs /etc/glusterfs/glusterfs.vol /home

You should now see the mounted share:

df -h

...
/dev/sdb1 9,9G 151M 9,2G 2% /data
/etc/glusterfs/glusterfs.vol 9,9G 151M 9,2G 2% /home
...

As you can see, the same storage shows up twice: the GlusterFS server uses the /data directory (the brick on /dev/sdb1), while the GlusterFS client mounts the replicated volume on /home.
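
If you want to see which mount is which, you can also filter the mount table for the two paths used in this guide:

mount | grep -E '/data|/home'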

Of course, we want the share to be mounted automatically when the servers start. The best way is to append the following line to /etc/rc.local (before the exit 0 line):

/bin/mount -t glusterfs /etc/glusterfs/glusterfs.vol /home
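
If you prefer not to edit the file by hand, the following one-liner inserts the mount command directly above the exit 0 line; this is just a convenience sketch, so check /etc/rc.local afterwards:

sed -i 's|^exit 0|/bin/mount -t glusterfs /etc/glusterfs/glusterfs.vol /home\nexit 0|' /etc/rc.local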

 

6 Testing

on node2

... run ...

watch ls /home

on node1

... run ...

touch /home/test.file

on node2

... you should see ...

Every 2,0s: ls /home Tue Dec 25 13:12:30 2012
test.file

That's it, the cluster is up and running. You may do some more tests to see how GlusterFS works, for example the failover check sketched below.
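
The following rough sketch simulates a node failure to show the replicate translator at work (the file name failover.file is just an example):

on node1

/etc/init.d/glusterfs-server stop

on node2

touch /home/failover.file

on node1

/etc/init.d/glusterfs-server start
ls -l /home/
ls -l /data/export/

Stopping the server on node1 only takes its brick offline; the clients keep writing to node2. After the server is back, listing the volume from a client triggers the self-heal, and the new file should also show up in node1's local brick under /data/export.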
