How To Create A Cluster Testbed Using CentOS 5 Virtualization And iSCSI - Page 3
iSCSI is a Storage Area Network protocol that allows shared storage to be accessed over an existing network infrastructure. In my setup, I used iscsitarget from http://iscsitarget.sourceforge.net.
1. iSCSI server installation and configuration
1.a compiling the iscsitarget tarball
This needs to be done on the physical host.
- Get the tarball from SourceForge and put it in /usr/local/src.
- cd to /usr/local/src:
cd /usr/local/src
- Then extract the files:
tar xvf iscsitarget-0.4.16.tar.gz
- Then build and install (you'll need gcc and the kernel-devel package matching your running kernel):
cd iscsitarget-0.4.16
make
make install
1.b configuration needed
This is my ietd.conf configuration defining the "LUNs" to be allocated to the guests from the physical host's disks:
#/etc/ietd.conf
# NOTE: the config file has more entries than what I'm showing here,
# but I've commented out the original entries and made the following
Target iqn.2008-07.NODE00:LUN01.NODE00
        MaxConnections 2
        Lun 1 Path=/dev/Virtual00VG/lvLUN01,Type=fileio
        Alias LUN01
Target iqn.2008-07.NODE00:LUN02.NODE00
        MaxConnections 2
        Lun 2 Path=/dev/Virtual00VG/lvLUN02,Type=fileio
        Alias LUN02
# end of ietd.conf
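The pattern of these stanzas is mechanical, so if you ever need more LUNs, a small shell loop can generate them. This is only a sketch following my own naming convention (iqn.2008-07.NODE00, Virtual00VG); adjust both to your environment:

```shell
# Generate ietd.conf Target stanzas for each LUN backed by a logical
# volume in Virtual00VG. ${i#0} strips the leading zero so the Lun
# numbers come out as 1, 2, ...
for i in 01 02; do
    cat <<EOF
Target iqn.2008-07.NODE00:LUN${i}.NODE00
        MaxConnections 2
        Lun ${i#0} Path=/dev/Virtual00VG/lvLUN${i},Type=fileio
        Alias LUN${i}
EOF
done > ietd.conf.generated
cat ietd.conf.generated
```

Review the generated file before appending it to /etc/ietd.conf.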
In my physical host system, I created two logical volumes, each 50 GB in size. You can also use plain files or disk partitions; just change the Path entries in the ietd.conf file accordingly.
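Since Type=fileio accepts plain files as well as logical volumes, a quick way to carve out two 50 GB LUNs without touching LVM is to create sparse files. This is just a sketch; the /tmp paths are illustrative only, so point the Path= entries in ietd.conf at wherever you actually keep them:

```shell
# Create two sparse 50 GB backing files; with Type=fileio, IET treats
# each file like a block device. Sparse files occupy almost no real
# disk space until data is written to them.
dd if=/dev/zero of=/tmp/lvLUN01.img bs=1M count=0 seek=51200 2>/dev/null
dd if=/dev/zero of=/tmp/lvLUN02.img bs=1M count=0 seek=51200 2>/dev/null
ls -l /tmp/lvLUN01.img /tmp/lvLUN02.img
```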
iscsitarget has /etc/initiators.allow and /etc/initiators.deny that work like hosts.allow and hosts.deny. In my setup, I will allow node01 and node02 to access the two LUNs defined in ietd.conf.
#/etc/initiators.allow
# this should correspond to the definitions in your /etc/ietd.conf
iqn.2008-07.NODE00:LUN01.NODE00 192.168.100.10, 192.168.100.20
iqn.2008-07.NODE00:LUN02.NODE00 192.168.100.10, 192.168.100.20
# end of initiators.allow
- Start the iscsi-target service:
service iscsi-target start
- and make sure it starts during bootup:
chkconfig --add iscsi-target
chkconfig iscsi-target on
chkconfig --list iscsi-target
iscsi-target 0:off 1:off 2:on 3:on 4:on 5:on 6:off
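Once the service is running, IET exposes its runtime state under /proc/net/iet, which makes for a quick sanity check. The guard below just keeps the snippet harmless on a box where the iscsi-target module isn't loaded:

```shell
# 'volume' lists every exported LUN with its Path and Type;
# 'session' lists the initiators currently logged in.
if [ -d /proc/net/iet ]; then
    cat /proc/net/iet/volume
    cat /proc/net/iet/session
else
    echo "iscsi-target kernel module not loaded"
fi
```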
2. Client side
The iscsi-initiator-utils package should already be installed (it is included in the kickstart file above).
- Edit the file /etc/iscsi/initiatorname.iscsi to define the targets. Mine is as follows:
#/etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2008-07.NODE00:LUN01.NODE00
InitiatorName=iqn.2008-07.NODE00:LUN02.NODE00
# end of /etc/iscsi/initiatorname.iscsi
- Start the iscsid service and try to discover the LUNs:
service iscsid start
Turning off network shutdown. Starting iSCSI daemon: [ OK ]
iscsiadm -m discovery -t st -p node00
- Then start the iscsi service. You'll then see the LUN definitions created earlier:
service iscsi start
This will show the following:
iscsid (pid 964 963) is running...
Setting up iSCSI targets: Login session [iface: default, target: iqn.2008-07.NODE00:LUN02.NODE00, portal: 192.168.222.1,3260]
Login session [iface: default, target: iqn.2008-07.NODE00:LUN01.NODE00, portal: 192.168.222.1,3260]
[  OK  ]
- Check the system logs to see whether the disks were detected:
scsi0 : iSCSI Initiator over TCP/IP
  Vendor: IET      Model: VIRTUAL-DISK     Rev: 0
  Type:   Direct-Access                    ANSI SCSI revision: 04
scsi 0:0:0:2: Attached scsi generic sg0 type 0
SCSI device sda: 104857600 512-byte hdwr sectors (53687 MB)
sda: Write Protect is off
sda: Mode Sense: 77 00 00 08
SCSI device sda: drive cache: write through
SCSI device sda: 104857600 512-byte hdwr sectors (53687 MB)
sda: Write Protect is off
sda: Mode Sense: 77 00 00 08
SCSI device sda: drive cache: write through
 sda: unknown partition table
sd 0:0:0:2: Attached scsi disk sda
scsi1 : iSCSI Initiator over TCP/IP
  Vendor: IET      Model: VIRTUAL-DISK     Rev: 0
  Type:   Direct-Access                    ANSI SCSI revision: 04
SCSI device sdb: 104857600 512-byte hdwr sectors (53687 MB)
sdb: Write Protect is off
sdb: Mode Sense: 77 00 00 08
SCSI device sdb: drive cache: write through
SCSI device sdb: 104857600 512-byte hdwr sectors (53687 MB)
sdb: Write Protect is off
sdb: Mode Sense: 77 00 00 08
SCSI device sdb: drive cache: write through
 sdb: unknown partition table
sd 1:0:0:1: Attached scsi disk sdb
sd 1:0:0:1: Attached scsi generic sg1 type 0
I now have sda and sdb, each 53687 MB in size (results for your setup may differ).
- Running fdisk -l:
Disk /dev/xvda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/xvda1   *           1          13      104391   83  Linux
/dev/xvda2              14        3916    31350847+  8e  Linux LVM

Disk /dev/sda: 53.6 GB, 53687091200 bytes
64 heads, 32 sectors/track, 51200 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Disk /dev/sda doesn't contain a valid partition table

Disk /dev/sdb: 53.6 GB, 53687091200 bytes
64 heads, 32 sectors/track, 51200 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Disk /dev/sdb doesn't contain a valid partition table
Now do the same for node02. Once the disks are seen by both guests, you can then start setting up a two-node cluster. I've used this configuration to test a two-node Oracle 10gR2 RAC setup with shared ASM storage and OCFS2 on a 64-bit system.
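The client-side steps above condense to just a few commands per node. This is only a sketch of the same sequence (sendtargets is the long form of -t st), guarded so it does nothing on a machine without open-iscsi installed:

```shell
# Client-side sequence, to be run as root on each additional node.
if command -v iscsiadm >/dev/null 2>&1; then
    service iscsid start
    iscsiadm -m discovery -t sendtargets -p node00   # find the exported LUNs
    service iscsi start                              # log in to them
    chkconfig iscsi on                               # persist across reboots
else
    echo "open-iscsi (iscsiadm) not installed; skipping"
fi
```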
This kind of setup will help you learn the basics of clustering without having to acquire additional hardware. In no way should it be used in a "live" environment. Once you have familiarized yourself with how a cluster is prepared, you can apply the same concepts when building the real, physical setups your organization needs. I hope you'll find this useful.