Xen Cluster Management With Ganeti On Debian Lenny - Page 2
4 Installing Ganeti And Xen
node1/node2:
We can install Ganeti and Xen with one simple command:
aptitude install ganeti
You will see the following question:
MD arrays needed for the root file system: <-- all
Then we edit /etc/xen/xend-config.sxp and modify the following settings:
vi /etc/xen/xend-config.sxp
[...]
(xend-relocation-server yes)
[...]
(xend-relocation-port 8002)
[...]
(xend-relocation-address '')
[...]
(network-script network-bridge)
[...]
#(network-script network-dummy)
[...]
(vif-script vif-bridge)
[...]
(dom0-min-mem 0)
[...]
Next open /boot/grub/menu.lst and find the # xenhopt= and # xenkopt= lines and modify them as follows (don't remove the # at the beginning!):
vi /boot/grub/menu.lst
[...]
## Xen hypervisor options to use with the default Xen boot option
# xenhopt=dom0_mem=256M
## Xen Linux kernel options to use with the default Xen boot option
# xenkopt=console=tty0 nosmp
[...]
256M or 512M is a reasonable amount of memory for dom0.
(Please use nosmp only if your CPU has multiple cores. If your CPU has just one core, it is possible that it won't boot anymore with this setting. You can check how many cores you have with the following command:
cat /proc/cpuinfo
)
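If you want the core count directly instead of scrolling through the full cpuinfo output, a small helper (not part of the original article) does the counting for you:

```shell
# Count CPU cores by counting the "processor" entries in /proc/cpuinfo.
cores=$(grep -c '^processor' /proc/cpuinfo)
echo "Detected $cores core(s)"
```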
Afterwards, update the GRUB boot loader:
/sbin/update-grub
and reboot both physical nodes:
reboot
After the reboot, the nodes should run the Xen kernel:
uname -r
node1:~# uname -r
2.6.26-1-xen-686
node1:~#
Afterwards do this:
cd /boot
ln -s vmlinuz-`uname -r` vmlinuz-2.6-xenU
ln -s initrd.img-`uname -r` initrd-2.6-xenU
(This is useful if you don't specify a kernel in the gnt-instance add command - the command will then use /boot/vmlinuz-2.6-xenU and /boot/initrd-2.6-xenU by default.)
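The symlink step can be tried out safely first in a scratch directory; this is only a sketch, with a temporary directory standing in for /boot so the real boot files are untouched:

```shell
# Sketch only: reproduce the symlink step in a temp directory instead of /boot.
tmp=$(mktemp -d)
cd "$tmp"
# Stand-ins for the real kernel and initrd files:
touch "vmlinuz-$(uname -r)" "initrd.img-$(uname -r)"
ln -s "vmlinuz-$(uname -r)" vmlinuz-2.6-xenU
ln -s "initrd.img-$(uname -r)" initrd-2.6-xenU
# The symlink should resolve to the versioned file name:
target=$(readlink vmlinuz-2.6-xenU)
echo "$target"
```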
5 Installing DRBD
node1/node2:
Next we install DRBD:
aptitude install drbd8-modules-`uname -r` drbd8-utils
Now we must enable the DRBD kernel module:
echo drbd minor_count=64 >> /etc/modules
modprobe drbd minor_count=64
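To confirm the module actually loaded, you can look for it in lsmod; this is a hedged check, and on a machine without DRBD it simply prints a fallback note:

```shell
# Check whether the drbd kernel module is currently loaded;
# fall back to a note if it is not (e.g. on a machine without DRBD).
status=$(lsmod | grep '^drbd' || echo "drbd module not loaded")
echo "$status"
```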
It is recommended to configure LVM not to scan the DRBD devices. Therefore we open /etc/lvm/lvm.conf and replace the filter line as follows:
vi /etc/lvm/lvm.conf
[...]
filter = [ "r|/dev/cdrom|", "r|/dev/drbd[0-9]+|" ]
[...]
6 Initializing The Cluster
node1:
Now we can initialize our cluster (this has to be done only once per cluster). Our cluster name is cluster1.example.com, and I want node1.example.com to be the master, therefore we run the following command on node1.example.com:
gnt-cluster init -b eth0 -g xenvg --master-netdev eth0 cluster1.example.com
Ganeti assumes that the name of the volume group is xenvg by default, so you can also leave out the -g xenvg switch, but if your volume group has a different name, you must specify it with the -g switch.
Xen 3.2 and 3.3 don't use the bridge xen-br0 anymore; instead, eth0 is used. Therefore we must specify -b eth0 and --master-netdev eth0.
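After initialization you can confirm which node holds the master role. gnt-cluster getmaster and gnt-cluster verify are standard Ganeti commands; this sketch guards for machines where Ganeti is not installed:

```shell
# Print the cluster master if Ganeti is installed; otherwise print a note.
if command -v gnt-cluster >/dev/null 2>&1; then
  master=$(gnt-cluster getmaster)   # should print node1.example.com on our cluster
  # gnt-cluster verify would additionally report node, bridge, and hypervisor problems
else
  master="gnt-cluster not installed on this machine"
fi
echo "$master"
```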
7 Adding node2.example.com To The Cluster
node1:
Now that node1 is the master, we run all commands for managing the cluster on node1. In order to add node2.example.com to the cluster, we run:
gnt-node add node2.example.com
This will look like this:
node1:~# gnt-node add node2.example.com
-- WARNING --
Performing this operation is going to replace the ssh daemon keypair
on the target machine (node2.example.com) with the ones of the current one
and grant full intra-cluster ssh root access to/from it
The authenticity of host 'node2.example.com (192.168.0.101)' can't be established.
RSA key fingerprint is 62:d3:d4:3f:d2:9c:3b:f2:5f:fe:c0:8a:c8:02:82:2a.
Are you sure you want to continue connecting (yes/no)? <-- yes
[email protected]'s password: <-- node2's root password
node1:~#
Now let's check if our cluster really consists of node1 and node2:
gnt-node list
You should get something like this:
node1:~# gnt-node list
Node DTotal DFree MTotal MNode MFree Pinst Sinst
node1.example.com 428764 428764 3839 256 3535 0 0
node2.example.com 104452 104452 1023 256 747 0 0
node1:~#
(DTotal and DFree show each node's total and free disk space in MB, MTotal, MNode, and MFree show total memory, memory used by the node itself, and free memory, and Pinst and Sinst count the primary and secondary instances hosted on each node.)