Xen Cluster Management With Ganeti On Debian Etch - Page 6

11 Initializing The Cluster

node1:

Now we can initialize our cluster (this has to be done only once per cluster). Our cluster name is cluster1.example.com, and I want node1.example.com to be the master, therefore we run the following command on node1.example.com:

gnt-cluster init cluster1.example.com
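Ganeti refuses to initialize the cluster if the cluster name does not resolve to an IP address, so it is worth checking name resolution beforehand. A minimal pre-flight sketch, using the hostnames from this article (fix /etc/hosts or DNS for any name that fails):

```shell
# Check that the cluster name and both node names resolve before
# running gnt-cluster init (a sketch; hostnames are this article's).
for name in cluster1.example.com node1.example.com node2.example.com; do
  if getent hosts "$name" > /dev/null; then
    echo "$name resolves"
  else
    echo "$name does NOT resolve -- fix /etc/hosts or DNS first"
  fi
done
```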

 

12 Adding node2.example.com To The Cluster

node1:

Now that node1 is the master, we run all commands for managing the cluster on node1. In order to add node2.example.com to the cluster, we run:

gnt-node add node2.example.com

The output will look like this:

node1:/srv/ganeti/os# gnt-node add node2.example.com
The authenticity of host 'node2.example.com (192.168.0.101)' can't be established.
RSA key fingerprint is 1c:83:24:cc:05:ab:9a:d6:51:ba:4d:31:42:1f:0a:6f.
Are you sure you want to continue connecting (yes/no)?
<-- yes
[email protected]'s password:
node1:/srv/ganeti/os#

Now let's check if our cluster really consists of node1 and node2:

gnt-node list

You should get something like this:

node1:/srv/ganeti/os# gnt-node list
Node              DTotal DFree MTotal MNode MFree Pinst Sinst
node1.example.com  40700 40700    203    64   124     0     0
node2.example.com  40700 40700    203    64   124     0     0
node1:/srv/ganeti/os#
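The columns show disk totals and free space (DTotal/DFree, in MiB), memory figures (MTotal/MNode/MFree, in MiB), and the number of primary and secondary instances per node. As a small sketch of working with this output, the following sums the DFree column of the sample above with awk; on a live cluster you would pipe `gnt-node list` straight into the same awk program (the values here are copied from the sample output):

```shell
# Save the sample gnt-node list output shown above, then sum the
# DFree column (3rd field) across all nodes, skipping the header.
cat <<'EOF' > /tmp/gnt-node-list.txt
Node              DTotal DFree MTotal MNode MFree Pinst Sinst
node1.example.com  40700 40700    203    64   124     0     0
node2.example.com  40700 40700    203    64   124     0     0
EOF
awk 'NR > 1 { free += $3 } END { print free " MiB free in total" }' /tmp/gnt-node-list.txt
```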

 

13 Setting Up An Instance

node1:

Now let's create our first virtual machine (called an instance in Ganeti speak), inst1.example.com. I want to use DRBD for it (remote RAID1), I want node2 to be the primary node, and I want the instance to have a 5 GB hard drive, 256 MB of swap, and 64 MB of RAM. Again, we run the command on the cluster master, node1.example.com:

gnt-instance add -t remote_raid1 -n node2.example.com --secondary-node node1.example.com -o debian-etch -s 5g --swap-size 256 -m 64 inst1.example.com

This can take some time. This is how the output looks:

node1:~# gnt-instance add -t remote_raid1 -n node2.example.com --secondary-node node1.example.com -o debian-etch -s 5g --swap-size 256 -m 64 inst1.example.com
* creating instance disks...
adding instance inst1.example.com to cluster config
Waiting for instance inst1.example.com to sync disks.
- device sda: 18.90% done, 2661 estimated seconds remaining
- device sda: 22.10% done, 1278 estimated seconds remaining
- device sda: 26.40% done, 1611 estimated seconds remaining
- device sda: 30.70% done, 1301 estimated seconds remaining
- device sda: 34.70% done, 1524 estimated seconds remaining
- device sda: 38.80% done, 894 estimated seconds remaining
- device sda: 43.30% done, 1753 estimated seconds remaining
- device sda: 48.40% done, 1195 estimated seconds remaining
- device sda: 52.70% done, 1213 estimated seconds remaining
- device sda: 57.70% done, 1011 estimated seconds remaining
- device sda: 61.10% done, 730 estimated seconds remaining
- device sda: 64.60% done, 698 estimated seconds remaining
- device sda: 69.40% done, 595 estimated seconds remaining
- device sda: 73.80% done, 430 estimated seconds remaining
- device sda: 78.30% done, 438 estimated seconds remaining
- device sda: 82.00% done, 169 estimated seconds remaining
- device sda: 85.80% done, 298 estimated seconds remaining
- device sda: 91.20% done, 146 estimated seconds remaining
- device sda: 95.50% done, 85 estimated seconds remaining
- device sda: 99.20% done, 18 estimated seconds remaining
Instance inst1.example.com's disks are in sync.
creating os for instance inst1.example.com on node node2.example.com
* running the instance OS create scripts...
* starting instance...
node1:~#

Ganeti has created a complete virtual machine (using Debian Etch) which you can now use.
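The percentage lines in the output above come from DRBD resynchronising the instance's disks between the two nodes. If you want to watch the progress programmatically, a small sketch that extracts the latest percentage from such output (the sample lines are copied from the output above; on a live cluster you would feed it the command's actual output):

```shell
# Save a few of the DRBD sync progress lines shown above, then pull
# the percentage out of the most recent one with tail and sed.
cat <<'EOF' > /tmp/sync.log
- device sda: 91.20% done, 146 estimated seconds remaining
- device sda: 95.50% done, 85 estimated seconds remaining
- device sda: 99.20% done, 18 estimated seconds remaining
EOF
tail -n 1 /tmp/sync.log | sed 's/.*: \([0-9.]*\)% done.*/\1/'
```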

 

14 Configuring The Instance

node1:

To get to inst1.example.com's command line, run

gnt-instance console inst1.example.com

on node1.

inst1.example.com:

Now you can log in to inst1.example.com. The username is root, and there is no password. Therefore the first thing we do after logging in is set a password for root:

passwd

Next we must add a stanza for eth0 to /etc/network/interfaces. Right now, inst1.example.com has no network connectivity because only lo (the loopback interface) is up.

As I said in chapter 1, I want inst1.example.com to have the IP address 192.168.0.105:

vi /etc/network/interfaces

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 192.168.0.105
        netmask 255.255.255.0
        network 192.168.0.0
        broadcast 192.168.0.255
        gateway 192.168.0.1

Restart the network afterwards:

/etc/init.d/networking restart

Run

apt-get update

to update the package database on inst1, and then install OpenSSH and a full-featured vim:

apt-get install ssh openssh-server vim-full

Now you can connect to inst1.example.com using an SSH client such as PuTTY on the IP address 192.168.0.105.

To leave inst1's console and get back to node1, type CTRL+] if you are at the console, or CTRL+5 if you're using PuTTY (this is the same as if you were using Xen's xm commands instead of Ganeti).

Comments

From: at: 2008-02-25 16:33:16

Hi,
I just want to ask if it is possible to create a non-remote_raid1 instance, something like:

gnt-instance add -t LOCAL -n node2.example.com -o debian-etch -s 5g --swap-size 256 -m 64 inst1.example.com

and then, after some time, migrate this instance to a remote_raid1 instance, like:

gnt-instance add -t remote_raid1 -n node2.example.com --secondary-node node1.example.com -o debian-etch -s 5g --swap-size 256 -m 64 inst1.example.com

-----------------------------------------------------------------

Then I want to ask if someone has tried to modify Ganeti to be able to define more LVM partitions/volume groups for one instance, not only root and swap as was introduced in http://www.howtoforge.com/ganeti_xen_cluster_management_debian_etch

-s 5g --swap-size 256

Thanks for any answers.

Snow:)