Xen Cluster Management With Ganeti On Debian Lenny - Page 3
8 Setting Up An Instance
node1:
Now let's create our first virtual machine (called an instance in Ganeti speak), inst1.example.com. I want it to use DRBD (remote RAID1), with node2 as its primary node, a 5 GB hard drive, 256 MB of swap, and 256 MB of RAM. Again, we run the command on the cluster master, node1.example.com:
gnt-instance add -t drbd -n node2.example.com:node1.example.com -o debootstrap -s 5g --swap-size 256 -m 256 --kernel /boot/vmlinuz-`uname -r` --ip 192.168.0.105 inst1.example.com
(I've specified --kernel /boot/vmlinuz-`uname -r`; if you don't specify a kernel, Ganeti will use /boot/vmlinuz-2.6-xenU by default - see chapter 4.)
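Should you add more instances later, you can spread the load by making node1 the primary for some of them. Here's a hypothetical variant of the command above (inst2.example.com and the IP 192.168.0.106 are assumptions for illustration, not part of this setup):
gnt-instance add -t drbd -n node1.example.com:node2.example.com -o debootstrap -s 5g --swap-size 256 -m 256 --kernel /boot/vmlinuz-`uname -r` --ip 192.168.0.106 inst2.example.com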
This can take some time. The output looks like this:
node1:~# gnt-instance add -t drbd -n node2.example.com:node1.example.com -o debootstrap -s 5g --swap-size 256 -m 256 --kernel /boot/vmlinuz-`uname -r` --ip 192.168.0.105 inst1.example.com
* creating instance disks...
adding instance inst1.example.com to cluster config
- INFO: Waiting for instance inst1.example.com to sync disks.
- INFO: - device sda: 3.90% done, 971 estimated seconds remaining
- INFO: - device sdb: 17.00% done, 42 estimated seconds remaining
- INFO: - device sda: 9.00% done, 746 estimated seconds remaining
- INFO: - device sdb: 100.00% done, 0 estimated seconds remaining
- INFO: - device sda: 9.30% done, 727 estimated seconds remaining
- INFO: - device sda: 22.10% done, 786 estimated seconds remaining
- INFO: - device sda: 35.10% done, 224 estimated seconds remaining
- INFO: - device sda: 48.00% done, 205 estimated seconds remaining
- INFO: - device sda: 61.00% done, 183 estimated seconds remaining
- INFO: - device sda: 73.90% done, 120 estimated seconds remaining
- INFO: - device sda: 86.90% done, 36 estimated seconds remaining
- INFO: - device sda: 94.80% done, 344 estimated seconds remaining
- INFO: Instance inst1.example.com's disks are in sync.
creating os for instance inst1.example.com on node node2.example.com
* running the instance OS create scripts...
* starting instance...
node1:~#
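By the way, you can also watch the DRBD synchronization outside of Ganeti; on either node, /proc/drbd lists each DRBD resource and its sync progress:
cat /proc/drbd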
Ganeti has created a complete virtual machine (using Debian Lenny) which you can now use.
9 Configuring The Instance
node1:
To get to inst1.example.com's command line, run
gnt-instance console inst1.example.com
on node1.
You will notice that the console hangs, and you don't see a login prompt:
Checking file systems...fsck 1.41.3 (12-Oct-2008)
done.
Setting kernel variables (/etc/sysctl.conf)...done.
Mounting local filesystems...done.
Activating swapfile swap...done.
Setting up networking....
Configuring network interfaces...done.
INIT: Entering runlevel: 2
Starting enhanced syslogd: rsyslogd.
Starting periodic command scheduler: crond.
Shut down the instance...
gnt-instance shutdown inst1.example.com
... and start it with the --extra "xencons=tty1 console=tty1" parameter (do this every time you start the instance):
gnt-instance startup --extra "xencons=tty1 console=tty1" inst1.example.com
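If you don't want to type the --extra parameter every time, you can wrap the call in a small helper script on node1. Here's a minimal sketch (the script name and path are my own choice, not part of Ganeti):
#!/bin/sh
# /usr/local/sbin/start-instance - hypothetical helper script.
# Starts a Ganeti instance with the console parameters that make
# gnt-instance console show a login prompt.
if [ $# -ne 1 ]; then
    echo "Usage: $0 <instance-name>" >&2
    exit 1
fi
gnt-instance startup --extra "xencons=tty1 console=tty1" "$1"
After a chmod +x /usr/local/sbin/start-instance, running start-instance inst1.example.com does the same as the command above.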
Afterwards, connect to the console again...
gnt-instance console inst1.example.com
... and log in to inst1.example.com. Log in as root; there is no password. Therefore the first thing we do after logging in is set a password for root:
inst1.example.com:
passwd
Next we must add a stanza for eth0 to /etc/network/interfaces. Right now, inst1.example.com has no network connectivity because only lo (the loopback interface) is up.
As I said in chapter 1, I want inst1.example.com to have the IP address 192.168.0.105:
vi /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 192.168.0.105
        netmask 255.255.255.0
        network 192.168.0.0
        broadcast 192.168.0.255
        gateway 192.168.0.1
Restart the network afterwards:
/etc/init.d/networking restart
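You can then verify that eth0 came up with the correct address and that the gateway from the stanza above is reachable:
ifconfig eth0
ping -c 3 192.168.0.1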
Run
aptitude update
aptitude safe-upgrade
to update the instance, and then install OpenSSH, vim-nox, and udev:
aptitude install ssh openssh-server vim-nox udev
Before you connect to inst1.example.com using an SSH client such as PuTTY, open /etc/fstab...
vi /etc/fstab
... and add the following line (otherwise you will get the following error in your SSH client: Server refused to allocate pty):
[...]
none            /dev/pts        devpts  gid=5,mode=620  0       0
Then run
mount -a
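You can check that devpts is now mounted:
mount | grep devpts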
Now you can connect to inst1.example.com using an SSH client such as PuTTY on the IP address 192.168.0.105.
To leave inst1's console and get back to node1, press CTRL+] if you are at the console, or CTRL+5 if you're using PuTTY (this is the same as with Xen's xm commands instead of Ganeti).
10 Further Ganeti Commands
To learn more about what you can do with Ganeti, take a look at the following man pages:
man gnt-instance
man gnt-cluster
man gnt-node
man gnt-os
man gnt-backup
man 7 ganeti
man 7 ganeti-os-interface
and also at the Ganeti administrator's guide that comes with the Ganeti package (in /docs/admin.html). The Ganeti installation tutorial also has some hints.
The most interesting commands are these:
Start an instance:
gnt-instance startup inst1.example.com
Stop an instance:
gnt-instance shutdown inst1.example.com
Go to an instance's console:
gnt-instance console inst1.example.com
Fail over an instance to its secondary node (the instance will be stopped during this operation; running the command again fails it back to the original node):
gnt-instance failover inst1.example.com
Do a live migration of an instance to its secondary node (i.e., the instance keeps running):
gnt-instance migrate inst1.example.com
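Note that Xen live migration generally requires xend on both nodes to accept relocation requests. Here's a minimal sketch of the relevant settings in /etc/xen/xend-config.sxp (an empty xend-relocation-hosts-allow accepts requests from any host, so you should restrict it in production; restart xend with /etc/init.d/xend restart after changing the file):
(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-relocation-address '')
(xend-relocation-hosts-allow '')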
Delete an instance (this also removes its disks, and you will be asked for confirmation):
gnt-instance remove inst1.example.com
Get a list of instances:
gnt-instance list
node1:~# gnt-instance list
Instance OS Primary_node Status Memory
inst1.example.com debootstrap node2.example.com running 256
node1:~#
Get more details about instances:
gnt-instance info
node1:~# gnt-instance info
Instance name: inst1.example.com
State: configured to be up, actual state is up
Considered for memory checks in cluster verify: True
Nodes:
- primary: node2.example.com
- secondaries: node1.example.com
Operating system: debootstrap
Kernel path: /boot/vmlinuz-2.6.26-1-xen-686
initrd: (default: /boot/initrd-2.6-xenU)
Hardware:
- VCPUs: 1
- memory: 256MiB
- NICs: {MAC: aa:00:00:b5:00:8d, IP: 192.168.0.105, bridge: eth0}
Block devices:
- sda, type: drbd8, logical_id: (u'node2.example.com', u'node1.example.com', 11000)
primary: /dev/drbd0 (147:0) in sync, status ok
secondary: /dev/drbd0 (147:0) in sync, status ok
- type: lvm, logical_id: (u'xenvg', u'9c923acc-14b4-460d-946e-3b0d4d2e18e6.sda_data')
primary: /dev/xenvg/9c923acc-14b4-460d-946e-3b0d4d2e18e6.sda_data (253:2)
secondary: /dev/xenvg/9c923acc-14b4-460d-946e-3b0d4d2e18e6.sda_data (253:2)
- type: lvm, logical_id: (u'xenvg', u'4ffe2d67-584e-4581-9cd6-30da33c21b04.sda_meta')
primary: /dev/xenvg/4ffe2d67-584e-4581-9cd6-30da33c21b04.sda_meta (253:3)
secondary: /dev/xenvg/4ffe2d67-584e-4581-9cd6-30da33c21b04.sda_meta (253:3)
- sdb, type: drbd8, logical_id: (u'node2.example.com', u'node1.example.com', 11001)
primary: /dev/drbd1 (147:1) in sync, status ok
secondary: /dev/drbd1 (147:1) in sync, status ok
- type: lvm, logical_id: (u'xenvg', u'4caff02e-3864-47b3-ba58-b71854a7b7c0.sdb_data')
primary: /dev/xenvg/4caff02e-3864-47b3-ba58-b71854a7b7c0.sdb_data (253:4)
secondary: /dev/xenvg/4caff02e-3864-47b3-ba58-b71854a7b7c0.sdb_data (253:4)
- type: lvm, logical_id: (u'xenvg', u'51fb132b-083e-42e2-aefa-31fd485a8aab.sdb_meta')
primary: /dev/xenvg/51fb132b-083e-42e2-aefa-31fd485a8aab.sdb_meta (253:5)
secondary: /dev/xenvg/51fb132b-083e-42e2-aefa-31fd485a8aab.sdb_meta (253:5)
node1:~#
Get info about a cluster:
gnt-cluster info
node1:~# gnt-cluster info
Cluster name: cluster1.example.com
Master node: node1.example.com
Architecture (this node): 32bit (i686)
Cluster hypervisor: xen-3.0
node1:~#
Check if everything is all right with the cluster:
gnt-cluster verify
node1:~# gnt-cluster verify
* Verifying global settings
* Gathering data (2 nodes)
* Verifying node node1.example.com
* Verifying node node2.example.com
* Verifying instance inst1.example.com
* Verifying orphan volumes
* Verifying remaining instances
* Verifying N+1 Memory redundancy
* Other Notes
* Hooks Results
node1:~#
Find out which node is the cluster master:
gnt-cluster getmaster
node1:~# gnt-cluster getmaster
node1.example.com
node1:~#
Fail over the master role if the master has gone down (this makes the node on which the command is run the new master):
gnt-cluster masterfailover
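Afterwards, you can confirm that the role has moved by running the following on the new master:
gnt-cluster getmaster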
Find out about instance volumes on the cluster nodes:
gnt-node volumes
node1:~# gnt-node volumes
Node PhysDev VG Name Size Instance
node1.example.com /dev/sda2 vg0 root 28608 -
node1.example.com /dev/sda2 vg0 swap_1 952 -
node1.example.com /dev/sda3 xenvg 4caff02e-3864-47b3-ba58-b71854a7b7c0.sdb_data 256 inst1.example.com
node1.example.com /dev/sda3 xenvg 4ffe2d67-584e-4581-9cd6-30da33c21b04.sda_meta 128 inst1.example.com
node1.example.com /dev/sda3 xenvg 51fb132b-083e-42e2-aefa-31fd485a8aab.sdb_meta 128 inst1.example.com
node1.example.com /dev/sda3 xenvg 9c923acc-14b4-460d-946e-3b0d4d2e18e6.sda_data 5120 inst1.example.com
node2.example.com /dev/hda2 vg0 root 28608 -
node2.example.com /dev/hda2 vg0 swap_1 952 -
node2.example.com /dev/hda3 xenvg 4caff02e-3864-47b3-ba58-b71854a7b7c0.sdb_data 256 inst1.example.com
node2.example.com /dev/hda3 xenvg 4ffe2d67-584e-4581-9cd6-30da33c21b04.sda_meta 128 inst1.example.com
node2.example.com /dev/hda3 xenvg 51fb132b-083e-42e2-aefa-31fd485a8aab.sdb_meta 128 inst1.example.com
node2.example.com /dev/hda3 xenvg 9c923acc-14b4-460d-946e-3b0d4d2e18e6.sda_data 5120 inst1.example.com
node1:~#
Remove a node from the cluster (a node can only be removed if it is no longer the primary or secondary node of any instance):
gnt-node remove node2.example.com
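The reverse operation is gnt-node add; a node that has been removed (or a freshly installed one) can be joined back into the cluster by running the following on the master:
gnt-node add node2.example.com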
Find out about the operating systems supported by the cluster (currently only debootstrap):
gnt-os list
node1:~# gnt-os list
Name
debootstrap
node1:~#