Xen Live Migration Of An LVM-Based Virtual Machine With iSCSI On Debian Lenny - Page 3

Submitted by falko on Tue, 2009-04-28 17:20.

5 Creating Virtual Machines

We will use xen-tools to create virtual machines. xen-tools makes it very easy to create virtual machines; please read this tutorial to learn more: http://www.howtoforge.com/xen_tools_xen_shell_argo.

Now we edit /etc/xen-tools/xen-tools.conf. This file contains the default values that are used by the xen-create-image script unless you specify other values on the command line. I changed the following values and left the rest untouched:

server1/server2:

vi /etc/xen-tools/xen-tools.conf

[...]
lvm = vg_xen
[...]
dist   = lenny     # Default distribution to install.
[...]
gateway   = 192.168.0.1
netmask   = 255.255.255.0
broadcast = 192.168.0.255
[...]
passwd = 1
[...]
kernel      = /boot/vmlinuz-`uname -r`
initrd      = /boot/initrd.img-`uname -r`
[...]
mirror = http://ftp.de.debian.org/debian/
[...]
serial_device = hvc0
[...]
disk_device = xvda
[...]

Make sure that you uncomment the lvm line and fill in the name of the volume group on the shared storage (vg_xen). At the same time make sure that the dir line is commented out!
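A quick way to verify these two settings is a grep check. The sketch below uses a small sample file so it is self-contained; on a real host you would point $conf at /etc/xen-tools/xen-tools.conf instead:

```shell
# Create a small sample config to demonstrate the check; on a real host,
# set conf=/etc/xen-tools/xen-tools.conf instead.
conf=$(mktemp)
cat > "$conf" <<'EOF'
# dir = /home/xen
lvm = vg_xen
dist = lenny
EOF

# The lvm line must be active (uncommented) ...
grep -Eq '^lvm[[:space:]]*=' "$conf" && echo "lvm line is active"
# ... and the dir line must be commented out.
grep -Eq '^dir[[:space:]]*=' "$conf" || echo "dir line is commented out"
```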

dist specifies the distribution to be installed in the virtual machines (Debian Lenny) (there's a comment in the file that explains what distributions are currently supported).

The passwd = 1 line ensures that you are asked to specify a root password when you create a new guest domain.

In the mirror line specify a Debian mirror close to you.

Make sure you specify a gateway, netmask, and broadcast address. If you don't, and you don't specify a gateway and netmask on the command line when using xen-create-image, your guest domains won't have networking even if you specified an IP address!

It is very important that you add the line serial_device = hvc0 because otherwise your virtual machines might not boot properly!

Now let's create our first guest domain, vm1.example.com, with the IP address 192.168.0.103:

server1:

xen-create-image --hostname=vm1.example.com --size=4Gb --swap=256Mb --ip=192.168.0.103 --memory=128Mb --arch=amd64 --role=udev

server1:~# xen-create-image --hostname=vm1.example.com --size=4Gb --swap=256Mb --ip=192.168.0.103 --memory=128Mb --arch=amd64 --role=udev

General Information
--------------------
Hostname       :  vm1.example.com
Distribution   :  lenny
Partitions     :  swap            256Mb (swap)
                  /               4Gb   (ext3)
Image type     :  full
Memory size    :  128Mb
Kernel path    :  /boot/vmlinuz-2.6.26-1-xen-amd64
Initrd path    :  /boot/initrd.img-2.6.26-1-xen-amd64

Networking Information
----------------------
IP Address 1   : 192.168.0.103 [MAC: 00:16:3E:4D:61:B6]
Netmask        : 255.255.255.0
Broadcast      : 192.168.0.255
Gateway        : 192.168.0.1


Creating swap on /dev/vg_xen/vm1.example.com-swap
Done

Creating ext3 filesystem on /dev/vg_xen/vm1.example.com-disk
Done
Installation method: debootstrap
Done

Running hooks
Done

Role: udev
        File: /etc/xen-tools/role.d/udev
Role script completed.

Creating Xen configuration file
Done
Setting up root password
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
All done


Logfile produced at:
         /var/log/xen-tools/vm1.example.com.log
server1:~#

As you see, the command has created two new logical volumes in the vg_xen volume group, /dev/vg_xen/vm1.example.com-disk and /dev/vg_xen/vm1.example.com-swap.

There should now be a configuration file for the vm1.example.com Xen guest in the /etc/xen directory, vm1.example.com.cfg. Because we want to migrate the Xen guest from server1 to server2 later on, we must copy that configuration file to server2:

scp /etc/xen/vm1.example.com.cfg root@server2.example.com:/etc/xen/

Now we can start vm1.example.com:

xm create /etc/xen/vm1.example.com.cfg
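If you want to watch the guest boot, you can attach to its console (this requires a running Xen host, so it is shown here only as a sketch):

```shell
# Attach to the guest's console to watch it boot; press CTRL + ] to detach.
xm console vm1.example.com
```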

 

5.1 Moving Existing Virtual Machines To The vg_xen Volume Group

If you want to do live migration for existing virtual machines that are not stored on the iSCSI shared storage, you must move them to the vg_xen volume group first. You can do this with dd, no matter if the guests are image- or LVM-based. This tutorial should give you an idea of how to do this: Xen: How to Convert An Image-Based Guest To An LVM-Based Guest
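As a rough sketch of the procedure (all guest names and paths here are hypothetical; the linked tutorial covers the details): shut the guest down, create a matching logical volume in vg_xen, and copy the image over with dd:

```shell
# Hypothetical example: move the image-based guest vm2.example.com onto the
# shared vg_xen volume group. The guest must be shut down first.
xm shutdown vm2.example.com

# Create a target LV at least as large as the source image (4 GB here).
lvcreate -L 4G -n vm2.example.com-disk vg_xen

# Copy the raw image onto the new LV.
dd if=/home/xen/domains/vm2.example.com/disk.img \
   of=/dev/vg_xen/vm2.example.com-disk bs=4M

# Afterwards, point the disk line in /etc/xen/vm2.example.com.cfg at
# phy:/dev/vg_xen/vm2.example.com-disk instead of the file-backed image.
```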

 

6 Live Migration Of vm1.example.com From server1 To server2

To check if the live migration is really done "live", i.e. without interruption of the guest, you can log into vm1.example.com (e.g. with SSH) and ping another server:

vm1.example.com:

ping google.com

This will ping google.com until you press CTRL + C. The pinging should continue even during the live migration.

server1:

xm list

should show that vm1.example.com is currently running on server1:

server1:~# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  3628     2     r-----    115.6
vm1.example.com                              1   128     1     -b----      2.4
server1:~#

Before we migrate the virtual machine to server2, we must make sure that /dev/vg_xen/vm1.example.com-disk and /dev/vg_xen/vm1.example.com-swap are available on server2:

server2:

lvdisplay

server2:/etc/xen# lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg_xen/vm1.example.com-swap
  VG Name                vg_xen
  LV UUID                ubgqAl-YSmJ-BiVl-YLKc-t4Np-VPl2-WG5eFx
  LV Write Access        read/write
  LV Status              NOT available
  # open                 1
  LV Size                256.00 MB
  Current LE             64
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:3

  --- Logical volume ---
  LV Name                /dev/vg_xen/vm1.example.com-disk
  VG Name                vg_xen
  LV UUID                4zNxf2-Pt16-cQO6-sqmt-kfo9-uSQY-55WN76
  LV Write Access        read/write
  LV Status              NOT available
  # open                 1
  LV Size                4.00 GB
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:2

  --- Logical volume ---
  LV Name                /dev/vg0/root
  VG Name                vg0
  LV UUID                aQrAHn-ZqyG-kTQN-eYE9-2QBQ-IZMW-ERRvqv
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                100.00 GB
  Current LE             25600
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:0

  --- Logical volume ---
  LV Name                /dev/vg0/swap_1
  VG Name                vg0
  LV UUID                9gXmOT-KP9j-21yw-gJPS-lurt-QlNK-WAL8we
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.00 GB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:1

server2:/etc/xen#

As you see, the command shows NOT available for both volumes, so we must make them available:

lvscan
lvchange -a y /dev/vg_xen/vm1.example.com-disk
lvchange -a y /dev/vg_xen/vm1.example.com-swap
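Alternatively, assuming no other volumes in vg_xen need to stay inactive, you can activate every logical volume in the volume group with a single command:

```shell
# Activate all logical volumes in vg_xen in one go.
vgchange -a y vg_xen
```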

Now they should be available:

lvdisplay

server2:/etc/xen# lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg_xen/vm1.example.com-swap
  VG Name                vg_xen
  LV UUID                ubgqAl-YSmJ-BiVl-YLKc-t4Np-VPl2-WG5eFx
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                256.00 MB
  Current LE             64
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:3

  --- Logical volume ---
  LV Name                /dev/vg_xen/vm1.example.com-disk
  VG Name                vg_xen
  LV UUID                4zNxf2-Pt16-cQO6-sqmt-kfo9-uSQY-55WN76
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                4.00 GB
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:2

  --- Logical volume ---
  LV Name                /dev/vg0/root
  VG Name                vg0
  LV UUID                aQrAHn-ZqyG-kTQN-eYE9-2QBQ-IZMW-ERRvqv
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                100.00 GB
  Current LE             25600
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:0

  --- Logical volume ---
  LV Name                /dev/vg0/swap_1
  VG Name                vg0
  LV UUID                9gXmOT-KP9j-21yw-gJPS-lurt-QlNK-WAL8we
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.00 GB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:1

server2:/etc/xen#

xm list

should not yet list vm1.example.com on server2:

server2:~# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  3633     2     r-----     16.2
server2:~#

Now we can start the live migration:

server1:

xm migrate --live vm1.example.com server2.example.com

During the migration, the pings on vm1.example.com should continue, which shows that the guest keeps running throughout the migration process.
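If the migration is refused with a connection error, check that xend's relocation server is enabled on both hosts (this should already be the case if you followed the earlier pages of this series). The relevant lines in /etc/xen/xend-config.sxp look like this; the empty hosts-allow pattern permits all hosts and should be restricted to your own servers in production:

```
(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-relocation-address '')
(xend-relocation-hosts-allow '')
```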

Afterwards, take a look at

xm list

server1:~# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  3626     2     r-----    118.2
server1:~#

As you see, vm1.example.com isn't listed anymore on server1.

Let's check on server2:

server2:

xm list

server2:~# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  3633     2     r-----     19.4
vm1.example.com                              1   128     1     --p---      0.0
server2:~#

If everything went well, vm1.example.com should now be running on server2.

 

7 Links


Submitted by Anonymous (not registered) on Mon, 2010-06-14 23:56.

Hi There,

Is it safe to connect 2 clients to a single iSCSI target?

Submitted by alxgomz (not registered) on Sat, 2010-07-17 01:52.

I have almost the same set up with Fibre Channel instead of iSCSI and your question is a good question!

Connecting two hosts to a shared storage is not a problem by itself. You have to make sure your filesystem (if you put a filesystem directly on it) is cluster-aware.

With block devices (iSCSI, FC, AoE) it's the same thing.

Unless you use clvm, LVM is not cluster aware!

Using clvm instead is a mess: it requires the Red Hat Cluster Suite or OpenAIS (much simpler to install), but neither of those two cluster interfaces is stable enough (at least their API with LVM) to allow efficient administration (often LV operations just hang, so you have to restart OpenAIS)... This is a pity, but things are like this... Furthermore, using cLVM forbids the creation of snapshots, which is a really useful feature in virtualization environments!

As a workaround you can do what Falko and I did... use non-cluster-aware LVM in a clustered environment... but in this case you have to be really careful with what you do, or you may easily lose data!

Submitted by Wiebe Cazemier (not registered) on Fri, 2010-12-03 14:33.

Doesn't that mean that if you don't make 100% sure all VGs and LVs are known on the machine you're going to perform an LVM command on (like lvcreate), you can mess up your volumes?

I mean, if server2 is not aware of an LV recently created on server1, and you do lvcreate on server2, it can create it in used space in the VG, right?

Submitted by Daniel Bojczuk (not registered) on Wed, 2010-11-10 20:56.

Hi... I'm trying to use OCFS2 or GFS on my Gentoo+Xen setup, but I'm having trouble with both of them. I'm surprised that I can use LVM instead of a clustered filesystem. alxgomz wrote: "... but in this case you have to be really careful with what you do, or you may easily lose data!"

 Can you explain more about this? What do I need to do to avoid losing data?

 Many thanks,

Submitted by sam (not registered) on Sun, 2009-05-03 11:43.
Hi there, thanks for such an excellent article, falko. I wondered, do you have an article on converting a physical Debian Etch machine to a Xen VM? Not necessarily a live migration, but perhaps a best-practice guide for doing this. I am wanting to take an over-specced 1U server which is a LAMP server and move this to a Xen VM on another machine altogether. I frequently look at this site and have found it invaluable. Kind regards, Sam
Submitted by falcon (not registered) on Tue, 2009-04-28 23:53.

Hello, thanks for the howto.

However, the URL to xensource is no longer valid: xensource.com redirects you to the Citrix website.