Xen Live Migration Of An LVM-Based Virtual Machine With iSCSI On Debian Lenny

Version 1.0
Author: Falko Timme
Last edited 04/16/2009

This guide explains how you can do a live migration of an LVM-based virtual machine (domU) from one Xen host to the other. I will use iSCSI to provide shared storage for the virtual machines in this tutorial. Both Xen hosts and the iSCSI target are running on Debian Lenny in this article.

I do not issue any guarantee that this will work for you!

 

1 Preliminary Note

I'm using the following systems here:

  • Xen host 1: server1.example.com, IP address: 192.168.0.100
  • Xen host 2: server2.example.com, IP address: 192.168.0.101
  • iSCSI target (shared storage): iscsi.example.com, IP address: 192.168.0.102
  • virtual machine: vm1.example.com, IP address: 192.168.0.103

I will use LVM on the shared storage so that I can create/use LVM-based Xen guests.

The two Xen hosts and the iSCSI target should have the following lines in /etc/hosts (unless you have a DNS server that resolves the hostnames):

vi /etc/hosts

127.0.0.1       localhost.localdomain   localhost
192.168.0.100   server1.example.com     server1
192.168.0.101   server2.example.com     server2
192.168.0.102   iscsi.example.com       iscsi
192.168.0.103   vm1.example.com         vm1
[...]

 

2 Xen Setup

server1/server2:

The two Xen hosts should be set up according to chapter two of this tutorial: Virtualization With Xen On Debian Lenny (AMD64)

To allow live migration of virtual machines, we must enable the following settings in /etc/xen/xend-config.sxp...

vi /etc/xen/xend-config.sxp

[...]
(xend-relocation-server yes)
[...]
(xend-relocation-port 8002)
[...]
(xend-relocation-address '')
[...]
(xend-relocation-hosts-allow '')
[...]

... and restart Xen:

/etc/init.d/xend restart
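
Note that (xend-relocation-hosts-allow '') accepts relocation connections from any host, which is fine on a trusted LAN but not elsewhere. The setting takes a space-separated list of regular expressions that are matched against the hostname or IP address of the connecting host; as a hardening sketch for the addresses used in this setup, you could restrict it to the two Xen hosts (and restart xend again afterwards):

(xend-relocation-hosts-allow '^192\\.168\\.0\\.100$ ^192\\.168\\.0\\.101$')

You can verify that the relocation server is listening on port 8002:

netstat -tap | grep 8002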

 

3 Setting Up The iSCSI Target (Shared Storage)

iscsi.example.com:

Now we set up the target. The target will provide shared storage for server1 and server2, i.e., the Xen virtual machines will be stored on the shared storage.

aptitude install iscsitarget iscsitarget-modules-`uname -r`

Open /etc/default/iscsitarget...

vi /etc/default/iscsitarget

... and set ISCSITARGET_ENABLE to true:

ISCSITARGET_ENABLE=true

We can use unused logical volumes, image files, hard drives (e.g. /dev/sdb), hard drive partitions (e.g. /dev/sdb1) or RAID devices (e.g. /dev/md0) for the storage. In this example I will create a logical volume of 20GB named storage_lun1 in the volume group vg0:

lvcreate -L20G -n storage_lun1 vg0
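
You can double-check the new volume before exporting it:

lvdisplay /dev/vg0/storage_lun1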

(If you want to use an image file, you can create it as follows:

mkdir /storage
dd if=/dev/zero of=/storage/lun1.img bs=1024k count=20000

This creates the image file /storage/lun1.img with a size of 20GB.

)

Next we edit /etc/ietd.conf...

vi /etc/ietd.conf

... and comment out everything in that file. At the end we add the following stanza:

[...]
Target iqn.2001-04.com.example:storage.lun1
        IncomingUser someuser secret
        OutgoingUser
        Lun 0 Path=/dev/vg0/storage_lun1,Type=fileio
        Alias LUN1
        #MaxConnections  6

The target name must be globally unique; the iSCSI standard defines the "iSCSI Qualified Name" as follows: iqn.yyyy-mm.<reversed domain name>[:identifier], where yyyy-mm is the date (year and month) at which the domain is valid and the identifier is freely selectable. The IncomingUser line contains a username and a password so that only the initiators (clients) that provide this username and password can log in and use the storage device; if you don't need authentication, don't specify a username and password in the IncomingUser line. In the Lun line, we must specify the full path to the storage device (e.g. /dev/vg0/storage_lun1, /storage/lun1.img, /dev/sdb, etc.).
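A note on the Type parameter: fileio accesses the backing store through the page cache and works for both image files and block devices. For a raw block device like our logical volume, iscsitarget also understands blockio, which bypasses the cache; whether that helps depends on your workload and iscsitarget version, so take the following alternative Lun line as a sketch rather than a recommendation:

        Lun 0 Path=/dev/vg0/storage_lun1,Type=blockio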

Now we tell the target that we want to allow connections to the device iqn.2001-04.com.example:storage.lun1 from the IP addresses 192.168.0.100 (server1.example.com) and 192.168.0.101 (server2.example.com)...

vi /etc/initiators.allow

[...]
iqn.2001-04.com.example:storage.lun1 192.168.0.100, 192.168.0.101

... and start the target:

/etc/init.d/iscsitarget start
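
To verify that the target is running and the LUN is exported, you can look at the proc interface of iscsitarget (the session list will stay empty until the initiators on server1 and server2 log in):

cat /proc/net/iet/volume
cat /proc/net/iet/session

Also note that /etc/initiators.allow follows tcp-wrappers-style semantics: a host that matches no rule in /etc/initiators.allow may still be admitted unless it is also rejected in /etc/initiators.deny. If you want to be strict, a sketch (assuming the default file locations of the Lenny iscsitarget package) would be to deny everyone else:

vi /etc/initiators.deny

iqn.2001-04.com.example:storage.lun1 ALL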


Comments

From: sam at: 2009-05-03 10:43:28

Hi there, thanks for such an excellent article, Falko. I wondered: do you have an article on converting a physical Debian Etch machine to a Xen VM – not necessarily a live migration, but perhaps a best-practice guide for doing this? I want to take an over-specced 1U server, which is a LAMP server, and move it to a Xen VM on another machine altogether. I frequently look at this site and have found it invaluable. Kind regards, Sam

From: falcon at: 2009-04-28 22:53:24

Hello, thanks for the howto.

However, the URL to xensource is no longer valid; xensource.com redirects you to the Citrix website.

From: Anonymous at: 2010-06-14 22:56:08

Hi there,

Is it safe to connect 2 clients to a single iSCSI target?

From: alxgomz at: 2010-07-17 00:52:16

I have almost the same setup with Fibre Channel instead of iSCSI, and your question is a good one!

Connecting two hosts to a shared storage is not a problem by itself. You have to make sure your filesystem (if you put a filesystem directly on the device) is cluster-aware.

With block devices (iSCSI, FC, AoE) it's the same thing.

Unless you use clvm, LVM is not cluster-aware!

Using clvm instead is a mess: it requires the Red Hat cluster suite or openais (much simpler to install), but neither of those two cluster interfaces is stable enough (at least their API with LVM) to allow efficient administration (often LV operations just hang, so you have to restart openais)... This is a pity, but things are like this... Furthermore, using clvm forbids the creation of snapshots, which is a really useful feature in virtualization environments!

As a workaround you can do what Falko and I did: use non-cluster-aware LVM in a clustered environment... but in this case you have to be really careful with what you do, or you may easily lose data!

From: Daniel Bojczuk at: 2010-11-10 19:56:20

Hi... I'm trying to use OCFS2 or GFS on my Gentoo+Xen, but I'm having trouble with both of them. I'm surprised that I can use LVM instead of a clustered filesystem. alxgomz wrote: "... but in this case you have to be really careful with what you do, or you may easily lose data!"

Can you explain more about this? What do I need to do to never lose data?

Many thanks,

From: Wiebe Cazemier at: 2010-12-03 13:33:38

Doesn't that mean that if you don't make 100% sure all VGs and LVs are known on the machine you're going to perform an LVM command on (like lvcreate), you can mess up your volumes?

I mean, if server2 is not aware of the LV recently made with server1, and you do lvcreate on server2, it can create it in used space in the VG, right?