Virtualization With Xen On CentOS 6.3 (x86_64) (Paravirtualization & Hardware Virtualization)
Version 1.0
Author: Falko Timme
This tutorial provides step-by-step instructions on how to install Xen (version 4.1.x) on a CentOS 6.3 (x86_64) system.
Xen lets you create guest operating systems (*nix operating systems like Linux and FreeBSD), so-called "virtual machines" or domUs, under a host operating system (dom0). Using Xen you can separate your applications into different virtual machines that are totally independent from each other (e.g. a virtual machine for a mail server, a virtual machine for a high-traffic web site, another virtual machine that serves your customers' web sites, a virtual machine for DNS, etc.), but still use the same hardware. This saves money, and, more importantly, it's more secure: if the virtual machine of your DNS server gets hacked, it has no effect on your other virtual machines. Plus, you can move virtual machines from one Xen server to another.
I will use CentOS 6.3 (x86_64) for both the host OS (dom0) and the guest OS (domU).
This howto is meant as a practical guide; it does not cover the theoretical background, which is treated in many other documents on the web.
This document comes without warranty of any kind! This is not the only way of setting up such a system; there are many ways of achieving this goal, but this is the way I take. I do not issue any guarantee that this will work for you!
1 Preliminary Note
This guide will explain how to set up image-based virtual machines and also LVM-based virtual machines.
Make sure that SELinux is disabled or permissive:
vi /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
#       targeted - Only targeted network daemons are protected.
#       strict - Full SELinux protection.
SELINUXTYPE=targeted
If you had to modify /etc/sysconfig/selinux, please reboot the system:
reboot
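If you prefer to check and change the SELinux mode from the shell, here is a minimal sketch: getenforce prints the current mode, setenforce 0 switches a running system to permissive mode immediately, and the sed one-liner makes the change persistent. Note that on CentOS /etc/sysconfig/selinux is normally a symlink to /etc/selinux/config, so we edit the target directly (sed -i would otherwise replace the symlink with a plain file):
getenforce
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config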
2 Creating A Network Bridge
We need to set up a network bridge on our server so that our virtual machines can be accessed from other hosts as if they were physical systems in the network.
To do this, we install the package bridge-utils...
yum install bridge-utils
... and configure a bridge. Create the file /etc/sysconfig/network-scripts/ifcfg-br0 (please use the IPADDR, PREFIX, GATEWAY, DNS1 and DNS2 values from the /etc/sysconfig/network-scripts/ifcfg-eth0 file); make sure you use TYPE=Bridge, not TYPE=Ethernet:
vi /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE="br0" NM_CONTROLLED="yes" ONBOOT=yes TYPE=Bridge BOOTPROTO=none IPADDR=192.168.0.100 PREFIX=24 GATEWAY=192.168.0.1 DNS1=8.8.8.8 DNS2=8.8.4.4 DEFROUTE=yes IPV4_FAILURE_FATAL=yes IPV6INIT=no NAME="System br0" |
Modify /etc/sysconfig/network-scripts/ifcfg-eth0 as follows (comment out BOOTPROTO, IPADDR, PREFIX, GATEWAY, DNS1, and DNS2 and add BRIDGE=br0):
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0" #BOOTPROTO=none NM_CONTROLLED="yes" ONBOOT=yes TYPE="Ethernet" UUID="73cb0b12-1f42-49b0-ad69-731e888276ff" HWADDR=00:1E:90:F3:F0:02 #IPADDR=192.168.0.100 #PREFIX=24 #GATEWAY=192.168.0.1 #DNS1=8.8.8.8 #DNS2=8.8.4.4 DEFROUTE=yes IPV4_FAILURE_FATAL=yes IPV6INIT=no NAME="System eth0" BRIDGE=br0 |
Restart the network...
/etc/init.d/network restart
... and run
ifconfig
It should now show the network bridge (br0):
[root@server1 ~]# ifconfig
br0 Link encap:Ethernet HWaddr 00:1E:90:F3:F0:02
inet addr:192.168.0.100 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::21e:90ff:fef3:f002/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:27 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:460 (460.0 b) TX bytes:2298 (2.2 KiB)
eth0 Link encap:Ethernet HWaddr 00:1E:90:F3:F0:02
inet6 addr: fe80::21e:90ff:fef3:f002/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:18455 errors:0 dropped:0 overruns:0 frame:0
TX packets:11861 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:26163057 (24.9 MiB) TX bytes:1100370 (1.0 MiB)
Interrupt:25 Base address:0xe000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:5 errors:0 dropped:0 overruns:0 frame:0
TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2456 (2.3 KiB) TX bytes:2456 (2.3 KiB)
[root@server1 ~]#
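You can additionally verify that eth0 has been attached to the bridge with the brctl tool from the bridge-utils package we installed above:
brctl show
The output should look roughly like this (the bridge id is derived from your MAC address, so yours will differ):
bridge name     bridge id               STP enabled     interfaces
br0             8000.001e90f3f002       no              eth0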
3 Installing Xen
First check whether your CPU supports hardware virtualization. If it does, the command
egrep '(vmx|svm)' --color=always /proc/cpuinfo
should produce output similar to this:
[root@server1 ~]# egrep '(vmx|svm)' --color=always /proc/cpuinfo
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall
nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy misalignsse
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall
nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy misalignsse
[root@server1 ~]#
If nothing is displayed, then your processor doesn't support hardware virtualization. This means you can use only paravirtualization with Xen, but not hardware virtualization.
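To see at a glance which vendor extension your CPU offers, you can simply count the flag occurrences (one match per logical CPU; zero for both means only paravirtualization is available):
grep -c vmx /proc/cpuinfo
grep -c svm /proc/cpuinfo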
As CentOS 6 is based on Red Hat Enterprise Linux 6, and Red Hat has dropped Xen host (dom0) support in version 6, we need to get Xen from a third-party repository. We can enable the repo as follows:
yum install http://au1.mirror.crc.id.au/repo/kernel-xen-release-6-3.noarch.rpm
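To verify that the repository has been enabled, you can list the enabled repositories; I'm assuming here that the repo id contains "xen" - adjust the grep if it matches nothing:
yum repolist enabled | grep -i xen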
To install Xen, we now simply run
yum install kernel-xen xen
This installs Xen and a Xen kernel on our CentOS system.
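You can double-check that both packages are in place before touching the bootloader (the exact version numbers depend on the repository's current builds):
rpm -q kernel-xen xen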
Before we can boot the system with the Xen kernel, we need to check the GRUB bootloader configuration. Open /boot/grub/menu.lst:
vi /boot/grub/menu.lst
The first listed kernel should be the Xen kernel that you've just installed:
[...]
title CentOS (2.6.32.57-2.el6xen.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32.57-2.el6xen.x86_64 ro root=/dev/mapper/vg_server1-LogVol00 rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=de rd_LVM_LV=vg_server1/LogVol01 rd_LVM_LV=vg_server1/LogVol00 rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32.57-2.el6xen.x86_64.img
[...]
We need to modify that section so that the Xen hypervisor gets loaded first. In the kernel /vmlinuz... line, replace the first word kernel with module. Do the same in the next line: replace the first word initrd with module in the initrd /initramfs... line. Then add the line kernel /xen.gz dom0_mem=1024M cpufreq=xen dom0_max_vcpus=1 dom0_vcpus_pin after the root line and before the first module line (if you have more than one CPU core, you can specify a number other than 1 for dom0_max_vcpus). The final kernel section should look like this:
[...]
title CentOS (2.6.32.57-2.el6xen.x86_64)
        root (hd0,0)
        kernel /xen.gz dom0_mem=1024M cpufreq=xen dom0_max_vcpus=1 dom0_vcpus_pin
        module /vmlinuz-2.6.32.57-2.el6xen.x86_64 ro root=/dev/mapper/vg_server1-LogVol00 rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=de rd_LVM_LV=vg_server1/LogVol01 rd_LVM_LV=vg_server1/LogVol00 rd_NO_DM rhgb quiet
        module /initramfs-2.6.32.57-2.el6xen.x86_64.img
[...]
Change the value of default to 0 so that the first kernel entry (the Xen kernel) is booted by default:
[...]
default=0
[...]
The complete /boot/grub/menu.lst should look something like this:
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/mapper/vg_server1-LogVol00
#          initrd /initrd-[generic-]version.img
#boot=/dev/sde
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.32.57-2.el6xen.x86_64)
        root (hd0,0)
        kernel /xen.gz dom0_mem=1024M cpufreq=xen dom0_max_vcpus=1 dom0_vcpus_pin
        module /vmlinuz-2.6.32.57-2.el6xen.x86_64 ro root=/dev/mapper/vg_server1-LogVol00 rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=de rd_LVM_LV=vg_server1/LogVol01 rd_LVM_LV=vg_server1/LogVol00 rd_NO_DM rhgb quiet
        module /initramfs-2.6.32.57-2.el6xen.x86_64.img
title CentOS (2.6.32-279.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-279.el6.x86_64 ro root=/dev/mapper/vg_server1-LogVol00 rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=de rd_LVM_LV=vg_server1/LogVol01 rd_LVM_LV=vg_server1/LogVol00 rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-279.el6.x86_64.img
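Before moving on, it is worth sanity-checking the structure of the file. This quick grep - a sketch that assumes the indentation shown above - should print default=0 and, for the first title entry, a kernel line loading /xen.gz followed by two module lines:
grep -E '^(default|title|[[:space:]]*(kernel|module|initrd))' /boot/grub/menu.lst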
Before we reboot, we install the libvirt and python-virtinst packages (python-virtinst contains the virt-install tool that we will use later on to install Xen VMs):
yum install libvirt python-virtinst
Because the libvirt package from CentOS 6/Red Hat 6 has no support for Xen, we must rebuild it with Xen support. To do this, we first install a few prerequisites:
yum groupinstall 'Development Tools'
yum install python-devel xen-devel libxml2-devel xhtml1-dtds readline-devel ncurses-devel libtasn1-devel gnutls-devel augeas libudev-devel libpciaccess-devel yajl-devel sanlock-devel libpcap-devel libnl-devel avahi-devel libselinux-devel cyrus-sasl-devel parted-devel device-mapper-devel numactl-devel libcap-ng-devel netcf-devel libcurl-devel audit-libs-devel systemtap-sdt-devel libblkid-devel scrub
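Alternatively, if you would rather let yum resolve the build dependencies itself, the yum-builddep tool from the yum-utils package can read them straight out of the src.rpm that we download a few steps below (an optional convenience, not a required step):
yum install yum-utils
yum-builddep libvirt-0.9.10-21.el6.src.rpm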
Let's find out our libvirt version:
rpm -qa | grep libvirt
[root@server1 ~]# rpm -qa | grep libvirt
libvirt-client-0.9.10-21.el6_3.3.x86_64
libvirt-0.9.10-21.el6_3.3.x86_64
libvirt-python-0.9.10-21.el6_3.3.x86_64
[root@server1 ~]#
It's 0.9.10, so we download the appropriate src.rpm package into /root/src and install it:
mkdir /root/src
cd /root/src
wget http://vault.centos.org/6.3/os/Source/SPackages/libvirt-0.9.10-21.el6.src.rpm
rpm -i libvirt-0.9.10-21.el6.src.rpm
The last command will show some warnings that you can ignore:
warning: user mockbuild does not exist - using root
warning: group mockbuild does not exist - using root
Next we patch Xen support into the libvirt sources:
wget http://pasik.reaktio.net/xen/patches/libvirt-spec-rhel6-enable-xen.patch
cd /root/rpmbuild/SPECS
cp -a libvirt.spec libvirt.spec.orig
patch -p0 < ~/src/libvirt-spec-rhel6-enable-xen.patch
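Since we kept a pristine copy of the spec file, we can eyeball exactly what the patch changed before building (the hunks you see depend on the patch version you downloaded):
diff -u libvirt.spec.orig libvirt.spec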
Now we build a new libvirt package:
rpmbuild -bb libvirt.spec
At the end of the build process you should see something like this:
Wrote: /root/rpmbuild/RPMS/x86_64/libvirt-0.9.10-21.el6.x86_64.rpm
Wrote: /root/rpmbuild/RPMS/x86_64/libvirt-client-0.9.10-21.el6.x86_64.rpm
Wrote: /root/rpmbuild/RPMS/x86_64/libvirt-devel-0.9.10-21.el6.x86_64.rpm
Wrote: /root/rpmbuild/RPMS/x86_64/libvirt-lock-sanlock-0.9.10-21.el6.x86_64.rpm
Wrote: /root/rpmbuild/RPMS/x86_64/libvirt-python-0.9.10-21.el6.x86_64.rpm
Wrote: /root/rpmbuild/RPMS/x86_64/libvirt-debuginfo-0.9.10-21.el6.x86_64.rpm
Go to the directory where the new packages have been created (/root/rpmbuild/RPMS/x86_64/ in this case)...
cd /root/rpmbuild/RPMS/x86_64/
... and install the new libvirt packages (with Xen support) as follows:
rpm -Uvh --force libvirt-0.9.10-21.el6.x86_64.rpm libvirt-client-0.9.10-21.el6.x86_64.rpm libvirt-python-0.9.10-21.el6.x86_64.rpm
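A quick query confirms that the locally built packages have replaced the stock ones - the release should now read 21.el6 instead of 21.el6_3.3 for the three packages we replaced:
rpm -qa | grep libvirt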
Afterwards, we reboot the system:
reboot
The system should now automatically boot the new Xen kernel. After the system has booted, we can check that by running
uname -r
[root@server1 ~]# uname -r
2.6.32.57-2.el6xen.x86_64
[root@server1 ~]#
So it's really using the new Xen kernel!
We can now run
xm list
to check if Xen has started. It should list Domain-0 (dom0):
[root@server1 ~]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  1024     1     r-----     18.9
[root@server1 ~]#
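If you want more detail about the hypervisor itself (Xen version, total memory, number of CPUs, and so on), xm info prints it:
xm info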
From now on I will use the virsh command instead of xm to manage Xen VMs; this is the preferred way since we are using libvirt.
virsh list
should show this:
[root@server1 ~]# virsh list
Id Name State
----------------------------------------------------
0 Domain-0 running
[root@server1 ~]#
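You can also confirm that virsh is really talking to the Xen hypervisor rather than to qemu; the reported hypervisor should be Xen (the version numbers will match your installed packages):
virsh version
If virsh cannot connect, make sure the libvirtd daemon is running and will be started at boot:
/etc/init.d/libvirtd status
chkconfig libvirtd on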