Virtualization With KVM On A CentOS 5.2 Server - Page 3
6 Managing A KVM Guest
CentOS 5.2 KVM Host:
KVM guests can be managed through virsh, the "virtual shell". To connect to the virtual shell, run:
virsh --connect qemu:///system
This is how the virtual shell looks:
[root@server1 ~]# virsh --connect qemu:///system
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh #
You can now type in commands on the virtual shell to manage your guests. Run
help
to get a list of available commands:
virsh # help
Commands:
help print help
attach-device attach device from an XML file
attach-disk attach disk device
attach-interface attach network interface
autostart autostart a domain
capabilities capabilities
connect (re)connect to hypervisor
console connect to the guest console
create create a domain from an XML file
start start a (previously defined) inactive domain
destroy destroy a domain
detach-device detach device from an XML file
detach-disk detach disk device
detach-interface detach network interface
define define (but don't start) a domain from an XML file
domid convert a domain name or UUID to domain id
domuuid convert a domain name or id to domain UUID
dominfo domain information
domname convert a domain id or UUID to domain name
domstate domain state
domblkstat get device block stats for a domain
domifstat get network interface stats for a domain
dumpxml domain information in XML
freecell NUMA free memory
hostname print the hypervisor hostname
list list domains
migrate migrate domain to another host
net-autostart autostart a network
net-create create a network from an XML file
net-define define (but don't start) a network from an XML file
net-destroy destroy a network
net-dumpxml network information in XML
net-list list networks
net-name convert a network UUID to network name
net-start start a (previously defined) inactive network
net-undefine undefine an inactive network
net-uuid convert a network name to network UUID
nodeinfo node information
quit quit this interactive terminal
reboot reboot a domain
restore restore a domain from a saved state in a file
resume resume a domain
save save a domain state to a file
schedinfo show/set scheduler parameters
dump dump the core of a domain to a file for analysis
shutdown gracefully shutdown a domain
setmem change memory allocation
setmaxmem change maximum memory limit
setvcpus change number of virtual CPUs
suspend suspend a domain
ttyconsole tty console
undefine undefine an inactive domain
uri print the hypervisor canonical URI
vcpuinfo domain vcpu information
vcpupin control domain vcpu affinity
version show version
vncdisplay vnc display
virsh #
list
shows all running guests;
list --all
shows all guests, running and inactive:
virsh # list --all
Id Name State
----------------------------------
2 vm10 running
virsh #
If you modify a guest's xml file (located in the /etc/libvirt/qemu/ directory), you must redefine the guest:
define /etc/libvirt/qemu/vm10.xml
Please note that whenever you modify the guest's xml file in /etc/libvirt/qemu/, you must run the define command again - otherwise libvirt will keep using the old definition!
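If you are unsure what the file currently contains, you can also dump the live definition first and edit that copy - a minimal sketch of the edit-and-redefine cycle (vm10 is this tutorial's example guest; adapt the name to yours):

```shell
# dump the current definition to the file libvirt reads from
virsh --connect qemu:///system dumpxml vm10 > /etc/libvirt/qemu/vm10.xml
# edit it, e.g. to change the memory allocation or attached devices
vi /etc/libvirt/qemu/vm10.xml
# make libvirt pick up the changes
virsh --connect qemu:///system define /etc/libvirt/qemu/vm10.xml
```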
To start a stopped guest, run:
start vm10
To stop a guest, run:
shutdown vm10
To immediately stop it (i.e., pull the power plug), run:
destroy vm10
Suspend a guest:
suspend vm10
Resume a guest:
resume vm10
These are the most important commands.
Type
quit
to leave the virtual shell.
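All of these commands can also be run non-interactively by appending them to the virsh invocation on the host shell. For example (vm10 as above; the autostart line is optional - it makes the guest come up automatically with the host):

```shell
# list all guests without entering the interactive shell
virsh --connect qemu:///system list --all
# show details for a single guest
virsh --connect qemu:///system dominfo vm10
# start vm10 automatically whenever the host boots
virsh --connect qemu:///system autostart vm10
```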
7 Creating An LVM-Based Guest
CentOS 5.2 KVM Host:
LVM-based guests have some advantages over image-based guests: they cause less hard disk I/O, and they are easier to back up (using LVM snapshots).
To use LVM-based guests, you need a volume group that has some free space that is not allocated to any logical volume. In this example, I use the volume group /dev/VolGroup00 with a size of approx. 148GB...
vgdisplay
[root@server1 ~]# vgdisplay
/dev/hda: open failed: No medium found
--- Volume group ---
VG Name VolGroup00
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 148.53 GB
PE Size 32.00 MB
Total PE 4753
Alloc PE / Size 968 / 30.25 GB
Free PE / Size 3785 / 118.28 GB
VG UUID 5faE1k-DkMu-JUEk-K0JV-B9ta-Nyaf-n7tngf
[root@server1 ~]#
... that contains the logical volume /dev/VolGroup00/LogVol00 with a size of approx. 30GB and the logical volume /dev/VolGroup00/LogVol01 (about 1GB) - the rest is not allocated and can be used for KVM guests:
lvdisplay
[root@server1 ~]# lvdisplay
/dev/hda: open failed: No medium found
--- Logical volume ---
LV Name /dev/VolGroup00/LogVol00
VG Name VolGroup00
LV UUID qzC8v6-cLyi-Pr4g-BjJv-35Xr-cEJM-LBVs7G
LV Write Access read/write
LV Status available
# open 1
LV Size 29.28 GB
Current LE 937
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Logical volume ---
LV Name /dev/VolGroup00/LogVol01
VG Name VolGroup00
LV UUID xA3e1Z-mEc9-rGT1-WcAu-TjF4-lbf3-6LvFaj
LV Write Access read/write
LV Status available
# open 1
LV Size 992.00 MB
Current LE 31
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
[root@server1 ~]#
I will now create the virtual machine vm11 as an LVM-based guest. I want vm11 to have 20GB of disk space, so I create the logical volume /dev/VolGroup00/vm11 with a size of 20GB:
lvcreate -L20G -n vm11 VolGroup00
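As a quick sanity check against the vgdisplay output above: with a physical extent size of 32 MB, a 20 GB logical volume occupies 20 x 1024 / 32 = 640 extents, well within the 3785 free extents of VolGroup00:

```shell
# PE size of VolGroup00 (in MB) and requested LV size (in GB), per vgdisplay
PE_MB=32
LV_GB=20
echo "$(( LV_GB * 1024 / PE_MB )) extents"   # prints "640 extents"
```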
Afterwards, we use the virt-install command again to create the guest:
virt-install --connect qemu:///system -n vm11 -r 512 --vcpus=2 -f /dev/VolGroup00/vm11 -c ~/debian-500-amd64-netinst.iso --vnc --noautoconsole --os-type linux --os-variant generic26 --accelerate --network=bridge:br0 --hvm
Please note that instead of -f ~/vm11.qcow2 I use -f /dev/VolGroup00/vm11, and I don't need the -s switch to define the disk space anymore because the disk space is defined by the size of the logical volume vm11 (20GB).
Now follow chapter 5 to install that guest.
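To illustrate the snapshot-based backup advantage mentioned at the start of this chapter, here is a minimal sketch (the snapshot name, snapshot size, and backup path are assumptions for illustration; shut down or suspend the guest first if you need a fully consistent image):

```shell
# create a temporary snapshot of vm11's volume (1GB buffer for changes)
lvcreate -L1G -s -n vm11-snap /dev/VolGroup00/vm11
# copy the frozen state of the guest's disk to a backup file
dd if=/dev/VolGroup00/vm11-snap of=/backup/vm11.img bs=1M
# remove the snapshot again when the copy is done
lvremove -f /dev/VolGroup00/vm11-snap
```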
8 Links
- KVM: http://kvm.qumranet.com/
- CentOS: http://www.centos.org/
- Debian: http://www.debian.org/
- Ubuntu: http://www.ubuntu.com/