Installing And Using OpenVZ On Ubuntu 13.04 (AMD64) - Page 2

3 Using OpenVZ

Before we can create virtual machines with OpenVZ, we need a template of the distribution that we want to use in the virtual machines in the /vz/template/cache directory (on Ubuntu this usually corresponds to /var/lib/vz/template/cache, with /vz being a symlink to it). The virtual machines will be created from that template.

You can find a list of precreated templates at http://wiki.openvz.org/Download/template/precreated. For example, we can download a minimal Debian Wheezy template (x86_64) as follows:

cd /vz/template/cache
wget http://download.openvz.org/template/precreated/contrib/debian-7.0-amd64-minimal.tar.gz

(If your host is an i386 system, you cannot use an amd64 template; you must use an i386 template instead!)

I will now show you the basic commands for using OpenVZ.

To set up a VPS from the debian-7.0-amd64-minimal template (you can find it in /vz/template/cache), run:

vzctl create 101 --ostemplate debian-7.0-amd64-minimal --config basic

The 101 must be a unique ID: each virtual machine must have its own ID. You can use the last part of the virtual machine's IP address for it. For example, if the virtual machine's IP address is 192.168.0.101, you use 101 as the ID.

If you want the vm to be started at boot, run

vzctl set 101 --onboot yes --save

To set a hostname and IP address for the vm, run:

vzctl set 101 --hostname test.example.com --save
vzctl set 101 --ipadd 192.168.0.101 --save

Next we allow the vm to use up to 120 non-TCP sockets (numothersock) and assign a few nameservers to it:

vzctl set 101 --numothersock 120 --save
vzctl set 101 --nameserver 8.8.8.8 --nameserver 8.8.4.4 --save

(Instead of using the vzctl set commands, you can also edit the vm's configuration file directly; it is stored in the /etc/vz/conf directory. If the ID of the vm is 101, the configuration file is /etc/vz/conf/101.conf.)
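
For example, the settings we have just made would show up in /etc/vz/conf/101.conf roughly like this (the exact values and formatting may differ slightly on your system):

ONBOOT="yes"
HOSTNAME="test.example.com"
IP_ADDRESS="192.168.0.101"
NAMESERVER="8.8.8.8 8.8.4.4"
NUMOTHERSOCK="120:120"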

To start the vm, run

vzctl start 101

To set a root password for the vm, execute

vzctl exec 101 passwd

You can now either connect to the vm via SSH (e.g. with PuTTY), or enter it from the host as follows:

vzctl enter 101

To leave the vm's console, type

exit

To stop a vm, run

vzctl stop 101

To restart a vm, run

vzctl restart 101

To delete a vm from the hard drive (it must be stopped before you can do this), run

vzctl destroy 101

To get a list of your vms and their statuses, run

vzlist -a

root@server1:~# vzlist -a
      CTID      NPROC STATUS    IP_ADDR         HOSTNAME
       101          8 running   192.168.0.101   test.example.com
root@server1:~#

To find out about the resources allocated to a vm, run

vzctl exec 101 cat /proc/user_beancounters

server1:~# vzctl exec 101 cat /proc/user_beancounters
Version: 2.5
       uid  resource           held    maxheld    barrier      limit    failcnt
      101:  kmemsize         500737     517142   11055923   11377049          0
            lockedpages           0          0        256        256          0
            privvmpages        2315       2337      65536      69632          0
            shmpages            640        640      21504      21504          0
            dummy                 0          0          0          0          0
            numproc               7          7        240        240          0
            physpages          1258       1289          0 2147483647          0
            vmguarpages           0          0      33792 2147483647          0
            oomguarpages       1258       1289      26112 2147483647          0
            numtcpsock            2          2        360        360          0
            numflock              1          1        188        206          0
            numpty                1          1         16         16          0
            numsiginfo            0          1        256        256          0
            tcpsndbuf         17856      17856    1720320    2703360          0
            tcprcvbuf         32768      32768    1720320    2703360          0
            othersockbuf       2232       2928    1126080    2097152          0
            dgramrcvbuf           0          0     262144     262144          0
            numothersock          1          3        120        120          0
            dcachesize            0          0    3409920    3624960          0
            numfile             189        189       9312       9312          0
            dummy                 0          0          0          0          0
            dummy                 0          0          0          0          0
            dummy                 0          0          0          0          0
            numiptent            10         10        128        128          0
server1:~#

The failcnt column is very important: it should contain only zeros. If it doesn't, the vm needs more resources than are currently allocated to it. Open the vm's configuration file in /etc/vz/conf, raise the appropriate resource, and restart the vm.
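
For example, if numothersock shows failures, you could raise its barrier and limit with vzctl set (the values below are just an example) and then restart the vm:

vzctl set 101 --numothersock 240:240 --save
vzctl restart 101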

To find out more about the vzctl command, run

man vzctl

 

3.1 Setting Quota Inside A Container

To enable quota inside a container (in this example it is the container with the ID 101), run the following commands from the host:

vzctl stop 101
vzctl set 101 --diskquota yes --save
vzctl set 101 --diskspace 10G --save
vzctl set 101 --diskinodes 200000:220000 --save
vzctl set 101 --quotatime 0 --save
vzctl set 101 --quotaugidlimit 1000 --save
vzctl start 101

You can adjust the values for diskspace and diskinodes to your needs. quotaugidlimit sets the maximum number of user/group IDs in a container for which disk quota will be accounted inside the container.
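
To verify the new disk space and inode limits, you can check the container's view of its filesystem from the host once it is running again:

vzctl exec 101 df -h
vzctl exec 101 df -i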

After the container has started, you must install the quota and quotatool packages inside the container:

apt-get install quota quotatool
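
Alternatively, you can run the installation from the host without entering the container:

vzctl exec 101 apt-get install -y quota quotatool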

Afterwards, the command...

repquota -avug

... should show the current quotas:

root@test:~# repquota -avug
*** Report for user quotas on device /dev/simfs
Block grace time: 00:00; Inode grace time: 00:00
                        Block limits                File limits
User            used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root      --  325500       0       0          14301     0     0
man       --     360       0       0             35     0     0
libuuid   --       4       0       0              1     0     0
messagebus --       4       0       0              1     0     0

Statistics:
Total blocks: 131590
Data blocks: 2
Entries: 4
Used average: 2.000000

*** Report for group quotas on device /dev/simfs
Block grace time: 00:00; Inode grace time: 00:00
                        Block limits                File limits
Group           used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root      --  325112       0       0          14251     0     0
adm       --      36       0       0             23     0     0
tty       --      40       0       0              9     0     0
disk      --       0       0       0             17     0     0
mail      --       4       0       0              1     0     0
kmem      --       0       0       0              3     0     0
shadow    --     124       0       0              5     0     0
utmp      --      16       0       0              4     0     0
staff     --      68       0       0             18     0     0
libuuid   --       4       0       0              1     0     0
ssh       --     128       0       0              1     0     0
messagebus --     292       0       0              2     0     0
crontab   --      44       0       0              3     0     0

Statistics:
Total blocks: 131590
Data blocks: 4
Entries: 13
Used average: 3.250000

root@test:~#
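
All quotas above are still at zero, i.e. unlimited. To actually limit a user inside the container, you can use the setquota command from the quota package (the user name and the limits below are just placeholders):

setquota -u someuser 500000 550000 10000 11000 /

This would give the hypothetical user someuser a soft/hard limit of 500000/550000 blocks and 10000/11000 inodes on the root filesystem; repquota -avug should then reflect the new limits.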

 

3.2 Creating A ploop Container

Creating a ploop container is not that much different from creating a normal, directory-based container - just make sure you use the --layout ploop switch and specify the diskspace (e.g. --diskspace 10G) when you create the container:

vzctl create 102 --layout ploop --diskspace 10G --ostemplate debian-7.0-amd64-minimal --config basic

Setting all other options is the same:

vzctl set 102 --onboot yes --save

vzctl set 102 --hostname test2.example.com --save
vzctl set 102 --ipadd 192.168.0.102 --save

vzctl set 102 --numothersock 120 --save
vzctl set 102 --nameserver 8.8.8.8 --nameserver 8.8.4.4 --save

vzctl start 102

vzctl exec 102 passwd

To enable quota inside a ploop container, we just need to set the quotaugidlimit option:

vzctl stop 102
vzctl set 102 --quotaugidlimit 1000 --save
vzctl start 102

After the container has started, you must install the quota and quotatool packages inside the container:

apt-get install quota quotatool

Afterwards, the command...

repquota -avug

... should show the current quotas (if not, restart the container):

root@test:~# repquota -avug
*** Report for user quotas on device /dev/simfs
Block grace time: 00:00; Inode grace time: 00:00
                        Block limits                File limits
User            used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root      --  325500       0       0          14301     0     0
man       --     360       0       0             35     0     0
libuuid   --       4       0       0              1     0     0
messagebus --       4       0       0              1     0     0

Statistics:
Total blocks: 131590
Data blocks: 2
Entries: 4
Used average: 2.000000

*** Report for group quotas on device /dev/simfs
Block grace time: 00:00; Inode grace time: 00:00
                        Block limits                File limits
Group           used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root      --  325112       0       0          14251     0     0
adm       --      36       0       0             23     0     0
tty       --      40       0       0              9     0     0
disk      --       0       0       0             17     0     0
mail      --       4       0       0              1     0     0
kmem      --       0       0       0              3     0     0
shadow    --     124       0       0              5     0     0
utmp      --      16       0       0              4     0     0
staff     --      68       0       0             18     0     0
libuuid   --       4       0       0              1     0     0
ssh       --     128       0       0              1     0     0
messagebus --     292       0       0              2     0     0
crontab   --      44       0       0              3     0     0

Statistics:
Total blocks: 131590
Data blocks: 4
Entries: 13
Used average: 3.250000

root@test:~#
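
One nice property of the ploop layout is that the container's virtual disk can be resized later on simply by setting a new diskspace value (the size below is just an example):

vzctl set 102 --diskspace 20G --save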

 


Comments

From: Anonymous at: 2013-07-23 09:50:09

I can't boot with the vz kernel.

I've got an error in the boot process:

ALERT!  /dev/mapper/ubuntu-root does not exist.  Dropping to a shell!

There is no /dev/mapper directory and no UUID for LVM in /dev/disk/by-uuid.
I think the vz kernel has no modules for LVM, so it can't mount the LVM root filesystem.
During the installation of the RPMs, we can see the messages below:
WARNING: could not open /lib/modules/2.6.32-042stab078.28/modules.builtin: No such file or directory
WARNING: could not open /tmp/mkinitramfs_vXC6YN/lib/modules/2.6.32-042stab078.28/modules.builtin: No such file or directory
So I think something is missing in this tutorial.
Any idea?

From: at: 2013-08-18 08:56:56

got the same problem bro :/

From: Michael H. Warfield at: 2013-07-23 17:24:30

Simple question.  Why?  As in "why bother?"

There's a simple reason the OpenVZ kernel is no longer included in Ubuntu.  The reason is LXC and Linux containers.  What real, practical, advantage does OpenVZ (more specifically the OpenVZ custom patched kernel) offer over that of the mainline 3.x kernel?

We now have cgroups and namespaces in the main-line kernel and the need for this custom patched ongoing maintenance headache (PITA) is largely alleviated.  The OpenVZ user space (vzctl et al) will even run over top of the main-line kernels (3.x and a few prior).  There are some limitations, but not many and generally requirement specific.  The OpenVZ developers have been contributing to the Linux kernel containers and namespace development effort.  I routinely see the same names on both -devel mailing lists.  Many of the remaining limitations are being addressed.

All that's even if you really want to run the OpenVZ user space. Quite frankly, I migrated off of OpenVZ over to LXC a couple of years ago and haven't looked back. While it has its limitations, LXC has some versatility that is sadly lacking in OpenVZ (like arbitrary container names, not just numerical IDs). Last time I looked (and this could have very easily changed over the last two years) the OpenVZ kernel patches were also not compatible with the cgroups options, and using an OpenVZ kernel disabled your ability to use cgroups for other process management. I would have hoped they would have resolved that by now, but it was true back then.

Linus made it abundantly clear years ago that the OpenVZ / Virtuoso patches would not be accepted into the upstream sources.  He had his reasons and the OpenVZ people shifted gears to support getting containers into the kernel.  That is where we are going.  The OpenVZ patched kernel has always lagged and will always lag behind the mainstream kernel and bug fixes will be similarly delayed.

The reference to the Linux Vserver project is also amusing (I used to use them too) since that project hasn't been updated in a couple of years. I abandoned that one years and years ago after they broke IPv6 networking, and found OpenVZ to be superior to that.

From: at: 2013-07-25 18:40:09

Thank you for the explanation!

From: Anonymous at: 2013-11-02 19:46:52


"While it has its limitations, LXC has some versatility that is sadly lacking in OpenVZ (like arbitrary container names, not just numerical IDs)."

Ok... So, I should migrate to LXC because I can give a container an arbitrary name?

That's a definitive reason... xD 

In my opinion, live migration, templates, and control panels completely put OpenVZ ahead of LXC. In the future, when LXC is more mature, it will rock, but right now LXC isn't for a production environment. Just for testing on that computer nobody uses...