Proxmox VE 2.x With Software Raid

 
Submitted by wrt54gl on Thu, 2012-05-03 18:07. :: KVM | OpenVZ | Storage | Virtualization


Proxmox Virtual Environment is an easy-to-use open-source virtualization platform for running virtual appliances and virtual machines. Proxmox does not officially support software RAID, but I have found software RAID to be very stable, and in some cases I have had better luck with it than with hardware RAID.

I do not issue any guarantee that this will work for you!

 

Overview

First, install Proxmox VE 2.x the normal way with the CD downloaded from Proxmox. Next, we create a RAID 1 array on the second hard drive and move the Proxmox install to it.

Then we adjust the GRUB settings so that the system boots with the new setup.

 

Credits

The following tutorials are what I used:

http://www.howtoforge.com/how-to-set-up-software-raid1-on-a-running-system-incl-grub2-configuration-debian-squeeze

A special thank you to Falko from HowtoForge, as a lot of this material is re-used from his howtos. http://www.howtoforge.com/linux_lvm

 

Installing Proxmox

Install Proxmox from the latest CD image downloaded from Proxmox: http://www.proxmox.com/downloads/proxmox-ve/17-iso-images

If you want an ext4 install, type this in at the boot prompt:

linux ext4

Installation instructions here: http://pve.proxmox.com/wiki/Quick_installation

Next, log in with SSH and run:

apt-get update
apt-get upgrade

 

Installing Raid

Note: this tutorial assumes that Proxmox was installed to /dev/sda and that the spare disk is /dev/sdb. Use the following command to list the current partitioning:

fdisk -l

The output should look as follows:

root@proxmox:/# fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009f7a7

Device Boot Start End Blocks Id System
/dev/sda1 * 1 66 523264 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 66 121602 976237568 8e Linux LVM

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00078af8

Device Boot Start End Blocks Id System

There is more output here, but we are only concerned with the first two disks for now. We can see that /dev/sda holds the Proxmox install and that /dev/sdb has no partitions.

First we install software RAID, aka mdraid:

apt-get install mdadm

In the package configuration window, choose OK, then All. Next we load the kernel modules with modprobe:

modprobe linear
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10

Now run:

cat /proc/mdstat

The output should look as follows:

root@proxmox:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
root@proxmox:~#

Now we need to copy the partition table from sda to sdb:

sfdisk -d /dev/sda | sfdisk --force /dev/sdb

The output should be:

root@proxmox:/# sfdisk -d /dev/sda | sfdisk --force /dev/sdb
Checking that no-one is using this disk right now ...
OK

Disk /dev/sdb: 121601 cylinders, 255 heads, 63 sectors/track
Old situation:
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start End #cyls #blocks Id System
/dev/sdb1 0 - 0 0 0 Empty
/dev/sdb2 0 - 0 0 0 Empty
/dev/sdb3 0 - 0 0 0 Empty
/dev/sdb4 0 - 0 0 0 Empty
New situation:
Units = sectors of 512 bytes, counting from 0

Device Boot Start End #sectors Id System
/dev/sdb1 * 2048 1048575 1046528 83 Linux
/dev/sdb2 1048576 1953523711 1952475136 8e Linux LVM
/dev/sdb3 0 - 0 0 Empty
/dev/sdb4 0 - 0 0 Empty
Warning: partition 1 does not end at a cylinder boundary
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)

Now we need to change the partition types to Linux raid autodetect:

fdisk /dev/sdb

root@proxmox:/# fdisk /dev/sdb

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): p
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00078af8

Device Boot Start End Blocks Id System
/dev/sdb1 * 1 66 523264 fd Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sdb2 66 121602 976237568 fd Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

As we can see, we now have two Linux raid autodetect partitions on /dev/sdb.

To make sure that there are no remains from previous RAID installations on /dev/sdb, we run the following commands:

mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2

If there are no remains from previous RAID installations, each of the above commands will throw an error like this one (which is nothing to worry about):

root@proxmox:~# mdadm --zero-superblock /dev/sdb1
mdadm: Unrecognised md component device - /dev/sdb1
root@proxmox:~#

Otherwise the commands will not display anything at all.

Now we need to create our new RAID arrays. The keyword missing leaves a slot open for the corresponding /dev/sda partition, which will be added to each array later:

mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2

This will show the following (answer y when asked):

root@proxmox:/# mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
root@proxmox:/#

The command

cat /proc/mdstat

should now show that you have two degraded RAID arrays ([_U] or [U_] means that an array is degraded, while [UU] means that the array is OK):

root@proxmox:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active (auto-read-only) raid1 sdb1[1]
523252 blocks super 1.2 [2/1] [_U]

md1 : active (auto-read-only) raid1 sdb2[1]
976236408 blocks super 1.2 [2/1] [_U]

unused devices: <none>
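If you want to script this check, the degraded state can be read out of /proc/mdstat. Below is a minimal sketch; the sample text stands in for the real /proc/mdstat output shown above (on a live system you would pipe cat /proc/mdstat into the awk command instead of echoing the sample):

```shell
# Sample /proc/mdstat-style text (mirrors the output shown above).
mdstat='md0 : active (auto-read-only) raid1 sdb1[1]
      523252 blocks super 1.2 [2/1] [_U]

md1 : active (auto-read-only) raid1 sdb2[1]
      976236408 blocks super 1.2 [2/1] [_U]'

# Remember each array name, then classify it by its [UU]/[_U] status field.
echo "$mdstat" | awk '
  /^md/            { dev = $1 }
  /\[[U_]+\]$/     { print dev, ($NF ~ /_/ ? "degraded" : "ok") }'
```

On the arrays above this prints "md0 degraded" and "md1 degraded"; once both mirrors are in place it would print "ok" instead.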

Next we must adjust /etc/mdadm/mdadm.conf (which doesn't contain any information about our new RAID arrays yet) to the new situation:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
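After the append, the tail of /etc/madm/mdadm.conf should contain one ARRAY line per array, similar to the following (the UUIDs and the name below are made-up placeholders; yours will differ):

```
ARRAY /dev/md/0 metadata=1.2 UUID=64ab85d4:b2f6f149:68e16a6b:e81d9a1c name=proxmox:0
ARRAY /dev/md/1 metadata=1.2 UUID=b2e26b51:7b2a7c1d:91c2e6f0:4d5a3b8e name=proxmox:1
```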

The standard Proxmox install uses /dev/sda1 for the boot partition and LVM on /dev/sda2 for the root, swap and data volumes.

If you are new to LVM, I recommend that you check out the link under Credits at the top of this howto. To see the LVM volumes, use the command:

lvscan

That should output:

root@proxmox:~# lvscan
ACTIVE '/dev/pve/swap' [15.00 GiB] inherit
ACTIVE '/dev/pve/root' [96.00 GiB] inherit
ACTIVE '/dev/pve/data' [804.02 GiB] inherit

Now we will create a new volume group named pve1 and matching logical volumes for swap, root, and data.

First the physical volume:

pvcreate /dev/md1

This outputs:

Writing physical volume data to disk "/dev/md1"
Physical volume "/dev/md1" successfully created

This command:

pvscan

shows our new physical volume:

PV /dev/sda2 VG pve lvm2 [931.01 GiB / 16.00 GiB free]
PV /dev/md1 lvm2 [931.01 GiB]
Total: 2 [1.82 TiB] / in use: 1 [931.01 GiB] / in no VG: 1 [931.01 GiB]
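The plan stated above (a pve1 volume group with swap, root and data volumes matching the originals) would continue roughly as follows. This is an untested sketch: the volume names mirror the stock pve group and the sizes are copied from the lvscan output earlier, so adjust both to your own disks before running anything:

```
vgcreate pve1 /dev/md1                  # new volume group on the RAID array
lvcreate --name swap --size 15G pve1    # matches /dev/pve/swap above
lvcreate --name root --size 96G pve1    # matches /dev/pve/root above
lvcreate --name data --size 804G pve1   # matches /dev/pve/data above
```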


Submitted by Johann (not registered) on Tue, 2013-10-01 21:30.

Just installed Proxmox VE 2.2 and I wanted to follow the guide, but the very first command hit a wall:

root@jigglypuff:~# apt-get install mdadm
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package mdadm is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'mdadm' has no installation candidate
root@jigglypuff:~#

No joy in this room right now.

Submitted by sanyi (not registered) on Wed, 2013-05-29 15:12.

This tutorial does not work with Proxmox VE 3.0.

But I found a perfectly functioning method described at:

 http://www.petercarrero.com/content/2012/04/22/adding-software-raid-proxmox-ve-20-install

 

Submitted by RedMD (registered user) on Tue, 2013-07-23 16:13.

I also started with this tutorial and eventually ended up using the tutorial from Peter Carrero's site posted above. If you're installing Proxmox VE 3.0, I'd start with Carrero's instructions. Just make sure you clear your arrays' superblocks (i.e. mdadm --zero-superblock /dev/sdb1) before creating your RAID arrays. Also, stick with ext3, as other commenters have noted. Much faster.

Submitted by flightlevers (registered user) on Wed, 2013-02-27 22:18.
Following these steps now. In the sfdisk block, it looks like you might have grabbed more than the results from the one command. I'm new at howtoforge so I'm not sure if I can fix it for you or if you own this page. 
Submitted by Rich H. (not registered) on Sun, 2012-11-25 18:17.
Bravo!  Excellent guide; installation worked perfectly as described.  This really should be an "official" proxmox how-to.
Submitted by Brblos (not registered) on Mon, 2012-07-02 19:05.
OK on first try.

Nice work. Thanks
Submitted by Martin Maurer (not registered) on Mon, 2012-05-07 12:14.
Proxmox is our company name, not the product name. Please correct the name of your guide to "Proxmox VE 2.x with Software Raid". Thanks, Martin
Submitted by admin (registered user) on Tue, 2012-05-08 10:51.
Thanks for the hint! I've just changed the title.