Proxmox VE 2.x With Software Raid

Proxmox Virtual Environment is an easy-to-use open source virtualization platform for running virtual appliances and virtual machines. Proxmox does not officially support software RAID, but I have found software RAID to be very stable and in some cases have had better luck with it than with hardware RAID.

I do not issue any guarantee that this will work for you!

 

Overview

First, install Proxmox VE 2.x the normal way from the CD downloaded from Proxmox. Next we create a degraded RAID 1 array on the second hard drive and move the Proxmox install onto it.

Then we adjust the GRUB settings so the system will boot from the new setup.

 

Credits

The following tutorials are what I used:

http://www.howtoforge.com/how-to-set-up-software-raid1-on-a-running-system-incl-grub2-configuration-debian-squeeze

A special thank you to Falko from HowtoForge, as a lot of this material is re-used from his howtos. http://www.howtoforge.com/linux_lvm

 

Installing Proxmox

Install Proxmox from the latest CD image downloaded from Proxmox: http://www.proxmox.com/downloads/proxmox-ve/17-iso-images

If you want an ext4 install, type this at the boot prompt:

linux ext4

Installation instructions here: http://pve.proxmox.com/wiki/Quick_installation

Next, log in with SSH and run:

apt-get update
apt-get upgrade

 

Installing Raid

Note: this tutorial assumes that Proxmox was installed to /dev/sda and that the spare disk is /dev/sdb. Use the following command to list the current partitioning:

fdisk -l

The output should look as follows:

root@proxmox:/# fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009f7a7

Device Boot Start End Blocks Id System
/dev/sda1 * 1 66 523264 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 66 121602 976237568 8e Linux LVM

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00078af8

Device Boot Start End Blocks Id System

There is more output here, but we are only concerned with the first two disks for now. We can see that /dev/sda holds the Proxmox install and /dev/sdb has no partitions.

First we install the software RAID tools (mdadm), aka mdraid:

apt-get install mdadm

In the package configuration window, choose OK and then all. Next we load the kernel modules with modprobe:

modprobe linear
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10
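
If you want the raid1 module loaded automatically at every boot, you can also append it to /etc/modules. This is an optional extra of mine rather than part of the original steps; the mdadm package normally takes care of module loading on its own:

echo raid1 >> /etc/modules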

Now run:

cat /proc/mdstat

The output should look as follows:

root@proxmox:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
root@proxmox:~#

Now we need to copy the partition table from sda to sdb:

sfdisk -d /dev/sda | sfdisk --force /dev/sdb

The output should be:

root@proxmox:/# sfdisk -d /dev/sda | sfdisk --force /dev/sdb
Checking that no-one is using this disk right now ...
OK

Disk /dev/sdb: 121601 cylinders, 255 heads, 63 sectors/track
Old situation:
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start End #cyls #blocks Id System
/dev/sdb1 0 - 0 0 0 Empty
/dev/sdb2 0 - 0 0 0 Empty
/dev/sdb3 0 - 0 0 0 Empty
/dev/sdb4 0 - 0 0 0 Empty
New situation:
Units = sectors of 512 bytes, counting from 0

Device Boot Start End #sectors Id System
/dev/sdb1 * 2048 1048575 1046528 83 Linux
/dev/sdb2 1048576 1953523711 1952475136 8e Linux LVM
/dev/sdb3 0 - 0 0 Empty
/dev/sdb4 0 - 0 0 Empty
Warning: partition 1 does not end at a cylinder boundary
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
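
A side note: sfdisk only handles MBR (msdos) partition tables, which is what the default Proxmox install uses. If your disks use GPT instead (for example, disks larger than 2 TB), the table can be copied with sgdisk from the gdisk package. This is an assumption of mine and not part of the original procedure:

apt-get install gdisk
sgdisk -R=/dev/sdb /dev/sda
sgdisk -G /dev/sdb

The second command replicates sda's partition table onto sdb; the third gives sdb new random disk and partition GUIDs so the two disks do not clash.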

Now we need to change the partition types to Linux raid autodetect:

fdisk /dev/sdb

root@proxmox:/# fdisk /dev/sdb

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): p
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00078af8

Device Boot Start End Blocks Id System
/dev/sdb1 * 1 66 523264 fd Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sdb2 66 121602 976237568 fd Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

As we can see, we now have two Linux raid autodetect partitions on /dev/sdb.

To make sure that there are no remains from previous RAID installations on /dev/sdb, we run the following commands:

mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2

If there are no remains from previous RAID installations, each of the above commands will throw an error like this one (which is nothing to worry about):

root@proxmox:~# mdadm --zero-superblock /dev/sdb1
mdadm: Unrecognised md component device - /dev/sdb1
root@proxmox:~#

Otherwise the commands will not display anything at all.
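
If you would rather check first whether an old RAID superblock is actually present (purely optional), mdadm can examine the partition; on a clean partition it reports "mdadm: No md superblock detected on /dev/sdb1.":

mdadm --examine /dev/sdb1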

Now we need to create our new RAID arrays:

mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2

This will show the following (answer y to continue):

root@proxmox:/# mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
root@proxmox:/#
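
About the metadata warning: this guide later adjusts GRUB 2, which can handle md/v1.x metadata, so answering y is fine here. Only if your boot loader cannot read md/v1.x metadata would you create the array holding /boot with the older 0.90 format instead, for example:

mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 missing /dev/sdb1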

The command

cat /proc/mdstat

should now show that you have two degraded RAID arrays ([_U] or [U_] means that an array is degraded, while [UU] means that the array is ok):

root@proxmox:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active (auto-read-only) raid1 sdb1[1]
523252 blocks super 1.2 [2/1] [_U]

md1 : active (auto-read-only) raid1 sdb2[1]
976236408 blocks super 1.2 [2/1] [_U]

unused devices: <none>

Next we must adjust /etc/mdadm/mdadm.conf (which doesn't contain any information about our new RAID arrays yet) to the new situation:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
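
Optionally (my own addition, not strictly required at this point), you can rebuild the initramfs now so that the updated mdadm.conf is picked up and the arrays get assembled early during boot:

update-initramfs -u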

The standard Proxmox install uses /dev/sda1 for the boot partition and LVM on /dev/sda2 for the root, swap, and data volumes.

If you are new to LVM, I recommend checking out the link under Credits at the top of this howto. To see the logical volumes, use the command:

lvscan

That should output:

root@proxmox:~# lvscan
ACTIVE '/dev/pve/swap' [15.00 GiB] inherit
ACTIVE '/dev/pve/root' [96.00 GiB] inherit
ACTIVE '/dev/pve/data' [804.02 GiB] inherit

Now we will create a new volume group named pve1 and matching logical volumes for swap, root, and data.

First the physical volume:

pvcreate /dev/md1

This outputs:

Writing physical volume data to disk "/dev/md1"
Physical volume "/dev/md1" successfully created

This command:

pvscan

shows our new physical volume:

PV /dev/sda2 VG pve lvm2 [931.01 GiB / 16.00 GiB free]
PV /dev/md1 lvm2 [931.01 GiB]
Total: 2 [1.82 TiB] / in use: 1 [931.01 GiB] / in no VG: 1 [931.01 GiB]
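
To give a rough idea of the next step: the new pve1 volume group and its logical volumes are created on /dev/md1 with vgcreate and lvcreate. The sketch below simply mirrors the sizes shown by lvscan above; adjust them to your own disks, and use mkfs.ext3 instead of mkfs.ext4 if you installed with ext3:

vgcreate pve1 /dev/md1
lvcreate --name swap --size 15G pve1
lvcreate --name root --size 96G pve1
lvcreate --name data --size 804G pve1
mkswap /dev/pve1/swap
mkfs.ext4 /dev/pve1/root
mkfs.ext4 /dev/pve1/data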


Comments

From: Martin Maurer at: 2012-05-07 11:14:26

Proxmox is our company name, not the product name. Please correct the name of your guide to "Proxmox VE 2.x with Software Raid" thanks, Martin

From: admin at: 2012-05-08 09:51:59

Thanks for the hint! I've just changed the title.

From: Brblos at: 2012-07-02 18:05:41

OK on first try.

Nice work. Thanks

From: Rich H. at: 2012-11-25 17:17:46

Bravo!  Excellent guide; installation worked perfectly as described.  This really should be an "official" proxmox how-to.

From: at: 2013-02-27 21:18:20

Following these steps now. In the sfdisk block, it looks like you might have grabbed more than the results from the one command. I'm new at howtoforge so I'm not sure if I can fix it for you or if you own this page. 

From: sanyi at: 2013-05-29 14:12:56


This tutorial does not work on Proxmox VE 3.0.

But I found a perfectly functioning procedure described at:

http://www.petercarrero.com/content/2012/04/22/adding-software-raid-proxmox-ve-20-install

 

From: at: 2013-07-23 15:13:37

I also started using this tutorial and eventually ended up using the tutorial from Peter Carrero's site posted above.  If you're installing proxmox ve 3.0, I'd start with Carrero's instructions.  Just make sure you clear your array's superblock (ie. mdadm --zero-superblock /dev/sdb1) before creating your raid arrays.  Also, stick with ext3 as other commentators have noted. Much faster.

From: Johann at: 2013-10-01 20:30:55

Just installed Proxmox VE 2.2 and I wanted to follow the guide, but the very first command hit a wall:

root@jigglypuff:~# apt-get install mdadm
Reading package lists... Done
Building dependency tree      
Reading state information... Done
Package mdadm is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source

E: Package 'mdadm' has no installation candidate
root@jigglypuff:~#

 

No joy in this room right now.

From: Alex at: 2012-05-04 06:01:35

Using software raid on Proxmox 1.x, snapshot backups fail. Did you try it on version 2.0?

From: at: 2012-05-05 23:49:35

I just tried this. It does not seem to work with the web interface but does work from the command line. I wonder if it is a Proxmox bug that it does not work from the web interface. I'll look at this some more later.

From: at: 2012-05-09 02:13:45

This is not a bug but a configuration issue. Have a look at this link: http://forum.proxmox.com/threads/9560-snapshot-backup-not-working

From: maykel535 at: 2013-10-14 09:46:22

This document is very useful. But on Proxmox 3.x it fails with "not found, can't load ramdisk"...

From: Derek at: 2012-05-04 20:30:57

Thanks for a great guide!

Worked 100% as advertised first time around :)

From: Anonymous at: 2012-07-19 11:13:48

Worked great! Proxmox devs should really support this out of the box, works very stable...

From: crosmuller at: 2012-07-28 09:49:15

Great!

From: CatFind at: 2012-08-23 16:22:01

Please note that ext3 should be preferred over ext4 - under Proxmox VE kernels - for performance reasons.

Have a look at our CatFind Research Page - some questions about Proxmox are answered there, including  performance tuning tips.

 

From: ayduns at: 2012-09-24 20:40:44

Really useful - many thanks!
 

A few minor issues:

  1. On the first page, the output of the partition table copy is correct but then has some output from a command typo.  No harm, but could be confusing
  2. On the copy commands, the data cp args are "-dbRx" - presumably should be "-dpRx" as with the first two cp commands
  3. After the second half of the mirrors are added, why is the second modification of mdadm.conf needed?  On my system, mdadm.conf was the same before and after.

Just to note that if you like GUIs, most LVM and mdadm commands can be done through Webmin.

 

From: ayduns at: 2012-09-24 21:28:11

The basic approach of the process is to create a mirror, build a new filesystem and copy the files from old to new.  But the new does not have to be exactly as the old so you have opportunities to tweak if you want ...
 
With a few minor changes, you can change the filesystem and partition sizes or the filesystem type during this process.
 
After copying the partition table from sda to sdb, edit sdb's table as you want.  Then create the vg as above and the new lv's to your new sizes - you might want to leave some free space in the vg.

Continue with the rest of the procedure until /dev/pve is removed.

Instead of just setting the partition types to raid, copy the partition table from sdb back to sda. This is the reverse of the earlier operation - make sure you get it the right way round!

 sfdisk -d /dev/sdb | sfdisk --force /dev/sda

Then continue with the rest of the procedure adding the sda partitions to the md devices.

Worked nicely for me! 

From: Mark at: 2012-10-19 16:30:55

Hi:  Please update the guide to use ext3 instead of ext4.  I just went through the whole guide, and now I read that I should have used ext3.  I hope I won't have to redo my server just because of that.

From: at: 2012-11-22 23:03:52

When I wrote the guide I thought that ext4 would be better as it is supposed to have better performance. I see that catfind says it is better to use ext3. I have 3 servers that use the ext4 and am happy with the performance so you shouldn't have to rebuild your server. If you want to build this with ext3 just substitute all references to ext4 with ext3 and don't enter anything at the boot prompt. Catfind -- would you like to comment on the performance of ext3 vs ext4?

From: danast at: 2012-11-22 20:46:25

Hi there,

excellent article, thanks a bunch. And just for those wondering, this also works with RAID10. But you need to take into consideration that the sizes differ. So when creating the raid you just define raid10 as the raid mode. After you have installed Proxmox on the first drive, find out the partition sizes before you run the lvcreate lines, and take the size you find for each partition times two. Maybe someone else has a better suggestion, but as far as I can tell, the system runs smoothly and has done so for some time now. Of the four drives of 2 TB each I had, the overall space for the data volume amounts to roughly 3.5TB.

And, of course, also the same applies to the boot partition of all four drives, so you need to have the ramdisk on all four boot partitions. 

Thx
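
As a rough, untested sketch of what the RAID 10 variant described above might look like in commands (assuming Proxmox is installed on /dev/sda, that /dev/sdb, /dev/sdc and /dev/sdd are the three empty disks, and that /dev/sda's partitions are added to the arrays later exactly as in the RAID 1 case):

mdadm --create /dev/md0 --level=1 --raid-devices=4 missing /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md1 --level=10 --raid-devices=4 missing /dev/sdb2 /dev/sdc2 /dev/sdd2

The logical volume sizes then need to be adjusted as described above.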

From: jmaggart1979 at: 2013-04-07 04:10:47

Thanks for all the great info.  I set this up originally as a raid 5 but came into 2 more drives and the exact same type and size so i thought it would be better to set this up as raid 10.  Unfortunately, I wasn't able to extrapolate your instructions well enough to set it up on my own.  I am definitely a noob so any help would be great, thanks!

From: Marcel at: 2012-12-29 23:33:45

Hello, using the command "lvremove /dev/pve/swap" will report an error, because swapping is still active on this volume. Just turn swapping off on this volume with "swapoff /dev/pve/swap" to avoid the error message. Thanks for this great manual ;-)

From: Robert Penz at: 2013-02-21 21:02:28

After "rm -f /etc/grub.d/09_swraid1_setup" I called update-grub - but it failed # update-grub Generating grub.cfg ... /usr/sbin/grub-probe: error: unknown filesystem. I had following grub version # dpkg -l | grep grub ii grub-common 1.98+20100804-14+squeeze1 GRand Unified Bootloader, version 2 (common files) ii grub-pc 1.98+20100804-14+squeeze1 GRand Unified Bootloader, version 2 (PC/BIOS version) I got it only to work after using the grub from Debian testing (via pinning) - I run following now: # dpkg -l | grep grub ii grub-common 1.99-26 GRand Unified Bootloader (common files) ii grub-pc 1.99-26 GRand Unified Bootloader, version 2 (PC/BIOS version) ii grub-pc-bin 1.99-26 GRand Unified Bootloader, version 2 (PC/BIOS binaries) ii grub2-common 1.99-26 GRand Unified Bootloader (common files for version 2)

From: Anonymous at: 2013-08-20 19:45:15

Took a few minor tweaks to get this to work for me. I was setting up on Proxmox 3.0 and kept getting the error on boot.

 

1. Used tar to copy my files, much more reliable. Figured this out after checking the content of the "/" files copied to "/mnt/root": there was nothing in the "/dev" directory, for example.

tar -cvpzf - --one-file-system /boot/* | (cd /mnt/boot; tar xvzf -)
tar -cvpzf - --one-file-system / | (cd /mnt/root; tar xvzf -)
tar -cvpzf - --one-file-system /var/lib/vz/* | (cd /mnt/data; tar xvzf -)
 

2. Picked up some great tips at http://www.cesararaujo.net/proxmox-v3-software-raid/,  like adding raid line to grub:

# echo 'GRUB_PRELOAD_MODULES="raid dmraid"' >> /etc/default/grub

That got me past the boot error and then I was able to boot on md0 and the pve1 volumes. Just had to reassign the previous partitions and sync the mirror. On a 1TB drive the data part took almost two hours. The boot (md0) was done before I could even check the status.

Hope that saves someone from burning cycles.

From: at: 2013-02-24 21:45:28

This is a very detailed HOWTO on an important subject.

I ran into problems with old RAID metadata on the new drive (from onboard FakeRAID).  I had to go back into the BIOS, re-enable the Intel FakeRAID, create a volume, then delete the RAID configuration, then disable the FakeRAID in the BIOS again.  Then it all worked as advertised.

Thanks again.

G

From: Anonymous at: 2013-04-02 04:06:01

Excellent write-up, thank you very much! The instructions all made sense and worked perfectly (Proxmox VE 2.3-12). 

From: Anonymous at: 2013-06-25 13:48:16

Proxmox VE 3.
Didn't load after reboot until I corrected GRUB, changing:

insmod mdraid

to:

insmod mdraid1x

PS. Thank you 4 the guide :-)

From: PrestonG at: 2013-10-22 17:09:16

For Proxmox 3.0, I'll summarize the changes that need to be made:

echo 'GRUB_PRELOAD_MODULES="raid dmraid"' >> /etc/default/grub

and replace line:

insmod mdraid

with:

insmod mdraid1x


The rest of these instructions should work properly.

From: Ofca at: 2013-11-02 09:43:13

Why copy everything instead of adding /dev/md1 to pve volume group, doing a pvmove sda2->md1, and then removing sda2, killing lvm labels and adding it to mdraid? This way you don't even need to reboot the node. Am I missing something?
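
A minimal, untested sketch of the in-place approach described here, assuming /dev/md1 has already been created as in the guide above (note that /boot would still need to be copied onto /dev/md0 and GRUB updated separately):

pvcreate /dev/md1
vgextend pve /dev/md1
pvmove /dev/sda2 /dev/md1
vgreduce pve /dev/sda2
pvremove /dev/sda2
mdadm --add /dev/md1 /dev/sda2

The pvmove step can take many hours on a large disk, and /dev/sda2's partition type should still be changed to fd (as was done for /dev/sdb earlier) before it is added to the array.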