Proxmox VE 2.x With Software RAID - Page 2

Now let's create our volume group pve1 and add /dev/md1 to it:

vgcreate pve1 /dev/md1

That should show success:

Volume group "pve1" successfully created
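
If you want to double-check, vgs and pvs should now list the new volume group and show /dev/md1 as its only physical volume:

vgs pve1
pvs /dev/md1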

Now we need to create our logical volumes. I will use the same sizes and names as the volumes from the lvscan command above.

lvcreate --name swap --size 15G pve1
lvcreate --name root --size 96G pve1
lvcreate --name data --size 804G pve1
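
If you are not sure which sizes to use on your own system, lvs reports the name, volume group and size of every logical volume, so you can read off the values before running lvcreate:

lvs --units g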

If that was successful, then the command:

lvscan

returns:

root@server:~# lvscan
ACTIVE '/dev/pve/swap' [15.00 GiB] inherit
ACTIVE '/dev/pve/root' [96.00 GiB] inherit
ACTIVE '/dev/pve/data' [804.02 GiB] inherit
ACTIVE '/dev/pve1/swap' [15.00 GiB] inherit
ACTIVE '/dev/pve1/root' [96.00 GiB] inherit
ACTIVE '/dev/pve1/data' [804.00 GiB] inherit

As you can see, we now have two sets of the same logical volumes: one on /dev/sda2 and one on /dev/md1.

Now we need to create the filesystems:

mkfs.ext4 /dev/md0
mkswap -f /dev/pve1/swap
mkfs.ext4 /dev/pve1/root
mkfs.ext4 /dev/pve1/data
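
Optionally, verify the new filesystems with blkid; it should report ext4 on /dev/md0, /dev/pve1/root and /dev/pve1/data, and swap on /dev/pve1/swap:

blkid /dev/md0 /dev/pve1/root /dev/pve1/data /dev/pve1/swap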

If that was successful, it is time to copy the files to the new RAID array.

First we mount the new partitions:

mkdir /mnt/boot
mkdir /mnt/root
mkdir /mnt/data
mount /dev/md0 /mnt/boot
mount /dev/pve1/root /mnt/root
mount /dev/pve1/data /mnt/data
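
A quick df -h should now show the three new filesystems mounted under /mnt:

df -h /mnt/boot /mnt/root /mnt/data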


Adjusting The System To Use RAID 1

Now we must edit /etc/fstab:

vi /etc/fstab

It should read:

# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve1/root / ext4 errors=remount-ro 0 1
/dev/pve1/data /var/lib/vz ext4 defaults 0 1
/dev/md0 /boot ext4 defaults 0 1
/dev/pve1/swap none swap sw 0 0
proc /proc proc defaults 0 0

Notice that all instances of pve have been replaced with pve1 and that /dev/md0 is now mounted on /boot.
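
If you prefer to script the edit, something like the following sketch does the same thing, assuming your original fstab references the volumes in the /dev/pve/... form shown in the lvscan output (check the result before you continue):

sed -i.bak 's|/dev/pve/|/dev/pve1/|g' /etc/fstab
echo '/dev/md0 /boot ext4 defaults 0 1' >> /etc/fstab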

Now on to the GRUB2 boot loader. Create the file /etc/grub.d/09_swraid1_setup as follows:

cp /etc/grub.d/40_custom /etc/grub.d/09_swraid1_setup
vi /etc/grub.d/09_swraid1_setup

#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.
menuentry 'Proxmox, with RAID1' --class proxmox --class gnu-linux --class gnu --class os {
    insmod raid
    insmod mdraid
    insmod part_msdos
    insmod ext2
    set root='(md/0)'
    echo    'Loading Proxmox with RAID ...'
    linux   /vmlinuz-2.6.32-11-pve root=/dev/mapper/pve1-root ro  quiet
    echo    'Loading initial ramdisk ...'
    initrd  /initrd.img-2.6.32-11-pve
}

Make sure you use the correct kernel version in the menuentry stanza (in the linux and initrd lines). You can find it out by running:

uname -r

or by taking a look at the current menuentry stanzas in the ### BEGIN /etc/grub.d/10_linux ### section in /boot/grub/grub.cfg. Also make sure that you use root=/dev/mapper/pve1-root in the linux line.
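
To avoid typos, you can also let the shell substitute the running kernel version into the file - a small sketch that assumes the stanza still contains the 2.6.32-11-pve version used above:

sed -i "s/2\.6\.32-11-pve/$(uname -r)/g" /etc/grub.d/09_swraid1_setup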

The important part of our new menuentry stanza is the line set root='(md/0)' - it makes sure that we boot from our RAID1 array /dev/md0 (which will hold the /boot partition) instead of /dev/sda or /dev/sdb. This is important if one of our hard drives fails: the system will still be able to boot.

Because we don't use UUIDs anymore for our block devices, open /etc/default/grub...

vi /etc/default/grub

... and uncomment the line GRUB_DISABLE_LINUX_UUID=true:

# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX=""

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
GRUB_DISABLE_LINUX_UUID=true

# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_LINUX_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"

Run

update-grub

to write our new kernel stanza from /etc/grub.d/09_swraid1_setup to /boot/grub/grub.cfg.
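
You can confirm that the stanza made it into the generated configuration:

grep -A 3 'Proxmox, with RAID1' /boot/grub/grub.cfg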

Next we adjust our ramdisk to the new situation:

update-initramfs -u
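
If you want to check that the rebuilt ramdisk contains the RAID tooling, lsinitramfs (part of initramfs-tools) can list its contents - adjust the file name to your kernel version:

lsinitramfs /boot/initrd.img-$(uname -r) | grep -i mdadm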

Next we copy the files:

cp -dpRx / /mnt/root
cp -dpRx /boot/* /mnt/boot
cp -dpRx /var/lib/vz/* /mnt/data
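
Before rebooting, a rough sanity check is to compare the disk usage of the old and new filesystems - the numbers should be close to each other:

df -h / /mnt/root /boot /mnt/boot /var/lib/vz /mnt/data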

Now we reboot the system and hope that it boots ok from our RAID arrays:

reboot

If all goes well, the output of mount should show our new logical volumes root and data as well as /dev/md0 mounted on /boot:

mount

root@server:~# mount
/dev/mapper/pve1-root on / type ext4 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/mapper/pve1-data on /var/lib/vz type ext4 (rw)
/dev/md0 on /boot type ext4 (rw)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
beancounter on /proc/vz/beancounter type cgroup (rw,name=beancounter)
container on /proc/vz/container type cgroup (rw,name=container)
fairsched on /proc/vz/fairsched type cgroup (rw,name=fairsched)

Now we need to remove the old volume group pve. If lvremove complains that swap is still active on the volume, turn it off first with swapoff /dev/pve/swap (see Marcel's comment below):

lvremove /dev/pve/root
lvremove /dev/pve/swap
lvremove /dev/pve/data
vgremove /dev/pve
pvremove /dev/sda2

root@server:~# lvremove /dev/pve/root
Do you really want to remove active logical volume root? [y/n]: y
  Logical volume "root" successfully removed
root@server:~# lvremove /dev/pve/swap
Do you really want to remove active logical volume swap? [y/n]: y
  Logical volume "swap" successfully removed
root@server:~# lvremove /dev/pve/data
Do you really want to remove active logical volume data? [y/n]: y
  Logical volume "data" successfully removed
root@server:~# vgremove /dev/pve
  Volume group "pve" successfully removed
root@server:~# pvremove /dev/sda2
  Labels on physical volume "/dev/sda2" successfully wiped

Now we must change the partition types of our two partitions on /dev/sda to Linux raid autodetect as well:

fdisk /dev/sda

root@server:~# fdisk /dev/sda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009f7a7

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          66      523264   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sda2              66      121602   976237568   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
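
If you prefer a non-interactive route, sfdisk from that era can change the partition types directly (note that the --change-id option was later replaced by --part-type in newer util-linux releases):

sfdisk --change-id /dev/sda 1 fd
sfdisk --change-id /dev/sda 2 fd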

Now we can add /dev/sda1 and /dev/sda2 to /dev/md0 and /dev/md1:

mdadm --add /dev/md0 /dev/sda1
mdadm --add /dev/md1 /dev/sda2

Now take a look at:

cat /proc/mdstat

... and you should see that the RAID arrays are being synchronized.
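
To follow the progress, watch re-runs the command every two seconds; resyncing a large array can take a few hours:

watch cat /proc/mdstat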

Then adjust /etc/mdadm/mdadm.conf to the new situation:

cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
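
The scan appends one ARRAY line per RAID device; a quick grep shows what was added:

grep '^ARRAY' /etc/mdadm/mdadm.conf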

Now we delete /etc/grub.d/09_swraid1_setup...

rm -f /etc/grub.d/09_swraid1_setup

... and update our GRUB2 bootloader configuration:

update-grub
update-initramfs -u

Now if you take a look at /boot/grub/grub.cfg, you should find that the menuentry stanzas in the ### BEGIN /etc/grub.d/10_linux ### section look pretty much the same as what we had in /etc/grub.d/09_swraid1_setup (they should now also be set to boot from /dev/md0 instead of (hd0) or (hd1)). That's why we don't need /etc/grub.d/09_swraid1_setup anymore.

Afterwards we must make sure that the GRUB2 bootloader is installed on both hard drives, /dev/sda and /dev/sdb:

grub-install /dev/sda
grub-install /dev/sdb

Reboot the system:

reboot

It should boot without problems.

That's it - you've successfully set up software RAID1 on your Proxmox system!

Enjoy!

Comments

By: Alex

Using software RAID on Proxmox 1.x, snapshot backups fail. Did you try it on the 2.0 version?

By:

I just tried this. It does not seem to work with the web interface, but it does work from the command line. I wonder if it is a Proxmox bug that it does not work from the web interface. I'll look at this some more later.

By:

This is not a bug but a configuration issue. Have a look at this link: http://forum.proxmox.com/threads/9560-snapshot-backup-not-working

By: maykel535

This document is very useful. But on Proxmox 3.x it fails: not found, can't load ramdisk...

By: Derek

Thanks for a great guide!

Worked 100% as advertised first time around :)

By: Anonymous

Worked great! Proxmox devs should really support this out of the box, works very stable...

By: crosmuller

Great!

By: CatFind

Please note that ext3 should be preferred over ext4 - under Proxmox VE kernels - for performance reasons.

Have a look at our CatFind Research Page - some questions about Proxmox are answered there, including  performance tuning tips.

 

By: ayduns

Really useful - many thanks!
 

A few minor issues:

  1. On the first page, the output of the partition table copy is correct but then has some output from a command typo.  No harm, but could be confusing
  2. On the copy commands, the data cp args are "-dbRx" - presumably should be "-dpRx" as with the first two cp commands
  3. After the second half of the mirrors are added, why is the second modification of mdadm.conf needed?  On my system, mdadm.conf was the same before and after.

Just to note that if you like GUIs, most of the LVM and mdadm commands can be done through webmin.

 

By: ayduns

The basic approach of the process is to create a mirror, build a new filesystem and copy the files from old to new.  But the new does not have to be exactly as the old so you have opportunities to tweak if you want ...
 
With a few minor changes, you can change the filesystem and partition sizes or the filesystem type during this process.
 
After copying the partition table from sda to sdb, edit sdb's table as you want.  Then create the vg as above and the new lv's to your new sizes - you might want to leave some free space in the vg.

Continue with the rest of the procedure until /dev/pve is removed.

Instead of just setting the partition types to raid, copy the partition table from sdb back to sda. This is the reverse of the earlier operation - make sure you get it the right way round!

 sfdisk -d /dev/sdb | sfdisk --force /dev/sda

Then continue with the rest of the procedure adding the sda partitions to the md devices.

Worked nicely for me! 

By: Mark

Hi:  Please update the guide to use ext3 instead of ext4.  I just went through the whole guide, and now I read that I should have used ext3.  I hope I won't have to redo my server just because of that.

By:

When I wrote the guide I thought that ext4 would be better as it is supposed to have better performance. I see that CatFind says it is better to use ext3. I have 3 servers that use ext4 and am happy with the performance, so you shouldn't have to rebuild your server. If you want to build this with ext3, just substitute all references to ext4 with ext3 and don't enter anything at the boot prompt. CatFind -- would you like to comment on the performance of ext3 vs ext4?

By: danast

Hi there,

excellent article, thanks a bunch. And just for those wondering, this also works with RAID10. But you need to take into consideration that the sizes differ. So when creating the raid you just define raid10 as raid mode. After you installed proxmox on the first drive, find out the partition sizes before you run the lines of lvcreate. Just take the size that you find out for each partition times two each time. Maybe someone else has a better suggestion, but as far as I can tell, the system runs smoothly and has done so for some time now. Of the four drives of 2 TB each I had, the overall space for the data volume amounts to roughly 3.5TB. 

And, of course, also the same applies to the boot partition of all four drives, so you need to have the ramdisk on all four boot partitions. 

Thx

By: jmaggart1979

Thanks for all the great info. I set this up originally as a RAID 5 but came into 2 more drives of the exact same type and size, so I thought it would be better to set this up as RAID 10. Unfortunately, I wasn't able to extrapolate your instructions well enough to set it up on my own. I am definitely a noob so any help would be great, thanks!

By: Marcel

Hello, using the command "lvremove /dev/pve/swap" will report an error, because swapping is still active on this volume. Just turn swapping off on this volume with "swapoff /dev/pve/swap" to avoid the error message. Thanks for this great manual ;-)

By: Robert Penz

After "rm -f /etc/grub.d/09_swraid1_setup" I called update-grub - but it failed # update-grub Generating grub.cfg ... /usr/sbin/grub-probe: error: unknown filesystem. I had following grub version # dpkg -l | grep grub ii grub-common 1.98+20100804-14+squeeze1 GRand Unified Bootloader, version 2 (common files) ii grub-pc 1.98+20100804-14+squeeze1 GRand Unified Bootloader, version 2 (PC/BIOS version) I got it only to work after using the grub from Debian testing (via pinning) - I run following now: # dpkg -l | grep grub ii grub-common 1.99-26 GRand Unified Bootloader (common files) ii grub-pc 1.99-26 GRand Unified Bootloader, version 2 (PC/BIOS version) ii grub-pc-bin 1.99-26 GRand Unified Bootloader, version 2 (PC/BIOS binaries) ii grub2-common 1.99-26 GRand Unified Bootloader (common files for version 2)

By: Anonymous

It took a few minor tweaks to get this to work for me; I was setting up on Proxmox 3.0 and kept getting the error on boot.

 

1. Used tar to copy my files, much more reliable. Figured this out after checking the content of the files copied from "/" to "/mnt/root" - there was nothing in the "/dev" directory, for example.

tar -cvpzf - --one-file-system /boot/* | (cd /mnt/boot; tar xvzf -)
tar -cvpzf - --one-file-system / | (cd /mnt/root; tar xvzf -)
tar -cvpzf - --one-file-system /var/lib/vz/* | (cd /mnt/data; tar xvzf -)

2. Picked up some great tips at http://www.cesararaujo.net/proxmox-v3-software-raid/, like adding a raid line to grub:

# echo 'GRUB_PRELOAD_MODULES="raid dmraid"' >> /etc/default/grub

That got me past the boot error and then I was able to boot on md0 and the pve1 volumes. Just had to reassign the previous partitions and sync the mirror. On a 1TB drive the data part took almost two hours. The boot (md0) was done before I could even check the status.

Hope that helps someone avoid burning cycles.

By:

This is a very detailed HOWTO on an important subject.

I ran into problems with old RAID metadata on the new drive (from onboard FakeRAID).  I had to go back into the BIOS, re-enable the Intel FakeRAID, create a volume, then delete the RAID configuration, then disable the FakeRAID in the BIOS again.  Then it all worked as advertised.

Thanks again.

G

By: Anonymous

Excellent write-up, thank you very much! The instructions all made sense and worked perfectly (Proxmox VE 2.3-12). 

By: Anonymous

Proxmox VE 3.
Didn't load after reboot until I corrected GRUB, changing:

insmod mdraid

to:

insmod mdraid1x

PS: thank you 4 the guide :-)

By: PrestonG

For Proxmox 3.0, I'll summarize the changes that need to be made:

echo 'GRUB_PRELOAD_MODULES="raid dmraid"' >> /etc/default/grub

and replace line:

insmod mdraid

with:

insmod mdraid1x


The rest of these instructions should work properly.

By: Ofca

Why copy everything instead of adding /dev/md1 to pve volume group, doing a pvmove sda2->md1, and then removing sda2, killing lvm labels and adding it to mdraid? This way you don't even need to reboot the node. Am I missing something?