How To Set Up Software RAID1 On A Running System (Incl. GRUB Configuration) (Debian Etch) - Page 2

4 Creating Our RAID Arrays

Now let's create our RAID arrays /dev/md0, /dev/md1, and /dev/md2. /dev/sdb1 will be added to /dev/md0, /dev/sdb2 to /dev/md1, and /dev/sdb3 to /dev/md2. /dev/sda1, /dev/sda2, and /dev/sda3 can't be added right now (because the system is currently running on them), therefore we use the placeholder "missing" in the following three commands:

mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/sdb3

The command

cat /proc/mdstat

should now show that you have three degraded RAID arrays ([_U] or [U_] means that an array is degraded while [UU] means that the array is ok):

server1:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb3[1]
      4594496 blocks [2/1] [_U]

md1 : active raid1 sdb2[1]
      497920 blocks [2/1] [_U]

md0 : active raid1 sdb1[1]
      144448 blocks [2/1] [_U]

unused devices: <none>
server1:~#
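
If you want more detail on a single array than /proc/mdstat provides, you can ask mdadm directly (shown here for /dev/md0; while the array is degraded, the second slot will be listed as removed):

mdadm --detail /dev/md0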

Next we create filesystems on our RAID arrays (ext3 on /dev/md0 and /dev/md2 and swap on /dev/md1):

mkfs.ext3 /dev/md0
mkswap /dev/md1
mkfs.ext3 /dev/md2
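
One caveat reported in the comments at the end of this page: if mkfs.ext3 creates the new filesystems with 256-byte inodes while your GRUB version only understands 128-byte inodes, GRUB will later fail with "Error 2: Bad file or directory type". If in doubt, compare the inode sizes of the old and new /boot filesystems and, should they differ, re-create the filesystem on /dev/md0 with 128-byte inodes:

tune2fs -l /dev/sda1 | grep 'Inode size'
tune2fs -l /dev/md0 | grep 'Inode size'
mkfs.ext3 -I 128 /dev/md0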

Next we must adjust /etc/mdadm/mdadm.conf (which doesn't contain any information about our new RAID arrays yet) to the new situation:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
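
Alternatively - as one reader suggests in the comments at the end of this page - you can let Debian's mkconf helper regenerate the whole file instead of appending to it; according to that reader, this also removes the control file in /var/lib/mdadm and thereby avoids a warning when the initrd is updated later:

mv /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
/usr/share/mdadm/mkconf >> /etc/mdadm/mdadm.conf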

Display the contents of the file:

cat /etc/mdadm/mdadm.conf

At the bottom of the file you should now see details about our three (degraded) RAID arrays:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# This file was auto-generated on Mon, 26 Nov 2007 21:22:04 +0100
# by mkconf $Id: mkconf 261 2006-11-09 13:32:35Z madduck $
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=72d23d35:35d103e3:01b5209e:be9ff10a
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=a50c4299:9e19f9e4:01b5209e:be9ff10a
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=99fee3a5:ae381162:01b5209e:be9ff10a


5 Adjusting The System To RAID1

Now let's mount /dev/md0 and /dev/md2 (we don't need to mount the swap array /dev/md1):

mkdir /mnt/md0
mkdir /mnt/md2

mount /dev/md0 /mnt/md0
mount /dev/md2 /mnt/md2

You should now find both arrays in the output of

mount

server1:~# mount
/dev/sda3 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
/dev/md0 on /mnt/md0 type ext3 (rw)
/dev/md2 on /mnt/md2 type ext3 (rw)
server1:~#

Next we modify /etc/fstab. Replace /dev/sda1 with /dev/md0, /dev/sda2 with /dev/md1, and /dev/sda3 with /dev/md2 so that the file looks as follows:

vi /etc/fstab

# /etc/fstab: static file system information.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
/dev/md2       /               ext3    defaults,errors=remount-ro 0       1
/dev/md0       /boot           ext3    defaults        0       2
/dev/md1       none            swap    sw              0       0
/dev/hdc        /media/cdrom0   udf,iso9660 user,noauto     0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto  0       0
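
If you prefer to script these replacements rather than edit them by hand, a sed one-liner can do it - just a sketch, assuming the plain device names appear in the file exactly as shown above (keep a backup); the same pattern works for /etc/mtab in the next step:

cp /etc/fstab /etc/fstab.orig
sed -i -e 's|/dev/sda1|/dev/md0|g' -e 's|/dev/sda2|/dev/md1|g' -e 's|/dev/sda3|/dev/md2|g' /etc/fstab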

Next replace /dev/sda1 with /dev/md0 and /dev/sda3 with /dev/md2 in /etc/mtab:

vi /etc/mtab

/dev/md2 / ext3 rw,errors=remount-ro 0 0
tmpfs /lib/init/rw tmpfs rw,nosuid,mode=0755 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
udev /dev tmpfs rw,mode=0755 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=620 0 0
/dev/md0 /boot ext3 rw 0 0

Now on to the GRUB boot loader. Open /boot/grub/menu.lst and add fallback 1 right after default 0:

vi /boot/grub/menu.lst

[...]
default         0
fallback        1
[...]

This ensures that if the first kernel (counting starts with 0, so the first kernel is 0) fails to boot, the second kernel (entry #1, our fallback) will be booted.

In the same file, go to the bottom, where you should find some kernel stanzas. Copy the first stanza, paste it before the first existing stanza, and replace root=/dev/sda3 with root=/dev/md2 and root (hd0,0) with root (hd1,0):

[...]
## ## End Default Options ##

title           Debian GNU/Linux, kernel 2.6.18-4-486 RAID (hd1)
root            (hd1,0)
kernel          /vmlinuz-2.6.18-4-486 root=/dev/md2 ro
initrd          /initrd.img-2.6.18-4-486
savedefault

title           Debian GNU/Linux, kernel 2.6.18-4-486
root            (hd0,0)
kernel          /vmlinuz-2.6.18-4-486 root=/dev/sda3 ro
initrd          /initrd.img-2.6.18-4-486
savedefault

title           Debian GNU/Linux, kernel 2.6.18-4-486 (single-user mode)
root            (hd0,0)
kernel          /vmlinuz-2.6.18-4-486 root=/dev/sda3 ro single
initrd          /initrd.img-2.6.18-4-486
savedefault

### END DEBIAN AUTOMAGIC KERNELS LIST

root (hd1,0) refers to /dev/sdb, which is already part of our RAID arrays. We will reboot the system in a few moments; the system will then try to boot from our (still degraded) RAID arrays; if that fails, it will boot from /dev/sda (-> fallback 1).

Next we adjust our ramdisk to the new situation:

update-initramfs -u
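
If update-initramfs refuses to touch the image (or you run into mdadm-related warnings), one reader notes in the comments at the end of this page that dpkg-reconfigure mdadm also rebuilds the initrd - it asks which arrays are needed for the root filesystem - and that the update can be forced with the -t option:

dpkg-reconfigure mdadm
update-initramfs -u -t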

Now we copy the contents of /dev/sda1 and /dev/sda3 to /dev/md0 and /dev/md2 (which are mounted on /mnt/md0 and /mnt/md2):

cp -dpRx / /mnt/md2

cd /boot
cp -dpRx . /mnt/md0
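
As a rough sanity check that everything was copied, you can compare the used space on the old and new filesystems - the figures should be very close, though not necessarily identical:

df -h / /mnt/md2 /boot /mnt/md0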


6 Preparing GRUB (Part 1)

Afterwards we must install the GRUB bootloader on the second hard drive /dev/sdb:

grub

In the GRUB shell, type the following commands:

root (hd0,0)

grub> root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83

grub>

setup (hd0)

grub> setup (hd0)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd0)"...  15 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/menu.lst"... succeeded
Done.

grub>

root (hd1,0)

grub> root (hd1,0)
 Filesystem type is ext2fs, partition type 0xfd

grub>

setup (hd1)

grub> setup (hd1)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd1)"...  15 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd1) (hd1)1+15 p (hd1,0)/grub/stage2 /grub/menu.lst"... succeeded
Done.

grub>

quit
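
Several readers report in the comments at the end of this page that the non-interactive grub-install wrapper achieves the same result as the GRUB shell session above; a possible shortcut:

grub-install /dev/sda
grub-install /dev/sdb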

Now, back on the normal shell, we reboot the system and hope that it boots ok from our RAID arrays:

reboot


Comments

From: Anonymous at: 2008-02-28 20:41:03

Thanks for a great howto! It was a life saver, as I have never set up RAID before, let alone on a running system. The install did not go perfectly, however, so I thought I might share my notes and a couple of suggestions. Luckily I did a backup of the entire system before beginning, so I was able to restore the system and begin again after I could not get the system to boot off the RAID array. N.B. I did the install on a Debian testing system (lenny/amd64), but I've checked that everything applies to etch as well.

1. If the disks are not brand new, mdadm will detect the previous filesystem when creating the array and ask if you want to continue. Answer 'yes'. I also got a segfault error from mdadm and a warning that the disk was 'dirty'. The warning could probably be avoided by zeroing the entire disk with dd. Despite the error and warning, everything worked as it should.

2. Instead of:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

do:

mv /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
/usr/share/mdadm/mkconf >> /etc/mdadm/mdadm.conf

This will create a proper mdadm.conf and remove the control file in /var/lib/mdadm (if arrays are found). If you do not remove the control file, you will get a warning message when updating the initrd images.

3. When editing GRUB's menu.lst, I followed the advice in the comments and put the stanza for the RAID array before the '### BEGIN AUTOMAGIC KERNELS LIST' line. If you put your custom stanzas inside the AUTOMAGIC area, they will be overwritten during the next kernel upgrade. Instead of:

update-initramfs -u

I had to do:

dpkg-reconfigure mdadm

When asked to specify the arrays needed for the root filesystem, I answered with the appropriate devices (in my case only /dev/md0) instead of selecting the default, 'all'. Otherwise I kept to the default answers. After the initrd images had been created, I updated GRUB:

update-grub

4. Instead of using the GRUB shell, I used grub-install to install the boot loader on the hard drives:

grub-install /dev/sda
grub-install /dev/sdb

5. After having added both disks to the arrays, it was time to update the initrd again. First I executed:

dpkg-reconfigure mdadm

and was informed that the initrd would not be updated, because it was a custom image. The configure script informed me that I could force the update by running 'update-initramfs' with the '-t' option, so that is what I did:

update-initramfs -u -t

6. Every time you update the initrd image, you also have to re-install GRUB in the MBRs:

grub-install /dev/sda
grub-install /dev/sdb

Otherwise the system will not boot and you will be thrown into the GRUB shell.

Other notes: It is normal for 'fdisk -l' to report stuff like 'Disk /dev/md0 doesn't contain a valid partition table'. This is because fdisk cannot read md arrays correctly.

If you forget to re-install GRUB in the MBRs after updating your initrd and get the GRUB shell on reboot, do the following: boot from a Debian Installer CD (full or netinst) of the same architecture as your install (so if you're running amd64, it has to be an amd64 CD). Boot the CD in 'rescue' mode. After networking has been set up and the disks have been detected, press CTRL+ALT+F2, followed by Enter, to get a prompt. Execute the following commands (md0 = /boot and md2 = /):

mkdir /mnt/mydisk
mount /dev/md2 /mnt/mydisk
mount /dev/md0 /mnt/mydisk/boot
mount -t proc none /mnt/mydisk/proc
mount -o bind /dev /mnt/mydisk/dev
chroot /mnt/mydisk
grub-install /dev/sda
grub-install /dev/sdb
exit
umount /mnt/mydisk/proc
umount /mnt/mydisk/dev
umount /mnt/mydisk/boot
umount /mnt/mydisk
reboot

From: Anonymous at: 2008-07-09 10:33:59

Thank you so much for this detailed howto.  It saved me a lot of pain and worked perfectly on Ubuntu 8.04.1 LTS.


From: nochids at: 2008-12-01 02:46:10

I am relatively new to linux and am completely dependent on these tutorials. I bought a server and installed Suse 10.3. After running Ubuntu on my desktop and laptop, I decided to change the server to run Ubuntu as well (I didn't uninstall Suse - just installed over it??). After installing the server based on "The Perfect Ubuntu Server 8.04" (http://www.howtoforge.com/perfect-server-ubuntu8.04-lts) I installed ISPConfig as detailed at the end. Then, to install RAID, I followed the tutorial perfectly, I think, but at the end of step 6, after rebooting, I still see sda1 rather than md0.


 root@costarica:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/costarica-root
                      228G  1.4G  216G   1% /
varrun                502M  108K  502M   1% /var/run
varlock               502M     0  502M   0% /var/lock
udev                  502M   76K  502M   1% /dev
devshm                502M     0  502M   0% /dev/shm
/dev/sda1             236M   26M  198M  12% /boot
root@costarica:~#


 vi /etc/fstab shows the following:


# /etc/fstab: static file system information.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
# /dev/mapper/costarica-root
UUID=2e3442d3-c650-480a-a923-4775de238b7f /               ext3    relatime,errors=remount-ro,usrquota,grpquota 0       1
# /dev/md0
UUID=251a68c2-1497-433b-b415-d49ca8f2125e /boot           ext3    relatime        0       2
# /dev/mapper/costarica-swap_1
UUID=03c6c32e-38bd-4707-9df5-dcdd3049825a none            swap    sw              0       0
/dev/scd0       /media/cdrom0   udf,iso9660 user,noauto,exec,utf8 0       0

It looks different than the one in the tutorial, but I attribute that to it being Ubuntu rather than Debian.

Can anyone shed some light on this for me? Did I miss a step, or are there other steps involved because of Ubuntu?

Thanks for any help.

Jason.

From: Anonymous at: 2009-01-03 19:00:56

Dear all,

this howto just worked for me flawlessly for my brand-new Debian Lenny (testing) today (03-Jan-2009)!!!

No issues, no problems at all. I had several different partitions, even extended ones; I only had to follow on paper which partition goes into which numbered array - that's it ;-)

(And my boot partition wasn't /boot but simply /; I did everything accordingly - flawless!!!)

THANK YOU VERY MUCH for this HowTo. I've NEVER EVER raid-ed before and it's a success :)

md0 = /
md1 = swap
md2 = /home

This all on an Abit NF7-S2, BIOS-RAID OFF, 2 x SATA2 Samsung 320G, Sempron 2800+, 2x512 DDR400 ;-)

lol:~# df -m
Filesystem           1M-blocks      Used Available Use% Mounted on
/dev/md0                 18778      2312     15512  13% /
tmpfs                      506         0       506   0% /lib/init/rw
udev                        10         1        10   2% /dev
tmpfs                      506         0       506   0% /dev/shm
/dev/md2                280732       192    266281   1% /home
lol:~#

Cheers from Europe!

From: Anonymous at: 2009-05-23 15:26:29

Great tutorial, my compliments.

From: Johan Boulé at: 2009-07-18 01:10:19

I wonder, what's the point in having the swap on a RAID1? Wouldn't it be better to add /dev/sda2 and /dev/sdb2 directly as two separate swap devices?

From: Froi at: 2009-08-31 19:25:30

Can I apply this How-to to my PPC Debian Etch? Thanks!

From: nord at: 2010-07-18 18:06:17

Nice howto!
I would like to correct some minor errors though:

Ext2 on /boot instead of ext3... Ext3 on /boot is just a waste of space and resources. You don't need journaling for your boot partition :)

And why make a RAID array for swap? Swap will stripe data like a RAID0 anyway.. just tell Linux to swap to two different physical disks and voila, striping made easy :p

Happy raiding :p

(If you suddenly need a lot of swap space, you can use the "swapon" command to swap to memory sticks or whatever you need; unlike a fixed fstab entry, swapon will get reset on reboot) ;)
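
For example, a minimal /etc/fstab sketch of that variant (assuming /dev/sda2 and /dev/sdb2 are the two swap partitions; equal pri= values make the kernel stripe across them):

/dev/sda2       none            swap    sw,pri=1        0       0
/dev/sdb2       none            swap    sw,pri=1        0       0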

From: Andy Beverley at: 2010-12-04 16:43:52

I spent hours trying to work out not only how to set up a software RAID, but also how to do it on a boot partition. I didn't even come close to looking at a live system. I got nowhere until I found this HOWTO which does it all very well. Thank you!


Andy

From: Anonymous at: 2012-01-17 12:04:08

It works just perfectly with ubuntu 8.04

Thanks for the brilliant how-to

From: Alex Dekker at: 2012-10-07 09:08:58

You might like to put a link somewhere in this howto to your newer howto detailing the install with Grub2.

I spent some time following this howto, tripping up on Grub2 and doing lots of googling, before finally realising that what I thought were Google hits on your existing howto were actually pointing to a separate but very similarly named howto that covers Grub2!

From: Anonymous at: 2009-12-16 05:47:58

Falko, thank you, this is a wonderful HOWTO, I've used it for two servers now. On the second one, the reboot at the end of this page failed with a GRUB error:

Booting Debian ..
root (hd1,0)
Filesystem type is .. partition type.. kernel (all as expected)


Error 2: Bad file or directory type

At this point I was very glad I could still boot from the old non-raid partitions (phew!)

A bit of reading turned up an explanation on FedoraForum.

Sure enough, tune2fs -l showed the old sda1 had 128-byte inodes, while sdb1/md0 had 256-byte inodes. I had the choice of upgrading grub or re-making md0's filesystem with smaller inodes.

I decided the smaller inodes were safer (I like to mess with aptitude as little as possible). I re-ran the instructions with this mkfs command instead, and it's all good now.

mkfs.ext3 -I 128 /dev/md0



This will not be needed when grub is updated to a version that can read fs's with 256-byte inodes.

From: wayan at: 2011-04-20 06:22:47


Step 5 Adjusting The System To RAID1:

Don't edit /etc/fstab and /etc/mtab; edit only /mnt/md2/etc/fstab and /mnt/md2/etc/mtab.

Sometimes Linux fails to boot from /dev/md2 after the reboot; this way you can still boot into the original Linux configuration after the failure.

From: Rik Bignell at: 2009-04-27 15:55:26

Thx for this. Successfully used your guide to set up Jaunty 9.04 with RAID5.

Points to note: RAID5 will NOT work when the boot partition is on RAID5. For example, if you have:

md0 = swap
md1 = root (boot within root)

then you will not be able to write your grub properly to each drive, due to RAID5 not having separate copies of the files on each disc. GRUB boots at disk level and not at software RAID level, it seems.

My workaround was to have boot separate. I chose:

md0 = swap (3 drives within RAID5: sda1, sdb1, sdc1)
md1 = boot (2 drives within RAID1: sda2, sdb2; a 3rd drive is not needed unless 2 drives fail at once, and because the drives are mirrored completely you are able to write grub)
md2 = root (3 drives within RAID5: sda3, sdb3, sdc3)

I'll be writing my own guide for RAID1 and RAID5 so you can see the difference in commands, but I will reference this guide a lot, as it helped me the most out of all the Ubuntu RAID guides I found on Google.

Watch http://www.richardbignell.co.uk/ for new guides.

From: Anonymous at: 2008-06-03 13:12:17

Hello,

This instruction looks very useful; however, could someone please adapt this to suit the default and recommended hdd setup of Debian (a single partition)?

From: Anonymous at: 2009-02-23 17:38:38

The article shows writing out a modified partition table, getting the message:

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.

and then, without rebooting, trying to write to the new partitions (running "mdadm --add ...").

Doing that is extremely dangerous to any data on that disk - and even if there is no data, doing that means mdadm might be initializing something (the kernel's old view of partition N) other than what you meant (your new partition N).

From: Lars at: 2009-07-21 17:46:09


QUOTE:

... and reboot the system:
reboot
It should boot without problems.

Not quite... you need to do the GRUB part from page 2 again to make this work - I just got stuck at a 'GRUB' prompt after the reboot. It can be fixed with a rescue system and a new grub setup on the hds.

Otherwise the howto works just fine - thank you!

-Lars


From: Anonymous at: 2009-11-30 12:27:33

I followed this guide to set up RAID1 (mirroring an existing disc to another) on a separate disc containing VMware virtual disc files.

I hope I won't lose any data, but that's a risk I had to take. Right now it's synchronising the VMware disc with the hard disc containing no data... at this point, I can't access the hard disc containing the VMware files - so I have my fingers crossed :-)

I'll post an update as soon as the synchronisation is complete; so far it's only 18% complete.

I would recommend that everyone who is using this guide to synchronise data between two discs unmount EVERY disc that you're making changes to BEFORE making any changes at all. If you somehow fail to do so, it can lead to serious data loss. A point that I think this guide failed to mention.

Besides that, thank you very much for sharing your knowledge!

- Simon Sessingø
Denmark

From: Anonymous at: 2009-11-08 01:26:37

THANK YOU for this wonderful howto. I managed to get RAID set up on Debian Lenny with no changes to your instructions.

From: Singapore website design at: 2009-10-25 10:57:54

Hi thanks for writing this guide. I managed to setup my servers software raid successfully using this guide. Been using hardware raid all along. Thanks

From: Ben at: 2010-03-05 11:05:01

Great tutorial, worked perfectly for me on Debian Lenny, substituting sda and sdb with hda and hdd, and a few extra partitions ... thanks for posting. :)

From: Juan De Stefano at: 2010-03-05 06:20:07

Thank you for this excellent guideline. I followed it on Ubuntu 9.10. The only thing different is setting up GRUB 2. You're not supposed to edit grub.cfg (the former menu.lst), but I did, to change the root device. Then I mounted /dev/md2 on /mnt/md2 and /dev/md0 on /mnt/md2/boot. I also mounted sys, proc and dev to make the chroot. Later I did dpkg-reconfigure grub-pc and selected both disks to install grub on the MBR. Everything worked the first time I tried.

Thanks again

/ Juan

From: Vlad P at: 2010-04-01 02:42:04

I had already set up my RAID 1 before hitting your tutorial, but this reading made me understand everything better - much better! Thank you very much!

From: Anonymous at: 2010-04-14 20:29:37

I just did this for 9.10 Ubuntu as well. This procedure really needs to be updated for GRUB2, which in and of itself is an exercise in tedium. However, GRUB2 is slightly smarter and seemed to auto-configure a few of the drive details here and there. Still, there were some major departures from this procedure.

You don't need to (and should not) modify grub.cfg directly. Instead, I created a custom grub config file, /etc/grub.d/06_custom, which would contain my RAID entries and put them above the other grub boot options during the "degraded" sections of the installation. There are a few tricks in how to format a custom file correctly: there is some "EOF" craziness, and also you should be using UUIDs, so you have to make sure you get the right UUIDs instead of using /dev/sd[XX] notation. In the end, my 06_custom looked like:

#! /bin/sh -e
echo "Adding RAID boot options" >&2
cat << EOF
menuentry "Ubuntu, Linux 2.6.31-20-generic RAID (hd1)"
{
        recordfail=1
        if [ -n ${have_grubenv} ]; then save_env recordfail; fi
        set quiet=1
        insmod ext2
        set root=(hd1,0)
        search --no-floppy --fs-uuid --set b79ba888-2180-4c7a-b744-2c4fa99a5872
        linux /boot/vmlinuz-2.6.31-20-generic root=UUID=b79ba888-2180-4c7a-b744-2c4fa99a5872 ro quiet splash
        initrd /boot/initrd.img-2.6.31-20-generic
}

menuentry "Ubuntu, Linux 2.6.31-20-generic RAID (hd0)"
{
        recordfail=1
        if [ -n ${have_grubenv} ]; then save_env recordfail; fi
        set quiet=1
        insmod ext2
        set root=(hd0,0)
        search --no-floppy --fs-uuid --set b79ba888-2180-4c7a-b744-2c4fa99a5872
        linux /boot/vmlinuz-2.6.31-20-generic root=UUID=b79ba888-2180-4c7a-b744-2c4fa99a5872 ro quiet splash
        initrd /boot/initrd.img-2.6.31-20-generic
}
EOF

Also, you have to figure out which pieces of 10_linux to comment out to get rid of the non-RAID boot options; for that:

  #linux_entry "${OS}, Linux ${version}" \
  #    "${GRUB_CMDLINE_LINUX} ${GRUB_CMDLINE_EXTRA} ${GRUB_CMDLINE_LINUX_DEFAULT}" \
  #    quiet
  #if [ "x${GRUB_DISABLE_LINUX_RECOVERY}" != "xtrue" ]; then
  #  linux_entry "${OS}, Linux ${version} (recovery mode)" \
  #    "single ${GRUB_CMDLINE_LINUX}"
  #fi

Overall, this was the best non-RAID -> RAID migration how-to I could find. Thanks very much for putting this out there.


From: Cristian at: 2010-05-27 14:14:05

This guide is awesome; it is just all you need to transform a single-SATA-disk system into RAID1 if you follow all the instructions.

Thanks again ... thanks, thanks. You saved me some days of work configuring the server again.

From: Rory at: 2011-07-23 12:27:36

Thank you for this perfect tutorial.

It works perfectly even for Ubuntu. I had to mess with grub2 instead, but aside from that, it's brilliant. Used it on three machines without a glitch.

From: Anonymous at: 2008-01-19 19:37:09

In general, I replaced the disk IDs of Ubuntu Gutsy with device names, and it is working great. I'm writing from my Gutsy desktop.

A few weeks ago I lost my /home partition. As a consultant I also work at home, and therefore don't have time for backups, so I think a RAID1 is a good solution.

First I used Debian Etch, but it doesn't easily support my ATI Radeon 9200 video card, and it caused problems with vmware.

I redid the whole process, but for Ubuntu Gutsy Gibbon 7.10, replacing the disk IDs with device names. Also, for mnemonic reasons and easy recovery, I used md1 (boot), md2 (swap) and md5 (root).

From: Jairzhino Bolivar at: 2009-02-22 19:41:32

Well,

I just wanted to thank the team/person that put this tutorial together; this is a very valuable tutorial. I followed it using Debian 5.0 (lenny) and everything works very nicely. I could not enjoy more looking at the syncing process, and testing the system booting even after I removed the drive (hd1). This really allows everybody to protect their data from hard drive failures. Thank you!!! Sooo! Much!!

I noticed that when you run the command to install mdadm, "citadel" gets installed as well. Is there a way I can run apt-get install mdadm skipping "citadel"?

Again, this is great and very simple. I am using the same tutorial to create two more disks in RAID1 for array storage.

This is just cool!!!

jair

From: L. RIchter at: 2010-01-27 22:01:12

This Howto really worked out of the box. This was my first RAID installation, using Debian stable 5.03, and after wasting my time with the installer trying to set up a RAID, this worked straight through without any complaints. Really a good job, well done.

Lothar

From: Franck78 at: 2010-03-25 18:28:36

Just follow the steps, adjusting the numbers of the 'md' devices, partitions and disks.

Maybe add more details about the swap partition on each disk. Useful or useless to have an md made of swaps.....?

Use:

mkswap /dev/mdX
swapon -a
swapon -s

Bye

From: Anonymous at: 2010-08-31 17:34:56

Excellent tutorial. Thank you.

From: Leo Matteo at: 2013-05-01 22:35:20

A very clear "manual" for setting up a RAID1 on a running system.

I will try it in a while. I hope all will run well. (Whether it does or not, I will comment on the results anyway.)

Thank you (from Uruguay).

From: Max at: 2013-09-14 05:26:00

Aren't md0, md1 and md2 supposed to be operational after a disk failure? The contents of /proc/mdstat suggest that the RAID1 is still running with one disc, but the subsequent call to fdisk shows that there are no valid partitions on md0, md1 and md2.

Might it be a copy-paste error?

Otherwise, a very good tutorial.

From: bob143 at: 2015-01-20 07:25:33

I did manage to lose all my existing data following this. I was not doing this with a root partition, so I had no issues with partitions being in use, and I specified both disks in the create command rather than the "missing" placeholder - maybe that was my problem.