How To Set Up Software RAID1 On A Running System (Incl. GRUB Configuration) (Fedora 8) - Page 2

4 Creating Our RAID Arrays

Now let's create our RAID arrays /dev/md0, /dev/md1, and /dev/md2. /dev/sdb1 will be added to /dev/md0, /dev/sdb2 to /dev/md1, and /dev/sdb3 to /dev/md2. /dev/sda1, /dev/sda2, and /dev/sda3 can't be added right now (because the system is currently running on them), therefore we use the placeholder missing in the following three commands:

mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/sdb3
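The three commands differ only in the device numbers, so if you are scripting this they can be generated in a loop. A minimal sketch that only prints the commands (the sdb partition layout is the one this tutorial assumes; uncomment the eval line only after double-checking your devices):

```shell
#!/bin/sh
# Print the three mdadm invocations: md0<-sdb1, md1<-sdb2, md2<-sdb3.
for i in 0 1 2; do
    part=$((i + 1))
    cmd="mdadm --create /dev/md$i --level=1 --raid-disks=2 missing /dev/sdb$part"
    echo "$cmd"
    # eval "$cmd"   # uncomment only after verifying the device names
done
```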

The command

cat /proc/mdstat

should now show that you have three degraded RAID arrays ([_U] or [U_] means that an array is degraded while [UU] means that the array is ok):

[root@server1 ~]# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb3[1]
      4618560 blocks [2/1] [_U]

md1 : active raid1 sdb2[1]
      513984 blocks [2/1] [_U]

md0 : active raid1 sdb1[1]
      104320 blocks [2/1] [_U]

unused devices: <none>
[root@server1 ~]#
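If you want a script rather than your eyes to spot the degraded state, the [_U]/[U_] markers can be parsed out of /proc/mdstat. A sketch; the path is made overridable here only so the parsing can be tried on sample data:

```shell
#!/bin/sh
# Print the name of every md array that is currently running degraded.
MDSTAT=${MDSTAT:-/proc/mdstat}
awk '/^md[0-9]+ :/ { name = $1 }
     /\[_U\]|\[U_\]/ { print name " is degraded" }' "$MDSTAT"
```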

Next we create filesystems on our RAID arrays (ext3 on /dev/md0 and /dev/md2 and swap on /dev/md1):

mkfs.ext3 /dev/md0
mkswap /dev/md1
mkfs.ext3 /dev/md2

Next we create /etc/mdadm.conf as follows:

mdadm --examine --scan > /etc/mdadm.conf

Display the contents of the file:

cat /etc/mdadm.conf

In the file you should now see details about our three (degraded) RAID arrays:

ARRAY /dev/md0 level=raid1 num-devices=2 UUID=2848a3f5:cd1c26b6:e762ed83:696752f9
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=8a004bac:92261691:227767de:4adf6592
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=939f1c71:be9c10fd:d9e5f8c6:a46bcd49
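Note that a single > overwrites the file, so if you regenerate it later it pays to be careful. A cautious sketch that rebuilds it via a temporary file and prepends a DEVICE partitions line (a common mdadm.conf convention, not part of this tutorial's output — treat it as an assumption):

```shell
#!/bin/sh
# Rebuild mdadm.conf atomically: write to a temp file, then move into place.
conf=${CONF:-/etc/mdadm.conf}
tmp=$(mktemp)
echo "DEVICE partitions" > "$tmp"
mdadm --examine --scan >> "$tmp"
mv "$tmp" "$conf"
```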


5 Adjusting The System To RAID1

Now let's mount /dev/md0 and /dev/md2 (we don't need to mount the swap array /dev/md1):

mkdir /mnt/md0
mkdir /mnt/md2

mount /dev/md0 /mnt/md0
mount /dev/md2 /mnt/md2

You should now find both arrays in the output of

mount

[root@server1 ~]# mount
/dev/sda3 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/md0 on /mnt/md0 type ext3 (rw)
/dev/md2 on /mnt/md2 type ext3 (rw)
[root@server1 ~]#

Next we modify /etc/fstab. Replace LABEL=/boot with /dev/md0, LABEL=SWAP-sda2 with /dev/md1, and LABEL=/ with /dev/md2 so that the file looks as follows:

vi /etc/fstab

/dev/md2                 /                       ext3    defaults        1 1
/dev/md0             /boot                   ext3    defaults        1 2
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/md1         swap                    swap    defaults        0 0
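If you prefer to script the three replacements instead of editing by hand, sed can do them on a copy first so you can diff before committing. A sketch; the label names are the ones this tutorial's fstab uses, so check yours first:

```shell
#!/bin/sh
# Swap the filesystem labels for the md devices in a copy of fstab.
src=${SRC:-/etc/fstab}
cp "$src" "$src.new"
sed -i -e 's|LABEL=/boot|/dev/md0|' \
       -e 's|LABEL=SWAP-sda2|/dev/md1|' \
       -e 's|LABEL=/ |/dev/md2 |' "$src.new"
diff -u "$src" "$src.new" || true    # review, then: mv "$src.new" "$src"
```

Matching the trailing space in the last expression keeps the bare LABEL=/ pattern from also hitting LABEL=/boot. The same three expressions work on /etc/mtab.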

Next replace LABEL=/boot with /dev/md0 and LABEL=/ with /dev/md2 in /etc/mtab:

vi /etc/mtab

/dev/md2 / ext3 rw 0 0
proc /proc proc rw 0 0
sysfs /sys sysfs rw 0 0
devpts /dev/pts devpts rw,gid=5,mode=620 0 0
/dev/md0 /boot ext3 rw 0 0
tmpfs /dev/shm tmpfs rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw 0 0

Now on to the GRUB boot loader. Open /boot/grub/menu.lst and add fallback=1 right after default=0:

vi /boot/grub/menu.lst


This makes sure that if the first kernel (counting starts with 0, so the first kernel is 0) fails to boot, the second kernel will be booted instead.
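The same edit can be done non-interactively with GNU sed; a sketch that assumes the file really contains a default=0 line and keeps a backup first:

```shell
#!/bin/sh
# Append "fallback=1" directly after the "default=0" line in menu.lst.
f=${MENU:-/boot/grub/menu.lst}
cp "$f" "$f.bak"
sed -i '/^default=0/a fallback=1' "$f"
```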

In the same file, go to the bottom where you should find some kernel stanzas. Copy the first of them and paste it before the first existing stanza; replace root=LABEL=/ with root=/dev/md2 and root (hd0,0) with root (hd1,0):

title Fedora (
        root (hd1,0)
        kernel /vmlinuz- ro root=/dev/md2 rhgb quiet
        initrd /initrd-

title Fedora (
        root (hd0,0)
        kernel /vmlinuz- ro root=LABEL=/ rhgb quiet
        initrd /initrd-

The whole file should look something like this:

# grub.conf generated by anaconda
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/sda3
#          initrd /initrd-version.img
default=0
fallback=1
title Fedora (
        root (hd1,0)
        kernel /vmlinuz- ro root=/dev/md2 rhgb quiet
        initrd /initrd-

title Fedora (
        root (hd0,0)
        kernel /vmlinuz- ro root=LABEL=/ rhgb quiet
        initrd /initrd-

root (hd1,0) refers to /dev/sdb, which is already part of our RAID arrays. We will reboot the system in a few moments; the system will then try to boot from our (still degraded) RAID arrays; if that fails, it will fall back (fallback=1) to booting from /dev/sda.

Next we adjust our ramdisk to the new situation:

mv /boot/initrd-`uname -r`.img /boot/initrd-`uname -r`.img_orig
mkinitrd /boot/initrd-`uname -r`.img `uname -r`

Now we copy the contents of /dev/sda1 and /dev/sda3 to /dev/md0 and /dev/md2 (which are mounted on /mnt/md0 and /mnt/md2):

cp -dpRx / /mnt/md2

cd /boot
cp -dpRx . /mnt/md0
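The cp flags are doing real work here: -d copies symlinks as symlinks instead of following them, -p preserves ownership, permissions and timestamps, -R recurses, and -x stays on one filesystem so /proc, /sys and the mounted arrays under /mnt are not copied into themselves. A harmless sketch you can try on a scratch directory:

```shell
#!/bin/sh
# Show that cp -dpRx keeps symlinks as symlinks and preserves file data.
src=$(mktemp -d)
dst=$(mktemp -d)
echo data > "$src/file"
ln -s file "$src/link"
cp -dpRx "$src/." "$dst/"
ls -l "$dst"
```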


6 Preparing GRUB (Part 1)

Afterwards we must install the GRUB bootloader on the second hard drive /dev/sdb. Start the GRUB shell:

grub

On the GRUB shell, type in the following commands:

root (hd0,0)

grub> root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83


setup (hd0)

grub> setup (hd0)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd0)"...  16 sectors are embedded.
 Running "install /grub/stage1 (hd0) (hd0)1+16 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded


root (hd1,0)

grub> root (hd1,0)
 Filesystem type is ext2fs, partition type 0xfd


setup (hd1)

grub> setup (hd1)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd1)"...  16 sectors are embedded.
 Running "install /grub/stage1 (hd1) (hd1)1+16 p (hd1,0)/grub/stage2 /grub/grub.conf"... succeeded
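Instead of typing into the interactive shell, the same commands can be fed to GRUB legacy in batch mode via a heredoc (a sketch; verify that your grub build accepts --batch before relying on it):

```shell
#!/bin/sh
# Install GRUB into the MBR of both disks non-interactively.
grub --batch <<'EOF'
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
EOF
```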



Now, back on the normal shell (type quit to leave the GRUB shell), we reboot the system and hope that it boots ok from our RAID arrays:

reboot

5 Comment(s)



From: Anonymous at: 2010-02-09 18:29:32

Great walkthrough overall.  Another forum commented it's a good idea to change fstab/mtab on the RAID, but you have to change them before creating the kernel image.  So change them back on /mnt/sda to ensure the system comes up.

My big gotcha: disable SELinux.  After following the instructions and rebooting, I got "/bin/bash: Permission denied" while trying to ssh into the system.

From: Brandon Checketts at: 2010-01-25 21:16:12

The default speed limits on CentOS (and presumably other distros) have the RAID synchronization run at a relatively slow rate.  It tries to minimize disk I/O and the CPU utilization, but in many cases that will make it take seemingly forever for a modern-sized hard drive.

Use this command to increase the speed at which the drives sync to 100 MB/s:


[root@host ~]# echo 100000 > /proc/sys/dev/raid/speed_limit_min

From: Sverre at: 2010-08-04 06:19:07

After following the instructions in CentOS 5.2, I'd get a kernel panic.

I logged in with the LiveCD and re-mounted the volumes and temp filesystems:


mdadm --examine --scan /dev/sdb1 >> /etc/mdadm.conf
mdadm --examine --scan /dev/sdb2 >> /etc/mdadm.conf

lvm vgchange -ay VolGroup01

mount /dev/VolGroup01/LogVol00 /mnt/sysimage
mount /dev/md0 /mnt/sysimage/boot

mount -t sysfs sys /mnt/sysimage/sys

I had to edit /boot/grub/ (I added (hd1) /dev/sdb), then I had to delete the contents of /etc/blkid and re-run blkid, then I had to delete /etc/lvm/cache/.cache and run vgscan, then I had to edit /etc/sysconfig/grub (replace sda with md0).

I also updated /root/anaconda.ks - but I don't think that makes any difference unless you boot the LiveCD into rescue mode?

THEN I did the initrd and grub steps.

From: SmogMonkey at: 2013-05-14 20:15:33

Just in case someone runs across this (I happened to be running CentOS 5), make sure you update bash (yum update bash), according to RedHat the issue is with bash not mkinitrd.  I updated bash and was able to complete the commands above.  Error output:
mkinitrd /boot/initrd-`uname -r`.img `uname -r`
/sbin/mkinitrd: line 489: syntax error in conditional expression: unexpected token `('
/sbin/mkinitrd: line 489: syntax error near `^(d'
/sbin/mkinitrd: line 489: `        if [[ "$device" =~ ^(dm-|mapper/|mpath/) ]]; then' 

So far so good, glad I found this guide 

From: stu at: 2009-08-28 12:44:14

This is a great tutorial for those first dipping into raid on linux. A few comments though....

A short abstract would help. It took me a while to realize that what was going on was this:

Make a new array, copy your existing drive's data to it, make it bootable, boot it, blow away your original drive by making it a part of the new array and let it sync over.

My goal was to make my new drive be the new part of the array and get synced over. I didn't realize that you can't just make a working disk part of an array, you have to start with a new disk and then sync over to it.

The other thing I wanted to add was a problem that took me a week to figure out.

I have three disks in the machine; I was only making an array out of one partition.

When the machine was booted normally, grub's perspective of hd0 was sda, hd1 was sdb and hd2 was sdc. Makes sense.

So when I tried to make sda and sdc my array, naturally I made the grub menu.lst say root(hd2,0)

Error 15 file not found.

Then after a week of trying everything I could think of, I finally got the bright idea to boot grub and get to the command line.

After some tab completion experiments I realized that on boot hd1 and hd2 are swapped.

So obviously it couldn't find the kernel or initrd because it was looking at the wrong disk. I set root(hd1,0) and voila. Boot to array.

No idea why it does that, but now I've got it working.

Thanks again for a great tutorial.