How To Set Up Software RAID1 On A Running System (Incl. GRUB Configuration) (Fedora 8) - Page 3

7 Preparing /dev/sda

If all goes well, you should now find /dev/md0 and /dev/md2 in the output of

df -h

[root@server1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md2              4.4G  2.4G  1.8G  58% /
/dev/md0               99M   15M   80M  16% /boot
tmpfs                 185M     0  185M   0% /dev/shm
[root@server1 ~]#

The output of

cat /proc/mdstat

should be as follows:

[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdb1[1]
      104320 blocks [2/1] [_U]

md1 : active raid1 sdb2[1]
      513984 blocks [2/1] [_U]

md2 : active raid1 sdb3[1]
      4618560 blocks [2/1] [_U]

unused devices: <none>
[root@server1 ~]#

Now we must change the partition types of our three partitions on /dev/sda to Linux raid autodetect as well:

fdisk /dev/sda

[root@server1 ~]# fdisk /dev/sda

Command (m for help): <-- t
Partition number (1-4): <-- 1
Hex code (type L to list codes): <-- fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): <-- t
Partition number (1-4): <-- 2
Hex code (type L to list codes): <-- fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): <-- t
Partition number (1-4): <-- 3
Hex code (type L to list codes): <-- fd
Changed system type of partition 3 to fd (Linux raid autodetect)

Command (m for help): <-- w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@server1 ~]#
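
(If you prefer to make these changes non-interactively, the old-style sfdisk shipped with Fedora 8 should be able to set the partition Id directly:

sfdisk --id /dev/sda 1 fd
sfdisk --id /dev/sda 2 fd
sfdisk --id /dev/sda 3 fd

This is an untested alternative to the fdisk session above, not part of the original steps; newer sfdisk versions use --part-type instead.)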

Now we can add /dev/sda1, /dev/sda2, and /dev/sda3 to the respective RAID arrays:

mdadm --add /dev/md0 /dev/sda1
mdadm --add /dev/md1 /dev/sda2
mdadm --add /dev/md2 /dev/sda3
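
(Optionally, you can confirm that each disk was accepted before the sync finishes; mdadm --detail prints the state of an array and its member devices:

mdadm --detail /dev/md2

This is just an extra sanity check; the steps below rely on /proc/mdstat.)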

Now take a look at

cat /proc/mdstat

... and you should see that the RAID arrays are being synchronized:

[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sda1[0] sdb1[1]
      104320 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      513984 blocks [2/2] [UU]

md2 : active raid1 sda3[2] sdb3[1]
      4618560 blocks [2/1] [_U]
      [=====>...............]  recovery = 29.9% (1384256/4618560) finish=2.3min speed=22626K/sec

unused devices: <none>
[root@server1 ~]#

(You can run

watch cat /proc/mdstat

to get an ongoing output of the process. To leave watch, press CTRL+C.)

Wait until the synchronization has finished. The output should then look like this:

[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sda1[0] sdb1[1]
      104320 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      513984 blocks [2/2] [UU]

md2 : active raid1 sda3[0] sdb3[1]
      4618560 blocks [2/2] [UU]

unused devices: <none>
[root@server1 ~]#


Then adjust /etc/mdadm.conf to the new situation:

mdadm --examine --scan > /etc/mdadm.conf

/etc/mdadm.conf should now look something like this:

cat /etc/mdadm.conf

ARRAY /dev/md0 level=raid1 num-devices=2 UUID=2848a3f5:cd1c26b6:e762ed83:696752f9
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=8a004bac:92261691:227767de:4adf6592
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=939f1c71:be9c10fd:d9e5f8c6:a46bcd49
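
(As an optional cross-check,

mdadm --detail --scan

prints ARRAY lines for the running arrays as the kernel sees them; the UUIDs should match what --examine --scan just wrote to /etc/mdadm.conf.)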

 

8 Preparing GRUB (Part 2)

We are almost done. Now we must modify /boot/grub/menu.lst again. Right now it is configured to boot from /dev/sdb (hd1,0), but of course we still want the system to be able to boot if /dev/sdb fails. Therefore we copy the first kernel stanza (the one that contains hd1), paste it below, and replace hd1 with hd0; thanks to the fallback=1 line already in the file, GRUB will automatically try the second stanza if the first one fails. Furthermore we comment out all other kernel stanzas so that the file looks as follows:

vi /boot/grub/menu.lst

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/sda3
#          initrd /initrd-version.img
#boot=/dev/sda
default=0
fallback=1
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Fedora (2.6.23.1-42.fc8)
        root (hd1,0)
        kernel /vmlinuz-2.6.23.1-42.fc8 ro root=/dev/md2 rhgb quiet
        initrd /initrd-2.6.23.1-42.fc8.img

title Fedora (2.6.23.1-42.fc8)
        root (hd0,0)
        kernel /vmlinuz-2.6.23.1-42.fc8 ro root=/dev/md2 rhgb quiet
        initrd /initrd-2.6.23.1-42.fc8.img

#title Fedora (2.6.23.1-42.fc8)
#       root (hd0,0)
#       kernel /vmlinuz-2.6.23.1-42.fc8 ro root=LABEL=/ rhgb quiet
#       initrd /initrd-2.6.23.1-42.fc8.img

Afterwards, update your ramdisk:

mv /boot/initrd-`uname -r`.img /boot/initrd-`uname -r`.img_orig2
mkinitrd /boot/initrd-`uname -r`.img `uname -r`
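
(Fedora 8 initrds are gzipped cpio archives, so

zcat /boot/initrd-`uname -r`.img | cpio -it | grep raid

should list the RAID modules bundled into the new ramdisk, e.g. raid1.ko. This is an optional sanity check, not part of the original steps.)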

... and reboot the system:

reboot

It should boot without problems.
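
(Once the system is back up, you can re-check the arrays with

cat /proc/mdstat

All three arrays should again show [2/2] [UU].)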

That's it - you've successfully set up software RAID1 on your running Fedora 8 system!


Comments

From: Anonymous at: 2010-02-09 18:29:32

Great walkthrough overall. A comment on another forum suggested it's a good idea to change fstab/mtab on the RAID, but you have to change them before creating the kernel image. So change them back on /mnt/sda to ensure the system comes up.

My big gotcha: disable SELinux.  After following the instructions and rebooting, I got "/bin/bash: Permission denied" while trying to ssh into the system.
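
(A gentler alternative, assuming the denials come from stale SELinux labels on the copied filesystem, is to force a full relabel on the next boot instead of disabling SELinux:

touch /.autorelabel
reboot

Setting SELINUX=permissive in /etc/selinux/config is another fallback; it logs denials without enforcing them.)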

From: Brandon Checketts at: 2010-01-25 21:16:12

The default speed limits on CentOS (and presumably other distros) have the RAID synchronization run at a relatively slow rate. It tries to minimize disk I/O and CPU utilization, but in many cases that makes a modern-sized hard drive take seemingly forever to sync.

Use this command to raise the minimum speed at which the drives sync to 100 MB/s:

 

[root@host ~]# echo 100000 > /proc/sys/dev/raid/speed_limit_min
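
(The value is in KB/s. The same tunable is exposed through sysctl, so

sysctl -w dev.raid.speed_limit_min=100000

should be equivalent, and adding the line dev.raid.speed_limit_min = 100000 to /etc/sysctl.conf would make it persistent across reboots.)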

From: Sverre at: 2010-08-04 06:19:07

After following the instructions in CentOS 5.2, I'd get a kernel panic.

I logged in with the LiveCD and re-mounted the volumes and temp filesystems:

 

mdadm --examine --scan /dev/sdb1 >> /etc/mdadm.conf
mdadm --examine --scan /dev/sdb2 >> /etc/mdadm.conf

lvm vgchange -ay VolGroup01

mount /dev/VolGroup01/LogVol00 /mnt/sysimage
mount /dev/md0 /mnt/sysimage/boot

mount -t sysfs sys /mnt/sysimage/sys

I had to:
- edit /boot/grub/device.map (I added (hd1) /dev/sdb)
- delete the contents of /etc/blkid and re-run blkid
- delete /etc/lvm/cache/.cache and run vgscan
- edit /etc/sysconfig/grub (replace sda with md0)

I also updated /root/anaconda.ks - but I don't think that makes any difference unless you boot the LiveCD into rescue mode?

THEN I did the initrd and grub steps.

From: SmogMonkey at: 2013-05-14 20:15:33

Just in case someone runs across this (I happened to be running CentOS 5): make sure you update bash (yum update bash). According to Red Hat, the issue is with bash, not mkinitrd. I updated bash and was able to complete the commands above. Error output:
mkinitrd /boot/initrd-`uname -r`.img `uname -r`
/sbin/mkinitrd: line 489: syntax error in conditional expression: unexpected token `('
/sbin/mkinitrd: line 489: syntax error near `^(d'
/sbin/mkinitrd: line 489: `        if [[ "$device" =~ ^(dm-|mapper/|mpath/) ]]; then' 

So far so good. Glad I found this guide!

From: stu at: 2009-08-28 12:44:14

This is a great tutorial for those first dipping into RAID on Linux. A few comments, though...

A short abstract would help. It took me a while to realize that what was going on was this:

Make a new array, copy your existing drive's data to it, make it bootable, boot it, blow away your original drive by making it a part of the new array and let it sync over.

My goal was to make my new drive be the new part of the array and get synced over. I didn't realize that you can't just make a working disk part of an array, you have to start with a new disk and then sync over to it.

The other thing I wanted to add is that I had a problem that took me a week to figure out.

I have three disks in the machine, and I was only making an array out of one partition.

When the machine was booted normally, grub's perspective of hd0 was sda, hd1 was sdb and hd2 was sdc. Makes sense.

So when I tried to make sda and sdc my array, I naturally made the grub menu.lst say root (hd2,0).

Error 15 file not found.

Then after a week of trying everything I could think of, I finally got the bright idea to boot grub and get to the command line.

After some tab completion experiments I realized that on boot hd1 and hd2 are swapped.

So obviously it couldn't find the kernel or initrd because it was looking at the wrong disk. I set root(hd1,0) and voila. Boot to array.

No idea why it does that, but now I've got it working.
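
(For anyone debugging the same mismatch: legacy GRUB's find command, run from the GRUB command line at boot, prints every (hdX,Y) that contains a given file, which makes the boot-time drive mapping visible:

find /grub/stage1

The path assumes a separate /boot partition, as in this tutorial; with /boot on the root filesystem it would be /boot/grub/stage1.)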

Thanks again for a great tutorial.