How To Set Up Software RAID1 On A Running System (Incl. GRUB Configuration) (CentOS 5.3)

7 Preparing /dev/sda

If all goes well, you should now find /dev/md0 and /dev/md2 in the output of

df -h

[root@server1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md2              9.2G  1.1G  7.7G  12% /
/dev/md0              190M   14M  167M   8% /boot
tmpfs                 252M     0  252M   0% /dev/shm
[root@server1 ~]#
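(/dev/md1 does not appear here because it holds the swap space, which df does not list. Should /dev/md0 or /dev/md2 be missing, double-check that /etc/fstab was updated to point at the RAID devices in the previous step; a quick check:

grep /dev/md /etc/fstab

This should show /dev/md2 for /, /dev/md0 for /boot, and /dev/md1 as swap.)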

The output of

cat /proc/mdstat

should be as follows:

[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1]
      200704 blocks [2/1] [_U]

md1 : active raid1 sdb2[1]
      522048 blocks [2/1] [_U]

md2 : active raid1 sdb3[1]
      9759360 blocks [2/1] [_U]

unused devices: <none>
[root@server1 ~]#
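The [_U] indicates that each array is running in degraded mode with only one of its two members active (the partition on /dev/sdb); the underscore stands for the missing /dev/sda member that we are about to add. To inspect a single array in more detail, you can run, for example:

mdadm --detail /dev/md0

Among other things, this shows the array state (it should currently be clean, degraded) and which member slots are active or removed.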

Now we must change the partition types of our three partitions on /dev/sda to Linux raid autodetect (type fd) as well:

fdisk /dev/sda

[root@server1 ~]# fdisk /dev/sda

The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): <-- t
Partition number (1-4): <-- 1
Hex code (type L to list codes): <-- fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): <-- t
Partition number (1-4): <-- 2
Hex code (type L to list codes): <-- fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): <-- t
Partition number (1-4): <-- 3
Hex code (type L to list codes): <-- fd
Changed system type of partition 3 to fd (Linux raid autodetect)

Command (m for help): <-- w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@server1 ~]#
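The warning about error 16 is harmless: /dev/sda is still in use, so the kernel keeps the old in-memory table until the next reboot, but the type changes have been written to disk and do not block the following steps. If you prefer a non-interactive approach, the same type changes can also be scripted with sfdisk (a sketch, assuming the sfdisk shipped with CentOS 5's util-linux):

sfdisk --change-id /dev/sda 1 fd
sfdisk --change-id /dev/sda 2 fd
sfdisk --change-id /dev/sda 3 fd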

Now we can add /dev/sda1, /dev/sda2, and /dev/sda3 to the respective RAID arrays:

mdadm --add /dev/md0 /dev/sda1
mdadm --add /dev/md1 /dev/sda2
mdadm --add /dev/md2 /dev/sda3

Now take a look at

cat /proc/mdstat

... and you should see that the RAID arrays are being synchronized:

[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      200704 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      522048 blocks [2/2] [UU]

md2 : active raid1 sda3[2] sdb3[1]
      9759360 blocks [2/1] [_U]
      [====>................]  recovery = 22.8% (2232576/9759360) finish=2.4min speed=50816K/sec

unused devices: <none>
[root@server1 ~]#

(You can run

watch cat /proc/mdstat

to get an ongoing output of the process. To leave watch, press CTRL+C.)
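Alternatively, mdadm can block until the rebuild is done; a minimal sketch:

mdadm --wait /dev/md0 /dev/md1 /dev/md2

This returns once no resync or recovery activity is left on the given arrays.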

Wait until the synchronization has finished. The output should then look like this:

[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      200704 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      522048 blocks [2/2] [UU]

md2 : active raid1 sda3[0] sdb3[1]
      9759360 blocks [2/2] [UU]

unused devices: <none>
[root@server1 ~]#


Then adjust /etc/mdadm.conf to the new situation:

mdadm --examine --scan > /etc/mdadm.conf

/etc/mdadm.conf should now look something like this:

cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=78d582f0:940fabb5:f1c1092a:04a55452
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=8db8f7e1:f2a64674:d22afece:4a539aa7
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=1baf282d:17c58efd:a8de6947:b0af9792
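As a sanity check, you can compare this with the kernel's view of the assembled arrays: --examine --scan reads the RAID superblocks on the component partitions, while the following command queries the running arrays:

mdadm --detail --scan

The ARRAY lines, and in particular the UUIDs, reported by both commands should match.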

 

8 Preparing GRUB (Part 2)

We are almost done. Now we must modify /boot/grub/menu.lst again. At the moment it is configured to boot from /dev/sdb (hd1,0); of course, we still want the system to be able to boot in case /dev/sdb fails. Therefore we copy the first kernel stanza (the one containing hd1), paste it below, and replace hd1 with hd0, then we comment out all other kernel stanzas. Because the file contains default=0 and fallback=1, GRUB will boot the first stanza (hd1) by default and fall back to the second stanza (hd0) automatically if that fails. The result should look as follows:

vi /boot/grub/menu.lst
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/sda3
#          initrd /initrd-version.img
#boot=/dev/sda
default=0
fallback=1
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-128.el5)
        root (hd1,0)
        kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/md2
        initrd /initrd-2.6.18-128.el5.img

title CentOS (2.6.18-128.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/md2
        initrd /initrd-2.6.18-128.el5.img

#title CentOS (2.6.18-128.el5)
#       root (hd0,0)
#       kernel /vmlinuz-2.6.18-128.el5 ro root=LABEL=/
#       initrd /initrd-2.6.18-128.el5.img

Afterwards, update your ramdisk:

mv /boot/initrd-`uname -r`.img /boot/initrd-`uname -r`.img_orig2
mkinitrd /boot/initrd-`uname -r`.img `uname -r`
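If you want to verify that the new ramdisk really contains the RAID driver, you can list its contents (a sketch, assuming the gzipped cpio initrd format that CentOS 5's mkinitrd produces):

zcat /boot/initrd-`uname -r`.img | cpio -t | grep raid

The listing should include raid1.ko.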

Then reboot the system:

reboot

It should boot without problems.
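Should the machine ever fail to come up from one of the disks, the most likely cause is a missing GRUB stage1 in that disk's MBR. Assuming GRUB was installed to both drives in the earlier GRUB step (Part 1), nothing more is needed here, but it can be (re)installed at any time from the grub shell:

grub

grub> root (hd0,0)
grub> setup (hd0)
grub> root (hd1,0)
grub> setup (hd1)
grub> quit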

That's it - you've successfully set up software RAID1 on your running CentOS 5.3 system!
