How To Create A RAID1 Setup On An Existing CentOS/RedHat 6.0 System - Page 2

14. Open another console window and run:

  blkid | grep /dev/md

Here you will see the UUID for each md type filesystem. It should look something like this:

/dev/md0: UUID="0b0fddf7-1160-4c70-8e76-5a5a5365e07d" TYPE="ext2"
/dev/md1: LABEL="/ROOT" UUID="36d389c4-fc0f-4de7-a80b-40cc6dece66f" TYPE="ext4"
/dev/md2: UUID="47fbbe32-c756-4ea6-8fd6-b34867be0c84" TYPE="ext4"
/dev/md3: LABEL="/HOME" UUID="f92cc249-c1af-456b-a291-ee1ea9ef8e22" TYPE="ext4"

Note the UUID for /dev/md0, copy it, and paste it into fstab as shown below.
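If you prefer to grab just the UUID value for pasting, blkid can print it on its own; this is purely a convenience:

  blkid -o value -s UUID /dev/md0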

Mount the new root filesystem so that its fstab can be edited:

mount /dev/md1 /mnt/raid

In /mnt/raid/etc/fstab change the line containing the mount point /boot to the UUID of the new /dev/md0 filesystem:

UUID=0b0fddf7-1160-4c70-8e76-5a5a5365e07d /boot ext2 defaults 1 1

Repeat this for the UUID of the new / filesystem on /dev/md1: find it in the blkid output and copy it.

Now change the line containing the mount point / so that it uses that UUID:

UUID=36d389c4-fc0f-4de7-a80b-40cc6dece66f / ext4 defaults 1 1

Keep the existing lines for mounting /var and /home intact. Later on we will switch to the UUID-based entries for the new md devices, but keep those two entries commented out for the moment.

Below the existing line for /var, add the new (commented) UUID entry:

/dev/sdb5 /var ext4 defaults 1 2
#UUID=47fbbe32-c756-4ea6-8fd6-b34867be0c84 /var ext4 defaults 1 2

Do the same below the existing line for /home:

/dev/sdb6 /home ext4 defaults 1 2
#UUID=f92cc249-c1af-456b-a291-ee1ea9ef8e22 /home ext4 defaults 1 2

Next, unmount the filesystem:

  umount /mnt/raid

15. Mount /dev/md0 again to /mnt/raid.
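For example:

  mount /dev/md0 /mnt/raid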

In /mnt/raid/grub/menu.lst, change the kernel entry to:

kernel PATH-TO-KERNEL ro root=/dev/md1 SOME OPTIONS

Make sure that there is no longer an option that excludes md devices (on CentOS 6 this is typically the rd_NO_MD kernel parameter)!
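As an illustration only (the kernel version and the remaining options are placeholders and will differ on your system), a CentOS 6 kernel line might then look something like this, with root= pointing at the RAID device and rd_NO_MD removed:

kernel /vmlinuz-2.6.32-71.el6.x86_64 ro root=/dev/md1 rd_NO_LUKS rd_NO_LVM rd_NO_DM rhgb quiet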

Just to be sure the system will boot from the RAID array, copy the file /mnt/raid/grub/menu.lst to /boot/grub/menu.lst, and copy the fstab you edited on the new root filesystem (/dev/md1) to /etc/fstab.

You could make backup copies of these files first for safety, but that's the coward's way.
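A minimal sketch of those copies, assuming /dev/md0 is still mounted on /mnt/raid and using /mnt/raid-root as a purely illustrative mount point for /dev/md1:

  cp /mnt/raid/grub/menu.lst /boot/grub/menu.lst
  mkdir -p /mnt/raid-root
  mount /dev/md1 /mnt/raid-root
  cp /mnt/raid-root/etc/fstab /etc/fstab
  umount /mnt/raid-root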

16. Reboot the machine.

Enter the system BIOS, and choose the new disk as the one that your system boots from. Save the BIOS setting and boot.

17. Assuming the reboot went smoothly, change the existing partitions of the old drive to be raid device partitions:

Check the partition tables to confirm which disk is the old and which is the new one:

  fdisk -l

Examine the output and see which disk has partitions of type 83 Linux. That disk will be our old system disk.

Using fdisk, cfdisk or parted, change the partition type to 0xfd (Linux raid autodetect) on that disk's partitions sdb1, sdb2, sdb5 and sdb6. Note that I am assuming here that the old disk is still /dev/sdb.
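As a rough sketch with interactive fdisk (again assuming the old disk really is /dev/sdb):

  fdisk /dev/sdb
  # at the fdisk prompt, for each of the partitions 1, 2, 5 and 6:
  #   t    (change a partition's system id)
  #   <partition number>
  #   fd   (Linux raid autodetect)
  # then write the table and exit:
  #   w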

Run partprobe so that the kernel re-reads the changed partition table:

partprobe

Add the newly modified partitions to the RAID arrays to complete them. Note that once again I am assuming that the old disk still shows up as /dev/sdb.

mdadm /dev/md0 -a /dev/sdb1
mdadm /dev/md1 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb5
mdadm /dev/md3 -a /dev/sdb6

To see what's going on, use (in a new console window as root):

  watch -n 5 cat /proc/mdstat

The output should look similar to the one below and will be updated every 5 seconds:

Personalities : [raid1]
md1 : active raid1 sdb2[1] sda2[0]
      473792 blocks [2/2] [UU]
      [=====>...............]  recovery = 25.0% (118448/473792) finish=2.4min speed=2412K/sec

md2 : active raid1 sdb5[1] sda5[0]
      4980032 blocks [2/2] [UU]
        resync=DELAYED

md3 : active raid1 sdb6[1] sda6[0]
      3349440 blocks [2/2] [UU]
        resync=DELAYED

md0 : active raid1 sdb1[1] sda1[0]
      80192 blocks [2/2] [UU]

unused devices: <none>

As soon as all the md devices have finished recovering, your system is essentially up and running.

Next we will take some additional steps to improve performance and redundancy.

First, the system should be able to boot even if the first hard disk fails. To achieve this, the following step is needed:

18. Create a boot record on the second hard disk.

THESE INSTRUCTIONS ASSUME YOU ARE USING OLD STYLE GRUB. FOR GRUB2 SEE FUTURE INSTRUCTIONS!

To create a boot record on the second hard disk, start a grub shell:

# grub
  grub>

Set the root device temporarily to the second disk:

grub> root (hd1,0)
  Filesystem type is ext2fs, partition type is 0xfd
grub> setup (hd1)

Checking if "/boot/grub/stage1" exists ... no
Checking if "/grub/stage1" exists ... yes
Checking if "/grub/stage2" exists ... yes
Checking if "/grub/e2fs_stage1_5" exists ... yes
Running "embed /grub/e2fs_stage1_5 (hd1)" ... 16 sectors embedded.
succeeded
Running "install /grub/stage1 (hd1) (hd1)1+16 p (hd1,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.

Repeat for the first disk:

grub> root (hd0,0)
  Filesystem type is ext2fs, partition type is 0xfd
grub> setup (hd0)

Checking if "/boot/grub/stage1" exists ... no
Checking if "/grub/stage1" exists ... yes
Checking if "/grub/stage2" exists ... yes
Checking if "/grub/e2fs_stage1_5" exists ... yes
Running "embed /grub/e2fs_stage1_5 (hd1)" ... 16 sectors embedded.
succeeded
Running "install /grub/stage1 (hd0) (hd0)1+16 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.

grub> quit

Reboot the system:

reboot

It should boot without problems.

If so, disconnect the first disk (sda) and try again. Does it boot?

If so, power off, reconnect sda, and disconnect the second disk (sdb). Does it boot?
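After each of these test boots it is worth checking the array state before continuing: a disk that was disconnected will usually show up as removed and has to be re-added by hand once it is reconnected. A minimal check (the md device and partition names below are only examples and must match your own layout):

  cat /proc/mdstat
  mdadm --detail /dev/md1
  # if a member is listed as removed after a test boot, re-add it, e.g.:
  # mdadm /dev/md1 -a /dev/sda2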
