How To Set Up Software RAID1 On A Running System (Incl. GRUB Configuration) (Mandriva 2008.0) - Page 2
4 Creating Our RAID Arrays
Now let's create our RAID arrays /dev/md0, /dev/md1, and /dev/md2. /dev/hdb1 will be added to /dev/md0, /dev/hdb5 to /dev/md1, and /dev/hdb6 to /dev/md2. /dev/hda1, /dev/hda5, and /dev/hda6 can't be added right now (because the system is currently running on them), so we use the placeholder missing in the following three commands:
mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/hdb1
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/hdb5
mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/hdb6
The command
cat /proc/mdstat
should now show that you have three degraded RAID arrays ([_U] or [U_] means that an array is degraded while [UU] means that the array is ok):
[root@server1 ~]# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 hdb6[1]
4642688 blocks [2/1] [_U]
md1 : active raid1 hdb5[1]
417536 blocks [2/1] [_U]
md0 : active raid1 hdb1[1]
176576 blocks [2/1] [_U]
unused devices: <none>
[root@server1 ~]#
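If you want more detail about one of the new arrays than /proc/mdstat provides, you can also query it directly with mdadm (shown here for /dev/md0 as an example; on a degraded array the output should list /dev/hdb1 as the only active member and one slot as removed):
mdadm --detail /dev/md0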
Next we create filesystems on our RAID arrays (ext3 on /dev/md0 and /dev/md2 and swap on /dev/md1):
mkfs.ext3 /dev/md0
mkswap /dev/md1
mkfs.ext3 /dev/md2
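If you want to double-check that the filesystems and the swap signature were actually created, blkid (if it is available on your system) prints the detected type for each array:
blkid /dev/md0 /dev/md1 /dev/md2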
Next we must adjust /etc/mdadm.conf (which doesn't contain any information about our new RAID arrays yet) to the new situation:
cp /etc/mdadm.conf /etc/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm.conf
Display the contents of the file:
cat /etc/mdadm.conf
In the file you should now see details about our three (degraded) RAID arrays:
# mdadm configuration file
#
# mdadm will function properly without the use of a configuration file,
# but this file is useful for keeping track of arrays and member disks.
# In general, a mdadm.conf file is created, and updated, after arrays
# are created. This is the opposite behavior of /etc/raidtab which is
# created prior to array construction.
#
#
# the config file takes two types of lines:
#
#       DEVICE lines specify a list of devices of where to look for
#       potential member disks
#
#       ARRAY lines specify information about how to identify arrays so
#       so that they can be activated
#
# You can have more than one device line and use wild cards. The first
# example includes SCSI the first partition of SCSI disks /dev/sdb,
# /dev/sdc, /dev/sdd, /dev/sdj, /dev/sdk, and /dev/sdl. The second
# line looks for array slices on IDE disks.
#
#DEVICE /dev/sd[bcdjkl]1
#DEVICE /dev/hda1 /dev/hdb1
#
# If you mount devfs on /dev, then a suitable way to list all devices is:
#DEVICE /dev/discs/*/*
#
#
#
# ARRAY lines specify an array to assemble and a method of identification.
# Arrays can currently be identified by using a UUID, superblock minor number,
# or a listing of devices.
#
#       super-minor is usually the minor number of the metadevice
#       UUID is the Universally Unique Identifier for the array
#       Each can be obtained using
#
#       mdadm -D <md>
#
#ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
#ARRAY /dev/md1 super-minor=1
#ARRAY /dev/md2 devices=/dev/hda1,/dev/hdb1
#
# ARRAY lines can also specify a "spare-group" for each array. mdadm --monitor
# will then move a spare between arrays in a spare-group if one array has a failed
# drive but no spare
#ARRAY /dev/md4 uuid=b23f3c6d:aec43a9f:fd65db85:369432df spare-group=group1
#ARRAY /dev/md5 uuid=19464854:03f71b1b:e0df2edd:246cc977 spare-group=group1
#
# When used in --follow (aka --monitor) mode, mdadm needs a
# mail address and/or a program. This can be given with "mailaddr"
# and "program" lines to that monitoring can be started using
#    mdadm --follow --scan & echo $! > /var/run/mdadm
# If the lines are not found, mdadm will exit quietly
#MAILADDR [email protected]
#PROGRAM /usr/sbin/handle-mdadm-events
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=6b4f013f:6fe18719:5904a9bd:70e9cee6
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=63194e2e:c656857a:3237a906:0616f49e
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=edec7105:62700dc0:643e9917:176563a7
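If you only want to see the array definitions that were just appended (without all the comments), a simple grep for lines beginning with ARRAY is enough:
grep '^ARRAY' /etc/mdadm.conf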
5 Adjusting The System To RAID1
Now let's mount /dev/md0 and /dev/md2 (we don't need to mount the swap array /dev/md1):
mkdir /mnt/md0
mkdir /mnt/md2
mount /dev/md0 /mnt/md0
mount /dev/md2 /mnt/md2
You should now find both arrays in the output of
mount
[root@server1 ~]# mount
/dev/hda6 on / type ext3 (rw,relatime)
none on /proc type proc (rw)
/dev/hda1 on /boot type ext3 (rw,relatime)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/md0 on /mnt/md0 type ext3 (rw)
/dev/md2 on /mnt/md2 type ext3 (rw)
[root@server1 ~]#
Next we modify /etc/fstab. Replace /dev/hda1 with /dev/md0, /dev/hda5 with /dev/md1, and /dev/hda6 with /dev/md2 so that the file looks as follows:
vi /etc/fstab
/dev/md2 / ext3 relatime 1 1
/dev/md0 /boot ext3 relatime 1 2
/dev/cdrom /media/cdrom auto umask=0022,users,iocharset=utf8,noauto,ro,exec 0 0
/dev/fd0 /media/floppy auto umask=0022,users,iocharset=utf8,noauto,exec,flush 0 0
none /proc proc defaults 0 0
/dev/md1 swap swap defaults 0 0
Next replace /dev/hda1 with /dev/md0 and /dev/hda6 with /dev/md2 in /etc/mtab (you can ignore the two /dev/md lines at the end of the file):
vi /etc/mtab
/dev/md2 / ext3 rw,relatime 0 0
none /proc proc rw 0 0
/dev/md0 /boot ext3 rw,relatime 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
/dev/md0 /mnt/md0 ext3 rw 0 0
/dev/md2 /mnt/md2 ext3 rw 0 0
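If you prefer not to make the /etc/fstab and /etc/mtab replacements by hand, the same edits can be done with sed. This is just a sketch: it assumes the device names used in this guide, and it assumes /etc/mtab is a regular file (which it normally is on Mandriva 2008.0). The -i.orig switch keeps a backup copy of each file; verify the results with cat afterwards:
sed -i.orig -e 's|/dev/hda1|/dev/md0|g' -e 's|/dev/hda5|/dev/md1|g' -e 's|/dev/hda6|/dev/md2|g' /etc/fstab
sed -i.orig -e 's|/dev/hda1|/dev/md0|g' -e 's|/dev/hda6|/dev/md2|g' /etc/mtab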
Now on to the GRUB boot loader. Open /boot/grub/menu.lst and add fallback 1 right after default 0:
vi /boot/grub/menu.lst
[...]
default 0
fallback 1
[...]
This makes sure that if the first kernel entry fails to boot (counting starts with 0, so the first entry is 0), GRUB falls back to the second entry (entry 1).
In the same file, go to the bottom where you should find some kernel stanzas. Copy the first of them and paste the stanza before the first existing stanza; replace root=/dev/hda6 with root=/dev/md2 and (hd0,0) with (hd1,0). If you have something like resume=/dev/hda5 in your kernel stanza, replace it with resume=/dev/md1:
[...]
title linux
kernel (hd1,0)/vmlinuz BOOT_IMAGE=linux root=/dev/md2 resume=/dev/md1
initrd (hd1,0)/initrd.img

title linux
kernel (hd0,0)/vmlinuz BOOT_IMAGE=linux root=/dev/hda6 resume=/dev/hda5
initrd (hd0,0)/initrd.img
The whole file should look something like this:
timeout 10
color black/cyan yellow/cyan
default 0
fallback 1

title linux
kernel (hd1,0)/vmlinuz BOOT_IMAGE=linux root=/dev/md2 resume=/dev/md1
initrd (hd1,0)/initrd.img

title linux
kernel (hd0,0)/vmlinuz BOOT_IMAGE=linux root=/dev/hda6 resume=/dev/hda5
initrd (hd0,0)/initrd.img

title failsafe
kernel (hd0,0)/vmlinuz BOOT_IMAGE=failsafe root=/dev/hda6 failsafe
initrd (hd0,0)/initrd.img
(hd1,0) refers to /dev/hdb, which is already part of our RAID arrays. We will reboot the system in a few moments; it will then try to boot from our (still degraded) RAID arrays, and if that fails, it will fall back to /dev/hda (hence fallback 1).
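Before rebooting, it may be worth a quick sanity check that the fallback line and the new root device really made it into the menu; a simple grep over the file is enough for that:
grep -E 'default|fallback|root=' /boot/grub/menu.lst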
Next we adjust our ramdisk to the new situation:
mv /boot/initrd-`uname -r`.img /boot/initrd-`uname -r`.img_orig
mkinitrd /boot/initrd-`uname -r`.img `uname -r`
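Both the backup and the newly generated ramdisk should now be present in /boot; a quick listing confirms this (the exact file names depend on your kernel version):
ls -l /boot/initrd-`uname -r`.img /boot/initrd-`uname -r`.img_orig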
Now we copy the contents of /dev/hda1 and /dev/hda6 to /dev/md0 and /dev/md2 (which are mounted on /mnt/md0 and /mnt/md2):
cp -dpRx / /mnt/md2
cd /boot
cp -dpRx . /mnt/md0
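Depending on the amount of data this can take a while. Once the copy has finished, a comparison of the used space should show roughly the same figures for the original partitions and their RAID counterparts (the numbers will not match exactly):
df -h / /boot /mnt/md0 /mnt/md2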
6 Preparing GRUB (Part 1)
Afterwards we must install the GRUB bootloader on the second hard drive /dev/hdb:
grub
On the GRUB shell, type in the following commands:
root (hd0,0)
grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0x83
grub>
setup (hd0)
grub> setup (hd0)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd0)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/menu.lst"... succeeded
Done.
grub>
root (hd1,0)
grub> root (hd1,0)
Filesystem type is ext2fs, partition type 0xfd
grub>
setup (hd1)
grub> setup (hd1)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd1)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd1) (hd1)1+15 p (hd1,0)/grub/stage2 /grub/menu.lst"... succeeded
Done.
grub>
quit
Now, back on the normal shell, we reboot the system and hope that it boots ok from our RAID arrays:
reboot