How To Set Up Software RAID1 On A Running System (Incl. GRUB Configuration) (Mandriva 2008.0) - Page 3

7 Preparing /dev/hda

If all goes well, you should now find /dev/md0 and /dev/md2 in the output of

df -h

[root@server1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md2              4.4G  757M  3.4G  18% /
/dev/md0              167M  9.0M  150M   6% /boot
[root@server1 ~]#

The output of

cat /proc/mdstat

should be as follows:

[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 hdb5[1]
      417536 blocks [2/1] [_U]

md0 : active raid1 hdb1[1]
      176576 blocks [2/1] [_U]

md2 : active raid1 hdb6[1]
      4642688 blocks [2/1] [_U]

unused devices: <none>
[root@server1 ~]#
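The `[_U]` in each status line shows that every array is currently running on only one of its two mirrors. If you want to check this programmatically rather than by eye, a small helper like the following can list the degraded arrays (a sketch; the `degraded_arrays` name is my own, not part of mdadm):

```shell
# List md arrays whose status shows a missing member, e.g. [_U].
# Reads mdstat-formatted text on stdin; in real use:
#   degraded_arrays < /proc/mdstat
degraded_arrays() {
  awk '/^md/ { name = $1 }                  # remember the array name
       /\[[U_]*_[U_]*\]/ { print name }'    # status brackets contain "_"
}
```

With all three arrays still missing their /dev/hda members, this would print md0, md1, and md2.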

Now we must change the partition types of our three partitions on /dev/hda to Linux raid autodetect as well:

fdisk /dev/hda

[root@server1 ~]# fdisk /dev/hda

Command (m for help): <-- t
Partition number (1-6): <-- 1
Hex code (type L to list codes): <-- fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): <-- t
Partition number (1-6): <-- 5
Hex code (type L to list codes): <-- fd
Changed system type of partition 5 to fd (Linux raid autodetect)

Command (m for help): <-- t
Partition number (1-6): <-- 6
Hex code (type L to list codes): <-- fd
Changed system type of partition 6 to fd (Linux raid autodetect)

Command (m for help): <-- w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@server1 ~]#
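If you prefer to script the type change instead of typing it into fdisk interactively, sfdisk can do the same thing non-interactively (a sketch; the sfdisk shipped with distributions of this era uses `--change-id`, while newer util-linux versions call the equivalent option `--part-type`). The `echo` prefix makes this a dry run that only prints the commands; remove it to actually apply them:

```shell
# Dry run: print the sfdisk commands that would switch /dev/hda1,
# /dev/hda5 and /dev/hda6 to type fd (Linux raid autodetect).
# Remove the echo to actually change the partition table.
for part in 1 5 6; do
  echo sfdisk --change-id /dev/hda "$part" fd
done
```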

Now we can add /dev/hda1, /dev/hda5, and /dev/hda6 to the respective RAID arrays:

mdadm --add /dev/md0 /dev/hda1
mdadm --add /dev/md1 /dev/hda5
mdadm --add /dev/md2 /dev/hda6

Now take a look at

cat /proc/mdstat

... and you should see that the RAID arrays are being synchronized:

[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 hda5[2] hdb5[1]
      417536 blocks [2/1] [_U]

md0 : active raid1 hda1[0] hdb1[1]
      176576 blocks [2/2] [UU]

md2 : active raid1 hda6[2] hdb6[1]
      4642688 blocks [2/1] [_U]
      [======>..............]  recovery = 34.4% (1597504/4642688) finish=1.0min speed=50349K/sec

unused devices: <none>
[root@server1 ~]#

(You can run

watch cat /proc/mdstat

to get an ongoing output of the process. To leave watch, press CTRL+C.)
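If you want a script to block until the synchronization is complete instead of watching it, a simple polling loop works (a sketch; the `wait_for_sync` name is my own — it takes the file to poll as an optional argument, defaulting to /proc/mdstat):

```shell
# Poll an mdstat file until no resync/recovery is in progress.
# In real use: wait_for_sync   (reads /proc/mdstat)
wait_for_sync() {
  mdstat=${1:-/proc/mdstat}
  while grep -qE 'resync|recovery' "$mdstat"; do
    sleep 10
  done
  echo "All arrays are in sync."
}
```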

Wait until the synchronization has finished; the output should then look like this:

[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 hda5[0] hdb5[1]
      417536 blocks [2/2] [UU]

md0 : active raid1 hda1[0] hdb1[1]
      176576 blocks [2/2] [UU]

md2 : active raid1 hda6[0] hdb6[1]
      4642688 blocks [2/2] [UU]

unused devices: <none>
[root@server1 ~]#


Then adjust /etc/mdadm.conf to the new situation:

cp -f /etc/mdadm.conf_orig /etc/mdadm.conf
mdadm --examine --scan >> /etc/mdadm.conf

/etc/mdadm.conf should now look something like this:

cat /etc/mdadm.conf

# mdadm configuration file
# mdadm will function properly without the use of a configuration file,
# but this file is useful for keeping track of arrays and member disks.
# In general, a mdadm.conf file is created, and updated, after arrays
# are created. This is the opposite behavior of /etc/raidtab which is
# created prior to array construction.
# the config file takes two types of lines:
#       DEVICE lines specify a list of devices of where to look for
#         potential member disks
#       ARRAY lines specify information about how to identify arrays so
#         that they can be activated
# You can have more than one device line and use wild cards. The first
# example includes the first partition of the SCSI disks /dev/sdb,
# /dev/sdc, /dev/sdd, /dev/sdj, /dev/sdk, and /dev/sdl. The second
# line looks for array slices on IDE disks.
#DEVICE /dev/sd[bcdjkl]1
#DEVICE /dev/hda1 /dev/hdb1
# If you mount devfs on /dev, then a suitable way to list all devices is:
#DEVICE /dev/discs/*/*
# ARRAY lines specify an array to assemble and a method of identification.
# Arrays can currently be identified by using a UUID, superblock minor number,
# or a listing of devices.
#       super-minor is usually the minor number of the metadevice
#       UUID is the Universally Unique Identifier for the array
# Each can be obtained using
#       mdadm -D <md>
#ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
#ARRAY /dev/md1 super-minor=1
#ARRAY /dev/md2 devices=/dev/hda1,/dev/hdb1
# ARRAY lines can also specify a "spare-group" for each array.  mdadm --monitor
# will then move a spare between arrays in a spare-group if one array has a failed
# drive but no spare
#ARRAY /dev/md4 uuid=b23f3c6d:aec43a9f:fd65db85:369432df spare-group=group1
#ARRAY /dev/md5 uuid=19464854:03f71b1b:e0df2edd:246cc977 spare-group=group1
# When used in --follow (aka --monitor) mode, mdadm needs a
# mail address and/or a program.  This can be given with "mailaddr"
# and "program" lines so that monitoring can be started using
#    mdadm --follow --scan & echo $! > /var/run/mdadm
# If the lines are not found, mdadm will exit quietly
#MAILADDR [email protected]
#PROGRAM /usr/sbin/handle-mdadm-events
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=6b4f013f:6fe18719:5904a9bd:70e9cee6
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=63194e2e:c656857a:3237a906:0616f49e
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=edec7105:62700dc0:643e9917:176563a7
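As a quick sanity check that the regenerated file covers all running arrays, you can compare the number of ARRAY lines in /etc/mdadm.conf with the number of arrays in /proc/mdstat (a sketch; the `count_arrays` helper name is my own):

```shell
# Compare the number of ARRAY lines in an mdadm.conf-style file with
# the number of arrays in an mdstat-style file; in real use:
#   count_arrays /etc/mdadm.conf /proc/mdstat
count_arrays() {
  conf=$(grep -c '^ARRAY' "$1")
  live=$(grep -c '^md' "$2")
  echo "mdadm.conf: $conf arrays, mdstat: $live arrays"
}
```

Both counts should be 3 here.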


8 Preparing GRUB (Part 2)

We are almost done now. Now we must modify /boot/grub/menu.lst again. Right now it is configured to boot from /dev/hdb (hd1,0). Of course, we still want the system to be able to boot in case /dev/hdb fails. Therefore we copy the first kernel stanza (which contains hd1), paste it below and replace hd1 with hd0. Furthermore we comment out all other kernel stanzas so that it looks as follows:

vi /boot/grub/menu.lst

timeout 10
color black/cyan yellow/cyan
default 0
fallback 1

title linux
kernel (hd1,0)/vmlinuz BOOT_IMAGE=linux root=/dev/md2  resume=/dev/md1
initrd (hd1,0)/initrd.img

title linux
kernel (hd0,0)/vmlinuz BOOT_IMAGE=linux root=/dev/md2  resume=/dev/md1
initrd (hd0,0)/initrd.img

#title linux
#kernel (hd0,0)/vmlinuz BOOT_IMAGE=linux root=/dev/hda6  resume=/dev/hda5
#initrd (hd0,0)/initrd.img

#title failsafe
#kernel (hd0,0)/vmlinuz BOOT_IMAGE=failsafe root=/dev/hda6  failsafe
#initrd (hd0,0)/initrd.img
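To double-check that menu.lst now has exactly one active (uncommented) stanza per disk, a small grep loop can count the kernel lines for hd0 and hd1 (a sketch; the `count_stanzas` helper name is my own):

```shell
# Count uncommented "kernel (hdX,0)" lines per disk in a menu.lst-style
# file; in real use: count_stanzas /boot/grub/menu.lst
count_stanzas() {
  for disk in hd0 hd1; do
    n=$(grep -c "^kernel (${disk},0)" "$1")
    echo "$disk: $n stanza(s)"
  done
}
```

Commented-out stanzas (lines starting with #) are not counted, so the expected result is one stanza for each disk.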

Afterwards, update your ramdisk:

mv /boot/initrd-`uname -r`.img /boot/initrd-`uname -r`.img_orig2
mkinitrd /boot/initrd-`uname -r`.img `uname -r`

... and reboot the system:

reboot

It should boot without problems.

That's it - you've successfully set up software RAID1 on your running Mandriva 2008.0 system!

Falko Timme

About Falko Timme

Falko Timme is an experienced Linux administrator and founder of Timme Hosting, a leading nginx business hosting company in Germany. He is one of the most active authors on HowtoForge since 2005 and one of the core developers of ISPConfig since 2000. He has also contributed to the O'Reilly book "Linux System Administration".
