How To Set Up Software RAID1 On A Running LVM System (Incl. GRUB Configuration) (Fedora 8) - Page 2


4 Creating Our RAID Arrays

Now let's create our RAID arrays /dev/md0 and /dev/md1. /dev/sdb1 will be added to /dev/md0 and /dev/sdb2 to /dev/md1. /dev/sda1 and /dev/sda2 can't be added right now (because the system is currently running on them), therefore we use the placeholder missing in the following two commands:

mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2

The command

cat /proc/mdstat

should now show that you have two degraded RAID arrays ([_U] or [U_] means that an array is degraded while [UU] means that the array is ok):

[root@server1 ~]# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb2[1]
      5036288 blocks [2/1] [_U]

md0 : active raid1 sdb1[1]
      200704 blocks [2/1] [_U]

unused devices: <none>
[root@server1 ~]#
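
If you want more detail about an individual array than /proc/mdstat provides, mdadm can print a full per-array report (state, member devices, UUID); the output varies from system to system, so it is not reproduced here:

mdadm --detail /dev/md0
mdadm --detail /dev/md1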

Next we create a filesystem (ext3) on our non-LVM RAID array /dev/md0:

mkfs.ext3 /dev/md0
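
If you like, you can verify the new filesystem by dumping its superblock parameters with tune2fs (this is read-only, so it is safe to run):

tune2fs -l /dev/md0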

Now we come to our LVM RAID array /dev/md1. To prepare it for LVM, we run:

pvcreate /dev/md1

Then we add /dev/md1 to our volume group VolGroup00:

vgextend VolGroup00 /dev/md1

The output of

pvdisplay

should now be similar to this:

[root@server1 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               VolGroup00
  PV Size               4.80 GB / not usable 22.34 MB
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              153
  Free PE               1
  Allocated PE          152
  PV UUID               op2n3N-rck1-Pywc-9wTY-EUxQ-KUcr-2YeRJ0

  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               VolGroup00
  PV Size               4.80 GB / not usable 22.25 MB
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              153
  Free PE               153
  Allocated PE          0
  PV UUID               pS3xiy-AEnZ-p3Wf-qY2D-cGus-eyGl-03mWyg

[root@server1 ~]#

The output of

vgdisplay

should be as follows:

[root@server1 ~]# vgdisplay
  --- Volume group ---
  VG Name               VolGroup00
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               9.56 GB
  PE Size               32.00 MB
  Total PE              306
  Alloc PE / Size       152 / 4.75 GB
  Free  PE / Size       154 / 4.81 GB
  VG UUID               jJj1DQ-SvKY-6hdr-3MMS-8NOd-pb3l-lS7TA1

[root@server1 ~]#

Next we create /etc/mdadm.conf as follows:

mdadm --examine --scan > /etc/mdadm.conf

Display the contents of the file:

cat /etc/mdadm.conf

In the file you should now see details about our two (degraded) RAID arrays:

ARRAY /dev/md0 level=raid1 num-devices=2 UUID=7d2bf9c3:7cd9df21:f782dab8:9212d7cb
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=d93a2387:6355b5c5:25ed3e50:2a0e4f96

Next we modify /etc/fstab. Replace LABEL=/boot with /dev/md0 so that the file looks as follows:

vi /etc/fstab

/dev/VolGroup00/LogVol00 /                       ext3    defaults        1 1
/dev/md0             /boot                   ext3    defaults        1 2
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/VolGroup00/LogVol01 swap                    swap    defaults        0 0

Next replace /dev/sda1 with /dev/md0 in /etc/mtab:

vi /etc/mtab

/dev/mapper/VolGroup00-LogVol00 / ext3 rw 0 0
proc /proc proc rw 0 0
sysfs /sys sysfs rw 0 0
devpts /dev/pts devpts rw,gid=5,mode=620 0 0
/dev/md0 /boot ext3 rw 0 0
tmpfs /dev/shm tmpfs rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw 0 0
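
If you prefer to make these two substitutions non-interactively instead of editing the files in vi, a pair of sed one-liners like the following should work. This is only a sketch; it assumes your /etc/fstab and /etc/mtab contain exactly the default entries shown above, so back both files up first:

cp /etc/fstab /etc/fstab.orig
cp /etc/mtab /etc/mtab.orig
sed -i 's|LABEL=/boot|/dev/md0|' /etc/fstab
sed -i 's|/dev/sda1 /boot|/dev/md0 /boot|' /etc/mtab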

Now on to the GRUB boot loader. Open /boot/grub/menu.lst and add fallback=1 right after default=0:

vi /boot/grub/menu.lst

[...]
default=0
fallback=1
[...]

This means that if the first kernel (counting starts with 0, so the first kernel is 0) fails to boot, the second kernel (fallback=1) will be booted instead.

In the same file, go to the bottom, where you should find some kernel stanzas. Copy the first of them and paste it before the first existing stanza; then replace root (hd0,0) with root (hd1,0) in the copy:

[...]
title Fedora (2.6.23.1-42.fc8)
        root (hd1,0)
        kernel /vmlinuz-2.6.23.1-42.fc8 ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.23.1-42.fc8.img
title Fedora (2.6.23.1-42.fc8)
        root (hd0,0)
        kernel /vmlinuz-2.6.23.1-42.fc8 ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.23.1-42.fc8.img

The whole file should look something like this:

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
#          initrd /initrd-version.img
#boot=/dev/sda
default=0
fallback=1
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Fedora (2.6.23.1-42.fc8)
        root (hd1,0)
        kernel /vmlinuz-2.6.23.1-42.fc8 ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.23.1-42.fc8.img
title Fedora (2.6.23.1-42.fc8)
        root (hd0,0)
        kernel /vmlinuz-2.6.23.1-42.fc8 ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.23.1-42.fc8.img

root (hd1,0) refers to /dev/sdb, which is already part of our RAID arrays. We will reboot the system in a few moments; it will then try to boot from our (still degraded) RAID arrays; if that fails, it will boot from /dev/sda (-> fallback=1).

Next we rebuild our initial ramdisk to match the new situation:

mv /boot/initrd-`uname -r`.img /boot/initrd-`uname -r`.img_orig
mkinitrd /boot/initrd-`uname -r`.img `uname -r`
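
To sanity-check that the freshly built initrd knows about RAID, you can list its contents and look for the raid modules; on Fedora 8 the initrd is a gzipped cpio archive, so something like this should work (the exact module names may vary):

zcat /boot/initrd-`uname -r`.img | cpio -it | grep raid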

 

5 Moving Our Data To The RAID Arrays

Now that we've modified all configuration files, we can copy the contents of /dev/sda to /dev/sdb (including the configuration changes we've made in the previous chapter).

To move the contents of our LVM partition /dev/sda2 to our LVM RAID array /dev/md1, we use the pvmove command:

pvmove /dev/sda2 /dev/md1

This can take some time, so please be patient.
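
(With LVM2, pvmove can also report its own progress at a fixed interval, and an interrupted move can be resumed by running pvmove without arguments. Had we wanted a status line every ten seconds, we could have started the move like this instead:

pvmove -i 10 /dev/sda2 /dev/md1

)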

Afterwards, we remove /dev/sda2 from the volume group VolGroup00...

vgreduce VolGroup00 /dev/sda2

... and tell the system not to use /dev/sda2 for LVM anymore:

pvremove /dev/sda2

The output of

pvdisplay

should now be as follows:

[root@server1 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               VolGroup00
  PV Size               4.80 GB / not usable 22.25 MB
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              153
  Free PE               1
  Allocated PE          152
  PV UUID               pS3xiy-AEnZ-p3Wf-qY2D-cGus-eyGl-03mWyg

[root@server1 ~]#

Next we change the partition type of /dev/sda2 to Linux raid autodetect and add /dev/sda2 to the /dev/md1 array:

fdisk /dev/sda

[root@server1 ~]# fdisk /dev/sda

Command (m for help): <- t
Partition number (1-4): <- 2
Hex code (type L to list codes): <- fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): <- w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@server1 ~]#
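
(As a non-interactive alternative to fdisk, older sfdisk versions can change the partition type in one shot; check that your sfdisk supports the --change-id option before relying on this:

sfdisk --change-id /dev/sda 2 fd

)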

mdadm --add /dev/md1 /dev/sda2

Now take a look at

cat /proc/mdstat

... and you should see that the RAID array /dev/md1 is being synchronized:

[root@server1 ~]# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda2[2] sdb2[1]
      5036288 blocks [2/1] [_U]
      [=====>...............]  recovery = 28.8% (1454272/5036288) finish=2.8min speed=21132K/sec

md0 : active raid1 sdb1[1]
      200704 blocks [2/1] [_U]

unused devices: <none>
[root@server1 ~]#

(You can run

watch cat /proc/mdstat

to get an ongoing output of the process. To leave watch, press CTRL+C.)

Wait until the synchronization has finished; the output should then look like this:

[root@server1 ~]# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda2[0] sdb2[1]
      5036288 blocks [2/2] [UU]

md0 : active raid1 sdb1[1]
      200704 blocks [2/1] [_U]

unused devices: <none>
[root@server1 ~]#

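(Instead of polling /proc/mdstat, you can make mdadm block until the recovery is done, provided your mdadm version supports the --wait misc option:

mdadm --wait /dev/md1

)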

Now let's mount /dev/md0:

mkdir /mnt/md0

mount /dev/md0 /mnt/md0

You should now find the array in the output of

mount

[root@server1 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/md0 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/md0 on /mnt/md0 type ext3 (rw)
[root@server1 ~]#

Now we copy the contents of /dev/sda1 to /dev/md0 (which is mounted on /mnt/md0):

cd /boot
cp -dpRx . /mnt/md0
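
(The flags preserve symlinks (-d) as well as permissions, ownership, and timestamps (-p), copy recursively (-R), and stay on this one filesystem (-x). If you want to spot-check the copy afterwards, you can compare the two trees; apart from lost+found, they should match, and no output means no differences were found:

diff -r /boot /mnt/md0

)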

