How To Set Up Software RAID1 On A Running LVM System (Incl. GRUB Configuration) (Fedora 8) - Page 3

6 Preparing GRUB

Afterwards we must make sure that the GRUB bootloader gets installed on both hard drives, /dev/sda and /dev/sdb:

grub

In the GRUB shell, type in the following commands:

root (hd0,0)

grub> root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83

grub>

setup (hd0)

grub> setup (hd0)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd0)"...  16 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd0) (hd0)1+16 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.

grub>

root (hd1,0)

grub> root (hd1,0)
 Filesystem type is ext2fs, partition type 0xfd

grub>

setup (hd1)

grub> setup (hd1)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd1)"...  16 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd1) (hd1)1+16 p (hd1,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.

grub>

quit
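
The same commands can also be fed to the GRUB shell non-interactively. This is just an optional shortcut (a minimal sketch, assuming the legacy grub binary accepts the --batch switch, which it normally does on Fedora 8):

grub --batch <<EOF
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
EOF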

Now, back on the normal shell, we reboot the system and hope that it boots ok from our RAID arrays:

reboot

 

7 Preparing /dev/sda

If all goes well, you should now find /dev/md0 in the output of

df -h

[root@server1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      4.1G  2.0G  1.9G  51% /
/dev/md0              190M   16M  165M   9% /boot
tmpfs                 151M     0  151M   0% /dev/shm
[root@server1 ~]#
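
/boot is now mounted from the RAID device /dev/md0 instead of a plain disk partition. If you want to double-check which device is behind /boot, you can for example run:

mount | grep /boot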

The output of

cat /proc/mdstat

should be as follows:

[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdb1[1]
      200704 blocks [2/1] [_U]

md1 : active raid1 sda2[0] sdb2[1]
      5036288 blocks [2/2] [UU]

unused devices: <none>
[root@server1 ~]#
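
[_U] means that md0 is currently running in degraded mode with only /dev/sdb1 active - /dev/sda1 will be added further down in this chapter. If you want a more detailed status report than /proc/mdstat provides, you can also query mdadm directly, e.g.:

mdadm --detail /dev/md0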

The outputs of pvdisplay, vgdisplay, and lvdisplay should be as follows:

pvdisplay

[root@server1 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               VolGroup00
  PV Size               4.80 GB / not usable 22.25 MB
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              153
  Free PE               1
  Allocated PE          152
  PV UUID               pS3xiy-AEnZ-p3Wf-qY2D-cGus-eyGl-03mWyg

[root@server1 ~]#

vgdisplay

[root@server1 ~]# vgdisplay
  --- Volume group ---
  VG Name               VolGroup00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               4.78 GB
  PE Size               32.00 MB
  Total PE              153
  Alloc PE / Size       152 / 4.75 GB
  Free  PE / Size       1 / 32.00 MB
  VG UUID               jJj1DQ-SvKY-6hdr-3MMS-8NOd-pb3l-lS7TA1

[root@server1 ~]#

lvdisplay

[root@server1 ~]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVol00
  VG Name                VolGroup00
  LV UUID                yt5b4f-m2XC-F3aP-032r-ulAT-Re5P-lmh6hy
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                4.16 GB
  Current LE             133
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVol01
  VG Name                VolGroup00
  LV UUID                VrPqpP-40ym-55Gs-ShVm-Hlzs-Jzot-oYnonY
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                608.00 MB
  Current LE             19
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:1

[root@server1 ~]#
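
If you only need a quick overview instead of the full reports, the LVM tools also provide the compact list commands pvs, vgs, and lvs (they ship with any recent lvm2 package and print one line per physical volume, volume group, and logical volume):

pvs
vgs
lvs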

Now we must change the partition type of /dev/sda1 to Linux raid autodetect as well:

fdisk /dev/sda

[root@server1 ~]# fdisk /dev/sda

Command (m for help): <- t
Partition number (1-4): <- 1
Hex code (type L to list codes): <- fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): <- w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@server1 ~]#
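
The warning about error 16 is harmless: the kernel simply keeps using the old partition table for now, and changing the partition type does not touch any data. If you want the kernel to re-read the table right away, you can try partprobe (assuming the parted package is installed); it may still report the device as busy on some systems, in which case the reboot at the end of this chapter takes care of it:

partprobe /dev/sda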

Now we can add /dev/sda1 to the /dev/md0 RAID array:

mdadm --add /dev/md0 /dev/sda1

Now take a look at

cat /proc/mdstat

[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sda1[0] sdb1[1]
      200704 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      5036288 blocks [2/2] [UU]

unused devices: <none>
[root@server1 ~]#
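
If you look at /proc/mdstat while md0 is still synchronizing, you will see a progress indicator instead of [UU] for that array. A convenient way to follow the rebuild until it finishes is the standard watch utility (leave it with CTRL+C):

watch -n 2 cat /proc/mdstat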

Then adjust /etc/mdadm.conf to the new situation:

mdadm --examine --scan > /etc/mdadm.conf

/etc/mdadm.conf should now look something like this:

cat /etc/mdadm.conf

ARRAY /dev/md0 level=raid1 num-devices=2 UUID=7d2bf9c3:7cd9df21:f782dab8:9212d7cb
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=d93a2387:6355b5c5:25ed3e50:2a0e4f96
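
The redirection above overwrites the whole file. If you prefer to keep a DEVICE line in /etc/mdadm.conf as well, you can rebuild the file in two steps instead (a minimal sketch; DEVICE partitions simply tells mdadm to scan all partitions listed in /proc/partitions):

echo "DEVICE partitions" > /etc/mdadm.conf
mdadm --examine --scan >> /etc/mdadm.conf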

Reboot the system:

reboot

It should boot without problems.

That's it - you've successfully set up software RAID1 on your running LVM system!

About Falko Timme

Falko Timme is an experienced Linux administrator and founder of Timme Hosting, a leading nginx business hosting company in Germany. He is one of the most active authors on HowtoForge since 2005 and one of the core developers of ISPConfig since 2000. He has also contributed to the O'Reilly book "Linux System Administration".
