How To Set Up Software RAID1 On A Running LVM System (Incl. GRUB Configuration) (CentOS 5.3)
6 Preparing GRUB
Afterwards we must install the GRUB bootloader on the second hard drive, /dev/sdb, and reinstall it on /dev/sda so that the system can boot from either disk. Start the GRUB shell:
grub
On the GRUB shell, type in the following commands:
grub> root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83

grub> setup (hd0)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd0)"... 15 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.

grub> root (hd1,0)
 Filesystem type is ext2fs, partition type 0xfd

grub> setup (hd1)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd1)"... 15 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd1) (hd1)1+15 p (hd1,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.

grub> quit
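If you prefer to script this step, GRUB's batch mode accepts the same commands on stdin. This is just a sketch of an equivalent, non-interactive invocation; adjust the device names if your setup differs:

# same GRUB commands as above, fed non-interactively
grub --batch <<EOF
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
EOF

Afterwards you can check that each MBR now contains a boot loader, e.g. with dd if=/dev/sdb bs=512 count=1 2>/dev/null | file - (the output should mention an x86 boot sector).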
Now, back on the normal shell, we reboot the system and hope that it boots ok from our RAID arrays:
reboot
7 Preparing /dev/sda
If all goes well, you should now find /dev/md0 in the output of
df -h
[root@server1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
8.6G 1.4G 6.8G 18% /
/dev/md0 99M 16M 79M 17% /boot
tmpfs 250M 0 250M 0% /dev/shm
[root@server1 ~]#
The output of
cat /proc/mdstat
should be as follows:
[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1]
104320 blocks [2/1] [_U]
md1 : active raid1 sdb2[1] sda2[0]
10377920 blocks [2/2] [UU]
unused devices: <none>
[root@server1 ~]#
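Note that md0 is still running in degraded mode: [_U] means that only the second mirror half (sdb1) is active, because /dev/sda1 has not been added yet. For a more verbose view of an array's state than /proc/mdstat provides, you can query mdadm directly:

# show array state, member devices, and (during recovery) resync progress
mdadm --detail /dev/md0

At this point it should report the state as degraded and list /dev/sdb1 as the only active device.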
The outputs of pvdisplay, vgdisplay, and lvdisplay should be as follows:
pvdisplay
[root@server1 ~]# pvdisplay
--- Physical volume ---
PV Name /dev/md1
VG Name VolGroup00
PV Size 9.90 GB / not usable 22.69 MB
Allocatable yes (but full)
PE Size (KByte) 32768
Total PE 316
Free PE 0
Allocated PE 316
PV UUID u6IZfM-5Zj8-kFaG-YN8K-kjAd-3Kfv-0oYk7J
[root@server1 ~]#
vgdisplay
[root@server1 ~]# vgdisplay
--- Volume group ---
VG Name VolGroup00
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 9
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 9.88 GB
PE Size 32.00 MB
Total PE 316
Alloc PE / Size 316 / 9.88 GB
Free PE / Size 0 / 0
VG UUID ZPvC10-cN09-fI0S-Vc8l-vOuZ-wM6F-tlz0Mj
[root@server1 ~]#
lvdisplay
[root@server1 ~]# lvdisplay
--- Logical volume ---
LV Name /dev/VolGroup00/LogVol00
VG Name VolGroup00
LV UUID vYlky0-Ymx4-PNeK-FTpk-qxvm-PmoZ-3vcNTd
LV Write Access read/write
LV Status available
# open 1
LV Size 8.88 GB
Current LE 284
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Logical volume ---
LV Name /dev/VolGroup00/LogVol01
VG Name VolGroup00
LV UUID Ml9MMN-DcOA-Lb6V-kWPU-h6IK-P0ww-Gp9vd2
LV Write Access read/write
LV Status available
# open 1
LV Size 1.00 GB
Current LE 32
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
[root@server1 ~]#
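If you just want a quick sanity check instead of the full reports, the short-form LVM commands print one line per object:

# compact summaries of physical volumes, volume groups, and logical volumes
pvs
vgs
lvs

pvs should list /dev/md1 as the only physical volume backing VolGroup00.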
Now we must change the partition type of /dev/sda1 to Linux raid autodetect as well:
fdisk /dev/sda
[root@server1 ~]# fdisk /dev/sda
The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): <-- t
Partition number (1-4): <-- 1
Hex code (type L to list codes): <-- fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): <-- w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@server1 ~]#
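The error 16 warning is expected: /dev/sda is in use, so the kernel keeps the old partition table in memory and picks up the new type at the next reboot. The next step does not depend on the in-memory table (the fd type only matters for RAID autodetection at boot), so you can simply continue. If you want to try to make the kernel re-read the table right away, partprobe (from the parted package) is one option, though it may also refuse while the disk is busy:

# ask the kernel to re-read /dev/sda's partition table (may fail on a busy disk)
partprobe /dev/sda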
Now we can add /dev/sda1 to the /dev/md0 RAID array:
mdadm --add /dev/md0 /dev/sda1
Now take a look at
cat /proc/mdstat
[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
104320 blocks [2/2] [UU]
md1 : active raid1 sdb2[1] sda2[0]
10377920 blocks [2/2] [UU]
unused devices: <none>
[root@server1 ~]#
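The /boot partition is small, so the synchronization of md0 finishes almost immediately; on a larger array you would see a recovery progress line in /proc/mdstat for a while. To follow a longer resync you can run

watch cat /proc/mdstat

and leave the watch session with CTRL+C once both arrays show [UU].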
Then adjust /etc/mdadm.conf to the new situation:
mdadm --examine --scan > /etc/mdadm.conf
/etc/mdadm.conf should now look something like this:
cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=0a96be0f:bf0f4631:a910285b:0f337164
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=f9e691e2:8d25d314:40f42444:7dbe1da1
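Optionally, you can have mdadm mail you when an array degrades. A minimal sketch, assuming local delivery to the root mailbox is sufficient: append a MAILADDR line to /etc/mdadm.conf and enable the mdmonitor service:

# send failure notifications to the local root mailbox (adjust the address as needed)
echo "MAILADDR root" >> /etc/mdadm.conf
chkconfig mdmonitor on
service mdmonitor start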
Reboot the system:
reboot
It should boot without problems.
That's it - you've successfully set up software RAID1 on your running LVM system!