How To Set Up Software RAID1 On A Running LVM System (Incl. GRUB Configuration) (Debian Etch)
6 Preparing GRUB
Afterwards we must install the GRUB bootloader on the second hard drive /dev/sdb (and reinstall it on /dev/sda so that both disks remain bootable). Start the GRUB shell:
grub
On the GRUB shell, type in the following commands:
root (hd0,0)
grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0x83
grub>
setup (hd0)
grub> setup (hd0)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd0)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/menu.lst"... succeeded
Done.
grub>
root (hd1,0)
grub> root (hd1,0)
Filesystem type is ext2fs, partition type 0xfd
grub>
setup (hd1)
grub> setup (hd1)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd1)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd1) (hd1)1+15 p (hd1,0)/grub/stage2 /grub/menu.lst"... succeeded
Done.
grub>
quit
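If you prefer to script this step instead of typing into the interactive shell, GRUB Legacy can read the same commands from standard input in batch mode. A minimal sketch, assuming the grub shell (0.97) shipped with Debian Etch:

# Run the same commands as above non-interactively; this installs
# the GRUB stages into the MBR of both disks.
grub --batch <<EOF
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
EOF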
Now, back on the normal shell, we reboot the system and hope that it boots ok from our RAID arrays:
reboot
7 Preparing /dev/sda
If all goes well, you should now find /dev/md0 in the output of
df -h
server1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/debian-root
                      4.5G  709M  3.6G  17% /
tmpfs                 126M     0  126M   0% /lib/init/rw
udev                   10M   68K   10M   1% /dev
tmpfs                 126M     0  126M   0% /dev/shm
/dev/md0              236M   18M  206M   9% /boot
server1:~#
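To confirm explicitly that /boot is now served from the RAID array rather than from a single disk, you can also check the mount table (a trivial sanity check):

# /boot should be mounted from /dev/md0, not from /dev/sda1 or /dev/sdb1.
mount | grep /boot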
The output of
cat /proc/mdstat
should show that /dev/md0 is still running in degraded mode ([_U]) because /dev/sda1 has not yet been added to the array:
server1:~# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda5[0] sdb5[1]
      4988032 blocks [2/2] [UU]

md0 : active raid1 sdb1[1]
      248896 blocks [2/1] [_U]
unused devices: <none>
server1:~#
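For a more detailed view of the degraded array, you can also query mdadm directly; a quick check, not required for the procedure:

# Show the full state of /dev/md0; it should report a degraded
# array with only /dev/sdb1 active and one slot missing.
mdadm --detail /dev/md0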
The outputs of pvdisplay, vgdisplay, and lvdisplay should be as follows:
pvdisplay
server1:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               debian
  PV Size               4.75 GB / not usable 0
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              1217
  Free PE               0
  Allocated PE          1217
  PV UUID               YAnBb4-NJdb-aM68-Ks22-g851-IshG-bNgOXp
server1:~#
vgdisplay
server1:~# vgdisplay
  --- Volume group ---
  VG Name               debian
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               4.75 GB
  PE Size               4.00 MB
  Total PE              1217
  Alloc PE / Size       1217 / 4.75 GB
  Free  PE / Size       0 / 0
  VG UUID               j5BV1u-mQSa-Q0KW-PH4n-VHab-DB92-LyxFIU
server1:~#
lvdisplay
server1:~# lvdisplay
  --- Logical volume ---
  LV Name                /dev/debian/root
  VG Name                debian
  LV UUID                BTAfBf-auQQ-wWC5-daFC-3qIe-gJU3-2j0UiG
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                4.50 GB
  Current LE             1151
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/debian/swap_1
  VG Name                debian
  LV UUID                Q3IVee-DhoD-Nd07-v9mR-CmsN-HAt8-1bDcP4
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                264.00 MB
  Current LE             66
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:1
server1:~#
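If you only need a quick sanity check rather than the full reports, the LVM tools also offer terse one-line summaries:

# Compact equivalents of pvdisplay, vgdisplay, and lvdisplay.
pvs   # should list /dev/md1 as the only PV in VG "debian"
vgs   # should list VG "debian" with 1 PV and 2 LVs
lvs   # should list the LVs "root" and "swap_1"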
Now we must change the partition type of /dev/sda1 to Linux raid autodetect as well:
fdisk /dev/sda
server1:~# fdisk /dev/sda
Command (m for help): <- t
Partition number (1-5): <- 1
Hex code (type L to list codes): <- fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): <- w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
server1:~#
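If you want to avoid the interactive fdisk dialogue, the same change can be made in a single command; a sketch using sfdisk's change-id mode, which should also be available on Debian Etch:

# Set partition 1 of /dev/sda to type fd (Linux raid autodetect)
# without prompting.
sfdisk --change-id /dev/sda 1 fd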
Now we can add /dev/sda1 to the /dev/md0 RAID array:
mdadm --add /dev/md0 /dev/sda1
Now take a look at
cat /proc/mdstat
server1:~# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda5[0] sdb5[1]
      4988032 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      248896 blocks [2/2] [UU]
unused devices: <none>
server1:~#
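A partition this small synchronises almost instantly, which is why both arrays already show [UU]. On larger arrays you can follow the rebuild progress like this:

# Re-read /proc/mdstat every two seconds until the resync
# has finished (press Ctrl+C to quit).
watch -n 2 cat /proc/mdstat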
Then adjust /etc/mdadm/mdadm.conf to the new situation:
cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
/etc/mdadm/mdadm.conf should now look something like this:
cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Tue, 18 Mar 2008 19:28:19 +0100
# by mkconf $Id: mkconf 261 2006-11-09 13:32:35Z madduck $
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=6fe4b95e:4ce1dcef:01b5209e:be9ff10a
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=aa4ab7bb:df7ddb72:01b5209e:be9ff10a
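Before rebooting, you can cross-check that the appended ARRAY lines match what the kernel is currently running; the UUIDs in both outputs should be identical:

# Print the active arrays in mdadm.conf format and compare the
# UUIDs with the ARRAY lines in /etc/mdadm/mdadm.conf.
mdadm --detail --scan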
Reboot the system:
reboot
It should boot without problems.
That's it - you've successfully set up software RAID1 on your running LVM system!