How To Set Up Software RAID1 On A Running LVM System (Incl. GRUB2 Configuration) (Ubuntu 10.04) - Page 2
4 Creating Our RAID Arrays
Now let's create our RAID arrays /dev/md0 and /dev/md1. /dev/sdb1 will be added to /dev/md0 and /dev/sdb5 to /dev/md1. /dev/sda1 and /dev/sda5 can't be added right now (because the system is currently running on them), therefore we use the placeholder missing in the following two commands:
mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb5
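If you want to inspect the new arrays in more detail, mdadm can also print the metadata of each device (an optional check; the exact output depends on your mdadm version):
mdadm --detail /dev/md0
mdadm --detail /dev/md1
Both arrays should show up as degraded at this point, since their first member is still the placeholder missing.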
The command
cat /proc/mdstat
should now show that you have two degraded RAID arrays ([_U] or [U_] means that an array is degraded while [UU] means that the array is ok):
root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb5[1]
4990912 blocks [2/1] [_U]
md0 : active raid1 sdb1[1]
248768 blocks [2/1] [_U]
unused devices: <none>
root@server1:~#
Next we create a filesystem (ext2) on our non-LVM RAID array /dev/md0:
mkfs.ext2 /dev/md0
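If you want to double-check the new filesystem before it gets used, tune2fs from e2fsprogs can dump its superblock information (purely optional):
tune2fs -l /dev/md0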
Now we come to our LVM RAID array /dev/md1. To prepare it for LVM, we run:
pvcreate /dev/md1
Then we add /dev/md1 to our volume group server1:
vgextend server1 /dev/md1
The output of
pvdisplay
should now be similar to this:
root@server1:~# pvdisplay
--- Physical volume ---
PV Name /dev/sda5
VG Name server1
PV Size 4.76 GiB / not usable 2.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 1218
Free PE 3
Allocated PE 1215
PV UUID bsF5F5-s2RN-ed1h-zjeb-4mAJ-aktq-kEn86r
--- Physical volume ---
PV Name /dev/md1
VG Name server1
PV Size 4.76 GiB / not usable 1.94 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 1218
Free PE 1218
Allocated PE 0
PV UUID rQf0Rj-Nn9l-VgbP-0kIr-2lve-5jlC-TWTBGp
root@server1:~#
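If you prefer a more compact overview, pvs prints one summary line per physical volume with the same essential information:
pvs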
The output of
vgdisplay
should be as follows:
root@server1:~# vgdisplay
--- Volume group ---
VG Name server1
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 2
Act PV 2
VG Size 9.52 GiB
PE Size 4.00 MiB
Total PE 2436
Alloc PE / Size 1215 / 4.75 GiB
Free PE / Size 1221 / 4.77 GiB
VG UUID hMwXAh-zZsA-w39k-g6Bg-LW4W-XX8q-EbyXfA
root@server1:~#
Next we must adjust /etc/mdadm/mdadm.conf (which doesn't contain any information about our new RAID arrays yet) to the new situation:
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
Display the contents of the file:
cat /etc/mdadm/mdadm.conf
In the file you should now see details about our two (degraded) RAID arrays:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Wed, 16 Jun 2010 20:01:25 +0200
# by mkconf $Id$
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=90f05e41:bf936896:325ecf68:79913751
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=1ab36b7f:3e2031c0:325ecf68:79913751
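If you want to cross-check these lines against the running arrays, mdadm --detail --scan prints ARRAY lines for all currently active arrays - the fields may be formatted slightly differently, but the UUIDs should match what was appended above:
mdadm --detail --scan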
Next we modify /etc/fstab. Comment out the current /boot partition and add the line /dev/md0 /boot ext2 defaults 0 2 instead so that the file looks as follows:
vi /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    nodev,noexec,nosuid 0       0
/dev/mapper/server1-root /               ext4    errors=remount-ro 0       1
# /boot was on /dev/sda1 during installation
#UUID=67b1337f-89a2-4729-a6c8-6d43ba82d1f1 /boot           ext2    defaults        0       2
/dev/md0 /boot           ext2    defaults        0       2
/dev/mapper/server1-swap_1 none            swap    sw              0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto,exec,utf8 0       0
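If you prefer to mount /boot by UUID, as the Ubuntu installer does, you can look up the UUID of the new array with blkid and use a UUID=... line in /etc/fstab instead of /dev/md0 - either form works:
blkid /dev/md0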
Next replace /dev/sda1 with /dev/md0 in /etc/mtab:
vi /etc/mtab
/dev/mapper/server1-root / ext4 rw,errors=remount-ro 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
none /sys sysfs rw,noexec,nosuid,nodev 0 0
none /sys/fs/fuse/connections fusectl rw 0 0
none /sys/kernel/debug debugfs rw 0 0
none /sys/kernel/security securityfs rw 0 0
none /dev devtmpfs rw,mode=0755 0 0
none /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
none /dev/shm tmpfs rw,nosuid,nodev 0 0
none /var/run tmpfs rw,nosuid,mode=0755 0 0
none /var/lock tmpfs rw,noexec,nosuid,nodev 0 0
none /lib/init/rw tmpfs rw,nosuid,mode=0755 0 0
/dev/md0 /boot ext2 rw 0 0
Now on to the GRUB2 boot loader. Create the file /etc/grub.d/09_swraid1_setup as follows:
cp /etc/grub.d/40_custom /etc/grub.d/09_swraid1_setup
vi /etc/grub.d/09_swraid1_setup
#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.
menuentry 'Ubuntu, with Linux 2.6.32-21-server' --class ubuntu --class gnu-linux --class gnu --class os {
        recordfail
        insmod raid
        insmod mdraid
        insmod ext2
        set root='(md0)'
        linux   /vmlinuz-2.6.32-21-server root=/dev/mapper/server1-root ro   quiet
        initrd  /initrd.img-2.6.32-21-server
}
Make sure you use the correct kernel version in the menuentry stanza (in the linux and initrd lines). You can find out which kernel you are running with
uname -r
or by taking a look at the current menuentry stanzas in the ### BEGIN /etc/grub.d/10_linux ### section of /boot/grub/grub.cfg. Also make sure that you use the correct volume group in the linux line - if your volume group isn't named server1, you must use something other than root=/dev/mapper/server1-root. Again, the current menuentry stanzas in the ### BEGIN /etc/grub.d/10_linux ### section of /boot/grub/grub.cfg will show you the correct value.
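A quick (optional) way to pull those values out of the existing configuration is to grep for them:
grep -E 'menuentry|linux|initrd' /boot/grub/grub.cfg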
The important part of our new menuentry stanza is the line set root='(md0)' - it makes sure that we boot from our RAID1 array /dev/md0 (which will hold the /boot partition) rather than from /dev/sda or /dev/sdb. That way the system can still boot if one of the hard drives fails.
Run
update-grub
to write our new kernel stanza from /etc/grub.d/09_swraid1_setup to /boot/grub/grub.cfg.
Next we adjust our ramdisk to the new situation:
update-initramfs -u
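If you want to verify that mdadm and its configuration actually ended up in the new ramdisk, you can list the archive's contents - this assumes the default gzip-compressed initramfs and the kernel version used in the stanza above, so adjust the file name to whatever uname -r reports:
zcat /boot/initrd.img-2.6.32-21-server | cpio -t | grep mdadm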
5 Moving Our Data To The RAID Arrays
Now that we've modified all configuration files, we can copy the contents of /dev/sda to /dev/sdb (including the configuration changes we've made in the previous chapter).
To move the contents of our LVM partition /dev/sda5 to our LVM RAID array /dev/md1, we use the pvmove command:
pvmove -i 2 /dev/sda5 /dev/md1
This can take some time, so please be patient.
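Should pvmove get interrupted (by a power failure, for example), the move does not have to start over - running pvmove without any arguments resumes any unfinished move, and pvmove --abort cancels it. Neither is needed if the command above finishes in one go:
pvmove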
Afterwards, we remove /dev/sda5 from the volume group server1...
vgreduce server1 /dev/sda5
... and tell the system not to use /dev/sda5 for LVM anymore:
pvremove /dev/sda5
The output of
pvdisplay
should now be as follows:
root@server1:~# pvdisplay
--- Physical volume ---
PV Name /dev/md1
VG Name server1
PV Size 4.76 GiB / not usable 1.94 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 1218
Free PE 3
Allocated PE 1215
PV UUID rQf0Rj-Nn9l-VgbP-0kIr-2lve-5jlC-TWTBGp
root@server1:~#
Next we change the partition type of /dev/sda5 to Linux raid autodetect and add /dev/sda5 to the /dev/md1 array:
fdisk /dev/sda
root@server1:~# fdisk /dev/sda
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): <-- t
Partition number (1-5): <-- 5
Hex code (type L to list codes): <-- fd
Changed system type of partition 5 to fd (Linux raid autodetect)
Command (m for help): <-- w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
root@server1:~#
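If the warning about error 16 appeared, the kernel is still working with the old partition table; instead of rebooting you can usually make it re-read the table with partprobe from the parted package before continuing:
partprobe /dev/sda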
mdadm --add /dev/md1 /dev/sda5
Now take a look at
cat /proc/mdstat
... and you should see that the RAID array /dev/md1 is being synchronized:
root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda5[2] sdb5[1]
4990912 blocks [2/1] [_U]
[=========>...........] recovery = 45.1% (2251776/4990912) finish=0.4min speed=90071K/sec
md0 : active raid1 sdb1[1]
248768 blocks [2/1] [_U]
unused devices: <none>
root@server1:~#
(You can run
watch cat /proc/mdstat
to get an ongoing output of the process. To leave watch, press CTRL+C.)
Wait until the synchronization has finished - the output should then look like this:
root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda5[0] sdb5[1]
4990912 blocks [2/2] [UU]
md0 : active raid1 sdb1[1]
248768 blocks [2/1] [_U]
unused devices: <none>
root@server1:~#
Now let's mount /dev/md0:
mkdir /mnt/md0
mount /dev/md0 /mnt/md0
You should now find the array in the output of
mount
root@server1:~# mount
/dev/mapper/server1-root on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
none on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
none on /dev type devtmpfs (rw,mode=0755)
none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
none on /dev/shm type tmpfs (rw,nosuid,nodev)
none on /var/run type tmpfs (rw,nosuid,mode=0755)
none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
/dev/md0 on /boot type ext2 (rw)
/dev/md0 on /mnt/md0 type ext2 (rw)
root@server1:~#
Now we copy the contents of /dev/sda1 to /dev/md0 (which is mounted on /mnt/md0):
cd /boot
cp -dpRx . /mnt/md0
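To make sure the copy is complete, a recursive diff of the two directory trees should come back without any output - this is just a sanity check and assumes nothing writes to /boot in the meantime:
diff -r /boot /mnt/md0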