How To Set Up Software RAID1 On A Running LVM System (Incl. GRUB Configuration) (Debian Etch) - Page 2
4 Creating Our RAID Arrays
Now let's create our RAID arrays /dev/md0 and /dev/md1. /dev/sdb1 will be added to /dev/md0 and /dev/sdb5 to /dev/md1. /dev/sda1 and /dev/sda5 can't be added right now (the system is currently running on them), therefore we use the placeholder missing in the following two commands:
mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb5
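If you want to double-check that both (still degraded) arrays were created as intended, you can display their details; this is just an optional sanity check:
mdadm --detail /dev/md0
mdadm --detail /dev/md1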
The command
cat /proc/mdstat
should now show that you have two degraded RAID arrays ([_U] or [U_] means that an array is degraded while [UU] means that the array is ok):
server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb5[1]
4988032 blocks [2/1] [_U]
md0 : active raid1 sdb1[1]
248896 blocks [2/1] [_U]
unused devices: <none>
server1:~#
Next we create a filesystem (ext3) on our non-LVM RAID array /dev/md0:
mkfs.ext3 /dev/md0
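If you like, you can verify the new filesystem before continuing; tune2fs prints its superblock information (an optional check):
tune2fs -l /dev/md0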
Now we come to our LVM RAID array /dev/md1. To prepare it for LVM, we run:
pvcreate /dev/md1
Then we add /dev/md1 to our volume group debian:
vgextend debian /dev/md1
The output of
pvdisplay
should now be similar to this:
server1:~# pvdisplay
--- Physical volume ---
PV Name /dev/sda5
VG Name debian
PV Size 4.75 GB / not usable 0
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 1217
Free PE 0
Allocated PE 1217
PV UUID l2G0xJ-b9JF-RLsZ-bbcd-yRGd-kHfl-QFSpdg
--- Physical volume ---
PV Name /dev/md1
VG Name debian
PV Size 4.75 GB / not usable 0
Allocatable yes
PE Size (KByte) 4096
Total PE 1217
Free PE 1217
Allocated PE 0
PV UUID YAnBb4-NJdb-aM68-Ks22-g851-IshG-bNgOXp
server1:~#
The output of
vgdisplay
should be as follows:
server1:~# vgdisplay
--- Volume group ---
VG Name debian
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 2
Act PV 2
VG Size 9.51 GB
PE Size 4.00 MB
Total PE 2434
Alloc PE / Size 1217 / 4.75 GB
Free PE / Size 1217 / 4.75 GB
VG UUID j5BV1u-mQSa-Q0KW-PH4n-VHab-DB92-LyxFIU
server1:~#
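If you prefer a more compact overview than pvdisplay and vgdisplay, the pvs and vgs commands summarize the physical volumes and volume groups in one line each (optional):
pvs
vgs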
Next we must adjust /etc/mdadm/mdadm.conf (which doesn't contain any information about our new RAID arrays yet) to the new situation:
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
Display the contents of the file:
cat /etc/mdadm/mdadm.conf
In the file you should now see details about our two (degraded) RAID arrays:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Tue, 18 Mar 2008 19:28:19 +0100
# by mkconf $Id: mkconf 261 2006-11-09 13:32:35Z madduck $
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=6fe4b95e:4ce1dcef:01b5209e:be9ff10a
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=aa4ab7bb:df7ddb72:01b5209e:be9ff10a
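As an optional sanity check you can compare the appended definitions with what mdadm reports for the currently assembled arrays; the following prints matching ARRAY lines to the screen (don't append them a second time):
mdadm --detail --scan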
Next we modify /etc/fstab. Replace /dev/sda1 with /dev/md0 so that the file looks as follows:
vi /etc/fstab
# /etc/fstab: static file system information.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
/dev/mapper/debian-root /       ext3    defaults,errors=remount-ro 0       1
/dev/md0        /boot           ext3    defaults        0       2
/dev/mapper/debian-swap_1 none  swap    sw              0       0
/dev/hdc        /media/cdrom0   udf,iso9660 user,noauto 0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto  0       0
Next replace /dev/sda1 with /dev/md0 in /etc/mtab:
vi /etc/mtab
/dev/mapper/debian-root / ext3 rw,errors=remount-ro 0 0
tmpfs /lib/init/rw tmpfs rw,nosuid,mode=0755 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
udev /dev tmpfs rw,mode=0755 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=620 0 0
/dev/md0 /boot ext3 rw 0 0
Now on to the GRUB boot loader. Open /boot/grub/menu.lst and add fallback 1 right after default 0:
vi /boot/grub/menu.lst
[...]
default         0
fallback        1
[...]
This makes sure that if the first kernel entry (counting starts at 0, so the first entry is 0) fails to boot, the second entry (fallback 1) will be booted instead.
In the same file, go to the bottom, where you should find some kernel stanzas. Copy the first of them and paste the copy before the first existing stanza; in the copy, replace root (hd0,0) with root (hd1,0):
[...]
## ## End Default Options ##

title           Debian GNU/Linux, kernel 2.6.18-6-686
root            (hd1,0)
kernel          /vmlinuz-2.6.18-6-686 root=/dev/mapper/debian-root ro
initrd          /initrd.img-2.6.18-6-686
savedefault

title           Debian GNU/Linux, kernel 2.6.18-6-686
root            (hd0,0)
kernel          /vmlinuz-2.6.18-6-686 root=/dev/mapper/debian-root ro
initrd          /initrd.img-2.6.18-6-686
savedefault

title           Debian GNU/Linux, kernel 2.6.18-6-686 (single-user mode)
root            (hd0,0)
kernel          /vmlinuz-2.6.18-6-686 root=/dev/mapper/debian-root ro single
initrd          /initrd.img-2.6.18-6-686
savedefault

### END DEBIAN AUTOMAGIC KERNELS LIST
root (hd1,0) refers to /dev/sdb, which is already part of our RAID arrays. We will reboot the system in a few moments; it will then try to boot from our (still degraded) RAID arrays, and if that fails, it will boot from /dev/sda (-> fallback 1).
Next we adjust our ramdisk to the new situation:
update-initramfs -u
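update-initramfs -u rebuilds the ramdisk of the most recent kernel only, which is sufficient here since the menu.lst above lists only one kernel version. If you have several kernels installed, you can rebuild all of their ramdisks instead (an optional variant):
update-initramfs -u -k all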
5 Moving Our Data To The RAID Arrays
Now that we've modified all configuration files, we can copy the contents of /dev/sda to /dev/sdb (including the configuration changes we've made in the previous chapter).
To move the contents of our LVM partition /dev/sda5 to our LVM RAID array /dev/md1, we use the pvmove command:
pvmove /dev/sda5 /dev/md1
This can take some time, so please be patient.
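pvmove reports its progress at regular intervals; if you want more frequent status updates, the interval (in seconds) can be set explicitly with -i, as in this optional variant of the command above (run one or the other, not both):
pvmove -i 10 /dev/sda5 /dev/md1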
Afterwards, we remove /dev/sda5 from the volume group debian...
vgreduce debian /dev/sda5
... and tell the system not to use /dev/sda5 for LVM anymore:
pvremove /dev/sda5
The output of
pvdisplay
should now be as follows:
server1:~# pvdisplay
--- Physical volume ---
PV Name /dev/md1
VG Name debian
PV Size 4.75 GB / not usable 0
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 1217
Free PE 0
Allocated PE 1217
PV UUID YAnBb4-NJdb-aM68-Ks22-g851-IshG-bNgOXp
server1:~#
Next we change the partition type of /dev/sda5 to Linux raid autodetect and add /dev/sda5 to the /dev/md1 array:
fdisk /dev/sda
server1:~# fdisk /dev/sda
Command (m for help): <- t
Partition number (1-5): <- 5
Hex code (type L to list codes): <- fd
Changed system type of partition 5 to fd (Linux raid autodetect)
Command (m for help): <- w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
server1:~#
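The "Device or resource busy" warning is harmless here: the new partition type only matters for auto-detection at boot time, and the mdadm --add command below works without a reboot. If you nevertheless want the kernel to re-read the partition table right away and have the parted package installed, partprobe is one option (an optional, hedged extra step; it may itself refuse while partitions of the disk are in use):
partprobe /dev/sda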
mdadm --add /dev/md1 /dev/sda5
Now take a look at
cat /proc/mdstat
... and you should see that the RAID array /dev/md1 is being synchronized:
server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda5[2] sdb5[1]
4988032 blocks [2/1] [_U]
[==========>..........] recovery = 52.5% (2623232/4988032) finish=0.5min speed=74705K/sec
md0 : active raid1 sdb1[1]
248896 blocks [2/1] [_U]
unused devices: <none>
server1:~#
(You can run
watch cat /proc/mdstat
to get an ongoing output of the process. To leave watch, press CTRL+C.)
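Depending on your mdadm version, you can also let mdadm block until the resync has completed instead of watching /proc/mdstat; a minimal alternative, assuming your mdadm supports the --wait (-W) option:
mdadm --wait /dev/md1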
Wait until the synchronization has finished. The output should then look like this:
server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda5[0] sdb5[1]
4988032 blocks [2/2] [UU]
md0 : active raid1 sdb1[1]
248896 blocks [2/1] [_U]
unused devices: <none>
server1:~#
Now let's mount /dev/md0:
mkdir /mnt/md0
mount /dev/md0 /mnt/md0
You should now find the array in the output of
mount
server1:~# mount
/dev/mapper/debian-root on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/md0 on /boot type ext3 (rw)
/dev/md0 on /mnt/md0 type ext3 (rw)
server1:~#
Now we copy the contents of /dev/sda1 to /dev/md0 (which is mounted on /mnt/md0):
cd /boot
cp -dpRx . /mnt/md0
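You can then verify that everything was copied by listing the mount point; it should contain the same kernel, initrd and GRUB files as /boot (an optional check):
ls -l /mnt/md0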