How To Set Up Software RAID1 On A Running LVM System (Incl. GRUB2 Configuration) (Debian Squeeze) - Page 2
4 Creating Our RAID Arrays
Now let's create our RAID arrays /dev/md0 and /dev/md1. /dev/sdb1 will be added to /dev/md0 and /dev/sdb5 to /dev/md1. /dev/sda1 and /dev/sda5 can't be added right now (because the system is currently running on them), therefore we use the placeholder missing in the following two commands:
mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb5
You might see the following message for each command - just press y to continue:
root@server1:~# mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb5
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? <-- y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
root@server1:~#
The command
cat /proc/mdstat
should now show that you have two degraded RAID arrays ([_U] or [U_] means that an array is degraded while [UU] means that the array is ok):
root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb5[1]
4989940 blocks super 1.2 [2/1] [_U]
md0 : active raid1 sdb1[1]
248820 blocks super 1.2 [2/1] [_U]
unused devices: <none>
root@server1:~#
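If you want more details about an array than /proc/mdstat shows (for example its UUID and the state of each member disk), you can optionally query it with mdadm:
mdadm --detail /dev/md0
mdadm --detail /dev/md1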
Next we create a filesystem (ext2) on our non-LVM RAID array /dev/md0:
mkfs.ext2 /dev/md0
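If you like, you can verify the new filesystem by displaying its superblock information (this step is optional):
tune2fs -l /dev/md0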
Now we come to our LVM RAID array /dev/md1. To prepare it for LVM, we run:
pvcreate /dev/md1
Then we add /dev/md1 to our volume group server1:
vgextend server1 /dev/md1
The output of
pvdisplay
should now be similar to this:
root@server1:~# pvdisplay
--- Physical volume ---
PV Name /dev/sda5
VG Name server1
PV Size 4.76 GiB / not usable 2.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 1218
Free PE 0
Allocated PE 1218
PV UUID 8p9j8i-cc9a-bAJq-LFP9-CBMF-JrPl-SDbx4X
--- Physical volume ---
PV Name /dev/md1
VG Name server1
PV Size 4.76 GiB / not usable 1012.00 KiB
Allocatable yes
PE Size 4.00 MiB
Total PE 1218
Free PE 1218
Allocated PE 0
PV UUID W4I07I-RT3P-DK1k-1HBz-oJvp-6in0-uQ53KS
root@server1:~#
The output of
vgdisplay
should be as follows:
root@server1:~# vgdisplay
--- Volume group ---
VG Name server1
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 2
Act PV 2
VG Size 9.52 GiB
PE Size 4.00 MiB
Total PE 2436
Alloc PE / Size 1218 / 4.76 GiB
Free PE / Size 1218 / 4.76 GiB
VG UUID m99fJX-gMl9-g2XZ-CazH-32s8-sy1Q-8JjCUW
root@server1:~#
Next we must adjust /etc/mdadm/mdadm.conf (which doesn't contain any information about our new RAID arrays yet) to the new situation:
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
Display the contents of the file:
cat /etc/mdadm/mdadm.conf
In the file you should now see details about our two (degraded) RAID arrays:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Tue, 24 May 2011 21:11:37 +0200
# by mkconf 3.1.4-1+8efb9d1
ARRAY /dev/md/0 metadata=1.2 UUID=6cde4bf4:7ee67d24:b31e2713:18865f31 name=server1.example.com:0
ARRAY /dev/md/1 metadata=1.2 UUID=3ce9f2f2:ac89f75a:530c5ee9:0d4c67da name=server1.example.com:1
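The two ARRAY lines at the end are the ones appended by mdadm --examine --scan. As the header shows, the original file was created by Debian's mkconf helper; if you ever prefer to regenerate the whole file instead of appending to it, you can run that helper yourself - assuming a standard Debian system where it lives in /usr/share/mdadm and prints the configuration to stdout when called without arguments - and review the result before putting it in place:
/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf.new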
Next we modify /etc/fstab. Comment out the current /boot partition and add the line /dev/md0 /boot ext2 defaults 0 2 instead so that the file looks as follows:
vi /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
/dev/mapper/server1-root /      ext3    errors=remount-ro 0     1
# /boot was on /dev/sda1 during installation
#UUID=9b817b3e-2cea-4505-b1be-5ca9fd67f2ff /boot ext2 defaults  0       2
/dev/md0        /boot           ext2    defaults        0       2
/dev/mapper/server1-swap_1 none swap    sw              0       0
/dev/scd0       /media/cdrom0   udf,iso9660 user,noauto 0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto  0       0
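To make sure the new /boot line is really in place (and the old one is commented out), you can quickly grep for it:
grep /boot /etc/fstab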
Next replace /dev/sda1 with /dev/md0 in /etc/mtab:
vi /etc/mtab
/dev/mapper/server1-root / ext3 rw,errors=remount-ro 0 0
tmpfs /lib/init/rw tmpfs rw,nosuid,mode=0755 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
udev /dev tmpfs rw,mode=0755 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=620 0 0
/dev/md0 /boot ext2 rw 0 0
Now on to the GRUB2 boot loader. Create the file /etc/grub.d/09_swraid1_setup as follows:
cp /etc/grub.d/40_custom /etc/grub.d/09_swraid1_setup
vi /etc/grub.d/09_swraid1_setup
#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.
menuentry 'Debian GNU/Linux, with Linux 2.6.32-5-amd64' --class debian --class gnu-linux --class gnu --class os {
        insmod raid
        insmod mdraid
        insmod part_msdos
        insmod ext2
        set root='(md/0)'
        echo    'Loading Linux 2.6.32-5-amd64 ...'
        linux   /vmlinuz-2.6.32-5-amd64 root=/dev/mapper/server1-root ro quiet
        echo    'Loading initial ramdisk ...'
        initrd  /initrd.img-2.6.32-5-amd64
}
Make sure you use the correct kernel version in the menuentry stanza (in the linux and initrd lines). You can find it out by running
uname -r
or by taking a look at the current menuentry stanzas in the ### BEGIN /etc/grub.d/10_linux ### section of /boot/grub/grub.cfg. Also make sure that you use the correct volume group in the linux line - if your volume group isn't named server1, you must use something other than root=/dev/mapper/server1-root. Again, the existing menuentry stanzas in that section of /boot/grub/grub.cfg show the correct value.
The important part in our new menuentry stanza is the line set root='(md/0)' - it makes sure that we boot from our RAID1 array /dev/md0 (which will hold the /boot partition) instead of /dev/sda or /dev/sdb which is important if one of our hard drives fails - the system will still be able to boot.
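Note that update-grub only takes files in /etc/grub.d/ into account if they are executable. Since we copied 09_swraid1_setup from 40_custom, the executable bit should already be set, but it does no harm to make sure:
chmod +x /etc/grub.d/09_swraid1_setup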
Because we don't use UUIDs for our block devices, open /etc/default/grub...
vi /etc/default/grub
... and uncomment the line GRUB_DISABLE_LINUX_UUID=true:
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX=""

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
GRUB_DISABLE_LINUX_UUID=true

# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_LINUX_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"
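You can quickly confirm that the line really is uncommented (it must not start with a #):
grep GRUB_DISABLE_LINUX_UUID /etc/default/grub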
Before we update GRUB2 (with the update-grub command), we must add our second hard drive /dev/sdb to the /boot/grub/device.map file because otherwise the update-grub command will fail with the following error message:
root@server1:~# update-grub
Generating grub.cfg ...
/usr/sbin/grub-probe: error: Couldn't find PV pv1. Check your device.map.
root@server1:~#
Open /boot/grub/device.map...
vi /boot/grub/device.map
... and add /dev/sdb as follows:
(hd0)   /dev/sda
(hd1)   /dev/sdb
Now run
update-grub
to write our new kernel stanza from /etc/grub.d/09_swraid1_setup to /boot/grub/grub.cfg.
Next we adjust our ramdisk to the new situation:
update-initramfs -u
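If you want to check that the mdadm tools and our updated mdadm.conf really ended up in the new ramdisk, you can list its contents - assuming your version of initramfs-tools ships the lsinitramfs utility:
lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm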
5 Moving Our Data To The RAID Arrays
Now that we've modified all configuration files, we can copy the contents of /dev/sda to /dev/sdb (including the configuration changes we've made in the previous chapter).
To move the contents of our LVM partition /dev/sda5 to our LVM RAID array /dev/md1, we use the pvmove command:
pvmove -i 2 /dev/sda5 /dev/md1
This can take some time, so please be patient (the -i 2 switch simply makes pvmove report its progress every two seconds).
Afterwards, we remove /dev/sda5 from the volume group server1...
vgreduce server1 /dev/sda5
... and tell the system not to use /dev/sda5 for LVM anymore:
pvremove /dev/sda5
The output of
pvdisplay
should now be as follows:
root@server1:~# pvdisplay
--- Physical volume ---
PV Name /dev/md1
VG Name server1
PV Size 4.76 GiB / not usable 1012.00 KiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 1218
Free PE 0
Allocated PE 1218
PV UUID W4I07I-RT3P-DK1k-1HBz-oJvp-6in0-uQ53KS
root@server1:~#
Next we change the partition type of /dev/sda5 to Linux raid autodetect and add /dev/sda5 to the /dev/md1 array:
fdisk /dev/sda
root@server1:~# fdisk /dev/sda
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): <-- t
Partition number (1-5): <-- 5
Hex code (type L to list codes): <-- fd
Changed system type of partition 5 to fd (Linux raid autodetect)
Command (m for help): <-- w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
root@server1:~#
Then we add /dev/sda5 to the /dev/md1 array:
mdadm --add /dev/md1 /dev/sda5
Now take a look at
cat /proc/mdstat
... and you should see that the RAID array /dev/md1 is being synchronized:
root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda5[2] sdb5[1]
4989940 blocks super 1.2 [2/1] [_U]
[====>................] recovery = 22.5% (1127872/4989940) finish=0.3min speed=161124K/sec
md0 : active raid1 sdb1[1]
248820 blocks super 1.2 [2/1] [_U]
unused devices: <none>
root@server1:~#
(You can run
watch cat /proc/mdstat
to get an ongoing output of the process. To leave watch, press CTRL+C.)
Wait until the synchronization has finished. The output should then look like this:
root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda5[2] sdb5[1]
4989940 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sdb1[1]
248820 blocks super 1.2 [2/1] [_U]
unused devices: <none>
root@server1:~#
Now let's mount /dev/md0:
mkdir /mnt/md0
mount /dev/md0 /mnt/md0
You should now find the array in the output of
mount
root@server1:~# mount
/dev/mapper/server1-root on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/md0 on /boot type ext2 (rw)
/dev/md0 on /mnt/md0 type ext2 (rw)
root@server1:~#
Now we copy the contents of /dev/sda1 to /dev/md0 (which is mounted on /mnt/md0):
cd /boot
cp -dpRx . /mnt/md0
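As a final (optional) sanity check you can compare the copy with the original; apart from the lost+found directory there should be no differences reported:
diff -r /boot /mnt/md0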