How To Set Up Software RAID1 On A Running System (Incl. GRUB Configuration) (Debian Etch) - Page 2
4 Creating Our RAID Arrays
Now let's create our RAID arrays /dev/md0, /dev/md1, and /dev/md2. /dev/sdb1 will be added to /dev/md0, /dev/sdb2 to /dev/md1, and /dev/sdb3 to /dev/md2. /dev/sda1, /dev/sda2, and /dev/sda3 can't be added right now (because the system is currently running on them), therefore we use the placeholder missing in the following three commands:
mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/sdb3
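Since the three commands differ only in the digit, they can also be generated in a loop. This sketch only prints the commands (via echo) so they can be reviewed first; remove the echo to actually run them:

```shell
# Print (not run) the three mdadm create commands; remove "echo" to execute.
# Device names match the partition layout assumed in this article.
for i in 1 2 3; do
  echo mdadm --create /dev/md$((i-1)) --level=1 --raid-disks=2 missing /dev/sdb$i
done
```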
The command
cat /proc/mdstat
should now show that you have three degraded RAID arrays ([_U] or [U_] means that an array is degraded while [UU] means that the array is ok):
server1:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb3[1]
4594496 blocks [2/1] [_U]
md1 : active raid1 sdb2[1]
497920 blocks [2/1] [_U]
md0 : active raid1 sdb1[1]
144448 blocks [2/1] [_U]
unused devices: <none>
server1:~#
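If you want to check the array state programmatically, a one-liner counting the degraded arrays works. The sketch below feeds grep sample mdstat output through a here-document so it runs anywhere; on a real system you would point it at /proc/mdstat instead:

```shell
# Count arrays with a failed slot ([_U] or [U_]) in mdstat-style output.
# Sample data via here-document; use "grep -c ... /proc/mdstat" for real use.
grep -c '\[_U\]\|\[U_\]' <<'EOF'
md2 : active raid1 sdb3[1]
      4594496 blocks [2/1] [_U]
md1 : active raid1 sdb2[1]
      497920 blocks [2/1] [_U]
md0 : active raid1 sdb1[1]
      144448 blocks [2/1] [_U]
EOF
# prints 3
```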
Next we create filesystems on our RAID arrays (ext3 on /dev/md0 and /dev/md2 and swap on /dev/md1):
mkfs.ext3 /dev/md0
mkswap /dev/md1
mkfs.ext3 /dev/md2
Next we must adjust /etc/mdadm/mdadm.conf (which doesn't contain any information about our new RAID arrays yet) to the new situation:
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
Display the contents of the file:
cat /etc/mdadm/mdadm.conf
At the bottom of the file you should now see details about our three (degraded) RAID arrays:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# This file was auto-generated on Mon, 26 Nov 2007 21:22:04 +0100
# by mkconf $Id: mkconf 261 2006-11-09 13:32:35Z madduck $
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=72d23d35:35d103e3:01b5209e:be9ff10a
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=a50c4299:9e19f9e4:01b5209e:be9ff10a
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=99fee3a5:ae381162:01b5209e:be9ff10a
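To confirm that exactly three ARRAY definitions were appended, a simple grep suffices. The sketch below uses sample lines via a here-document so it is self-contained; on the real system, run the grep against /etc/mdadm/mdadm.conf:

```shell
# Count ARRAY definitions; should be 3 after the mdadm --examine --scan step.
grep -c '^ARRAY' <<'EOF'
DEVICE partitions
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=72d23d35:35d103e3:01b5209e:be9ff10a
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=a50c4299:9e19f9e4:01b5209e:be9ff10a
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=99fee3a5:ae381162:01b5209e:be9ff10a
EOF
# prints 3
```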
5 Adjusting The System To RAID1
Now let's mount /dev/md0 and /dev/md2 (we don't need to mount the swap array /dev/md1):
mkdir /mnt/md0
mkdir /mnt/md2
mount /dev/md0 /mnt/md0
mount /dev/md2 /mnt/md2
You should now find both arrays in the output of
mount
server1:~# mount
/dev/sda3 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
/dev/md0 on /mnt/md0 type ext3 (rw)
/dev/md2 on /mnt/md2 type ext3 (rw)
server1:~#
Next we modify /etc/fstab. Replace /dev/sda1 with /dev/md0, /dev/sda2 with /dev/md1, and /dev/sda3 with /dev/md2 so that the file looks as follows:
vi /etc/fstab
# /etc/fstab: static file system information.
#
# <file system> <mount point>   <type>  <options>               <dump>  <pass>
proc            /proc           proc    defaults                0       0
/dev/md2        /               ext3    defaults,errors=remount-ro 0    1
/dev/md0        /boot           ext3    defaults                0       2
/dev/md1        none            swap    sw                      0       0
/dev/hdc        /media/cdrom0   udf,iso9660 user,noauto         0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto          0       0
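The same substitution can also be done non-interactively with sed. This is only a sketch: it rewrites a single sample line printed with echo; on the real system you would back up /etc/fstab first and use sed -i on the file itself:

```shell
# Map each /dev/sdaN to its RAID counterpart (sample input line shown).
# On a real system: sed -i -e 's|/dev/sda1|/dev/md0|' ... /etc/fstab
echo "/dev/sda3 / ext3 defaults,errors=remount-ro 0 1" \
  | sed -e 's|/dev/sda1|/dev/md0|' -e 's|/dev/sda2|/dev/md1|' -e 's|/dev/sda3|/dev/md2|'
# prints: /dev/md2 / ext3 defaults,errors=remount-ro 0 1
```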
Next replace /dev/sda1 with /dev/md0 and /dev/sda3 with /dev/md2 in /etc/mtab:
vi /etc/mtab
/dev/md2 / ext3 rw,errors=remount-ro 0 0
tmpfs /lib/init/rw tmpfs rw,nosuid,mode=0755 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
udev /dev tmpfs rw,mode=0755 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=620 0 0
/dev/md0 /boot ext3 rw 0 0
Now on to the GRUB boot loader. Open /boot/grub/menu.lst and add fallback 1 right after default 0:
vi /boot/grub/menu.lst
[...]
default         0
fallback        1
[...]
This means that if the first kernel entry (counting starts at 0, so the first entry is 0) fails to boot, GRUB falls back to the second entry.
In the same file, go to the bottom, where you should find some kernel stanzas. Copy the first of them and paste the copy above the first existing stanza; in the copy, replace root=/dev/sda3 with root=/dev/md2 and root (hd0,0) with root (hd1,0):
[...]
## ## End Default Options ##

title           Debian GNU/Linux, kernel 2.6.18-4-486 RAID (hd1)
root            (hd1,0)
kernel          /vmlinuz-2.6.18-4-486 root=/dev/md2 ro
initrd          /initrd.img-2.6.18-4-486
savedefault

title           Debian GNU/Linux, kernel 2.6.18-4-486
root            (hd0,0)
kernel          /vmlinuz-2.6.18-4-486 root=/dev/sda3 ro
initrd          /initrd.img-2.6.18-4-486
savedefault

title           Debian GNU/Linux, kernel 2.6.18-4-486 (single-user mode)
root            (hd0,0)
kernel          /vmlinuz-2.6.18-4-486 root=/dev/sda3 ro single
initrd          /initrd.img-2.6.18-4-486
savedefault

### END DEBIAN AUTOMAGIC KERNELS LIST
root (hd1,0) refers to /dev/sdb which is already part of our RAID arrays. We will reboot the system in a few moments; the system will then try to boot from our (still degraded) RAID arrays; if it fails, it will boot from /dev/sda (-> fallback 1).
Next we adjust our ramdisk to the new situation:
update-initramfs -u
Now we copy the contents of /dev/sda1 and /dev/sda3 to /dev/md0 and /dev/md2 (which are mounted on /mnt/md0 and /mnt/md2):
cp -dpRx / /mnt/md2
cd /boot
cp -dpRx . /mnt/md0
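The cp flags matter here: -d copies symlinks as links instead of following them, -p preserves ownership, permissions, and timestamps, -R recurses, and -x stays on one filesystem. A throwaway demonstration on hypothetical temp directories (safe to run anywhere):

```shell
# Demonstrate cp -dpRx on scratch directories: the symlink is copied
# as a link, not dereferenced, and regular files come across too.
src=$(mktemp -d); dst=$(mktemp -d)
echo hello > "$src/file"
ln -s file "$src/link"
cp -dpRx "$src/." "$dst/"
test -L "$dst/link" && test -f "$dst/file" && echo copy-ok   # prints copy-ok
rm -rf "$src" "$dst"
```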
6 Preparing GRUB (Part 1)
Afterwards we must install the GRUB bootloader on the second hard drive /dev/sdb:
grub
On the GRUB shell, type in the following commands:
root (hd0,0)
grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0x83
grub>
setup (hd0)
grub> setup (hd0)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd0)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/menu.lst"... succeeded
Done.
grub>
root (hd1,0)
grub> root (hd1,0)
Filesystem type is ext2fs, partition type 0xfd
grub>
setup (hd1)
grub> setup (hd1)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd1)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd1) (hd1)1+15 p (hd1,0)/grub/stage2 /grub/menu.lst"... succeeded
Done.
grub>
quit
Now, back on the normal shell, we reboot the system and hope that it boots ok from our RAID arrays:
reboot