How To Set Up Software RAID1 On A Running System (Incl. GRUB Configuration) (Debian Lenny) - Page 2

4 Creating Our RAID Arrays

Now let's create our RAID arrays /dev/md0, /dev/md1, and /dev/md2. /dev/sdb1 will be added to /dev/md0, /dev/sdb2 to /dev/md1, and /dev/sdb3 to /dev/md2. /dev/sda1, /dev/sda2, and /dev/sda3 can't be added right now (because the system is currently running on them), so we use the placeholder missing in the following three commands:

mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/sdb3

The command

cat /proc/mdstat

should now show that you have three degraded RAID arrays ([_U] or [U_] means that an array is degraded while [UU] means that the array is ok):

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb3[1]
      5550336 blocks [2/1] [_U]

md1 : active raid1 sdb2[1]
      497920 blocks [2/1] [_U]

md0 : active raid1 sdb1[1]
      240832 blocks [2/1] [_U]

unused devices: <none>
server1:~#
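
If you want more detail than /proc/mdstat shows (for example the full device state and the array UUIDs), mdadm can print a report for each array. This is optional; the commands only read the array metadata and change nothing:

mdadm --detail /dev/md0
mdadm --detail /dev/md1
mdadm --detail /dev/md2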

Next we create filesystems on our RAID arrays (ext3 on /dev/md0 and /dev/md2 and swap on /dev/md1):

mkfs.ext3 /dev/md0
mkswap /dev/md1
mkfs.ext3 /dev/md2
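
Optionally, you can confirm that the filesystem and swap signatures were actually written to the arrays. blkid only reads the devices, so it is safe to run at any time:

blkid /dev/md0 /dev/md1 /dev/md2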

Next we must adjust /etc/mdadm/mdadm.conf (which doesn't contain any information about our new RAID arrays yet) to the new situation:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

Display the contents of the file:

cat /etc/mdadm/mdadm.conf

At the bottom of the file you should now see details about our three (degraded) RAID arrays:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Mon, 17 Aug 2009 16:38:27 +0200
# by mkconf $Id$
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=757afd26:543267ab:325ecf68:79913751
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=1e5f2139:0806d523:325ecf68:79913751
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=bc2dffb8:047b4ed5:325ecf68:79913751
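
If you want to double-check what was appended, compare the new file against the backup we created a moment ago; the only difference should be the three ARRAY lines:

diff /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf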

 

5 Adjusting The System To RAID1

Now let's mount /dev/md0 and /dev/md2 (we don't need to mount the swap array /dev/md1):

mkdir /mnt/md0
mkdir /mnt/md2

mount /dev/md0 /mnt/md0
mount /dev/md2 /mnt/md2

You should now find both arrays in the output of

mount

server1:~# mount
/dev/sda3 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
/dev/md0 on /mnt/md0 type ext3 (rw)
/dev/md2 on /mnt/md2 type ext3 (rw)
server1:~#

Next we modify /etc/fstab. Replace /dev/sda1 with /dev/md0, /dev/sda2 with /dev/md1, and /dev/sda3 with /dev/md2 so that the file looks as follows:

vi /etc/fstab

# /etc/fstab: static file system information.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
/dev/md2       /               ext3    errors=remount-ro 0       1
/dev/md0       /boot           ext3    defaults        0       2
/dev/md1       none            swap    sw              0       0
/dev/hda        /media/cdrom0   udf,iso9660 user,noauto     0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto  0       0
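
If you prefer a scripted edit over vi, the same three substitutions can be done with sed. This is only a sketch and assumes the device names appear at the start of their lines exactly as /dev/sda1, /dev/sda2, and /dev/sda3 - review the result before you move on:

cp /etc/fstab /etc/fstab.orig   # keep a backup, just in case
sed -i -e 's|^/dev/sda1|/dev/md0|' -e 's|^/dev/sda2|/dev/md1|' -e 's|^/dev/sda3|/dev/md2|' /etc/fstab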

Next replace /dev/sda1 with /dev/md0 and /dev/sda3 with /dev/md2 in /etc/mtab:

vi /etc/mtab

/dev/md2 / ext3 rw,errors=remount-ro 0 0
tmpfs /lib/init/rw tmpfs rw,nosuid,mode=0755 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
udev /dev tmpfs rw,mode=0755 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=620 0 0
/dev/md0 /boot ext3 rw 0 0
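
The same kind of substitution works for /etc/mtab if you don't want to edit it by hand (again, only a sketch - check the file afterwards):

sed -i -e 's|^/dev/sda1 |/dev/md0 |' -e 's|^/dev/sda3 |/dev/md2 |' /etc/mtab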

Now on to the GRUB boot loader. Open /boot/grub/menu.lst and add fallback 1 right after default 0:

vi /boot/grub/menu.lst

[...]
default         0
fallback        1
[...]

This ensures that if the first kernel fails to boot (counting starts at 0, so the first kernel is number 0), the second one will be booted instead.

In the same file, go to the bottom, where you should find some kernel stanzas. Copy the first of them, paste it above the first existing stanza, and in the copy replace root=/dev/sda3 with root=/dev/md2 and root (hd0,0) with root (hd1,0):

[...]
## ## End Default Options ##

title           Debian GNU/Linux, kernel 2.6.26-2-686
root            (hd1,0)
kernel          /vmlinuz-2.6.26-2-686 root=/dev/md2 ro quiet
initrd          /initrd.img-2.6.26-2-686

title           Debian GNU/Linux, kernel 2.6.26-2-686
root            (hd0,0)
kernel          /vmlinuz-2.6.26-2-686 root=/dev/sda3 ro quiet
initrd          /initrd.img-2.6.26-2-686

title           Debian GNU/Linux, kernel 2.6.26-2-686 (single-user mode)
root            (hd0,0)
kernel          /vmlinuz-2.6.26-2-686 root=/dev/sda3 ro single
initrd          /initrd.img-2.6.26-2-686

title           Debian GNU/Linux, kernel 2.6.26-1-686
root            (hd0,0)
kernel          /vmlinuz-2.6.26-1-686 root=/dev/sda3 ro quiet
initrd          /initrd.img-2.6.26-1-686

title           Debian GNU/Linux, kernel 2.6.26-1-686 (single-user mode)
root            (hd0,0)
kernel          /vmlinuz-2.6.26-1-686 root=/dev/sda3 ro single
initrd          /initrd.img-2.6.26-1-686

### END DEBIAN AUTOMAGIC KERNELS LIST

root (hd1,0) refers to /dev/sdb, which is already part of our RAID arrays. We will reboot the system in a few moments; it will then try to boot from our (still degraded) RAID arrays, and if that fails, it will fall back to booting from /dev/sda (-> fallback 1).

Next we adjust our ramdisk to the new situation:

update-initramfs -u
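
By default, update-initramfs -u updates only a single initrd image. If you have additional kernels installed (like the 2.6.26-1-686 kernel in the menu.lst above) and want their initrds to know about the RAID setup as well, you can rebuild all of them at once:

update-initramfs -u -k all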

Now we copy the contents of /dev/sda1 and /dev/sda3 to /dev/md0 and /dev/md2 (which are mounted on /mnt/md0 and /mnt/md2):

cp -dpRx / /mnt/md2

cd /boot
cp -dpRx . /mnt/md0
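
If you prefer rsync over cp (for example so you can re-run the copy and only transfer what has changed), an equivalent sketch looks like this, assuming rsync is installed; -a preserves permissions and ownership, -H preserves hard links, and -x keeps the copy on one filesystem just like cp's -x:

rsync -aHx / /mnt/md2/        # copy the root filesystem
rsync -aHx /boot/ /mnt/md0/   # copy the contents of /boot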

 

6 Preparing GRUB (Part 1)

Afterwards we must install the GRUB boot loader on both hard drives so that the system can boot from either /dev/sda or the new /dev/sdb:

grub

At the GRUB shell, type in the following commands:

root (hd0,0)

grub> root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83

grub>

setup (hd0)

grub> setup (hd0)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd0)"...  17 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd0) (hd0)1+17 p (hd0,0)/grub/stage2 /grub/menu.lst"... succeeded
Done.

grub>

root (hd1,0)

grub> root (hd1,0)
 Filesystem type is ext2fs, partition type 0xfd

grub>

setup (hd1)

grub> setup (hd1)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd1)"...  17 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd1) (hd1)1+17 p (hd1,0)/grub/stage2 /grub/menu.lst"... succeeded
Done.

grub>

quit
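
If you would rather not type the commands interactively, the same sequence can be fed to the GRUB shell in one go. This is just a sketch using GRUB legacy's batch mode; it performs exactly the steps shown above:

grub --batch <<EOF
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
EOF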

Now, back at the normal shell, we reboot the system and hope that it comes up fine from our RAID arrays:

reboot
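
Once the system is back up, a quick way to confirm that it is really running from the arrays is to check the mounted devices and the swap device: / should now be on /dev/md2, /boot on /dev/md0, and the swap space on /dev/md1:

df -h
cat /proc/mdstat
swapon -s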


Comments

From: Anonymous at: 2009-08-28 01:43:48

In your configuration, you have 3 separate RAID devices that share the same head on the hard disk(s). You would get better performance if you instead made a single RAID device (md0) and partitioned that device into 3 partitions. The reason for this is simple: the I/O scheduler would be aware that this is a single disk (or read/write queue) and schedule accordingly, whereas with 3 different RAID arrays all sharing a single disk head, the kernel thinks they operate independently and does not schedule in an optimal way.

This certainly wouldn't matter for the /boot partition, because it is likely only read while the other arrays are not, but the swap and root arrays will be read and written to at the same time.

 

From: at: 2011-03-30 20:19:19

After upgrading, the Debian server failed to start. I am now trying to recover the boot through the rescue system; I mounted /dev/md1 on /mnt/md1, but some folders are empty, such as root, etc, and proc.
root@teste:/# cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]
md0 : active raid1 sdb1[0] sda1[1]
97536 blocks [2/2] [UU]

md1 : active raid1 sda3[1]
296993088 blocks [2/1] [_U]

unused devices: <none>


######################################

root@teste:/# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Thu Jan 27 16:55:06 2011
Raid Level : raid1
Array Size : 97536 (95.27 MiB 99.88 MB)
Used Dev Size : 97536 (95.27 MiB 99.88 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Wed Mar 30 19:10:44 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

UUID : adbe2b1b:4917a312:7792c71e:7dc17aa4
Events : 0.44

Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 1 1 active sync /dev/sda1
######
root@xteste:/# mdadm --detail /dev/md1
/dev/md1:
Version : 00.90.03
Creation Time : Thu Jan 27 16:55:06 2011
Raid Level : raid1
Array Size : 296993088 (283.23 GiB 304.12 GB)
Used Dev Size : 296993088 (283.23 GiB 304.12 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Wed Mar 30 19:09:54 2011
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

UUID : 5c8e4766:9653e87b:7792c71e:7dc17aa4
Events : 0.15600

Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 3 1 active sync /dev/sda3
root@teste:/#
###############
 
root@teste:/# fdisk -l
Disk /dev/sda: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000b01cb
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1          13       97656   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sda2              13         137     1000000   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3             137       37111   296993164   fd  Linux raid autodetect
Disk /dev/sdb: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000b32a6
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          13       97656   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sdb2              13         137     1000000   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sdb3             137       37111   296993164   fd  Linux raid autodetect
Disk /dev/md1: 304.1 GB, 304120922112 bytes
2 heads, 4 sectors/track, 74248272 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000
Disk /dev/md1 doesn't contain a valid partition table
Disk /dev/md0: 99 MB, 99876864 bytes
2 heads, 4 sectors/track, 24384 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000
Disk /dev/md0 doesn't contain a valid partition table
root@teste:/#
 
What should I do to solve the RAID1 problem?