How To Set Up Software RAID1 On A Running System (Incl. GRUB2 Configuration) (Debian Squeeze) - Page 2

4 Creating Our RAID Arrays

Now let's create our RAID arrays /dev/md0, /dev/md1, and /dev/md2. /dev/sdb1 will be added to /dev/md0, /dev/sdb2 to /dev/md1, and /dev/sdb3 to /dev/md2. /dev/sda1, /dev/sda2, and /dev/sda3 can't be added right now because the system is currently running on them, so we use the placeholder missing in the following three commands:

mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1

mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2

mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/sdb3

You might see the following message for each command - just press y to continue:

root@server1:~# mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/sdb3
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array?
 <-- y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.
root@server1:~#

The command

cat /proc/mdstat

should now show that you have three degraded RAID arrays ([_U] or [U_] means that an array is degraded while [UU] means that the array is ok):

root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb3[1]
      4241396 blocks super 1.2 [2/1] [_U]

md1 : active raid1 sdb2[1]
      499700 blocks super 1.2 [2/1] [_U]

md0 : active raid1 sdb1[1]
      498676 blocks super 1.2 [2/1] [_U]

unused devices: <none>
root@server1:~#
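
If you want more detail than /proc/mdstat provides, mdadm can print the full state of an array, including the number of working and failed devices (repeat for /dev/md1 and /dev/md2 if you like):

mdadm --detail /dev/md0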

Next we create filesystems on our RAID arrays (ext4 on /dev/md0 and /dev/md2, and swap on /dev/md1):

mkfs.ext4 /dev/md0
mkswap /dev/md1
mkfs.ext4 /dev/md2
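
As a quick check, blkid should now report an ext4 filesystem on /dev/md0 and /dev/md2 and a swap signature on /dev/md1 (the UUIDs will of course differ on your system):

blkid /dev/md0 /dev/md1 /dev/md2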

Next we must adjust /etc/mdadm/mdadm.conf (which doesn't contain any information about our new RAID arrays yet) to the new situation:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

Display the contents of the file:

cat /etc/mdadm/mdadm.conf

At the bottom of the file you should now see details about our three (degraded) RAID arrays:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Tue, 24 May 2011 14:09:09 +0200
# by mkconf 3.1.4-1+8efb9d1
ARRAY /dev/md/0 metadata=1.2 UUID=b40c3165:17089af7:5d5ee79b:8783491b name=server1.example.com:0
ARRAY /dev/md/1 metadata=1.2 UUID=62e4a606:878092a0:212209c5:c91b8fef name=server1.example.com:1
ARRAY /dev/md/2 metadata=1.2 UUID=94e51099:d8475c57:4ff1c60f:9488a09a name=server1.example.com:2
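
Note that mdadm --examine --scan appends to whatever is already in the file, so if you run it more than once you will end up with duplicate ARRAY lines. In that case restore the backup we made above and append again:

cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf
mdadm --examine --scan >> /etc/mdadm/mdadm.conf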


5 Adjusting The System To RAID1

Now let's mount /dev/md0 and /dev/md2 (we don't need to mount the swap array /dev/md1):

mkdir /mnt/md0
mkdir /mnt/md2

mount /dev/md0 /mnt/md0
mount /dev/md2 /mnt/md2

You should now find both arrays in the output of

mount

root@server1:~# mount
/dev/sda3 on / type ext4 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/sda1 on /boot type ext4 (rw)
/dev/md0 on /mnt/md0 type ext4 (rw)
/dev/md2 on /mnt/md2 type ext4 (rw)
root@server1:~#

Next we modify /etc/fstab. Comment out the current /, /boot, and swap entries and add new lines for them, replacing the UUIDs with /dev/md0 (for the /boot partition), /dev/md1 (for the swap partition), and /dev/md2 (for the / partition), so that the file looks as follows:

vi /etc/fstab

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
# / was on /dev/sda3 during installation
#UUID=e4e38871-0115-477d-94f9-34b079d26248 /               ext4    errors=remount-ro 0       1
/dev/md2 /               ext4    errors=remount-ro 0       1
# /boot was on /dev/sda1 during installation
#UUID=7e2fb013-073e-4312-a669-f34b35069bfb /boot           ext4    defaults        0       2
/dev/md0 /boot           ext4    defaults        0       2
# swap was on /dev/sda2 during installation
#UUID=1a5951f8-d0ab-4e0e-b42a-871f81b6fd82 none            swap    sw              0       0
/dev/md1 none            swap    sw              0       0
/dev/scd0       /media/cdrom0   udf,iso9660 user,noauto     0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto  0       0

Next replace /dev/sda1 with /dev/md0 and /dev/sda3 with /dev/md2 in /etc/mtab:

vi /etc/mtab

/dev/md2 / ext4 rw,errors=remount-ro 0 0
tmpfs /lib/init/rw tmpfs rw,nosuid,mode=0755 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
udev /dev tmpfs rw,mode=0755 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=620 0 0
/dev/md0 /boot ext4 rw 0 0
/dev/md0 /mnt/md0 ext4 rw 0 0
/dev/md2 /mnt/md2 ext4 rw 0 0
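
If you prefer not to edit /etc/mtab by hand, a sed one-liner makes the same two substitutions (double-check the result with cat /etc/mtab afterwards):

sed -i -e 's|^/dev/sda3 |/dev/md2 |' -e 's|^/dev/sda1 |/dev/md0 |' /etc/mtab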

Now on to the GRUB2 boot loader. Create the file /etc/grub.d/09_swraid1_setup as follows:

cp /etc/grub.d/40_custom /etc/grub.d/09_swraid1_setup
vi /etc/grub.d/09_swraid1_setup

#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.
menuentry 'Debian GNU/Linux, with Linux 2.6.32-5-amd64' --class debian --class gnu-linux --class gnu --class os {
        insmod raid
        insmod mdraid
        insmod part_msdos
        insmod ext2
        set root='(md/0)'
        echo    'Loading Linux 2.6.32-5-amd64 ...'
        linux   /vmlinuz-2.6.32-5-amd64 root=/dev/md2 ro  quiet
        echo    'Loading initial ramdisk ...'
        initrd  /initrd.img-2.6.32-5-amd64
}
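
Because we copied the file from /etc/grub.d/40_custom it should already be executable, but update-grub ignores scripts in /etc/grub.d that are not executable, so it does no harm to make sure:

chmod +x /etc/grub.d/09_swraid1_setup

(Note for later Debian releases: as several comments below point out, the GRUB2 RAID module was renamed from mdraid to mdraid1x in Wheezy, so adjust the insmod line accordingly there.)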

Make sure you use the correct kernel version in the menuentry stanza (in the linux and initrd lines). You can find it out by running

uname -r

or by taking a look at the current menuentry stanzas in the ### BEGIN /etc/grub.d/10_linux ### section in /boot/grub/grub.cfg. Also make sure that you use root=/dev/md2 in the linux line.

The important part of our new menuentry stanza is the line set root='(md/0)' - it makes sure that GRUB boots from our RAID1 array /dev/md0 (which will hold the /boot partition) instead of directly from /dev/sda or /dev/sdb. That way, if one of our hard drives fails, the system will still be able to boot.

Because we don't use UUIDs anymore for our block devices, open /etc/default/grub...

vi /etc/default/grub

... and uncomment the line GRUB_DISABLE_LINUX_UUID=true:

# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX=""

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
GRUB_DISABLE_LINUX_UUID=true

# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_LINUX_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"

Run

update-grub

to write our new kernel stanza from /etc/grub.d/09_swraid1_setup to /boot/grub/grub.cfg.
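
You can verify that the stanza was picked up - update-grub wraps each script's output in BEGIN/END markers, so something like this should show our menuentry near the top of the generated file:

grep -A 12 '09_swraid1_setup' /boot/grub/grub.cfg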

Next we adjust our ramdisk to the new situation:

update-initramfs -u
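
To double-check that mdadm and our array definitions made it into the new initramfs, you can list its contents (lsinitramfs ships with Debian's initramfs-tools):

lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm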

Now we copy the contents of /dev/sda1 and /dev/sda3 to /dev/md0 and /dev/md2 (which are mounted on /mnt/md0 and /mnt/md2):

cp -dpRx / /mnt/md2

cd /boot
cp -dpRx . /mnt/md0
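
A quick look at the used space on the copies compared to the originals tells you whether everything arrived:

df -h / /boot /mnt/md0 /mnt/md2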


6 Preparing GRUB2 (Part 1)

Afterwards we must make sure that the GRUB2 bootloader is installed on both hard drives, /dev/sda and /dev/sdb:

grub-install /dev/sda
grub-install /dev/sdb

Now we reboot the system and hope that it boots ok from our RAID arrays:

reboot


Comments

From: al biheiri at: 2011-06-08 12:16:27

Why would you modprobe multipath? I think that is only used for fiber cards. It shouldn't have anything to do with mdadm.

From: paul at: 2011-06-13 17:34:35

Hey.

Great Tutorial! 

Here are a couple of bumps you might run into if you're trying to use this on Debian Unstable "Wheezy".

First:
If you want to use GPT-enabled hard disks you'll have to create a small (1 MB) BIOS Boot Partition or GRUB2 won't install on that drive.

Second:
If you ARE actually using GUID partition tables you won't be able to change your partitions using fdisk. You'll have to use gdisk (or parted) instead, which is its direct successor and fairly simple to use. Note however, that there's no direct way of duplicating the partition table from one disk to another. You'll have to use sgdisk for that, which installs with gdisk:

sgdisk --replicate=/dev/sdb /dev/sda
sgdisk --randomize-guids --move-second-header /dev/sdb

The second line is important to distinguish the disks in your system later on. In addition it ensures that your secondary (backup) GPT header really resides on the very last blocks of your disk.

Third:
This one took me a while :-\
If you're creating the /etc/grub.d/09_swraid1_setup file as mentioned, you MUST rename the module GRUB2 is instructed to load from insmod mdraid to insmod mdraid1x (at least for a RAID1 or 10 configuration), as the module under the old name no longer exists. Furthermore, if you don't have a separate /boot partition you need to fix the paths to the kernel and ramdisk (probably "/boot/vmlinuz-2.6…" and "/boot/initrd.img-2.6…").

From: Anonymous at: 2014-10-19 17:04:41

How can I do this on Debian 7? Now I've killed 6 VMs with this tutorial :D

From: Fredrik Falk at: 2013-07-10 19:02:01

Had issues with "file not found" in GRUB after rebooting into the new array. Tried a couple of things until I found out the md devices weren't listed by "ls" in the grub rescue prompt. This guide adds insmod mdraid in the /etc/grub.d/09_swraid1_setup file, but in Debian 7 this module doesn't exist. I added mdraid1x instead of mdraid and thereafter it booted fine.

From: at: 2013-11-14 20:23:26

In section 5 (Adjusting The System To RAID1), this:

#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.
menuentry 'Debian GNU/Linux, with Linux 2.6.32-5-amd64' --class debian --class gnu-linux --class gnu --class os {
        insmod raid
        insmod mdraid
        insmod part_msdos
        insmod ext2
        set root='(md/0)'
        echo    'Loading Linux 2.6.32-5-amd64 ...'
        linux   /vmlinuz-2.6.32-5-amd64 root=/dev/md2 ro  quiet
        echo    'Loading initial ramdisk ...'
        initrd  /initrd.img-2.6.32-5-amd64
}

Should be:

#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.
menuentry 'Debian GNU/Linux, with Linux 2.6.32-5-amd64' --class debian --class gnu-linux --class gnu --class os {
        insmod raid
        insmod mdraid
        insmod part_msdos
        insmod ext2
        set root='(md/0)'
        echo    'Loading Linux 2.6.32-5-amd64 ...'
        linux   /boot/vmlinuz-2.6.32-5-amd64 root=/dev/md2 ro  quiet
        echo    'Loading initial ramdisk ...'
        initrd  /boot/initrd.img-2.6.32-5-amd64
}

From: Alex at: 2014-06-23 11:23:06


If you are using GPT, then you can use sgdisk to clone the partition table from /dev/sda to the other two hard drives:

# sgdisk --backup=table /dev/sda
# sgdisk -G --load-backup=table /dev/sdb

From the man page:

  -G, --randomize-guids
              Randomize the disk's GUID and all partitions' unique GUIDs (but not their partition type code GUIDs). This function may be used after cloning a disk in order to render all GUIDs once again unique.

Anyway, make sure you know what you're doing.

From: kokolorum at: 2012-05-06 11:38:49

Please add this:

apt-get install grub2

dd if=/dev/sda of=/boot/backup.dd.sda

sfdisk -d /dev/sda | sfdisk /dev/sdb

sfdisk -d /dev/sda > /boot/backup.sfdisk.sda

From: simon at: 2012-06-17 23:39:12

Many thanks!

I was stuck without being able to set up RAID1 on a running Squeeze install.

From: tv.debian at: 2011-06-08 09:41:50

Hi, thanks for this tutorial. I have done this kind of manipulation several times, and a few observations:

Using sfdisk to dump the partition layout from the first disk to the second can lead to nasty data corruption - I have experienced it first hand: the RAID superblocks and the filesystem overlap and lead to all kinds of problems. Since the original disk is wiped in the process there is no point doing this anyway; just create empty (no filesystem) partitions on the new disk, then create the one-disk RAID, then format the RAID volume. Afterwards the RAIDed disk's partition table can safely be used on the second disk.

Alternative: shrink the original partitions by about 1 MB, go on with sfdisk, create your RAID, then when everything is done "resize2fs" the RAID volume.

If the RAID volume is already created, it's possible to fix it with "e2fsck -cc", but it's very time consuming. You might want to try it on your setup though - you might be surprised by the result.

Regarding metadata: with the 1.2 format, which is now the default, it's neither needed nor recommended to change the partition type to "fd" (RAID autodetect). As long as an initramfs is used and mdadm.conf is filled in, the system will boot just fine.

Lastly, with the new metadata format and a recent kernel, all RAID1 arrays are seen as partitionable, leading to possible shifts in volume names under /dev. It's safer to use UUIDs, or the mdadm "--name=" option, to avoid getting confused when "md1" suddenly becomes "md1p1" or "md127" after an upgrade (or disaster recovery from a live system). Or force a non-partitionable RAID with "--auto=md", which should prevent weird RAID volume renaming.

Have fun.

From: Louis at: 2012-07-24 10:28:34

Nice howto, but one suggestion:

Don't use /dev/mdX in fstab.

Use the command blkid and set the UUID of the RAID boot device, like:

UUID=c3b87084-07c2-4818-b118-6c6764f81545       /                       ext4    defaults,errors=remount-ro       0       1

It keeps things more error-free and universal.

From: Patrice Vigier at: 2012-10-31 15:26:05

With this tutorial I could not get the second disk of the RAID1 to boot.

The solution came from this: in the file "/etc/default/grub", uncomment the line

GRUB_TERMINAL=console

Afterwards, do not forget to run

update-grub

And it worked.
And it worked