How To Set Up Software RAID1 On A Running System (Incl. GRUB2 Configuration) (Ubuntu 10.04) - Page 3

Submitted by falko on Sun, 2010-07-04 19:05.

7 Preparing /dev/sda

If all goes well, you should now find /dev/md0 and /dev/md2 in the output of

df -h

root@server1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md2              4.0G  815M  3.0G  22% /
none                  243M  192K  243M   1% /dev
none                  247M     0  247M   0% /dev/shm
none                  247M   40K  247M   1% /var/run
none                  247M     0  247M   0% /var/lock
none                  247M     0  247M   0% /lib/init/rw
none                  4.0G  815M  3.0G  22% /var/lib/ureadahead/debugfs
/dev/md0              472M   27M  421M   6% /boot

The output of

cat /proc/mdstat

should be as follows:

root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[1]
      498624 blocks [2/1] [_U]

md1 : active raid1 sdb2[1]
      499648 blocks [2/1] [_U]

md2 : active raid1 sdb3[1]
      4242368 blocks [2/1] [_U]

unused devices: <none>
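
In this output, [2/1] [_U] means that only one of the two members of each mirror is active; the underscore marks the missing slot. As a small side sketch (not part of the original howto; the check_degraded helper name is my own), such a member map can also be checked from a script:

```shell
# Sketch: report md arrays that are missing a member, based on the
# [_U]/[UU] member map shown in mdstat-format output.
# check_degraded FILE prints one line per degraded array.
check_degraded() {
    awk '
        /^md[0-9]+ :/ { array = $1 }        # remember the current array name
        match($0, /\[[U_]+\]/) {            # member map, e.g. [_U] or [UU]
            map = substr($0, RSTART, RLENGTH)
            if (map ~ /_/) print array " degraded " map
        }
    ' "$1"
}

# Example (live system): check_degraded /proc/mdstat
```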

Now we must change the partition types of our three partitions on /dev/sda to Linux raid autodetect as well:

fdisk /dev/sda

root@server1:~# fdisk /dev/sda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): <-- t
Partition number (1-4): <-- 1
Hex code (type L to list codes): <-- fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): <-- t
Partition number (1-4): <-- 2
Hex code (type L to list codes): <-- fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): <-- t
Partition number (1-4): <-- 3
Hex code (type L to list codes): <-- fd
Changed system type of partition 3 to fd (Linux raid autodetect)

Command (m for help): <-- w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
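
You can double-check the result with fdisk -l /dev/sda; all three partitions should now show Id fd. As a small sketch (not from the original howto; the check_types helper name is my own), that type column can also be verified mechanically:

```shell
# Sketch: list /dev/... partitions from `fdisk -l`-style output and flag
# any that are not marked "Linux raid autodetect" (type fd).
# check_types FILE reads saved output.
check_types() {
    awk '
        /^\/dev\// {
            if ($0 ~ /Linux raid autodetect/)
                print $1 " OK (fd)"
            else
                print $1 " NOT fd"
        }
    ' "$1"
}

# Example (live system): fdisk -l /dev/sda > /tmp/out; check_types /tmp/out
```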

Now we can add /dev/sda1, /dev/sda2, and /dev/sda3 to the respective RAID arrays:

mdadm --add /dev/md0 /dev/sda1
mdadm --add /dev/md1 /dev/sda2
mdadm --add /dev/md2 /dev/sda3

Now take a look at

cat /proc/mdstat

... and you should see that the RAID arrays are being synchronized:

root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
      498624 blocks [2/2] [UU]

md1 : active raid1 sda2[2] sdb2[1]
      499648 blocks [2/1] [_U]

md2 : active raid1 sda3[2] sdb3[1]
      4242368 blocks [2/1] [_U]
      [===========>.........]  recovery = 55.1% (2338176/4242368) finish=0.3min speed=83506K/sec

unused devices: <none>

(You can run

watch cat /proc/mdstat

to get an ongoing output of the process. To leave watch, press CTRL+C.)
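
If you are scripting the procedure instead of watching interactively, you can poll until the resynchronization lines disappear. This is only a sketch (the wait_for_sync helper name is my own; on a live system you would pass /proc/mdstat):

```shell
# Sketch: block until no resync/recovery progress line is present in
# mdstat-format output. wait_for_sync FILE returns once the arrays are clean.
wait_for_sync() {
    while grep -qE '(recovery|resync) *=' "$1"; do
        sleep 10    # re-check every 10 seconds
    done
}
```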

Wait until the synchronization has finished. The output should then look like this:

root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
      498624 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      499648 blocks [2/2] [UU]

md2 : active raid1 sda3[0] sdb3[1]
      4242368 blocks [2/2] [UU]

unused devices: <none>


Then adjust /etc/mdadm/mdadm.conf to the new situation:

cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
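
Note that appending with >> will duplicate the ARRAY lines if you ever run this step again. As a small sketch (not from the original howto; the refresh_arrays helper name is my own), the definitions can be refreshed idempotently by stripping the old ARRAY lines first:

```shell
# Sketch: replace all existing ARRAY lines in an mdadm.conf-style file with
# fresh definitions. refresh_arrays CONF SCAN, where SCAN holds the output
# of `mdadm --examine --scan` saved to a file.
refresh_arrays() {
    conf="$1"; scan="$2"
    grep -v '^ARRAY ' "$conf" > "$conf.tmp"   # keep everything but old ARRAY lines
    cat "$scan" >> "$conf.tmp"                # append the fresh definitions
    mv "$conf.tmp" "$conf"
}
```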

/etc/mdadm/mdadm.conf should now look something like this:

cat /etc/mdadm/mdadm.conf

# mdadm.conf
# Please refer to mdadm.conf(5) for information about this file.

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts

# definitions of existing MD arrays

# This file was auto-generated on Mon, 21 Jun 2010 13:21:00 +0200
# by mkconf $Id$
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=68686c40:b924278e:325ecf68:79913751
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=9719181e:3071f655:325ecf68:79913751
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=c3360f0f:7f3d47ec:325ecf68:79913751


8 Preparing GRUB2 (Part 2)

Now we delete /etc/grub.d/09_swraid1_setup...

rm -f /etc/grub.d/09_swraid1_setup

... and update our GRUB2 bootloader configuration and the initramfs:

update-grub
update-initramfs -u

Now if you take a look at /boot/grub/grub.cfg, you should find that the menuentry stanzas in the ### BEGIN /etc/grub.d/10_linux ### section look pretty much the same as what we had in /etc/grub.d/09_swraid1_setup (they should now also be set to boot from /dev/md0 instead of (hd0,1) or (hd1,1)). That's why we don't need /etc/grub.d/09_swraid1_setup anymore.
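
For orientation, such a stanza looks roughly like the following sketch; the kernel version, menu title, and exact kernel options are placeholders here, and your generated grub.cfg will differ in detail:

```
menuentry 'Ubuntu, with Linux 2.6.32-21-server' {
        insmod raid
        insmod mdraid
        insmod ext2
        set root='(md0)'
        linux   /vmlinuz-2.6.32-21-server root=/dev/md2 ro quiet
        initrd  /initrd.img-2.6.32-21-server
}
```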

Reboot the system:

reboot

It should boot without problems.

That's it - you've successfully set up software RAID1 on your running Ubuntu 10.04 system!

Submitted by stuck (not registered) on Thu, 2012-03-29 12:47.
I'm not entirely sure why it went wrong, but I was attempting to mirror an existing partition. I had two drives of exactly the same size, with exactly the same partitions. sdb1 had the data I was hoping to keep, sdc1 had garbage. I added the junk disk with this command:

 mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdc1

Then I erased it:

 mkfs.ext4 /dev/md0

 Then I added the drive with the data on it (formerly missing):

 mdadm --add /dev/md0 /dev/sdb1

 After it finished rebuilding, I ended up with a completely blank drive (it had copied the new ext4 overtop of the "missing" drive). Fortunately, I had a backup, but still curious what went wrong.

 ubuntu 10.10


Submitted by Anonymous (not registered) on Mon, 2012-08-13 01:59.
You ended up with a blank drive because you overwrote your drive with the new blank one. This tutorial does not work, it erases all your data. *shrug*
Submitted by Anonymous (not registered) on Fri, 2014-07-04 18:21.
No, he just did it the wrong way. The tutorial works just fine. With very few changes in recent distros. You just need to *understand* what you are doing, not just copy and paste what you do not understand.
Submitted by Giuliastro (not registered) on Wed, 2012-02-22 12:14.


Thank you for your solution. RAID works, but unfortunately the system won't boot without the first drive (sda).

Submitted by icius (registered user) on Tue, 2011-08-30 14:01.
Thank you so much for writing such a succinct and complete tutorial.  It saved my bacon on an Ubuntu 11.04 Server install which absolutely refused to install grub when I tried to do RAID 1 during the install. 
Submitted by Homer (not registered) on Mon, 2011-03-28 17:41.

Do not leave it like that: if you lose the sda drive, the computer will not start, because GRUB2 is not installed properly to boot from disk sdb. You get:

"error: no such device..."

For me, the solution found on the net is:

#export LANG=C

#update-grub2 /dev/sdb

Tested !

After replacing a failed disk, are these commands needed before the reboot?

Not tested.

Have a nice day.
Submitted by rpremuz (not registered) on Sat, 2011-02-26 11:45.

Well done, Falko, for the tutorial. I was able to use it in my situation quite easily.

I second HansMuc's comments and also suggest another improvement:

In steps 3 and 7 the partition type ID can be changed to fd in a quicker way (I like putting the prompt in front of the commands):

# sfdisk --change-id /dev/sdb 1 fd
# sfdisk --change-id /dev/sdb 2 fd


# sfdisk --change-id /dev/sda 1 fd
# sfdisk --change-id /dev/sda 2 fd

-- rpr.

Submitted by Anonymous (not registered) on Sun, 2011-01-16 12:06.

I had a problem with the initramfs, which wasn't found at startup (Ubuntu 10.10).

I fixed this problem by adding the following line in /etc/default/grub, under the last commented line:

GRUB_PRELOAD_MODULES="raid mdraid"

And it will work!

Submitted by Kristoffer (not registered) on Mon, 2010-10-18 08:50.

Dear Falko,

Thank you very much for providing such a clear and easy to follow guide for setting up RAID1. I am a Linux novice, but ran into absolutely no problems following your guide. I only had to get a bit more help from a google search to learn more about comparing directories after the initial copy of data from my old system into the raid volume - just to make sure that everything had transferred correctly.

 I also confirm HansMuc's finding that there are two steps which can safely be omitted.

Best regards,


Submitted by HansMuc (not registered) on Mon, 2010-08-02 04:59.

Great tutorial on how to set up RAID1.
In addition, the GRUB2 stuff is the icing on the cake. Great work!

There are 2 steps which IMHO could be omitted:

a) Modifying mtab (-> 5 Adjusting The System To RAID1) can be omitted.
mtab is updated automagically by the 'mount' command.
When the computer is shut down, file systems are unmounted and mtab
is modified accordingly. After reboot, file systems are remounted and mtab
is updated again automagically by 'mount'.

Changing mtab by hand might even trigger problems,
if an application is checking mtab to find out which
file systems are really mounted.


b) Modifying mdadm.conf on /mnt/md2 (-> 7 Preparing /dev/sda)
isn't necessary.

We had already modified mdadm.conf on the running system
(-> 4 Creating Our RAID Arrays) and later copied that file system
over to /mnt/md2, so the mdadm.conf on /dev/md2 is already up to date.
The ARRAY definitions found through 'cat /proc/mdstat' haven't changed;
otherwise we wouldn't have been able to boot using /dev/md0 and /dev/md1.