How To Set Up Software RAID1 On A Running System (Incl. GRUB2 Configuration) (Ubuntu 10.04)

Version 1.0
Author: Falko Timme
Last edited 06/21/2010

This guide explains how to set up software RAID1 on an already running Ubuntu 10.04 system. The GRUB2 bootloader will be configured in such a way that the system will still be able to boot if one of the hard drives fails (no matter which one).

I do not issue any guarantee that this will work for you!

 

1 Preliminary Note

In this tutorial I'm using an Ubuntu 10.04 system with two hard drives, /dev/sda and /dev/sdb, which are identical in size. /dev/sdb is currently unused, and /dev/sda has the following partitions:

  • /dev/sda1: /boot partition, ext4;
  • /dev/sda2: swap;
  • /dev/sda3: / partition, ext4

In the end I want to have the following situation:

  • /dev/md0 (made up of /dev/sda1 and /dev/sdb1): /boot partition, ext4;
  • /dev/md1 (made up of /dev/sda2 and /dev/sdb2): swap;
  • /dev/md2 (made up of /dev/sda3 and /dev/sdb3): / partition, ext4

This is the current situation:

df -h

root@server1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3             4.0G  808M  3.0G  21% /
none                  243M  168K  243M   1% /dev
none                  247M     0  247M   0% /dev/shm
none                  247M   36K  247M   1% /var/run
none                  247M     0  247M   0% /var/lock
none                  247M     0  247M   0% /lib/init/rw
none                  4.0G  808M  3.0G  21% /var/lib/ureadahead/debugfs
/dev/sda1             472M   27M  422M   6% /boot
root@server1:~#

fdisk -l

root@server1:~# fdisk -l

Disk /dev/sda: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000246b7

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          63      498688   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              63         125      499712   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3             125         653     4242432   83  Linux
Partition 3 does not end on cylinder boundary.

Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table
root@server1:~#

 

2 Installing mdadm

The most important tool for setting up RAID is mdadm. Let's install it like this:

aptitude install initramfs-tools mdadm

Afterwards, we load a few kernel modules (to avoid a reboot):

modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10
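
The seven modprobe calls above can also be written as a loop. A small convenience sketch (the function name is illustrative; behaviour is identical to the individual calls):

```shell
# Load the md personality modules in one loop, equivalent to the
# individual modprobe calls above. Requires root privileges.
load_raid_modules() {
    for mod in linear multipath raid0 raid1 raid5 raid6 raid10; do
        modprobe "$mod"
    done
}
```

Call it as root with load_raid_modules before checking /proc/mdstat.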

Now run

cat /proc/mdstat

The output should look as follows:

root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
root@server1:~#
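
If you want to script this check rather than eyeball the output, a small helper can grep /proc/mdstat for a given personality. This is a sketch (the helper name is made up here); the optional file argument only exists so the function can be exercised against a saved copy of the output:

```shell
# mdstat_has NAME [FILE]: succeed if the given RAID personality is
# registered. FILE defaults to /proc/mdstat.
mdstat_has() {
    grep -q "\[$1\]" "${2:-/proc/mdstat}"
}
```

For example: mdstat_has raid1 || echo "raid1 personality missing - re-check the modprobe step".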

 

3 Preparing /dev/sdb

To create a RAID1 array on our already running system, we must prepare the /dev/sdb hard drive for RAID1, then copy the contents of our /dev/sda hard drive to it, and finally add /dev/sda to the RAID1 array.

First, we copy the partition table from /dev/sda to /dev/sdb so that both disks have exactly the same layout:

sfdisk -d /dev/sda | sfdisk --force /dev/sdb

The output should be as follows:

root@server1:~# sfdisk -d /dev/sda | sfdisk --force /dev/sdb
Checking that no-one is using this disk right now ...
OK

Disk /dev/sdb: 652 cylinders, 255 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
 /dev/sdb: unrecognized partition table type
Old situation:
No partitions found
New situation:
Units = sectors of 512 bytes, counting from 0

   Device Boot    Start       End   #sectors  Id  System
/dev/sdb1   *      2048    999423     997376  83  Linux
/dev/sdb2        999424   1998847     999424  82  Linux swap / Solaris
/dev/sdb3       1998848  10483711    8484864  83  Linux
/dev/sdb4             0         -          0   0  Empty
Warning: partition 1 does not end at a cylinder boundary
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
root@server1:~#

The command

fdisk -l

should now show that both HDDs have the same layout:

root@server1:~# fdisk -l

Disk /dev/sda: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000246b7

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          63      498688   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              63         125      499712   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3             125         653     4242432   83  Linux
Partition 3 does not end on cylinder boundary.

Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          63      498688   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sdb2              63         125      499712   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sdb3             125         653     4242432   83  Linux
Partition 3 does not end on cylinder boundary.
root@server1:~#

Next we must change the partition type of our three partitions on /dev/sdb to Linux raid autodetect:

fdisk /dev/sdb

root@server1:~# fdisk /dev/sdb

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help):
 <-- m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help):
 <-- t
Partition number (1-4): <-- 1
Hex code (type L to list codes): <-- L

 0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris
 1  FAT12           39  Plan 9          82  Linux swap / So c1  DRDOS/sec (FAT-
 2  XENIX root      3c  PartitionMagic  83  Linux           c4  DRDOS/sec (FAT-
 3  XENIX usr       40  Venix 80286     84  OS/2 hidden C:  c6  DRDOS/sec (FAT-
 4  FAT16 <32M      41  PPC PReP Boot   85  Linux extended  c7  Syrinx
 5  Extended        42  SFS             86  NTFS volume set da  Non-FS data
 6  FAT16           4d  QNX4.x          87  NTFS volume set db  CP/M / CTOS / .
 7  HPFS/NTFS       4e  QNX4.x 2nd part 88  Linux plaintext de  Dell Utility
 8  AIX             4f  QNX4.x 3rd part 8e  Linux LVM       df  BootIt
 9  AIX bootable    50  OnTrack DM      93  Amoeba          e1  DOS access
 a  OS/2 Boot Manag 51  OnTrack DM6 Aux 94  Amoeba BBT      e3  DOS R/O
 b  W95 FAT32       52  CP/M            9f  BSD/OS          e4  SpeedStor
 c  W95 FAT32 (LBA) 53  OnTrack DM6 Aux a0  IBM Thinkpad hi eb  BeOS fs
 e  W95 FAT16 (LBA) 54  OnTrackDM6      a5  FreeBSD         ee  GPT
 f  W95 Ext'd (LBA) 55  EZ-Drive        a6  OpenBSD         ef  EFI (FAT-12/16/
10  OPUS            56  Golden Bow      a7  NeXTSTEP        f0  Linux/PA-RISC b
11  Hidden FAT12    5c  Priam Edisk     a8  Darwin UFS      f1  SpeedStor
12  Compaq diagnost 61  SpeedStor       a9  NetBSD          f4  SpeedStor
14  Hidden FAT16 <3 63  GNU HURD or Sys ab  Darwin boot     f2  DOS secondary
16  Hidden FAT16    64  Novell Netware  af  HFS / HFS+      fb  VMware VMFS
17  Hidden HPFS/NTF 65  Novell Netware  b7  BSDI fs         fc  VMware VMKCORE
18  AST SmartSleep  70  DiskSecure Mult b8  BSDI swap       fd  Linux raid auto
1b  Hidden W95 FAT3 75  PC/IX           bb  Boot Wizard hid fe  LANstep
1c  Hidden W95 FAT3 80  Old Minix       be  Solaris boot    ff  BBT
1e  Hidden W95 FAT1
Hex code (type L to list codes):
 <-- fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help):
 <-- t
Partition number (1-4): <-- 2
Hex code (type L to list codes): <-- fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help):
 <-- t
Partition number (1-4): <-- 3
Hex code (type L to list codes): <-- fd
Changed system type of partition 3 to fd (Linux raid autodetect)

Command (m for help):
 <-- w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
root@server1:~#

To make sure that there are no remains from previous RAID installations on /dev/sdb, we run the following commands:

mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2
mdadm --zero-superblock /dev/sdb3

If there are no remains from previous RAID installations, each of the above commands will throw an error like this one (which is nothing to worry about):

root@server1:~# mdadm --zero-superblock /dev/sdb1
mdadm: Unrecognised md component device - /dev/sdb1
root@server1:~#

Otherwise the commands will not display anything at all.
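
The three invocations can also be wrapped in a loop that tolerates the expected error on a clean disk. A sketch (the function name is illustrative; pass whichever partitions you are preparing):

```shell
# zero_raid_superblocks PART...: wipe any stale md metadata from each
# listed partition. The "Unrecognised md component device" error from a
# clean partition is expected, so it is swallowed and noted here.
zero_raid_superblocks() {
    for part in "$@"; do
        mdadm --zero-superblock "$part" 2>/dev/null \
            || echo "no old superblock on $part (ok)"
    done
}
```

For example, run zero_raid_superblocks /dev/sdb1 /dev/sdb2 /dev/sdb3 as root.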


Comments

From: Bill at: 2012-09-02 05:56:08

asus-bill / # gfdisk -G
asus-bill / # sfdisk -d /dev/sda | sfdisk --force /dev/sdb
Checking that no-one is using this disk right now ...

WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util sfdisk doesn't support GPT. Use GNU Parted.

OK

Disk /dev/sdb: 121601 cylinders, 255 heads, 63 sectors/track
Old situation:
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sdb1          0+ 121601- 121602- 976762583+  ee  GPT
/dev/sdb2          0       -       0          0    0  Empty
/dev/sdb3          0       -       0          0    0  Empty
/dev/sdb4          0       -       0          0    0  Empty
New situation:
Units = sectors of 512 bytes, counting from 0

   Device Boot    Start       End   #sectors  Id  System
/dev/sdb1             1 1953525167 1953525167  ee  GPT
/dev/sdb2             0         -          0   0  Empty
/dev/sdb3             0         -          0   0  Empty
/dev/sdb4             0         -          0   0  Empty
Warning: partition 1 does not end at a cylinder boundary
Warning: no primary partition is marked bootable (active)
This does not matter for LILO, but the DOS MBR will not boot this disk.
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
asus-bill / # gfdisk -l

Disk /dev/sda: 1000 GB, 1000202273280 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System 
/dev/sda1               1        2835    22772106   83  Linux 
Warning: Partition 1 does not end on cylinder boundary.
/dev/sda2            2835      121601   953987895   83  Linux 
Warning: Partition 2 does not end on cylinder boundary.
Error: /dev/sdb: unrecognised disk label
asus-bill / # 
 
Ok, so it doesn't work with a GPT drive? (sdb in this scenario is an unallocated, new drive straight out of the box, a Seagate Barracuda 1TB with 64 MB cache, identical to sda)
 

From: Hans at: 2013-02-27 22:34:32

Thank you for this excellent Tutorial!

Still, grub2 comes with surprises. When /dev/sda fails it might happen that the system runs into an endless loop of boot trials (not even showing the grub menu) because grub won't be able to find the files needed. I could solve the problem with 

grub-install --modules="raid" /dev/sdx
See http://ubuntuforums.org/showthread.php?p=12534060#post12534060:

1. Install grub on EACH of the array's disks and pass grub-install the option flag --modules="raid". Without --modules="raid" it will fail.
2. Rebuild your initramfs.

From: HST at: 2013-10-19 13:08:04

This was a great help to me in a slightly harder (?) problem, namely moving to RAID10 from RAID1 with only two disks.  I had to make a few appropriate minor changes for Debian, for raid10 and for an existing RAID in place.

Details on request.

From: Anonymous at: 2011-03-27 17:07:37

Hello,

If problem when reboot, try :

menuentry 'Ubuntu, with Linux 2.6.32-21-server' --class ubuntu --class gnu-linux --class gnu --class os {
recordfail
insmod raid
insmod mdraid
insmod ext2
set root='(md0)'
linux /boot/vmlinuz-2.6.32-21-server root=/dev/md2 ro quiet
initrd /boot/initrd.img-2.6.32-21-server
}

it's better !

From: Steve M at: 2011-07-14 11:58:36

Great article thanks, but I got totally stuck at the end of this page.  My system wouldn't boot from the RAID1 - got grub errors: file not found, no such disk, you need to load the kernel first.

The problem turned out to be that the system was set not to boot a degraded array - the setting is in /etc/initramfs-tools/conf.d/mdadm.

The fix is to run dpkg-reconfigure mdadm and choose Yes when it asks about booting degraded arrays.  Suggest doing this before you run update-grub above.

 

From: ecellingsworth at: 2011-09-15 21:01:45

Let me first say that this is the only "how-to" guide I've been able to find that has up-to-date information for getting grub2 to work with mdraid. Thank you for that. I've been banging my head against the wall trying to get a boot manager installed on an array.

I had both of the problems described by the commenters above. First, after finishing the steps on this page and rebooting, grub threw a few errors: file not found, no such disk, you must load the kernel first. Through some trial and error I was able to determine the cause of each error.

For me, grub was unable to find the mdraid module, so the line "insmod mdraid" was returning the "no such file" error. I was able to remove this line without problems. I'm not sure of the difference between this module and the "raid" module, but it doesn't appear necessary (hopefully I don't find this to be untrue after a drive failure!).

Grub was unable to "set root = '(md0)'". I entered the command console to find it had a device called (md/0) listed instead. After googling around a bit (see links below comment), I've come to the conclusion that this is how grub labels raid devices with metadata 1.x (e.g. 1.2, as opposed to 0.90). After changing the reference to (md/0), grub was able to find the disk. The kernel option stays "/dev/md0".

Finally, I had to fix the location of the kernel and initrd which are found in the /boot directory, as suggested by the second commenter. Once the kernel was found, grub was able to load initrd and no longer complained that the kernel had to be loaded first. Voila! Grub was able to boot linux....or at least try to. At this point I ran into the problem that my mdadm was configured to prohibit booting degraded arrays. I followed Steve M's advice and reconfigured my mdadm package to permit this (after which I reconfigured and installed grub on the partitions).

 I hope this information is useful. I'm not experienced with raid or grub. One final quick note about testing your array. If you unplug a drive and boot ok, then shutdown and reboot, you have to manually re-add the drive to the array using something like "mdadm /dev/md0 -a /dev/sda1". Beware that doing so requires a complete rebuild. So if it took you 3 hours to sync the two drives the first time (as it did me), expect to spend another 3 hours rebuilding every drive you test by unplugging.

For reference:

http://ubuntuforums.org/showthread.php?t=1681190

http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/maverick/grub2/maverick/view/head:/disk/raid.c#L584

http://www.techrepublic.com/blog/networking/testing-your-software-raid-be-prepared/387

From: Alan at: 2011-11-28 20:17:07

This got me off to a good start but there are two minor differences when on Ubuntu 11.10.  First, there's no "mdraid" in grub in Ubuntu 11.10, you need to use "mdraid1x".  Second, like ecellingsworth pointed out, you need to use "md/0" in the grub config file.

 This worked for my Mythbuntu 11.10 installation, which doesn't support RAID from the installer.  The "alternative" Ubuntu CD does, but Mythbuntu doesn't offer one of those. :)

From: BotoX at: 2012-06-15 15:05:15

If you try to use this on the testing deb (aka wheezy) with an updated grub, you need to use insmod mdraid1x instead of mdraid, or grub will fail to load the mdraid drivers and won't find your drives.

That just happened to me :/

From: RK at: 2012-09-14 11:09:05

Hi experts, I followed the setup for software RAID and everything went fine, but when I tried to copy the contents of /dev/sda1 and /dev/sda3 to /dev/md0 and /dev/md2 (which are mounted on /mnt/md0 and /mnt/md2) using cp -dpRx / /mnt/md2, I got the error cp: cannot stat '/home/user/.gvfs': Permission denied. I even tried with sudo and chmod, but it didn't work. Please reply with how to fix this error...

From: MrWaloo at: 2013-01-18 17:49:38

Thanks a lot for this tutorial it was really a good basis for me ;-)
I just wanted to show what a grub2 entry looks like for Debian testing/wheezy (up to date as of 01/18/2013):
menuentry 'Debian GNU/Linux, with Linux 3.2.0-4-amd64' --class debian --class gnu-linux --class gnu --class os {
	insmod gzio
	insmod raid
	insmod mdraid1x
	insmod part_msdos
	insmod part_msdos
	insmod ext2
	set root='(mduuid/e165b8a7ac19f29e8800e7b4f7fb3a5c)'
	search --no-floppy --fs-uuid --set=root 5871e838-3d53-47a6-8ec8-6edeb6998faf
	linux	/boot/vmlinuz-3.2.0-4-amd64 root=UUID=5871e838-3d53-47a6-8ec8-6edeb6998faf ro  quiet
	initrd	/boot/initrd.img-3.2.0-4-amd64
}
The mduuid can be found in "/dev/disk/by-id/md-uuid-*" (delete the ":" characters from the mduuid when using it in grub)
The UUID can be found in "/dev/disk/by-uuid/*"
 

In order to set up grub2, I used chroot as follows (the copy must already have been done):
mount -t proc none /mnt/md0/proc
mount -o bind /dev /mnt/md0/dev
mount -o bind /sys /mnt/md0/sys
chroot /mnt/md0
And then run the 4 commands in the chroot: "update-grub", "update-initramfs -u", "grub-install /dev/sda" and "grub-install /dev/sdb".
With this, grub should be correctly generated.

From: arcasys at: 2014-11-19 22:19:18

In wheezy, the following issues came up for me (the first two have already been reported; I list them for completeness):

  •  mdraid must be replaced with mdraid1x in /etc/grub.d/09_swraid1_setup
  • recordfail must be removed from this file and any other of the files in /etc/grub.d (not supported anymore)
  • /etc/mtab cannot be edited because it is now a symbolic link to /proc/mounts. To revert /etc/mtab to an editable file follow https://www.debian.org/releases/stable/i386/release-notes/ch-information.en.html#mtab
    and change the permissions.
  • grub-install --modules="raid mdraid1x"
    The modules option might be irrelevant (I haven't tested without it yet, so I cannot say if it made the difference, but it doesn't hurt).

From: HansMuc at: 2010-08-02 03:59:19

Great tutorial on how to set up RAID1.
In addition, that Grub2 stuff is the icing on the cake. Great work!

There are 2 steps which IMHO could be omitted:

 a) Modifying mtab (-> 5 Adjusting The System To RAID1) can be omitted.
Mtab is updated automagically by the  'mount' command.
When the computer is shut down, file systems are unmounted and mtab
is modified accordingly. After reboot, file systems are remounted and mtab
is updated again automagically by 'mount'.
( See http://en.wikipedia.org/wiki/Mtab )

Changing mtab by hand might even trigger problems,
if an application is checking mtab to find out which
file systems are really mounted.

 

b)  Modifying mdadm.conf  on /mnt/md2 (-> 7 Preparing /dev/sda)
isn't necessary.

We had modified mdadm.conf on the running system before under
 (-> 4 Creating Our RAID Arrays). Later we had copied that file system
over to /mnt/md2, so mdadm.conf on /dev/md2 is up to date.
Those ARRAY definitions found through 'cat /proc/mdstat' haven't changed;
otherwise we wouldn't have been able to boot using /dev/md0 and /dev/md1.

Enjoy!
HansMuc

 

From: Kristoffer at: 2010-10-18 07:50:54

Dear Falko,

Thank you very much for providing such a clear and easy to follow guide for setting up RAID1. I am a Linux novice, but ran into absolutely no problems following your guide. I only had to get a bit more help from a google search to learn more about comparing directories after the initial copy of data from my old system into the raid volume - just to make sure that everything had transferred correctly.

 I confirm also the findings of HansMuc that there are two steps which can safely be omitted.

Best regards,

Kristoffer

From: Anonymous at: 2011-01-16 11:06:12

I had a problem with initramfs which wasn't found at startup.

ubuntu 10.10  - 2.6.35.24-generic

I fixed this problem by adding the following in /etc/default/grub, under the last commented line:

GRUB_PRELOAD_MODULES="raid mdraid"

Restart

And it will work!

From: rpremuz at: 2011-02-26 10:45:10

Well done, Falko, for the tutorial. I was able to use it in my situation quite easily.

I second HansMuc's comments and also suggest another improvement:

In steps 3 and 7 the partition type ID can be changed to fd in a quicker way (I like putting a prompt in front of commands):

# sfdisk --change-id /dev/sdb 1 fd
# sfdisk --change-id /dev/sdb 2 fd

and

# sfdisk --change-id /dev/sda 1 fd
# sfdisk --change-id /dev/sda 2 fd

-- rpr.

From: Homer at: 2011-03-28 16:41:47

Hello,

Do not leave it like that: if you lose the sda drive, the computer will not start, because GRUB2 is not installed properly to boot from disk sdb...

"error: no such device..."


For me, the solution found on the net is:

#export LANG=C

#update-grub2 /dev/sdb

Tested !

After replacing a failed disk, are these commands needed before reboot?

Not tested.

Have a nice day.

From: Giuliastro at: 2012-02-22 11:14:27

Hello,

Thank you for your solution. RAID works, but unfortunately the system won't boot without the first drive (sda).

From: at: 2011-08-30 13:01:55

Thank you so much for writing such a succinct and complete tutorial.  It saved my bacon on an Ubuntu 11.04 Server install which absolutely refused to install grub when I tried to do RAID 1 during the install. 

From: stuck at: 2012-03-29 11:47:06

I'm not entirely sure why it went wrong, but I was attempting to mirror an existing partition. I had two drives exactly the same size, with exactly the same partitions. sdb1 had the data I was hoping to keep, sdc1 had garbage. I added the junk disk with this command:

 mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdc1

Then I erased it:

 mkfs.ext4 /dev/md0

 Then I added the drive with the data on it (formerly missing):

 mdadm --add /dev/md0 /dev/sdb1

 After it finished rebuilding, I ended up with a completely blank drive (it had copied the new ext4 overtop of the "missing" drive). Fortunately, I had a backup, but still curious what went wrong.

 ubuntu 10.10

 

From: Anonymous at: 2012-08-13 00:59:33

You ended up with a blank drive because you overwrote your drive with the new blank one. This tutorial does not work, it erases all your data. *shrug*

From: Anonymous at: 2014-07-04 17:21:06

No, he just did it the wrong way. The tutorial works just fine. With very few changes in recent distros. You just need to *understand* what you are doing, not just copy and paste what you do not understand.

From: chandpriyankara at: 2010-07-22 11:59:55

This is a great tutorial on RAID....

 we are looking into implementing other raid systems as well

cheers.

 

From: Anonymous at: 2010-12-13 21:51:22

This tutorial also works for Debian Squeeze; the only problem is with grub: delete recordfail and replace set root='(md0)' with set root='(md/0)'

From: Alexandre Gambini at: 2011-03-04 18:34:27

In my attempt at implementing RAID, the better choice was to change /etc/default/grub and uncomment the line GRUB_DISABLE_LINUX_UUID=true; grub then worked fine for me.

Thanks for the tutorial, it's a great job

From: at: 2011-08-15 12:08:57

Before failing a drive (testing) open a second terminal window to monitor mdstat. In that window run this command "watch cat /proc/mdstat", if it is rebuilding, you must let it finish or you might kill your project. You can also monitor, in real time, other actions like failing partitions, etc...

 A wonderful project, a wonderful way to learn linux. Thank you.

From: ecellingsworth at: 2011-11-09 03:53:43

This tutorial assumes you are issuing commands as root. If instead you are issuing commands as a less privileged user by using sudo, remember that you need to issue a separate sudo for both sfdisk commands in the piped command. Otherwise you will get a "permission denied" error.

sudo sfdisk -d /dev/sda | sudo sfdisk --force /dev/sdb

I used this tutorial months ago to get my raid array started. A drive failed and I returned to this page today to remember how to rebuild a new drive. Forgetting the sudo tripped me up for a while. Good tutorial. I'm glad I took the time to set up the raid array. It saved me this time.

From: MC at: 2012-12-04 13:38:33

I replaced a failing /dev/sda, and I put the old /dev/sdb in /dev/sda's place.

But it doesn't restart; it simply displays GRUB on boot.

Before shutting it down I did install GRUB on /dev/sdb

 I had to put the failing drive back in but it will probably fail soon.

 Any help? Maybe I have to flag it as bootable or do something in the BIOS?

 thanks!

From: jlinkels at: 2014-02-15 21:59:51

In this tutorial it is suggested that "failing" a device is a sufficient test to see whether an array is still operational and bootable.

The operational part is fine; the bootable part is not.

If you made a mistake or forgot to install the boot sector on both drives, the array will boot with a mdadm "failed" device, but it will not boot when a drive is disconnected, defective or gone.

So I strongly recommend that you actually disconnect one drive and see if the system boots. Then after resyncing, disconnect the other disk and try booting. 

Although failing and removing a device in mdadm is a good way to see if RAID is operational and can handle a disk failure during operation, it doesn't tell whether you correctly installed the boot loader. Often disks fail after a power cycle (as all hardware does...) and you don't want just to see a blinking cursor.

jlinkels


From: Bogdan STORM at: 2014-08-07 04:52:24

Thank you for putting all this information together for everyone.

Very helpful.