How To Set Up Software RAID1 On A Running System (Incl. GRUB Configuration) (Debian Etch) - Page 4

9 Testing

Now let's simulate a hard drive failure. It doesn't matter if you select /dev/sda or /dev/sdb here. In this example I assume that /dev/sdb has failed.

To simulate the hard drive failure, you can either shut down the system and physically remove /dev/sdb, or you can (soft-)remove it like this:

mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md1 --fail /dev/sdb2
mdadm --manage /dev/md2 --fail /dev/sdb3

mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm --manage /dev/md1 --remove /dev/sdb2
mdadm --manage /dev/md2 --remove /dev/sdb3
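
Before you shut the system down, you can check that /dev/sdb is really gone from all three arrays (an optional sanity check; the device names match this example):

mdadm --detail /dev/md0
mdadm --detail /dev/md1
mdadm --detail /dev/md2

Each array should now list only one active device.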

Shut down the system:

shutdown -h now

Then put in a new /dev/sdb drive (if you simulated a failure of /dev/sda, you should now put /dev/sdb in /dev/sda's place and connect the new HDD as /dev/sdb!) and boot the system. It should still start without problems.

Now run

cat /proc/mdstat

and you should see that we have a degraded array ([U_] means that the first device in the array is up while the second one has failed or is missing; [UU] would mean that both devices are up):

server1:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda3[0]
      4594496 blocks [2/1] [U_]

md1 : active raid1 sda2[0]
      497920 blocks [2/1] [U_]

md0 : active raid1 sda1[0]
      144448 blocks [2/1] [U_]

unused devices: <none>
server1:~#

The output of

fdisk -l

should look as follows:

server1:~# fdisk -l

Disk /dev/sda: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          18      144553+  fd  Linux raid autodetect
/dev/sda2              19          80      498015   fd  Linux raid autodetect
/dev/sda3              81         652     4594590   fd  Linux raid autodetect

Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/md0: 147 MB, 147914752 bytes
2 heads, 4 sectors/track, 36112 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table

Disk /dev/md1: 509 MB, 509870080 bytes
2 heads, 4 sectors/track, 124480 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md2: 4704 MB, 4704763904 bytes
2 heads, 4 sectors/track, 1148624 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md2 doesn't contain a valid partition table
server1:~#

The lines saying that /dev/md0, /dev/md1, and /dev/md2 don't contain a valid partition table are normal - the RAID devices hold the filesystems directly, so there is no partition table for fdisk to find. What matters here is that /dev/sdb is empty. Now we copy the partition table of /dev/sda to /dev/sdb:

sfdisk -d /dev/sda | sfdisk /dev/sdb

(If you get an error, you can try the --force option:

sfdisk -d /dev/sda | sfdisk --force /dev/sdb

)

server1:~# sfdisk -d /dev/sda | sfdisk /dev/sdb
Checking that no-one is using this disk right now ...
OK

Disk /dev/sdb: 652 cylinders, 255 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
 /dev/sdb: unrecognized partition table type
Old situation:
No partitions found
New situation:
Units = sectors of 512 bytes, counting from 0

   Device Boot    Start       End   #sectors  Id  System
/dev/sdb1   *        63    289169     289107  fd  Linux raid autodetect
/dev/sdb2        289170   1285199     996030  fd  Linux raid autodetect
/dev/sdb3       1285200  10474379    9189180  fd  Linux raid autodetect
/dev/sdb4             0         -          0   0  Empty
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
server1:~#
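
If you want to verify that the partition table really made it onto the new drive (just an optional check), run

fdisk -l /dev/sdb

again - it should now show the same three Linux raid autodetect partitions as /dev/sda.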

Afterwards we remove any remains of a previous RAID array from /dev/sdb...

mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2
mdadm --zero-superblock /dev/sdb3
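
If you want to make sure the old metadata is really gone (optional), run

mdadm --examine /dev/sdb1

for each of the three partitions - it should report that no md superblock was detected.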

... and add /dev/sdb to the RAID array:

mdadm -a /dev/md0 /dev/sdb1
mdadm -a /dev/md1 /dev/sdb2
mdadm -a /dev/md2 /dev/sdb3

Now take a look at

cat /proc/mdstat

server1:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb3[2] sda3[0]
      4594496 blocks [2/1] [U_]
      [======>..............]  recovery = 30.8% (1416256/4594496) finish=0.6min speed=83309K/sec

md1 : active raid1 sdb2[1] sda2[0]
      497920 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
      144448 blocks [2/2] [UU]

unused devices: <none>
server1:~#
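
While the rebuild is running, you can also let the output refresh automatically instead of re-running the command (purely optional):

watch -n 2 cat /proc/mdstat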

Wait until the synchronization has finished:

server1:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb3[1] sda3[0]
      4594496 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
      497920 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
      144448 blocks [2/2] [UU]

unused devices: <none>
server1:~#

Then run

grub

and install the bootloader on both HDDs (root (hdX,0) selects the /boot partition on the respective disk, and setup (hdX) writes GRUB to that disk's MBR):

root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
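
(Several readers note in the comments below that the same result can also be achieved non-interactively with grub-install:

grub-install /dev/sda
grub-install /dev/sdb

)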

That's it. You've just replaced a failed hard drive in your RAID1 array.

 

10 Links


Comments

From: at: 2008-02-28 20:41:03

Thanks for a great howto! It was a life saver, as I have never set up RAID before, let alone on a running system. The install did not go perfectly however and so I thought I might share my notes and a couple of suggestions. Luckily I did a backup of the entire system before beginning, so I was able to restore the system and begin again after I could not get the system to boot off the RAID array. N.B. I did the install on a Debian testing system (lenny/amd64), but I've checked that everything applies to etch as well.

1. If the disks are not brand new, mdadm will detect the previous filesystem when creating the array and ask if you want to continue. Answer, 'yes'. I also got a segfault error from mdadm and a warning that the disk was 'dirty'. The warning could probably be avoided by zeroing the entire disk with dd. Despite the error and warning everything worked as it should.

2. Instead of:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

do:

mv /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
/usr/share/mdadm/mkconf >> /etc/mdadm/mdadm.conf

This will create a proper mdadm.conf and remove the control file in /var/lib/mdadm (if arrays are found). If you do not remove the control file, you should get a warning message when updating the initrd images.

3. When editing GRUB's menu.lst, I followed the advice in the comments and put the stanza for the RAID array before the '### BEGIN AUTOMAGIC KERNELS LIST' line. If you put your custom stanzas inside the AUTOMAGIC area, they will be overwritten during the next kernel upgrade.

Instead of:

update-initramfs -u

I had to do:

dpkg-reconfigure mdadm

When asked to specify the arrays needed for the root filesystem, I answered with the appropriate devices (in my case only /dev/md0) instead of selecting the default, 'all'. Otherwise I kept to the default answers. After the initrd images had been created, I updated GRUB:

update-grub

4. Instead of using the GRUB shell, I used grub-install to install the boot loader on the hard drives:

grub-install /dev/sda
grub-install /dev/sdb

5. After having added both disks to the arrays, it was time to update the initrd again. First I executed:

dpkg-reconfigure mdadm

and was informed that the initrd would not be updated, because it was a custom image. The configure script informed me that I could force the update by running 'update-initramfs' with the '-t' option, so that is what I did:

update-initramfs -u -t

6. Every time you update the initrd image, you also have to re-install GRUB in the MBRs:

grub-install /dev/sda
grub-install /dev/sdb

Otherwise the system will not boot and you will be thrown into the GRUB shell.

Other notes: It is normal for 'fdisk -l' to report stuff like 'Disk /dev/md0 doesn't contain a valid partition table'. This is because fdisk cannot read md arrays correctly.

If you forget to re-install GRUB in the MBRs after updating your initrd and get the GRUB shell on reboot, do the following: Boot from a Debian Installer CD (full or netinst) of the same architecture as your install (so if you're running amd64, it has to be an amd64 CD). Boot the CD in 'rescue' mode. After networking has been set up and the disks have been detected, press CTRL+ALT+F2, followed by Enter, to get a prompt. Execute the following commands (md0=/boot and md2=/):

mkdir /mnt/mydisk
mount /dev/md2 /mnt/mydisk
mount /dev/md0 /mnt/mydisk/boot
mount -t proc none /mnt/mydisk/proc
mount -o bind /dev /mnt/mydisk/dev
chroot /mnt/mydisk
grub-install /dev/sda
grub-install /dev/sdb
exit
umount /mnt/mydisk/proc
umount /mnt/mydisk/dev
umount /mnt/mydisk/boot
umount /mnt/mydisk
reboot

From: at: 2008-07-09 10:33:59

Thank you so much for this detailed howto.  It saved me a lot of pain and worked perfectly on Ubuntu 8.04.1 LTS.

From: nochids at: 2008-12-01 02:46:10

I am relatively new to linux and am completely dependent on these tutorials.  I bought a server and installed Suse 10.3.  After running Ubuntu on my desktop and laptop, I decided to change the server to run Ubuntu as well (I didn't uninstall Suse - just installed over it??).  After installing the server based on "The Perfect Ubuntu Server 8.04" (http://www.howtoforge.com/perfect-server-ubuntu8.04-lts) I installed the ISPConfig as detailed at the end.  Then to install RAID, I followed the tutorial perfectly I think,  but at the end of step 6 after rebooting, I still show sda1 rather than md0.

 root@costarica:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/costarica-root
                      228G  1.4G  216G   1% /
varrun                502M  108K  502M   1% /var/run
varlock               502M     0  502M   0% /var/lock
udev                  502M   76K  502M   1% /dev
devshm                502M     0  502M   0% /dev/shm
/dev/sda1             236M   26M  198M  12% /boot
root@costarica:~#

 vi /etc/fstab shows the following:

# /etc/fstab: static file system information.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
# /dev/mapper/costarica-root
UUID=2e3442d3-c650-480a-a923-4775de238b7f /               ext3    relatime,errors=remount-ro,usrquota,grpquota 0       1
# /dev/md0
UUID=251a68c2-1497-433b-b415-d49ca8f2125e /boot           ext3    relatime        0       2
# /dev/mapper/costarica-swap_1
UUID=03c6c32e-38bd-4707-9df5-dcdd3049825a none            swap    sw              0       0
/dev/scd0       /media/cdrom0   udf,iso9660 user,noauto,exec,utf8 0       0

 

It looks different than the one in the tutorial but I attribute that to being ubuntu rather than debian.

 

Can anyone shed some light on this for me?  Did I miss a step or are there other steps involved because of Ubuntu?

 

Thanks for any help.

 

Jason.

 

From: Anonymous at: 2009-01-03 19:00:56

Dear all,

 this howto just worked for me flawlessly for my brand-new Debian Lenny (testing) today (03-Jan-2009) !!!

No issues, no problems at all. I had several different partitions, even extended ones; I only had to keep track on paper of which partition goes into which numbered array - that's it ;-)

(And my boot partition wasn't /boot but simply /, I did everything accordingly - flawless!!!)

 

THANK YOU VERY MUCH for this HowTo, I've NEVER EVER raid-ed before and it's a success :)

md0 = /

md1 = swap

md2 = /home

This all on an Abit NF7-S2, BIOS-Raid OFF, 2 x SATA2 Samsung 320G, Sempron 2800+, 2x512 DDR400 ;-)

 

 lol:~# df -m
Filesystem           1M-blocks      Used Available Use% Mounted on
/dev/md0                 18778      2312     15512  13% /
tmpfs                      506         0       506   0% /lib/init/rw
udev                        10         1        10   2% /dev
tmpfs                      506         0       506   0% /dev/shm
/dev/md2                280732       192    266281   1% /home
lol:~#


Cheers from Europe !

 

From: Anonymous at: 2009-05-23 15:26:29

Great tutorial,

my compliments

From: Johan Boulé at: 2009-07-18 01:10:19

I wonder, what's the point in having the swap on a RAID1? Wouldn't it be better to add /dev/sda2 and /dev/sdb2 directly as two separate swap devices?

From: Froi at: 2009-08-31 19:25:30

Can I apply this How-to to my PPC Debian Etch? Thanks!

From: nord at: 2010-07-18 18:06:17

Nice howto!  
I would like to correct some minor errors though:

Use ext2 on /boot instead of ext3... Ext3 on /boot is just a waste of space and resources. You don't need journaling for your boot partition :)

And why make a RAID array for swap? Swap will stripe data like RAID0 anyway... just tell Linux to swap to two different physical disks and voila. Striping made easy :p

Happy raiding :p

(If you suddenly need a lot of swap space, you can use the "swapon" command to swap to memory sticks or whatever you need; unlike changing fstab, swapon will get reset on reboot) ;)

From: Andy Beverley at: 2010-12-04 16:43:52

I spent hours trying to work out not only how to set up a software RAID, but also how to do it on a boot partition. I didn't even come close to looking at a live system. I got nowhere until I found this HOWTO which does it all very well. Thank you!

Andy

From: Anonymous at: 2012-01-17 12:04:08

It works just perfectly with ubuntu 8.04

Thanks for the brilliant how-to

From: Alex Dekker at: 2012-10-07 09:08:58

You might like to put a link somewhere in this howto to your newer howto detailing the install with Grub2. I spent some time following this howto and tripping up on Grub2 and doing lots of googling, before finally realising that what I thought were google hits on your existing howto were actually pointing to a separate but very similarly named howto, that covers Grub2!

From: at: 2009-12-16 05:47:58

Falko, thank you, this is a wonderful HOWTO, I've used it for two servers now. On the second one, the reboot at the end of this page failed with a GRUB error:

Booting Debian ..
root (hd1,0)
Filesystem type is .. partition type.. kernel (all as expected)

Error 2: Bad file or directory type

At this point I was very glad I could still boot from the old non-raid partitions (phew!)

A bit of reading turned up this explanation on fedoraforum

Sure enough, tune2fs -l showed the old sda1 had 128 byte inodes, while sdb1/md0 had 256 byte inodes. I had the choice of upgrading grub or re-making md0's filesystem with smaller inodes.

I decided the smaller inodes were safer (I like to mess with aptitude as little as possible). I re-ran the instructions with this mkfs command instead, and it's all good now.

mkfs.ext3 -I 128 /dev/md0

This will not be needed when grub is updated to a version that can read fs's with 256-byte inodes.

From: wayan at: 2011-04-20 06:22:47

Step 5 Adjusting The System To RAID1

Don't edit /etc/fstab and /etc/mtab -

edit only the files /mnt/md2/etc/fstab and /mnt/md2/etc/mtab.

Sometimes Linux fails to boot from /dev/md2 after the reboot; if you leave the originals untouched, you can still boot normally into the original Linux configuration after such a failure.

From: Rik Bignell at: 2009-04-27 15:55:26

Thx for this.  Successfully used your guide to set up Jaunty 9.04 with RAID5.

Points to note: RAID5 will NOT work when the boot partition is on RAID5.  For example, if you have:

md0 = swap

md1 = root (boot within root)

Then you will not be able to write your grub properly to each drive due to raid5 not having separate copies of files on each disc.  Grub boots at disk level and not at software raid level it seems.

My work around was to have boot separate. I chose:

md0=swap (3x drives within raid5, sda1, sdb1, sdc1)

md1=boot (2x drives within raid1, sda2, sdb2) - the 3rd drive is not needed unless 2 drives fail at once, and because the drives are mirrored completely you are able to write grub to them.

md2=root (3x drives within raid5, sda3, sdb3, sdc3)

I'll be writing my own guide for RAID1 and RAID5 so you can see the difference in commands, but will reference this guide a lot as it helped me the most out of all the Ubuntu RAID guides I found on Google.

 

Watch http://www.richardbignell.co.uk/ for new guides.

From: at: 2008-06-03 13:12:17

Hello,

This instruction looks very useful; however, could someone please adapt it to the default and recommended HDD setup of Debian (a single partition)?

From: Anonymous at: 2009-02-23 17:38:38

The article shows writing out a modified partition table, getting the message:

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.

and then, without rebooting, trying to write to the new partitions (running "mdadm --add ...").

Doing that is extremely dangerous to any data on that disk - and even if there is no data, it means mdadm might be initializing something other than what you meant: the kernel's old view of partition N instead of your new partition N.

 

From: Lars at: 2009-07-21 17:46:09

 QUOTE:

... and reboot the system:
reboot
It should boot without problems.

 

Not quite... you need to do the GRUB part from page 2 again to make this work - I just got stuck at a 'GRUB' prompt after the reboot - it can be fixed with a rescue system and a new GRUB setup on the HDs.

Otherwise the howto works just fine - thank you!

 -Lars

From: Anonymous at: 2009-11-30 12:27:33

I followed this guide to set up RAID1 (a mirror of an existing disc to another) for a separate disc containing VMware virtual disc files.

 I hope I won't lose any data, but that's a risk I had to take. Right now it's synchronising the VMware disc with the hard disc containing no data... at this point, I can't access the hard disc containing the VMware files - so I have my fingers crossed :-)

 I'll post an update as soon as the synchronisation is complete; so far it's only 18% complete.

 I would recommend everyone who is using this guide to synchronise data between two discs to unmount EVERY disc that you're making changes to BEFORE making any changes at all. If you somehow fail to do so, it can lead to serious data loss. A point that I think this guide failed to mention.

 Besides that, thank you very much for sharing your knowledge!

 - Simon Sessingø
Denmark

From: Anonymous at: 2009-11-08 01:26:37

THANK YOU for this wonderful howto. I managed to get RAID set up on Debian Lenny with no changes to your instructions.

From: Singapore website design at: 2009-10-25 10:57:54

Hi thanks for writing this guide. I managed to setup my servers software raid successfully using this guide. Been using hardware raid all along. Thanks

From: Ben at: 2010-03-05 11:05:01

Great tutorial, worked perfectly for me in Debian Lenny, substituting sda and sdb with hda and hdd, and a few extra partitions ... thanks for posting. :)

From: Juan De Stefano at: 2010-03-05 06:20:07

Thank you for this excellent guideline. I followed it on Ubuntu 9.10. The only thing different is setting up GRUB2. You're not supposed to edit grub.cfg (formerly menu.lst) directly, but I did, to change the root device. Then I mounted /dev/md2 on /mnt/md2 and /dev/md0 on /mnt/md2/boot. I mounted sys, proc and dev as well to make the chroot work. Later I did dpkg-reconfigure grub-pc and selected both disks to install GRUB in the MBR. Everything worked the first time I tried.

Thanks again

/ Juan

From: Anonymous at: 2010-04-14 20:29:37

I just did this for Ubuntu 9.10 as well.  This procedure really needs to be updated for GRUB2, which in and of itself is an exercise in tedium.  GRUB2 is slightly smarter and seemed to auto-configure a few of the drive details here and there.  However, there were some major departures from this procedure.

You don't need to (and should not) modify grub.cfg directly.  Instead, I created a custom grub config file, /etc/grub.d/06_custom, which would contain my RAID entries and put them above the other grub boot options during the "degraded" sections of the installation.  There are a few tricks to formatting the custom file correctly: there is some "EOF" craziness, and also you should be using UUIDs, so you have to make sure you get the right UUIDs instead of using /dev/sd[XX] notation.  In the end, my 06_custom looked like:

#! /bin/sh -e
echo "Adding RAID boot options" >&2
cat << EOF
menuentry "Ubuntu, Linux 2.6.31-20-generic RAID (hd1)" 
{
        recordfail=1
        if [ -n ${have_grubenv} ]; then save_env recordfail; fi
set quiet=1
insmod ext2
set root=(hd1,0)
search --no-floppy --fs-uuid --set b79ba888-2180-4c7a-b744-2c4fa99a5872
linux /boot/vmlinuz-2.6.31-20-generic root=UUID=b79ba888-2180-4c7a-b744-2c4fa99a5872 ro   quiet splash
initrd /boot/initrd.img-2.6.31-20-generic
}

menuentry "Ubuntu, Linux 2.6.31-20-generic RAID (hd0)" 
{
        recordfail=1
        if [ -n ${have_grubenv} ]; then save_env recordfail; fi
set quiet=1
insmod ext2
set root=(hd0,0)
search --no-floppy --fs-uuid --set b79ba888-2180-4c7a-b744-2c4fa99a5872
linux /boot/vmlinuz-2.6.31-20-generic root=UUID=b79ba888-2180-4c7a-b744-2c4fa99a5872 ro   quiet splash
initrd /boot/initrd.img-2.6.31-20-generic
}

EOF

Also, you have to figure out which pieces of 10_linux to comment out to get rid of the non-RAID boot options; for that:
  #linux_entry "${OS}, Linux ${version}" \
  #    "${GRUB_CMDLINE_LINUX} ${GRUB_CMDLINE_EXTRA} ${GRUB_CMDLINE_LINUX_DEFAULT}" \
  #    quiet
  #if [ "x${GRUB_DISABLE_LINUX_RECOVERY}" != "xtrue" ]; then
  #  linux_entry "${OS}, Linux ${version} (recovery mode)" \
  # "single ${GRUB_CMDLINE_LINUX}"
  #fi

Overall, this was the best non-RAID -> RAID migration how-to I could find.  Thanks very much for putting this out there.

From: Alex Dekker at: 2012-10-07 09:11:38
From: Vlad P at: 2010-04-01 02:42:04

I had already set up my RAID 1 before hitting your tutorial, but this reading made me understand everything better - much better! Thank you very much!

From: Cristian at: 2010-05-27 14:14:05

This guide is awesome; it is just about all you need to transform a usual single-SATA-disk setup into RAID1 if you follow all the instructions.

 Thanks again ... thanks, thanks. You saved me from several days of work reconfiguring a server.

From: Rory at: 2011-07-23 12:27:36

Thank you for this perfect tutorial.

 It works perfectly even for Ubuntu. Had to mess with GRUB2 instead, but aside from that, it's brilliant. Used it on three machines without a glitch.

From: at: 2008-01-19 19:37:09

In general, I replaced the disk IDs of Ubuntu Gutsy with devices and it is working great. I'm writing from my Gutsy desktop.

A few weeks ago I lost my /home partition. As a consultant, I also work from home, so I don't have time for backups; therefore I think a RAID1 is a good solution.

First, I used Debian Etch, but it doesn't easily support my ATI Radeon 9200 video card, and it caused problems with VMware.

I redid the whole process but for Ubuntu Gutsy Gibbon 7.10, replacing the disk IDs with devices. Also, for mnemonic reasons and easy recovery, I used md1 (boot), md2 (swap) and md5 (root).

 

From: Jairzhino Bolivar at: 2009-02-22 19:41:32

Well,

jair 

I just wanted to thank the team/person that put this tutorial together; this is a very valuable tutorial.  I followed it using Debian 5.0 (Lenny) and everything works very nicely.  I could not enjoy it more, watching the syncing process and then testing that the system still boots even after I removed the drive (hd1). This really allows everybody to protect their data from hard drive failures. Thank you!!!  Sooo!  Much!!

I noticed that when you run the command to install mdadm, "citadel" gets installed as well. Is there a way I can run apt-get install mdadm skipping "citadel"?

 Again, this is great and very simple.  I am using the same tutorial to create two more disks in RAID1 for array storage.

This is just cool!!! 

From: L. RIchter at: 2010-01-27 22:01:12

This Howto really worked out of the box. This was my first RAID installation using Debian stable 5.03 and after wasting my time with the installer to set up a RAID this worked straight without any complaints. Really a good job, well done,

Lothar

 

From: Franck78 at: 2010-03-25 18:28:36

Just follow the steps, adjusting the number of 'md' devices, partitions and disks.

Maybe add more details about the swap partition on each disk.

Is it useful or useless to have an md made of swaps?

use:

mkswap /dev/mdX

swapon -a

swapon -s

 

Bye

From: Anonymous at: 2010-08-31 17:34:56

Excellent tutorial. Thank you.

From: Leo Matteo at: 2013-05-01 22:35:20

A very clear "manual" to set up a RAID1 in a running system.

I will try it in a while. I hope all will run well. (Whether yes or no, I will comment the results anyway.)

Thank you (from Uruguay).

 

From: Max at: 2013-09-14 05:26:00

Aren't md0, md1 and md2 supposed to be operational after disk failure? Contents of /proc/mdstat suggest that raid1 is still running with one disc but subsequent call to fdisk shows that there are no valid partitions on md0, md1 and md2.

Might it be copy-paste error?

Otherwise very good tutorial.

From: bob143 at: 2015-01-20 07:25:33

I did manage to lose all my existing data following this. I was not doing this with a root partition so I had no issues with partitions being in use and I specified both disks in the create command  rather than the "missing" placeholder - maybe that was my problem.