How To Set Up Software RAID1 On A Running System (Incl. GRUB2 Configuration) (Ubuntu 10.04) - Page 2

4 Creating Our RAID Arrays

Now let's create our RAID arrays /dev/md0, /dev/md1, and /dev/md2. /dev/sdb1 will be added to /dev/md0, /dev/sdb2 to /dev/md1, and /dev/sdb3 to /dev/md2. /dev/sda1, /dev/sda2, and /dev/sda3 can't be added right now (because the system is currently running on them), therefore we use the placeholder missing in the following three commands:

mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/sdb3

The command

cat /proc/mdstat

should now show that you have three degraded RAID arrays ([_U] or [U_] means that an array is degraded while [UU] means that the array is ok):

root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb3[1]
      4242368 blocks [2/1] [_U]

md1 : active raid1 sdb2[1]
      499648 blocks [2/1] [_U]

md0 : active raid1 sdb1[1]
      498624 blocks [2/1] [_U]

unused devices: <none>
root@server1:~#
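
If you want more detail on an individual array than /proc/mdstat provides, mdadm can print it; this is just an optional sanity check:

mdadm --detail /dev/md0

The State line should contain degraded, and the first device slot should be listed as removed, which is expected because we created the array with the missing placeholder.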

Next we create filesystems on our RAID arrays (ext4 on /dev/md0 and /dev/md2 and swap on /dev/md1):

mkfs.ext4 /dev/md0
mkswap /dev/md1
mkfs.ext4 /dev/md2
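
Optionally, you can confirm that the arrays now carry the expected filesystem signatures with blkid (the UUIDs in its output will of course differ on your system):

blkid /dev/md0 /dev/md1 /dev/md2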

Next we must adjust /etc/mdadm/mdadm.conf (which doesn't contain any information about our new RAID arrays yet) to the new situation:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
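
mdadm --examine --scan prints one ARRAY line for each array it finds, so you can also run it on its own at any time to see exactly what was appended:

mdadm --examine --scan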

Display the contents of the file:

cat /etc/mdadm/mdadm.conf

At the bottom of the file you should now see details about our three (degraded) RAID arrays:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Mon, 21 Jun 2010 13:21:00 +0200
# by mkconf $Id$
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=68686c40:b924278e:325ecf68:79913751
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=9719181e:3071f655:325ecf68:79913751
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=c3360f0f:7f3d47ec:325ecf68:79913751

 

5 Adjusting The System To RAID1

Now let's mount /dev/md0 and /dev/md2 (we don't need to mount the swap array /dev/md1):

mkdir /mnt/md0
mkdir /mnt/md2

mount /dev/md0 /mnt/md0
mount /dev/md2 /mnt/md2

You should now find both arrays in the output of

mount

root@server1:~# mount
/dev/sda3 on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
none on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
none on /dev type devtmpfs (rw,mode=0755)
none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
none on /dev/shm type tmpfs (rw,nosuid,nodev)
none on /var/run type tmpfs (rw,nosuid,mode=0755)
none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
none on /var/lib/ureadahead/debugfs type debugfs (rw,relatime)
/dev/sda1 on /boot type ext4 (rw)
/dev/md0 on /mnt/md0 type ext4 (rw)
/dev/md2 on /mnt/md2 type ext4 (rw)
root@server1:~#

Next we modify /etc/fstab. Comment out the current /, /boot, and swap entries and add new lines for them, replacing the UUID= device specifications with /dev/md0 (for the /boot partition), /dev/md1 (for the swap partition), and /dev/md2 (for the / partition), so that the file looks as follows:

vi /etc/fstab

# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    nodev,noexec,nosuid 0       0
# / was on /dev/sda3 during installation
#UUID=48d65bba-0f02-44b4-8557-b508309b1963 /               ext4    errors=remount-ro 0       1
/dev/md2 /               ext4    errors=remount-ro 0       1
# /boot was on /dev/sda1 during installation
#UUID=e3a677ee-2db0-4a8a-8d6c-94715c8cd90f /boot           ext4    defaults        0       2
/dev/md0 /boot           ext4    defaults        0       2
# swap was on /dev/sda2 during installation
#UUID=1e27f700-ec54-4de9-9428-c6d47d7921f4 none            swap    sw              0       0
/dev/md1 none            swap    sw              0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto,exec,utf8 0       0
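
If you want to test the new swap entry without rebooting, you can activate everything listed in /etc/fstab right away; the old swap on /dev/sda2 simply stays active alongside /dev/md1 until the reboot:

swapon -a
swapon -s

The second command lists the active swap areas, and /dev/md1 should now show up among them.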

Next replace /dev/sda1 with /dev/md0 and /dev/sda3 with /dev/md2 in /etc/mtab:

vi /etc/mtab

/dev/md2 / ext4 rw,errors=remount-ro 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
none /sys sysfs rw,noexec,nosuid,nodev 0 0
none /sys/fs/fuse/connections fusectl rw 0 0
none /sys/kernel/debug debugfs rw 0 0
none /sys/kernel/security securityfs rw 0 0
none /dev devtmpfs rw,mode=0755 0 0
none /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
none /dev/shm tmpfs rw,nosuid,nodev 0 0
none /var/run tmpfs rw,nosuid,mode=0755 0 0
none /var/lock tmpfs rw,noexec,nosuid,nodev 0 0
none /lib/init/rw tmpfs rw,nosuid,mode=0755 0 0
none /var/lib/ureadahead/debugfs debugfs rw,relatime 0 0
/dev/md0 /boot ext4 rw 0 0
/dev/md0 /mnt/md0 ext4 rw 0 0
/dev/md2 /mnt/md2 ext4 rw 0 0

Now on to the GRUB2 boot loader. Create the file /etc/grub.d/09_swraid1_setup as follows:

cp /etc/grub.d/40_custom /etc/grub.d/09_swraid1_setup
vi /etc/grub.d/09_swraid1_setup

#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.
menuentry 'Ubuntu, with Linux 2.6.32-21-server' --class ubuntu --class gnu-linux --class gnu --class os {
        recordfail
        insmod raid
        insmod mdraid
        insmod ext2
        set root='(md0)'
        linux   /vmlinuz-2.6.32-21-server root=/dev/md2 ro   quiet
        initrd  /initrd.img-2.6.32-21-server
}

Make sure you use the correct kernel version in the menuentry stanza (in the linux and initrd lines). You can find it out by running

uname -r

or by taking a look at the current menuentry stanzas in the ### BEGIN /etc/grub.d/10_linux ### section in /boot/grub/grub.cfg. Also make sure that you use root=/dev/md2 in the linux line.
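
Since on this GRUB2 version the menuentry lines sit at the top level of grub.cfg, a quick grep is enough to list them:

grep menuentry /boot/grub/grub.cfg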

The important part of our new menuentry stanza is the line set root='(md0)'. It makes sure that we boot from our RAID1 array /dev/md0 (which will hold the /boot partition) instead of /dev/sda or /dev/sdb. This matters if one of our hard drives fails: the system will still be able to boot.

Run

update-grub

to write our new kernel stanza from /etc/grub.d/09_swraid1_setup to /boot/grub/grub.cfg.
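
update-grub wraps the output of each /etc/grub.d script in ### BEGIN ... ### and ### END ... ### markers, so you can check that our stanza was picked up, for example with:

grep -A 10 09_swraid1_setup /boot/grub/grub.cfg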

Next we adjust our ramdisk to the new situation:

update-initramfs -u
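
If you want to make sure that mdadm actually made it into the new ramdisk, you can list the contents of the initramfs; this assumes the gzip-compressed format that Ubuntu 10.04 uses by default (adjust the kernel version as above):

zcat /boot/initrd.img-2.6.32-21-server | cpio -t | grep mdadm

You should see the mdadm binary and its configuration file in the listing.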

Now we copy the contents of /dev/sda1 and /dev/sda3 to /dev/md0 and /dev/md2 (which are mounted on /mnt/md0 and /mnt/md2):

cp -dpRx / /mnt/md2

cd /boot
cp -dpRx . /mnt/md0
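
Because /mnt/md0 should now be an exact copy of /boot, a recursive diff is a cheap way to double-check that copy (no output means the two trees match; a similar check against / is less useful because diff would descend into /proc and /sys):

diff -r /boot /mnt/md0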

 

6 Preparing GRUB2 (Part 1)

Afterwards we must make sure that the GRUB2 bootloader is installed on both hard drives, /dev/sda and /dev/sdb:

grub-install /dev/sda
grub-install /dev/sdb

Now we reboot the system and hope that it boots ok from our RAID arrays:

reboot


Comments

From: Bill at: 2012-09-02 05:56:08

asus-bill / # gfdisk -G
asus-bill / # sfdisk -d /dev/sda | sfdisk --force /dev/sdb
Checking that no-one is using this disk right now ...

WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util sfdisk doesn't support GPT. Use GNU Parted.

OK

Disk /dev/sdb: 121601 cylinders, 255 heads, 63 sectors/track
Old situation:
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sdb1          0+ 121601- 121602- 976762583+  ee  GPT
/dev/sdb2          0       -       0          0    0  Empty
/dev/sdb3          0       -       0          0    0  Empty
/dev/sdb4          0       -       0          0    0  Empty
New situation:
Units = sectors of 512 bytes, counting from 0

   Device Boot    Start       End   #sectors  Id  System
/dev/sdb1             1 1953525167 1953525167  ee  GPT
/dev/sdb2             0         -          0   0  Empty
/dev/sdb3             0         -          0   0  Empty
/dev/sdb4             0         -          0   0  Empty
Warning: partition 1 does not end at a cylinder boundary
Warning: no primary partition is marked bootable (active)
This does not matter for LILO, but the DOS MBR will not boot this disk.
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
asus-bill / # gfdisk -l

Disk /dev/sda: 1000 GB, 1000202273280 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System 
/dev/sda1               1        2835    22772106   83  Linux 
Warning: Partition 1 does not end on cylinder boundary.
/dev/sda2            2835      121601   953987895   83  Linux 
Warning: Partition 2 does not end on cylinder boundary.
Error: /dev/sdb: unrecognised disk label
asus-bill / # 
 
OK, so it doesn't work with a GPT drive? (sdb in this scenario is an unallocated new drive straight out of the box, a Seagate Barracuda 1TB with 64MB cache, identical to sda)
 

From: Hans at: 2013-02-27 22:34:32

Thank you for this excellent Tutorial!

Still, grub2 comes with surprises. When /dev/sda fails it might happen that the system runs into an endless loop of boot trials (not even showing the grub menu) because grub won't be able to find the files needed. I could solve the problem with 

grub-install --modules="raid" /dev/sdx
See http://ubuntuforums.org/showthread.php?p=12534060#post12534060:

1. Install grub on EACH of the array's disks and pass grub-install the option flag --modules="raid". Without --modules="raid" it will fail.
2. Rebuild your initramfs.

From: HST at: 2013-10-19 13:08:04

This was a great help to me in a slightly harder (?) problem, namely moving to RAID10 from RAID1 with only two disks.  I had to make a few appropriate minor changes for Debian, for raid10 and for an existing RAID in place.

Details on request.

From: Anonymous at: 2011-03-27 17:07:37

Hello,

If you have problems when rebooting, try:

menuentry 'Ubuntu, with Linux 2.6.32-21-server' --class ubuntu --class gnu-linux --class gnu --class os {
recordfail
insmod raid
insmod mdraid
insmod ext2
set root='(md0)'
linux /boot/vmlinuz-2.6.32-21-server root=/dev/md2 ro quiet
initrd /boot/initrd.img-2.6.32-21-server
}

It works better!

From: Steve M at: 2011-07-14 11:58:36

Great article, thanks, but I got totally stuck at the end of this page. My system wouldn't boot from the RAID1; I got grub errors: file not found, no such disk, you need to load the kernel first.

The problem turned out to be that the system was set not to boot a degraded array - the setting is in /etc/initramfs-tools/conf.d/mdadm.

The fix is to run dpkg-reconfigure mdadm and choose Yes when it asks about booting degraded arrays.  Suggest doing this before you run update-grub above.

 

From: ecellingsworth at: 2011-09-15 21:01:45

Let me first say that this is the only "how-to" guide I've been able to find that has up-to-date information for getting grub2 to work with mdraid. Thank you for that. I've been banging my head against the wall trying to get a boot manager installed on an array.

I had both of the problems described by the commenters above. First, after finishing the steps on this page and rebooting, grub threw a few errors: file not found, no such disk, you must load the kernel first. Through some trial and error I was able to determine the cause of each error.

For me, grub was unable to find the mdraid module, so the line "insmod mdraid" was returning the "no such file" error. I was able to remove this line without problems. I'm not sure of the difference between this module and the "raid" module, but it doesn't appear necessary (hopefully I don't find this to be untrue after a drive failure!).

Grub was unable to "set root = '(md0)'". I entered the command console to find it had a device called (md/0) listed instead. After googling around a bit (see the links below this comment), I've come to the conclusion that this is how grub labels raid devices with metadata 1.x (e.g. 1.2, as opposed to 0.90). After changing the reference to (md/0), grub was able to find the disk. The kernel option stays "/dev/md0".

Finally, I had to fix the location of the kernel and initrd, which are found in the /boot directory, as suggested by the second commenter. Once the kernel was found, grub was able to load initrd and no longer complained that the kernel had to be loaded first. Voila! Grub was able to boot linux... or at least try to. At this point I ran into the problem that my mdadm was configured to prohibit booting degraded arrays. I followed Steve M's advice and reconfigured my mdadm package to permit this (after which I reconfigured and installed grub on the partitions).

I hope this information is useful. I'm not experienced with raid or grub. One final quick note about testing your array: if you unplug a drive and boot OK, then shut down and reboot, you have to manually re-add the drive to the array using something like "mdadm /dev/md0 -a /dev/sda1". Beware that doing so requires a complete rebuild. So if it took you 3 hours to sync the two drives the first time (as it did for me), expect to spend another 3 hours rebuilding every drive you test by unplugging.

For reference:

http://ubuntuforums.org/showthread.php?t=1681190

http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/maverick/grub2/maverick/view/head:/disk/raid.c#L584

http://www.techrepublic.com/blog/networking/testing-your-software-raid-be-prepared/387

From: Alan at: 2011-11-28 20:17:07

This got me off to a good start, but there are two minor differences on Ubuntu 11.10. First, there's no "mdraid" in grub in Ubuntu 11.10; you need to use "mdraid1x". Second, like ecellingsworth pointed out, you need to use "md/0" in the grub config file.

 This worked for my Mythbuntu 11.10 installation, which doesn't support RAID from the installer.  The "alternative" Ubuntu CD does, but Mythbuntu doesn't offer one of those. :)

From: BotoX at: 2012-06-15 15:05:15

If you try to use this on Debian testing (aka wheezy) with updated grub, you need to use insmod mdraid1x instead of mdraid, or grub will fail to load the mdraid drivers and won't find your drives.

That just happened to me :/

From: RK at: 2012-09-14 11:09:05

Hi experts, I followed the setup for software RAID and everything went fine, but when I tried to copy the contents of /dev/sda1 and /dev/sda3 to /dev/md0 and /dev/md2 (which are mounted on /mnt/md0 and /mnt/md2) using cp -dpRx / /mnt/md2, it gave me an error saying cp: cannot stat '/home/user/.gvfs': permission denied. I even tried with sudo and chmod, but it didn't work. Please reply how to fix this error...

From: MrWaloo at: 2013-01-18 17:49:38

Thanks a lot for this tutorial it was really a good basis for me ;-)
I just wanted to show how a grub2 entry looks for Debian testing/wheezy (up to date on 01/18/2013):
menuentry 'Debian GNU/Linux, with Linux 3.2.0-4-amd64' --class debian --class gnu-linux --class gnu --class os {
	insmod gzio
	insmod raid
	insmod mdraid1x
	insmod part_msdos
	insmod part_msdos
	insmod ext2
	set root='(mduuid/e165b8a7ac19f29e8800e7b4f7fb3a5c)'
	search --no-floppy --fs-uuid --set=root 5871e838-3d53-47a6-8ec8-6edeb6998faf
	linux	/boot/vmlinuz-3.2.0-4-amd64 root=UUID=5871e838-3d53-47a6-8ec8-6edeb6998faf ro  quiet
	initrd	/boot/initrd.img-3.2.0-4-amd64
}
The mduuid can be found in "/dev/disk/by-id/md-uuid-*" (delete the ":" characters of the mduuid when using it in this file).
The UUID can be found in "/dev/disk/by-uuid/*".
 

In order to set up grub2, I used chroot as follows (the copy must already have been done):
mount -t proc none /mnt/md0/proc
mount -o bind /dev /mnt/md0/dev
mount -o bind /sys /mnt/md0/sys
chroot /mnt/md0
And then run the 4 commands in the chroot: "update-grub", "update-initramfs -u", "grub-install /dev/sda" and "grub-install /dev/sdb".
With this, grub should be correctly generated.

From: arcasys at: 2014-11-19 22:19:18

In wheezy, the following issues came up for me (the first two have already been reported, I list them for completeness)

  • mdraid must be replaced with mdraid1x in /etc/grub.d/09_swraid1_setup
  • recordfail must be removed from this file and any other files in /etc/grub.d (not supported anymore)
  • /etc/mtab cannot be edited because it is now a symbolic link to /proc/mounts. To revert /etc/mtab to an editable file follow https://www.debian.org/releases/stable/i386/release-notes/ch-information.en.html#mtab
    and change the permissions.
  • grub-install --modules="raid mdraid1x"
    The modules option might be irrelevant (I haven't tested without it yet, so I cannot say if it made the difference, but it doesn't hurt).

From: HansMuc at: 2010-08-02 03:59:19

Great tutorial how to setup RAID1.
In addition, that Grub2 stuff is the icing on the cake. Great work!

There are 2 steps which IMHO could be omitted:

a) Modifying mtab (-> 5 Adjusting The System To RAID1) can be omitted.
Mtab is updated automagically by the  'mount' command.
When the computer is shut down, file systems are unmounted and mtab
is modified accordingly. After reboot, file systems are remounted and mtab
is updated again automagically by 'mount'.
( See http://en.wikipedia.org/wiki/Mtab )

Changing mtab by hand might even trigger problems,
if an application is checking mtab to find out which
file systems are really mounted.

 

b) Modifying mdadm.conf on /mnt/md2 (-> 7 Preparing /dev/sda)
isn't necessary.

We had already modified mdadm.conf on the running system under
(-> 4 Creating Our RAID Arrays). Later we copied that file system
over to /mnt/md2, so the mdadm.conf on /dev/md2 is already up to date.
The ARRAY definitions found through 'cat /proc/mdstat' haven't changed;
otherwise we wouldn't have been able to boot using /dev/md0 and /dev/md1.

Enjoy!
HansMuc

 

From: Kristoffer at: 2010-10-18 07:50:54

Dear Falko,

Thank you very much for providing such a clear and easy to follow guide for setting up RAID1. I am a Linux novice, but ran into absolutely no problems following your guide. I only had to get a bit more help from a google search to learn more about comparing directories after the initial copy of data from my old system into the raid volume - just to make sure that everything had transferred correctly.

 I confirm also the findings of HansMuc that there are two steps which can safely be omitted.

Best regards,

Kristoffer

From: Anonymous at: 2011-01-16 11:06:12

I had a problem with the initramfs, which wasn't found at startup.

Ubuntu 10.10 - 2.6.35.24-generic

I fixed this problem by adding the following in /etc/default/grub, under the last commented line:

GRUB_PRELOAD_MODULES="raid mdraid"

Restart, and it will work!

From: rpremuz at: 2011-02-26 10:45:10

Well done, Falko, for the tutorial. I was able to use it in my situation quite easily.

I second HansMuc's comments and also suggest another improvement:

In steps 3 and 7 the partition type ID can be changed to fd in a quicker way (I like putting the prompt in front of commands):

# sfdisk --change-id /dev/sdb 1 fd
# sfdisk --change-id /dev/sdb 2 fd

and

# sfdisk --change-id /dev/sda 1 fd
# sfdisk --change-id /dev/sda 2 fd

-- rpr.

From: Homer at: 2011-03-28 16:41:47

Hello,

Do not leave it like that: if you lose the sda drive, the computer will not start, because GRUB2 is not installed properly to boot from disk sdb:

"error: no such device..."


For me, the solution I found on the net is:

#export LANG=C

#update-grub2 /dev/sdb

Tested!

After replacing a failed disk, are these commands needed before rebooting?

Not tested.

Have a nice day.

From: Giuliastro at: 2012-02-22 11:14:27

Hello,

Thank you for your solution. RAID works, but unfortunately the system won't boot without the first drive (sda).

From: at: 2011-08-30 13:01:55

Thank you so much for writing such a succinct and complete tutorial.  It saved my bacon on an Ubuntu 11.04 Server install which absolutely refused to install grub when I tried to do RAID 1 during the install. 

From: stuck at: 2012-03-29 11:47:06

I'm not entirely sure why it went wrong, but I was attempting to mirror an existing partition. I had two drives exactly the same size, with exactly the same partitions. sdb1 had the data I was hoping to keep, sdc1 had garbage. I added the junk disk with this command:

 mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdc1

Then I erased it:

 mkfs.ext4 /dev/md0

 Then I added the drive with the data on it (formerly missing):

 mdadm --add /dev/md0 /dev/sdb1

After it finished rebuilding, I ended up with a completely blank drive (it had copied the new ext4 over the top of the "missing" drive). Fortunately, I had a backup, but I'm still curious what went wrong.

 ubuntu 10.10

 

From: Anonymous at: 2012-08-13 00:59:33

You ended up with a blank drive because you overwrote your drive with the new blank one. This tutorial does not work; it erases all your data. *shrug*

From: Anonymous at: 2014-07-04 17:21:06

No, he just did it the wrong way. The tutorial works just fine, with very few changes in recent distros. You just need to *understand* what you are doing, not just copy and paste what you do not understand.

From: chandpriyankara at: 2010-07-22 11:59:55

This is a great tutorial on RAID....

We are looking at implementing other RAID setups as well.

cheers.

 

From: Anonymous at: 2010-12-13 21:51:22

This tutorial also works for Debian Squeeze. The only problem is with grub: delete recordfail and replace set root='(md0)' with set root='(md/0)'.

From: Alexandre Gambini at: 2011-03-04 18:34:27

In my attempt at implementing RAID, the better choice was to change /etc/default/grub and uncomment the line GRUB_DISABLE_LINUX_UUID=true; grub then worked fine for me.

Thanks for the tutorial, it is a great job.

From: at: 2011-08-15 12:08:57

Before failing a drive (for testing), open a second terminal window to monitor mdstat. In that window run the command "watch cat /proc/mdstat". If an array is rebuilding, you must let it finish or you might kill your project. You can also monitor, in real time, other actions like failing partitions, etc...

 A wonderful project, a wonderful way to learn linux. Thank you.

From: ecellingsworth at: 2011-11-09 03:53:43

This tutorial assumes you are issuing commands as root. If instead you are issuing commands as a less privileged user via sudo, remember that you need a separate sudo for both sfdisk commands in the piped command; otherwise you will get a "permission denied" error.

sudo sfdisk -d /dev/sda | sudo sfdisk --force /dev/sdb

I used this tutorial months ago to get my raid array started. A drive failed and I returned to this page today to remember how to rebuild a new drive. Forgetting the sudo tripped me up for a while. Good tutorial. I'm glad I took the time to set up the raid array. It saved me this time.

From: MC at: 2012-12-04 13:38:33

I replaced a failing /dev/sda and put the old /dev/sdb in /dev/sda's place.

But it doesn't restart; it simply displays GRUB on boot.

Before shutting it down I did install GRUB on /dev/sdb.

I had to put the failing drive back in, but it will probably fail soon.

Any help? Maybe I have to flag it as bootable or do something in the BIOS?

Thanks!

From: jlinkels at: 2014-02-15 21:59:51

In this tutorial, "failing" a device is presented as a sufficient test to see whether an array is still operational or bootable.

The operational part is fine; the bootable part is not.

If you made a mistake or forgot to install the boot sector on both drives, the array will boot with an mdadm "failed" device, but it will not boot when a drive is disconnected, defective or gone.

So I strongly recommend that you actually disconnect one drive and see if the system boots. Then after resyncing, disconnect the other disk and try booting. 

Although failing and removing a device in mdadm is a good way to see if RAID is operational and can handle a disk failure during operation, it doesn't tell you whether you correctly installed the boot loader. Disks often fail after a power cycle (as all hardware does...), and you don't want to be left with just a blinking cursor.

jlinkels


From: Bogdan STORM at: 2014-08-07 04:52:24

Thank you for putting all this information together for everyone.

Very helpful.