9 Testing
Now let's simulate a hard drive failure. It doesn't matter whether you pick /dev/sda or /dev/sdb here; in this example I assume that /dev/sdb has failed.
To simulate the failure, you can either shut down the system and physically remove /dev/sdb, or you can mark it as failed and remove it in software like this:
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md1 --fail /dev/sdb2
mdadm --manage /dev/md2 --fail /dev/sdb3
mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm --manage /dev/md1 --remove /dev/sdb2
mdadm --manage /dev/md2 --remove /dev/sdb3
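Before you shut down, you can optionally check that the arrays are now running without /dev/sdb:
cat /proc/mdstat
The sdb partitions should no longer be listed as members of md0, md1, and md2.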
Shut down the system:
shutdown -h now
Then put in a new /dev/sdb drive (if you simulated a failure of /dev/sda, you should now put /dev/sdb in /dev/sda's place and connect the new HDD as /dev/sdb!) and boot the system. It should still boot without problems.
Now run
cat /proc/mdstat
and you should see that the arrays are now degraded:
[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sda1[0]
104320 blocks [2/1] [U_]
md1 : active raid1 sda2[0]
513984 blocks [2/1] [U_]
md2 : active raid1 sda3[0]
4618560 blocks [2/1] [U_]
unused devices: <none>
[root@server1 ~]#
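If you want more detail about a single array than /proc/mdstat shows, you can also query it with mdadm (optional):
mdadm --detail /dev/md0
The State line in the output should report the array as degraded.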
The output of
fdisk -l
should look as follows:
[root@server1 ~]# fdisk -l
Disk /dev/sda: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0007b217
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 fd Linux raid autodetect
/dev/sda2 14 77 514080 fd Linux raid autodetect
/dev/sda3 78 652 4618687+ fd Linux raid autodetect
Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/md2: 4729 MB, 4729405440 bytes
2 heads, 4 sectors/track, 1154640 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000
Disk /dev/md2 doesn't contain a valid partition table
Disk /dev/md1: 526 MB, 526319616 bytes
2 heads, 4 sectors/track, 128496 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000
Disk /dev/md1 doesn't contain a valid partition table
Disk /dev/md0: 106 MB, 106823680 bytes
2 heads, 4 sectors/track, 26080 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000
Disk /dev/md0 doesn't contain a valid partition table
[root@server1 ~]#
Now we copy the partition table of /dev/sda to /dev/sdb:
sfdisk -d /dev/sda | sfdisk /dev/sdb
If you get an error, you can try the --force option:
sfdisk -d /dev/sda | sfdisk --force /dev/sdb
[root@server1 ~]# sfdisk -d /dev/sda | sfdisk /dev/sdb
Checking that no-one is using this disk right now ...
OK
Disk /dev/sdb: 652 cylinders, 255 heads, 63 sectors/track
sfdisk: ERROR: sector 0 does not have an msdos signature
/dev/sdb: unrecognized partition table type
Old situation:
No partitions found
New situation:
Units = sectors of 512 bytes, counting from 0
Device Boot Start End #sectors Id System
/dev/sdb1 * 63 208844 208782 fd Linux raid autodetect
/dev/sdb2 208845 1237004 1028160 fd Linux raid autodetect
/dev/sdb3 1237005 10474379 9237375 fd Linux raid autodetect
/dev/sdb4 0 - 0 0 Empty
Successfully wrote the new partition table
Re-reading the partition table ...
If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
[root@server1 ~]#
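To double-check that the copy worked, you can dump the new partition table and compare it with the one on /dev/sda (optional):
sfdisk -d /dev/sdb
The partition layout should now match the output of sfdisk -d /dev/sda.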
Afterwards we remove any remnants of a previous RAID array from /dev/sdb...
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2
mdadm --zero-superblock /dev/sdb3
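If you want to make sure the superblocks are really gone, you can examine each partition (optional):
mdadm --examine /dev/sdb1
mdadm should now report that no md superblock is detected on the partition.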
... and add the /dev/sdb partitions to the RAID arrays:
mdadm -a /dev/md0 /dev/sdb1
mdadm -a /dev/md1 /dev/sdb2
mdadm -a /dev/md2 /dev/sdb3
Now take a look at
cat /proc/mdstat
[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdb1[1] sda1[0]
104320 blocks [2/2] [UU]
md1 : active raid1 sdb2[1] sda2[0]
513984 blocks [2/2] [UU]
md2 : active raid1 sdb3[2] sda3[0]
4618560 blocks [2/1] [U_]
[===>.................] recovery = 15.4% (715584/4618560) finish=4.9min speed=13222K/sec
unused devices: <none>
[root@server1 ~]#
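Instead of running cat /proc/mdstat again and again, you can watch the rebuild continuously (press CTRL+C to quit; this assumes the watch utility is installed, which it is on most distributions):
watch cat /proc/mdstat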
Wait until the synchronization has finished:
[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdb1[1] sda1[0]
104320 blocks [2/2] [UU]
md1 : active raid1 sdb2[1] sda2[0]
513984 blocks [2/2] [UU]
md2 : active raid1 sdb3[1] sda3[0]
4618560 blocks [2/2] [UU]
unused devices: <none>
[root@server1 ~]#
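As an additional (optional) check, you can confirm that all three arrays report a clean state again, e.g. with a small shell loop:
for md in /dev/md0 /dev/md1 /dev/md2; do mdadm --detail $md | grep -i state; done
Each array should show a State of clean, without a degraded flag.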
Then run
grub
to enter the GRUB shell, and install the bootloader on both HDDs:
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
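In the GRUB shell, root (hd0,0) points GRUB at the partition that holds the /boot files (the first partition of the first disk), and setup (hd0) writes the bootloader to the MBR of that disk; the second pair of commands does the same for the second disk. If you prefer not to type the commands interactively, the legacy GRUB shell can also read them from stdin, e.g. (a sketch of the same four commands):
grub --batch <<EOF
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
EOF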
That's it. You've just replaced a failed hard drive in your RAID1 array.
10 Links
- The Software-RAID Howto: http://tldp.org/HOWTO/Software-RAID-HOWTO.html
- Fedora: http://fedoraproject.org