How To Set Up Software RAID1 On A Running LVM System (Incl. GRUB Configuration) (CentOS 5.3) - Page 4


8 Testing

Now let's simulate a hard drive failure. It doesn't matter whether you select /dev/sda or /dev/sdb here. In this example I assume that /dev/sdb has failed.

To simulate the hard drive failure, you can either shut down the system and physically remove /dev/sdb, or you can (soft-)remove it like this:

mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md1 --fail /dev/sdb2

mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm --manage /dev/md1 --remove /dev/sdb2
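
If you want to verify that both partitions are really gone from the arrays before you shut down, you can query the arrays with mdadm's --detail switch (an optional check, not part of the procedure itself):

mdadm --detail /dev/md0
mdadm --detail /dev/md1

Both arrays should now report a degraded state, with only the /dev/sda partitions listed as active.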

Shut down the system:

shutdown -h now

Then put in a new /dev/sdb drive (if you simulated a failure of /dev/sda, you should now put /dev/sdb in /dev/sda's place and connect the new HDD as /dev/sdb!) and boot the system. It should still start without problems.

Now run

cat /proc/mdstat

and you should see that we have a degraded array:

[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0]
      104320 blocks [2/1] [U_]

md1 : active raid1 sda2[0]
      10377920 blocks [2/1] [U_]

unused devices: <none>
[root@server1 ~]#

The output of

fdisk -l

should look as follows:

[root@server1 ~]# fdisk -l

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   fd  Linux raid autodetect
/dev/sda2              14        1305    10377990   fd  Linux raid autodetect

Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/md1: 10.6 GB, 10626990080 bytes
2 heads, 4 sectors/track, 2594480 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md0: 106 MB, 106823680 bytes
2 heads, 4 sectors/track, 26080 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table
[root@server1 ~]#

Now we copy the partition table of /dev/sda to /dev/sdb:

sfdisk -d /dev/sda | sfdisk /dev/sdb

(If you get an error, you can try the --force option:

sfdisk -d /dev/sda | sfdisk --force /dev/sdb

)

[root@server1 ~]# sfdisk -d /dev/sda | sfdisk /dev/sdb
Checking that no-one is using this disk right now ...
OK

Disk /dev/sdb: 1305 cylinders, 255 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
 /dev/sdb: unrecognized partition table type
Old situation:
No partitions found
New situation:
Units = sectors of 512 bytes, counting from 0

   Device Boot    Start       End   #sectors  Id  System
/dev/sdb1   *        63    208844     208782  fd  Linux raid autodetect
/dev/sdb2        208845  20964824   20755980  fd  Linux raid autodetect
/dev/sdb3             0         -          0   0  Empty
/dev/sdb4             0         -          0   0  Empty
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
[root@server1 ~]#
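
Before touching the RAID superblocks, you can make sure that the partition layout on /dev/sdb now really matches /dev/sda by dumping both tables again (an optional check):

sfdisk -d /dev/sda
sfdisk -d /dev/sdb

Apart from the device names, the partition entries should be identical.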

Afterwards we remove any remains of a previous RAID array from /dev/sdb...

mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2

... and add /dev/sdb to the RAID array:

mdadm -a /dev/md0 /dev/sdb1
mdadm -a /dev/md1 /dev/sdb2

Now take a look at

cat /proc/mdstat

[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]

md1 : active raid1 sdb2[2] sda2[0]
      10377920 blocks [2/1] [U_]
      [======>..............]  recovery = 32.3% (3360768/10377920) finish=1.5min speed=74238K/sec

unused devices: <none>
[root@server1 ~]#
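
Instead of re-running cat /proc/mdstat by hand, you can follow the rebuild with watch, or let mdadm block until the resync is complete (optional; the --wait switch should be available in the mdadm version shipped with CentOS 5.3, but check your man page if unsure):

watch -n 5 cat /proc/mdstat

mdadm --wait /dev/md1

(Press CTRL+C to leave watch.)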

Wait until the synchronization has finished:

[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
      10377920 blocks [2/2] [UU]

unused devices: <none>
[root@server1 ~]#

Then run

grub

and install the bootloader on both HDDs:

root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
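
If you prefer not to type these commands at the interactive grub> prompt, you can feed the same commands to the GRUB shell in batch mode (a sketch using the device names from this guide; GRUB legacy as shipped with CentOS 5.3):

grub --batch <<EOF
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
EOF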

That's it. You've just replaced a failed hard drive in your RAID1 array.

 

