8 Replacing The Hard Disks With Bigger Ones

We are currently using four hard disks with a size of 25GB each (at least we are pretending they are that size). Now let's assume this isn't enough anymore and that we need more space in our RAID setup. Therefore we will replace the 25GB hard disks with 80GB hard disks (in fact we will keep using the current hard disks, but use their full capacity now - in real life you would replace your old, small hard disks with new, bigger ones).

The procedure is as follows: first we remove /dev/sdb and /dev/sdd from the RAID arrays, replace them with bigger hard disks, put them back into the RAID arrays, and then we do the same again with /dev/sdc and /dev/sde.
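
Before failing anything, it can be worth double-checking that both arrays are currently clean and have two active devices each - if an array were already degraded, pulling another disk would take it offline. This check is optional and not part of the procedure itself; mdadm reports the array state like this:

mdadm --detail /dev/md0
mdadm --detail /dev/md1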

First we mark /dev/sdb1 as failed:

mdadm --manage /dev/md0 --fail /dev/sdb1
server1:~# mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0

The output of

cat /proc/mdstat

now looks like this:

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sdc1[0] sdb1[2](F)
      24418688 blocks [2/1] [U_]

md1 : active raid1 sde1[0] sdd1[1]
      24418688 blocks [2/2] [UU]

unused devices: <none>

Then we remove /dev/sdb1 from the RAID array /dev/md0:

mdadm --manage /dev/md0 --remove /dev/sdb1
server1:~# mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm: hot removed /dev/sdb1
cat /proc/mdstat
server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sdc1[0]
      24418688 blocks [2/1] [U_]

md1 : active raid1 sde1[0] sdd1[1]
      24418688 blocks [2/2] [UU]

unused devices: <none>
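
As a side note, mdadm can usually chain both steps into a single call; something like the following should be equivalent to the separate --fail and --remove commands above (we don't use this shorthand in this guide, so treat it as an optional variation):

mdadm --manage /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1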

Now we do the same with /dev/sdd1:

mdadm --manage /dev/md1 --fail /dev/sdd1
server1:~# mdadm --manage /dev/md1 --fail /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md1
cat /proc/mdstat
server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sdc1[0]
      24418688 blocks [2/1] [U_]

md1 : active raid1 sde1[0] sdd1[2](F)
      24418688 blocks [2/1] [U_]

unused devices: <none>
mdadm --manage /dev/md1 --remove /dev/sdd1
server1:~# mdadm --manage /dev/md1 --remove /dev/sdd1
mdadm: hot removed /dev/sdd1
cat /proc/mdstat
server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sdc1[0]
      24418688 blocks [2/1] [U_]

md1 : active raid1 sde1[0]
      24418688 blocks [2/1] [U_]

unused devices: <none>

On a real system you would now shut it down, pull out the 25GB /dev/sdb and /dev/sdd and replace them with 80GB ones. As I said before, we don't have to do this because all hard disks already have a capacity of 80GB.
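
After booting with the replacement disks in place, it doesn't hurt to verify that the kernel really sees the new 80GB capacity before repartitioning. Any of the following should do (the exact figures will of course depend on your hardware):

fdisk -l /dev/sdb
blockdev --getsize64 /dev/sdb
cat /proc/partitions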

Next we must partition /dev/sdb and /dev/sdd. On each of these disks we create a first partition (/dev/sdb1 and /dev/sdd1), type fd (Linux RAID autodetect), 25GB in size (the same settings as on the old hard disks), and a second partition (/dev/sdb2 and /dev/sdd2), type fd, that covers the rest of the disk. As /dev/sdb1 and /dev/sdd1 are still present on our hard disks, we only have to create /dev/sdb2 and /dev/sdd2 in this particular example.

fdisk /dev/sdb

server1:~# fdisk /dev/sdb

The number of cylinders for this disk is set to 10443.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): <-- p

Disk /dev/sdb: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        3040    24418768+  fd  Linux raid autodetect

Command (m for help): <-- n
Command action
   e   extended
   p   primary partition (1-4)
<-- p
Partition number (1-4): <-- 2
First cylinder (3041-10443, default 3041): <-- <ENTER>
Using default value 3041
Last cylinder or +size or +sizeM or +sizeK (3041-10443, default 10443): <-- <ENTER>
Using default value 10443

Command (m for help): <-- t
Partition number (1-4): <-- 2
Hex code (type L to list codes): <-- fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): <-- w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Do the same for /dev/sdd:

fdisk /dev/sdd
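
If you were working with genuinely blank replacement disks (instead of reusing the old ones as we do here), you would also have to recreate the first 25GB partition. One way to avoid typing the values by hand is to copy the partition layout from a surviving member disk with sfdisk and then add the second partition with fdisk as shown above; depending on your sfdisk version, the dump/restore would look roughly like this (a sketch only, not part of this procedure):

sfdisk -d /dev/sdc | sfdisk /dev/sdb
sfdisk -d /dev/sde | sfdisk /dev/sdd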

The output of

fdisk -l

now looks like this:

server1:~# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          18      144553+  83  Linux
/dev/sda2              19        2450    19535040   83  Linux
/dev/sda4            2451        2610     1285200   82  Linux swap / Solaris

Disk /dev/sdb: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        3040    24418768+  fd  Linux raid autodetect
/dev/sdb2            3041       10443    59464597+  fd  Linux raid autodetect

Disk /dev/sdc: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        3040    24418768+  fd  Linux raid autodetect

Disk /dev/sdd: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1        3040    24418768+  fd  Linux raid autodetect
/dev/sdd2            3041       10443    59464597+  fd  Linux raid autodetect

Disk /dev/sde: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1        3040    24418768+  fd  Linux raid autodetect

Disk /dev/sdf: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1        3040    24418768+  8e  Linux LVM

Disk /dev/md1: 25.0 GB, 25004736512 bytes
2 heads, 4 sectors/track, 6104672 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md0: 25.0 GB, 25004736512 bytes
2 heads, 4 sectors/track, 6104672 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table

Now we add /dev/sdb1 to /dev/md0 again and /dev/sdd1 to /dev/md1:

mdadm --manage /dev/md0 --add /dev/sdb1
server1:~# mdadm --manage /dev/md0 --add /dev/sdb1
mdadm: re-added /dev/sdb1
mdadm --manage /dev/md1 --add /dev/sdd1
server1:~# mdadm --manage /dev/md1 --add /dev/sdd1
mdadm: re-added /dev/sdd1
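
While the arrays are rebuilding, mdadm --detail gives a slightly more verbose view than /proc/mdstat; it should report the array state as something like "clean, degraded, recovering" and list the re-added partition as a rebuilding spare. This is purely informational - no further action is needed:

mdadm --detail /dev/md0
mdadm --detail /dev/md1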

Now the contents of both RAID arrays will be synchronized. We must wait until this is finished before we can go on. We can check the status of the synchronization with

cat /proc/mdstat

The output looks like this during synchronization:

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sdb1[1] sdc1[0]
      24418688 blocks [2/1] [U_]
      [=>...................]  recovery =  9.9% (2423168/24418688) finish=2.8min speed=127535K/sec

md1 : active raid1 sdd1[1] sde1[0]
      24418688 blocks [2/1] [U_]
      [=>...................]  recovery =  6.4% (1572096/24418688) finish=1.9min speed=196512K/sec

unused devices: <none>

and like this when it's finished:

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sdb1[1] sdc1[0]
      24418688 blocks [2/2] [UU]

md1 : active raid1 sdd1[1] sde1[0]
      24418688 blocks [2/2] [UU]

unused devices: <none>
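
If you would rather not re-run cat /proc/mdstat by hand until the resync is done, two common alternatives are to let watch refresh it for you or to have mdadm block until the recovery has finished (the refresh interval below is arbitrary; press CTRL+C to leave watch):

watch -n 2 cat /proc/mdstat
mdadm --wait /dev/md0
mdadm --wait /dev/md1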