How To Resize RAID Partitions (Shrink & Grow) (Software RAID) - Page 2

3 Degraded Array

I will describe how to resize the degraded array /dev/md2, made up of /dev/sda3 and /dev/sdb3, where /dev/sda3 has failed:

server1:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb3[1]
      4594496 blocks [2/1] [_U]

md1 : active raid1 sda2[0] sdb2[1]
      497920 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      144448 blocks [2/2] [UU]

unused devices: <none>
server1:~#

 

3.1 Shrinking A Degraded Array

Before we boot into the rescue system, we must make sure that /dev/sda3 is really removed from the array:

mdadm --manage /dev/md2 --fail /dev/sda3
mdadm --manage /dev/md2 --remove /dev/sda3

Then we overwrite the superblock on /dev/sda3 (this is very important - if you forget this step, the system might not boot anymore after the resize!):

mdadm --zero-superblock /dev/sda3

Boot into your rescue system and activate all needed modules:

modprobe md
modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10
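The module list above can also be loaded in one loop; this is just a convenience sketch equivalent to the individual calls (the echo prefix only prints each command so you can review them - drop it to actually load the modules):

```shell
# Sketch: load all RAID-related modules in one loop.
# echo only prints each command here; remove it to really run modprobe.
cmds=$(for mod in md linear multipath raid0 raid1 raid5 raid6 raid10; do
    echo "modprobe $mod"
done)
echo "$cmds"
```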

Then activate your RAID arrays:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
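mdadm --examine --scan appends one ARRAY line per detected array to /etc/mdadm/mdadm.conf. The exact output depends on your mdadm version and metadata format; the appended entries look roughly like this (the UUIDs are placeholders, not real values):

```
# appended by: mdadm --examine --scan  (UUID values are placeholders)
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=<uuid-of-md0>
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=<uuid-of-md1>
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=<uuid-of-md2>
```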

mdadm -A --scan

Run

e2fsck -f /dev/md2

to check the file system.

/dev/md2 has a size of 40GB; I want to shrink it to 30GB. First we have to shrink the file system with resize2fs. To make sure that the file system fits into the 30GB, we first make it a little smaller (25GB) so we have a small safety margin, then shrink /dev/md2 to 30GB, and finally resize the file system (again with resize2fs) to the max. possible value:

resize2fs /dev/md2 25G

Now we shrink /dev/md2 to 30GB. The --size value must be in KiBytes (30 x 1024 x 1024 = 31457280); make sure it is divisible by 64:

mdadm --grow /dev/md2 --size=31457280
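As a sanity check, the --size arithmetic can be reproduced in the shell; the rounding step below is just a belt-and-braces way to guarantee a multiple of 64 (here it changes nothing, since 31457280 already is one):

```shell
# Target size: 30 GiB expressed in KiB, rounded down to a multiple of 64
TARGET_GIB=30
SIZE_KIB=$(( TARGET_GIB * 1024 * 1024 ))
SIZE_KIB=$(( SIZE_KIB / 64 * 64 ))   # 31457280 is already a multiple of 64
echo "mdadm --grow /dev/md2 --size=$SIZE_KIB"
```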

Next we grow the file system to the largest possible value (if you don't specify a size, resize2fs will use the largest possible value)...

resize2fs /dev/md2

... and run a file system check again:

e2fsck -f /dev/md2

Then boot into the normal system again and run the following two commands to add /dev/sda3 back to the array /dev/md2:

mdadm --zero-superblock /dev/sda3
mdadm -a /dev/md2 /dev/sda3

Take a look at

cat /proc/mdstat

and you should see that /dev/sdb3 and /dev/sda3 are now being synced.
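While the resync runs, /proc/mdstat shows a progress line for md2. The excerpt below is illustrative only (your block counts, percentage, and speed will differ); the grep shows one way to pull out just the recovery percentage:

```shell
# Illustrative /proc/mdstat excerpt during a resync (values are made up);
# on the real system, read /proc/mdstat directly instead of this variable.
mdstat='md2 : active raid1 sda3[2] sdb3[1]
      31457280 blocks [2/1] [_U]
      [=====>...............]  recovery = 27.5% (8650752/31457280) finish=3.1min speed=120000K/sec'
progress=$(echo "$mdstat" | grep -o 'recovery = [0-9.]*%')
echo "$progress"
```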

 

3.2 Growing A Degraded Array

Before we boot into the rescue system, we must make sure that /dev/sda3 is really removed from the array:

mdadm --manage /dev/md2 --fail /dev/sda3
mdadm --manage /dev/md2 --remove /dev/sda3

Then we overwrite the superblock on /dev/sda3 (this is very important - if you forget this step, the system might not boot anymore after the resize!):

mdadm --zero-superblock /dev/sda3

Boot into your rescue system and activate all needed modules:

modprobe md
modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10

Then activate your RAID arrays:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

mdadm -A --scan

Now we can grow /dev/md2 as follows:

mdadm --grow /dev/md2 --size=max

--size=max means the largest possible value. You can also specify a size in KiBytes (see the previous chapter).

Then we run a file system check...

e2fsck -f /dev/md2

..., resize the file system...

resize2fs /dev/md2

... and check the file system again:

e2fsck -f /dev/md2

Then boot into the normal system again and run the following two commands to add /dev/sda3 back to the array /dev/md2:

mdadm --zero-superblock /dev/sda3
mdadm -a /dev/md2 /dev/sda3

Take a look at

cat /proc/mdstat

and you should see that /dev/sdb3 and /dev/sda3 are now being synced.

 
