How To Resize RAID Partitions (Shrink & Grow) (Software RAID)

Version 1.0
Author: Falko Timme

This article describes how you can shrink and grow existing software RAID partitions. I have tested this with non-LVM RAID1 partitions that use ext3 as the file system. I will describe this procedure for an intact RAID array and also a degraded RAID array.

If you use LVM on your RAID partitions, the procedure will be different, so do not use this tutorial in this case!

I do not issue any guarantee that this will work for you!

 

1 Preliminary Note

A few days ago I found out that one of my servers had a degraded RAID1 array (/dev/md2, made up of /dev/sda3 and /dev/sdb3; /dev/sda3 had failed, /dev/sdb3 was still active):

server1:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb3[1]
      4594496 blocks [2/1] [_U]

md1 : active raid1 sda2[0] sdb2[1]
      497920 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      144448 blocks [2/2] [UU]

unused devices: <none>
server1:~#

I tried to fix it (using this tutorial), but unfortunately at the end of the sync process (at 99.9%), the sync stopped and started over again. As I found out, this happened because there were some defective sectors at the end of the (working) partition /dev/sdb3 - this is what I found in /var/log/kern.log:

Nov 22 18:51:06 server1 kernel: sdb: Current: sense key: Aborted Command
Nov 22 18:51:06 server1 kernel: end_request: I/O error, dev sdb, sector 1465142856

So this was the worst case that could happen: /dev/sda dead and /dev/sdb about to die. My plan to fix this was as follows: shrink /dev/md2 so that it leaves out the broken sectors at the end of /dev/sdb3; add the new /dev/sda3 (from the replacement hard drive) to /dev/md2 and let the sync finish; remove /dev/sdb3 from the array and replace /dev/sdb with a new hard drive; add the new /dev/sdb3 to /dev/md2; and finally grow /dev/md2 again.
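In mdadm terms, the plan maps to commands roughly like this (a sketch of the sequence only, not commands to run blindly - the exact shrink size depends on where the bad sectors are, and the file system has to be shrunk first, as described in chapter 2.1):

mdadm --grow /dev/md2 --size=<new size in KiB>  # shrink past the bad sectors
mdadm --manage /dev/md2 --add /dev/sda3         # add the partition from the new drive
# wait for the sync to finish (watch cat /proc/mdstat)
mdadm --manage /dev/md2 --fail /dev/sdb3
mdadm --manage /dev/md2 --remove /dev/sdb3
# replace /dev/sdb, recreate the partition, then:
mdadm --manage /dev/md2 --add /dev/sdb3
mdadm --grow /dev/md2 --size=max                # grow back to the full size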

This is one of the use cases for the following procedures (I will describe the process for an intact array and a degraded array).

Please note that /dev/md2 is my system partition (mount point /), so I had to use a rescue system (e.g. Knoppix Live-CD) to resize the array. If the array you want to resize is not your system partition, you probably don't need to boot into a rescue system; but in either case, make sure that the array is unmounted!
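A quick way to check whether the array is mounted (assuming it is /dev/md2):

mount | grep /dev/md2

If this prints a line, unmount the array first:

umount /dev/md2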

 

2 Intact Array

I will describe how to resize the array /dev/md2, made up of /dev/sda3 and /dev/sdb3.

 

2.1 Shrinking An Intact Array

Boot into your rescue system and activate all needed modules:

modprobe md
modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10

Then activate your RAID arrays:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

mdadm -A --scan
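Afterwards, a look at /proc/mdstat should show all arrays as active:

cat /proc/mdstat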

Run

e2fsck -f /dev/md2

to check the file system.

/dev/md2 has a size of 40GB; I want to shrink it to 30GB. First we have to shrink the file system with resize2fs; to make sure that the file system fits into the 30GB, we shrink it a little further (to 25GB) to have a safety margin, then shrink /dev/md2 to 30GB, and finally resize the file system (again with resize2fs) to the maximum possible size:

resize2fs /dev/md2 25G

Now we shrink /dev/md2 to 30GB. The --size value must be specified in kibibytes (30 x 1024 x 1024 = 31457280); make sure it is divisible by 64:

mdadm --grow /dev/md2 --size=31457280
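If you want to double-check the value, the shell can do the arithmetic for you (shown here for the 30GB case):

echo $((30 * 1024 * 1024))   # prints 31457280 - the value for --size
echo $((31457280 % 64))      # prints 0, i.e. the value is divisible by 64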

Next we grow the file system to the largest size that fits into the shrunken array (if you don't specify a size, resize2fs uses the largest possible value)...

resize2fs /dev/md2

... and run a file system check again:

e2fsck -f /dev/md2

That's it - you can now boot into the normal system again.
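If you want to verify the new array size first, mdadm --detail reports it (the exact output format depends on your mdadm version):

mdadm --detail /dev/md2 | grep "Array Size"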

 

2.2 Growing An Intact Array

Boot into your rescue system and activate all needed modules:

modprobe md
modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10

Then activate your RAID arrays:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

mdadm -A --scan

Now we can grow /dev/md2 as follows:

mdadm --grow /dev/md2 --size=max

--size=max means the largest possible value. You can also specify a size in kibibytes instead (see the previous chapter).
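For example, to grow the array to 40GB instead of the maximum, you would pass the size in kibibytes (40 x 1024 x 1024 = 41943040; a hypothetical value for illustration):

mdadm --grow /dev/md2 --size=41943040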

Then we run a file system check...

e2fsck -f /dev/md2

..., resize the file system...

resize2fs /dev/md2

... and check the file system again:

e2fsck -f /dev/md2

Afterwards you can boot back into your normal system.

Comments

By: Nikolay

Thank you for the tutorial, it is very useful. Actually, resizing does not work with RAID0:

mdadm --grow /dev/md0 --size=188743680
mdadm: raid0 array /dev/md0 cannot be reshaped.

 

mdadm --grow --help

 This version supports changing the number of
devices in a RAID1/5/6, changing the active size of all devices in
a RAID1/4/5/6, adding or removing a write-intent bitmap, and changing
the error mode for a 'FAULTY' array.

Regards

By: Anonymous

The Ubuntu rescue edition CD ISO has ddrescue, which allows you to get most of the data off a failing disk. If you're lucky, without too much corruption!

By: Anonymous

If you use the resize command

mdadm --grow /dev/md2 --size=max

and you get the error

mdadm: Cannot set device size for /dev/md2 - Device or resource busy

check the following (the commands are just examples):

- make sure there is no RAID rebuild in progress (cat /proc/mdstat)

- make sure there is no opened file (lsof | grep md2)

- make sure it's not mounted (mount | grep md2)

- see if it's the bitmap issue (google for that)

 

By: Marcos

Boot into rescue mode.

By: madods

I have a /dev/md0 RAID1 array comprising /dev/sdb1 and /dev/sdc1.

Both partitions are half the capacity of their disks, i.e. /dev/sdc1 is half the size of /dev/sdc.

How can I resize /dev/md0 to use all the space on the disks?

By: Philipp

Hi Falko.

Thanks for the tutorial.

I have a few questions or hints:

When I run your commands as above:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig

mdadm --examine --scan >> /etc/mdadm/mdadm.conf

 

It always creates duplicates at the end of mdadm.conf.

 

If I run:

e2fsck -f /dev/md2

I get the Error:

e2fsck -f /dev/md2

e2fsck 1.44.5 (15-Dec-2018)

/dev/md2 is mounted.

 

e2fsck: Cannot continue, aborting.

 

So should I unmount the RAID and try again?

 

Best regards

Philipp