How To Resize LVM Software RAID1 Partitions (Shrink & Grow) - Page 3
3 Degraded Array
I will describe how to resize the degraded array /dev/md1, made up of /dev/sda5 and /dev/sdb5, where /dev/sda5 has failed:
cat /proc/mdstat
server1:~# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb5[1]
4988032 blocks [2/1] [_U]
md0 : active raid1 sda1[0] sdb1[1]
248896 blocks [2/2] [UU]
unused devices: <none>
server1:~#
df -h
server1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/server1-root
                      4.5G  741M  3.5G  18% /
tmpfs                 126M     0  126M   0% /lib/init/rw
udev                   10M   68K   10M   1% /dev
tmpfs                 126M     0  126M   0% /dev/shm
/dev/md0              236M   18M  206M   8% /boot
server1:~#
pvdisplay
server1:~# pvdisplay
--- Physical volume ---
PV Name /dev/md1
VG Name server1
PV Size 4.75 GB / not usable 0
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 1217
Free PE 0
Allocated PE 1217
PV UUID Ntrsmz-m0o1-WAPD-xhsb-YpH7-0okm-qfdBQG
server1:~#
vgdisplay
server1:~# vgdisplay
--- Volume group ---
VG Name server1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 9
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 4.75 GB
PE Size 4.00 MB
Total PE 1217
Alloc PE / Size 1217 / 4.75 GB
Free PE / Size 0 / 0
VG UUID X3ZYTy-39yq-20k7-GCGk-vKVU-Xe0i-REdEu0
server1:~#
lvdisplay
server1:~# lvdisplay
--- Logical volume ---
LV Name /dev/server1/root
VG Name server1
LV UUID 3ZgGnd-Sq1s-Rchu-92b9-DpAX-mk24-0aOMm2
LV Write Access read/write
LV Status available
# open 1
LV Size 4.50 GB
Current LE 1151
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:0
--- Logical volume ---
LV Name /dev/server1/swap_1
VG Name server1
LV UUID KM6Yq8-jQaQ-rkP8-6f4t-zrXA-Jk13-yFrWi2
LV Write Access read/write
LV Status available
# open 2
LV Size 264.00 MB
Current LE 66
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:1
server1:~#
3.1 Shrinking A Degraded Array
Before we boot into the rescue system, we must make sure that /dev/sda5 is really removed from the array:
mdadm --manage /dev/md1 --fail /dev/sda5
mdadm --manage /dev/md1 --remove /dev/sda5
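If you want to verify that /dev/sda5 has really been taken out of the array before you continue, mdadm should now report /dev/md1 as degraded and running on /dev/sdb5 alone:
mdadm --detail /dev/md1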
Then we overwrite the superblock on /dev/sda5 (this is very important - if you forget this, the system might not boot anymore after the resize!):
mdadm --zero-superblock /dev/sda5
Boot into your rescue system and activate all needed modules:
modprobe md
modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10
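If you are unsure whether the rescue system has actually loaded the RAID modules, lsmod should list them:
lsmod | grep raid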
Then activate your RAID arrays...
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
mdadm -A --scan
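The arrays should now be assembled (with /dev/md1 in degraded state); you can check this with:
cat /proc/mdstat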
... and start LVM:
/etc/init.d/lvm start
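If your rescue system doesn't have this init script, scanning for and activating the volume groups directly should have the same effect:
vgscan
vgchange -ay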
Run
e2fsck -f /dev/server1/root
to check the file system.
/dev/md1 has a size of 5GB; I want to shrink it to 4GB. Because the layers are nested, each layer must fit into the one below it. The file system lives inside the logical volume /dev/server1/root, so the file system must be <= the logical volume (therefore I shrink the file system to 2GB with resize2fs). The logical volumes (LV - we have two of them, /dev/server1/root and /dev/server1/swap_1) in turn live inside the physical volume (PV) /dev/md1, so LV /dev/server1/root + LV /dev/server1/swap_1 <= PV (I make LV /dev/server1/root 2.5GB and delete /dev/server1/swap_1, see the next paragraph). The PV finally sits on the RAID array /dev/md1 that we want to shrink, so PV <= /dev/md1 (I shrink the PV to 3GB). So we shrink the file system first, then the LV, then the PV, and only then the array itself.
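If you are not sure how small the file system can safely become, you can let resize2fs print an estimate of its minimum size (reported in file system blocks) before you shrink anything:
resize2fs -P /dev/server1/root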
As /dev/server1/swap_1 is at the end of our hard drive, we can delete it, shrink the PV, and then create /dev/server1/swap_1 again afterwards to make sure that /dev/server1/root fits into our PV. If the swap LV is not at the end of the drive in your case, there's no need to delete it - but then you must make sure that you shrink the last LV on the drive enough so that everything still fits into the smaller PV (you can check the layout as shown below).
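To see where each LV sits inside the PV, you can display the segment mapping of the physical volume - the LV that occupies the highest extents is the one at the end:
pvdisplay -m /dev/md1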
So I shrink /dev/server1/root's filesystem to 2GB (make sure you use a big enough value so that all your files and directories fit into it!):
resize2fs /dev/server1/root 2G
... and the /dev/server1/root LV to 2.5GB:
lvreduce -L2.5G /dev/server1/root
Then I delete the /dev/server1/swap_1 LV (not necessary if swap is not at the end of your hard drive - in this case make sure you shrink the last LV on the drive so that it fits into the PV!)...
lvremove /dev/server1/swap_1
... and resize the PV to 3GB:
pvresize --setphysicalvolumesize 3G /dev/md1
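You can verify the new PV size with:
pvdisplay /dev/md1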
Now we shrink /dev/md1 to 4GB. The --size value must be in KiBytes (4 x 1024 x 1024 = 4194304); make sure it can be divided by 64:
mdadm --grow /dev/md1 --size=4194304
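If you want to double-check the value, the arithmetic can be done right in the shell - the first command prints the size in KiBytes, the second should print 0 if it is divisible by 64:
echo $((4 * 1024 * 1024))
echo $((4194304 % 64))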
Now I grow the PV to the largest possible value (if you don't specify a size, pvresize will use the largest possible value):
pvresize /dev/md1
Now let's check the output of
vgdisplay
root@Knoppix:~# vgdisplay
--- Volume group ---
VG Name server1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 26
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 4.00 GB
PE Size 4.00 MB
Total PE 1023
Alloc PE / Size 640 / 2.50 GB
Free PE / Size 383 / 1.50 GB
VG UUID X3ZYTy-39yq-20k7-GCGk-vKVU-Xe0i-REdEu0
root@Knoppix:~#
As you see, we have 383 free PE, so we can recreate the /dev/server1/swap_1 LV (which had 66 PE before we deleted it):
lvcreate --name swap_1 -l 66 server1
mkswap /dev/server1/swap_1
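Please note that mkswap assigns a new UUID to the swap space. If your /etc/fstab references swap by UUID instead of by the device path /dev/mapper/server1-swap_1, you will have to update it; you can look up the new UUID with:
blkid /dev/server1/swap_1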
Let's check
vgdisplay
again:
root@Knoppix:~# vgdisplay
--- Volume group ---
VG Name server1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 27
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 4.00 GB
PE Size 4.00 MB
Total PE 1023
Alloc PE / Size 706 / 2.76 GB
Free PE / Size 317 / 1.24 GB
VG UUID X3ZYTy-39yq-20k7-GCGk-vKVU-Xe0i-REdEu0
root@Knoppix:~#
We still have 317 free PE, so we can extend our /dev/server1/root LV:
lvextend -l +317 /dev/server1/root
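Instead of specifying the number of free PE explicitly, you could also let lvextend claim all remaining free extents, which should be equivalent here:
lvextend -l +100%FREE /dev/server1/root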
Now we resize /dev/server1/root's filesystem to the largest possible value (if you don't specify a size, resize2fs will use the largest possible value)...
resize2fs /dev/server1/root
... and run a file system check again:
e2fsck -f /dev/server1/root
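Before you leave the rescue system, you can take a final look at the new sizes:
pvdisplay
lvdisplay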
Then boot into the normal system again and run the following two commands to add /dev/sda5 back to the array /dev/md1:
mdadm --zero-superblock /dev/sda5
mdadm -a /dev/md1 /dev/sda5
Take a look at
cat /proc/mdstat
and you should see that /dev/sdb5 and /dev/sda5 are now being synced.
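If you prefer a continuously updating view of the rebuild, you can also run:
watch cat /proc/mdstat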