A Beginner's Guide To LVM - Page 9
Now we do the same process again, this time replacing /dev/sdc and /dev/sde:
mdadm --manage /dev/md0 --fail /dev/sdc1
mdadm --manage /dev/md0 --remove /dev/sdc1
mdadm --manage /dev/md1 --fail /dev/sde1
mdadm --manage /dev/md1 --remove /dev/sde1
fdisk /dev/sdc
fdisk /dev/sde
(In fdisk, give each disk the same layout as on the previous page: a first primary partition of type fd (Linux raid autodetect) the same size as the old one, plus a second fd partition spanning the rest of the disk; the second partitions will hold the new arrays we create below.)
mdadm --manage /dev/md0 --add /dev/sdc1
mdadm --manage /dev/md1 --add /dev/sde1
cat /proc/mdstat
Wait until the synchronization has finished.
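Two optional shortcuts, assuming standard tooling (sfdisk comes with util-linux, and --wait needs a reasonably recent mdadm): since /dev/sdb and /dev/sdd were given exactly this partition layout on the previous page, sfdisk can clone their partition tables to the new disks instead of repeating the fdisk dialog, and mdadm can block until the resynchronization has finished:
sfdisk -d /dev/sdb | sfdisk /dev/sdc    # copy sdb's partition table to sdc
sfdisk -d /dev/sdd | sfdisk /dev/sde    # copy sdd's partition table to sde
mdadm --wait /dev/md0 /dev/md1          # returns once the resync is done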
Next we create the RAID arrays /dev/md2 from /dev/sdb2 and /dev/sdc2 as well as /dev/md3 from /dev/sdd2 and /dev/sde2.
mdadm --create /dev/md2 --auto=yes -l 1 -n 2 /dev/sdb2 /dev/sdc2
server1:~# mdadm --create /dev/md2 --auto=yes -l 1 -n 2 /dev/sdb2 /dev/sdc2
mdadm: array /dev/md2 started.
mdadm --create /dev/md3 --auto=yes -l 1 -n 2 /dev/sdd2 /dev/sde2
server1:~# mdadm --create /dev/md3 --auto=yes -l 1 -n 2 /dev/sdd2 /dev/sde2
mdadm: array /dev/md3 started.
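If you want to double-check the geometry of the new arrays, mdadm's --detail mode prints the RAID level, size, and member disks of an array (the exact output format depends on your mdadm version):
mdadm --detail /dev/md2
mdadm --detail /dev/md3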
The new RAID arrays must be synchronized before we go on, so check the progress with
cat /proc/mdstat
server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md3 : active raid1 sde2[1] sdd2[0]
59464512 blocks [2/2] [UU]
[=>...................] resync = 5.1% (3044224/59464512) finish=5.5min speed=169123K/sec
md2 : active raid1 sdc2[1] sdb2[0]
59464512 blocks [2/2] [UU]
[=>...................] resync = 5.5% (3312512/59464512) finish=9.3min speed=100379K/sec
md0 : active raid1 sdc1[0] sdb1[1]
24418688 blocks [2/2] [UU]
md1 : active raid1 sde1[0] sdd1[1]
24418688 blocks [2/2] [UU]
unused devices: <none>
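Instead of re-running cat by hand, you can let the progress display refresh automatically with watch (part of the procps package on Debian); press CTRL+C to exit once the resync lines have disappeared and both new arrays show [UU]:
watch -n 2 cat /proc/mdstat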
After the synchronization has finished, we prepare /dev/md2 and /dev/md3 for LVM:
pvcreate /dev/md2 /dev/md3
server1:~# pvcreate /dev/md2 /dev/md3
Physical volume "/dev/md2" successfully created
Physical volume "/dev/md3" successfully created
and add /dev/md2 and /dev/md3 to our fileserver volume group:
vgextend fileserver /dev/md2 /dev/md3
server1:~# vgextend fileserver /dev/md2 /dev/md3
Volume group "fileserver" successfully extended
Now let's run our *display commands:
pvdisplay
server1:~# pvdisplay
--- Physical volume ---
PV Name /dev/md0
VG Name fileserver
PV Size 23.29 GB / not usable 0
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 5961
Free PE 0
Allocated PE 5961
PV UUID 7JHUXF-1R2p-OjbJ-X1OT-uaeg-gWRx-H6zx3P
--- Physical volume ---
PV Name /dev/md1
VG Name fileserver
PV Size 23.29 GB / not usable 0
Allocatable yes
PE Size (KByte) 4096
Total PE 5961
Free PE 18
Allocated PE 5943
PV UUID pwQ5AJ-RwVK-EebA-0Z13-d27d-2IdP-HqT5RW
--- Physical volume ---
PV Name /dev/md2
VG Name fileserver
PV Size 56.71 GB / not usable 0
Allocatable yes
PE Size (KByte) 4096
Total PE 14517
Free PE 14517
Allocated PE 0
PV UUID 300kTo-evxm-rfmf-90LA-4YOJ-2LG5-t4JHnf
--- Physical volume ---
PV Name /dev/md3
VG Name fileserver
PV Size 56.71 GB / not usable 0
Allocatable yes
PE Size (KByte) 4096
Total PE 14517
Free PE 14517
Allocated PE 0
PV UUID LXFSW6-7LQX-ZGGU-dV95-jQgg-TK44-U5JOjO
vgdisplay
server1:~# vgdisplay
--- Volume group ---
VG Name fileserver
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 26
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 4
Act PV 4
VG Size 159.98 GB
PE Size 4.00 MB
Total PE 40956
Alloc PE / Size 11904 / 46.50 GB
Free PE / Size 29052 / 113.48 GB
VG UUID dQDEHT-kNHf-UjRm-rmJ3-OUYx-9G1t-aVskI1
lvdisplay
server1:~# lvdisplay
--- Logical volume ---
LV Name /dev/fileserver/share
VG Name fileserver
LV UUID bcn3Oi-vW3p-WoyX-QlF2-xEtz-uz7Z-4DllYN
LV Write Access read/write
LV Status available
# open 1
LV Size 40.00 GB
Current LE 10240
Segments 2
Allocation inherit
Read ahead sectors 0
Block device 253:0
--- Logical volume ---
LV Name /dev/fileserver/backup
VG Name fileserver
LV UUID vfKVnU-gFXB-C6hE-1L4g-il6U-78EE-N8Sni8
LV Write Access read/write
LV Status available
# open 1
LV Size 5.00 GB
Current LE 1280
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:1
--- Logical volume ---
LV Name /dev/fileserver/media
VG Name fileserver
LV UUID H1gagh-wTwH-Og0S-cJNQ-BgX1-zGlM-LwLVzE
LV Write Access read/write
LV Status available
# open 2
LV Size 1.50 GB
Current LE 384
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:2
If your outputs look similar, you have successfully replaced your small hard disks with bigger ones.
Now that we have more disk space (2 * 23.29 GB + 2 * 56.71 GB = 160 GB), we can enlarge our logical volumes. So far you have seen how to enlarge ext3 and reiserfs filesystems, so let's now enlarge our backup logical volume, which uses xfs:
lvextend -L10G /dev/fileserver/backup
server1:~# lvextend -L10G /dev/fileserver/backup
Extending logical volume backup to 10.00 GB
Logical volume backup successfully resized
To enlarge the xfs filesystem, we run xfs_growfs. Unlike the ext3 resizes earlier in this guide, an xfs filesystem is grown while it is mounted, so there is no need to unmount /var/backup first:
xfs_growfs /dev/fileserver/backup
server1:~# xfs_growfs /dev/fileserver/backup
meta-data=/dev/fileserver/backup isize=256    agcount=8, agsize=163840 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=1310720, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=2560, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
data blocks changed from 1310720 to 2621440
The output of
df -h
should now look like this:
server1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              19G  666M   17G   4% /
tmpfs                  78M     0   78M   0% /lib/init/rw
udev                   10M  116K  9.9M   2% /dev
tmpfs                  78M     0   78M   0% /dev/shm
/dev/sda1             137M   17M  114M  13% /boot
/dev/mapper/fileserver-share
                       40G  177M   38G   1% /var/share
/dev/mapper/fileserver-backup
                       10G  272K   10G   1% /var/backup
/dev/mapper/fileserver-media
                      1.5G   33M  1.5G   3% /var/media
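As an aside, newer lvm2 releases can collapse the two resize steps into one: lvextend's -r (--resizefs) option calls fsadm after extending the volume, and fsadm runs xfs_growfs for xfs filesystems. Assuming your lvm2 version supports -r, the single command
lvextend -L10G -r /dev/fileserver/backup
would have had the same effect as the lvextend and xfs_growfs commands above.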
That's it! If you've made it this far, you should now be comfortable working with LVM and LVM on RAID.
9 Links
- Managing Disk Space with LVM: http://www.linuxdevcenter.com/pub/a/linux/2006/04/27/managing-disk-space-with-lvm.html
- A simple introduction to working with LVM: http://www.debian-administration.org/articles/410
- Debian: http://www.debian.org