A Beginner's Guide To LVM - Page 9

Submitted by falko on Sun, 2007-01-14 19:22.

Now we repeat the same procedure, this time replacing /dev/sdc and /dev/sde: we fail and remove them from the arrays, partition the new disks exactly as before, and add them back:

mdadm --manage /dev/md0 --fail /dev/sdc1
mdadm --manage /dev/md0 --remove /dev/sdc1
mdadm --manage /dev/md1 --fail /dev/sde1
mdadm --manage /dev/md1 --remove /dev/sde1

fdisk /dev/sdc
fdisk /dev/sde

mdadm --manage /dev/md0 --add /dev/sdc1
mdadm --manage /dev/md1 --add /dev/sde1

cat /proc/mdstat

Wait until the synchronization has finished.
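Instead of re-running cat /proc/mdstat by hand, you can poll it in a loop. Here is a small sketch (the mdstat_busy helper is my own, not part of the guide); it takes the mdstat file as an argument so you can also try it on a saved copy:

```shell
# Print "busy" while any array in the given mdstat file is still
# resyncing or recovering, and "idle" once everything has settled.
# (mdstat_busy is an illustrative helper, not an mdadm feature.)
mdstat_busy() {
    if grep -Eq 'resync|recovery' "$1"; then
        echo busy
    else
        echo idle
    fi
}

# Typical use against the live file:
#   while [ "$(mdstat_busy /proc/mdstat)" = busy ]; do sleep 10; done
```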

Next we create two RAID1 arrays: /dev/md2 from /dev/sdb2 and /dev/sdc2, and /dev/md3 from /dev/sdd2 and /dev/sde2 (-l 1 selects RAID level 1, -n 2 the number of devices):

mdadm --create /dev/md2 --auto=yes -l 1 -n 2 /dev/sdb2 /dev/sdc2

server1:~# mdadm --create /dev/md2 --auto=yes -l 1 -n 2 /dev/sdb2 /dev/sdc2
mdadm: array /dev/md2 started.

mdadm --create /dev/md3 --auto=yes -l 1 -n 2 /dev/sdd2 /dev/sde2

server1:~# mdadm --create /dev/md3 --auto=yes -l 1 -n 2 /dev/sdd2 /dev/sde2
mdadm: array /dev/md3 started.

The new RAID arrays must finish synchronizing before we go on, so check the progress with

cat /proc/mdstat

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md3 : active raid1 sde2[1] sdd2[0]
      59464512 blocks [2/2] [UU]
      [=>...................]  resync =  5.1% (3044224/59464512) finish=5.5min speed=169123K/sec

md2 : active raid1 sdc2[1] sdb2[0]
      59464512 blocks [2/2] [UU]
      [=>...................]  resync =  5.5% (3312512/59464512) finish=9.3min speed=100379K/sec

md0 : active raid1 sdc1[0] sdb1[1]
      24418688 blocks [2/2] [UU]

md1 : active raid1 sde1[0] sdd1[1]
      24418688 blocks [2/2] [UU]

unused devices: <none>
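Before putting LVM on top, it is worth confirming that every array reports [UU], i.e. both mirrors active. A small awk sketch (md_status is my own helper, not from the guide) that extracts the status field per array:

```shell
# Print each md array together with the status field from its "blocks"
# line: [UU] means both mirrors are active, [U_] or [_U] means one is
# missing. (md_status is an illustrative helper.)
md_status() {
    awk '/^md/ { dev = $1 } /blocks/ { print dev, $NF }' "$1"
}

# Typical use: md_status /proc/mdstat
```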

After the synchronization has finished, we prepare /dev/md2 and /dev/md3 for LVM:

pvcreate /dev/md2 /dev/md3

server1:~# pvcreate /dev/md2 /dev/md3
  Physical volume "/dev/md2" successfully created
  Physical volume "/dev/md3" successfully created

and add /dev/md2 and /dev/md3 to our fileserver volume group:

vgextend fileserver /dev/md2 /dev/md3

server1:~# vgextend fileserver /dev/md2 /dev/md3
  Volume group "fileserver" successfully extended

Now let's run our *display commands:

pvdisplay

server1:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               fileserver
  PV Size               23.29 GB / not usable 0
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              5961
  Free PE               0
  Allocated PE          5961
  PV UUID               7JHUXF-1R2p-OjbJ-X1OT-uaeg-gWRx-H6zx3P

  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               fileserver
  PV Size               23.29 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              5961
  Free PE               18
  Allocated PE          5943
  PV UUID               pwQ5AJ-RwVK-EebA-0Z13-d27d-2IdP-HqT5RW

  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               fileserver
  PV Size               56.71 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              14517
  Free PE               14517
  Allocated PE          0
  PV UUID               300kTo-evxm-rfmf-90LA-4YOJ-2LG5-t4JHnf

  --- Physical volume ---
  PV Name               /dev/md3
  VG Name               fileserver
  PV Size               56.71 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              14517
  Free PE               14517
  Allocated PE          0
  PV UUID               LXFSW6-7LQX-ZGGU-dV95-jQgg-TK44-U5JOjO

vgdisplay

server1:~# vgdisplay
  --- Volume group ---
  VG Name               fileserver
  System ID
  Format                lvm2
  Metadata Areas        4
  Metadata Sequence No  26
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                4
  Act PV                4
  VG Size               159.98 GB
  PE Size               4.00 MB
  Total PE              40956
  Alloc PE / Size       11904 / 46.50 GB
  Free  PE / Size       29052 / 113.48 GB
  VG UUID               dQDEHT-kNHf-UjRm-rmJ3-OUYx-9G1t-aVskI1
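The vgdisplay figures are easy to cross-check: the VG size is simply Total PE times the PE size (4 MB extents, as shown above), and likewise for the free space. A quick arithmetic check with awk, no LVM access needed:

```shell
# Cross-check vgdisplay: PE count x PE size (4 MB), converted to GB.
awk 'BEGIN { printf "VG size: %.2f GB\n", 40956 * 4 / 1024 }'
awk 'BEGIN { printf "Free   : %.2f GB\n", 29052 * 4 / 1024 }'
```

Both results match the VG Size and Free PE / Size lines above.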

lvdisplay

server1:~# lvdisplay
  --- Logical volume ---
  LV Name                /dev/fileserver/share
  VG Name                fileserver
  LV UUID                bcn3Oi-vW3p-WoyX-QlF2-xEtz-uz7Z-4DllYN
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                40.00 GB
  Current LE             10240
  Segments               2
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/fileserver/backup
  VG Name                fileserver
  LV UUID                vfKVnU-gFXB-C6hE-1L4g-il6U-78EE-N8Sni8
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                5.00 GB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:1

  --- Logical volume ---
  LV Name                /dev/fileserver/media
  VG Name                fileserver
  LV UUID                H1gagh-wTwH-Og0S-cJNQ-BgX1-zGlM-LwLVzE
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                1.50 GB
  Current LE             384
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:2

If your outputs look similar, you have successfully replaced your small hard disks with bigger ones.

Now that we have more disk space (2 × 23.29 GB + 2 × 56.71 GB = 160 GB), we could enlarge our logical volumes. By now you know how to enlarge ext3 and ReiserFS partitions, so let's now enlarge our backup logical volume, which uses xfs:

lvextend -L10G /dev/fileserver/backup

server1:~# lvextend -L10G /dev/fileserver/backup
  Extending logical volume backup to 10.00 GB
  Logical volume backup successfully resized

To enlarge the xfs filesystem, we run

xfs_growfs /dev/fileserver/backup

server1:~# xfs_growfs /dev/fileserver/backup
meta-data=/dev/fileserver/backup isize=256    agcount=8, agsize=163840 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=1310720, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=2560, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
data blocks changed from 1310720 to 2621440
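The last line confirms the resize: the data block count went from 1310720 to 2621440, and with the 4096-byte block size shown in the output that is exactly the jump from 5 GB to 10 GB. You can verify the arithmetic with awk:

```shell
# data blocks x 4096-byte block size, converted to GB (1024^3 bytes).
awk 'BEGIN { print 1310720 * 4096 / 1024^3 }'   # size before growing
awk 'BEGIN { print 2621440 * 4096 / 1024^3 }'   # size after growing
```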

The output of

df -h

should now look like this:

server1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              19G  666M   17G   4% /
tmpfs                  78M     0   78M   0% /lib/init/rw
udev                   10M  116K  9.9M   2% /dev
tmpfs                  78M     0   78M   0% /dev/shm
/dev/sda1             137M   17M  114M  13% /boot
/dev/mapper/fileserver-share
                       40G  177M   38G   1% /var/share
/dev/mapper/fileserver-backup
                       10G  272K   10G   1% /var/backup
/dev/mapper/fileserver-media
                      1.5G   33M  1.5G   3% /var/media

That's it! If you've made it this far, you should now be comfortable with LVM and with LVM on RAID.

 
