A Beginner's Guide To LVM - Page 7

Afterwards we prepare /dev/md0 and /dev/md1 for LVM:

pvcreate /dev/md0 /dev/md1
server1:~# pvcreate /dev/md0 /dev/md1
  Physical volume "/dev/md0" successfully created
  Physical volume "/dev/md1" successfully created
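
If you would like to double-check that both new physical volumes were registered, pvs prints a compact one-line summary per physical volume:

pvs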

Then we extend our fileserver volume group:

vgextend fileserver /dev/md0 /dev/md1
server1:~# vgextend fileserver /dev/md0 /dev/md1
  Volume group "fileserver" successfully extended
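
To confirm the extension you can run vgs, which should now report four physical volumes (#PV) in the fileserver volume group:

vgs fileserver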

The outputs of

pvdisplay

and

vgdisplay

should look like this:

server1:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               fileserver
  PV Size               23.29 GB / not usable 0
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              5961
  Free PE               0
  Allocated PE          5961
  PV UUID               USDJyG-VDM2-r406-OjQo-h3eb-c9Mp-4nvnvu

  --- Physical volume ---
  PV Name               /dev/sdd1
  VG Name               fileserver
  PV Size               23.29 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              5961
  Free PE               146
  Allocated PE          5815
  PV UUID               qdEB5d-389d-O5UA-Kbwv-mn1y-74FY-4zublN

  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               fileserver
  PV Size               23.29 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              5961
  Free PE               5961
  Allocated PE          0
  PV UUID               7JHUXF-1R2p-OjbJ-X1OT-uaeg-gWRx-H6zx3P

  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               fileserver
  PV Size               23.29 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              5961
  Free PE               5961
  Allocated PE          0
  PV UUID               pwQ5AJ-RwVK-EebA-0Z13-d27d-2IdP-HqT5RW
server1:~# vgdisplay
  --- Volume group ---
  VG Name               fileserver
  System ID
  Format                lvm2
  Metadata Areas        4
  Metadata Sequence No  14
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                4
  Act PV                4
  VG Size               93.14 GB
  PE Size               4.00 MB
  Total PE              23844
  Alloc PE / Size       11776 / 46.00 GB
  Free  PE / Size       12068 / 47.14 GB
  VG UUID               dQDEHT-kNHf-UjRm-rmJ3-OUYx-9G1t-aVskI1

Now we move the contents of /dev/sdb1 to /dev/md0 and the contents of /dev/sdd1 to /dev/md1; then we remove /dev/sdb1 and /dev/sdd1 from LVM:

pvmove /dev/sdb1 /dev/md0
pvmove /dev/sdd1 /dev/md1
vgreduce fileserver /dev/sdb1 /dev/sdd1
pvremove /dev/sdb1 /dev/sdd1
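
Note that pvmove copies every allocated extent to the new physical volume, so it can take quite a while; it prints its progress periodically while it runs. If you want more frequent progress reports, you can set the report interval in seconds with the -i option, for example:

pvmove -i 10 /dev/sdb1 /dev/md0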

Now only /dev/md0 and /dev/md1 should be left as physical volumes:

pvdisplay
server1:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               fileserver
  PV Size               23.29 GB / not usable 0
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              5961
  Free PE               0
  Allocated PE          5961
  PV UUID               7JHUXF-1R2p-OjbJ-X1OT-uaeg-gWRx-H6zx3P

  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               fileserver
  PV Size               23.29 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              5961
  Free PE               146
  Allocated PE          5815
  PV UUID               pwQ5AJ-RwVK-EebA-0Z13-d27d-2IdP-HqT5RW
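
You can cross-check this with pvscan, which lists all physical volumes known to LVM; /dev/sdb1 and /dev/sdd1 should no longer appear in its output:

pvscan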

Now we change the partition type of /dev/sdb1 to fd (Linux RAID autodetect):

fdisk /dev/sdb

server1:~# fdisk /dev/sdb

The number of cylinders for this disk is set to 32635.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help):
 <-- m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help):
 <-- t
Selected partition 1
Hex code (type L to list codes):
 <-- fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help):
 <-- w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Do the same with /dev/sdd1:

fdisk /dev/sdd
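
Afterwards you can verify that both partitions now have the correct type by printing the partition tables; the Id column should read fd and the System column Linux raid autodetect:

fdisk -l /dev/sdb
fdisk -l /dev/sdd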

Next add /dev/sdb1 to /dev/md0 and /dev/sdd1 to /dev/md1:

mdadm --manage /dev/md0 --add /dev/sdb1
server1:~# mdadm --manage /dev/md0 --add /dev/sdb1
mdadm: added /dev/sdb1
mdadm --manage /dev/md1 --add /dev/sdd1
server1:~# mdadm --manage /dev/md1 --add /dev/sdd1
mdadm: added /dev/sdd1
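
If you want a more detailed view of an array than /proc/mdstat offers, mdadm --detail shows its state, its member devices, and the rebuild progress:

mdadm --detail /dev/md0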

Now the two RAID arrays will be synchronized. This will take some time; you can check with

cat /proc/mdstat

whether the synchronization has finished. While it is still running, the output looks like this:

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md1 : active raid1 sdd1[2] sde1[0]
      24418688 blocks [2/1] [U_]
      [=>...................]  recovery =  6.4% (1586560/24418688) finish=1.9min speed=198320K/sec

md0 : active raid1 sdb1[2] sdc1[0]
      24418688 blocks [2/1] [U_]
      [==>..................]  recovery = 10.5% (2587264/24418688) finish=2.8min speed=129363K/sec

unused devices: <none>

and like this when the process is finished:

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md1 : active raid1 sdd1[1] sde1[0]
      24418688 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sdc1[0]
      24418688 blocks [2/2] [UU]

unused devices: <none>
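
Instead of re-running cat /proc/mdstat by hand, you can also follow the synchronization live with watch, which re-executes the command every two seconds until you press CTRL+C:

watch cat /proc/mdstat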

If you have a look at PV Size in the output of

pvdisplay

you will see that 2 * 23.29GB = 46.58GB are available, but only 40GB (share) + 5GB (backup) + 1GB (media) = 46GB are in use, which means we could extend one of our logical volumes by about 0.5GB. I've already shown how to extend an ext3 logical volume (share), so we will now resize media, which uses reiserfs. reiserfs filesystems can be resized without unmounting:

lvextend -L1.5G /dev/fileserver/media
server1:~# lvextend -L1.5G /dev/fileserver/media
  Extending logical volume media to 1.50 GB
  Logical volume media successfully resized
resize_reiserfs /dev/fileserver/media
server1:~# resize_reiserfs /dev/fileserver/media
resize_reiserfs 3.6.19 (2003 www.namesys.com)

resize_reiserfs: On-line resizing finished successfully.
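
To confirm the new size at the LVM level, lvs prints a one-line summary per logical volume; media should now be listed with a size of 1.50GB:

lvs fileserver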

The output of

df -h

looks like this:

server1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              19G  666M   17G   4% /
tmpfs                  78M     0   78M   0% /lib/init/rw
udev                   10M   92K   10M   1% /dev
tmpfs                  78M     0   78M   0% /dev/shm
/dev/sda1             137M   17M  114M  13% /boot
/dev/mapper/fileserver-share
                       40G  177M   38G   1% /var/share
/dev/mapper/fileserver-backup
                      5.0G  144K  5.0G   1% /var/backup
/dev/mapper/fileserver-media
                      1.5G   33M  1.5G   3% /var/media

If we want our logical volumes to be mounted automatically at boot time, we must modify /etc/fstab again (like in chapter 3):

mv /etc/fstab /etc/fstab_orig
cat /dev/null > /etc/fstab
vi /etc/fstab

Put the following into it:

# /etc/fstab: static file system information.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
/dev/sda2       /               ext3    defaults,errors=remount-ro 0       1
/dev/sda1       /boot           ext3    defaults        0       2
/dev/hdc        /media/cdrom0   udf,iso9660 user,noauto     0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto  0       0
/dev/fileserver/share   /var/share     ext3       rw,noatime    0 0
/dev/fileserver/backup    /var/backup      xfs        rw,noatime    0 0
/dev/fileserver/media    /var/media      reiserfs   rw,noatime    0 0

If you compare it to our backup of the original file, /etc/fstab_orig, you will notice that we added the lines:

/dev/fileserver/share   /var/share     ext3       rw,noatime    0 0
/dev/fileserver/backup    /var/backup      xfs        rw,noatime    0 0
/dev/fileserver/media    /var/media      reiserfs   rw,noatime    0 0
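
If you want to test the new entries before rebooting, you can unmount the three volumes and let mount read them back in from /etc/fstab; if df -h shows them again afterwards, the file is fine:

umount /var/share /var/backup /var/media
mount -a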

Now we reboot the system:

shutdown -r now

After the system has come up again, run

df -h

again. It should still show our logical volumes in the output:

server1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              19G  666M   17G   4% /
tmpfs                  78M     0   78M   0% /lib/init/rw
udev                   10M  100K   10M   1% /dev
tmpfs                  78M     0   78M   0% /dev/shm
/dev/sda1             137M   17M  114M  13% /boot
/dev/mapper/fileserver-share
                       40G  177M   38G   1% /var/share
/dev/mapper/fileserver-backup
                      5.0G  144K  5.0G   1% /var/backup
/dev/mapper/fileserver-media
                      1.5G   33M  1.5G   3% /var/media

Now we are finished with our LVM on RAID1 setup.
