A Beginner's Guide To LVM - Page 8

8 Replacing The Hard Disks With Bigger Ones

We are currently using four hard disks with a size of 25GB each (at least we're treating them as if that were their size). Now let's assume this isn't enough anymore, and we need more space in our RAID setup. Therefore we will replace our 25GB hard disks with 80GB ones (in fact we will keep using the current hard disks, but now use their full capacity - in real life you would replace your old, small hard disks with new, bigger ones).

The procedure is as follows: first we remove /dev/sdb and /dev/sdd from the RAID arrays, replace them with bigger hard disks, put them back into the RAID arrays, and then we do the same again with /dev/sdc and /dev/sde.
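
Before we fail the first disk, it's worth checking that both arrays are currently clean, because removing a member from an already degraded mirror would destroy it. An optional sanity check could look like this:

mdadm --detail /dev/md0 | grep 'State :'
mdadm --detail /dev/md1 | grep 'State :'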

First we mark /dev/sdb1 as failed:

mdadm --manage /dev/md0 --fail /dev/sdb1

server1:~# mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0

The output of

cat /proc/mdstat

now looks like this:

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sdc1[0] sdb1[2](F)
      24418688 blocks [2/1] [U_]

md1 : active raid1 sde1[0] sdd1[1]
      24418688 blocks [2/2] [UU]

unused devices: <none>

Then we remove /dev/sdb1 from the RAID array /dev/md0:

mdadm --manage /dev/md0 --remove /dev/sdb1

server1:~# mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm: hot removed /dev/sdb1

cat /proc/mdstat

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sdc1[0]
      24418688 blocks [2/1] [U_]

md1 : active raid1 sde1[0] sdd1[1]
      24418688 blocks [2/2] [UU]

unused devices: <none>

Now we do the same with /dev/sdd1:

mdadm --manage /dev/md1 --fail /dev/sdd1

server1:~# mdadm --manage /dev/md1 --fail /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md1

cat /proc/mdstat

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sdc1[0]
      24418688 blocks [2/1] [U_]

md1 : active raid1 sde1[0] sdd1[2](F)
      24418688 blocks [2/1] [U_]

unused devices: <none>

mdadm --manage /dev/md1 --remove /dev/sdd1

server1:~# mdadm --manage /dev/md1 --remove /dev/sdd1
mdadm: hot removed /dev/sdd1

cat /proc/mdstat

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sdc1[0]
      24418688 blocks [2/1] [U_]

md1 : active raid1 sde1[0]
      24418688 blocks [2/1] [U_]

unused devices: <none>

On a real system you would now shut it down, pull out the 25GB /dev/sdb and /dev/sdd and replace them with 80GB ones. As I said before, we don't have to do this because all hard disks already have a capacity of 80GB.

Next we must partition /dev/sdb and /dev/sdd. On each disk we need a 25GB partition (/dev/sdb1 and /dev/sdd1 respectively), type fd (Linux RAID autodetect), with the same settings as on the old hard disks, plus a second partition of type fd (/dev/sdb2 and /dev/sdd2 respectively) that covers the rest of the disk. As /dev/sdb1 and /dev/sdd1 are still present on our hard disks, we only have to create /dev/sdb2 and /dev/sdd2 in this particular example.
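
On genuinely new disks you would have to recreate the first partition as well; in that case you could copy the existing layout from one of the surviving disks with sfdisk instead of walking through fdisk by hand - a sketch only, not needed in our example:

sfdisk -d /dev/sdc | sfdisk /dev/sdb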

fdisk /dev/sdb

server1:~# fdisk /dev/sdb

The number of cylinders for this disk is set to 10443.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): <-- p

Disk /dev/sdb: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        3040    24418768+  fd  Linux raid autodetect

Command (m for help): <-- n
Command action
   e   extended
   p   primary partition (1-4)
<-- p
Partition number (1-4): <-- 2
First cylinder (3041-10443, default 3041): <-- <ENTER>
Using default value 3041
Last cylinder or +size or +sizeM or +sizeK (3041-10443, default 10443): <-- <ENTER>
Using default value 10443

Command (m for help): <-- t
Partition number (1-4): <-- 2
Hex code (type L to list codes): <-- fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): <-- w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
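
If fdisk had complained that the kernel will keep using the old partition table (it shouldn't here, since /dev/sdb is not part of any array right now), you could force a re-read without rebooting - again just an optional sketch:

blockdev --rereadpt /dev/sdb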

Do the same for /dev/sdd:

fdisk /dev/sdd

The output of

fdisk -l

now looks like this:

server1:~# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          18      144553+  83  Linux
/dev/sda2              19        2450    19535040   83  Linux
/dev/sda4            2451        2610     1285200   82  Linux swap / Solaris

Disk /dev/sdb: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        3040    24418768+  fd  Linux raid autodetect
/dev/sdb2            3041       10443    59464597+  fd  Linux raid autodetect

Disk /dev/sdc: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        3040    24418768+  fd  Linux raid autodetect

Disk /dev/sdd: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1        3040    24418768+  fd  Linux raid autodetect
/dev/sdd2            3041       10443    59464597+  fd  Linux raid autodetect

Disk /dev/sde: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1        3040    24418768+  fd  Linux raid autodetect

Disk /dev/sdf: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1        3040    24418768+  8e  Linux LVM

Disk /dev/md1: 25.0 GB, 25004736512 bytes
2 heads, 4 sectors/track, 6104672 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md0: 25.0 GB, 25004736512 bytes
2 heads, 4 sectors/track, 6104672 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table

Now we add /dev/sdb1 to /dev/md0 again and /dev/sdd1 to /dev/md1:

mdadm --manage /dev/md0 --add /dev/sdb1

server1:~# mdadm --manage /dev/md0 --add /dev/sdb1
mdadm: re-added /dev/sdb1

mdadm --manage /dev/md1 --add /dev/sdd1

server1:~# mdadm --manage /dev/md1 --add /dev/sdd1
mdadm: re-added /dev/sdd1
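
If you prefer mdadm's own view over /proc/mdstat, these optional commands show the state of each array, including the rebuilding members:

mdadm --detail /dev/md0
mdadm --detail /dev/md1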

Now the contents of both RAID arrays will be synchronized. We must wait until this is finished before we can go on. We can check the status of the synchronization with

cat /proc/mdstat

The output looks like this during synchronization:

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sdb1[1] sdc1[0]
      24418688 blocks [2/1] [U_]
      [=>...................]  recovery =  9.9% (2423168/24418688) finish=2.8min speed=127535K/sec

md1 : active raid1 sdd1[1] sde1[0]
      24418688 blocks [2/1] [U_]
      [=>...................]  recovery =  6.4% (1572096/24418688) finish=1.9min speed=196512K/sec

unused devices: <none>

and like this when it's finished:

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sdb1[1] sdc1[0]
      24418688 blocks [2/2] [UU]

md1 : active raid1 sdd1[1] sde1[0]
      24418688 blocks [2/2] [UU]

unused devices: <none>
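
By the way, instead of re-running cat /proc/mdstat by hand during the rebuild, you could let it refresh automatically or simply block until the recovery is done - both are mere conveniences:

watch -n 2 cat /proc/mdstat
mdadm --wait /dev/md0 /dev/md1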


Comments

From: at: 2007-01-16 12:59:36


First of all I shall congratulate you on the great guide.

I'd rather call it an "Introduction Guide" than a "Beginner's Guide"; nevertheless it's very useful.

Instead of having LVM on top of those 2 RAID-1 devices, and considering the disks' capacity, you can use a 4-disk RAID-5 setup and thus have 25% more usable space.

This will make the process more complex, but you will be rewarded with 80GB more ;)

This must be done after you replace the first 2 hard drives.

  • Initialize only one disk, let's say /dev/sdc

    • pvcreate /dev/sdc

  • Add the 80GB disk to the volume group

    • vgextend fileserver /dev/sdc

  • pvmove all volumes from the md[01] devices to the 80GB disk

    • pvmove /dev/md0 /dev/sdc
    • pvmove /dev/md1 /dev/sdc

      • note: this is very slow; better use -v for periodic updates

  • Remove the md devices from the volume group

    • vgreduce fileserver /dev/md0 /dev/md1

  • Reboot and replace the disks

  • Initialize the new disks for RAID

    • fdisk /dev/sdb
    • fdisk /dev/sdd
    • fdisk /dev/sde

  • Create the RAID-5 with one missing device

    • mdadm --create /dev/md0 --auto=yes --level=5 --raid-devices=4 /dev/sdb1 /dev/sdd1 /dev/sde1 missing

  • Add the new md0 device to the volume group

    • pvcreate /dev/md0 && vgextend fileserver /dev/md0

  • Move the data off the 80GB disk

    • pvmove /dev/sdc

  • (wait)

  • Remove the 80GB disk from the volume group

    • vgreduce fileserver /dev/sdc

  • Initialize the disk for RAID

    • fdisk /dev/sdc and change the type to fd (Linux raid autodetect)

  • Add the disk to the RAID md0

    • mdadm --manage /dev/md0 --add /dev/sdc1

  • Wait for full sync

    • cat /proc/mdstat

  • And you now have a 240GB RAID-5 volume

    • df -h

A 4-disk RAID-5 doesn't perform as well as the RAID-1 setup, but that's the trade-off.


José Borges Ferreira 

From: at: 2007-01-16 13:06:55

Be aware that when you initialize a device into a volume group or into an md RAID array, some unique IDs are assigned and written to the first sector of that device. When you do some testing in a virtual environment such as VMware, you may run into this problem. So as part of the initialization process you'd better do a


#dd if=/dev/zero of=/dev/diskname bs=1k count=1
#blockdev --rereadpt /dev/sdc


before everything else.


 


José Borges Ferreira 

From: at: 2007-01-18 09:55:22

Source /dev/sda, destination /dev/sdb


 sfdisk -d /dev/sda|sfdisk /dev/sdb

From: at: 2007-01-19 17:00:40

I'm very sorry if I overlooked a note or a posting on this, but how do I set the CLI keyboard layout to qwerty (us 101/104) on Debian Etch.


I immediately ran into problems; it seems your VMware image was made using a German keyboard layout (?)


 


Thanx! 

From: admin at: 2007-01-20 21:11:31

Run


apt-get install console-data console-tools debconf
dpkg-reconfigure console-data


or connect to the virtual machine with an SSH client such as PuTTY. In PuTTY you use your client machine's keymap.

From: tonyg at: 2009-12-06 05:23:18

I just wanted to say THANK YOU for this resource.  I've been referring back to this article for the past 2 years; it's saved my butt, and my data, a few times now.  Thanks!!!

From: Sun_Blood at: 2011-02-16 18:36:25

Just one word. GREAT!


This was a perfect start for me to learn how to use LVM. Now I'll set up my new NAS =)

From: Anonymous at: 2011-08-30 15:48:07

Out of the 6 drives in the image, drives 3 and 4 appear to be corrupt in my VM VirtualBox Manager.

From: Mark at: 2012-10-14 12:18:44

What a great introduction to LVM!  Thank you so much for taking the trouble to put all this together.

From: lingeswaran at: 2013-08-14 19:00:07


Step-by-step tutorials are available on UnixArena:

http://www.unixarena.com/2013/08/how-to-install-lvm-on-linux-and-disk.html
http://www.unixarena.com/2013/08/linux-lvm-volume-group-operations.html
http://www.unixarena.com/2013/08/linux-lvm-volume-creation-operation.html

 

From: Ramesh at: 2013-11-06 13:29:05

Thank you very much for the excellent article. I appreciate your effort.

From: Anonymous at: 2013-11-13 23:14:48

Thank you for this guide.  I just ran into LVM at work and this is extremely helpful.
I am trying out the VM you provided for practice.  The login info on HowtoForge is incorrect:

the user is: root
password: howtoforge

 

 

From: Anonymous at: 2014-01-22 16:20:13

I wanted to say thank you for the great and useful guide. We need more articles like this on the internet. Well done!!!

From: pointer2null at: 2014-12-21 22:17:39

I've just had a quick read of the tutorial and will run through it soon.


One thing I do notice is that you give very clear instructions on how to execute each stage, but no explanation of why it is being done (and, to a lesser degree, what is accomplished in each step).


 


Still, it's a valuable resource. :)

From: Anonymous at: 2014-12-25 09:11:43

Try EasyRSH on Google Play - it's a quick reference guide for Solaris, HP-UX, and Red Hat OSs.

From: Anonymous at: 2010-08-18 19:50:40

If you get this error, you'll need to "deregister" the partition table from the kernel.


 kpartx -d /dev/fileserver/films


 lvremove /dev/fileserver/films

From: Andre de Araujo at: 2013-12-03 21:45:09

The correct command is: #lvextend -L +1.5G /dev/fileserver/media

From: Adrian Varan at: 2014-02-08 21:00:40

"+" is optional (read the manual). If you use +1.5G then the 1.5G is added to the actual size (1.5+1=2.5G), without "+" the 1.5G represents the new absolute value of the logical volume.

From: Anonymous at: 2010-05-03 14:31:21

Great guide!


Thanks a lot - helped me out :-)

From: Navin Pathak at: 2011-03-17 12:55:15


Dear Friends,

I started learning Linux a few days ago, and these days I am learning LVM. I searched a lot of documents and finally chose your site; working through your guide today, I have got as far as the fstab entry for the logical volume, and I feel very comfortable with your documents.

Thanks a lot to all of you who spent time creating such a nice, practical LVM guide.

My one suggestion: please explain the terms PE, LE, and metadata.

Again, thanks.

Regards,
Navin Pathak
TTSL India.


From: SN at: 2011-04-14 07:54:26

There's a Zimbra backup script based on LVM; I had no idea about LVM, so I searched and found this amazing topic. Thanks so much for your work.

Regards,

SN

From: jonathan young at: 2012-01-29 05:39:55

This guide is so idiot-proof and full of explanations.  Thank you so much, you saved my bacon. I am a beginning Linux administrator (as a sideline to being a web architect) and LVM is brand new to me; I was scared to resize LVs and now I'm like "wow, this is easy."

Thank you so much!

 

 

From: acname at: 2012-12-09 09:26:45

perfect manual. thanx a lot

From: Anonymous at: 2014-05-05 00:16:53

You need to not use fdisk if the drive is over 2 TB, though.

From: Sebastian at: 2014-12-15 18:35:55

Thank you very much for this thorough tutorial. Helped a lot!

From: Robert at: 2008-11-06 23:49:50

Bloody well excellent lvm2 guide.


Thank You.

From: Chris at: 2009-09-26 03:58:05

Hi


nice guide, and the vmware image is a great idea.


in your first RAID example, it looks like you've missed some of the pvmove arguments (it just has the source volume, not the dest volume).


cheers


 


 

From: ilayaraja at: 2010-02-25 12:41:52

Very, very useful for beginners.

From: Anonymous at: 2011-08-09 06:41:12

Apparently not - I was confused about that too at first, but actually working through the tutorial confirmed that this is not the case.

A quick check of the LVM docs reveals that pvmove with no arguments (other than the device) moves all the data on the device to free space in the volume group, wherever it can find it.

It's basically "move this data to anywhere else" as opposed to "move this data to this particular place" which is what we were doing with the previous uses of pvmove.

From: oldtimer_mando at: 2012-08-17 04:59:35

Awesome!  Thanks!

From: Imran at: 2013-06-04 12:49:54

Really a very nice and useful guide for beginners, thank you so much.


 


 

From: Sajid at: 2014-07-18 22:35:29

Excellent details and easy to follow, great work!

From: MTH at: 2009-10-14 02:40:08

Fantastic guide, covers many scenarios (adding drives, removing drives, resizing, etc).  I find myself always coming back to double check my LVM setups.  A+

From: at: 2007-01-19 06:16:45

Great howto, Falko.


I have needed this in the past and I have already bookmarked it for the next time.  I just don't work with this stuff enough to memorize it.


You have a real talent for technical writing. 


Thanks,


From: at: 2008-09-13 06:30:55

This was exactly what I needed to get my home file server running on LVM.  I will need this again when I add disks and again when I move everything over to raid.

From: Tormod at: 2008-10-05 11:13:55

Excellent howto! I just noticed that the example fstab entries look wrong (in both examples): /dev/fileserver/share versus /dev/mapper/fileserver-share

From: Tormod at: 2008-10-06 19:35:07

Well, scratch that. It is correct anyway, silly me just had to try it out to see: /dev/fileserver/share is a soft link to /dev/mapper/fileserver-share

From: Anonymous at: 2008-10-05 07:34:30

I have about 2 years of experience using RAID and LVM, and I must say - in all of the literature and documentation I've ever encountered, _none_ of it ever came close to making things so simple and clear as you have just done. You've articulated the ideas of logical volumes, volume groups, and physical volumes well, and have provided concise examples.


Well done.

From: Craig at: 2011-05-26 16:39:00

One of the best howtos I have come across -- wish they were all this good.

From: Vahid Pazirandeh at: 2011-07-01 18:50:57

Wow. Very well written howto. Well thought out examples. Thanks a lot to all who were involved.

I agree with an earlier comment - I have used Linux for many years and have read through lots of tutorials. This was so easy to read! :)

From: Anonymous at: 2011-06-30 04:11:11

In my 15 years with Linux I have never, ever seen such a good howto. Very easy to follow and understand. In 30 minutes my confidence level with LVM/RAID was boosted from 0 to 80.

 I wish there were more howtos like this!

From: csg at: 2011-09-16 12:46:18

Congratulations on this hard work, very clear and concise.

Looking forward to having it running.

Thanks for your work.

From: Gianluca at: 2012-01-01 21:15:07

Excellent HowTo; please complete this guide with LVM snapshot examples.

From: Gisli at: 2012-09-04 15:32:04

I agree with everyone here. Best howto I've come across! Everything right to the point and with examples. Nice work!!

From: Anonymous at: 2014-02-17 15:08:20

Excellent tutorial that explains how to manage disks on Linux platforms. Thanks for your effort in preparing this tutorial.