Version 1.0
Author: Falko Timme
Last updated: 2015-02-09
This guide shows how to work with LVM (Logical Volume Management) on Linux. It also describes how to use LVM together with RAID1 in a separate chapter. As LVM is a rather abstract topic, this article comes with a Debian Etch VMware image that you can download and start; on that Debian Etch system you can run all the commands I execute here and compare your results with mine. Through this practical approach you should become familiar with LVM very quickly.
However, I do not issue any guarantee that this tutorial will work for you!
1 Preliminary Note
This tutorial was inspired by two articles I read:
- http://www.linuxdevcenter.com/pub/a/linux/2006/04/27/managing-disk-space-with-lvm.html
- http://www.debian-administration.org/articles/410
These are great articles, but hard to understand if you've never worked with LVM before. That's why I have created this Debian Etch VMware image that you can download and run in VMware Server or VMware Player (see https://www.howtoforge.com/import_vmware_images to learn how to do that).
I have already installed all the tools we will need during the course of this guide on the Debian Etch system (by running
apt-get install lvm2 dmsetup mdadm reiserfsprogs xfsprogs
) so you don't need to worry about that.
The Debian Etch system's network is configured through DHCP, so you don't have to worry about conflicting IP addresses. The root password is howtoforge. You can also connect to that system with an SSH client like PuTTY. To find out the IP address of the Debian Etch system, run
ifconfig
The system has six SCSI hard disks, /dev/sda - /dev/sdf. /dev/sda is used for the Debian Etch system itself, while we will use /dev/sdb - /dev/sdf for LVM and RAID. /dev/sdb - /dev/sdf each have 80GB of disk space. In the beginning we will act as if each had only 25GB of disk space (i.e., we will use only 25GB on each of them), and in the course of the tutorial we will "replace" our 25GB hard disks with 80GB ones, demonstrating how you can swap small hard disks for bigger ones in LVM.
The article http://www.linuxdevcenter.com/pub/a/linux/2006/04/27/managing-disk-space-with-lvm.html uses hard disks of 250GB and 800GB, but some commands such as pvmove take a very long time with disks of that size, which is why I decided to use hard disks of 25GB and 80GB (that is enough to understand how LVM works).
1.1 Summary
Download the Debian Etch VMware image (~310MB) and start it as described at https://www.howtoforge.com/import_vmware_images. Log in as root with the password howtoforge.
2 LVM Layout
Basically LVM looks like this:
You have one or more physical volumes (/dev/sdb1 - /dev/sde1 in our example), and on these physical volumes you create one or more volume groups (e.g. fileserver), and in each volume group you can create one or more logical volumes. If you use multiple physical volumes, each logical volume can be bigger than one of the underlying physical volumes (but of course the sum of the logical volumes cannot exceed the total space offered by the physical volumes).
It is good practice not to allocate the full space to logical volumes, but to leave some space unused; that way you can enlarge one or more logical volumes later on if you feel the need for it.
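To illustrate why that free space is useful: enlarging a logical volume later is a two-step operation - first you grow the volume, then the filesystem on it. A minimal sketch, assuming an ext3 filesystem on one of the logical volumes from our upcoming example (the size is just a placeholder; the exact procedure for each filesystem type is shown later in this tutorial):

lvextend -L +5G /dev/fileserver/share   # grow the logical volume by 5GB (placeholder size)
resize2fs /dev/fileserver/share         # then grow the ext3 filesystem to fill the new space

Depending on your kernel and filesystem, you may have to unmount the volume before resizing.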
In this example we will create a volume group called fileserver, and in it the logical volumes /dev/fileserver/share, /dev/fileserver/backup, and /dev/fileserver/media. Together they will use only about half of the space offered by our physical volumes for now; that way we can switch to RAID1 later on (also described in this tutorial).
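Expressed as commands, the three layers map onto three tools: pvcreate for physical volumes, vgcreate for volume groups, and lvcreate for logical volumes. Here is a quick sketch of the whole stack (the sizes are placeholders; we will run the real commands step by step in the following chapters):

pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1              # layer 1: mark the partitions as physical volumes
vgcreate fileserver /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1   # layer 2: pool them into the volume group
lvcreate --name share --size 40G fileserver                   # layer 3: carve logical volumes out of the pool
lvcreate --name backup --size 5G fileserver
lvcreate --name media --size 1G fileserver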
3 Our First LVM Setup
Let's find out about our hard disks:
fdisk -l
The output looks like this:
server1:~# fdisk -l
Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *            1          18      144553+  83  Linux
/dev/sda2               19        2450    19535040   83  Linux
/dev/sda4             2451        2610     1285200   82  Linux swap / Solaris
Disk /dev/sdb: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sde doesn't contain a valid partition table
Disk /dev/sdf: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdf doesn't contain a valid partition table
There are no partitions yet on /dev/sdb - /dev/sdf. We will create the partitions /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1 and leave /dev/sdf untouched for now. For the time being we act as if our hard disks had only 25GB of space instead of 80GB; therefore we assign 25GB to /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1:
fdisk /dev/sdb
server1:~# fdisk /dev/sdb
The number of cylinders for this disk is set to 10443.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): <-- m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)
Command (m for help): <-- n
Command action
   e   extended
   p   primary partition (1-4)
<-- p
Partition number (1-4): <-- 1
First cylinder (1-10443, default 1): <-- <ENTER>
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-10443, default 10443): <-- +25000M
Command (m for help): <-- t
Selected partition 1
Hex code (type L to list codes): <-- L
 0  Empty           1e  Hidden W95 FAT1 80  Old Minix       be  Solaris boot
 1  FAT12           24  NEC DOS         81  Minix / old Lin bf  Solaris
 2  XENIX root      39  Plan 9          82  Linux swap / So c1  DRDOS/sec (FAT-
 3  XENIX usr       3c  PartitionMagic  83  Linux           c4  DRDOS/sec (FAT-
 4  FAT16 <32M      40  Venix 80286     84  OS/2 hidden C:  c6  DRDOS/sec (FAT-
 5  Extended        41  PPC PReP Boot   85  Linux extended  c7  Syrinx
 6  FAT16           42  SFS             86  NTFS volume set da  Non-FS data
 7  HPFS/NTFS       4d  QNX4.x          87  NTFS volume set db  CP/M / CTOS / .
 8  AIX             4e  QNX4.x 2nd part 88  Linux plaintext de  Dell Utility
 9  AIX bootable    4f  QNX4.x 3rd part 8e  Linux LVM       df  BootIt
 a  OS/2 Boot Manag 50  OnTrack DM      93  Amoeba          e1  DOS access
 b  W95 FAT32       51  OnTrack DM6 Aux 94  Amoeba BBT      e3  DOS R/O
 c  W95 FAT32 (LBA) 52  CP/M            9f  BSD/OS          e4  SpeedStor
 e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a0  IBM Thinkpad hi eb  BeOS fs
 f  W95 Ext'd (LBA) 54  OnTrackDM6      a5  FreeBSD         ee  EFI GPT
10  OPUS            55  EZ-Drive        a6  OpenBSD         ef  EFI (FAT-12/16/
11  Hidden FAT12    56  Golden Bow      a7  NeXTSTEP        f0  Linux/PA-RISC b
12  Compaq diagnost 5c  Priam Edisk     a8  Darwin UFS      f1  SpeedStor
14  Hidden FAT16 <3 61  SpeedStor       a9  NetBSD          f4  SpeedStor
16  Hidden FAT16    63  GNU HURD or Sys ab  Darwin boot     f2  DOS secondary
17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fd  Linux raid auto
18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fe  LANstep
1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid ff  BBT
1c  Hidden W95 FAT3 75  PC/IX
Hex code (type L to list codes): <-- 8e
Changed system type of partition 1 to 8e (Linux LVM)
Command (m for help): <-- w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Now we do the same for the hard disks /dev/sdc - /dev/sde:
fdisk /dev/sdc
fdisk /dev/sdd
fdisk /dev/sde
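If you don't want to walk through the same fdisk dialog three times, you can also feed the answers to fdisk from the shell. This is just a sketch of the idea (the input mirrors exactly the answers we gave for /dev/sdb above; try it on a test system first):

for disk in /dev/sdc /dev/sdd /dev/sde; do
  # n, p, 1, <ENTER> (default first cylinder), +25000M, t, 8e, w
  printf 'n\np\n1\n\n+25000M\nt\n8e\nw\n' | fdisk $disk
done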
Then run
fdisk -l
again. The output should look like this:
server1:~# fdisk -l
Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *            1          18      144553+  83  Linux
/dev/sda2               19        2450    19535040   83  Linux
/dev/sda4             2451        2610     1285200   82  Linux swap / Solaris
Disk /dev/sdb: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1                1        3040    24418768+  8e  Linux LVM
Disk /dev/sdc: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1                1        3040    24418768+  8e  Linux LVM
Disk /dev/sdd: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1                1        3040    24418768+  8e  Linux LVM
Disk /dev/sde: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sde1                1        3040    24418768+  8e  Linux LVM
Disk /dev/sdf: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdf doesn't contain a valid partition table
Now we prepare our new partitions for LVM:
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
server1:~# pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
Physical volume "/dev/sdb1" successfully created
Physical volume "/dev/sdc1" successfully created
Physical volume "/dev/sdd1" successfully created
Physical volume "/dev/sde1" successfully created
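If you want a quick confirmation that the LVM labels have actually been written to the partitions, pvscan (also part of the lvm2 package we installed in the beginning) should now list the four new physical volumes:

pvscan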
Let's revert the pvcreate for training purposes:
pvremove /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
server1:~# pvremove /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
Labels on physical volume "/dev/sdb1" successfully wiped
Labels on physical volume "/dev/sdc1" successfully wiped
Labels on physical volume "/dev/sdd1" successfully wiped
Labels on physical volume "/dev/sde1" successfully wiped
Then run
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
again:
server1:~# pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
Physical volume "/dev/sdb1" successfully created
Physical volume "/dev/sdc1" successfully created
Physical volume "/dev/sdd1" successfully created
Physical volume "/dev/sde1" successfully created
Now run
pvdisplay
to learn about the current state of your physical volumes:
server1:~# pvdisplay
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name
  PV Size               23.29 GB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               G8lu2L-Hij1-NVde-sOKc-OoVI-fadg-Jd1vyU

  --- NEW Physical volume ---
  PV Name               /dev/sdc1
  VG Name
  PV Size               23.29 GB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               40GJyh-IbsI-pzhn-TDRq-PQ3l-3ut0-AVSE4B

  --- NEW Physical volume ---
  PV Name               /dev/sdd1
  VG Name
  PV Size               23.29 GB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               4mU63D-4s26-uL00-r0pO-Q0hP-mvQR-2YJN5B

  --- NEW Physical volume ---
  PV Name               /dev/sde1
  VG Name
  PV Size               23.29 GB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               3upcZc-4eS2-h4r4-iBKK-gZJv-AYt3-EKdRK6
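By the way, if you prefer a compact one-line-per-volume summary over this rather verbose output, the lvm2 tools also provide pvs (and, analogously, vgs and lvs for volume groups and logical volumes):

pvs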