Set Up A Fully Encrypted Raid1 LVM System (Lenny)
Adding the second drive
Step 1: Copying the partition scheme
So, once everything is installed and you have a shiny new server before you, it's time to add the second drive. So far I have used an 8GB drive. After powering off, I added a new 16GB drive. However, that drive is not automatically synchronized; we first have to do a few things.
Before I start altering the partition table of the second drive, I first check the status of the array by issuing this command:
watch -n 6 cat /proc/mdstat
I get an output like this:
Every 6.0s: cat /proc/mdstat                          Sun Nov 30 19:15:15 2008

Personalities : [raid1]
md3 : active raid1 sda4[0]
      4208960 blocks [2/1] [U_]

md2 : active raid1 sda3[0]
      2931776 blocks [2/1] [U_]

md1 : active raid1 sda2[0]
      995904 blocks [2/1] [U_]

md0 : active raid1 sda1[0]
      248896 blocks [2/1] [U_]

unused devices: <none>
As you can see, each array currently uses only one of its two mirror devices ([2/1] [U_]). This means I can (and should) add another one to make sure that if one disk fails everything is still operational. Exit the monitoring with Ctrl-C.
I have to find out which hard drive is which (i.e., what device names they have been given). I issue:
fdisk -l
And I get this output:
test:~# fdisk -l

Disk /dev/sda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000d3f5d

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          31      248976   fd  Linux raid autodetect
/dev/sda2              32         155      996030   fd  Linux raid autodetect
/dev/sda3             156         520     2931862+  fd  Linux raid autodetect
/dev/sda4             521        1044     4209030   fd  Linux raid autodetect

Disk /dev/sdb: 17.1 GB, 17179869184 bytes
255 heads, 63 sectors/track, 2088 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/md0: 254 MB, 254869504 bytes
2 heads, 4 sectors/track, 62224 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table

Disk /dev/md1: 1019 MB, 1019805696 bytes
2 heads, 4 sectors/track, 248976 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x93b342d4

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md2: 3002 MB, 3002138624 bytes
2 heads, 4 sectors/track, 732944 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x08040000

Disk /dev/md2 doesn't contain a valid partition table

Disk /dev/md3: 4309 MB, 4309975040 bytes
2 heads, 4 sectors/track, 1052240 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x08040000

Disk /dev/md3 doesn't contain a valid partition table

Disk /dev/dm-0: 3001 MB, 3001085952 bytes
255 heads, 63 sectors/track, 364 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/dm-0 doesn't contain a valid partition table

Disk /dev/dm-1: 1019 MB, 1019805696 bytes
255 heads, 63 sectors/track, 123 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xb604b75d

Disk /dev/dm-1 doesn't contain a valid partition table

Disk /dev/dm-2: 4308 MB, 4308922368 bytes
255 heads, 63 sectors/track, 523 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/dm-2 doesn't contain a valid partition table

Disk /dev/dm-3: 4307 MB, 4307550208 bytes
255 heads, 63 sectors/track, 523 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/dm-3 doesn't contain a valid partition table
We can see that /dev/sda is the drive currently in use and /dev/sdb is the new drive I just added. As the new drive is larger (note: DO NOT USE A SMALLER DRIVE!), we will only use part of the space available on it. With RAID1 we cannot use more space than the smallest drive/partition anyway, but if the first drive fails later, you can replace it with a bigger one as well and then expand the space. So issue this command to copy the partition scheme from /dev/sda to /dev/sdb:
sfdisk -d /dev/sda | sfdisk /dev/sdb
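To verify that the partition scheme was copied correctly, you can list the new drive's partition table again (this is just a sanity check, not a required step):

fdisk -l /dev/sdb

The four partitions on /dev/sdb should now mirror those on /dev/sda, with the rest of the larger drive left unpartitioned for the moment.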
Step 2: Modifying the partition scheme
Now issue the following command to adjust the data partition. The other three partitions (/boot, swap, /) should have been made large enough during setup:
cfdisk /dev/sdb
Use the arrow keys to navigate around. Select partition 4 (sdb4) and then delete it.
Then create a new partition in the free space. Make it primary and let it use all space.
You will see that it creates a new "sdb4" whose type is "Linux", not "Linux raid autodetect". Select "Type" (while still having sdb4 selected) and enter FD as the file system type. Now you're back at the first screen and the partition type has been changed to "Linux raid autodetect".
Now select the "Write" option and confirm it with "yes". Then you can quit the cfdisk tool.
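As a side note: if you'd rather not change the type interactively, older sfdisk versions can also set the partition Id from the shell after the table has been written. A minimal sketch, assuming the new partition is /dev/sdb4:

sfdisk --change-id /dev/sdb 4 fd

This does the same thing as the "Type" step in cfdisk; check your sfdisk man page, as the option name has varied between versions.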
Step 3: Zeroing the superblock
Just to make sure, we zero out the superblock on each partition on the new drive:
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2
mdadm --zero-superblock /dev/sdb3
mdadm --zero-superblock /dev/sdb4
You will get errors on each of these commands, which is good if it's a new drive that has never been used in an array before: there simply is no superblock to zero out yet.
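If you want to double-check that a partition really carries no md superblock (an optional extra step), you can examine it:

mdadm --examine /dev/sdb1

This should report that no md superblock was detected on the device.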
Step 4: Adding the new partitions to the raid array
Now run those commands:
mdadm --add /dev/md0 /dev/sdb1
mdadm --add /dev/md1 /dev/sdb2
mdadm --add /dev/md2 /dev/sdb3
mdadm --add /dev/md3 /dev/sdb4
This will add the new partitions to the corresponding arrays and start syncing them. You can watch the progress with:
watch -n 6 cat /proc/mdstat
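While the arrays are rebuilding, /proc/mdstat shows a progress bar per device. The output looks roughly like this (the numbers here are only illustrative):

md3 : active raid1 sdb4[2] sda4[0]
      4208960 blocks [2/1] [U_]
      [==>..................]  recovery = 12.4% (522048/4208960) finish=2.9min speed=20882K/sec

Once all arrays show [2/2] [UU], the mirror is fully synchronized.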
Step 5: Adding grub to the new hard drive
As grub is installed only once and is not mirrored by the above commands, we have to add it to the second drive manually. Run:
grub
Then run the following at the grub prompt:
root (hd1,0)
setup (hd1)
The first command means that grub shall use the partition /dev/sdb1 as the /boot partition. The second command means that grub shall install itself into the boot sector of /dev/sdb. Grub starts counting hard drives and partitions at "0", so sda is hd0 and hence sdb is hd1.
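If you're ever unsure which grub device corresponds to which drive, you can let grub search for its own stage1 file from the same prompt. Since /boot is its own partition in this setup, the file sits at /grub/stage1 relative to that partition:

find /grub/stage1

Grub will print every partition containing that file, e.g. (hd0,0) and (hd1,0).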
Exit grub by entering "quit". You now have an encrypted raid1 setup with lvm, and the system can be run from either drive alone.
Expanding the LVM
In case your smaller hard disk fails and you add a bigger new one, you will want to expand the array to its maximum size. I'll briefly describe how this can be done with XFS. First, you run:
mdadm --grow /dev/md3 --size=max
Remember: /dev/md3 is our /data partition. With this command, the raid array expands to its maximum size. While the smaller hard drive was still in the array, the array could not use all the space on the bigger drive. After that, reboot the system (I know there are ways to do this without rebooting, but this just makes it a lot simpler).
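If you want to confirm that the array really grew (an optional check once the system is back up), look at the array details:

mdadm --detail /dev/md3

The "Array Size" line should now reflect the bigger drive's capacity.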
Once you have rebooted, run this command:
pvresize /dev/mapper/md3_crypt
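pvresize grows the LVM physical volume to fill the enlarged encrypted device. You can verify the new physical volume size with (again just a sanity check):

pvdisplay /dev/mapper/md3_crypt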
After that you have to find out by how much you can expand the size. Run this command:
vgdisplay -A
This will output various details about the volume group. You need to look for this line:
Free PE / Size xxxxxxx / yyyy GB
The value of "xxxxxxx" is important. Run the following command and replace "xxxxxxx" with the actual value. Also be sure to use the correct logical volume name. The one I created is "DATA-DATA_MD3".
lvextend -l +xxxxxxx /dev/mapper/DATA-DATA_MD3
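As an alternative, and assuming your LVM2 version supports percentage arguments (recent releases do), you can skip reading the Free PE value and simply hand all free space to the logical volume:

lvextend -l +100%FREE /dev/mapper/DATA-DATA_MD3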
If you're unsure about your logical volume name, run this command to list all device mapper entries:
ls /dev/mapper
Once you run that, the final step is to enlarge the actual filesystem. XFS makes this very easy:
xfs_growfs /data
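Note that xfs_growfs works on the mounted filesystem, so /data stays online during the resize. Afterwards you can confirm the new size with:

df -h /data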