Set Up A Fully Encrypted Raid1 LVM System (Lenny) - Page 7

Adding the second drive

Step 1: Copying the partition scheme

So, once everything is installed and you have a shiny new server in front of you, it's time to add the second drive. So far I have used an 8GB drive. After powering off, I added a new 16GB drive. However, that drive is not synchronized automatically; we first have to do a few things.

Before I start altering the partition table of the second drive, I first check the status of the array by issuing this command:

watch -n 6 cat /proc/mdstat

I get an output like this:

Every 6.0s: cat /proc/mdstat                                                                                                            Sun Nov 30 19:15:15 2008
Personalities : [raid1]
md3 : active raid1 sda4[0]
      4208960 blocks [2/1] [U_]
md2 : active raid1 sda3[0]
      2931776 blocks [2/1] [U_]
md1 : active raid1 sda2[0]
      995904 blocks [2/1] [U_]
md0 : active raid1 sda1[0]
      248896 blocks [2/1] [U_]
unused devices: <none>

As you can see, each of the active raid devices currently runs with only one of its two mirror halves (the "[2/1] [U_]" markers). This means I can (and should) add the second half so that everything stays operational if one disk fails. Exit the monitoring with "ctrl-c".
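The same check can be done non-interactively. Here is a small sketch (not part of the original setup) that prints the names of arrays running with a missing mirror half; the optional file argument only exists so the helper can be tested against a sample:

```shell
#!/bin/sh
# Print the md devices whose status line shows a missing mirror
# half ("[U_]" or "[_U]" in /proc/mdstat).
degraded() {
    grep -B1 -e '\[U_\]' -e '\[_U\]' "${1:-/proc/mdstat}" | awk '/^md/ {print $1}'
}

degraded /proc/mdstat
```

On the system above this would list md0 through md3, since all four arrays are still missing their second drive.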

Next I have to find out how the hard drives are named. I issue:

fdisk -l

And I get this output:

test:~# fdisk -l
Disk /dev/sda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000d3f5d
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          31      248976   fd  Linux raid autodetect
/dev/sda2              32         155      996030   fd  Linux raid autodetect
/dev/sda3             156         520     2931862+  fd  Linux raid autodetect
/dev/sda4             521        1044     4209030   fd  Linux raid autodetect
Disk /dev/sdb: 17.1 GB, 17179869184 bytes
255 heads, 63 sectors/track, 2088 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/md0: 254 MB, 254869504 bytes
2 heads, 4 sectors/track, 62224 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000
Disk /dev/md0 doesn't contain a valid partition table
Disk /dev/md1: 1019 MB, 1019805696 bytes
2 heads, 4 sectors/track, 248976 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x93b342d4
Disk /dev/md1 doesn't contain a valid partition table
Disk /dev/md2: 3002 MB, 3002138624 bytes
2 heads, 4 sectors/track, 732944 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x08040000
Disk /dev/md2 doesn't contain a valid partition table
Disk /dev/md3: 4309 MB, 4309975040 bytes
2 heads, 4 sectors/track, 1052240 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x08040000
Disk /dev/md3 doesn't contain a valid partition table
Disk /dev/dm-0: 3001 MB, 3001085952 bytes
255 heads, 63 sectors/track, 364 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/dm-1: 1019 MB, 1019805696 bytes
255 heads, 63 sectors/track, 123 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xb604b75d
Disk /dev/dm-1 doesn't contain a valid partition table
Disk /dev/dm-2: 4308 MB, 4308922368 bytes
255 heads, 63 sectors/track, 523 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/dm-2 doesn't contain a valid partition table
Disk /dev/dm-3: 4307 MB, 4307550208 bytes
255 heads, 63 sectors/track, 523 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/dm-3 doesn't contain a valid partition table

We can see that /dev/sda is the drive currently in use and /dev/sdb is the new drive I just added. As the new drive is larger (note: DO NOT USE A SMALLER DRIVE!), we will only partially use the space available on it. With raid1 we cannot use more space than the smallest drive/partition anyway, but if the first drive fails, you can replace it with a bigger one and then expand the space. So issue this command:

sfdisk -d /dev/sda | sfdisk /dev/sdb
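If you want to double-check that the copy worked, you can diff the two partition dumps. This is just a sketch assuming the device names from this guide; the normalize helper strips only the device name so the two dumps become directly comparable:

```shell
#!/bin/sh
# Strip the device name so dumps from different drives can be diffed.
normalize() { sed 's|/dev/sd[ab]|DISK|g'; }

sfdisk -d /dev/sda | normalize > /tmp/sda.layout
sfdisk -d /dev/sdb | normalize > /tmp/sdb.layout
diff /tmp/sda.layout /tmp/sdb.layout && echo "partition layouts match"
```

This only matches right after the copy; once you enlarge sdb4 in the next step, the fourth partition will of course differ.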

 

Step 2: Modifying the partition scheme

Now issue the following command to adjust the "data" partition on the new drive. The other three partitions (/boot, swap, /) should have been made large enough during setup:

cfdisk /dev/sdb

Use the arrow keys to navigate around. Select partition 4 (sdb4) and then delete it.

Then create a new partition in the free space. Make it primary and let it use all space.

You will see that it creates a new "sdb4" whose type is "Linux", not "Linux raid autodetect". Select "Type" (while sdb4 is still selected) and enter FD as the file system type. Back at the first screen, the partition type will have changed to "Linux raid autodetect".

Now select the "Write" option and confirm it with "yes". Then you can quit the cfdisk tool.

 

Step 3: Zeroing the superblock

Just to be sure, we zero out the superblock on each partition of the new drive:

mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2
mdadm --zero-superblock /dev/sdb3
mdadm --zero-superblock /dev/sdb4

You will get a lot of errors, which is expected if it's a new drive that has not been used before: there is simply no superblock to erase yet.

 

Step 4: Adding the new partitions to the raid array

Now run those commands:

mdadm --add /dev/md0 /dev/sdb1
mdadm --add /dev/md1 /dev/sdb2
mdadm --add /dev/md2 /dev/sdb3
mdadm --add /dev/md3 /dev/sdb4

This will add the new partitions to the corresponding arrays and start syncing them. You can watch the progress with:

watch -n 6 cat /proc/mdstat
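If you'd rather wait for the rebuild in a script than watch it manually, you can poll /proc/mdstat until no resync or recovery is in progress. A sketch; the file argument exists only so the helper can be tested against a sample:

```shell
#!/bin/sh
# Return success while a resync/recovery is still in progress.
syncing() {
    grep -qE '(resync|recovery) *=' "${1:-/proc/mdstat}"
}

while syncing /proc/mdstat; do
    sleep 30
done
echo "all arrays fully synced"
```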

 

Step 5: Add grub to the new harddrive

As grub was installed only once and is not mirrored by the commands above, we have to install it on the second drive manually. Run:

grub

Then run in the grub prompt:

root (hd1,0)
setup (hd1)

The first command tells grub to use the partition /dev/sdb1 as the /boot partition. The second command tells grub to install itself into the boot sector of /dev/sdb. Grub starts counting hard drives and partitions at "0", so sda is hd0 and hence sdb is hd1.

Exit grub by entering "quit". You now have an encrypted raid1 setup with lvm, and each of the two drives can boot and run on its own.
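The same two grub commands can also be run non-interactively via grub legacy's batch mode, which is handy if you script this step. A sketch; the temp file name is arbitrary:

```shell
#!/bin/sh
# Write the grub commands to a script and feed it to grub in batch mode.
cat > /tmp/grub-sdb.cmds <<'EOF'
root (hd1,0)
setup (hd1)
quit
EOF

grub --batch < /tmp/grub-sdb.cmds
```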

 

Expanding the LVM

In case your smaller hard disk fails and you add a bigger new one, you will want to expand the array and filesystem to their maximum size. Here is, in short, how this can be done with XFS. First, run:

mdadm --grow /dev/md3 --size=max

Remember: /dev/md3 is our /data partition. With this command, the raid array expands to its maximum size. Until the smaller hard drive failed, the array could not use all of the space on the bigger one. After that, reboot the system (I know there are ways to avoid rebooting, but this just makes it a lot simpler).
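To confirm that the grow took effect, you can read the new size out of mdadm's detail output. A sketch; the parsing helper reads from stdin so it can be tested against a sample line:

```shell
#!/bin/sh
# Extract the "Array Size" figure (in 1K blocks) from mdadm --detail output.
array_size() {
    awk -F'[ :(]+' '/Array Size/ {print $4}'
}

mdadm --detail /dev/md3 | array_size
```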

Once you have rebooted, run this command:

pvresize /dev/mapper/md3_crypt

After that you have to find out by how much you can expand the size. Run this command:

vgdisplay -A

This will output a few things on the LVM. You need to look for this line here:

Free PE / Size  xxxxxxx / yyyy GB

The value of "xxxxxxx" is important. Run the following command, replacing "xxxxxxx" with the actual value. Also be sure to use the correct logical volume name; the one I created is "DATA-DATA_MD3".

lvextend -l +xxxxxxx /dev/mapper/DATA-DATA_MD3

If you're unsure about your logical volume name, run this command to list all mapper devices:

ls /dev/mapper
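If you'd rather not copy the free-extent count by hand, you can parse it out of the vgdisplay output. A sketch assuming the volume names from this guide (recent LVM versions also accept `lvextend -l +100%FREE` directly):

```shell
#!/bin/sh
# Read vgdisplay output on stdin and print the free-extent count
# from the "Free  PE / Size" line.
free_pe() {
    awk '/Free +PE \/ Size/ {print $5}'
}

lvextend -l +"$(vgdisplay -A | free_pe)" /dev/mapper/DATA-DATA_MD3
```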

Once the logical volume has been extended, the final step is to enlarge the actual filesystem. XFS makes this very easy:

xfs_growfs /data


Comments

From: schrapp

just today i set up a new server in a similar way. i did 2 things differently:

 1) install the second drive right away and add it to the raid during debian setup with partman. that way you don't have to add it manually later on.

 2) create just 2 raids. one for /boot and one that takes up the rest of the space. create a crypto device on top of that, that takes all of the available space as well. then add the resulting crypto mapper to a logical volume group and create your logical volumes with mount points (/, /home, /tmp, /var, ...). that way, you only have one encrypted device (therefore only one password). when using LVM imho there is no reason to create more than one underlying partition, unless you're adding a new physical device to an existing setup.

From:

There are many ways to make a setup. I did think about it quite some time, did research on what file systems to use where... did consider whether to use encryption-->lvm or lvm-->encryption.

After careful consideration I just came to the conclusion that I prefer this setup more. I do want to have an independent root and only the actual data on the lvm. Hence I chose that approach.

There's no right/wrong here. Just think of the consequences of your choices and what suits you the most.

From: ruipedroca

Hi,

 I think your guide is great, good job!

Just a note: in the beginning of this guide you say Ubuntu 8.04 and 8.10 wouldn't do the job, but at least the alternate 8.04.1 Desktop CD does, because I've already tried it (both RAID1 and encryption, though not at the same time in the same OS installation) and it works.
However, you must perform some post-installation steps (install the GRUB boot loader on the second drive and update the startup script to detect a failed drive).
I've followed this guide:
https://help.ubuntu.com/community/Installation/SoftwareRAID 

I'd like to thank you for the screenshots, that make your guide a breeze to follow! :)

From: Richard Williams

I've just built a new Linux (Debian Lenny) server using a motherboard with hardware RAID.  Trouble is, it only has Windows RAID drivers, so I've had to use a software RAID.  I couldn't have done so easily without this article.

From: Shnifti

I did it both ways: creating the raid right in the debian installer (partman), and also adding the second drive to a degraded raid after installation (using debian squeeze).

So my setup is like

(I have md0 as a raid 5, doesn't matter for now)

 /dev/md1 for /boot as ext2

/dev/md2  > crypt > vg_debian > lv_root, lv_home, lv_var > filesystems (ext4/xfs)

Both ways, I am getting this kernel message frequently:

bio too big device /dev/md2 (248 > 240)

I have no clue what that might mean. Google doesn't show many results. But after reading through some lists I am quite afraid of facing data corruption.

I am using an 8 GB compact flash card on the IDE port and an 8 GB USB drive. They differ slightly in size anyway, so I set up the system on the USB drive (the smaller one) and later copied the partition table to the flash card. Maybe the problem results from there.

Does somebody have an idea? What can I do? Is this kind of fully nested setup (raid, crypt, lvm) even practical?

best regards! Ben

From: tuxware

Could you please post your /etc/fstab and your grub menu.lst? Would be a great help as I am having trouble booting my new raid system.