How To Set Up Software RAID1 On A Running LVM System (Incl. GRUB2 Configuration) (Ubuntu 11.10) - Page 2

4 Creating Our RAID Arrays

Now let's create our RAID arrays /dev/md0 and /dev/md1. /dev/sdb1 will be added to /dev/md0 and /dev/sdb5 to /dev/md1. /dev/sda1 and /dev/sda5 can't be added right now (because the system is currently running on them), therefore we use the placeholder missing in the following two commands:

mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb5
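
If mdadm refuses to create an array because /dev/sdb1 or /dev/sdb5 appears to contain a leftover RAID superblock (for example from an earlier installation), you can wipe it first - but only if you are certain there is nothing on /dev/sdb you still need:

mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb5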

You might see the following message for each command - just press y to continue:

root@server1.example.com:~# mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb5
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array?
 <-- y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
root@server1.example.com:~#

The command

cat /proc/mdstat

should now show that you have two degraded RAID arrays ([_U] or [U_] means that an array is degraded while [UU] means that the array is ok):

root@server1.example.com:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb5[1]
      4989940 blocks super 1.2 [2/1] [_U]

md0 : active raid1 sdb1[1]
      248820 blocks super 1.2 [2/1] [_U]

unused devices: <none>
root@server1.example.com:~#
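
If you want more detail than /proc/mdstat provides, you can also query an array directly with mdadm; for our degraded arrays the output should show the first slot as removed and the /dev/sdb partition as an active member:

mdadm --detail /dev/md0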

Next we create a filesystem (ext2) on our non-LVM RAID array /dev/md0:

mkfs.ext2 /dev/md0

Now we come to our LVM RAID array /dev/md1. To prepare it for LVM, we run:

pvcreate /dev/md1

Then we add /dev/md1 to our volume group server1:

vgextend server1 /dev/md1

The output of

pvdisplay

should now be similar to this:

root@server1.example.com:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda5
  VG Name               server1
  PV Size               4.76 GiB / not usable 2.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              1218
  Free PE               3
  Allocated PE          1215
  PV UUID               dKWi5I-GMPP-wzoP-qXfi-e93A-YVmp-A7O1IK

  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               server1
  PV Size               4.76 GiB / not usable 1012.00 KiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              1218
  Free PE               1218
  Allocated PE          0
  PV UUID               w1Mg12-OHEj-paLg-9xyJ-jQuU-cQHT-p2qVKf

root@server1.example.com:~#

The output of

vgdisplay

should be as follows:

root@server1.example.com:~# vgdisplay
  --- Volume group ---
  VG Name               server1
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               9.52 GiB
  PE Size               4.00 MiB
  Total PE              2436
  Alloc PE / Size       1215 / 4.75 GiB
  Free  PE / Size       1221 / 4.77 GiB
  VG UUID               kwDyrp-sFA7-3s3i-FVWc-AGck-NX6H-yo4Pyt

root@server1.example.com:~#
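
If you prefer a more compact overview, the short-form LVM reports show the same information at a glance (optional):

pvs
vgs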

Next we must adjust /etc/mdadm/mdadm.conf (which doesn't contain any information about our new RAID arrays yet) to the new situation:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

Display the contents of the file:

cat /etc/mdadm/mdadm.conf

In the file you should now see details about our two (degraded) RAID arrays:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Tue, 20 Mar 2012 15:40:06 +0100
# by mkconf $Id$
ARRAY /dev/md/0 metadata=1.2 UUID=2d5659ba:1978bfac:40d0b815:229d3382 name=server1.example.com:0
ARRAY /dev/md/1 metadata=1.2 UUID=3c524dfa:445bb555:b4d039e9:b39553e1 name=server1.example.com:1

Next we modify /etc/fstab. Comment out the entry for the current /boot partition and add the line /dev/md0 /boot           ext2    defaults        0       2 instead so that the file looks as follows:

vi /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    nodev,noexec,nosuid 0       0
/dev/mapper/server1-root /               ext4    errors=remount-ro 0       1
# /boot was on /dev/sda1 during installation
#UUID=33ea6f8d-e79c-4776-8c6b-d08b043cfec1 /boot           ext2    defaults        0       2
/dev/md0 /boot           ext2    defaults        0       2
/dev/mapper/server1-swap_1 none            swap    sw              0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto,exec,utf8 0       0

Next replace /dev/sda1 with /dev/md0 in /etc/mtab:

vi /etc/mtab
/dev/mapper/server1-root / ext4 rw,errors=remount-ro 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
fusectl /sys/fs/fuse/connections fusectl rw 0 0
none /sys/kernel/debug debugfs rw 0 0
none /sys/kernel/security securityfs rw 0 0
udev /dev devtmpfs rw,mode=0755 0 0
devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
none /run/shm tmpfs rw,nosuid,nodev 0 0
/dev/md0 /boot ext2 rw 0 0

Now on to the GRUB2 boot loader. Create the file /etc/grub.d/09_swraid1_setup as follows:

cp /etc/grub.d/40_custom /etc/grub.d/09_swraid1_setup
vi /etc/grub.d/09_swraid1_setup

#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.
menuentry 'Ubuntu, with Linux 3.0.0-12-server' --class ubuntu --class gnu-linux --class gnu --class os {
        insmod raid
        insmod mdraid1x
        insmod part_msdos
        insmod ext2
        set root='(md/0)'
        echo    'Loading Linux 3.0.0-12-server ...'
        linux   /vmlinuz-3.0.0-12-server root=/dev/mapper/server1-root ro  quiet
        echo    'Loading initial ramdisk ...'
        initrd  /initrd.img-3.0.0-12-server
}

Make sure you use the correct kernel version in the menuentry stanza (in the linux and initrd lines). You can find it out by running

uname -r

or by taking a look at the current menuentry stanzas in the ### BEGIN /etc/grub.d/10_linux ### section in /boot/grub/grub.cfg. Also make sure that you use the correct volume group in the linux line - if your volume group isn't named server1, you must use something other than root=/dev/mapper/server1-root. Again, take a look at the current menuentry stanzas in the ### BEGIN /etc/grub.d/10_linux ### section in /boot/grub/grub.cfg to find out the correct value.

The important part in our new menuentry stanza is the line set root='(md/0)' - it makes sure that we boot from our RAID1 array /dev/md0 (which will hold the /boot partition) instead of /dev/sda or /dev/sdb. This is important if one of our hard drives fails: the system will still be able to boot.
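
One small thing worth checking: update-grub (via grub-mkconfig) only executes scripts in /etc/grub.d that have the executable bit set. The cp command above normally preserves that bit from 40_custom, but it doesn't hurt to verify and, if necessary, set it:

ls -l /etc/grub.d/09_swraid1_setup
chmod +x /etc/grub.d/09_swraid1_setup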

Because we don't use UUIDs for our block devices, open /etc/default/grub...

vi /etc/default/grub

... and uncomment the line GRUB_DISABLE_LINUX_UUID=true:

# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
#   info -f grub -n 'Simple configuration'

GRUB_DEFAULT=0
#GRUB_HIDDEN_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=2
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_CMDLINE_LINUX=""

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
GRUB_DISABLE_LINUX_UUID=true

# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"

Next open /etc/initramfs-tools/conf.d/mdadm...

vi /etc/initramfs-tools/conf.d/mdadm

... and set BOOT_DEGRADED to true so that the system can boot from a degraded array without asking. Otherwise the system would pause at the beginning of the boot process and ask whether you want to boot from a degraded array, which is a problem if the server is in a remote location:

# mdadm boot_degraded configuration
#
# You can run 'dpkg-reconfigure mdadm' to modify the values in this file, if
# you want. You can also change the values here and changes will be preserved.
# Do note that only the values are preserved; the rest of the file is
# rewritten.
#
# BOOT_DEGRADED:
# Do you want to boot your system if a RAID providing your root filesystem
# becomes degraded?
#
# Running a system with a degraded RAID could result in permanent data loss
# if it suffers another hardware fault.
#
# However, you might answer "yes" if this system is a server, expected to
# tolerate hardware faults and boot unattended.

BOOT_DEGRADED=true

Now run

update-grub 

to write our new kernel stanza from /etc/grub.d/09_swraid1_setup to /boot/grub/grub.cfg.
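
If you want to verify that the new stanza really made it into the generated configuration, you can grep for the ### BEGIN /etc/grub.d/09_swraid1_setup ### marker that grub-mkconfig adds around each script's output (the -A 12 is just an arbitrary number of context lines):

grep -A 12 '09_swraid1_setup' /boot/grub/grub.cfg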

Next we adjust our ramdisk to the new situation:

update-initramfs -u
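
To double-check that the new ramdisk really contains the updated mdadm configuration, you can list its contents - this assumes the initrd for your running kernel lives at /boot/initrd.img-$(uname -r):

lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm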

 

5 Moving Our Data To The RAID Arrays

Now that we've modified all configuration files, we can copy the contents of /dev/sda to /dev/sdb (including the configuration changes we've made in the previous chapter).

To move the contents of our LVM partition /dev/sda5 to our LVM RAID array /dev/md1, we use the pvmove command:

pvmove -i 2 /dev/sda5 /dev/md1

This can take some time, so please be patient (the -i 2 switch simply makes pvmove print a progress update every two seconds).
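
Should the move ever be interrupted (for example by a reboot), it is not lost: LVM records the progress in its metadata, and running pvmove without any arguments resumes all unfinished moves:

pvmove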

Afterwards, we remove /dev/sda5 from the volume group server1...

vgreduce server1 /dev/sda5

... and tell the system to not use /dev/sda5 anymore for LVM:

pvremove /dev/sda5

The output of

pvdisplay

should now be as follows:

root@server1.example.com:/boot/grub# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               server1
  PV Size               4.76 GiB / not usable 1012.00 KiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              1218
  Free PE               3
  Allocated PE          1215
  PV UUID               w1Mg12-OHEj-paLg-9xyJ-jQuU-cQHT-p2qVKf

root@server1.example.com:/boot/grub#

Next we change the partition type of /dev/sda5 to Linux raid autodetect and add /dev/sda5 to the /dev/md1 array:

fdisk /dev/sda

root@server1.example.com:~# fdisk /dev/sda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help):
 <-- t
Partition number (1-5): <-- 5
Hex code (type L to list codes): <-- fd
Changed system type of partition 5 to fd (Linux raid autodetect)

Command (m for help):
 <-- w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
root@server1.example.com:~#
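
Because the kernel could not re-read the partition table (the "error 16" warning above), you can tell it about the change explicitly before continuing, for example with partprobe, as the fdisk output itself suggests (partprobe is part of the parted package and may need to be installed first). Strictly speaking the type change doesn't affect the data on the partition, so the mdadm --add command below usually works even without this step:

partprobe /dev/sda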

mdadm --add /dev/md1 /dev/sda5

Now take a look at

cat /proc/mdstat

... and you should see that the RAID array /dev/md1 is being synchronized:

root@server1.example.com:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda5[2] sdb5[1]
      4989940 blocks super 1.2 [2/1] [_U]
      [====>................]  recovery = 20.1% (1005376/4989940) finish=3.7min speed=17482K/sec

md0 : active raid1 sdb1[1]
      248820 blocks super 1.2 [2/1] [_U]

unused devices: <none>
root@server1.example.com:~#

(You can run

watch cat /proc/mdstat

to get an ongoing output of the process. To leave watch, press CTRL+C.)

Wait until the synchronization has finished. The output should then look like this:

root@server1.example.com:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda5[2] sdb5[1]
      4989940 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sdb1[1]
      248820 blocks super 1.2 [2/1] [_U]

unused devices: <none>
root@server1.example.com:~#


Now let's mount /dev/md0:

mkdir /mnt/md0
mount /dev/md0 /mnt/md0

You should now find the array in the output of

mount

root@server1.example.com:~# mount
/dev/mapper/server1-root on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
/dev/md0 on /boot type ext2 (rw)
/dev/md0 on /mnt/md0 type ext2 (rw)
root@server1.example.com:~#

Now we copy the contents of /dev/sda1 to /dev/md0 (which is mounted on /mnt/md0):

cd /boot
cp -dpRx . /mnt/md0
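
If you want to be sure that everything was copied, a recursive diff between the running /boot and the copy on /mnt/md0 should produce no output when the two trees match:

diff -r /boot /mnt/md0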
