How To Set Up Software RAID1 On A Running LVM System (Incl. GRUB2 Configuration) (Ubuntu 11.10)

6 Preparing GRUB2

Afterwards we must make sure that the GRUB2 bootloader is installed on both hard drives, /dev/sda and /dev/sdb:

grub-install /dev/sda
grub-install /dev/sdb
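
If you want to verify that the bootloader code really made it into the master boot record of both disks, a quick (optional) sanity check is to look for the GRUB string in the first sector of each drive:

dd if=/dev/sda bs=512 count=1 2>/dev/null | strings | grep GRUB
dd if=/dev/sdb bs=512 count=1 2>/dev/null | strings | grep GRUB

Both commands should print a line containing GRUB; if one of them stays silent, re-run grub-install for that disk.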

Now we reboot the system and hope that it boots ok from our RAID arrays:

reboot

7 Preparing /dev/sda

If all goes well, you should now find /dev/md0 in the output of

df -h

root@server1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/server1-root
                      4.2G 1002M  3.0G  25% /
udev                  238M  8.0K  238M   1% /dev
tmpfs                  99M  240K   99M   1% /run
none                  5.0M     0  5.0M   0% /run/lock
none                  247M     0  247M   0% /run/shm
/dev/md0              236M   26M  198M  12% /boot
root@server1:~#
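
If you want to double-check that /boot really comes from the RAID array (and not from one of the underlying partitions), you can also grep the mount table; the line should show /dev/md0:

mount | grep /boot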

The output of

cat /proc/mdstat

should be as follows:

root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[1]
      248820 blocks super 1.2 [2/1] [_U]

md1 : active raid1 sda5[2] sdb5[1]
      4989940 blocks super 1.2 [2/2] [UU]

unused devices: <none>
root@server1:~#
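
Note the [2/1] and [_U] for /dev/md0: the array is running in degraded mode with only /dev/sdb1, because /dev/sda1 has not been added yet; we will take care of that in a moment. For a more detailed view of the degraded array and the state of each member, you can run:

mdadm --detail /dev/md0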

The outputs of pvdisplay, vgdisplay, and lvdisplay should be as follows:

pvdisplay

root@server1:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               server1
  PV Size               4.76 GiB / not usable 1012.00 KiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              1218
  Free PE               3
  Allocated PE          1215
  PV UUID               w1Mg12-OHEj-paLg-9xyJ-jQuU-cQHT-p2qVKf

root@server1:~#

vgdisplay

root@server1:~# vgdisplay
  --- Volume group ---
  VG Name               server1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               4.76 GiB
  PE Size               4.00 MiB
  Total PE              1218
  Alloc PE / Size       1215 / 4.75 GiB
  Free  PE / Size       3 / 12.00 MiB
  VG UUID               kwDyrp-sFA7-3s3i-FVWc-AGck-NX6H-yo4Pyt

root@server1:~#

lvdisplay

root@server1:~# lvdisplay
  --- Logical volume ---
  LV Name                /dev/server1/root
  VG Name                server1
  LV UUID                dNn3NY-YhPm-qE8r-Dr8L-k8CG-ECLp-YjRnGf
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                4.25 GiB
  Current LE             1088
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0

  --- Logical volume ---
  LV Name                /dev/server1/swap_1
  VG Name                server1
  LV UUID                HKIiwv-7X8Y-rzeg-aedK-5RZo-g3Km-QjxkdL
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                508.00 MiB
  Current LE             127
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1

root@server1:~#
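
All three outputs confirm that the volume group now lives on /dev/md1. If you would like to see at a glance which physical device each logical volume sits on, lvs can print the underlying devices (an optional check):

lvs -o +devices

The Devices column should show /dev/md1 for both logical volumes.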

Now we must change the partition type of /dev/sda1 to Linux raid autodetect as well:

fdisk /dev/sda

root@server1:~# fdisk /dev/sda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): <-- t
Partition number (1-5): <-- 1
Hex code (type L to list codes): <-- fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): <-- w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
root@server1:~#
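
As the warning says, the kernel keeps working with the old partition table for the moment. That is not a problem here because we only changed the partition type, not the partition boundaries. If you nevertheless want the kernel to re-read the table right away, you can run partprobe (provided by the parted package) as suggested:

partprobe /dev/sda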

Now we can add /dev/sda1 to the /dev/md0 RAID array:

mdadm --add /dev/md0 /dev/sda1

Now take a look at

cat /proc/mdstat

root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda1[2] sdb1[1]
      248820 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sda5[2] sdb5[1]
      4989940 blocks super 1.2 [2/2] [UU]

unused devices: <none>
root@server1:~#
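
Because our /boot partition is small, the synchronization of /dev/md0 finishes almost instantly, which is why both arrays already show [2/2] [UU] (all members up). On larger partitions you would see a recovery progress line instead; in that case you can follow the resync with:

watch -n 2 cat /proc/mdstat

Leave watch with CTRL+C once both arrays show [UU].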

Then adjust /etc/mdadm/mdadm.conf to the new situation by restoring the backup we made earlier and appending the definitions of the current arrays:

cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

/etc/mdadm/mdadm.conf should now look something like this:

cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Tue, 20 Mar 2012 15:40:06 +0100
# by mkconf $Id$
ARRAY /dev/md/0 metadata=1.2 UUID=2d5659ba:1978bfac:40d0b815:229d3382 name=server1.example.com:0
ARRAY /dev/md/1 metadata=1.2 UUID=3c524dfa:445bb555:b4d039e9:b39553e1 name=server1.example.com:1
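
As a cross-check, the UUIDs in the two ARRAY lines should match those of the running arrays, which you can print with:

mdadm --detail --scan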

Now we delete /etc/grub.d/09_swraid1_setup...

rm -f /etc/grub.d/09_swraid1_setup

... and update our GRUB2 bootloader configuration:

update-grub
update-initramfs -u
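
If you want to make sure that the freshly built initramfs contains the mdadm bits needed to assemble the arrays at boot time, you can list its contents (lsinitramfs is part of Ubuntu's initramfs-tools; the exact file names may vary):

lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm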

Afterwards we must make sure that the GRUB2 bootloader is installed on both hard drives, /dev/sda and /dev/sdb:

grub-install /dev/sda
grub-install /dev/sdb

Reboot the system:

reboot

It should boot without problems.

That's it - you've successfully set up software RAID1 on your running LVM system!
