How To Set Up Software RAID1 On A Running LVM System (Incl. GRUB2 Configuration) (Ubuntu 10.04)

6 Preparing GRUB2

Afterwards we must make sure that the GRUB2 bootloader is installed on both hard drives, /dev/sda and /dev/sdb:

grub-install /dev/sda
grub-install /dev/sdb
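
Before rebooting you can, if you like, verify that GRUB2's boot code really made it into the MBR of both disks. A minimal sanity check (assuming the strings utility from binutils is installed) is to look for the GRUB signature in the first sector of each drive:

dd if=/dev/sda bs=512 count=1 2>/dev/null | strings | grep GRUB
dd if=/dev/sdb bs=512 count=1 2>/dev/null | strings | grep GRUB

Each command should print GRUB if the boot code is in place.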

Now we reboot the system and hope that it boots OK from our RAID arrays:

reboot


7 Preparing /dev/sda

If all goes well, you should now find /dev/md0 in the output of

df -h

root@server1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/server1-root
                      4.5G  816M  3.4G  19% /
none                  242M  196K  242M   1% /dev
none                  247M     0  247M   0% /dev/shm
none                  247M   40K  247M   1% /var/run
none                  247M     0  247M   0% /var/lock
none                  247M     0  247M   0% /lib/init/rw
/dev/md0              236M   23M  201M  11% /boot
root@server1:~#

The output of

cat /proc/mdstat

should be as follows ([_U] means that /dev/md0 is still degraded because /dev/sda1 has not been added yet, while [UU] shows that /dev/md1 is fully synced):

root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[1]
      248768 blocks [2/1] [_U]

md1 : active raid1 sda5[0] sdb5[1]
      4990912 blocks [2/2] [UU]

unused devices: <none>
root@server1:~#
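
If you would like more detail than /proc/mdstat provides, mdadm can print the full state of the degraded array (an optional check, not part of the original procedure):

mdadm --detail /dev/md0

The State line should read clean, degraded, and the device table should list /dev/sdb1 as the only active member until we add /dev/sda1 in the next step.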

The outputs of pvdisplay, vgdisplay, and lvdisplay should be as follows:

pvdisplay

root@server1:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               server1
  PV Size               4.76 GiB / not usable 1.94 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              1218
  Free PE               3
  Allocated PE          1215
  PV UUID               rQf0Rj-Nn9l-VgbP-0kIr-2lve-5jlC-TWTBGp

root@server1:~#

vgdisplay

root@server1:~# vgdisplay
  --- Volume group ---
  VG Name               server1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               4.76 GiB
  PE Size               4.00 MiB
  Total PE              1218
  Alloc PE / Size       1215 / 4.75 GiB
  Free  PE / Size       3 / 12.00 MiB
  VG UUID               hMwXAh-zZsA-w39k-g6Bg-LW4W-XX8q-EbyXfA

root@server1:~#

lvdisplay

root@server1:~# lvdisplay
  --- Logical volume ---
  LV Name                /dev/server1/root
  VG Name                server1
  LV UUID                b5A1R5-Zhml-LSNy-v7WY-NVmD-yX1w-tuQVUW
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                4.49 GiB
  Current LE             1149
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:0

  --- Logical volume ---
  LV Name                /dev/server1/swap_1
  VG Name                server1
  LV UUID                2UuF7C-zxKA-Hgz1-gZHe-rFlq-cKW7-jYVCzp
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                264.00 MiB
  Current LE             66
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:1

root@server1:~#
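
By the way, if you prefer a compact one-line-per-object summary over the full reports above, LVM ships short-form counterparts (purely optional):

pvs
vgs
lvs

pvs should list /dev/md1 as the only physical volume in the server1 volume group.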

Now we must change the partition type of /dev/sda1 to Linux raid autodetect as well:

fdisk /dev/sda

root@server1:~# fdisk /dev/sda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): <-- t
Partition number (1-5): <-- 1
Hex code (type L to list codes): <-- fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): <-- w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
root@server1:~#
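
If you prefer a non-interactive alternative to the interactive fdisk session above (for example when scripting this procedure), the old-style sfdisk shipped with Ubuntu 10.04 should be able to make the same change in one command; treat this as a sketch rather than the canonical way:

sfdisk --change-id /dev/sda 1 fd

As with fdisk, the kernel may keep using the old partition table until the next reboot or a run of partprobe.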

Now we can add /dev/sda1 to the /dev/md0 RAID array:

mdadm --add /dev/md0 /dev/sda1
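
The kernel now synchronizes /dev/sda1 with the contents of /dev/sdb1. On a small /boot partition like ours this finishes almost immediately; on larger arrays you can follow the progress like this (optional):

watch -n 2 cat /proc/mdstat

Press CTRL+C to leave watch once the resync is done.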

Now take a look at

cat /proc/mdstat

root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
      248768 blocks [2/2] [UU]

md1 : active raid1 sda5[0] sdb5[1]
      4990912 blocks [2/2] [UU]

unused devices: <none>
root@server1:~#

Then adjust /etc/mdadm/mdadm.conf to the new situation:

cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
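
If you did not keep the mdadm.conf_orig backup from earlier in this tutorial, an alternative sketch (assuming the Debian-style file layout shown below) is to strip any stale ARRAY lines before appending the freshly scanned ones:

sed -i '/^ARRAY /d' /etc/mdadm/mdadm.conf
mdadm --examine --scan >> /etc/mdadm/mdadm.conf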

/etc/mdadm/mdadm.conf should now look something like this:

cat /etc/mdadm/mdadm.conf

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Wed, 16 Jun 2010 20:01:25 +0200
# by mkconf $Id$
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=90f05e41:bf936896:325ecf68:79913751
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=1ab36b7f:3e2031c0:325ecf68:79913751

Now we delete /etc/grub.d/09_swraid1_setup...

rm -f /etc/grub.d/09_swraid1_setup

... and update our GRUB2 bootloader configuration:

update-grub
update-initramfs -u

Now if you take a look at /boot/grub/grub.cfg, you should find that the menuentry stanzas in the ### BEGIN /etc/grub.d/10_linux ### section look pretty much the same as what we had in /etc/grub.d/09_swraid1_setup (they should now also be set to boot from /dev/md0 instead of (hd0,1) or (hd1,1)), which is why we no longer need /etc/grub.d/09_swraid1_setup.
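
A quick way to confirm this without reading through the whole file is to grep for the RAID device (just a convenience check):

grep md0 /boot/grub/grub.cfg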

Reboot the system:

reboot

It should boot without problems.
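
After the reboot it is a good idea to repeat the checks from the beginning of this chapter to make sure both arrays come up complete:

cat /proc/mdstat
df -h

Both arrays should now show [UU], and /boot should still be mounted from /dev/md0.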

That's it - you've successfully set up software RAID1 on your running LVM system!
