How To Set Up Software RAID1 On A Running LVM System (Incl. GRUB2 Configuration) (Debian Squeeze) - Page 3

6 Preparing GRUB2

Afterwards we must make sure that the GRUB2 bootloader is installed on both hard drives, /dev/sda and /dev/sdb:

grub-install /dev/sda
grub-install /dev/sdb
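
If you want to double-check that the boot code really landed on both drives, the first sector of each disk normally contains the GRUB signature. This is just an optional sanity check (it assumes a standard MBR setup and that the binutils package, which provides strings, is installed):

dd if=/dev/sda bs=512 count=1 2>/dev/null | strings | grep GRUB
dd if=/dev/sdb bs=512 count=1 2>/dev/null | strings | grep GRUB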

Now we reboot the system and hope that it boots ok from our RAID arrays:

reboot

 

7 Preparing /dev/sda

If all goes well, you should now find /dev/md0 in the output of

df -h

root@server1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/server1-root
                      4.5G  722M  3.6G  17% /
tmpfs                 249M     0  249M   0% /lib/init/rw
udev                  244M  128K  244M   1% /dev
tmpfs                 249M     0  249M   0% /dev/shm
/dev/md0              236M   18M  206M   8% /boot
root@server1:~#

The output of

cat /proc/mdstat

should be as follows:

root@server1:~# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda5[2] sdb5[1]
      4989940 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sdb1[1]
      248820 blocks super 1.2 [2/1] [_U]

unused devices: <none>
root@server1:~#
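
The [_U] for md0 means that only one of its two members (/dev/sdb1) is active at the moment - /dev/sda1 will be added in a minute. If you want more detail than /proc/mdstat provides, you can also query the array directly (the exact wording of the output depends on your mdadm version):

mdadm --detail /dev/md0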

The outputs of pvdisplay, vgdisplay, and lvdisplay should be as follows:

pvdisplay

root@server1:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               server1
  PV Size               4.76 GiB / not usable 1012.00 KiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              1218
  Free PE               0
  Allocated PE          1218
  PV UUID               W4I07I-RT3P-DK1k-1HBz-oJvp-6in0-uQ53KS

root@server1:~#

vgdisplay

root@server1:~# vgdisplay
  --- Volume group ---
  VG Name               server1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               4.76 GiB
  PE Size               4.00 MiB
  Total PE              1218
  Alloc PE / Size       1218 / 4.76 GiB
  Free  PE / Size       0 / 0
  VG UUID               m99fJX-gMl9-g2XZ-CazH-32s8-sy1Q-8JjCUW

root@server1:~#

lvdisplay

root@server1:~# lvdisplay
  --- Logical volume ---
  LV Name                /dev/server1/root
  VG Name                server1
  LV UUID                8SNLPE-gHqA-a2LX-BO9o-0QQO-DV2z-3WvTYe
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                4.51 GiB
  Current LE             1155
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/server1/swap_1
  VG Name                server1
  LV UUID                kYaKtb-vkkV-TDDE-me1R-nnER-dzN8-BcVTwz
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                252.00 MiB
  Current LE             63
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

root@server1:~#
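
If you just want a quick overview instead of the full reports, LVM also provides the condensed commands pvs, vgs, and lvs, which print one summary line per physical volume, volume group, and logical volume:

pvs
vgs
lvs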

Now we must change the partition type of /dev/sda1 to Linux raid autodetect as well:

fdisk /dev/sda

root@server1:~# fdisk /dev/sda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): <-- t
Partition number (1-5): <-- 1
Hex code (type L to list codes): <-- fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): <-- w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
root@server1:~#
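
The warning about error 16 is harmless here because only the partition type byte was changed. If you prefer to make this change non-interactively, the same thing can be done with sfdisk - this is just a sketch that assumes the older sfdisk syntax shipped with Debian Squeeze (newer util-linux versions renamed the option to --part-type):

sfdisk --change-id /dev/sda 1 fd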

Now we can add /dev/sda1 to the /dev/md0 RAID array:

mdadm --add /dev/md0 /dev/sda1

Now take a look at

cat /proc/mdstat

root@server1:~# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda5[2] sdb5[1]
      4989940 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sda1[2] sdb1[1]
      248820 blocks super 1.2 [2/2] [UU]

unused devices: <none>
root@server1:~#
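
An array as small as this /boot partition resynchronises almost instantly. On larger arrays you will see a progress indicator in /proc/mdstat for a while; you can follow it comfortably with watch (leave it with CTRL+C):

watch -n 2 cat /proc/mdstat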

Then adjust /etc/mdadm/mdadm.conf to the new situation:

cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
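
If you are curious what the second command appends, you can also run the scan on its own at any time; it simply prints the ARRAY definitions to standard output without modifying anything:

mdadm --examine --scan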

/etc/mdadm/mdadm.conf should now look something like this:

cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Tue, 24 May 2011 21:11:37 +0200
# by mkconf 3.1.4-1+8efb9d1
ARRAY /dev/md/0 metadata=1.2 UUID=6cde4bf4:7ee67d24:b31e2713:18865f31 name=server1.example.com:0
ARRAY /dev/md/1 metadata=1.2 UUID=3ce9f2f2:ac89f75a:530c5ee9:0d4c67da name=server1.example.com:1

Now we delete /etc/grub.d/09_swraid1_setup...

rm -f /etc/grub.d/09_swraid1_setup

... and update our GRUB2 bootloader configuration:

update-grub
update-initramfs -u

Now if you take a look at /boot/grub/grub.cfg, you should find that the menuentry stanzas in the ### BEGIN /etc/grub.d/10_linux ### section look pretty much the same as what we had in /etc/grub.d/09_swraid1_setup (they should now also be set to boot from /dev/md0 instead of (hd0) or (hd1)). That is why we don't need /etc/grub.d/09_swraid1_setup anymore.
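
A quick way to confirm this without opening the whole file is to look at the set root lines of the generated menu entries; they should now point at the RAID device (the exact notation, e.g. (md0) or (md/0), depends on the GRUB2 version):

grep -n "set root" /boot/grub/grub.cfg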

Afterwards we must make sure that the GRUB2 bootloader is installed on both hard drives, /dev/sda and /dev/sdb:

grub-install /dev/sda
grub-install /dev/sdb

Reboot the system:

reboot

It should boot without problems.
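
Once the system is back up, a final look at the RAID status should again show both arrays with two active members ([UU]):

cat /proc/mdstat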

That's it - you've successfully set up software RAID1 on your running LVM system!
