How To Set Up Software RAID1 On A Running System (Incl. GRUB Configuration) (Debian Lenny) - Page 3

Submitted by falko on Thu, 2009-08-27 15:48.

7 Preparing /dev/sda

If all goes well, you should now find /dev/md0 and /dev/md2 in the output of

df -h

server1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md2              5.3G  749M  4.3G  15% /
tmpfs                 126M     0  126M   0% /lib/init/rw
udev                   10M  108K  9.9M   2% /dev
tmpfs                 126M     0  126M   0% /dev/shm
/dev/md0              228M   30M  187M  14% /boot
server1:~#
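If you also moved your swap space to /dev/md1 in the earlier steps, you can additionally check which swap devices are active (a purely optional sanity check):

swapon -s

/dev/md1 should be listed there. Don't be surprised if md1 shows up as active (auto-read-only) in /proc/mdstat below - that simply means nothing has been written to the array yet.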

The output of

cat /proc/mdstat

should show that all three arrays are still running degraded, with only the /dev/sdb members active ([_U] means the first of the two mirror slots is empty):

server1:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb3[1]
      5550336 blocks [2/1] [_U]

md1 : active (auto-read-only) raid1 sdb2[1]
      497920 blocks [2/1] [_U]

md0 : active raid1 sdb1[1]
      240832 blocks [2/1] [_U]

unused devices: <none>
server1:~#

Now we must change the type of each of our three partitions on /dev/sda to Linux raid autodetect (hex code fd) as well:

fdisk /dev/sda

server1:~# fdisk /dev/sda

Command (m for help): <-- t
Partition number (1-4): <-- 1
Hex code (type L to list codes): <-- fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): <-- t
Partition number (1-4): <-- 2
Hex code (type L to list codes): <-- fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): <-- t
Partition number (1-4): <-- 3
Hex code (type L to list codes): <-- fd
Changed system type of partition 3 to fd (Linux raid autodetect)

Command (m for help): <-- w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
server1:~#
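If you prefer to make this change non-interactively (for example from a script), sfdisk can do the same thing; this is just an optional alternative and assumes the sfdisk version shipped with Lenny, which still accepts --change-id (newer util-linux releases use --part-type instead):

sfdisk --change-id /dev/sda 1 fd
sfdisk --change-id /dev/sda 2 fd
sfdisk --change-id /dev/sda 3 fd

The "Device or resource busy" warning above is harmless in this situation: only the partition type IDs changed, not the partition boundaries, so there is no need to reboot. If the parted package is installed, you can also run partprobe /dev/sda to ask the kernel to re-read the table.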

Now we can add /dev/sda1, /dev/sda2, and /dev/sda3 to the respective RAID arrays:

mdadm --add /dev/md0 /dev/sda1
mdadm --add /dev/md1 /dev/sda2
mdadm --add /dev/md2 /dev/sda3
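
If you want more detail than /proc/mdstat gives you, you can also inspect a single array directly (optional):

mdadm --detail /dev/md2

This shows the array state, the rebuild progress, and the member devices; both /dev/sdb3 and the freshly added /dev/sda3 should now be listed.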

Now take a look at

cat /proc/mdstat

... and you should see that the RAID arrays are being synchronized:

server1:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda3[2] sdb3[1]
      5550336 blocks [2/1] [_U]
      [===>.................]  recovery = 15.9% (888704/5550336) finish=2.3min speed=32914K/sec

md1 : active raid1 sda2[2] sdb2[1]
      497920 blocks [2/1] [_U]
        resync=DELAYED

md0 : active raid1 sda1[0] sdb1[1]
      240832 blocks [2/2] [UU]

unused devices: <none>
server1:~#

(You can run

watch cat /proc/mdstat

to get an ongoing output of the process. To leave watch, press CTRL+C.)
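Instead of watching the output by hand, you can also let mdadm block until the resynchronization is complete; this is merely an optional convenience:

mdadm --wait /dev/md0 /dev/md1 /dev/md2

The command returns as soon as all of the listed arrays are fully in sync.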

Wait until the synchronization has finished. The output should then look like this:

server1:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda3[0] sdb3[1]
      5550336 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      497920 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      240832 blocks [2/2] [UU]

unused devices: <none>
server1:~#


Then adjust /etc/mdadm/mdadm.conf to the new situation:

cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
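
Alternatively, instead of appending to the backup by hand, you can regenerate the whole file with the mkconf helper that created it in the first place (see the auto-generated header below); assuming it lives at its usual Debian location, that would be:

/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf

Either way, the file must end up containing ARRAY lines for md0, md1, and md2.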

/etc/mdadm/mdadm.conf should now look something like this:

cat /etc/mdadm/mdadm.conf

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Mon, 17 Aug 2009 16:38:27 +0200
# by mkconf $Id$
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=757afd26:543267ab:325ecf68:79913751
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=1e5f2139:0806d523:325ecf68:79913751
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=bc2dffb8:047b4ed5:325ecf68:79913751
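
The UUIDs in the ARRAY lines have to match the ones your arrays actually carry. If you want to double-check them (optional), compare the file with the output of:

mdadm --detail /dev/md0 | grep UUID
mdadm --detail /dev/md1 | grep UUID
mdadm --detail /dev/md2 | grep UUID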

 

8 Preparing GRUB (Part 2)

We are almost done. Now we must modify /boot/grub/menu.lst again. At the moment it is configured to boot from /dev/sdb (hd1,0), but of course we still want the system to be able to boot if /dev/sdb fails. Therefore we copy the first kernel stanza (the one containing hd1), paste it below, and replace hd1 with hd0. Furthermore we comment out all other kernel stanzas so that it looks as follows:

vi /boot/grub/menu.lst

[...]
## ## End Default Options ##

title           Debian GNU/Linux, kernel 2.6.26-2-686
root            (hd1,0)
kernel          /vmlinuz-2.6.26-2-686 root=/dev/md2 ro quiet
initrd          /initrd.img-2.6.26-2-686

title           Debian GNU/Linux, kernel 2.6.26-2-686
root            (hd0,0)
kernel          /vmlinuz-2.6.26-2-686 root=/dev/md2 ro quiet
initrd          /initrd.img-2.6.26-2-686

#title          Debian GNU/Linux, kernel 2.6.26-2-686
#root           (hd0,0)
#kernel         /vmlinuz-2.6.26-2-686 root=/dev/sda3 ro quiet
#initrd         /initrd.img-2.6.26-2-686

#title          Debian GNU/Linux, kernel 2.6.26-2-686 (single-user mode)
#root           (hd0,0)
#kernel         /vmlinuz-2.6.26-2-686 root=/dev/sda3 ro single
#initrd         /initrd.img-2.6.26-2-686

#title          Debian GNU/Linux, kernel 2.6.26-1-686
#root           (hd0,0)
#kernel         /vmlinuz-2.6.26-1-686 root=/dev/sda3 ro quiet
#initrd         /initrd.img-2.6.26-1-686

#title          Debian GNU/Linux, kernel 2.6.26-1-686 (single-user mode)
#root           (hd0,0)
#kernel         /vmlinuz-2.6.26-1-686 root=/dev/sda3 ro single
#initrd         /initrd.img-2.6.26-1-686

### END DEBIAN AUTOMAGIC KERNELS LIST

In the same file, there's a kopt line; replace /dev/sda3 with /dev/md2 (don't remove the # at the beginning of the line!):

[...]
# kopt=root=/dev/md2 ro
[...]
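
If you prefer to make this edit non-interactively, a sed one-liner does the same job; this is only a sketch and assumes the line currently reads # kopt=root=/dev/sda3 ro, as it does in this setup:

sed -i 's|^# kopt=root=/dev/sda3|# kopt=root=/dev/md2|' /boot/grub/menu.lst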

Afterwards, update your ramdisk:

update-initramfs -u
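
If you want to convince yourself that RAID support really made it into the new ramdisk, you can list its contents (an optional check that assumes Lenny's gzip-compressed initramfs format):

zcat /boot/initrd.img-2.6.26-2-686 | cpio -it | grep -E 'raid1|mdadm'

Both the raid1 kernel module and the mdadm binary should appear in the output.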

... and reboot the system:

reboot

It should boot without problems.

That's it - you've successfully set up software RAID1 on your running Debian Lenny system!


Submitted by Lopinhos (registered user) on Wed, 2011-03-30 21:19.
After upgrading, the Debian server failed to start. I am now trying to recover the boot through the rescue system. I tried to mount /dev/md1 on /mnt/md1, but some of the folders there are empty, such as root, etc and proc.
root@teste:/# cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]
md0 : active raid1 sdb1[0] sda1[1]
97536 blocks [2/2] [UU]

md1 : active raid1 sda3[1]
296993088 blocks [2/1] [_U]

unused devices: <none>


######################################

root@teste:/# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Thu Jan 27 16:55:06 2011
Raid Level : raid1
Array Size : 97536 (95.27 MiB 99.88 MB)
Used Dev Size : 97536 (95.27 MiB 99.88 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Wed Mar 30 19:10:44 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

UUID : adbe2b1b:4917a312:7792c71e:7dc17aa4
Events : 0.44

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8        1        1      active sync   /dev/sda1
######
root@xteste:/# mdadm --detail /dev/md1
/dev/md1:
Version : 00.90.03
Creation Time : Thu Jan 27 16:55:06 2011
Raid Level : raid1
Array Size : 296993088 (283.23 GiB 304.12 GB)
Used Dev Size : 296993088 (283.23 GiB 304.12 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Wed Mar 30 19:09:54 2011
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

UUID : 5c8e4766:9653e87b:7792c71e:7dc17aa4
Events : 0.15600

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8        3        1      active sync   /dev/sda3
root@teste:/#
###############
 
root@teste:/# fdisk -l
Disk /dev/sda: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000b01cb
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1          13       97656   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sda2              13         137     1000000   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3             137       37111   296993164   fd  Linux raid autodetect

Disk /dev/sdb: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000b32a6
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          13       97656   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sdb2              13         137     1000000   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sdb3             137       37111   296993164   fd  Linux raid autodetect

Disk /dev/md1: 304.1 GB, 304120922112 bytes
2 heads, 4 sectors/track, 74248272 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000
Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md0: 99 MB, 99876864 bytes
2 heads, 4 sectors/track, 24384 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000
Disk /dev/md0 doesn't contain a valid partition table
root@teste:/#
 
What should I do to solve this RAID1 problem?