HowtoForge Forums | HowtoForge - Linux Howtos and Tutorials (http://www.howtoforge.com/forums/index.php)
-   HOWTO-Related Questions (http://www.howtoforge.com/forums/forumdisplay.php?f=2)
-   -   Raid1 + LVM on fc6 (some assistance needed) (http://www.howtoforge.com/forums/showthread.php?t=20909)

rbanks 1st March 2008 14:50

Raid1 + LVM on fc6 (some assistance needed)
 
Hello,
I followed along with the HowTo "How To Set Up Software RAID1 On A Running System" by falko, adapting it from the FC8 it is based on to an FC6 server that I have. I have two disks: sda (empty) and sdb (LVM). I am in unfamiliar waters here with RAID and LVM, but I went ahead and tried it because I knew I had a previous kernel image that I could fall back on if I managed to fubar this setup. I am now using that boot image, as predicted.

There may be more to it, but I think all that I need is to set up the LVM properly in the fstab/mtab and mkinitrd a proper image. What I get now are "setuproot: error mounting /proc" and "cannot opendir(/proc)" errors, so obviously the root file system is not being pointed to correctly. Rather than messing about and learning everything that would be required to sort it out myself, I thought I would stop at this point and give this forum a chance to point out my obvious mistakes first. Here are the configurations that I have now:

[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0              107G   80G   22G  79% /
/dev/sdb1              99M   18M   76M  20% /boot
tmpfs                 506M     0  506M   0% /dev/shm

[root@localhost ~]# fdisk -l

Disk /dev/sda: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   fd  Linux raid autodetect
/dev/sda2              14       14593   117113850   fd  Linux raid autodetect

Disk /dev/sdb: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          13      104391   83  Linux
/dev/sdb2              14       14593   117113850   8e  Linux LVM

Disk /dev/md0: 106 MB, 106823680 bytes
2 heads, 4 sectors/track, 26080 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table

Disk /dev/md1: 119.9 GB, 119924457472 bytes
2 heads, 4 sectors/track, 29278432 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md1 doesn't contain a valid partition table

[root@localhost ~]# cat /etc/grub.conf
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd1,0)
# kernel /vmlinuz-version ro root=/dev/VolGroup01/LogVol00
# initrd /initrd-version.img
#boot=/dev/sda
default=0
fallback=1
timeout=5
splashimage=(hd1,0)/grub/splash.xpm.gz
hiddenmenu
title Fedora Core (2.6.22.14-72.fc6)
        root (hd0,0)
        kernel /vmlinuz-2.6.22.14-72.fc6 ro root=/dev/md1 rhgb quiet
        initrd /initrd-2.6.22.14-72.fc6.img
title Fedora Core (2.6.22.14-72.fc6)
        root (hd1,0)
        kernel /vmlinuz-2.6.22.14-72.fc6 ro root=/dev/VolGroup01/LogVol00 rhgb quiet
        initrd /initrd-2.6.22.14-72.fc6.img
title Fedora Core (2.6.22.9-61.fc6)
        root (hd1,0)
        kernel /vmlinuz-2.6.22.9-61.fc6 ro root=/dev/VolGroup01/LogVol00 rhgb quiet
        initrd /initrd-2.6.22.9-61.fc6.img

[root@localhost ~]# cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=c6fd5698:b976df7f:b2c25c31:740a24c9
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=5185e26d:7275e924:550164b9:3326281b

[root@localhost ~]# cat /etc/fstab
/dev/md0                /                       ext3    defaults        1 1
LABEL=/boot1            /boot                   ext3    defaults        1 2
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
sysfs                   /sys                    sysfs   defaults        0 0
/dev/md1                swap                    swap    defaults        0 0

[root@localhost ~]# cat /etc/mtab
/dev/md0 / ext3 rw 0 0
proc /proc proc rw 0 0
sysfs /sys sysfs rw 0 0
devpts /dev/pts devpts rw,gid=5,mode=620 0 0
/dev/sdb1 /boot ext3 rw 0 0
tmpfs /dev/shm tmpfs rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw 0 0
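
For reference, the initrd rebuild step that the HowTo relies on looks like this on FC6. A sketch, assuming the kernel version shown in the grub.conf above; mkinitrd reads /etc/fstab to pick the root device and drivers for the image, so fstab has to be correct before the image is built:

# rebuild the initrd with the raid1 driver included; -f overwrites the existing image
mkinitrd --with=raid1 -f /boot/initrd-2.6.22.14-72.fc6.img 2.6.22.14-72.fc6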

falko 2nd March 2008 14:29

I think the problem is LVM. I haven't yet tried to convert a running LVM system to RAID1, but I might soon... ;)

rbanks 2nd March 2008 15:16

Raid1 + LVM on fc6 (some assistance needed)
 
Well, how about now? Any suggestions on which way to go with this would not only be appreciated but would also help me get my server back to where it should be. I have recorded each step along the way and backed up everything so far. Correct me if I am wrong, but as I stated in my original post, I think it is just a matter of pointing the initrd at the correct mount point, or do you think there is more to it than that? Thanks for the response, though.
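
One way to verify which root device the current initrd actually points at is to unpack it; on FC6 the image is a gzipped cpio archive whose init script names the device. A sketch:

mkdir /tmp/initrd-check && cd /tmp/initrd-check
# unpack the compressed cpio image that the kernel boots from
zcat /boot/initrd-2.6.22.14-72.fc6.img | cpio -id
# the nash "init" script shows which device the image tries to mount as root
cat init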

rbanks 3rd March 2008 18:56

Raid1 + LVM on fc6 (some assistance needed)
 
falko, I'm going to go ahead and try a few things here on my own. I just started looking into what might have gone wrong, and the first thing I checked was the copy step:

cp -dpRx / /mnt/md1

/proc and /dev didn't copy, so of course the initrd is looking for non-existent directories. I'm going to try:

dd if=/dev/sdb of=/dev/sda

and then continue from there to see if I can make this work. Do you have any suggestions at this point?
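
Worth noting before the dd route: copying the whole of sdb over sda would also copy sdb's partition table, replacing sda's two "fd" (raid autodetect) partitions with sdb's 83/8e layout, so the disks would no longer match the mdadm.conf arrays. The copy can instead be repaired in place; a sketch, assuming /dev/md1 is still mounted at /mnt/md1:

# recreate the mount points that the one-filesystem copy (cp -x) skipped;
# they only need to exist, empty, for the initrd and rc.sysinit to mount onto
mkdir -p /mnt/md1/proc /mnt/md1/sys /mnt/md1/dev
# /boot is its own partition, so its mount point was skipped as well
mkdir -p /mnt/md1/boot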

rbanks 3rd March 2008 23:19

Raid1 + LVM on fc6 (some assistance needed)
 
Well, I decided to bail on this one. No one seems to want to tackle this problem and I don't have the time, so if anyone figures this out and catches this thread, post your methods. I would sure like to convert this 2-disk running system to RAID1.

falko 4th March 2008 19:21

I can't help at this stage, but as I said earlier, I might write a tutorial about it soon. :)

jabetcha 15th March 2008 18:42

rbanks,

I've got a similar situation, with a little bit of a twist.

I've just migrated my FC8 system from three 160GB drives to two 750GB drives. Using the pvmove command, I was able to move all my LVM extents from the old drives to one of the new drives with no issues.

Now I want to use the second 750GB drive as a mirror, but I cannot rebuild the system from scratch, as that would take about a week with all the custom software I have running.

I can't seem to do this automatically, since the segments from the 3 individual drives show up in the Red Hat LVM manager as separate, and the LVM manager wants 3 disks to configure a mirror.

So, I'm not sure if this is even possible, but I'd imagine I need to somehow rebuild or merge the 3 segments into one before I can configure the mirror.
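
One route that might be worth trying in the virtual machine first: instead of merging segments, LVM can mirror a logical volume itself with lvconvert once the second drive is a physical volume in the same volume group. A sketch with hypothetical names; /dev/sdb1 stands in for a partition on the second 750GB drive, and VolGroup00/LogVol00 for the volume to mirror:

pvcreate /dev/sdb1                            # hypothetical partition on the 2nd new drive
vgextend VolGroup00 /dev/sdb1                 # add it to the existing volume group
lvconvert -m1 --corelog VolGroup00/LogVol00   # add one mirror leg; keep the mirror log in memory

The source volume's segments don't need to be contiguous for this; the mirror leg is simply allocated from free space on the new PV.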

I'm going to play with this for a while in a virtual machine and will let you know what I find out.

falko, if you have any suggestions to start from, I'd appreciate it.

