trouble configuring RAID1

Discussion in 'Installation/Configuration' started by ali888, Oct 5, 2011.

  1. ali888

    ali888 New Member

    Hi,

    I have a machine with two identical hard disks and have already built a RAID1 array on it.

    Now I'd like to install Ubuntu Server 10.04 (32-bit) on the machine, but I've been running into some issues.

    The issue comes up when it gets to partitioning the disks: the installer says it has detected active SATA drives - "One or more drives containing Serial ATA RAID configurations have been found. Do you wish to activate these RAID devices? Activate Serial ATA RAID devices." I was given two options, "Yes" or "No". I clicked "Yes" because I thought this would let the Ubuntu installer set up RAID1 during the installation. However, I was taken to the Partition disks screen, which says "This is an overview of your currently configured partitions and mount points. Select a partition to modify its settings....". There are three options to choose from, namely:
    Configure iSCSI volumes
    Undo changes to partitions
    Finish partitioning and write changes to disk

    So I am not sure if I am doing the right thing here. When I chose "Configure iSCSI volumes", I was prompted to choose either "Log into iSCSI targets" or "Finish". I do not know what to do from here.

    Any help would be greatly appreciated.

    Thank you
     
    Last edited: Oct 5, 2011
  2. falko

    falko Super Moderator ISPConfig Developer

  3. ali888

    ali888 New Member

    Hi Falko,

    I have somehow managed to get it to work and installed the Ubuntu OS on the system, but I have a funny feeling the RAID1 has not been set up properly.

    Anyway, today I wanted to check how much disk space was left after installing a number of applications (postfix, dovecot and others), so I typed this into the terminal: $ df

    All I could see after executing that command was:
    Code:
    Filesystem 1K-blocks Used Available Use% Mounted on
    /dev/md2 5766196 3527544 1945740 65% /
    none 507784 224 507560 1% /dev
    none 512248 1272 510976 1% /dev/shm
    none 512248 104 512144 1% /var/run
    none 512248 0 512248 0% /var/lock
    none 512248 0 512248 0% /lib/init/rw

    This can't be right, because I put in 2 x 1 TB hard disks. I have a feeling I might have accidentally told it to use the swap-sized partition, which I set to 6 GB. By the way, is using the 'df' command the right way to check how much disk space is being used?

    So I do need to try out the approach you recommended, i.e. setting up RAID1 on the running system, as I'd rather not rebuild it from scratch.

    Will keep you updated.
     
  4. falko

    falko Super Moderator ISPConfig Developer

    Can you post the outputs of
    Code:
    df -h
    and
    Code:
    fdisk -l
    ?
    What's in your /etc/fstab?
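    Note that df only lists mounted filesystems, so swap never shows up in its output. To check the swap as well, something like this should work (free and swapon are standard on Ubuntu):

```shell
# Swap devices currently in use (should show the md swap array here)
swapon -s

# Memory and swap totals, in megabytes
free -m
```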
     
  5. ali888

    ali888 New Member

    Hi Falko,

    Here is the output of 'df -h':
    Code:
    Filesystem Size Used Avail Use% Mounted on
    /dev/md2 5.5G 3.4G 1.9G 65% /
    none 496M 224K 496M 1% /dev
    none 501M 248K 500M 1% /dev/shm
    none 501M 108K 501M 1% /var/run
    none 501M 0 501M 0% /var/lock
    none 501M 0 501M 0% /lib/init/rw

    The output of 'fdisk -l' returned nothing.

    And there is no directory called fstab under /etc.

    Now, I am not sure if I really have to re-install from scratch, because I did not get output similar to what the link you suggested shows.
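    As a side note, fdisk -l prints nothing when run as a normal user because reading the partition tables needs root, and fstab is a regular file rather than a directory, so the usual invocations would be:

```shell
# fdisk needs root; without sudo it silently prints nothing on Ubuntu
sudo fdisk -l 2>/dev/null || true

# fstab is a plain file under /etc, not a directory
cat /etc/fstab
```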

    Many thanks
     
    Last edited: Oct 12, 2011
  6. nbhadauria

    nbhadauria New Member

    What about the output of these commands?

    cat /proc/mdstat

    mdadm --query --detail /dev/md2

    cat /etc/fstab

    fdisk -l /dev/sda

    fdisk -l /dev/sdb
     
  7. ali888

    ali888 New Member

    Thanks for your reply.

    Here is the output of 'cat /proc/mdstat':
    Code:
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md1 : active raid1 sda2[0] sdb2[1]
    2999232 blocks [2/2] [UU]

    md2 : active raid1 sdb1[1] sda1[0]
    5858240 blocks [2/2] [UU]

    unused devices: <none>

    ========================================================
    Here is the output of 'mdadm --query --detail /dev/md2':
    Code:
    /dev/md2:
    Version : 00.90
    Creation Time : Fri Oct 7 09:21:05 2011
    Raid Level : raid1
    Array Size : 5858240 (5.59 GiB 6.00 GB)
    Used Dev Size : 5858240 (5.59 GiB 6.00 GB)
    Raid Devices : 2
    Total Devices : 2
    Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Thu Oct 13 10:57:47 2011
    State : active
    Active Devices : 2
    Working Devices : 2
    Failed Devices : 0
    Spare Devices : 0

    UUID : 043cb6aa:ea79656a:7cd3b1f0:128dceb2
    Events : 0.47

    Number Major Minor RaidDevice State
    0 8 1 0 active sync /dev/sda1
    1 8 17 1 active sync /dev/sdb1

    ========================================================
    Here is the output of 'cat /etc/fstab':
    Code:
    # /etc/fstab: static file system information.
    #
    # Use 'blkid -o value -s UUID' to print the universally unique identifier
    # for a device; this may be used with UUID= as a more robust way to name
    # devices that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    proc /proc proc nodev,noexec,nosuid 0 0
    # / was on /dev/md2 during installation
    UUID=a629cfb3-2c07-4e98-91ca-b7eb7a5b1c4b / ext4 errors=remount-ro 0 1
    # swap was on /dev/md1 during installation
    UUID=b26b15fc-a4ab-45a2-8ca1-b5d1b3f2de10 none swap sw 0 0

    ========================================================
    Here is the output of 'fdisk -l /dev/sda':
    Code:
    Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000a1cd6

    Device Boot Start End Blocks Id System
    /dev/sda1 1 730 5858304 fd Linux RAID autodetect
    Partition 1 does not end on cylinder boundary.
    /dev/sda2 * 730 121602 970902528 fd Linux RAID autodetect

    ========================================================
    Here is the output of 'fdisk -l /dev/sdb':
    Code:
    Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000c948c

    Device Boot Start End Blocks Id System
    /dev/sdb1 1 730 5858304 fd Linux RAID autodetect
    Partition 1 does not end on cylinder boundary.
    /dev/sdb2 * 730 121602 970902528 fd Linux RAID autodetect

    From the output of cat /proc/mdstat and mdadm --query --detail /dev/md2, I gather I have used the wrong md, haven't I? The root filesystem ended up on the small, swap-sized array.

    This is where I got stuck during the installation. I followed the guide at help.ubuntu.com/10.04/serverguide/C/advanced-installation.html and got stuck at the RAID configuration. It says that when creating the MD devices, I need to follow the steps for sda1 and sdb1, and then create another MD device by repeating the same steps for sda2 and sdb2. In my case I could only do sda1 and sdb1; I got some kind of error (I can't remember exactly what) when trying to repeat the steps to create the second MD device. So I'm confused now. I could re-install everything from scratch, but I'm not sure whether I would run into the same problem again. I need your guidance, please.
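    If I understand the guide correctly, the two arrays it describes would - outside the installer - be created with mdadm roughly like this (just a sketch on my part; the /dev/md0 and /dev/md1 names are an assumption, not necessarily what the installer uses):

```shell
# Mirror the small partitions (meant for swap in the guide's layout)
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Mirror the large partitions (meant for the ext4 root filesystem)
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
```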

    Thank you
     
  8. nbhadauria

    nbhadauria New Member

    Yes, you are right - you have chosen the wrong partition.

    Now, as your document suggests, you might have missed these steps:

    Select "#1" under the "RAID1 device #0" partition.

    Choose "Use as:". Then select "swap area", then "Done setting up partition".

    Next, select "#1" under the "RAID1 device #1" partition.

    Choose "Use as:". Then select "Ext4 journaling file system".

    Then select the "Mount point" and choose "/ - the root file system". Change any of the other options as appropriate, then select "Done setting up partition".

    Finally, select "Finish partitioning and write changes to disk".



    Try again and make sure that you've chosen the right partition.
     
  9. ali888

    ali888 New Member

    Hi nbhadauria,

    I just got time to re-do the RAID partitioning on my Ubuntu Server 10.04 (32-bit). I seemed unable to remove all the partitions during the installation of the OS.

    I ran into a problem when trying to create the second MD device after the first one, because only two active devices were shown, i.e. sda1 and sdb1. So when I tried to create another MD device, it gave me an error saying there are no more drives to configure RAID on. I do not know why. I carefully followed the instructions from the link: https://help.ubuntu.com/10.04/serverguide/C/advanced-installation.html. I wonder why only two active devices were detected - shouldn't there be more than two, e.g. sda1, sda2, sdb1 and sdb2?

    Anyway, below is what I got under Partition disks:
    Code:

    RAID1 device #1 -3.1GB Software RAID device
    #1 3.1GB f swap
    983.0 KB unusable
    RAID2 device #2 -6.0GB Software RAID device
    #1 6.0GB f ext4
    983.0 KB unusable
    SCSI3 (0,0,0) (sda) -1.0TB ATA WDC ...
    #1 primary 6.0GB K raid
    #2 primary 994.2GB B K raid
    SCSI4 (0,0,0) (sdb) -1.0TB ATA WDC ...
    #1 primary 6.0GB K raid
    #2 primary 994.2GB B K raid

    I know the reason I previously had only 6 GB: I set the 6 GB array to ext4 under RAID device #2. And I also know that I need to create two MD devices - one for sda1 and sdb1, and the other for sda2 and sdb2. The problem I am having is that there are only two active devices for RAID1, sda1 and sdb1; when I tried to create another MD device, I got the error I described above. I wonder what I have done wrong here.

    Now I have to find a way to completely remove all the RAID arrays left over from the installation of the OS.

    Very frustrating.
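    From what I've read, removing the old software RAID arrays from a Live CD goes roughly like this (a sketch only, untested on my side - the device names are taken from the listings above, and --zero-superblock destroys the RAID metadata on each partition):

```shell
# Stop the assembled arrays first
sudo mdadm --stop /dev/md1
sudo mdadm --stop /dev/md2

# Erase the md superblock from every member partition
for part in /dev/sda1 /dev/sda2 /dev/sdb1 /dev/sdb2; do
    sudo mdadm --zero-superblock "$part"
done
```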

    Thank you
     
    Last edited: Oct 17, 2011
  10. ali888

    ali888 New Member

    Just an update:

    I finally managed to get rid of the old RAID arrays using the Live CD.

    So I'm very excited at the moment, but I just want to double-check with you that this looks okay.

    The output of 'cat /proc/mdstat' is:
    Code:
    md1: active raid1 sdb2[1] sda2[0]
    970902464 blocks [2/2] [UU]
    resync = 10.6% (103727552/970902464) finish=130min speed=110951K/sec

    md0: active raid1 sdb1[1] sda1[0]
    5858240 blocks [2/2] [UU]

    unused devices: <none>

    I think that looks right, as I set 6 GB for the swap and the rest for ext4. The total size of each HD is 1 TB.
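    By the way, the resync line just means the mirror is still copying data between the disks; the array is usable in the meantime. The progress can be checked with something like:

```shell
# Show the current resync progress (re-run until the resync line disappears,
# or wrap it in 'watch -n 5' to refresh automatically)
cat /proc/mdstat

# Full details of one array, including its sync state
sudo mdadm --detail /dev/md1
```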

    Thank you
     
