Software RAID problem

Discussion in 'Installation/Configuration' started by Daisy, Jun 10, 2007.

  1. Daisy

    Daisy New Member

    OK, so I don't have a drive that failed, but I just noticed that two of my partitions aren't mirrored anymore. They used to be. Can anyone give me some helpful tips or point me in the right direction on how to fix this? Everything I'm reading covers drive failure only. I think I may just need to resynchronize them, but I'm not sure if that's possible.

    Code:
    [root@server ~]# cat /proc/mdstat
    Personalities : [raid1] 
    md0 : active raid1 sdb1[0]
          104320 blocks [2/1] [U_]
          
    md1 : active raid1 sdb2[0] sda2[1]
          4192896 blocks [2/2] [UU]
          
    md2 : active raid1 sdb3[0]
          308271168 blocks [2/1] [U_]
          
    unused devices: <none>
    
    Code:
    [root@server ~]# fdisk -l
    
    Disk /dev/sda: 320.0 GB, 320072933376 bytes
    255 heads, 63 sectors/track, 38913 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1          13      104391   fd  Linux raid autodetect
    /dev/sda2              14         535     4192965   fd  Linux raid autodetect
    /dev/sda3             536       38913   308271285   fd  Linux raid autodetect
    
    Disk /dev/sdb: 320.0 GB, 320072933376 bytes
    255 heads, 63 sectors/track, 38913 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1   *           1          13      104391   fd  Linux raid autodetect
    /dev/sdb2              14         535     4192965   fd  Linux raid autodetect
    /dev/sdb3             536       38913   308271285   fd  Linux raid autodetect
    
    Disk /dev/md2: 315.6 GB, 315669676032 bytes
    2 heads, 4 sectors/track, 77067792 cylinders
    Units = cylinders of 8 * 512 = 4096 bytes
    
    Disk /dev/md2 doesn't contain a valid partition table
    
    Disk /dev/md1: 4293 MB, 4293525504 bytes
    2 heads, 4 sectors/track, 1048224 cylinders
    Units = cylinders of 8 * 512 = 4096 bytes
    
    Disk /dev/md1 doesn't contain a valid partition table
    
    Disk /dev/md0: 106 MB, 106823680 bytes
    2 heads, 4 sectors/track, 26080 cylinders
    Units = cylinders of 8 * 512 = 4096 bytes
    
    Disk /dev/md0 doesn't contain a valid partition table
    
    Code:
    # mdadm.conf written out by anaconda
    DEVICE partitions
    MAILADDR root
    ARRAY /dev/md2 level=raid1 num-devices=2 uuid=ae0c2514:084a7067:2f452462:3d34e6ea
    ARRAY /dev/md0 level=raid1 num-devices=2 uuid=722785de:74ec7f53:38daed28:0929896a
    ARRAY /dev/md1 level=raid1 num-devices=2 uuid=44f67f49:e88f3958:dea4af5b:d7863e55
     
  2. falko

    falko Super Moderator ISPConfig Developer

  3. Daisy

    Daisy New Member

    awesome, that looks like exactly what I'm looking for. Question:

    Personalities : [raid1]
    md0 : active raid1 sdb1[0]
    104320 blocks [2/1] [U_]

    See how it says sdb1? I have sda1 and sdb1. Which one failed? How do I know which one the underscore is for?
     
  4. falko

    falko Super Moderator ISPConfig Developer

    I think that in your case sdb is ok and sda failed.
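One way to read that off the mdstat line itself (a toy sketch; the member list sda1/sdb1 is taken from the fdisk output above): whichever expected member is *not* listed is the one that dropped out of the array.

```shell
# /proc/mdstat only lists members that are still active in the array
line='md0 : active raid1 sdb1[0]'
# md0 is supposed to be built from sda1 + sdb1 (see the fdisk output)
out=$(for dev in sda1 sdb1; do
  case "$line" in
    *"$dev"*) echo "present: $dev" ;;
    *)        echo "missing: $dev" ;;   # this is the dropped member
  esac
done)
echo "$out"
```

Here sdb1 shows up in the line, so sda1 is the member that failed or was dropped.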
     
  5. Daisy

    Daisy New Member

    For anyone who reads this: fixing software RAID on Fedora Core 5 from the command line.

    OK, so let's break down the cat /proc/mdstat output first:
    Code:
    [root@phoenix-nest /]# cat /proc/mdstat
    Personalities : [raid1] 
    md0 : active raid1 sdb1[0]
          104320 blocks [2/1] [U_]
          
    md1 : active raid1 sdb2[0] sda2[1]
          4192896 blocks [2/2] [UU]
          
    md2 : active raid1 sdb3[0]
          308271168 blocks [2/1] [U_]
          
    unused devices: <none>
    OK, so two of the RAID arrays here are missing a drive/partition: md0 and md2. How can you tell? Notice md1, where there are two U's ([UU])? That means it's working. I have two hard drives, sda and sdb, and for md1 both partitions are listed as active: active raid1 sdb2[0] sda2[1]
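That underscore rule can even be checked mechanically. Here is a small sketch against a saved sample (the sample file and the awk pattern are just for illustration, not part of the original output):

```shell
# Write a small sample of /proc/mdstat to a temp file (illustration only)
cat > /tmp/mdstat.sample <<'EOF'
md0 : active raid1 sdb1[0]
      104320 blocks [2/1] [U_]
md1 : active raid1 sdb2[0] sda2[1]
      4192896 blocks [2/2] [UU]
EOF

# Remember the array name from each "mdX :" line, then print it whenever
# the status shows a missing member (an underscore inside [..])
degraded=$(awk '/^md/ {name=$1} /\[[U_]*_[U_]*\]/ {print name}' /tmp/mdstat.sample)
echo "$degraded"
```

For this sample it prints only md0, since md1 shows [UU].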

    (If the hard drive had actually died and needed to be replaced, you would have had to manually mark its partitions as failed, following the steps provided in the link above. Thanks Falko! Mine were just missing from the arrays in the first place.)
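For completeness, that replacement sequence looks roughly like the following. It's sketched here as a dry run with echo so nothing is actually executed; drop the echo prefixes and run as root with your real array and device names.

```shell
# Dry-run sketch of replacing a genuinely dead member (device names are
# illustrative; echo just prints the commands instead of running them)
MD=/dev/md0
BAD=/dev/sda1
plan=$(
  echo "mdadm --manage $MD --fail   $BAD"   # mark the member as faulty
  echo "mdadm --manage $MD --remove $BAD"   # pull it out of the array
  # ...swap the disk and recreate the partition table on the new drive...
  echo "mdadm --manage $MD --add    $BAD"   # add the new partition back in
)
echo "$plan"
```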

    The other two arrays (md0 and md2) only have one drive/partition listed each. All I had to do was re-add the missing partitions using:
    Code:
    [root@phoenix-nest /]# mdadm --manage /dev/md0 --add /dev/sda1
    mdadm: re-added /dev/sda1
    [root@phoenix-nest /]# mdadm --manage /dev/md2 --add /dev/sda3
    mdadm: re-added /dev/sda3
    and I got this:
    Code:
    [root@phoenix-nest /]# cat /proc/mdstat
    Personalities : [raid1] 
    md0 : active raid1 sda1[1] sdb1[0]
          104320 blocks [2/2] [UU]
          
    md1 : active raid1 sdb2[0] sda2[1]
          4192896 blocks [2/2] [UU]
          
    md2 : active raid1 sda3[2] sdb3[0]
          308271168 blocks [2/1] [U_]
          [>....................]  recovery =  0.1% (544896/308271168) finish=122.3min speed=41915K/sec
          
    unused devices: <none>
    Ta da. I hope this helps someone.
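One last aside: you can keep an eye on the resync with `watch cat /proc/mdstat`, and if you ever want just the percentage (say, for a monitoring script), it can be pulled out of the recovery line. A sketch using the sample line from the output above:

```shell
# The recovery line from the output above, as a sample string
line='[>....................]  recovery =  0.1% (544896/308271168) finish=122.3min speed=41915K/sec'
# Pull out just the percent-complete figure
pct=$(printf '%s\n' "$line" | sed -n 's/.*recovery = *\([0-9.]*\)%.*/\1/p')
echo "$pct"
```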
     
    Last edited: Jun 12, 2007
  6. Tommy Silver

    Tommy Silver New Member

    It seems like the two disks aren't synchronized with each other. You could google 'raid recovery online' for a quick solution.
     
