System RAID Messages -- what do they mean?

Discussion in 'Technical' started by dpicella, Dec 24, 2012.

  1. dpicella

    dpicella New Member

    I recently set up a new server. The OS is on 2 drives in RAID 1 and the data is on 3 disks in RAID 5. I got this message from the system and I'm not sure what it means. It's the only one I got and it has been several days, so maybe I can ignore it.

     
  2. falko

    falko Super Moderator ISPConfig Developer

    What's the output of
    Code:
    cat /proc/mdstat
    ?
     
  3. dpicella

    dpicella New Member

    Personalities : [raid1] [raid6] [raid5] [raid4]
    md1 : active raid5 sdc1[0] sdd1[1] sde1[3]
    1953518592 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
    bitmap: 2/8 pages [8KB], 65536KB chunk

    md0 : active raid1 sdb1[1]
    312363900 blocks super 1.1 [2/1] [_U]
    bitmap: 3/3 pages [12KB], 65536KB chunk

    unused devices: <none>
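    For anyone else reading the mdstat output above: the `[UUU]` / `[_U]` field shows one letter per member device, and an underscore means that slot is missing. A small sketch (run here against sample lines copied from the output above, not against a live `/proc/mdstat`) that flags degraded arrays:

    ```shell
    # Sample /proc/mdstat-style lines (copied from the output above).
    # The awk filter remembers each array name and prints it if the
    # trailing [UU]-style status field contains "_" (a missing member).
    mdstat_sample='md1 : active raid5 sdc1[0] sdd1[1] sde1[3]
          1953518592 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
    md0 : active raid1 sdb1[1]
          312363900 blocks super 1.1 [2/1] [_U]'

    degraded=$(printf '%s\n' "$mdstat_sample" | awk '
      /^md/        { name = $1 }                   # remember current array name
      /\[[U_]+\]$/ { if ($NF ~ /_/) print name }   # "_" in status => degraded
    ')
    echo "degraded arrays: $degraded"   # prints: degraded arrays: md0
    ```

    So here md1 is healthy (3 of 3 members up) and md0 is running on one mirror half.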
     
  4. falko

    falko Super Moderator ISPConfig Developer

    Your /dev/md0 array is degraded - /dev/sda1 is missing. Try to re-add it as follows:

    Code:
    mdadm --manage /dev/md0 --fail /dev/sda1
    mdadm --manage /dev/md0 --remove /dev/sda1
    mdadm --zero-superblock /dev/sda1
    mdadm -a /dev/md0 /dev/sda1
    (see http://www.howtoforge.com/how-to-se...ystem-incl-grub-configuration-debian-lenny-p4 ).
     
  5. dpicella

    dpicella New Member

    Weird ... now I have 2 degraded arrays:

    # cat /proc/mdstat

    Personalities : [raid1] [raid6] [raid5] [raid4]
    md1 : active raid5 sdd1[1] sde1[3]
    1953518592 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
    bitmap: 7/8 pages [28KB], 65536KB chunk

    md0 : active raid1 sdb1[1]
    312363900 blocks super 1.1 [2/1] [_U]
    bitmap: 3/3 pages [12KB], 65536KB chunk


    # fdisk -l

    Disk /dev/sda: 320.1 GB, 320072933376 bytes
    255 heads, 63 sectors/track, 38913 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x0005a631

    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 26 204800 83 Linux
    Partition 1 does not end on cylinder boundary.
    /dev/sda2 26 38914 312365056 fd Linux raid autodetect

    Disk /dev/sdb: 320.1 GB, 320072933376 bytes
    255 heads, 63 sectors/track, 38913 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x000831b7

    Device Boot Start End Blocks Id System
    /dev/sdb1 1 38914 312569856 fd Linux raid autodetect

    Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00020849

    Device Boot Start End Blocks Id System
    /dev/sdc1 1 121602 976760832 fd Linux raid autodetect

    Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00031948

    Device Boot Start End Blocks Id System
    /dev/sdd1 1 121602 976760832 fd Linux raid autodetect

    Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00059d1d

    Device Boot Start End Blocks Id System
    /dev/sde1 1 121602 976760832 fd Linux raid autodetect

    Disk /dev/md0: 319.9 GB, 319860633600 bytes
    2 heads, 4 sectors/track, 78090975 cylinders
    Units = cylinders of 8 * 512 = 4096 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000


    Disk /dev/mapper/vg_localhost-lv_root: 262.1 GB, 262144000000 bytes
    255 heads, 63 sectors/track, 31870 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000


    Disk /dev/mapper/vg_localhost-lv_swap: 8485 MB, 8485076992 bytes
    255 heads, 63 sectors/track, 1031 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000


    Disk /dev/md1: 2000.4 GB, 2000403038208 bytes
    2 heads, 4 sectors/track, 488379648 cylinders
    Units = cylinders of 8 * 512 = 4096 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
    Disk identifier: 0x00000000


    Disk /dev/mapper/vg_localhost-lv_home: 49.2 GB, 49228546048 bytes
    255 heads, 63 sectors/track, 5985 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000

    When I ran the fix from above, I got this:

    # mdadm --manage /dev/md0 --fail /dev/sda1
    mdadm: set device faulty failed for /dev/sda1: No such device

    # mdadm --manage /dev/md0 --remove /dev/sda1
    mdadm: hot remove failed for /dev/sda1: No such device or address

    # mdadm --zero-superblock /dev/sda1
    mdadm: Couldn't open /dev/sda1 for write - not zeroing

    # mdadm -a /dev/md0 /dev/sda1
    mdadm: Cannot open /dev/sda1: Device or resource busy
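    Those errors make sense if /dev/sda1 isn't actually a RAID member at all: the fdisk listing above shows sda1 as a small Id-83 Linux partition with the boot flag (presumably a mounted /boot, which would explain "Device or resource busy"), while /dev/sda2 is the one typed fd (Linux raid autodetect). A quick filter over the pasted fdisk lines picks out the real member partition; this is only a sketch against the output shown here, not a substitute for checking with `mdadm --examine`:

    ```shell
    # Sample partition lines from the fdisk output above; keep only
    # partitions whose Id column is "fd" (Linux raid autodetect).
    fdisk_sample='/dev/sda1   *     1    26    204800  83  Linux
    /dev/sda2        26 38914 312365056  fd  Linux raid autodetect'

    raid_part=$(printf '%s\n' "$fdisk_sample" | awk '$0 ~ / fd / { print $1 }')
    echo "RAID member on sda: $raid_part"   # prints: RAID member on sda: /dev/sda2
    ```

    In other words, the re-add commands would target /dev/sda2 rather than /dev/sda1 — which matches the healed array later in the thread (md0 : active raid1 sda2[0] sdb1[1]).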
     
  6. falko

    falko Super Moderator ISPConfig Developer

    Maybe one of your hard drives is dying...
     
  7. dpicella

    dpicella New Member

    It turned out to be loose cables. I was able to rebuild all the arrays!

    Personalities : [raid1] [raid6] [raid5] [raid4]
    md1 : active raid5 sdc1[0] sde1[3] sdd1[1]
    1953518592 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
    bitmap: 0/8 pages [0KB], 65536KB chunk

    md0 : active raid1 sda2[0] sdb1[1]
    312363900 blocks super 1.1 [2/2] [UU]
    bitmap: 0/3 pages [0KB], 65536KB chunk

    I found this to be helpful:
    http://www.zachburlingame.com/2011/05/howto-rebuild-a-software-raid-5-array-after-replacing-a-disk/

    I really like the last line that refreshes the progress!
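    That "last line" is presumably something like `watch cat /proc/mdstat`, which just re-runs the command every couple of seconds. If you only want the rebuild percentage, a sed filter like the one below works; it is run here against a sample recovery line (the numbers are made up, since no rebuild is in progress to parse):

    ```shell
    # Sample /proc/mdstat-format recovery line (values are made up);
    # the sed expression extracts just the percentage.
    recovery_line='      [==>..................]  recovery = 12.6% (39616768/312363900) finish=52.3min speed=86812K/sec'

    pct=$(printf '%s\n' "$recovery_line" | sed -n 's/.*recovery = *\([0-9.]*\)%.*/\1/p')
    echo "rebuild progress: ${pct}%"   # prints: rebuild progress: 12.6%
    ```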

    Cheers!
     