HowtoForge Forums | HowtoForge - Linux Howtos and Tutorials (http://www.howtoforge.com/forums/index.php)
-   Technical (http://www.howtoforge.com/forums/forumdisplay.php?f=8)
-   -   System RAID Messages -- what do they mean? (http://www.howtoforge.com/forums/showthread.php?t=60039)

dpicella 24th December 2012 14:51

System RAID Messages -- what do they mean?
 
I recently set up a new server. The OS is on 2 drives in RAID 1 and the data is on 3 disks in RAID 5. I got this message from the system and I'm not sure what it means. It's the only one I got, and it has been several days now, so maybe I can ignore it.

Quote:

To: root
Subject: Fail event on /dev/md0:picella.net

This is an automatically generated mail message from mdadm
running on picella.net

A Fail event had been detected on md device /dev/md0.

It could be related to component device /dev/sda2.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [raid1] [raid6] [raid5] [raid4]
md1 : active raid5 sde1[3] sdc1[0] sdd1[1]
1953518592 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
bitmap: 3/8 pages [12KB], 65536KB chunk

falko 28th December 2012 11:48

What's the output of
Code:

cat /proc/mdstat
?

dpicella 30th December 2012 21:28

Quote:

Originally Posted by falko (Post 290061)
What's the output of
Code:

cat /proc/mdstat
?

Personalities : [raid1] [raid6] [raid5] [raid4]
md1 : active raid5 sdc1[0] sdd1[1] sde1[3]
1953518592 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
bitmap: 2/8 pages [8KB], 65536KB chunk

md0 : active raid1 sdb1[1]
312363900 blocks super 1.1 [2/1] [_U]
bitmap: 3/3 pages [12KB], 65536KB chunk

unused devices: <none>
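
For anyone reading along: in that last block, [2/1] [_U] means the array wants 2 members but only 1 is active, and "_" marks the missing slot. To see which device is missing by name, a minimal check (assuming /dev/md0 as above):

Code:

# lists each slot with its device (or "removed") for md0
mdadm --detail /dev/md0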

falko 31st December 2012 13:58

Quote:

md0 : active raid1 sdb1[1]
312363900 blocks super 1.1 [2/1] [_U]
bitmap: 3/3 pages [12KB], 65536KB chunk
Your /dev/md0 array is degraded - /dev/sda1 is missing. Try to re-add it as follows:

Code:

mdadm --manage /dev/md0 --fail /dev/sda1
mdadm --manage /dev/md0 --remove /dev/sda1
mdadm --zero-superblock /dev/sda1
mdadm -a /dev/md0 /dev/sda1

(see http://www.howtoforge.com/how-to-set...ebian-lenny-p4 ).
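
If the re-add goes through, the rebuild can be followed from mdadm itself; a minimal sketch, assuming the same /dev/md0 as above:

Code:

# during a resync, --detail also reports a "Rebuild Status" percentage
mdadm --detail /dev/md0 | grep -E 'State|Rebuild'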

dpicella 1st January 2013 08:56

Weird ... Now I have 2 degraded arrays
 
# cat /proc/mdstat

Personalities : [raid1] [raid6] [raid5] [raid4]
md1 : active raid5 sdd1[1] sde1[3]
1953518592 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
bitmap: 7/8 pages [28KB], 65536KB chunk

md0 : active raid1 sdb1[1]
312363900 blocks super 1.1 [2/1] [_U]
bitmap: 3/3 pages [12KB], 65536KB chunk


# fdisk -l

Disk /dev/sda: 320.1 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x0005a631

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          26      204800   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              26       38914   312365056   fd  Linux raid autodetect

Disk /dev/sdb: 320.1 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x000831b7

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       38914   312569856   fd  Linux raid autodetect

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00020849

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      121602   976760832   fd  Linux raid autodetect

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00031948

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1      121602   976760832   fd  Linux raid autodetect

Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00059d1d

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1      121602   976760832   fd  Linux raid autodetect

Disk /dev/md0: 319.9 GB, 319860633600 bytes
2 heads, 4 sectors/track, 78090975 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/vg_localhost-lv_root: 262.1 GB, 262144000000 bytes
255 heads, 63 sectors/track, 31870 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/vg_localhost-lv_swap: 8485 MB, 8485076992 bytes
255 heads, 63 sectors/track, 1031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000


Disk /dev/md1: 2000.4 GB, 2000403038208 bytes
2 heads, 4 sectors/track, 488379648 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/vg_localhost-lv_home: 49.2 GB, 49228546048 bytes
255 heads, 63 sectors/track, 5985 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

When I ran the fix from above I got this ...

# mdadm --manage /dev/md0 --fail /dev/sda1
mdadm: set device faulty failed for /dev/sda1: No such device

# mdadm --manage /dev/md0 --remove /dev/sda1
mdadm: hot remove failed for /dev/sda1: No such device or address

# mdadm --zero-superblock /dev/sda1
mdadm: Couldn't open /dev/sda1 for write - not zeroing

# mdadm -a /dev/md0 /dev/sda1
mdadm: Cannot open /dev/sda1: Device or resource busy
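
Judging from the fdisk output above, the md member on the first disk is /dev/sda2, not /dev/sda1 (sda1 is the small boot partition), which would explain the "No such device" errors. A sketch of the corrected check and re-add, assuming /dev/sda2 per that output:

Code:

# confirm the partition carries an md superblock, then re-add it
mdadm --examine /dev/sda2
mdadm /dev/md0 -a /dev/sda2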

falko 1st January 2013 15:16

Maybe one of your hard drives is dying...
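
Worth ruling out with SMART; a minimal sketch using smartmontools, assuming it is installed and /dev/sda is the suspect disk:

Code:

# quick pass/fail health verdict, then the full attribute dump
smartctl -H /dev/sda
smartctl -a /dev/sda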

dpicella 1st January 2013 18:40

Quote:

Originally Posted by falko (Post 290178)
Maybe one of your hard drives is dying...

It turned out to be loose cables. I was able to rebuild all the arrays!

Personalities : [raid1] [raid6] [raid5] [raid4]
md1 : active raid5 sdc1[0] sde1[3] sdd1[1]
1953518592 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
bitmap: 0/8 pages [0KB], 65536KB chunk

md0 : active raid1 sda2[0] sdb1[1]
312363900 blocks super 1.1 [2/2] [UU]
bitmap: 0/3 pages [0KB], 65536KB chunk
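
The exact re-add commands aren't shown in the thread; presumably something along these lines brought the members back, with device names taken from the earlier output:

Code:

# hypothetical re-adds matching the final mdstat above
mdadm /dev/md1 -a /dev/sdc1
mdadm /dev/md0 -a /dev/sda2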

I found this to be helpful:
http://www.zachburlingame.com/2011/0...lacing-a-disk/

I really like the last command, which keeps refreshing the rebuild progress!
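
That refreshing command is presumably a watch over mdstat, along these lines:

Code:

# re-runs the status read every second until interrupted
watch -n 1 cat /proc/mdstat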

Cheers!

