#1
29th August 2007, 12:52
snowfly
Member
 
Debian: Failed RAID5 Array & Kernel Panic

Hi,

I've got a Debian Sarge server which had been running fine for over 6 months.
I then moved house, started the server back up, and all was fine.

However, the next day, when I restarted the server (after having to move it), it wouldn't even boot up.

I really need to get this back up and running, as it has some important data on it (50% of it is backed up).

Quick specs of the server:
- 3x 120 GB drives, as RAID5 (1 spare)
- Debian Sarge, 2.6 kernel

Here are some of the errors that I managed to write down during boot-up:

Code:
md: md1 stopped.
md: bind<hdb2>
md: bind<hde2>
md: bind<hda2>
md: kicking non-fresh hde2 from array!
md: unbind<hde2>
md: export_rdev(hde2)
md: md1: raid array is not clean -- starting background reconstruction
raid5: device hda2 operational as raid disk 0
raid5: device hdb2 operational as raid disk 1
raid5: cannot start dirty degraded array for md1
RAID5 conf printout:
 --- rd:3 wd:2 fd:1
 disk 0, o:1, dev:hda2
 disk 1, o:1, dev:hdb2
raid5: failed to run raid set md1
md: pers->run() failed ...
This bit repeated many times:
Code:
devfs_mk_dir: invalid argument.<4>devfs_mk_dev: could not append parent for /disk
This bit repeated for each logical volume (vg00-root, vg00-usr, vg00-var, vg00-tmp, vg00-home):

Code:
device-mapper: error adding target to table
  device-mapper ioctl cmd 9 failed: Invalid argument
  Couldn't load device 'vg00-home'.
And this is the last bit:

Code:
 6 logical volume(s) in volume group "vg00" now active
EXT3-fs: unable to read superblock
pivot_root: No such file or directory
/sbin/init: 432: cannot open dev/console: No such file
Kernel panic: Attempted to kill init!
And that's where it stopped.
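Reading those messages back, my understanding is that md kicked hde2 out because its event count had fallen behind the other two members (the "non-fresh" message), leaving md1 degraded, and because the array is also flagged as not clean the kernel refuses to start it on its own; the LVM volumes on top of it then can't be set up properly (hence the device-mapper errors), the root filesystem can't be mounted, and init panics. I've seen mention of a start_dirty_degraded option that tells the md driver to start such an array anyway, but I have no idea whether Sarge's stock 2.6 kernel supports it, and the GRUB entry below is only my guess at what it might look like (kernel image, root device and the rest of the entry are placeholders, not copied from my real menu.lst):

Code:
# hypothetical /boot/grub/menu.lst entry -- untested, kernel/initrd names and root= are placeholders
title   Debian (try to start dirty degraded md1)
root    (hd0,0)
kernel  /vmlinuz-2.6-686 root=/dev/mapper/vg00-root ro md_mod.start_dirty_degraded=1
initrd  /initrd.img-2.6-686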

So next I tried unplugging each drive, one by one, and rebooting, to see whether one drive had failed and whether the RAID would run off the other two.
No luck.

Next I grabbed a spare 200 GB drive and installed a fresh copy of Debian Sarge with a 2.6 kernel.

Once I had booted that up successfully, I tried to look at the RAID and maybe assemble it.
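I assume the quickest way to see whether the rescue kernel has auto-assembled anything is to check /proc/mdstat:

Code:
cat /proc/mdstat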

fdisk -l gave the expected results (summarised for easier reading):

Code:
Disk /dev/hda: 120.0 GB
/dev/hda1  258976 blocks  Linux raid autodetect
/dev/hda2  116945167+ blocks  Linux raid autodetect

Disk /dev/hdb: 120.0 GB
/dev/hdb1  258976 blocks  Linux raid autodetect
/dev/hdb2  116945167+ blocks  Linux raid autodetect

Disk /dev/hde: 120.0 GB
/dev/hde1  258976 blocks  Linux raid autodetect
/dev/hde2  116945167+ blocks  Linux raid autodetect

Disk /dev/hdd: 200.0 GB    (the newly created bootable linux system)
/dev/hdd1  192707676  Linux
/dev/hdd2  2650723  Extended
/dev/hdd5  2650693+  Linux Swap

Now I'm not quite sure, but I think there were two arrays: /dev/md0 (a small boot area) and /dev/md1 (the main one).

The main array (/dev/md1) was then split into logical volumes using LVM.
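
Before trying anything that writes to the disks, I assume I can confirm all of that from the rescue system by reading the RAID superblocks with something like this (read-only, straight from the mdadm man page); it should show each partition's array UUID, its slot in the array and its event count, and make it obvious which member is the stale one:

Code:
# examine the md superblock on each member partition (doesn't write anything)
mdadm --examine /dev/hda2
mdadm --examine /dev/hdb2
mdadm --examine /dev/hde2

# or scan everything and print whatever arrays mdadm can find
mdadm --examine --scan --verbose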

I then tried this:

Code:
mdadm --verbose --assemble /dev/md1 /dev/hda2 /dev/hdb2 /dev/hde2
Results:

Code:
md: md1 stopped
mdadm: looking for devices for /dev/md1
mdadm: /dev/hda2 identified as a member of /dev/md1, slot 0
mdadm: /dev/hdb2 identified as a member of /dev/md1, slot 1
mdadm: /dev/hde2 identified as a member of /dev/md1, slot 2
md: bind<hdb2>
mdadm: added /dev/hdb2 to /dev/md1 as 1
md: bind<hde2>
mdadm: added /dev/hde2 to /dev/md1 as 2
md: bind<hda2>
mdadm: added /dev/hda2 to /dev/md1 as 0
md: kicking non-fresh hde2 from array!
md: unbind<hde2>
md: export_rdev(hde2)
md: md1: raid array is not clean -- starting background reconstruction
raid5: device hda2 operational as raid disk 0
raid5: device hdb2 operational as raid disk 1
raid5: cannot start dirty degraded array for md1
RAID5 conf printout:
 --- rd:3 wd:2 fd:1
 disk 0, o:1, dev:hda2
 disk 1, o:1, dev:hdb2
raid5: failed to run raid set md1
md: pers->run() failed ...
mdadm: failed to RUN_ARRAY /dev/md1: Invalid argument

Now that's about as far as I've got, after looking through various Google searches.
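The one thing I haven't dared try yet is forcing the assembly. From the mdadm man page, --force tells mdadm to assemble even if some superblocks look out of date (which sounds like the "non-fresh" situation with hde2), and --run lets a degraded array start with only two of the three members present. My guess is it would look something like one of these, with vgchange afterwards to bring the LVM volumes back, but I'd like a second opinion before I run anything that writes to the disks:

Code:
# guess 1: force assembly with all three members, letting mdadm clean up the stale superblock
mdadm --verbose --assemble --force /dev/md1 /dev/hda2 /dev/hdb2 /dev/hde2

# guess 2: assemble degraded from the two fresh members only, then add hde2 back so it resyncs
mdadm --verbose --assemble --run /dev/md1 /dev/hda2 /dev/hdb2
mdadm /dev/md1 --add /dev/hde2

# once md1 is running, reactivate the volume group
vgscan
vgchange -ay vg00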

Anyone had anything similar?
Anyone seen those errors before?

Any help would be much appreciated.

Thanks
Mike