
View Full Version : HOWTO: SUSE 10.0 and Software RAID a.k.a FakeRAID


crushton
13th December 2005, 22:15
Motivation: I recently purchased another hard drive to complement my existing drive, hoping to use a BIOS software RAID 0 (via the VIA chip) config with SUSE 10.0. This turned out to be a "no-go": 2.6 kernels apparently no longer support BIOS fakeraid setups. So, I rummaged through all the forums that even remotely discussed dmraid or RAID in general. Eventually I came across two howtos: one for Gentoo and the other for Ubuntu/Kubuntu. Neither provided enough info to get SUSE up and running. Of course, this would all be unnecessary if VIA Tech had simply delivered the Linux drivers as promised by the end of November. That did not happen, so I was on my own to find a way to "make" SUSE work. Thus, I present the consequence of my labour in the attached doc file. I hope it helps you to get SUSE up and running as it did me. If not, post a message here and tell me what went wrong. I'll try my best to help. Regards...C.R.

EDIT: See below. I have attached an Open Document file (odt), and I have also reformatted the howto and posted it here for quick reference if you do not wish to download anything. Enjoy!

falko
13th December 2005, 23:53
Could you make a PDF out of the doc file and post it here? :) Or simply post the content of the file here?

crushton
14th December 2005, 04:34
How about an Open Document file? PDF is too large and exceeds my upload limit for these forums =( If I post the content, all the formatting will be lost unless I reformat it for the forums, which will take quite a while. Hmm, well, I guess I will do both (ODT and posted content). Sorry that I used doc; at the time I was just trying to get the file size down.
Hope this is sufficient...regards C.R.
*********************************************************
HOWTO: SUSE 10.0 and Software RAID a.k.a FakeRAID
A Complete Guide by C. R.

Due to the nature of SUSE 10.0, this how-to is rather long, but it is all necessary in order to get SUSE installed and running correctly without a hitch. Also, this how-to was devised using BIOS software RAID 0; other RAID levels may work by following this guide, but you are on your own if they don't.

Also, while I am sure there are quicker methods of reaching the same goal (e.g. with a spare disk, a few of the listed steps become unnecessary if other changes are made), I have purposefully left them out, as this guide is designed to be as generic as possible. Other than that, read carefully, send me a post if you have any questions, and good luck!

Prerequisites:

1. One of the following software RAID chip sets:
Highpoint HPT37X
Highpoint HPT45X
Intel Software RAID
LSI Logic MegaRAID
NVidia NForce
Promise FastTrack
Silicon Image Medley
VIA Software RAID

2. A working SUSE 10.0 installation and the original installation CD/DVD (this guide assumes KDE as the GUI and does not contain any information regarding Gnome or the like). Also, this working installation of SUSE should be installed on a plain hard drive with no Linux software RAID or LVM enabled. Make sure it is formatted with the defaults presented during the original installation onto a single disk.
3. Access to another PC via FTP, a spare hard drive (one which is not included in the RAID), 2 CD/DVD drives (one of which must be a burner), or some type of removable storage (e.g. a USB drive; keep in mind, however, that about 1 GB of extra space will be required, depending on the installation options you choose for SUSE 10.0).
4. The latest source for dmraid which can be obtained from http://people.redhat.com/~heinzm/sw/dmraid/src/ (as of this writing, latest = 1.0.0.rc9). You'll want to keep the dmraid Internet address handy throughout this guide, so it would be best to write it down on a piece of paper.
5. A Gentoo LiveCD (because it's quick and easy to use =P ) for your machine (i.e. if you have Intel x86, get the latest x86 version, or x86_64 if you have an AMD64, etc.). Also, you should have a wired Ethernet card; unfortunately, getting a wireless card to work with any distro's LiveCD is next to impossible. If you have both wired and wireless, use the wired for Gentoo and do things as you normally would when the new SUSE install is about to be booted.
6. The originally shipped kernel (i.e. 2.6.13-15-default) must be the one currently installed in your running SUSE 10.0 installation. If you updated to the newer patched 2.6.13-15.7-default, then you will have to use YaST to downgrade to the original.

The Procedure:

Step 1 – Installing the new SUSE 10.0 system

Boot SUSE 10.0 and log into KDE
Insert the SUSE 10.0 CD1 or DVD disk into your drive
Start the YaST Control Center
Under Software, choose Installation into Directory
Click on Options and choose a Target Directory or leave as the default
Check Run YaST and SuSEconfig on first boot
DO NOT check Create Image
Click Accept
Click on Software and make your software choices
Click Accept
Click Next

The new system is being installed into the directory (default = /var/tmp/drinstall) and may take some time depending on your software choices.
When the installation is nearly complete, YaST will complain about the installation of the kernel. This can be safely ignored, as mkinitrd is what is actually failing, and we must make our own anyway.

Step 2 – Preparing the new SUSE install for RAID (i.e. hacking it)

Make a directory on your desktop and call it backup, then copy and paste the following files/folders to it:

/boot (this is a directory...duh!)
/sbin/mkinitrd (script file – the one that failed earlier during install)
/etc/fstab (mounted file system file – or rather what should be mounted during boot)

Now, open the original /sbin/mkinitrd in Kate with root permissions so it can be modified.
Select View->Show Line Numbers from Kate's menu.
At line 1178, insert the following exactly:

# Add dmraid
echo "Adding dmraid..."
cp_bin /sbin/dmraid $tmp_mnt/sbin/dmraid

Make sure to have an empty line above and below the new code.
At line 1971, insert the following exactly:

cat_linuxrc <<-EOF
|# Workaround: dmraid should not probe cdroms, but it does.
|# We'll remove all cdrom device nodes till dmraid does this check by itself.
|for y in hda hdb hdc hdd hde hdf hdg hdh sr0 sr1 sr2 sr3;
|do
| if (grep -q "$y" /proc/sys/dev/cdrom/info)
| then
| rm -f /dev/"$y"
| fi
|done
|# Now we can load dmraid
|dmraid -ay -i
EOF
echo

NOTE: This is VERY IMPORTANT! The spaces before the | character are tabs and MUST be tabs.

Make sure to have an empty line above and below the new code.
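Since the tabs are easy to get wrong, here is a quick way to check them. This is only a sketch run against a sample file, but the same grep works against your edited /sbin/mkinitrd:

```shell
# Verify that the heredoc lines start with a real TAB before the '|' character.
# Demonstrated on a temp file; run the same grep against /sbin/mkinitrd.
FILE=$(mktemp)
printf '\t|dmraid -ay -i\n' > "$FILE"   # one correctly tab-indented sample line
TAB=$(printf '\t')
if grep -q "^${TAB}|" "$FILE"; then
    echo "tabs OK"
else
    echo "no tab found - the heredoc will break"
fi
rm -f "$FILE"
```

If the grep finds nothing in your real mkinitrd, your editor probably converted the tabs to spaces.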
At line 2927, insert the following exactly:
# HACKED: prevent LVM and DM etc from being detected
Now, comment out (i.e. place a # character at the beginning of the line, like the code you just inserted) all line numbers from 2929 to 2941.
Save the file.

This next part requires gcc to be installed on your system, so run sudo yast -i gcc gcc-c++ at a command line if you do not already have it installed.
Download the latest version of dmraid from the web address listed in the prerequisites section above. Be sure to download the one with the tar.bz2 extension. Extract it to your desktop. Find the file tools/Makefile.in within the extracted folder and open it in Kate. Remove line number 36, or comment it out with a # character as mentioned above. Then open a terminal with root permissions (i.e. type su -), and cd to the newly extracted dmraid directory on your desktop. While in the directory that contains the configure script, type:

./configure
make
cp -f tools/dmraid /sbin/dmraid
vi /etc/sysconfig/kernel

Near the top of the file opened by the last command, there should be a line that looks similar to this:

INITRD_MODULES="sata_via via82cxxx reiserfs processor thermal fan"

Write the information within the quotes on a piece of paper, then type dm-mod just before the closing quote. In vi, press Ins (or i) to start editing; once modified, press Esc, then type :w to save and :q to quit.
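If you would rather not edit by hand, the same change can be scripted with sed. This is a sketch on a temporary copy (the module list is just the example from above); point it at /etc/sysconfig/kernel yourself once you trust it:

```shell
# Append dm-mod inside the INITRD_MODULES quotes, shown on a temp copy.
FILE=$(mktemp)
echo 'INITRD_MODULES="sata_via via82cxxx reiserfs processor thermal fan"' > "$FILE"
# Insert " dm-mod" just before the closing quote, as the guide instructs
sed -i 's/^\(INITRD_MODULES=".*\)"/\1 dm-mod"/' "$FILE"
cat "$FILE"   # the list should now end with dm-mod
rm -f "$FILE"
```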

Back at the command prompt, type mkinitrd. If all goes well, you should see Adding dmraid... and a bunch of other messages that don't say error. We should now have a new initrd/initramfs located in the /boot directory, in fact it replaced the one that was there originally. Copy this new file to your new SUSE installation by issuing the following command:

cp /boot/initrd-2.6.13-15-default your-new-suse-installation-directory/boot/initrd-2.6.13-15-default

Copy some other needed files to the new system:

cp /boot/initrd your-new-suse-installation-directory/boot/initrd
cp /sbin/dmraid your-new-suse-installation-directory/sbin/dmraid
cp /sbin/mkinitrd your-new-suse-installation-directory/sbin/mkinitrd
cp /etc/sysconfig/kernel your-new-suse-installation-directory/etc/sysconfig/kernel
cp /etc/fstab your-new-suse-installation-directory/etc/fstab

Copy and paste your /boot/grub directory over to your-new-suse-installation-directory/boot directory. You will need root permissions to do this, so use File Manager – Super User Mode if necessary.

Step 3 – Archiving and storing the new SUSE installation

Navigate using the File Manager – Super User Mode and go to the new SUSE installation directory. Select all the directories contained within, right-click and choose Compress->Add to Archive... . In the new window change Location to the directory and filename you want and Open as to Gzipped Tar Archive. This may take a while...

Once finished, copy your-new-suse-installation-archive.tar.gz to whatever medium you like, as long as it will be retrievable once your RAID hard drives have been wiped clean. For example, copy it to a CD/DVD disc if you have 2 or more CD/DVD drives, or to a spare hard drive that will not be included in the RAID. In my case, I had to ftp it to a remote computer running Windows XP (sad but true). Originally, I didn't compress the archive; it was 2GB, and oddly, Windows wouldn't allow it to be retrieved by ftp afterwards. Once compressed down to less than 1GB, no problem...just one of the many reasons why I now use Linux!
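For the record, the archiving can also be done with tar from a root terminal instead of Konqueror. This sketch uses a throwaway directory; for real use, substitute your-new-suse-installation-directory for SRC:

```shell
# Create a gzipped tar of the directory install, preserving permissions.
# SRC is a throwaway stand-in for your-new-suse-installation-directory.
SRC=$(mktemp -d)
mkdir -p "$SRC/etc" && echo demo > "$SRC/etc/demo.conf"
tar -C "$SRC" -czpf /tmp/new-suse.tar.gz .
tar -tzf /tmp/new-suse.tar.gz | head   # quick sanity check of the contents
rm -rf "$SRC" /tmp/new-suse.tar.gz
```

The -p flag keeps file permissions, which matters for a root filesystem archive.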

crushton
14th December 2005, 04:35
Step 4 – Setting up the RAID and restoring the new SUSE installation onto it

Make sure you have a working wired Internet connection, place the Gentoo LiveCD into your drive, reboot, change the BIOS accordingly to boot from CD, and set up your RAID disks in the RAID BIOS. At the boot: prompt just hit Enter, and likewise for every option thereafter, until you get to the Gnome desktop.
Download the dmraid source, as you did before, to the Gnome desktop. Extract it to the desktop, then navigate to the extracted directory in a command terminal window with root permissions. This is done by typing sudo su - at the command prompt in the terminal window.
Compile the source in the same manner as before (you will have to modify the tools/Makefile.in file once again; you can use vi this time, now that you know how):

vi extracted-dmraid-directory/tools/Makefile.in

After editing the line in Makefile.in, type:

./configure
make
modprobe dm-mod
tools/dmraid -ay -i
ls /dev/mapper

Your output should resemble something like:

control via_ebfejiabah

The important entry (more correctly known as a device node) is the one that begins with via_. It will have a different prefix depending on your RAID hardware. Make note of it; for simplicity I will use via_ebfejiabah, and you should substitute yours. Now type:

fdisk /dev/mapper/via_ebfejiabah

Set up at least 2 partitions with fdisk: one of type 82 for your swap and the other of type 83 for your main SUSE installation. Refer to the fdisk help (m for help) for info on what to do. Afterwards, before writing the partition table and exiting fdisk, type p to print the partition table. Your output might look something like this:

Command (m for help): p

Disk /dev/mapper/via_ebfejiabah: 163.9 GB, 163928603648 bytes
255 heads, 63 sectors/track, 19929 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/mapper/via_ebfejiabah1 1 125 1004031 82 Linux swap / Solaris
/dev/mapper/via_ebfejiabah2 126 19929 159075630 83 Linux

The important parts of the output are the geometry values; make note of them from your own output (i.e. heads=?, sectors=? and cylinders=?). We will need them later.
You may now write the partition table and quit fdisk. You must now reboot and start the LiveCD again, following everything in this step again except the initial RAID BIOS setup, up to the point where we began using fdisk. We don't need to set up the partitions again. Gain access to your-new-suse-installation-archive.tar.gz by mounting the spare disk, mounting the CD drive, using ftp, etc. Remember, to mount a volume type:

mkdir /mnt/your-mount-point
mount -t your-volumes-filesystem /dev/your-device /mnt/your-mount-point

If using ftp, like I had to, use Gnome to Connect to Server and it will mount the ftp directory on the desktop. Now we must format the new partitions and extract our new installation onto the root partition. Type the following:

mkswap /dev/mapper/via_ebfejiabah1
mkreiserfs /dev/mapper/via_ebfejiabah2
mkdir /mnt/suse10
mount -t reiserfs /dev/mapper/via_ebfejiabah2 /mnt/suse10

Of course, you'll want to replace the device names and filesystem listed above with your specific settings/info. Copy your-new-suse-installation-archive.tar.gz to /mnt/suse10 and extract it using tar at the command prompt.
For example:

cd /mnt/suse10
tar --preserve -xzf your-new-suse-installation-archive.tar.gz

This will take a while...then:

rm your-new-suse-installation-archive.tar.gz
vi etc/fstab

In vi change your root device to /dev/mapper/your-root-partition and your swap device to /dev/mapper/your-swap-partition. (i.e. mine were via_ebfejiabah2 and via_ebfejiabah1 respectively)
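If you prefer sed over vi for the fstab change, something like the following works. It is a sketch on a temp file with made-up original device names, and via_ebfejiabah1/2 are just this guide's example names:

```shell
# Rewrite the root and swap entries to the device-mapper nodes (example names).
FILE=$(mktemp)
printf '%s\n' '/dev/hda1 swap swap defaults 0 0' \
              '/dev/hda2 / reiserfs acl,user_xattr 1 1' > "$FILE"
sed -i -e 's|^/dev/hda1 |/dev/mapper/via_ebfejiabah1 |' \
       -e 's|^/dev/hda2 |/dev/mapper/via_ebfejiabah2 |' "$FILE"
cat "$FILE"   # both entries now point at /dev/mapper
rm -f "$FILE"
```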

Step 5 – Making GRUB work with RAID

First we need to modify some files in the /mnt/suse10/boot/grub directory using vi. Type the following:

cd /mnt/suse10/boot/grub
vi device.map

The structure of the device.map file is fairly simple. Just make sure that each entry corresponds to your new drive layout. For example:

(hd0) /dev/mapper/your-raid-device

Save the changes then edit the Grub menu:

vi menu.lst

My menu reads as follows:

# Modified by YaST2. Last modification on Sun Dec 11 20:40:40 UTC 2005

color white/blue black/light-gray
default 0
timeout 5
gfxmenu (hd0,1)/boot/message

###Don't change this comment - YaST2 identifier: Original name: linux###
title SUSE LINUX 10.0
root (hd0,1)
kernel /boot/vmlinuz root=/dev/mapper/via_ebfejiabah2 vga=0x31a selinux=0 resume=/dev/mapper/via_ebfejiabah1 splash=silent showopts
initrd /boot/initrd

###Don't change this comment - YaST2 identifier: Original name: failsafe###
title Failsafe -- SUSE LINUX 10.0
root (hd0,1)
kernel /boot/vmlinuz root=/dev/mapper/via_ebfejiabah2 vga=normal showopts ide=nodma apm=off acpi=off noresume selinux=0 edd=off 3
initrd /boot/initrd

The necessary changes are the root (hd0,1) entries and the /dev/mapper device names on the kernel lines; change them to suit your configuration. Now we install the grub MBR on our disk so it finds and boots SUSE – or, more correctly, the kernel and initrd/initramfs.
When using grub, we must know the partition layout of our disks. In the following example, my partitions were set up as displayed by the fdisk output mentioned above in step 4. My root partition for Linux/SUSE was my second partition; thus, when using grub, I have to refer to that partition as (hd0,1), whereas (hd0,0) would refer to the first rather than the second. Also, (hd0) refers to the first disk, assuming you installed your RAID as the first 2 or more disks. I assume you get the idea. Just make sure the numbers correspond to your particular setup when typing in the details below. Type the following in a terminal with root permissions (i.e. sudo su -):

grub

At the grub prompt type:

device (hd0,1) /dev/mapper/via_ebfejiabah2
device (hd0) /dev/mapper/via_ebfejiabah

This is where we need the fdisk info recorded earlier. Replace the geometry numbers (cylinders, heads, sectors) with yours:

geometry (hd0) 19929 255 63
root (hd0,1)
setup (hd0)

You should now get an output saying some stuff, but nothing referring to errors. Thus all is well so far.

Step 6 – Booting the new SUSE installation

At this point the new installation is ready to be booted. Just make sure your BIOS settings are configured for booting from your RAID disk setup, and you should probably disable booting from CD. Assuming everything worked, a familiar SUSE boot screen should appear, and naturally SUSE should begin the boot process. On first boot, SUSE will start YaST. We selected this option earlier during the installation of SUSE, and it is required to properly set up the new system. Just follow the instructions and do what you normally would during SUSE installation. The only significant difference is that YaST is displayed in terminal mode rather than the GUI. Otherwise, it is identical to its GUI counterpart. Once YaST has completed, the system defaults to terminal mode.
You will need to edit the /etc/inittab file in order to boot into graphical mode by default. This is rather simple; at the command prompt type the following:

vi /etc/inittab

And then find the line that says:

d:3:initdefault:

Change the 3 to a 5, save the file, exit and reboot.
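The same runlevel change can be scripted. This sketch edits a temporary copy; apply the sed to /etc/inittab as root once you trust it:

```shell
# Switch the default runlevel from 3 to 5, shown on a temp copy of inittab.
FILE=$(mktemp)
echo 'id:3:initdefault:' > "$FILE"
sed -i 's/:3:initdefault:/:5:initdefault:/' "$FILE"
cat "$FILE"   # the default runlevel is now 5
rm -f "$FILE"
```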

DONE...Have fun!

crushton
14th December 2005, 07:17
Just to be on the safe side...have a look at the attached mkinitrd. Yours should be identical. You can either just use mine or follow the directions to do it yourself. I recommend that you try yourself however =)
Also, just in case the question gets asked, which I am sure someone intuitive enough will, here is why the commented-out lines near the end of the file relating to LVM are required...

If you ever plan on updating your kernel (i.e. through YOU, the online updater), which of course is highly recommended considering the bug fixes, then SUSE will try to rebuild the initrd image. This is bad news without these lines commented out. Basically, SUSE will assume you have LVM partitioned disks because it detects the use of the device-mapper, and it isn't aware that we are using it for our own purposes, which currently are not supported. By commenting out those lines, we prevent SUSE from making this false assumption about our disk layout, retaining our forced setup and allowing mkinitrd to fly by none the wiser. With this said, it may also be a good idea to back up your modified mkinitrd script in the unfortunate event that a future SUSE update replaces it. If that happens, chances are they added something new to the boot process that is necessary in the initrd. To be on the safe side, always read the updates YOU is providing, and don't be too hasty accepting updates unless you're sure this critical file is not being replaced.
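One way to notice a silent replacement is to keep a checksum of the hacked script and re-check it after each update. A minimal sketch (a temp file stands in for /sbin/mkinitrd here):

```shell
# Record a checksum of the modified script and re-check it after updates.
FILE=$(mktemp)                        # stand-in for /sbin/mkinitrd
echo '# hacked mkinitrd' > "$FILE"
md5sum "$FILE" > "$FILE.md5"          # save the checksum somewhere safe
md5sum -c "$FILE.md5" && echo "mkinitrd unchanged"
rm -f "$FILE" "$FILE.md5"
```

If md5sum -c reports a mismatch after an update, your hacked script has been overwritten and needs to be re-applied.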

Don't forget to change the permissions on this file after downloading it, only root should have access to write!

Regards...C.R.

mshah
2nd January 2006, 23:40
1. I have one IDE drive that hosts SUSE 10 and XP, plus another partition.
2. I also have 4 x 250 GB SATA drives on an Intel motherboard with Intel software RAID.
3. I have created 3 volumes/partitions on the SATA drives. The first is a 250 MB RAID1 on the first 2 drives; then on the latter 2 drives I created 215 MB RAID1 and 70 MB RAID0 partitions.

Now the problem description:
I can use all 3 RAID volumes correctly in XP. However, when I boot SUSE, I do not see the RAID0 volume at all, and I see the RAID1 volumes as unbound (4 volumes vs. 2). This happened before I tried the attached how-tos and without using dmraid.

I tried to follow the instructions posted here for 2 days, made adjustments as suggested, and considered that since I'm not booting from the RAID drive it should be simpler, but it didn't help. I must be doing something wrong.

Any help would be appreciated. I'm a Linux newbie, so please consider that.

till
3rd January 2006, 09:24
As far as I know, the SATA RAID controllers that are available as onboard controllers are currently not supported by Linux.

mshah
4th January 2006, 02:19
As far as I know, the SATA RAID controllers that are available as onboard controllers are currently not supported by Linux.

I thought that this thread and the how-tos address how to make Linux work with those SATA (fake) RAIDs. Are you sure that SATA RAIDs will not work with Linux?

till
4th January 2006, 11:06
I thought that this thread and the how-tos address how to make Linux work with those SATA (fake) RAIDs. Are you sure that SATA RAIDs will not work with Linux?

Yes, this thread is about how to make fake RAID work.

You see one RAID volume in Windows because drivers for it exist in Windows.
On Linux you see the individual hard disks; that's because there are no Linux RAID drivers available for your SATA controller.

That explains why you see 4 vs. 2 volumes.

If you explain the errors you get in a bit more detail, we can try to fix them.

Dieda2000
4th January 2006, 18:46
Hi,
Nice guide, works almost like a charm,
apart from the fact that every third or fourth boot my machine hangs while displaying:
".. waiting for /dev/mapper/sil_afbieacedhaj2 to appear ..."
As said, the other times it works.

Moreover, while booting there is always the message
"grep: command not found"
How did you use grep at this early stage of booting?

Specs: Suse 10.0 x86-x64, A8n-SLI Prem, pcie-Conroller Sil 3132
kernel: 2.6.15-rc6-smp

Another note:
Silicon Image's RAID controllers like the 3132 or 3114 can use a certain mixed-mode RAID, like Intel's Matrix RAID on the ICH6 or ICH7. For example, with two Maxtor 6V300F0 drives, I created a RAID0 array on the first 200GB of each disk and a RAID1 array on the remaining 100GB of each disk. I can use it with Windows, but dmraid can only discover the first RAID array.
I think it's a nice feature. Any clues on how to make dmraid discover the second array?

falko
4th January 2006, 20:17
Moreover, while booting there is always the message
"grep: command not found"
How did you use grep at this early stage of booting?

Is grep installed? Run which grep to find out.

mshah
7th January 2006, 17:22
Yes, this thread is about how to make fake RAID work.

You see one RAID volume in Windows because drivers for it exist in Windows.
On Linux you see the individual hard disks; that's because there are no Linux RAID drivers available for your SATA controller.

That explains why you see 4 vs. 2 volumes.

If you explain the errors you get in a bit more detail, we can try to fix them.

Till - thanks for the response. As I explained, I don't see any errors. The only thing I see is that one of the RAID0 volumes is not visible, while the other RAID1 volumes are visible as unbound, so I can't use them. Should I attach some file from the computer so that we can find out what's going on? Let me know where to look for the boot log file or any other file and I'll attach it here. Again, thanks for your help.

joek9k
6th February 2006, 02:55
I managed to trick Fedora Core 4 into using my Silicon Image (SIL) SATA controller for RAID 1 (mirroring) by first configuring the software RAID 1 in SuSE 10 on a fresh install:
md0 mounted on /
md1 mounted on /home
md2 mounted on /swap

The funny part is that the whole reason I then took these partitions into Fedora Core 4 was that, after doing the install with SuSE 10, formatting the drives and installing everything, SuSE 10 kernel panicked upon first reboot.

So I put Fedora Core 4 in there; DiskDruid picked up the partition info, and then I installed it. It gave me a warning message that you'd have to see to believe, but after Fedora installed I could definitely hear the drives working as a software RAID (the sound of the configuration is a dead giveaway, like an echo, the same as a recent XP Pro install I did). So it worked, but it didn't work in my OS of choice (SuSE 10).

Another thing is that I had to go and purchase the Silicon Image controller (PCI) for 40 bucks (a software RAID controller), which makes me want to take back my SATA drives, just get a couple of IDE drives, do a software IDE RAID, and save all the effort.

Now that I see how much BS software RAID is, I'm thinking that a 3ware hardware RAID controller with true Linux support and a server motherboard with 64-bit PCI is probably worth the money, because my time is worth a lot more than all this BS. Anyone selling a server motherboard for cheap? :)

It'd be nice if all this BIOS software RAID worked right now and saw one drive instead of two. I've been searching for other distros, but I think it'd be cheaper to just get a real server instead of trying to turn a $50 motherboard into one. Maybe in kernel 2.7.

markes
14th March 2006, 19:27
Nice howto, it works fine, but I have noticed some peculiar things:

- After booting and logging into KDE, the floppy in MyComputer/media is always mounted, although I unmounted it.
media:/ -Konqueror
/Diskette (fd0)
/DVD (dvdram)
/Festplatte (mapper/via_bdeaacdjjh1)
/Festplatte (mapper/via_bdeaacdjjh10)
/Festplatte (mapper/via_bdeaacdjjh4)
/Festplatte (mapper/via_bdeaacdjjh5)
/Festplatte (mapper/via_bdeaacdjjh6)
/Festplatte (mapper/via_bdeaacdjjh7)
/Festplatte (mapper/via_bdeaacdjjh8)
/Festplatte (mapper/via_bdeaacdjjh9)

- If I log out and then log in as a different (or the same) user without rebooting, I get:
MyComputer/media:/ -Konqueror
/8.4G Medium
/Diskettenlaufwerk (floppy drive)
After clicking 8.4G Medium I get the message:
Could not mount device.
The reported error was:
mount: can't find /dev/sda1 in /etc/fstab or /etc/mtab

- Some warnings and errors appear in /var/log/messages and /var/log/boot.msg.
In my /var/log/messages, the warning "grep not found" appears too:
...
Mar 14 10:31:41 linux kernel: attempt to access beyond end of device
Mar 14 10:31:41 linux kernel: sda: rw=0, want=312581850, limit=312581808
Mar 14 10:31:41 linux kernel: printk: 807 messages suppressed.
Mar 14 10:31:41 linux kernel: Buffer I/O error on device dm-0, logical block 312581804
...
Mar 14 10:32:01 linux kernel: bootsplash: status on console 0 changed to on
Mar 14 10:32:01 linux hal-subfs-mount[6327]: By hald-subfs-mount created dir /media/floppy got removed.
Mar 14 10:32:01 linux kernel: printk: 90 messages suppressed.
Mar 14 10:32:01 linux kernel: Buffer I/O error on device sda4, logical block 8241312
...
Mar 14 10:32:02 linux hal-subfs-mount[6338]: MOUNTPOINT:: /media/floppy
Mar 14 10:32:02 linux kernel: subfs 0.9
Mar 14 10:32:02 linux hal-subfs-mount[6338]: Collected mount options and Called(0) /bin/mount -t subfs -o fs=floppyfss,sync,procuid,nosuid,nodev,exec /dev/fd0 "/media/floppy"
Mar 14 10:32:02 linux kernel: end_request: I/O error, dev fd0, sector 0
Mar 14 10:32:02 linux submountd: mount failure, No such device or address
Mar 14 10:32:02 linux kernel: end_request: I/O error, dev fd0, sector 0
Mar 14 10:32:02 linux kernel: subfs: unsuccessful attempt to mount media (256)

/var/log/boot.msg
...
<6>scsi0 : sata_via
<7>ata2: dev 0 cfg 49:2f00 82:746b 83:7f01 84:4023 85:7469 86:3c01 87:4023 88:80ff
<6>ata2: dev 0 ATA, max UDMA7, 312581808 sectors: lba48
<6>ata2: dev 0 configured for UDMA/133
<6>scsi1 : sata_via
<5> Vendor: ATA Model: SAMSUNG HD160JJ Rev: ZM10
<5> Type: Direct-Access ANSI SCSI revision: 05
<5>SCSI device sda: 312581808 512-byte hdwr sectors (160042 MB)
<5>SCSI device sda: drive cache: write back
<5>SCSI device sda: 312581808 512-byte hdwr sectors (160042 MB)
<5>SCSI device sda: drive cache: write back
<6> sda: sda1 sda2 < > sda3 sda4
<5>Attached scsi disk sda at scsi0, channel 0, id 0, lun 0
<5> Vendor: ATA Model: SAMSUNG HD160JJ Rev: ZM10
<5> Type: Direct-Access ANSI SCSI revision: 05
<5>SCSI device sdb: 312581808 512-byte hdwr sectors (160042 MB)
<5>SCSI device sdb: drive cache: write back
<5>SCSI device sdb: 312581808 512-byte hdwr sectors (160042 MB)
<5>SCSI device sdb: drive cache: write back
<6> sdb:<3>Buffer I/O error on device sda3, logical block 2361344
<3>Buffer I/O error on device sda3, logical block 2361345
<3>Buffer I/O error on device sda3, logical block 2361346
<3>Buffer I/O error on device sda3, logical block 2361347
<3>Buffer I/O error on device sda3, logical block 2361348
<3>Buffer I/O error on device sda3, logical block 2361349
<3>Buffer I/O error on device sda3, logical block 2361350
<3>Buffer I/O error on device sda3, logical block 2361351
<3>Buffer I/O error on device sda4, logical block 8241312
<3>Buffer I/O error on device sda4, logical block 8241313
<5>Attached scsi generic sg0 at scsi0, channel 0, id 0, lun 0, type 0
...
<3>Buffer I/O error on device sda3, logical block 2361344
..
Loading required kernel modules
doneRestore device permissionsdone
Warning: ignoring extra data in partition table 5
Warning: ignoring extra data in partition table 5
Warning: ignoring extra data in partition table 5
Warning: invalid flag 0xffffbf76 of partition table 5 will be corrected by w(rite)
Disk /dev/sdb doesn't contain a valid partition table
Activating remaining swap-devices in /etc/fstab...

I installed it as you described, without any errors:
- the line in /etc/sysconfig/kernel I changed to
INITRD_MODULES="sata_via via82cxxx processor thermal fan reiserfs dm-mod"

-my /boot/grub/device.map
(fd0) /dev/fd0
(hd0) /dev/mapper/via_bdeaacdjjh

-my /boot/grub/menu.lst
color white/blue black/light-gray
default 0
timeout 8
gfxmenu (hd0,3)/boot/message

###Don't change this comment - YaST2 identifier: Original name: windows###
title Windows
chainloader (hd0,0)+1

###Don't change this comment - YaST2 identifier: Original name: linux###
title SUSE LINUX 10.0
root (hd0,3)
kernel /boot/vmlinuz root=/dev/mapper/via_bdeaacdjjh4 vga=0x317 selinux=0 resume=/dev/mapper/via_bdeaacdjjh3 splash=silent showopts
initrd /boot/initrd

###Don't change this comment - YaST2 identifier: Original name: floppy###
title Diskette
chainloader (fd0)+1

###Don't change this comment - YaST2 identifier: Original name: failsafe###
title Failsafe -- SUSE LINUX 10.0
root (hd0,3)
kernel /boot/vmlinuz root=/dev/mapper/via_bdeaacdjjh4 vga=normal showopts ide=nodma apm=off acpi=off noresume selinux=0 nosmp noapic maxcpus=0 edd=off 3
initrd /boot/initrd

linux:/home/mk # fdisk -l
Warning: ignoring extra data in partition table 5
Warning: ignoring extra data in partition table 5
Warning: ignoring extra data in partition table 5
Warning: invalid flag 0xffffbf76 of partition table 5 will be corrected by w(rite)

Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 1020 8193118+ 7 HPFS/NTFS
/dev/sda2 1021 36715 286720087+ f W95 Ext'd (LBA)
/dev/sda3 36716 36862 1180777+ 82 Linux swap / Solaris
/dev/sda4 36863 38914 16482690 83 Linux
/dev/sda5 ? 44606 181585 1100285363 3c PartitionMagic recovery

Disk /dev/sdb: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table


linux:/home/mk # fdisk /dev/mapper/via_bdeaacdjjh

The number of cylinders for this disk is set to 38914.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other operating systems
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/mapper/via_bdeaacdjjh: 320.0 GB, 320083770368 bytes
255 heads, 63 sectors/track, 38914 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/mapper/via_bdeaacdjjh1 * 1 1020 8193118+ 7 HPFS/NTFS
/dev/mapper/via_bdeaacdjjh2 1021 36715 286720087+ f W95 Ext'd (LBA)
/dev/mapper/via_bdeaacdjjh3 36716 36862 1180777+ 82 Linux swap / Solaris
/dev/mapper/via_bdeaacdjjh4 36863 38914 16482690 83 Linux
/dev/mapper/via_bdeaacdjjh5 1021 3570 20482843+ 7 HPFS/NTFS
/dev/mapper/via_bdeaacdjjh6 3571 6120 20482843+ 7 HPFS/NTFS
/dev/mapper/via_bdeaacdjjh7 6121 18868 102398278+ 7 HPFS/NTFS
/dev/mapper/via_bdeaacdjjh8 18869 31616 102398278+ 7 HPFS/NTFS
/dev/mapper/via_bdeaacdjjh9 31617 36575 39833136 7 HPFS/NTFS
/dev/mapper/via_bdeaacdjjh10 36576 36715 1124518+ b W95 FAT32


my /etc/fstab
/dev/mapper/via_bdeaacdjjh4 / reiserfs acl,user_xattr 1 1
/dev/mapper/via_bdeaacdjjh1 /windows/C ntfs noauto,ro,users,gid=users,umask=0002,nls=utf8 0 0
/dev/mapper/via_bdeaacdjjh5 /windows/D ntfs noauto,ro,users,gid=users,umask=0002,nls=utf8 0 0
/dev/mapper/via_bdeaacdjjh6 /windows/E ntfs noauto,ro,users,gid=users,umask=0002,nls=utf8 0 0
/dev/mapper/via_bdeaacdjjh7 /windows/F ntfs noauto,ro,users,gid=users,umask=0002,nls=utf8 0 0
/dev/mapper/via_bdeaacdjjh8 /windows/G ntfs noauto,ro,users,gid=users,umask=0002,nls=utf8 0 0
/dev/mapper/via_bdeaacdjjh9 /windows/H ntfs noauto,ro,users,gid=users,umask=0002,nls=utf8 0 0
/dev/mapper/via_bdeaacdjjh10 /windows/I vfat users,gid=users,umask=0002,utf8=true 0 0
/dev/mapper/via_bdeaacdjjh3 swap swap defaults 0 0
proc /proc proc defaults 0 0
sysfs /sys sysfs noauto 0 0
usbfs /proc/bus/usb usbfs noauto 0 0
devpts /dev/pts devpts mode=0620,gid=5 0 0
/dev/dvdram /media/dvdram subfs noauto,fs=cdfss,ro,procuid,nosuid,nodev,exec,iocha rset=utf8 0 0
/dev/fd0 /media/floppy subfs noauto,fs=floppyfss,procuid,nodev,nosuid,sync 0 0
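Before rebooting onto an fstab like this, it is worth checking that every /dev/mapper device it references actually exists, since a missing node will hang the boot. A minimal sketch (run here against a scratch copy of the fstab; point it at /etc/fstab on the real system):

```shell
#!/bin/sh
# List the /dev/mapper devices an fstab references and flag missing ones.
check_mapper_devs() {
    awk '$1 ~ /^\/dev\/mapper\// { print $1 }' "$1" | while read -r dev; do
        if [ -e "$dev" ]; then echo "ok: $dev"; else echo "MISSING: $dev"; fi
    done
}

# Demo on a scratch fstab so the sketch is safe to run anywhere.
f=$(mktemp)
printf '/dev/mapper/via_bdeaacdjjh4 / reiserfs acl,user_xattr 1 1\n' > "$f"
check_mapper_devs "$f"
rm -f "$f"
```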

greets
markes

HBauer
19th March 2006, 01:10
I installed everything following the howto on my system using a VIA SATA controller. But booting the installed system results in a lot of timeouts for the underlying SATA disks (/dev/sda, /dev/sdb). The "missing grep" message also appears. Any recommendations?

Greetings, HB

BTW: Isn't it possible to transfer the part you did using the Gentoo Live-CD to the installed SuSE using chroot?

markes
19th March 2006, 09:55
The "grep: command not found" warnings are produced by this code at line 1971 of mkinitrd:
cat_linuxrc <<-EOF
|# Workaround: dmraid should not probe cdroms, but it does.
|# We'll remove all cdrom device nodes till dmraid does this check by itself.
|for y in hda hdb hdc hdd hde hdf hdg hdh sr0 sr1 sr2 sr3;
|do
| if (grep -q "$y" /proc/sys/dev/cdrom/info)
| then
| rm -f /dev/"$y"
| fi
|done
|# Now we can load dmraid
|dmraid -ay -i
EOF

Solution:
Find out which port your cdrom drive hangs on and delete all the other entries. In my case my cdrom hangs on the secondary port as master, so it is "hdc". So I deleted everything except hdc (|for y in hdc;) and saved the file. After that, type mkinitrd in a console as root.
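The trimmed loop then behaves like the sketch below. This is a safe simulation: a temp file stands in for /proc/sys/dev/cdrom/info, and it only echoes instead of running rm -f, so it can be tried outside the initrd:

```shell
#!/bin/sh
# Simulation of the trimmed mkinitrd workaround, assuming the CD-ROM is hdc.
cdrom_cleanup() {
    info="$1"                     # stands in for /proc/sys/dev/cdrom/info
    for y in hdc; do              # keep only the node your drive really uses
        if grep -q "$y" "$info"; then
            echo "would remove /dev/$y"   # the real linuxrc runs: rm -f /dev/$y
        fi
    done
}

info=$(mktemp)
printf 'drive name:\thdc\n' > "$info"
cdrom_cleanup "$info"             # prints: would remove /dev/hdc
rm -f "$info"
```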

The warnings
...
Mar 19 08:58:44 linux kernel: attempt to access beyond end of device
Mar 19 08:58:44 linux kernel: sda: rw=0, want=312581850, limit=312581808
Mar 19 08:58:44 linux kernel: Buffer I/O error on device dm-0, logical block 312581804
Mar 19 08:58:44 linux kernel: attempt to access beyond end of device
Mar 19 08:58:44 linux kernel: sda: rw=0, want=312581852, limit=312581808
Mar 19 08:58:44 linux kernel: Buffer I/O error on device dm-0, logical block 312581805
Mar 19 08:58:44 linux kernel: attempt to access beyond end of device
Mar 19 08:58:44 linux kernel: sda: rw=0, want=312581854, limit=312581808
Mar 19 08:58:44 linux kernel: Buffer I/O error on device dm-0, logical block 312581806
etc

are produced by your Linux kernel. You have to patch your kernel if these warnings bother you:

diff -Nur linux-2.6.15/fs/partitions/check.c linux-2.6.15-check/fs/partitions/check.c
--- linux-2.6.15/fs/partitions/check.c 2006-01-03 04:21:10.000000000 +0100
+++ linux-2.6.15-check/fs/partitions/check.c 2006-02-08 21:20:03.000000000 +0100
@@ -175,8 +175,19 @@
memset(&state->parts, 0, sizeof(state->parts));
res = check_part[i++](state, bdev);
}
- if (res > 0)
+ if (res > 0) {
+ sector_t from, cap;
+ for(i = 1; i < state->limit; i++) {
+ from = state->parts[i].from;
+ cap = get_capacity(hd);
+ if(state->parts[i].size + from > cap) {
+ printk(KERN_WARNING " %s: partition %s%d beyond device capacity\n",
+ hd->disk_name, hd->disk_name, i);
+ state->parts[i].size = cap - (from < cap ? from : cap);
+ }
+ }
return state;
+ }
if (!res)
printk(" unknown partition table\n");
else if (warn_no_part)

Look at http://tienstra4.flatnet.tudelft.nl/~gerte/gen2dmraid/ and https://www.redhat.com/archives/ataraid-list/2006-February/msg00015.html for further information. From http://tienstra4.flatnet.tudelft.nl/~gerte/gen2dmraid/gen2dmraid-2.0.iso you can also download the Gentoo-based LiveCD with dmraid-1.0.0-rc9, so you can use the Gentoo LiveCD directly without having to install dmraid yourself.
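The heart of the patch is a simple bounds check: if a partition's start plus its size runs past the device capacity, the size is clipped so the partition ends at the last real sector. The same arithmetic in shell, using the sector numbers from the log above:

```shell
#!/bin/sh
# Mirror of the patch's clipping logic, in 512-byte sectors.
clamp_partition() {
    start=$1; size=$2; cap=$3
    if [ $((start + size)) -gt "$cap" ]; then
        if [ "$start" -lt "$cap" ]; then
            size=$((cap - start))   # partition starts on-disk: cut off the tail
        else
            size=0                  # partition starts past the end: empty it
        fi
    fi
    echo "$size"
}

# A 312581850-sector partition on a 312581808-sector disk gets clipped:
clamp_partition 0 312581850 312581808   # prints 312581808
```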

greets
markes

HBauer
20th March 2006, 02:31
Thanks for your answer, but that's not exactly my problem. ;)
Booting the original SuSE-kernel results in endless hanging periods (ata1/2 timeout command 0x?? stats 0x?? host_stats 0x??).
Booting a reduced kernel I compiled myself results in a kernel panic:
waiting for device /dev/mapper/via_ebdfgdfgeg2 to appear: ok
no record for 'mapper/via_ebdfgdfgeg2' in database
rootfs: major=254 minor=2 devn=65026
Mounting root /dev/mapper/via_ebdfgdfgeg2
mount: no such device
umount2: device or resource busy
Kernel panic - not syncing: Attempted to kill init!
I suspect udev is responsible for that. Does anybody know the exact reason?

Booting the same modified kernel (my own compilation) from a separate hd drastically reduces the timeout hang time.

Any suggestions about that? :)

Greetings, HB

markes
21st March 2006, 16:22
Hmmm, seems to be an fstab or mkinitrd problem. Have you also tried the mkinitrd from crushton?

greets
markes

HBauer
21st March 2006, 22:38
Yes, it's exactly the one I tried. I tried Fedora 5 and Gentoo, both of them work with the same hardware, but I don't know where to start with the analysis of mkinitrd...

Greetings, HB

mgosr
22nd March 2006, 06:53
If you aren't too picky about which distro: Red Hat Fedora Core 5 [Bordeaux] set up (AND BOOTED!!) without a glitch on my VIA SATA RAID 0 with two Opteron 242s. It could be best to get it and move on, at least for now.

Eventually maybe we'll have answers to why 64-bit technology and SATA RAID have lagged so much in support. Our hardware will be outdated before working solutions arrive. Does anyone know if there is a 64-bit Flash plug-in?

oL'z
29th March 2006, 05:25
Fake Raid? LVM in SuSe 10 64-bit on LP UT nf3 UD
2-WD 75GB on SATA 1-2
Do not enable RAID in the BIOS. GRUB, not LILO.
Set up 3 partitions on each drive, mirrored:
sda1 native 1-13 boot ext2
sda2 swap 14-275 swap
sda3 LVM 276-9728
sdb1 native 1-13 boot ext2
sdb2 swap 14-275 swap
sdb3 LVM 276-9728
system 144GB LVM2 2-stripe 64k
system/root/LV xfs

YaST does most of the work in expert mode.
You can add drives, back up, and boot with rescue media.
Some of this is covered here (http://aplawrence.com/Linux/lvm.html), some here (http://emidio.planamente.ch/pages/linux_howto_root_lvm_raid.php), and some here (http://www.howtoforge.com/forums/showthread.php?t=2454&highlight=sata+raid)

fpereira
20th April 2006, 23:32
I have recently installed, following a similar procedure, on a server with an LSI MegaRAID, and noticed very high load that seems to be caused by the kmirrord process.

Anyone have experience with LSI MegaRAID & kernel 2.6?

The machine still doesn't boot. Will solve the "grep: command not found" tomorrow.

Thanks for the info.

falko
21st April 2006, 11:46
Will solve the "grep: command not found" tomorrow.

Thanks for the info.
You must install the grep package.

schlocke
15th May 2006, 10:41
hi,
I had the same problems installing SUSE 10 on my raid. But with SUSE 10.1 there is a better way:

While installing, switch to the console and type:
dmraid -an
modprobe dm-mirror
dmraid -ay -i

now you can list your raid volumes:
ls /dev/mapper/

or you can edit the partitions on this raid:
fdisk /dev/mapper/<raiddevice>
for example:
fdisk /dev/mapper/nvidia_jhadcged

Next step: Use the YaST partition manager and re-read the partition table. Now there is a new drive, /dev/dm-3 or similar.
This drive cannot be edited or re-partitioned; you have to do that beforehand with fdisk.
Do NOT mount anything from your native drives (e.g. sda or sdb), because they are now "busy" and can't be used.

The next problem appears while installing the GRUB bootloader: there seems to be a bug resolving the drive name, so you must edit the parameters by hand. Instead of /dev/sda, use /dev/mapper/<your raid device>. Edit the device map and replace (hd0) with /dev/mapper/<your raid device>.
Edit menu.lst and replace (/dev/dm-,-1) with (/dev/mapper/<your raid device>,<your linux partition number>). See the GRUB manual for details.

After installation you can't boot from this volume, because dm-mirror is not loaded and dmraid is not active. So you must boot from CD and create your own initrd; see the first post in this thread. After that everything should work fine.
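Put concretely, and assuming the array name via_bdeaacdjjh with the Linux root on partition 4 (both are examples; adjust them to your own setup), the hand edits would look roughly like this:

```
# /boot/grub/device.map
(hd0)   /dev/mapper/via_bdeaacdjjh

# /boot/grub/menu.lst -- GRUB counts partitions from zero,
# so partition 4 becomes (hd0,3)
title SUSE 10.1 (dmraid)
    root (hd0,3)
    kernel /boot/vmlinuz root=/dev/mapper/via_bdeaacdjjh4
    initrd /boot/initrd
```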

Question to the others: mkinitrd seems to have changed, so the line numbers do not match. Does anybody know the right places?

falko
15th May 2006, 15:32
Question to the others: mkinitrd seems to have changed, so the line numbers do not match. Does anybody know the right places?
What line numbers?

MoonenW
20th May 2006, 11:47
(quoting schlocke's SUSE 10.1 instructions above in full)

I don't understand this.
Does this mean that I should be able to use an NVRAID-0 set built with WinXP as a boot device for SUSE Linux 10.1?
And if not, do you know how this can be done - if at all?

markes
22nd May 2006, 00:06
FakeRAID for SUSE 10.1 is working fine!

I hope my way is easier for you.


First of all, install Windows on your RAID disk, because otherwise it will overwrite the bootloader.

You have to adapt all of the following entries to your system's configuration!

-Take an old ATA hard disk and install the new SUSE 10.1 on it.
In YaST also install the GCC-C++ package.
Download dmraid-1.0.0rc11-pre1.tar.bz2 from http://people.redhat.com/~heinzm/sw/dmraid/tst/
(Warning: older versions do not work with the 2.6.16.13 kernel; the dmraid 0.99_1.0.0rc8-12 package included in SUSE 10.1 doesn't work either)

-Shut down the system, connect your RAID disks, and start SUSE 10.1 from your old hard disk.
Log in as root, extract the downloaded dmraid, change into the dmraid folder, open a console and type the following:

./configure
make
cp -f tools/dmraid /sbin/dmraid
modprobe dm-mod
tools/dmraid -ay -i
ls /dev/mapper

Now you should get output like: control via_bdeaacdjjh

-Create your partitions:
fdisk /dev/mapper/via_bdeaacdjjh
Create one swap and one reiserfs partition.

this is for example my partition table:
linux:/home/mk # fdisk /dev/mapper/via_bdeaacdjjh

The number of cylinders for this disk is set to 38914.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/mapper/via_bdeaacdjjh: 320.0 GB, 320083770368 bytes
255 heads, 63 sectors/track, 38914 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/mapper/via_bdeaacdjjh1 * 1 1020 8193118+ 7 HPFS/NTFS
/dev/mapper/via_bdeaacdjjh2 1021 36715 286720087+ f W95 Ext'd (LBA)
/dev/mapper/via_bdeaacdjjh3 36716 36862 1180777+ 82 Linux swap / Solaris <- my swap device
/dev/mapper/via_bdeaacdjjh4 36863 38914 16482690 83 Linux <- my root device
/dev/mapper/via_bdeaacdjjh5 1021 3570 20482843+ 7 HPFS/NTFS
/dev/mapper/via_bdeaacdjjh6 3571 6120 20482843+ 7 HPFS/NTFS
/dev/mapper/via_bdeaacdjjh7 6121 18868 102398278+ 7 HPFS/NTFS
/dev/mapper/via_bdeaacdjjh8 18869 31616 102398278+ 7 HPFS/NTFS
/dev/mapper/via_bdeaacdjjh9 31617 36575 39833136 7 HPFS/NTFS
/dev/mapper/via_bdeaacdjjh10 36576 36715 1124518+ b W95 FAT32

-Format your partitions:
mkswap /dev/mapper/via_bdeaacdjjh3
mkreiserfs /dev/mapper/via_bdeaacdjjh4

-Mount your new SUSE 10.1 root partition:
mkdir /mnt/suse10.1
mount -t reiserfs /dev/mapper/via_bdeaacdjjh4 /mnt/suse10.1

-Start YaST and under Software choose "Installation into Directory".
Change the target directory to /mnt/suse10.1.
Check "Run YaST and SuSEconfig on first start".
You can ignore the warning about /suse/i586/kernel-default-2.6.16.13-4i586.rpm.


-After the installation process has finished, open the kernel config file:
type vi /etc/sysconfig/kernel in a console
press the insert key and change INITRD_MODULES="sata_via via82cxxx processor thermal fan" to INITRD_MODULES="sata_via via82cxxx processor thermal fan dm-mod"
press ESC
:w
:q
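If you prefer a non-interactive edit to vi, the same change can be made with sed. The sketch below runs on a scratch copy so it is safe to try; on the real system the file is /etc/sysconfig/kernel:

```shell
#!/bin/sh
# Append dm-mod to the INITRD_MODULES list, whatever modules are already there.
f=$(mktemp)
echo 'INITRD_MODULES="sata_via via82cxxx processor thermal fan"' > "$f"

sed -i 's/^INITRD_MODULES="\(.*\)"/INITRD_MODULES="\1 dm-mod"/' "$f"

cat "$f"   # INITRD_MODULES="sata_via via82cxxx processor thermal fan dm-mod"
rm -f "$f"
```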

-Take my mkinitrd, extract it and copy it to /sbin/mkinitrd.
Type mkinitrd in a console.
Copy some files:
cp -R -T /boot/grub/ /mnt/suse10.1/boot/grub
cp /boot/initrd-2.6.16.13-4-default /mnt/suse10.1/boot/initrd-2.6.16.13-4-default
cp /boot/initrd /mnt/suse10.1/boot/initrd
cp /sbin/dmraid /mnt/suse10.1/sbin/dmraid
cp /sbin/mkinitrd /mnt/suse10.1/sbin/mkinitrd
cp /etc/sysconfig/kernel /mnt/suse10.1/etc/sysconfig/kernel
cp /etc/fstab /mnt/suse10.1/etc/fstab

-Modify the files device.map and menu.lst in your /mnt/suse10.1/boot/grub/ dir.
Modify fstab in your /mnt/suse10.1/etc/ dir.

-grub
Switch to a console and type grub, then enter:
device (hd0,3) /dev/mapper/via_bdeaacdjjh4
device (hd0) /dev/mapper/via_bdeaacdjjh
geometry (hd0) 38914 255 63
root (hd0,3)
setup (hd0)

-Shut down the system and remove your old hard disk. Boot the system; after the YaST procedure, log in as root and open inittab:
vi /etc/inittab
press the insert key
change id:3:initdefault: into id:5:initdefault:
press ESC
:w
:q

-Reboot your system.

-Don't forget to change your installation source!
Put in your first SUSE 10.1 CD, open YaST, add the CD as an installation source, and do the same with your add-on CD.

-I have also attached my fstab, device.map and menu.lst.

Good luck!

mamat_fr
27th May 2006, 18:23
hi,
I have an error (with SUSE 10.1):
ERROR: dos: reading /dev/mapper/sil_agafdjcaceaj[Invalid argument]
I have compiled dmraid-1.0rc11-pre1 with no problems.
Anyone have an idea?
With SUSE 10.0 this method works fine.
Thanks,
Mamat

falko
27th May 2006, 21:41
When do you get that error?

mamat_fr
27th May 2006, 21:53
When do you get that error?
Just after the compilation of dmraid, when I try to discover the partitions.
When I type ./tools/dmraid -ay -i I get this message: ERROR: dos: reading /dev/mapper/sil_agafdjcaceaj[Invalid argument]
But with SUSE 10.0 I didn't have this problem.

markes
28th May 2006, 12:00
I got that message too:
linux-r1rn:/home/mk # dmraid -ay -i
ERROR: dos: reading /dev/mapper/via_bdeaacdjjh[No such file or directory]
But this was because of an old dmraid version. With the new dmraid-1.0.0rc11-pre1.tar.bz2 I have no problems.

/var/log/messages:
May 21 07:44:32 linux-r1rn kernel: device-mapper: dm-stripe: Target length not divisible by chunk size
May 21 07:44:32 linux-r1rn kernel: device-mapper: error adding target to table
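That dm-stripe message fires when the length of the device-mapper table is not a multiple of the stripe chunk size. A quick divisibility check, with made-up example numbers (units are 512-byte sectors; a 64 KiB chunk is 128 sectors):

```shell
#!/bin/sh
# Check whether a dm table length is compatible with the stripe chunk size.
length=625163696   # sectors the table tries to map (example value)
chunk=128          # stripe chunk: 64 KiB = 128 sectors

if [ $((length % chunk)) -ne 0 ]; then
    echo "length not divisible by chunk: remainder $((length % chunk))"
else
    echo "length ok"
fi
```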

falko
28th May 2006, 22:26
I found this regarding your problem:
http://www.redhat.com/archives/ataraid-list/2005-October/msg00005.html
http://ubuntuforums.org/archive/index.php/t-2557.html
http://www.linuxquestions.org/questions/showthread.php?t=377346

WEARENOTALONE
28th June 2006, 19:01
Hello,
thanks to crushton, markes and maaki (from www.linux-club.de (http://www.linux-club.de)) I finally managed to install SUSE Linux 10.1 on my FakeRAID! Because I had some problems with the ways provided here, I wrote another guide (German language only) on how I did it.


Sincerely yours,
WEARENOTALONE

lt_wentoncha
2nd July 2006, 08:35
Hmm,

From a fresh install, can't you set up RAID 1 from there? When setting up the initial system partitions, can't you make /boot a normal partition, format two identical partitions on two separate HDDs as Linux RAID, and use the RAID utility thereafter? At least that's what I did...

WEARENOTALONE
2nd July 2006, 09:54
Like splitting the RAID 1, installing and setting up (incl. dmraid) Linux to the first (primary) HDD and finally rebuilding the mirrored RAID 1?

For RAID 1 that is probably true, but you cannot do that with RAID 0. For me it was much faster to install Linux to a separate HDD, because I already had one extra HDD built in and did not have to rebuild my RAID.

Sincerely yours,
WEARENOTALONE

gelah3
16th January 2009, 19:40
This is the first time I have installed Linux on this Intel ICH9R softRAID system, but I need an updated version of this howto. If anybody has one for openSUSE 11.1, please share it with us.

One more question: at what line of /etc/mkinitrd on a running Linux should I put the "dmraid" script to have openSUSE recognize my Vista device-mapper RAID at boot time?

I have a strange situation where I can only boot from floppy, and the disk order also changes on the fly. GRUB knows the disk order at boot time, but the OS changes all disks to non-RAID. I put dmraid -ay in /etc/init.d/boot.local to access Vista.

I have a fair amount of knowledge of Linux, but I'm not good with programming logic. Please reply; this is my first post in this list.

thanks in advance