HowtoForge Forums | HowtoForge - Linux Howtos and Tutorials


crushton 13th December 2005 23:15

HOWTO: SUSE 10.0 and Software RAID a.k.a FakeRAID
1 Attachment(s)
Motivation: I recently purchased another hard drive to complement my existing one, hoping to use a BIOS software RAID 0 (via the VIA chip) config with SUSE 10.0. This turned out to be a "no-go": 2.6 kernels apparently no longer support BIOS fakeraid setups. So, I rummaged through all the forums that even remotely discussed dmraid or RAID in general. Eventually I came across two howtos: one for Gentoo and the other for Ubuntu/Kubuntu. Neither provided enough info to get SUSE up and running. Of course, this would all be unnecessary if VIA Tech had simply released the Linux drivers as promised by the end of November. That did not happen, so I was on my own to find a way to "make" SUSE work. Thus, I present the consequence of my labour in the attached doc file. I hope it helps you get SUSE up and running as it did me. If not, post a message here and tell me what went wrong. I'll try my best to help. Regards...C.R.

EDIT: See below. I have attached an Open Document file (.odt), and I have also reformatted the howto and posted it here for quick reference if you do not wish to download anything. Enjoy!

falko 14th December 2005 00:53

Could you make a PDF out of the doc file and post it here? :) Or simply post the content of the file here?

crushton 14th December 2005 05:34

1 Attachment(s)
How about an Open Document file? PDF is too large and exceeds my upload limit for these forums =( If I post the content, all the formatting will be lost unless I reformat it for the forums, which will take quite a while. Hmm, well I guess I will do both (ODT and post content). Sorry that I used doc; at the time I was just trying to get the file size down.
Hope this is sufficient...regards C.R.
*********************************************************
HOWTO: SUSE 10.0 and Software RAID a.k.a FakeRAID
A Complete Guide by C. R.

Due to the nature of SUSE 10.0, this howto is rather long, but that is necessary in order to get SUSE installed and running correctly without a hitch. Also, this howto was devised using BIOS software RAID 0; other RAID levels may work by following this guide, but you are on your own if they don't.

Also, while I am sure there are quicker methods of reaching the same goal (e.g. if you have a spare disk, a few of the listed steps become unnecessary if other changes are made), I have purposefully left them out, as this guide is designed to be as generic as possible. Other than that, read carefully, send me a post if you have any questions, and good luck!


The Prerequisites:

1. One of the following software RAID chipsets:
Highpoint HPT37X
Highpoint HPT45X
Intel Software RAID
LSI Logic MegaRAID
NVidia NForce
Promise FastTrack
Silicon Image Medley
VIA Software RAID
2. A working SUSE 10.0 installation and the original installation CD/DVD (this guide assumes KDE as the GUI and does not contain any information regarding Gnome or the like). This working installation of SUSE should be on a plain hard drive with no Linux software RAID or LVM enabled. Make sure it is formatted with the defaults presented during the original installation onto a single disk.
3. Access to another PC via FTP, a spare hard drive (one which is not included in the RAID), two CD/DVD drives (one of which must be a burner), or some type of removable storage (e.g. a USB drive). Keep in mind, however, that about 1 GB of extra space will be required, depending on the installation options you choose for SUSE 10.0.
4. The latest source for dmraid, which can be obtained from the dmraid download page (as of this writing, latest = 1.0.0.rc9). You'll want to keep the dmraid Internet address handy throughout this guide, so it would be best to write it down on a piece of paper.
5. A Gentoo LiveCD (because it's quick and easy to use =P ) for your machine (i.e. if you have an Intel x86, get the latest x86 version, or x86_64 if you have an AMD64). Also, you should have a wired Ethernet card; unfortunately, getting a wireless card to work with any distro's LiveCD is next to impossible. If you have both wired and wireless, use the wired card for Gentoo and do things as you normally would once the new SUSE install is about to be booted.
6. The originally installed kernel (i.e. 2.6.13-15-default) currently running in your SUSE 10.0 installation. If you updated to the newer patched 2.6.13-15.7-default, then you will have to use YaST to downgrade to the original.

The Procedure:

Step 1 – Installing the new SUSE 10.0 system
Boot SUSE 10.0 and log into KDE
Insert the SUSE 10.0 CD1 or DVD disk into your drive
Start the YaST Control Center
Under Software, choose Installation into Directory
Click on Options and choose a Target Directory, or leave it as the default
Check Run YaST and SuSEconfig on first boot
DO NOT check Create Image
Click Accept
Click on Software and make your software choices
Click Accept
Click Next
The new system is being installed into the directory (default = /var/tmp/drinstall) and may take some time depending on your software choices.
When the installation is nearly complete, YaST will complain about the installation of the kernel. This can be safely ignored, as mkinitrd is what is actually failing, and we must make our own anyway.

Step 2 – Preparing the new SUSE install for RAID (i.e. hacking it)

Make a directory on your desktop and call it backup, then copy and paste the following files/folders to it:

/boot (this is a directory...duh!)
/sbin/mkinitrd (script file – the one that failed earlier during install)
/etc/fstab (mounted file system file – or rather what should be mounted during boot)
Now, open the original /sbin/mkinitrd in Kate with root permissions so it can be modified.
Select View->Show Line Numbers from Kate's menu.
At line 1178, insert the following exactly:

        # Add dmraid
        echo "Adding dmraid..."
        cp_bin /sbin/dmraid $tmp_mnt/sbin/dmraid

Make sure to have an empty line above and below the new code.
At line 1971, insert the following exactly:

        cat_linuxrc <<-EOF
        |# Workaround: dmraid should not probe cdroms, but it does.
        |# We'll remove all cdrom device nodes till dmraid does this check by itself.
        |for y in hda hdb hdc hdd hde hdf hdg hdh sr0 sr1 sr2 sr3; do
        |      if (grep -q "$y" /proc/sys/dev/cdrom/info)
        |      then
        |              rm -f /dev/"$y"
        |      fi
        |done
        |# Now we can load dmraid
        |dmraid -ay -i
        EOF

NOTE: This is VERY IMPORTANT! The spaces before the | character are tabs and MUST be tabs.

Make sure to have an empty line above and below the new code.
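As a quick sanity check of the tab requirement: the <<- form of a here-document strips leading tab characters (but not spaces) from each line, which is why space-indented lines would end up malformed in the generated linuxrc. A minimal sketch, using a hypothetical scratch script:

```shell
#!/bin/sh
# Sketch: show that a <<-EOF here-document strips leading TABs.
# The demo script is written with printf so the tab is unambiguous.
printf 'cat <<-EOF\n\tindented with a tab\nEOF\n' > /tmp/heredoc-demo.sh
sh /tmp/heredoc-demo.sh    # prints the line with the leading tab removed
rm -f /tmp/heredoc-demo.sh
```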
At line 2927, insert the following exactly:

        # HACKED: prevent LVM and DM etc from being detected
Now, comment out (i.e. place a # character at the beginning of the line, like the code you just inserted) all line numbers from 2929 to 2941.
Save the file.
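If you prefer the terminal over Kate, sed can comment out a whole line range at once. A sketch, demonstrated on a scratch file (on the real system you would run the same command on a copy of /sbin/mkinitrd with the range 2929,2941):

```shell
#!/bin/sh
# Sketch: comment out a range of lines with a single sed address range.
printf 'line1\nline2\nline3\nline4\n' > /tmp/range-demo.txt
sed -i '2,3 s/^/#/' /tmp/range-demo.txt   # prefix lines 2 through 3 with #
cat /tmp/range-demo.txt
rm -f /tmp/range-demo.txt
```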

This next part requires gcc to be installed on your system, so run sudo yast -i gcc gcc-c++ at a command line if you do not already have it installed.
Download the latest version of dmraid from the web address listed above in the prerequisites section. Be sure to download the one with tar.bz2 as the extension, and extract it to your desktop. Find the file under tools/ within the extracted folder, open it in Kate, and remove line 36 or comment it out with a # character as mentioned above. Then open a terminal with root permissions (i.e. type su -) and cd to the newly extracted dmraid directory on your desktop. While in the directory that lists the configure script file, build and install the dmraid binary, then open the kernel sysconfig file:

        ./configure
        make
        cp -f tools/dmraid /sbin/dmraid
        vi /etc/sysconfig/kernel

Near the top of the file, from the last command, there should be a line that looks similar to this:

INITRD_MODULES="sata_via via82cxxx reiserfs processor thermal fan"

Write the information within the quotes on a piece of paper, then add dm-mod just before the closing quote. In vi: press i to enter insert mode, make the change, press Esc, then type :wq and press Enter to save and quit.
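If vi feels awkward, the same edit can be scripted with sed; a sketch, demonstrated on a scratch copy (on the real system the file is /etc/sysconfig/kernel, and the module list shown is just the example from above):

```shell
#!/bin/sh
# Sketch: append " dm-mod" inside the quotes of the INITRD_MODULES line.
printf 'INITRD_MODULES="sata_via via82cxxx reiserfs processor thermal fan"\n' > /tmp/kernel.demo
sed -i 's/^\(INITRD_MODULES="[^"]*\)"/\1 dm-mod"/' /tmp/kernel.demo
cat /tmp/kernel.demo   # the list now ends in "... fan dm-mod"
rm -f /tmp/kernel.demo
```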

Back at the command prompt, type mkinitrd. If all goes well, you should see Adding dmraid... and a bunch of other messages that don't say error. We should now have a new initrd/initramfs located in the /boot directory, in fact it replaced the one that was there originally. Copy this new file to your new SUSE installation by issuing the following command:

        cp /boot/initrd-2.6.13-15-default your-new-suse-installation-directory/boot/initrd-2.6.13-15-default
Copy some other needed files to the new system:

        cp /boot/initrd your-new-suse-installation-directory/boot/initrd
        cp /sbin/dmraid  your-new-suse-installation-directory/sbin/dmraid
        cp /sbin/mkinitrd your-new-suse-installation-directory/sbin/mkinitrd
        cp /etc/sysconfig/kernel your-new-suse-installation-directory/etc/sysconfig/kernel
        cp /etc/fstab your-new-suse-installation-directory/etc/fstab

Copy and paste your /boot/grub directory over to your-new-suse-installation-directory/boot directory. You will need root permissions to do this, so use File Manager – Super User Mode if necessary.

Step 3 – Archiving and storing the new SUSE installation

Navigate using the File Manager – Super User Mode and go to the new SUSE installation directory. Select all the directories contained within, right-click and choose Compress->Add to Archive... . In the new window change Location to the directory and filename you want and Open as to Gzipped Tar Archive. This may take a while...

Once finished, copy your-new-suse-installation-archive.tar.gz to whatever medium you like, as long as it will be retrievable once your RAID hard drives have been wiped clean. For example, copy it to a CD/DVD disc if you have two or more CD/DVD drives, or to a spare hard drive that will not be included in the RAID. In my case, I had to ftp it to a remote computer running Windows XP (sad but true). Originally, I didn't compress the archive: it was 2 GB, and oddly, Windows wouldn't allow it to be retrieved by ftp afterwards; once compressed down to less than 1 GB, no problem...just one of the many reasons why I now use Linux!
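As an aside, the same archive can be produced from a terminal instead of the GUI archiver. A sketch, demonstrated on a scratch tree (on the real system SRC would be the Step 1 target directory, default /var/tmp/drinstall, and OUT a path on your transfer medium — both paths here are examples):

```shell
#!/bin/sh
# Sketch: build the gzipped tar of the new installation from the shell.
SRC=/tmp/suse-demo-src
OUT=/tmp/new-suse-install.tar.gz
mkdir -p "$SRC/etc" && echo demo > "$SRC/etc/demo.conf"
# -C enters SRC so the archive holds relative paths; -p preserves
# permissions for the later restore with "tar --preserve -xf".
tar -C "$SRC" -cpzf "$OUT" .
tar -tzf "$OUT"            # list the archive contents
rm -rf "$SRC" "$OUT"
```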

crushton 14th December 2005 05:35

Step 4 – Setting up the RAID and restoring the new SUSE installation onto it

Make sure you have a working wired Internet connection, place the Gentoo LiveCD into your drive, reboot, change the BIOS settings to boot from CD, and use your RAID controller's BIOS to configure your RAID disks. At the boot: prompt, just hit Enter, and do the same for every option thereafter until you get to the Gnome desktop.
Download the dmraid source, like you did before, to the Gnome desktop. Extract it to the desktop, then navigate to the extracted directory in a command terminal window with root permissions. This is done by typing sudo su - at the command prompt in the terminal window.
Compile the source in the same manner as before (you will have to modify the file under tools/ once again; you can use vi this time, now that you know how):

vi extracted-dmraid-directory/tools/
After editing the line, type:

modprobe dm-mod
tools/dmraid -ay -i
ls /dev/mapper

Your output should resemble something like:

control via_ebfejiabah
The important file (more correctly known as a device node) is the one that begins with via_. It will have a different prefix depending on your RAID hardware. Make note of it; for simplicity I will use via_ebfejiabah, and you should substitute it with yours. Now type:

fdisk /dev/mapper/via_ebfejiabah
Set up at least 2 partitions with fdisk: one of type 82 for your swap and the other of type 83 for your main SUSE installation. Refer to the fdisk help (m for help) for info on what to do. Afterwards, before writing the partition tables and exiting fdisk, type p to print the partition tables. Your output might look something like this:

Command (m for help): p

Disk /dev/mapper/via_ebfejiabah: 163.9 GB, 163928603648 bytes
255 heads, 63 sectors/track, 19929 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

                    Device Boot      Start        End      Blocks  Id  System
/dev/mapper/via_ebfejiabah1              1        125    1004031  82  Linux swap / Solaris
/dev/mapper/via_ebfejiabah2            126      19929  159075630  83  Linux

The important parts of the output are the geometry values; make note of them from your own output (i.e. heads=?, sectors=? and cylinders=?). We will need them later.
You may now write the partition table and quit fdisk. Now reboot and start the LiveCD again, following everything in this step again, excluding the initial RAID BIOS setup and up to the point where we begin to use fdisk; we don't need to set up the partitions again. Gain access to your-new-suse-installation-archive.tar.gz by mounting the spare disk, mounting the CD drive, or using ftp, etc. Remember, to mount a volume, type:

mkdir /mnt/your-mount-point
mount -t your-volumes-filesystem /dev/your-device /mnt/your-mount-point

If using ftp, like I had to, use Gnome's Connect to Server and it will mount the ftp directory on the desktop. Now we must format the new partitions and extract our new installation onto the root partition. Type the following:

mkswap /dev/mapper/via_ebfejiabah1
mkreiserfs /dev/mapper/via_ebfejiabah2
mkdir /mnt/suse10
mount -t reiserfs /dev/mapper/via_ebfejiabah2 /mnt/suse10

Of course, you'll want to replace the example device names above with your specific settings/info. Copy your-new-suse-installation-archive.tar.gz to /mnt/suse10, then extract it using tar at the command prompt.
For example:

cd /mnt/suse10
tar --preserve -xf your-new-suse-installation-archive.tar.gz

This will take a while...then:

rm your-new-suse-installation-archive.tar.gz
vi etc/fstab

In vi change your root device to /dev/mapper/your-root-partition and your swap device to /dev/mapper/your-swap-partition. (i.e. mine were via_ebfejiabah2 and via_ebfejiabah1 respectively)
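After the edit, the relevant lines of etc/fstab might read something like this (device names from the running example; the "defaults" mount options are only an illustration — keep whatever options your file already has):

```
/dev/mapper/via_ebfejiabah2  /     reiserfs  defaults  1 1
/dev/mapper/via_ebfejiabah1  swap  swap      defaults  0 0
```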

Step 5 – Making GRUB work with RAID

First we need to modify GRUB's device map in the /mnt/suse10/boot/grub directory using vi. Type the following:

cd /mnt/suse10/boot/grub
vi device.map

The structure of the device.map file is fairly simple. Just make sure that each entry corresponds to your new drive layout. For example:

(hd0) /dev/mapper/your-raid-device
Save the changes then edit the Grub menu:

vi menu.lst
My menu reads as follows:

# Modified by YaST2. Last modification on Sun Dec 11 20:40:40 UTC 2005

color white/blue black/light-gray
default 0
timeout 5
gfxmenu (hd0,1)/boot/message

###Don't change this comment - YaST2 identifier: Original name: linux###
title SUSE LINUX 10.0
    root (hd0,1)
    kernel /boot/vmlinuz root=/dev/mapper/via_ebfejiabah2 vga=0x31a selinux=0    resume=/dev/mapper/via_ebfejiabah1  splash=silent showopts
    initrd /boot/initrd

###Don't change this comment - YaST2 identifier: Original name: failsafe###
title Failsafe -- SUSE LINUX 10.0
    root (hd0,1)
    kernel /boot/vmlinuz root=/dev/mapper/via_ebfejiabah2 vga=normal showopts ide=nodma apm=off acpi=off noresume selinux=0 edd=off 3
    initrd /boot/initrd

The necessary changes are the gfxmenu entry, the root (hd0,1) lines, and the root= and resume= device paths on the kernel lines; adjust them to your configuration. Now we install the GRUB MBR on our disk so it finds and boots SUSE – or more correctly, the kernel and initrd/initramfs.
When using grub, we must know the partition layout of our disks. In the example I am about to give, my partitions were set up as displayed by the fdisk output mentioned above in step 4. My root partition for Linux/SUSE was my second partition; thus, when using grub, I have to refer to that partition as (hd0,1), whereas (hd0,0) would refer to the first rather than the second. Also, (hd0) refers to the first disk, assuming you installed your RAID as the first 2 or more disks. I assume you get the idea. Just make sure the numbers correspond to your particular setup when typing in the details below. In a terminal with root permissions (i.e. sudo su -), start the GRUB shell by typing:

grub

At the grub prompt type:

device (hd0,1) /dev/mapper/via_ebfejiabah2
device (hd0) /dev/mapper/via_ebfejiabah

This is where we need the fdisk info recorded earlier. Replace the cylinders, heads and sectors numbers below with yours:

geometry (hd0) 19929 255 63
root (hd0,1)
setup (hd0)

You should now get some output, but nothing referring to errors; thus all is well so far.

Step 6 – Booting the new SUSE installation

At this point the new installation is ready to be booted. Just make sure your BIOS settings are configured for booting from your RAID disk setup, and you should probably disable booting from CD. Assuming everything worked, a familiar SUSE boot screen should appear, and naturally SUSE should begin the boot process. On first boot, SUSE will start YaST; we selected this option earlier during the installation of SUSE, and it is required to properly set up the new system. Just follow the instructions and do what you normally would during a SUSE installation. The only significant difference is that YaST is displayed in terminal mode rather than as a GUI; otherwise, it is identical to its GUI counterpart. Once YaST has completed, the system defaults to terminal mode.
You will need to edit the /etc/inittab file in order to boot into graphical mode by default. This is rather simple; at the command prompt, type the following:

vi /etc/inittab
And then find the line that says:

id:3:initdefault:

Change the 3 to a 5 (runlevel 5 is graphical mode), save the file, exit and reboot.

DONE...Have fun!

crushton 14th December 2005 08:17

1 Attachment(s)
Just to be on the safe side, have a look at the attached mkinitrd. Yours should be identical. You can either just use mine or follow the directions to do it yourself. I recommend that you try it yourself, however =)
Also, just in case the question gets asked, which I am sure someone intuitive enough will, here is the reason why the commented-out lines near the end of the file relating to LVM are required...

If you ever plan on updating your kernel (i.e. through YOU, the YaST Online Update), which of course is highly recommended considering the bug fixes, then SUSE will try to rebuild the initrd image. This is not good news without these lines commented out. Basically, SUSE will assume you have LVM-partitioned disks because it detects the use of the device-mapper and isn't aware that we are using it for our own purposes, which currently are not supported. Therefore, we are preventing SUSE from making this false assumption about our disk layout, retaining our forced setup and allowing mkinitrd to fly by none the wiser. With this being said, it may also be a good idea to back up your modified mkinitrd script in the unfortunate event that a future SUSE update replaces it. However, if this happens, chances are they added something new to the boot process that is necessary. To be on the safe side, always read the updates YOU is providing, and don't be too hasty accepting them unless you're sure this critical file is not being replaced.
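In that spirit, a small sketch of keeping a checksummed backup so a silent replacement is easy to spot. Demonstrated on scratch files; on the real system the two paths would be /sbin/mkinitrd and a backup such as /root/mkinitrd.hacked (both names here are examples):

```shell
#!/bin/sh
# Sketch: detect whether a file still matches its backup via md5sum.
echo 'hacked mkinitrd contents' > /tmp/mkinitrd.demo
cp /tmp/mkinitrd.demo /tmp/mkinitrd.backup
# After an update, compare checksums; differing sums mean the hack is gone.
if [ "$(md5sum < /tmp/mkinitrd.demo)" = "$(md5sum < /tmp/mkinitrd.backup)" ]; then
    echo "mkinitrd unchanged"
else
    echo "mkinitrd was replaced - restore the backup and rerun mkinitrd"
fi
rm -f /tmp/mkinitrd.demo /tmp/mkinitrd.backup
```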

Don't forget to change the permissions on this file after downloading it; only root should have write access!


mshah 3rd January 2006 00:40

Need help - boot from IDE, can't see RAID volumes
1. I have one IDE drive that hosts SUSE 10, XP, and another partition.
2. Then I have 4 x 250 GB SATA drives on an Intel motherboard with Intel software RAID.
3. I have created 3 volumes/partitions on the SATA drives. The first is a 250 GB RAID1 on the first 2 drives; on the latter 2 drives I created 215 GB RAID1 and 70 GB RAID0 partitions.

Now the problem description:
I can use all 3 RAID volumes correctly in XP. However, when I boot SUSE, I do not see the RAID0 volume at all, and I see the RAID1 volumes as unbound (4 volumes vs. 2). This happened before I tried the attached howtos and without using dmraid.

Tried to follow the instructions posted here for 2 days, made adjustments as suggested, and considered that I'm not booting from the RAID drive so it should be simpler, but it didn't help. I must be doing something wrong.

Any help would be appreciated. I'm a Linux newbie, so please consider that.

till 3rd January 2006 10:24

As far as I know, the SATA RAID controllers that are available as onboard controllers are currently not supported by Linux.

mshah 4th January 2006 03:19


Originally Posted by till
As far as I know, the SATA RAID controllers that are available as onboard controllers are currently not supported by Linux.

I thought that this thread and the howtos address how to make Linux work with those SATA (fake) RAIDs. Are you sure that SATA RAIDs will not work with Linux?

till 4th January 2006 12:06


Originally Posted by mshah
I thought that this thread and the howtos address how to make Linux work with those SATA (fake) RAIDs. Are you sure that SATA RAIDs will not work with Linux?

Yes, this thread is about how to make fake RAIDs work.

You see one RAID volume in Windows because drivers exist for Windows.
In Linux you see the single hard disks; that's because there are no Linux RAID drivers for SATA available for your controller.

That explains why you see 4 vs. 2 volumes.

If you explain the errors you get in a bit more detail, we can try to fix them.

Dieda2000 4th January 2006 19:46

waiting to appear ...
Nice guide, works almost like a charm.
Apart from the fact that every third or fourth boot my machine hangs while displaying:
".. waiting for /dev/mapper/sil_afbieacedhaj2 to appear ..."
As said, the other times it works.

Moreover, while booting there is always the message
"grep: command not found"
How did you use grep at this early stage of booting?

Specs: Suse 10.0 x86-x64, A8n-SLI Prem, pcie-Conroller Sil 3132
kernel: 2.6.15-rc6-smp

Another note:
Silicon Image's RAID controllers like the 3132 or 3114 can use a certain mixed-mode RAID, like Intel's Matrix RAID on the ICH6 or ICH7. For example, with two Maxtor 6V300F0 drives I created a RAID0 array on the first 200 GB of each disk and a RAID1 array on the remaining 100 GB of each disk. I can use it with Windows, but dmraid can only discover the first RAID array.
I think it's a nice feature. Any clues on how to make dmraid discover the second array?
