Comments on Install Ubuntu With Software RAID 10

Install Ubuntu With Software RAID 10

The Ubuntu Live CD installer doesn't support software RAID, and the server and alternate CDs only let you do RAID levels 0, 1, and 5. RAID 10 is the fastest RAID level that also has good redundancy, so I was disappointed that Ubuntu didn't offer it as an option for my new file server. I didn't want to shell out lots of money for a RAID controller, especially since benchmarks show little performance benefit from using a hardware controller configured for RAID 10 in a file server.


Comments

By:

Problems with this HowTo are discussed on the linux-raid mailing list:

Linux RAID [email protected]

The HowTo should also refer to recent documentation on Linux RAID:

http://wiki.linux-raid.osdl.org

By: counterdutch
By: Anonymous

The linux-raid wiki moved to https://raid.wiki.kernel.org

By:

Brm didn't have anything specific for me to help with. The mailing list mentioned did have some concerns that I will address.

Putting RAID 10 on two drives is technically possible but practically useless: striping or mirroring two partitions on the same hard drive causes a nasty performance hit, so RAID 1 would be better for two drives.

Yes, RAID 100 is faster than RAID 10, but I think the added overhead wouldn't speed up software RAID, and it decreases the level of redundancy. I would love to see someone benchmark it.

Putting swap on top of software RAID adds unnecessary overhead. The kernel's virtual memory automatically optimizes the use of multiple swap partitions, and the kernel adapts if a swap partition becomes unavailable (see the fstab example below).

You can put /boot on a RAID 1, but /boot is easy to regenerate, and you will have to partition a new drive and rebuild your RAID anyway if you lose a drive. I can redo the guide to make /boot redundant if at least a few people request it.

My setup gives you a high-performance storage system that lets you retain your data if a hard drive fails. If you want a system with high availability and seamless failover, you will need hardware RAID with hot-swappable drive bays, but that is expensive and not required for someone who doesn't need high availability.
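For example, giving each swap partition the same priority in /etc/fstab makes the kernel stripe swap across the drives, RAID 0 style (the device names here are illustrative):

/dev/sda3  none  swap  sw,pri=1  0  0
/dev/sdb3  none  swap  sw,pri=1  0  0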

By: Rick

First of all, cheers for the tutorial. I learnt heaps even though I couldn't get it to work, because the partitioner would not see md0.

The reason it failed for me was running:

mkfs.xfs /dev/md0
ln /dev/md0 /dev/sde

when installing from the Ubuntu server CD.

The trick is, at a system-recovery or live-CD command prompt, to type

mkfs.ext3 /dev/md0

instead (don't bother with the ln /dev/md0 step).

This formats the RAID array as ext3, which, unlike XFS, can actually be seen by the server installer!

Now, in the partitioner, you select manual setup. At first you still won't see md0, but fear not! Set up your boot partition (/dev/sda1) and your swap partitions (sda3, sdb2, etc.), then go into "configure software RAID". Now click finish (if you click on "delete RAID array" you'll see your md0 array - yay! - but don't delete it, of course!). Back in the partition screen you will now see the md0 partition! Yay!

Now change it to be used as ext3, mounted at / (root), but don't format it.

All done!
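In short, the sequence that worked for me (device names as in my setup):

mkfs.ext3 /dev/md0   # from a live/rescue shell, before running the server installer
# then, in the installer: manual partitioning, set up /boot and swap,
# open and close "configure software RAID", and use md0 as ext3 at / without formatting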


By: Anonymous

Hi

Great info, thanks!

I have a question: I have set my system up following your tutorial, but wanted to upgrade to Ubuntu 8.10. My /boot partition was too small at 50 MB, so I used the Live CD to resize it to 200 MB, deleting the /dev/sda2 partition in the process.

How do I resync the RAID array to bring the recreated /dev/sda2 back in? It says /dev/md0 is not started when I try from the Live CD, and booting from the actual system itself I can't do it either, as I am unable to mount the RAID while it is in use by the system!

Any ideas?


Thanks!

By: Fernando Salas

First of all, I want to say thanks for this howto and also for the comments; as I'm quite new to Linux, I found them VERY useful.

Now, to add my 2 cents, I'll just share my little experience with RAID.

I had to build a server and the hardware turned out to be a FakeRAID one, so at first I thought I'd give FakeRAID a try: I issued dmraid -ay from the live CD and played with it a little. Then, after some reading about the pros/cons of FakeRAID vs. software RAID, I made up my mind and took the software RAID path. As I wanted RAID 10 as the root filesystem, I made two partitions more or less as recommended in one of these comments, formatted them, ran the server installer, partitioned manually, and everything installed OK.

Then I booted and...

initramfs appears (what the heck is this? was my first thought).

Well, there I tried mdadm --assemble --scan; it answered "no device found".

To make it short, after two days of swearing, and with quite a bit less hair on my head, I found the culprit:

dmraid

The Ubuntu boot brings up dmraid, and it grabs the devices for itself even though I never actually used it for the install. I had to chroot, then dmraid -an, then apt-get remove dmraid, and my problem was solved. As I didn't find this anywhere, I thought it might help others.
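Roughly, the fix was (assuming the new root is mounted at /myraid):

chroot /myraid
dmraid -an              # deactivate the dmraid device mappings
apt-get remove dmraid   # stop dmraid from claiming the disks at boot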

Hope it helps someone

Fer



By: Anonymous

If you have many disks to partition with an identical layout, using cfdisk gets rather tiresome. Instead, use sfdisk like this:

sfdisk -d /dev/sdX | sfdisk /dev/sdY

which should save some time and effort.
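For example, to clone sda's layout to several other disks in one go (adjust the device names to your system):

for d in sdb sdc sdd; do sfdisk -d /dev/sda | sfdisk /dev/$d; done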

By: Toni W.

Hi !!

Great HowTo, but is a similar solution possible for installing RAID 10 on Ubuntu Server, where there is no desktop live CD to do the intermediate steps?

Any ideas?

Thanks !!

By:

Yes, you can install it as a server: stop at step 3, and then follow my guide at http://www.howtoforge.net/minimal-ubuntu-8.04-server-install using the correct device names.

By: E. Darwin

Hi there,

This HowTo is really really great.

I would like to thank you for providing a very clear step-by-step tutorial on how to install RAID 10 on Ubuntu; it is working really great on my system, no doubt.

Just wondering if I can make a request: how to add "hot spares" to this RAID, plus the troubleshooting for replacing a failed drive and rebuilding the RAID, as well as an email sent to the user if one of the drives fails.

I think this would make the PERFECT RAID 10 HowTo for Ubuntu users.

Thank you for your consideration.
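For anyone who wants a head start, adding a spare to a healthy array and turning on mail alerts with mdadm looks roughly like this (the device name and address are illustrative):

mdadm /dev/md0 --add /dev/sde2   # joins as a hot spare while the array is healthy
mdadm --monitor --scan --daemonise --mail=admin@example.com   # mails on failure events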

By: mrt181

And now please write a similar tutorial for FakeRAID 10, for those of us who want to dual boot with other operating systems and still be able to access all the data in the whole array.

By: jaxån

You might want to consider using swap on RAID too. If one swap disk crashes, the machine will go down, even though the data stored in the RAID is still intact. And you do not need a swap disk until the system (and RAID) is up, so the boot partition is the only one needed. Might I suggest a USB stick for the boot partition :)

See:  http://linux-raid.osdl.org/index.php/Why_RAID%3F

By: Toni W.

> Say for /boot:

> mdadm -C /dev/md1 -c 256 -n 4 -l 10 -p n4 /dev/sd[abcd]1


Boot from RAID 10? Is this possible with GRUB or LILO?

I thought it wasn't possible.

> mdadm -C /dev/md2 -c 256 -n 4 -l 10 -p f4 /dev/sd[abcd]2

This gives 25% usable disk space. Am I wrong?

Thanks

By: Anonymous

With raid10,f2 you can almost double the sequential read performance of your RAID, while other performance numbers stay about the same.

Using all four drives you can quadruple your read performance, and roughly double other read performance measures, compared to your setup, while writing will be about the same. I would also recommend using a bigger chunk size, say 256 KiB.

Your point 3 would then be:

mdadm -C /dev/md2 -c 256 -n 4 -l 10 -p f4 /dev/sd[abcd]2

I would also recommend using RAID for boot and swap; using all four drives would actually let you keep running even if 3 disks crashed, plus you get the added performance of all the drives. /boot needs to be on a standard (near-layout) RAID 10, as GRUB and LILO can only boot RAID partitions that look like a standalone partition.

Say for /boot:

mdadm -C /dev/md1 -c 256 -n 4 -l 10 -p n4 /dev/sd[abcd]1

And for swap:

mdadm -C /dev/md3 -c 256 -n 4 -l 10 -p f4 /dev/sd[abcd]3

For /home I would not waste all the space on having 4 copies, so:

mdadm -C /dev/md4 -c 256 -n 4 -l 10 -p f2 /dev/sd[abcd]4

You may even consider running RAID5 on /home, to get more space.

There is more on the setup at  http://linux-raid.osdl.org/index.php/Preventing_against_a_failing_disk

Compared to your setup, this would give you:

1. Survival of 3 disks crashing - your setup would not survive a disk crash where your /boot was placed, and your setup will stop if any of your swap partitions is damaged.

2. Almost 4 times the sequential read performance, and double the random read performance, for your basic root (/) and swap partitions.
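To round that layout off, you would create the filesystems and record the arrays, roughly like this (the filesystem choices are just an example):

mkfs.ext3 /dev/md1    # /boot
mkfs.xfs  /dev/md2    # /
mkswap    /dev/md3    # swap
mkfs.xfs  /dev/md4    # /home
mdadm --detail --scan >> /etc/mdadm/mdadm.conf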


By: Travis

Hey man, just wanted to say thanks! Finally got RAID 10 up and running. Had to tweak a little though... I ended up running a totally separate drive for boot and swap, as the install kept hanging on me at 15%. Also, I'm a complete newbie; for all the other newbies out there, you have to run apt-get update before you run apt-get install mdadm. Cheers
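That is:

apt-get update
apt-get install mdadm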


By: Rauls

I used this manual to create software RAID 5 and install Ubuntu 8.04 LTS server. The detailed info is here:

http://ubuntuforums.org/showthread.php?t=1357561&highlight=software+raid5

My only remark on this guide: create a larger boot partition; 50 MB is not enough if you have two kernel versions (2.6.24-24-server and 2.6.24-26-server in my case). Kernel removal via aptitude or apt-get failed because there was insufficient disk space - only 10% free. I will now reinstall and create a 100 MB boot partition, which should be enough for future kernel updates, so I won't have to worry about disk space.

By: Dritan

Thanks for this guide - you saved my life. After 3-4 days of effort with no result trying to install Element OS on RAID 0, I finally came across your guide, which made all my efforts, headache, and sweat worth it. I don't know how to thank you. It all finally worked out smoothly, and I could at last boot into my new Element OS install. No other guides or forums helped.

Greetings from Albania.

By: Peter

A kind chap called "symbolik" published a description of building his own RAID on Kubuntu 9.04 at

http://symbolik.wordpress.com/2009/05/01/howto-kubuntu-904-raid-10-lvm2-and-xfs/

and I've now developed a readily-customisable set of scripts to implement the process to your own preference, and to add the LILO boot-loader to the result.  If anyone can recommend a website that would be willing to host it I'll happily pass the set on for publication. Takes just a few minutes, and saves an awful lot of careful typing!


By:

Thanks for this, it was a very useful reference.

One thing I wanted to mention, though, is that in addition to some oddities I've noticed with mdadm raid10, the read speeds are very slow.

With 4 drives I get just 260 MB/s reading in RAID 10; in RAID 0 I average 520 MB/s. Given that this is approximately half the speed, I strongly suspect that RAID 10 is not stripe-reading from all 4 drives as it could, and is only reading from 2. Even RAID 5 is much faster, ~400 MB/s.

I don't think I'll chance 4 striped drives, but after considering the performance hit, RAID 5 is much more attractive than RAID 10.
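Numbers like these can be checked with a crude sequential-read test along these lines (run as root; dropping the page cache first keeps RAM out of the measurement):

sync
echo 3 > /proc/sys/vm/drop_caches
dd if=/dev/md0 of=/dev/null bs=1M count=4096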


By: Diederik

This is a very helpful guide.  Just one note:

When installing Ubuntu 11.04 64-bit I had to make the /boot partition larger. Trying to install mdadm on the chrooted system failed, as I only had 7 MB of free space on that partition. After changing it to 500 MB the install worked flawlessly.

By: Anonymous

Here is another guide on the same subject for your consideration:

http://iiordanov.blogspot.com/2011/07/how-to-install-linux-ubuntu-debian-etc.html

By:

Thanks for the comments! I'm happy to see people still find this useful after 3 years. I'm not using software RAID anymore after getting a couple of second-hand LSI cards for a low price.

If I redid this guide I would do a couple of things differently. I would make the boot partition bigger. I would mount the devpts, proc, and sysfs kernel filesystems after the chroot, because that is less likely to cause problems if you have to chroot again (of course you would take /myraid off of the commands). I would also put swap on a second RAID (md1), because there is a chance, especially if you don't have enough RAM, that a process or maybe even the kernel could crash if one of the drives failed - unless the kernel has something built in to handle one of multiple swap partitions failing. Someone smarter than me would know that.
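Roughly, the mount-after-chroot sequence would be:

chroot /myraid /bin/bash
# then, inside the chroot (no /myraid prefix on the paths):
mount -t proc   proc   /proc
mount -t sysfs  sysfs  /sys
mount -t devpts devpts /dev/pts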

By: Anonymous

http://www.youtube.com/watch?v=zlOK1voR2nA

By: Anonymous

If apt-get or apt-get update fails, try copying /etc/resolv.conf into the chroot environment before calling chroot (e.g., to /myraid/etc/resolv.conf).
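That is, something like:

cp /etc/resolv.conf /myraid/etc/resolv.conf   # give the chroot working DNS for apt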

By: randomstranger

Hi all,

You can now disregard the part where you need to create the hard link to the array (Ubuntu 14.04 installer), because the installer does not see the link. You can just select md0 directly as /; remember not to format it. Also, I prefer ext4, so I used mkfs.ext4 /dev/md0 and also mkfs.ext4 /dev/sda1.

Another thing that did not work was the installation of mdadm in the chroot (final step). I had to manually wget it as a .deb (from outside the chroot environment) and then install it with dpkg -i (inside the chroot). It gave me a weird error about being unable to find "/dev/md/0", which obviously made sense... I crossed my fingers, rebooted, and then, to my relief, saw the Ubuntu logo.

Hey, and btw, thanks a lot for the tutorial!
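The manual mdadm install went something like this (the exact version in the .deb filename is illustrative - use whatever matches your release):

# outside the chroot:
wget -P /myraid/tmp http://archive.ubuntu.com/ubuntu/pool/main/m/mdadm/mdadm_3.2.5-5ubuntu4_amd64.deb
# inside the chroot:
dpkg -i /tmp/mdadm_3.2.5-5ubuntu4_amd64.deb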

By: Marco

Did you test this?

My PC doesn't boot if GRUB is on a RAID partition.