Install Ubuntu With Software RAID 10

Submitted by maxbash (Contact Author) (Forums) on Mon, 2008-08-18 18:06. :: Ubuntu | Storage

The Ubuntu Live CD installer doesn't support software RAID, and the server and alternate CDs only allow RAID levels 0, 1, and 5. RAID 10 is the fastest RAID level that also offers good redundancy, so I was disappointed that Ubuntu didn't have it as an option for my new file server. I didn't want to shell out lots of money for a RAID controller, especially since benchmarks show little performance benefit from a hardware controller configured for RAID 10 in a file server.

 

1 Before you start

I'll assume you already know about RAID 10, but I'll cover some important points before you begin.

  • You will need four partitions dedicated to the RAID array, each on its own physical drive.
  • Only half of the disk space used for the RAID 10 volume will be usable.
  • All partitions used for RAID should be the same, or close to the same, size.
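The usable-space rule above can be sketched with quick shell arithmetic (the drive count and partition size here are hypothetical, purely for illustration):

```shell
# RAID 10 keeps 2 copies of every block by default, so usable space is
# (number of drives / number of copies) * size of the smallest member.
drives=4
copies=2
member_gb=500          # hypothetical: four 500 GB RAID partitions
echo "usable: $(( drives / copies * member_gb )) GB"
```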

2 Prepare your disks

Use a partitioning program that can create RAID partitions. I use cfdisk, which is text-based but easier to use than fdisk. Partition your disks: make a 50 MB partition on the first drive for /boot, since GRUB doesn't support RAID well. On each of the four drives, set up a partition of RAID type (in cfdisk, choose FD as the type). In my setup, everything except /boot will reside in one RAID 10 volume.

For best swap performance, put a swap partition on each drive. I put a 1 GB swap partition on each drive.

Boot the Ubuntu Live CD and open a terminal:

    sudo su
    cfdisk /dev/sda
    cfdisk /dev/sdb

The next two drives are partitioned the same as /dev/sdb:

    cfdisk /dev/sdc
    cfdisk /dev/sdd

3 Install the RAID utility, mdadm, and set up the RAID array

    apt-get install mdadm
    mdadm -v --create /dev/md0 --level=raid10 --raid-devices=4 /dev/sda2 /dev/sdb1 /dev/sdc1 /dev/sdd1

Then create the file system on the RAID array. Format it now, because the partitioner in the installer doesn't know how to modify or format RAID arrays. I used the XFS file system because XFS has great large-file performance. Then create an alias for the RAID array with the ln command, because the Ubuntu installer won't find devices whose names start with "md".

    mkfs.xfs /dev/md0
    ln /dev/md0 /dev/sde

     

4 Ubuntu Install

Run the installer. When you reach the partitioner, choose manual partitioning and be careful not to modify the partition layout. For the /dev/sda1 partition, choose ext3 as the file system and set the mount point to /boot.

Set your swap partitions to be used as swap.

For the RAID device, select the file system type you already formatted it with and set the mount point. Do not choose to reformat the RAID array or make partition table changes to it, because the partitioner will misconfigure it.

Click continue on the warning about the RAID not being marked for formatting.

When the installer finishes, tell it to continue using the Live CD.

     

5 Install RAID support inside the new install

A default Ubuntu setup won't automatically boot into a software RAID setup. You will need to chroot into the new install, with the chroot configured to see all the device information available in the Live CD environment, so that the mdadm install scripts can properly set up the config and boot files for RAID support.

    mkdir /myraid
    mount /dev/md0 /myraid
    mount /dev/sda1 /myraid/boot
    mount --bind /dev /myraid/dev
    mount -t devpts devpts /myraid/dev/pts
    mount -t proc proc /myraid/proc
    mount -t sysfs sysfs /myraid/sys
    chroot /myraid
    apt-get install mdadm
    exit
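Before that final exit, it can't hurt to confirm that mdadm recorded the array and that the initramfs knows about it. The mdadm package's install scripts normally do both; the commands below (run inside the chroot) just make it explicit, and are a sketch rather than a required step:

```shell
# Append the array definition to mdadm's config if the install scripts
# didn't already, then rebuild the initramfs so md0 assembles at boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```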

    You can now reboot into your new system.

     

Extra commands you may need

A helpful command that will tell you the status of the RAID and which partitions belong to a volume:

    cat /proc/mdstat
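On a healthy four-disk RAID 10 built as in this guide, the output looks roughly like this (device names match the guide's layout; the block count is illustrative):

```
md0 : active raid10 sdd1[3] sdc1[2] sdb1[1] sda2[0]
      976767872 blocks 64K chunks 2 near-copies [4/4] [UUUU]

unused devices: <none>
```

[4/4] [UUUU] means all four members are active; a failed member shows up as _ in the brackets.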

If you reboot into the Live CD and want to mount your RAID array, you will need to install mdadm in the Live CD environment and activate the RAID:

    sudo su
    apt-get install mdadm
    mdadm --assemble /dev/md0

     

If you need to start over or remove the RAID array

Software RAID information is embedded in an area of each RAID partition called the superblock. If you decide to change your RAID setup and start over, you can't just repartition and recreate the RAID array. You will first need to erase the superblock on each partition belonging to the array you want to remove.

Make sure your important data has been backed up before doing these steps.

First, make sure the RAID is unmounted and stopped:

    sudo su
    umount /dev/md0
    mdadm --stop /dev/md0
    mdadm --zero-superblock /dev/sda2
    mdadm --zero-superblock /dev/sdb1
    mdadm --zero-superblock /dev/sdc1
    mdadm --zero-superblock /dev/sdd1
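To double-check that the metadata is really gone, you can re-examine each partition; a wiped member should report that no md superblock was detected. Device names below are from this guide's layout:

```shell
# mdadm --examine reads the RAID superblock from a member partition.
for part in /dev/sda2 /dev/sdb1 /dev/sdc1 /dev/sdd1; do
    mdadm --examine "$part"
done
```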


    Submitted by Anonymous (not registered) on Thu, 2013-03-07 04:43.
    If apt-get or apt-get update fails, try copying /etc/resolv.conf into the chroot environment before calling chroot (e.g., to /myraid/etc/resolv.conf).
    Submitted by Anonymous (not registered) on Fri, 2012-04-20 23:10.
    http://www.youtube.com/watch?v=zlOK1voR2nA
    Submitted by maxbash (registered user) on Thu, 2011-08-18 21:07.
    Thanks for the comments! I'm happy to see people still find this useful after 3 years. I'm not using software RAID anymore after getting a couple of second-hand LSI cards for a low price. If I redid this guide I would do a couple of things differently: I would make the boot partition bigger. I would mount the devpts, proc, and sysfs kernel filesystems after chroot, because it's less likely to cause problems if you have to chroot again (of course you would take /myraid off of the commands). I would also put swap on a second md1 RAID, because there is a chance, especially if you don't have enough RAM, that a process or maybe even the kernel could crash if one of the drives failed. Unless the kernel has something built in to handle one of multiple swap partitions failing. Someone smarter than me would know that.
    Submitted by Diederik (not registered) on Wed, 2011-07-20 07:53.

    This is a very helpful guide.  Just one note:

    When installing Ubuntu 11.04  64 bit I had to make the /boot partition larger.  Trying to install mdadm on the chrooted system failed, as I only had 7MB free space on that partition.  After changing it to 500MB the install worked flawlessly.

    Submitted by Anonymous (not registered) on Mon, 2011-07-11 16:56.

     Here is another guide on the same subject for your consideration:

    http://iiordanov.blogspot.com/2011/07/how-to-install-linux-ubuntu-debian-etc.html

    Submitted by quantum.leaf (registered user) on Sun, 2011-05-01 18:44.

    Thanks for this, it was a very useful reference.

    One thing I wanted to mention, though, is that in addition to some oddities I've noticed with mdadm raid10, the read speeds are very slow.

    With 4 drives I get just 260MB/s reading in raid10; in raid0 I average 520MB/s. Given that this is approximately half the speed, I strongly suspect that raid10 is not stripe-reading from all 4 drives as it could, and is only reading from 2. Even raid5 is much faster, ~400MB/s.

    I don't think I'll chance 4 striped drives, but after considering the performance hit, raid5 is much more attractive than raid10.

     

     

    Submitted by Peter (not registered) on Fri, 2010-11-19 20:20.

    A kind chap called "symbolik" published a description of building his own RAID on Kubuntu 9.04 at

    http://symbolik.wordpress.com/2009/05/01/howto-kubuntu-904-raid-10-lvm2-and-xfs/

    and I've now developed a readily-customisable set of scripts to implement the process to your own preference, and to add the LILO boot-loader to the result.  If anyone can recommend a website that would be willing to host it I'll happily pass the set on for publication. Takes just a few minutes, and saves an awful lot of careful typing!

     

    Submitted by Dritan (not registered) on Sat, 2010-08-14 20:18.

    Thanks for this guide. You saved my life. After 3-4 days of effort with no result trying to install Element OS on RAID 0, I finally came across your guide, which made all my efforts, headache, and sweat worth it. I don't know how to thank you. It all finally worked out smoothly. I could at last boot my new Element OS install. No other guides or forums helped.

     Greetings from Albania.

    Submitted by Rauls (not registered) on Fri, 2010-01-22 20:40.

    I used this manual to create software RAID 5 and install Ubuntu 8.04 LTS server. The detailed info is here:

    http://ubuntuforums.org/showthread.php?t=1357561&highlight=software+raid5

    Only one remark on this guide - create a larger boot partition; 50 MB is not enough if you have two kernel versions (2.6.24-24-server and 2.6.24-26-server in my case). The kernel removal via aptitude or apt-get failed because there was insufficient disk space - only 10% free. I will now reinstall and create a 100 MB boot partition; that should be enough for future kernel updates and I won't have to worry about disk space.

    Submitted by Rick (not registered) on Sun, 2009-03-29 23:12.

    First of all, cheers for the tutorial. I learnt heaps even though I couldn't get it to work, because the partitioner would not see md0.

    The reason it failed for me:

    mkfs.xfs /dev/md0
    ln /dev/md0 /dev/sde

    These don't work when installing from the Ubuntu server CD.

    The trick is, at a system-recovery or Live CD command prompt, to type

    mkfs.ext3 /dev/md0

    instead (don't bother with ln /dev/md0 etc.).

    This formats the RAID array as ext3, which, unlike XFS, can actually be seen by the server installer!

    Now, in the partitioner, select manual setup. At first you still won't see md0, but fear not! Set up your boot partition (/dev/sda1) and your swap partitions (sda3, sdb2, etc.), then go into "configure software RAID". Now click finish (if you click on "delete RAID array" you'll see your md0 array - yay! But don't delete it, of course!). Now, back in the partition screen, you will see the md0 partition!

    Now change it to be used as ext3 with / (root), but don't format it.

    All done!

     

    Submitted by Fernando Salas (not registered) on Wed, 2009-03-25 18:06.

    First of all, I want to say thanks for this howto and also for the comments; as I'm quite new to Linux, I found them VERY useful.

    Now to add my 2 cents, I will just tell my little experience with RAID

    I had to build a server and the hardware turned out to be a FakeRAID one, so at first I thought I'd give FakeRAID a try: I issued dmraid -ay from the Live CD and played with it a little. Then, after some reading about the pros and cons of FakeRAID vs. software RAID, I made up my mind and took the software RAID path. As I wanted RAID 10 as the root filesystem, I made 2 partitions more or less as recommended in one of these comments, formatted them, ran the server installer, partitioned manually, and everything installed OK.

    Then I boot and...

    Initramfs appears (what the heck is this? was my first thought)

    Well, there I tried mdadm --assemble --scan; it answered "no device found".

    To make it short: after 2 days of swearing, and with quite a bit less hair on my head, I found the culprit -

    dmraid

    The Ubuntu boot process brings up dmraid and it grabs the devices for itself, even though I never actually used it for the install. I had to chroot, then dmraid -an, then apt-get remove dmraid, and my problem was solved. As I didn't find this anywhere, I thought it might help others.

    Hope it helps someone

    Fer

     

     

     


    Submitted by Anonymous (not registered) on Mon, 2009-01-12 12:11.

    With raid10,f2 you can almost double the sequential read performance of your raid, while other performance numbers are about the same.

    Using all 4 of the drives, you can roughly quadruple your sequential read performance, and roughly double other read performance measures, compared to your setup, while writing will be about the same. I would also recommend using a bigger chunk size, say 256 KiB.

    Your point 3 would then be:

    mdadm -C /dev/md2 -c 256 -n 4 -l 10 -p f4 /dev/sd[abcd]2

    I would also recommend using RAID for boot and swap; using all 4 of the drives would actually let you keep running even if 3 disks crashed, plus you get the added performance of all the drives. /boot needs to be on a RAID 10 with the near layout (n4 here), as grub and lilo can only boot RAID partitions that look like standalone partitions.

    Say for /boot:

    mdadm -C /dev/md1 -c 256 -n 4 -l 10 -p n4 /dev/sd[abcd]1

    And for swap:

    mdadm -C /dev/md3 -c 256 -n 4 -l 10 -p f4 /dev/sd[abcd]3

    For /home I would not waste all the space on having 4 copies, so:

    mdadm -C /dev/md4 -c 256 -n 4 -l 10 -p f2 /dev/sd[abcd]4

    You may even consider running RAID5 on /home, to get more space.

    There is more on the setup at  http://linux-raid.osdl.org/index.php/Preventing_against_a_failing_disk

    Compared to your setup, this would give you:

    1. Survival of 3 disks crashing - your setup would not survive a crash of the disk where your /boot was placed, and your setup will stop if any of your swap partitions were damaged.

    2. Almost 4 times the sequential read performance, and double random read performance for your basic /root and swap partitions.

     

     

    Submitted by Toni W. (not registered) on Mon, 2009-02-02 13:45.

    > Say for /boot:

    > mdadm -C /dev/md1 -c 256 -n 4 -l 10 -p n4 /dev/sd[abcd]1

     

    Boot from RAID 10? Is this possible with GRUB or LILO?

    I thought it wasn't possible.

     

    > mdadm -C /dev/md2 -c 256 -n 4 -l 10 -p f4 /dev/sd[abcd]2 

    This gives 25% usable disk space. Am I wrong?

     

    Thanks

     

    Submitted by Travis (not registered) on Sun, 2009-01-11 10:52.

    Hey man, just wanted to say thanks! Finally got RAID 10 up and running. Had to tweak a little though... ended up running a totally separate drive for boot and swap, as the install kept hanging on me at 15%. Also, I'm a complete newbie; for all the other newbies out there, you have to run apt-get update before you run apt-get install mdadm. Cheers

     


    Submitted by jaxån (not registered) on Tue, 2008-12-23 14:46.

    You might want to consider using swap on RAID too. If one swap disk crashes, the machine will go down, even though data stored in the RAID is still intact. And you do not need a swap disk until the system (and RAID) is up, so the boot partition is the only one needed. Might I suggest a USB stick for the boot partition :)

    See:  http://linux-raid.osdl.org/index.php/Why_RAID%3F

    Submitted by mrt181 (not registered) on Fri, 2008-12-19 22:04.
    And now please write a similar tutorial for FakeRAID 10, for those of us who want to dual-boot with other operating systems and still be able to access all data in the whole array.
    Submitted by E. Darwin (not registered) on Wed, 2008-12-03 04:52.

    Hi there,

    This HowTo is really really great.

    I would like to thank you for providing a very clear step-by-step tutorial on how to install RAID 10 in Ubuntu; it is working great on my system, without a doubt.

    Just wondering if I can make a request for a how-to on adding "hot spares" to this RAID, on troubleshooting how to replace a drive and rebuild the RAID, and on sending email to the user if one of the drives fails.

    I think this would make the PERFECT RAID 10 how-to for Ubuntu users.

    Thank you for your consideration.

    Submitted by Toni W. (not registered) on Fri, 2008-11-07 08:54.

    Hi !!

    Great HowTo, but is a similar solution possible to install RAID 10 on Ubuntu Server, where there is no desktop Live CD to perform the intermediate steps?

    Any ideas ?

     Thanks !!

     

    Submitted by maxbash (registered user) on Mon, 2008-11-17 09:02.
    Yes, you can install it as a server: stop at step 3, then follow my guide at http://www.howtoforge.net/minimal-ubuntu-8.04-server-install using the correct device names.
    Submitted by Anonymous (not registered) on Mon, 2008-11-03 22:59.

    Hi

     Great info, thanks!

    I have a question - I set my system up following your tutorial, but wanted to upgrade to Ubuntu 8.10. My /boot partition was too small at 50 MB, so I used the Live CD to resize it to 200 MB, deleting the /dev/sda2 partition in the process.

    How do I resync the RAID array to bring the recreated /dev/sda2 back into the RAID? It says /dev/md0 is not started when trying to do it from the Live CD, and booting from the actual system itself I can't do it either, as I am unable to mount the RAID while it is in use by the system!

    Any ideas?


    Thanks!

    Submitted by Anonymous (not registered) on Thu, 2008-10-23 13:04.

    If you have many disks to partition with an identical layout, using cfdisk gets rather tiresome. Instead, use sfdisk like this:

    sfdisk -d /dev/sdX | sfdisk /dev/sdY

     which should save some time and effort.

    Submitted by maxbash (registered user) on Mon, 2008-08-25 19:26.

    Brm didn't have anything specific for me to help with. The mailing list mentioned did have some concerns that I will address. It is technically possible to put RAID 10 on two drives, but it's practically useless: striping or mirroring two partitions on the same hard drive causes a nasty performance hit, and RAID 1 would be better for two drives. Yes, RAID 100 is faster than RAID 10, but I think the added overhead wouldn't speed up software RAID, and it decreases the level of redundancy; I would love to see someone benchmark it. Putting swap on top of software RAID adds unnecessary overhead: virtual memory in the kernel automatically optimizes the use of multiple swap partitions, and the kernel adapts if a swap partition becomes unavailable. You can put /boot on a RAID 1, but /boot is easy to regenerate, and you will have to partition a new drive and rebuild your RAID anyway if you lose a drive. I can redo the guide to make /boot redundant if at least a few people request it. My setup gives you a high-performance storage system that lets you retain your data if a hard drive fails. If you want a system with high availability and seamless failover you will need hardware RAID with hot-swappable drive bays, but that is expensive and not required for someone who doesn't need high availability.

    Submitted by brm (registered user) on Wed, 2008-08-20 23:50.

    Problems with this HowTo are discussed on the linux-raid mailing list:

    Linux RAID linux-raid@vger.kernel.org

    The HowTo should also refer to recent documentation on Linux RAID:

    http://wiki.linux-raid.osdl.org

    Submitted by counterdutch (not registered) on Sat, 2008-09-20 17:45.
    Submitted by Anonymous (not registered) on Fri, 2011-04-08 15:24.
    The linux-raid wiki moved to https://raid.wiki.kernel.org