Setting Up LVM On Top Of Software RAID Subsystem - RHEL & Fedora - Page 2

Step 9:

Using fdisk, change the partition type ID of /dev/sda5 from 8e to fd (Linux RAID autodetect), and save by typing "w" at the fdisk prompt.

Then run:

# partprobe


Step 10:

Create the RAID array with /dev/sda5 and one missing device (/dev/sda6). Remember that RAID 1 requires a minimum of two active devices; we will add /dev/sda6 later.

# mdadm --create /dev/md0 --level=1 -n 2 /dev/sda5 missing
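At this point the array comes up degraded by design, which you can confirm in /proc/mdstat. The sketch below checks a sample of that output (the sample text is an assumption; your device names and block counts will differ):

```shell
# Sample /proc/mdstat content for a freshly created, degraded RAID 1
# (assumed output; on the real system, run "cat /proc/mdstat").
mdstat_sample='md0 : active raid1 sda5[0]
      2096064 blocks [2/1] [U_]'

# "[2/1]" = 2 configured devices, 1 active; "_" marks the missing mirror.
if printf '%s\n' "$mdstat_sample" | grep -q '\[U_\]'; then
    state="degraded"
else
    state="clean"
fi
echo "md0 state: $state"
```

A degraded result here is expected and harmless; it simply reflects the `missing` placeholder we passed to mdadm.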


Step 11:

Create PV on /dev/md0 and add it to the VG vg1.

# pvcreate /dev/md0
# vgextend vg1 /dev/md0



Step 12:

Now move the Physical Extents from the PV /dev/sda6 to /dev/md0, and pull /dev/sda6 out of the VG.

# pvmove /dev/sda6 /dev/md0
# vgreduce vg1 /dev/sda6
# pvremove /dev/sda6

Use pvdisplay, vgdisplay and lvdisplay to get the status of the PVs, VG and LVs. Note the size of /dev/md0 from the output of pvdisplay.
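If you want to script that size check, the PV Size field can be pulled out of the pvdisplay output with awk. The sample output below is an assumption (your values will differ); on a live system you would pipe `pvdisplay /dev/md0` in directly:

```shell
# Sample "pvdisplay /dev/md0" lines (assumed illustrative values).
pvdisplay_sample='  PV Name               /dev/md0
  VG Name               vg1
  PV Size               2.00 GiB
  Free PE               511'

# Extract the "PV Size" value and unit.
pv_size=$(printf '%s\n' "$pvdisplay_sample" | awk '/PV Size/ {print $3, $4}')
echo "PV size of /dev/md0: $pv_size"
```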



Step 13:

Using fdisk, change the partition type ID of /dev/sda6 to fd (Linux RAID autodetect), save by typing "w", and then run:

# partprobe


Step 14:

Now add the /dev/sda6 to the RAID array /dev/md0.

# mdadm --manage /dev/md0 --add /dev/sda6

# pvresize /dev/md0 (Resize the PV on /dev/md0)

# mdadm --grow /dev/md0 -z max (Refresh the array)

The grow mode is used for changing the size or shape of an active array. "-z max" means the amount of space (in kibibytes) to use from each drive in /dev/md0. This value is set with --grow for RAID 1 since the array was created with a size smaller than the currently active drives; the extra space can now be accessed using --grow.
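To make the effect of "-z max" concrete, here is a small arithmetic sketch. The sizes are assumed illustrative values, not taken from the article's system:

```shell
# Per-member size the array was created with, and the actual capacity
# of each member partition, both in KiB (assumed values).
old_component_kib=1048576
member_capacity_kib=2096064

# "--grow -z max" raises the per-member size to the full capacity,
# so the array gains the difference on each member.
extra_kib=$((member_capacity_kib - old_component_kib))
echo "growing to max reclaims ${extra_kib} KiB per member"
```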

Use pvdisplay, vgdisplay and lvdisplay to get the status of the PVs, VG and LVs. Can you notice the changes in the output of the pvdisplay command?


Step 15:

Get the details of /dev/md0 and add them to /etc/mdadm.conf. First copy the mdadm.conf-sample file to the /etc directory. This step is useful to rectify situations where, after creating the RAID array and rebooting the system, you get an error saying "no /dev/md0 found".

# cp /usr/share/doc/mdadm*/mdadm.conf-sample /etc/mdadm.conf

# mdadm --detail /dev/md0

Note down the UUID of the raid array.
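Pulling the UUID out of the mdadm output and building the ARRAY line can also be scripted. The sample UUID below is an assumption; on the real system you would pipe in `mdadm --detail /dev/md0`:

```shell
# Sample UUID line from "mdadm --detail /dev/md0" (assumed value).
detail_sample='           UUID : 3aaa0122:29827cfa:5331ad66:ca767371'

# Extract the UUID field and build the ARRAY line for /etc/mdadm.conf.
uuid=$(printf '%s\n' "$detail_sample" | awk '/UUID/ {print $3}')
echo "ARRAY /dev/md0 UUID=$uuid"
```

On a live system, `mdadm --detail --scan` prints a ready-made ARRAY line that can be appended to /etc/mdadm.conf directly.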

# vi /etc/mdadm.conf

And add these two lines:

DEVICE /dev/sda5 /dev/sda6
ARRAY /dev/md0 UUID=<ID>

Save the file with Esc followed by :wq. Then run:

# mdadm --detail /dev/md0

So we have seen that the LVM logical volume (lv1) sits on top of a RAID 1 array, forming an LVM mirrored set. This method of configuration does not require unmounting the LV and also prevents data loss. We will now go one step further and see how to add a spare drive if a device in the array fails or becomes faulty.

Open a new terminal (Ctrl+Alt+F2) and type:

# watch cat /proc/mdstat (to see the live status of the RAID array /dev/md0)

Make /dev/sda6 faulty and then remove it from the raid array.

# mdadm --manage /dev/md0 --fail /dev/sda6 --remove /dev/sda6
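In the watch terminal you will see the failed member flagged with "(F)" in /proc/mdstat until it is removed. The sketch below spots that flag in a sample of the output (the sample text is an assumption):

```shell
# Sample /proc/mdstat after "--fail /dev/sda6" (assumed output).
mdstat_sample='md0 : active raid1 sda6[1](F) sda5[0]
      2096064 blocks [2/1] [U_]'

# List any members flagged as failed.
failed=$(printf '%s\n' "$mdstat_sample" | grep -o '[a-z0-9]*\[[0-9]*\](F)')
echo "failed member(s): $failed"
```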

Create a new partition of type fd; here, /dev/sda7.

# fdisk /dev/sda

      n -> l -> +2048M (creates /dev/sda7) -> t -> 7 -> fd -> w

# partprobe

Add the /dev/sda7 device to the raid array:

# mdadm --manage /dev/md0 --add /dev/sda7

# vi /etc/mdadm.conf

Change the DEVICE line:

DEVICE /dev/sda5 /dev/sda7

Save the file with Esc followed by :wq. Then run:

# mdadm --detail /dev/md0

# pvresize /dev/md0

# mdadm --grow /dev/md0 -z max

This will resize the PV and grow the RAID array, respectively.
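The manual vi edit of the DEVICE line can also be scripted with sed. The sketch below works on a throwaway copy (the /tmp path and the UUID placeholder are assumptions), so nothing on the real system is touched:

```shell
# Build a throwaway copy of the config (placeholder UUID, assumed path).
demo=/tmp/mdadm.conf.demo
printf 'DEVICE /dev/sda5 /dev/sda6\nARRAY /dev/md0 UUID=<ID>\n' > "$demo"

# Swap the failed member for the new one on the DEVICE line.
sed -i 's|/dev/sda6|/dev/sda7|' "$demo"
cat "$demo"
```

Against the real /etc/mdadm.conf you would of course keep a backup copy before running sed -i.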



You can test this setup with different physical hard disks. Combining LVM and RAID gives more flexibility and safeguards the data to a great extent.

Idea and Concepts: Datacentre Planning & Implementation (for a Govt. Organization at Port Blair).


Swapan Karmakar (RHCE, BCA)


Andaman & Nicobar Islands.

My mission is to bring Linux & OSS into these beautiful Islands of Andaman & Nicobar.
