Step-by-Step Tutorial: Configure Hybrid Software RAID 10 in Linux


In this article I will share the steps to configure hybrid software RAID 10 (1+0) using four disks, and explain the layout in more detail in the sections below. I have written another article comparing the various RAID types with figures, including the pros and cons of each, so that you can make an informed decision before choosing a RAID type for your system.


Hybrid Array

After the Berkeley Papers were published, many vendors began combining different RAID levels in an attempt to increase both performance and reliability. These hybrid arrays are supported by most hardware RAID controllers and external systems. The Linux kernel will also allow the combination of two or more RAID levels to form a hybrid array. In fact, it allows any combination of arrays, although some of them might not offer any benefit.

 

RAID-10 (striped mirror)

The most widely used, and most effective, hybrid array results from the combination of RAID-0 and RAID-1. The fast performance of striping, coupled with the redundancy of mirroring, creates a quick and reliable solution, although it is also the most expensive one.

A striped-mirror, or RAID-10, is simple. Two separate mirrors are created, each with a unique set of member disks. Then the two mirror arrays are added to a new striped array. When data is written to the logical RAID device, it is striped across the two mirrors.


 

How Hybrid Software RAID 10 Works

Configuring hybrid software RAID 10 requires a lot of surplus disk hardware, although it provides a fast and reliable solution: I/O throughput approaches that of a standalone striped array. When any single disk in a RAID-10 fails, both sides of the hybrid (each mirror) may still operate, although the one with the failed disk will be operating in degraded mode. A RAID-10 arrangement can even withstand multiple disk failures, as long as they occur on different sides of the stripe.
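If you want to see this degraded-mode behaviour for yourself, mdadm can mark a member disk as faulty on demand. A minimal sketch, assuming the /dev/md0 mirror on /dev/sdb1 and /dev/sdc1 that we build later in this article (run this only on a test system):

[root@node1 ~]# mdadm /dev/md0 --fail /dev/sdb1     # simulate a failure in the first mirror
[root@node1 ~]# cat /proc/mdstat                    # md0 now reports [_U]; the stripe on top keeps working
[root@node1 ~]# mdadm /dev/md0 --remove /dev/sdb1   # remove the failed member from the array
[root@node1 ~]# mdadm /dev/md0 --add /dev/sdb1      # re-add it and let the mirror resynchronize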

When creating a RAID-10, it's a good idea to distribute the mirroring arrays across multiple I/O channels, which helps the array withstand controller failures. For example, take the case of a RAID-10 consisting of two mirror sets, each containing two member disks. If each mirror is placed entirely on its own I/O channel, then a failure of that channel will render the entire hybrid array useless. However, if each member disk of a single mirror is placed on a separate channel, then the array can withstand the failure of an entire I/O channel.
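Before pairing disks into mirrors, you can check which SCSI host (controller) each disk actually hangs off. A quick sketch, assuming a sysfs layout typical of modern kernels:

[root@node1 ~]# ls -l /sys/block/sd*    # the symlink target includes the hostN each disk is attached to
[root@node1 ~]# lsscsi                  # prints [host:channel:target:lun] per disk, if the package is installed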


While you could combine two stripes into a mirror, this arrangement offers no increase in performance over RAID 1+0 and does not increase redundancy. In fact, RAID 10 can withstand more disk failures than what many manufacturers call RAID 0+1 (two stripes combined into a mirror). While it's true that a RAID 0+1 could survive two disk failures within the same stripe, that second disk failure is trivial because the disk is already part of a nonfunctioning stripe.

What if we arranged our mirrors so that /dev/sdb and /dev/sdd were on controller A and /dev/sdc and /dev/sde were on controller B? In that case, the failure of a single controller would only place each mirror into degraded mode, leaving /dev/md2 operational.


Configure Hybrid Software RAID 10 Array

There are certain steps you must follow before creating a hybrid software RAID 10 array on your Linux node. Since I have already covered them in older articles, I will only share the hyperlinks here:

Important Rules of Partitioning

To set up a hybrid software RAID 10 array you will need a minimum of four disks (sdb1, sdc1, sdd1 and sde1). We will configure software RAID 1 on (sdb1 + sdc1) mapped to /dev/md0 and on (sdd1 + sde1) mapped to /dev/md1. On top of these we will configure a software RAID 0 array (/dev/md0 + /dev/md1) mapped to /dev/md2.
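As an aside, the Linux md driver also provides a native raid10 level that builds an equivalent layout from the four disks in a single step; this article uses the nested 1+0 approach instead so that each layer stays visible. A single-step sketch (hypothetical, not used in the rest of this tutorial) would look like:

# mdadm -C -n4 -l10 /dev/md0 /dev/sd{b,c,d,e}1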

Partitioning with fdisk
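In case your disks are still unpartitioned, the fdisk session per disk boils down to the sketch below; the fd type marks a partition as Linux raid autodetect. Repeat for /dev/sdc, /dev/sdd and /dev/sde:

# fdisk /dev/sdb
    n    create a new primary partition spanning the whole disk
    t    change the partition type to fd (Linux raid autodetect)
    w    write the partition table and exit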

 

Create Hybrid Software RAID 10 Array

Now that we have all the partitions in place, we will create the hybrid software RAID 10 array on top of them.

[root@node1 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   30G  0 disk
├─sda1            8:1    0  512M  0 part /boot
└─sda2            8:2    0 27.5G  0 part
  ├─centos-root 253:0    0 25.5G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0    2G  0 disk
└─sdb1            8:17   0    2G  0 part
sdc               8:32   0    2G  0 disk
└─sdc1            8:33   0    2G  0 part
sdd               8:48   0    2G  0 disk
└─sdd1            8:49   0    2G  0 part
sde               8:64   0    2G  0 disk
└─sde1            8:65   0    2G  0 part
sr0              11:0    1 1024M  0 rom

Our first software RAID 1 array, /dev/md0, will be created on /dev/sdb1 and /dev/sdc1:

[root@node1 ~]# mdadm -C -n2 -l1 /dev/md0 /dev/sd{b,c}1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Our next RAID 1 array will be created using /dev/sdd1 and /dev/sde1:

[root@node1 ~]# mdadm -C -n2 -l1 /dev/md1 /dev/sd{d,e}1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.

Here,

-C, --create
       Create a new array.

-v, --verbose
       Be more verbose about what is happening.

-l, --level=
       Set RAID level. When used with --create, options are: linear, raid0, 0, stripe, raid1, 1, mirror,
       raid4, 4, raid5, 5, raid6, 6, raid10, 10, multipath, mp, faulty, container. Obviously some of these
       are synonymous.

-c, --chunk=
       Specify chunk size in kilobytes.

-n, --raid-devices=
       Specify the number of active devices in the array.

 

Verify Software RAID 1 Changes

After each RAID-1 is initialized using mdadm, it will commence resynchronization. /proc/mdstat should report two-disk mirrors at /dev/md0 and /dev/md1:

[root@node1 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sde1[1] sdd1[0]
      2094080 blocks super 1.2 [2/2] [UU]
      [=========>...........]  resync = 48.3% (1012736/2094080) finish=0.0min speed=202547K/sec

md0 : active raid1 sdc1[1] sdb1[0]
      2094080 blocks super 1.2 [2/2] [UU]

unused devices: <none>
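If you want to follow the resynchronization through to the end, watch can poll /proc/mdstat for you, and mdadm can block until the arrays are ready:

[root@node1 ~]# watch -n1 cat /proc/mdstat        # refresh the RAID status every second (Ctrl+C to quit)
[root@node1 ~]# mdadm --wait /dev/md0 /dev/md1    # returns only once any running resync has finished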

 

Create Software RAID 0 Array

Next, to complete our hybrid software RAID 10 array, we will create a software RAID 0 array using both the md0 and md1 arrays, mapped to /dev/md2:

[root@node1 ~]# mdadm -C -n2 -l0 -c64 /dev/md2 /dev/md{0,1}
mdadm: /dev/md1 appears to contain an ext2fs file system
       size=2094080K  mtime=Mon Jun 10 16:31:37 2019
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.
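The warning above only means that mdadm found an old filesystem signature on /dev/md1, typically left over from earlier use of the partitions; answering y overwrites it. In an unattended script you could suppress the prompt with --run, for example:

# mdadm -C -n2 -l0 -c64 --run /dev/md2 /dev/md{0,1}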

 

Verify Hybrid Software RAID 10 Changes

Now /proc/mdstat should report two-disk mirrors at /dev/md0 and /dev/md1, plus a RAID-0 at /dev/md2 that consists of both mirrors:

[root@node1 ~]# cat /proc/mdstat
Personalities : [raid1] [raid0]
md2 : active raid0 md1[1] md0[0]
      4184064 blocks super 1.2 64k chunks

md1 : active raid1 sde1[1] sdd1[0]
      2094080 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sdc1[1] sdb1[0]
      2094080 blocks super 1.2 [2/2] [UU]

unused devices: <none>

 

Create Filesystem and Mount Point

Once all three arrays are activated, simply build a filesystem on the stripe (/dev/md2 in this case) and then mount it. We will create an ext4 filesystem for this article.

[root@node1 ~]# mkfs.ext4 /dev/md2
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=16 blocks, Stripe width=32 blocks
261632 inodes, 1046016 blocks
52300 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1071644672
32 block groups
32768 blocks per group, 32768 fragments per group
8176 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
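Notice that mke2fs detected Stride=16 and Stripe width=32 on its own: with our 64 KB chunk and the 4 KB block size the stride is 64/4 = 16 blocks, and with two stripe members the stripe width is 2 x 16 = 32 blocks. Should autodetection ever fail, the same values can be passed explicitly:

# mkfs.ext4 -E stride=16,stripe-width=32 /dev/md2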

Next we will create a mount point for /dev/md2 under /hybrid_array:

[root@node1 ~]# mkdir /hybrid_array

You can manually mount your new filesystem to be able to access and store data:

[root@node1 ~]# mount /dev/md2 /hybrid_array

[root@node1 ~]# df -h /hybrid_array
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2        3.9G   16M  3.7G   1% /hybrid_array
NOTE: The mount command mounts the filesystem only temporarily; the change will be lost after a reboot, so you must update /etc/fstab to make it persistent across reboots.
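A minimal way to make both the mount and the array assembly persistent looks like the sketch below; the fstab line assumes the ext4 filesystem we just created, and on CentOS/RHEL the mdadm configuration file is /etc/mdadm.conf:

[root@node1 ~]# echo '/dev/md2  /hybrid_array  ext4  defaults  0 0' >> /etc/fstab
[root@node1 ~]# mdadm --detail --scan >> /etc/mdadm.conf    # record md0, md1 and md2 so they assemble at boot
[root@node1 ~]# mount -a                                    # verify that the new fstab entry mounts cleanly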

 

Create Hybrid Software RAID 1+0 Array with More Disks

You could also add a spare disk to each of the mirroring arrays to make the solution more robust, and you can combine more than two mirrors into a RAID-0. For example, with nine disks, -x1 assigns one spare to each mirror, and the final stripe spans three mirrors:

# mdadm -C -n2 -l1 -x1 /dev/md0 /dev/sd{b,c,d}1
# mdadm -C -n2 -l1 -x1 /dev/md1 /dev/sd{e,f,g}1
# mdadm -C -n2 -l1 -x1 /dev/md2 /dev/sd{h,i,j}1
# mdadm -C -n3 -l0 -c64 /dev/md3 /dev/md{0,1,2}

For more details on your hybrid software RAID 10 array, use mdadm --detail as shown below:

[root@node1 ~]# mdadm --detail /dev/md2
/dev/md2:
           Version : 1.2
     Creation Time : Fri Jun 14 14:07:15 2019
        Raid Level : raid0
        Array Size : 4184064 (3.99 GiB 4.28 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Fri Jun 14 14:07:15 2019
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 64K

Consistency Policy : none

              Name : node1.golinuxcloud.com:2  (local to host node1.golinuxcloud.com)
              UUID : c2ba009d:d077b4b5:eb28f342:e91ea39e
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       9        0        0      active sync   /dev/md0
       1       9        1        1      active sync   /dev/md1

 

Lastly, I hope the steps from this article to configure a hybrid software RAID 10 array on Linux were helpful. Let me know your suggestions and feedback using the comment section.

 

References:
Managing RAID on Linux (Derek Vadala, O'Reilly)

 

Deepak Prasad


He is the founder of GoLinuxCloud and brings over a decade of expertise in Linux, Python, Go, Laravel, DevOps, Kubernetes, Git, Shell scripting, OpenShift, AWS, Networking, and Security. With extensive experience, he excels in various domains, from development to DevOps, Networking, and Security, ensuring robust and efficient solutions for diverse projects. You can connect with him on his LinkedIn profile.
