Mirrored Logical Volumes - Overview
LVM supports mirrored volumes. A mirror maintains identical copies of data on different devices. LVM ensures that data written to an underlying physical volume is mirrored onto a separate physical volume.
The main advantage of mirrored logical volumes is the same as that of RAID1: redundancy. When one leg of a mirror fails, the logical volume becomes a linear volume and can still be accessed.
Lab Environment and Prerequisites
You need a Linux operating system of any distribution with two or more spare drives attached. In this tutorial, I have attached two additional 10GB drives for configuring an LVM mirror.
Please refer to the following articles to learn more about Logical Volume Manager:
Beginners guide to how LVM works in Linux (architecture)
Manage Logical Volume in Linux – One STOP Solution
Step-1: Create Disk Partition
If you are wondering whether it is important to partition the disk, or whether you can directly use /dev/sdX, you should read Why should I partition the disk while creating Logical Volume?
We will use the parted command to create our disk partitions, but you can also use fdisk:
[root@server ~]# parted --script /dev/sdb "mklabel gpt"
[root@server ~]# parted --script /dev/sdb "mkpart 'LVM2' 0% 100%"
[root@server ~]# parted --script /dev/sdb "set 1 lvm on"
[root@server ~]# parted --script /dev/sdc "mklabel gpt"
[root@server ~]# parted --script /dev/sdc "mkpart 'LVM2' 0% 100%"
[root@server ~]# parted --script /dev/sdc "set 1 lvm on"
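Before moving on, you can verify the result with parted's print subcommand (output not reproduced here; it should show a single partition spanning each disk with the lvm flag set):

[root@server ~]# parted --script /dev/sdb print
[root@server ~]# parted --script /dev/sdc print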
Step-2: Create Physical Volume
The next step is to create the physical volumes using the following command. We will create two physical volumes, on /dev/sdb1 and /dev/sdc1 respectively.
[root@server ~]# pvcreate /dev/sdb1 /dev/sdc1
Physical volume "/dev/sdb1" successfully created.
Physical volume "/dev/sdc1" successfully created.
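You can double-check the new physical volumes with pvs. The output below is an illustrative sketch (sizes and attributes will vary with your setup):

[root@server ~]# pvs
  PV         VG Fmt  Attr PSize  PFree
  /dev/sdb1     lvm2 ---  10.00g 10.00g
  /dev/sdc1     lvm2 ---  10.00g 10.00g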
Step-3: Create Volume Group
Now that the physical volumes have been created, you can assign them to a volume group. It is possible to add a physical volume to an existing volume group, or to create a new volume group and add the physical volume to it.
I have created a VG named volgroup_mirror for this tutorial with the default physical extent size.
[root@server ~]# vgcreate volgroup_mirror /dev/sdb1 /dev/sdc1
Volume group "volgroup_mirror" successfully created
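You can confirm the new VG with vgs; a hedged sketch of the output (sizes are illustrative):

[root@server ~]# vgs volgroup_mirror
  VG              #PV #LV #SN Attr   VSize   VFree
  volgroup_mirror   2   0   0 wz--n- <19.99g <19.99g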
What are Physical Extents in LVM?
- When a volume group is created, a physical extent size is set for it.
- The physical extent size defines the size of the building blocks used to create logical volumes.
- A logical volume always has a size that is a multiple of the physical extent size.
- If you need to create huge logical volumes, it is more efficient to use a big physical extent size.
- If you do not specify anything, a default extent size of 4.00 MiB is used.
- The physical extent size is normally a power of 2. The older LVM1 metadata format imposed an upper limit on the extent size, but the LVM2 format used here does not.
- Use the vgcreate -s option to specify the physical extent size you want to use, as shown in the example below.
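A hypothetical example of the -s option (this tutorial keeps the 4 MiB default; the VG name volgroup_bigpe and the devices /dev/sdf1 and /dev/sdg1 are made up for illustration):

[root@server ~]# vgcreate -s 16M volgroup_bigpe /dev/sdf1 /dev/sdg1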
Step-4: Create Mirrored Logical Volumes
When creating a mirrored logical volume, use the -m argument of the lvcreate command to specify the number of additional copies of the data. For example, specifying -m1 creates one mirror, which means a linear logical volume plus one mirrored copy. Similarly, specifying -m2 creates two mirrors, yielding three copies of the file system.
4.1 Create logical volume with mirroring
Let us create a mirrored volume with 2 copies using the argument -m1, like below:
[root@server ~]# lvcreate -L 2GB -m1 -n testmirror_lv volgroup_mirror
Logical volume "testmirror_lv" created.
After creating a mirrored volume, the data will be synchronized automatically across both legs. Use the lvs command to monitor the sync progress in the Cpy%Sync column, as shown below.
We can use the lvs command with the -o +devices option to display the configuration of the mirror, including which devices make up the mirror legs. In the example output below, the logical volume is carved out of both /dev/sdb1 and /dev/sdc1. We can also cross-check this with the Linux command lsblk -f (a sketch follows the lvs output below).
[root@server ~]# lvs -a -o +devices
  LV                       VG              Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  testmirror_lv            volgroup_mirror rwi-a-r--- 2.00g                                    100.00           testmirror_lv_rimage_0(0),testmirror_lv_rimage_1(0)
  [testmirror_lv_rimage_0] volgroup_mirror iwi-aor--- 2.00g                                                     /dev/sdb1(1)
  [testmirror_lv_rimage_1] volgroup_mirror iwi-aor--- 2.00g                                                     /dev/sdc1(1)
  [testmirror_lv_rmeta_0]  volgroup_mirror ewi-aor--- 4.00m                                                     /dev/sdb1(0)
  [testmirror_lv_rmeta_1]  volgroup_mirror ewi-aor--- 4.00m                                                     /dev/sdc1(0)
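For reference, here is a hedged sketch of what the device tree might look like at this point, using plain lsblk (the -f variant adds filesystem columns, which stay empty until the filesystem is created in section 4.3; major/minor numbers and sizes are illustrative):

[root@server ~]# lsblk /dev/sdb /dev/sdc
NAME                                        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb                                           8:16   0  10G  0 disk
└─sdb1                                        8:17   0  10G  0 part
  ├─volgroup_mirror-testmirror_lv_rmeta_0   253:0    0   4M  0 lvm
  │ └─volgroup_mirror-testmirror_lv         253:4    0   2G  0 lvm
  └─volgroup_mirror-testmirror_lv_rimage_0  253:1    0   2G  0 lvm
    └─volgroup_mirror-testmirror_lv         253:4    0   2G  0 lvm
sdc                                           8:32   0  10G  0 disk
└─sdc1                                        8:33   0  10G  0 part
  ├─volgroup_mirror-testmirror_lv_rmeta_1   253:2    0   4M  0 lvm
  │ └─volgroup_mirror-testmirror_lv         253:4    0   2G  0 lvm
  └─volgroup_mirror-testmirror_lv_rimage_1  253:3    0   2G  0 lvm
    └─volgroup_mirror-testmirror_lv         253:4    0   2G  0 lvm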
4.2 What are SubLVs in LVM?
In the lvs output shown earlier:
- SubLVs holding LV data or parity blocks have the suffix _rimage_#. These SubLVs are sometimes referred to as DataLVs.
- SubLVs holding RAID metadata have the suffix _rmeta_#. RAID metadata includes superblock information, RAID type, bitmap, and device health information. These SubLVs are sometimes referred to as MetaLVs.
4.3 Create filesystem and mount the mirrored volume
Let's create an ext4 filesystem and mount the mirrored volume. For more information, please refer to: Create file systems and mount the Logical Volumes
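The filesystem-creation command itself is not shown in the capture; a minimal sketch of that step, assuming the default mkfs.ext4 options:

[root@server ~]# mkfs.ext4 /dev/volgroup_mirror/testmirror_lv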
Mount this Logical Volume:
# Create mount point
[root@server ~]# mkdir /mnt/mirror_vol

# Mount the filesystem
[root@server ~]# mount /dev/volgroup_mirror/testmirror_lv /mnt/mirror_vol
4.4 Create some data to test LVM mirroring
Check the mounted volume and create some test files on it.
# Check the mounted volume
[root@server ~]# df -h /mnt/mirror_vol
Filesystem                                 Size  Used Avail Use% Mounted on
/dev/mapper/volgroup_mirror-testmirror_lv  2.0G  6.0M  1.8G   1% /mnt/mirror_vol

# Change directory and create some test files
[root@server ~]# cd /mnt/mirror_vol
[root@server mirror_vol]# touch testfile.txt
[root@server mirror_vol]# ls -al
total 16
drwx------ 2 root root 16384 Aug 11 11:44 lost+found
-rw-r--r-- 1 root root     0 Aug 11 11:49 testfile.txt
We have confirmed that we can use the mirrored volume the same way as a linear logical volume. You can also manage mirrored volumes with the commands explained in Managing LVM Logical Volumes.
Step-5: Recover from LVM Mirror failure
In this section let us discuss how to troubleshoot issues where one leg of an LVM mirrored volume fails because the underlying device for a physical volume goes down.
In the example, I have detached the physical volume /dev/sdb1 for testing purposes. In the lvs output shown below, we can see that the VG volgroup_mirror is missing PV sdb1.
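The article does not show how the device was detached. On a lab VM with SCSI-attached disks, one common way to simulate the failure (an assumption, not necessarily how it was done here) is to delete the device from sysfs:

# CAUTION: destructive; only do this on a disposable lab disk
[root@server ~]# echo 1 > /sys/block/sdb/device/delete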
[root@server ~]# lvs -a -o +devices
  WARNING: Couldn't find device with uuid 3RKCLI-Z1Hn-X7iV-yoKK-FDNA-5j4u-tn8Fe0.
  WARNING: VG volgroup_mirror is missing PV 3RKCLI-Z1Hn-X7iV-yoKK-FDNA-5j4u-tn8Fe0 (last written to /dev/sdb1).
  WARNING: Couldn't find all devices for LV volgroup_mirror/testmirror_lv_rimage_0 while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV volgroup_mirror/testmirror_lv_rmeta_0 while checking used and assumed devices.
  LV                       VG              Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  testmirror_lv            volgroup_mirror rwi-a-r-p- 2.00g                                    100.00           testmirror_lv_rimage_0(0),testmirror_lv_rimage_1(0)
  [testmirror_lv_rimage_0] volgroup_mirror iwi-aor-p- 2.00g                                                     [unknown](1)
  [testmirror_lv_rimage_1] volgroup_mirror iwi-aor--- 2.00g                                                     /dev/sdc1(1)
  [testmirror_lv_rmeta_0]  volgroup_mirror ewi-aor-p- 4.00m                                                     [unknown](0)
  [testmirror_lv_rmeta_1]  volgroup_mirror ewi-aor--- 4.00m                                                     /dev/sdc1(0)
As discussed in the previous sections, if one leg fails we can still use the logical volume; however, it will act as a linear logical volume. We can confirm this with df and ls: the volume mounted at /mnt/mirror_vol is still usable.
[root@server ~]# df -h /mnt/mirror_vol
Filesystem                                 Size  Used Avail Use% Mounted on
/dev/mapper/volgroup_mirror-testmirror_lv  2.0G  6.0M  1.8G   1% /mnt/mirror_vol

[root@server ~]# ls -l /mnt/mirror_vol
total 16
drwx------ 2 root root 16384 Aug 11 11:44 lost+found
-rw-r--r-- 1 root root     0 Aug 11 11:49 testfile.txt
If you reuse the same disk rather than replacing it with a new one, you will see "inconsistent" warnings when you run the pvcreate command. We can prevent that warning from appearing by first running vgreduce as shown below.
[root@server ~]# vgreduce --removemissing volgroup_mirror --force
WARNING: Couldn't find device with uuid 3RKCLI-Z1Hn-X7iV-yoKK-FDNA-5j4u-tn8Fe0.
WARNING: VG volgroup_mirror is missing PV 3RKCLI-Z1Hn-X7iV-yoKK-FDNA-5j4u-tn8Fe0 (last written to [unknown]).
WARNING: Couldn't find all devices for LV volgroup_mirror/testmirror_lv_rimage_0 while checking used and assumed devices.
WARNING: Couldn't find all devices for LV volgroup_mirror/testmirror_lv_rmeta_0 while checking used and assumed devices.
WARNING: Couldn't find device with uuid 3RKCLI-Z1Hn-X7iV-yoKK-FDNA-5j4u-tn8Fe0.
WARNING: Couldn't find device with uuid 3RKCLI-Z1Hn-X7iV-yoKK-FDNA-5j4u-tn8Fe0.
Wrote out consistent volume group volgroup_mirror.
Once the missing PV has been removed from the VG, let's start adding the disk back to the mirror. Make sure you create partitions if you are adding a new drive. To fix this, first create the PV, then add it to the existing VG volgroup_mirror:
# Create PV
[root@server ~]# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created.

# Add the newly created PV to VG
[root@server ~]# vgextend volgroup_mirror /dev/sdb1
  Volume group "volgroup_mirror" successfully extended
Now that our VG has been extended with the re-added drive, we will use the lvconvert command with --repair to restore the mirrored volume /dev/volgroup_mirror/testmirror_lv to its original mirrored state.
[root@server ~]# lvconvert --repair volgroup_mirror/testmirror_lv
Attempt to replace failed RAID images (requires full device resync)? [y/n]: y
Faulty devices in volgroup_mirror/testmirror_lv successfully replaced.
Once it's repaired, you will see the mirror rebuild progress in the Cpy%Sync column.
[root@server ~]# lvs -a -o +devices
  LV                       VG              Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  testmirror_lv            volgroup_mirror rwi-aor--- 2.00g                                    5.74             testmirror_lv_rimage_0(0),testmirror_lv_rimage_1(0)
  [testmirror_lv_rimage_0] volgroup_mirror iwi-aor--- 2.00g                                                     /dev/sdc1(1)
  [testmirror_lv_rimage_1] volgroup_mirror Iwi-aor--- 2.00g                                                     /dev/sdb1(1)
  [testmirror_lv_rmeta_0]  volgroup_mirror ewi-aor--- 4.00m                                                     /dev/sdc1(0)
  [testmirror_lv_rmeta_1]  volgroup_mirror ewi-aor--- 4.00m                                                     /dev/sdb1(0)
We can also confirm that the data under the mount /mnt/mirror_vol remains intact.
Convert a Mirrored Logical Volume to a Linear Logical Volume
In this section, let us learn how to convert a mirrored logical volume to a linear logical volume using lvconvert. Specifying -m0 removes all mirror copies, leaving a single linear copy.
[root@server ~]# lvconvert -m0 /dev/volgroup_mirror/testmirror_lv /dev/sdb1
Are you sure you want to convert raid1 LV volgroup_mirror/testmirror_lv to type linear losing all resilience? [y/n]: y
Logical volume volgroup_mirror/testmirror_lv successfully converted.
In the example, we converted our LV /dev/volgroup_mirror/testmirror_lv to a linear LV by removing the mirror leg on PV /dev/sdb1.
[root@server ~]# lvs -a -o +devices
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
testmirror_lv volgroup_mirror -wi-ao---- 2.00g /dev/sdc1(1)
Convert a Linear Logical Volume to a Mirrored Logical Volume
In the previous section, we removed the mirror. In this section, let's learn how to convert the volume back into a mirrored one. Specifying -m1 creates one mirror. We can also add more than one PV to the mirror; when mirroring onto a brand-new disk, we need to create the PV and run vgextend to expand the VG before adding it to the mirror (see the sketch below).
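A hypothetical sketch for that brand-new-disk case; the device /dev/sde1 is made up for illustration and assumed to be already partitioned (in this tutorial we reuse the existing /dev/sdb1 instead):

[root@server ~]# pvcreate /dev/sde1
[root@server ~]# vgextend volgroup_mirror /dev/sde1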
[root@server ~]# lvconvert -m 1 /dev/volgroup_mirror/testmirror_lv /dev/sdb1
Are you sure you want to convert linear LV volgroup_mirror/testmirror_lv to raid1 with 2 images enhancing resilience? [y/n]: y
Logical volume volgroup_mirror/testmirror_lv successfully converted.
Once the RAID1 mirror is created, the rebuild process starts; you can watch it in the Cpy%Sync column.
[root@server /]# lvs -a -o +devices
  LV                       VG              Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  testmirror_lv            volgroup_mirror rwi-aor--- 2.00g                                    5.74             testmirror_lv_rimage_0(0),testmirror_lv_rimage_1(0)
  [testmirror_lv_rimage_0] volgroup_mirror iwi-aor--- 2.00g                                                     /dev/sdc1(1)
  [testmirror_lv_rimage_1] volgroup_mirror Iwi-aor--- 2.00g                                                     /dev/sdb1(1)
  [testmirror_lv_rmeta_0]  volgroup_mirror ewi-aor--- 4.00m                                                     /dev/sdc1(0)
  [testmirror_lv_rmeta_1]  volgroup_mirror ewi-aor--- 4.00m                                                     /dev/sdb1(0)
Add new disk to an existing Mirrored Logical Volume
Adding a new mirror is similar to converting a linear LV to a mirrored LV. When a new disk is added, we need to create a partition and a physical volume on it; once done, we can add the new PV to the VG. In the example, I have attached a new drive /dev/sdd and partitioned it (see the sketch below).
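The partitioning commands were not captured in the article; a sketch that mirrors Step-1 would be:

[root@server ~]# parted --script /dev/sdd "mklabel gpt"
[root@server ~]# parted --script /dev/sdd "mkpart 'LVM2' 0% 100%"
[root@server ~]# parted --script /dev/sdd "set 1 lvm on"

With the partition in place, let's add the new PV to our existing VG volgroup_mirror; note that vgextend initializes the physical volume automatically, as its output confirms: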
[root@server ~]# vgextend volgroup_mirror /dev/sdd1
  Physical volume "/dev/sdd1" successfully created.
  Volume group "volgroup_mirror" successfully extended
Once the volume group has been successfully extended, let's convert the existing one-copy mirrored RAID1 LV into a two-copy mirrored logical volume. Specifying -m2 creates two mirrors.
[root@server ~]# lvconvert -m 2 /dev/volgroup_mirror/testmirror_lv /dev/sdd1
Are you sure you want to convert raid1 LV volgroup_mirror/testmirror_lv to 3 images enhancing resilience? [y/n]: y
Logical volume volgroup_mirror/testmirror_lv successfully converted.
Now run the lvs command to confirm. The following output shows all three partitions (sdb1, sdc1, and sdd1) backing the mirrored logical volume. We add more than one mirror so that, even if more than one drive fails, we can still access the data from a remaining drive.
[root@server ~]# lvs -a -o +devices
  LV                       VG              Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  testmirror_lv            volgroup_mirror rwi-a-r--- 2.00g                                    5.18             testmirror_lv_rimage_0(0),testmirror_lv_rimage_1(0),testmirror_lv_rimage_2(0)
  [testmirror_lv_rimage_0] volgroup_mirror iwi-aor--- 2.00g                                                     /dev/sdd1(1)
  [testmirror_lv_rimage_1] volgroup_mirror iwi-aor--- 2.00g                                                     /dev/sdc1(1)
  [testmirror_lv_rimage_2] volgroup_mirror Iwi-aor--- 2.00g                                                     /dev/sdb1(1)
  [testmirror_lv_rmeta_0]  volgroup_mirror ewi-aor--- 4.00m                                                     /dev/sdd1(0)
  [testmirror_lv_rmeta_1]  volgroup_mirror ewi-aor--- 4.00m                                                     /dev/sdc1(0)
  [testmirror_lv_rmeta_2]  volgroup_mirror ewi-aor--- 4.00m                                                     /dev/sdb1(0)
Through all of the changes we made above, the data on the mirrored volume remains the same. You can verify this by checking the mounted volume /mnt/mirror_vol, where the data is safe and you will not experience any downtime.