In this article I will give you an overview of Linux LVM snapshot backup and restore and cover the below topics related to LVM snapshots:
- Overview on LVM Snapshots
- Comparison between LVM1 and LVM2 Snapshots
- Create, Extend, Merge and Remove LVM Snapshots
- How to check status of LVM Snapshot
Linux LVM Snapshot backup and restore for RHEL/CentOS Update and Upgrade
- Experts generally recommend avoiding in-place upgrades between major RHEL/CentOS versions, such as from RHEL/CentOS 7 to RHEL/CentOS 8.
- In such cases it is recommended that you take a backup of your data, perform a scratch installation of the new RHEL/CentOS major release and then restore your data.
- However, Linux LVM snapshot backup and restore is an ideal solution for handling routine security updates and patches in a RHEL/CentOS environment.
- The only challenge you may face is taking an LVM snapshot of the boot partition.
- Since /boot is normally a standard partition and not part of a logical volume, you will not be able to take a snapshot of the boot partition.
- LVM snapshots use COW (Copy-on-Write) technology for Linux LVM snapshot backup and restore.
- COW offers both safety and performance: a snapshot only consumes space for blocks that change after it is created.
- Before a block on the origin is modified, the original block is first copied into the snapshot's COW area, and only then is the new data written to the origin.
- This way the data as it existed at snapshot time is never overwritten and remains available through the snapshot.
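You can see this copy-on-write behaviour for yourself with a few standard LVM commands. The commands below are a minimal sketch using example names (a volume group vg01 with a logical volume lv01 mounted on /mnt/lv01), which are assumptions and not part of this lab:

# Create a small snapshot of an existing logical volume (names are examples)
lvcreate --size 500M --snapshot --name lv01_snap /dev/vg01/lv01

# Right after creation almost nothing is allocated in the snapshot COW area
lvs vg01/lv01_snap                 # Data% is close to 0

# Overwrite some blocks on the origin; the old blocks are copied into the snapshot first
dd if=/dev/zero of=/mnt/lv01/testfile bs=1M count=100 oflag=dsync

# Data% of the snapshot has grown because the original blocks were preserved (COW)
lvs vg01/lv01_snap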
What is the difference between LVM1 and LVM2 Snapshot?
LVM1 Snapshots
- LVM1 has read-only snapshots.
- Read-only snapshots work by creating an exception table, which is used to keep track of which blocks have been changed.
- If a block is to be changed on the origin, it is first copied to the snapshot, marked as copied in the exception table, and then the new data is written to the original volume.
LVM2 Snapshots
- In LVM2, snapshots are read/write by default.
- Read/write snapshots work like read-only snapshots, with the additional feature that if data is written to the snapshot, that block is marked in the exception table as used, and never gets copied from the original volume.
- For example, you can mount the snapshot and run an experimental program that changes files on that volume. If you don't like what it did, you can unmount the snapshot, remove it with lvremove, and continue using the original file system unchanged, as illustrated in the sketch below.
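As a rough illustration of that workflow, the commands below are a minimal sketch; the volume group, logical volume, mount point and experimental program are example names, not taken from this article's lab:

# Take a writable snapshot of the volume you want to experiment on
lvcreate --size 1G --snapshot --name lv01_snap /dev/vg01/lv01

# Mount the snapshot and let the experimental program change files there
mkdir -p /mnt/snap
mount /dev/vg01/lv01_snap /mnt/snap
/opt/experimental/run --target /mnt/snap      # hypothetical program

# Not happy with the result? Discard the snapshot; the origin volume is untouched
umount /mnt/snap
lvremove /dev/vg01/lv01_snap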
How LVM Snapshot is different compared to backup solution?
- An LVM snapshot captures a logical volume at a certain point in time and preserves it, while a backup copies and archives the content of a partition or logical volume.
- The LVM snapshot size is independent of the source logical volume size; a snapshot initially consumes very little space, while a backup takes roughly as much space as the data on the source logical volume (depending on the compression used).
- If the source logical volume keeps filling up, the LVM snapshot can become corrupted (invalid), whereas a backup stays intact in a safe location and is independent of the source logical volume usage.
- Hence an LVM snapshot should be created but not kept for a long time, as it will keep consuming space while the source logical volume is written to. For long term retention a backup is the preferred option instead of a snapshot.
Lab Environment
I have created a Virtual Machine with CentOS 8 on Oracle VirtualBox, which is installed on a Linux server.
Below are the node specs:
[root@centos-8 ~]# df -Th
Filesystem            Type      Size  Used Avail Use% Mounted on
devtmpfs              devtmpfs  2.4G  8.0K  2.4G   1% /dev
tmpfs                 tmpfs     2.4G     0  2.4G   0% /dev/shm
tmpfs                 tmpfs     2.4G  8.6M  2.4G   1% /run
tmpfs                 tmpfs     2.4G     0  2.4G   0% /sys/fs/cgroup
/dev/mapper/rhel-root ext4      7.9G  1.9G  5.6G  26% /
/dev/sda1             ext4      488M  130M  323M  29% /boot
tmpfs                 tmpfs     479M     0  479M   0% /run/user/0
/dev/mapper/rhel-data ext4      2.0G  1.1G  804M  57% /data
I have a single volume group "rhel", which you can check using the vgs command:
[root@centos-8 ~]# vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  rhel   2   3   0 wz--n- 24.49g <13.56g
Below are the available logical volumes.
[root@centos-8 ~]# lvs
  LV   VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data rhel -wi-ao----   2.00g
  root rhel -wi-ao----   8.00g
  swap rhel -wi-ao---- 956.00m
List of installed kernels:
[root@centos-8 ~]# rpm -qa | grep kernel
kernel-tools-4.18.0-80.el8.x86_64
kernel-tools-libs-4.18.0-80.el8.x86_64
kernel-4.18.0-80.el8.x86_64
kernel-core-4.18.0-80.el8.x86_64
kernel-modules-4.18.0-80.el8.x86_64

[root@centos-8 ~]# uname -r
4.18.0-80.el8.x86_64
Step 1: Create LVM Snapshot Linux
Since we plan to update our RHEL/CentOS 7/8 Linux node and apply security patches, we will create an LVM snapshot of every available logical volume and also back up the boot partition. On this node I have two logical volumes for which to perform Linux LVM snapshot backup and restore:
/dev/mapper/rhel-root  <-- root logical volume
/dev/mapper/rhel-data  <-- data logical volume
Apart from this I have a /boot partition, which is a standard partition:
/dev/sda1 <-- boot partition
I will create an LVM snapshot (data_snap) for the data logical volume:
[root@centos-8 ~]# lvcreate --size 1G --snapshot --name data_snap /dev/mapper/rhel-data
Logical volume "data_snap" created.
Similarly I will create an LVM snapshot (root_snap) for the root file system:
[root@centos-8 ~]# lvcreate --size 2G --snapshot --name root_snap /dev/mapper/rhel-root
Logical volume "root_snap" created.
So both LVM snapshots required for the Linux LVM snapshot backup and restore were created successfully.
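As an alternative to the fixed --size values above, lvcreate also lets you size a snapshot relative to its origin using the -l option with the %ORIGIN suffix. The sketch below uses the same volume names and assumes the volume group has enough free extents; sizing the snapshot at 100% of the origin follows the recommendation discussed later in this article, since such a snapshot can never fill up:

# Size the snapshot as a percentage of the origin LV instead of a fixed --size
lvcreate --snapshot -l 100%ORIGIN --name data_snap /dev/rhel/data
lvcreate --snapshot -l 100%ORIGIN --name root_snap /dev/rhel/root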
Step 2: Check LVM Snapshot Metadata and Allocation size
With these snapshots we have used around ~3GB of the available space, which you can verify with the vgs command under the "VFree" column:
[root@centos-8 ~]# vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  rhel   2   5   2 wz--n- 24.49g <10.56g
But if you look closely, only 0.01 - 0.02% of these snapshot volumes is actually in use:
[root@centos-8 ~]# lvs
  LV        VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data      rhel owi-aos---   2.00g
  data_snap rhel swi-a-s---   1.00g      data   0.01
  root      rhel owi-aos---   8.00g
  root_snap rhel swi-a-s---   2.00g      root   0.02
  swap      rhel -wi-ao---- 956.00m
With lvdisplay you can also see that only 0.01 - 0.02% is allocated to the snapshots:
[root@centos-8 ~]# lvdisplay /dev/rhel/root_snap | grep "Allocated to snapshot"
  Allocated to snapshot  0.02%

[root@centos-8 ~]# lvdisplay /dev/rhel/data_snap | grep "Allocated to snapshot"
  Allocated to snapshot  0.01%
You can use dmsetup to check the snapshot metadata sectors:
# dmsetup status
rhel-root_snap-cow: 0 4194304 linear
rhel-root_snap: 0 28442624 snapshot 534664/4194304 2096
rhel-swap: 0 1957888 linear
rhel-root: 0 28442624 snapshot-origin
rhel-data: 0 4194304 linear
rhel-data: 0 4194304 snapshot-origin
rhel-data_snap-cow: 0 2097152 linear
rhel-data_snap: 0 4194304 snapshot 24/2097152 16
rhel-root-real: 0 28442624 linear
To check snapshot metadata of a single snapshot volume:
[root@centos-8 ~]# dmsetup status rhel-data_snap
0 4194304 snapshot 24/2097152 16
Here the last three values represent:
<sectors_allocated>/<total_sectors> <metadata_sectors>
So currently 24 out of 2097152 COW sectors are allocated and there are 16 metadata sectors.
We will learn more about these sector values in the later sections of this article.
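As a quick way to read those numbers, the following small sketch (plain bash using only dmsetup and awk, assuming the snapshot status format shown above) computes the percentage of COW sectors currently allocated for a given snapshot device:

#!/bin/bash
# Usage: ./snap_usage.sh rhel-data_snap   (device-mapper name of the snapshot)
SNAP="$1"
# dmsetup prints: <start> <length> snapshot <allocated>/<total> <metadata_sectors>
dmsetup status "$SNAP" | awk '{
    split($4, a, "/");                        # a[1]=allocated, a[2]=total COW sectors
    printf "allocated=%s total=%s metadata=%s used=%.2f%%\n",
           a[1], a[2], $5, (a[1] / a[2]) * 100;
}'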
Step 3: Backup boot partition (Optional)
- If you plan to perform Linux LVM snapshot backup and restore only for the data partition, then you can ignore this step.
- This step is required only if you plan to modify the /boot partition content, for example by installing a new kernel.
- Since /boot is a standard partition and not a logical volume, creating an LVM snapshot of the boot partition is not possible.
- Hence we can use any traditional method to take a backup of the /boot partition. For this example we will use tar to back up the boot partition:
[root@centos-8 ~]# cd /boot/
I will take the backup using tar (You can also use any other preferred backup tool)
[root@centos-8 boot]# tar -czvf /tmp/boot_backup.tgz *
Verify your backup file of boot partition:
[root@centos-8 boot]# ls -l /tmp/boot_backup.tgz
-rw-r--r-- 1 root root 126149595 Mar 29 12:17 /tmp/boot_backup.tgz
As a precaution it is recommended that you copy this backup file to a different server.
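For example, you could push the archive to another host with scp or rsync; in the sketch below the destination host backup-server and the target path are just placeholders:

# Copy the boot backup off the node (destination host/path are examples)
scp /tmp/boot_backup.tgz root@backup-server:/backups/centos-8/

# or, using rsync
rsync -av /tmp/boot_backup.tgz root@backup-server:/backups/centos-8/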
Step 4: Mount LVM snapshot
With LVM2, the snapshots are read/write so you can mount LVM snapshot and perform read/write operation on the snapshot volume.
Below are our two snapshot logical volumes
/dev/rhel/data_snap
/dev/rhel/root_snap
Mount the LVM snapshot of the root file system on a temporary mount point (/mnt):
[root@centos-8 ~]# mount /dev/rhel/root_snap /mnt/
After you mount the LVM snapshot, verify the content of the root snapshot mounted on /mnt:
[root@centos-8 ~]# ls -l /mnt
total 88
lrwxrwxrwx. 1 root root 7 May 11 2019 bin -> usr/bin
drwxr-xr-x. 2 root root 4096 Nov 13 11:23 boot
drwxr-xr-x 2 root root 4096 Mar 29 09:45 data
drwxr-xr-x. 2 root root 4096 Mar 7 18:33 dev
drwxr-xr-x. 105 root root 12288 Mar 29 11:19 etc
drwxr-xr-x. 5 root root 4096 Mar 28 12:14 home
lrwxrwxrwx. 1 root root 7 May 11 2019 lib -> usr/lib
lrwxrwxrwx. 1 root root 9 May 11 2019 lib64 -> usr/lib64
Similarly, now unmount /mnt and then mount the LVM snapshot of the data logical volume on /mnt:
[root@centos-8 ~]# umount /mnt
Using below command we mount LVM snapshot volume for data partition:
[root@centos-8 ~]# mount /dev/rhel/data_snap /mnt
Verify the content of data LVM snapshot:
[root@centos-8 ~]# ls -l /mnt/
total 1048600
-rw-r--r-- 1 root root 536870912 Mar 29 10:03 dummy_file_1
-rw-r--r-- 1 root root 536870912 Mar 29 10:04 dummy_file_2
-rw-r--r-- 1 root root 0 Mar 29 10:51 file
drwx------ 2 root root 16384 Mar 29 09:45 lost+found
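Because the mounted snapshot is a frozen, point-in-time view of the data volume, it is also a convenient source for a traditional backup while the origin stays in use. A minimal sketch (the archive path is just an example):

# Take a consistent tar backup from the mounted snapshot instead of the live volume
tar -czvf /tmp/data_snapshot_backup.tgz -C /mnt .

# When done, unmount the snapshot again
umount /mnt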
With RHEL 7.7 and later releases you can use the BOOM utility to boot your RHEL/CentOS node from an LVM snapshot. So instead of mounting the LVM snapshot and then working on these logical volumes, you can boot from the LVM snapshot using BOOM and then perform any read/write operations.
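For reference, with the boom-boot package a boot entry for the root snapshot can typically be created along these lines. This is only a rough sketch; verify the exact syntax with boom --help on your release and consult the Red Hat documentation for booting from a snapshot of the root volume:

# Create a boot entry that boots from the root LV snapshot (syntax may vary per release)
boom create --title "Before security patching" --rootlv rhel/root_snap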
Step 5: Using source logical volume with snapshots
After creating the LVM snapshots for Linux LVM snapshot backup and restore, you should make sure that the snapshot allocation stays below 95-98%. If the snapshot allocation reaches 100%, the snapshot volume becomes invalid (corrupted).
Perform system updates, upgrades or patching
Next I will perform some security updates and apply a vulnerability patch on this CentOS 8 server. After applying the patch I now have a newer kernel installed on the server:
[root@centos-8 ~]# uname -r
4.18.0-147.5.1.el8_1.x86_64
List of installed kernels:
[root@centos-8 ~]# rpm -qa | grep kernel
kernel-tools-4.18.0-80.el8.x86_64
kernel-modules-4.18.0-147.5.1.el8_1.x86_64
kernel-tools-libs-4.18.0-80.el8.x86_64
kernel-4.18.0-80.el8.x86_64
kernel-core-4.18.0-147.5.1.el8_1.x86_64
kernel-4.18.0-147.5.1.el8_1.x86_64
kernel-core-4.18.0-80.el8.x86_64
kernel-modules-4.18.0-80.el8.x86_64
Recommendations to prevent Linux LVM snapshot corruption
- If you noticed, my source data logical volume is 2GB while my data snapshot volume is only 1GB.
- So the snapshot is much smaller than the source LV, even though the source data logical volume already contains 1GB of data.
- In such a situation there is a good chance of the snapshot logical volume getting corrupted if you keep writing data to the source logical volume without merging the snapshot.
- So it is strongly recommended to keep the snapshot size the same as the source logical volume to minimise the risk of snapshot volume corruption.
For example:
In Linux LVM snapshot backup and restore, when we take an LVM snapshot the initial sector values are very small. As you can see here, 24 out of 2097152 COW sectors are used and there are 16 metadata sectors:
[root@centos-8 ~]# dmsetup status rhel-data_snap
0 4194304 snapshot 24/2097152 16
If we add content to the source data logical volume, i.e. /dev/mapper/rhel-data mounted on /data, after taking the snapshot, these sector values will continue to increase.
For example, here I have added a 512MB file to /data:
[root@centos-8 ~]# dd if=/dev/zero of=/data/dummy_file3 bs=512M count=1 oflag=dsync
1+0 records in
1+0 records out
536870912 bytes (537 MB, 512 MiB) copied, 26.2951 s, 20.4 MB/s
As you see the snapshot allocated size has increased to 50.20%
[root@centos-8 ~]# lvs rhel/data_snap
  LV        VG   Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data_snap rhel swi-a-s--- 1.00g      data   50.20
Accordingly, the metadata sector count of data_snap has increased to 4112:
[root@centos-8 ~]# dmsetup status rhel-data_snap
0 4194304 snapshot 1052904/2097152 4112
If we continue to add content to the source logical volume while the snapshot volume is smaller than the source logical volume (as in my case), there is a very high probability that the snapshot logical volume will become INVALID or INACTIVE (corrupted).
For the sake of demonstration I have intentionally written more content to my source volume (/dev/mapper/rhel-data) after taking the snapshot, and now my snapshot volume is reported as Invalid by dmsetup and INACTIVE by lvdisplay, i.e. corrupted:
[root@centos-8 ~]# lvs
  LV        VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data      rhel owi-aos---   2.00g
  data_snap rhel swi-I-s--- 500.00m      data   100.00
  root      rhel -wi-ao----  13.56g
  swap      rhel -wi-ao---- 956.00m

[root@centos-8 ~]# dmsetup status rhel-data_snap
0 4194304 snapshot Invalid

[root@centos-8 ~]# lvdisplay /dev/rhel/data_snap | grep "LV snapshot status"
  LV snapshot status     INACTIVE destination for data
Now you will not be able to mount the LVM snapshot volume to recover your data:
[root@centos-8 ~]# mount /dev/rhel/data_snap /mnt
mount: /mnt: can't read superblock on /dev/mapper/rhel-data_snap.
So in this case it is better to just remove the snapshot with lvremove:
[root@centos-8 ~]# lvremove /dev/rhel/data_snap
Do you really want to remove active logical volume rhel/data_snap? [y/n]: y
  Logical volume "data_snap" successfully removed
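To catch a filling snapshot before it is invalidated, you can periodically check the Data% column of lvs. The loop below is a simple sketch; the 90% threshold and 30 second interval are arbitrary choices, and the volume name matches this lab:

#!/bin/bash
# Warn when rhel/data_snap usage crosses 90% (adjust the name, threshold and interval)
while true; do
    USED=$(lvs --noheadings -o data_percent rhel/data_snap | tr -d ' ')
    echo "rhel/data_snap usage: ${USED}%"
    # Compare as a floating point number using awk and print a warning above the threshold
    awk -v u="$USED" 'BEGIN { if (u + 0 > 90) print "WARNING: snapshot is almost full!" }'
    sleep 30
done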
How to automatically extend Linux LVM snapshot size?
To avoid snapshot corruption with Linux LVM snapshot backup and restore, it is also possible to configure snapshot_autoextend_threshold and snapshot_autoextend_percent in lvm.conf.
With these settings, as soon as the LVM snapshot usage crosses the configured threshold, the snapshot is automatically extended by the configured percentage.
You can configure these values in /etc/lvm/lvm.conf
# Configuration option activation/snapshot_autoextend_threshold.
# Auto-extend a snapshot when its usage exceeds this percent.
# Setting this to 100 disables automatic extension.
# The minimum value is 50 (a smaller value is treated as 50.)
# Also see snapshot_autoextend_percent.
# Automatic extension requires dmeventd to be monitoring the LV.
#
# Example
# Using 70% autoextend threshold and 20% autoextend size, when a 1G
# snapshot exceeds 700M, it is extended to 1.2G, and when it exceeds
# 840M, it is extended to 1.44G:
# snapshot_autoextend_threshold = 70
snapshot_autoextend_threshold = 70    <-- I am using 70 as the threshold value

# Configuration option activation/snapshot_autoextend_percent.
# Auto-extending a snapshot adds this percent extra space.
# The amount of additional space added to a snapshot is this
# percent of its current size.
#
# Example
# Using 70% autoextend threshold and 20% autoextend size, when a 1G
# snapshot exceeds 700M, it is extended to 1.2G, and when it exceeds
# 840M, it is extended to 1.44G:
# snapshot_autoextend_percent = 20
snapshot_autoextend_percent = 20      <-- I am using 20 as the percent value
lvm.conf already gives a detailed description of these options. After making these changes, if I add more content to my data volume (/dev/mapper/rhel-data), you can see that the LVM snapshot size has automatically increased to 1.5GB:
[root@centos-8 ~]# lvs
  LV        VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data      rhel owi-aos---   2.00g
  data_snap rhel swi-a-s---   1.50g      data   66.95
  root      rhel -wi-ao----  13.56g
  swap      rhel -wi-ao---- 956.00m
If I continue to add more content to /dev/mapper/rhel-data, the LVM snapshot size will increase further:
[root@centos-8 ~]# lvs
  LV        VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data      rhel owi-a-s---   2.00g
  data_snap rhel swi-a-s---   2.01g      data   69.12
  root      rhel owi-aos---  13.56g
  root_snap rhel swi-a-s---   2.00g      root   12.71
  swap      rhel -wi-ao---- 956.00m
The Linux LVM snapshot size will continue to increase as long as you have free space in your Volume Group.
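Note that, as the lvm.conf comments above state, automatic extension only works when dmeventd is monitoring the snapshot volume. A quick way to check and enable this with standard LVM commands, for example:

# Check whether the snapshot is currently monitored by dmeventd
lvs -o lv_name,seg_monitor rhel/data_snap

# Enable monitoring for the snapshot if the column shows it is not monitored
lvchange --monitor y rhel/data_snap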
Step 6: Perform LVM Restore Snapshot for data partition
Since we have written some data to our root logical volume with the new security updates, the snapshot usage has increased along with the changes on the source logical volume:
[root@centos-8 ~]# lvs
  LV        VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data      rhel owi-a-s---   2.00g
  data_snap rhel swi-a-s---   1.00g      data   0.01
  root      rhel owi-aos---   8.00g
  root_snap rhel swi-a-s---   2.00g      root   20.49
  swap      rhel -wi-ao---- 956.00m
To merge (restore) an LVM snapshot you must unmount the respective logical volume. You can easily unmount the data partition, but you cannot unmount a primary partition such as the root file system at runtime.
First we will restore the snapshot of the data logical volume. To achieve this I will unmount the data partition before performing the snapshot merge:
[root@centos-8 ~]# umount /data
If you get an error such as "umount: /data: target is busy", it means that the data partition is still being used by some process. You can use lsof <partition> to get a list of processes using the partition and make sure it is no longer in use before unmounting it.
Next, merge the snapshot back into its origin using lvconvert:
[root@centos-8 ~]# lvconvert --merge /dev/rhel/data_snap
Merging of volume rhel/data_snap started.
rhel/data: Merged: 31.04%
rhel/data: Merged: 100.00%
You can also use -b or --background with lvconvert to perform the snapshot merge in the background. In that case you can use dmsetup status to monitor the LVM snapshot merge progress.
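For example, a background merge can be followed with a small loop like the one below. This is only a sketch: while the merge is running, the origin device rhel-data temporarily uses the snapshot-merge device-mapper target and its allocated sector count shrinks; once that target disappears the merge is done. Alternatively, simply re-run lvs periodically and wait until the snapshot entry disappears.

# Poll the origin's device-mapper status while a background merge is running (Ctrl+C to stop)
while dmsetup status rhel-data 2>/dev/null | grep -q 'snapshot-merge'; do
    dmsetup status rhel-data      # allocated sectors shrink as the merge proceeds
    sleep 2
done
echo "Merge into rhel/data appears to be complete"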
What is capital "O" in the Attr section of lvs command?
If you check the lvs output immediately after merging the snapshot, you will observe a capital "O" under the "Attr" column of the lvs command. The capital "O" means the snapshot is still merging with the origin:
[root@centos-8 ~]# lvs
  LV        VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data      rhel Owi-a-s---   2.00g
  root      rhel owi-aos---   8.00g
  root_snap rhel swi-a-s---   2.00g      root   20.82
  swap      rhel -wi-ao---- 956.00m
After some time, once the merge activity is complete, you can check that the first character of the Attr field for the data volume is back to "-":
[root@centos-8 ~]# lvs
  LV        VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data      rhel -wi-a-----   2.00g
  root      rhel owi-aos---   8.00g
  root_snap rhel swi-a-s---   2.00g      root   20.82
  swap      rhel -wi-ao---- 956.00m
Step 7: Perform LVM restore snapshot for root file system
Since we cannot unmount the root file system at runtime, we will just initiate the LVM snapshot restore command now. The actual merge will happen during the next reboot, when the root file system is re-activated.
[root@centos-8 ~]# lvconvert --merge /dev/rhel/root_snap
  Delaying merge since origin is open.
  Merging of snapshot rhel/root_snap will occur on next activation of rhel/root.
As you can see we get the message "Delaying merge since origin is open", so the snapshot merge for root will happen after the next reboot.
But before we reboot, we must also restore the boot partition content:
Step 8: Restore boot partition (Optional)
If your intention is Linux LVM snapshot backup and restore of the data partition only, then you can ignore this step and proceed with the reboot. Earlier we took a backup of /boot with the tar utility as /tmp/boot_backup.tgz. Before we restore the boot partition, first we must delete the existing content of /boot:
[root@centos-8 ~]# rm -rf /boot/*
Make sure there are no files or directories inside /boot
[root@centos-8 ~]# cd /boot/
[root@centos-8 boot]# ls -l
total 0
Next extract the content of boot_backup.tgz under /boot to restore the boot partition:
[root@centos-8 boot]# tar -xzvf /tmp/boot_backup.tgz
Verify the content of the /boot partition:
[root@centos-8 boot]# ls -l
total 248276
-rw-r--r-- 1 root root 126101631 Mar 29 13:58 config-4.18.0-80.el8.x86_64
drwxr-xr-x 3 root root 4096 Nov 13 11:25 efi
drwxr-xr-x 2 root root 4096 Nov 13 11:25 extlinux
drwx------ 4 root root 4096 Nov 13 11:37 grub2
-rw------- 1 root root 65326006 Nov 13 11:29 initramfs-0-rescue-9836eca1fe1c4fa9b693aa2fb3d3137c.img
-rw------- 1 root root 26049862 Nov 13 11:31 initramfs-4.18.0-80.el8.x86_64.img
-rw------- 1 root root 17224369 Jan 26 16:18 initramfs-4.18.0-80.el8.x86_64kdump.img
drwxr-xr-x 3 root root 4096 Nov 13 11:27 loader
drwx------ 2 root root 4096 Nov 13 11:23 lost+found
-rw------- 1 root root 3751920 Jun 4 2019 System.map-4.18.0-80.el8.x86_64
-rwxr-xr-x 1 root root 7872760 Nov 13 11:28 vmlinuz-0-rescue-9836eca1fe1c4fa9b693aa2fb3d3137c
-rwxr-xr-x 1 root root 7872760 Jun 4 2019 vmlinuz-4.18.0-80.el8.x86_64
Next we need to re-install GRUB2 on our primary disk. You can check the primary disk using below command:
[root@centos-8 ~]# lvs -a -o+devices
  LV   VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  data rhel owi-aos---   2.00g                                                     /dev/sdb(0)
  root rhel -wi-ao----  13.56g                                                     /dev/sda2(239)
  swap rhel -wi-ao---- 956.00m                                                     /dev/sda2(0)
As you can see, /dev/sda is the disk where our boot partition resides, so we will install GRUB2 on /dev/sda. For systems with legacy BIOS and GRUB2, install GRUB2 on your primary hard disk using grub2-install:
[root@centos-8 boot]# grub2-install /dev/sda
Installing for i386-pc platform.
Installation finished. No error reported.
For RHEL 6 systems with UEFI boot setup:
# efibootmgr -c -w -L "Backup Red Hat Enterprise Linux" -d /dev/vda -p 1 -l "\EFI\redhat\grub.efi"
For RHEL 7 systems with UEFI boot setup:
# efibootmgr -c -w -L "Red Hat Enterprise Linux" -d /dev/vda -p 1 -l "\EFI\redhat\grubx64.efi"
To verify current UEFI boot entries:
# efibootmgr -v
We are all done with Linux LVM Snapshot restore. Now reboot your Linux node to complete the root file system merge.
[root@centos-8 boot]# reboot
Post reboot, verify the state of your Linux node to make sure the Linux LVM snapshot backup and restore was successful:
[root@centos-8 ~]# uname -r
4.18.0-80.el8.x86_64

[root@centos-8 ~]# rpm -qa | grep kernel
kernel-tools-4.18.0-80.el8.x86_64
kernel-tools-libs-4.18.0-80.el8.x86_64
kernel-4.18.0-80.el8.x86_64
kernel-core-4.18.0-80.el8.x86_64
kernel-modules-4.18.0-80.el8.x86_64
So now our Linux node has come up with the old kernel which was active at the time the snapshot was taken.
Step 9: LVM Remove Snapshot
If you wish to keep the software update changes instead of rolling back, you can simply remove the LVM snapshot. To remove an LVM snapshot use lvremove:
[root@centos-8 ~]# lvremove /dev/rhel/data_snap
Do you really want to remove active logical volume rhel/data_snap? [y/n]: y
  Logical volume "data_snap" successfully removed
Lastly, I hope the steps from this article for Linux LVM snapshot backup and restore on RHEL/CentOS 7 and 8 were helpful. Let me know your suggestions and feedback using the comment section.
References:
Linux LVM Snapshot HOW TO
Archiving Data with Snapshots in LVM2
How can I deploy a system and use LVM snapshot/merge to be able to restore an earlier state of the root filesystem?
How to determine when a snapshot merging is complete?