5 easy steps to recover LVM2 partition, PV, VG, LVM metadata in Linux


Written By - admin

In this article we will learn

  • How to recover LVM2 partition (Restore deleted LVM)
  • How to restore PV (Physical Volume) in Linux
  • How to restore VG (Volume Group) in Linux
  • How to restore LVM metadata in Linux

 

Earlier we had a situation wherein the LVM metadata on one of our CentOS 8 nodes was missing. Due to this, all the logical volumes, volume groups and physical volumes mapped to that LVM metadata were not visible on the Linux server, so we had to restore the LVM metadata from backup using vgcfgrestore. I will share the steps to reproduce the scenario, i.e. manually delete the LVM metadata, and then the steps to recover the LVM2 partition, restore the PV, restore the VG and restore the LVM metadata in Linux using vgcfgrestore.

vgcfgbackup can be used to manually create LVM metadata backups; these backups are very helpful and can also be used for LVM disaster recovery.
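For reference, taking a manual metadata backup is a single command. Below is a minimal sketch using the test_vg volume group that we create later in this article; the exact message may differ with your LVM2 version:

[root@centos-8 ~]# vgcfgbackup test_vg
  Volume group "test_vg" successfully backed up.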

 

Prepare Lab Environment

Before we go ahead with the steps to recover an LVM2 partition in Linux, we must first prepare a lab environment with logical volumes. Next we will manually delete the LVM metadata to reproduce the issue scenario.

I have created a Virtual Machine with CentOS 8 using Oracle VirtualBox, which is installed on a Linux server. Next I added an additional virtual disk to this VM, which is mapped to /dev/sdb.
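Before creating the physical volume, you can confirm that the new disk is visible to the operating system. This is just a quick sanity check; the size shown here reflects my lab disk and will differ in your setup:

[root@centos-8 ~]# lsblk /dev/sdb
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb    8:16   0   8G  0 disk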

 

Create Physical Volume

The first step is to create a physical volume using pvcreate:

[root@centos-8 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.

 


Create Volume Group

Next, create a new Volume Group; we will name this VG test_vg.

[root@centos-8 ~]# vgcreate test_vg /dev/sdb
  Volume group "test_vg" successfully created

List the available volume groups using vgs. I currently have two volume groups, where the rhel volume group contains my system LVM2 partitions:

[root@centos-8 ~]# vgs
  VG      #PV #LV #SN Attr   VSize   VFree
  rhel      1   2   0 wz--n- <14.50g     0
  test_vg   1   0   0 wz--n-  <8.00g <8.00g  <-- new VG

 

Create Logical Volume

Create a new logical volume test_lv1 under our new volume group test_vg

[root@centos-8 ~]# lvcreate -L 1G -n test_lv1 test_vg
  Logical volume "test_lv1" created.

 

Create File System on the Logical Volume

Create an ext4 file system on this new logical volume:

[root@centos-8 ~]# mkfs.ext4 /dev/mapper/test_vg-test_lv1
mke2fs 1.44.6 (5-Mar-2019)
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: c2d6eff5-f32f-40d4-88a5-a4ffd82ff45a
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

List the available volume groups along with the mapped storage devices. Here, as you can see, test_vg is mapped to /dev/sdb:

[root@centos-8 ~]# vgs -o+devices
  VG      #PV #LV #SN Attr   VSize   VFree  Devices
  rhel      1   2   0 wz--n- <14.50g     0  /dev/sda2(0)
  rhel      1   2   0 wz--n- <14.50g     0  /dev/sda2(239)
  test_vg   1   1   0 wz--n-  <8.00g <7.00g /dev/sdb(0)

Similarly, you can see that the new logical volume test_lv1 is mapped to the /dev/sdb device:

[root@centos-8 ~]# lvs -o+devices
  LV       VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  root     rhel    -wi-ao----  13.56g                                                     /dev/sda2(239)
  swap     rhel    -wi-ao---- 956.00m                                                     /dev/sda2(0)
  test_lv1 test_vg -wi-a-----   1.00g                                                     /dev/sdb(0)  <-- new Logical Volume

 

Add some data to the Logical Volume

We will put some data into our logical volume to make sure there is no data loss after we recover the LVM2 partition, restore the PV and restore the VG using the LVM metadata in the next steps.

[root@centos-8 ~]# mkdir /test
[root@centos-8 ~]# mount /dev/mapper/test_vg-test_lv1 /test/

Create a dummy file and note down the md5sum value of this file

[root@centos-8 ~]# touch /test/file
[root@centos-8 ~]# md5sum /test/file
d41d8cd98f00b204e9800998ecf8427e  /test/file

Next, unmount the logical volume:

[root@centos-8 ~]# umount /test/

 

How to manually delete LVM metadata in Linux?

To manually delete LVM metadata in Linux you can use various tools such as wipefs, dd, etc. wipefs can erase file system, raid or partition-table signatures (magic strings) from the specified device to make the signatures invisible to libblkid. wipefs does not erase the file system itself nor any other data from the device.
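Incidentally, running wipefs without any options only lists the signatures detected on a device, which is a safe way to see what would be erased before actually erasing anything. Below is a sample from my lab; the offset, UUID and exact column layout will differ depending on your disk and util-linux version:

[root@centos-8 ~]# wipefs /dev/sdb
DEVICE OFFSET TYPE        UUID                                   LABEL
sdb    0x218  LVM2_member SBJi2o-jG2O-TfWb-3pyQ-Fh6k-fK6A-AslOg1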

WARNING:

Execute this command with caution; it is not recommended in production environments as it will delete all the file system signatures on the device.

In this example we will use wipefs to delete the LVM metadata from the /dev/sdb device. Since the device in question, /dev/sdb, is in use by a Volume Group, we have to use -f to forcefully wipe the LVM metadata:

[root@centos-8 ~]# wipefs --all --backup -f /dev/sdb
/dev/sdb: 8 bytes were erased at offset 0x00000218 (LVM2_member): 4c 56 4d 32 20 30 30 31

We have used --backup so that, before deleting the LVM metadata, wipefs creates a backup of the erased signature (here the LVM2_member signature) under the home folder of the user executing the command. Since we ran the command as root, our signature backup is stored under the root user's home folder.

[root@centos-8 ~]# ls -l /root/wipefs-sdb-0x00000218.bak
-rw------- 1 root root 8 Apr  5 13:45 /root/wipefs-sdb-0x00000218.bak
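As a side note, if you only wanted to undo the wipefs operation itself, the wipefs man page describes writing the saved signature back with dd. Below is a sketch for this particular backup file, assuming the offset encoded in its file name; we will not use this here, since the next steps restore everything through LVM instead:

[root@centos-8 ~]# dd if=/root/wipefs-sdb-0x00000218.bak of=/dev/sdb seek=$((0x00000218)) bs=1 conv=notrunc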

Next you can verify that the logical volumes, volume groups and physical volume that were part of /dev/sdb are missing from the Linux server:

[root@centos-8 ~]# lvs -o+devices
  LV   VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  root rhel -wi-ao----  13.56g                                                     /dev/sda2(239)
  swap rhel -wi-ao---- 956.00m                                                     /dev/sda2(0)  <-- our logical volume is no longer visible
[root@centos-8 ~]# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  rhel   1   2   0 wz--n- <14.50g    0  <-- test_vg is no longer visible
[root@centos-8 ~]# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda2  rhel lvm2 a--  <14.50g    0  <-- /dev/sdb is no longer visible

Similarly, with lsblk we can verify that there are no LVM2 partitions under /dev/sdb:

[root@centos-8 ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda             8:0    0   15G  0 disk
├─sda1          8:1    0  512M  0 part /boot
└─sda2          8:2    0 14.5G  0 part
  ├─rhel-root 253:0    0 13.6G  0 lvm  /
  └─rhel-swap 253:1    0  956M  0 lvm  [SWAP]
sdb             8:16   0    8G  0 disk
sr0            11:0    1 1024M  0 rom
sr1            11:1    1 1024M  0 rom

 

Step 1: List backup files to restore LVM metadata in Linux

  • LVM metadata backups and archives are automatically created whenever there is a configuration change for a volume group or logical volume, unless this feature is disabled in the lvm.conf file.
  • By default, the metadata backup is stored in the /etc/lvm/backup directory and the metadata archives are stored in the /etc/lvm/archive directory.
  • How long the metadata archives stored in the /etc/lvm/archive directory are kept, and how many archive files are kept, is determined by parameters you can set in the lvm.conf file (see the example after this list).
  • A daily system backup should include the contents of the /etc/lvm directory.
  • You can manually back up the LVM metadata to the /etc/lvm/backup directory with the vgcfgbackup command.
  • You can restore LVM metadata with the vgcfgrestore command.
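These retention settings live in the backup section of /etc/lvm/lvm.conf. A quick way to check the values on your own system is the lvmconfig command; the values below are the defaults from my CentOS 8 lab and may differ on yours:

[root@centos-8 ~]# lvmconfig --type full backup/archive_dir backup/retain_min backup/retain_days
archive_dir="/etc/lvm/archive"
retain_min=10
retain_days=30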

To list the available backups of the LVM metadata, use vgcfgrestore --list. Currently we have three backup stages, where the last backup was taken after we created the test_lv1 logical volume:

[root@centos-8 ~]# vgcfgrestore --list test_vg

  File:         /etc/lvm/archive/test_vg_00000-1327770182.vg
  VG name:      test_vg
  Description:  Created *before* executing 'vgcreate test_vg /dev/sdb'
  Backup Time:  Sun Apr  5 13:43:26 2020


  File:         /etc/lvm/archive/test_vg_00001-1359568949.vg
  VG name:      test_vg
  Description:  Created *before* executing 'lvcreate -L 1G -n test_lv1 test_vg'
  Backup Time:  Sun Apr  5 13:44:02 2020


  File:         /etc/lvm/backup/test_vg
  VG name:      test_vg
  Description:  Created *after* executing 'lvcreate -L 1G -n test_lv1 test_vg'
  Backup Time:  Sun Apr  5 13:44:02 2020

So we will use the last backup, i.e. /etc/lvm/backup/test_vg, to restore the LVM metadata up to the stage where test_lv1 was created.
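If you ever need to double check which stage a backup file belongs to before restoring, the description field inside the file records the command that triggered it. For example, for the backup file we are going to use:

[root@centos-8 ~]# grep description /etc/lvm/backup/test_vg
description = "Created *after* executing 'lvcreate -L 1G -n test_lv1 test_vg'"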

 

Step 2: Restore PV (Physical Volume) in Linux

IMPORTANT NOTE:

In my case the physical volume was also missing, hence I am creating a new Physical Volume. If your Physical Volume is present and only the Volume Groups and Logical Volumes are missing, you can skip this step.
You must perform proper pre-checks and take a backup of your file system before executing these steps in a production environment to prevent any data loss.
  • To restore the PV, it is very important that you create the new PV with the same UUID it had earlier, or restoring the VG and recovering the LVM2 partition will fail in the next steps.
  • You can get the UUID of your Physical Volume from the backup file "/etc/lvm/backup/test_vg".
  • Below is a sample of the physical_volumes section from the backup file. If you have more than one physical volume, you need to search for the missing PV's UUID (see the grep example after this sample).
  • In my case SBJi2o-jG2O-TfWb-3pyQ-Fh6k-fK6A-AslOg1 is the UUID of the missing PV, so I will use this to restore the PV in Linux.
        physical_volumes {

                pv0 {
                        id = "SBJi2o-jG2O-TfWb-3pyQ-Fh6k-fK6A-AslOg1"
                        device = "/dev/sdb"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 16777216     # 8 Gigabytes
                        pe_start = 2048
                        pe_count = 2047 # 7.99609 Gigabytes
                }
        }
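If your backup file contains many physical volumes, one quick way to pull out just the PV entries with their UUIDs is a simple grep against the backup file. This is only a convenience sketch for the file used in this lab; whitespace in your output may differ:

[root@centos-8 ~]# grep -A 2 'pv[0-9] {' /etc/lvm/backup/test_vg
                pv0 {
                        id = "SBJi2o-jG2O-TfWb-3pyQ-Fh6k-fK6A-AslOg1"
                        device = "/dev/sdb"     # Hint only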

Next, it is important that you test the physical volume restore first. We use --test mode to verify the operation; with --test the command will not update the LVM metadata. This is implemented by disabling all metadata writing while still returning success to the calling function.

Here I have provided the same UUID of /dev/sdb that we collected earlier, followed by the backup file we want to use to restore the PV, and then the device name on which we will perform pvcreate. The pvcreate command overwrites only the LVM metadata areas and does not affect the existing data areas.

[root@centos-8 ~]# pvcreate --test --uuid "SBJi2o-jG2O-TfWb-3pyQ-Fh6k-fK6A-AslOg1" --restorefile /etc/lvm/backup/test_vg /dev/sdb
  TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
  WARNING: Couldn't find device with uuid SBJi2o-jG2O-TfWb-3pyQ-Fh6k-fK6A-AslOg1.
  Physical volume "/dev/sdb" successfully created.

With --test mode we know that the command execution is successful, so we will run the same command without --test to actually restore the PV.

[root@centos-8 ~]# pvcreate  --uuid "SBJi2o-jG2O-TfWb-3pyQ-Fh6k-fK6A-AslOg1" --restorefile /etc/lvm/backup/test_vg /dev/sdb
  WARNING: Couldn't find device with uuid SBJi2o-jG2O-TfWb-3pyQ-Fh6k-fK6A-AslOg1.
  Physical volume "/dev/sdb" successfully created.

Next, verify the list of available Physical Volumes:

[root@centos-8 ~]# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda2  rhel lvm2 a--  <14.50g    0
  /dev/sdb        lvm2 ---    8.00g 8.00g  <-- Now /dev/sdb is visible

 

Step 3: Restore VG to recover LVM2 partition

  • After we restore the PV, the next step is to restore the VG, which will further recover the LVM2 partitions and also recover the LVM metadata.
  • Similar to pvcreate, we will execute vgcfgrestore in --test mode to check whether the VG restore would succeed or fail.
  • This command will not update any LVM metadata.
[root@centos-8 ~]# vgcfgrestore --test -f /etc/lvm/backup/test_vg test_vg
  TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
  Restored volume group test_vg.

As the command execution in --test mode was successful, we can now safely execute our command to restore the VG and recover the LVM2 partition in Linux using vgcfgrestore.

[root@centos-8 ~]# vgcfgrestore  -f /etc/lvm/backup/test_vg test_vg
  Restored volume group test_vg.

Using vgs you can check whether the VG restore was successful.

[root@centos-8 ~]# vgs
  VG      #PV #LV #SN Attr   VSize   VFree
  rhel      1   2   0 wz--n- <14.50g     0
  test_vg   1   1   0 wz--n-  <8.00g <7.00g  <-- test_vg is now visible

Next, verify whether you were able to restore the deleted LVM and recover the LVM2 partition using lvs:

[root@centos-8 ~]# lvs
  LV       VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root     rhel    -wi-ao----  13.56g
  swap     rhel    -wi-ao---- 956.00m
  test_lv1 test_vg -wi-------   1.00g  <-- our logical volume is also visible

 

Step 4: Activate the Volume Group

Next, activate the volume group test_vg:

[root@centos-8 ~]# vgchange -ay test_vg
  1 logical volume(s) in volume group "test_vg" now active
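Optionally, you can confirm that the logical volume is now active; the 'a' in the fifth position of the attribute string indicates an active volume. Output below is trimmed to the relevant columns from my lab:

[root@centos-8 ~]# lvs -o lv_name,vg_name,lv_attr test_vg
  LV       VG      Attr
  test_lv1 test_vg -wi-a-----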

 

Step 5: Verify there is no data loss after LVM2 partition recovery

The most crucial part: make sure there was no data loss in the entire process of restoring the PV, VG and LVM metadata and recovering the LVM2 partition.
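As an extra precaution that is not strictly required, you can first run a read-only file system check on the recovered logical volume; with -n, e2fsck answers "no" to all prompts so nothing on the volume is modified. The exact counts in the output will vary:

[root@centos-8 ~]# e2fsck -n /dev/mapper/test_vg-test_lv1
e2fsck 1.44.6 (5-Mar-2019)
/dev/mapper/test_vg-test_lv1: clean, 12/65536 files, 8869/262144 blocks

With that done, mount the logical volume again: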

[root@centos-8 ~]# mount /dev/mapper/test_vg-test_lv1 /test/

If we are able to mount the logical volume, it means our ext4 file system signature is intact and not lost; otherwise the mount would fail.

[root@centos-8 ~]# ls -l /test/
total 16
-rw-r--r-- 1 root root     0 Apr  5 13:45 file
drwx------ 2 root root 16384 Apr  5 13:44 lost+found

Our test file exists and the md5sum matches the value we recorded before deleting the LVM metadata:

[root@centos-8 ~]# md5sum /test/file
d41d8cd98f00b204e9800998ecf8427e  /test/file  <-- same as earlier

So overall, restoring the PV, VG and LVM metadata and recovering the LVM2 partition was successful.

 

Lastly, I hope the steps in this article to recover an LVM2 partition using vgcfgrestore on Linux were helpful. Let me know your suggestions and feedback using the comment section.

 


 



15 thoughts on “5 easy steps to recover LVM2 partition, PV, VG, LVM metadata in Linux”

  1. It works smoothly but after reboot:

    [*     ] (3 of 3) A start job is running for...x2dthird.device (47s / 1min 30s)[   52.625434] hv_balloon: Max. dynamic memory size: 3584 MB
    [ TIME ] Timed out waiting for device dev-mapper-testvg\x2dthird.device.
    [DEPEND] Dependency failed for /third.
    [DEPEND] Dependency failed for Local File Systems.
    [DEPEND] Dependency failed for Mark the need to relabel after reboot.
    [DEPEND] Dependency failed for Migrate local... structure to the new structure.
    [DEPEND] Dependency failed for Relabel all filesystems, if necessary.
    [ TIME ] Timed out waiting for device dev-mapper-testvg\x2dfirst.device.
    # Created by anaconda on Thu Feb 20 13:54:25 2020
    #

    I am working on testvg , lab for your reference.

    [root@centos7 ~]# pvs
      PV         VG     Fmt  Attr PSize   PFree
      /dev/sdd1  vg1    lvm2 a--  <20.00g <2.00g
      /dev/sde   testvg lvm2 a--  <32.00g <7.00g
      /dev/sdf   testvg lvm2 a--  <32.00g <7.00g
    [root@centos7 ~]#
    [root@centos7 ~]#
    [root@centos7 ~]#
    [root@centos7 ~]# vgs
      VG     #PV #LV #SN Attr   VSize   VFree
      testvg   2   3   0 wz--n-  63.99g 13.99g
      vg1      1   3   0 wz--n- <20.00g <2.00g
    [root@centos7 ~]#
    [root@centos7 ~]#
    [root@centos7 ~]# lvs
      LV     VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
      first  testvg -wi------- 10.00g
      second testvg -wi------- 15.00g
      third  testvg -wi------- 25.00g
      lv1    vg1    -wi-ao----  4.00g
      lv2    vg1    -wi-ao----  6.00g
      lv3    vg1    -wi-ao----  8.00g
  2. I commented out the entries to make my machine up. But post-reboot what I saw was really surprising. testvg was not existing anymore. I doubt if this can be trusted in PROD environment.

    [root@centos7 ~]# pvs
      PV         VG  Fmt  Attr PSize   PFree
      /dev/sdd1  vg1 lvm2 a--  <20.00g <2.00g
      /dev/sde       lvm2 ---   32.00g 32.00g
      /dev/sdf       lvm2 ---   32.00g 32.00g
    [root@centos7 ~]#
    [root@centos7 ~]# vgs
      VG  #PV #LV #SN Attr   VSize   VFree
      vg1   1   3   0 wz--n- <20.00g <2.00g
    [root@centos7 ~]#
    [root@centos7 ~]# vgscan
      Reading volume groups from cache.
      Found volume group "vg1" using metadata type lvm2
    [root@centos7 ~]#
    
    [root@centos7 ~]# vgchange -ay testvg
      Volume group "testvg" not found
      Cannot process volume group testvg
    [root@centos7 ~]#
    [root@centos7 ~]# ls /dev/mapper/
    control  vg1-lv1  vg1-lv2  vg1-lv3
    [root@centos7 ~]#
    • I see testvg was created on /dev/sde and /dev/sdf. Are they still attached to the node?
      Can you paste the output of lsblk command

  3. You saved me from a lot of trouble today!!!!

    I have had the same problem. By mistake the lvm and pv was gone. The system was always entering in Ctrl D prompt. Restored with this method here.

    Works great!
    Thanks Dude!

  4. Hi,
    just before going into the weekend I was able to rescue our production environment.
    A guy in the Philippines now is able to sleep!
    And I will enhance our internal documentation with the link to your howto 🙂
    Thank you very much from good old Germany ….

  5. Thanks for writing this! It’s really counter-intuitive that there are no native lvm tools to replace the failed disk in lvm raid and you have to manually edit the files.

