Configure Thin Provision LVM using kickstart with example in CentOS/RHEL 7/8



In this article I will share the steps to configure thin provision LVM using kickstart, with examples for RHEL 7 and RHEL 8 that were also validated on CentOS 7. At the time of writing this article CentOS 8 had not yet been released, but since it is built from the RHEL 8 distribution, the example should work on CentOS 8 as well.

 

What is Thin Provisioned LVM?

In simple terms, "thin provisioning" is a way of managing storage in computers where you tell the system to pretend it has more space than it really does. This can be useful when you want to make sure you have enough space for multiple applications or users without buying a lot of hard drives upfront.

Let's say you have a bookshelf that's only big enough for 10 books, but you promise 20 friends they can each keep a book on it. In reality, you know not all friends will bring their book at the same time, so you can manage with the space you have. Thin provisioning in a computer's storage works similarly.

LVM stands for Logical Volume Management, which is a way of dividing up the space on your hard drives more flexibly than the traditional method of partitions. With LVM, you can create "volumes" that look like regular hard drives to your system, but they can be resized or moved around much more easily.

When you combine thin provisioning with LVM, you get "thin provisioned LVM." This lets you create logical volumes whose combined advertised size is larger than the space actually available on the physical hard drives; real space is only consumed as data is written, so you can provision generously up front and add disks only when usage actually demands it.
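To make that concrete, here is a minimal command-line sketch; the volume group name vg01 and the sizes are assumptions, not taken from the kickstart example below. A 100GiB thin pool backs two thin volumes that each advertise 150GiB, so the combined promised size is 300GiB while only 100GiB physically exists. Blocks are taken from the pool only as data is actually written.

# assumption: a volume group "vg01" already exists with roughly 100GiB free
lvcreate -L 100G --thinpool pool00 vg01           # thin pool backed by 100GiB of real space
lvcreate -V 150G --thin -n data01 vg01/pool00     # thin LV advertising 150GiB
lvcreate -V 150G --thin -n data02 vg01/pool00     # second thin LV; the pool is now over-committed 3:1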

In my older articles I have shared various examples of performing PXE installation using kickstart with different storage and network layouts.

 

Configure Thin Provision LVM

To configure thin provision LVM, a thin pool LV must be created before thin LVs can be created within it.

A thin pool LV is created by combining two standard LVs:

  • a large data LV that will hold blocks for thin LVs, and
  • a metadata LV that will hold metadata.

The metadata tracks which data blocks belong to each thin LV.
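Kickstart hides this two-step construction, but outside the installer the same structure can be built by hand, which makes the data/metadata split visible. A rough sketch, assuming a volume group vg01 with enough free extents (the names and sizes are placeholders):

lvcreate -n pool00 -L 100G vg01        # standard LV that will hold the thin LVs' data blocks
lvcreate -n pool00meta -L 1G vg01      # standard LV that will hold the pool metadata
lvconvert --type thin-pool --poolmetadata vg01/pool00meta vg01/pool00   # combine them into a thin pool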

 

Sample Kickstart File Example

In my example I have a single disk "sda" on which I plan to configure thin provision LVM using a kickstart based installation. The snippet below contains only the storage-related section of the kickstart file; you can refer to my other article for a complete sample kickstart configuration file.

# System bootloader configuration
bootloader --location=mbr --driveorder=sda --append="rhgb novga console=ttyS0,115200 console=tty0"

# Partition clearing information
clearpart --all --initlabel --drives=sda

# Disk partitioning information
part pv.746 --fstype="lvmpv" --ondisk=sda --size=1 --grow
part /boot --fstype="ext4" --ondisk=sda --size=512
part swap --fstype="swap" --ondisk=sda --size=4096
volgroup rhel --pesize=4096 pv.746
logvol none  --fstype="None" --size=1 --grow --thinpool --metadatasize=4 --chunksize=65536 --name=pool00 --vgname=rhel
logvol /  --fstype="ext4" --size=40960 --thin --poolname=pool00 --name=root --vgname=rhel
logvol /home  --fstype="ext4" --size=20480 --thin --poolname=pool00 --name=home --vgname=rhel
logvol swap  --fstype="swap" --size=4096 --thin --poolname=pool00 --name=swap --vgname=rhel

In this example:

  • We create a physical volume pv.746 on the disk /dev/sda. Since we do not know the size of the disk, we use the --grow argument so the physical volume takes up all of the remaining space.
  • The /boot partition cannot be part of a logical volume, so we create it as a separate standard partition.
  • We create a volume group "rhel" on top of pv.746.
  • We create a thin pool LV "pool00" that grows to fill the volume group.
  • We create the root, /home and swap volumes as thin LVs inside pool00, each with a pre-defined size.
NOTE:
Here my node had a single disk. If you plan to use multiple disks, the configuration options will change depending upon your environment and requirements; a rough two-disk sketch follows below.
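As a rough sketch only, a two-disk variant could present both disks as physical volumes and build the volume group across them. The device names sda and sdb are assumptions and must match your actual hardware:

# Hypothetical two-disk layout: both disks contribute a PV to the same volume group
clearpart --all --initlabel --drives=sda,sdb
part /boot --fstype="ext4" --ondisk=sda --size=512
part pv.01 --fstype="lvmpv" --ondisk=sda --size=1 --grow
part pv.02 --fstype="lvmpv" --ondisk=sdb --size=1 --grow
volgroup rhel --pesize=4096 pv.01 pv.02
logvol none --fstype="None" --size=1 --grow --thinpool --name=pool00 --vgname=rhel
logvol / --fstype="ext4" --size=40960 --thin --poolname=pool00 --name=root --vgname=rhel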

 

Chunk size

The size of the data blocks managed by a thin pool can be specified with the --chunksize option when the thin pool LV is created. The default unit is kilobytes and the default value is 64KiB. The value must be a multiple of 64KiB, between 64KiB and 1GiB.
When a thin pool is used primarily for thin provisioning, a larger value is optimal. When optimising for a lot of snapshotting, a smaller value reduces copying time and consumes less space.
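When building the pool by hand, the chunk size is passed directly to lvcreate. Note that the kickstart logvol --chunksize option takes its value in KiB, so --chunksize=65536 in the example above requests a 64MiB chunk. A minimal sketch, assuming a volume group vg01 (names and sizes are placeholders):

# assumption: small chunk favours many snapshots, large chunk favours plain thin provisioning
lvcreate -L 100G --chunksize 64K --thinpool snappool vg01   # 64KiB chunks, snapshot-heavy workload
lvcreate -L 100G --chunksize 1M  --thinpool thinpool vg01   # 1MiB chunks, mostly thin provisioning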

 

Size of pool metadata LV

The amount of thin metadata depends on how many blocks are shared between thin LVs (i.e. through snapshots). A thin pool with many snapshots may need a larger metadata LV. When a command automatically creates a thin metadata LV, the --poolmetadatasize option can be used to specify a non-default size. The default unit is megabytes.
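In kickstart, the corresponding logvol option is --metadatasize (in MiB), which is what the example above sets to 4. Outside the installer, a larger metadata LV can be requested at pool creation time with lvcreate; a minimal sketch with placeholder VG name and sizes:

# assumption: a snapshot-heavy pool, so ask for 2GiB of metadata up front
lvcreate -L 500G --poolmetadatasize 2G --thinpool pool00 vg01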

Once the installation is successful, validate the available partitions using the df command:

rhel-8:~ # df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   40G  1.8G   36G   5% /
devtmpfs                63G     0   63G   0% /dev
tmpfs                  2.0G     0  2.0G   0% /dev/shm
tmpfs                  500M  9.7M  491M   2% /run
tmpfs                   63G     0   63G   0% /sys/fs/cgroup
/dev/sda1              488M  127M  326M  28% /boot
/dev/mapper/rhel-home   20G   45M   19G   1% /home
tmpfs                  100M     0  100M   0% /run/user/1000
tmpfs                  1.0M  4.0K 1020K   1% /opt/sdf/queues
tmpfs                  100M     0  100M   0% /run/user/1003
tmpfs                  100M     0  100M   0% /run/user/1006

Check the available logical volumes. You can see that the thin pool itself is also listed as a logical volume and has been allocated all of the remaining free space, while root, home and swap are thin volumes carved out of it.

rhel-8:~ # lvs
  LV     VG   Attr       LSize    Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home   rhel Vwi-aotz-- 20.00g   pool00        8.12
  pool00 rhel twi-aotz-- <1.09t                 0.70   11.13
  root   rhel Vwi-aotz-- 40.00g   pool00        15.47
  swap   rhel Vwi-aotz-- 4096.00m pool00        0.01
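To relate this output back to the data and metadata LVs described earlier, lvs can also list the pool's hidden internal volumes; expect entries named along the lines of [pool00_tdata] and [pool00_tmeta] (exact names, attributes and sizes depend on your system):

lvs -a rhel        # -a includes hidden internal LVs such as [pool00_tdata] and [pool00_tmeta]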

Check the available physical volume.

rhel-8:~ # pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/sda3  rhel lvm2 a--  <1.09t    0

Check the available volume group.

rhel-8:~ # vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  rhel   1   3   0 wz--n- <1.09t    0
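Since the pool can be over-committed, it is worth watching the Data% and Meta% columns from lvs over time. If the pool approaches 100%, it can be extended, assuming the volume group still has free extents; in this example it does not, so a new PV would have to be added first. A hedged sketch (the device /dev/sdb1 is an assumption):

lvs -o lv_name,data_percent,metadata_percent rhel   # monitor thin pool utilisation
vgextend rhel /dev/sdb1                              # assumption: /dev/sdb1 is a new PV added to the VG
lvextend -L +100G rhel/pool00                        # grow the pool's data LV
lvextend --poolmetadatasize +1G rhel/pool00          # grow the pool's metadata LV if Meta% is high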

Lastly, I hope the steps from this article to configure thin provision LVM using kickstart in CentOS/RHEL 7/8 Linux were helpful. Let me know your suggestions and feedback using the comment section.

 

Deepak Prasad


He is the founder of GoLinuxCloud and brings over a decade of expertise in Linux, Python, Go, Laravel, DevOps, Kubernetes, Git, Shell scripting, OpenShift, AWS, Networking, and Security. With extensive experience, he excels in various domains, from development to DevOps, Networking, and Security, ensuring robust and efficient solutions for diverse projects. You can connect with him on his LinkedIn profile.


2 thoughts on “Configure Thin Provision LVM using kickstart with example in CentOS/RHEL 7/8”

  1. You mention what it is and how to set it up, but you don’t really talk about WHY we’d want to do this. What are the goals this solves? (I’m having trouble coming up with a good use for this.) Also, what are the drawbacks? (This sounds particularly dangerous if used with DRBD, is it?) You mention it “can then be much larger than physically available storage”, what happens if you do that then later do exceed physical storage? (I’d imagine that would be very, very bad, but what does it do and can you recover other than with smaller backups?)

    • Hello Kevin, this article is for users who wish to implement thin provision LVM. From your list of questions I can only assume you are looking for a proper solution for your environment. Thin provisioning is only used when you know that your requirement does not need all the storage but only a part of it, so rather than reserving the entire space you only use the required storage. It can never happen that the utilization will go beyond the available storage; I think you are mixing this solution with VMware or a similar virtual environment. The technology is the same but the underlying solution is different here.

