Create Stratis Snapshot, Restore Stratis File System & more (CentOS/RHEL 8)



This is the second part of our Stratis series. In the first part I gave an overview of Stratis along with the steps to install and configure it on a RHEL 8 Linux host, which included installing the prerequisite rpms, starting the required daemon service, creating pools and thin-provisioned file systems, and mounting those file systems to access and store data. In this article I will share the steps to create a Stratis snapshot, restore a Stratis file system, remove a Stratis snapshot, and more related commands with examples on a RHEL 8 Linux host.


Create Stratis snapshots

Stratis lets you create snapshots and restore file systems from them. Creating a Stratis snapshot captures the state of a file system at an arbitrary point in time, so that you can restore the Stratis file system to that state later.

In Stratis, a snapshot is a regular Stratis file system created as a copy of another Stratis file system. The snapshot initially contains the same file content as the original file system, but the two diverge as they are modified. Whatever changes you make to the snapshot are not reflected in the original file system.

The current snapshot implementation in Stratis is characterized by the following:

  • A snapshot of a file system is another file system.
  • A snapshot and its origin are not linked in lifetime. A snapshotted file system can live longer than the file system it was created from.
  • A file system does not have to be mounted to create a snapshot from it.
  • Each snapshot uses around half a gigabyte of actual backing storage, which is needed for the XFS log.


Before we create a Stratis snapshot, let us put some dummy data in our Stratis file system mount point.

[root@node4 ~]# cd /test-fs1/

Here I am creating a dummy 1 GB file and also a text file with some content.

[root@node4 test-fs1]# dd if=/dev/zero of=test1G bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.1357 s, 260 MB/s

[root@node4 test-fs1]# echo "This is an important file." > importantfile

[root@node4 test-fs1]# ls -al
total 1048584
drwxr-xr-x   2 root root         41 Jun 17 12:11 .
dr-xr-xr-x. 20 root root       4096 Jun 17 12:06 ..
-rw-r--r--   1 root root         27 Jun 17 12:11 importantfile
-rw-r--r--   1 root root 1073741824 Jun 17 12:10 test1G
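Optionally, you can record checksums of these files now so that, after restoring from the snapshot later, you can confirm the data came back intact. A minimal sketch; the checksum file path /root/test-fs1.md5 is my own choice:

# Run inside /test-fs1; records relative paths so it can be re-checked elsewhere
md5sum importantfile test1G > /root/test-fs1.md5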

Next verify the pool size. Notice that the Total Physical Used column now shows 2.12 GiB instead of the 1.12 GiB reported earlier when the file systems were empty.

[root@node4 test-fs1]# stratis pool
Name       Total Physical Size  Total Physical Used
my-pool                  6 GiB             2.12 GiB

Similarly check the file system usage; the used size for test-fs1 has grown, since our data resides on this file system.

[root@node4 test-fs1]# stratis fs list
Pool Name  Name      Used      Created            Device
my-pool    test-fs1  1.53 GiB  Jun 17 2019 12:05  /dev/stratis/my-pool/test-fs1
my-pool    test-fs2  546 MiB   Jun 17 2019 12:06  /dev/stratis/my-pool/test-fs2

To create a Stratis snapshot, use:

[root@node4 test-fs1]# stratis filesystem snapshot my-pool test-fs1 test-fs1-snapshot

Here,

my-pool -> The pool name in which the file system exists
test-fs1 -> The file system name for which you wish to create snapshot
test-fs1-snapshot -> The snapshot name
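In general terms, the syntax is (placeholders mine):

stratis filesystem snapshot <pool-name> <fs-name> <snapshot-name>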

Now that our create Stratis snapshot command has executed successfully, verify the list of available file systems. You will observe a new file system entry for the snapshot.

[root@node4 test-fs1]# stratis fs list
Pool Name  Name               Used      Created            Device
my-pool    test-fs1           1.53 GiB  Jun 17 2019 12:05  /dev/stratis/my-pool/test-fs1
my-pool    test-fs2           546 MiB   Jun 17 2019 12:06  /dev/stratis/my-pool/test-fs2
my-pool    test-fs1-snapshot  1.53 GiB  Jun 17 2019 12:12  /dev/stratis/my-pool/test-fs1-snapshot


Accessing the content of Stratis snapshot

After we create a Stratis snapshot, this procedure mounts the snapshot of the Stratis file system to make it accessible for read and write operations, just like any regular file system.

[root@node4 ~]# mkdir /test-fs1-snapshot
[root@node4 ~]# mount /dev/stratis/my-pool/test-fs1-snapshot /test-fs1-snapshot
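If you want a Stratis mount to persist across reboots, you could also add an /etc/fstab entry keyed on the file system UUID, with the x-systemd.requires=stratisd.service mount option so that systemd waits for the Stratis daemon before mounting. A sketch, with the UUID looked up at run time:

# Look up the UUID of the snapshot file system
blkid -s UUID -o value /dev/stratis/my-pool/test-fs1-snapshot

# Then add a line like this to /etc/fstab, replacing <UUID> with the value above:
# UUID=<UUID>  /test-fs1-snapshot  xfs  defaults,x-systemd.requires=stratisd.service  0 0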

Next verify the df output with the list of mounted file systems. As you can see, our Stratis snapshot has been mounted successfully on /test-fs1-snapshot. Note that every Stratis file system reports a virtual size of 1 TiB because it is thinly provisioned; the space actually consumed is tracked at the pool level.

[root@node4 ~]# df -h
Filesystem                                                                                       Size  Used Avail Use% Mounted on
devtmpfs                                                                                         1.9G     0  1.9G   0% /dev
tmpfs                                                                                            1.9G     0  1.9G   0% /dev/shm
tmpfs                                                                                            1.9G  8.5M  1.9G   1% /run
tmpfs                                                                                            1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/rhel-root                                                                            6.4G  1.4G  4.8G  23% /
/dev/sda1                                                                                        488M  124M  329M  28% /boot
tmpfs                                                                                            379M     0  379M   0% /run/user/0
/dev/mapper/stratis-1-a35d85c5802c49b59bb8c81104d43bb4-thin-fs-a4df0ba34ecf427296fed4bc8c4e7570  1.0T  8.2G 1016G   1% /test-fs1
/dev/mapper/stratis-1-a35d85c5802c49b59bb8c81104d43bb4-thin-fs-0f6d34239dcb4955803a1054c3a65972  1.0T  7.2G 1017G   1% /test-fs2
/dev/mapper/stratis-1-a35d85c5802c49b59bb8c81104d43bb4-thin-fs-4e39ac444ca343da9968a4b68999ecc0  1.0T  8.2G 1016G   1% /test-fs1-snapshot

Next verify the content of /test-fs1-snapshot:

[root@node4 ~]# cd /test-fs1-snapshot

As expected, since two files existed at the time the snapshot was created, the same two files are present inside the snapshot.

[root@node4 test-fs1-snapshot]# ls -al
total 1048584
drwxr-xr-x   2 root root         41 Jun 17 12:11 .
dr-xr-xr-x. 21 root root       4096 Jun 17 12:12 ..
-rw-r--r--   1 root root         27 Jun 17 12:11 importantfile
-rw-r--r--   1 root root 1073741824 Jun 17 12:10 test1G

Verify the content of the file:

[root@node4 test-fs1-snapshot]# cat importantfile
This is an important file.
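If you recorded checksums earlier, you can also confirm that the snapshot's data matches the original byte for byte (assuming the /root/test-fs1.md5 file from the sketch above):

# Run inside /test-fs1-snapshot; both files should report OK
md5sum -c /root/test-fs1.md5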


Restore Stratis File System to a previous snapshot

This procedure reverts a Stratis file system to the state captured in a Stratis snapshot. Let us remove one of the files under the test-fs1 file system so that we can confirm the restore from the snapshot actually works:

[root@node4 test-fs1]# rm importantfile
rm: remove regular file 'importantfile'? y

[root@node4 test-fs1]# ls -al
total 1048580
drwxr-xr-x   2 root root         20 Jun 17 12:13 .
dr-xr-xr-x. 21 root root       4096 Jun 17 12:12 ..
-rw-r--r--   1 root root 1073741824 Jun 17 12:10 test1G

Next unmount the test-fs1 file system, as the file system must be in an unmounted state before attempting to restore it from a snapshot:

[root@node4 test-fs1]# cd
[root@node4 ~]# umount /test-fs1
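If you are not sure whether the file system is still mounted, findmnt prints an entry only for mounted file systems, so empty output means it is safe to proceed:

# Prints nothing once /test-fs1 is unmounted
findmnt /test-fs1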


Remove the original Stratis file system:

[root@node4 ~]# stratis filesystem destroy my-pool test-fs1

Create a copy of the snapshot under the name of the original file system:

[root@node4 ~]# stratis filesystem snapshot my-pool test-fs1-snapshot test-fs1

Verify the list of available file systems:

[root@node4 ~]# stratis fs list
Pool Name  Name               Used      Created            Device
my-pool    test-fs2           546 MiB   Jun 17 2019 12:06  /dev/stratis/my-pool/test-fs2
my-pool    test-fs1-snapshot  1.53 GiB  Jun 17 2019 12:12  /dev/stratis/my-pool/test-fs1-snapshot
my-pool    test-fs1           1.53 GiB  Jun 17 2019 12:14  /dev/stratis/my-pool/test-fs1

Mount the snapshot, which is now accessible with the same name as the original file system:

[root@node4 ~]# mount /dev/stratis/my-pool/test-fs1 /test-fs1

The content of the file system named test-fs1 is now identical to the snapshot test-fs1-snapshot.

[root@node4 ~]# ls -al /test-fs1
total 1048584
drwxr-xr-x   2 root root         41 Jun 17 12:11 .
dr-xr-xr-x. 21 root root       4096 Jun 17 12:12 ..
-rw-r--r--   1 root root         27 Jun 17 12:11 importantfile
-rw-r--r--   1 root root 1073741824 Jun 17 12:10 test1G
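Since this revert always follows the same unmount, destroy, snapshot, mount sequence, you could wrap it in a small helper script. A minimal sketch; the script name, argument order, and lack of safety checks are my own simplifications, so test it on a scratch pool first:

#!/bin/bash
# revert-stratis-fs.sh -- illustrative helper, not part of the stratis CLI
# Usage: revert-stratis-fs.sh <pool> <fs> <snapshot> <mountpoint>
set -e
POOL=$1 FS=$2 SNAP=$3 MNT=$4
umount "$MNT" 2>/dev/null || true                   # unmount if currently mounted
stratis filesystem destroy "$POOL" "$FS"            # drop the current file system
stratis filesystem snapshot "$POOL" "$SNAP" "$FS"   # re-create it from the snapshot
mount "/dev/stratis/$POOL/$FS" "$MNT"               # mount the restored copy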


Remove Stratis snapshot

This procedure removes a Stratis snapshot from a pool. Data on the snapshot are lost.
Unmount the snapshot file system (if mounted):

[root@node4 ~]# umount /test-fs1-snapshot

Destroy the snapshot:

[root@node4 ~]# stratis filesystem destroy my-pool test-fs1-snapshot

Verify the list of available file systems:

[root@node4 ~]# stratis fs list
Pool Name  Name      Used      Created            Device
my-pool    test-fs2  546 MiB   Jun 17 2019 12:06  /dev/stratis/my-pool/test-fs2
my-pool    test-fs1  1.53 GiB  Jun 17 2019 12:14  /dev/stratis/my-pool/test-fs1


Remove Stratis file system

This procedure removes an existing Stratis file system. Data stored on it are lost.

Unmount the file system to be removed (if mounted) and clear any entry for it from /etc/fstab so that the kernel does not attempt to mount an unavailable file system at boot:

[root@node4 ~]# umount /test-fs2/
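For the /etc/fstab cleanup you can edit the file by hand, or delete the matching line with sed. A sketch, assuming the entry references the /dev/stratis device path; adjust the pattern if your fstab uses a UUID instead:

# Keep a backup (fstab.bak), then delete any line mentioning the test-fs2 device
sed -i.bak '\|/dev/stratis/my-pool/test-fs2|d' /etc/fstab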

Destroy the file system:

[root@node4 ~]# stratis filesystem destroy my-pool test-fs2

Verify the list of available file systems, pools and block devices on your system:

[root@node4 ~]# stratis filesystem list my-pool
Pool Name  Name      Used      Created            Device
my-pool    test-fs1  1.53 GiB  Jun 17 2019 12:14  /dev/stratis/my-pool/test-fs1
[root@node4 ~]# stratis pool list
Name       Total Physical Size  Total Physical Used
my-pool                  6 GiB             1.59 GiB
[root@node4 ~]# stratis blockdev list
Pool Name  Device Node    Physical Size       State  Tier
my-pool    /dev/sdb               2 GiB      In-use  Data
my-pool    /dev/sdc               2 GiB      In-use  Data
my-pool    /dev/sdd               2 GiB  Not-in-use  Data


Remove Stratis pool

This procedure removes an existing Stratis pool. Data stored on it are lost.

List the file systems that are part of the pool you intend to remove:

[root@node4 ~]# stratis filesystem list my-pool
Pool Name  Name      Used      Created            Device
my-pool    test-fs1  1.53 GiB  Jun 17 2019 12:14  /dev/stratis/my-pool/test-fs1

Next unmount the file system to be removed (if mounted) and, as before, clear any entry for it from /etc/fstab so that the kernel does not attempt to mount an unavailable file system at boot:

[root@node4 ~]# umount /test-fs1

Destroy the file system:

[root@node4 ~]# stratis filesystem destroy my-pool test-fs1

Verify the list of available file systems in my-pool:

[root@node4 ~]# stratis filesystem list my-pool
Pool Name  Name  Used  Created  Device

Next destroy the respective pool:

[root@node4 ~]# stratis pool destroy my-pool

Verify the list of available Stratis pools:

[root@node4 ~]# stratis pool list
Name    Total Physical Size  Total Physical Used
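At this point the block devices /dev/sdb, /dev/sdc and /dev/sdd are released. If you plan to reuse them with another storage tool and it complains about existing signatures, you could clear any residual metadata with wipefs; be careful, as this irreversibly erases signatures, so double-check the device names first:

# Erase remaining file system/RAID signatures from the freed disks
wipefs -a /dev/sdb /dev/sdc /dev/sdd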


Lastly, I hope the steps from this article to create a Stratis snapshot, restore a Stratis file system, and remove or destroy Stratis file systems, pools and snapshots on RHEL 8 Linux were helpful. Let me know your suggestions and feedback using the comment section.

