In this article we will learn about the most used NFS mount options and NFS exports options with examples. I have tried to keep the examples as simple as possible so that even a Linux beginner can understand them and then decide which NFS mount and export options to use in their own setup.
There are two types of permissions which can be implemented between an NFS Server and Client:
- NFS Server Side (NFS Exports Options)
- NFS Client side (NFS Mount Options)
Let us jump into the details of each type of permission. As a pre-requisite to this article, I have already configured an NFS server and client to demonstrate the NFS mount options and NFS exports options.
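If you still need that base setup, the sketch below shows a minimal server and client preparation on RHEL/CentOS 8; the package and service names assume the standard nfs-utils stack, so adjust for your distribution.
# On the NFS Server: install the NFS utilities and start the server
[root@nfs-server ~]# dnf -y install nfs-utils
[root@nfs-server ~]# systemctl enable --now nfs-server
# On the NFS Client: only the client-side tools from nfs-utils are needed
[root@nfs-client ~]# dnf -y install nfs-utils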
NFS Exports Options
NFS exports options are the permissions we apply on the NFS Server when we create an NFS Share under /etc/exports
Below are the most used NFS exports options in Linux which we will cover in this article: secure/insecure, ro/rw, root_squash/no_root_squash, all_squash/no_all_squash and sync/async.
NFS exports options example with secure vs insecure
- With secure, the port number from which the client requests a mount must be lower than 1024.
- The secure permission is on by default.
- To turn it off, specify insecure instead.
Below I have shared the /nfs_shares folder on the NFS Server
[root@nfs-server ~]# cat /etc/exports
/nfs_shares *(rw,no_root_squash)
As you can see, by default the NFS exports options include secure
[root@nfs-server ~]# exportfs -v
/nfs_shares (sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
In such a case the client will be forced to use a port number less than 1024 to access the NFS shares. Here, as you can see, the client is using port 867 to access the share.
[root@nfs-server ~]# netstat | grep nfs
tcp 0 0 nfs-server:nfs 10.10.10.16:867 ESTABLISHED
You can also use rpcinfo -p on the server to list the registered RPC services and the ports they listen on.
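A sample run is shown below; the exact program list and port numbers vary between systems, so treat this output as illustrative only.
[root@nfs-server ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100005    3   tcp  20048  mountd
    100003    4   tcp   2049  nfs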
To allow the client to use any available free port, use insecure in the NFS share
[root@nfs-server ~]# cat /etc/exports
/nfs_shares *(rw,no_root_squash,insecure)
Next re-export your shares
[root@nfs-server ~]# exportfs -r
Verify the NFS Share permissions
[root@nfs-server ~]# exportfs -v
/nfs_shares (sync,wdelay,hide,no_subtree_check,sec=sys,rw,insecure,no_root_squash,no_all_squash)
So now a client is free to use any port. Using insecure does not mean you are forcing the client to use a port higher than 1024; a client can still use a port below 1024. It just means the client will now also be allowed to connect to the NFS server from higher port numbers, which are considered insecure.
NFS exports options example with ro vs rw
The naming here is self-explanatory:
- ro means read-only access to the NFS Share
- rw means read write access to the NFS Share
But what if you share a directory as read-only but mount the NFS share as read-write?
In the below example I have shared /nfs_shares with read-only permission
[root@nfs-server ~]# cat /etc/exports
/nfs_shares *(ro,no_root_squash)
List the available shares
[root@nfs-server ~]# exportfs -v
/nfs_shares (sync,wdelay,hide,no_subtree_check,sec=sys,ro,secure,no_root_squash,no_all_squash)
But on the NFS Client, I will mount the NFS Share with read-write permission
[root@nfs-client ~]# mount -o rw 10.10.10.12:/nfs_shares /mnt
Verify that the mount was successful. As you can see, the NFS share is mounted read-write
[root@nfs-client ~]# mount | grep nfs
10.10.10.12:/nfs_shares on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.10.10.16,local_lock=none,addr=10.10.10.12)
Let us try to create a file in our NFS mount point on the client
[root@nfs-client ~]# touch /mnt/file
touch: cannot touch '/mnt/file': Read-only file system
So I hope this is clear: if a directory is shared as read-only, then you will not be allowed to perform any write operation on that directory, even if you mount the share with read-write permission.
root_squash vs no_root_squash
- The names themselves explain the meaning of these parameters.
- Here, squash literally means to squash (destroy) the power of the remote root user, or don't squash the power of the remote root user.
- root_squash prevents remote root users from having superuser (root) privileges on remote NFS-mounted volumes.
- no_root_squash allows the root user on the NFS client host to access the NFS-mounted directory with the same rights and privileges that the superuser would normally have.
NFS exports options root_squash example
Let us understand root_squash with some examples:
I have a directory /nfs_shares with 700 permission on my NFS Server, so only the owner is allowed to read, write and execute in this directory
[root@nfs-server ~]# ls -ld /nfs_shares
drwx------ 2 root root 4096 Apr 17 18:01 /nfs_shares
Now this directory is shared via the NFS Server using /etc/exports. I have given read-write permission and all other permissions are left at their defaults
[root@nfs-server ~]# cat /etc/exports
/nfs_shares *(rw)
Re-export the shares
[root@nfs-server ~]# exportfs -r
List the shared directories
[root@nfs-server ~]# exportfs -v
/nfs_shares (sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,root_squash,no_all_squash)
On the Client I will mount the NFS Share to /mnt
[root@nfs-client ~]# mount -t nfs 10.10.10.12:/nfs_shares /mnt
Next let me try to navigate to the NFS mount point
[root@nfs-client ~]# cd /mnt
-bash: cd: /mnt: Permission denied
Here, since we have used the default NFS exports options, the root user on the client is squashed and accesses the NFS share as the nobody user.
Also, we had given 700 permission to /nfs_shares, which means no permission for "others", so the "nobody" user is not allowed to do any activity in /nfs_shares
Next I will give read and execute permission to others for /nfs_shares on the NFS Server
[root@nfs-server ~]# chmod o+rx /nfs_shares
[root@nfs-server ~]# ls -ld /nfs_shares
drwx---r-x 2 root root 4096 Apr 19 11:37 /nfs_shares
Now I will be allowed to navigate inside the mount point
[root@nfs-client ~]# cd /mnt
But since there is no write permission, even the root user will not be allowed to write inside /mnt
[root@nfs-client mnt]# touch file
touch: cannot touch 'file': Permission denied
Next I will also give write access to /nfs_shares (so now others have full access to /nfs_shares)
[root@nfs-server ~]# chmod o+w /nfs_shares
[root@nfs-server ~]# ls -ld /nfs_shares
drwx---rwx 2 root root 4096 Apr 19 11:37 /nfs_shares
Now I should be allowed to write inside /mnt (where /nfs_shares is mounted)
[root@nfs-client mnt]# touch file
As expected, we were able to create a file, and this file is created with nobody user and group ownership because we are using root_squash on the NFS Share
[root@nfs-client mnt]# ls -l
total 4
-rw-r--r-- 1 nobody nobody  0 Apr 19  2020 file
-rw-r--r-- 1 root   root   10 Apr 19  2020 file1
However, a user on the client can still delete file1, which is owned by the root user. To prevent such a scenario you should also set the sticky bit to enhance security, which restricts users on the client node from deleting files owned by other users.
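A minimal sketch of doing that on the server (the directory and its permissions follow this article's example; the listed timestamp is illustrative):
# Set the sticky bit so only a file's owner (or root) can delete it
[root@nfs-server ~]# chmod +t /nfs_shares
[root@nfs-server ~]# ls -ld /nfs_shares
drwx---rwt 2 root root 4096 Apr 19 11:37 /nfs_shares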
NFS exports options no_root_squash example
Next let's see the behaviour of no_root_squash.
I will update the NFS exports options on the NFS Server to use no_root_squash
[root@nfs-server ~]# cat /etc/exports
/nfs_shares *(rw,no_root_squash)
Re-export the shares
[root@nfs-server ~]# exportfs -r
List the properties of the NFS Shares on the NFS Server
[root@nfs-server ~]# exportfs -v
/nfs_shares (sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
There is no need to restart the nfs-server service on the NFS Server; since we have used exportfs -r, this will re-export all the shares with the new properties.
On the NFS client, if I now create a new file
[root@nfs-client mnt]# touch new_file
[root@nfs-client mnt]# ls -l
total 0
-rw-r--r-- 1 nobody nobody 0 Apr 19  2020 file
-rw-r--r-- 1 root   root   0 Apr 19  2020 new_file
So the new file is created with root ownership, which proves that the NFS share is accessed as the root user with no_root_squash.
Understanding all_squash vs no_all_squash
- all_squash will map all User IDs (UIDs) and Group IDs (GIDs) to the anonymous user.
- all_squash is useful for NFS-exported public FTP directories, news spool directories and the like (see the example after this list).
- By default no_all_squash is applied to the NFS Shares.
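As a sketch, an export using all_squash could look like the entry below. anonuid and anongid are standard exports options that choose which local account the squashed requests map to; the UID/GID values of 1001 are made up for illustration. After editing /etc/exports, re-export with exportfs -r as shown earlier.
# /etc/exports: squash every client UID/GID to the anonymous account
/nfs_shares *(rw,all_squash,anonuid=1001,anongid=1001)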
Understanding sync vs async
- With sync, replies to requests are sent only after the changes have been committed to stable storage.
- async allows the NFS server to violate the NFS protocol and reply to requests before any changes made by that request have been committed to stable storage.
- Using the async option usually improves performance, but at the cost that an unclean server restart (i.e. a crash) can cause data to be lost or corrupted (see the example after this list).
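For example, an export that trades crash safety for write performance might look like this hypothetical entry for the same share:
# /etc/exports: async improves write performance but risks data loss on a server crash
/nfs_shares *(rw,async,no_root_squash)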
There are a few more NFS exports options such as no_subtree_check and fsid which are mostly theoretical; you should learn about them from the official man page of exports.
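For reference, a sketch of how those options can appear in an export entry; fsid=0 marks the NFSv4 pseudo-root, and whether you need it depends on your setup:
# /etc/exports: disable subtree checking and pin a filesystem identifier
/nfs_shares *(rw,no_subtree_check,fsid=0)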
NFS Mount Options with mount
NFS Mount Options are the ones we use to mount an NFS Share on the NFS Client.
Below are the most used NFS mount options we are going to cover in this article with different examples: hard/soft, nfsvers, rsize/wsize, intr and bg/fg.
Hard Mount vs Soft Mount
- By default all NFS Shares are mounted as hard mounts
- With a hard mount, if an NFS operation has a major timeout, a "server not responding" message is reported and the client continues to retry indefinitely
- With a hard mount there is a chance that a client performing operations on the NFS Share gets stuck indefinitely if the NFS server becomes unreachable
- A soft mount allows the client to time out the connection after the number of retries specified by retrans=n
NFS mount options hard mount example
In this NFS mount options example, I will mount my NFS share using hard mount
[root@nfs-client ~]# mount -o hard 10.10.10.12:/nfs_shares /mnt
Check the share properties to make sure hard mount is implemented.
[root@nfs-client ~]# mount | grep /mnt
10.10.10.12:/nfs_shares on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=60,retrans=2,sec=sys,clientaddr=10.10.10.16,local_lock=none,addr=10.10.10.12)
Next I will create a small script that writes to the NFS Share and also prints to the screen so we know the progress of the script:
[root@nfs-client ~]# cat /tmp/script.sh
#!/bin/bash
for (( i=0; i<1000; i++ )); do
    # Echo to the file available under /nfs_shares mount point
    echo $i >> /mnt/file
    # Also print on STDOUT
    echo $i
    # Sleep for 1 second after each write
    sleep 1
done
Next I executed the script on the client node
[root@nfs-client ~]# /tmp/script.sh
0
1
2
3
4
<- here we stopped nfs-server service on our NFS Server node
During the execution after "4" was printed, I stopped the nfs-server service
[root@nfs-server ~]# systemctl stop nfs-server
On the Client node I started getting these messages in /var/log/messages
Apr 19 04:00:54 nfs-client.example.com kernel: nfs: server 10.10.10.12 not responding, still trying
Apr 19 04:01:55 nfs-client.example.com kernel: nfs: server 10.10.10.12 not responding, timed out
Apr 19 04:02:13 nfs-client.example.com kernel: nfs: server 10.10.10.12 not responding, timed out
Apr 19 04:02:19 nfs-client.example.com kernel: nfs: server 10.10.10.12 not responding, timed out
Apr 19 04:02:31 nfs-client.example.com kernel: nfs: server 10.10.10.12 not responding, timed out
Then I started the NFS Server service, after which the client was able to re-establish the connection with the NFS server
Apr 19 04:08:20 nfs-client.example.com kernel: nfs: server 10.10.10.12 OK
And our script on the client node again started to write to the NFS Share
[root@nfs-client ~]# /tmp/script.sh
0
1
2
3
4
<- here we stopped nfs-server service on our NFS Server node
As soon as we start the NFS Server service, the script continues to write
5
6
7
8
9
So we see there was no data loss with the hard mount
Advantage and Disadvantage of NFS Hard Mount
- The demerit of a hard mount is that it consumes more resources on your system, as the client holds the write process until the NFS server is back UP.
- Hard mounts are suited to mission-critical systems where data integrity matters, to make sure data is not lost while writing to the NFS Shares.
NFS mount options Soft Mount example
Let us also examine the behaviour of an NFS Soft Mount in our NFS mount options example.
First I will un-mount the NFS Share. (I could also do a remount, but let's keep it simple.)
[root@nfs-client ~]# umount /mnt
Then I will do a soft mount along with some more values such as retrans=2 and timeo=60.
With these values the client waits timeo=60 (60 deciseconds, i.e. 6 seconds) for a response and retransmits the request up to retrans=2 times before reporting the NFS Server as unreachable
[root@nfs-client ~]# mount -o nfsvers=4,soft,retrans=2,timeo=60 10.10.10.12:/nfs_shares /mnt
Verify the NFS Mount Options on the client
[root@nfs-client ~]# mount | grep /mnt
10.10.10.12:/nfs_shares on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=60,retrans=2,sec=sys,clientaddr=10.10.10.16,local_lock=none,addr=10.10.10.12)
Next we will again execute our script
[root@nfs-client ~]# /tmp/script.sh
0
1
2
3
4
5
<- At this stage I stopped nfs-server service on the server
Here I have stopped the nfs-server service to make my server unreachable.
[root@nfs-server ~]# systemctl stop nfs-server
In a couple of seconds we start getting the below alarms in /var/log/messages, similar to what we saw with the hard mount
Apr 19 04:19:32 nfs-client.example.com kernel: nfs: server 10.10.10.12 not responding, timed out
Apr 19 04:19:50 nfs-client.example.com kernel: nfs: server 10.10.10.12 not responding, timed out
Apr 19 04:20:09 nfs-client.example.com kernel: nfs: server 10.10.10.12 not responding, timed out
Apr 19 04:20:17 nfs-client.example.com kernel: nfs: server 10.10.10.12 not responding, timed out
But the script continues to execute even if it fails to write on the NFS Shares
[root@nfs-client ~]# /tmp/script.sh
0
1
2
3
4
5
/tmp/script.sh: line 3: /mnt/file: Input/output error
6
/tmp/script.sh: line 3: /mnt/file: Input/output error
7
Advantage and Disadvantage of NFS Soft Mount
- So a soft mount can lead to data loss in a real-time environment.
- Although in this example if I start the nfs-server service, the server would be reachable again and the client would again start writing to the NFS share, any data from the period when the NFS Server was unreachable would be lost.
- So in production environments where data is important, it is recommended to use hard mount as the preferred NFS mount option.
Define NFS version while mounting NFS Share
- You can explicitly define the NFS version you wish to use to mount the NFS Share.
- RHEL/CentOS 7/8 by default support NFSv3 and NFSv4 (unless you have explicitly disabled either of them).
- So the client has the option to define the NFS version it wants to use to connect to the NFS Server.
- You can use nfsvers=n to define the NFS version.
For example:
To mount NFS Share using NFSv4
[root@nfs-client ~]# mount -o nfsvers=4 10.10.10.12:/nfs_shares /mnt
Similarly to mount NFS Share using NFSv3
[root@nfs-client ~]# mount -o nfsvers=3 10.10.10.12:/nfs_shares /mnt
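To confirm which version was actually negotiated, check the vers= field of the active mount; nfsstat -m prints the mount options from the client side (output trimmed here, the flags will vary with your setup):
[root@nfs-client ~]# nfsstat -m
/mnt from 10.10.10.12:/nfs_shares
 Flags: rw,relatime,vers=3,rsize=1048576,wsize=1048576,hard,proto=tcp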
Recommended NFS Mount Options
Use wsize and rsize mount option
- There is no 'default' value for rsize and wsize. The 'default' is to use the largest value that both the client and server support.
- If rsize/wsize is not specified in the mount options, the client will query the server and will use the largest size that both support.
- If rsize/wsize is specified in the mount options and it exceeds the maximum value that either the client or server supports, the client will use the largest size that both support.
- However, based on your system resources and requirements, you can choose to define your own rsize and wsize values.
You can define your own wsize and rsize using:
[root@nfs-client ~]# mount -o nfsvers=4,wsize=65536,rsize=65536 10.10.10.12:/nfs_shares /mnt
Verify the new properties
[root@nfs-client ~]# mount | grep /mnt
10.10.10.12:/nfs_shares on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.10.10.16,local_lock=none,addr=10.10.10.12)
For more details on the supported maximum read and write sizes with different Red Hat kernels, check: What are the default and maximum values for rsize and wsize with NFS mounts?
Use intr mount option
- When a process makes a system call, the kernel takes over the action.
- During the time that the kernel is handling the system call, the process may not have control over itself.
- When there's an error, however, this can be quite a nuisance.
- Because of this, NFS has an option to mount file systems with the interruptible flag (the intr option), which allows a process that is waiting on an NFS request to give up and move on (see the sketch after this list).
- In general, unless you have a reason not to, it is usually a good idea to use the intr option. Note, however, that on Linux kernels 2.6.25 and later the intr/nointr mount options are deprecated and ignored; on those kernels only SIGKILL can interrupt a pending NFS operation.
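On an older kernel where intr still has effect, a soft, interruptible mount would look something like this sketch:
# Mount with soft+intr so a process stuck on the NFS request can be interrupted
[root@nfs-client ~]# mount -o soft,intr,retrans=2,timeo=60 10.10.10.12:/nfs_shares /mnt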
Using bg and fg NFS mount options
- I wouldn't blindly recommend this and it mostly depends on your use case.
- These options can be used to select the retry behavior if a mount fails.
- The
bg
option causes the mount attempts to be run in the background. - The
fg
option causes the mount attempt to be run in the foreground. - The default is
fg
, which is the best selection for file systems that must be available. This option prevents further processing until the mount is complete. bg
is a good selection for noncritical file systems because the client can do other processing while waiting for the mount request to be completed.
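For instance, a hypothetical /etc/fstab entry using bg so that booting is not blocked while the NFS server is down:
# /etc/fstab: bg lets boot continue; the mount keeps retrying in the background
10.10.10.12:/nfs_shares  /mnt  nfs  defaults,bg,soft,timeo=60,retrans=2  0 0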
NFS Mount Options with Fstab
If you mount a share using the mount command, the change lasts only for the current session; after a reboot you will have to mount the NFS share again.
To make the change persistent you must create a new entry in /etc/fstab with the NFS share details. In /etc/fstab you can define any additional NFS mount options for the share path.
For example:
In this NFS mount options example I will mount the /nfs_shares path as a soft mount with NFSv3, a timeo value of 60 and a retrans value of 5
10.10.10.12:/nfs_shares /mnt nfs defaults,soft,nfsvers=3,timeo=60,retrans=5 0 0
Save and exit the /etc/fstab file
Next execute mount -a to mount all the paths from /etc/fstab
# mount -a
Next verify the mount points on the client.
# mount | grep nfs
Lastly, I hope the steps from this article helped you understand NFS Exports Options and NFS Mount Options on Linux. Let me know your suggestions and feedback using the comment section.