In this article we will cover:
- Comparison of NFSv2, NFSv3, and NFSv4
- How to configure NFS server and client using NFSv4 in RHEL/CentOS 7/8 Linux
- How to configure NFS server and client using NFSv3 and NFSv2 in RHEL/CentOS 7/8 Linux
- Access NFS shares persistently and non-persistently in Linux
Network File System (NFS) is one of the native ways of sharing files and applications across the network in the Linux/UNIX world. NFS is somewhat similar to Microsoft Windows File Sharing, in that it allows you to attach to a remote file system (or disk) and work with it as if it were a local drive—a handy tool for sharing files and large storage space among users.
NFSv2 vs NFSv3 vs NFSv4
NFSv2
- Mount requests are granted on a per-host basis and not on a per-user basis.
- This version uses Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) as its transport protocol.
- Version 2 clients are limited to accessing files smaller than 2GB.
NFSv3
- This version has more features than version 2, has performance gains over version 2, and can use either TCP or UDP as its transport protocol.
- Depending on the local file system limits of the NFS server itself, clients can access files larger than 2GB in size.
- Mount requests are also granted on a per-host basis and not on a per-user basis.
NFSv4
- This version of the protocol uses a stateful protocol such as TCP or Stream Control Transmission Protocol (SCTP) as its transport.
- The services of the RPC binding protocols (such as rpc.mountd, rpc.lockd, and rpc.statd) are no longer required in this version of NFS because their functionality has been built into the server.
- NFSv4 combines these previously disparate NFS protocols into a single protocol specification.
- The portmap service is no longer necessary.
- It includes support for file access control list (ACL) attributes and can support both version 2 and version 3 clients.
- NFSv4 introduces the concept of the pseudo-file system, which allows NFSv4 clients to see and access the file systems exported on the NFSv4 server as a single file system.
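To make the pseudo-file system idea concrete, one common way to define the pseudo-root explicitly in /etc/exports is with the fsid=0 option. The /exports directory and the client network below are illustrative assumptions, not part of the lab used later in this article:
/exports          10.10.10.0/24(ro,fsid=0)
/exports/projects 10.10.10.0/24(rw)
An NFSv4 client could then mount the whole tree with a single mount of server:/ and would see projects beneath that single mount point.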
Lab Environment
I have three Virtual Machines which I will use for NFS configuration of server and client. Below are the server specs of these Virtual Machines. These VMs are installed on Oracle VirtualBox running on a Linux server.
| | VM1 | VM2 | VM3 |
|---|---|---|---|
| Hostname | centos-8 | centos-7 | nfs-client |
| OS | CentOS 8.1 | CentOS 7.7 | CentOS 8 |
| IP Address | 10.10.10.12 | 10.10.10.2 | 10.10.10.16 |
| Purpose | Configure NFS Server as NFSv4 | Configure NFS Server as NFSv3 (and/or NFSv4) | Configure as NFS Client |
Install and Configure NFS Server (NFSv4) in RHEL/CentOS 7/8
- By default, the NFS server supports NFSv2, NFSv3, and NFSv4 connections in Red Hat/CentOS 7/8.
- However, you can also configure the NFS server to support only NFS version 4.0 and later.
- This minimizes the number of open ports and running services on the system, because NFSv4 does not require the rpcbind service to listen on the network.
- When your NFS server is configured as NFSv4-only, clients attempting to mount shares using NFSv2 or NFSv3 fail with an error like the following:
Requested NFS version or transport protocol is not supported.
Install nfs-utils rpm
Install the nfs-utils package:
# yum install nfs-utils
NFS configuration using /etc/nfs.conf
Starting with RHEL/CentOS 7.7, to configure NFS server you must use /etc/nfs.conf instead of /etc/sysconfig/nfs. Since we plan to only enable NFSv4, we will disable older NFS versions using /etc/nfs.conf.
[root@centos-8 ~]# vim /etc/nfs.conf
[nfsd]
vers2=n
vers3=n
vers4=y
vers4.0=y
vers4.1=y
vers4.2=y
Optionally, disable listening for the RPCBIND, MOUNT, and NSM protocol calls, which are not necessary in the NFSv4-only case. Disable related services:
[root@centos-8 ~]# systemctl mask --now rpc-statd.service rpcbind.service rpcbind.socket
Created symlink /etc/systemd/system/rpc-statd.service → /dev/null.
Created symlink /etc/systemd/system/rpcbind.service → /dev/null.
Created symlink /etc/systemd/system/rpcbind.socket → /dev/null.
After you configure the NFS server, restart it to activate the changes and enable it to start automatically after reboot. You can also check the NFS status using systemctl status nfs-server.
[root@centos-8 ~]# systemctl restart nfs-server
[root@centos-8 ~]# systemctl enable nfs-server
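Once the server is running, you can optionally double-check which protocol versions the kernel NFS server advertises. This is a quick sanity check rather than a required step; the versions file marks enabled versions with + and disabled versions with -:
[root@centos-8 ~]# cat /proc/fs/nfsd/versions
With the configuration above, the 4.x versions should appear with a + while version 3 (and version 2, where listed) should appear with a -.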
Use the netstat utility to list services listening on the TCP and UDP protocols.
The following is an example netstat output on an NFSv4-only server; listening for RPCBIND, MOUNT, and NSM is also disabled. Here, nfs is the only listening NFS service:
[root@centos-8 ~]# netstat --listening --tcp --udp | grep nfs
tcp        0      0 0.0.0.0:nfs        0.0.0.0:*       LISTEN
tcp6       0      0 [::]:nfs           [::]:*          LISTEN
Create NFS share using /etc/exports
The /etc/exports file controls which file systems are exported to remote hosts and specifies options. It follows these syntax rules:
- Blank lines are ignored.
- To add a comment, start a line with the hash mark (#).
- You can wrap long lines with a backslash (\).
- Each exported file system should be on its own individual line.
- Any lists of authorized hosts placed after an exported file system must be separated by space characters.
- Options for each of the hosts must be placed in parentheses directly after the host identifier, without any spaces separating the host and the first parenthesis.
Syntax:
export host1(options1) host2(options2) host3(options3)
In this structure:
- export: The directory being exported
- host: The host or network to which the export is being shared
- options: The options to be used for host
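As an illustration of this syntax (the directory, network, and hostname here are made-up examples, not part of the lab setup), a single line can grant different options to different hosts:
/srv/projects 10.10.10.0/24(rw,sync) backuphost.example.com(ro)
This would export /srv/projects read-write to every host in the 10.10.10.0/24 network and read-only to backuphost.example.com.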
I have a folder /nfs_shares which we will share on our NFS server:
[root@centos-8 ~]# mkdir /nfs_shares
In this NFS configuration guide, we create the NFS share /nfs_shares to the world (*) with rw and no_root_squash permissions:
[root@centos-8 ~]# cat /etc/exports
/nfs_shares *(rw,no_root_squash)
The list of supported options which we can use in /etc/exports for the NFS server:
- secure: The port number from which the client requests a mount must be lower than 1024. This permission is on by default. To turn it off, specify insecure instead.
- ro: Allows read-only access to the partition. This is the default permission whenever nothing is specified explicitly.
- rw: Allows normal read/write access.
- noaccess: The client will be denied access to all directories below /dir/to/mount. This allows you to export the directory /dir to the client and then to specify /dir/to as inaccessible without taking away access to something like /dir/from.
- root_squash: This permission prevents remote root users from having superuser (root) privileges on remote NFS-mounted volumes. Here, squash literally means to squash the power of the remote root user.
- no_root_squash: This allows the root user on the NFS client host to access the NFS-mounted directory with the same rights and privileges that the superuser would normally have.
- all_squash: Maps all user IDs (UIDs) and group IDs (GIDs) to the anonymous user. The opposite option is no_all_squash, which is the default setting.
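To make these options concrete, a hypothetical export line (the client addresses are only examples, not part of the lab configuration) could combine several of them:
/nfs_shares 10.10.10.16(rw,no_root_squash) 10.10.10.0/24(ro,all_squash)
Here the single client 10.10.10.16 gets read-write access with root privileges preserved, while the rest of the subnet gets read-only access with all users mapped to the anonymous user.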
Once you have an /etc/exports file set up, use the exportfs command to tell the NFS server processes to refresh NFS shares.
To export all file systems specified in the /etc/exports file:
[root@centos-8 ~]# exportfs -a
Use exportfs -r to refresh shares and re-export all directories (optional, as we have already used exportfs -a):
[root@centos-8 ~]# exportfs -r
To view and list the available NFS shares use exportfs -v
[root@centos-8 ~]# exportfs -v
/nfs_shares (sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
After making any changes to /etc/exports, you don't need to restart nfs-server; you can use exportfs -r to update the exports content, or alternatively you can execute systemctl reload nfs-server to refresh the /etc/exports content. Here,
- -r: Re-exports all entries in the /etc/exports file. This synchronizes /var/lib/nfs/xtab with the contents of the /etc/exports file. For example, it deletes entries from /var/lib/nfs/xtab that are no longer in /etc/exports and removes stale entries from the kernel export table.
- -a: Exports all entries in the /etc/exports file. It can also be used to unexport the exported file systems when used along with the -u option, for example exportfs -ua.
- -v: Prints the existing shares.
- -u: Unexports a specific directory for a given host, for example exportfs -u clientA:/dir/to/mount.
For the complete list of supported options with exportfs, refer to the man page of exportfs.
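As a quick illustration of the -u and -a options described above (a sketch, not part of the required configuration steps; exportfs -ua withdraws every share, so re-run exportfs -a afterwards to restore them):
[root@centos-8 ~]# exportfs -ua
[root@centos-8 ~]# exportfs -v
[root@centos-8 ~]# exportfs -a
The exportfs -v in between should print nothing, confirming that no shares are currently exported.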
Allow NFS server services with firewalld
We will add all the NFS services to our firewalld rules to allow NFS server-client communication.
[root@centos-8 ~]# firewall-cmd --permanent --add-service mountd
success
[root@centos-8 ~]# firewall-cmd --permanent --add-service nfs
success
[root@centos-8 ~]# firewall-cmd --reload
success
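If you want to confirm the firewall rules took effect (an optional check, not part of the original steps), list the services allowed in the active zone; the output should now include nfs and mountd:
[root@centos-8 ~]# firewall-cmd --list-services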
Access NFS shares temporarily (non-persistent)
- We will next use the mount command to access NFS shares on the Linux client.
- In this NFS configuration guide example, we have explicitly defined additional options with the -o argument to choose NFSv4 as the preferred option to mount the NFS share.
- Since we are using the mount command, the changes will not be persistent across reboots.
- The /nfs_shares from the NFS server (centos-8) will be mounted on /mnt on the nfs-client.
[root@nfs-client ~]# mount -o nfsvers=4 10.10.10.12:/nfs_shares /mnt
If I try to access the NFS share using NFSv3, the client fails to mount it after waiting for the timeout period, since we have restricted the NFS server to only allow NFSv4 connections.
[root@nfs-client ~]# mount -o nfsvers=3 10.10.10.12:/nfs_shares /mnt
mount.nfs: No route to host
We can use the mount command to list NFS mount points on nfs-client:
[root@nfs-client ~]# mount | grep nfs
10.10.10.12:/nfs_shares on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.10.10.16,local_lock=none,addr=10.10.10.12)
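Alternatively (an optional variation on the same check), findmnt can filter by file system type and show only the NFSv4 mounts in tabular form:
[root@nfs-client ~]# findmnt -t nfs4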
To remove NFS share access you can unmount the mount point
[root@nfs-client ~]# umount /mnt
Allow permanent access to NFS shares (Persistent)
To access NFS shares persistently, i.e. across reboots, you can add the mount point details to /etc/fstab. But be cautious before using this, as it means your NFS server must always be accessible; if the NFS server is unreachable during the boot stage of the NFS client, your client may fail to boot.
Add NFS mount point details in /etc/fstab in the below format. Here 10.10.10.12 is my NFS server. I have added some additional mount options rw and soft to access the NFS shares.
I have given rw permission in my NFS configuration steps, hence I am using rw on the client; if you have a read-only NFS share then use ro in the mount options accordingly.
10.10.10.12:/nfs_shares /mnt nfs rw,soft 0 0
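If the boot-hang risk mentioned above is a concern, the standard mount options _netdev and nofail (generic mount/systemd options, not part of the original steps) can be added so that boot does not block on an unreachable NFS server; a sketch of such an entry:
10.10.10.12:/nfs_shares /mnt nfs rw,soft,_netdev,nofail 0 0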
Next execute mount -a to mount all the partitions from /etc/fstab
[root@nfs-client ~]# mount -a
Check if the mount was successful and you can access NFS share on the client.
[root@nfs-client ~]# mount | grep /mnt
10.10.10.12:/nfs_shares on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.10.10.16,local_lock=none,addr=10.10.10.12)
Install and Configure NFS Server (NFSv3) in RHEL/CentOS 7/8
NFSv2 and NFSv3 rely heavily on RPCs to handle communications between clients and servers. RPC services in Linux are managed by the portmap service (implemented by rpcbind on RHEL/CentOS 7/8).
The following list shows the various RPC processes that facilitate the NFS service under Linux:
rpc.statd
- This process is responsible for sending notifications to NFS clients whenever the NFS server is restarted without being gracefully shut down.
- It provides status information about the server to rpc.lockd when queried.
- This is done via the Network Status Monitor (NSM) RPC protocol. It is an optional service that is started automatically by the nfslock service.
- It is not required in NFSv4.
rpc.rquotad
- As its name suggests, rpc.rquotad supplies the interface between NFS and the quota manager.
- NFS users/clients will be held to the same quota restrictions that would apply to them if they were working on the local file system instead of via NFS.
- It is not required in NFSv4.
rpc.mountd
- When a request to mount a partition is made, the rpc.mountd daemon takes care of verifying that the client has the appropriate permission to make the request.
- This permission is stored in the /etc/exports file.
- It is automatically started by the NFS server init scripts.
- It is not required in NFSv4.
rpc.nfsd
- The main component of the NFS system, this is the NFS server/daemon.
- It works in conjunction with the Linux kernel either to load or unload the kernel module as necessary.
- It is, of course, still relevant in NFSv4.
rpc.lockd
- The rpc.statd daemon uses this daemon to handle lock recovery on crashed systems.
- It also allows NFS clients to lock files on the server.
- The nfslock service is no longer used in NFSv4.
rpc.idmapd
- This is the NFSv4 ID name-mapping daemon.
- It provides this functionality to the NFSv4 kernel client and server by translating user and group IDs to names, and vice versa.
rpc.svcgssd
- This is the server-side rpcsec_gss daemon.
- The rpcsec_gss protocol allows the use of the gss-api generic security API to provide advanced security in NFSv4.
rpc.gssd
- This provides the client-side transport mechanism for the authentication mechanism in NFSv4 and higher.
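Once the NFSv3 server is configured in the following steps, you can query the portmapper on it to see which of these RPC services are actually registered (the exact program list and port numbers will vary from system to system):
[root@centos-7 ~]# rpcinfo -p
The output typically lists portmapper, status, mountd, nlockmgr, and nfs along with the protocol and port each one uses.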
Install nfs-utils and rpcbind to setup NFSv3
We will install nfs-utils and additionally we will also need rpcbind to configure NFS server (NFSv3) in Red Hat/CentOS 7/8 Linux:
[root@centos-7 ~]# yum -y install nfs-utils rpcbind
On Debian and Ubuntu, install the following packages instead:
# apt-get -y install nfs-common nfs-kernel-server rpcbind
Start nfs-server and rpcbind services and check NFS status
We do not need any additional NFS configuration to set up a basic NFS server. But you can check /etc/sysconfig/nfs (if using RHEL/CentOS 7.6 and earlier) or /etc/nfs.conf (if using RHEL/CentOS 7.7 or higher) for any customization.
[root@centos-7 ~]# systemctl enable nfs-server --now
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@centos-7 ~]# systemctl enable rpcbind
Check the status of the nfs-server and rpcbind services to make sure they are active and running:
[root@centos-7 ~]# systemctl status nfs-server
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Active: active (exited) since Sat 2020-04-18 17:03:24 IST; 8s ago
Main PID: 1999 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs-server.service
Apr 18 17:03:24 centos-7.example.com systemd[1]: Starting NFS server and services...
Apr 18 17:03:24 centos-7.example.com systemd[1]: Started NFS server and services.
systemd automatically starts rpcbind (as a dependency) whenever the NFS server is started, so you don't need to explicitly start rpcbind separately.
[root@centos-7 ~]# systemctl status rpcbind
● rpcbind.service - RPC bind service
Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2020-04-18 17:03:18 IST; 20s ago
Main PID: 1982 (rpcbind)
CGroup: /system.slice/rpcbind.service
└─1982 /sbin/rpcbind -w
Apr 18 17:03:18 centos-7.example.com systemd[1]: Starting RPC bind service...
Apr 18 17:03:18 centos-7.example.com systemd[1]: Started RPC bind service.
Check the netstat output for listening TCP and UDP ports:
[root@centos-7 ~]# netstat -ntulp | egrep nfs\|rpc
tcp        0      0 0.0.0.0:42725      0.0.0.0:*       LISTEN      1991/rpc.statd
tcp        0      0 0.0.0.0:20048      0.0.0.0:*       LISTEN      1995/rpc.mountd
tcp6       0      0 :::20048           :::*            LISTEN      1995/rpc.mountd
tcp6       0      0 :::44816           :::*            LISTEN      1991/rpc.statd
udp        0      0 0.0.0.0:880        0.0.0.0:*                   1982/rpcbind
udp        0      0 127.0.0.1:895      0.0.0.0:*                   1991/rpc.statd
udp        0      0 0.0.0.0:20048      0.0.0.0:*                   1995/rpc.mountd
udp        0      0 0.0.0.0:51945      0.0.0.0:*                   1991/rpc.statd
udp6       0      0 :::880             :::*                        1982/rpcbind
udp6       0      0 :::20048           :::*                        1995/rpc.mountd
udp6       0      0 :::42581           :::*                        1991/rpc.statd
You can compare this output with the NFSv4 setup; there are more ports and services running with NFSv3 compared to NFSv4.
Create NFS Shares
Next we will create a directory which we can share over NFS server. In this NFS configuration guide, I will create a new directory /nfs_shares to share for NFS clients.
[root@centos-7 ~]# mkdir /nfs_shares
The syntax and procedure to create an NFS share are the same for NFSv4 and NFSv3.
Syntax:
export host1(options1) host2(options2) host3(options3)
In this structure:
- export: The directory being exported
- host: The host or network to which the export is being shared
- options: The options to be used for host
In this NFS configuration guide, we create the NFS share /nfs_shares to the world (*) with rw and no_root_squash permissions:
[root@centos-7 ~]# cat /etc/exports
/nfs_shares *(rw,no_root_squash)
The list of options supported with the NFSv3 configuration remains the same as I shared under the NFSv4 section of this article.
Refresh NFS shares
Once you configure the NFS server and have an /etc/exports file set up, use the exportfs command to tell the NFS server processes to refresh NFS shares.
To export all file systems specified in the /etc/exports file:
[root@centos-7 ~]# exportfs -a
Alternatively, you can use exportfs -r as it re-exports all the shares. The list of options supported with exportfs is the same for NFSv3 and NFSv4, as I shared above in this article.
List the currently exported NFS shares on the server. This command will also show the default permissions applied to the NFS share:
[root@centos-7 ~]# exportfs -v
/nfs_shares (sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
Allow NFS server services with firewalld
You can get the list of NFS and rpcbind ports used by NFSv3 from the netstat output we shared earlier; instead of opening individual ports, we will use the firewalld service definitions to allow firewall access for NFSv3:
[root@centos-7 ~]# firewall-cmd --permanent --add-service nfs
success
[root@centos-7 ~]# firewall-cmd --permanent --add-service mountd
success
[root@centos-7 ~]# firewall-cmd --permanent --add-service rpc-bind
success
Reload the firewall service to make the changes persistent
[root@centos-7 ~]# firewall-cmd --reload
success
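Before mounting, you can verify from the client that the share is visible over the network (showmount queries mountd via rpcbind, which we just allowed through the firewall; the output simply lists the exported paths and the allowed hosts):
[root@nfs-client ~]# showmount -e 10.10.10.2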
Access NFS shares temporarily (non-persistent)
- We will next use mount command to access NFS shares on Linux client.
- These changes will not survive reboot and will be non-persistent.
- In this NFS configuration guide example, we have explicitly defined additional options with the -o argument to choose NFSv3 as the preferred option to mount the NFS share.
[root@nfs-client ~]# mount -o nfsvers=3 10.10.10.2:/nfs_shares /mnt
Check if the mount was successful
[root@nfs-client ~]# mount | grep /mnt
10.10.10.2:/nfs_shares on /mnt type nfs (rw,relatime,vers=3,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.10.10.2,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.10.10.2)
If I try to access NFS shares using NFSv4
[root@nfs-client ~]# mount -o nfsvers=4 10.10.10.2:/nfs_shares /mnt
As you see, the client was allowed to access the NFS share even with NFSv4; since we have not restricted our NFS server to only use NFSv3, it is allowing NFSv4 connections as well.
[root@nfs-client ~]# mount | grep /mnt
10.10.10.2:/nfs_shares on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.10.10.16,local_lock=none,addr=10.10.10.2)
You can use the same list of commands to list NFS mount points for NFSv3 mounts on the clients as I listed under NFSv4.
To remove NFS share access you can unmount the mount point
[root@nfs-client ~]# umount /mnt
Allow permanent access to NFS shares (Persistent)
To access NFS shares persistently, i.e. across reboots, you can add the mount point details to /etc/fstab. But be cautious before using this, as it means your NFS server must always be accessible; if the NFS server is unreachable during the boot stage of the NFS client, your client may fail to boot.
Add the NFS mount point details in /etc/fstab in the below format. Here 10.10.10.2 is my NFS server. I have added the mount options defaults, soft, and nfsvers=3 to access the NFS share only with the v3 protocol.
10.10.10.2:/nfs_shares /mnt nfs defaults,soft,nfsvers=3 0 0
Next execute mount -a to mount all the partitions from /etc/fstab:
[root@nfs-client ~]# mount -a
Check if the mount was successful and you can access NFS share on the client.
[root@nfs-client ~]# mount | grep /mnt
10.10.10.2:/nfs_shares on /mnt type nfs (rw,relatime,vers=3,rsize=524288,wsize=524288,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.10.10.2,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.10.10.2)
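To confirm from the client side that the mount is really using version 3 (an optional check; nfsstat is part of nfs-utils), you can inspect the per-mount NFS options, which will include vers=3 for this mount:
[root@nfs-client ~]# nfsstat -m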
Lastly, I hope the steps from this article to install and configure the NFS server and client using NFSv3 and NFSv4 on Red Hat and CentOS 7/8 Linux were helpful. Let me know your suggestions and feedback using the comment section.