In my last article I shared the steps to encrypt a file using a GPG key in Linux. In this article I will share various commands and tools which you can use to securely copy files from one server to another in Linux. There are other ways to transfer files which I will not cover here; for example, you can also use HTTPS to upload and download files.
I may write another article with a detailed list of steps to use HTTPS and curl for secure file upload and download. If you wish to copy files between Windows and Linux then you can always use Samba, but since here we are targeting file transfer between two Linux machines, I will not cover any steps related to Samba configuration.
Some more articles on related topics you may be interested in:
- 2 commands to copy folder from local to remote server or vice versa in Linux with examples
- Step-by-Step Guide to setup SFTP chroot Jail to restrict user to a specific directory when copying files in Linux
- How to securely transfer files between two hosts using HTTPS in Linux
Using SFTP to copy file from one server to another
In computing, the SSH File Transfer Protocol (also Secure File Transfer Protocol, or SFTP) is a network protocol that provides file access, file transfer, and file management over any reliable data stream.
SFTP is easy to work with: you enter sftp along with the name of the remote system on the command line. You are prompted for the account password; then you are dropped into SFTP with the connection open and waiting.
You can also automate file transfers with SFTP in a shell script, or use one-liner SFTP commands rather than an interactive session (see the batch-mode sketch at the end of this section).
[deepak@Ban17-inst01-a ~]$ sftp -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no deepak@10.43.138.2
Password:
Once you give the password of the deepak user on the target node, you will get an sftp shell:
sftp> ls -l
-rw-r----- 1 root   root    401 Dec  7 11:53 new_key.pub
-rw-r----- 1 deepak users     9 Dec 10 14:13 pwd.txt
drwxr-xr-x 2 root   root   4096 Nov 28 11:38 scripts
-rw-r--r-- 1 root   root      0 Dec 10 14:07 test_file
Next, to copy a file from the host to the client (i.e. upload a file from the host to the client):
Here I have a file 'pwd.txt' on my host server under '/home/deepak/pwd.txt' which I wish to copy to my client's current working directory:
sftp> put /home/deepak/pwd.txt .
Uploading /home/deepak/pwd.txt to /home/deepak/./pwd.txt
/home/deepak/pwd.txt 100% 9 26.6KB/s 00:00
To copy a directory and all its contents, use -r. Here /home/deepak/mydir is available on my host machine, which I am copying to the connected client node under the current working directory:
sftp> put -r /home/deepak/mydir .
Entering /home/deepak/mydir/
So the file was successfully uploaded. You can verify the same
sftp> ls
new_key.pub pwd.txt scripts test_file
Next, copy a file from the client node to your host server. I have a file 'test_file' on my client node under '/home/deepak/test_file':
sftp> get test_file /tmp/
Fetching /home/deepak/test_file to /tmp/test_file
Validate the same on your host server
# ls -l /tmp/test_file
-rw-r----- 1 deepak deepak 0 Dec 10 14:09 /tmp/test_file
You can get more supported options from the man page of sftp.
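To automate the transfer as mentioned earlier, sftp can also read its commands from a batch file with the -b option. A minimal sketch, assuming key-based (passwordless) authentication is already set up, since batch mode cannot prompt for a password, and reusing the host and paths from the examples above:
[deepak@Ban17-inst01-a ~]$ cat /tmp/sftp_batch
put /home/deepak/pwd.txt .
get test_file /tmp/
bye
[deepak@Ban17-inst01-a ~]$ sftp -b /tmp/sftp_batch deepak@10.43.138.2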
Using RSYNC to copy file from one server to another
rsync is a utility that you can use to copy files from one server to another very easily, with many options available that let you be very specific about how you want the data transferred. Another aspect that makes rsync flexible is the many ways you can manipulate the source and target directories. You don't even have to use the network; you can also copy data from one directory to another on the same server.
Copying a file within the same server from one location to another
# rsync -r /home/deepak/mydir/test /tmp/
Here we are using -r to copy recursively. You can also use -a (archive), which retains as much metadata as possible (in most cases, it should make everything an exact copy).
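For example, the same local copy with archive mode instead (a sketch reusing the path from above):
# rsync -a /home/deepak/mydir/test /tmp/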
One of rsync's most useful properties is that whenever it runs, it copies only what is different from the last time it ran. For example, if the files were already copied in an earlier run but with the wrong permissions, a second run would only apply the correct permissions rather than re-transfer the data. Likewise, any new or updated files in the source directory since the command last ran would be copied over as well.
To copy files between two servers:
# rsync -av test deepak@10.43.138.2:/tmp/
Password:
sending incremental file list

sent 44 bytes  received 12 bytes  22.40 bytes/sec
total size is 5  speedup is 0.09
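To see the incremental behaviour described above, you can run the same command a second time; since nothing has changed, rsync exchanges only the file list and sends no file data. Two related options worth knowing from the rsync man page are -n (--dry-run), which shows what would be transferred without actually copying, and -z, which compresses data in transit; for example:
# rsync -avzn test deepak@10.43.138.2:/tmp/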
Using SCP to copy file from one server to another
A useful alternative to rsync is the Secure Copy (SCP) utility, which comes bundled with OpenSSH and can also copy files from one server to another. It allows you to quickly copy files from one node to another. If your goal is to send a single file or a small number of files to another machine, SCP is a great tool for the job. To utilize SCP, we'll use the scp command. Since you most likely already have OpenSSH installed, you should already have the scp command available.
Using SCP is very similar in nature to rsync. The command requires a source, a target, and a filename. To transfer a single file from your local machine to another, the resulting command would look similar to the following:
# scp -q -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null host_list deepak@10.43.138.2:/tmp/
Password:
host_list                                     100%   30    83.2KB/s   00:00
If you do not specify the target directory while doing scp, then the home directory of the target user will be used as the destination.
# scp -q -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null host_list deepak@10.43.138.2:
Password:
Make sure you always include at least the colon when copying a file, since if you don't include it, you'll end up copying the file to your current working directory instead of the target.
With our previous scp examples, we've only been copying a single file. If we want to transfer or download an entire directory and its contents, we will need to use the -r option, which allows us to do a recursive copy:
# scp -r -q -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null /home/deepak/mydir deepak@10.43.138.2:
Password:
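scp works in the other direction as well: to download, put the remote path first and the local destination second. A sketch reusing the same node and file (the trailing dot means the current working directory):
# scp -q -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null deepak@10.43.138.2:/tmp/host_list .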
Using NFS to copy file from one server to another
A Network File System (NFS) is a great method of sharing files between Linux or UNIX servers. I have written another article with detailed steps to setup NFSv4 and NFSv3 with examples in RHEL/CentOS 7 and 8 Linux.
On my RHEL node I have installed nfs-utils and will now set up my exports. To set up NFS, let's first create some directories that we will share to other users. Each share in NFS is known as an export.
# cat /etc/exports
/share *(rw,no_root_squash)
/share 10.0.2.0/255.255.255.0(rw,no_root_squash)
- In the first line I have given world access to the /share directory.
- In the second line, after the directory is called out, we also set which network is able to access it (10.0.2.0/255.255.255.0 in our case). This means that if you're connecting from a different network, your access will be denied.
- As far as what these options do, the first (rw) is rather self-explanatory: it allows both reading and writing.
- One option you'll see quite often in the wild is no_root_squash. Normally, the root user on one system gets mapped to nobody on the other for security reasons; in most cases, one system having root access to another is a bad idea. The no_root_squash option disables this mapping and allows the root user on one end to be treated as the root user on the other.
Check the man page for exports (man 5 exports) for more information on additional options you can pass to your exports.
Next, restart the nfs-server service on the server:
# systemctl restart nfs-server.service
To check the list of shares currently exported
# exportfs -v
/share  10.0.2.0/255.255.255.0(rw,sync,wdelay,hide,no_subtree_check,sec=sys,secure,no_root_squash,no_all_squash)
/share  <world>(rw,sync,wdelay,hide,no_subtree_check,sec=sys,secure,no_root_squash,no_all_squash)
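If a firewall is running on the NFS server, the clients must also be allowed to reach it. A sketch assuming firewalld on RHEL/CentOS (the rpc-bind and mountd services are additionally needed for NFSv3 clients):
# firewall-cmd --permanent --add-service=nfs
# firewall-cmd --permanent --add-service=rpc-bind --add-service=mountd
# firewall-cmd --reload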
Now try to mount the /share directory from the client side:
[root@node1 ~]# mount -t nfs node2:/share /mnt
So our directory mount was successful. Next, validate the content:
[root@node1 ~]# cd /mnt/
[root@node1 mnt]# ls
test1  test2
We can validate the same on our client node using the below command:
[root@node1 mnt]# mount | grep share
node2:/share on /mnt type nfs4 (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.0.2.20,local_lock=none,addr=10.0.2.21)
For our examples we have disabled the firewall and SELinux. So if you face any issues, check your iptables and SELinux rules and add rules to allow the NFS traffic.
After successfully mounting the share on the client node, you can copy the files locally to your node:
# cp -av /mnt/* /tmp/
‘/mnt/test1’ -> ‘/tmp/test1’
‘/mnt/test2’ -> ‘/tmp/test2’
Using SSHFS to copy file from one server to another
SSH Filesystem (SSHFS) is a file sharing solution similar to NFS and Samba. NFS and Samba are great solutions for designating file shares but these technologies may be more complex than necessary if you want to set up a temporary file-sharing service to use for a specific period of time. SSHFS allows you to mount a remote directory on your local machine, and have it treated just like any other directory. The mounted SSHFS directory will be available for the life of the SSH connection and can be used to copy file from one server to another.
Drawbacks of using SSHFS
- Performance of file transfers won't be as fast as with an NFS mount, since all the traffic must be encrypted and decrypted along the way
- Another downside is that you'd want to save your work regularly as you work on files within an SSHFS mount, because if the SSH connection drops for any reason, you may lose data.
SSHFS is part of the EPEL repository and can be installed using yum:
# yum -y install sshfs
For SSHFS to work, we'll need a directory on both your local Linux machine as well as a remote Linux server. SSHFS can mount any directory from the remote server where you have SSH access.
Here I am mounting /share from node2 on node1:
[root@node1 ~]# sshfs root@node2:/share /mnt
root@node2's password:
Now validate the content of /mnt and make sure the path is properly mounted:
[root@node1 ~]# cd /mnt/
[root@node1 mnt]# ls
test1  test2
[root@node1 mnt]# mount | grep share
root@node2:/share on /mnt type fuse.sshfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0)
Now you can copy the files from /mnt locally to any other directory:
[root@node1 mnt]# cp -av * /tmp/
‘test1’ -> ‘/tmp/test1’
‘test2’ -> ‘/tmp/test2’
Once your copying is complete, manually unmount the respective directory. There are two ways to do so. First, we can use the umount command as root (just like we normally would):
[root@node1 ~]# umount /mnt/
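The second way, since an SSHFS mount is a FUSE filesystem, is the fusermount command with -u, which can also be run by a non-root user who owns the mount:
[root@node1 ~]# fusermount -u /mnt/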
You can also use HTTPS for file sharing, although it would be more like file uploading and downloading via GET and PUT using the curl command.
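A minimal sketch of what that looks like with curl, assuming a web server at the hypothetical URL below that permits uploads via PUT (-T uploads the named file with a PUT request, -O saves the download under its remote name, and -k skips certificate verification for self-signed certificates):
# curl -k -T /home/deepak/pwd.txt https://10.43.138.2/uploads/pwd.txt
# curl -k -O https://10.43.138.2/uploads/pwd.txt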
Lastly, I hope the commands from this article to copy files from one server to another in Linux or Unix were helpful. So, let me know your suggestions and feedback using the comment section.
For rsync, note that rsync foo /tmp creates the directory /tmp/foo, whereas adding a trailing slash like rsync foo/ /tmp just copies the contents of foo to /tmp.
Hello, congratulations on the blog.
For big data, such as 500GB or 1TB, rsync is better than scp. Have you used it to perform backups throughout the day? How can throughput, or bandwidth, be improved? Thanks
Hello, thank you for the feedback
In our production environment we also transfer multiple files of 500GB+ each.
Generally, when you attempt to copy such a big file, the copy tool (rsync or any other tool) will eat up all the available bandwidth, so it is very important that you assign a bandwidth limit. In our network we have pre-defined a restriction of 1Gb so that other applications using the bandwidth don't get impacted.
You can use --bwlimit to limit the bandwidth with rsync, and also --compare-dest to make sure only newer data is copied (assuming the transfer was stopped for some reason).
Can we copy one public key to many systems for passwordless login to other systems?
You can follow these documents
https://www.golinuxcloud.com/pssh-public-key-authentication-passwordless/#Configure_SSH_public_key_authentication
https://www.golinuxhub.com/2014/01/how-to-create-password-less-ssh.html
Is it possible to transfer 20TB of data from one system to another system using scp?
20TB is a lot of data; theoretically I don't see any problem. I would recommend using rsync as it will perform an incremental copy. But to copy such a large chunk of data you need heavy bandwidth and memory resources on your system. With rsync, at least, you can pick up from where you left off in case the transfer fails during the transaction.