How do I mount a remote directory over SSH so that it is available as if it were a local directory?
7 Answers
First install the module:
sudo apt-get install sshfs
Load it into the kernel:
sudo modprobe fuse
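If you want to confirm the module actually loaded, a quick optional check is:
lsmod | grep fuse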
Setting permissions (Ubuntu versions < 16.04):
sudo adduser $USER fuse
sudo chown root:fuse /dev/fuse
sudo chmod +x /bin/fusermount
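Note that the new group membership only takes effect after you log out and back in; you can then confirm it with, for example:
groups | grep fuse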
Now we'll create a directory to mount the remote folder in.
I chose to create it in my home directory and call it remoteDir.
mkdir ~/remoteDir
Now I run the command to mount it (mounting into my home directory):
sshfs maythux@192.168.xx.xx:/home/maythuxServ/Mounted ~/remoteDir
Now it should be mounted:
cd ~/remoteDir
ls -l
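When you are done with the share, you can detach it again with a standard fusermount call (assuming the mount point from above):
fusermount -u ~/remoteDir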
Configure ssh key-based authentication
Generate key pair on the local host.
$ ssh-keygen -t rsa
Accept all suggestions with the Enter key.
Copy public key to the remote host:
$ ssh-copy-id -i .ssh/id_rsa.pub user@host
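Before mounting, it can help to confirm that key-based login actually works (you should get a shell without a password prompt):
$ ssh user@host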
Install sshfs
$ sudo apt install sshfs
Mount remote directory
$ sshfs user@host:/remote_directory /local_directory
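If you need to point sshfs at a specific key or keep the mount alive across connection drops, a possible variant is the following (these are standard sshfs/ssh options; adjust paths to your setup):
$ sshfs -o IdentityFile=~/.ssh/id_rsa,reconnect,idmap=user user@host:/remote_directory /local_directory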
Don't try to add the remote filesystem to /etc/fstab, and don't try to mount the share via /etc/rc.local.
In both cases it won't work, because the network is not yet available when init reads /etc/fstab.
Install AutoFS
$ sudo apt install autofs
Edit /etc/auto.master
Comment out the following lines
#+/etc/auto.master.d
#+/etc/auto.master
Add a new line
/- /etc/auto.sshfs --timeout=30
Save and quit
Edit /etc/auto.sshfs
Add a new line
/local_directory -fstype=fuse,allow_other,IdentityFile=/local_private_key :sshfs\#user@remote_host\:/remote_directory
The remote user name is mandatory.
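As an illustration only, with hypothetical values (user backup, host 192.0.2.10, key /root/.ssh/id_rsa, local mount point /mnt/remote), the line could look like:
/mnt/remote -fstype=fuse,allow_other,IdentityFile=/root/.ssh/id_rsa :sshfs\#backup@192.0.2.10\:/srv/data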
Save and quit
Start autofs in debug mode
$ sudo service autofs stop
$ sudo automount -vf
Observe logs of the remote ssh server
$ ssh user@remote_server
$ sudo tail -f /var/log/secure   # on Ubuntu/Debian servers the log is /var/log/auth.log
Check content of the local directory
You should see contents of the remote directory
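Listing the directory is what triggers the automount; you can also confirm it from the mount table, for example:
$ ls /local_directory
$ mount | grep sshfs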
Start autofs in normal mode
Stop AutoFS running in debug mode with Ctrl-C.
Start AutoFS in normal mode
$ sudo service autofs start
Enjoy
(Tested on Ubuntu 14.04)
Based on my experiments, explicitly creating the fuse group and adding your user to it is NOT required to mount an SSH file system.
To summarize, here are the steps copied from this page:
1. Install sshfs:
$ sudo apt-get install sshfs
2. Create a local mount point:
$ mkdir /home/johndoe/sshfs-path/
3. Mount the remote folder /remote/path to /home/johndoe/sshfs-path/:
$ sshfs remoteuser@111.222.333.444:/remote/path /home/johndoe/sshfs-path/
4. And finally, to unmount:
$ fusermount -u /home/johndoe/sshfs-path/
Install sshfs
sudo apt-get install sshfs
Add to fstab:
<USER>@<SERVER_NAME>:<server_path> <local_path> fuse.sshfs delay_connect,_netdev,user,idmap=user,transform_symlinks,identityfile=/home/<YOUR_USER_NAME>/.ssh/id_rsa,allow_other,default_permissions,rw,nosuid,nodev,uid=1000,gid=1000,nonempty 0 0
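To test the entry without rebooting, you can mount everything from fstab and check the result (the user option also lets a non-root user run mount on the local path directly):
sudo mount -a
mount | grep sshfs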
Although it does not answer your question exactly, I just want to mention that you can achieve the same goal using "sftp" as well. Just type this in your file manager's address bar:
sftp://remoteuser@111.222.333.444/remote/path
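If you prefer the terminal, the same GVfs mount can be created with gio (assuming a desktop Ubuntu with GVfs installed); it then appears under /run/user/$UID/gvfs/:
gio mount sftp://remoteuser@111.222.333.444/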
An easy way to run sshfs mounts at startup is to add them to root's (or another user's) crontab, like this:
@reboot sshfs remoteuser@111.222.333.444:/remote/path /home/johndoe/sshfs-path/
And if you need to add a delay, you can use:
@reboot sleep 60 && sshfs remoteuser@111.222.333.444:/remote/path /home/johndoe/sshfs-path/
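If the connection tends to drop, a variant of the same crontab line with the sshfs reconnect option and ssh keep-alive settings (adjust the values to taste) is:
@reboot sleep 60 && sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3 remoteuser@111.222.333.444:/remote/path /home/johndoe/sshfs-path/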
I would like to warn that, by default, it seems only the user who set up the mount can access the remote directory.
I set up a remote directory and created a crontab entry with sudo crontab -e. Later I found out the backup job didn't write to the remote directory at all. Then I discovered that I could not cd into the remote mount as root! So eventually I created the same task with crontab -e (as my own user) and everything worked as I expected.
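If you do want the mount to be visible to other users (including root), one workaround is FUSE's allow_other option, which on Ubuntu also requires user_allow_other to be enabled in /etc/fuse.conf; a rough sketch, reusing the paths from the earlier answers:
sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf
sshfs -o allow_other remoteuser@111.222.333.444:/remote/path /home/johndoe/sshfs-path/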