Mounting Remote Directories With SSHFS On Ubuntu 11.10

Version 1.0
Author: Falko Timme
Last edited 11/01/2011

This tutorial explains how you can mount a directory from a remote server on the local server securely using SSHFS. SSHFS (Secure SHell FileSystem) is a filesystem that serves files/directories securely over SSH, and local users can use them just as if they were local files/directories. On the local computer, the remote share is mounted via FUSE (Filesystem in Userspace). I will use Ubuntu 11.10 for both the local and the remote server.

I do not issue any guarantee that this will work for you!


1 Preliminary Note

I'm using the following two systems in this tutorial:

  • Local system: server1
  • Remote system: server2

I will show how to use SSHFS as root and also as a normal user.

I'm running all the steps in this tutorial with root privileges, so make sure you're logged in as root:

sudo su


2 Installing SSHFS


On the local system, SSHFS must be installed as follows:

apt-get install sshfs
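As a quick sanity check (this helper is my own addition, not part of the original howto), you can verify that the sshfs client and the FUSE device node are available before continuing:

```shell
# check_sshfs: hypothetical helper that verifies the sshfs client binary
# and the FUSE device node are present. Prints "ok" on success, otherwise
# a short "missing ..." diagnostic and a non-zero return code.
check_sshfs() {
    command -v sshfs >/dev/null 2>&1 || { echo "missing sshfs binary"; return 1; }
    [ -e /dev/fuse ] || { echo "missing /dev/fuse device"; return 1; }
    echo "ok"
}

check_sshfs || echo "fix the problem above before continuing"
```

If /dev/fuse is missing, loading the kernel module with modprobe fuse usually creates it.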


3 Using SSHFS As root


Now I want to mount the remote directory /home/backup (on server2, owned by server2's root user) to the local /backup directory as the local root user.

First add root to the fuse group:

adduser root fuse

Create the local /backup directory and make sure it's owned by root (it should be anyway as you are running these commands as root):

mkdir /backup
chown root /backup

Then mount the remote /home/backup directory to /backup:

sshfs -o idmap=user root@server2:/home/backup /backup

You can use a full path on the remote system, as shown above, or a relative path, like this:

sshfs -o idmap=user root@server2:backup /backup

If you use a relative path, this path is relative to the remote user's home directory, so in this case it would be /root/backup. You can even leave out the remote directory, as follows:

sshfs -o idmap=user root@server2: /backup

This would then translate to the remote user's home directory - /root in this case.


The -o idmap=user option ensures that it does not matter whether the local and the remote system use different user IDs: files owned by the remote user appear to be owned by the local user as well. If you don't use this option, you might run into permission problems.

If you connect to the remote host for the first time, you will see a warning about the authenticity of the remote host (if you have connected to the remote host before using ssh or scp, you will not see the warning). In any case, you will be asked for the root password for server2:

root@server1:~# sshfs -o idmap=user root@server2:/home/backup /backup
The authenticity of host 'server2' can't be established.
ECDSA key fingerprint is a2:38:f3:df:7a:6c:b6:3c:d6:c3:9c:88:93:e2:f0:63.
Are you sure you want to continue connecting (yes/no)? <-- yes
root@server2's password: <-- server2 root password

Let's check if the remote directory got mounted to /backup:


root@server1:~# mount
/dev/mapper/server1-root on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
/dev/sda1 on /boot type ext2 (rw)
root@server2:/home/backup on /backup type fuse.sshfs (rw,nosuid,nodev,max_read=65536)

df -h

root@server1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/server1-root
                       29G 1015M   27G   4% /
udev                  238M  4.0K  238M   1% /dev
tmpfs                  99M  212K   99M   1% /run
none                  5.0M     0  5.0M   0% /run/lock
none                  247M     0  247M   0% /run/shm
/dev/sda1             228M   24M  193M  11% /boot
root@server2:/home/backup
                       29G 1019M   27G   4% /backup

Looks good!
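Instead of eyeballing the mount output, you can script the check. The function below is a sketch of my own (not part of the original howto); it greps the mount table for a fuse.sshfs entry at the given mount point:

```shell
# is_sshfs_mounted: hypothetical helper that returns 0 if the given
# mount point currently carries a fuse.sshfs mount, non-zero otherwise.
is_sshfs_mounted() {
    mount | grep -q " on $1 type fuse.sshfs"
}

if is_sshfs_mounted /backup; then
    echo "/backup is an sshfs mount"
else
    echo "/backup is not mounted via sshfs"
fi
```

This is handy in cron jobs that write to /backup, so a backup never silently lands on the local disk when the mount is gone.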

To unmount the share, run

fusermount -u /backup


3.1 Creating A Private/Public Key Pair On server1

Of course, we don't want to type in a password every time we try to mount the remote share. Therefore we create a private/public key pair and transfer the public key to server2 so that we will not be asked for a password anymore.


Create a private/public key pair on server1:


root@server1:~# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
 <-- ENTER
Enter passphrase (empty for no passphrase): <-- ENTER
Enter same passphrase again: <-- ENTER
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|                 |
|  o . . S        |
|.o = . = o       |
|.o= . o + .      |
|=.+o . .         |
|@*E.. o.         |
+-----------------+

It is important that you do not enter a passphrase; otherwise mounting will not work without human interaction, so simply hit ENTER!

Next, we copy our public key to server2:

ssh-copy-id -i $HOME/.ssh/id_rsa.pub root@server2

Now check on server2 if server1's public key has correctly been transferred:


cat $HOME/.ssh/authorized_keys

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDnz2RwCZLLqBtB1rZKyN9XVfdAdt+PSpbGeLn+vlG/5nQvCSJhkRM3vpdmHPFrcYgJGtIU4gTCg6VDox2AxzJdGsrZN6zsLCndhgbs/r7N56ucuhdKSdeM/gLocnxkdQ86EECQqq42DaXgtqz3d8Q/Z+1KxYR82p7XK5ZoQG9vovNQNx9qhxIhsYIXMAbEv61bD1e0pBP9k9c1GfrZ79iRQrV+4UhHs/+Bca1YNby4gRmKIZK4FkzOYRUWYnIKVMMteC+lNho+ZMkKioo4CR3Z02hOV7ELFapqFY+6g7sj9cpLaM9gMY3rOd4EDARU+45U9yHBPsmIlA3zh4VkdnG/
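If you want to verify this without comparing the long key string by eye, the following sketch (my own helper, not part of the original article) extracts the base64 key material of a public key file, which is the second whitespace-separated field, and looks for it in an authorized_keys file:

```shell
# key_authorized: hypothetical helper. Succeeds if the key material
# (second field) of the public key file given as the first argument
# appears in the authorized_keys file given as the second argument.
key_authorized() {
    pub="$1"
    auth="$2"
    keydata=$(awk '{print $2}' "$pub")
    grep -F -q "$keydata" "$auth"
}

# Example: key_authorized $HOME/.ssh/id_rsa.pub $HOME/.ssh/authorized_keys
```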

Now back on server1, try to mount the remote share again (make sure it's unmounted before you run the command):


sshfs -o idmap=user root@server2:/home/backup /backup

If all goes well, you should not be prompted for a password:

root@server1:~# sshfs -o idmap=user root@server2:/home/backup /backup


3.2 Mounting The Remote Share Automatically At Boot Time


If you don't want to mount the remote share manually, you can have it mounted automatically when the system boots (provided you have followed chapter 3.1; otherwise an automatic mount is not possible because you would be asked for a password). Normally we would add an entry to /etc/fstab to achieve this, but unfortunately the network isn't up yet when /etc/fstab is processed during the boot process, which means that the remote share cannot be mounted at that point.

To circumvent this, we simply add our mount command to /etc/rc.local, which is the last file to be processed in the boot process, and at that time the network is up and running:

vi /etc/rc.local

#!/bin/sh -e
# rc.local
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
# In order to enable or disable this script just change the execution
# bits.
# By default this script does nothing.

/usr/bin/sshfs -o idmap=user root@server2:/home/backup /backup
exit 0
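If the network takes a moment to come up even at rc.local time, a variant with a small retry loop is more robust. This is only a sketch under the assumptions of this tutorial (remote host server2, share /home/backup, key-based root login as set up in chapter 3.1):

```shell
#!/bin/sh -e
# /etc/rc.local variant: retry the sshfs mount a few times in case the
# network is not immediately ready at boot.
for attempt in 1 2 3 4 5; do
    if /usr/bin/sshfs -o idmap=user root@server2:/home/backup /backup; then
        break                 # mounted successfully, stop retrying
    fi
    sleep 5                   # wait before the next attempt
done
exit 0
```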

You can test this by simply rebooting your system:

reboot

After the reboot, you can check with

df -h

if the remote share got mounted.

6 Comment(s)



From: Anonymous at: 2011-12-14 13:36:58

Nice article. I'd expect that the filesystem could be mounted at boot time with fstab using the _netdev option as an alternative to mounting with rc.local?

From: Hemo at: 2011-12-29 22:44:41

Why not use

 to store scripts to mount and unmount the share when the network goes up/down.

 I am using this method to attach to a remote system with sshfs whenever my vpn connection is active and it works great. 

From: Anonymous at: 2012-04-08 02:09:54

I just tried this on ubuntu 12.04 precise and was not getting the password-less access until I generated a key on server2 and put it in the authorized_keys of server1.


From: Mondane at: 2012-10-17 13:42:01

Here's an easier method for automounting sshfs:

From: Evaggelos Balaskas at: 2011-12-06 10:50:45

I am using a SOCKS proxy at my office. This of course isn't a bug but a feature!

This is how I've managed to use sshfs with a SOCKS proxy


From: Bob Bowles at: 2012-06-19 12:37:38

Hi, Many Thanks for this thread. I have 2 questions (BTW I am using Ubuntu 12.04 LTS on both local and remote machines, not 11.10):

1) As root, I found the process does not work because in Ubuntu the root password is locked. It does not seem to be possible to use this way of mounting unless the root password has been unlocked, thus removing a significant security feature. How did you succeed using this method? Was server2 using a different distro?

2) As an ordinary user I found that the ownership of the mount point was changed by the sshfs mount process. The mount point was always reset to ownership by root, so I am unable to use the mount as a normal user, even though I performed the mount as the ordinary user. Is there something I am doing wrong, or is this a bug somewhere in sshfs?