Setting Up An NFS Server And Client On Ubuntu 10.04

Submitted by falko on Tue, 2010-10-05 :: Ubuntu | Storage


Version 1.0
Author: Falko Timme <ft [at] falkotimme [dot] com>
Last edited 09/14/2010

This guide explains how to set up an NFS server and an NFS client on Ubuntu 10.04. NFS stands for Network File System; through NFS, a client can access (read, write) a remote share on an NFS server as if it were stored on the local hard disk.

I do not issue any guarantee that this will work for you!

 

1 Preliminary Note

I'm using two Ubuntu systems here:

  • NFS Server: server.example.com, IP address: 192.168.0.100
  • NFS Client: client.example.com, IP address: 192.168.0.101

Because all the steps in this tutorial must be run with root privileges, we can either prepend every command with sudo, or we become root right now by typing

sudo su

 

2 Installing NFS

server:

On the NFS server we run:

aptitude install nfs-kernel-server nfs-common portmap

client:

On the client we can install NFS as follows:

aptitude install nfs-common portmap
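As a quick sanity check after the installation (on either machine), you can ask the portmapper which RPC services have registered themselves; this is a sketch assuming the packages above installed cleanly and started their init scripts:

```shell
# List the RPC programs registered with portmap on this host.
# On the server you should see portmapper, mountd, nlockmgr and nfs;
# on the client at least portmapper and status.
rpcinfo -p localhost

# If the NFS server daemons are not running yet, start them:
/etc/init.d/nfs-kernel-server start
```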

 

3 Exporting Directories On The Server

server:

I'd like to make the directories /home and /var/nfs accessible to the client; therefore we must "export" them on the server.

When a client accesses an NFS share, this normally happens as the user nobody. Usually the /home directory isn't owned by nobody (and I don't recommend changing its ownership to nobody!), and because we want to read from and write to /home, we tell NFS that accesses should be made as root (if our /home share were read-only, this wouldn't be necessary). The /var/nfs directory doesn't exist yet, so we create it and change its ownership to nobody and nogroup:

mkdir /var/nfs
chown nobody:nogroup /var/nfs
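You can verify that the ownership change took effect before exporting the directory; a small check along these lines:

```shell
# Show the ownership of the new export directory;
# the output should list nobody and nogroup.
ls -ld /var/nfs

# The same information in compact form (owner:group):
stat -c '%U:%G' /var/nfs
```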

Now we must modify /etc/exports where we "export" our NFS shares. We specify /home and /var/nfs as NFS shares and tell NFS to make accesses to /home as root (to learn more about /etc/exports, its format and available options, take a look at

man 5 exports

)

vi /etc/exports

# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/home           192.168.0.101(rw,sync,no_root_squash,no_subtree_check)
/var/nfs        192.168.0.101(rw,sync,no_subtree_check)

(The no_root_squash option makes accesses to /home happen as root instead of mapping the client's root user to the anonymous user.)

Whenever we modify /etc/exports, we must run

exportfs -a

afterwards to make the changes effective.
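To see which directories are currently exported, and with which options, exportfs can also report its state; a short sketch (the -r flag additionally removes entries that were deleted from /etc/exports, which plain -a does not):

```shell
# Re-export all shares, synchronizing the kernel's export table
# with the current contents of /etc/exports:
exportfs -ra

# List the active exports together with their effective options:
exportfs -v
```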

 

4 Mounting The NFS Shares On The Client

client:

First we create the directories where we want to mount the NFS shares, e.g.:

mkdir -p /mnt/nfs/home
mkdir -p /mnt/nfs/var/nfs

Afterwards, we can mount them as follows:

mount 192.168.0.100:/home /mnt/nfs/home
mount 192.168.0.100:/var/nfs /mnt/nfs/var/nfs
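If a mount fails, it helps to ask the server from the client's side which directories it actually exports; a sketch using the showmount tool from nfs-common (the IP address is the server from this tutorial):

```shell
# Query the server's mount daemon for its export list;
# /home and /var/nfs should appear here if section 3 worked.
showmount -e 192.168.0.100
```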

You should now see the two NFS shares in the outputs of

df -h

root@client:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/server2-root
                       29G  847M   26G   4% /
none                  243M  172K  242M   1% /dev
none                  247M     0  247M   0% /dev/shm
none                  247M   48K  247M   1% /var/run
none                  247M     0  247M   0% /var/lock
none                  247M     0  247M   0% /lib/init/rw
none                   29G  847M   26G   4% /var/lib/ureadahead/debugfs
/dev/sda1             228M   17M  199M   8% /boot
192.168.0.100:/home    18G  838M   16G   5% /mnt/nfs/home
192.168.0.100:/var/nfs
                       18G  838M   16G   5% /mnt/nfs/var/nfs
root@client:~#

and

mount

root@client:~# mount
/dev/mapper/server2-root on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
none on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
none on /dev type devtmpfs (rw,mode=0755)
none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
none on /dev/shm type tmpfs (rw,nosuid,nodev)
none on /var/run type tmpfs (rw,nosuid,mode=0755)
none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
none on /var/lib/ureadahead/debugfs type debugfs (rw,relatime)
/dev/sda1 on /boot type ext2 (rw)
192.168.0.100:/home on /mnt/nfs/home type nfs (rw,addr=192.168.0.100)
192.168.0.100:/var/nfs on /mnt/nfs/var/nfs type nfs (rw,addr=192.168.0.100)
root@client:~#
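When you no longer need the shares, they can be detached again; a brief sketch, including the lazy variant for the case where the server has become unreachable and a normal umount would hang:

```shell
# Detach the NFS shares from the client:
umount /mnt/nfs/home
umount /mnt/nfs/var/nfs

# If the server is down and umount hangs, a lazy unmount removes
# the share from the filesystem hierarchy immediately and cleans
# up the references later:
umount -l /mnt/nfs/home
```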

 

5 Testing

On the client, you can now try to create test files on the NFS shares:

client:

touch /mnt/nfs/home/test.txt
touch /mnt/nfs/var/nfs/test.txt

Now go to the server and check if you can see both test files:

server:

ls -l /home/

root@server:~# ls -l /home/
total 4
drwxr-xr-x 3 administrator administrator 4096 2010-04-29 14:21 administrator
-rw-r--r-- 1 root          root             0 2010-09-14 17:11 test.txt
root@server:~#

ls -l /var/nfs

root@server:~# ls -l /var/nfs
total 0
-rw-r--r-- 1 nobody nogroup 0 2010-09-14 17:12 test.txt
root@server:~#

(Please note the different ownerships of the test files: the /home NFS share gets accessed as root, therefore /home/test.txt is owned by root; the /var/nfs share gets accessed as nobody, therefore /var/nfs/test.txt is owned by nobody.)

 

6 Mounting NFS Shares At Boot Time

Instead of mounting the NFS shares manually on the client, you could modify /etc/fstab so that the NFS shares get mounted automatically when the client boots.

client:

Open /etc/fstab and append the following lines:

vi /etc/fstab

[...]
192.168.0.100:/home  /mnt/nfs/home   nfs      rw,sync,hard,intr  0     0
192.168.0.100:/var/nfs  /mnt/nfs/var/nfs   nfs      rw,sync,hard,intr  0     0

Instead of rw,sync,hard,intr you can use different mount options. To learn more about available options, take a look at

man nfs
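A reboot is the thorough test, but you can also exercise the new fstab entries without rebooting; a sketch (mount -a attempts to mount everything listed in /etc/fstab that is not mounted yet and not marked noauto):

```shell
# Unmount the shares first if they are still mounted from section 4:
umount /mnt/nfs/home
umount /mnt/nfs/var/nfs

# Mount everything from /etc/fstab that is not yet mounted:
mount -a

# Verify that the NFS entries came back:
mount | grep nfs
```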

To test if your modified /etc/fstab is working, reboot the client:

reboot

After the reboot, you should find the two NFS shares in the outputs of

df -h

root@client:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/server2-root
                       29G  847M   26G   4% /
none                  243M  172K  242M   1% /dev
none                  247M     0  247M   0% /dev/shm
none                  247M   48K  247M   1% /var/run
none                  247M     0  247M   0% /var/lock
none                  247M     0  247M   0% /lib/init/rw
/dev/sda1             228M   17M  199M   8% /boot
192.168.0.100:/var/nfs
                       18G  838M   16G   5% /mnt/nfs/var/nfs
192.168.0.100:/home    18G  838M   16G   5% /mnt/nfs/home
root@client:~#

and

mount

root@client:~# mount
/dev/mapper/server2-root on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
none on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
none on /dev type devtmpfs (rw,mode=0755)
none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
none on /dev/shm type tmpfs (rw,nosuid,nodev)
none on /var/run type tmpfs (rw,nosuid,mode=0755)
none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
none on /var/lib/ureadahead/debugfs type debugfs (rw,relatime)
/dev/sda1 on /boot type ext2 (rw)
192.168.0.100:/var/nfs on /mnt/nfs/var/nfs type nfs (rw,sync,hard,intr,addr=192.168.0.100)
192.168.0.100:/home on /mnt/nfs/home type nfs (rw,sync,hard,intr,addr=192.168.0.100)
root@client:~#

 

Submitted by Christopher (not registered) on Tue, 2013-04-30 12:46.

Thank you very much for sharing - it really does work like charm!

Cheers,
Christopher

Submitted by Anonymous (not registered) on Tue, 2013-01-01 12:44.
Nobody talks about the boot race condition when you try to use fstab to automount nfs4 shares as hard mount points. It is almost as if nobody cares. You now have to automount in fstab with nfs4 - hard mounting in fstab no longer works. Fstab hangs waiting for the network to be up in order to hard mount nfs, but (for desktop sessions anyway) the network hangs waiting for fstab to finish automounting before bringing the network up. They both wait forever for each other to finish. NFS3 had some workaround but it was removed in nfs4. I tried the script workaround in the 'bug report' but it still does not work. Even worse, when using a flat panel tv, it may also hang with a scrambled display so you do not even know what the problem is until after the boot completes initializing the graphics too, which never happens, and you have to either guess what the problem is or attach a different display to find out. Nice job.
Submitted by Anonymous (not registered) on Wed, 2013-03-13 23:38.
When hard mounting nfs on the desktop you can use the 'bg' switch in fstab to make nfs mount in the background. This avoids the race condition where the network and fstab wait for each other forever (and yes, if your monitor is also your tv you will not see any error messages, because the tv graphics are not up yet either at this point in the boot). With 'bg', all the hard mounted nfs shares will continue attempting to mount in the background while the network comes up on the client, and then nfs eventually connects. I have tested this and it works.

The nfs shares take a while to come up, but if you use soft links on the client to point to the shares rather than accessing their mount points directly, the links will at least block file accesses until after nfs comes up. This might prevent writing files to the mount point underneath the nfs share.

Unfortunately this only solves the client nfs/fstab boot race, but still leaves another problem, which is that any of the other boot processes that require nfs shares to be up (such as reading a user's home directory in order to initialize the shell environment at login, or accessing a local mailbox when the mail client starts, etc.) still have a race condition that persists until all the nfs mount points are up. In the case of the mail client 'firefox' it is possible to damage the mailbox if it suddenly 'appears' while firefox is in the process of initializing (because firefox will attempt to create a new mailbox if it cannot find your old one). In the case of the user's login shell, I am not sure there is any workaround, so it might make more sense to try some of the other solutions that have been presented on the 'net, such as modifying the init processing to force it to wait until nfs is up... but that is over my head, and possibly over the head of most casual home users - which is the main reason why we want hard mounted nfs in the first place!!
We would rather not waste months trying to figure out all these race conditions etc with no relevant experience to guide us and no senior colleagues one cubicle over to consult for advice. The last problem is that in the latest releases of Ubuntu, nfs seems to have many other issues (bugs?) such as: long file access delays that stall the clients, overloading the server cpu with even minimal network activity, hanging clients, memory leaks, inconsistencies in root-squash behavior between 32 bit and 64 bit clients... and I have not found a single workaround published anywhere for any of these problems, although a couple of developers claim that 'it is working fine' for them (so by extrapolation that must mean that I am doing something wrong?). More disturbing is the way some of the bug reports seem to have been 'cancelled' for lack of activity, even though nfs still is not working on my system and the symptoms are identical to those reported by myriad other users. Even so, my setup is so dirt simple, with no security or centralized login or anything, operating wide open behind a hardware firewall, that I cannot imagine what I did wrong, except not pay for tech support to tell me the magic password that makes all these new 'issues' vanish... I find it especially disconcerting when disaffected commercial users with presumably experienced administration claim to be switching to rhel in order to bring their own network admin department back online after an Ubuntu server/client upgrade failed to function normally for a whole month. If anyone has that magic password to make the long pauses in network traffic go away, please post it. I presume there is some sort of hard-coded network access, that nfs is expecting to find ldap or kerberos or whatever protocol initiated, and if that is not configured, nfs has to time out on it before waking up again and resume serving? 
Anybody know what/which protocol is no longer 'optional', or if there is some other mechanism that stalls nfs every time it serves a new file? Is it one of the block size parameters that changed the default value to a more 'sensible' setting? I presume that this issue has a solution, else how would Ubuntu still be number 3 in popularity? Or is Ubuntu on the way down a long slippery slope that begins with arrogant disregard of network issues while throwing all available resources at the problem of developing a macintosh/microsoft cloned 'user-friendly' desktop-in-the-cloud experience, and ends in oblivion? Not sure I would make such a tradeoff myself, given that the majority of the installed user base is headless network servers or internet domain hosting... and linux has never broken into the desktop market successfully yet. If Ubuntu ignores its base can it successfully break into new markets? Especially when early adopters like myself are going to expect seamless networking that exceeds Microsoft and Macintosh performance?
Submitted by Anders (not registered) on Wed, 2010-10-20 17:49.

The root_squash option (the default when no_root_squash is not set) tells the NFS daemon NOT to allow user root or group root any access to those exported filesystems from an NFS client as such user/group. If the client tries to access the file system as user root (uid/gid=0), the user or group will be mapped to anonymous (uid/gid=65534? don't have an NFS server in front of me now) instead.

 You might want to check the exports(5) man page.

Submitted by Vkamobi (not registered) on Thu, 2010-10-14 09:41.
Thanks for this post, sure it will be handy for me. I spent hours searching the web, trying to come up with a solution to solve the problem; unfortunately, nothing worked. Your post however, worked like a charm.
Thanks again for the useful info!
Submitted by carldub (not registered) on Wed, 2010-10-06 15:18.
Why do you, and other tech writers, assume that home computer users have static IP addresses?  Less than 1% do.
Submitted by Anonymous (not registered) on Fri, 2010-11-19 04:35.

You too can have static IPs on your home servers.  http://en.wikipedia.org/wiki/Private_network#Private_IPv4_address_spaces

You're only using one public IP at your home; the router performs NAT so you can use more than one system with that IP. Check the link above for the reserved private IP ranges. Log into your router and either reserve a DHCP lease for your file server (based on its MAC address), or remove a block of IPs from the DHCP scope to use for your servers.

Submitted by alvarod_silva (not registered) on Wed, 2010-10-20 19:42.

I think the writers assume that most people who use these resources are already fairly familiar with Linux or another OS.

Isn't that the reason we're all here? Sharing knowledge? If you need to be told explicitly "you have a dynamic IP address", I don't think you're ready for these howtos, do you? Or was your point simply that the guide only covers those who use static IPs?

Submitted by Anders (not registered) on Wed, 2010-10-20 18:01.

The server should always have a static address; otherwise your clients have no reliable way of knowing which host to use as the server.

Clients usually need static addresses as well (or at least one assigned to their MAC address by the DHCP server), because then the server can exert some control over who is allowed to talk to it.

Another important thing this article does not mention is that the numerical user IDs and group IDs need to be synchronized between the clients and the server; otherwise the owner of a file or directory on a client will be different than on the server.

Ownership is decided by the numerical user ID on server and client, and if different users have the same ID on client and server, they can access each other's files through NFS.

If you have a small system, you might want to synchronize manually, but for a system with three or more computers, you really want to set up Kerberos or LDAP to handle user and group IDs. With Kerberos you will also get some security for NFS, which you will not have without it.

NFS without Kerberos is fast, efficient and insecure.

Submitted by Anonymous (not registered) on Wed, 2010-10-06 04:29.

To become root on Ubuntu it's recommended you use sudo -i rather than sudo su.

On a second note, you don't mention accessing the NFS shares with a Windows client. I don't know NFS well enough to know whether it's accessible from Windows without Samba.

Submitted by alxgomz (registered user) on Fri, 2010-10-08 07:14.
Installing Windows Services for UNIX should allow you to access NFS exports with Windows clients.
Submitted by Anonymous (not registered) on Thu, 2013-03-14 06:46.
Just about every forum I have checked recommends against using Windows services to access NFS servers. The access control implemented in Windows is incompatible with that used in Unix/Linux etc. Most forums recommend Samba instead, which does the attribute translation at the 'nix side rather than leaving it up to Microsoft to handle it. I have never used Windows services for Unix so I have no direct experience but everyone I know who has set it up took it down shortly afterward and went back to Samba, including my prior employer that never even released it for general use. Never had an issue with Samba except the speed. If anyone has direct positive experience with Windows services for 'nix over NFS maybe they should comment. My impression of Windows is that it implements foreign protocols carelessly if at all.