Comments on Setting Up An NFS Server And Client On Ubuntu 10.04
This guide explains how to set up an NFS server and an NFS client on Ubuntu 10.04. NFS stands for Network File System; through NFS, a client can access (read, write) a remote share on an NFS server as if it were on the local hard disk.
Comments
You too can have static IPs on your home servers. http://en.wikipedia.org/wiki/Private_network#Private_IPv4_address_spaces
You're only using one public IP at your home; the router performs NAT so you can use more than one system with that IP. Check the link above for the reserved private ranges. Log into your router and either reserve a DHCP lease for your file server (based on its MAC address), or remove a block of IPs from the DHCP scope to use for your servers.
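If you would rather set the static address on the server itself instead of through the router, a minimal sketch for Ubuntu 10.04's /etc/network/interfaces could look like this (all addresses here are made-up examples for a typical 192.168.0.x home network; adjust them to match your own router):

    # /etc/network/interfaces  (example addresses only)
    auto eth0
    iface eth0 inet static
        address 192.168.0.100
        netmask 255.255.255.0
        gateway 192.168.0.1

    # apply the change
    sudo /etc/init.d/networking restart

Just make sure the address you pick lies outside the router's DHCP range so it never gets handed out twice.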
To become root on Ubuntu it's recommended you use sudo -i rather than sudo su.
On a second note, you don't mention accessing the NFS share from a Windows client. I don't know NFS well enough to know whether it's accessible from Windows without Samba.
Installing Windows Services for UNIX should allow you to access NFS exports from Windows clients.
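For what it's worth, once Services for UNIX (or the Client for NFS feature in later Windows versions) is installed, mounting an export from a Windows command prompt should look roughly like this; the server address and export path below are placeholders, not taken from the article:

    rem list what the server exports (address is an example)
    showmount -e 192.168.0.100

    rem map an export to a drive letter
    mount \\192.168.0.100\home x:

Treat this as a sketch; I have not verified it against this particular guide's exports.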
Just about every forum I have checked recommends against using Windows Services for UNIX to access NFS servers. The access control implemented in Windows is incompatible with that used in Unix/Linux, and most forums recommend Samba instead, which does the attribute translation on the 'nix side rather than leaving it up to Microsoft to handle.

I have never used Windows Services for UNIX, so I have no direct experience, but everyone I know who set it up took it down shortly afterward and went back to Samba, including my prior employer, who never even released it for general use. I have never had an issue with Samba except the speed. If anyone has direct positive experience with Windows Services for UNIX over NFS, maybe they should comment. My impression of Windows is that it implements foreign protocols carelessly, if at all.
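For comparison, sharing the same directory out over Samba only needs a short stanza in /etc/samba/smb.conf; the share name, path and user below are made up for illustration:

    # /etc/samba/smb.conf  (illustrative share; adjust path and users)
    [nfsdata]
        path = /var/nfs
        valid users = someuser
        read only = no

Restart Samba after editing and the share appears to Windows clients as a normal network folder, with Samba doing the permission mapping on the Linux side.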
Why do you, and other tech writers, assume that home computer users have static IP addresses? Less than 1% do.
The server should always have a static address. Otherwise your clients have little control over which machine they are using as the server.
Clients usually need static addresses too (or at least one assigned to their MAC address by the DHCP server), because then the server can keep some control over who is allowed to talk to it.
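That is also why the entries in /etc/exports normally name the allowed clients explicitly. A rough sketch, with example addresses and paths rather than the article's exact ones:

    # /etc/exports  (example entries)
    /home      192.168.0.101(rw,sync,no_subtree_check)   # a single client
    /var/nfs   192.168.0.0/24(rw,sync,no_subtree_check)  # the whole LAN

    # re-read the exports after editing
    sudo exportfs -ra

With clients on random DHCP addresses you cannot pin the exports down like this.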
Another important thing this article does not mention is that the numerical user IDs and group IDs need to be synchronized between the clients and the server. Otherwise the owner of a file or directory on a client will be different from the owner on the server.
Ownership is decided by the numerical user ID on the server and the client, and if different users have the same ID on the client and the server, they can access each other's files through NFS.
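A quick way to check whether the numeric IDs line up is to run id on both machines and compare; the username below is only an example:

    # run on both the server and the client; the numbers must match
    id someuser
    # uid=1000(someuser) gid=1000(someuser) groups=1000(someuser)

    # if they differ, change the client's IDs to match the server
    # (existing files keep the old numeric owner, so fix them up too)
    sudo usermod -u 1000 someuser
    sudo groupmod -g 1000 someuser
    sudo chown -R someuser:someuser /home/someuser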
If you have a small system, you MIGHT want to synchronize manually, but for a system with three or more computers you really want to set up Kerberos or LDAP to handle user and group IDs. With Kerberos you will also get some security for NFS, which you will not have without it.
NFS without Kerberos is fast, efficient and insecure.
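For reference, Kerberized NFS is selected with the sec= option on both sides. Very roughly, and assuming a working Kerberos realm and nfs/ service principals already exist (which is the hard part this sketch skips):

    # /etc/exports on the server (example path and network)
    /home    192.168.0.0/24(rw,sync,no_subtree_check,sec=krb5p)

    # on the client (example address and mount point)
    sudo mount -o sec=krb5p 192.168.0.100:/home /mnt/home

Without the Kerberos pieces in place the mount simply fails, so this is only meant to show where the option goes.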
I think the writers assume that most people who use these resources are already fairly familiar with Linux or another OS.
Isn't that the reason we're all here, sharing knowledge? If you need to see the phrase "you have a dynamic IP address" spelled out explicitly, I don't think you're ready for these howtos, do you? Or was your point about covering those who do use static IPs?
Thanks for this post, I'm sure it will be handy for me. I spent hours searching the web trying to come up with a solution to the problem; unfortunately, nothing worked. Your post, however, worked like a charm.
Thanks again for the useful info!
By default (the root_squash behaviour), the NFS daemon does NOT allow user root or group root from an NFS client to access the exported filesystems as that user/group: if the client accesses the filesystem as root (uid/gid 0), the user or group is mapped to the anonymous user (uid/gid 65534? I don't have an NFS server in front of me right now) instead. The no_root_squash option in /etc/exports turns that mapping off, so the client's root keeps root access on the export.
You might want to check the exports(5) man page.
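To illustrate the difference (paths and addresses are invented for the example):

    # /etc/exports
    # default behaviour: root on the client is squashed to the anonymous user
    /var/nfs   192.168.0.101(rw,sync,no_subtree_check)

    # no_root_squash: root on the client keeps uid 0 on the export (use with care)
    /home      192.168.0.101(rw,sync,no_root_squash,no_subtree_check)

    # the anonymous identity itself can be changed with anonuid/anongid
    /srv/data  192.168.0.101(rw,sync,all_squash,anonuid=1000,anongid=1000,no_subtree_check)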
Nobody talks about the boot race condition when you try to use fstab to mount NFSv4 shares as hard mount points. It is almost as if nobody cares. You now have to automount in fstab with nfs4; hard mounting in fstab no longer works. fstab hangs waiting for the network to be up in order to hard-mount NFS, but (for desktop sessions, anyway) the network waits for fstab to finish mounting before coming up. They both wait forever for each other to finish. NFSv3 had a workaround, but it was removed in NFSv4, and I tried the script workaround in the 'bug report' but it still does not work.

Even worse, when using a flat-panel TV as the display it may also hang with a scrambled screen, so you do not even know what the problem is until after the boot finishes initializing the graphics, which never happens; you have to either guess what the problem is or attach a different display to find out. Nice job.
When hard-mounting NFS on the desktop you can use the 'bg' switch in fstab to make NFS mount in the background. This avoids the race condition where the network and fstab wait for each other forever (and yes, if your monitor is also your TV you will not see any error messages, because the graphics are not up yet either at this point in the boot). With 'bg', all the hard-mounted NFS shares keep trying to mount in the background while the network comes up on the client, and NFS eventually connects. I have tested this and it works; a sketch of the fstab line is below. The shares take a while to come up, but if you use soft links on the client to point to the shares rather than accessing their mount points directly, the links will at least block file accesses until after NFS comes up. This might prevent writing files to the mount point underneath the NFS share.

Unfortunately this only solves the client NFS/fstab boot race. It still leaves another problem: any other boot process that requires NFS shares to be up (such as reading a user's home directory to initialize the shell environment at login, or accessing a local mailbox when the mail client starts) has a race condition that persists until all the NFS mount points are up. In the case of the mail client 'firefox' it is possible to damage the mailbox if it suddenly 'appears' while the client is in the process of initializing, because it will attempt to create a new mailbox if it cannot find your old one. In the case of the user's login shell I am not sure there is any workaround, so it might make more sense to try some of the other solutions presented on the 'net, such as modifying the init processing to force it to wait until NFS is up... but that is over my head, and possibly over the head of most casual home users - which is the main reason we want hard-mounted NFS in the first place! We would rather not waste months trying to figure out all these race conditions with no relevant experience to guide us and no senior colleagues one cubicle over to consult for advice.

The last problem is that in the latest releases of Ubuntu, NFS seems to have many other issues (bugs?): long file-access delays that stall the clients, overloading the server CPU with even minimal network activity, hanging clients, memory leaks, inconsistencies in root-squash behavior between 32-bit and 64-bit clients... and I have not found a single workaround published anywhere for any of these problems, although a couple of developers claim that 'it is working fine' for them (so by extrapolation that must mean I am doing something wrong?). More disturbing is the way some of the bug reports seem to have been 'cancelled' for lack of activity, even though NFS still is not working on my system and the symptoms are identical to those reported by myriad other users. Even so, my setup is dirt simple, with no security or centralized login or anything, operating wide open behind a hardware firewall, so I cannot imagine what I did wrong, except not pay for tech support to tell me the magic password that makes all these new 'issues' vanish... I find it especially disconcerting when disaffected commercial users, with presumably experienced administration, claim to be switching to RHEL in order to bring their own network admin department back online after an Ubuntu server/client upgrade failed to function normally for a whole month.
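A rough sketch of such an fstab line; the server address, export path and mount point are placeholders, and with NFSv4 the remote path is relative to the server's fsid=0 pseudo-root:

    # /etc/fstab  (example only; 'bg' keeps retrying in the background)
    192.168.0.100:/home   /mnt/home   nfs4   hard,bg   0   0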
If anyone has that magic password to make the long pauses in network traffic go away, please post it. I presume there is some sort of hard-coded network access, where NFS expects to find LDAP or Kerberos or whatever protocol initiated, and if that is not configured, NFS has to time out on it before waking up and resuming service? Does anybody know which protocol is no longer 'optional', or whether there is some other mechanism that stalls NFS every time it serves a new file? Is it one of the block-size parameters whose default changed to a more 'sensible' value? (See the fstab sketch below for pinning them explicitly.)

I presume this issue has a solution; otherwise how would Ubuntu still be number 3 in popularity? Or is Ubuntu on its way down a long slippery slope that begins with arrogant disregard of network issues while throwing all available resources at a Macintosh/Microsoft-cloned 'user-friendly' desktop-in-the-cloud experience, and ends in oblivion? I am not sure I would make that tradeoff myself, given that the majority of the installed user base is headless network servers and internet domain hosting, and Linux has never yet broken into the desktop market successfully. If Ubuntu ignores its base, can it successfully break into new markets, especially when early adopters like myself expect seamless networking that exceeds Microsoft and Macintosh performance?
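If the block-size theory is worth testing, the transfer sizes can at least be pinned explicitly in the mount options instead of relying on the negotiated defaults; the numbers below are arbitrary examples, not recommendations:

    # /etc/fstab  (explicit rsize/wsize purely to rule the defaults out)
    192.168.0.100:/home   /mnt/home   nfs4   hard,bg,rsize=32768,wsize=32768   0   0

If the stalls persist with the sizes pinned, the defaults were probably not the culprit.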
Thank you very much for sharing - it really does work like a charm!
Cheers,
Christopher