Serving Files with NFS
Actually serving files requires telling the
NFS server what directories you want to export
(that is, make available to others) and which clients should have access to
specific directories. You can also include options that affect access control
and other important server features. To mount an NFS server's exports from a
client, you use the mount command, but instead of specifying a local device file, you point
the utility at an NFS server and provide the name of the directory you want to
mount.
Defining NFS Exports
Linux uses the /etc/exports file to
control the NFS server. This file consists of a series of lines, each of which
defines a single directory to be exported. Each line has the following format:

/path/to/export client1(options) [client2(options) [...]]
The /path/to/export is the name of the directory you wish to export, such as /home or /usr/X11R6. You can list any directory you like, but of course some directories aren't useful or would be security risks when exported. For instance, exporting /etc or /proc could be potentially dangerous, because remote users might be able to view or modify sensitive system-specific information. You might think that exporting /dev would give remote users access to the server's devices, but this isn't so; device files always refer to devices on the local computer, so a /dev export would just give users a duplicate means of accessing the client's devices. These files might be named strangely or point to the wrong hardware if the export were mounted on a different OS than the server uses. Such access can also be a potential security risk to the client system, if a user can create device files on the server with lax permissions. (The nodev mount option, described later, addresses this issue.)
You list clients singly or via wildcards. Possibilities include the following:

No name: If you provide only a list of options in parentheses, any client may connect to the export. This configuration is extremely insecure, and so isn't normally used, except occasionally when restricting access to a directory, as described shortly.

Single computer name: You can specify a single computer name, such as larch or larch.threeroomco.com, to allow that computer access to the share. If you don't include the domain name, the server's own local domain is assumed.

Wildcards: You can use question mark (?) and asterisk (*) wildcards to represent single characters or a group of characters in a computer name, as in *.threeroomco.com to provide access to all computers in the threeroomco.com domain. Wildcards don't match dots (.), though, so in this example, computers in threeroomco.com subdomains, such as mulberry.bush.threeroomco.com, won't match.

NIS netgroup: If your network uses a Network Information Service (NIS) server, you can specify an NIS netgroup by preceding the name with an at-sign (@).

Network by IP address: You can specify a restricted group of computers by IP address by listing a network address and netmask, as in 172.19.0.0/255.255.0.0. You may also specify the netmask as a single number of bits, as in 172.19.0.0/16. You may omit the netmask if you want to specify a single computer by IP address.

As a general rule, it's safest to specify computers by IP address, because hostnames and NIS netgroup names can be altered if the DNS or NIS server is compromised. IP addresses can also be faked, particularly if an intruder has physical access to your network, but using IP addresses eliminates one possible method of attack. On the other hand, using IP addresses can be inconvenient, and may complicate matters if clients' IP addresses change frequently, as when they're assigned via DHCP, as discussed in Chapter 5.

TIP

Specifying individual clients in this way
may seem redundant with blocking access to the portmapper via TCP Wrappers,
as described earlier. This is partially correct, in that both methods should restrict access to the server. There could
be bugs or a misconfiguration in one method or another, though, so this
redundancy isn't a bad thing. In fact, imposing additional blocks via packet
filter rules (as described in Chapter 25, Configuring iptables) is advisable.
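As a sketch of such packet filter rules, assuming the iptables tool and clients confined to the 192.168.4.0/24 network (an illustrative address, not one required by NFS), you might accept NFS traffic from the local network and drop it from everywhere else:

# iptables -A INPUT -p udp --dport 2049 -s 192.168.4.0/24 -j ACCEPT
# iptables -A INPUT -p tcp --dport 2049 -s 192.168.4.0/24 -j ACCEPT
# iptables -A INPUT -p udp --dport 2049 -j DROP
# iptables -A INPUT -p tcp --dport 2049 -j DROP

NFS itself uses port 2049; a complete filter set would protect the portmapper (port 111) and related RPC servers as well, as described in Chapter 25.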
NOTE

Some Linux distributions now ship with
firewalls enabled by default, or easily configured at installation time. Some
of these, such as those in Red Hat, have been known to block access to NFS
servers, and some don't make it easy to open this access. If you're having
problems with NFS access, you may want to consult Chapter 25 to
learn how to examine and modify your system's firewall rules.
You can specify a different set of options for each client or set of clients. These options appear in parentheses following the computer specification, and they're separated from each other by commas. Many of these options set access control features, as described shortly in the section "Access Control Mechanisms." Others relate to general performance issues or server defaults. Examples of general options include the following:

sync and async: These options force synchronous or asynchronous operation, respectively. Asynchronous writes allow the server to tell the client that a write operation is complete before the disk operations have finished. This process results in faster operation, but is potentially risky because a server crash could result in data loss. NFSv2 doesn't officially support asynchronous operation, but the Linux NFS server implements this feature despite this fact. NFSv3 does support an asynchronous option, and requires the client to buffer data to reduce the risk. The default for this option is async, although beta-test versions of Linux's NFSv3 support ignored it.

wdelay and no_wdelay: By default, Linux's NFS server may delay writing data to disk if it suspects that a related request is underway or imminent. This improves performance in most situations. You can disable this behavior with the no_wdelay option, or explicitly request the default with wdelay.
Access Control Mechanisms
Many of the options you specify for individual clients in /etc/exports relate to access control. As noted earlier, NFS uses a trusted hosts security model, so you can't control access to specific exports or files via user names and passwords as you can with Samba; if the client's security can be trusted, the client will apply standard UNIX-style ownership and permissions to file access. Security-related /etc/exports options include the following:

secure and insecure: By default, the NFS server requires that access attempts originate from secure ports; that is, ports numbered below 1024. On a UNIX or Linux system, such ports can normally only be used by root, whereas anybody may use ports with higher numbers. Thus, allowing access from higher ports (as can be done with the insecure option) provides greater opportunity for ordinary users on the client to abuse the server, but also allows you to run NFS test client programs as an ordinary user.

ro and rw: The ro and rw options specify read-only and read-write access to the export, respectively. The knfsd kernel-enabled server defaults to ro, but older servers default to rw. I recommend explicitly specifying one option or the other to avoid confusion or errors.

hide and nohide: Suppose your NFS server stores the /usr directory tree on its own partition, and /usr/local is on another partition. If you export /usr, is /usr/local also exported? The default has varied with different NFS servers in the past, and the 2.2.x kernel included an option to set the default. Recent NFS servers include the hide and nohide options to hide a mounted partition or not hide it, respectively. Some clients don't cope well with unhidden mounted partitions, so you may want to set the hide option and explicitly export the mounted partition (/usr/local in this example). The client can then explicitly mount both exports.

noaccess: This option disables access to a directory, even if the directory is a subdirectory of one that's been explicitly exported. For instance, suppose you want to export the /home directory tree, except for /home/abrown. You could create an ordinary /etc/exports line to export /home, then create a separate /etc/exports line for /home/abrown that includes the noaccess option. The end result is an inability to access /home/abrown.

subtree_check and no_subtree_check: Suppose you export a subdirectory of a partition, but not the entire partition. In this case, the NFS server must perform extra checks to ensure that all client accesses are to files in the appropriate subdirectory only. These subtree checks slow access slightly, but omitting them could result in security problems in some situations, as when a file is moved from the exported subtree to another area. You can disable the subtree check by specifying the no_subtree_check option, or explicitly enable it with subtree_check (the latter is the default). You might consider disabling subtree checks if the exported directory corresponds exactly to a single partition.

root_squash and no_root_squash: By default, the NFS server squashes access attempts that originate from the client's root user. This means that the server treats the accesses as if they came from the local anonymous user (described shortly). This default improves security because it denies root privileges to other systems, which might be compromised. If you need to allow the remote administrator local root privileges to an export, you can do so by using the no_root_squash option. This might be required in some network backup situations, for example.

all_squash and no_all_squash: Normally, accesses from ordinary users should not be squashed, but you might want to enable this option on some particularly sensitive exports. You can do this with the all_squash option; no_all_squash is the default.

anonuid and anongid: The anonymous user, used for squashing, is normally nobody. You can override this default by specifying a user ID (UID) and group ID (GID) with the anonuid and anongid options, respectively. You might use this feature to give remote root users access with a particular user's privileges, for instance, or in conjunction with PC/NFS clients, which support just one local user. When using these options, follow them with equal signs (=) and a UID or GID number, as in anonuid=504.

As an example of a complete /etc/exports file, consider Listing 8.1.
This file exports two directories, /usr/X11R6 and /home. It includes a third entry to restrict access to /home/abrown by using the noaccess option. (Because this final line restricts access, it's used without explicitly specifying a host; all clients are denied access to this directory.) Both /usr/X11R6 and /home are accessible to the computer called gingko and all systems on the 192.168.4.0/24 network, but with different options. Read-only access is granted to /usr/X11R6, while clients have read/write access to /home. In the case of gingko, the anonymous user ID is set to 504 for /usr/X11R6, and no subtree checks are performed for /home.
Listing 8.1 A Sample /etc/exports File
/usr/X11R6 gingko(ro,anonuid=504) 192.168.4.0/24(ro)
/home gingko(rw,no_subtree_check) 192.168.4.0/255.255.255.0(rw)
/home/abrown (noaccess)
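After you change /etc/exports, the server must reread the file before the new settings take effect. With most Linux NFS servers you can accomplish this by rerunning the NFS startup script or, more simply, with the exportfs utility; the following command re-exports everything listed in the file:

# exportfs -ra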
Mounting NFS Exports
From the client side, NFS exports work much like disk partitions. Specifically, you mount an export using the mount command, but rather than specify a partition's device filename, you provide the name of the NFS server and the directory on that server you want to mount in the form server:/path/to/export. For instance, the following command mounts the /home export from larch at /mnt/userfiles:

# mount larch:/home /mnt/userfiles
Alternatively, if you want an export to be available at all times, you can create an entry in /etc/fstab that corresponds to the mount command. As with the mount command, you substitute the server name and export path for a device filename. The filesystem type code is nfs (you can also use this with a mount command, but Linux can normally determine this automatically). For instance, the following /etc/fstab entry is equivalent to the preceding mount command:

larch:/home /mnt/userfiles nfs defaults 0 0
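With this entry in place, you can also mount the export manually by naming just the mount point; mount fills in the remaining details from /etc/fstab:

# mount /mnt/userfiles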
Users may then access files from larch's /home directory within the /mnt/userfiles directory. You can perform most operations on a mounted NFS export that you can perform on a native Linux disk partition, such as reading files, deleting files, editing files, and so on. There are a handful of operations that don't work properly on NFS exports, though. For instance, you can't use a swap file via NFS. In most cases, the performance of NFS exports won't match the performance of local filesystems; the speed of most networks doesn't match modern hard disk speed. NFS might provide superior performance if you have a particularly fast network, though, such as gigabit Ethernet, or if your clients' local hard disks are particularly old and slow. The server's disk speed and number of clients being served will also influence NFS performance.

Ownership and permissions are exported along with filenames and file contents. Thus, you and your users can use ownership and permissions much as you do locally to control access to files and directories. You can even use these schemes to control access across multiple computers; say, if a single NFS server supports several clients. There is a potentially major problem, though: NFS uses UIDs and GIDs to identify users, so if these don't match up across clients and the server, the result is confusion and possible security breaches. There are several ways around this problem, as discussed in the section "Username Mapping Options."

The upcoming sections describe some options you can give to the mount command to modify the behavior of the NFS client/server interactions with respect to performance, username mapping, and so on.
A few additional miscellaneous options include the following:

hard: If the server crashes or becomes unresponsive, programs attempting to access the server hang; they wait indefinitely for the response. This is the default behavior.

soft: If your NFS server crashes or becomes unresponsive frequently, you may want to use this option, which allows the kernel to return an error to a program after the NFS server has failed to respond for some time (set via the timeo=time option).

nodev: This option prevents the client from attempting to interpret character or block special devices on the NFS export. This can help improve security by reducing the risk of a miscreant creating a device file with lax permissions on an NFS export and using it to wreak havoc on the client.

nosuid: This option prevents the client from honoring the set user ID (SUID) bit on files on the NFS export. As with nodev, this can be an important security measure, because if a user could create an SUID root program on an NFS export, that user could potentially gain superuser access to the client.

noexec: This option prevents the client from honoring the execute bit on files on the NFS export; in other words, users can't run programs from the NFS export. This option is clearly inappropriate in some cases, such as when you're deliberately sharing a binaries directory, but it may further enhance security if the export shouldn't hold executable files.

You can include any of these options in a mount command following the -o option, as in the following example:

# mount -o noexec,nodev larch:/home /mnt/userfiles
If you create an /etc/fstab entry, place these options in the options column (where the previous /etc/fstab example lists defaults).
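For instance, a variant of the earlier /etc/fstab entry that applies the nodev and noexec protections automatically at boot time might look like this:

larch:/home /mnt/userfiles nfs nodev,noexec 0 0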
Optimizing Performance
Two of the most important performance enhancements have already been described: using the kernel's NFS support in conjunction with knfsd, and using asynchronous mode whenever possible. (The latter option imposes an increased risk of file loss in the event of a server crash, though.) Other performance enhancements include the following:

Optimizing mount transfer size options: The rsize and wsize options to mount specify the size of data blocks passed between the client and server. The defaults vary from one client and server to another, but 4096 is a typical value. You may want to adjust these values, as in mount larch:/home /mnt/userfiles -o rsize=8192. Place these options in the options column of /etc/fstab (where defaults is in the preceding example) when you want to mount an NFS export automatically.

Optimizing access time option: The noatime option to mount tells Linux not to update access time information. Ordinarily, Linux records the last time a file was accessed, as well as when it was created and changed. Omitting access-time information can improve NFS performance.

Number of running NFS servers: The NFS server startup scripts in most distributions start eight instances of the server. This number is arbitrary. On a lightly used system, it may be too high, resulting in wasted memory. On a heavily used system, it may be too low, resulting in poor performance when clients connect. You can adjust the value by editing the NFS server startup script. These frequently set the number of instances via a variable near the start of the script, such as RPCNFSDCOUNT=8.

Non-NFS performance issues: Many networking and non-networking features can influence NFS performance. For instance, if your network card is flaky or slow, you'll experience NFS performance problems. Similarly, a major NFS server relies upon its hard disks, so it's important that you have a fast hard disk, ideally driven by hardware that imposes low CPU overhead (such as a DMA-capable EIDE controller or a good SCSI host adapter; SCSI is often preferable because SCSI hard disks often outperform EIDE hard disks).

If your NFS server is experiencing poor
performance, you should first try to ascertain whether the problem lies in the
NFS server software, in the NFS client systems, in the network configuration
generally, or in some more generalized area such as disk performance. You can
do this by running performance tests using a variety of protocols and clients,
as well as entirely local tests (such as using the -t option to hdparm to test
your hard disk performance).
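For instance, assuming an EIDE disk whose device file is /dev/hda (substitute your own disk's device file), the following command times sustained reads from the disk:

# hdparm -t /dev/hda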