Features of the NFS Service
This section describes the important features that are included in the NFS service.
NFS Version 2 Protocol
Version 2 was the first version of the NFS protocol in wide
use. Version 2 continues to be available on a large variety of platforms.
All Solaris releases support version 2 of the NFS protocol, but Solaris releases
prior to Solaris 2.5 support version 2 only.
NFS Version 3 Protocol
An implementation of NFS version 3 protocol was a new feature of
the Solaris 2.5 release. Several changes have been made to improve interoperability and performance.
For optimal use, the version 3 protocol must be running on both the
NFS servers and clients.
Unlike the NFS version 2 protocol, the NFS version 3 protocol can
handle files that are larger than 2 Gbytes. The previous limitation has been
removed. See NFS Large File Support.
The NFS version 3 protocol enables safe asynchronous writes on the server, which
improve performance by allowing the server to cache client write requests in memory.
The client does not need to wait for the server to commit
the changes to disk, so the response time is faster. Also, the server
can batch the requests, which improves the response time on the server.
Many Solaris NFS version 3 operations return the file attributes, which are stored
in the local cache. Because the cache is updated more often, the need
to do a separate operation to update this data arises less often. Therefore,
the number of RPC calls to the server is reduced, improving performance.
The process for verifying file access permissions has been improved. Version 2 generated
a “write error” message or a “read error” message if users tried
to copy a remote file without the appropriate permissions. In version 3, the
permissions are checked before the file is opened, so the error is reported
as an “open error.”
The NFS version 3 protocol removed the 8-Kbyte transfer size limit. Clients and
servers can negotiate whatever transfer size both support, rather than
conform to the 8-Kbyte limit that version 2 imposed. Note that in the
Solaris 2.5 implementation, the protocol defaulted to a 32-Kbyte transfer size. Starting in
the Solaris 10 release, restrictions on wire transfer sizes are relaxed. The transfer
size is based on the capabilities of the underlying transport.
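As a hedged illustration, the transfer size can still be capped explicitly at mount time with the rsize and wsize options; the server name and paths below are placeholders, and without these options the client and server negotiate the largest size that both support.

```shell
# Illustrative example: mount an NFS version 3 file system and cap the
# read and write transfer sizes at 32 Kbytes.
# "bee", /export/share, and /mnt are placeholder names.
mount -F nfs -o vers=3,rsize=32768,wsize=32768 bee:/export/share /mnt
```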
NFS Version 4 Protocol
NFS version 4 has features that are not available in the previous versions.
The NFS version 4 protocol represents the user ID and the group ID
as strings. The nfsmapid daemon is used by the client and the server to map
between these strings and the numeric IDs.
For more information, refer to nfsmapid Daemon.
Note that in NFS version 4, the ID mapper, nfsmapid, is used to
map user or group IDs in ACL entries on a server to
user or group IDs in ACL entries on a client. The reverse is
also true. For more information, see ACLs and nfsmapid in NFS Version 4.
With NFS version 4, when you unshare a file system, all the
state for any open files or file locks in that file system is
destroyed. In NFS version 3 the server maintained any locks that the clients
had obtained before the file system was unshared. For more information, refer to
Unsharing and Resharing a File System in NFS Version 4.
NFS version 4 servers use a pseudo file system to provide clients
with access to exported objects on the server. Prior to NFS version 4
a pseudo file system did not exist. For more information, refer to File-System Namespace in NFS Version 4.
In NFS version 2 and version 3 the server returned persistent file
handles. NFS version 4 supports volatile file handles. For more information, refer to Volatile File Handles in NFS Version 4.
Delegation, a technique by which the server delegates the management of a file
to a client, is supported on both the client and the server.
For example, the server could grant either a read delegation or a write
delegation to a client. For more information, refer to Delegation in NFS Version 4.
Starting in the Solaris 10 release, NFS version 4 does not support
the LIPKEY/SPKM security flavor.
Also, NFS version 4 does not use the mountd, nfslogd, and statd daemons.
For a complete list of the features in NFS version 4, refer
to Features in NFS Version 4.
For procedural information that is related to using NFS version 4, refer to
Setting Up NFS Services.
Controlling NFS Versions
The /etc/default/nfs file has keywords to control the NFS protocols that are used
by both the client and the server. For example, you can use keywords
to manage version negotiation. For more information, refer to Keywords for the /etc/default/nfs File or the nfs(4) man page.
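As a sketch, the relevant /etc/default/nfs keywords set the minimum and maximum protocol versions that the client and the server will negotiate; the values shown here are illustrative, not recommendations.

```shell
# Illustrative excerpt from /etc/default/nfs: bound the NFS versions
# that this host negotiates on the client side and the server side.
NFS_CLIENT_VERSMIN=2
NFS_CLIENT_VERSMAX=4
NFS_SERVER_VERSMIN=2
NFS_SERVER_VERSMAX=4
```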
NFS ACL Support
Access control list (ACL) support was added in the Solaris 2.5 release. ACLs
provide a finer-grained mechanism to set file access permissions than is available through
standard UNIX file permissions. NFS ACL support provides a method of changing and
viewing ACL entries from a Solaris NFS client to a Solaris NFS server.
See Using Access Control Lists to Protect Files in System Administration Guide: Security Services for more information about ACLs.
For information about support for ACLs in NFS version 4, see ACLs and nfsmapid in NFS Version 4.
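As a hedged example, the standard Solaris getfacl and setfacl commands can view and change ACL entries on an NFS-mounted file; the user name and path below are placeholders.

```shell
# Illustrative example: inspect and modify an ACL from a Solaris NFS client.
# /mnt/report.txt and user jdoe are placeholder names.
getfacl /mnt/report.txt
setfacl -m user:jdoe:r-- /mnt/report.txt   # grant jdoe read-only access
```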
NFS Over TCP
The default transport protocol for the NFS protocol was changed to the Transmission
Control Protocol (TCP) in the Solaris 2.5 release. TCP helps performance on slow
networks and wide area networks. TCP also provides congestion control and error recovery.
NFS over TCP works with version 2, version 3, and version 4. Prior
to the Solaris 2.5 release, the default NFS protocol was the User Datagram Protocol
(UDP).
Note - Starting in the Solaris 10 release, if RDMA for InfiniBand is available, RDMA
is the default transport protocol for NFS. For more information, see NFS Over RDMA. Note,
however, that if you use the proto=tcp mount option, NFS mounts are forced to
use TCP only.
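A minimal sketch of forcing TCP with the proto=tcp mount option follows; the server name and paths are placeholders.

```shell
# Illustrative example: force this NFS mount to use TCP only,
# even when RDMA is available. "bee" and the paths are placeholders.
mount -F nfs -o proto=tcp bee:/export/share /mnt
```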
NFS Over UDP
Starting in the Solaris 10 release, the NFS client no longer uses
an excessive number of UDP ports. Previously, NFS transfers over UDP used
a separate UDP port for each outstanding request. Now, by default, the
NFS client uses only one UDP reserved port. However, this support is
configurable. If the use of more simultaneous ports would increase system performance
through increased scalability, then the system can be configured to use more ports. This
capability also mirrors the NFS over TCP support, which has had this kind
of configurability since its inception. For more information, refer to the Solaris Tunable Parameters Reference Manual.
Note - NFS version 4 does not use UDP. If you mount a file
system with the proto=udp option, then NFS version 3 is used instead of version 4.
Overview of NFS Over RDMA
Starting in the Solaris 10 release, the default transport for NFS is the
Remote Direct Memory Access (RDMA) protocol, which is a technology for memory-to-memory transfer
of data over high speed networks. Specifically, RDMA provides remote data transfer directly
to and from memory without CPU intervention. To provide this capability, RDMA combines
the interconnect I/O technology of InfiniBand on SPARC platforms with the Solaris Operating System. For more
information, refer to NFS Over RDMA.
Network Lock Manager and NFS
The Solaris 2.5 release also included an improved version of the network lock
manager. The network lock manager provided UNIX record locking and PC file sharing
for NFS files. The locking mechanism is now more reliable for NFS files,
so commands that use locking are less likely to hang.
Note - The Network Lock Manager is used only for NFS version 2 and
version 3 mounts. File locking is built into the NFS version 4 protocol.
NFS Large File Support
The Solaris 2.6 implementation of the NFS version 3 protocol was changed to
correctly manipulate files that were larger than 2 Gbytes. The NFS version 2
protocol and the Solaris 2.5 implementation of the version 3 protocol could not
handle files that were larger than 2 Gbytes.
NFS Client Failover
Dynamic failover of read-only file systems was added in the Solaris 2.6 release.
Failover provides a high level of availability for read-only resources that are already
replicated, such as man pages, other documentation, and shared binaries. Failover can occur
anytime after the file system is mounted. Manual mounts can now list multiple
replicas, much like the automounter in previous releases. The automounter has not changed,
except that failover need not wait until the file system is remounted. See
How to Use Client-Side Failover and Client-Side Failover for more information.
Kerberos Support for the NFS Service
Support for Kerberos V4 clients was included in the Solaris 2.0 release. In
the 2.6 release, the mount and share commands were altered to support NFS
version 3 mounts that use Kerberos V5 authentication. Also, the share command was
changed to enable multiple authentication flavors for different clients. See RPCSEC_GSS Security Flavor for more
information about changes that involve security flavors. See Configuring Kerberos NFS Servers in System Administration Guide: Security Services for information about Kerberos V5 clients.
WebNFS Support
The Solaris 2.6 release also included the ability to make a file system
on the Internet accessible through firewalls. This capability was provided by using an
extension to the NFS protocol. One of the advantages to using the WebNFS™
protocol for Internet access is its reliability. The service is built as an
extension of the NFS version 3 and version 2 protocol. Additionally, the WebNFS
implementation provides the ability to share these files without the administrative overhead of
an anonymous ftp site. See Security Negotiation for the WebNFS Service for a description of more changes that are
related to the WebNFS service. See WebNFS Administration Tasks for more task information.
Note - The NFS version 4 protocol is preferred over the WebNFS service. NFS version
4 fully integrates all the security negotiation that was added to the MOUNT
protocol and the WebNFS service.
RPCSEC_GSS Security Flavor
A security flavor, called RPCSEC_GSS, is supported in the Solaris 7 release. This
flavor uses the standard GSS-API interfaces to provide authentication, integrity, and privacy, as
well as enabling support of multiple security mechanisms. See Kerberos Support for the NFS Service for more information about
support of Kerberos V5 authentication. See Solaris Security for Developers Guide for more information about GSS-API.
Solaris 7 Extensions for NFS Mounting
The Solaris 7 release includes extensions to the mount command and automountd command.
The extensions enable the mount request to use the public file handle
instead of the MOUNT protocol. This access method is the same one
that the WebNFS service uses. By circumventing the MOUNT protocol, the mount can
occur through a firewall. Additionally, because fewer transactions need to occur between the
server and the client, the mount should occur faster.
The extensions also enable NFS URLs to be used instead of the standard
path name. Also, you can use the public option with the mount command
and the automounter maps to force the use of the public file handle.
See WebNFS Support for more information about changes to the WebNFS service.
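As a hedged sketch, the public option and NFS URLs can be used as follows; the server name and paths are placeholders.

```shell
# Illustrative example: mount with the public file handle so the request
# bypasses the MOUNT protocol. "bee" and the paths are placeholders.
mount -F nfs -o public bee:/export/share /mnt

# An NFS URL also implies the use of the public file handle.
mount -F nfs nfs://bee/export/share /mnt
```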
Security Negotiation for the WebNFS Service
In the Solaris 8 release, a new protocol was added to enable a WebNFS client to
negotiate a security mechanism with an NFS server.
This protocol provides the ability to use secure transactions when using the WebNFS
service. See How WebNFS Security Negotiation Works for more information.
NFS Server Logging
In the Solaris 8 release, NFS server logging enables an NFS server to
provide a record of file operations that have been performed on
its file systems. The record includes information about which file was accessed, when the
file was accessed, and who accessed the file. You can specify the
location of the logs that contain this information through a set of configuration
options. You can also use these options to select the operations that should
be logged. This feature is particularly useful for sites that make anonymous FTP
archives available to NFS and WebNFS clients. See How to Enable NFS Server Logging for more information.
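As a minimal sketch, logging is enabled per shared file system through the share command, with a tag that refers to a logging configuration in /etc/nfs/nfslog.conf; the path and tag name below are placeholders.

```shell
# Illustrative example: share a file system read-only with NFS server
# logging enabled, using the "global" tag from /etc/nfs/nfslog.conf.
# /export/ftp is a placeholder path.
share -F nfs -o ro,log=global /export/ftp
```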
Note - NFS version 4 does not support server logging.
Autofs
Autofs works with file systems that are specified in the local namespace. This
information can be maintained in NIS, NIS+, or local files.
A fully multithreaded version of automountd was included in the Solaris 2.6 release.
This enhancement makes autofs more reliable and enables concurrent servicing of multiple mounts,
which prevents the service from hanging if a server is unavailable.
The new automountd also provides better on-demand mounting. Previous releases would mount an
entire set of file systems if the file systems were hierarchically related.
Now, only the top file system is mounted. Other file systems that are
related to this mount point are mounted when needed.
The autofs service supports browsability of indirect maps. This support enables a user
to see which directories could be mounted, without having to actually mount each
file system. A -nobrowse option has been added to the autofs maps so
that large file systems, such as /net and /home, are not automatically
browsable. Also, you can turn off autofs browsability on each client by using
the -n option with automount. See Disabling Autofs Browsability for more information.
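As a hedged illustration, the -nobrowse option appears in the mount-options field of an autofs master map entry; the map entries below are placeholders.

```shell
# Illustrative excerpt from the auto_master map: disable browsability
# for the large /net and /home mount points.
/net     -hosts     -nosuid,nobrowse
/home    auto_home  -nobrowse
```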