AIX Version 4.3 System Management Guide: Communications and Networks

Network File System Overview

The Network File System (NFS) is a distributed file system that allows users to access files and directories located on remote computers and treat those files and directories as if they were local. For example, users can use operating system commands to create, remove, read, write, and set file attributes for remote files and directories.

The NFS software package includes commands and daemons for NFS, Network Information Service (NIS), and other services. Although NFS and NIS are installed together as one package, each is independent and each is configured and administered individually. See "Network Information Service (NIS) Overview" for details on NIS.

AIX supports the latest NFS protocol update, NFS Version 3. AIX also provides an NFS Version 2 client and server and is, therefore, backward compatible with an existing install base of NFS clients and servers.

The following topics are discussed in this section:

NFS Services

NFS Access Control Lists (ACL) Support

Cache File System Support

NFS Mapped File Support

Three Types of Mounts

NFS Mounting Process

The /etc/exports File

The /etc/xtab File

Implementation of NFS

Controlling NFS

NFS Services

NFS provides its services through a client-server relationship. The computers that make their file systems, or directories, and other resources available for remote access are called servers. The act of making file systems available is called exporting. The computers, or the processes they run, that use a server's resources are considered clients. Once a client mounts a file system that a server exports, the client can access the individual server files (access to exported directories can be restricted to specific clients).

The major services provided by NFS are:

Mount service: Provided by the /usr/sbin/rpc.mountd daemon on the server and the /usr/sbin/mount command on the client.
Remote file access: Provided by the /usr/sbin/nfsd daemon on the server and the /usr/sbin/biod daemon on the client.
Remote execution service: Provided by the /usr/sbin/rpc.rexd daemon on the server and the /usr/bin/on command on the client.
Remote system statistics service: Provided by the /usr/sbin/rpc.rstatd daemon on the server and the /usr/bin/rup command on the client.
Remote user listing service: Provided by the /usr/lib/netsvc/rusers/rpc.rusersd daemon on the server and the /usr/bin/rusers command on the client.
Boot parameters service: Provides boot parameters to SunOS diskless clients from the /usr/sbin/rpc.bootparamd daemon on the server.
Remote wall service: Provided by the /usr/lib/netsvc/rwall/rpc.rwalld daemon on the server and the /usr/sbin/rwall command on the client.
Spray service: Sends a one-way stream of Remote Procedure Call (RPC) packets from the /usr/lib/netsvc/spray/rpc.sprayd daemon on the server and the /usr/sbin/spray command on the client.
PC authentication service: Provides a user authentication service for PC-NFS from the /usr/sbin/rpc.pcnfsd daemon on the server.
Note: A computer can be both an NFS server and an NFS client simultaneously.

An NFS server is stateless. That is, an NFS server does not have to remember any transaction information about its clients. In other words, NFS transactions are atomic: a single NFS transaction corresponds to a single, complete file operation. NFS requires the client to remember any information needed for later NFS use.

NFS Access Control Lists (ACL) Support

Although NFS supports access control lists (ACLs), they are no longer used as the default. If you want to use access control lists with NFS, use the acl option with the -o flag of the mount command, as shown in the following example:

mount -o acl
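
For example, to mount a remote directory with ACL support enabled (the server name and directories here are only illustrative):

mount -o acl server1:/home/shared /mnt/shared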

This support is handled by an RPC program that exchanges information about ACLs between clients and servers. The ACL support does not change the NFS protocol specification; it is a separate function.

The operating system adds ACLs to the regular file system. Because the normal NFS protocol does not support ACLs, they cannot be seen by normal NFS clients, and unexpected behavior can result: a user on an NFS client might presume access to a file after looking at its permission bits, yet the effective permissions could have been altered by the ACL associated with the file. Permissions are enforced at the server according to the ACL on the server, so a user on the client machine could receive a permissions error.

When a client first attempts to access a remote mounted file system, it attempts to contact the ACL RPC program on the server.

If the server is a Version 3.2 server, the client consults the ACL associated with a file before granting access to the program on the client. This provides the expected behavior on the client when the request is sent over to the server. In addition, the aclget, aclput, and acledit commands can be used on the client to manipulate ACLs.

Note: NFS no longer uses access control lists as the default.

Cache File System (CacheFS) Support

The Cache File System (CacheFS) is a general-purpose file system caching mechanism that improves NFS server performance and scalability by reducing server and network load. Designed as a layered file system, CacheFS provides the ability to cache one file system on another. In an NFS environment, CacheFS increases the client-per-server ratio, reduces server and network loads, and improves performance for clients on slow links, such as Point-to-Point Protocol (PPP).

You create a cache on the client so that file systems you specify to be mounted in the cache can be accessed locally instead of across the network. Files are placed in the cache when a user first requests access to them; the cache is not filled until a user requests access to a file. Initial file requests may seem slow, but subsequent uses of the same files are faster.

Notes:
  1. You cannot cache the / (root) or /usr file systems.
  2. You can mount only file systems that are shared. (See the exportfs command.)
  3. There is no performance gain in caching a local Journaled File System (JFS) disk file system.
  4. You must have root or system authority to do the tasks in the following table.
CacheFS Tasks

These tasks can also be performed through the Web-based System Manager (wsm network fast path, Network application), or with the SMIT fast paths and commands shown below.

Set up a cache
    SMIT fast path: cachefs_admin_create
    Command: cfsadmin -c MountDirectoryName (see note 1 below)

Specify files for mounting
    SMIT fast path: cachefs_mount
    Command: mount -F cachefs -o backfstype=FileSysType,cachedir=CacheDirectory[,options] BackFileSystem MountDirectoryName (see note 2 below)
    or edit /etc/filesystems

Modify the cache
    SMIT fast path: cachefs_admin_change
    Command: remove the cache, then re-create it using the appropriate mount command options

Display cache information
    SMIT fast path: cachefs_admin_change
    Command: cfsadmin -l MountDirectoryName

Remove a cache
    SMIT fast path: cachefs_admin_remove
    Command:
      1. Unmount the file system:
         umount MountDirectoryName
      2. Determine the cache ID:
         cfsadmin -l MountDirectoryName
      3. Delete the file system:
         cfsadmin -d CacheID CacheDirectory

Check file system integrity
    SMIT fast path: cachefs_admin_check
    Command: fsck_cachefs CacheDirectory (see note 3 below)

Notes:
  1. After you have created the cache, do not perform any operations within the cache directory (cachedir) itself. Doing so causes conflicts within the CacheFS software.
  2. If you use the mount command to specify files for mounting, the command must be reissued each time the system is restarted.
  3. Use the -m or -o options of the fsck_cachefs command to check the file systems without making any repairs.
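
As an illustrative sketch, a minimal CacheFS setup following the flags in the table above might look like this (the server name, cache directory, and mount point are hypothetical):

cfsadmin -c /cache

mount -F cachefs -o backfstype=nfs,cachedir=/cache server1:/usr/share/man /usr/share/man

The first command creates the cache directory /cache; the second mounts the remote /usr/share/man directory from server1 through that cache, so repeated reads are served locally.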

NFS Mapped File Support

NFS mapped file support allows programs on a client to access a file as though it were memory. Using the shmat subroutine, users can map areas of a file into their address space. As a program reads and writes into this region of memory, the file is read into memory from the server or updated as needed on the server.

Mapping files over NFS is limited in three ways:

If an NFS file is to be used for data sharing between programs on different clients, record locking and the regular read and write subroutines should be used.

Multiple programs on the same client can share data effectively using a mapped file. Advisory record locking can coordinate updates to the file on the client, provided that the entire file is locked. Multiple clients can share data using mapped files only if the data never changes, as in a static database.

Three Types of Mounts

There are three types of NFS mounts: predefined, explicit, and automatic.

Predefined mounts are specified in the /etc/filesystems file. Each stanza (or entry) in this file defines the characteristics of a mount. Data such as the host name, remote path, local path, and any mount options are listed in this stanza. Predefined mounts should be used when certain mounts are always required for proper operation of a client.
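
For example, a predefined NFS mount might be described by a stanza such as the following (the server name and paths are illustrative):

/home/shared:
        dev        = "/home/shared"
        vfs        = nfs
        nodename   = server1
        mount      = true
        options    = bg,hard,intr,rw
        account    = false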

Explicit mounts serve the needs of the root user. Explicit mounts are usually done for short periods of time when there is a requirement for occasional unplanned mounts. Explicit mounts can also be used if a mount is required for special tasks and that mount should not be generally available on the NFS client. These mounts are usually fully qualified on the command line by using the mount command with all needed information. Explicit mounts do not require updating the /etc/filesystems file. File systems mounted explicitly remain mounted unless explicitly unmounted with the umount command or until the system is restarted.
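
For example, an explicit mount of a server's /usr/share/doc directory onto the client's /mnt directory (names illustrative) might be:

mount server1:/usr/share/doc /mnt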

Automatic mounts are controlled by the automount command, which causes the AutoFS kernel extension to monitor specified directories for activity. If a program or user attempts to access a directory that is not currently mounted, then AutoFS intercepts the request, arranges for the mount of the file system, then services the request.

NFS Mounting Process

Clients access files on the server by first mounting a server's exported directories. When a client mounts a directory, it does not make a copy of that directory. Rather, the mounting process uses a series of remote procedure calls to enable a client to access the directories on the server transparently. The following describes the mounting process:

  1. When the server starts, the /etc/rc.nfs script runs the exportfs command, which reads the server /etc/exports file, and then tells the kernel which directories are to be exported and which access restrictions they require.
  2. The rpc.mountd daemon and several nfsd daemons (8, by default) are then started by the /etc/rc.nfs script.
  3. When the client starts, the /etc/rc.nfs script starts several biod daemons (8, by default), which forward client mount requests to the appropriate server.
  4. Then the /etc/rc.nfs script executes the mount command, which reads the file systems listed in the /etc/filesystems file.
  5. The mount command locates one or more servers that export the information the client wants and sets up communication between itself and that server. This process is called binding.
  6. The mount command then requests that one or more servers allow the client to access the directories in the client /etc/filesystems file.
  7. The server rpc.mountd daemon receives the client mount requests and either grants or denies them. If the requested directory is available to that client, the rpc.mountd daemon sends the client's kernel an identifier called a file handle.
  8. The client kernel then ties the file handle to the mount point (a directory) by recording certain information in a mount record.

Once the file system is mounted, the client can perform file operations. When the client performs a file operation, the biod daemon sends the request, along with the file handle, to the server, where it is processed by one of the nfsd daemons. Assuming the client has permission to perform the requested file operation, the nfsd daemon returns the necessary information to the client's biod daemon.
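
In practice, a client can check which directories a server currently exports before mounting by using the showmount command (the host name is illustrative):

showmount -e server1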

/etc/exports File

The /etc/exports file indicates all directories that a server exports to its clients. Each line in the file specifies a single directory. The server automatically exports the listed directories each time the NFS server is started. These exported directories can then be mounted by clients. The syntax of a line in the /etc/exports file is:

directory     -options[,option]

The directory is the full path name of the directory. Options may designate a simple flag such as ro or a list of host names. See the specific documentation of the /etc/exports file and the exportfs command for a complete list of options and their descriptions. The /etc/rc.nfs script does not start the nfsd daemons or the rpc.mountd daemon if the /etc/exports file does not exist.

The following example illustrates entries from an /etc/exports file:

/usr/games    -ro,access=ballet:jazz:tap
/home     -root=ballet,access=ballet
/var/tmp
/usr/lib      -access=clients

The first entry in this example specifies that the /usr/games directory can be mounted by the systems named ballet, jazz, and tap. These systems can read data and run programs from the directory, but they cannot write in the directory.

The second entry in this example specifies that the /home directory can be mounted by the system ballet and that root access is allowed for the directory.

The third entry in this example specifies that any client can mount the /var/tmp directory. (Notice the absence of an access list.)

The fourth entry in this example specifies an access list designated by the netgroup clients. In other words, the machines designated as belonging to the netgroup clients can mount the /usr/lib directory from this server. (A netgroup is a network-wide group allowed access to certain network resources for security or organizational purposes. Netgroups are controlled by using NIS and by editing the /etc/netgroup file. NIS must be used to do the netgroup mapping.)
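
After editing the /etc/exports file, you can export the listed directories without restarting NFS by running the exportfs command. For example, to export all directories listed in /etc/exports:

exportfs -a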

/etc/xtab File

The /etc/xtab file has a format identical to the /etc/exports file and lists the currently exported directories. Whenever the exportfs command is executed, the /etc/xtab file changes. This allows you to export a directory temporarily without having to change the /etc/exports file. If the temporarily exported directory is unexported, the directory is removed from the /etc/xtab file.
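
For example, to export a directory temporarily without adding it to the /etc/exports file, and to unexport it afterward (the path is illustrative), you might run:

exportfs -i /home/project

exportfs -u /home/project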

Note: The /etc/xtab file is updated automatically, and should not be edited.

Implementation of NFS

NFS can be, and is, implemented on a wide variety of machine types, operating systems, and network architectures. NFS achieves this independence using the Remote Procedure Call (RPC) protocol.

Remote Procedure Call (RPC) Protocol

RPC is a library of procedures. The procedures allow one process (the client process) to direct another process (the server process) to execute procedure calls as if the client process had executed the calls in its own address space. Because the client and the server are two separate processes, they need not exist on the same physical system (although they can).

NFS is implemented as a set of RPC calls in which the server services certain types of calls made by the client. The client makes such calls based on the file system operations that are done by the client process. NFS, in this sense, is an RPC application.

Because the server and client processes can reside on two different physical systems, which may have completely different architectures, RPC must address the possibility that the two systems do not represent data in the same manner. For this reason, RPC uses data types defined by the eXternal Data Representation (XDR) protocol.

eXternal Data Representation (XDR) Protocol

XDR is the specification for a standard representation of various data types. By using a standard data type representation, a program can be confident that it is interpreting data correctly, even if the source of the data is a machine with a completely different architecture.

In practice, most programs do not use XDR internally. Rather, they use the data type representation specific to the architecture of the computer on which the program is running. When the program needs to communicate with another program, it converts its data into XDR format before sending the data. Conversely, when it receives data, it converts the data from XDR format into its own specific data type representation.

The portmap Daemon

Each RPC application has associated with it a program number and a version number. These numbers are used to communicate with a server application on a system. The client, when making a request from a server, needs to know what port number that server is accepting requests on. This port number is associated with the User Datagram Protocol (UDP) or Transmission Control Protocol (TCP) that is being used by the service. The client knows the program number, the version number, and the system name or host name where the service resides. The client needs a way to map the program number and version number pair to the port number of the server application. This is done with the help of the portmap daemon.

The portmap daemon runs on the same system as the NFS application. When the server starts running, it registers with the portmap daemon. As part of this registration, the server supplies its program number, version number, and UDP or TCP port number. The portmap daemon keeps a table of server applications. When the client tries to make a request of the server, it first contacts the portmap daemon (on a well-known port) to find out which port the server is using. The portmap daemon responds with the port number of the requested server. Upon receipt of the port number, the client can make all of its future requests directly to the server application.
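
You can inspect the table kept by the portmap daemon with the rpcinfo command. For example, to list the program numbers, version numbers, protocols, and ports registered on a server (the host name is illustrative):

rpcinfo -p server1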

Controlling NFS

The NFS and NIS daemons are controlled by the System Resource Controller (SRC). This means you must use SRC commands such as startsrc, stopsrc, and lssrc to start, stop, and check the status of the NFS and NIS daemons.

Some NFS daemons are not controlled by the SRC: specifically, rpc.rexd, rpc.rusersd, rpc.rwalld, and rpc.sprayd. These daemons are started and stopped by the inetd daemon.

The following table lists the SRC-controlled daemons and their subsystem names.

Daemons and Their Subsystems

File Path                           Subsystem Name   Group Name
/usr/sbin/nfsd                      nfsd             nfs
/usr/sbin/biod                      biod             nfs
/usr/sbin/rpc.lockd                 rpc.lockd        nfs
/usr/sbin/rpc.statd                 rpc.statd        nfs
/usr/sbin/rpc.mountd                rpc.mountd       nfs
/usr/lib/netsvc/yp/ypserv           ypserv           yp
/usr/lib/netsvc/yp/ypbind           ypbind           yp
/usr/lib/netsvc/yp/rpc.yppasswdd    yppasswdd        yp
/usr/lib/netsvc/yp/rpc.ypupdated    ypupdated        yp
/usr/sbin/keyserv                   keyserv          keyserv
/usr/sbin/portmap                   portmap          portmap

Each of these daemons can be specified to the SRC commands by using either their subsystem name or the appropriate group name. These daemons support neither the long-listing facility of SRC nor the SRC trace commands.

For more information on using the SRC, see "System Resource Controller Overview" in AIX Version 4.3 System Management Guide: Operating System and Devices.

Change the Number of biod and nfsd Daemons

To change the number of biod or nfsd daemons running on the system, use the chnfs command. For example, to set the number of nfsd daemons to 10 and the number of biod daemons to 4, run the command:

chnfs -n 10 -b 4

This command temporarily stops the daemons currently running on the system, modifies the SRC database code to reflect the new number, and restarts the daemons.

Note: In the AIX NFS implementation, the number of biod daemons is controllable only per mount point, by using the biod -o option. Specification using chnfs is retained for compatibility purposes only and has no real effect on the number of threads performing I/O.

Change Command Line Arguments for Daemons Controlled by SRC

Many NFS and NIS daemons have command-line arguments that can be specified when the daemon is started. Since these daemons are not started directly from the command line, you must update the SRC database so that the daemons can be started correctly. To do this, use the chssys command. The chssys command has the format:

chssys -s Daemon -a 'NewParameter'

For example:

chssys -s nfsd -a '10'

changes the nfsd subsystem so that when the daemon is started, the command line looks like nfsd 10. The changes made by the chssys command do not take effect until the subsystem has been stopped and restarted.
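
For example, to stop and restart the nfsd subsystem so that the new arguments take effect:

stopsrc -s nfsd

startsrc -s nfsd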

Start the NFS Daemons at System Startup

The NFS daemons, by default, are not started during installation. When installed, all of the files are placed on the system, but the steps to activate NFS are not taken. You can start the NFS daemons at system startup through any of the following:

the Web-based System Manager (wsm network fast path)

the SMIT fast path smit mknfs

the mknfs command

All of these methods place an entry in the inittab file so that the /etc/rc.nfs script is run each time the system restarts. This script, in turn, starts all NFS daemons required for a particular system.

Start the NFS Daemons

The file size limit for files located on an NFS server is taken from the process environment when nfsd is started. To use a specific value, edit the /etc/rc.nfs file and add a ulimit command with the desired limit before the startsrc command for nfsd.
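
For example, to remove the file size limit before the daemons are started, you might add a line such as the following (the value shown is only one possibility) to /etc/rc.nfs ahead of the startsrc command for nfsd:

ulimit -f unlimited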

The NFS daemons can be started individually or all at once. To start NFS daemons individually:

startsrc -s Daemon

where Daemon is any one of the SRC-controlled daemons. For example, to start the nfsd daemons:

startsrc -s nfsd

To start all of the NFS daemons:

startsrc -g nfs
Note: If the /etc/exports file does not exist, the nfsd and the rpc.mountd daemons will not be started. You can create an empty /etc/exports file by running the command touch /etc/exports. This will allow the nfsd and the rpc.mountd daemons to start, although no file systems will be exported.

Stop the NFS Daemons

The NFS daemons can be stopped individually or all at once. To stop NFS daemons individually:

stopsrc -s Daemon

where Daemon is any one of the SRC-controlled daemons. For example, to stop the rpc.lockd daemon:

stopsrc -s rpc.lockd

To stop all NFS daemons at once:

stopsrc -g nfs

Get the Current Status of the NFS Daemons

You can get the current status of the NFS daemons individually or all at once. To get the current status of the NFS daemons individually:

lssrc -s Daemon

where Daemon is any one of the SRC-controlled daemons. For example, to get the current status of the rpc.lockd daemon:

lssrc -s rpc.lockd

To get the current status of all NFS daemons at once:

lssrc -a
