nfs_allow_all_signals |
- Purpose:
- Specifies that the NFS server adhere to the signal-handling requirements
for blocked locks defined by the UNIX 95/98 test suites.
- Values:
-
- Default: 0
- Range: 0 or 1
- Type: Dynamic
- Diagnosis:
- N/A
- Tuning:
- A value of 1 turns nfs_allow_all_signals on, and a value of 0 turns
it off.
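For example, using the nfso command, the standard interface for all of the
options in this section (values shown are illustrative):
    nfso -o nfs_allow_all_signals        # display the current value
    nfso -o nfs_allow_all_signals=1      # turn the option on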
|
nfs_device_specific_bufs (AIX 4.2.1 and later) |
- Purpose:
- This option allows the NFS server to use memory allocations from network
devices if the network device supports such a feature.
- Values:
-
- Default: 1
- Range: 0 or 1
- Type: Dynamic
- Diagnosis:
- N/A
- Tuning:
- Use of these special memory allocations by the NFS server can positively
affect its overall performance. The special allocations are buffers managed
by the network interface; they improve performance over regular mbufs because
no DMA setup is required for them. The default of 1 means the NFS server is
allowed to use these special network device memory allocations. If the value
is 0, the NFS server uses the traditional memory allocations for its processing
of NFS client requests. Two adapters that support this feature are the
Micro Channel ATM adapter and the SP2 switch adapter.
|
nfs_dynamic_retrans |
- Purpose:
- Specifies whether the NFS client should use a dynamic retransmission
algorithm to decide when to resend NFS requests to the server.
- Values:
-
- Default: 1
- Range: 0 or 1
- Type: Dynamic
- Diagnosis:
- N/A
- Tuning:
- If this function is turned on, the timeo mount parameter is used only for the
first retransmission. With this parameter set to 1, the NFS client attempts
to adjust its timeout behavior based on past NFS server response times. This allows
for a floating timeout value along with adjustment of the transfer sizes used,
all based on an accumulated history of the NFS server's response
time. In most cases, this parameter does not need to be adjusted. There are
some instances where the straightforward timeout behavior is desired for the NFS
client. In these cases, the value should be set to 0 before mounting file
systems.
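For example, a minimal sketch (server:/export, /mnt, and the timeo value
are placeholders):
    nfso -o nfs_dynamic_retrans=0            # revert to fixed timeout behavior
    mount -o timeo=11 server:/export /mnt    # timeo now governs every retransmission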
- Refer to:
- Tuning to Avoid Retransmits
|
nfs_gather_threshold |
- Purpose:
- Sets the minimum size of write requests for which write gathering is
done.
- Values:
-
- Default: 4096
- Useful Range: 512 to 8193
- Type: Dynamic
- Diagnosis:
- N/A
- Tuning:
- Use this option when one of the following two situations exists:
- Delays are observed in responding to RPC requests, particularly where
the client is exclusively doing nonsequential writes or the files being
written are written with file locks held on the client.
- Clients are writing with write sizes smaller than 4096 bytes and write
gathering is not taking effect.
- To disable write gathering (situation 1), change nfs_gather_threshold
to a value greater than the largest possible write. For AIX Version 4 running
NFS Version 2, that value is 8192, so changing the option to 8193 disables write
gathering. If write gathering is being bypassed due to a small write
size, say 1024 bytes (situation 2), lower the threshold so that smaller writes
are gathered; for example, set it to 1024. Examples of both settings follow.
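    nfso -o nfs_gather_threshold=8193    # situation (1): disable write gathering
    nfso -o nfs_gather_threshold=1024    # situation (2): gather 1024-byte writes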
|
nfs_iopace_pages (AIX 4.1) |
- Purpose:
- Specifies the number of NFS file pages that are scheduled to be written
back to the server through the VMM at one time. This I/O scheduling control
occurs on close of a file and when the system invokes the syncd daemon.
- Values:
-
- Default: 0 (32 before AIX 4.2.1)
- Range: 0 to 65536
- Type: Dynamic
- Diagnosis:
- N/A
- Tuning:
- When an application writes a large file to an NFS-mounted file system,
that file data is written to the NFS server when the file is closed. In some
cases, the resources required to write that file to the server may prevent
other NFS file I/O from occurring. This parameter limits the number of 4 KB
pages written to the server at one time to the value of nfs_iopace_pages. The NFS
client schedules nfs_iopace_pages pages for writing to the server and then waits
for these to complete before scheduling the next batch of pages. The default value
is usually sufficient for most environments. Decrease the value if
there are large amounts of contention for NFS client resources. If there is
low contention, the value can be increased. In AIX 4.2.1 and later, if
nfs_iopace_pages=0, then the number of pages written by the syncd daemon at one time is as follows:
MAX ((filesize/8)-1, 32)
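As a worked instance of this formula, if filesize is taken in 4 KB pages (an
assumption; the unit is not stated above), a 1024-page file gives
MAX((1024/8)-1, 32) = 127 pages per pass. Illustrative settings (values
are arbitrary):
    nfso -o nfs_iopace_pages=16    # smaller batches when client resources are contended
    nfso -o nfs_iopace_pages=64    # larger batches when contention is low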
|
nfs_max_connections |
- Purpose:
- Specifies the maximum number of TCP connections allowed into the server.
- Values:
-
- Default: 0 (indicates no limit)
- Range: 0 to 10000
- Type: Dynamic
- Diagnosis:
- N/A
- Tuning:
- Limits the number of TCP connections into the server in order to reduce its load.
- Refer to:
- Tuning Other Layers to Improve NFS Performance
|
nfs_max_threads (AIX 4.2.1 and later) |
- Purpose:
- Specifies the maximum number of NFS server threads that are created
to service incoming NFS requests.
- Values:
-
- Default: 3891
- Range: 5 to 5000
- Type: Dynamic
- Diagnosis:
- With AIX 4.2.1, the NFS server is multithreaded. The NFS server
threads are created as demand increases for the NFS server. When the NFS server
threads become idle, they will exit. This allows the server to adapt to the
needs of the NFS clients. The nfs_max_threads parameter is the maximum number
of threads that can be created.
- Tuning:
- In general, it does not detract from overall system performance to have
the maximum set to something very large, because the NFS server creates threads
only as needed. However, this assumes that NFS serving is the machine's primary
purpose. If the desire is to share the system with other activities, then the maximum
number of threads may need to be set low. The maximum number can also be specified
as a parameter to the nfsd daemon, as shown below.
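For example (the thread count shown is illustrative):
    nfso -o nfs_max_threads=1000    # raise the ceiling on a dedicated NFS server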
- Refer to:
- How Many biod and nfsd Daemons Are Needed
|
nfs_repeat_messages (AIX Version 4) |
- Purpose:
- Checks for duplicate NFS messages so that duplicate NFS messages are not
displayed.
- Values:
-
- Default: 0 (no)
- Range: 0 or 1
- Type: Dynamic
- Diagnosis:
- N/A
- Tuning:
- Tuning this parameter does not affect performance.
|
nfs_rfc1323 (AIX 4.3) |
- Purpose:
- Enables very large TCP window size negotiation (greater than 65535 bytes)
to occur between systems.
- Values:
-
- Default: 0
- Range: 0 or 1
- Type: Dynamic
- Diagnosis:
- N/A
- Tuning:
- If using the TCP transport between NFS client and server, and both
systems support it, this option allows the systems to negotiate a TCP window size
in a way that allows more data to be in flight between
the client and server. This increases the throughput potential between client
and server. Unlike the rfc1323 option of the no command, this option affects only
NFS and not other applications on the system. A value of 0 means it is disabled,
and a value of 1 means it is enabled. If the no command parameter rfc1323 is
already set, this NFS option does not need to be set.
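For example, contrasting the NFS-only option with the system-wide no parameter:
    nfso -o nfs_rfc1323=1    # large TCP windows for NFS traffic only
    no -o rfc1323=1          # alternatively, enable for all TCP applications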
|
nfs_server_base_priority |
- Purpose:
- Sets the base priority of nfsd daemons.
- Values:
-
- Default: 0 (floating priority)
- Range: 31 to 125
- Type: Dynamic
- Diagnosis:
- N/A
- Tuning:
- By default, the nfsd daemons run with a floating
process priority; therefore, as they increase their cumulative CPU time, their
priority changes. This parameter can be used to set a static priority for
the nfsd daemons. The value of 0 represents the floating
priority (the default). Other values within the acceptable range are used to set
the priority of the nfsd daemons when an NFS request
is received at the server. This option can be used if the NFS
server is overloading the system (setting a lower priority makes the nfsd
daemons less favored). It can also be used if you want the nfsd daemons to be
among the most favored processes on the server. Use caution when setting the
parameter, because a very favored priority can render the system almost unusable
by other processes. This situation can occur if the NFS server is very busy and
essentially locks out other processes from getting run time on the server.
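For example (the priority shown is illustrative; on AIX, smaller numbers are
more favored):
    nfso -o nfs_server_base_priority=100    # pin nfsd daemons at a less-favored priority
    nfso -o nfs_server_base_priority=0      # return to floating priority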
|
nfs_server_clread (AIX 4.2.1 and later) |
- Purpose:
- This option allows the NFS server to be very aggressive about the reading
of a file. The NFS server can respond only with the data the NFS client
specifically requested; however, it can also read the data in the file that lies
immediately after the current read request. This is normally referred
to as read-ahead. The NFS server does read-ahead by default.
- Values:
-
- Default: 1
- Range: 0 or 1
- Type: Dynamic
- Diagnosis:
- May be useful in cases where server memory is low and too much disk-to-memory
activity is occurring.
- Tuning:
- With the nfs_server_clread option enabled, the NFS server becomes very
aggressive about doing read-ahead for the NFS client. If the value is 1,
aggressive read-ahead is done; if the value is 0, normal system default read-ahead
methods are used. Normal system read-ahead is controlled by the VMM. In AIX 4.2.1,
the more aggressive top-half JFS read-ahead was introduced. This mechanism
is less susceptible to read-ahead breaking down due to out-of-order requests
(which are typical in the NFS server case). When the mechanism is activated,
it reads an entire cluster (128 KB, the LVM logical track group size).
|
nfs_setattr_error (AIX 4.2.1 and later) |
- Purpose:
- When enabled, NFS server ignores setattr requests that are not valid.
- Values:
-
- Default: 0 (disabled)
- Range: 0 or 1
- Type: Dynamic
- Diagnosis:
- N/A
- Tuning:
- This option is provided for certain PC applications. Tuning this parameter
does not affect performance.
|
nfs_socketsize |
- Purpose:
- Sets the queue size of the NFS server UDP socket.
- Values:
-
- Default: 60000
- Practical Range: 60000 to 204800
- Type: Dynamic
- Diagnosis:
- N/A
- Tuning:
- Increase the size of the nfs_socketsize variable when netstat reports
packets dropped due to full socket buffers for UDP, and increasing the number
of nfsd daemons has not helped.
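For example (sizes are illustrative; as with the TCP variant described later,
the value should stay below sb_max, which the no command controls):
    netstat -s -p udp                # check for socket buffer overflows
    no -o sb_max=131072              # raise the system socket-buffer ceiling first
    nfso -o nfs_socketsize=130000    # then enlarge the NFS UDP socket queue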
- Refer to:
- Increasing NFS Socket Buffer Size
|
nfs_tcp_duplicate_cache_size (AIX 4.2.1 and later) |
- Purpose:
- Specifies the number of entries to store in the NFS server's duplicate
cache for the TCP network transport.
- Values:
-
- Default: 5000
- Range: 1000 to 100000
- Type: Incremental
- Diagnosis:
- N/A
- Tuning:
- The duplicate cache size cannot be decreased. Increase the duplicate
cache size for servers that have a high throughput capability. The duplicate
cache is used to allow the server to correctly respond to NFS client retransmissions.
If the server flushes this cache before the client is able to retransmit,
then the server may respond incorrectly. Therefore, if the server can process
1000 operations before a client retransmits, the duplicate cache size must
be increased.
Calculate the number of NFS operations that
are being received per second at the NFS server and multiply this by 4. The
result is a duplicate cache size that should be sufficient to allow correct
response from the NFS server. The operations that are affected by the duplicate
cache are the following: setattr(), write(), create(), remove(), rename(), link(), symlink(), mkdir(), rmdir().
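A worked example of this sizing rule, assuming a sampled rate of 2500 operations
per second (nfsstat -s can be sampled over an interval to estimate the rate):
    # 2500 ops/second x 4 = 10000 entries
    nfso -o nfs_tcp_duplicate_cache_size=10000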
|
nfs_tcp_socketsize (AIX 4.2.1 and later) |
- Purpose:
- Sets the queue size of the NFS server TCP socket. The queue size is
specified in number of bytes. The TCP socket is used for receiving the NFS
client requests and can be adjusted so that the NFS server is less likely
to drop packets under a heavy load. The value of the nfs_tcp_socketsize option
must be less than the sb_max option, which can be manipulated by the no command.
- Values:
-
- Default: 60000
- Practical Range: 60000 to sb_max
- Type: Dynamic
- Diagnosis:
- Packets dropped, as reported in the output of the netstat -s -p tcp command.
- Tuning:
- This option reserves, but does not allocate, memory for use by the send
and receive socket buffers of the socket. Do not set the nfs_tcp_socketsize
value to less than 60,000. Large or busy servers should have larger values
until TCP NFS traffic shows no packets dropped from the output of the netstat -s -p tcp command.
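For example (the size shown is illustrative and must remain below sb_max):
    no -o sb_max                         # confirm the ceiling before raising the option
    nfso -o nfs_tcp_socketsize=131072    # then recheck netstat -s -p tcp for drops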
- Refer to:
- Tuning Other Layers to Improve NFS Performance
|
nfs_udp_duplicate_cache_size (AIX 4.2.1 and later) |
- Purpose:
- Specifies the number of entries to store in the NFS server's duplicate
cache for the UDP network transport.
- Values:
-
- Default: 5000
- Range: 1000 to 100000
- Type: Incremental
- Diagnosis:
- N/A
- Tuning:
- The duplicate cache size cannot be decreased. Increase the duplicate
cache size for servers that have a high throughput capability. The duplicate
cache is used to allow the server to correctly respond to NFS client retransmissions.
If the server flushes this cache before the client is able to retransmit,
then the server may respond incorrectly. Therefore, if the server can process
1000 operations before a client retransmits, the duplicate cache size must
be increased.
Calculate the number of NFS operations that
are being received per second at the NFS server and multiply this by 4. The
result is a duplicate cache size that should be sufficient to allow correct
response from the NFS server. The operations that are affected by the duplicate
cache are the following: setattr(), write(), create(), remove(), rename(), link(), symlink(), mkdir(), rmdir().
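The same sizing calculation shown for nfs_tcp_duplicate_cache_size applies
here; for example:
    nfso -o nfs_udp_duplicate_cache_size=10000    # 2500 ops/second x 4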
|
nfs_use_reserved_ports (AIX 4.2.1 and later) |
- Purpose:
- Specifies whether the NFS client uses a reserved or nonreserved IP port
number when communicating with the NFS server.
- Values:
-
- Default: 0
- Range: 0 or 1
- Type: Dynamic
- Diagnosis:
- N/A
- Tuning:
- A value of 0 causes the NFS client to use a nonreserved IP port number when
it communicates with the NFS server; a value of 1 causes it to use a reserved
port number, which some NFS servers require.
|
portcheck |
- Purpose:
- Checks whether an NFS request originated from a privileged port.
- Values:
-
- Default: 0
- Range: 0 or 1
- Type: Dynamic
- Diagnosis:
- N/A
- Tuning:
- A value of 0 disables the port checking that is done by the NFS server.
A value of 1 directs the NFS server to do port checking on the incoming NFS
requests. This is a configuration decision with minimal performance consequences.
|
udpchecksum |
- Purpose:
- Turns on or off the generation of checksums on NFS UDP packets.
- Values:
-
- Default: 1
- Range: 0 or 1
- Type: Dynamic
- Diagnosis:
- N/A
- Tuning:
- Make sure this value is set to 1 (on) in any network where packet corruption
might occur. Slight performance gains can be realized by turning it off, but
at the expense of an increased chance of data corruption.
- Refer to:
- nfs_tcp_socketsize
|
nfs_v2_pdts |
- Purpose:
- Sets the number of tables for memory pools used by the biod daemons for
NFS Version 2 mounts.
- Values:
-
- Default: 1
- Range: 1 to 8
- Type: Mount
- Diagnosis:
- N/A
- Tuning:
- N/A
|
nfs_v3_pdts |
- Purpose:
- Sets the number of tables for memory pools used by the biod daemons for
NFS Version 3 mounts.
- Values:
-
- Default: 1
- Range: 1 to 8
- Type: Mount
- Diagnosis:
- N/A
- Tuning:
- N/A
|
nfs_v2_vm_bufs |
- Purpose:
- Sets the number of initial free memory buffers used for each NFS version
2 Paging Device Table (pdt) created after the first table. The very first
pdt has a set value of 256, 512, 640 or 1000, depending on system memory.
This initial value is also the default value of each newly created pdt.
Note: The initial set value for the first pdt will never change.
- Values:
-
- Default: 1000
- Range: 512 to 5000
- Type: Dynamic
- Diagnosis:
- N/A
- Tuning:
- N/A
|
nfs_v3_vm_bufs |
- Purpose:
- Sets the number of initial free memory buffers used for each NFS version
3 Paging Device Table (pdt) created after the first table. The very first pdt
has a set value of 256, 512, 640 or 1000, depending on system memory. This
initial value is also the default value of each newly created pdt.
Note: The initial set value for the first pdt will never change.
- Values:
-
- Default: 1000
- Range: 512 to 5000
- Type: Dynamic
- Diagnosis:
- N/A
- Tuning:
- N/A
|