Commands Reference, Volume 4

netpmon Command

Purpose

Monitors activity and reports statistics on network I/O and network-related CPU usage.

Syntax

netpmon [ -o File ] [ -d ] [ -T n ] [ -P ] [ -t ] [ -v ] [ -O ReportType ... ] [ -i Trace_File -n Gennames_File ]
Description

The netpmon command monitors a trace of system events and reports on network activity and performance during the monitored interval. By default, the netpmon command runs in the background while one or more application programs or system commands are being executed and monitored. The netpmon command automatically starts and monitors a trace of network-related system events in real time. By default, the trace is started immediately; optionally, tracing may be deferred until the user issues a trcon command. When tracing is stopped by a trcstop command, the netpmon command generates all specified reports and exits.
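For example, a deferred-tracing session as described above might be run as follows (the output file name deferred.out is only illustrative):
netpmon -d -o deferred.out
trcon
<run application programs and commands here>
trcstop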
The netpmon command can also work in offline mode, that is, on a
previously generated trace file. In this mode, a file generated by the
gennames command is also required. The gennames file should
be generated immediately after the trace has been stopped, and on the same
machine. When running in offline mode, the netpmon command
cannot recognize protocols used by sockets, which limits the level of detail
available in the socket reports.
The netpmon command
reports on the following system activities:
Note: The netpmon command does not work with NFS3 (ONC+).
- CPU Usage
- The netpmon command monitors CPU usage by all threads and
interrupt handlers. It estimates how much of this usage is due to
network-related activities.
- Network Device-Driver I/O
- The netpmon command monitors I/O operations through
Micro-Channel Ethernet, token-ring, and Fiber-Distributed Data Interface
(FDDI) network device drivers. In the case of transmission I/O, the
command also monitors utilizations, queue lengths, and destination
hosts. For receive I/O, the command also monitors time in the demux
layer.
- Internet Socket Calls
- The netpmon command monitors all send, recv, sendto, recvfrom,
read, and write subroutines on Internet sockets. It
reports statistics on a per-process basis, for each of the following protocol
types:
- Internet Control Message
Protocol (ICMP)
- Transmission Control
Protocol (TCP)
- User Datagram Protocol
(UDP)
- NFS I/O
- The netpmon command monitors read and
write subroutines on client Network File System (NFS) files, client
NFS remote procedure call (RPC) requests, and NFS server read or write
requests. The command reports subroutine statistics on a per-process basis
(or, optionally, a per-thread basis) and on a per-file basis for each server. The
netpmon command reports client RPC statistics for each server, and
server read and write statistics for each client.
Any combination of the preceding
report types can be specified with the command line flags. By default,
all the reports are produced.
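For example, to produce only the network device-driver and Internet socket reports, the -O flag described under Flags can be used (the output file name net.out is only illustrative):
netpmon -O dd,so -o net.out
<run application programs and commands here>
trcstop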
Notes:
1. The reports produced by the netpmon command can be quite long. Consequently, the -o flag should usually be used to write the report to an output file.
2. The netpmon command obtains performance data using the system trace facility. The trace facility only supports one output stream. Consequently, only one netpmon or trace process can be active at a time. If another netpmon or trace process is already running, the netpmon command responds with the message:
/dev/systrace: Device busy
While monitoring very network-intensive applications, the
netpmon command may not be able to consume trace events as fast as
they are produced in real time. When that happens, the error
message:
Trace kernel buffers overflowed, N missed entries
displays on standard error, indicating how many trace events were lost
while the trace buffers were full. The netpmon command
continues monitoring network activity, but the accuracy of the report
diminishes by some unknown degree. One way to avoid overflow is to
increase the trace buffer size using the -T flag, to accommodate
larger bursts of trace events before overflow. Another way to avoid
overflow problems altogether is to run the netpmon command in offline mode.
When running in memory-constrained
environments (where demand for memory exceeds supply), the -P flag
can be used to pin the text and data pages of the real-time netpmon
process in memory so the pages cannot be swapped out. If the
-P flag is not used, allowing the netpmon process to be
swapped out, the progress of the netpmon command may be delayed
such that it cannot process trace events fast enough to prevent trace buffer
overflow.
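A sketch that combines the two precautions above, pinning the real-time netpmon process and enlarging the trace buffer (the buffer size of 256000 bytes and the output file name are only examples; choose values suited to the workload):
netpmon -P -T 256000 -o report.out
<run application programs and commands here>
trcstop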
If the /unix file and
the running kernel are not the same, the kernel addresses will be incorrect,
causing the netpmon command to exit.
This command is valid only on the POWER-based platform.
Flags

-d
Starts the netpmon command, but defers tracing until the trcon command has been executed by the user. By default, tracing is started immediately.

-i Trace_File
Reads trace records from the file Trace_File produced with the trace command instead of from a live system. The trace file must first be rewritten in raw format using the trcrpt -r command. This flag cannot be used without the -n flag.

-n Gennames_File
Reads necessary mapping information from the file Gennames_File produced by the gennames command. This flag is mandatory when the -i flag is used.

-o File
Writes the reports to the specified File, instead of to standard output.

-O ReportType ...
Produces the specified report types. Valid report type values are:
- cpu
- CPU usage
- dd
- Network device-driver I/O
- so
- Internet socket call I/O
- nfs
- NFS I/O
- all
- All reports are produced. This is the default value.

-P
Pins the monitor process in memory. This flag causes the netpmon text and data pages to be pinned in memory for the duration of the monitoring period. This flag can be used to ensure that the real-time netpmon process does not run out of memory space when running in a memory-constrained environment.

-t
Prints CPU reports on a per-thread basis.

-T n
Sets the kernel's trace buffer size to n bytes. The default size is 64000 bytes. The buffer size can be increased to accommodate larger bursts of events, if any. (A typical event record size is on the order of 30 bytes.)
Note: The trace driver in the kernel uses double buffering, so two buffers of size n bytes are actually allocated. These buffers are pinned in memory, so they are not subject to paging.

-v
Prints extra information in the report. All processes and all accessed remote files are included in the report instead of only the 20 most active processes and files.
The reports generated by the
netpmon command begin with a header, which identifies the date, the
machine ID, and the length of the monitoring period in seconds. This is
followed by a set of summary and detailed reports for all specified report
types.
Process CPU Usage
Statistics: Each row describes the CPU usage associated with a
process. Unless the verbose option is specified, only the 20 most
active processes are listed. At the bottom of the report, CPU usage for
all processes is totaled, and CPU idle time is reported.
- Process
- Process name
- PID
- Process ID number
- CPU Time
- Total amount of CPU time used by this process
- CPU %
- CPU usage for this process as a percentage of total time
- Network CPU %
- Percentage of total time that this process spent executing network-related
code
- Thread CPU Usage
Statistics
- If the -t flag is used, each process row described above is
immediately followed by rows describing the CPU usage of each thread owned by
that process. The fields in these rows are identical to those for the
process, except for the name field. (Threads are not named.)
First-Level Interrupt
Handler Usage Statistics: Each row describes the CPU usage
associated with a first-level interrupt handler (FLIH). At the bottom
of the report, CPU usage for all FLIHs is totaled.
- FLIH
- First-level interrupt handler description
- CPU Time
- Total amount of CPU time used by this FLIH
- CPU %
- CPU usage for this interrupt handler as a percentage of total time
- Network CPU %
- Percentage of total time that this interrupt handler executed on behalf of
network-related events
Second-Level Interrupt
Handler Usage Statistics: Each row describes the CPU usage
associated with a second-level interrupt handler (SLIH). At the bottom
of the report, CPU usage for all SLIHs is totaled.
- SLIH
- Second-level interrupt handler description
- CPU Time
- Total amount of CPU time used by this SLIH
- CPU %
- CPU usage for this interrupt handler as a percentage of total time
- Network CPU %
- Percentage of total time that this interrupt handler executed on behalf of
network-related events
Network Device-Driver
Statistics (by Device): Each row describes the statistics
associated with a network device.
- Device
- Path name of special file associated with device
- Xmit Pkts/s
- Packets per second transmitted through this device
- Xmit Bytes/s
- Bytes per second transmitted through this device
- Xmit Util
- Busy time for this device, as a percent of total time
- Xmit Qlen
- Number of requests waiting to be transmitted through this device, averaged
over time, including any transaction currently being transmitted
- Recv Pkts/s
- Packets per second received through this device
- Recv Bytes/s
- Bytes per second received through this device
- Recv Demux
- Time spent in demux layer as a fraction of total time
Network Device-Driver
Transmit Statistics (by Destination Host): Each row describes the
amount of transmit traffic associated with a particular destination host, at
the device-driver level.
- Host
- Destination host name. An * (asterisk) is used for transmissions
for which no host name can be determined.
- Pkts/s
- Packets per second transmitted to this host
- Xmit Bytes/s
- Bytes per second transmitted to this host
- Online mode: Socket Call Statistics for Each
Internet Protocol (by Process): Each row describes the amount of
read/write subroutine activity on sockets of this protocol type
associated with a particular process. Unless the verbose option is
specified, only the top 20 processes are listed. At the bottom of the
report, all socket calls for this protocol are totaled.
- Offline mode: Socket Call Statistics for Each
Process: Each row describes the amount of read/write
subroutine activity on sockets associated with a particular process.
Unless the verbose option is specified, only the top 20 processes are
listed. At the bottom of the report, all socket calls are
totaled.
- Process
- Process name
- PID
- Process ID number
- Read Calls/s
- Number of read, recv, and recvfrom subroutines per second made by this
process on sockets of this type
- Read Bytes/s
- Bytes per second requested by the above calls
- Write Calls/s
- Number of write, send, and sendto
subroutines per second made by this process on sockets of this type
- Write Bytes/s
- Bytes per second written by this process to sockets of this protocol type
NFS Client Statistics for
Each Server (by File): Each row describes the amount of
read/write subroutine activity associated with a file
mounted remotely from this server. Unless the verbose option is
specified, only the top 20 files are listed. At the bottom of the
report, calls for all files on this server are totaled.
- File
- Simple file name
- Read Calls/s
- Number of read subroutines per second on this file
- Read Bytes/s
- Bytes per second requested by the above calls
- Write Calls/s
- Number of write subroutines per second on this file
- Write Bytes/s
- Bytes per second written to this file
NFS Client RPC Statistics
(by Server): Each row describes the number of NFS remote procedure
calls being made by this client to a particular NFS server. At the
bottom of the report, calls for all servers are totaled.
- Server
- Host name of server. An * (asterisk) is used for RPC calls for
which no hostname could be determined.
- Calls/s
- Number of NFS RPC calls per second being made to this server.
NFS Client Statistics (by
Process): Each row describes the amount of NFS
read/write subroutine activity associated with a
particular process. Unless the verbose option is specified, only the
top 20 processes are listed. At the bottom of the report, calls for all
processes are totaled.
- Process
- Process name
- PID
- Process ID number
- Read Calls/s
- Number of NFS read subroutines per second made by this process
- Read Bytes/s
- Bytes per second requested by the above calls
- Write Calls/s
- Number of NFS write subroutines per second made by this process
- Write Bytes/s
- Bytes per second written to NFS mounted files by this process
NFS Server Statistics (by
Client): Each row describes the amount of NFS activity handled by
this server on behalf of a particular client. At the bottom of the
report, calls for all clients are totaled.
- Client
- Host name of client
- Read Calls/s
- Number of remote read requests per second processed on behalf of this
client
- Read Bytes/s
- Bytes per second requested by this client's read calls
- Write Calls/s
- Number of remote write requests per second processed on behalf of this
client
- Write Bytes/s
- Bytes per second written by this client
- Other Calls/s
- Number of other remote requests per second processed on behalf of this
client
Detailed reports are generated
for any of the specified report types. For these report types, a
detailed report is produced for most of the summary reports. The
detailed reports contain an entry for each entry in the summary reports with
statistics for each type of transaction associated with the entry.
Transaction statistics consist of
a count of the number of transactions of that type, followed by response time
and size distribution data (where applicable). The distribution data
consists of average, minimum, and maximum values, as well as standard
deviations. Roughly two-thirds of the values are between average -
standard deviation and average + standard deviation.
Sizes are reported in bytes. Response times are reported in
milliseconds.
Detailed Second-Level
Interrupt Handler CPU Usage Statistics:
- SLIH
- Name of second-level interrupt handler
- Count
- Number of interrupts of this type
- CPU Time (Msec)
- CPU usage statistics for handling interrupts of this type
Detailed Network
Device-Driver Statistics (by Device):
- Device
- Path name of special file associated with device
- Recv Packets
- Number of packets received through this device
- Recv Sizes (Bytes)
- Size statistics for received packets
- Recv Times (msec)
- Response time statistics for processing received packets
- Xmit Packets
- Number of packets transmitted through this device
- Demux Times (msec)
- Time statistics for processing received packets in the demux layer
- Xmit Sizes (Bytes)
- Size statistics for transmitted packets
- Xmit Times (Msec)
- Response time statistics for processing transmitted packets
Detailed Network
Device-Driver Transmit Statistics (by Host):
- Host
- Destination host name
- Xmit Packets
- Number of packets transmitted to this host
- Xmit Sizes (Bytes)
- Size statistics for transmitted packets
- Xmit Times (Msec)
- Response time statistics for processing transmitted packets
Detailed Socket Call
Statistics for Each Internet Protocol (by Process):
(online mode)
Detailed Socket Call Statistics for Each Process:
(offline mode)
- Process
- Process name
- PID
- Process ID number
- Reads
- Number of read, recv, recvfrom, and recvmsg subroutines made by this process
on sockets of this type
- Read Sizes (Bytes)
- Size statistics for read calls
- Read Times (Msec)
- Response time statistics for read calls
- Writes
- Number of write, send, sendto, and sendmsg subroutines made by this process
on sockets of this type
- Write Sizes (Bytes)
- Size statistics for write calls
- Write Times (Msec)
- Response time statistics for write calls
Detailed NFS Client
Statistics for Each Server (by File):
- File
- File path name
- Reads
- Number of NFS read subroutines for this file
- Read Sizes (Bytes)
- Size statistics for read calls
- Read Times (Msec)
- Response time statistics for read calls
- Writes
- Number of NFS write subroutines for this file
- Write Sizes (Bytes)
- Size statistics for write calls
- Write Times (Msec)
- Response time statistics for write calls
Detailed NFS Client RPC
Statistics (by Server):
- Server
- Server host name
- Calls
- Number of NFS client RPC calls made to this server
- Call Times (Msec)
- Response time statistics for RPC calls
Detailed NFS Client
Statistics (by Process):
- Process
- Process name
- PID
- Process ID number
- Reads
- Number of NFS read subroutines made by this process
- Read Sizes (Bytes)
- Size statistics for read calls
- Read Times (Msec)
- Response time statistics for read calls
- Writes
- Number of NFS write subroutines made by this process
- Write Sizes (Bytes)
- Size statistics for write calls
- Write Times (Msec)
- Response time statistics for write calls
Detailed NFS Server
Statistics (by Client):
- Client
- Client host name
- Reads
- Number of NFS read requests received from this client
- Read Sizes (Bytes)
- Size statistics for read requests
- Read Times (Msec)
- Response time statistics for read requests
- Writes
- Number of NFS write requests received from this client
- Write Sizes (Bytes)
- Size statistics for write requests
- Write Times (Msec)
- Response time statistics for write requests
- Other Calls
- Number of other NFS requests received from this client
- Other Times (Msec)
- Response time statistics for other requests
Examples

- To monitor network activity during the execution of
certain application programs and generate all report types, type:
netpmon
<run application programs and commands here>
trcstop
The netpmon command automatically starts the system trace and
puts itself in the background. Application programs and system commands
can be run at this time. After the trcstop command is issued, all reports are
displayed on standard output.
- To generate CPU and NFS report types and write the
reports to the nmon.out file, type:
netpmon -o nmon.out -O cpu,nfs
<run application programs and commands here>
trcstop
The netpmon command immediately starts the system trace.
After the trcstop command is issued, the I/O activity report is
written to the nmon.out file. Only the CPU and NFS
reports will be generated.
- To generate all report types and write verbose output
to the nmon.out file, type:
netpmon -v -o nmon.out
<run application programs and commands here>
trcstop
With the verbose output, the netpmon command indicates the steps
it is taking to start up the trace. The summary and detailed reports
include all files and processes, instead of just the 20 most active files and
processes.
- To use the netpmon command in offline mode, type:
trace -a
<run application programs and commands here>
trcoff
gennames > gen.out
trcstop
trcrpt -r /var/adm/ras/trcfile > tracefile.r
netpmon -i tracefile.r -n gen.out -o netpmon.out
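If only some report types are needed from the same trace, the -O flag can be combined with offline mode (the cpu,nfs selection here is only illustrative; the file names are reused from the example above):
netpmon -i tracefile.r -n gen.out -O cpu,nfs -o netpmon.out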
Related Information

The trcstop command, trace command, and gennames command.

The recv subroutine, recvfrom subroutine, send subroutine, sendto subroutine, and trcoff subroutine.