IBM Books

Command and Technical Reference, Volume 1

mkamdent

Purpose

mkamdent - Creates user home directory entries in the /u automounter map files.

Syntax

mkamdent [-s server_path] user_names

Flags

-s server_path
Specifies the location from which the users' home directories are served. The format is server_name:base_path. If this flag is not specified, the default values are taken from the SP site environment variables homedir_server (for server_name) and homedir_path (for base_path). These environment variables are set with the spsitenv command.

Operands

user_names
Specifies a list of users to add to the source file, separated by spaces.

Description

Use this command to create user home directory entries in the /u automounter map files. Typically, user home directory entries are generated by the SP User Management Services when a new user is added to the system. However, if SP User Management Services are turned off and SP Automounter Support is still turned on, this command can be used to add user entries to the automounter /u map. This command can also be used to add automounter support for preexisting users that were not added using SP User Management Services and for /u subdirectories that are not associated with SP users.
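As an illustration of what the command adds, each user gets one map entry associating the user name with server_name:base_path/user. The two-column layout sketched below is an assumption for illustration, not the documented map syntax:

```shell
# Sketch: build /u automounter map entries (entry layout is an assumption).
SERVER_PATH="hostx:/home/hostx"          # hypothetical server_name:base_path
for user in john ken pat; do
    printf '%s\t%s/%s\n' "$user" "$SERVER_PATH" "$user"
done
```

Running the sketch prints one tab-separated line per user, which is the shape of entry mkamdent would append to the map file.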

Files

/etc/auto/maps/auto.u
The default /u automounter map file.

Security

You must have root privilege to run this command.

Location

/usr/lpp/ssp/bin/mkamdent

Related Information

Commands: spsitenv

The "Managing the Automounter" and "Managing user accounts" chapters in PSSP: Administration Guide.

Examples

To create automounter entries in the /u map file for multiple users, enter:

mkamdent -s hostx:/home/hostx john ken pat paul ron

This assumes that the following directories already exist on hostx: /home/hostx/john, /home/hostx/ken, /home/hostx/pat, /home/hostx/paul, and /home/hostx/ron.

mkautomap

Purpose

mkautomap - Generates an equivalent Automount map file from an Amd map file.

Syntax

mkautomap [-n] [-o Automount_map] [-f filesystem] [Amd_map]

Flags

-n
Specifies that an entry for the Automount map should not be added to the /etc/auto.master master map file.

-o Automount_map
Specifies the file name of the Automount map file in which the generated output will be placed. If Automount_map does not exist, it will be created. If it does exist, it will be replaced. If this flag is not specified, Automount_map will default to /etc/auto/maps/auto.u.

-f filesystem
Specifies the name of the file system associated with the automounter map files. If this flag is not specified, the file system will default to /u.

Operands

Amd_map
Specifies the file name of the Amd map file that is used as input for generating the Automount map file. If Amd_map does not exist, an error will occur. If this option is not specified, Amd_map will default to /etc/amd/amd-maps/amd.u.

Description

The mkautomap command is a migration command used to generate an Automount map file from the Amd map file Amd_map created by a previous SP release. Only Amd map file entries created by a previous SP release will be recognized. If the Amd map file was modified by the customer, results may be unpredictable. If an Amd map entry cannot be properly interpreted, a message will be written to standard error, and that entry will be ignored. Processing will continue with the next map entry. All recognized entries will be interpreted and equivalent Automount map entries will be written to a temporary file Automount_map.tmp. If no errors were encountered during processing, the temporary file will be renamed to Automount_map.

If all Amd map entries were successfully generated into Automount map entries and written to Automount_map, the /etc/auto.master Automount master file will be updated unless the -n flag is specified. A master map file entry associating the filesystem with the Automount_map will be added. Also, any default mount options specified in Amd_map will be added to the master map file entry for filesystem. This master map file entry will be appended to /etc/auto.master and if the file does not exist, it will be created.
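The temporary-file handling described above follows the common write-then-rename pattern, sketched here with illustrative file names and an assumed entry format:

```shell
dir=$(mktemp -d)                  # stand-in for /etc/auto/maps
tmp="$dir/auto.u.tmp"
out="$dir/auto.u"
errors=0
# Each recognized Amd entry would be translated and appended here:
printf 'john\thostx:/home/hostx/john\n' >> "$tmp" || errors=1
# Rename only after a clean run, so auto.u.tmp survives when errors occurred:
if [ "$errors" -eq 0 ]; then
    mv "$tmp" "$out"
fi
```

Because the rename is skipped on error, the partially generated auto.u.tmp remains for inspection, matching the behavior described under Files.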

Files

/etc/amd/amd-maps/amd.u
The default Amd map file used as input to this command.

/etc/auto/maps/auto.u
The default Automount map file generated as output from this command.

/etc/auto/maps/auto.u.tmp
The default temporary Automount map file containing all successfully generated Automount entries. This file will only remain after command execution if errors occurred while processing some Amd map file entries.

/etc/auto.master
The Automount master map file which contains a list of all directories controlled by the automount daemon and their corresponding map files and default mount options.

Security

You must have root privilege to run this command.

Restrictions

Use this command only with amd.u map files created by PSSP User Management Services. Using other Amd map files or modified amd.u map files as input to this command will produce unpredictable results.

Related Information

The "Migrating to the latest level of PSSP" chapter in PSSP: Installation and Migration Guide

The "Managing the automounter" chapter in PSSP: Administration Guide

Location

/usr/lpp/ssp/install/bin/mkautomap

Examples

To create the SP Automount /u map file from the Amd map file generated by a previous SP release, enter:

mkautomap

mkconfig

Purpose

mkconfig - Creates the config_info file for each of the boot/install server's clients on the server.

Syntax

mkconfig

Flags

None.

Operands

None.

Description

Use this command to create the config_info files for all clients of a boot/install server that are not set to boot from disk. The mkconfig command is intended to run only on the server node. This command creates a config_info file named /tftpboot/host_name.config_info for each client node.

Standard Error

This command writes error messages (as necessary) to standard error.

Exit Values

0
Indicates the successful completion of the command.

-1
Indicates that an error occurred.

Security

You must have root privilege to run this command.

Implementation Specifics

This command is part of the IBM Parallel System Support Programs (PSSP) Licensed Program (LP).

Location

/usr/lpp/ssp/bin/mkconfig

Related Information

Commands: setup_server

Examples

To make the config_info files for all boot/install clients of a server, enter on the server:

mkconfig

mkinstall

Purpose

mkinstall - Creates the install_info file for each of the server's clients on the server.

Syntax

mkinstall

Flags

None.

Operands

None.

Description

Use this command on the server node to make the install_info files for all clients of a boot/install server. The mkinstall command creates a /tftpboot/host_name.install_info file for each client node.

Standard Error

This command writes error messages (as necessary) to standard error.

Exit Values

0
Indicates the successful completion of the command.

-1
Indicates that an error occurred.

Security

You must have root privilege to run this command.

Implementation Specifics

This command is part of the IBM Parallel System Support Programs (PSSP) Licensed Program (LP).

Location

/usr/lpp/ssp/bin/mkinstall

Related Information

Commands: setup_server

Examples

To make the install_info files for all boot/install clients of a server, enter on the server:

mkinstall

mkkp

Purpose

mkkp - Makes Kerberos Version 4 principals.

Syntax

mkkp -h

mkkp [-e expiration] [-l lifetime] name[.instance] ...

Flags

-h
Displays usage information.

-e expiration
Specifies the expiration date for new principals. If omitted, the expiration date is set to the value assigned to the principal named default. The date must be entered in the format yyyy-mm-dd and the year must be a value from 1970 to 2037. The time of expiration is set to 11:59 PM local time on the date specified.

-l lifetime
Specifies the maximum ticket lifetime for new principals. If omitted, the maximum ticket lifetime is set to the value assigned to the principal named default. The lifetime must be specified as a decimal number from 0 to 255. These values correspond to a range of time intervals from five minutes to 30 days. Refer to PSSP: Administration Guide for a complete list of the possible ticket lifetime values you can enter and the corresponding durations in days, hours, and minutes. The following list shows a representative sample with approximate durations:
      Lifetime operand    Approximate duration
      141                 1 day
      151                 2 days
      170                 1 week
      180                 2 weeks
      191                 1 month

Operands

name[.instance] ...
Identifies the principals to add to the Kerberos authentication database.

Description

Use this command to create principals in the Kerberos Version 4 database on the local host. It allows the default values for the expiration date and maximum ticket lifetime to be overridden. Principals created in this way have no passwords. Before a user can k4init as the new principal, an administrator must set the principal's initial password using the kpasswd, kadmin, or kdb_edit command directly. This command should normally be used only on the primary server. If there are secondary authentication servers, the push-kprop command is invoked to propagate the change to the other servers. The command can be used to update a secondary server's database, but the changes may be negated by a subsequent update from the primary.

Files

/var/kerberos/database/admin_acl.add
Access control list for kadmin, mkkp, and rmkp.

/var/kerberos/database/principal.*
Kerberos database files.

Exit Values

0
Indicates the successful completion of the command. All specified principals that did not already exist were created. If you specified a principal that exists, a message is written to standard error and processing continues with any remaining principals.

1
Indicates that an error occurred and no principal was added. One of the following conditions was detected:

Security

The mkkp command can be run by the root user logged in on a Kerberos server host. It can be invoked indirectly as a Sysctl procedure by a Kerberos database administrator who has a valid ticket and is listed in the admin_acl.add file.

Location

/usr/kerberos/etc/mkkp

Related Information

Commands: chkp, kadmin, kdb_edit, kpasswd, lskp, rmkp, sysctl

Examples

The following example adds two principals to the database. Both principals are set to expire on 30 June 2005. The default value for the maximum ticket lifetime is used.

mkkp -e 2005-06-30 kelly kelly.admin

mknimclient

Purpose

mknimclient - Makes a node a Network Installation Management (NIM) client of its boot/install server.

Syntax

mknimclient -h | -l node_list

Flags

-h
Displays usage information. If the command is issued with the -h flag, the syntax description is displayed to standard output and no other action is taken (even if other valid flags are entered along with the -h flag).

-l node_list
Indicates by node_list the SP nodes to be configured as clients of their boot/install servers. The node_list is a comma-separated list of node numbers.

Operands

None.

Description

Use this command to define a node as a NIM client. This is accomplished by determining the node's boot/install server from the System Data Repository (SDR) and configuring that client node as a NIM client on that server. When complete, the NIM configuration database on the server contains an entry for the specified client.

Notes:

  1. This command results in no processing on the client node.

  2. The assignment of a boot/install server for a node must first be made using spbootins.

Environment Variables

PSSP 3.4 provides the ability to run commands using secure remote command and secure remote copy methods.

The following environment variables determine whether the AIX rsh and rcp commands or the secure remote command and copy methods are used. If no environment variables are set, the defaults are /bin/rsh and /bin/rcp.

Keep these environment variables consistent: if you set any of them, set all three. The DSH_REMOTE_CMD and REMOTE_COPY_CMD executables should be kept consistent with the choice of remote command method in RCMD_PGM.

For example, if you want to run mknimclient using a secure remote method, enter:

export RCMD_PGM=secrshell
export DSH_REMOTE_CMD=/bin/ssh
export REMOTE_COPY_CMD=/bin/scp
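For comparison, the default AIX rsh and rcp settings would look like the following sketch; the RCMD_PGM keyword rsh is an assumption inferred from the secrshell example, not taken from the text:

```shell
# Assumed settings for the default AIX remote command method
# (the exact RCMD_PGM keyword for rsh is an assumption):
export RCMD_PGM=rsh
export DSH_REMOTE_CMD=/bin/rsh
export REMOTE_COPY_CMD=/bin/rcp
```

Setting all three together keeps the executables consistent with the method named in RCMD_PGM, as the text above requires.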

Standard Error

This command writes error messages (as necessary) to standard error.

Exit Values

0
Indicates the successful completion of the command.

-1
Indicates that an error occurred.

Security

You must have root privilege to run this command.

Implementation Specifics

This command is part of the IBM Parallel System Support Programs (PSSP) Licensed Program (LP).

Location

/usr/lpp/ssp/bin/mknimclient

Related Information

Commands: delnimclient, setup_server

Examples

To define nodes 1, 3, and 5 as NIM clients of their respective boot/install servers, enter:

mknimclient -l 1,3,5

mknimint

Purpose

mknimint - Creates the necessary Network Installation Management (NIM) interfaces on a NIM master.

Syntax

mknimint -h | -l node_list

Flags

-h
Displays usage information. If the command is issued with the -h flag, the syntax description is displayed to standard output and no other action is taken (even if other valid flags are entered along with the -h flag).

-l node_list
Indicates by node_list the SP nodes on which to perform this operation. The node_list is a comma-separated list of node numbers. These nodes should have been previously configured as NIM masters (see the mknimmast command).

Operands

None.

Description

Use this command to define new Ethernet networks and interfaces to NIM on the control workstation and on boot/install servers. On the control workstation, any networks not previously defined are defined and NIM interfaces are added. On a boot/install server, all of the Ethernet networks and interfaces are defined; the command then defines all token ring and Ethernet networks that are known on the control workstation (as shown by the netstat -ni command) and defines interfaces for them as well. This allows resources such as the lppsource to be served from the control workstation to a client node by the boot/install server when the client and the control workstation are on the same subnetwork.

To serve a resource to a client that is not on the same subnetwork as the control workstation, routing is required. Routing is done in mknimclient.
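As a rough sketch of the network-discovery step, interface and network pairs can be pulled from netstat -ni style output; the sample output and its column layout below are hypothetical:

```shell
# Extract interface name and network from sample netstat -ni style output,
# skipping the header line and the loopback interface (columns assumed):
nets=$(awk 'NR > 1 && $1 != "lo0" { print $1, $3 }' <<'EOF'
Name  Mtu    Network     Address
en0   1500   9.114.66    9.114.66.10
lo0   16896  127         127.0.0.1
EOF
)
echo "$nets"
```

The sketch prints one interface/network pair, the kind of information mknimint needs when deciding which NIM networks and interfaces to define.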

Environment Variables

PSSP 3.4 provides the ability to run commands using secure remote command and secure remote copy methods.

The following environment variables determine whether the AIX rsh and rcp commands or the secure remote command and copy methods are used. If no environment variables are set, the defaults are /bin/rsh and /bin/rcp.

Keep these environment variables consistent: if you set any of them, set all three. The DSH_REMOTE_CMD and REMOTE_COPY_CMD executables should be kept consistent with the choice of remote command method in RCMD_PGM.

For example, if you want to run mknimint using a secure remote method, enter:

export RCMD_PGM=secrshell
export DSH_REMOTE_CMD=/bin/ssh
export REMOTE_COPY_CMD=/bin/scp

Standard Error

This command writes error messages (as necessary) to standard error.

Exit Values

0
Indicates the successful completion of the command.

-1
Indicates that an error occurred.

Security

You must have root privilege to run this command.

Implementation Specifics

This command is part of the IBM Parallel System Support Programs (PSSP) Licensed Program (LP).

Location

/usr/lpp/ssp/bin/mknimint

Related Information

Commands: setup_server

Examples

To make NIM interface definitions for nodes 1, 3, and 5, enter:

mknimint -l 1,3,5
 

mknimmast

Purpose

mknimmast - Configures a node as a Network Installation Management (NIM) master.

Syntax

mknimmast -h | -l node_list

Flags

-h
Displays usage information. If the command is issued with the -h flag, the syntax description is displayed to standard output and no other action is taken (even if other valid flags are entered along with the -h flag).

-l node_list
Indicates by node_list the SP nodes to be configured as NIM masters. The node_list is a comma-separated list of node numbers.

Operands

None.

Description

Use this command to define a boot/install server node as a NIM master for the subsequent installation of client nodes. It verifies that the listed nodes are defined as boot/install servers in the System Data Repository (SDR). It then installs the NIM master AIX file sets and configures the nodes as NIM masters.

Environment Variables

PSSP 3.4 provides the ability to run commands using secure remote command and secure remote copy methods.

The following environment variables determine whether the AIX rsh and rcp commands or the secure remote command and copy methods are used. If no environment variables are set, the defaults are /bin/rsh and /bin/rcp.

Keep these environment variables consistent: if you set any of them, set all three. The DSH_REMOTE_CMD and REMOTE_COPY_CMD executables should be kept consistent with the choice of remote command method in RCMD_PGM.

For example, if you want to run mknimmast using a secure remote method, enter:

export RCMD_PGM=secrshell
export DSH_REMOTE_CMD=/bin/ssh
export REMOTE_COPY_CMD=/bin/scp

Standard Error

This command writes error messages (as necessary) to standard error.

Exit Values

0
Indicates the successful completion of the command.

-1
Indicates that an error occurred.

Security

You must have root privilege to run this command.

Implementation Specifics

This command is part of the IBM Parallel System Support Programs (PSSP) Licensed Program (LP).

Location

/usr/lpp/ssp/bin/mknimmast

Related Information

Commands: delnimmast, setup_server

Examples

To define nodes 1, 3, and 5 as NIM masters, enter:

mknimmast -l 1,3,5
 

mknimres

Purpose

mknimres - Creates the necessary Network Installation Management (NIM) resources on a NIM master.

Syntax

mknimres -h | -l node_list

Flags

-h
Displays usage information. If the command is issued with the -h flag, the syntax description is displayed to standard output and no other action is taken (even if other valid flags are entered along with the -h flag).

-l node_list
Indicates by node_list the SP nodes on which to perform this operation. The node_list is a comma-separated list of node numbers. These nodes should have been previously configured as NIM masters (see mknimmast).

Operands

None.

Description

Use this command to make all the NIM resources for installation, diagnostics, migration, and customization. No resources are allocated to client nodes. The set of resources needed is determined from the list of client nodes found in the System Data Repository (SDR) for the node_list. Any required AIX install and mksysb images are defined as NIM resources. For boot/install server nodes, NIM Shared Product Object Tree (SPOT) directories are created and mksysb images are copied, as required. Because of the large data volumes required for SPOTs and install images, all checking is done before copying data.

Creation of the NIM lppsource resource on a boot/install server will result in setup_server creating a lock in the lppsource directory on the control workstation.
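The lppsource lock mentioned above can be pictured with the standard mkdir-based locking pattern; the lock name and its location in this sketch are hypothetical:

```shell
dir=$(mktemp -d)                    # stand-in for the lppsource directory
lock="$dir/lppsource.lock"          # hypothetical lock name
if mkdir "$lock" 2>/dev/null; then
    status="lock acquired"          # safe to copy install images here
    rmdir "$lock"                   # release the lock when done
else
    status="lppsource busy"         # another run holds the lock
fi
echo "$status"
```

mkdir is atomic, so only one of several concurrent setup_server runs can create the lock directory and proceed to copy data.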

Environment Variables

PSSP 3.4 provides the ability to run commands using secure remote command and secure remote copy methods.

The following environment variables determine whether the AIX rsh and rcp commands or the secure remote command and copy methods are used. If no environment variables are set, the defaults are /bin/rsh and /bin/rcp.

Keep these environment variables consistent: if you set any of them, set all three. The DSH_REMOTE_CMD and REMOTE_COPY_CMD executables should be kept consistent with the choice of remote command method in RCMD_PGM.

For example, if you want to run mknimres using a secure remote method, enter:

export RCMD_PGM=secrshell
export DSH_REMOTE_CMD=/bin/ssh
export REMOTE_COPY_CMD=/bin/scp

Standard Error

This command writes error messages (as necessary) to standard error.

Exit Values

0
Indicates the successful completion of the command.

-1
Indicates that an error occurred.

Security

You must have root privilege to run this command.

Implementation Specifics

This command is part of the IBM Parallel System Support Programs (PSSP) Licensed Program (LP).

Location

/usr/lpp/ssp/bin/mknimres

Related Information

Commands: setup_server

Examples

To make NIM resources for boot/install servers 1, 3, and 5, enter:

mknimres -l 1,3,5

monitorvsd

Purpose

monitorvsd - Enables, disables, or lists the virtual shared disks that will be monitored.

Syntax

monitorvsd
-d vsd_name ... | -e vsd_name ... | -E | -D | -l

Flags

-d vsd_name ...
Disables monitoring of the specified virtual shared disks. The vsd_names are space-separated.

-e vsd_name ...
Enables monitoring of the specified virtual shared disks.

-E
Enables monitoring of all virtual shared disks that were previously enabled with the -e flag.

-D
Disables all monitoring.

-l
Lists the virtual shared disks that are being monitored.

Operands

None.

Description

The monitorvsd command enables and disables virtual shared disks to be monitored by the PSSP Event Management services. In particular, the statistics that are returned by the lsvsd -s command are made available to Event Management.

Monitoring can be enabled for a maximum of 300 virtual shared disks on a given node.

Security

You must have root privilege to run this command.

Prerequisite Information

PSSP: Managing Shared Disks

For information on the Event Management services, refer to "The Event Management subsystem" and "Using the Problem Management subsystem" chapters in PSSP: Administration Guide.

Location

/usr/lpp/csd/bin/monitorvsd

Related Information

Commands: cfgvsd, lsvsd, spevent

Examples

  1. To enable monitoring the "vsd1n1" shared disk, enter:
    monitorvsd -e vsd1n1
    

    The system displays a message similar to the following:

    monitorvsd: Enabled: vsd1n1
    
  2. To list all monitored virtual shared disks on a node, enter:
    monitorvsd -l
    

    The system displays a message similar to the following:

    vsd1n1
    vsd2n1
    vsd9n1
        
    
  3. To disable all monitoring, enter:
    monitorvsd -D
    

mult_senders_test

Purpose

mult_senders_test - Detects nodes that are injecting damaged packets into the switch network.

Attention

ATTENTION - READ THIS FIRST: Do not activate the switch advanced diagnostic facility until you have read this section completely, and understand this material. If you are not certain how to properly use this facility, or if you are not under the guidance of IBM Service, do not activate this facility.

Activating this facility may result in degraded performance of your system. Activating this facility may also result in longer response times, higher processor loads, and the consumption of system disk resources. Activating this facility may also obscure or modify the symptoms of timing-related problems.

Syntax

mult_senders_test
-r receiver [-g] [-m model] [-t max_time]
 
[-a allowed_sender(s)] [-f forbidden_sender(s)]
 
[-A allowed_senders_file] [-F forbidden_senders_file]
 
[-n {0|1}] [-z data_size] [-p pattern_file(s)] [-h]

Flags

-r receiver
Specifies a receiver node ID (or name).

-m model
Specifies a test model that will be used for testing. model is the name of the model to be used.

-t max_time
Specifies maximal execution time.

-a allowed_sender(s)
Specifies a list of nodes that the test can use. allowed_senders is a blank-separated list of node identifiers. A node identifier can be a host name, IP address, frame,slot pair, or node number.

-f forbidden_sender(s)
Specifies a list of nodes that the test cannot use. forbidden_sender(s) is a blank-separated list of node identifiers.

-A allowed_senders_file
Specifies a file containing the list of nodes that the test can use. allowed_senders_file is a path to a file that contains a list of node identifiers.

-F forbidden_senders_file
Specifies a file containing the list of nodes that the test cannot use. forbidden_senders_file is a path to a file that contains a list of node identifiers.

-n {0|1}
Specifies the plane on which the test will be run. If a plane is not specified, the default is 0. This flag is valid only on SP Switch2 systems.

-z data_size
Specifies an amount of data, in MB, to be sent in every single test iteration by each sender.

-p pattern_file(s)
Specifies a list of paths to the pattern files. pattern_files is a blank-separated list of paths. Each pattern file path is a full path to a file accessible from each participating node.

-g
Requests that the SPD GUI be used.

-h
Requests that usage information be displayed.

Operands

None.

Description

This command starts the multiple senders test, which finds the malfunctioning sender or senders among a specified group of nodes or among the whole partition. You must specify the receiver that reported the "bad packet" error by node ID, host name, or IP address.

Primary and backup nodes cannot participate in the test as receivers or senders. If you specify a primary or backup node as the receiver, the test exits and an error message is displayed.

The model argument lets you select a test model. By default the "All available nodes are senders" model is selected (this is the only supported model).

You can specify the nodes that are allowed to participate in the test, or the nodes that are not allowed to participate. If the same node is present in both lists, it is not allowed to participate. Be aware that the selected nodes cannot run any application that uses the switch network during test execution. By default, all nodes are allowed to participate. These nodes can be specified as a list of nodes or as a file that contains the list.

The data_size argument controls the amount of data that is sent by every sender on every test iteration. By default, this value is set to 360 MB.

You can provide a path to a file that contains the data pattern to be used during the test. By default the output of the test is displayed on the command line. You can request to display the output on the SPD GUI.
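The allowed/forbidden rule above (a node present in both lists is excluded) is simply a set difference, sketched here with hypothetical node names:

```shell
# Effective senders = allowed list minus forbidden list; forbidden wins on overlap.
allowed="n05 n06 n11"
forbidden="n06"
senders=""
for n in $allowed; do
    case " $forbidden " in
        *" $n "*) ;;                     # listed in both: excluded from the test
        *) senders="$senders $n" ;;
    esac
done
senders="${senders# }"                   # trim the leading space
echo "$senders"
```

With these sample lists, n06 is dropped and the remaining nodes act as senders.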

Security

When restricted root access (RRA) is enabled, this command can only be run from the control workstation.

Location

/usr/lpp/ssp/bin/spd/mult_senders_test

Examples

  1. To execute the multiple senders test using receiver node 11, enter:
    mult_senders_test -r 11
    
  2. To execute multiple senders test using receiver node n01 and specifying allowed nodes by host name, enter:
    mult_senders_test -r n01 -a n05 n06 n11
    
  3. To execute multiple senders test using receiver node n01, specifying a forbidden node by frame,slot, enter:
    mult_senders_test -r n01 -f 2,9
    
  4. To execute model A of multiple senders test, enter:
    mult_senders_test -r 11 -m ModelA
    
  5. To increase the amount of data sent by each sender to the receiver, enter:
    mult_senders_test -r 11 -z 1000
    
  6. To use a different data pattern, create a data file, make it accessible to the nodes (copy it to every node or mount it using the same name), and enter:
    mult_senders_test -r 11 -p /tmp/spd/pattern1.dat
    
  7. To specify that you want the test performed on the second plane, enter:
    mult_senders_test -n 1

ngaddto

Purpose

ngaddto - Adds nodes and node groups to the definition list of the destination node group.

Syntax

ngaddto
[-h] | [-G] dest_nodegroup
 
nodenum | nodegroup [nodenum | nodegroup] ...

Flags

-h
Displays usage information.

-G
Specifies that the destination node group is global.

Operands

dest_nodegroup
Specifies the node group to receive the new additions.

nodenum
Specifies a node to add to the definition list of the destination node group. This is supplied as a space-delimited list of node numbers.

nodegroup
Specifies a named node group to add to the definition list of the destination node group. Node groups are given as a space-delimited list. Node numbers and node group names being added to the destination node group can be intermixed.

Description

Use this command to add nodes and node groups to the definition list of the destination node group. If the -G flag is specified, the destination node group must be global. If the -G flag is not specified, the destination node group must belong to the current system partition. If the destination node group does not exist, you will receive an error. You will also receive an error if the destination node group or a nodegroup operand has a name that is not valid. Nodes and node groups that do not currently exist can be added to the destination node group; when the node group is resolved by the ngresolve command, nonexistent members are ignored.
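A definition list containing nested groups resolves recursively. In this sketch, flat shell variables stand in for the SDR, and names beginning with ng are treated as nested groups purely for illustration:

```shell
# Hypothetical definition lists (real storage is the SDR):
nga="1 3 ngb"            # nga holds nodes 1 and 3 plus nested group ngb
ngb="5 7"
resolve() {
    for m in $(eval echo "\$$1"); do
        case $m in
            ng*) resolve "$m" ;;     # nested node group: expand recursively
            *)   echo "$m" ;;        # plain node number
        esac
    done
}
resolve nga
```

Resolving nga yields nodes 1, 3, 5, and 7; a member with no definition would simply contribute nothing, matching the "nonexistent members are ignored" behavior of ngresolve.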

Exit Values

0
Indicates the successful completion of the command.

nonzero
Indicates that an error occurred.

Security

You must have write access to the SDR to run this command.

Implementation Specifics

This command is part of the IBM Parallel System Support Programs (PSSP) Licensed Program (LP).

Prerequisite Information

Refer to the "Managing node groups" chapter in PSSP: Administration Guide for additional node grouping information.

Location

/usr/lpp/ssp/bin/ngaddto

Related Information

Commands: ngcreate, ngdelete, ngdelfrom, ngfind, nglist, ngnew, ngresolve

Examples

  1. To add nodes 1 and 3 and node group ngb to the definition list of node group nga, enter:
    ngaddto nga 1 3 ngb
    
  2. To add nodes 1 and 16 and global node group g2 to the global definition list of node group g1, enter:
    ngaddto -G g1 1 16 g2
    

ngclean

Purpose

ngclean - Cleans up a node group, removing references to nodes and node groups that are not in the current system partition. Node groups with empty definition lists will be deleted.

Syntax

ngclean [-h] | [-G] [-r] {-a | nodegroup [nodegroup ...]}

Flags

-h
Displays usage information.

-a
Cleans up all node groups in the current system partition or all system-wide node groups if the -G flag is also specified.

-r
Does not modify node groups. Issues a report on how node groups would be affected by running this command (without the -r option).

-G
Examines global node groups.

Operands

nodegroup
Specifies the node groups to be cleaned. If the -a flag is provided, all node groups will be cleaned and no node groups should be specified.

Description

Use this command to examine node group definition lists and to remove references to nodes and node groups that do not exist in the current system partition or the SP system if -G is supplied. Node groups with empty definition lists will be deleted. If the -r flag is specified, the nodes and node groups will not be removed, but a report will be generated.

Exit Values

0
Indicates the successful completion of the command.

nonzero
Indicates that an error occurred.

Security

You must have write access to the SDR to run this command.

Implementation Specifics

This command is part of the IBM Parallel System Support Programs (PSSP) Licensed Program (LP).

Prerequisite Information

Refer to the "Managing node groups" chapter in PSSP: Administration Guide for additional node grouping information.

Location

/usr/lpp/ssp/bin/ngclean

Related Information

Commands: ngaddto, ngcreate, ngdelete, ngdelfrom, ngfind, nglist, ngnew, ngresolve

Examples

  1. To clean up all system node groups, enter:
    ngclean -Ga
    
  2. To clean up the node group my.ng in the current system partition, enter:
    ngclean my.ng
    

ngcreate

Purpose

ngcreate - Creates and optionally populates a named node group.

Syntax

ngcreate
[-h] | [-s frame_range:slot_range] [-n node_range]
 
[-w host_name,host_name, ...] [-e host_name,host_name, ...]
 
[-N nodegroup,nodegroup, ...] [-a] [-G] dest_nodegroup

Flags

-h
Displays usage information.

-s
Specifies a range of frames and slots on each frame to add to the node group.

-n
Specifies a range of nodes to be added to the node group.

-w
Specifies a comma-delimited list of hosts to add to the node group.

-a
Specifies that all nodes in the current system partition be added to the node group. If the -G flag is also provided, all nodes in the SP system are included.

-e
Specifies a comma-delimited exclusion list. These hosts are not added to the node group even if they are specified by another option.

-N
Specifies a comma-delimited list of node groups to add to this node group.

-G
Creates a global node group. System partition boundaries are ignored.

Operands

dest_nodegroup
Specifies the name associated with the node group being created.

Description

Use this command to create a node group named dest_nodegroup. The destination node group is populated based on the supplied options. Node group names must begin with a letter, which can be followed by any combination of letters, numbers, periods (.), and underscores (_). If the destination node group already exists, you will receive an error.

Exit Values

0
Indicates the successful completion of the command.

nonzero
Indicates that an error occurred.

Security

You must have write access to the SDR to run this command.

Implementation Specifics

This command is part of the IBM Parallel System Support Programs (PSSP) Licensed Program (LP).

Prerequisite Information

Refer to the "Managing node groups" chapter in PSSP: Administration Guide for additional node grouping information.

Location

/usr/lpp/ssp/bin/ngcreate

Related Information

Commands: ngaddto, ngdelete, ngdelfrom, ngfind, nglist, ngnew, ngresolve

Examples

To create a node group called sample_ng that contains all the nodes in the current system partition except for k22n01, enter:

ngcreate -ae k22n01 sample_ng
 

ngdelete

Purpose

ngdelete - Removes node groups from persistent storage.

Syntax

ngdelete [-h] | [ -u] [-G] nodegroup [nodegroup ...]

Flags

-h
Displays usage information.

-u
Removes the nodegroup, but leaves references to this nodegroup in the definition list of any node group that contains it.

-G
Specifies that the nodegroup is global.

Operands

nodegroup
Specifies the name of the node group to be deleted.

Description

Use this command to remove node groups from persistent storage. By default, the node group is removed from any node group that contains it. If the -u flag is specified, references to this deleted node group will remain in containing node groups.

Exit Values

0
Indicates the successful completion of the command.

nonzero
Indicates that an error occurred.

Security

You must have write access to the SDR to run this command.

Implementation Specifics

This command is part of the IBM Parallel System Support Programs (PSSP) Licensed Program (LP).

Prerequisite Information

Refer to the "Managing node groups" chapter in PSSP: Administration Guide for additional node grouping information.

Location

/usr/lpp/ssp/bin/ngdelete

Related Information

Commands: ngaddto, ngcreate, ngdelfrom, ngfind, nglist, ngnew, ngresolve

Examples

To delete nodegroups ngc and ngd, enter:

ngdelete ngc ngd

ngdelfrom

Purpose

ngdelfrom - Deletes nodes and node groups from the definition list of the destination node group.

Syntax

ngdelfrom [-h] | [-G] dest_nodegroup nodenum | nodegroup [nodenum | nodegroup] ...

Flags

-h
Displays usage information.

-G
Specifies that the dest_nodegroup is global.

Operands

dest_nodegroup
Specifies the node group to be modified.

nodenum
Specifies a node to remove. Nodes are specified as a space-delimited list of node numbers.

nodegroup
Specifies a named node group to remove. Node groups are specified as a space-delimited list of node group names. Only the node group name will be removed from the destination nodegroup. The group will not be resolved into an individual list of nodes.

Note:
Node numbers and node group names being removed can be intermixed.

Description

Use this command to remove nodes and node groups from the definition list of the destination node group. If the -G flag is specified, the dest_nodegroup must be global. If the -G flag is not specified, the dest_nodegroup must belong to the current system partition.

Exit Values

0
Indicates the successful completion of the command.

nonzero
Indicates that an error occurred.

Security

You must have write access to the SDR to run this command.

Implementation Specifics

This command is part of the IBM Parallel System Support Programs (PSSP) Licensed Program (LP).

Prerequisite Information

Refer to the "Managing node groups" chapter in PSSP: Administration Guide for additional node grouping information.

Location

/usr/lpp/ssp/bin/ngdelfrom

Related Information

Commands: ngaddto, ngcreate, ngdelete, ngfind, nglist, ngnew, ngresolve

Examples

To remove node 5 and node group ngc from nga, enter:

ngdelfrom nga 5 ngc
 

ngfind

Purpose

ngfind - Returns a list of all node groups whose definition list contains the specified node or node group.

Syntax

ngfind [-h] | [-G] nodegroup | node

Flags

-h
Displays usage information.

-G
Returns all global node groups that contain the specified global node group or node in their definition list. The default scope is the current system partition.

Operands

nodegroup
Searches node group definition lists for references to this node group.

node
Searches node group definition lists for references to this node.

Description

Use this command to list all node groups that contain the specified node or node group in their definition list. If the specified node or node group does not exist in a node group definition list, no node groups will be listed and the command will complete successfully. Use this command to determine what other node groups would be affected by changes to the specified node group.

Exit Values

0
Indicates the successful completion of the command.

nonzero
Indicates that an error occurred.

Security

You must have write access to the SDR to run this command.

Implementation Specifics

This command is part of the IBM Parallel System Support Programs (PSSP) Licensed Program (LP).

Prerequisite Information

Refer to the "Managing node groups" chapter in PSSP: Administration Guide for additional node grouping information.

Location

/usr/lpp/ssp/bin/ngfind

Related Information

Commands: ngaddto, ngcreate, ngdelete, ngdelfrom, nglist, ngnew, ngresolve

Examples

To display a list of all node groups that contain node group test_B, enter:

ngfind test_B
 
test_A
test_D

nglist

Purpose

nglist - Returns a list of all node groups in the current system partition.

Syntax

nglist [-h] | [-G]

Flags

-h
Displays usage information.

-G
Returns all global node groups.

Operands

None.

Description

Use this command to write a list of all node groups in the current system partition to standard output. If the -G flag is specified, all global node groups are listed.

Standard Output

A list of node groups is written to standard output, one node group per line.

Exit Values

0
Indicates the successful completion of the command.

nonzero
Indicates that an error occurred.

Security

You must have write access to the SDR to run this command.

Implementation Specifics

This command is part of the IBM Parallel System Support Programs (PSSP) Licensed Program (LP).

Prerequisite Information

Refer to the "Managing node groups" chapter in PSSP: Administration Guide for additional node grouping information.

Location

/usr/lpp/ssp/bin/nglist

Related Information

Commands: ngaddto, ngcreate, ngdelete, ngdelfrom, ngfind, ngnew, ngresolve

Examples

  1. To display a list of all node groups in the current system partition, enter:
    nglist
     
    nga
    ngb
    sampleng
    test_A
    
  2. To display a list of all global node groups, enter:
    nglist -G
     
    g1
    g2
    g3
    test_A
    
Note:
The global node group test_A is not the same as node group test_A in the current system partition. The global scope and system partition dependent scope are independent name spaces and are stored in separate classes in the System Data Repository (SDR).

ngnew

Purpose

ngnew - Creates but does not populate new node groups in persistent storage.

Syntax

ngnew [-h] | [-G] nodegroup [ nodegroup ...]

Flags

-h
Displays usage information.

-G
Creates global node groups.

Operands

nodegroup
Specifies the node group to be created.

Description

Use this command to create new node groups. If the nodegroup already exists, you will receive an error. A valid node group name must begin with a letter; if the nodegroup is not a valid name, you will receive an error. If a node group in the list cannot be successfully created, the other supplied node groups are still created, but the command returns a nonzero return code.

Exit Values

0
Indicates the successful completion of the command.

nonzero
Indicates that an error occurred.

Security

You must have write access to the SDR to run this command.

Implementation Specifics

This command is part of the IBM Parallel System Support Programs (PSSP) Licensed Program (LP).

Prerequisite Information

Refer to the "Managing node groups" chapter in PSSP: Administration Guide for additional node grouping information.

Location

/usr/lpp/ssp/bin/ngnew

Related Information

Commands: ngaddto, ngcreate, ngdelete, ngdelfrom, ngfind, nglist, ngresolve

Examples

To create node groups called nga, ngb, and ngc, enter:

ngnew nga ngb ngc
 

ngresolve

Purpose

ngresolve - Returns a list of hosts in the specified node group.

Syntax

ngresolve [-h] | [-u | -n | -w | -d] [-G] nodegroup [nodegroup ...]

Flags

-h
Displays usage information.

-u
Writes the definition list of nodegroup. Node groups contained by nodegroup are left unresolved.

-n
Specifies that nodes are written as node numbers. This is the default.

-w
Specifies that nodes are written as fully qualified host names.

-d
Specifies that nodes are written as fully qualified IP addresses.

-G
Specifies that node groups are global.

Operands

nodegroup
Specifies the node group to be resolved.

Description

Use this command to resolve the supplied named node groups into their constituent nodes. Nodes and node groups that are in the supplied node group but do not currently exist will resolve to an empty list. If the -u flag is specified, these nonexistent nodes and node groups will be displayed.

Standard Output

A resolved list of nodes is written to standard output, one node per line.

Exit Values

0
Indicates the successful completion of the command.

nonzero
Indicates that an error occurred.

Security

You must have write access to the SDR to run this command.

Implementation Specifics

This command is part of the IBM Parallel System Support Programs (PSSP) Licensed Program (LP).

Prerequisite Information

Refer to the "Managing node groups" chapter in PSSP: Administration Guide for additional node grouping information.

Location

/usr/lpp/ssp/bin/ngresolve

Related Information

Commands: ngaddto, ngcreate, ngdelete, ngdelfrom, ngfind, nglist, ngnew

Examples

  1. To display the definition list for node group nga, enter:
    ngresolve -u nga
     
    1
    3
    ngb
    
  2. To resolve node group nga into its constituent nodes, enter:
    ngresolve nga
     
    1
    3
    6
    8
    
  3. To resolve node group nga into fully qualified host names, enter:
    ngresolve -w nga
     
    k22n01.ppd.pok.ibm.com
    k22n03.ppd.pok.ibm.com
    k22n06.ppd.pok.ibm.com
    k22n08.ppd.pok.ibm.com
    
  4. To display the IP addresses of the nodes in node group nga, enter:
    ngresolve -d nga
     
    129.40.157.65
    129.40.157.67
    129.40.157.70
    129.40.157.72
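
Resolved host lists like the ones above are handy for fanning an action out over a node group. The sketch below uses the sample -w output shown above in place of a live call; on an SP system you would instead set hosts=$(ngresolve -w nga), and replace the echo with whatever per-host command you need.

```shell
# Sample output of `ngresolve -w nga`, copied from the example above.
# On a live system, substitute: hosts=$(ngresolve -w nga)
hosts='k22n01.ppd.pok.ibm.com
k22n03.ppd.pok.ibm.com
k22n06.ppd.pok.ibm.com
k22n08.ppd.pok.ibm.com'

for h in $hosts; do
    echo "checking $h"    # stand-in for a real per-host action
done
```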
    

nlssrc

Purpose

nlssrc - Gets the status of a subsystem, a group of subsystems, or a subserver in canonical form. The status is displayed in English regardless of the installed language locale.

Syntax

nlssrc [-h host] -a

nlssrc [-h host] -g group_name

nlssrc [-h host] [-l] [-c] -s subsystem_name

nlssrc [-h host] [-l] [-c] -p subsystem_pid

The first two usages of nlssrc generate exactly the same output as lssrc. The last two usages generate the lssrc output in canonical form.

Flags

-a
Lists the current status of all defined subsystems.

-c
Requests the canonical lssrc output of the supported subsystems.

-g group_name
Specifies a group of subsystems to get status for. The command is unsuccessful if the group_name variable is not contained in the subsystem object class.

-h host
Specifies the foreign host on which this status action is requested. The local user must be running as root. The remote system must be configured to accept remote System Resource Controller requests. That is, the srcmstr daemon (see /etc/inittab) must be started with the -r flag and the /etc/hosts.equiv or .rhosts file must be configured to allow remote requests.

-l
Requests that a subsystem send its current status in long form. Long status requires that a status request be sent to the subsystem; it is the responsibility of the subsystem to return the status.

-p subsystem_pid
Specifies a particular instance of the subsystem_pid variable to get status for, or a particular instance of the subsystem to which the status subserver request is to be taken.

-s subsystem_name
Specifies a subsystem to get status for. The subsystem_name variable can be the actual subsystem name or the synonym name for the subsystem. The command is unsuccessful if the subsystem_name variable is not contained in the subsystem object class.

Operands

None.

Description

Use the nlssrc -c command to get language-independent output for supported subsystems from the lssrc command. The status is displayed in English regardless of the installed language locale. If the -c flag is not present, the nlssrc command invokes the lssrc command, which uses the daemon's locale.

Location

/usr/sbin/rsct/bin/nlssrc

Related Information

PSSP commands: hagsd

AIX commands: lssrc

Refer to the "System Resource Controller Overview" in AIX System User's Guide: Operating System and Devices for an explanation of subsystems, subservers, and the System Resource Controller.

Refer to PSSP: Diagnosis Guide for diagnosis information.

Examples

  1. To get nlssrc output from the HAGS subsystem in English, enter:
    nlssrc -c -ls hags
  2. The following examples show sample output for the same information.

    nlssrc -ls hags (locale-dependent)
     
    Subsystem         Group      PID     Status
    hags              hags       6334    active
    2 locally-connected clients.  Their PIDs:
    15614 23248
    HA Group Services domain information:
    Domain established by node 5
    Number of groups known locally: 1
                         Number of          Number of local
    Group Name           providers          providers/subscribers
    ha_em_peers             7                   1        0

    nlssrc -ls hags -c (canonical form)
     
    Number of local clients: 2
    PIDs: 15614 23248
    HAGS domain information:
    Domain established by node 5.
    Number of known local groups: 1
    Group Name: ha_em_peers
         Providers: 7
         Local Providers: 1
         Local Subscribers: 0
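
Because the canonical (-c) output is locale-independent, it is stable enough to parse in scripts. The sketch below pulls the provider count out of the canonical form shown above; the sample text stands in for a live nlssrc -ls hags -c call, which only works on a system running HAGS.

```shell
# Canonical-form sample copied from the example above. On a live system:
#   out=$(nlssrc -ls hags -c)
out='Number of local clients: 2
Group Name: ha_em_peers
     Providers: 7
     Local Providers: 1
     Local Subscribers: 0'

# The "Providers:" line has that token as its first field; "Local Providers:"
# does not, so this picks out only the total provider count.
providers=$(printf '%s\n' "$out" | awk '$1 == "Providers:" {print $2}')
echo "providers: $providers"
```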

node_number

Purpose

node_number - Obtains the node number attribute for a node from the ODM.

Syntax

node_number [-h] [-new]

Flags

-h
Displays the node_number command syntax.

-new
Sets return code to -1 if node_number is equal to a null string.

Operands

None.

Description

This command is used by the PSSP software to determine the node number of an SP node. The PSSP installation process places the node number in the ODM on the node. This command will retrieve that data.

Standard Output

The node number obtained is printed to standard output.

Standard Error

Any errors from the ODM query will be printed to standard error.

Exit Values

0
Indicates the successful completion of the command.

1
Indicates that an error occurred.

Implementation Specifics

This command is part of the IBM Parallel System Support Programs (PSSP) Licensed Program (LP).

Location

/usr/lpp/ssp/install/bin/node_number

Examples

To obtain the node number of an SP node, issue the following on that node:

node_number
 
5
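
The -new flag turns a null node number into a nonzero return code, which makes the command easy to use as a guard in install scripts. A sketch follows; a stub stands in for the real /usr/lpp/ssp/install/bin/node_number so the logic runs off-node, and the sample value 5 matches the example above.

```shell
# Stub: pretend the ODM holds node number 5. On a real SP node, delete this
# and invoke /usr/lpp/ssp/install/bin/node_number instead.
node_number() { echo 5; }

# With -new, a null node number yields a nonzero return code.
if n=$(node_number -new) && [ -n "$n" ]; then
    echo "node number: $n"
else
    echo "node number not set in the ODM"
fi
```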

nodecond

Purpose

nodecond - Conditions an SP processing node.

Syntax

nodecond [-G] [-s] [[-n] [-p | -P] | -a] frame_ID slot_ID

Flags

-a
Returns a list of physical location codes for all Ethernet, token ring, FDDI, and switch adapters for the SP Switch2 on the IBM eServer pSeries 690 server, along with all hardware addresses for the Ethernet adapters. A network boot is not done. This flag is valid only for p690 servers and their logical partitions.

-G
Specifies Global mode. With this flag, the node to be conditioned can be outside of the current system partition.

-n
Obtains the Ethernet hardware address instead of doing a network boot.

-p
Performs a verbose ping from the adapter to the p690's boot/install server instead of doing a network boot. Output is written to standard output. This option can be used with the -n flag. This flag is valid only for p690 servers and their logical partitions.

-P
Performs a quiet ping from the adapter to the p690's boot/install server instead of doing a network boot. If specified alone, the return code value of this command indicates ping success or failure. If specified with the -n flag, a return code value is appended to the command output. This flag is valid only for p690 servers and their logical partitions.

-s
Specifies that an aixterm or s1term will not be opened after a node is network booted in diagnostic mode. In addition, this flag also performs a slow boot (disables fast IPL mode) for MCA nodes.

Operands

frame_ID
Specifies the number of the frame containing the node to be conditioned.

slot_ID
Specifies the number of the slot containing the node to be conditioned.

Description

Node conditioning is the administrative procedure used to obtain the Ethernet hardware address of an SP processing node or to initiate a network boot of an SP processing node. The Ethernet hardware address is required by SP System Management for the proper configuration of the system. A network boot of the node is required by the System Management installation procedures.

By default, the nodecond command initiates a network boot of the node specified by the frame_ID and slot_ID operands. The specified node must be in the current system partition unless the -G flag is also specified. The frame ID is any configured frame number and the slot ID is taken from the set 1 through 16. The command completes when the node has booted to the point of configuring its console. Using -n, the nodecond command obtains the Ethernet hardware address of the processing node, specified by the frame_ID and slot_ID operands. The hardware address is written to standard output and the node is left powered off with the keylock in the Normal position. Using -s, the nodecond command runs with fast IPL disabled, allowing more diagnostic information to be collected. After this slow boot, s1term will not open as it does by default.

As the command executes, it writes status information indicating its progress to /var/adm/SPlogs/spmon/nc/nc.frame_ID.slot_ID.

This command uses the SP Hardware Monitor. Therefore, the user must be authorized to access the Hardware Monitor subsystem and, for the frame specified to the command, the user must be granted Virtual Front Operator Panel (VFOP) and S1 (serial port on the node that you can access via the s1term command) permission. Since the Hardware Monitor subsystem uses SP authentication services, the user must execute the k4init command prior to executing this command. Alternatively, site-specific procedures can be used to obtain the tokens that are otherwise obtained by k4init.

Instead of performing a network boot, the nodecond command can be used to verify the operation of the SP Ethernet administrative local area network (LAN) adapter for this node by performing a ping test across the adapter to the node's boot/install server. The -p flag is the recommended usage and, if specified, the results of the ping test are written to standard output. If -P is specified, the ping result is returned in the return code from this command. This option can be combined with the -n flag to obtain the hardware Ethernet address in order to consolidate operations. This command may take several minutes to complete since it must boot the node to the Open Firmware prompt in order to perform the operation. The node is left powered off after the command completes.

Another operation that the nodecond command can perform, instead of a network boot, is to return a list of the physical location codes for all Ethernet, token ring, FDDI, and switch adapters for the SP Switch2 installed on the node. Included with the Ethernet physical location codes are the corresponding hardware Ethernet addresses for those adapters. This command may take several minutes to complete since it must boot the node to the Open Firmware prompt in order to perform the operation. The node is left powered off after the command completes.

Files

/var/adm/SPlogs/spmon/nc
Directory containing nodecond status files.

Exit Values

0
Indicates successful completion of the command.

1
Indicates that an error occurred.

For the -p and -P flags, the values are:

0
Indicates the ping was successful.

1
Indicates the ping failed.

Security

You must have Hardware Monitor "VFOP" access and serial access to run this command.

Restrictions

The -p, -P, and -a flags are only supported for p690 servers and their logical partitions.

Location

/usr/lpp/ssp/bin/nodecond

Related Information

Commands: hmcmds, hmmon, s1term

Examples

  1. To fetch the Ethernet hardware address of the node in frame 5 in slot 1 and save it in a file, enter:
    nodecond -n 5 1 > eth_addr.5.1
    
  2. To network boot the node in frame 7 in slot 16, enter:
    nodecond 7 16
    
  3. To obtain the physical location codes of the adapters in a p690 server attached as frame 2, enter:
    nodecond -a 2 1

    You should receive output similar to the following:

    Ethernet U1.1-P2/E1 0004acec064d
    FDDI U1.1-P2-I3/Q1 N/A
    Ethernet U1.1-P2-I1/E1 0060949dd7ae
    Token Ring U1.1-P2-I5/T1 N/A
    SP Switch2 U1.1-P2-W1 N/A

    Note:
    For more useful output, use the spadaptr_loc command instead of nodecond -a.
  4. Use the following command to test the network connection between a p690 logical partition (LPAR) and its default route, as defined in the System Data Repository (SDR). In this example, the test is performed on the second LPAR in frame 8. The verbose ping test uses the SP Ethernet administrative LAN adapter, as defined in the SDR.
    nodecond -p 8 2

    You should receive output similar to the following:

    Ping successful
  5. Use the following command to test the network connection between a p690 LPAR and its default route, as defined in the System Data Repository (SDR). In this example, the test is performed on the second LPAR in frame 8. The quiet ping test uses the SP Ethernet administrative LAN adapter, as defined in the SDR.
    nodecond -P 8 2
    echo $?

    If the value 0 is returned, the ping was successful. If the value 1 is returned, the ping failed.
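
The quiet form (-P) is the one to use from scripts, since the result comes back purely in the return code. The sketch below checks several slots in one frame; a stub stands in for nodecond so the loop runs anywhere, and the frame and slot numbers are illustrative.

```shell
# Stub standing in for /usr/lpp/ssp/bin/nodecond -- always reports success
# here. On a real SP system, remove this function.
nodecond() { return 0; }

frame=8
for slot in 1 2 3; do
    if nodecond -P "$frame" "$slot"; then
        echo "frame $frame slot $slot: ping ok"
    else
        echo "frame $frame slot $slot: ping FAILED"
    fi
done
```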

nrunacct

Purpose

nrunacct - Runs on each node every night to merge raw accounting data from the login, fee, disk, print, and process subsystems.

Syntax

nrunacct yyyymmdd [SETUP | WTMPFIX | CONNECT1 | CONNECT2 | PROCESS |
MERGE | FEES | DISK | QUEUEACCT | CMS | USEREXIT | CLEANUP]

Flags

SETUP
Moves the active accounting files to working files and restarts the active files.

WTMPFIX
Verifies the integrity of the wtmp file and corrects dates if necessary.

CONNECT1
Calls the acctcon1 command to produce connect session records.

CONNECT2
Converts connect session records into total accounting records (tacct.h format).

PROCESS
Converts process accounting records into total accounting records (tacct.h format). Filters out the records that belong to processes that were part of a job that had exclusive use of the node and appends a total accounting fee record to the fee file for each of these jobs. Records are identified as belonging to processes that were part of a job that had exclusive use of the node, only if exclusive use accounting was enabled at the time the job ran.

MERGE
Merges the connect and process total accounting records.

FEES
Converts accounting fee file records into total accounting records (tacct.h format) and merges them with the connect and process total accounting records.

DISK
Merges disk accounting records with connect, process, and fee total accounting records.

QUEUEACCT
Sorts the queue (printer) accounting records, converts them into total accounting records (tacct.h format), and merges them with other total accounting records.

CMS
Produces command summaries and updates the file that records the date each user last logged into the node.

USEREXIT
If the /var/adm/nsiteacct shell file exists, calls it at this point to perform site-dependent processing.

CLEANUP
Deletes temporary files and exits.

Operands

yyyymmdd
Specifies the date when accounting is to be rerun.

Description

The nrunacct command is the main daily accounting shell procedure, for each individual node. Normally initiated by the cron daemon, the nrunacct command merges the day's raw connect, fee, disk, queuing system (printer), and process accounting data files for the node.

This command has two parameters that must be entered from the keyboard should you need to restart the nrunacct procedure. The date parameter, yyyymmdd, enables you to specify the date for which you want to rerun the node accounting. The state parameter enables a user with administrative authority to restart the nrunacct procedure at any of its states. For more information on restarting nrunacct procedures and on recovering from errors, see "Restart Procedure."

The nrunacct command protects active accounting files and summary files in the event of runtime errors, and records its progress by writing descriptive messages into the /var/adm/acct/nite/activeYYYYMMDD file. When the nrunacct procedure encounters an error, it sends mail to users root and adm, and writes standard errors to /var/adm/acct/nite/accterr.

The nrunacct procedure also creates two temporary files, lock and lock1, in the directory /var/adm/acct/nite, which it uses to prevent two simultaneous calls to the nrunacct procedure. It uses the lastdate file (in the same directory) to prevent more than one invocation per day.

The nrunacct command breaks its processing into separate, restartable states. As it completes each state, it writes the name of the next state in the /var/adm/acct/nite/statefileYYYYMMDD file.

Restart Procedure

To restart the nrunacct command after an error, first check the /var/adm/acct/nite/activeYYYYMMDD file for diagnostic messages, then fix any damaged data files, such as pacct or wtmp. Remove the lock files and the lastdate file (all in the /var/adm/acct/nite directory) before restarting the nrunacct command. You must specify the YYYYMMDD parameter if you are restarting the nrunacct command. It specifies the date for which the nrunacct command is to rerun accounting. The nrunacct procedure determines the entry point for processing by reading the /var/adm/acct/nite/statefileYYYYMMDD file. To override this default action, specify the desired state on the nrunacct command line.

It is not usually a good idea to restart the nrunacct command in the SETUP state. Instead, perform the setup actions manually and restart accounting with the WTMPFIX state, as follows:

/usr/lpp/ssp/bin/nrunacct YYYYMMDD WTMPFIX

If the nrunacct command encounters an error in the PROCESS state, remove the last ptacct file, because it is incomplete.
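
The lock discipline described above is plain shell and can be mimicked to see how nrunacct guards against simultaneous runs. A sketch under a scratch directory follows; the paths are stand-ins for /var/adm/acct/nite and the messages are illustrative.

```shell
# Scratch stand-in for /var/adm/acct/nite
NITE=$(mktemp -d)

if [ -f "$NITE/lock" ] || [ -f "$NITE/lock1" ]; then
    msg="nrunacct appears to be running; not starting"
else
    touch "$NITE/lock" "$NITE/lock1"   # what nrunacct does at startup
    msg="locks acquired; accounting run would proceed"
    rm -f "$NITE/lock" "$NITE/lock1"   # the CLEANUP state removes them
fi
echo "$msg"
rmdir "$NITE"
```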

Files

/var/adm/wtmp
Log in/log off history file.

/var/adm/pacct*
Process accounting file.

/var/adm/acct/nite/dacct
Disk usage accounting file.

/var/adm/qacct
Active queue accounting file.

/var/adm/fee
Record of fees charged to users.

/var/adm/acct/nite/ptacct*.mmdd
Summary version of pacct files.

/var/adm/acct/nite/activeYYYYMMDD
The nrunacct message file.

/var/adm/acct/nite/lock*
Prevents simultaneous invocation of nrunacct.

/var/adm/acct/nite/lastdate
Contains last date nrunacct was run.

/var/adm/acct/nite/statefileYYYYMMDD
Contains current state to process.

Security

You must have root privilege to run this command.

Restrictions

Access Control: This command should grant execute (x) access only to members of the adm group.

Location

/usr/lpp/ssp/bin/nrunacct

Related Information

Commands: acctcms, acctcom, acctcon1, acctcon2, acctmerg, accton, acctprc1, acctprc2, crontab, fwtmp

Daemons: cron

Subroutines: acct

File format: acct, failedlogin, tacct, wtmp

The System Accounting information found in AIX System Management Guide

Examples

  1. To restart a node's system accounting procedures for a specific date, enter a command similar to the following:
    nohup /usr/lpp/ssp/bin/nrunacct 19950601 2>> \
          /var/adm/acct/nite/accterr &
    

    This example restarts nrunacct for the day of June 1 (0601), 1995. The nrunacct command reads the file /var/adm/acct/nite/statefile19950601 to find out the state with which to begin. The nrunacct command runs in the background (&), ignoring all INTERRUPT and QUIT signals (nohup). Standard error output (2) is added to the end (>>) of the /var/adm/acct/nite/accterr file.

  2. To restart a node's system accounting procedures for a particular date at a specific state, enter a command similar to the following
    nohup /usr/lpp/ssp/bin/nrunacct 19950601 FEES 2>> \
          /var/adm/acct/nite/accterr &
    

    This example restarts the nrunacct command for the day of June 1 (0601), 1995, starting with the FEES state. The nrunacct command runs in the background (&), ignoring all INTERRUPT and QUIT signals (the nohup command). Standard error output (2) is added to the end (>>) of the /var/adm/acct/nite/accterr file.

