Command and Technical Reference, Volume 1

cksumvsd

Purpose

cksumvsd - Views and manipulates the IBM Virtual Shared Disk component's checksum parameters.

Syntax

cksumvsd [-s] [-R] [-i | -I]

Flags

-s
Shows IP checksum counters only.

-R
Resets IP checksum counters.

-i
Calculates IP checksum on all IBM Virtual Shared Disk remote messages.

-I
Indicates not to calculate IP checksum on all IBM Virtual Shared Disk remote messages.

If no flags are specified, the current settings of all IBM Virtual Shared Disk checksum parameters and counters are displayed.

Operands

None.

Description

The IBM Virtual Shared Disk IP device driver can calculate and send checksums on remote packets it sends. It also can calculate and verify checksums on remote packets it receives. The cksumvsd command is used to tell the device driver whether to perform checksum processing. The default is no checksumming.

Issuing cksumvsd -i turns on checksumming on the node on which it is run. cksumvsd -i must be issued on all virtual shared disk nodes in the system partition, or the IBM Virtual Shared Disk software will stop working properly on the system partition. If node A has cksumvsd -i (checksumming turned on) and node B has cksumvsd -I (checksumming turned off, the default), then A will reject all messages from B (both requests and replies), since A's checksum verification will fail on all of B's messages. The safe way to run cksumvsd -i is to make sure that all virtual shared disks on all nodes are in the STOPPED or SUSPENDED state, issue cksumvsd -i on all nodes, and then resume the needed virtual shared disks on all nodes.
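As an illustration only, on a system where the dsh command is used to issue a command on every node in the system partition, and assuming the suspendvsd -a and resumevsd -a invocations are appropriate for your configuration (verify them against their own reference pages), the sequence might look like this:

    dsh -a "suspendvsd -a"
    dsh -a "cksumvsd -i"
    dsh -a "resumevsd -a"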

In checksumming mode, the IBM Virtual Shared Disk IP device driver keeps a counter of the number of packets received with good checksums, and the number received with problem checksums. cksumvsd and statvsd both display these values (statvsd calls cksumvsd -s).

cksumvsd dynamically responds to the configuration of the IBM Virtual Shared Disk IP device driver loaded in the kernel. Its output and function may change if the IBM Virtual Shared Disk IP device driver configuration changes.

Files

/dev/kmem
cksumvsd reads and writes /dev/kmem to exchange information with the IBM Virtual Shared Disk IP device driver in the kernel.

Security

You must be in the AIX bin group to run this command.

You must have write access to the SDR to run this command.

Prerequisite Information

PSSP: Managing Shared Disks

Related Information

Commands: cfgvsd

Examples

  1. To display the IBM Virtual Shared Disk checksum settings and counter values, enter:
    cksumvsd
    

    You should receive output similar to the following:

    VSD cksum: current values:
    do_ip_checksum: 0
    ipcksum_cntr:   350 good,       0 bad,  0 % bad.
    

    The IBM Virtual Shared Disk checksumming is currently turned off on the node. Prior to this, checksumming was turned on and 350 IBM Virtual Shared Disk remote messages were received, all with good checksumming.

  2. To turn IBM Virtual Shared Disk checksumming on and display counters, enter:
    cksumvsd -i
    

    You should receive output similar to the following:

    VSD cksum: current values:
    do_ip_checksum: 0
    ipcksum_cntr:   350 good,       0 bad,  0 % bad.
    VSD cksum: new values:
    do_ip_checksum: 1
    ipcksum_cntr:   350 good,       0 bad,  0 % bad.
    

    The command displays old and new values. As before, the node has received 350 IBM Virtual Shared Disk remote messages with good checksums.

  3. To display only the IBM Virtual Shared Disk checksum counters, enter:
    cksumvsd -s
    

    You should receive output similar to the following:

    ipcksum_cntr:   350 good,       0 bad,  0 % bad.
    

cmonacct

Purpose

cmonacct - Performs monthly or periodic SP accounting.

Syntax

cmonacct [number]

Flags

None.

Operands

number
Specifies which month or other accounting period to process. The default is the current month.

Description

The cmonacct command performs monthly or periodic SP system accounting. The intervals are set in the crontab file. You can set the cron daemon to run the cmonacct command once each month or at some other specified time period. By default, if accounting is enabled for at least one node, cmonacct executes on the first day of every month.

The cmonacct command creates summary files under the /var/adm/cacct/fiscal directory and restarts summary files under the /var/adm/cacct/sum directory, the cumulative summary to which daily reports are appended.
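For example, a crontab entry on the acct_master node similar to the following (illustrative only; adjust the schedule to your site's accounting period) runs cmonacct at 5:00 a.m. on the first day of each month:

    0 5 1 * * /usr/lpp/ssp/bin/cmonacct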

Security

You must have root privilege to run this command.

Location

/usr/lpp/ssp/bin/cmonacct

Examples

  1. To produce reports for the current month, enter:
    cmonacct
    
  2. To produce reports for fiscal period 12, enter:
    cmonacct 12
    

config_spsec

Purpose

config_spsec - Configures SP Services into the DCE database. Services which use DCE as an authentication method are required to have certain information entered in the CDS registry and Security Server to perform client/server authentication.

Syntax

config_spsec [-h] [-v] [-c | -p partition_name] [-r {SP | WS} dce_hostname]

Flags

-h
Prints the command syntax to standard output.

-v
Prints progress messages to standard output. This flag is primarily used for debugging purposes.

-c
Configures only those principals specific to the control workstation. This flag is required when running this command prior to the node number being available in the ODM.

-p partition_name
Configures only the partition_name-sensitive services.

-r {SP | WS}
Specifies that the command is being run remotely from the machine being configured. Valid values are SP or WS. Use SP to state that the target machine is an RS/6000 SP. Use WS to specify that the target machine is a standalone workstation (that is, not an RS/6000 SP). If you specify -r, you must also specify a dce_hostname.

Operands

dce_hostname
Specifies the DCE host name of the remote system being configured into the DCE database. This operand is optional, but is required with the -r SP flag.
Note:
When -r is specified, this operand must be the control workstation DCE host name. Both the control workstation and the nodes will be configured if they are not already configured.

Description

The config_spsec command enters data into the CDS registry and Security Server database. You must be logged into DCE with cell administrator authority to use the command. This command reads from two files, which specify groups, service principals, and members. These files contain the information necessary for each service to be configured to use DCE authentication. There is a default file (/usr/lpp/ssp/config/spsec_defaults) and an overrides file (/spdata/sys1/spsec/spsec_overrides). The spsec_defaults file is shipped with the product and should not be altered by users. The spsec_overrides file is provided to allow users to modify principal, group, and organization names. The program reads the two files and creates all the necessary entries in the CDS registry and Security Server. If the information is already present, an appropriate message is issued and logged in the log file (this is not an error).

For syntax errors within either file, an error message is issued and logged, and processing halts. Processing of both files occurs prior to any changes being made to any DCE database.

The command prompts for an ID with cell administrator authority, which will be added to the spsec-admin group. The command also prompts for a password. Since the user is required to be logged into the DCE cell as an administrator, the password is that of the cell administrator. This password is required by the config.dce program (called from within this program).

To run the command remotely, use the -r flag. Specifying -r allows an administrator to run the command from one machine on behalf of another machine. Using -r SP requires that the SP_NAME environment variable be set to the short host name of an appropriate SDR daemon. When -r SP is specified, the SP_NAME environment variable must be set to the short host name of the SDR daemon on the SP being configured. When -r WS is specified, SP_NAME can specify the short host name of any working SDR. Additionally, since this command depends on the two input files listed in the "Files" section, the administrator must ensure that these files are copied from the SP control workstation, the location for the master copies, to the machine running the command.
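As an illustration only (mySPcws is a hypothetical control workstation name, as in the examples below, and any file transfer method your site permits can be used in place of rcp), the input files could be copied before running the command remotely:

    rcp mySPcws:/usr/lpp/ssp/config/spsec_defaults /usr/lpp/ssp/config/
    rcp mySPcws:/spdata/sys1/spsec/spsec_overrides /spdata/sys1/spsec/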

The results of this command depend on both the input parameters and where the command is run. If the input parameters include -r WS, principals for a standalone workstation are configured. If the input parameters include -r SP, principals and accounts for an SP control workstation and nodes are configured. If the command is run without the -r flag, the command will determine where it is running (on an SP or workstation) and create the appropriate principals and accounts.

Files

input:
/usr/lpp/ssp/config/spsec_defaults

/spdata/sys1/spsec/spsec_overrides

output:
Log file created: /var/adm/SPlogs/auth_install/log

CDS registry and Security Server database updated

Exit Values

0
Indicates successful completion of the command.

1
Indicates that errors occurred during the execution of this program. Review any reported errors either on the console or in the Log file.

An unsuccessful run of this command (depending on where it encountered a problem) may leave service principals in an incomplete state. Some service principals, groups, and directories may not be created or updated. This prevents services from operating correctly in a DCE environment. After fixing the cause of the problem, rerun config_spsec with the same parameters to complete the configuration.

Security

You must be logged into the cell with cell administrator authority because creating accounts and groups requires that authority.

Location

/usr/lpp/ssp/bin/config_spsec

Related Information

Commands: rm_spsec

DCE Administration publications for AIX.

Examples

  1. To configure all service principals and accounts and to set an initial key for each service as a DCE ID with cell administrator authority, on the control workstation of the machine being configured, enter:
    config_spsec -v
    
  2. To configure control workstation only services (required when the SDR is not available during an initial install) as a DCE ID with cell administrator authority, on the control workstation of the machine being configured, enter:
    config_spsec -v -c
    
  3. To configure system partition my_par services, as a DCE ID with cell administrator authority, on the control workstation of the machine being configured, enter:
    config_spsec -v -p my_par
  4. To configure all service principals and accounts for the SP from a remote workstation, as a DCE ID with cell administrator authority, enter:
    export SP_NAME=mySPcws
    config_spsec -v -r SP mySPcws.abc.com

cprdaily

Purpose

cprdaily - Creates an ASCII report of the previous day's accounting data.

Syntax

cprdaily [-c] [[-l] [yyyymmdd]]

Flags

-c
Reports exceptional resource usage by command. This flag may be used only on the current day's accounting data.

-l
Reports exceptional usage by login ID for the date specified by the yyyymmdd operand, if reporting for a day other than the current day is desired. (This is lowercase l, as in list.)

Operands

yyyymmdd
Specifies the date for exceptional usage report if other than the current date.

Description

This command is called by the crunacct command to format an ASCII report of the previous day's accounting data for all nodes. The report resides in the /var/adm/cacct/sum/rprtyyyymmdd file, where yyyymmdd specifies the year, month, and day of the report.
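For example, the report for March 16, 1994 could be viewed with any pager (illustrative only):

    more /var/adm/cacct/sum/rprt19940316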

Security

You must have root privilege to run this command.

Location

/usr/lpp/ssp/bin/cprdaily

Examples

  1. To print the usual daily accounting reports (Daily Report, Daily Usage Report, Daily Command Summary, Monthly Total Command Summary, Last Login Report), enter:
    cprdaily
    
  2. To print a Command Exception and Login Exception Report, enter:
    cprdaily -c -l
    
  3. To print a Login Exception Report for March 16, 1994, enter:
    cprdaily -l 19940316
    

cptuning

Purpose

cptuning - Copies a file to /tftpboot/tuning.cust.

Syntax

cptuning -h | file_name

Flags

-h
Displays usage information for this command (syntax message). If the command is issued with the -h flag, the syntax description is displayed to standard output and no other action is taken (even if other valid flags are entered along with the -h flag).

Operands

file_name
Specifies the name of a file to copy to /tftpboot/tuning.cust. If the file_name begins with a slash (/), the name is considered to be a fully qualified file name. Otherwise, the file name is considered to be in the /usr/lpp/ssp/install/config directory.

Description

Use this command to copy the specified file to the /tftpboot/tuning.cust file. IBM ships the following four predefined tuning parameter files in /usr/lpp/ssp/install/config:

tuning.development
Contains initial performance tuning parameters for a typical development system.

tuning.scientific
Contains initial performance tuning parameters for a typical scientific system.

tuning.commercial
Contains initial performance tuning parameters for a typical commercial system.

tuning.default
Contains initial performance tuning parameters for a general SP system.

This command is intended for use in copying one of these files to /tftpboot/tuning.cust on the control workstation for propagation to the nodes in the SP. It can also be used on an individual node to copy one of these files to /tftpboot/tuning.cust.
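For example, after copying the commercial tuning file on the control workstation, the result could be verified as follows (an illustrative check only, not required by the command):

    cptuning tuning.commercial
    diff /usr/lpp/ssp/install/config/tuning.commercial /tftpboot/tuning.cust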

Files

Upon successful completion, the /tftpboot/tuning.cust file is updated.

Standard Output

When the command completes successfully, a message to that effect is written to standard output.

Standard Error

This command writes error messages (as necessary) to standard error.

Exit Values

0
Indicates the successful completion of the command.

1
Indicates that an error occurred.

If the command does not run successfully, it terminates with an error message and a nonzero return code.

Security

You must have root privilege to run this command.

Implementation Specifics

This command is part of the IBM Parallel System Support Programs (PSSP) Licensed Program (LP).

Location

/usr/lpp/ssp/bin/cptuning

Related Information

PSSP Files: tuning.commercial, tuning.default, tuning.development, tuning.scientific

PSSP: Installation and Migration Guide

Examples

  1. To copy the /tmp/my-tuning-file file to the /tftpboot/tuning.cust file, enter:
    cptuning /tmp/my-tuning-file
    
  2. To copy the /usr/lpp/ssp/install/config/tuning.commercial file to the /tftpboot/tuning.cust file, enter:
    cptuning tuning.commercial
    

create_dcehostname

Purpose

create_dcehostname - Populates the System Data Repository (SDR) with DCE hostnames for each node in a partition set to use DCE.

Syntax

create_dcehostname [-h] [-v]

Flags

-h
Prints command syntax to standard output.

-v
Prints out progress messages to standard output.

Operands

None.

Description

The create_dcehostname command must be run on the control workstation. It queries the DCE Security registry for information about nodes which may have already been configured and have current dcehostnames. For those entries found, it will update the SDR Node object's dcehostname attribute with this information. For all nodes which were not found in the DCE Security registry and do not already have a dcehostname attribute assigned, it will assign the node's reliable hostname to the attribute. Additionally, this program will update the SDR's SP object with the control workstation's dcehostname in the same manner it did for the nodes. All control workstation IP addresses will be used to search the DCE Security registry to determine if the control workstation has a defined DCE hostname in the registry. Since it is required that the control workstation be configured into the DCE cell, one of the IP addresses will be found in the registry.
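As an illustration only (assuming the SDRGetObjects command is available to query the SDR and that attribute names can be listed to limit the output), the resulting attribute values could be displayed after the command completes:

    SDRGetObjects Node node_number reliable_hostname dcehostname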

Standard Input

CDS Registry and Security Server

SDR

Standard Output

Log file created: /var/adm/SPlogs/auth_install/log

Exit Values

0
Indicates successful completion of the command.

1
Indicates that an error occurred during the execution of the command. Review any reported errors either on the console or in the Log file.

The node boot process requires the DCE hostname. Without this information, authentication will not be set up properly, if at all, and the node may not be accessible by some services or users.

Security

You must have root privilege and write access to the SDR for all partitions to run this command.

Location

/usr/lpp/ssp/bin/create_dcehostname

Related Information

DCE Administration publications for AIX.

Examples

To create a DCE hostname for all defined nodes in the SDR, enter:

create_dcehostname -v

create_keyfiles

Purpose

create_keyfiles - Creates DCE keytab objects and stores them into specified keyfiles on the local file system. Services which use DCE as an authentication method will use these keys to log into DCE.

Syntax

create_keyfiles [-h] [-v] [-c | -p partition_name]

Flags

-h
Prints out syntax of command to standard output.

-v
Prints progress messages to standard output (for debugging).

-c
Creates keyfiles for only those principals specific to the control workstation. This option is required when running this command before the node number is available in the ODM and the SDR is not available.

-p partition_name
Creates keyfiles only for the partition_name principals.

Operands

None.

Description

The create_keyfiles command reads from two files, a default file (/usr/lpp/ssp/config/spsec_defaults) and an override file (/spdata/sys1/spsec/spsec_overrides). It will process these files and generate an effective service principal based on the attributes specified for each service and its location (node, control workstation, or non-SP workstation). It will create keytab objects based on these effective service principals and store the keys in keyfiles located in the /spdata/sys1/keyfiles directory.
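For example, the generated keyfiles could be listed after the command completes (illustrative only; the subdirectory names depend on the services configured for the machine):

    ls -R /spdata/sys1/keyfiles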

Standard Input

/usr/lpp/ssp/config/spsec_defaults

/spdata/sys1/spsec/spsec_overrides

Standard Output

Log file created: /var/adm/SPlogs/auth_install/log.

Keyfiles located in /spdata/sys1/keyfiles (with subdirectories based on the service name).

Exit Values

0
Indicates successful completion of the command.

1
Indicates that errors occurred during the execution of the command. Review any reported errors either on the console or in the Log file.

Security

This command requires write access to the /spdata/sys1 file system and read access to the two configuration files specified in the description. You must also be root with default DCE credentials.

Location

/usr/lpp/ssp/bin/create_keyfiles

Related Information

Commands: rm_spsec

Files: /usr/lpp/ssp/config/spsec_defaults, /spdata/sys1/spsec/spsec_overrides

DCE Administration publications for AIX (relating to keyfiles and keytab object creation)

IBM Distributed Computing Environment for AIX: Administration Commands Reference

Examples

  1. To create keyfiles for all services designated to run on the local machine as root user, enter:
    create_keyfiles -v
    
  2. To create control workstation only service principals and the default partition service principals (used during initial install when the SDR is not available) as root user, enter:
    create_keyfiles -v -c
    
  3. To create keyfiles for partition my_par services, enter:
    create_keyfiles -v -p my_par
    

create_krb_files

Purpose

create_krb_files - Creates the necessary krb_srvtab and tftp access files on the Network Installation Management (NIM) master for Kerberos Version 4 authentication.

Syntax

create_krb_files [-h]

Flags

-h
Displays usage information. If the command is issued with the -h flag, the syntax description is displayed to standard output and no other action is taken.

Operands

None.

Description

Use this command on a boot/install server (including the control workstation). On the server, it creates the Kerberos Version 4 krb_srvtab file for each boot/install client of that server and also updates the /etc/tftpaccess.ctl file on the server.
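For example, the updated tftp access control file can be examined after the command completes, and the per-client krb_srvtab files can be checked in the directory your boot/install server uses for them (assumed here to be /tftpboot; verify the location for your PSSP level):

    cat /etc/tftpaccess.ctl
    ls -l /tftpboot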

Standard Error

This command writes error messages (as necessary) to standard error.

Exit Values

0
Indicates the successful completion of the command.

-1
Indicates that an error occurred.

Security

You must have root privilege to run this command.

Implementation Specifics

This command is part of the IBM Parallel System Support Programs (PSSP) Licensed Program (LP).

Location

/usr/lpp/ssp/bin/create_krb_files

Related Information

Commands: setup_server

Examples

To create or update the krb_srvtab and tftp access files on a boot/install server, enter the following command on that server:

create_krb_files

createhsd

Purpose

createhsd - Creates one hashed shared disk that encompasses two or more virtual shared disks.

Syntax

createhsd
-n {node_list | ALL} -s size_in_MB
-g volume_group_name -t stripe_size_in_KB
[-T lp_size_in_MB] [{-c vsd_per_node | -L} [-A]]
[-S] [-o cache | nocache] [-m mirror_cnt]
[-d hsd_name] [-l lv_name_prefix] [-k vsd_type] [-x]

Flags

Note:
Some examples shown in this list do not contain enough flags to be executable. They are shown in an incomplete form to illustrate specific flags.

-n node_list
Specifies the nodes on which you are creating virtual shared disks. The backup node cannot be the same as the primary node. For VSD, the node list is:
[P/S] : disk_list1+disk_list2/

For CVSD, the node list is:

[S1/S2] : disk_list1+disk_list2/

"P" specifies the primary server node for serially accessed shared disks, "S" specifies the backup (secondary) server node for serially accessed shared disks, and S1 and S2 specify the server nodes for concurrently accessed shared disks. disk_list1 is the list of local physical disks, or vpaths, for the logical volume on the primary. In other words, this list can be made up of hdiskx, hdisky,... or vpathx, vpathy,....

Notes:

  1. Vpaths are available only if the "Subsystem Device Driver" is installed. Vpaths provide "virtual paths" to the same physical volume.

  2. Hdisks and vpaths cannot both be specified in the same list.

disk_list1+disk_list2 is the list of local physical disks or vpaths in the volume group on the primary, if you want to have more disks in the volume group than are needed for the logical volume. The sequence in which nodes are listed determines the names given to the virtual shared disks. For example:

createvsd -n 1,6,4 -v PRE

(with the vsd_prefix PRE) creates virtual shared disks PRE1n1 on node 1, PRE2n6 on node 6, and PRE3n4 on node 4.

To create a volume group that spans hdisk2, hdisk3, and hdisk4 on node 1, with a backup on node 3, enter:

createvsd -n 1/3:hdisk2,hdisk3,hdisk4/ -v DATA

This command creates:

To create volume groups just like that one on nodes 1, 2, and 3 of a system with backup on nodes 4, 5, and 6 of the same system, enter:

createvsd -n 1/4:hdisk1,hdisk2,hdisk3/,2/5:hdisk5,hdisk6, \
          hdisk7/,3/6:hdisk2,hdisk4,hdisk6/ -v DATA

This command is shown on two lines here, but you must enter it without any spaces between the items in node_list.

The command creates:

To create a virtual shared disk where the logical volume spans only two of the physical disks in the volume group, enter:

createvsd -n 1/3:hdisk1,hdisk2+hdisk3/ -v DATA

This command creates the virtual shared disk DATA1n1 with logical volume lvDATA1n1 spanning hdisk1 and hdisk2 in the volume group DATA, which includes hdisk1, hdisk2, and hdisk3. It exports the volume group DATA to node 3.

If a volume group is already created and the combined physical hdisk lists contain disks that are not needed for the logical volume, those hdisks are added to the volume group. If the volume group has not already been created, createvsd creates a volume group that spans hdisk_list1+hdisk_list2.

Backup nodes cannot use the same physical disk as the primary does to serve virtual shared disks.

ALL specifies that you are creating virtual shared disks on all nodes in the system or system partition. No backup nodes are assigned if you use this operand. The virtual shared disks will be created on all the physical disks attached to the nodes in node_list (you cannot specify which physical disks to use).

-s
Specifies the total usable size of the hashed shared disk in MB. Unless -S is specified, createhsd adds at least a stripe size to each virtual shared disk's size for each hashed shared disk.

-g
Specifies the Logical Volume Manager (LVM) volume group name, or local volume group name. This name is concatenated with the node number to form the global volume group name (VSD_GVG). For example:
createhsd -n 6 -g VSDVG

creates a new volume group with the local AIX volume group name VSDVG and the virtual shared disk global volume group name VSDVGn6. The node number is added to the local volume group name to create a unique global volume group name within a system partition to avoid name conflicts with the name used for volume groups on other nodes. If a backup node exists, the global volume group name will be created by concatenating the backup node number as well as the primary node number to the local volume group name. For example:

createhsd -n 6/3/ -g VSDVG

creates VSDVGn6b3, where the primary node is node 6 and the backup node for this global volume group is node 3. The local AIX volume group name will still be VSDVG. You can specify a local volume group that already exists. You do not need to use the -T flag if you specify a volume group name that already exists.

-t
Specifies the stripe size in kilobytes that a hashed shared disk will use. The stripe size must be a multiple of 4KB and less than or equal to 1GB.

-T
Specifies the size of the physical partition in the Logical Volume Manager logical volume group and also the logical partition size (they will be the same) in megabytes. You must select a power of 2 in the range 2--256. The default is 4MB.

The Logical Volume Manager limits the number of physical partitions to 1016 per disk. If a disk is greater than 4 gigabytes in size, the physical partition size must be greater than 4MB to keep the number of partitions under the limit.

-c
Specifies the number of virtual shared disks to be created on each node. If number_of_vsds_per_node is not specified, one virtual shared disk is created for each node specified on createvsd. If more than one virtual shared disk is to be created for each node, the names will be allocated cyclically. For example:
createhsd -n 1,6 -c 2 -d DATA

creates virtual shared disks DATA1n1 on node 1, DATA2n6 on node 6, DATA3n1 on node 1, and DATA4n6 on node 6 and uses them to make up the hashed shared disk DATA.

-L
Allows you to create one virtual shared disk on each node without using sequential numbers for locally-accessed IBM Virtual Shared Disks.

-A
Specifies that virtual shared disk names will be allocated to each node in turn. For example:
createhsd -n 1,6 -c 2 -A -d DATA

creates DATA1n1 and DATA2n1 on node 1, and DATA3n6 and DATA4n6 on node 6.

-S
Specifies that the hashed shared disk overrides the default skip option and does not skip the first stripe to protect the first LVM Control Block (LVCB).

-o
Specifies either the cache or nocache option for the underlying virtual shared disks. The default is nocache.
Note:
IBM Virtual Shared Disk caching is no longer supported. This information will still be accepted for compatibility with previous releases, but the IBM Virtual Shared Disk device driver will ignore the information.

-m
Specifies the LVM mirroring count. The mirroring count sets the number of physical partitions allocated to each logical partition. The range is from 1 to 3. If -m is not specified, the count is set to 1.

-d
Specifies the name assigned to the created hashed shared disk. It is used as the virtual shared disk prefix name (the -v in createvsd). If a hashed shared disk name is not specified, a default name, xHsD, is used, where x denotes a sequence number.

The command:

createhsd -n 1,2 -d DATA

creates two virtual shared disks, DATA1n1 and DATA2n2. These virtual shared disks make up one hashed shared disk named DATA.

-l
Overrides the prefix lvx that is given by default to a logical volume by the createvsd command, where x is the virtual shared disk name prefix specified by vsd_name_prefix or the default (vsd). For example:
createhsd -n 1 -d DATA

creates one virtual shared disk on node 1 named DATA1n1 with an underlying logical volume lvDATA1n1. If the command

createhsd -n 1 -d DATA -l new

is used, the virtual shared disk on node 1 is still named DATA1n1, but the underlying logical volume is named lvnew1n1.

It is usually more helpful not to specify -l, so that your lists of virtual shared disk names and logical volume names are easy to associate with each other and you avoid naming conflicts.

-k vsd_type
Specifies the type of virtual shared disk. The options are VSD (serially accessed shared disks) and CVSD (concurrently accessed shared disks).

The default is VSD.

-x
Specifies that the steps required to synchronize the underlying virtual shared disks on the primary and secondary nodes should not be performed; that is, the sequence:

is not done as part of the createvsd processing that underlies the createhsd command. This speeds the operation of the command and avoids unnecessary processing in the case where several IBM Virtual Shared Disks are being created on the same primary/secondary nodes. In that case, however, you should either not specify -x on the last createhsd in the sequence or issue the volume group commands listed above explicitly.

Operands

None.

Description

This command uses the sysctl facility.

You can use the System Management Interface Tool (SMIT) to run this command. To use SMIT, enter:

smit createhsd_dialog

or

smit vsd_data

and select the Create an HSD option with the vsd_data fastpath.
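After the command completes, the resulting definitions can be reviewed; as an illustration only (assuming the lsvsd and lshsd commands of the IBM Virtual Shared Disk component are installed), enter:

    lsvsd -l
    lshsd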

Standard Output

For the following command:

createhsd -n 1/:hdisk2,hdisk3/ -g twinVG -s 1600 -t 8 -S -l \
twinLV -d twinHSD -c 4

The messages returned to standard output are:

OK:0:vsdvg -g twinVGn1 twinVG 1
OK:0:defvsd twinLV1n1 twinVGn1 twinHSD1n1 nocache
OK:0:defvsd twinLV2n1 twinVGn1 twinHSD2n1 nocache
OK:0:defvsd twinLV3n1 twinVGn1 twinHSD3n1 nocache
OK:0:defvsd twinLV4n1 twinVGn1 twinHSD4n1 nocache
 
OK:createvsd { -n 1/:hdisk2,hdisk3/ -s 401 -T 4 -g twinVG
-c 4 -v twinHSD -l twinLV -o cache -K }
 
OK:0:defhsd twinHSD not_protect_lvcb 8192 twinHSD1n1 twinHSD2n1
twinHSD3n1 twinHSD4n1

Exit Values

0
Indicates the successful completion of the command.

-1
Indicates that an error occurred.

Security

You must have access to the virtual shared disk subsystem via the sysctl service to run this command.

Restrictions

  1. The backup node cannot be the same as the primary node.
  2. The last character of hsd_name cannot be numeric.
  3. The vsd_name_prefix cannot contain the character '.'. See the createvsd -v option for details.

Prerequisite Information

PSSP: Managing Shared Disks

Location

/usr/lpp/csd/bin/createhsd

Related Information

Commands: createvsd, defhsd, vsdvg

Examples

To create six 4MB virtual shared disks and their underlying logical volumes with a prefix of TEMP, as well as a hashed shared disk comprising those virtual shared disks (24MB overall) with a stripe size of 32KB, enter the following (assuming that no previous virtual shared disks are defined with the TEMP prefix):

createhsd -n 3,4,7/8/ -c 2 -s 1024 -g vsdvg -d TEMP -t 32

This creates the following virtual shared disks:

and the HSD:

Note:
TEMP does not write to the first 32KB of each of its virtual shared disks.

createvsd

Purpose

createvsd - Creates a set of virtual shared disks, with their associated logical volumes, and puts information about them into the System Data Repository (SDR).

Syntax

createvsd
-n {node_list | ALL} -s size_in_MB -g vg_name
[{-c vsds_per_node | -L}] [-A]
[{-m mirror_count | -p lvm_stripe_size_in_K}] [-v vsd_name_prefix]
[-l lv_name_prefix] [-o cache | nocache]
[-T lp_size_in_MB] [-k vsd_type] [-x]

Flags

Note:
Some examples shown in this list do not contain enough flags to be executable. They are shown in an incomplete form to illustrate specific flags.

-n node_list
Specifies the nodes on which you are creating virtual shared disks. The backup node cannot be the same as the primary node. For VSD, the node list is:
[P/S] : disk_list1+disk_list2/

For CVSD, the node list is:

[S1/S2] : disk_list1+disk_list2/

"P" specifies the primary server node for serially accessed shared disks, "S" specifies the backup (secondary) server node for serially accessed shared disks, and S1 and S2 specify the server nodes for concurrently accessed shared disks. disk_list1 is the list of local physical disks, or vpaths, for the logical volume on the primary. In other words, this list can be made up of hdiskx, hdisky,... or vpathx, vpathy,....

Notes:

  1. Vpaths are available only if the "Subsystem Device Driver" is installed. Vpaths provide "virtual paths" to the same physical volume.

  2. Hdisks and vpaths cannot both be specified in the same list.

disk_list1+disk_list2 is the list of local physical disks or vpaths in the volume group on the primary, if you want to have more disks in the volume group than are needed for the logical volume. The sequence in which nodes are listed determines the names given to the virtual shared disks. For example:

createvsd -n 1,6,4 -v PRE

(with the vsd_prefix PRE) creates virtual shared disks PRE1n1 on node 1, PRE2n6 on node 6, and PRE3n4 on node 4.

To create a volume group that spans hdisk2, hdisk3, and hdisk4 on node 1, with a backup on node 3, enter:

createvsd -n 1/3:hdisk2,hdisk3,hdisk4/ -v DATA

This command creates:

To create volume groups just like that one on nodes 1, 2, and 3 of a system with backup on nodes 4, 5, and 6 of the same system, enter:

createvsd -n 1/4:hdisk1,hdisk2,hdisk3/,2/5:hdisk5,hdisk6, \
          hdisk7/,3/6:hdisk2,hdisk4,hdisk6/ -v DATA

This command is shown on two lines here, but you must enter it without any spaces between the items in node_list.

The command creates:

To create a virtual shared disk where the logical volume spans only two of the physical disks in the volume group, enter:

createvsd -n 1/3:hdisk1,hdisk2+hdisk3/ -v DATA

This command creates the virtual shared disk DATA1n1 with logical volume lvDATA1n1 spanning hdisk1 and hdisk2 in the volume group DATA, which includes hdisk1, hdisk2, and hdisk3. It exports the volume group DATA to node 3.

If a volume group is already created and the combined physical hdisk lists contain disks that are not needed for the logical volume, those hdisks are added to the volume group. If the volume group has not already been created, createvsd creates a volume group that spans hdisk_list1+hdisk_list2.

Backup nodes cannot use the same physical disk as the primary does to serve virtual shared disks.

ALL specifies that you are creating virtual shared disks on all nodes in the system or system partition. No backup nodes are assigned if you use this operand. The virtual shared disks will be created on all the physical disks attached to the nodes in node_list (you cannot specify which physical disks to use).

-s
Specifies the size in megabytes of each virtual shared disk.

-g
Specifies the Logical Volume Manager (LVM) volume group name. This name is concatenated with the node number to produce the global volume group name. For example:
createvsd -n 6 -g VSDVG

creates a volume group with the local volume group name VSDVG and the global volume group name VSDVGn6 on node 6. The node number is added to the prefix to avoid name conflicts when a backup node takes over a volume group. If a backup node exists, the global volume group name will be concatenated with the backup node number as well as the primary. For example:

createvsd -n 6/3/ -g VSDVG

creates a volume group with the local volume group name VSDVG and the global volume group name VSDVGn6b3. The primary node is node 6 and the backup node for this volume group is node 3.

-c
Specifies the number of virtual shared disks to be created on each node. If number_of_vsds_per_node is not specified, one virtual shared disk is created for each node specified on createvsd. If more than one virtual shared disk is to be created for each node, the names will be allocated alternately. For example:
createvsd -n 1,6 -c 2 -v DATA

creates virtual shared disks DATA1n1 on node 1, DATA2n6 on node 6, DATA3n1 on node 1, and DATA4n6 on node 6.

-L
Allows you to create one virtual shared disk on each node without using sequential numbers, for locally-accessed virtual shared disks.

-A
Specifies that virtual shared disk names will be allocated to each node in turn, for example:
createvsd -n 1,6 -c 2 -A -v DATA

creates DATA1n1 and DATA2n1 on node 1, and DATA3n6 and DATA4n6 on node 6.

-m
Specifies the LVM mirroring count. The mirroring count sets the number of physical partitions allocated to each logical partition. The range is from 1 to 3 and the default value is 1.

-p
Specifies the LVM stripe size. If this flag is not specified, the logical volumes are not striped. To use striping, the node on which the virtual shared disks are defined must have more than one physical disk.

-v
Specifies a prefix to be given to the names of the created virtual shared disks. This prefix will be concatenated with the virtual shared disk number, node number, and backup node number, if a backup disk is specified. For example, if the prefix PRE is given to a virtual shared disk created on node 1 and there are already two virtual shared disks with this prefix across the partition, the new virtual shared disk name will be PRE3n1. The name given to the underlying logical volume will be lvPRE3n1, unless the -l flag is used. The createvsd command continues to sequence virtual shared disk names from the last PRE-prefixed virtual shared disk.

If -v is not specified, the prefix vsd is used.

Note:
The last character of the vsd_name_prefix cannot be a digit. Otherwise, the 11th virtual shared disk with the prefix PRE would have the same name as the first virtual shared disk with the prefix PRE1. Nor can the vsd_name_prefix contain the character '.', because '.' matches any character in regular expressions.

-l
Overrides the prefix lvx that is given by default to a logical volume by the createvsd command, where x is the virtual shared disk name prefix specified by vsd_name_prefix or the default (vsd). For example:
createvsd -n 1 -v DATA

creates one virtual shared disk on node 1 named DATA1n1 with an underlying logical volume lvDATA1n1. If the command

createvsd -n 1 -v DATA -l new

is used, the virtual shared disk on node 1 is still named DATA1n1, but the underlying logical volume is named lvnew1n1.

It is usually more helpful not to specify -l, so that your lists of virtual shared disk names and logical volume names are easy to associate with each other and you avoid naming conflicts. |

-o
Specifies either the cache or the nocache option. The default is nocache.
Note:
IBM Virtual Shared Disk caching is no longer supported. This information will still be accepted for compatibility with previous releases, but the IBM Virtual Shared Disk device driver will ignore the information.

-T
Specifies the size of the physical partition in the Logical Volume Manager logical volume group and also the logical partition size (they will be the same) in megabytes. You must select a power of 2 in the range 2 - 256. The default is 4MB.

The Logical Volume Manager limits the number of physical partitions to 1016 per disk. If a disk is greater than 4 gigabytes in size, the physical partition size must be greater than 4MB to keep the number of partitions under the limit.

-k vsd_type
Specifies the type of virtual shared disk. The options are VSD (serially accessed shared disks) and CVSD (concurrently accessed shared disks).

The default is VSD.

-x
Specifies that the steps required to synchronize the virtual shared disks on the primary and secondary nodes should not be performed; that is, the sequence:

is not done as part of the createvsd processing. This speeds the operation of the command and avoids unnecessary processing in the case where several IBM Virtual Shared Disks are being created on the same primary/secondary nodes. In this case, however, you should either not specify -x on the last createvsd in the sequence or issue the volume group commands listed above explicitly.

Operands

None.

Description

Use this command to create a volume group with the specified name (if one does not already exist) and to create a logical volume of the size specified by -s within that volume group.

You can use the System Management Interface Tool (SMIT) to run this command. To use SMIT, enter:

smit vsd_data

and select the Create a virtual shared disk option.
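After the command completes, the new virtual shared disk definitions can be listed; as an illustration only (assuming the lsvsd command of the IBM Virtual Shared Disk component is installed), enter:

    lsvsd -l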

Standard Output

For the following command:

createvsd -n 1/:hdisk1/ -g testvg -s 16 -T 8 -l lvtest -v test -c 4

The messages returned to standard output are:

OK:0:vsdvg -g testvgn1 testvg 1
OK:0:defvsd lvtest1n1 testvgn1 test1n1 nocache
OK:0:defvsd lvtest2n1 testvgn1 test2n1 nocache
OK:0:defvsd lvtest3n1 testvgn1 test3n1 nocache
OK:0:defvsd lvtest4n1 testvgn1 test4n1 nocache

If the same command is then issued a second time:

createvsd -n 1/:hdisk1/ -g testvg -s 16 -T 8 -l lvtest -v test -c 4

The messages returned to standard output are:

OK:0:defvsd lvtest5n1 testvgn1 test5n1 nocache
OK:0:defvsd lvtest6n1 testvgn1 test6n1 nocache
OK:0:defvsd lvtest7n1 testvgn1 test7n1 nocache
OK:0:defvsd lvtest8n1 testvgn1 test8n1 nocache

Exit Values

0
Indicates the successful completion of the command.

-1
Indicates that an error occurred.

Security

You must have access to the virtual shared disk subsystem via the sysctl service to run this command.

Restrictions

  1. The backup node cannot be the same as the primary node.
  2. The last character of vsd_name_prefix cannot be numeric.
  3. The vsd_name_prefix cannot contain the character '.'.

Prerequisite Information

PSSP: Managing Shared Disks

Location

/usr/lpp/csd/bin/createvsd

Related Information

Commands: defvsd, vsdvg

Examples

To create two 4MB virtual shared disks on each of three primary nodes, one of which has a backup, enter:

createvsd -n 3,4,7/8/ -c 2 -s 4 -g vsdvg -v TEMP

This command creates the following virtual shared disks:

To create three virtual shared disks, where the logical volume created on node 3 spans fewer disks than the volume group does, enter:

createvsd -n 3,4/:hdisk1,hdisk2+hdisk3/,7/8/ -s 4 -g datavg -v USER

This command creates:

crunacct

Purpose

crunacct - Runs on the acct_master node to produce daily summary accounting reports and to accumulate accounting data for the fiscal period using merged accounting data from each node.

Syntax

crunacct [-r [SETUP | DELNODEDATA | MERGETACCT | CMS | USEREXIT | CLEANUP]]

Flags

-r
Specifies a restart of the crunacct process. The restart process begins at the state listed in the statefile found in the /var/adm/cacct directory.

Operands

SETUP
Copies the files produced by nrunacct on each node to the acct_master node. For each node named by the string node, these files:
/var/adm/acct/nite/lineuseYYYYMMDD
/var/adm/acct/nite/rebootsYYYYMMDD
/var/adm/acct/nite/daytacctYYYYMMDD
/var/adm/acct/sum/daycmsYYYYMMDD
/var/adm/acct/sum/loginlogYYYYMMDD

are copied to the acct_master node to the following files:

/var/adm/cacct/node/nite/lineuseYYYYMMDD
/var/adm/cacct/node/nite/rebootsYYYYMMDD
/var/adm/cacct/node/nite/daytacctYYYYMMDD
/var/adm/cacct/node/sum/daycmsYYYYMMDD
/var/adm/cacct/node/sum/loginlogYYYYMMDD

for all YYYYMMDD prior to or equal to the YYYYMMDD being processed.

DELNODEDATA
Deletes files that have been copied to the acct_master node in the SETUP step, as well as the associated /var/adm/acct/statefileYYYYMMDD files.

MERGETACCT
Produces a daily total accounting file and merges this daily file into the total accounting file for the fiscal period, for each accounting class. If there are no defined accounting classes, the output of this step represents data for the entire SP system.

CMS
Produces a daily command summary file and merges this daily file into the total command summary file for the fiscal period, for each accounting class. If there are no defined accounting classes, the output of this step represents data for the entire SP system.

It also creates an SP system version of the loginlog file, in which each line consists of a date, a user login name and a list of node names. The date is the date of the last accounting cycle during which the user, indicated by the associated login name, had at least one connect session in the SP system. The associated list of node names indicates the nodes on which the user had a login session during that accounting cycle.

USEREXIT
If the /var/adm/csiteacct shell file exists, calls it to perform site specific accounting procedures that are applicable to the acct_master node.

CLEANUP
Prints a daily report of accounting activity and removes files that are no longer needed.

Description

In order for SP accounting to succeed each day, the nrunacct command must complete successfully on each node for which accounting is enabled and then the crunacct command must complete successfully on the acct_master node. However, this may not always be true. In particular, the following scenarios must be taken into account:

  1. The nrunacct command does not complete successfully on some nodes for the current accounting cycle. This can be the result of an error during the execution of nrunacct, nrunacct not being executed at the proper time by cron or the node being down when nrunacct was scheduled to run.
  2. The acct_master node is down or the crunacct command cannot be executed.

From the point of view of the crunacct command, the first scenario results in no accounting data being available from a node. The second scenario results in more than one day's accounting data being available from a node. If it is the case that no accounting data is available from a node, the policy of crunacct is that the error condition is reported and processing continues with data from the other nodes. If data cannot be obtained from at least X percent of nodes, then processing is terminated. "X" is referred to as the spacct_actnode_thresh attribute and can be set via a SMIT panel.

If node data for accounting cycle N is not available when crunacct executes and then becomes available to crunacct during accounting cycle N+1, the node data for both the N and N+1 accounting cycles is merged by crunacct. In general, crunacct merges all data from a node that has not yet been reported into the current accounting cycle, except as in the following case.

If it is the case that crunacct has not run for more than one accounting cycle, such that there are several days' data on each node, then the policy of crunacct is that it processes each accounting cycle's data to produce the normal output for each accounting cycle. For example, if crunacct has not executed for accounting cycles N and N+1, and it is now accounting cycle N+2, then crunacct first executes for accounting cycle N, then executes for accounting cycle N+1, and finally executes for accounting cycle N+2.

However, if the several accounting cycles span from the previous fiscal period to the current fiscal period, then only the accounting cycles that are part of the previous fiscal period are processed. The accounting cycles that are part of the current fiscal period are processed during the next night's execution of crunacct. Appropriate messages are provided in the /var/adm/cacct/active file so that the administrator can execute cmonacct prior to the next night's execution of crunacct.

To restart the crunacct command after an error, first check the /var/adm/cacct/activeYYYYMMDD file for diagnostic messages, and take appropriate actions. For example, if the log indicates that data was unavailable from a majority of nodes, and their corresponding nrunacct state files indicate a state other than complete, check their /var/adm/acct/nite/activeYYYYMMDD files for diagnostic messages and then fix any damaged data files, such as pacct or wtmp.

Remove the lock files and lastdate file (all in the /var/adm/cacct directory), before restarting the crunacct command. You must specify the -r flag. The command begins processing cycles starting with the cycle after the last successfully completed cycle. This cycle will be restarted at the state specified in the statefile file. All subsequent cycles, up to and including the current cycle, will be run from the beginning (SETUP state).

You may choose to start the process at a different state by specifying a state with the -r flag. The command begins processing cycles starting with the cycle after the last successfully completed cycle. This cycle will be restarted at the state entered on the command line. All subsequent cycles, up to and including the current cycle, will be run from the beginning (SETUP state).
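For example, a restart could be prepared and launched as follows (illustrative only; review the diagnostic files first, as described above):

    rm /var/adm/cacct/lock* /var/adm/cacct/lastdate
    nohup /usr/lpp/ssp/bin/crunacct -r 2>> /var/adm/cacct/nite/accterr &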

Files

/var/adm/cacct/activeYYYYMMDD
The crunacct message file.

/var/adm/cacct/fiscal_periods
Customer-defined file indicating start date of each fiscal period.

/var/adm/cacct/lastcycle
Contains last successful crunacct completed cycle.

/var/adm/cacct/lock*
Prevents simultaneous invocation of crunacct.

/var/adm/cacct/lastdate
Contains last date crunacct was run.

/var/adm/cacct/nite/statefileYYYYMMDD
Contains current state to process.

Security

You must have root privilege to run this command.

Prerequisite Information

For more information about the Accounting System, the preparation of daily and monthly reports, and the accounting files, see PSSP: Administration Guide.

Location

/usr/lpp/ssp/bin/crunacct

Related Information

Commands: acctcms, acctcom, acctcon1, acctcon2, acctmerg, acctprc1, acctprc2, accton, crontab, fwtmp, nrunacct

Daemon: cron

The System Accounting information found in AIX System Management Guide

Examples

  1. To restart the SP system accounting procedures, enter a command similar to the following:
    nohup /usr/lpp/ssp/bin/crunacct -r 2>> \
          /var/adm/cacct/nite/accterr &
    

    This example restarts crunacct at the state located in the statefile file. The crunacct command runs in the background (&), ignoring all INTERRUPT and QUIT signals (nohup). Standard error output (2) is added to the end (>>) of the /var/adm/cacct/nite/accterr file.

  2. To restart the SP system accounting procedures at a specific state, enter a command similar to the following:
    nohup /usr/lpp/ssp/bin/crunacct -r CMS 2>> \
          /var/adm/cacct/nite/accterr &
    

    This example restarts the crunacct command starting with the CMS state. The crunacct command runs in the background (&), ignoring all INTERRUPT and QUIT signals (nohup). Standard error output (2) is added to the end (>>) of the /var/adm/cacct/nite/accterr file.

