Purpose
cksumvsd - Views and manipulates the IBM Virtual Shared Disk component's checksum parameters.
Syntax
cksumvsd [-s] [-R] [-i | -I]
Flags
If no flags are specified, the current setting of all IBM Virtual Shared Disk checksum parameters and counters are displayed.
Operands
None.
Description
The IBM Virtual Shared Disk IP device driver can calculate and send checksums on remote packets it sends. It also can calculate and verify checksums on remote packets it receives. The cksumvsd command is used to tell the device driver whether to perform checksum processing. The default is no checksumming.
Issuing cksumvsd -i turns on checksumming on the node on which it is run. cksumvsd -i must be issued on all virtual shared disk nodes in the system partition, or the IBM Virtual Shared Disk software will stop working properly on the system partition. If node A has cksumvsd -i (checksumming turned on) and node B has cksumvsd -I (checksumming turned off, the default), then A will reject all messages from B (both requests and replies), since A's checksum verification will be unsuccessful on all B's messages. The safe way to run cksumvsd -i is to make sure that all virtual shared disks on all nodes are in the STOPPED or SUSPENDED states, issue cksumvsd -i on all nodes, then resume the needed virtual shared disks on all nodes.
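A minimal sketch of that sequence, run from the control workstation, assuming that dsh can reach every virtual shared disk node in the partition and that suspendvsd -a and resumevsd -a act on all of the configured virtual shared disks (substitute explicit disk names, or stopvsd/startvsd, as appropriate for your configuration):
dsh -av "/usr/lpp/csd/bin/suspendvsd -a"
dsh -av "/usr/lpp/csd/bin/cksumvsd -i"
dsh -av "/usr/lpp/csd/bin/resumevsd -a"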
In checksumming mode, the IBM Virtual Shared Disk IP device driver keeps a counter of the number of packets received with good checksums, and the number received with problem checksums. cksumvsd and statvsd both display these values (statvsd calls cksumvsd -s).
cksumvsd dynamically responds to the configuration of the IBM Virtual Shared Disk IP device driver loaded in the kernel. Its output and function may change if the IBM Virtual Shared Disk IP device driver configuration changes.
Files
Security
You must be in the AIX bin group to run this command.
You must have write access to the SDR to run this command.
Prerequisite Information
PSSP: Managing Shared Disks
Related Information
Commands: cfgvsd
Examples
cksumvsd
You should receive output similar to the following:
VSD cksum: current values:
do_ip_checksum: 0
ipcksum_cntr: 350 good, 0 bad, 0 % bad.
IBM Virtual Shared Disk checksumming is currently turned off on the node. Prior to this, checksumming was turned on and 350 IBM Virtual Shared Disk remote messages were received, all with good checksums.
cksumvsd -i
You should receive output similar to the following:
VSD cksum: current values:
do_ip_checksum: 0
ipcksum_cntr: 350 good, 0 bad, 0 % bad.
VSD cksum: new values:
do_ip_checksum: 1
ipcksum_cntr: 350 good, 0 bad, 0 % bad.
The command displays old and new values. As before, the node has received 350 IBM Virtual Shared Disk remote messages with good checksums.
cksumvsd -s
You should receive output similar to the following:
ipcksum_cntr: 350 good, 0 bad, 0 % bad.
Purpose
cmonacct - Performs monthly or periodic SP accounting.
Syntax
cmonacct [number]
Flags
None.
Operands
Description
The cmonacct command performs monthly or periodic SP system accounting. The intervals are set in the crontab file. You can set the cron daemon to run the cmonacct command once each month or at some other specified time period. By default, if accounting is enabled for at least one node, cmonacct executes on the first day of every month.
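For example, an illustrative crontab entry (the schedule shown is only a suggestion) that runs cmonacct at 5:00 a.m. on the first day of every month would be:
0 5 1 * * /usr/lpp/ssp/bin/cmonacct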
The cmonacct command creates summary files under the /var/adm/cacct/fiscal directory and restarts summary files under the /var/adm/cacct/sum directory, the cumulative summary to which daily reports are appended.
Security
You must have root privilege to run this command.
Location
/usr/lpp/ssp/bin/cmonacct
Examples
cmonacct
cmonacct 12
Purpose
config_spsec - Configures SP Services into the DCE database. Services that use DCE as an authentication method require certain information to be entered in the CDS registry and Security Server before they can perform client/server authentication.
Syntax
Flags
Operands
Description
The config_spsec command enters data into the CDS registry and Security Server database. You must be logged into DCE with cell administrator authority to use the command. This command reads from two files that specify the groups, service principals, and members for each service that is to be configured to use DCE authentication: a default file (/usr/lpp/ssp/config/spsec_defaults) and an overrides file (/spdata/sys1/spsec/spsec_overrides). The spsec_defaults file is shipped with the product and should not be altered by users. The spsec_overrides file is provided to allow users to modify principal, group, and organization names. The program reads the two files and creates all the necessary entries in the CDS registry and Security Server. If the information is already present, an appropriate message is issued and logged to the log file (this is not an error).
For syntax errors within either file, an error message will be issued, logged, and processing will halt. Processing of both files occurs prior to any changes being made to any DCE database.
The command prompts for an ID with cell administrator authority, which will be added to the spsec-admin group. The command also prompts for a password. Since the user is required to be logged into the DCE cell as an administrator, the password is that of the cell administrator. This password is required by the config.dce program (called from within this program).
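For example, a typical invocation (assuming cell_admin is the name of the cell administrator principal in your cell) first logs into DCE as the cell administrator and then runs the command:
dce_login cell_admin
config_spsec -v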
To run the command remotely, use the -r flag. Specifying -r allows an administrator to run the command from one machine on behalf of another machine. When -r SP is specified, the SP_NAME environment variable must be set to the short host name of the SDR daemon on the SP being configured. When -r WS is specified, SP_NAME can specify the short host name of any working SDR. Additionally, since this command depends on the two input files listed in the "Files" section, the administrator must ensure that these files are copied from the SP control workstation, the location for the master copies, to the machine running the command.
The results of this command depend on both the input parameters and where the command is run. If the input parameters include -r WS, principals for a standalone workstation are configured. If the input parameters include -r SP, principals and accounts for an SP control workstation and nodes are configured. If the command is run without the -r flag, the command determines where it is running (on an SP or a workstation) and creates the appropriate principals and accounts.
Files
/usr/lpp/ssp/config/spsec_defaults
/spdata/sys1/spsec/spsec_overrides
CDS registry and Security Server database (updated)
Exit Values
An unsuccessful run of this command (depending on where it encountered a problem) may leave service principals in an incomplete state. Some service principals, groups, and directories may not be created or updated. This will cause services to not operate correctly in a DCE environment. After fixing the cause of the problem, rerun config_spsec with the same parameters to complete the configuration.
Security
Users need to be logged into the cell with cell administrator authority because creating accounts and groups requires that authority.
Location
/usr/lpp/ssp/bin/config_spsec
Related Information
Commands: rm_spsec
DCE Administration publications for AIX.
Examples
config_spsec -v
config_spsec -v -c
config_spsec -v -p my_par
export SP_NAME=mySPcws
config_spsec -v -r SP mySPcws.abc.com
Purpose
cprdaily - Creates an ASCII report of the previous day's accounting data.
Syntax
cprdaily [-c] [[-l] [yyyymmdd]]
Flags
Operands
Description
This command is called by the crunacct command to format an ASCII report of the previous day's accounting data for all nodes. The report resides in the /var/adm/cacct/sum/rprtyyyymmdd file, where yyyymmdd specifies the year, month, and day of the report.
Security
You must have root privilege to run this command.
Location
/usr/lpp/ssp/bin/cprdaily
Examples
cprdaily
cprdaily -c -l
cprdaily -l 19940316
Purpose
cptuning - Copies a file to /tftpboot/tuning.cust.
Syntax
cptuning -h | file_name
Flags
Operands
Description
Use this command to copy the specified file to the /tftpboot/tuning.cust file. IBM ships the following four predefined tuning parameter files in /usr/lpp/ssp/install/config: tuning.commercial, tuning.default, tuning.development, and tuning.scientific.
This command is intended for use in copying one of these files to /tftpboot/tuning.cust on the control workstation for propagation to the nodes in the SP. It can also be used on an individual node to copy one of these files to /tftpboot/tuning.cust.
Files
Upon successful completion, the /tftpboot/tuning.cust file is updated.
Standard Output
When the command completes successfully, a message to that effect is written to standard output.
Standard Error
This command writes error messages (as necessary) to standard error.
Exit Values
If the command does not run successfully, it terminates with an error message and a nonzero return code.
Security
You must have root privilege to run this command.
Implementation Specifics
This command is part of the IBM Parallel System Support Programs (PSSP) Licensed Program (LP).
Location
/usr/lpp/ssp/bin/cptuning
Related Information
PSSP Files: tuning.commercial, tuning.default, tuning.development, tuning.scientific
PSSP: Installation and Migration Guide
Examples
cptuning /tmp/my-tuning-file
cptuning tuning.commercial
Purpose
create_dcehostname - Populates the System Data Repository (SDR) with DCE hostnames for each node in a partition set to use DCE.
Syntax
create_dcehostname [-h] [-v]
Flags
Operands
None.
Description
The create_dcehostname command must be run on the control workstation. It queries the DCE Security registry for information about nodes that may already have been configured and therefore have current DCE hostnames. For the entries found, it updates the dcehostname attribute of the SDR Node object with this information. For all nodes that were not found in the DCE Security registry and do not already have a dcehostname attribute assigned, it assigns the node's reliable hostname to the attribute. Additionally, this program updates the SDR's SP object with the control workstation's DCE hostname in the same manner it did for the nodes. All control workstation IP addresses are used to search the DCE Security registry to determine whether the control workstation has a defined DCE hostname in the registry. Because the control workstation is required to be configured into the DCE cell, one of these IP addresses will be found in the registry.
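After the command completes, you can check the result with an SDR query such as the following (shown for illustration; the attribute names follow the description above):
SDRGetObjects Node node_number reliable_hostname dcehostname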
Standard Input
CDS Registry and Security Server
SDR
Standard Output
Log file created: /var/adm/SPlogs/auth_install/log
Exit Values
The node boot process requires the DCE hostname. Without this information, authentication will not be set up properly (if at all), and the node may not be accessible by some services or users.
Security
You must have root privilege and write access to the SDR for all partitions to run this command.
Location
/usr/lpp/ssp/bin/create_dcehostname
Related Information
DCE Administration publications for AIX.
Examples
To create a DCE hostname for all defined nodes in the SDR, enter:
create_dcehostname -v
Purpose
create_keyfiles - Creates DCE keytab objects and stores them into specified keyfiles on the local file system. Services which use DCE as an authentication method will use these keys to log into DCE.
Syntax
create_keyfiles [-h] [-v] [-c | -p partition_name]
Flags
Operands
None.
Description
The create_keyfiles command reads from two files: a default file (/usr/lpp/ssp/config/spsec_defaults) and an overrides file (/spdata/sys1/spsec/spsec_overrides). It processes these files and generates an effective service principal based on the attributes specified for each service and its location (node, control workstation, or non-SP workstation). It then creates keytab objects based on these effective service principals and stores the keys in keyfiles located in the /spdata/sys1/keyfiles directory.
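As a purely illustrative example (the actual subdirectory and file names depend on the service entries in spsec_defaults and spsec_overrides), the keys for a hypothetical service named myservice would be stored under a service-specific subdirectory such as:
/spdata/sys1/keyfiles/myservice/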
Standard Input
/usr/lpp/ssp/config/spsec_defaults
/spdata/sys1/spsec/spsec_overrides
Standard Output
Log file created: /var/adm/SPlogs/auth_install/log.
Keyfiles located in /spdata/sys1/keyfiles (with subdirectories based on the service name).
Exit Values
Security
This command requires write access to the /spdata/sys1 file system and read access to the two configuration files specified in the description. You must also be root with default DCE credentials.
Location
/usr/lpp/ssp/bin/create_keyfiles
Related Information
Commands: rm_spsec
Files: /usr/lpp/ssp/config/spsec_defaults, /spdata/sys1/spsec/spsec_overrides
DCE Administration publications for AIX (relating to keyfiles and keytab object creation)
IBM Distributed Computing Environment for AIX: Administration Commands Reference
Examples
create_keyfiles -v
create_keyfiles -v -p my_par
Purpose
create_krb_files - Creates the necessary krb_srvtab and tftp access files on the Network Installation Management (NIM) master for Kerberos Version 4 authentication.
Syntax
create_krb_files [-h]
Flags
Operands
None.
Description
Use this command on a boot/install server (including the control workstation). On the server, it creates the Kerberos Version 4 krb_srvtab file for each boot/install client of that server and also updates the /etc/tftpaccess.ctl file on the server.
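As an illustration only (the actual entries are generated by the command to match its boot/install clients), the /etc/tftpaccess.ctl file uses allow directives of the following form to limit tftp access to specific directories:
allow:/tftpboot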
Standard Error
This command writes error messages (as necessary) to standard error.
Exit Values
Security
You must have root privilege to run this command.
Implementation Specifics
This command is part of the IBM Parallel System Support Programs (PSSP) Licensed Program (LP).
Location
/usr/lpp/ssp/bin/create_krb_files
Related Information
Commands: setup_server
Examples
To create or update the krb_srvtab and tftp access files on a boot/install server, enter the following command on that server:
create_krb_files
Purpose
createhsd - Creates one hashed shared disk that encompasses two or more virtual shared disks.
Syntax
Flags
[P/S] : disk_list1+disk_list2/
For CVSD, the node list is:
S1/S2 : disk_list1+disk_list2/
"P" specifies the primary server node for serially accessed shared disks, "S" specifies the backup (secondary) server node for serially accessed shared disks, and S1 and S2 specify the server nodes for concurrently accessed shared disks. disk_list1 is the list of local physical disks, or vpaths, for the logical volume on the primary. In other words, this list can be made up of hdiskx, hdisky,... or vpathx, vpathy,....
Notes:
disk_list1+disk_list2 is the list of local physical disks or vpaths in the volume group on the primary, if you want to have more disks in the volume group than are needed for the logical volume. The sequence in which nodes are listed determines the names given to the virtual shared disks. For example:
createvsd -n 1,6,4 -v PRE
(with the vsd_prefix PRE) creates virtual shared disks PRE1n1 on node 1, PRE2n6 on node 6, and PRE3n4 on node 4.
To create a volume group that spans hdisk2, hdisk3, and hdisk4 on node 1, with a backup on node 3, enter:
createvsd -n 1/3:hdisk2,hdisk3,hdisk4/ -v DATA
This command creates:
To create volume groups just like that one on nodes 1, 2, and 3 of a system with backup on nodes 4, 5, and 6 of the same system, enter:
createvsd -n 1/4:hdisk1,hdisk2,hdisk3/,2/5:hdisk5,hdisk6, \
hdisk7/,3/6:hdisk2,hdisk4,hdisk6/ -v DATA
This command is shown on two lines here, but you must enter it without any spaces between the items in node_list.
The command creates:
To create a virtual shared disk where the logical volume spans only two of the physical disks in the volume group, enter:
createvsd -n 1/3:hdisk1,hdisk2+hdisk3/ -v DATA
This command creates the virtual shared disk DATA1n1 with logical volume lvDATA1n1 spanning hdisk1 and hdisk2 in the volume group DATA, which includes hdisk1, hdisk2, and hdisk3. It exports the volume group DATA to node 3.
If a volume group is already created and the combined physical hdisk lists contain disks that are not needed for the logical volume, those hdisks are added to the volume group. If the volume group has not already been created, createvsd creates a volume group that spans hdisk_list1+hdisk_list2.
Backup nodes cannot use the same physical disk as the primary does to serve virtual shared disks.
ALL specifies that you are creating virtual shared disks on all nodes in the system or system partition. No backup nodes are assigned if you use this operand. The virtual shared disks will be created on all the physical disks attached to the nodes in node_list (you cannot specify which physical disks to use).
createhsd -n 6 -g VSDVG
creates a new volume group with the local AIX volume group name VSDVG and the virtual shared disk global volume group name VSDVGn6. The node number is added to the local volume group name to create a unique global volume group name within a system partition to avoid name conflicts with the name used for volume groups on other nodes. If a backup node exists, the global volume group name will be created by concatenating the backup node number as well as the primary node number to the local volume group name. For example:
createhsd -n 6/3/ -g VSDVG
creates VSDVGn6b3, where the primary node is node 6 and the backup node for this global volume group is node 3. The local AIX volume group name will still be VSDVG. You can specify a local volume group that already exists. You do not need to use the -T flag if you specify a volume group name that already exists.
The Logical Volume Manager limits the number of physical partitions to 1016 per disk. If a disk is greater than 4 gigabytes in size, the physical partition size must be greater than 4MB to keep the number of partitions under the limit.
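For example, with 4MB physical partitions a 9 gigabyte disk would need roughly 9000/4 = 2250 partitions, well over the 1016 limit, whereas a 16MB physical partition size needs only about 9000/16 = 563 partitions, which is within the limit.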
createhsd -n 1,6 -c 2 -d DATA
creates virtual shared disks DATA1n1 on node 1, DATA2n6 on node 6, DATA3n1 on node 1, and DATA4n6 on node 6 and uses them to make up the hashed shared disk DATA.
createhsd -n 1,6 -c 2 -A DATA
creates DATA1n1 and DATA2n1 on node 1, and DATA3n6 and DATA4n6 on node 6.
The command:
createhsd -n 1,2 -d DATA
creates two virtual shared disks, DATA1n1 and DATA2n2. These virtual shared disks make up one hashed shared disk named DATA.
createhsd -n 1 -v DATA
creates one virtual shared disk on node 1 named DATA1n1 with an underlying logical volume lvDATA1n1. If the command
createhsd -n 1 -v DATA -l new
is used, the virtual shared disk on node 1 is still named DATA1n1, but the underlying logical volume is named lvnew1n1.
It is usually more helpful not to specify -l, so that your lists of virtual shared disk names and logical volume names are easy to associate with each other and you avoid naming conflicts.
The default is VSD.
is not done as part of the createvsd processing that underlies the createhsd command. This speeds the operation of the command and avoids unnecessary processing in the case where several IBM Virtual Shared Disks are being created on the same primary/secondary nodes. In that case, however, you should either not specify -x on the last createhsd in the sequence or issue the volume group commands listed above explicitly.
Operands
None.
Description
This command uses the sysctl facility.
You can use the System Management Interface Tool (SMIT) to run this command. To use SMIT, enter:
smit createhsd_dialog
or
smit vsd_data
and select the Create an HSD option when using the vsd_data fastpath.
Standard Output
For the following command:
createhsd -n 1/:hdisk2,hdisk3/ -g twinVG -s 1600 -t 8 -S -l \
twinLV -d twinHSD -c 4
The messages returned to standard output are:
OK:0:vsdvg -g twinVGn1 twinVG 1
OK:0:defvsd twinLV1n1 twinVGn1 twinHSD1n1 nocache
OK:0:defvsd twinLV2n1 twinVGn1 twinHSD2n1 nocache
OK:0:defvsd twinLV3n1 twinVGn1 twinHSD3n1 nocache
OK:0:defvsd twinLV4n1 twinVGn1 twinHSD4n1 nocache
OK:createvsd { -n 1/:hdisk2,hdisk3/ -s 401 -T 4 -g twinVG -c 4 -v twinHSD -l twinLV -o cache -K }
OK:0:defhsd twinHSD not_protect_lvcb 8192 twinHSD1n1 twinHSD2n1 twinHSD3n1 twinHSD4n1
Exit Values
Security
You must have access to the virtual shared disk subsystem via the sysctl service to run this command.
Restrictions
Prerequisite Information
PSSP: Managing Shared Disks
Location
/usr/lpp/csd/bin/createhsd
Related Information
Commands: createvsd, defhsd, vsdvg
Examples
To create six 4MB virtual shared disks and their underlying logical volumes with a prefix of TEMP, as well as a hashed shared disk comprising those virtual shared disks (24MB overall) with a stripe size of 32KB, enter the following (assuming that no previous virtual shared disks are defined with the TEMP prefix):
createhsd -n 3,4,7/8/ -c 2 -s 1024 -g vsdvg -d TEMP -t 32
This creates the following virtual shared disks:
and the HSD:
Purpose
createvsd - Creates a set of virtual shared disks, with their associated logical volumes, and puts information about them into the System Data Repository (SDR).
Syntax
Flags
[P/S] : disk_list1+disk_list2/
For CVSD, the node list is:
S1/S2 : disk_list1+disk_list2/
"P" specifies the primary server node for serially accessed shared disks, "S" specifies the backup (secondary) server node for serially accessed shared disks, and S1 and S2 specify the server nodes for concurrently accessed shared disks. disk_list1 is the list of local physical disks, or vpaths, for the logical volume on the primary. In other words, this list can be made up of hdiskx, hdisky,... or vpathx, vpathy,....
Notes:
disk_list1+disk_list2 is the list of local physical disks or vpaths in the volume group on the primary, if you want to have more disks in the volume group than are needed for the logical volume. The sequence in which nodes are listed determines the names given to the virtual shared disks. For example:
createvsd -n 1,6,4 -v PRE
(with the vsd_prefix PRE) creates virtual shared disks PRE1n1 on node 1, PRE2n6 on node 6, and PRE3n4 on node 4.
To create a volume group that spans hdisk2, hdisk3, and hdisk4 on node 1, with a backup on node 3, enter:
createvsd -n 1/3:hdisk2,hdisk3,hdisk4/ -v DATA
This command creates:
To create volume groups just like that one on nodes 1, 2, and 3 of a system with backup on nodes 4, 5, and 6 of the same system, enter:
createvsd -n 1/4:hdisk1,hdisk2,hdisk3/,2/5:hdisk5,hdisk6, \
hdisk7/,3/6:hdisk2,hdisk4,hdisk6/ -v DATA
This command is shown on two lines here, but you must enter it without any spaces between the items in node_list.
The command creates:
To create a virtual shared disk where the logical volume spans only two of the physical disks in the volume group, enter:
createvsd -n 1/3:hdisk1,hdisk2+hdisk3/ -v DATA
This command creates the virtual shared disk DATA1n1 with logical volume lvDATA1n1 spanning hdisk1 and hdisk2 in the volume group DATA, which includes hdisk1, hdisk2, and hdisk3. It exports the volume group DATA to node 3.
If a volume group is already created and the combined physical hdisk lists contain disks that are not needed for the logical volume, those hdisks are added to the volume group. If the volume group has not already been created, createvsd creates a volume group that spans hdisk_list1+hdisk_list2.
Backup nodes cannot use the same physical disk as the primary does to serve virtual shared disks.
ALL specifies that you are creating virtual shared disks on all nodes in the system or system partition. No backup nodes are assigned if you use this operand. The virtual shared disks will be created on all the physical disks attached to the nodes in node_list (you cannot specify which physical disks to use).
createvsd -n 6 -g VSDVG
creates a volume group with the local volume group name VSDVG and the global volume group name VSDVG1n6 on node 6. The node number is added to the prefix to avoid name conflicts when a backup node takes over a volume group. If a backup node exists, the global volume group name will be concatenated with the backup node number as well as the primary. For example:
createvsd -n 6/3/ -g VSDVG
creates a volume group with the local volume group name VSDVG and the global volume group name VSDVGn6b3. The primary node is node 6 and the backup node for this volume group is node 3.
createvsd -n 1,6 -c 2 -v DATA
creates virtual shared disks DATA1n1 on node 1, DATA2n6 on node 6, DATA3n1 on node 1, and DATA4n6 on node 6.
createvsd -n 1,6 -c 2 -A DATA
creates DATA1n1 and DATA2n1 on node 1, and DATA3n6 and DATA4n6 on node 6.
If -v is not specified, the prefix vsd is used.
createvsd -n 1 -v DATA
creates one virtual shared disk on node 1 named DATA1n1 with an underlying logical volume lvDATA1n1. If the command
createvsd -n 1 -v DATA -l new
is used, the virtual shared disk on node 1 is still named DATA1n1, but the underlying logical volume is named lvnew1n1.
It is usually more helpful not to specify -l, so that your lists of virtual shared disk names and logical volume names are easy to associate with each other and you avoid naming conflicts.
The Logical Volume Manager limits the number of physical partitions to 1016 per disk. If a disk is greater than 4 gigabytes in size, the physical partition size must be greater than 4MB to keep the number of partitions under the limit.
The default is VSD.
is not done as part of the createvsd processing. This speeds the operation of the command and avoids unnecessary processing in the case where several IBM Virtual Shared Disks are being created on the same primary/secondary nodes. In this case, however, you should either not specify -x on the last createvsd in the sequence or issue the volume group commands listed above explicitly.
Operands
None.
Description
Use this command to create a volume group with the specified name (if one does not already exist) and to create a logical volume of size s within that volume group.
You can use the System Management Interface Tool (SMIT) to run this command. To use SMIT, enter:
smit vsd_data
and select the Create a virtual shared disk option.
Standard Output
For the following command:
createvsd -n 1/:hdisk1/ -g testvg -s 16 -T 8 -l lvtest -v test -c 4
The messages returned to standard output are:
OK:0:vsdvg -g testvgn1 testvg 1
OK:0:defvsd lvtest1n1 testvgn1 test1n1 nocache
OK:0:defvsd lvtest2n1 testvgn1 test2n1 nocache
OK:0:defvsd lvtest3n1 testvgn1 test3n1 nocache
OK:0:defvsd lvtest4n1 testvgn1 test4n1 nocache
For the following command:
createvsd -n 1/:hdisk1/ -g testvg -s 16 -T 8 -l lvtest -v test -c 4
The messages returned to standard output are:
OK:0:defvsd lvtest5n1 testvgn1 test5n1 nocache
OK:0:defvsd lvtest6n1 testvgn1 test6n1 nocache
OK:0:defvsd lvtest7n1 testvgn1 test7n1 nocache
OK:0:defvsd lvtest8n1 testvgn1 test8n1 nocache
Exit Values
Security
You must have access to the virtual shared disk subsystem via the sysctl service to run this command.
Restrictions
Prerequisite Information
PSSP: Managing Shared Disks
Location
/usr/lpp/csd/bin/createvsd
Related Information
Commands: defvsd, vsdvg
Examples
To create two 4MB virtual shared disks on each of three primary nodes, one of which has a backup, enter:
createvsd -n 3,4,7/8/ -c 2 -s 4 -g vsdvg -v TEMP
This command creates the following virtual shared disks:
To create three virtual shared disks, where the logical volume created on node 3 spans fewer disks than the volume group does, enter:
createvsd -n 3,4/:hdisk1,hdisk2+hdisk3/,7/8/ -s 4 -g datavg -v USER
This command creates:
Purpose
crunacct - Runs on the acct_master node to produce daily summary accounting reports and to accumulate accounting data for the fiscal period using merged accounting data from each node.
Syntax
Flags
Operands
are copied to the acct_master node to the following files:
for all YYYYMMDD prior to or equal to the YYYYMMDD being processed.
It also creates an SP system version of the loginlog file, in which each line consists of a date, a user login name and a list of node names. The date is the date of the last accounting cycle during which the user, indicated by the associated login name, had at least one connect session in the SP system. The associated list of node names indicates the nodes on which the user had a login session during that accounting cycle.
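A purely illustrative line (the values shown are hypothetical) records that user jdoe last had connect sessions on nodes node01 and node07 during the accounting cycle of the date shown:
20000131 jdoe node01 node07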
Description
In order for SP accounting to succeed each day, the nrunacct command must complete successfully on each node for which accounting is enabled, and then the crunacct command must complete successfully on the acct_master node. However, this may not always happen. In particular, the following scenarios must be taken into account:
From the point of view of the crunacct command, the first scenario results in no accounting data being available from a node. The second scenario results in more than one day's accounting data being available from a node. If no accounting data is available from a node, crunacct reports the error condition and continues processing with data from the other nodes. If data cannot be obtained from at least X percent of the nodes, processing is terminated. "X" is referred to as the spacct_actnode_thresh attribute and can be set via a SMIT panel.
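For example, if spacct_actnode_thresh is set to 80 and accounting is enabled on 10 nodes, crunacct terminates unless data can be obtained from at least 8 of those nodes.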
If node data for accounting cycle N is not available when crunacct executes and then becomes available to crunacct during accounting cycle N+1, the node data for both the N and N+1 accounting cycles is merged by crunacct. In general, crunacct merges all data from a node that has not yet been reported into the current accounting cycle, except as in the following case.
If crunacct has not run for more than one accounting cycle, so that there are several days' worth of data on each node, crunacct processes each accounting cycle's data to produce the normal output for each accounting cycle. For example, if crunacct has not executed for accounting cycles N and N+1, and it is now accounting cycle N+2, then crunacct first executes for accounting cycle N, then for accounting cycle N+1, and finally for accounting cycle N+2.
However, if the several accounting cycles span from the previous fiscal period to the current fiscal period, then only the accounting cycles that are part of the previous fiscal period are processed. The accounting cycles that are part of the current fiscal period are processed during the next night's execution of crunacct. Appropriate messages are provided in the /var/adm/cacct/active file so that the administrator can execute cmonacct prior to the next night's execution of crunacct.
To restart the crunacct command after an error, first check the /var/adm/cacct/activeYYYYMMDD file for diagnostic messages and take appropriate action. For example, if the log indicates that data was unavailable from a majority of nodes, and their corresponding nrunacct state files indicate a state other than complete, check their /var/adm/acct/nite/activeYYYYMMDD files for diagnostic messages and then fix any damaged data files, such as pacct or wtmp.
Remove the lock files and the lastdate file (all in the /var/adm/cacct directory) before restarting the crunacct command. You must specify the -r flag. The command begins processing cycles starting with the cycle after the last successfully completed cycle. That cycle is restarted at the state specified in the statefile file. All subsequent cycles, up to and including the current cycle, are run from the beginning (the SETUP state).
You may choose to start the process at a different state by specifying a state with the -r flag. In that case, the command also begins with the cycle after the last successfully completed cycle, but restarts it at the state entered on the command line. All subsequent cycles, up to and including the current cycle, are run from the beginning (the SETUP state).
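A sketch of the cleanup step, assuming the lock files use the customary lock and lock1 names (verify the actual file names in /var/adm/cacct on your system before removing anything):
rm /var/adm/cacct/lock /var/adm/cacct/lock1 /var/adm/cacct/lastdate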
Files
Security
You must have root privilege to run this command.
Prerequisite Information
For more information about the Accounting System, the preparation of daily and monthly reports, and the accounting files, see PSSP: Administration Guide.
Location
/usr/lpp/ssp/bin/crunacct
Related Information
Commands: acctcms, acctcom, acctcon1, acctcon2, acctmerg, acctprc1, acctprc2, accton, crontab, fwtmp, nrunacct
Daemon: cron
The System Accounting information found in AIX System Management Guide
Examples
nohup /usr/lpp/ssp/bin/crunacct -r 2>> \
/var/adm/cacct/nite/accterr &
This example restarts crunacct at the state located in the statefile file. The crunacct command runs in the background (&), ignoring all INTERRUPT and QUIT signals (nohup). Standard error output (2) is added to the end (>>) of the /var/adm/cacct/nite/accterr file.
nohup /usr/lpp/ssp/bin/crunacct -r CMS 2>> \
/var/adm/cacct/nite/accterr &
This example restarts the crunacct command starting with the CMS state. The crunacct command runs in the background (&), ignoring all INTERRUPT and QUIT signals (nohup). Standard error output (2) is added to the end (>>) of the /var/adm/cacct/nite/accterr file.