
Command and Technical Reference, Volume 2

spbootins

Purpose

spbootins - Enters boot/install configuration data for a node or series of nodes in the System Data Repository (SDR).

Syntax

spbootins
{-c volume_group_name | -r {install | customize | disk | maintenance | diag | migrate}}
[-s {yes | no}] {start_frame start_slot {node_count | rest} | -l node_list}

Flags

-c volume_group_name
Specifies the name of the volume group to select for the target nodes. This volume group will become the current volume group for subsequent installations and customizations.

-r
Specifies the boot/install server's response to the bootp request from the nodes.

install
Specify install if you want the server to perform a network install (overwrite install) and customize each node.

customize
Specify customize if you want the server to place node-specific configuration information from the SDR into each node's local Object Data Management (ODM).

disk
Specify disk if you want the server to ignore the bootp request and have each node boot from its local disk.

maintenance
Specify maintenance to have each node boot in a prompted mode.

A node that boots in a prompted mode comes up with the "Install/Maintenance" panel. From this panel, you can choose option 3 to start a limited function maintenance shell. You may access files in the root volume group (rootvg) by choosing the panels to mount the root volume group and enter a shell.

diag
Sets the bootp_response to diag. The next time the node is network booted, a diagnostic menu will be displayed on the tty. From the diagnostic menu, you can execute simple or advanced diagnostics on the node or execute service aids. Service aids allow you to perform such tasks as formatting and certifying the hard drive on the node, or downloading microcode to a device attached to the node. When diagnostics are complete, set the bootp_response back to disk and reboot the node.

migrate
Indicates that you want the server to perform a migration installation on the specified nodes. See the PSSP: Installation and Migration Guide for more details on the migration installation method.

-s yes | no
Indicates whether setup_server should be run on the boot servers (including the control workstation) of the indicated nodes. If you specify -s no, setup_server is not run on the node's boot server, and it must be run later to make any necessary changes to installation-related files. Specify -s yes if you have finished entering boot/install/usr server data during your initial installation or if you are changing data after the initial installation. Otherwise, specify -s no. If -s is not specified, the default is -s yes.
Note:
In order to run the spbootins -s yes command using rsh as your remote command method, you must have SDR write authority and be authorized to perform an rsh to the target nodes. Therefore, your user ID must be in the appropriate authorization file (.k5login, .klogin, or .rhosts) on the target nodes.

-l node_list
Specifies a list of nodes to be used for this operation. Either specify a comma-delimited list of node numbers, or a file containing one line of data which is a comma-delimited list of node numbers. The file can also contain comment lines (preceded by a #) and lines that are all white space. If you use the node_list field, do not use the start_frame, start_slot, or node_count fields. (This is lowercase l, as in list.)
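
For example, a node list file passed with -l might look like the following (the node numbers are only an illustration); comment lines begin with a # and white-space lines are allowed:

    # nodes to be processed by spbootins
    1,3,5,7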

Operands

start_frame
Specifies the frame number of the first node to be used for this operation. Specify a value between 1 and 128 inclusive.

start_slot
Specifies the slot number of the first node to be used for this operation. Specify a value between 1 and 16 inclusive.
Note:
The start_frame and start_slot must resolve to a node in the current system partition.

node_count
Specifies the number of nodes to be used for this operation. The node information is added for successive nodes within a frame. If the count of nodes causes the nodes in a frame to be exhausted, the operation continues for nodes in the next sequential frame. Specify a value between 1 and 512 inclusive. If rest is specified, all the nodes from start_frame start_slot to the end of your system are used.
Note:
The node_count is considered to be within the current system partition.

Description

Use this command to select a volume group for the target nodes to use as their root volume group and to select what action to perform using that volume group the next time the nodes are booted or network booted. Each time this command is run with -s yes (the default), the setup_server command is run on each of the affected boot/install servers.
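
For example, assuming hypothetical node numbers 5 and 6, the following sets those nodes to boot from their local disks on the next network boot and runs setup_server on their boot/install servers:

    spbootins -r disk -s yes -l 5,6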

You can use the System Management Interface Tool (SMIT) to run the spbootins command. To use SMIT, enter:

smit node_data

and select the Boot/Install Information option.

You cannot use SMIT if you are using AFS authentication services.

Notes:

  1. This command should be run only on the control workstation. You must be logged into the control workstation as root to execute this command.

  2. Any changes made will not take effect on the nodes until they are customized.

Environment Variables

PSSP 3.4 provides the ability to run commands using secure remote command and secure remote copy methods.

The following environment variables determine whether the AIX rsh and rcp commands or the secure remote command and copy method is used. If no environment variables are set, the defaults are /bin/rsh and /bin/rcp.

You must be careful to keep these environment variables consistent. If you set any of the variables, set all three (RCMD_PGM, DSH_REMOTE_CMD, and REMOTE_COPY_CMD). The DSH_REMOTE_CMD and REMOTE_COPY_CMD executables should be kept consistent with the choice of the remote command method in RCMD_PGM.

For example, if you want to run spbootins using a secure remote method, enter:

export RCMD_PGM=secrshell
export DSH_REMOTE_CMD=/bin/ssh
export REMOTE_COPY_CMD=/bin/scp
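
Conversely, because /bin/rsh and /bin/rcp are the defaults when none of the variables are set, one way to return to the AIX rsh and rcp methods is simply to unset all three variables:

unset RCMD_PGM DSH_REMOTE_CMD REMOTE_COPY_CMD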

Security

You must have root privilege and write access to the SDR to run this command.

Location

/usr/lpp/ssp/bin/spbootins

Examples

  1. To change the root volume group for node 1 and install that volume group, enter:
    spbootins -c rootvg2 -r install -s yes -l 1
    
  2. To customize nodes 3 and 7 using their current volume group, enter:
    spbootins -r customize -s yes -l 3,7
    

spbootlist

Purpose

spbootlist - Sets the bootlist on a node or set of nodes based on the values in the Node and Volume Group objects.

Syntax

spbootlist {start_frame start_slot {node_count | rest} | -l node_list}

Flags

-l node_list
Specifies a list of nodes for this operation. This list can be a single numeric node number, or a list of numeric node numbers separated by commas.

Operands

start_frame
Specifies the frame number of the first node to be used for this operation. Specify a value between 1 and 128 inclusive.

start_slot
Specifies the slot number of the first node to be used for this operation. Specify a value between 1 and 16 inclusive.
Note:
The start_frame and start_slot must resolve to a node in the current system partition.

node_count
Specifies the number of nodes to be used for this operation. The node information is added for successive nodes within a frame. If the count of nodes causes the nodes in a frame to be exhausted, the operation continues for nodes in the next sequential frame. Specify a value between 1 and 512 inclusive. If rest is specified, all the nodes from start_frame start_slot to the end of your system are used.
Note:
The node_count is considered to be within the current system partition.

Description

The spbootlist command sets the bootlist on a node or set of nodes based on the values in the Node and Volume Group objects. The selected_vg attribute of the Node object points to a unique Volume_Group object for the node. spbootlist reads the vg_name of the Volume_Group object, determines which physical volumes are in the volume group, and sets the bootlist to "ent0" followed by all the physical volumes that contain boot logical volumes. In a mirrored environment, more than one physical volume contains a boot logical volume.
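
After running spbootlist, one way to verify the result is to display the normal-mode boot list on the node with the AIX bootlist command, for example through dsh (the host name node5 is only an illustration):

    dsh -w node5 '/usr/sbin/bootlist -m normal -o'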

Environment Variables

PSSP 3.4 provides the ability to run commands using secure remote command and secure remote copy methods.

The following environment variables determine whether the AIX rsh and rcp commands or the secure remote command and copy method is used. If no environment variables are set, the defaults are /bin/rsh and /bin/rcp.

You must be careful to keep these environment variables consistent. If you set any of the variables, set all three (RCMD_PGM, DSH_REMOTE_CMD, and REMOTE_COPY_CMD). The DSH_REMOTE_CMD and REMOTE_COPY_CMD executables should be kept consistent with the choice of the remote command method in RCMD_PGM.

For example, if you want to run spbootlist using a secure remote method, enter:

export RCMD_PGM=secrshell
export DSH_REMOTE_CMD=/bin/ssh
export REMOTE_COPY_CMD=/bin/scp

Exit Values

0
Indicates the successful completion of the command.

1
Indicates that a recoverable error occurred; some changes may have succeeded.

2
Indicates that an irrecoverable error occurred and no changes were made.

Security

You must have root privilege to run this command.

You must have access to the AIX remote commands or the secure remote commands to run this command.

Implementation Specifics

This command is part of the IBM Parallel System Support Programs (PSSP) Licensed Program (LP).

Location

/usr/lpp/ssp/bin/spbootlist

Related Information

Commands: spchvgobj

Examples

  1. To set the bootlist on node one, enter:
    spbootlist -l 1
    
  2. To set the bootlist on a list of nodes, enter:
    spbootlist -l 1,2,3
    

spchuser

Purpose

spchuser - Changes the attributes of an SP user account.

Syntax

spchuser attribute=value ... name

Flags

None.

Operands

attribute=value
Pairs of the supported attributes and values, as described under Supported Attributes and Values.

name
Name of the user account whose information you want to change.

Supported Attributes and Values

id
ID of the user specified by the name parameter.

pgrp
Primary group of the user specified by the name parameter.

gecos
General information about the user.

groups
The secondary groups to which the user specified by the name parameter belongs.

home
Host name of the file server where the home directory resides and the full path name of the directory. You can specify a host and directory in the format host:path; specify just the directory and have the host default to a value set in the SMIT site environment panel or by the spsitenv command; or specify just a directory and have the host default to the local machine. (See the example following this list.)

login
Indicates whether the user specified by the name parameter can log in to the system with the login command. This option does not change the /etc/security/user file. Instead, it alters the user password field in /etc/security/passwd.

shell
Program run for the user specified by the name parameter at the session initiation.
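
For example, using a hypothetical file server host name, the home attribute can name both the server and the directory in the host:path format:

    spchuser home=svr1:/home/svr1/charlie charlie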

Description

No flags are supported. Except for home, the rules for the supported attributes and values correspond to those enforced by the AIX chuser command.

You can only change the values of the supported attributes.

You can use the System Management Interface Tool (SMIT) to run the spchuser command. To use SMIT, enter:

smit spusers

and select the Change/Show Characteristics of a User option.

Environment Variables

PSSP 3.4 provides the ability to run commands using secure remote command and secure remote copy methods.

The following environment variables determine whether the AIX rsh and rcp commands or the secure remote command and copy method is used. If no environment variables are set, the defaults are /bin/rsh and /bin/rcp.

You must be careful to keep these environment variables consistent. If you set any of the variables, set all three (RCMD_PGM, DSH_REMOTE_CMD, and REMOTE_COPY_CMD). The DSH_REMOTE_CMD and REMOTE_COPY_CMD executables should be kept consistent with the choice of the remote command method in RCMD_PGM.

For example, if you want to run spchuser using a secure remote method, enter:

export RCMD_PGM=secrshell
export DSH_REMOTE_CMD=/bin/ssh
export REMOTE_COPY_CMD=/bin/scp

Security

You must have root privilege to run this command. This command is run on the control workstation only.

Location

/usr/lpp/ssp/bin/spchuser

Examples

To change the default shell to /bin/csh and the secondary group membership to dev and dev2 for the user account charlie, enter:

spchuser groups=dev,dev2 shell=/bin/csh charlie
 

spchvgobj

Purpose

spchvgobj - Changes the contents of a Volume_Group object.

Syntax

spchvgobj
-r volume_group_name [-h pv_list] [-i install_image]
[-p code_version] [-v lppsource_name] [-n boot_server]
[-c {1 | 2 | 3}] [-q {true | false}]
{start_frame start_slot {node_count | rest} | -l node_list}

Flags

-r volume_group_name
Specifies the root volume group name to which the changes apply.

-h pv_list
Indicates the physical volumes to be used for installation for the volume group specified. The root volume group is defined on the disks indicated, and all data on the disks is destroyed. The physical volumes may be specified as:

Logical names (for example, hdisk0)
Hardware location (for example, 00-00-00-0,0)
SSA connwhere (for example, ssar//012345678912345)
Physical volume identifier (for example, 0123456789abcdef)
SAN disk identifier with world-wide port name and logical unit identifier (for example, 0x0123456789abcdef//0x0)

If multiple physical volumes are specified, separate them by commas for logical names and by colons for all other formats. Different formats may be used for the different physical volumes except when using logical names. The default is hdisk0.

Note:
IBM strongly suggests that formats other than logical names be used to specify the physical volumes. It ensures that you install on the intended disk by targeting a specific disk at a specific location. The logical naming of physical volumes may change depending on hardware installed or possible hardware problems. This is especially true when there are external drives present, as the manner in which the device names are defined may not be obvious.

-i install_image
Specifies the name of the install image to be used for the volume group when the nodes are next network installed. Specify a file in the /spdata/sys1/install/images directory on the control workstation. At installation, the value for each volume group's install image name is default, which means that the default install image name for the system partition or the system is used for each node. The default install image name is found in the Syspar or the SP object, in that order.

-p code_version
Sets the volume group's code version. Use this to indicate the PSSP level to install on the node. The code_version value you choose must match the directory name that the PSSP installation files are placed under in the /spdata/sys1/install/pssplpp directory during installation. See the PSSP: Installation and Migration Guide for more details.

-v lppsource_name
Sets the volume group's lppsource name. Use this to indicate the AIX level to install on the node. The lppsource_name value you choose must match the directory name you choose to place the lppsource files under in the /spdata/sys1/install directory during installation. See the PSSP: Installation and Migration Guide for more details.

-n boot_server
Identifies the boot/install server for the volume groups you have specified. The boot/install server is identified by a node number. Node number 0 represents the control workstation. The value of the boot/install server at installation depends on how many frames are in your system. In a single frame system, the control workstation (node 0) is the default server for each node. In a system with more than 40 nodes, the default server for the first node in each frame is the control workstation, and the default server for the rest of the nodes in a frame is the first node in that frame.

-c copies
Specifies the number of mirrors to create for the volume group. To enable mirroring, set this to 2 or 3. Setting this to 1 disables mirroring. When enabling mirroring, be sure that there are enough physical volumes to contain all the copies of the volume group. Each copy must have at least 1 physical volume.

-q true | false
Specifies whether quorum should be enabled. If quorum is enabled, a voting scheme is used to determine whether the number of physical volumes that are up is enough to maintain quorum. If quorum is lost, the entire volume group is taken offline to preserve data integrity. If quorum is disabled, the volume group remains online as long as there is at least one running physical volume.

-l node_list
Specifies a list of nodes to be used for this operation. Specify a comma-delimited list of node numbers. If you use the -l flag, do not use the start_frame, start_slot, or node_count operands.

Operands

start_frame
Specifies the frame number of the first node to be used for this operation. Specify a value between 1 and 128 inclusive.

start_slot
Specifies the slot number of the first node to be used for this operation. Specify a value between 1 and 16 inclusive.
Note:
The start_frame and start_slot must resolve to a node in the current system partition.

node_count
Specifies the number of nodes to be used for this operation. The node information is added for successive nodes within a frame. If the count of nodes causes the nodes in a frame to be exhausted, the operation continues for nodes in the next sequential frame. Specify a value between 1 and 512 inclusive. If rest is specified, all the nodes from start_frame start_slot to the end of your system are used.
Note:
The node_count is considered to be within the current system partition.

Description

This command is used to change the configuration information for an existing volume group on a node or group of nodes in the System Data Repository (SDR). When this command is run and the SDR is changed, setup_server must be run on the affected boot/install servers, and the affected nodes may need to be customized or installed to apply the changes. Certain volume group information, such as mirroring and the pv_list, may be updated using the spmirrorvg or spunmirrorvg commands.
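
For example, one possible sequence (with hypothetical node numbers and an illustrative lppsource directory name) is to change the lppsource for two nodes and then let spbootins run setup_server on their boot/install servers and mark the nodes for customization:

    spchvgobj -r rootvg -v aix520 -l 2,3
    spbootins -r customize -s yes -l 2,3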

Exit Values

0
Indicates the successful completion of the command.

1
Indicates that a recoverable error occurred; some changes may have succeeded.

2
Indicates that an irrecoverable error occurred and no changes were made.

Security

You must have root privilege and write access to the SDR to run this command.

When restricted root access (RRA) is enabled, this command can only be run from the control workstation.

Implementation Specifics

This command is part of the IBM Parallel System Support Programs (PSSP) Licensed Program (LP).

Location

/usr/lpp/ssp/bin/spchvgobj

Related Information

Commands: spbootins, spmirrorvg, spmkvgobj, sprmvgobj, spunmirrorvg

Examples

  1. To specify node 1 as the boot/install server for the volume group "rootvg" on nodes 2-16, enter:
    spchvgobj -r rootvg -n 1 1 2 15
    
  2. To enable mirroring with 2 copies, no quorum and 2 SSA physical volumes for the volume group "rootvg" on nodes 2 and 3, enter:
    spchvgobj -r rootvg -c 2 -q false -h \
    ssar//567464736372821:ssar//67464736372821 -l 2,3
    
  3. To specify a FibreChannel disk as the physical volume to be used for installation on node 5, enter:
    spchvgobj -r rootvg -h 0x0123456789abcdef//0x0 -l 5

spcustomize_syspar

Purpose

spcustomize_syspar - Enters customization information to be used in creating a system partition.

Syntax

spcustomize_syspar
[-h] [-n syspar_name | IP_address]
[-l PSSP_code_level]
[-d default_install_image | default]
[-e primary_node | default]
[-b backup_primary_node | default]
-i {[dce], [k4] | none}
-r {[dce], [k4], [std] | none}
-m {[k5], [k4], [std]}
-t {[dce], [compat] | none}
config_dir/layout_dir/syspar_dir | fully_qualified_path_name

Flags

-h
Displays usage information.

-n syspar_name | IP_address
Specifies the system partition name (the control workstation host name or host name alias) or IP address (which corresponds to the system partition name) associated with this system partition.

-l PSSP_code_level
Specifies the IBM Parallel System Support Programs for AIX (PSSP) code level for the system partition. For mixed system partitions (partitions that have multiple supported levels of PSSP coexisting in the same partition), this should be set to the minimum (earliest) level of PSSP in the system partition.

-d default_install_image | default
Specifies the default install image for the system partition or default to direct the system to use the system-wide default install image. Refer to PSSP: Installation and Migration Guide for additional information on the default install image.

-e primary_node | default
Specifies the primary node number for switch operations, or default to direct the system to automatically set the default, which is the first node in the node list.

-b backup_primary_node | default
Specifies the primary backup node number for switch operations, or default to direct the system to automatically set the default, which is the last node in the node list. This flag is valid only on SP Switch systems.

-i
Sets security capabilities for the nodes in the specified partition.

-r
Specifies the authorization methods for AIX remote commands.
Note:
If none is specified as an option, you cannot select any other methods. If none is chosen, a secure remote command method must be enabled.

-m
Enables authentication methods for AIX remote commands.

-t
Enables authentication methods for SP Trusted Services.

Operands

config_dir
Specifies the directory name for a configuration directory.

layout_dir
Specifies the directory name for a layout directory within the configuration directory.

syspar_dir
Specifies the directory name for a system partition directory within the layout directory.

fully_qualified_path_name
Specifies the fully qualified path name to a system partition directory.

Description

The spcustomize_syspar command is not valid on a system with an SP Switch2 switch or on a switchless clustered enterprise server system.

Use this command to customize a system partition customization file (custom).

For a specified system partition, the customization data can be entered with the optional parameters. If the custom file does not exist, you can create one by specifying the -n and -l flags. The -d and -e flags are optional when creating a custom file. If -d and -e are not specified, the system automatically specifies default to set the default install image and primary node in the newly-created custom file. Once the custom file is created, any combination of the optional parameters can be used to update the contents of the file.
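
For example, a new custom file could be created with a command similar to the following; the partition name, security settings, and directory names here are purely illustrative, and (as noted under Examples) the security parameters -i, -r, -m, and -t must be supplied each time the command is run:

    spcustomize_syspar -n c186sp2 -l PSSP-3.4 -i dce -r dce -m k5 \
                       -t dce config.8_8/layout.1/syspar.2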

Use the spdisplay_config command with the -c flag to display to standard output the contents of the customization file for a specified system partition.

Exit Values

0
Indicates the successful completion of the command.

-1
Indicates that the command was unsuccessful.

Security

You must have root privilege to run this command.

Location

/usr/lpp/ssp/bin/spcustomize_syspar

Related Information

Commands: spapply_config, spdisplay_config, spverify_config

Files: nodelist, topology

Examples

  1. To modify the system partition name, PSSP code level, and primary node information for the specified system partition, enter the following command. Note that the security parameters (-i, -r, -m, and -t) must be entered every time spcustomize_syspar is used. Use the existing values of the security parameters unless they need to be changed.
    spcustomize_syspar -n c186sp1 -l PSSP-3.4 -e 2 -i dce -r dce -m k5 \
                       -t dce config.4_4_4_4/layout.1/syspar.1
  2. To use the default primary node information for the specified system partition, enter:
    spcustomize_syspar -e default -i dce -r dce -m k4 -t dce \
                       config.4_12/layout.1/syspar.1

spcw_addevents

Purpose

spcw_addevents - Identifies the High Availability Cluster Multiprocessing (HACMP) event scripts supplied by the High Availability Control Workstation (HACWS) option to the HACMP software.

Syntax

spcw_addevents

Flags

None.

Operands

None.

Description

HACWS customizes the recovery of control workstation services by providing HACMP event scripts, which are executed by the HACMP software. The spcw_addevents command is a shell script that identifies the HACMP event scripts to HACMP without requiring the system administrator to go through all the equivalent HACMP SMIT panels.

Exit Values

0
Indicates the successful completion of the command.

nonzero
Indicates that an error occurred.

Security

You must have root privilege to run this command.

Prerequisite Information

Refer to PSSP: Administration Guide for additional information on the HACWS option.

Location

/usr/sbin/hacws/spcw_addevents

spcw_apps

Purpose

spcw_apps - Starts or stops control workstation applications in a High Availability Control Workstation (HACWS) configuration.

Syntax

spcw_apps {-u | -d} [-i | -a ]

Flags

-u
Starts control workstation applications on the local host.

-d
Stops control workstation applications on the local host.

-i
Sets the local host to be the inactive control workstation before starting or after stopping control workstation applications.

-a
Sets the local host to be the active control workstation before starting or after stopping control workstation applications.

Operands

None.

Description

The control workstation services are started at boot time on a regular control workstation via entries in /etc/inittab. An HACWS configuration requires the capability to stop control workstation services on one control workstation and restart them on the other. The install_hacws command removes most of the control workstation entries from /etc/inittab, and the spcw_apps command is provided as a means to stop and start control workstation services in the HACWS configuration. In addition, the spcw_apps command can be used to make the inactive control workstation act as a client of the active control workstation to keep the two control workstations synchronized.

Note:
The High Availability Cluster Multiprocessing (HACMP) event scripts and installation scripts supplied with the High Availability Control Workstation (HACWS) option of the IBM Parallel System Support Programs for AIX (PSSP) will start or stop the control workstation applications during a failover or reintegration. The administrator should not normally have to start or stop the control workstation applications.

Exit Values

0
Indicates the successful completion of the command.

nonzero
Indicates that an error occurred.

Prerequisite Information

Refer to PSSP: Administration Guide for additional information on the HACWS option.

Location

/usr/sbin/hacws/spcw_apps

Related Information

Commands: install_hacws

Examples

In the following example, assume that the primary control workstation is currently the active control workstation. This means that the primary control workstation is providing control workstation services to the SP system. When a control workstation failover occurs, the AIX High Availability Cluster Multi-Processing (HACMP) software moves the control workstation network and file system resources from the primary to the backup control workstation. In addition, control workstation applications must be stopped on the primary and restarted on the backup. HACWS provides the spcw_apps command to HACMP as the method to accomplish this. The HACMP software issues the following command on the primary:

spcw_apps -di

This command stops control workstation services on the active primary and then sets the primary to be the inactive control workstation. Next, the HACMP software issues the following command on the backup:

spcw_apps -ua

This command sets the backup to be the active control workstation and then starts the control workstation services on the backup. Finally, the HACMP software issues the following command on the primary:

spcw_apps -u

This command configures the primary to be a client of the backup control workstation (which is now active).

