When a node is installed, migrated, or customized (set to customize and rebooted), and that node's boot/install server does not have a /tftpboot/tuning.cust file, the default file of system performance tuning variable settings, /usr/lpp/ssp/install/config/tuning.default, is copied to /tftpboot/tuning.cust on that node. You can override these values by following one of the methods described in the following list:
IBM supplies three alternate tuning files which contain initial performance tuning parameters for three different SP environments:
To select one of these files for use throughout the nodes in your system, use SMIT or issue the cptuning command. When you select one of these files, it is copied to /tftpboot/tuning.cust on the control workstation and is propagated from there to each node in the system when that node is installed, migrated, or customized. Each node inherits its tuning file from its boot/install server. Nodes whose boot/install server is another node (rather than the control workstation) obtain their tuning.cust file from that server node, so you must propagate the file to the server node before propagating it to the client nodes. The settings in the /tftpboot/tuning.cust file are preserved across reboots of the node.
The following steps enable you to create your own customized set of network tunable values and have them propagated throughout the nodes in your system. These values are propagated to each node's /tftpboot/tuning.cust file from the node's boot/install server when the node is installed, migrated, or customized, and are preserved across reboots of the node.
| If using: | Do this: |
|---|---|
| SMIT | |
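For example, one way to seed your customized file on the control workstation is to start from the supplied default and edit it. This is a minimal sketch; the tunable shown in the comment is an illustrative assumption to replace with the values your environment needs:

    # On the control workstation: start from the IBM-supplied defaults
    cp /usr/lpp/ssp/install/config/tuning.default /tftpboot/tuning.cust
    # Then edit /tftpboot/tuning.cust to set your own values, for example:
    # /usr/sbin/no -o sb_max=163840        (value is illustrative only)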
Once you have updated tuning.cust, continue installing the nodes. After the nodes are installed and customized, on all subsequent boots, the tunable values in tuning.cust will be automatically set on the nodes.
Note that each of the supplied network tuning parameter files, including the default tuning parameter file, contains the line /usr/sbin/no -o ipforwarding=1. IBM suggests that on non-gateway nodes, you change this line to read /usr/sbin/no -o ipforwarding=0. After a non-gateway node has been installed, migrated, or customized, you can make this change in the /tftpboot/tuning.cust file on that node.
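For example, assuming the line appears exactly as shipped, one way to make the change on a non-gateway node is the following sketch (editing the file directly with an editor works just as well):

    # Back up, then flip ipforwarding from 1 to 0 in the node's tuning file
    cp /tftpboot/tuning.cust /tftpboot/tuning.cust.bak
    sed 's/ipforwarding=1/ipforwarding=0/' /tftpboot/tuning.cust.bak > /tftpboot/tuning.cust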
If you are configuring more than eight of one particular adapter type, you must change the ifsize parameter in the tuning.cust file.
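For example, if you have more than eight adapters of one type, you might add a line like the following to tuning.cust (the value 16 is an illustrative assumption; size it to your adapter count):

    /usr/sbin/no -o ifsize=16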
For the latest performance and tuning information, refer to the RS/6000 Web site at:
http://www.rs6000.ibm.com/support/sp/perf
You can also access this information using the RS/6000 SP Resource Center.
Do this step to perform additional customization such as:
IBM provides the opportunity to run customer-supplied scripts during node installation:
See Appendix E, User-supplied node customization scripts for more detailed information on:
Appendix E, User-supplied node customization scripts also discusses migration and coexistence issues and techniques to use the same set of customization scripts across different releases and versions of AIX and PSSP.
Once your node is up and running, use:
There are special considerations that you must take into account if you are installing your system with the following security setup:
    splstdata -p
    List System Partition Information
    ...
    auth_install        k4
    auth_root_rcmd      k4
    auth_methods        k5:k4:std
    ts_auth_methods     compat
Because auth_install does not contain DCE, you must ensure that DCE is installed on the nodes before psspfb_script sets the authentication methods during the install process. This same requirement existed for PSSP 3.1, so you may already have a process in place for mksysb installs. To install a node from a mksysb image, add code to /tftpboot/script.cust that mounts the directory containing DCE and installs your required DCE clients.
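A minimal sketch of such script.cust additions follows; the server name, export path, and fileset name are assumptions to replace with your site's DCE image location and the client filesets you require:

    # Mount the directory containing the DCE install images (names assumed)
    mount cws:/spdata/sys1/install/dce /mnt
    # Install the required DCE client filesets (fileset name assumed)
    installp -aXd /mnt dce.client.core.rte
    umount /mnt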
There are special considerations to take into account if you are going to install nodes with secure remote command methods enabled. See Step 30: Enter site environment information and RS/6000 SP: Planning, Volume 2, Control Workstation and Software Environment for additional information.
When the node is installed, the secure remote command software must also be installed and configured, and the daemon started. The root public keys must be copied from the control workstation to the node, from each boot/install server node to the nodes it serves, and from the boot/install server nodes to the control workstation. This enables the PSSP installation and configuration scripts to run secure remote commands from the control workstation and any other boot/install server nodes to the nodes being installed.
To enable the secure remote command software on the nodes during node installation, edit the /tftpboot/script.cust file to install the secure remote command software and copy the root public keys to the nodes. Examples are shipped in the script.cust sample file with PSSP 3.4. The script.cust file also adds the start of the daemon to /etc/inittab to ensure that the secure remote command daemon is restarted after any node reboot.
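The following sketch illustrates the kind of additions involved, assuming OpenSSH is your secure remote command software; every host, path, and fileset name here is an assumption, and the sample script.cust shipped with PSSP remains the authoritative example:

    # Install the secure remote command software (names assumed)
    mount cws:/spdata/sys1/install/ssh /mnt
    installp -aXd /mnt openssh.base
    umount /mnt
    # Append the control workstation's root public key (key path assumed;
    # root's home directory on AIX is /)
    mkdir -p /.ssh
    cat /tftpboot/root_key.pub >> /.ssh/authorized_keys
    # Start the daemon on every reboot
    mkitab "sshd:2:once:/usr/sbin/sshd"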
The PSSP code must be able to issue secure remote commands and copies to the nodes without being prompted for passwords or passphrases during installation and configuration.
If additional files must be copied to the nodes during the installation process with secure remote command and restricted root remote commands enabled, the firstboot.cmds sample file gives examples of how to enable the copy from the control workstation to the nodes in the restricted access environment and in the secure remote command enabled environment.
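For instance, in the secure remote command environment such a copy might look like the following (the host and file names are hypothetical; see the shipped firstboot.cmds sample for the supported patterns):

    # Copy an additional configuration file from the control workstation
    /usr/bin/scp root@cws:/spdata/sys1/install/extras/myapp.conf /etc/myapp.conf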
If you do not have a switch, skip this step and proceed to Step 65: Set up system partitions (SP Switch or switchless systems only).
The optional switch connects all the nodes in the system to increase the speed of internal system communications. It supports the high volume of message passing that occurs in a parallel environment, providing increased bandwidth and low latency.
The switch includes software called the Worm, which verifies the actual switch topology against an anticipated topology as specified in the switch topology file. This file tells the Worm your switch configuration. You create this file by copying one of the default topology files provided for each SP configuration.
The Worm verifies the switch connections beginning at a node designated as the primary node. By default, the primary node is the first node in the system or the partition. You can override the default and designate another node as the primary node. You must do this if the first node is not operational.
In addition to the primary node, a primary backup node exists that will take over for the primary node when it detects that the primary node is no longer functional. The primary backup node passively listens for activity from the primary node. When the primary backup node detects that it has not been contacted by the primary node for a specified amount of time, it assumes the role of the primary node. This takeover involves nondisruptively reinitializing the switch fabric, selecting another primary backup, and updating the SDR. By default, a node is selected from a frame that is different from the primary node. If no other frame exists (for example, a single frame system), a node is selected from a switch chip that is different from the primary node. If no other switch chip is available, any available node on the switch is selected. By default, the backup node is the last node in the system or the partition.
Select the correct switch topology file by counting the number of node switch boards (NSBs) and intermediate switch boards (ISBs) in your system, then apply these numbers to the naming convention. If you have an SP Switch2 two-plane system, count only the number of NSBs and ISBs on one plane. The switch topology files are in the /etc/SP directory on the control workstation.
NSBs are switches mounted in slot 17 of frames containing nodes, or SP Switch2 switches mounted in slots 2 through 16 of frames designated as multiple NSB frames. Multiple NSBs are used in systems that require a large number of switch connections for SP-attached servers or clustered enterprise server configurations. ISBs are switches mounted in the switch frame. ISBs are used in large systems, where more than four switch boards exist, to connect many processor frames together. SP-attached servers never contain a node switch board; therefore, never include non-SP frames when determining your topology files.
The topology file naming convention is as follows:
expected.top.NSBnumnsb.ISBnumisb.type
where:
For example, expected.top.2nsb.0isb.0 is a file for a two-frame, two-switch system with no ISB switches.
The exception to this naming convention is the topology file for the SP Switch-8 configuration, which is expected.top.1nsb_8.0isb.1.
See the Etopology command in PSSP: Command and Technical Reference for additional information on topology file names.
The switch topology file must be stored in the SDR. The switch initialization code uses the topology file stored in the SDR when starting the switch (Estart). When the switch topology file is selected for your system's switch configuration, it must be annotated with Eannotator, then stored in the SDR with Etopology. The switch topology file stored in the SDR can be overridden by having an expected.top file in /etc/SP on the primary node. Estart always checks for an expected.top file in /etc/SP before using the one stored in the SDR. The expected.top file is used when debugging or servicing the switch.
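For example, when you finish debugging or servicing the switch, remove the override on the primary node so that Estart once again uses the topology stored in the SDR:

    # On the primary node: remove the local override used for debugging
    rm /etc/SP/expected.top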
Notes:
Annotate a switch topology file before storing it in the SDR. Refer to the following table for instructions.
| If using: | Do this: |
|---|---|
| Perspectives | |
| SMIT | |
| Eannotator | Use Eannotator to update the switch topology file's connection labels with their correct physical locations. Use the -O yes flag to store the switch topology file in the SDR. If -p is not specified, the default behavior is to perform this action on all planes. Using Eannotator makes the switch hardware easier to debug because the switch diagnostics information is based on physical locations. For example, to annotate a two-switch or maximum 32-node system, enter: Eannotator -F /etc/SP/expected.top.2nsb.0isb.0 -f /etc/SP/expected.top.annotated -O yes |
If you entered Eannotator -O yes or yes on the Topology File Annotator menu in Step 62.3: Annotating a switch topology file, skip this step.
| If using: | Do this: |
|---|---|
| Perspectives | |
| SMIT | |
| Etopology | Use Etopology to store the switch topology file in the SDR and make sure that it has been annotated. If -p is not specified, the default behavior is to perform this action on all planes. For example, to store the annotated topology file expected.top.annotated in the current directory, enter: Etopology expected.top.annotated |
Frame 1, node 1 is the default oncoming primary node for the switch.
In addition to the primary node, the switch has a primary backup node. The primary backup node passively listens for activity from the primary node. When the primary backup node detects that it has not been contacted by the primary node for a specified amount of time, it assumes the role of the primary node. This takeover involves nondisruptively reinitializing the switch fabric, selecting another primary backup, and updating the SDR. The default backup is the last node in the frame, not the last node slot. For partitions, the default primary is the first node and the default backup is the last node in the partition. You must override this selection if the node slot is not operational. Use SMIT or the Eprimary command to verify these nodes or to change the primary to another node.
If alternatives exist, the oncoming primary and backup nodes should not both be assigned to partitions on a single pSeries 690 server.
| If using: | Do this: |
|---|---|
| Perspectives | |
| SMIT | |
| Eprimary | Enter: Eprimary [new_primary_node] [-backup new_primary_backup_node_number]. If -p is not specified, the default behavior is to perform this action on all planes. |
The Eprimary command, without any parameters, returns the node number of the current primary node, the primary backup node, the oncoming primary node, and the oncoming primary backup node.
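For example, to designate node 5 as the oncoming primary and node 16 as the oncoming primary backup (the node numbers are illustrative), and then confirm the result:

    Eprimary 5 -backup 16
    # With no parameters, Eprimary reports the current and oncoming
    # primary and primary backup nodes
    Eprimary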
Use SMIT or the Eclock command to initialize the switch's clock source. The SMIT and Eclock interfaces require that you know the number of Node Switch Boards (NSBs) and Intermediate Switch Boards (ISBs) in your RS/6000 SP system.
Select the Eclock topology file from the control workstation's /etc/SP subdirectory, based on these numbers. For example, if your RS/6000 SP system has six node switch boards and four intermediate switch boards, you would select /etc/SP/Eclock.top.6nsb.4isb.0 as an Eclock topology file.
See PSSP: Command and Technical Reference for the Eclock topology file names.
| If using: | Do this: |
|---|---|
| Perspectives | |
| SMIT | |
| Eclock | Use the Eclock command to set the switch's clock source for all switches. For example, if your RS/6000 SP system has six node switch boards and four intermediate switch boards, select /etc/SP/Eclock.top.6nsb.4isb.0 as an Eclock topology file. Enter: Eclock -f /etc/SP/Eclock.top.6nsb.4isb.0 This command sets the proper clock source settings on all switches within a 96-way (6 nsb, 4 isb) RS/6000 SP system. To verify the switch configuration information, enter: splstdata -s |
This step is optional. The PSSP installation code sets up a default system partition configuration to produce an initial, single system partition that includes all nodes in the system. This system partition is created automatically. If you do not want to divide your system into partitions, continue with Step 66: Configure the control workstation as the boot/install server.
If you want to partition your system, you can select an alternate configuration from a predefined set of system partitions to implement before booting the nodes or you can use the System Partitioning Aid to generate and save a new layout. Follow the procedure described in the "Managing system partitions" chapter in PSSP: Administration Guide and refer to information in the "The System Partitioning Aid" section of the "Planning SP system partitions" chapter in RS/6000 SP: Planning, Volume 2, Control Workstation and Software Environment. You do not have to partition your system now as part of this installation. You can partition it later.
For information on how to set a security setting in an established system partition, see Chapter 5, Reconfiguring security.
This step uses the information entered in the previous steps to set up the control workstation and optional boot/install servers on nodes. It configures the control workstation as a boot/install server and configures the following options (when selected in your site environment):
You can perform this step more than once. If you encounter any errors, see PSSP: Diagnosis Guide for further explanation. After you correct your errors, you can start the task again.
In previous releases of PSSP, most of the installation function that configures boot/install servers and clients was performed by a single program, setup_server, which you could run by issuing the setup_server command. This is still the suggested way to configure the control workstation. For more experienced system administrators, IBM also provides a set of Perl scripts that perform the same configuration and let you see how each stage of setup_server progresses, which is useful for diagnosis. For more information, refer to Appendix D, Boot/install server configuration commands.
| If using: | Do this: |
|---|---|
| Perspectives | |
| SMIT | |
| setup_server | Enter: setup_server with no parameters. The first time setup_server runs, depending upon your configuration, it can take a significant amount of time to configure the control workstation as a NIM master. |
This step directs you to run a verification test that checks for correct installation of the System Management tools on the control workstation.
| If using: | Do this: |
|---|---|
| Perspectives | |
| SMIT | |
| sysman | Enter: SYSMAN_test |
After the tests are run, the system creates a log in /var/adm/SPlogs called SYSMAN_test.log.
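For example, a quick way to scan the log for reported failures (the exact message text varies, so treat this only as a first pass):

    grep -i fail /var/adm/SPlogs/SYSMAN_test.log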
See PSSP: Diagnosis Guide for information about what this test does and what to do if the verification fails.