
Installation and Migration Guide


mksysb install of nodes

Before proceeding with the steps in this section, be sure you have selected the correct migration path. This method erases the current rootvg entirely and installs your target AIX level and PSSP 3.4 using either an AIX 4.3 or AIX 5L 5.1 mksysb image for the node.

Step 1: Enter node configuration data

You need to set the appropriate SDR node object attributes for lppsource_name, code_version, bootp_response, next_install_image, and pv_list for each node being migrated. Use the spchvgobj and spbootins commands to update these fields. If you are migrating nodes in more than one system partition, you will need to issue these commands in each system partition. For a complete description of the flags associated with these commands, refer to PSSP: Command and Technical Reference.

Note:
The lppsource, PSSP code version, and install image you select in this step must be available on the control workstation. See Migrating the control workstation to PSSP 3.4.

For example, to migrate nodes 1 and 2 to AIX 4.3.3 and PSSP 3.4 where lppsource was placed in /spdata/sys1/install/aix433/lppsource, issue the following two commands:

spchvgobj -r selected_vg -p PSSP-3.4 -v aix433 -h 00-00-00-0,0 \
          -l 1,2 -i bos.obj.ssp.433

spbootins -s no -r install -l 1,2
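
Similarly, to migrate nodes 1 and 2 to AIX 5L 5.1 and PSSP 3.4, the commands take the same form. The lppsource name (aix510) and mksysb image name (bos.obj.ssp.510) shown here are only illustrative; substitute the names you actually used when setting up the control workstation:

spchvgobj -r selected_vg -p PSSP-3.4 -v aix510 -h 00-00-00-0,0 \
          -l 1,2 -i bos.obj.ssp.510

spbootins -s no -r install -l 1,2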

Note:
In order to reinstall your nodes, the mksysb image and the lppsource that you use must both contain the same version, release, modification, and fix levels of AIX. If you do not have a mksysb image at the same level as your lppsource, you may do one of the following:
  1. Make your own updated mksysb image. In order to do this, you will need to:
    a. Update an existing lppsource to the most recent maintenance level of AIX.
    b. Perform a BOS node upgrade on a single node as described in BOS node upgrade, or follow the steps in Installing updates on a per node basis.
    c. Make a mksysb image of that node as described in Installing updates through reinstallation.
    d. Use the mksysb created in Step 1c along with your updated lppsource to install your remaining nodes.
  2. Contact IBM Level 1 service to obtain an updated mksysb image.

Step 2: Verify installation settings

Make sure that the SDR has the appropriate values for the attributes you set in Step 1 for each of the nodes. Issue the following command to display the values:

splstdata -G -b

Make sure that /tftpboot/script.cust and /tftpboot/firstboot.cust have been properly updated for PSSP 3.4 modifications. See Appendix E, User-supplied node customization scripts for additional information.

Step 3: Run setup_server to configure the changes

Run the setup_server command to properly set up NIM on the control workstation:

setup_server 2>&1 | tee /tmp/setup_server.out

The output will be saved in a log file called setup_server.out.
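
To scan the saved log for problems, you can use a simple grep; the pattern shown is only illustrative, and you should also review the full log for warnings:

grep -i error /tmp/setup_server.out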

If you have a node defined as a boot/install server, you must also run setup_server on that server node. For example, where boot_install_node is the host name of the boot/install server node:

dsh -w boot_install_node "/usr/lpp/ssp/bin/setup_server 2>&1" | \
    tee /tmp/setup_server.boot_install_node.out

Step 4: Refresh RSCT subsystems

The SDR has now been updated to reflect the new nodes that will run PSSP 3.4. You now need to refresh the RSCT subsystems on the control workstation and all nodes to pick up these changes. Run syspar_ctrl on the control workstation to refresh the subsystems on both the control workstation and on the nodes.

syspar_ctrl -r -G

Note:
You can ignore any messages that you receive from the nodes you are migrating at this point because the migration process is not yet complete. Once the process is complete, you should no longer receive error messages.
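
If you want to confirm that the subsystems were refreshed, one way is to list their status with the AIX System Resource Controller. The subsystem name hats (Topology Services) shown here is just one example of the partition-sensitive RSCT subsystems:

lssrc -a | grep hats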

Step 5: Disable nodes from the switch

If you do not have a switch in your SP system, skip this step.

If you want to bring the switch down for all nodes, issue the Equiesce command for each system partition.

Note:
If you are using the Switch Admin daemon for node recovery, stop it by issuing stopsrc -s swtadmd on SP Switch systems or stopsrc -s swtadmd2 on SP Switch2 systems before issuing the Equiesce command.

If you use the Equiesce command, you will need to later restart the switch using the Estart command. Issue the Estart command prior to the step where you "Verify the nodes."

If you are migrating a few nodes, you must disable these nodes from the switch (if appropriate, first reassign the primary node or primary backup node). To determine if one of the nodes you are migrating is a primary or primary backup node, issue the Eprimary command. If you need to reassign the primary or primary backup node, issue the Eprimary command with appropriate options. Then issue the Estart command to make your choices effective. You must then issue the Efence command to disable the nodes you are migrating from the switch.

Efence -G node_number node_number
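
For example, continuing with nodes 1 and 2 from Step 1, you might first display the current primary and primary backup assignments and then fence the two nodes (the node numbers are illustrative):

Eprimary
Efence -G 1 2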

Step 6: Shut down the node

Nodes should be shut down gracefully using the following command:

cshutdown -F -G -N node_number

Step 7: Unconfigure DCE-related information for the node (required for DCE)


Issue the splstdata -p command and examine the security settings for the system partition containing the nodes to be migrated. If auth_install includes DCE, you must remove any DCE-related principals and objects from the DCE registry before issuing the nodecond command.

Note:
You must have cell administrator authority to perform this step.
  1. On the control workstation, use the rm_spsec -t admin node_dce_hostname command for each node being reinstalled (a combined example follows this list).
    Note:
    To run this command remotely off of the SP, you must set the SP_NAME environment variable to point to the SDR you want to access. Refer to the rm_spsec command in PSSP: Command and Technical Reference for a description of the -r (remote) flag.
  2. Do a DCE Admin unconfigure for the node (smit rmdce).
    Note:
    To remove any additional principals related to the node using the SMIT panels, enter the host name of the adapter to be deleted. For example, on the "Admin unconfiguration for another machine" panel in the "Machine's name or TCP/IP address" field, enter the host name for the additional adapters.
  3. For the nodes being removed, verify that all DCE principals have been deleted from the DCE registry. Issue:
    dcecp -c principal catalog -simplename
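
As a combined illustration of steps 1 and 3 above, assuming nodes 1 and 2 are being reinstalled and using hypothetical DCE host names, the commands might look like this:

rm_spsec -t admin node1_dcehostname
rm_spsec -t admin node2_dcehostname
dcecp -c principal catalog -simplename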

You must now create new DCE information for the node by performing the following steps:

  1. Run the setupdce command.

    Notes:

    1. You will be prompted for the cell administrator's password when you issue this command.

    2. To run this command off of the SP, you must set the SP_NAME environment variable on the remote workstation to point to the SDR of the SP system being configured. The value must be a resolvable address. For example:

      export SP_NAME=spcws.abc.com

  2. As an ID with cell administrator authority, run the config_spsec -v command.
    Note:
    To run this command off of the SP, you must set the SP_NAME environment variable on the remote workstation to point to the SDR of the SP system being configured. Refer to the config_spsec command in PSSP: Command and Technical Reference for a description of the -r (remote) flag.
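
Put together, a run from a remote workstation might look like the following sketch; the control workstation host name is the example value from the note above, and the comments only restate the requirements described in the steps:

export SP_NAME=spcws.abc.com
setupdce              # prompts for the cell administrator's password
config_spsec -v       # run as an ID with cell administrator authority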

PSSP 3.1.1 and DCE exception


If at the start of your control workstation migration your system contained PSSP 3.1.1 and DCE with auth_methods set to k5:k4:std, you will need to do one of the following:

  1. Automatically install DCE during the mksysb installation by putting code in your script.cust to do the install. Because the node was previously installed, you should review DCE documentation for additional unconfiguration and reconfiguration steps that will be required. This is the same process you should have developed for doing a mksysb install of a DCE node with PSSP 3.1.
  2. Have PSSP automatically install DCE and configure the DCE clients by doing the following:
    1. Ensure that DCE is at 3.1 or later on the control workstation and that your AIX lppsource contains the DCE file sets.
    2. Use spsetauth -i to set auth_install to include DCE.
      spsetauth -p partition1 -i dce k4
    3. Define DCE host names for the control workstation and for all the nodes:
      create_dcehostname
    4. Update the SDR with DCE Master Security and CDS Server host names:
      setupdce -u -s master_security_server_host -d CDS_primary_server_host
    5. Remove existing self-host principals for the nodes being installed from the DCE database. This command may need to be reissued for each adapter on each node that is being reinstalled to PSSP 3.4.
      /bin/unconfig.dce -config_type admin -dce_hostname old_dcehostname \
      -host_id adapter_host_name all
    6. Add the nodes being installed back into the DCE database. You will need DCE cell administrator authority to run the setupdce command.
      setupdce -v

Step 8: Network boot the node

Notes:

  1. If you have any boot/install servers in your system, you need to migrate them before migrating their clients. You should not netboot more than eight nodes with the same server at a time.

  2. For MCA nodes, the nodecond command remotely processes information from the initial AIX firmware menus. You should not change the language option on these menus. The language must be set to English in order for the nodecond command to run properly.

Network boot each node that you are migrating by using Perspectives or by using the nodecond command.

nodecond -G frame_id slot_id &
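
For example, if nodes 1 and 2 happen to be in frame 1, slots 1 and 2 (illustrative frame and slot values), you would issue:

nodecond -G 1 1 &
nodecond -G 1 2 &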

The node has been properly installed when its LEDs become blank and host_responds is active.

Verify that the bootp_response has been set to disk by issuing the following command:

splstdata -G -b

Step 9: Rejoin the nodes to the switch network

If you disabled all nodes in Step 5: Disable nodes from the switch using the Equiesce command, you must now issue the Estart command in each system partition to rejoin the nodes to the current switch network.

Note:
If you are using the Switch Admin daemon for node recovery, start it by issuing startsrc -s swtadmd on SP Switch systems or startsrc -s swtadmd2 on SP Switch2 systems before issuing the Estart command.

If you disabled only a few nodes using the Efence command, you must now issue the Eunfence command to bring those nodes back to the switch network.
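
For example, to bring nodes 1 and 2 back onto the switch (illustrative node numbers):

Eunfence 1 2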

Step 10: Run verification tests

Verify that the nodes are running properly by issuing the following commands:

SYSMAN_test
CSS_test              * run only if you have a switch
spverify_config       * run only if your system is partitioned
st_verify             * run only if Job Switch Resource Table Services
                        is installed

Verify that host_responds and switch_responds, if you have a switch, are set to yes by issuing the following command:

spmon -d -G

Notes:

  1. If you are migrating nodes in more than one system partition, you need to run CSS_test in each of the system partitions.

  2. At this point, refer back to High-level migration steps to determine your next step in the migration process.

