
Installation and Migration Guide


Installing program updates

This section provides instructions for applying software updates (PTFs) to the SP system.

Before you begin

You need to be aware of the following items before you apply PTFs to the SP system.

PSSP READMEs

Make sure you read the README document that comes with any updates to PSSP. This information can also be viewed by running installp -i on the installation images. This document may convey important information that you need to know prior to installing the PTF. It may also contain instructions for activating particular fixes. This and additional information can also be printed to the screen during PTF installation.
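As the text notes, the README information can be listed straight from the installation images with installp -i. A dry-run sketch (the image directory name is an assumption; `run` prints each command instead of executing it, since installp exists only on AIX):

```shell
# Dry-run sketch: list README/supplemental information shipped in PSSP
# update images before applying them. Directory name is illustrative.
run() { echo "+ $*"; }            # print instead of executing
run installp -i -d /spdata/sys1/install/pssplpp/PSSP-3.2 all
```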

Working with nodes that are down

Nodes that were down when service was applied must be updated when they become available. Simply follow the same procedure you used when updating the rest of the nodes.

Updating the css file set

When reinstalling or updating the ssp.css file set of PSSP, you must reboot all affected nodes to load changes that affect the kernel extensions.

Choosing an approach

Note:
Performing this task requires that your identity be authenticated as an authorized user of the system management commands and the Perspectives interface shown in the following steps.

For DCE, you should dce_login to the SP administrative principal created in Step 22.3: Create SP administrative principals.

For Kerberos V4, you should use the principal created in Step 21: Initialize RS/6000 SP Kerberos V4 (optional).

There are two approaches to installing program updates:

Per-node
Apply the maintenance on each node individually. For the per-node approach, you can apply service on all the nodes by using the dsh command, as described in "Installing updates on a per node basis."

Reinstall
Install the maintenance on a single node, build an installation image on that node, and then propagate the changes to all other nodes by reinstalling that image on each node.

The reinstall approach requires that you install the programs and updates on your maintenance test node using SMIT or installp to generate an installation image, and then place the new image on the control workstation. SMIT or the spbootins command enables you to specify all the nodes to be reinstalled with the new image. All that remains is to boot the nodes, which causes them to be reinstalled.

Regardless of which approach you choose, do the following:

  1. Apply the desired maintenance to a single test node. This allows you to gauge how long the service takes for a single node and enables you to verify the success of the maintenance before applying the service to the rest of the nodes.
  2. Generate a mksysb image of your updated system. In the event of a required reinstall, you can use the mksysb image instead of reapplying the maintenance to any nodes that need to be reinstalled.

Which approach is right for you?

How do you know which approach to take? Consider these factors:

If you have fewer than 16 nodes in your system or the maintenance is minor, it may be faster to apply the maintenance directly on each node rather than to reinstall. On the other hand, if you have a large amount of maintenance to do and you have no user data to preserve in the root volume group, it may be faster to install the maintenance once, generate a new installation image, and reinstall all your nodes.

Preparing the control workstation

Regardless of your approach, you must install the maintenance on the control workstation first.

If you are using an HACWS configuration, before beginning, make sure that HACMP is running on both control workstations and that the primary control workstation is the active control workstation. Then perform Steps 1 through 4 on both the primary and backup control workstations and continue with Step 5. Once you start this procedure, you should not perform a control workstation failover at any time before Step 7.

  1. Create a backup mksysb image of the control workstation.
  2. Copy the PTFs into an appropriate directory on the control workstations, for example:
    /spdata/sys1/install/pssplpp/code_version/ptf2
    Note:
    The AIX update CD may contain PTFs for several levels of AIX. You need to copy only the PTFs pertaining to the levels of AIX installed on your system.
  3. Run the inutoc . command in that directory to build the new .toc file.
  4. To apply PTFs, you need write access to the SDR. Obtain DCE or Kerberos V4 credentials for the SP administrative principal as described in Step 24: Obtain credentials.
  5. Apply the PTFs to the control workstation.
  6. You may need to reboot the control workstation. Check the README in the PTF to see if this is required. If you have an HACWS and rebooting the control workstations is required, perform the preceding steps on both control workstations, then follow the instructions for rebooting in your HACMP documentation.

    If you have an HACWS configuration and rebooting was not required, you may want to recycle the control workstation applications to ensure that any fixes that affect HACMP/HACWS are enabled. Note that control workstation services will become unavailable during this procedure. To recycle the applications, first stop them on the backup control workstation using the following command. When the command completes, repeat it on the primary control workstation.

    /usr/sbin/hacws/spcw_apps -d
    

    Now restart the applications, first on the primary control workstation and then on the backup control workstation (note the order is the opposite of stopping them):

    /usr/sbin/hacws/spcw_apps -a
    
  7. Verify that /spdata/sys1/install/pssplpp is exported to all nodes. (In an HACWS environment, it needs only to be exported on the active control workstation.)
  8. Verify the correct operation of all SP and AIX control workstation functions.
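The control workstation steps above can be sketched as a command sequence. The directory, level, and source-media paths are illustrative, and `run` prints each command rather than executing it, since these are AIX/PSSP commands:

```shell
# Dry-run sketch of control workstation PTF preparation (Steps 1-5).
run() { echo "+ $*"; }                               # print, don't execute
PTFDIR=/spdata/sys1/install/pssplpp/PSSP-3.2/ptf2    # hypothetical (Step 2)
run mksysb -i /dev/rmt0                              # Step 1: CWS backup (device assumed)
run cp /cdrom/usr/sys/inst.images/* "$PTFDIR"        # Step 2: copy PTFs (source path assumed)
run inutoc "$PTFDIR"                                 # Step 3: rebuild .toc
run installp -agXd "$PTFDIR" all                     # Step 5: apply (do not commit yet)
```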

Installing updates on a per node basis

This section outlines a procedure you can follow to install PTFs on your system.

Task A: Apply PTFs on one SP node and verify correct operation

Some PTFs require that you apply them to all the nodes in your system. To try such PTFs first, set up a test AIX 4.3.3 (SP Switch only) or AIX 5L 5.1 partition. See the PSSP: Administration Guide for instructions.

  1. Select one node for PTF installation.
  2. Create a backup mksysb image of the test node.
  3. NFS mount /spdata/sys1/install/pssplpp from the control workstation onto that node:
    /usr/sbin/mount cw:/spdata/sys1/install/pssplpp /mnt
    
  4. Apply PTFs to that node.
  5. Reboot the node and verify correct operation.
  6. Verify correct installation and operation of the node.
  7. Create a mksysb image of this node and store it in an appropriate directory on the control workstation, for example:
    /spdata/sys1/install/images/bos.obj.ssp.43.ptf2
    

    Notes:

    1. You may want to install one node using the mksysb image you just saved to make sure the image is correct.

    2. If DCE is running on the host that the mksysb image is made from, you must first turn autostart off for the DCE daemons. To do this, issue:
      config.dce -autostart no
      
      then create the mksysb image.
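The Task A steps can be sketched as follows. The "cw" host name, the PSSP-3.2 directory, and the image file name are illustrative, and `run` prints each command rather than executing it:

```shell
# Dry-run sketch of Task A on a single test node.
run() { echo "+ $*"; }                                       # print, don't execute
run /usr/sbin/mount cw:/spdata/sys1/install/pssplpp /mnt     # step 3: NFS mount
run installp -agXd /mnt/PSSP-3.2 all                         # step 4: apply PTFs
run shutdown -Fr                                             # step 5: reboot
run mksysb -i /tmp/bos.obj.ssp.43.ptf2                       # step 7: capture image
```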

Task B: Apply PTFs to all nodes

You can follow these instructions to install the PTF code from the directory on the control workstation onto each node. You can also install the PTFs by using the mksysb image you just saved.

  1. Using the dsh command, NFS mount /spdata/sys1/install/pssplpp onto each node. You can exclude the node you used for testing since it is now at the correct level.
    dsh -a /usr/sbin/mount cw:/spdata/sys1/install/pssplpp /mnt
    
  2. Use dsh to run the appropriate installp command to apply PTFs to all nodes:
    dsh -a /usr/sbin/installp -Xagd /mnt/code_version ssp.st
    Note:
    The dsh -f option allows the dsh commands to be fanned out to multiple nodes.
  3. Reboot the nodes and verify their correct operation.
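Combining the steps above with the fan-out note, Task B might look like the following sketch. The fan-out value of 8 is an assumption, and `run` prints each command rather than executing it:

```shell
# Dry-run sketch of Task B across all nodes with dsh fan-out.
run() { echo "+ $*"; }                                 # print, don't execute
run dsh -a -f 8 /usr/sbin/mount cw:/spdata/sys1/install/pssplpp /mnt
run dsh -a -f 8 /usr/sbin/installp -Xagd /mnt/code_version ssp.st
```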

Task C: Commit PTFs on the nodes and the control workstation

Committing the PTF will save file system space, but once the PTF is committed, you can never reject it. If you are not required to commit, you can skip this task.

  1. Using dsh, commit the PTFs onto all nodes.
  2. Commit the PTFs on the control workstation.
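A sketch of Task C follows. The exact installp flags beyond -c (commit) are assumptions; `run` prints each command rather than executing it. Remember that a commit cannot be rejected later:

```shell
# Dry-run sketch of Task C: commit applied PTFs (irreversible).
run() { echo "+ $*"; }                        # print, don't execute
run dsh -a /usr/sbin/installp -cgX all        # nodes: commit all applied updates
run /usr/sbin/installp -cgX all               # control workstation
```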

Task D: Update the state of the supervisor microcode

Refer to Step 34: Update the state of the supervisor microcode for more information.

Task E: Update the SPOT when installing AIX BOS service updates

Perform the following steps on the control workstation and on all of the boot/install servers:

  1. Deallocate the SPOT from all clients using the unallnimres command.
  2. On the control workstation only, copy the install images from the AIX BOS Service Updates to the lppsource directory that corresponds to the appropriate SPOT. For example, the directory could be:
    /spdata/sys1/install/aix433/lppsource
    
  3. For Boot Install Server (BIS) nodes, you must ensure that the BIS host name is in the /.rhosts file on the control workstation.
  4. On the control workstation only, run nim -o check lpp_source to create a new .toc file.
  5. Issue smit nim_res_op
    1. Select the appropriate SPOT.
    2. Select the update_all function.
    3. Press F4 in the "Source of Install Images" field and select the appropriate lppsource.
    4. Press Enter twice to initiate the update.
    5. After the update completes, run setup_server to reallocate the SPOT to the necessary clients.
  6. If you added .rhosts entries in Step 3, you can now delete them.
Note:
In a multiple Boot/Install Server (BIS) environment, the following actions can only be performed on one BIS at a time due to an AIX constraint regarding the inutoc command and the .toc file:
  1. Installing NIM master file sets
  2. Creating the SPOT

For more information, see the "NIM errors in a multiple boot/install server (BIS) environment" section of the "Diagnosing NIM problems" chapter of the PSSP: Diagnosis Guide.
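The Task E flow on the control workstation can be sketched as below. The NIM object names (lppsource_aix433, spot_aix433), the node list, and the media path are illustrative; the `nim -o cust ... update_all` line is the command-line counterpart of the SMIT update_all panel, offered here as an assumption rather than the documented procedure. `run` prints each command rather than executing it:

```shell
# Dry-run sketch of Task E (SPOT update for AIX BOS service).
run() { echo "+ $*"; }                                    # print, don't execute
run unallnimres -l 1-16                                   # step 1: deallocate SPOT
run cp /cdrom/usr/sys/inst.images/* /spdata/sys1/install/aix433/lppsource  # step 2
run nim -o check lppsource_aix433                         # step 4: build new .toc
run nim -o cust -a lpp_source=lppsource_aix433 -a fixes=update_all spot_aix433  # step 5
run setup_server                                          # step 5e: reallocate SPOT
```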

Task F: Create new mksysb images

Create a new backup mksysb image of the control workstation. The mksysb image you created earlier is now the mksysb image for all nodes. Store any earlier mksysb images you created before you installed the PTFs in case you need to restore your system to its previous maintenance level.

Installing updates through reinstallation

Performing this task requires that your identity be authenticated as an authorized user of the system management commands and the Perspectives interface shown in the following steps.

For DCE, you should dce_login to the SP administrative principal created in Step 22.3: Create SP administrative principals.

For Kerberos V4, you should use the principal created in Step 21: Initialize RS/6000 SP Kerberos V4 (optional).

See the "Security features on the SP system" chapter in PSSP: Administration Guide for more information.

If you choose the reinstall approach, you may want to target particular nodes for maintenance images. For instance, if you have groups of nodes with distinct identities, you may want to select one representative node in each group from which to apply maintenance and generate new installation images. The other nodes in the group should then always be reinstalled with the image generated from that node.

Note: For system partitions (SP Switch only):

  1. All nodes using the switch must be at the same level of base AIX.
  2. The communications subsystem software (ssp.css) must be at the same level on each node using the switch.

Build an installation image

Prior to creating a mksysb, you must do the following:

Notes:

  1. Make sure that no files named /etc/niminfo or /etc/niminfo.prev exist. If they do exist, rename them. These files should be saved for possible debugging later on.

  2. Verify host name resolution for the control workstation and for any nodes where the mksysb may be installed, including any entries in the /etc/hosts file.

  3. If you want your machine to be a NIM master, make sure that the image you are building from has not executed the inurid -r command. Check whether this command was run by issuing the following:
    /usr/lib/instl/inurid -q ; echo $?
    

    If the return code is 1, the inurid -r command was executed. IBM suggests that you not execute the inurid -r command on any machine in case you want to use the machine as a boot/install server in the future.
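The check above can be wrapped in a small script. The return code is emulated here because inurid exists only on AIX; on a real node you would capture `rc` from `/usr/lib/instl/inurid -q; echo $?`:

```shell
# Sketch: interpret the inurid query result (return code emulated).
rc=0                                # pretend "inurid -r" was never run
if [ "$rc" -eq 1 ]; then
  echo "inurid -r was run: do not use this image for a boot/install server"
else
  echo "image is usable for a NIM master / boot-install server"
fi
```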

Before building an installation image, install all the LPs and service to the node on which you intend to create your installation image. Careful planning in selecting LPs and required service spares you from repeating this process.

Note:
If DCE is running on the host that the mksysb image is made from, you must first turn autostart off for the DCE daemons. To do this, issue:
config.dce -autostart no

then create the mksysb image.

After you have installed the LPs and service required for your nodes, you are ready to make an installation image. Log in to the node where you want to create an installation image and enter:

smit -C mksysb

Enter the file name of the image that you want to create and press Enter.

Note:
The installation tools require that the name of this image begin with bos.obj. A suggested naming convention for these images is:
bos.obj.level.date

For example:

bos.obj.433.20000428

You can generate a mksysb image to install on your nodes. You can do this on a node after that node has been installed, or you can use a standalone workstation (either the control workstation or a different standalone workstation). However, after you have done Step 21: Initialize RS/6000 SP Kerberos V4 (optional), you cannot use the control workstation for this purpose. Running the setup_authent command on any workstation makes the workstation unsuitable for generating a mksysb image for a node.

The control workstation (or a different standalone workstation) must be at the right level of the AIX RS/6000 operating system and must have all required PTFs applied. One way to ensure this is to install the mksysb image that resides in the spimg installp image on its product tape. You can also install any of the RS/6000 SP software options on that machine. After installing extra LPs or maintenance, you can generate a mksysb image of the system and copy it back to /spdata/sys1/install/images on the control workstation. You can then use that image to install your nodes.

When using a mksysb, we strongly suggest that any AIX corrective service applied to the mksysb should also be placed in the lppsource directory. The Shared Product Object Tree (SPOT) should also be updated. Refer to the procedure documented in Task E: Update the SPOT when installing AIX BOS service updates.

After you install a node, you can install any RS/6000 SP software options, other LPs, or any maintenance that you need. You can then test your node and generate a mksysb image of that node. Before doing this, you may want to remove certain files from the node that you do not want in the image. These include any mksysb images in that node's /spdata/sys1/install/images directory (if the node is a boot/install server itself).

Do not remove /home from the node. If you do so, when you use the mksysb image to install a boot/install server node, you cannot create the netinst user ID that is required for network install after you install the image. After you create the mksysb image, copy it back to /spdata/sys1/install/images on the control workstation and install the rest of your nodes with that image.

Consider the following items prior to creating a mksysb backup:

To create the mksysb image, use the following command to create a /image.data file. It will also expand /tmp if needed:

mksysb -i -X

The following command creates a /image.data file, expands /tmp, and excludes all files listed in /etc/exclude.rootvg.

mksysb -e -i -X
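A sketch of what an exclusion list for mksysb -e might contain. The entries are examples only; on AIX the patterns in /etc/exclude.rootvg are matched against file names written relative to the root as ./path (the file is written to a scratch location here so the sketch does not touch /etc):

```shell
# Example exclusion list; the real file must be /etc/exclude.rootvg
# for "mksysb -e" to find it.
cat > /tmp/exclude.rootvg.example <<'EOF'
^./tmp/
^./spdata/sys1/install/images/
EOF
cat /tmp/exclude.rootvg.example
```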

Unconfigure DCE-related information for the node (required for DCE)

Note:
You must have cell administrator authority to perform this step.

If a node was previously configured for DCE, you must remove any DCE-related principals and objects from the DCE registry before issuing the nodecond command.

  1. On the control workstation, use the rm_spsec -t admin node_dce_hostname command.
    Note:
    To run this command remotely off of the SP, you must set the SP_NAME environment variable to point to the SDR you want to access. Refer to the rm_spsec command in PSSP: Command and Technical Reference for a description of the -r (remote) flag.
  2. Do a DCE Admin unconfigure for the node (smit rmdce).
    Note:
    To remove any additional principals related to the node using the SMIT panels, enter the host name of the adapter to be deleted. For example, on the "Admin unconfiguration for another machine" panel in the "Machine's name or TCP/IP address" field, enter the host name for the additional adapters.
  3. For the nodes being removed, verify that all DCE principals have been deleted from the DCE registry. Issue:
    dcecp -c principal catalog -simplename

You must now create new DCE information for the node by performing the following steps:

  1. Run the setupdce command.

    Notes:

    1. You will be prompted for the cell administrator's password when you issue this command.

    2. To run this command off of the SP, you must set the SP_NAME environment variable on the remote workstation to point to the SDR of the SP system being configured. The value must be a resolvable address. For example:
      export SP_NAME=spcws.abc.com
      
  2. As an ID with cell administrator authority, run the config_spsec -v command.
    Note:
    To run this command off of the SP, you must set the SP_NAME environment variable on the remote workstation to point to the SDR of the SP system being configured. Refer to the config_spsec command in PSSP: Command and Technical Reference for a description of the -r (remote) flag.

Test your image on a single node

After you create your netinstall image, you must test the image. Use ftp in binary mode or rcp to transfer the image to the control workstation. The installation tools require all installation images to reside on the control workstation. Remember that the image must be placed in the /spdata/sys1/install/images directory and that the permissions must allow it to be read by other.
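A sketch of the transfer described above. The node name and image file name are illustrative, and `run` prints each command rather than executing it:

```shell
# Dry-run sketch: copy the image to the control workstation and make
# it readable by "other" so the installation tools can use it.
run() { echo "+ $*"; }                   # print, don't execute
IMAGE=bos.obj.433.20000428               # hypothetical image name
run rcp node01:/tmp/$IMAGE /spdata/sys1/install/images/$IMAGE
run chmod o+r /spdata/sys1/install/images/$IMAGE
```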

To test the image, you must reinstall a node with this new image and run your applications to see if it meets your requirements.

Propagate your installation image

After you create the netinstall image and copy it to the control workstation in /spdata/sys1/install/images, you are ready to reinstall.

On the control workstation, issue the following:

spchvgobj -r selected_vg -i install_image_name \
          -l node_list

spbootins -r install -l node_list

For example, the following commands:

spchvgobj -r selected_vg -i bos.obj.433.20000428 -l 23

spbootins -r install -l 23

would change the SDR information to install the image bos.obj.433.20000428 on node 23 from its boot/install server.

The spbootins command automatically runs setup_server on node 23's boot/install server to update its install files with the new information. Note that setup_server copies the installation image from the control workstation to the appropriate boot/install servers if the boot/install server is not the control workstation. If the image has to be copied to a boot/install server, this may take some time. You can now use Perspectives to reset the node, which causes the netinstall of the test image on the node.

The boot list of a node is set by /etc/rc.sp to boot from hdisk0 only. When the bootp_response of a node is changed from disk to be some other response, such as "install," the node is sent a boot list command to cause it to boot from ent0 before hdisk0. If a node is down when its bootp_response is changed, its boot list is not changed. From the Hardware Perspective, select one or more nodes and then select Actions > Network Boot.

After the netinstall is complete, the node reboots and you can verify the image runs your applications. When you are satisfied that this image meets your requirements, you can use the spbootins and the spchvgobj commands to change the install_image attribute and bootp_response and reinstall the rest of your nodes.

You may need to override the physical partition size (PPSIZE) of the root volume group in the mksysb. See Appendix F, Overriding the PPSIZE in a mksysb image for more information.
