IBM Books

Administration Guide


Changing an IP address or host name

Changing an IP address or host name on your SP system causes changes to the control workstation and SP nodes. Be certain that you understand, have planned, and have documented all the IP address and host name changes you are about to make. It is particularly important that you understand which SP nodes are boot servers in your SP configuration. This procedure does not dictate precise instructions for performing every task. It lists the tasks you need or might need to perform and gives some examples of how you might do it. However, it is up to you to evaluate, based on your knowledge of your SP system, which tasks must be performed, when, and precisely how to perform them.

Note:
If running with a secure remote command method enabled, changing the host name or IP address might require a regeneration of public keys and known_hosts files, depending on your secure remote command configuration.
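For example, a stale entry for a renamed host can be pruned from a working copy of a known_hosts file before the new identity is learned. This is a minimal sketch, not a command from this guide: the host name, the key text, and the file path are placeholders.

```shell
# Sketch: remove an old host's entry from a copy of a known_hosts file
# so the host's new identity can be recorded cleanly on next contact.
# The host name, key text, and file path below are placeholders.
OLD=k22n06
KH=/tmp/known_hosts.copy
printf '%s ssh-rsa PLACEHOLDERKEY\nothernode ssh-rsa PLACEHOLDERKEY\n' "$OLD" > "$KH"
grep -v "^$OLD " "$KH" > "$KH.pruned"    # drop only the old host's line
cat "$KH.pruned"
```

After inspecting the pruned copy, it can replace the real file; repeat for each user's known_hosts as your configuration requires.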

This procedure at the highest level includes the following:

  1. Create a mksysb backup file of the control workstation, a backup file of each node, and also back up critical file systems that might be on the control workstation and on the nodes.
  2. If any part of the SP system is a member of a DCE cell, do the following:
    1. See the information on handling network reconfigurations in the book IBM DCE 3.1 for AIX: Administration Guide-Core Components.
    2. See the information on changing the IP address of a DCE server and on hostname change in the book IBM DCE 3.1 for AIX: Problem Determination Guide.
    3. You must have DCE cell administrator authority to run the config_spsec, setupdce, and rm_spsec commands. They are shown in this procedure as being run from the SP control workstation. Using different option flags, you can run them remotely from any appropriately configured workstation. If you do not intend to run them from the SP control workstation, see the book PSSP: Command and Technical Reference for the options available and note your changes in the steps of this procedure where they occur.
  3. Develop your own customized procedure by evaluating, based on your knowledge of your system, which tasks to perform for DCE or other software on your system and how to perform them.
  4. Evaluate the rest of this procedure considering where your additional tasks fit in and make notes to perform them plus only those tasks listed here that apply to your system configuration. The rest of this procedure is detailed in the sections that follow and include:
    1. Performing tasks on each node.
    2. Performing tasks on the control workstation.
    3. Performing tasks in HACWS configurations.
    4. Verifying that each node acknowledges the changes.
  5. Follow your customized procedure, which includes all the tasks that apply to your system, carefully performing the steps in proper sequence and on the nodes and control workstation respectively as instructed.

Performing tasks on each node

Perform the following tasks on all PSSP nodes before making updates to the control workstation.

On each PSSP node, perform each of the following tasks if they apply to your SP system. You can perform many of the tasks by using the SP Perspectives GUI. You might prefer to automate some tasks by implementing user shell scripts and by using the SP distributed shell (dsh) services from the control workstation to the target nodes. The important thing is that you do perform each task on each node:

  1. Stop all applications that are dependent on IBM VSD or RVSD subsystems.

    Two such applications are GPFS and ORACLE. For example, to stop GPFS use the command:

    /bin/stopsrc -s mmfs
    
    
  2. Unconfigure virtual shared disks.

    Determine if there is a virtual shared disk on the node, regardless of whether it is recoverable, concurrent, or hashed. You can visually tell which nodes have any by using the Virtual Shared Disk Perspective graphical user interface. If there are any vsd nodes, you can stop the RVSD subsystem, and unconfigure all the virtual shared disks on all the nodes. Another way is to use the command line interface on one node at a time. You can issue the /usr/lpp/csd/bin/vsdatalst -n command. If you receive a message indicating "not found" or this SP node is not listed in the results, go to the next task. If this SP node is listed, stop the RVSD subsystem if it is running, then suspend, stop, and unconfigure the virtual shared disks for this SP node using the commands:

    /usr/lpp/csd/bin/ha.vsd stop      /* run if RVSD subsystem is active */
    /usr/lpp/csd/bin/suspendvsd -a
    /usr/lpp/csd/bin/stopvsd -a
    /usr/lpp/csd/bin/ucfgvsd -a
    
  3. Stop any other application you might have running that depends on RSCT services.

    Shut down the applications that work with RSCT services. The RSCT services include Topology Services, Group Services, Event Management, and Problem Management.

  4. Stop and remove the RSCT services

    Execute the syspar_ctrl command to stop and remove these resources.

    /usr/lpp/ssp/bin/syspar_ctrl -c
    
  5. Update the NIM files and configuration

    If available, update the /etc/niminfo file to provide the new host name and IP address changes for the NIM client, NIM master, and NIM file references. If the SP node is currently a NIM master (boot/install server), remove the NIM ODM objects and NIM master configuration files from the SP node by issuing the delnimmast command:

    /usr/lpp/ssp/bin/delnimmast -l node_number
    
  6. Disable HACMP on nodes

    Use instructions from the HACMP publications to disable the HACMP configuration. The nodes will not be able to communicate during this activity. After nodes are backed up and running again, reconfigure or re-enable the HACMP configuration.

  7. Update DCE client information

    Use the SMIT DCE client panels and dialog boxes to update references to DCE clients and DCE servers, IP addresses, and host names.

  8. Update PSSP files for updates on each PSSP node

    Update the following files to reflect the new host name and IP address for the SP node and servers:

    1. Update the /etc/SDR_dest_info file using the new control workstation and SDR IP addresses and host names.
    2. Update the /etc/ssp/cw_name, /etc/ssp/server_hostname, and /etc/ssp/server_name files to reference the new boot server host name and IP address.
    3. Update the /etc/ssp/reliable_hostname file to reference the new SP node client host name and IP address.
    4. Make sure that the SP_NAME environment variable is updated or set to blank (export SP_NAME= ).
    5. For Kerberos V4, issue /usr/lpp/ssp/kerberos/bin/kdestroy to remove any active Kerberos ticket-granting-tickets. You might need to update the /etc/krb.conf file if the authentication server is changed. IBM suggests renaming the /etc/krb-srvtab file on the SP node.
    6. Update the files controlling authorization for root remote command access. Depending on your security configuration, you might have one or more of the following files: /.k5login, /.klogin, /.rhosts.
  9. Update the SP Ethernet admin LAN interface

    To make sure that all IP addresses and host names are resolvable and that the SP node can properly communicate with the control workstation during reboot activity, do the following on each node:

    1. If the host name of the node is also the name of the SP Ethernet admin LAN interface and that host name or IP address has changed, do the following:
      1. To change the host name and the IP address, you can use SMIT mktcpip or run the mktcpip command on each SP node. For example, to change the host name of node k22n06, which has en0 as the SP Ethernet admin LAN interface, to the new IP address 129.40.88.70 and new host name k88n06, use the command:
        /usr/sbin/mktcpip -i en0 -h'k88n06.ppd.pok.ibm.com' -a'129.40.88.70'
        
      2. After running the mktcpip process, the node loses the current network connection to the control workstation and the other SP nodes. Any further updates to the SP node need to be performed using the s1term command on the tty0 console. You might be able to kill the telnet or rlogin process from the control workstation or the home machine to regain the telnet window.
    2. If the host name of the node is not also the name of the SP Ethernet admin LAN interface, to change the SP Ethernet admin LAN IP address without affecting the host name of the node, you can use the chdev command.

      For example, to change the IP address of the SP Ethernet admin LAN interface en0 to 129.40.88.70, use the command:

      chdev -l en0 -a netaddr=129.40.88.70
      
    3. Update any other attributes for the interface that have changed, such as subnet_mask.

That completes the updates required on nodes. Do not reboot the nodes until you are instructed to. Continue with "Performing tasks required on the control workstation" to make the changes that are necessary on the control workstation.
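As noted at the start of this section, many of the per-node tasks can be driven from the control workstation with dsh. The following sketch only prints the commands it would issue (a dry run for review); the dsh path, the node names, and the exact command list are assumptions to adapt to your system.

```shell
# Sketch: generate the dsh invocations for the per-node shutdown tasks.
# Printing first (dry run) lets you review the sequence before running it.
# The dsh path, node names, and command list are placeholders.
DSH=/usr/lpp/ssp/bin/dsh           # assumed dsh location
NODES="k22n01,k22n02"              # hypothetical node names

for cmd in \
    "/bin/stopsrc -s mmfs" \
    "/usr/lpp/csd/bin/ha.vsd stop" \
    "/usr/lpp/csd/bin/suspendvsd -a" \
    "/usr/lpp/csd/bin/stopvsd -a" \
    "/usr/lpp/csd/bin/ucfgvsd -a" \
    "/usr/lpp/ssp/bin/syspar_ctrl -c"
do
    echo "$DSH -w $NODES $cmd"     # remove 'echo' to actually execute
done
```

Remember that dsh reaches the nodes over the network, so any command that removes a node's network identity must be completed before the corresponding mktcpip or chdev change.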

Performing tasks required on the control workstation

The following tasks need to be performed by the admin (root) user on the control workstation when changing the SP Ethernet IP address or host name:

  1. Be certain you have a backup of the current /spdata file system and a mksysb backup before making any IP address and host name changes to the control workstation.
  2. To save the current SDR attributes, run the command /usr/lpp/ssp/bin/SDRArchive. This command saves SDR information for all system partitions.
  3. If you are currently in an HACWS configuration, properly disable the backup control workstation and unconfigure HACWS before changing any IP address and host names for the control workstation. See Performing tasks in HACWS configurations for more information.
  4. If you use an SP switch, quiesce the switch in each SP system partition at this time.
  5. Stop and remove all active resources that work with the RSCT services. You can run the command:
    /usr/lpp/ssp/bin/syspar_ctrl -G -c
     
    
    Usage Note

    Tasks 6 through 17 are required when there are IP address and host name changes made to the control workstation. If there are only changes being made to the SP nodes, go to step 18.

  6. If you use DCE authentication for SP trusted services, remove SP DCE server key files, principals, and DCE ACLs from the DCE database for the control workstation. Unconfigure the local DCE client configuration information on the control workstation. If you are changing IP address or host name of the DCE server, reconfigure the SP security authentication configuration of DCE from the beginning. Run the commands:
    rm_spsec -t local
    rm_spsec -t admin cws_dce_hostname
    /bin/unconfig.dce -config_type local all
    

    Complete the DCE admin unconfiguration from a system with access to the DCE cell. Unconfigure all adapters for the changed control workstation. You need cell administrator authority for this task. Run the command:

    /bin/unconfig.dce -config_type admin -dce_hostname cws_dce_hostname \
    -host_id adapter_host_name all
    
  7. Stop PSSP daemons and remove the current source master objects (sdrd and hardmon services) on the control workstation. Use the stopsrc command to stop PSSP resources. Remove the SDR source master object for each defined system partition (SP_NAME).
    /bin/stopsrc -g sdr
    /bin/stopsrc -s hardmon
    /bin/stopsrc -s supfilesrv
    /usr/lpp/ssp/bin/sdr -spname SP_NAME rmsrc
    
  8. Using mktcpip or the chdev command, specify IP address or host name changes required for the control workstation. This includes changes being made for any affected adapter interfaces. It is also important to make appropriate updates to the route tables, netmasks, and gateway servers.
  9. If multiple system partitions exist, update the /etc/rc.net file to reference the new alias address used for each system partition. Issue /etc/rc.net or execute ifconfig command to configure the new alias IP addresses. You can issue the netstat -ni command to validate the new alias addresses. The old alias addresses used with system partitions will be removed during the next reboot of the control workstation.
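    A partition alias can be added with an ifconfig alias command along these lines. This is a dry-run sketch: the interface name, alias address, and netmask are placeholders, and the echo only prints the command for review.

```shell
# Sketch: build the AIX ifconfig command that adds a system partition
# alias address to the admin interface. All values are placeholders.
IF=en0
ALIAS=129.40.89.1
MASK=255.255.255.0
CMD="/usr/sbin/ifconfig $IF alias $ALIAS netmask $MASK"
echo "$CMD"                        # review the command, then run it without the echo
```

The same alias must also appear in /etc/rc.net so it survives a reboot of the control workstation.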
  10. Manually update the /etc/SDR_dest_info and /spdata/sys1/spmon/hmacls files to the new IP address and host name for the SDR and hardmon interfaces.

    Manually update the SDR system partition map file /spdata/sys1/sdr/system/classes/Syspar_map to reflect the new IP address and their SP_NAME values with each system partition.

    Move the system partition directories found at location /spdata/sys1/sdr/partitions from the old IP address to the new IP address for each system partition being modified.

    /bin/mv old_IP_addr new_IP_addr
    

    Manually update the SDR system partition file /spdata/sys1/sdr/partitions/new_IP_addr/classes/Syspar to reflect the new IP address and their SP_NAME host name values for each defined system partition.
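    One hedged way to script these file edits is to substitute the new address into a working copy and inspect the result before copying it back. In this sketch the old and new addresses and the sample map line are placeholders, not real Syspar_map contents.

```shell
# Sketch: substitute the new IP address into a working copy of a
# partition map file. The addresses and the sample line are placeholders;
# check the .new file, then copy it back over the real file yourself.
OLD=129.40.88.69
NEW=129.40.88.70
MAP=/tmp/Syspar_map.copy
printf 'k22sp1 %s rest-of-record\n' "$OLD" > "$MAP"   # fabricated sample line
sed "s/$OLD/$NEW/g" "$MAP" > "$MAP.new"               # dots in $OLD match literally here
cat "$MAP.new"
```

The same pattern applies to /etc/SDR_dest_info, hmacls, and the per-partition Syspar files mentioned above.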

  11. You need to update the SP security configuration if you use DCE or Kerberos V4 authentication for SP trusted services. Do the following:
    1. For Kerberos V4

      Perform this step only for host name and domain changes for the control workstation. It is not required for changes made to control workstation IP addresses or to updates for SP nodes.

      Issue the setup_authent script to create authentication services for the new host names being used. See the step "Initialize RS/6000 SP Authentication Services" in PSSP: Installation and Migration Guide. The setup_authent script will get an SDR error attempting to set the nodes to customize. You will set the nodes to customize in a later step.

      Manually check that the authentication files /etc/krb.conf and /etc/krb.realms reference the proper host names and domain. You can issue /usr/lpp/ssp/kerberos/bin/ksrvutil list to make sure that the rcmd and hardmon services reference the new host names in the /etc/krb-srvtab file. You might also need to recreate Kerberos principals for any users that were previously defined in the Kerberos database. You can use the lskp, mkkp, and add_principal commands to list, make, and add Kerberos principals.

    2. For DCE

      Configure the DCE client information using the updated IP addresses and host names. Make sure you have proper network communication to the DCE server. Both an admin and a local DCE configuration are required. You need root user and DCE cell_admin authority to add the SP security services for the control workstation into the DCE database. You can do the following:

      1. On a system with access to the DCE cell, to admin configure all the adapters on your control workstation run the following command:
        /bin/config.dce -config_type admin -lan_profile lan_profile_id \
        -dce_hostname new_dce_hostname -host_id adapter_host_name sec_cl cds_cl
         
        
      2. To complete the configuration of the DCE clients on the control workstation, run the command:
        /bin/config.dce -config_type local -cell_name DCE_cell_name \
        -dce_hostname new_dce_hostname -sec_master sec_master_hostname \
        -cds_server cds_server_hostname -autostart yes sec_cl cds_cl rpc
         
        
      3. Run the commands:
        /usr/lpp/ssp/bin/config_spsec -v -c
        /usr/lpp/ssp/bin/create_keyfiles -v -c
        
  12. Create the new source master resources for each SDR object, and then start the SDR and hardmon daemons.
    sdr -spname SP_NAME mksrc new_IP_addr (for each Syspar)
    startsrc -g sdr
    startsrc -s hardmon
    
  13. DCE principals and key files have already been created for the default partition. You must now create principals and key files for any other partitions that are using DCE. You need DCE cell administrator authority to run the config_spsec command. Being logged in as root with default credentials is sufficient to run the create_keyfiles command. Run the following commands:
    config_spsec -v -p partition_name
    create_keyfiles -v -p partition_name
    
  14. After the SDR daemon is properly activated on the SP system, manually issue SDRChangeAttrValues for the control workstation.
  15. Remove the current Network Installation Manager (NIM) ODM database and configuration files on the control workstation.

    Issue the command:

    /usr/lpp/ssp/bin/delnimmast -l 0
    

    Correct entries in the /etc/niminfo file that reference the old IP address and host names.

    Correct entries in the /etc/exports file that reference the old IP address and host names. Then stop and start the NFS subsystem.

  16. Using SMIT or the spsitenv command, specify any host name changes that might be referenced for NTP, printing, and user management. See the step "Enter Site Environment Information" in PSSP: Installation and Migration Guide (Optional).
  17. Reboot the control workstation now. This establishes a clean system to reflect the IP address and host name changes. You should verify that all PSSP daemons are activated from /etc/inittab and the system resource master.
    Usage Note

    The remaining steps for the control workstation involve updating the SDR and SP system files for the SP node objects. Remember, some of the steps might be optional depending on your SP configuration changes.

  18. If you have extension nodes, like an SP Switch Router, you might need to reconfigure its host name and IP address. You can use CMI or the endefnode and endefadapter commands. See Chapter 18, Managing extension nodes for instructions.
  19. If you use DCE authentication for SP trusted services, remove SP DCE server key files, principals, and DCE ACLs from the DCE database for each node. Unconfigure the local DCE client configuration information for each node. Delete all adapters on each node. Reissue the unconfig.dce command for each node and each adapter on each node. DCE cell administrator authority is required for this task. Use the commands:
    /usr/lpp/ssp/bin/rm_spsec -t admin old_dcehostname
    /bin/unconfig.dce -config_type admin -dce_hostname old_dcehostname \
    -host_id adapter_host_name all
    
  20. Update configuration files on the control workstation

    Various AIX and SP files might need to be updated to reflect IP address or host name changes. Look through the following files for required updates.

    1. Update any files that are involved with host name resolution. The files are /etc/hosts, /etc/resolv.conf (DNS) and /var/yp/* (NIS).
    2. Update the /etc/filesystems file for your SP configuration.
    3. Make sure your /tftpboot/script.cust file and /tftpboot/firstboot.cust file are updated to reflect IP address and host name changes. Instead of hard coding host names, you can reference the $SERVER and $CWS variables.
    4. Update any DCE client and server files if supporting a DCE configuration.
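    A quick way to catch files that still reference the old identity is to grep for the old host name and IP address before moving on. In this sketch the old values, the sample file, and its contents are placeholders; substitute your real file list (for example /etc/hosts and the /tftpboot customization scripts).

```shell
# Sketch: scan configuration files for lingering references to the old
# host name or IP address. OLDHOST, OLDIP, and the file list are placeholders.
OLDHOST=k22n06
OLDIP=129.40.88.69
FILES="/tmp/hosts.copy"            # e.g. /etc/hosts /tftpboot/script.cust ...
printf '%s %s\n' "$OLDIP" "$OLDHOST" > /tmp/hosts.copy   # fabricated sample file
for f in $FILES; do
    grep -n -e "$OLDHOST" -e "$OLDIP" "$f" && echo "stale entries in $f"
done
```

Any hit indicates a file that still needs the update described in this step.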
  21. Update the SDR node objects by using commands or the SMIT-based SP Configuration Management Interface (CMI). You can perform the following tasks for each system partition by exporting the SP_NAME variable (for example, export SP_NAME=SP_NAME).
    1. Using CMI or the spadaptrs command, specify the new SP Ethernet IP address or host name changes required for the SP nodes. See the step "Enter Required Node Information" in PSSP: Installation and Migration Guide.
    2. Using CMI or the spadaptrs command, reset the switch css0 adapter, the css1 adapter, and other adapters that need to reference the new IP address or host names being changed. See the step "Configure Additional Adapters for Nodes" in PSSP: Installation and Migration Guide.
    3. Using CMI or the sphostnam command, reset the initial host name that you want to use in your system. See the step "Configure Initial Host Names for Nodes" in PSSP: Installation and Migration Guide (Optional).
    4. Using CMI or the spchvgobj and spbootins commands, reset all boot/install servers to reference the new SP Ethernet IP address and host names. It is important that you set your SP node boot/install servers into the proper configuration for your SP system. If you have system partitions, you should have already designated the proper PSSP boot server nodes.
      spchvgobj -r rootvg -n boot_node -l node_list
      

      You need to set the bootp response to customize for all the SP nodes.

      spbootins -r customize -l node_list
      

      You can issue the splstdata -G -b command to verify the correct boot information for the SP nodes.

      The spbootins command runs the setup_server command which creates all the NIM-based files and resources required for installation. It also creates the authentication rcmd principals for the new SP node host names.

      You might want to validate that the following files have the proper IP addresses and host names defined.

      • /etc/ntp.conf
      • /tftpboot/host.config_info
      • /tftpboot/host.install_info
      • /tftpboot/host-new-srvtab
    5. If you are using DCE authentication and if you have changed the reliable_hostnames of the node, you might want to change the dce_hostnames of the node to match. For each node you want to change, run the SDRChangeAttrValues command to set the dce_hostname to null, then run the create_dcehostname command to recreate the dce_hostnames based on the new reliable_hostnames. For example, run the commands:
      SDRChangeAttrValues Node node_number==xx dcehostname=""
      SDRChangeAttrValues Node node_number==yy dcehostname=""
      /usr/lpp/ssp/bin/create_dcehostname
      
  22. Update the SP security configuration for the SP nodes.
    1. You need DCE cell administrator authority to run the setupdce and config_spsec commands and you must be root to run the create_keyfiles command. When using DCE authentication for SP trusted services, run the following commands:
      /usr/lpp/ssp/bin/setupdce -v
      /usr/lpp/ssp/bin/config_spsec -v
      /usr/lpp/ssp/bin/create_keyfiles -v
      
    2. For all SP security configurations, create the root files for remote command processing on the control workstation. The possible files are /.k5login for Kerberos V5, /.klogin for Kerberos V4, and /.rhosts for standard AIX. You can run the command:
      /usr/lpp/ssp/bin/updauthfiles
      

      The updauthfiles command adds the new hostnames to /.k5login, /.klogin, or /.rhosts but it does not delete obsolete hostnames. Manually edit these files to remove entries for obsolete hostnames or to change any user added entries.
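      The manual cleanup can be done on a working copy of the authorization file. This sketch uses placeholder host names and fabricated contents; inspect the result before copying it back over the real /.rhosts (or /.klogin, /.k5login).

```shell
# Sketch: remove an obsolete host name from a copy of /.rhosts.
# The host names and the sample contents are placeholders; review the
# .new file before copying it back over the real authorization file.
OLD=k22n06.ppd.pok.ibm.com
RH=/tmp/rhosts.copy
printf '%s root\nk88n06.ppd.pok.ibm.com root\n' "$OLD" > "$RH"
grep -v "^$OLD " "$RH" > "$RH.new"       # keep everything except the obsolete host
cat "$RH.new"
```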

    3. Make certain the security options are properly set. Do the following:
      1. To check if the current security settings for each of the SP system partitions are properly set, you can use the splstdata -p command.
      2. Do not attempt to change your security setting during this procedure unless you must fix them because of this procedure. If any changes are necessary, see Managing the security configuration.
  23. If your SP system supports system partitions, reissue the system partitioning steps found in Chapter 16, Managing system partitions. The system partitioning spapply_config command re-creates the proper PSSP daemons and resynchronizes the SDR objects to reflect any IP address and host name changes.

    If you do not execute the system partitioning step, you will then need to create the source master objects and start the daemons for the RSCT services using the syspar_ctrl -G -A command.

    /usr/lpp/ssp/bin/syspar_ctrl -G -A
    
  24. For each PSSP boot server node, you should have already unconfigured NIM during PSSP node update activities. During the node customization, setup_server creates the proper install files for the nodes they are to customize. Since the SP nodes are in customize mode, most configuration files are updated to reflect the IP address and host name changes.

    Attention: Some files will not be updated on the nodes during the customization. You can rcp these files from the control workstation to the SP nodes by including them in the /tftpboot/script.cust or /tftpboot/firstboot.cust file. These can include /etc/resolv.conf, .rhosts, and other customer-owned SP files.

  25. If you have a switch on your SP system, reinitialize the switch interfaces.
  26. Reboot each PSSP node. You can do this by using Perspectives or by issuing the hmcmds command.

    Follow the proper install sequence by customizing each of the boot server nodes first. After the SP boot/install server node completes the installation setup, you can customize the remaining SP nodes.

    Now verify that the customization was successful on each of the nodes. When customization is complete, you can restart the switch using the Estart command.

Performing tasks in HACWS configurations

The following additional tasks for HACWS configurations relate to steps 6 through 17 in Performing tasks required on the control workstation.

Note:
Consider deferring the HACWS reconfiguration until your SP system has been properly updated and is stable on the new IP addresses and host names. Also, you need to be familiar with the HACWS information in the PSSP: Installation and Migration Guide.

When you are ready to reconfigure HACWS, do the following:

  1. Stop daemons. Stop HACMP on both the primary and backup control workstations (this automatically stops all HACWS related daemons).
  2. Use mktcpip. Manually issue the appropriate ifconfig commands to configure the control workstation service addresses (backup CWS) on the primary control workstation. Make sure the host name and IP address are configured on the backup control workstation. If you are using an alias IP address, make the required changes to the /etc/rc.backup_cw_alias script.

    Manually vary on the external volume group which contains the /spdata file system, and then mount the /spdata file system.

  3. Update rc.net. Make updates to the /spdata/sys1/hacws/rc.syspar_aliases file to reference the new alias address used for system partition.
  4. Update SDR_dest_info. No updates required.
  5. Issue setup_authent. No updates required.
  6. Create the new source master. Update the new SDR source master objects on the backup control workstation as well as the primary control workstation.
    sdr -spname SP_NAME mksrc new_IP_addr (for each Syspar)
    

    You can then start the SDR and hardmon daemons on the primary control workstation.

  7. Update the SDR. Update the Frame object to include the host name for the backup_MACN using SDRChangeAttrValues.
    SDRChangeAttrValues Frame backup_MACN=backup_CWS_hostname
    

    You should also now execute the HACWS installation script install_hacws on the primary control workstation.

    /usr/sbin/hacws/install_hacws -p primary_name -b backup_name -s
    
  8. Remove the current NIM. No updates required.
  9. Update the SDR node objects. No updates required.
  10. Reboot.

    To reconfigure the cluster topology, follow the standard HACMP procedures documented in HACMP: Administration Guide.

    Verify that the HACWS configuration is properly setup by executing the /usr/sbin/hacws/hacws_verify command on the control workstation.

    Also verify that you can properly fail over from the control workstation to the backup control workstation.

Verifying that each node acknowledges the changes

To verify the changes, do the following on each node:

  1. After reboot is complete and the SP nodes are customized, verify that the correct IP address and host name are specified on each node.
  2. Verify that the configuration files on the SP nodes reflect the updated IP addresses and host names.

    If any of the files are incorrect, make the proper updates for the correct IP address or host name.

  3. Verify that the SP security configuration is properly set. Use the commands:
    /usr/lpp/ssp/bin/splstdata -p      (security settings of Syspar)
    /bin/lsauthent                     (local authentication setting)
    /bin/lsauthts                      (local SP trusted service setting)
    
  4. Verify that the NIM resources were built on the PSSP boot server nodes by executing the lsnim command. If there were NIM problems, remove the current NIM database and configuration files.
    /usr/lpp/ssp/bin/delnimmast -l node_number
    

    Then issue setup_server on the PSSP boot server node.

  5. Verify that the RSCT services have been activated on the PSSP nodes. Execute lssrc -a command to list all the active subsystems.
    /bin/lssrc -a | grep rsct_subsystem
    

    If you suspect that any of the expected RSCT services are inoperative or not available on the SP node, it is best to remove and then add the RSCT services using the syspar_ctrl command:

    /usr/lpp/ssp/bin/syspar_ctrl -c
    /usr/lpp/ssp/bin/syspar_ctrl -A
    
  6. Reconfigure and start the IBM Virtual Shared Disk for the SP nodes that support the IBM Virtual Shared Disk configuration. You can do this easily for all nodes by using the IBM Virtual Shared Disk Perspective. If you are implementing a script, you might prefer to issue the following commands on each node:
    /usr/lpp/csd/bin/cfgvsd -a
    /usr/lpp/csd/bin/startvsd -a
    

    You can validate that the IP address and host names are correctly specified by issuing the following command:

    /usr/lpp/csd/bin/vsdatalst -n
    
  7. If there were modifications made to any SP node system files, reboot the PSSP nodes to reflect the IP address and host name changes. When the SP nodes are initialized, your SP system should be activated using the new IP addresses and host names.

