IBM Books

Managing Shared Disks


Migrating shared disk components

Follow these procedures to prepare a node for migrating either the PSSP shared disk component software or the IBM Recoverable Virtual Shared Disk licensed program to PSSP 3.4. Perform these procedures from the control workstation for each node that has or uses virtual shared disks.

Note:
In the commands that follow, replace node_name with the hostname of the node you are currently migrating, or with any other identifier the command accepts.
Note:
Although it is a separate file set, Recoverable Virtual Shared Disk is no longer a separate licensed program (LP); it is now an integral part of the Virtual Shared Disk package.

Preparing to migrate

Before migrating the nodes, update the control workstation in the following order:

  1. Stop the rvsd subsystem by issuing:

    /usr/lpp/csd/bin/ha.vsd stop

  2. Install the rvsd filesets.
  3. Use the rvsdrestrict command to set the lowest level of Recoverable Virtual Shared Disk that is currently running on the nodes.
  4. Restart the rvsd subsystem by issuing:

    /usr/lpp/csd/bin/ha.vsd start
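The four control workstation steps above can be sketched as a dry run. This is only an illustration: run() echoes each command instead of executing it, and the installp source directory and the RVSD2.1 level are assumptions, not values from this procedure.

```shell
# Dry-run stand-in: echoes each command instead of executing it.
run() { echo "$@"; }

plan=$(
  run /usr/lpp/csd/bin/ha.vsd stop                      # 1. stop the rvsd subsystem
  run installp -agXd /spdata/sys1/install/pssplpp vsd   # 2. install the rvsd filesets (source directory assumed)
  run /usr/lpp/csd/bin/rvsdrestrict -s RVSD2.1          # 3. set the lowest running level (value assumed)
  run /usr/lpp/csd/bin/ha.vsd start                     # 4. restart the rvsd subsystem
)
echo "$plan"
```

Replacing the body of run() with actual execution turns the sketch into the real sequence once the commands have been reviewed.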

To prepare for migration, do the following:

  1. If you already use the IBM Recoverable Virtual Shared Disk function (the rvsd subsystem) and you plan to migrate a subset of nodes while keeping others running, you might need to change the quorum that is maintained by the rvsd subsystem. If quorum is not met, the rvsd subsystem will stop all the virtual shared disks. The ha.vsd command controls the rvsd subsystem.

    To determine the current quorum and the number of active nodes, issue the ha.vsd query command. For example:

    /usr/lpp/csd/bin/ha.vsd query

    The output is similar to the following:

    Subsystem    Group      PID     Status
    rvsd          rvsd     2794     active
    rvsd(vsd): quorum= 7, active=1, state=idle, isolation=member,
              NoNodes=12, lastProtocol=nodes_joining,
              adapter_recovery=on, adapter_status=up.
    

    In this example, quorum is 7 and the number of nodes is 12. You will be stopping the rvsd subsystem on the nodes you plan to migrate. If the number of nodes you intend to keep running is less than the quorum, set quorum to 1 with the following command:

    /usr/lpp/csd/bin/ha.vsd quorum 1

    Then repeat the ha.vsd query command to confirm the change.
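    Using the sample query output above, the quorum and node counts can be pulled out with standard sed. This is only a parsing sketch over captured text, not a PSSP tool:

```shell
# Sample rvsd status lines as printed by `ha.vsd query` (from the example above).
query_out='rvsd(vsd): quorum= 7, active=1, state=idle, isolation=member,
          NoNodes=12, lastProtocol=nodes_joining,
          adapter_recovery=on, adapter_status=up.'

quorum=$(echo "$query_out" | sed -n 's/.*quorum= *\([0-9][0-9]*\).*/\1/p')
nodes=$(echo "$query_out"  | sed -n 's/.*NoNodes=\([0-9][0-9]*\).*/\1/p')
echo "quorum=$quorum nodes=$nodes"   # prints: quorum=7 nodes=12
```

A check like this could drive the decision to lower quorum before stopping the rvsd subsystem on the nodes being migrated.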

  2. Shut down any application that is running on the node and uses virtual shared disks.
  3. Shut down the virtual shared disk connection manager subsystem, which is controlled by the hc.vsd command, on the node by issuing:

    dsh -w node_name /usr/lpp/csd/bin/hc.vsd stop

    Check the connection manager status by issuing:

    dsh -w node_name /usr/lpp/csd/bin/hc.vsd query

    Continue if all show inoperative.

  4. Shut down the rvsd subsystem on the node by issuing:

    dsh -w node_name /usr/lpp/csd/bin/ha.vsd stop

    Check the status by issuing:

    dsh -w node_name /usr/lpp/csd/bin/ha.vsd query

    Continue if all show inoperative.

  5. If you are migrating the virtual shared disk software, you must unconfigure all existing virtual shared disks and hashed shared disks. Issue the commands:

    dsh -w node_name /usr/lpp/csd/bin/ucfghsd -a

    dsh -w node_name /usr/lpp/csd/bin/ucfgvsd -a
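Steps 2 through 5 can be sketched per node as a dry run. NODES is a hypothetical node list, and run() echoes each command rather than executing it:

```shell
run() { echo "$@"; }   # dry-run stand-in; swap in real execution once reviewed
NODES="node5 node6"    # hypothetical nodes being migrated

plan=$(
  for n in $NODES; do
    run dsh -w "$n" /usr/lpp/csd/bin/hc.vsd stop    # stop the connection manager
    run dsh -w "$n" /usr/lpp/csd/bin/ha.vsd stop    # stop the rvsd subsystem
    run dsh -w "$n" /usr/lpp/csd/bin/ucfghsd -a     # unconfigure hashed shared disks
    run dsh -w "$n" /usr/lpp/csd/bin/ucfgvsd -a     # unconfigure virtual shared disks
  done
)
echo "$plan"
```

In practice, confirm inoperative status with hc.vsd query and ha.vsd query between the stop commands and the unconfigure commands, as the steps above describe.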

Performing the migration install

Install the new PSSP file sets following the procedures in PSSP: Installation and Migration Guide. Install the file sets for the shared disk components that you want, which are listed in Installation file sets of this book, following the additional guidance in Installing shared disk components. Then continue with the rest of the steps in this section.

You can have mixed levels of PSSP and any earlier supported level of the IBM Recoverable Virtual Shared Disk licensed program in the same system partition with PSSP 3.4, but the control workstation must have the 3.4 level of the IBM Recoverable Virtual Shared Disk software. To set the level at which the rvsd subsystem is to run, do the following:

  1. To determine the current setting, issue the following command on the control workstation:

    /usr/lpp/csd/bin/rvsdrestrict -l

  2. To determine which level of the rvsd subsystem is installed on each node, issue the following AIX command on the control workstation:

    dsh -a lslpp -l "*.rvsd*"

  3. To set the level at which you want the rvsd subsystem to run, use the rvsdrestrict command on a node that has PSSP 3.4 installed.

    Set the level to the lowest level of the IBM Recoverable Virtual Shared Disk software that you have installed in the system partition. Choose a value from Table 1.

    Table 1. Levels for the rvsdrestrict Command

    IBM Recoverable Virtual Shared Disk Level   Value for rvsdrestrict Command
    3.4                                         RVSD3.4
    3.2                                         RVSD3.2
    3.1                                         RVSD3.1
    2.1.1                                       RVSD2.1

    For example, suppose some nodes are running IBM Recoverable Virtual Shared Disk 2.1.1 and you just installed IBM Recoverable Virtual Shared Disk 3.4 on other nodes, but you want them all to run in a coexistence environment. In that case, set the IBM Recoverable Virtual Shared Disk 3.4 functioning level to RVSD2.1 by issuing:

    /usr/lpp/csd/bin/rvsdrestrict -s RVSD2.1
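The mapping in Table 1 can be expressed as a small shell helper. level_for is a hypothetical function written for illustration, not a PSSP command, and the version arguments below are examples of the level strings lslpp might report:

```shell
# Map an installed IBM Recoverable Virtual Shared Disk level (as reported
# by lslpp) to the corresponding rvsdrestrict value from Table 1.
level_for() {
  case "$1" in
    3.4*)   echo RVSD3.4 ;;
    3.2*)   echo RVSD3.2 ;;
    3.1*)   echo RVSD3.1 ;;
    2.1.1*) echo RVSD2.1 ;;
    *)      echo "unsupported level: $1" >&2; return 1 ;;
  esac
}

level_for 2.1.1.0   # prints RVSD2.1
level_for 3.4.0.0   # prints RVSD3.4
```

Feeding it the lowest level found across all nodes yields the value to pass to rvsdrestrict -s.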

  4. The rvsdrestrict command does not dynamically change the rvsd subsystem run level across the SP. An instance of the rvsd subsystem only reacts to the setting after it is restarted. To override the level of an active rvsd subsystem, do the following on each node:
    1. Stop the rvsd subsystem.
    2. Run the rvsdrestrict command.
    3. Restart the rvsd subsystem.
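The three-substep override can be sketched per node the same way. NODE and the RVSD2.1 level are illustrative assumptions, and run() echoes rather than executes:

```shell
run() { echo "$@"; }   # dry-run stand-in
NODE=node5             # hypothetical node name

plan=$(
  run dsh -w "$NODE" /usr/lpp/csd/bin/ha.vsd stop              # 1. stop the rvsd subsystem
  run dsh -w "$NODE" /usr/lpp/csd/bin/rvsdrestrict -s RVSD2.1  # 2. set the run level (value assumed)
  run dsh -w "$NODE" /usr/lpp/csd/bin/ha.vsd start             # 3. restart the rvsd subsystem
)
echo "$plan"
```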

If a node in the same system partition has a lower level of the rvsd subsystem than was set by the rvsdrestrict command, the rvsd subsystem will not start on that node.

Completing migration after the install

Perform these steps to bring your virtual shared disks back online.

  1. It is usually a good idea to reboot the node, but it is not a requirement.
  2. If your virtual shared disk node is configured to use the switch, bring the node back onto the switch by issuing the following command:

    /usr/lpp/ssp/bin/Eunfence node_name

  3. If you did not reboot the node, restart the virtual shared disks on the node by issuing:

    dsh -w node_name /usr/lpp/csd/bin/ha_vsd reset

  4. If you performed step 1 in Preparing to migrate, and you have finished migrating the subset of nodes, reset quorum by issuing:

    /usr/lpp/csd/bin/ha.vsd reset_quorum

    Issue the ha.vsd query command to confirm that quorum has been reset.
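The completion steps above, sketched as a dry run for a switch-attached node that was not rebooted; node5 is a hypothetical node name, and run() echoes each command rather than executing it:

```shell
run() { echo "$@"; }   # dry-run stand-in
NODE=node5             # hypothetical node name

plan=$(
  run Eunfence "$NODE"                              # bring the node back onto the switch
  run dsh -w "$NODE" /usr/lpp/csd/bin/ha_vsd reset  # restart virtual shared disks (node not rebooted)
  run /usr/lpp/csd/bin/ha.vsd reset_quorum          # reset quorum, if it was lowered earlier
)
echo "$plan"
```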
