Managing Shared Disks


Chapter 3. Understanding your Managing Shared Disks process

This chapter explains procedural considerations and outlines, at a high level, the tasks to perform after you have installed the software and made it operational. The tasks are listed here and the criteria for choosing among procedures are explained; later chapters expand each task into lower-level steps.

Which procedural choices apply to you depends on whether you are establishing a new virtual shared disk environment or adding to or changing an existing one. Which steps you must perform explicitly, and which are done for you automatically, depends on whether you already have logical volumes and global volume groups, which shared disk components you choose to use, and which interface you choose to use.

The global tasks in the process toward fully operational virtual shared disks are generally the same in all cases. However, you need to consider where you are, where you want to be with respect to the virtual shared disks on your system, and how you want to get there. Within a global task, different base actions or commands might be necessary to complete the steps.

For instance, if yours is a new system with all physical and software components just installed and configured, you can take full advantage of the IBM Virtual Shared Disk graphical user interface actions that work on multiple nodes and perform many steps at one time. On the other hand, you might already have used the Logical Volume Manager of AIX to establish volume groups and logical volumes or you might already have virtual shared disks. You might even have scripts that run the more basic single-node commands. In each case, different steps are necessary depending on what is already done and what has yet to be done.
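
For example, the same end point, a defined virtual shared disk, can be reached through different commands depending on that starting point. The following is a minimal sketch assuming the single-node command interface; the commands are real, but their operands are omitted here because they are site-specific (see Appendix B for the exact syntax).

    # New system: createvsd creates the underlying logical volumes, the
    # global volume group information, and the virtual shared disk
    # definitions in one step.
    createvsd ...

    # Existing logical volumes and global volume groups (for example,
    # established earlier with the AIX Logical Volume Manager): defvsd
    # defines a virtual shared disk over what is already there.
    defvsd ...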

Generally, the tasks are:

  1. Designate each node as a virtual shared disk node

    Do this for every node that is to have or use a virtual shared disk, regardless of whether the virtual shared disks will have data striping or will be recoverable. Server nodes that will concurrently access shared disks must specify a "cluster name". Nodes cannot be defined in more than one cluster, and they must be rebooted after the cluster name is defined so that the IBM Recoverable Virtual Shared Disk subsystem can safely fence the disks when necessary. With the graphical user interface, you can select all the applicable nodes and apply the action Designate as a VSD Node... at one time. If you prefer, you can use the vsdnode command (shown in the command sketch after this list).

  2. Create or define the virtual shared disks or hashed shared disks

    Do this for each node that is to be a virtual shared disk server.

  3. Configure the virtual shared disks or hashed shared disks
  4. Activate the virtual shared disks

    Skip this step if you established recoverable virtual shared disks, because it has already been done for you.

    Otherwise, use the Change VSD state... action from the Nodes pane, which can do this at once for all the nodes you select, or use the preparevsd and startvsd commands, which must be run on each virtual shared disk node for each virtual shared disk (see the command sketch after this list).
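
Putting the tasks together with the single-node commands, a minimal sketch follows. The virtual shared disk name vsd1 is a placeholder, and the vsdnode and createvsd operands are omitted because they depend on your configuration; see Appendix B and the online command descriptions for the exact syntax.

    # Task 1 - designate each node as a virtual shared disk node (run once
    # per node; operands such as the node number, adapter, and cluster name
    # are site-specific and omitted here).
    vsdnode ...

    # Task 2 - create the virtual shared disks on the server nodes
    # (use defvsd instead if the logical volumes and global volume groups
    # already exist).
    createvsd ...

    # Task 3 - configure the virtual shared disk on each node that will
    # serve or use it.
    cfgvsd vsd1

    # Task 4 - activate the virtual shared disk on each node; skip this if
    # the IBM Recoverable Virtual Shared Disk subsystem does it for you.
    preparevsd vsd1
    startvsd vsd1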

Your applications can then begin to write to and read from virtual shared disks or hashed shared disks. To understand what you must do for applications to use them efficiently, see:

If you do use the IBM Recoverable Virtual Shared Disk subsystem, also see:
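
Before your applications depend on the virtual shared disks, a quick check on any node is to list what that node has configured and read a block through the raw device. This is only a sketch: the device path /dev/rvsd1 assumes a virtual shared disk named vsd1 and is a placeholder, and lsvsd output details vary, so consult the command descriptions.

    # List the virtual shared disks configured on this node, including
    # their current state.
    lsvsd -l

    # Read a single block through the raw (character) device of one
    # virtual shared disk to confirm that this node can reach it.
    dd if=/dev/rvsd1 of=/dev/null bs=4096 count=1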

It should now be clear that you need to understand your starting point and your goal, and then plan how to reach that goal. There are many possible scenarios and many actions and commands. Actions are explained in detail in the online help. Interfaces are summarized in Appendix A, Interface cross-reference. The usage of some commands is explained in Appendix B, Single-node command and SMIT interfaces.

