Managing Shared Disks


IBM Virtual Shared Disk component overview

IBM Virtual Shared Disk is a subsystem that lets application programs that are running on different nodes of a system partition access a raw logical volume as if it were local at each of the nodes. See Figure 1 for an illustration of a simplified virtual shared disk implementation. Each virtual shared disk corresponds to a logical volume that is actually local at one of the nodes, which is called the server node. The IBM Virtual Shared Disk subsystem routes I/O requests from the other nodes, called client nodes, to the server node and returns the results to the client nodes.

The I/O routing is done by the IBM Virtual Shared Disk device driver that interacts with the AIX Logical Volume Manager (LVM). The device driver is loaded as a kernel extension on each node. Thus, raw logical volumes can be made globally accessible.

The application program interface to a virtual shared disk is the raw device (or device special file). This means that application programs must issue requests to a virtual shared disk in the block size specified by the LVM (currently, requests must be multiples of 512 bytes on 512-byte block boundaries).
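
The following sketch shows, in C, what such a request against the raw device might look like. It is an illustration only and assumes a hypothetical device name, /dev/rvsd00; substitute the device special file of your own virtual shared disk.

    /* Read one 512-byte block from a virtual shared disk through its
     * raw device. The device name /dev/rvsd00 is a hypothetical
     * example. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define BLOCK_SIZE 512   /* requests must be multiples of 512 bytes
                                on 512-byte block boundaries */

    int main(void)
    {
        char *buf;
        int fd;
        ssize_t n;

        /* Raw I/O generally requires an aligned buffer, so align it
           to the block size. */
        if (posix_memalign((void **)&buf, BLOCK_SIZE, BLOCK_SIZE) != 0) {
            fprintf(stderr, "posix_memalign failed\n");
            return 1;
        }

        fd = open("/dev/rvsd00", O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Seek to a 512-byte block boundary (block 8 here) and read
           exactly one block. */
        if (lseek(fd, (off_t)8 * BLOCK_SIZE, SEEK_SET) == (off_t)-1) {
            perror("lseek");
            return 1;
        }
        n = read(fd, buf, BLOCK_SIZE);
        if (n != BLOCK_SIZE)
            perror("read");
        else
            printf("read %zd bytes from block 8\n", n);

        close(fd);
        free(buf);
        return 0;
    }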

You can find more information on logical volumes in AIX System Management Guide: Operating Systems and Devices. See the list included in Who should use this book for order numbers.

Figure 1. An IBM Virtual Shared Disk IP Network Implementation


IBM concurrent virtual shared disks

The subsystem also provides concurrent disk access, which lets you use multiple server nodes to satisfy disk requests by taking advantage of the concurrent disk access environment supplied by AIX. To use this environment, the IBM Virtual Shared Disk subsystem relies on the Concurrent Logical Volume Manager (CLVM), which synchronizes the LVM and manages concurrency for system administration services.

Concurrent disk access extends the physical connectivity of multi-tailed concurrent disks beyond their physical boundaries. You can configure a volume group with a list of virtual shared disk servers, and nodes that are not locally attached to the disks have their I/O distributed across those servers. For example, in the following illustration, nodes 1 through 4 are not attached to any disk. To access disk1, they can use node 5 or node 6; disk2 is accessible only through node 7 (or through node 6 if node 7 fails). A conceptual sketch of this distribution follows the figure.

Figure 2. An IBM Concurrent Virtual Shared Disk IP Network Implementation

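The distribution of I/O across servers happens inside the IBM Virtual Shared Disk device driver; the following C sketch is only a conceptual illustration of the idea. It assumes the configuration in Figure 2 (disk1 served by nodes 5 and 6) and a simple round-robin policy, which is an illustrative assumption rather than the subsystem's actual algorithm.

    /* Conceptual sketch only: how a client node with no local disk
     * attachment might spread block requests across the server nodes
     * configured for a volume group. The server list and round-robin
     * policy are illustrative assumptions, not the subsystem's actual
     * algorithm. */
    #include <stdio.h>

    #define NUM_SERVERS 2

    /* Server nodes configured for disk1 in Figure 2. */
    static const int disk1_servers[NUM_SERVERS] = { 5, 6 };

    /* Pick a server node for a given block, alternating between the
       configured servers. */
    static int server_for_block(long block)
    {
        return disk1_servers[block % NUM_SERVERS];
    }

    int main(void)
    {
        long block;

        for (block = 0; block < 6; block++)
            printf("block %ld -> server node %d\n",
                   block, server_for_block(block));
        return 0;
    }

Application programs never make this choice themselves; they issue ordinary raw I/O requests, and the device driver selects a server transparently.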

When you are using IBM Concurrent Virtual Shared Disk, recovery from node failure is much faster: the failed node is marked as unavailable to all other nodes and its access to the physical disk is fenced. This is faster than the recovery procedure followed in the twin-tailed environment. An additional benefit of multiple virtual shared disk servers is that the disk I/O load can be spread across them.

See Chapter 3, Understanding your Managing Shared Disks process, for more information on concurrent virtual shared disks.

IBM Virtual Shared Disk restrictions

