The Hashed Shared Disk component has a data striping device driver that distributes data across multiple nodes and multiple virtual shared disks, thus reducing I/O bottlenecks. Instead of writing all the data from one application program I/O request onto one virtual shared disk at a specific location, the data striping device driver writes blocks of the data on each of several separate virtual shared disks.
Figure 3 illustrates data striping across two or more virtual shared disks. In this case, a write request to HSD1 is made by an application program running on client node 4. When you create a hashed shared disk, the virtual shared disks that comprise it are created as well. They are then collectively known as a hashed shared disk, though individually they are still virtual shared disks. When you create the hashed shared disk, you specify certain operational parameters, among them the stripe size. The stripe size is the amount of data written to a single virtual shared disk as one unit (one stripe, or block).
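As a rough illustration of the striping idea (this is a sketch only, not the actual device driver; the function name, stripe size, and tuple layout are hypothetical), a single application write can be split into stripe-size blocks and distributed round-robin across the virtual shared disks that make up a hashed shared disk:

```python
# Hypothetical sketch of data striping: split one logical write into
# stripe-size blocks and hand them out round-robin across the virtual
# shared disks (VSDs) that make up a hashed shared disk.
STRIPE_SIZE = 4096  # bytes per stripe; chosen when the HSD is created

def stripe_write(data, num_vsds, offset=0):
    """Return a list of (vsd_index, vsd_offset, block) tuples."""
    pieces = []
    for i in range(0, len(data), STRIPE_SIZE):
        block = data[i:i + STRIPE_SIZE]
        stripe_number = (offset + i) // STRIPE_SIZE
        vsd_index = stripe_number % num_vsds                     # which VSD
        vsd_offset = (stripe_number // num_vsds) * STRIPE_SIZE   # where on it
        pieces.append((vsd_index, vsd_offset, block))
    return pieces
```

For example, an 8292-byte write against two virtual shared disks yields three blocks: the first full stripe on the first disk, the second full stripe on the second disk, and the 100-byte remainder back on the first disk at the next stripe position.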
Figure 3. Hashed Shared Disk Stripes Data Across Multiple Virtual Shared Disks
The principal value of the Hashed Shared Disk component is that it distributes data across physical disks and nodes while remaining transparent to the application program using the virtual shared disks.
Data striping capabilities were introduced in the Logical Volume Manager (LVM) component of AIX. A logical volume can be striped across two or more physical disks. The Hashed Shared Disk component of PSSP can stripe data across two or more virtual shared disks and nodes (a virtual shared disk corresponds to a single logical volume).
The Hashed Shared Disk component can stripe data across multiple nodes, while the LVM striping function is limited to local physical disks. The Hashed Shared Disk component also allows you to use mirroring, which LVM striping does not.
If you do not use mirroring, IBM suggests that you use the LVM function for local striping and the Hashed Shared Disk component of PSSP for global striping. For example, if you have two physical disks per node on a 10-node system, and you want to stripe the data across all 20 physical disks, do the following:

1. On each node, create a logical volume that is striped across the two local physical disks.
2. Define a virtual shared disk on each of those logical volumes.
3. Create a hashed shared disk that spans the 10 virtual shared disks.
You now have one hashed shared disk that spans 10 virtual shared disks and each virtual shared disk correlates to a logical volume that is striped across two physical disks.
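The two-level layout in this example can be sketched as a simple address calculation (a hypothetical illustration only; the stripe sizes are assumed values, and the real mapping is performed by the device drivers): the hashed shared disk stripe picks the node's virtual shared disk, then LVM striping within that node's logical volume picks the local physical disk.

```python
# Hypothetical sketch of the two-level layout above: one hashed shared
# disk striped over 10 virtual shared disks (one per node), each backed
# by a logical volume that LVM stripes across 2 local physical disks.
# Stripe sizes are illustrative assumptions.
HSD_STRIPE = 32768   # hashed shared disk stripe size (bytes)
LVM_STRIPE = 4096    # LVM stripe size within each logical volume (bytes)
NODES = 10
DISKS_PER_NODE = 2

def locate(offset):
    """Map a byte offset on the hashed shared disk to (node, local_disk)."""
    hsd_stripe_no = offset // HSD_STRIPE
    node = hsd_stripe_no % NODES                  # HSD round-robin over VSDs
    lv_offset = (hsd_stripe_no // NODES) * HSD_STRIPE + offset % HSD_STRIPE
    local_disk = (lv_offset // LVM_STRIPE) % DISKS_PER_NODE  # LVM round-robin
    return node, local_disk
```

With these assumed sizes, consecutive hashed shared disk stripes rotate across the 10 nodes, and within each node consecutive LVM stripes alternate between the two local disks, so large sequential I/O touches all 20 physical disks.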
If you do use mirroring, you can use the Hashed Shared Disk component to stripe a local mirrored disk.
Refer to the AIX Performance and Tuning Guide for more information on how to tune striped logical volumes.
Whether to use a hashed shared disk depends on your configuration of virtual shared disks and the I/O characteristics of your application programs.
If the I/O load to a specific virtual shared disk is too heavy, you can use a hashed shared disk to distribute the load to other virtual shared disks and nodes.
When you plan your application, you should be able to determine whether the bandwidth required for particular virtual shared disks is higher than the system can support. If so, consider using hashed shared disks.
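This planning step is simple arithmetic. As a back-of-the-envelope sketch (the function and all numbers are illustrative assumptions, not measured values), you can estimate how many virtual shared disks a hashed shared disk should span from the required bandwidth and the bandwidth one virtual shared disk's server can deliver:

```python
import math

# Illustrative planning sketch: how many virtual shared disks should a
# hashed shared disk span to meet an application's bandwidth requirement?
def vsds_needed(required_mb_per_s, per_vsd_mb_per_s):
    """Round up the ratio of required to per-VSD bandwidth."""
    return max(1, math.ceil(required_mb_per_s / per_vsd_mb_per_s))
```

For instance, if the application needs 90 MB/s and each virtual shared disk can sustain roughly 20 MB/s, striping across five virtual shared disks would be a reasonable starting point.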