Performance Management Guide
Reorganizing Logical Volumes
If you find that a logical volume is sufficiently fragmented to require reorganization,
you can use the reorgvg command (or smitty reorgvg) to reorganize it and make it adhere to
its stated policies. This command reorganizes the placement of physical
partitions within the volume group according to the logical volume characteristics.
If logical volume names are specified with the command, highest priority is
given to the first logical volume in the list. To use this command, the volume
group must be varied on and have free partitions. The relocatable flag of
each logical volume must be set to yes for the reorganization to take place;
otherwise, the logical volume is ignored.
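As a sketch of the steps above (the volume group and logical volume names datavg, lv01, and lv02 are hypothetical), a reorganization session might look like this:

```shell
# Confirm the volume group is varied on and has free partitions
lsvg datavg | grep "FREE PPs"

# Set the relocatable flag to yes for each logical volume to be moved
chlv -r y lv01
chlv -r y lv02

# Reorganize; lv01 is listed first, so it receives the highest priority
reorgvg datavg lv01 lv02
```

Because reorgvg moves physical partitions while the volume group is in use, run it during a period of low I/O activity.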
By knowing the usage pattern of logical volumes, you can make better decisions
governing the policies to set for each volume. Guidelines are:
- Allocate hot LVs to different PVs.
- Spread a hot LV across multiple PVs.
- Place the hottest LVs in the center of PVs, except for LVs that have Mirror
Write Consistency Check turned on.
- Place the coldest LVs on the edges of PVs (except when accessed sequentially).
- Make LVs contiguous.
- Define each LV at the maximum size that you will need.
- Place frequently used logical volumes close together.
- Place sequential files on the edge.
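Several of these guidelines can be applied at creation time through the allocation flags of the mklv command. The names below (datavg, hotlv, coldlv) are illustrative:

```shell
# Create a 10-partition hot LV in the center of the disk (-a c),
# spread across the maximum number of PVs (-e x)
mklv -y hotlv -a c -e x datavg 10

# Create a cold, sequentially accessed LV on the outer edge (-a e)
mklv -y coldlv -a e datavg 4
```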
Recommendations for Best Performance
Configuring logical volumes for maximum performance can reduce their availability.
Decide whether performance or availability is more critical
to your environment.
Use these guidelines when configuring for highest performance with the
SMIT command:
- If the system does mostly reads, then mirroring with the scheduling policy
set to parallel can provide better performance, because read I/Os are
directed to the copies that are least busy. If the system does mostly writes, then mirroring
causes a performance penalty, because there are multiple copies to
write as well as the Mirror Write Consistency record to update. You may also
want to set the allocation policy to Strict to keep each copy on a separate
physical volume.
- Set the write verify policy to No and, if the number of copies is greater
than one, set the Mirror Write Consistency to Off.
- In general, the most frequently accessed logical volumes should be in
the center in order to minimize seek distances; however, there are some exceptions:
- Disks hold more data per track on the edges of the disk. Logical volumes
that are accessed sequentially could be placed on the edge for better
performance.
- Another exception is for logical volumes that have Mirror Write Consistency
Check (MWCC) turned on. Because the MWCC sector is on the edge of the disk,
performance may be improved if the mirrored logical volume is also on the
edge.
- Logical volumes that will be accessed frequently or concurrently should
be placed close together on the disk. Locality of reference is more important
than placing them in the center.
- Put moderately used logical volumes in the middle, and put seldom-used
logical volumes on the edge.
- Setting the Inter-Physical Volume Allocation Policy to maximum ensures
that reads and writes are shared among PVs.
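The performance-oriented settings above can be combined in a single mklv invocation. This is a sketch under the assumption of a two-copy mirror; the names datavg and fastlv are hypothetical:

```shell
# Mirrored LV tuned for reads: two copies (-c 2), parallel scheduling (-d p),
# strict allocation so each copy is on a separate PV (-s y),
# write verify off (-v n), Mirror Write Consistency off (-w n),
# inter-PV allocation policy maximum (-e x)
mklv -y fastlv -c 2 -d p -s y -v n -w n -e x datavg 10
```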
Recommendations for Highest Availability
To configure the system for highest availability (with the SMIT command),
follow these guidelines:
- Use three LP copies (mirroring twice)
- Set write verify to Yes
- Set the inter policy to Minimum (mirroring copies = # of PVs)
- Set the scheduling policy to Sequential
- Set the allocation policy to Strict (no mirroring on the same PV)
- Include at least three physical volumes in a volume group
- Mirror the copies on physical volumes attached to separate buses, adapters,
and power supplies
Having at least three physical volumes allows a quorum to be maintained
in the event that one physical volume becomes unavailable. Using separate buses,
adapters, and power supplies allows the use of copies that are not attached to the failing device.