This topic provides recommended actions you should take (or not take) before and during the installation process.
Two situations require consideration: installing the operating system for the first time, and installing a new level of the operating system on an existing system.
Before you begin the installation process, be sure that you have made decisions about the size and location of disk file systems and paging spaces, and that you understand how to communicate those decisions to the operating system.
If you are upgrading to a new level of the operating system, review the current size, placement, and use of the disk file systems and paging spaces before you begin, so that you can carry those decisions forward or improve on them on the new level.
Use the default CPU scheduling parameters, such as the time-slice duration. Unless you have extensive monitoring and tuning experience with the same workload on a nearly identical configuration, leave these parameters unchanged at installation time.
See Chapter 6. Monitoring and Tuning CPU Use for post-installation recommendations.
Do not make any memory-threshold changes until you have had experience with the response of the system to the actual workload.
See Chapter 7. Monitoring and Tuning Memory Use for post-installation recommendations.
The mechanisms for defining and expanding logical volumes attempt to make the best possible default choices. However, satisfactory disk-I/O performance is much more likely if the installer of the system tailors the size and placement of the logical volumes to the expected data storage and workload requirements. Recommendations are as follows:
Whenever possible, place the journaled file system (JFS) log on a physical volume that does not contain heavily used file-system data. This approach separates journaled I/O activity from the high-activity data I/O, increasing the probability of overlap. This technique can have an especially significant effect on NFS server performance, because both data and journal writes must be complete before NFS signals I/O complete for a write operation.
Because the drives with the numerically highest SCSI addresses are given the highest priority when arbitrating for the bus, address assignment can affect response time. In most situations this effect is not noticeable, but large sequential file operations have been known to exclude low-numbered drives from access to the bus. You should probably configure the disk drives holding the most response-time-critical data at the highest addresses on each SCSI bus.
The lsdev -Cs scsi command reports on the current address assignments on each SCSI bus. For the original SCSI adapter, the SCSI address is the first number in the fourth pair of numbers in the output. In the following output example, one disk drive is at SCSI address 4, another at address 5, the 8mm tape drive at address 1, and the CD-ROM drive at address 3.
cd0    Available 10-80-00-3,0 SCSI Multimedia CD-ROM Drive
hdisk0 Available 10-80-00-4,0 16 Bit SCSI Disk Drive
hdisk1 Available 10-80-00-5,0 16 Bit SCSI Disk Drive
rmt0   Available 10-80-00-1,0 2.3 GB 8mm Tape Drive
See Chapter 8. Monitoring and Tuning Disk I/O Use for post-installation recommendations.
The general recommendation is that the sum of the sizes of the paging spaces should be equal to at least twice the size of the real memory of the machine, up to a memory size of 256 MB (512 MB of paging space).
Note: For memories larger than 256 MB, the following is recommended:
    total paging space = 512 MB + (memory size - 256 MB) * 1.25

However, starting with AIX 4.3.2 and Deferred Page Space Allocation, this guideline may tie up more disk space than actually necessary. See Choosing a Page Space Allocation Method.
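As a worked example of the formula above, a machine with 1024 MB of real memory would call for about 512 MB + (1024 MB - 256 MB) * 1.25 = 1472 MB of total paging space under this guideline. The following C sketch is illustrative only; the function name suggested_paging_mb is our own and not part of AIX. It simply encodes the same rule:

/* paging_guideline.c
   Illustrative only: applies the sizing guideline described above.
   The 256 MB breakpoint and the 1.25 factor come from the text; with
   Deferred Page Space Allocation (AIX 4.3.2 and later) less space may
   be needed in practice.
*/
#include <stdio.h>

/* Suggested total paging space, in MB, for a given amount of real memory (MB). */
long suggested_paging_mb(long real_mem_mb)
{
    if (real_mem_mb <= 256)
        return 2 * real_mem_mb;                       /* twice real memory */
    return 512 + (long)((real_mem_mb - 256) * 1.25);  /* 512 MB plus 1.25 times the excess */
}

int main(void)
{
    printf(" 256 MB real memory -> %ld MB paging space\n", suggested_paging_mb(256));
    printf("1024 MB real memory -> %ld MB paging space\n", suggested_paging_mb(1024));
    return 0;
}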
Ideally, there should be several paging spaces of roughly equal size, each on a different physical disk drive. If you decide to create additional paging spaces, create them on physical volumes that are more lightly loaded than the physical volume in rootvg. When allocating paging-space blocks, the VMM allocates four blocks, in turn, from each of the active paging spaces that has space available. While the system is booting, only the primary paging space (hd6) is active. Consequently, all paging-space blocks allocated during boot are on the primary paging space. This means that the primary paging space should be somewhat larger than the secondary paging spaces. The secondary paging spaces should all be of the same size to ensure that this round-robin allocation can remain balanced.
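To see why equal-sized secondary paging spaces matter, the following simplified simulation hands out blocks four at a time, in turn, from each space that still has room. It is our own sketch, not the actual VMM code, and the sizes and demand figures are arbitrary:

/* ps_roundrobin.c
   A simplified model (not the actual VMM code) of round-robin
   paging-space allocation: four blocks are taken, in turn, from each
   active paging space that still has free blocks.
*/
#include <stdio.h>

#define NSPACES 3
#define CHUNK   4                 /* blocks taken from a space per turn */

int main(void)
{
    int size[NSPACES] = { 1000, 4000, 4000 };   /* one small, two large spaces */
    int used[NSPACES] = { 0, 0, 0 };
    int demand = 8000;            /* total blocks to allocate */
    int i = 0;
    int tried, take;

    while (demand > 0) {
        /* skip over paging spaces that are already full */
        tried = 0;
        while (used[i] >= size[i] && tried < NSPACES) {
            i = (i + 1) % NSPACES;
            tried++;
        }
        if (tried == NSPACES)
            break;                /* every paging space is full */

        take = CHUNK;
        if (take > size[i] - used[i])
            take = size[i] - used[i];
        if (take > demand)
            take = demand;
        used[i] += take;
        demand -= take;
        i = (i + 1) % NSPACES;    /* move on to the next space, in turn */
    }

    for (i = 0; i < NSPACES; i++)
        printf("paging space %d: %4d of %4d blocks used\n",
               i, used[i], size[i]);
    return 0;
}

With the sizes shown, the 1000-block space fills first, after which all further allocations fall on the two remaining disks.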
The lsps -a command gives a snapshot of the current utilization level of all the paging spaces on a system. You can also use the psdanger() subroutine to determine how closely paging-space utilization is approaching critical levels. As an example, the following program uses the psdanger() subroutine to provide a warning message when a threshold is exceeded:
/* psmonitor.c
   Monitors the system for paging-space-low conditions. When the condition
   is detected, writes a message to stderr.
     Usage:   psmonitor [Interval [Count]]
     Default: psmonitor 1 1000000
*/
#include <stdio.h>
#include <stdlib.h>     /* atoi(), exit() */
#include <unistd.h>     /* sleep() */
#include <signal.h>     /* SIGKILL, SIGDANGER */

int main(int argc, char **argv)
{
    int interval = 1;        /* seconds */
    int count = 1000000;     /* intervals */
    int current;             /* interval */
    int last;                /* check */
    int kill_offset;         /* returned by psdanger() */
    int danger_offset;       /* returned by psdanger() */

    /* are there any parameters at all? */
    if (argc > 1) {
        if ((interval = atoi(argv[1])) < 1) {
            fprintf(stderr, "Usage: psmonitor [ interval [ count ] ]\n");
            exit(1);
        }
        if (argc > 2) {
            if ((count = atoi(argv[2])) < 1) {
                fprintf(stderr, "Usage: psmonitor [ interval [ count ] ]\n");
                exit(1);
            }
        }
    }

    last = count - 1;
    for (current = 0; current < count; current++) {
        kill_offset = psdanger(SIGKILL);          /* check for out of paging space */
        if (kill_offset < 0) {
            fprintf(stderr,
                "OUT OF PAGING SPACE! %d blocks beyond SIGKILL threshold.\n",
                kill_offset * (-1));
        } else {
            danger_offset = psdanger(SIGDANGER);  /* check for paging space low */
            if (danger_offset < 0) {
                fprintf(stderr,
                    "WARNING: paging space low. %d blocks beyond SIGDANGER threshold.\n",
                    danger_offset * (-1));
                fprintf(stderr,
                    "                           %d blocks below SIGKILL threshold.\n",
                    kill_offset);
            }
        }
        if (current < last)
            sleep(interval);
    }
    return 0;
}
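To experiment with the program, you could compile it with the cc command (for example, cc -o psmonitor psmonitor.c; the exact invocation depends on your compiler setup) and run it with an interval such as 60 to sample paging-space levels once a minute.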
If mirroring is being used and Mirror Write Consistency is on (as it is by default), consider locating the copies in the outer region of the disk, because the Mirror Write Consistency information is always written in Cylinder 0. From a performance standpoint, mirroring is costly, mirroring with Write Verify is costlier still (an extra disk rotation per write), and mirroring with both Write Verify and Mirror Write Consistency is costliest of all (a disk rotation plus a seek to Cylinder 0). From a fiscal standpoint, only the mirroring itself is expensive, because it consumes additional physical disk space; Write Verify and Mirror Write Consistency add processing time rather than hardware. Although the lslv command will usually show Mirror Write Consistency to be on for non-mirrored logical volumes, no actual processing is incurred unless the COPIES value is greater than one. Write Verify, by contrast, defaults to off, because it does have meaning (and cost) for non-mirrored logical volumes.
Prior to AIX 4.3.3, logical volumes could not be mirrored and striped at the same time. Logical volume mirroring and striping combines the data availability of RAID 1 with the performance of RAID 0 entirely through software. Volume groups that contain striped and mirrored logical volumes cannot be imported into AIX 4.3.2 or earlier.
See the summary of communications tuning recommendations in Tuning TCP and UDP Performance and Tuning mbuf Pool Performance.
For correct placement of adapters and various performance guidelines, see PCI Adapter Placement Reference.