
Performance Management Guide


Tuning Asynchronous Disk I/O

With synchronous I/O, an application's processing cannot continue until the I/O operation is complete. In contrast, asynchronous I/O operations run in the background and do not block user applications. This improves performance, because I/O operations and application processing can run simultaneously. Many applications, such as databases and file servers, take advantage of the ability to overlap processing and I/O.

Applications can use the aio_read(), aio_write(), or lio_listio() subroutines (or their 64-bit counterparts) to perform asynchronous disk I/O. Control returns to the application from the subroutine as soon as the request has been queued. The application can then continue processing while the disk operation is being performed.

Each asynchronous I/O request has a corresponding control block in the application's address space, which is used to manage the request. The control block contains the control and status information for the request, and it can be reused when the I/O operation is completed.

After issuing an asynchronous I/O request, the user application can determine when and how the I/O operation is completed. This information is provided in any of three ways:

* The application can poll the status of the I/O operation.
* The system can asynchronously notify the application when the I/O operation is done.
* The application can block until the I/O operation is complete.

In AIX Version 4, async I/O on JFS file systems is handled by kernel processes (kprocs). Async I/O on raw logical volume partitions is handled directly by the kernel. Starting with AIX 4.3.2 (and with a PTF for 4.3.1), Virtual Shared Disk (VSD) devices do not use kprocs.

Each I/O is handled by a single kproc, and typically the kproc cannot process any more requests from the queue until that I/O has completed. The default minimum number of servers configured when async I/O is enabled is 1; this is the minservers attribute. There is also a maximum number of async I/O servers that can be created, controlled by the maxservers attribute, which has a default value of 10. The number of servers limits the number of asynchronous disk I/O operations that can be in progress in the system simultaneously. The number of servers can be set with SMIT (smitty->Devices->Asynchronous I/O->Change/Show Characteristics of Asynchronous I/O->{MINIMUM | MAXIMUM} number of servers, or smitty aio) or with the chdev command.
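For example, a minimal sketch of checking the current settings and then raising the server limits with the chdev command (the aio0 device name matches the example later in this section; the values shown are illustrative, not recommendations for any particular workload):

# lsattr -El aio0
# chdev -l aio0 -a minservers='2' -a maxservers='20'

The lsattr command displays the current attribute values of the aio0 device, and chdev changes them.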

In systems that seldom run applications that use asynchronous I/O, the defaults are usually adequate.

If the number of async I/O requests is high, the recommendation is to increase maxservers to approximately the number of simultaneous I/Os that are expected. In most cases, it is better to leave the minservers parameter at the default value, because the AIO kernel extension will generate additional servers if needed.

Note: AIO actions performed against a raw Logical Volume do not use kproc server processes. The settings of maxservers and minservers have no effect in this case.

Look at the CPU utilization of the AIO servers. If the utilization is evenly divided among all of them, they are probably all in use, and you may want to increase their number. To see the AIO servers by name, run the pstat -a command. Run the ps -k command to see the AIO servers listed under the name kproc.
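For example, the following commands list the AIO servers (the grep pattern aios is an assumption about how the servers are named in the pstat output at this level of AIX; adjust it if the names differ on your system):

# pstat -a | grep aios
# ps -k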

For environments in which the performance of asynchronous disk I/O is critical and the volume of requests is high, but you do not know the approximate number of simultaneous I/Os, it is recommended that maxservers be set to at least 10*(number of disks accessed asynchronously).

This could be achieved for a system with three asynchronously accessed disks as follows:

# chdev -l aio0 -a maxservers='30'

In addition, you can set the maximum number of asynchronous I/O REQUESTS outstanding and the server PRIORITY. If you have a system with a high volume of asynchronous I/O applications, it might be appropriate to increase the REQUESTS number and lower the PRIORITY number (in AIX, a lower priority value gives a process more favorable scheduling).
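For example, assuming the REQUESTS value corresponds to the maxreqs attribute of the aio0 device (verify the attribute name on your system with lsattr -El aio0), the outstanding-request limit could be raised as follows; the value is illustrative:

# chdev -l aio0 -a maxreqs='4096'

The server PRIORITY can be changed from the same SMIT panel (smitty aio) used for the MINIMUM and MAXIMUM number of servers.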

