Applications can use the aio_read and aio_write subroutines to perform asynchronous disk I/O. Control returns to the application from the subroutine as soon as the request has been queued. The application can then continue processing while the disk operation is being performed.
Although the application can continue processing, a kernel process (kproc) called a server is in charge of each request from the time it is taken off the queue until it completes. The number of servers therefore limits the number of asynchronous disk I/O operations that can be in progress in the system simultaneously. The number of servers can be set with the chdev command or with SMIT (smit aio, or smit -> Devices -> Asynchronous I/O -> Change/Show Characteristics of Asynchronous I/O -> {MINIMUM|MAXIMUM} number of servers). The minimum number of servers is the number started at system boot; the maximum limits the number that can be started in response to large numbers of simultaneous requests.
The default values are minservers=1 and maxservers=10. For systems that seldom run applications using asynchronous I/O, these defaults are usually adequate. For environments with many disk drives and key applications that use asynchronous I/O, however, they are far too low. The result of a deficiency of servers is that disk I/O seems much slower than it should be: not only do requests spend inordinate lengths of time in the queue, but the low ratio of servers to disk drives also means that the seek-optimization algorithms have too few requests to work with for each drive.
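On AIX, one quick way to gauge whether the configured limits are adequate is to inspect the current attribute settings and count the server kprocs actually running. The commands below are standard AIX administration commands; the exact attribute names shown by lsattr vary by release.

```shell
# Show the current asynchronous I/O settings, including the
# minservers and maxservers attributes:
lsattr -El aio0

# Count the aioserver kprocs currently running. If this number
# stays pinned at maxservers while I/O-bound applications run,
# the limit is probably too low:
ps -k | grep -c aioserver
```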
For environments in which the performance of asynchronous disk I/O is critical and the volume of requests is high, we recommend that:

- maxservers be set to approximately the number of disks being accessed asynchronously, multiplied by 10
- minservers be set to maxservers/2

This could be achieved for a system with 3 asynchronously accessed disks with:
# chdev -l aio0 -a minservers='15' -a maxservers='30'