A device driver must call the d_slave service to set up a DMA slave transfer or call the d_master service to set up a DMA master transfer. The device driver then sets up the device to perform the DMA transfer. The device transfers data when it is available and interrupts the processor upon completion of the DMA transfer. The device driver then calls the d_complete service to clean up after the DMA transfer. This process is typically repeated each time a DMA transfer is to occur.
In this system, data can be located in the processor cache, system memory, or a DMA buffer. The DMA services have been carefully written to ensure that data is moved between these three locations correctly. The d_master and d_slave services flush the data from the processor cache to system memory. They then hide the pages, preventing data from being placed back into the processor cache. The hardware moves the data between system memory, the DMA buffers, and the device. The d_complete service flushes data from the DMA buffers to system memory and unhides the pages.
A count is maintained of the number of times a page is hidden for DMA. A page is actually hidden only when the count goes from 0 to 1 and is actually unhidden only when the count goes from 1 to 0. Users of the services must therefore pair each call to the d_master service with exactly one call to the d_complete service; otherwise, a page can be unhidden prematurely and data can be lost. This count is intended to support operations such as logical volume mirrored writes.
All pages containing user data must be hidden while DMA operations are being performed on them. This is required to ensure that data is not lost through inconsistent copies existing in more than one of these locations.
DMA operations can be performed on kernel data without hiding the pages containing the data, but only with great care. The DMA_WRITE_ONLY flag, when specified to the d_master service, causes it not to flush the processor cache or hide the pages. The same flag, when specified to the d_complete service, causes it not to unhide the pages. This flag requires that the caller has already flushed the processor cache using the vm_cflush service. Additionally, the caller must allocate complete pages for the data buffer and split them into transfers so that each transferred page is aligned at the start of a DMA buffer boundary and no other data shares a DMA buffer with the data to be transferred. The d_align and d_roundup services help ensure that the buffer allocation is correct.
The d_align service (provided in libsys.a) returns the alignment value required for starting a buffer on a processor cache line boundary. The d_roundup service (also provided in libsys.a) rounds the desired DMA buffer length up to a whole number of cache lines. Together, these services allow DMA buffers to be aligned on a cache line boundary and allocated in whole multiples of the cache line size, so that a buffer is never split across processor cache lines. This reduces the possibility of consistency problems due to DMA and also minimizes the number of cache lines that must be flushed or invalidated. For example, these services can be used to provide alignment as follows:
    align = d_align();
    buffer_length = d_roundup(required_length);
    buf_ptr = xmalloc(buffer_length, align, kernel_heap);
Note: If the kernel heap is used for DMA buffers, the buffer must be pinned with the pin kernel service before being used for DMA. Alternatively, the memory can be requested from the pinned heap.
Data must be carefully accessed when a DMA operation is in progress. The d_move service provides a means of accessing the data while a DMA transfer is being performed on it. This service accesses the data through the same system hardware as that used to perform the DMA transfer. The d_move service, therefore, cannot cause the data to become inconsistent. This service can also access data hidden from normal processor accesses.
See "DMA Management Services" for a list of these services.