The LAPI execution model

The goal of LAPI is to provide a thread-safe environment and support an execution model that allows for maximum execution concurrency within the LAPI library.

Using the setup function (LAPI_Init), a user process establishes a LAPI context. Within a LAPI context, the LAPI library is thread-safe, and multiple threads can make LAPI calls within the same context. The different calls can execute concurrently with each other and with the user threads. In practice, however, concurrency among these calls is limited by the locking LAPI requires to maintain the integrity of its internal data structures and by the need to share a single underlying communication channel.
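The following fragment is a minimal sketch of this model: it establishes a LAPI context with LAPI_Init, queries the task layout with LAPI_Qenv, and lets two user threads make LAPI calls against the same context. The calls shown (LAPI_Init, LAPI_Qenv, LAPI_Probe, LAPI_Term) are part of the standard lapi.h interface, but the thread function and overall structure are illustrative only and omit error checking.

    #include <pthread.h>
    #include <string.h>
    #include <stdio.h>
    #include <lapi.h>

    static lapi_handle_t hndl;          /* one LAPI context shared by all threads */

    /* Each thread may call into the same LAPI context; the library's own
     * locking serializes access to its internal data structures.          */
    static void *worker(void *arg)
    {
        int i;
        for (i = 0; i < 100; i++)
            LAPI_Probe(hndl);           /* drive the dispatcher from this thread */
        return NULL;
    }

    int main(void)
    {
        lapi_info_t info;
        pthread_t t0, t1;
        int task_id, num_tasks;

        memset(&info, 0, sizeof(info)); /* default settings */
        LAPI_Init(&hndl, &info);        /* establish the LAPI context */

        LAPI_Qenv(hndl, TASK_ID, &task_id);
        LAPI_Qenv(hndl, NUM_TASKS, &num_tasks);
        printf("task %d of %d\n", task_id, num_tasks);

        /* Two user threads making LAPI calls within the same context. */
        pthread_create(&t0, NULL, worker, NULL);
        pthread_create(&t1, NULL, worker, NULL);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);

        LAPI_Term(hndl);                /* tear down the context */
        return 0;
    }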

As with any multi-threaded application, coherence of user data is the responsibility of the user. Specifically, if two or more LAPI calls from different threads can execute concurrently and if they specify overlapping user buffer areas, then the result is undefined. It is the responsibility of the user to coordinate the required synchronization between threads that operate on overlapping buffers.

The user application thread, as well as the completion handlers, must not hold mutual exclusion resources when making LAPI calls; if they do, deadlock can result.

Because user-defined handlers can be called concurrently from multiple threads, it is the user's responsibility to make them thread-safe.
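For example, a completion handler that updates state shared with other handlers or with application threads needs its own locking. The sketch below is illustrative (the handler name, shared counter, and mutex are not part of LAPI); the handler prototype follows the general compl_hndlr_t form declared in lapi.h.

    #include <pthread.h>
    #include <lapi.h>

    /* Illustrative shared state updated by concurrently running handlers. */
    static long messages_done = 0;
    static pthread_mutex_t done_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Completion handler: several active messages may have their handlers
     * in flight at the same time, so the shared counter is updated under
     * a mutex.  The prototype follows the compl_hndlr_t form in lapi.h
     * (handle pointer plus the user_info pointer produced by the header
     * handler).                                                           */
    void my_completion_handler(lapi_handle_t *hndl, void *user_info)
    {
        pthread_mutex_lock(&done_lock);
        messages_done++;                /* thread-safe update of shared state */
        pthread_mutex_unlock(&done_lock);
        /* Do not make LAPI calls while still holding the mutex; as noted
         * above, holding mutual exclusion resources across LAPI calls can
         * deadlock.                                                        */
    }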

The application thread, the notification thread, and the completion handler thread are shown in Figure 45.

Figure 45. A LAPI thread model

Threads 0 and 1 (the application thread and the notification thread) attempt to invoke the LAPI dispatcher whenever possible; in this way, progress on incoming and outgoing messages can be made while minimizing additional overhead. Most LAPI calls (though not all) made by the application thread also result in the LAPI dispatcher being automatically run. The notification thread waits in the Kernel for the occurrence of a notification event. When an event occurs, the Kernel wakes up the waiting thread. As shown in Figure 45, after the notification thread returns from waiting in the Kernel, it invokes the LAPI dispatcher.

The LAPI Dispatcher is the central control point that orchestrates the invocation of functions and threads necessary to process outstanding incoming and outgoing LAPI messages.

The LAPI dispatcher can run from the application's user thread, from the notification thread, or from the completion handler thread. Locking ensures that only one instance of the dispatcher runs at a time, which maintains integrity. On incoming messages, the LAPI dispatcher manages the reassembly of data from different packets (which might arrive out of order) into the specified buffer, and then invokes the completion handler if necessary.

Thread 2 is created by LAPI_Init to execute completion handlers associated with active messages. Completion handlers are written by users and can make LAPI function calls, which in turn invoke the LAPI dispatcher. The completion handler thread processes work from the completion handler queue. When the queue is empty, the thread waits on a pthread_cond_wait(). If an active message (LAPI_Amsend) includes a completion handler, the dispatcher queues a request on the completion queue after the whole message has arrived and has been reassembled in the specified buffer; the dispatcher then signals the completion handler thread with pthread_cond_signal(). If the thread was waiting, it wakes up and begins processing the completion handler queue; if it was not waiting, the signal is ignored.
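The handshake between the dispatcher and the completion handler thread is the standard POSIX condition-variable pattern. The fragment below is not LAPI source code; it is a generic sketch of that pattern (the queue, lock, and function names are invented) showing why a signal sent while the consumer is busy is safely ignored: the work remains on the queue and is found on the consumer's next pass.

    #include <pthread.h>
    #include <stdlib.h>

    /* Generic producer/consumer sketch of the dispatcher <-> completion
     * handler thread handshake (names are illustrative, not LAPI source). */
    typedef struct work { struct work *next; void (*fn)(void *); void *arg; } work_t;

    static work_t *queue_head = NULL;
    static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  q_cond = PTHREAD_COND_INITIALIZER;

    /* Dispatcher side: enqueue a completion handler and signal the thread.
     * If the consumer is busy (not waiting), the signal is a no-op, but
     * the work is still found on the next pass over the queue.            */
    void enqueue_completion(work_t *w)
    {
        pthread_mutex_lock(&q_lock);
        w->next = queue_head;
        queue_head = w;
        pthread_cond_signal(&q_cond);
        pthread_mutex_unlock(&q_lock);
    }

    /* Completion handler thread: drain the queue, then wait for a signal. */
    void *completion_thread(void *unused)
    {
        for (;;) {
            work_t *w;

            pthread_mutex_lock(&q_lock);
            while (queue_head == NULL)          /* queue empty: block */
                pthread_cond_wait(&q_cond, &q_lock);
            w = queue_head;
            queue_head = w->next;
            pthread_mutex_unlock(&q_lock);

            w->fn(w->arg);                      /* run the completion handler */
            free(w);
        }
        return NULL;
    }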

LAPI handlers are not guaranteed to execute one at a time. Note that LAPI calls can execute concurrently at the origin, at the target, or at both. The restriction stated previously about not holding mutual exclusion resources when making LAPI calls still applies.

This discussion of a thread-safe environment and maximum execution concurrency within the LAPI library applies to both the polling and interrupt modes. In polling mode, any call to the communication library attempts to make progress on the context specified in the call. In addition, the function LAPI_Probe is provided to allow applications to explicitly check for and handle incoming messages.
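For example, a task running in polling mode can drive the dispatcher itself while it waits for transfers to complete. The helper below is a sketch under the assumption that the sender updates the counter tgt_cntr; it alternates LAPI_Probe (to process arrived packets) with LAPI_Getcntr (to read the counter), then resets the counter with LAPI_Setcntr. LAPI_Waitcntr provides a similar wait in a single call.

    #include <lapi.h>

    /* Poll until 'expected' transfers have completed against tgt_cntr.
     * LAPI_Probe lets the dispatcher handle any packets that have arrived;
     * LAPI_Getcntr reads the counter without blocking.                     */
    void wait_for_transfers(lapi_handle_t hndl, lapi_cntr_t *tgt_cntr, int expected)
    {
        int val = 0;

        while (val < expected) {
            LAPI_Probe(hndl);                   /* make progress on messages */
            LAPI_Getcntr(hndl, tgt_cntr, &val); /* check how many arrived    */
        }
        LAPI_Setcntr(hndl, tgt_cntr, 0);        /* reset for the next round  */
    }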

The execution model of the handlers consists of the following events:

Event: Message Arrival
Action: Copies the message from the network into the appropriate data access memory space.

Event: Interrupt / Poll
Action: Causes an interrupt if required, based on the mode.

Event: Dispatcher Start
Action: Invokes the dispatcher.

Event: New Message Packet
Action: Checks the LAPI header and determines (by checking the receive state message reassembly table) whether the packet is part of a pending message or the first packet of a new message. Calls the header-handler function for the first packet of a new message.

Event: Return from Header-Handler
Action: If the message spans more than one packet, the LAPI dispatcher logs that there is a pending message and saves the completion handler address and the user's buffer address for use during reassembly of the remaining packets.

Event: Pending Message Packet
Action: Copies the packet data to the appropriate portion of the user buffer specified through the header-handler. If the packet completes the message, the dispatcher queues the completion handler; otherwise the dispatcher returns to check for message arrivals.

Event: Completion Handler
Action: Executes the completion handler; after the handler returns, the dispatcher updates the appropriate target counter before continuing.
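The sketch below ties these events together from the application's point of view: the origin issues LAPI_Amsend, the target's header handler returns the reassembly buffer and registers a completion handler, and the completion handler runs once the whole message has arrived. It assumes an SPMD program in which the header handler and target counter have the same addresses on every task; the handler prototypes follow the general hdr_hndlr_t and compl_hndlr_t forms from lapi.h (exact integer types vary by release), and all buffer, counter, and function names are illustrative.

    #include <lapi.h>

    #define MSG_BYTES 4096

    static char recv_buf[MSG_BYTES];            /* target-side receive buffer */
    static lapi_cntr_t tgt_cntr;                /* updated after completion   */

    /* Completion handler ("Completion Handler" event): runs on the
     * completion handler thread after the whole message has been
     * reassembled in recv_buf.                                            */
    static void am_done(lapi_handle_t *hndl, void *user_info)
    {
        /* ... consume recv_buf here ... */
    }

    /* Header handler ("New Message Packet" / "Return from Header-Handler"
     * events): called for the first packet only.  It must return quickly
     * (no blocking) and hand back the buffer address where LAPI is to
     * reassemble the message, plus the completion handler to queue.       */
    static void *am_header(lapi_handle_t *hndl, void *uhdr, uint *uhdr_len,
                           uint *msg_len, compl_hndlr_t **comp_h, void **user_info)
    {
        *comp_h    = am_done;       /* queued once the last packet arrives */
        *user_info = NULL;
        return recv_buf;            /* where LAPI reassembles the payload  */
    }

    /* Origin side: send an active message and wait until the completion
     * counter shows that the target's completion handler has finished.
     * In an SPMD program the header handler address is assumed to be the
     * same on every task.                                                 */
    static void send_to(lapi_handle_t hndl, int target, void *data, ulong len)
    {
        lapi_cntr_t org_cntr, cmpl_cntr;
        int dummy;

        LAPI_Setcntr(hndl, &org_cntr, 0);
        LAPI_Setcntr(hndl, &cmpl_cntr, 0);

        LAPI_Amsend(hndl, target, (void *)am_header,
                    NULL, 0,                    /* no user header        */
                    data, len,                  /* payload to reassemble */
                    &tgt_cntr, &org_cntr, &cmpl_cntr);

        /* Wait until the target has run the completion handler. */
        LAPI_Waitcntr(hndl, &cmpl_cntr, 1, &dummy);
    }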

Allocating buffers

  1. The user can allocate as many buffers per origin as desired.
  2. If the header handler blocks, no further progress is made on any message, including pending messages (that is, the communications adapter is stalled).

Caution: The user is allowed to make LAPI calls from within the completion handler. However, users should exercise caution when writing completion handlers that perform long computations and then issue LAPI calls. The long computation can cause the completion handler queue to fill up if multiple active messages are sent before these handlers complete, which can lead to fetch deadlocks. In such cases, users should fork a separate thread from within the completion handler and have the forked thread make the LAPI calls, as sketched below; this eliminates the possibility of a fetch deadlock.
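A sketch of that technique follows: the completion handler only packages the work and creates a detached thread, and the new thread performs the long computation and any follow-on LAPI calls, so the completion handler queue drains quickly. The work structure and function names are illustrative.

    #include <pthread.h>
    #include <stdlib.h>
    #include <lapi.h>

    /* Illustrative context passed from the completion handler to the worker. */
    typedef struct {
        lapi_handle_t hndl;
        void         *buffer;       /* reassembled message to operate on */
    } deferred_work_t;

    /* Worker thread: the long computation plus any LAPI calls happen here,
     * off the completion handler thread.                                   */
    static void *deferred_worker(void *arg)
    {
        deferred_work_t *w = (deferred_work_t *)arg;

        /* ... long computation on w->buffer ... */
        LAPI_Probe(w->hndl);        /* example LAPI call from the forked thread */

        free(w);
        return NULL;
    }

    /* Completion handler: returns quickly; it only creates a detached thread,
     * so the completion handler queue cannot back up behind long work.       */
    static void long_work_done(lapi_handle_t *hndl, void *user_info)
    {
        pthread_t tid;
        pthread_attr_t attr;
        deferred_work_t *w = (deferred_work_t *)malloc(sizeof(*w));

        w->hndl   = *hndl;
        w->buffer = user_info;

        pthread_attr_init(&attr);
        pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
        pthread_create(&tid, &attr, deferred_worker, w);
        pthread_attr_destroy(&attr);
    }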

