Program Parallelization (C Only)

The compiler offers you two methods of implementing shared memory program parallelization. These are:

  1. IBM directives, which support automatic and directed parallelization of countable loops.
  2. OpenMP directives, which define various types of parallel regions.

All methods of program parallelization are enabled when the -qsmp compiler option is in effect without the omp suboption. You can enable strict OpenMP compliance with the -qsmp=omp compiler option, but doing so will disable automatic parallelization.

Parallel regions of program code are executed by multiple threads, possibly running on multiple processors. The number of threads created is determined by the run-time options and calls to library functions. Work is distributed among available threads according to the specified scheduling algorithm.
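For example, the following minimal sketch uses the standard OpenMP library calls omp_get_thread_num() and omp_get_num_threads(), declared in omp.h, to show the team of threads that executes a parallel region. The run-time options that control the default number of threads are described under IBM Run-time Options for Parallel Processing and OpenMP Run-time Options for Parallel Processing.

    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        /* The size of the thread team is decided at run time, for example
           by a run-time option such as the OMP_NUM_THREADS environment
           variable or by a call to omp_set_num_threads(). */
        #pragma omp parallel
        {
            /* Code inside a parallel region is executed by every thread in
               the team; worksharing constructs (shown later in this section)
               divide work among the threads instead. */
            printf("thread %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads());
        }
        return 0;
    }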

Note: The -qsmp option must be used only with thread-safe compiler invocation modes.

IBM Directives

IBM directives exploit shared memory parallelism through the parallelization of countable loops. A loop is considered to be countable if it has any of the forms described in Countable Loops.
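As a rough illustration only (the normative forms are defined in Countable Loops), the first loop below has an iteration count that can be determined before the loop begins executing, while the second does not. A file containing such loops would be compiled with a thread-safe invocation and the -qsmp option, for example xlc_r -qsmp (the invocation name here is an assumption; see Invoke the Compiler for the supported invocations).

    /* Illustrative only; see Countable Loops for the normative forms. */
    #define N 1000

    void example(double threshold)
    {
        double a[N];
        double t = 0.0;
        int i;

        /* Countable: the iteration count (N) is known before the loop
           starts executing. */
        for (i = 0; i < N; i++) {
            a[i] = i * 0.5;
        }

        /* Not countable: the exit condition depends on values computed
           inside the loop, so the iteration count cannot be determined
           in advance. */
        i = 0;
        while (t < threshold) {
            t += a[i % N];
            i++;
        }
    }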

The compiler can automatically locate and, where possible, parallelize all countable loops in your program code. In general, a countable loop is automatically parallelized only if all of the following conditions are met:

You can also explicitly instruct the compiler to parallelize selected countable loops.

The C for AIX compiler provides pragma directives that you can use to improve on the automatic parallelization performed by the compiler. These pragmas fall into two general categories, both illustrated in the sketch that follows the list below.

  1. The first category of pragmas lets you give the compiler information on the characteristics of a specific countable loop. The compiler uses this information to perform more efficient automatic parallelization of the loop.
  2. The second category gives you explicit control over parallelization. Use these pragmas to force or suppress parallelization of a loop, apply specific parallelization algorithms to a loop, and synchronize access to shared variables using critical sections.
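A minimal sketch of both categories follows. The pragma names used here, independent_loop and parallel_loop, are assumptions based on the directives listed under #pragma Preprocessor Directives for Parallel Processing; treat that reference as authoritative for the exact names and syntax.

    #define N 1000

    void scale(double *a, const double *b)
    {
        int i;

        /* Category 1 (assumed pragma name): assert that the iterations of
           the following countable loop are independent, so the compiler
           can parallelize it more effectively. */
        #pragma ibm independent_loop
        for (i = 0; i < N; i++) {
            a[i] = b[i] * 2.0;
        }

        /* Category 2 (assumed pragma name): take explicit control and
           force parallelization of the following loop. */
        #pragma ibm parallel_loop
        for (i = 0; i < N; i++) {
            a[i] = a[i] + 1.0;
        }
    }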

OpenMP Directives

OpenMP directives exploit shared memory parallelism by defining various types of parallel regions. Parallel regions can include both iterative and non-iterative segments of program code.
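For example, a single parallel region might combine a worksharing loop (iterative) with a sections construct (non-iterative). The sketch below uses only standard OpenMP constructs.

    #define N 1000

    void initialize(double *a, double *b)
    {
        int i;

        /* One parallel region containing an iterative segment (the
           worksharing loop) and a non-iterative segment (the sections). */
        #pragma omp parallel private(i)
        {
            #pragma omp for
            for (i = 0; i < N; i++)
                a[i] = 0.0;

            #pragma omp sections
            {
                #pragma omp section
                b[0] = 1.0;       /* may run on one thread...       */

                #pragma omp section
                b[1] = 2.0;       /* ...while another runs this one */
            }
        }
    }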

Pragmas fall into four general categories, all illustrated in the sketch that follows the list:

  1. The first category of pragmas lets you define parallel regions in which work is done by threads in parallel. Most of the OpenMP directives either statically or dynamically bind to an enclosing parallel region.
  2. The second category lets you define how work will be distributed or shared across the threads in a parallel region.
  3. The third category lets you control synchronization among threads.
  4. The fourth category lets you define the scope of data visibility across threads.
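The sketch below, again using only standard OpenMP constructs, touches all four categories: a parallel construct creates the region, a for construct shares the loop iterations, a critical construct synchronizes the threads, and private and shared clauses set the data scope.

    #define N 1000

    void double_and_sum(double *a, double *sum)
    {
        int i;

        /* Category 1: the parallel construct creates a team of threads.
           Category 4: i is private to each thread; a and sum are shared. */
        #pragma omp parallel private(i) shared(a, sum)
        {
            double local = 0.0;   /* declared inside the region, so private */

            /* Category 2: the for construct distributes the loop
               iterations across the threads in the team. */
            #pragma omp for
            for (i = 0; i < N; i++) {
                a[i] *= 2.0;
                local += a[i];
            }

            /* Category 3: the critical construct serializes updates
               to the shared running total. */
            #pragma omp critical
            *sum += local;
        }
    }

A reduction clause on the worksharing loop could replace the explicit critical section shown here; see Reduction Operations in Parallelized Loops.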


Countable Loops
Reduction Operations in Parallelized Loops
Shared and Private Variables in a Parallel Environment
Compiler Modes


Control Parallel Processing with Pragmas
Invoke the Compiler


#pragma Preprocessor Directives for Parallel Processing
IBM Run-time Options for Parallel Processing
OpenMP Run-time Options for Parallel Processing
Built-in Functions Used for Parallel Processing
smp Compiler Option
strict Compiler Option
strict_induction Compiler Option
OpenMP Specification