
Performance Management Guide

Monitoring and Tuning Partitions

This chapter provides insights and guidelines for considering, monitoring, and tuning AIX performance in partitions running on POWER4-based systems. For more information about partitions and their implementation, see AIX 5L Version 5.2 AIX Installation in a Partitioned Environment or Hardware Management Console Installation and Operations Guide.


Performance Considerations with Logical Partitioning

POWER4-based systems can be configured in a variety of ways, and application workloads might vary in their performance characteristics on these systems.

Partitioning offers flexible hardware use when the application software does not scale well across large numbers of processors, or when flexibility of the partitions is needed. In these cases, running multiple instances of an application on separate smaller partitions can provide better throughput than running a single large instance of the application. For example, if an application is designed as a single process with little to no threading, it will often run fine on a 2-way or 4-way system but might run into limitations running on larger SMP systems. Rather than redesigning the application to take advantage of the larger number of CPUs, the application can run in a parallel set of smaller CPU partitions.

The performance implications of logical partitioning should be considered when doing detailed, small-variation analysis. The hypervisor and firmware handle the mapping of memory, CPUs, and adapters for the partition. Applications are generally unaware of where the partition's memory is located, which CPUs have been assigned, or which adapters are in use. There are, however, a number of performance monitoring and tuning considerations for applications with respect to the locality of memory to CPUs, the sharing of L2 and L3 caches, and the overhead of the hypervisor managing the partitioned environment.

Supported Operating Systems

Partitions on POWER4-based systems require an operating system that is enabled for partitioning.

AIX 4.3 and earlier versions of the operating system are not enabled for partitions and are not supported.

Each of the partitions on a system can run a different level of an operating system. Partitions are designed to isolate the software running in one partition from the software running in the other partitions. This includes protection against natural software failures as well as deliberate software attempts to break the LPAR barrier. Data access between partitions is prevented, other than through normal network connectivity. A software crash in one partition, whether of application software or of the operating system, does not disrupt the other partitions. Nor can a partition monopolize an underlying shared hardware resource to the point where other partitions using that resource become starved; for example, partitions sharing the same PCI bridge chips cannot lock the bus indefinitely.

System components

Several system components must work together to implement and support the LPAR environment. Specific functions must be supported by the processors, the firmware, and the operating system; an LPAR implementation is therefore not based solely on software, hardware, or firmware, but on the relationship between the three components. The POWER4 microprocessor supports an enhanced form of system call, known as Hypervisor mode, that gives a privileged program access to certain hardware facilities and also protects those facilities in the processor. This mode allows access to information about systems located outside the boundaries of the partition where the processor is located. The hypervisor does use a small percentage of the system's CPU and memory resources, so comparing a workload running with the hypervisor to one running without it will typically show some minor impact.

A POWER4-based system can be booted in a variety of partition configurations.

Affinity Logical Partitioning

Some POWER4-based systems can create affinity logical partitions. This feature automatically determines which system CPU and memory resources are used for each partition, based on their physical location relative to each other. The HMC divides the system into symmetrical LPARs with 4-processor or 8-processor partitions, depending on the administrator's selection during setup. The processors and memory are aligned on MCM boundaries. This is designed to allow the system to be used as a set of identical cluster nodes and provides performance optimization for scientific and technical workloads. If the system is booted in this mode, the ability to tune resources by adding and deleting CPUs and memory is not available; however, workloads running in an affinity logical partition gain performance over those running in a normal logical partition.

Workload Management

Workload Management in a Partition

The same workload management facilities that AIX provides on a standalone system exist within each AIX partition. The AIX Workload Manager behaves no differently when running inside a partition, but it does not manage workloads across partitions. Application owners who are accustomed to assigning CPUs or memory to a workload may want to extend this concept to partitions. Because CPUs are assigned to each partition outside the scope of the Workload Manager, there is no way to dedicate a set of CPUs from a specific MCM to a particular workload. The Workload Manager and the bindprocessor command can, however, still bind the partition's previously assigned CPUs to particular workloads.
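As a sketch of that binding step: on AIX, bindprocessor -q lists the logical CPU IDs available in the partition, and bindprocessor with a process ID and CPU number binds a process to one of them. The fragment below parses a captured sample of the query output (the sample string and the PID are illustrative, not from the text) and only prints the commands it would run, so it stays portable:

```shell
# Captured sample of 'bindprocessor -q' output; on a live AIX
# partition you would capture the real output of: bindprocessor -q
sample='The available processors are:  0 1'

# Everything after the colon is the list of logical CPU IDs.
cpus=$(printf '%s\n' "$sample" | sed 's/.*: *//')

pid=12345   # illustrative PID of the workload process
for cpu in $cpus; do
    # Real binding on AIX would be: bindprocessor "$pid" "$cpu"
    echo "would bind PID $pid to logical CPU $cpu"
done
```

Note that only CPUs already assigned to the partition appear in the query output; the sketch cannot reach CPUs outside the partition.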

Using Partitioning or Workload Management

When choosing between partitions and workload management for a particular set of workloads, applications, or solutions, there are several factors to consider. Generally, partitioning is the more appropriate mode of management when workloads must be strongly separated from one another.

Strong separation of performance is important when monitoring or tuning application workloads on a system that supports partitioning. Establishing effective AIX workload management controls for one workload while other critical workloads run on the same system can be challenging. Monitoring and tuning multiple applications is more practical in separate partitions, where granular resources can be assigned to each partition.

LPAR Performance Impacts

The hypervisor functions required to run a system in LPAR mode typically add less than 5 percent overhead to normal memory and I/O operations. The impact of running in an LPAR is not significantly different from running on a similar processor in SMP mode. Running multiple partitions simultaneously generally has little performance impact on any one partition, although some circumstances can behave differently. The hypervisor adds some overhead for virtual memory management; this is minor for most workloads, but the impact increases with extensive page-mapping activity. Partitioning may actually help performance in some cases: for applications that do not scale well on large SMP systems, it enforces strong separation between the workloads running in the separate partitions.

Simulating Smaller Systems

When used on POWER4-based MCM systems, rmss allocates memory from the overall system without regard to the physical location of that memory relative to the MCM. Specific performance characteristics can therefore change depending on what memory is available and what memory is assigned to a partition. For example, if you use rmss to simulate an 8-way partition using local memory, the memory actually assigned is not likely to be the physical memory closest to the MCM. In fact, the eight processors are not likely to be the eight processors on a single MCM, but will instead be assigned from the available list.

When deconfiguring CPUs on an MCM-based system, the hypervisor implicitly continues to use pathways between MCMs and memory. While the performance impact is small, there can be slight differences that affect detailed performance analysis.

AIX Memory Affinity

AIX memory affinity is not available in LPAR mode.

CPUs in a partition

Assigned CPUs

To see which specific CPUs have been assigned to an LPAR, select the Managed System (CEC) object on the HMC and view its properties. A tab shows the current allocation state of all processors that are assigned to running partitions. AIX uses the firmware-provided numbers, so from within a partition you can tell which processors are in use by looking at the CPU numbers and AIX location codes.

On a two-way partition, checking the status of the CPUs assigned to the partition produces output similar to the following:

 > lsdev -C | grep proc 
proc17   Available 00-17   Processor 
proc23   Available 00-23   Processor
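The logical CPU numbers can be pulled out of such a listing with a short script. The following sketch parses the sample output shown above; the sample lines are embedded so that the sketch runs anywhere, not only on an AIX partition:

```shell
# Sample 'lsdev -C | grep proc' output, captured from the text above.
lsdev_sample='proc17   Available 00-17   Processor
proc23   Available 00-23   Processor'

# Strip the "proc" prefix from the first field to get the CPU numbers.
printf '%s\n' "$lsdev_sample" |
    awk '/Processor/ { sub(/^proc/, "", $1); print $1 }'
```

On a live partition, the same awk filter would be applied directly to the output of lsdev -C.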

Disabling CPUs

When CPUs are disabled on a POWER4-based system with an MCM, control flow and memory accesses are still routed through the remaining CPUs on the overall system. This can have a small effect on the specific performance of a workload.

Application Considerations

Generally, an application is not aware that it is running in an LPAR. There are some slight differences that the system administrator is aware of, but these are masked from the application. Apart from these considerations, AIX runs inside a partition the same way it runs on a standalone server: LPAR is transparent to AIX applications and, in general, to AIX performance tools. Third-party applications need only be certified for a level of AIX, not for running within a partition.

The uname Command Run in LPAR

On an AIX 5L system that is not partitioned, the uname command returns the following:

> uname -L 
-1 NULL

The "-1" indicates that the system is not running with logical partitions, but in SMP mode.

For AIX 5L running in a partition, the command returns the partition number and the partition name as managed by the HMC:

 > uname -L 
3 Web Server
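A startup script can branch on this output to detect whether it is running in a partition. The following sketch (the function name is illustrative) wraps the check in a shell function and feeds it the two sample strings from the text, so it runs without an actual AIX system:

```shell
# Classify the output of 'uname -L': "-1 NULL" means SMP mode,
# anything else is "<partition number> <partition name>".
check_lpar() {
    case "$1" in
        "-1 "*) echo "SMP mode (no partitions)" ;;
        *)      echo "LPAR ${1%% *}: ${1#* }" ;;
    esac
}

# On a live AIX 5L system: check_lpar "$(uname -L)"
check_lpar "-1 NULL"        # prints: SMP mode (no partitions)
check_lpar "3 Web Server"   # prints: LPAR 3: Web Server
```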

Knowing that the application is running in an LPAR might be helpful when looking at and assessing slight performance differences.

Virtual Console

There is no physical console on each partition. Although the physical serial ports can be assigned to partitions, a serial port can belong to only one partition at a time. To provide an output device for console messages, and for diagnostic purposes, the firmware implements a virtual tty that AIX sees as a standard tty device. Its output is streamed to the HMC, and the AIX diagnostics subsystem uses the virtual tty as the system console. From a performance perspective, if a large amount of data is written to the system console and monitored on the HMC, throughput is limited by the serial cable connection to the HMC.

Time-of-Day Clock

Each partition has its own Time-of-Day clock values, so partitions can work with different time zones. The only way partitions can communicate with each other is through standard network connectivity. When looking at traces or time-stamped information from the partitions on a system, the time stamps will differ according to how each partition was configured.

System serial number

The uname -m command provides a variety of system information for the partition as it is defined. The serial number reported is the system serial number, so the same serial number is seen in each of the partitions.

Memory considerations

Partitions are defined with a "must have" amount, a "desired" amount, and a minimum amount of memory. When assessing changing performance conditions across system reboots, be aware that memory and CPU allocations might change based on the availability of the underlying resources. Also, keep in mind that the amount of memory allocated to the partition from the HMC is the total amount allocated; within the partition itself, some of that physical memory is used for hypervisor page table translation support.

Memory is allocated from anywhere across the system. Applications in partitions cannot determine where their memory has been physically allocated.

PTX considerations

Since each LPAR can logically be viewed as a separate machine with a distinct IP address, PTX monitors will treat each LPAR as a distinct machine. Each LPAR must have the PTX Agent xmservd installed to provide LPAR statistics. The PTX Manager, xmperf, can view the LPAR as a whole or provide finer granularity views into individual processors within the LPAR. The xmperf skeleton consoles are already set up to provide these views, but the LPAR naming process may need to be explained so that the user can select the proper LPAR and processors within the LPAR.

The PTX 3dmon component is updated to show a summary of partitions recognized as running on a single system. Like the xmperf operations, 3dmon views each LPAR as it would an individual SMP machine. Select LPARs by their assigned host name.
