IBM SAP Technical Brief

Performance Tuning Tips
Top 40 - 1997

John Oustalet & Walter Orb
IBM SAP Competency Center
Foster City, California

Version: 1.0

Status: February 1998


Edition Notice (February 1998)


This document is written for IBM personnel involved in the installation, configuration, and tuning of SAP R/3, release 3.x. The purpose of this document is to provide an experienced SAP BASIS specialist with a series of "tips" intended to optimize the performance of SAP on Oracle7 and AIX 4. The "tips" are based on current information, which may change with experience and new software revision levels.

Standard disclaimer: This document is not a comprehensive tuning guide. It is a quick reference of recommendations given to our customers during several performance engagements last year. Use your best judgement.


The authors gratefully acknowledge the work of Jim Dilley, who took the time to review, critique, and contribute to the contents of this document.

Copyright 1998 IBM Corporation. All Rights Reserved.

Neither this documentation nor any part of it may be copied or reproduced in any form or by any means or translated into another language, without the prior consent of the IBM Corporation.

IBM makes no warranties or representations with respect to the content hereof and specifically disclaims any implied warranties of merchantability or fitness for any particular purpose. IBM assumes no responsibility for any errors that may appear in this document. The information contained in this document is subject to change without any notice. IBM reserves the right to make any such changes without obligation to notify any person of such revision or changes. IBM makes no commitment to keep the information contained herein up to date.

SAP and ABAP/4 are registered trademarks of SAP AG

ORACLE is a registered trademark of the Oracle Corporation

Oracle7 is a trademark of the Oracle Corporation

IBM and AIX are registered trademarks of the International Business Machines Corporation


* "AIX/6000 Internals and Architecture", David A. Kelly, copyright 1996 by McGraw-Hill

* "AIX/6000 System Guide", Frank Cervone, copyright 1996 by McGraw-Hill

* "The AIX Survival Guide", Andreas Siegert, copyright 1996 by Addison-Wesley

* "AIX Performance Tuning", Frank Waters, copyright 1996 by IBM

* "Oracle Performance Tuning", Mark Gurry and Peter Corrigan, copyright 1996 by O'Reilly & Associates

* "Advanced Oracle Tuning and Administration", Aronoff, Loney, and Sonawalla, copyright 1997 by Aronoff, Loney, and Sonawalla

* "Tuning Oracle", Corey, Abbey, and Dechichio, copyright 1995 by McGraw-Hill

* "Developing SAP's R/3 Applications with ABAP/4", Kretschmer and Weiss, copyright 1996 by SYBEX

* "Tuning ABAP/4 Programs for Performance", Udo Laub, copyright 1997 by SAP Technology, Inc.

* "R/3 Installation on UNIX - ORACLE Database", copyright 1997 by SAP AG


After reviewing several SAP systems last year, we recognized a common set of recommendations given to almost all of our customers. In a moment of inspiration, we decided to compile our recommendations into this document in the form of easily referenced "tips". We hope to update this document frequently with new tips and coverage.

Since this isn't a book, we must assume you are familiar with:

* AIX commands and utilities (e.g. vmstat, iostat, netstat, filemon, netpmon, tprof, trace, etc.) related to performance tuning and administration.

* Oracle7 performance monitoring and administration.

* SAP R/3 3.x performance monitoring and administration.

Please accept that this document will not:

* replace or supersede other IBM or SAP documents on the same subject.

* explain recommendations in complete detail.

* cover all possible performance problems a customer may encounter.

* attempt to define poor, acceptable, and optimum performance.

* address the performance implications of configuring SAP modules to meet specific business process requirements.

* take into account all possible uses and modifications of SAP standard transactions and their impact on performance.

Hopefully, this document will help you get the most out of your SAP system.

Good Luck,

John & Walter

AIX - Disk I/O

Tip 1: Keep Disk Utilization Below 15%

As disk utilization increases, disk response time suffers due to resource contention and queuing. In R/3, sustained disk utilization higher than 15% has proven to have a negative impact on an Oracle database using primarily random I/O. Most often, high disk utilization limits overall system throughput.

Use iostat to monitor disk utilization during peak workload (e.g., iostat 30 60). Look for any disk that has sustained utilization (% tm_act) above 15%. Find out what's on the disk and why it's working so hard. If you have multiple data files on the same disk, run filemon and look for high seek times. Hot logical volumes can be placed next to each other to help reduce seek time. You might have fragmented logical volumes, file systems, and/or files. Use lspv and fileplace to get a detailed layout of the disk. If fragmented, consider reorganization.
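A minimal sketch of this check, under the assumption of an illustrative "iostat -d"-style report (the heredoc below stands in for live iostat output — device, % tm_act, Kbps, tps, Kb_read, Kb_wrtn; on a real system you would pipe iostat itself through the same awk filter):

```shell
# Flag any disk whose % tm_act (column 2) exceeds the 15% rule of thumb.
# The sample figures are illustrative, not from a real system.
busy=$(awk '$1 ~ /^hdisk/ && $2 > 15 { printf "%s %.1f%%", $1, $2 }' <<'EOF'
hdisk0   8.2   120.0   15.0    3600      0
hdisk3  27.5   880.0  110.0   26400      0
hdisk5  14.9   410.0   51.0   12300      0
EOF
)
echo "over-utilized: $busy"
```

In this sample only hdisk3 is flagged; it would be the first candidate for filemon analysis or data file redistribution.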

If high I/O volume is causing high disk utilization, look for heavily accessed data files sharing the same disk. If found, separate the data files by moving them to different disks. Ideally, the I/O workload should be evenly distributed over all available disks and controllers.

If utilization is too high and there's only one data file on the disk, or if the majority of I/O is going to one file, consider striping the tablespace or moving "hot" tables out into their own tablespaces.

The 15% rule is valid for Oracle tablespace data files. However, the I/O to redo and archive log files is sequential. With sequential I/O, disks can sustain higher utilization - up to 50-60%. Consider striping the redo log logical volumes if disk utilization exceeds 60%. Use a stripe width of 64K and stripe the logical volumes over 6 to 8 disks. Since AIX logical volume mirroring and striping are mutually exclusive, striping will require the Oracle multiplexing of the redo log files.

Tip 2: Pseudo-Stripe Hot Logical Volumes

Most production systems use logical volume mirroring for data fault tolerance. Logical volume striping can improve I/O performance by distributing I/O load. Unfortunately in AIX, logical volume mirroring and striping are mutually exclusive. However, a heavily accessed, mirrored logical volume can be "pseudo-striped" by spreading it across multiple physical volumes on a physical partition level. Striping at a physical partition level might effectively distribute I/O load due to random access of data within a tablespace.

Initially, create a single copy of the logical volume specifying the primary set of physical volumes in the "PHYSICAL VOLUME names" field. Also, set the logical volume option "RANGE of physical volumes" to maximum to distribute I/O load evenly across the physical volumes. To mirror, add a copy of the logical volume specifying the mirrored set of physical volumes and set the interdisk range to maximum.
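As an illustrative sketch, the two steps above map to the following AIX commands; the volume group, logical volume name, partition count, and disk names are all assumptions, not taken from any particular system. The `-e x` flag sets the inter-disk allocation policy ("RANGE of physical volumes") to maximum:

```
# Create the primary copy spread across the primary set of disks:
mklv -y lv_stabd -e x datavg 64 hdisk2 hdisk3 hdisk4

# Add a second (mirror) copy on the mirrored set of disks,
# again spread at maximum inter-disk range:
mklvcopy -e x lv_stabd 2 hdisk5 hdisk6 hdisk7
```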

Tip 3: Carefully Place Hot Logical Volumes On Disk

The intradisk layout policy divides a physical volume into five partitions: inner edge, inner middle, center, middle, and outer edge. Average seek time decreases toward the center of the physical volume. However, data transfer rates are greater toward the outer edge due to rotational speed.

When multiple logical volumes are defined on a physical volume, highly accessed logical volumes should be placed at the center of the physical volume to take advantage of the lower seek times. Set the logical volume option "POSITION on physical volume" to center.

When a single logical volume is defined on a physical volume, the logical volume should be placed on the outer edge of the disk to take advantage of a higher data transfer rate. In this case, the average seek time is only influenced by seeks within that logical volume. Set the logical volume option "POSITION on physical volume" to outer edge.

If you are mirroring logical volumes, disable Mirror Write Consistency (see Tip 4).

Tip 4: Disable Mirror Write Consistency

If you are mirroring, disable Mirror Write Consistency (MWC) for the Oracle data, redo log, and saparch logical volumes.

MWC tells the Logical Volume Manager (LVM) to keep track of the status of mirrored writes (see Appendix A). Each write to a mirrored logical volume results in a seek to the outer edge of the disk to update the status block and then a seek back to the data area to do the actual write. You end up with two physical writes for each request along with additional head movement. As a result, this parameter can result in I/O performance degradations of 20 to 50%.

Note: while turning off Mirror Write Consistency improves performance, it exposes you to mirror synchronization issues if the system abends. After an abend, you must manually resynch the mirrors, which can take minutes to hours depending on the size of the Oracle partitions (contact me for the manual procedures, baspence). I do not recommend turning off MWC without carefully weighing the performance/administration tradeoffs. Bruce Spencer (10/29/99)

Tip 5: Use Asynchronous I/O With AIX 4

Oracle7 takes full advantage of AIX asynchronous I/O (AIO), resulting in faster database access. AIO overlaps multiple I/O operations to improve I/O subsystem throughput. With AIX 4, AIO is kernelized, resulting in a significant improvement over synchronous writes.

Set the following initSID.ora parameters:

use_async_io = TRUE

db_writers = 1

When using AIO, do not set the number of db_writers > 1. Data corruption could occur.

AIX 4 supports AIO for both file systems and raw devices. After issuing AIO requests, the Oracle processes can continue running. A kernel process (kproc), called a server, is in charge of each request until it completes. The number of servers limits the number of AIO requests that can be executed concurrently.

You can set the number of servers with smit or chdev. The minimum is the number of servers to start at system boot. The maximum limits the number of servers that can be started in response to a large number of concurrent requests. The default values are minservers=1 and maxservers=10.

Derive the optimum values for minservers and maxservers by monitoring a running system. Initially, set minservers=10 and maxservers=50. Check the number of active AIO servers periodically during the day with the AIX command:

pstat -a | grep aio | wc -l

Set minservers to the average number of observed AIO servers and maxservers to 2 times the minservers value. The changes will become effective at the next boot. Periodically recheck the values and adjust.
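The sizing rule above can be sketched as a small script; the sampled counts are illustrative stand-ins for values collected with "pstat -a | grep aio | wc -l" over the day:

```shell
# Average the sampled AIO-server counts, then derive minservers and
# maxservers per the rule: min = average observed, max = 2 x min.
samples="12 18 15 11 14"
minservers=$(echo "$samples" | awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; printf "%d", s / NF }')
maxservers=$((minservers * 2))
# Emit the chdev command that would apply the values (effective at next boot):
echo "chdev -l aio0 -a minservers=$minservers -a maxservers=$maxservers"
```

With the sample counts shown, this derives minservers=14 and maxservers=28.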

Tip 6: Use Raw Devices

When formatting a new physical volume, you first assign it to a volume group and then partition it into logical volumes (devices). You then have the option of creating file systems on the logical volumes, or leaving them in a raw format with no file systems (raw devices).

I/O performance can be improved by setting up the Oracle data files and redo log files as raw devices (see Appendix B). Doing so, you avoid Journalized File System (JFS) overhead and the buffering of data in the AIX file system cache. Reduced JFS overhead results in less CPU activity. With raw devices, Oracle can write directly from the SGA data block buffer to disk.

Using JFS I/O, each time Oracle data is transferred to or from the I/O subsystem, buffered copies are created in both the Oracle buffer cache (SGA) and the AIX file system cache. This double buffering of data increases the total virtual memory requirement of an Oracle instance and can lead to excessive paging in a memory-constrained environment. I/O performance can worsen as the I/O subsystem is more heavily exercised. Also, the pagedaemon has to work much harder flushing modified memory pages to disk.

The authors would also like to point out that file system journaling does not protect user data! Journaling only protects the file system's structures and metadata. In other words, if the system crashes in the middle of a database write, the file system will mount, but the data may be corrupted.

By using raw devices, you might lose JFS advantages for sequential I/O (e.g., sequential read ahead). However, in SAP R/3, most Oracle data accesses result in random I/O operations.

Also, the use of raw devices gives you better control of data placement making it easier to balance disk utilization and avoid fragmentation.

Overall, the experience of the Oracle Corporation on the RS/6000 platform reveals better overall performance and scalability using raw devices compared to file systems. Therefore, in a heavily used database configuration, the use of raw devices is encouraged.

Moving from file systems to raw devices can be done with the AIX dd command.

dd if=<my_old_JFS_file> of=/dev/<my_new_raw_device> seek=1 bs=4k

After the copy, create a symbolic link from the old file name to the new raw device.

ln -s /dev/<my_new_raw_device> <my_old_JFS_file>

Finally, set ownership and permissions on the raw devices.

chown orasid.dba /dev/<my_new_raw_device>

chmod 600 /dev/<my_new_raw_device>

AIX - CPU, Memory, and Paging

Tip 7: Use vmtune To Reduce Paging

On dedicated database servers, excessive paging can cause severe performance degradation. The Virtual Memory Manager (VMM) pages out computational pages (buffers and processes) in an effort to cache large Oracle data files in real memory. Unfortunately, the default VMM operational parameters allow AIX to use as much as 80% of real memory for file cache. Since Oracle already buffers data in its SGA, there is no reason to buffer the same data in the AIX file system cache; caching the cache wastes memory and CPU. The AIX caching of Oracle data files can be avoided altogether with the use of raw devices (see Tip 6).

The vmtune utility can limit the amount of real memory AIX can use for file cache. The vmtune parameters limiting file cache are minperm (-p) and maxperm (-P). Parameter values are in percent of real memory. The minperm default is 20% and the maxperm default is 80%. The vmtune utility can only be called by root.

Ex: vmtune -p 5 -P 10

VMM operational parameters modified by vmtune are only effective until the next boot. To make the change permanent, create a /etc/rc.local shell script and insert the vmtune call.


if [ -f /usr/samples/kernel/vmtune ] ; then
    /usr/samples/kernel/vmtune -p 5 -P 10
fi


Add the following entry to /etc/inittab. The /etc/rc.local shell script will be called by init during boot.

rclocal:2:once:/etc/rc.local >/dev/console 2>&1 # local parameters

For central systems, vmtune changes need to be applied more carefully, especially for systems reading large external flat files. If excessive paging is still observed after changing the vmtune parameters, consider adding an application server and moving the SAP main instance to it (see Tip 26).

The vmtune utility is a Licensed Program Product (LPP) installed in the directory /usr/samples/kernel. If not found, it can be installed from the AIX installation CD (bos.adt.samples).

Tip 8: Paging Spaces Should Be The Same Size And Placed On Separate Disks

Never create multiple paging space logical volumes on the same physical disk. When allocating paging space blocks, VMM allocates four blocks at a time in round-robin fashion from each of the active paging spaces that have space available. Paging space logical volumes should be the same size and placed on different disks to ensure that the round-robin algorithm works effectively and that I/O load is evenly distributed.

The initial allocation of the default paging space logical volume (hd6) is too small. After installation, there is usually enough free space available on hdisk0 to enlarge hd6 first (smit chps) before adding additional paging space logical volumes on other disks. Ideally, a paging space logical volume should be contiguous on disk and may require reorganization (smit reorgvg).

As a rule of thumb, the total paging space should equal 3 x Real Memory or at least 1 GB, whichever is larger. SAP recommends a total paging space of 2-3 GB for optimum performance. The minimum and maximum (worst-case) paging space requirements of an R/3 application server can be determined with the SAP utility sappfpar.
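The rule of thumb can be sketched as follows; the 512 MB real memory figure is illustrative (on AIX, real memory is reported in KB by "lsattr -El sys0 -a realmem"):

```shell
# Total paging space = max(3 x real memory, 1 GB), per the rule above.
real_mem_mb=512
paging_mb=$((real_mem_mb * 3))
[ "$paging_mb" -lt 1024 ] && paging_mb=1024
echo "recommended total paging space: ${paging_mb} MB"
```

For the 512 MB example, 3 x 512 = 1536 MB exceeds the 1 GB floor, so 1536 MB is recommended.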

From the AIX command prompt, run sappfpar to check the instance profile you intend to use.

sappfpar check pf=<Profile name> | more

In the output generated by sappfpar, the minimum paging space, maximum heap space, and worst case paging space requirements are listed.

Total, minimum requirement.....: 169.5 MB (shared memory requirements)

Process local heaps, worst case: 762.9 MB (set by abap/heap_area_total)

Total, worst case requirements : 962.5 MB

Oracle SQL*Net V2


Tip 9: Open Oracle Sockets In "TCP No Delay" Mode

On startup, Oracle opens an operating system socket in "TCP Delay" mode for communication between an R/3 work process and an Oracle shadow process on the database server. In "TCP Delay" mode, TCP/IP uses the "Nagle algorithm" to control data traffic. Nagle does the following:

* If there are no outstanding packets on a socket and a client wants to send a data set that is smaller than the existing segment size, it waits up to 200ms for more data.

* If additional data is sent from the same socket, all the data is collected together to form a larger packet to reduce the number of packets on the network.

* The MTU size of the network adapter determines the segment size that is used for data traffic.

The duration of a data transfer between client and server is nonlinearly dependent on the data volume to be transferred. Put simply, for a relatively small data set, data transfer over the network can take as long as 200ms.

The above-mentioned factors can be controlled at socket level without influencing the rest of the system. To do this, Oracle would have to open the socket in "TCP No Delay" mode (see Appendix C). As a result, any data set put on a socket would be sent immediately, no matter how small. With no TCP/IP delay, average response time would improve.
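As a hedged sketch, SQL*Net V2 typically lets you request this behavior through a protocol configuration parameter; the exact file and syntax vary by SQL*Net release, so verify against your Oracle documentation:

```
# $TNS_ADMIN/protocol.ora - syntax varies by SQL*Net release; verify for yours
TCP.NODELAY = YES
```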

Oracle - Redo Log Files

Tip 10: Isolate Redo Log Files

Oracle writes heavily to the redo log files. Redo logs contain all changes made to tables, indexes, and rollback segments, as well as checkpoint and other administrative information needed to recover the database. Redo log file I/O is sequential. Oracle throughput is heavily dependent upon redo log write performance.

By placing redo log logical volumes on their own disk, seek times are virtually eliminated providing the best possible sequential I/O performance. No other logical volumes should be placed on redo log disk. Allocate redo log logical volumes starting from the outer edge of the disk to take advantage of better data transfer rates.

Tip 11: Use LVM Mirroring For The Redo Log Files

LVM mirroring is faster and more efficient than Oracle multiplexing for redo log files. Use Oracle multiplexing only if high disk utilization requires LVM striping.

Tip 12: Increase The Size Of The Redo Log Files

A log switch occurs when the Oracle log writer fills a redo log file while recording transactions and must switch to another redo log file in order to continue. A log switch forces a database checkpoint at which time Oracle must flush dirty blocks from the SGA to disk to ensure system integrity. When log switches are less frequent, more changes can be applied to the same dirty block before the next flush occurs resulting in less I/O.

Monitor V$LOGHIST for the frequency of redo log switches. Increase the size of your redo log files so log switches occur every 10 to 15 minutes (see Appendix D).

Tip 13: If the Archiver Can't Keep Up, Add Another Set of Log Files

Ideally, the redo log disks should never experience simultaneous read and write operations. Simultaneous operations could result in an excessive number of slow seeks reducing a disk's sequential I/O performance.

Under load, determine if the Oracle archiver is fast enough to save an inactive redo log file before the log writer can switch back to the same disk. If the switch occurs while archiver is still reading from an inactive redo log file, add another set of redo logs (e.g. origlogC). That way, the log writer switches from origlogA, to origlogB, to origlogC, and then back to origlogA giving the archiver plenty of time to complete its process.

Oracle - Rollback Segments

Tip 14: Isolate Rollback Segments

Oracle update performance is heavily dependent upon rollback segment I/O performance. To avoid I/O contention, a rollback tablespace's data file should be placed in its own logical volume and isolated on its own disk. No other logical volumes should be located on a rollback segment disk. Allocate rollback segment logical volumes starting from the outer edge of the disk to take advantage of better data transfer rates.

Tip 15: Increase The Number Of Rollback Segments

Every time SAP modifies the database, rollback information needs to be written to the rollback segments. This information is required for transaction rollback integrity, point-in-time read consistency, and database recovery. If users are performing a high number of simultaneous updates, SAP may experience rollback contention. In order to eliminate contention, the number of rollback segments would have to be increased.

SAP's rule of thumb is one rollback segment for every four SAP work processes.

Start with SAP's recommendation then monitor V$WAITSTAT. If there are any "undo header" or "undo block" waits, add more rollback segments until the undo waits are as close to zero as possible.

Each rollback segment has a transaction table that controls access. Check V$ROLLSTAT to determine the number of waits on a slot for each transaction table. If the hit ratio ((gets - waits) * 100) / gets is less than 99%, add additional rollback segments until contention is reduced. Increase the number of rollback segments by the number of rollback segments experiencing waits.
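A worked instance of the hit-ratio formula; the gets/waits figures are illustrative of what V$ROLLSTAT might report for one rollback segment:

```shell
# hit ratio = ((gets - waits) * 100) / gets, per the rule above
gets=20000
waits=350
ratio=$(awk -v g="$gets" -v w="$waits" 'BEGIN { printf "%.2f", (g - w) * 100 / g }')
echo "transaction table hit ratio: ${ratio}%"
```

Here the ratio works out to 98.25%, below the 99% threshold, so this segment's waits indicate that additional rollback segments are needed.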

Tip 16: Create Multiple Rollback Segment Tablespaces

High rollback activity resulting from a large number of simultaneous updates can lead to high disk utilization. Spreading the rollback I/O load over multiple disks can decrease individual disk utilization.

The Private Rollback Segments (PRS) should be spread over two or more isolated tablespaces on separate disks (e.g., PSAPROLL, PSAPROLL2, PSAPROLL3, ...). The rollback segments of each tablespace will be used alternately in the order in which they are listed in the initSID.ora file. This approach will distribute rollback I/O load equally over multiple disks (see Appendix E).

Oracle SGA

Tip 17: Reduce Redo Log Buffer Contention

The redo log buffers hold database changes before they are written to the on-line redo log files. When the log buffer reaches one-third full (two-thirds full in Oracle 7.3), when a commit occurs, or when a database write takes place, the log writer process writes the contents of the buffer to the active redo log file. It also flushes the log buffer every few seconds if there is no activity.

Redo log buffers operate in a circular fashion and are controlled via internal database latches. SAP can perform several simultaneous update transactions resulting in poor database performance if these buffers are not large enough to keep up.

Check V$LATCH. If the ratio of misses to gets or immediate_misses to immediate_gets for the redo allocation and redo copy latches exceeds 1%, consider tuning the redo buffer.

To reduce redo buffer latch contention:

* Increase the size of the redo log buffer to 1048576 (1 MB) to reduce the number of log buffer allocation retries.

* Decrease the size of the initSID.ora parameter LOG_SMALL_ENTRY_MAX_SIZE to reduce contention for the redo allocation latch on SMP machines.

* Set the initSID.ora parameter LOG_SIMULTANEOUS_COPIES to 2 times the number of CPU's on the database server to reduce contention for the redo copy latches on SMP machines.
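The three recommendations above can be sketched as an initSID.ora fragment; the values assume a hypothetical 4-CPU SMP database server and are illustrative starting points, not universal settings:

```
# initSID.ora - illustrative values for a 4-CPU SMP database server
log_buffer               = 1048576   # 1 MB redo log buffer
log_small_entry_max_size = 0         # push entries to the redo copy latches
log_simultaneous_copies  = 8         # 2 x number of CPUs
```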

Tip 18: Maintain A High Data Buffer Cache Quality

The data buffer cache holds copies of database blocks for tables, indexes, rollback segments, sort data, dictionary data, and clusters. Each data buffer holds one Oracle data block. The size of the data buffer cache is determined by multiplying the number of data blocks by the block size. The more blocks you hold in memory, the better your performance will be. A small data buffer cache can cause excessive I/O if the database writer is forced to frequently flush dirty blocks to disk in an effort to free data blocks for new requests.

For good R/3 performance, it is important to maintain a high data buffer cache quality. A well-tuned Oracle database can have a data buffer cache quality as high as 100%. Check the V$WAITSTAT table for buffer busy waits on "data block". Increase the initSID.ora parameter DB_BLOCK_BUFFERS until buffer busy waits are less than 1% of logical reads. This should produce data buffer cache hit ratios close to 100%. A reasonable size for medium systems is about 200 - 300 MB; for large SMP servers, a data buffer size of 600 - 800 MB might be adequate.
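A worked instance of the cache-quality calculation — quality = (logical reads - physical reads) / logical reads * 100 — with counts illustrative of V$SYSSTAT logical and physical read values:

```shell
# Compute data buffer cache quality from illustrative read counters.
logical=500000
physical=12500
quality=$(awk -v l="$logical" -v p="$physical" 'BEGIN { printf "%.1f", (l - p) * 100 / l }')
echo "data buffer cache quality: ${quality}%"
```

The sample figures yield 97.5%, which would suggest increasing DB_BLOCK_BUFFERS (memory permitting) to push the quality closer to 100%.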

As you increase the size of the data buffer cache, watch for paging. You may have to reduce the AIX file system cache (see Tip 7), or if that's not possible, you might have to reduce the data buffer cache size to avoid paging.

Tip 19: Reduce Recursive Calls

When a SQL statement first arrives at the Oracle kernel, the database parses the SQL statement. The dictionary objects involved in the query are resolved, and the SQL execution path is generated. If the SQL statement refers to an object (such as a table or view) that is not already buffered in the SGA, Oracle executes a SQL statement to query the data dictionary for information about the object. The query of a dictionary table is referred to as a recursive call. A recursive call is similar to a regular call, except that data dictionary tables are queried rather than user tables.

Like regular calls, blocks read from dictionary tables are loaded into the SGA. If the data dictionary information required by a query is already in the dictionary cache of the shared pool, then no recursive call is necessary. A large number of recursive calls is an indication that the dictionary cache of the shared pool is too small. From an application point of view, a recursive call is unproductive work, consuming CPU and I/O resources. Unfortunately, some recursive activity is unavoidable. A healthy ratio of user calls to recursive calls in R/3 is about 4:1.

The library cache, dictionary cache, and session information are all contained within the shared pool. To increase the size of the dictionary cache, simply increase the size of the shared pool with the initSID.ora parameter SHARED_POOL_SIZE. Reasonable sizes for medium systems are about 150 - 200 MB; shared pool sizes for large systems can go as high as 300 MB. If the shared pool size is already greater than 200 MB, only increase it if the value "free memory" in V$SGA is less than 1 MB.

Oracle - Database Administration

Tip 20: Reorganize!!!

When R/3 is installed, most of the Oracle tables and indexes are fully stored within their initial extent. Each tablespace consists of exactly one data file.

When data is added to the database, its status changes. As tables and indexes grow, additional extents are created to make space for new data. This may result in higher data access times. When a tablespace is full (there is not enough freespace to create a new extent), an additional data file must be added.

Adding extents or deleting complete tables causes freespace fragmentation (external fragmentation) in a tablespace. Free storage space in an Oracle data file becomes divided into smaller and smaller units. If contiguous free space is smaller than a requested extent, the space is lost, and cannot be used for storing data. However, newer versions of the Oracle system monitor can automatically merge adjacent areas of free space reducing the frequency of contiguous free space problems.

Internal fragmentation can occur if the fill level of the database blocks develops differently. The fill level of the individual blocks is initially identical. Creating and deleting data causes some blocks to be filled completely, while others may be relatively empty. As a result, space is used inefficiently.

If a data record does not fit into a database block, block chaining occurs. The database must then follow a chain from the first block of the data record to the further blocks. As a result, more time is needed for reading data from the disk.

Since R/3 usually accesses a table entry using an index, the above changes to the database usually have little influence on the time required to access data. However, the changes may adversely affect the run time of a query using a full table scan.

Reorganization can do the following:

* Object data is merged into one extent.

* Tablespace data is merged in one or more data files.

* Freespace fragments are merged into larger freespace segments (external defragmentation).

* The block fill level is evened (internal defragmentation).

* Block chains are resolved.

Tip 21: Avoid Aged Indexes

Dynamic tables with frequent inserts and deletes require frequent reorganization of their indexes. Tables in PSAPBTABI and PSAPSTABI should be watched closely. Due to Oracle index management problems, the indexes of dynamic tables (frequent delete operations) tend to "age": unless you insert a record with the same key as one deleted earlier, Oracle is not able to reuse the "free" entry in the index and might be forced to create new index leaf blocks. This results in index growth even for tables that stay relatively static in size. A larger index forces Oracle to read more index data blocks for the same indexed table access, and it forces more blocks out of the data buffer cache to free up the necessary space.

Reorganize the indexes of the update tables VBHDR, VBMOD, and VBDATA weekly.

Tip 22: Don't Put Data Files Of The Same Tablespace On The Same Disk

A frequently extended tablespace usually indicates a high level of use. Placing the data files of a highly accessed tablespace on the same disk might result in high disk utilization and seek time. I/O load should be balanced evenly over all available disks and controllers (see Tips 1-3).

Tip 23: Move Large And Highly Accessed Tables To Their Own Tablespaces

Tablespaces with disk utilization greater than 15%, which have already been isolated on their own disk and/or striped, may need to have highly accessed tables and indexes moved out to their own tablespaces in order to reduce disk utilization.

Try to keep tablespace size under 2GB to ease administration and reorganization. If a tablespace's size exceeds 2GB, consider moving large tables and indexes out to their own tablespaces.

Tip 24: Isolate The Archive Log File System

The archiving of an inactive redo log file must be finished before the Oracle log writer switches back to the same disk, otherwise log writer performance is severely impacted resulting in overall performance degradation.

Sequential write performance of the archive log file system is critical. Writing of an archive log file should not be interrupted by a seek to another area of the disk. The archive log file system should be isolated on its own disk.

Tip 25: Turn Off Timed Statistics

Enabling timed statistics provides CPU timing information on SQL statements and user sessions at the cost of roughly 6% additional CPU overhead. If your database CPU utilization is low, 6% might be considered negligible. On the other hand, if your database CPU utilization is high, 6% may be very expensive. We suggest you enable timed statistics only for system analysis and performance tuning. Turn it off for normal production operation by setting the initSID.ora parameter TIMED_STATISTICS to FALSE.

SAP - Application Servers

Tip 26: Move The SAP Main Instance Off The Database Server

If the SAP main application instance (e.g., DVEBMGS00) is located on the database server, it's possible to free as much as 15-20% CPU by moving the main instance to another machine. Also, the virtual memory requirement, operating system paging, and context switching are reduced.

Separating the main instance from the database server provides easier maintenance, administration, and tuning of servers with only one type of workload. Dedicate your database server to running only Oracle.

Tip 27: Consider Installing Two Application Instances On The Same Node

When installing a single SAP application instance on a powerful SMP machine with plenty of memory, several work processes are usually allocated to accommodate the hundreds of users expected to log on. While it's possible to allocate a large number of work processes, severe contention for semaphores can occur (e.g., #1-ABAP Program Buffer, #6-Roll Admin, #14-Presentation Buffer), limiting CPU utilization and system throughput. Also, several hundred users executing a wide variety of transactions may overwhelm the ABAP program buffer. The ABAP program buffer size is limited to 256MB (AIX segment size), which in this case may be too small; severe object swapping may occur.

An alternative to a single, big instance is two smaller ones. With two application instances, you have two sets of semaphores and two ABAP program buffers (512MB total). Semaphore contention would be reduced providing better CPU utilization and throughput. The required number of work processes could be split over two instances. Workload can be segregated resulting in more efficient buffer utilization and better cache hit rates.

When allocating R/3 work processes and buffers, the combined allocation should not force operating system paging (see Tip 29).

Tip 28: Update The SAP Kernel To The Latest Available Release

The SAP kernel should be maintained at the latest level available. From release 3.0C to 3.1H, the kernel is backward compatible and can be safely updated without requiring an R/3 upgrade (see Appendix F). The latest R/3 kernel incorporates fixes for problems found in earlier versions; some of these fixes are performance related.

Tip 29: It's OK To Over-Allocate Memory Until You Start Paging

R/3 memory allocation beyond real memory is acceptable until that allocation forces high operating system paging. If application server tuning has created a large SAP memory allocation and high operating system paging is observed, increase the real memory of the node. If the memory of the node cannot be increased, reduce the SAP memory allocation. OS paging has a system-wide impact on performance and response time. Occasional SAP object swaps usually affect individual users with less severity. Sometimes, object swaps can't be avoided as buffers age and become more fragmented. Only cycling the instance can rid SAP buffers of fragmentation. Simply increasing the size of a buffer may not eliminate object swaps; it may just prolong the inevitable.

If a large extended memory requirement is contributing to a SAP over-allocation, there are job scheduling and programming techniques that can be applied to help reduce this requirement.

Tip 30: Maintain Sufficient Extended Memory

You should closely monitor the usage of SAP extended memory. If you run out of extended memory, work processes can be forced into "private mode". When in private mode, a work process is no longer available for normal SAP dispatching. It is dedicated to the current user until the transaction is finished. Afterwards, the work process is restarted. Please refer to the whitepaper `SAP 3.0.x Extended Memory on AIX 4.1/4.2' for more details on R/3 extended memory.

Check SAP transaction ST02 for extended memory utilization; the maximum used size should always be lower than the available size. If the used heap memory size is greater than zero, investigate in more detail. Transaction RZ03>Edit>Views>Memory Management>Absolute Values shows detailed statistics about current memory usage. The number of restarted work processes (dialog and non-dialog) should be equal to zero. If not, work processes were forced into private mode.

Work processes that go into private mode might also influence the wait times observed in ST03. As work processes go into private mode, fewer are available to handle the remaining user workload.

Tip 31: Use Logon Groups

R/3 logon load balancing is used to dynamically distribute R/3 users across application server instances. Logon groups are defined consisting of one or more application instances having similar processing workloads. When logging on, users are evenly distributed across the available application servers of a logon group. This increases the efficiency of the defined groups in terms of performance and system resources.

By grouping users by functional area, SAP buffer requirements may be reduced, helping to eliminate object swaps. Only objects relevant to a specific functional area would be buffered. For example, an "FI application server" would only load FI ABAPs in its program buffer.

Logon groups are installed and maintained centrally within the R/3 System (transaction SMLG). You can define the maximum response time for each application server group, and the maximum number of users that can log on to any particular group server. This means you can set up application servers with improved response time behavior for important workgroups with time-critical transactions. Setting up and using logon groups means that the behavior of each server becomes more predictable allowing you to tune more effectively.

Tip 32: Set ztta/roll_first = 1

By setting the ztta/roll_first parameter to 1, the system will use only the minimum space necessary (platform specific, about 200-300K in AIX) in roll area before going directly to extended memory. This will save time for the memory copy during roll in/out and will also reduce the requirement for roll shared memory and may alleviate contention on the roll administration semaphore. The freed memory can then be allocated to the extended memory.
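In the instance profile this is a one-line change:

```text
# Use only the platform minimum in the roll area,
# then go directly to extended memory
ztta/roll_first = 1
```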

Tip 33: Group Key 2 And Key 6 In The Same Shared Memory Pool

According to SAP, the Dispatcher Administration Table (key 2) and the ABAP Program Buffer (key 6) should reside in the same shared memory pool for performance reasons.

Initially, keys 2 and 6 may be placed in pool 10.


ipc/shm_psize_02 = -10

ipc/shm_psize_06 = -10

If you need to enlarge the ABAP program buffer and pool 10 has already reached 256MB, move the ABAP program buffer and the Dispatcher Administration Table out of pool 10 into their own shared memory segment, if one is available.


ipc/shm_psize_02 = -30

ipc/shm_psize_06 = -30

ipc/shm_psize_30 = 256000000

abap/buffersize = 245000

Remember to decrease the size of pool 10.

SAP - ABAP/4 Programming

Tip 34: Transfer As Little Data As Possible

SELECT Only The Fields You Need

Using an asterisk in the SELECT clause of a SELECT statement requests all fields of the table specified in the FROM clause (e.g., SELECT * FROM KNA1). Most often, the calling program doesn't need all the fields of a table.

Unwanted fields can be avoided by specifying fields in a SELECT clause field list or in the definition of a view. By eliminating extraneous fields, you reduce CPU, memory, network, and I/O overhead. On the other hand, extensive use of field lists (not views) might lead to a large variety of statements. This would influence the performance of the statement cache managed by the database interface.
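A minimal sketch of a field-list SELECT (KUNNR and NAME1 are the customer number and name fields of KNA1; the internal table name and WHERE condition are illustrative):

```abap
* Fetch only the fields needed instead of SELECT * FROM KNA1
DATA: BEGIN OF I_CUST OCCURS 100,
        KUNNR LIKE KNA1-KUNNR,
        NAME1 LIKE KNA1-NAME1,
      END OF I_CUST.

SELECT KUNNR NAME1 FROM KNA1
       INTO (I_CUST-KUNNR, I_CUST-NAME1)
       WHERE LAND1 = 'US'.
  APPEND I_CUST.
ENDSELECT.
```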

Specify All Known Conditions In The WHERE Clause

A SELECT statement without a WHERE clause may result in an expensive index range scan on a client-dependent table. Be aware that SAP will add a WHERE clause specifying MANDT = SY-MANDT.

If you expect to fetch less than 20% of a table, specify all known conditions in the WHERE clause. With a WHERE clause, Oracle can choose the optimum execution plan and select the best index. Index range scans are great if you expect a small result set.

Note: If the full key is known, use SELECT SINGLE (Index Unique Scan).

If you expect to fetch more than 20% of a table, consider doing a full table scan with SELECT ... CLIENT SPECIFIED. This will force Oracle into table scan mode avoiding any index overhead. Also, data blocks are read sequentially instead of randomly resulting in less head movement (seeks) and better data transfer.
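A sketch of forcing a full table scan with CLIENT SPECIFIED (the table and internal table names are illustrative):

```abap
DATA: I_KNA1 LIKE KNA1 OCCURS 0 WITH HEADER LINE.

* Expecting most of the table: read it sequentially,
* bypassing any index overhead
SELECT * FROM KNA1 CLIENT SPECIFIED
       INTO TABLE I_KNA1
       WHERE MANDT = SY-MANDT.
```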

If the entire set of fields retrieved in a SELECT statement is stored in an index, do a full table scan only if you expect to fetch more than 40% of the table. All of the required data can be retrieved from the index without going to the table. Obtaining data directly from an index usually requires significantly fewer physical reads.

If you don't specify all known conditions in the WHERE clause, extraneous rows may be fetched creating unnecessary CPU, memory, network, and I/O overhead. Don't rely on the CHECK statement; be specific.

Tip 35: Avoid Nested Select Loops And Logical Databases

When querying large tables, nested SELECT loops can result in millions of database operations.

For example:
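Sketched with the placeholder tables TABLE1 through TABLE4 used below (the field names are likewise placeholders):

```abap
SELECT * FROM TABLE1.
  SELECT * FROM TABLE2 WHERE FLD1 = TABLE1-FLD1.
    SELECT * FROM TABLE3 WHERE FLD2 = TABLE2-FLD2.
      SELECT * FROM TABLE4 WHERE FLD3 = TABLE3-FLD3.
        " process each TABLE4 row
      ENDSELECT.
    ENDSELECT.
  ENDSELECT.
ENDSELECT.
```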










If the TABLE1 SELECT returns 1,000 rows - TABLE2 SELECT will be performed 1,000 times.

If each TABLE2 SELECT returns 100 rows - TABLE3 SELECT will be performed 100,000 times.

If each TABLE3 SELECT returns 100 rows - TABLE4 SELECT will be performed 1,000,000 times.

If each TABLE4 SELECT returns 10 rows - 10,000,000 single record fetches.

Result: 1,101,001 Opens + 11,101,000 Fetches = 12,202,001 Database Operations

If each database operation took only 10ms, the above example would take about 34 hours to run. Unfortunately, customer-written queries like this are not uncommon.

Also, logical database accesses (GETs) are nested SELECT loops in disguise. As you traverse the hierarchy of a logical database, SELECT-ENDSELECT loops are nested behind the scenes. For example, accessing a table five levels deep in a logical database may result in nested select loops five levels deep. Also, a significant amount of unwanted processing may occur due to code embedded in the logical database for other programs. If you want to write modular code, use function modules, not logical databases.

Instead of nested SELECT-ENDSELECT loops, use:

* joins implemented as views in the SAP Data Dictionary.

* SELECT from a view directly into an internal table, then loop process.

* SELECT ... FOR ALL ENTRIES to replace the innermost nested loop.


1. Create a dictionary view V_TABLES joining TABLE1, TABLE2 and TABLE3.

2. SELECT from view V_TABLES into internal table I_TABLES.


3. Sort I_TABLES.

4. SELECT from TABLE4 FOR ALL ENTRIES IN I_TABLES into internal table I_TABLE4.
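The two resulting array operations can be sketched as follows (V_TABLES, I_TABLES, I_TABLE4, and the field names are placeholders from the steps above):

```abap
* First SELECT: one array fetch from the view (join)
* into an internal table
SELECT * FROM V_TABLES INTO TABLE I_TABLES
       WHERE FLD1 = '...'.

SORT I_TABLES BY FLD3.

* Second SELECT: set-oriented access replaces
* the innermost nested loop
SELECT * FROM TABLE4 INTO TABLE I_TABLE4
       FOR ALL ENTRIES IN I_TABLES
       WHERE FLD3 = I_TABLES-FLD3.
```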







The first SELECT will do an array fetch from a view (join) into an internal table. The second SELECT will UNION all of the resulting SELECT statements from FOR ALL ENTRIES. It will then do an array fetch and individually process each row of the result set. In this example, two array operations should run much quicker than twelve million individual database operations. The use of views and SELECT ... FOR ALL ENTRIES has reduced ABAP run times from hours to minutes, minutes to seconds.

Tip 36: Create Secondary Indexes Where Needed But Be Careful!

Secondary indexes can speed up queries dramatically. The ABAP/4 data dictionary allows 16 secondary indexes for each table. But be very careful! Adding secondary indexes will slow inserts and updates. Also, the database optimizer may take longer to choose the best execution plan.

Create a secondary index when:

* fields are selected without index support resulting in long sequential reads.

* only a small part of a table (< 5%) is selected.

* the WHERE clause is simple (uses only ANDs).

* sorts are done without index support.

When creating a secondary index, follow these rules:

* An index must be selective. An index range scan should not return thousands of rows.

* Don't use too many fields.

* Specify the most selective fields first.

Tip 37: Use Table Buffering

Table buffering can improve performance by storing table data in local buffers (see Appendix G). "Local", in this case, means located on each application server. Requesting data directly from local table buffers would eliminate relatively slow database calls resulting in better response time.

Typical candidates for buffering include customizing and system tables. In certain cases, master data with customizing character can also be buffered.

Tip 38: Avoid Using ORDER BY

An ORDER BY sort in a SELECT statement can place a heavy processing load on the database server, especially if it is not well supported by an index. On a highly utilized database server, ORDER BY sorts can be very expensive in terms of system resources (CPU, memory, and I/O). Fortunately, ORDER BY processing can be avoided altogether by fetching into an internal table and sorting at the application server. Be kind to the database server; do your sorting on the application servers.
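A sketch of the technique (the table, fields, and WHERE condition are illustrative):

```abap
DATA: I_KNA1 LIKE KNA1 OCCURS 0 WITH HEADER LINE.

* Fetch without ORDER BY, then sort on the application server
SELECT * FROM KNA1 INTO TABLE I_KNA1
       WHERE LAND1 = 'US'.

SORT I_KNA1 BY NAME1.
```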

Tip 39: Choose A Suitable OCCURS

Fortunately, you don't need to know the size of an internal table in advance. The SAP kernel will handle memory allocation. The only size limit for an internal table is the application server's virtual memory space.

An ABAP program will run faster if you know the maximum size of an internal table when defining it. The estimated number of rows is specified with the OCCURS parameter (the value after the OCCURS addition in the defining DATA statement of the table).

The estimated size of the table in bytes is the main factor influencing the speed of your program. The size is calculated as the product of the width and number of rows in the table.

When the first table entry is created (APPEND, COLLECT), the SAP kernel allocates enough memory space to hold the whole table or at least a block of approximately 8-16K. Making a good guess at the OCCURS parameter prevents the kernel from allocating unnecessary memory space. The OCCURS value is especially important if you define internal tables within internal tables. If the OCCURS parameter is set too large for an inner table, memory is wasted, since blocks of at least 8K are allocated for each row of the outer table.

Since the kernel allocates memory as needed, use the following guidelines when determining the value of OCCURS:

* The OCCURS parameter does not limit the actual size of a table; it only affects the performance of table operations.

* If you cannot estimate the size of the table, set OCCURS to zero and let the system do it for you.

* If the estimated size of the table is below 8K, set OCCURS equal to the estimated number of rows.

* If the estimated size of the table is larger than 8K, set OCCURS to zero.
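The guidelines above can be sketched as follows (the record layouts are illustrative):

```abap
* Estimated ~50 rows of 80 bytes (~4K, below 8K): specify OCCURS
DATA: BEGIN OF I_SMALL OCCURS 50,
        MATNR(18) TYPE C,
        MAKTX(62) TYPE C,
      END OF I_SMALL.

* Size unknown, or estimated size above 8K:
* set OCCURS 0 and let the kernel allocate as needed
DATA: I_BIG LIKE KNA1 OCCURS 0 WITH HEADER LINE.
```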

Tip 40: Update Locally In Batch

For dialog performance, SAP transactions most often apply updates asynchronously. Asynchronous updates allow the user to continue working while the database is updated in the background. To update asynchronously, function modules are called in "UPDATE TASK".



A logical unit of work can be bundled and committed asynchronously by using several function module calls in update task followed by a COMMIT WORK.
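A sketch of such a bundle (the function module names and parameters are hypothetical):

```abap
CALL FUNCTION 'Z_UPDATE_HEADER' IN UPDATE TASK
     EXPORTING
          HEADER = WA_HEADER.

CALL FUNCTION 'Z_UPDATE_ITEMS' IN UPDATE TASK
     TABLES
          ITEMS = I_ITEMS.

* Both calls are queued (VBLOG) and passed to an update
* work process as one logical unit of work on COMMIT WORK
COMMIT WORK.
```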






When function modules are called in update task, field values and internal tables are written to the database (VBLOG tables). When the COMMIT WORK occurs, the queued function modules and data are passed to an update work process for processing.

For transactions run in dialog, asynchronous updates make sense when trying to provide the end user with the fastest possible response time. For transactions run in batch, end-to-end response time is no longer a concern; throughput is the challenge. If your goal is to maximize batch throughput, it makes sense to update locally instead of asynchronously. By disabling asynchronous update, update log overhead and contention are eliminated. Updates are done directly by the batch work process.

To disable asynchronous update, insert the line "SET UPDATE TASK LOCAL" just prior to the function module call in update task.
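A sketch, again with a hypothetical function module:

```abap
SET UPDATE TASK LOCAL.

CALL FUNCTION 'Z_UPDATE_HEADER' IN UPDATE TASK
     EXPORTING
          HEADER = WA_HEADER.

* The update is now executed directly by this (batch) work process
COMMIT WORK.
```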




When disabling asynchronous update, we strongly recommend that you modify a copy of the SAP standard transaction and submit it only in batch.


If a program submitted in batch is using "CALL TRANSACTION" and the transaction called is using asynchronous updates, the transaction can be called with an update mode `L' making all asynchronous updates local.
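For example (the transaction code and BDC table name are illustrative):

```abap
* Process batch input data, forcing all updates local
CALL TRANSACTION 'VA01' USING I_BDCDATA
     MODE 'N'
     UPDATE 'L'.
```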



In CALL TRANSACTION, specifying `A' or `S' for the update mode does not enable or disable asynchronous updating. If the update mode is set to `S', the batch work process will simply wait for the update work process to finish before continuing. If the update mode is set to `A', the batch work process will continue and not wait for the update work process to finish.

Note: the update mode "L" is not well documented.


Mirror Write Consistency

What is MWC?

Mirror write consistency is a parameter of an AIX logical volume (LV) which can be set to either YES or NO. It's meaningless unless the logical volume is mirrored.

What problem does MWC solve?

A write is in progress to a logical volume that has mirror copies. Before the write completes, power is lost so that the write never completes. When the system reboots, as with an unmirrored logical volume, the data read from the area to which the unfinished write was addressed may be old data, new data or a mixture of the two. The difference that mirroring introduces is that if the data is read twice the two reads may give different results. This is because the two reads may read from different copies of the data. In many environments this is not a problem but some applications would be sensitive to this and this is the reason that MWC has been provided.

How does MWC work?

Near the beginning (outer edge) of each disk, AIX keeps the mirror write consistency cache record. This is a single disk block which is regarded as an array of 1-bit records. Each bit in the block represents the MWC state of one logical track group (LTG) on the disk. An LTG is 128K in size.

When a write must be performed to a mirrored logical volume which has MWC turned on, the LVM first checks the in-memory version of the MWC record. If this says that any of the LTG areas affected by the write are in sync, then the LVM must update the in-memory version of the record and flush it to disk, marking the new LTGs as out of sync. The write of the MWC record involves a seek to the beginning of the disk. Once the MWC record has been flushed out to disk, the actual mirror writes can take place. This will obviously involve a seek back to the data area of the disk.

When the writes of the mirrored data complete, the LVM does not immediately update the MWC record on disk. That is handled later since it is not dangerous to have areas marked as out of sync when really they are in sync. If the system should crash, when it reboots, the MWC record on each disk is examined and if any LTGs are found to be marked out of sync they are synchronized by reading either copy and writing it to the other. The copy to read is chosen randomly since each copy of the data is equally valid (or invalid).

Implications of turning on MWC

If MWC is turned on for a logical volume, performance of writes to that logical volume will be severely impacted because each write will result in a seek to the beginning of the disk and a 512 byte write before seeking to the data to be written. If it can be determined that MWC can be turned off then a substantial performance gain will result.


Installing the ORACLE Database on Raw Devices



The database should only be installed and handled by personnel familiar with raw devices.

Installing an ORACLE database on raw devices increases the complexity of the R/3 installation, and increases the complexity of handling the database instance.


Before you install your ORACLE database with raw devices, please check whether it would not be better to increase performance by transferring the database instance to a separate host or by adding processors to your host.

The administration of a system which combines a file system with raw devices must be handled very carefully.

The installation of the ORACLE database on raw devices is only recommended for large databases.

This means:

* If the database server has 1 CPU -> install on file systems

* If the database server has 2 - 4 CPUs -> both versions can be considered

* If the database server has more than 4 CPUs -> we recommend installing on raw devices.

Before you start to install the R/3 System, read this chapter for information on how to prepare the normal installation procedure.

The person conducting the installation must be familiar with a standard ORACLE database configuration (redo logs, security, performance issues and disk configuration as described in the section "Overview: Database Installation" in the main manual).

As opposed to a standard (file system) installation, the ORACLE database (redo logs, ORACLE data and control information) uses raw devices here.

The ORACLE database raw devices are character devices accessed via links after the installation. Each tablespace can be extended by additional raw devices (this might be compared with creating several data files for each tablespace when data files are used).

The links for the database log files are located in the directories



If the ORACLE feature for mirroring logs is used, the following directories contain the links to the character devices for the mirror logs:



All other links (e.g. system_1, roll_1, temp_1) are located in the directory


As mentioned above, these directories contain only links, so they need only a few bytes of space. The actual space required for the data, control data and logs of the database is located on the raw devices.

The naming convention for the links is similar to that of ordinary files and can be viewed in the create_RAW_links file that is written by R3INST (see description below).

Installation Procedure

The following steps need to be carried out in addition to those that are normally required for the installation of file system based databases.


Before you start the installation, you must copy the file /<CD mount>/SAPFS.TPL from the CD SAP KERNEL to your installation directory <INSTDIR>.

Edit the file <INSTDIR>/SAPFS.TPL with any editor. In the section [DB-OPTIONS]

change the line:




(note that the spaces are mandatory)

Then start R3INST as usual. When you are asked:


*                 No                 Yes                    *
*                                                          *
* Do you want to change SAPFS.TPL to change TABLESPACE     *
* sizes, or change selected log mirroring mode?            *
*                                                          *
*-------------- R/3 Release 3.0x (c) SAP AG --------------*


you can increase tablespace sizes or change the mirroring mode.

Note that the SAPDATA numbering in the <INSTDIR>/SAPFS.TPL file still appears, but is not used. The size of each tablespace can be modified. The sum of the current sizes is given under <DBDATA>/sapraw.


Ignore the values for the log and mirror log files, because the log files are created as raw devices.

The size of /oracle/<SAPSID>/sapraw is not the final size. The correct size of the raw devices is calculated when the link script is created (see below).

After modifying the size, the following question is shown on the screen (see the section "Creating UNIX Users" of chapter "Installing the Central Instance" in the main manual):


*            Yes            Skip            Exit           *
*                                                          *
* Shall unix users be created if required ?                *
*                                                          *
*-------------- R/3 Release 3.0x (c) SAP AG --------------*


Then interrupt the installation using the Exit function.

In the next step, the link script is created. Restart R3INST and select:

Continue Interrupted Installation -> Cancel the Installation -> DB-TOOLS -> Create Link Script for RAW device-links.

The link script file create_RAW_links is then created in the directory <INSTDIR>.

The link script has the following contents:


#on sysname=<OS System> nodename=<hostname> release=<release> #version=A machine=<machine id>

#at <Date>

#onto file create_RAW_links


ln -s <RAWDEV>/<SAPSID>cntrl1 /oracle/<SAPSID>/sapraw/cntrl1.dbf

# SIZE : 2000000 bytes

ln -s <RAWDEV>/<SAPSID>cntrl2 /oracle/<SAPSID>/sapraw/cntrl2.dbf

# SIZE : 2000000 bytes

ln -s <RAWDEV>/<SAPSID>cntrl3 /oracle/<SAPSID>/sapraw/cntrl3.dbf

# SIZE : 2000000 bytes

ln -s <RAWDEV>/<SAPSID>_g11m1 /oracle/<SAPSID>/origlogA/log_g11m1.dbf

# SIZE : 22000000 bytes

ln -s <RAWDEV>/<SAPSID>_g13m1 /oracle/<SAPSID>/origlogA/log_g13m1.dbf


The required raw device sizes are shown as a comment for each link.


Add at least 1 MB per raw device to your disk space requirements, or to those in SAPFS.TPL, for administrative purposes.

Otherwise, the raw devices might be slightly too small and the errors ORA-01119 "error in creating database file <file>" and ORA-07245 "sfccf: unable to lseek and write the last block" might occur at a later time.

Exit R3INST and copy the link creation script to the sapraw directory with:

cp <INSTDIR>/create_RAW_links /oracle/<SAPSID>/sapraw

Modify the script /oracle/<SAPSID>/sapraw/create_RAW_links to suit your installation requirements.

In every line beginning with ln -s, replace the part after ln -s, for example <RAWDEV>/<SAPSID>cntrl1, with the real raw device name.


If you use logical volumes to create the raw devices, on some operating systems you replace the part after ln -s with /dev/<VGName>/R<lvName>. If the raw device name cannot be determined (e.g. <RAWDEV>/<SAPSID>cntrl1), the link must be set to the real raw device name.


If you intend to use redo log mirroring, you have to add the following entries to the create_RAW_links script.

When raw devices are being used, the default for R3INST is hardware mirroring (replace the part after ln -s with the real raw device name and <SAPSID> with the SAP system ID):

ln -s <RAWDEV>/<SAPSID>_g11m2 /oracle/<SAPSID>/mirrlogA/log_g11m2.dbf

ln -s <RAWDEV>/<SAPSID>_g13m2 /oracle/<SAPSID>/mirrlogA/log_g13m2.dbf

ln -s <RAWDEV>/<SAPSID>_g12m2 /oracle/<SAPSID>/mirrlogB/log_g12m2.dbf

ln -s <RAWDEV>/<SAPSID>_g14m2 /oracle/<SAPSID>/mirrlogB/log_g14m2.dbf

The raw device files and the redo log files have the same size.

Create the required raw devices with the sizes described in the create_RAW_links script file.

If the required sizes do not correspond to the exact size of the raw devices because the extent size of logical volumes is defined as 4 MB, round the raw device size up to the next multiple of the extent size.

Execute the script create_RAW_links as UNIX user root:

sh /oracle/<SAPSID>/sapraw/create_RAW_links


Change the owner of the links and of the raw devices to ora<sid>:dba.

Restart R3INST and continue the installation using the same procedure as for file system installations.


R/3 note no. 72638



Number 0072638

Version 0009 from 07.01.1998

Status Released for customer

Set by Juergen Kirschner on 07.01.1998

Language E

Short text Long runtimes for individual database calls

Administrator Thomas Raupp

Component BC-DB-ORA Oracle


Long text


Database accesses from an application server take significantly longer than corresponding accesses on the database server. A more detailed analysis of the SQL statements in the SQL trace (Transaction ST05) shows that a considerable number of 'Fetch', 'Insert' or 'Update' operations require at least 200 milliseconds. If you carry out a corresponding analysis in the database server, the same statements require considerably less time (<10ms).

This behavior first occurred with SQL*Net V2 and Oracle release 7.2.3.

Apply the patches listed below to solve the problem with 7.2.3; for 7.3.3, see the separate section below.

Additional key words


Cause and preconditions

The cause of this problem is that ORACLE opens the operating system socket for the communication between the client process (R/3 work process) and the ORACLE shadow process in the database server in the "TCP Delay" mode. This mode leaves it to the TCP/IP implementation on the operating system to delay sending half-empty TCP packages (so that the package may be filled). The wait cycle is approximately 200ms long.

The duration of a data transfer between the client and server is nonlinearly dependent on the data volume to be transferred (for example, on the results of a 'Fetch' operation). Putting it more simply, this means that for relatively small datasets, the data transfer can take 200ms.

This behavior can basically be observed on both SQL*Net V1 and SQL*Net V2. The difference between these SQL*Net versions is that this effect occurs for smaller datasets in SQL*Net V2 (from 2K) and more frequently than for SQL*Net V1 (effect is only visible from approximately 4K). This is due to the fact that SQL*Net V1 works with an internal 4K buffer, whereas SQL*Net V2 uses a 2K buffer.

As standard, there is currently no way to configure SQL*Net, either V1 or V2, so that the sockets are opened with TCP_NODELAY. Furthermore, it is only possible to change the 2K blocking factor in SQL*Net V2 with ORACLE Version 7.3.3.

Up to now this performance problem was mainly observed on HP-UX but can also occur on other platforms.


In the meantime, there is a patch for SQL*Net V2 from ORACLE which allows the sockets to be opened with TCP_NODELAY. However, this patch is only available for Version 7.2.3!

Unfortunately, for this patch to be effective for the communication from the client to the server (thus, for example, for insert or update operations), it is not sufficient simply to import the patch in the database, but the R/3 kernel must also be linked with the patched ORACLE libraries. This has already been done with the Standard 3.1G kernels for HP UX. For all other platforms, this is the case with the 3.1G kernel as of patch level 26.

Preconditions for the patch are

1. Oracle Server 7.2.3

2. SQL*Net V2

3. R/3 kernel 3.1G (at least patch level 26)



The ORACLE patch only solves a very specific performance problem of SQL*Net. It cannot be seen as a solution for every type of performance problem. A precise analysis of the respective problem is absolutely necessary, and the patch should only be applied if the specified symptoms cause a significant deterioration in the remote database access when compared to the local database access.


To activate the patch, carry out the following steps:

1. If you have used SQL*Net V1 so far, you must first switch to SQL*Net V2. To do this, proceed as described in Note 61041.

2. Download the corresponding ORACLE patch for your hardware platform from sapservX. Follow the instructions in the respective README files when importing the patch.

AIX : ~ftp/general/3rdparty/oracle/AIX/7.2.3/bug330388


Caution: See Note 48087, bug330388 must be imported BEFORE bug475453

3. After applying the ORACLE patch, download the file

~ftp/general/3rdparty/oracle/ALL_OS/protocol.ora from sapservX and move the file to $TNS_ADMIN (/usr/sap/trans). Make sure that you can access this file from the R/3 System (as the user <sid>adm) in read mode.

4. Download the latest 3.1G kernel for your hardware platform from sapservX and import it according to the description in Note 72342. You can also use the 3.1G kernel without any problems in R/3 Releases 3.0C/D/E/F. However, a database upgrade to ORACLE 7.2.3 is required for the Start Release 3.0C.



As of Release 7.3, no patch is necessary to configure the nodelay behavior.

However, in Release 7.3.3 the file protocol.ora is only read if it is in the directory $ORACLE_HOME/network/admin.

Workarounds for 7.3.3

1. On EVERY application server:

- as user <sid>adm, create the directory /oracle/<SID>

- set $ORACLE_HOME to /oracle/<SID> and enter this in the corresponding profiles

- create the directory network/admin under $ORACLE_HOME and copy protocol.ora into this directory


2. set the SDU size for communication



(SDU = 32768)






(SID_NAME = ...

SDU size is the communication size of SQL*NET V2. You can set it as of Release 7.3 (in bytes).

We recommend that you use solution 1 since it displays the same behavior as under 7.2.3.

Note that in any case an installation specific to each application server will be required in the near future (with the migration to Oracle8).

Source code corrections

Internal system environment
Release 31G to 31G
Release 300 to 30F

External system environment
OP system: AIX
DB system: ORACLE


Enlarging the Redo Logs - Moving Redo Logs to Other Disks

During a standard installation, four redo log members are installed with a size of 20 MB each. The size of the redo log files directly affects database performance and security.


Large redo log files give better performance, but there are three disadvantages:

* If the redo log file currently in use is lost, the amount of lost data is larger.

* If an instance recovery is necessary, it takes longer.

* Database startup is slower. For example, the startup of a database with 80 MB redo logs could take 20 minutes.
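The trade-off can be made concrete by estimating the time between log switches, since larger logs mean fewer switches and checkpoints. A back-of-the-envelope sketch; the redo rate below is an assumed figure for illustration, so measure your own system's rate before sizing:

```shell
#!/bin/sh
# Rough estimate of log switch frequency: log size divided by redo rate.
LOG_SIZE_MB=80       # planned redo log size
REDO_RATE_MB_MIN=10  # assumed average redo generation rate (MB per minute)
SWITCH_MIN=`expr $LOG_SIZE_MB / $REDO_RATE_MB_MIN`
echo "Minutes between log switches: $SWITCH_MIN"
```

With these assumed numbers a switch happens roughly every 8 minutes; if the interval comes out much shorter on your system, larger logs are worth the slower startup.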

Procedure for enlarging redo logs:

1. Force four log switches

As user ora<sapsid> force 4 switches:

sqldba lmode=y

sqldba> connect internal

sqldba> startup restricted

sqldba> alter system switch logfile; (4 times)

Make sure that all the redo logs are archived.

2. Backup the whole database. (This step is necessary to have a restart point if any of the following actions should fail.)

3. Change the database mode to NOARCHIVELOG (see Archive Manager in R3INST)

4. Startup the database in exclusive mode.

sqldba lmode=y

sqldba> connect internal

sqldba> startup restricted

5. Drop the redo logs:

sqldba> alter database drop logfile group <group_number>;

For example:

alter database drop logfile group 101;

Remove the corresponding UNIX file(s).

6. Create new log members with the new size (or at a new file location).

Without ORACLE mirroring:

alter database add logfile group 101
'/oracle/<SID>/origlogA/log_g101_m1' SIZE 80M;

With ORACLE mirroring (a second member per group; the mirrlogA path follows the standard SAP file layout):

alter database add logfile group 101
('/oracle/<SID>/origlogA/log_g101_m1',
'/oracle/<SID>/mirrlogA/log_g101_m2') SIZE 80M;

Naming convention: log_g<group_number>_m<member_number>, where the group numbers are 101, 102, 103 and 104.

7. Repeat step 5 and 6 for the log groups 102, 103 and 104.

If you try to drop the logfile group currently being written to, you get an ORACLE message:

cannot drop log at this time; log in use

In this case, force a log switch:

alter system switch logfile;

8. Change the database mode to ARCHIVELOG (see Archive Manager in R3INST)

9. Backup the whole database. (Since any change of the redo log files results in a change of the structural database description in the control files, this backup will be used as a restart point for any future database recoveries.)
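The drop/add cycle for groups 101 to 104 is mechanical, so the SQL can be generated rather than typed by hand. A sketch under the assumptions noted in the comments; review the generated file before running it through sqldba:

```shell
#!/bin/sh
# Generate the drop/add statements for log groups 101-104 (no mirroring).
# SID, size, and file paths are assumptions following the example above
# and must be adapted to your installation.
SID=C11        # assumed SAP system ID
SIZE=80M       # new redo log size
: > resize_redo.sql
for GROUP in 101 102 103 104; do
    echo "alter database drop logfile group $GROUP;"              >> resize_redo.sql
    echo "alter database add logfile group $GROUP"                >> resize_redo.sql
    echo "  '/oracle/$SID/origlogA/log_g${GROUP}_m1' SIZE $SIZE;" >> resize_redo.sql
done
```

Remember that the corresponding UNIX files still have to be removed by hand after each drop, and that a group currently in use needs a log switch first.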


Creating Additional Rollback Tablespaces

Rollback segments can be spread over several rollback tablespaces. For example:
PSAPROLL contains PRS_1, PRS_2, PRS_3, ...

PSAPROLL2 contains PRS2_1, PRS2_2, PRS2_3, ...

PSAPROLL3 contains PRS3_1, PRS3_2, PRS3_3, ...

initSID.ora entry:

rollback_segments = (PRS_1,PRS2_1,PRS3_1,PRS_2,PRS2_2,PRS3_2,PRS_3,PRS2_3,PRS3_3, ...)
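The interleaved order of the rollback_segments entry is easy to get wrong by hand; a small generator for the naming scheme above, where three segments per tablespace is an assumed count:

```shell
#!/bin/sh
# Build the interleaved rollback_segments list for initSID.ora:
# PRS_1,PRS2_1,PRS3_1,PRS_2,... (prefixes as in the naming scheme above).
N=3                                  # segments per tablespace (assumed)
LIST=""
i=1
while [ $i -le $N ]; do
    for PREFIX in PRS PRS2 PRS3; do  # one prefix per rollback tablespace
        LIST="$LIST,${PREFIX}_$i"
    done
    i=`expr $i + 1`
done
LIST=`echo "$LIST" | sed 's/^,//'`   # strip the leading comma
echo "rollback_segments = ($LIST)"
```

Paste the output line into initSID.ora; the round-robin order spreads consecutive transactions across the tablespaces.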


R/3 note no. 79376 abbreviated for Oracle and AIX



Number 0079376

Version 0016 from 19.12.1997

Status Released for customer

Set by Elke Doering on 19.12.1997

Language E

Short text Installation 3.1H kernel with 3.0C/D/E/F,3.1G DB

Administrator Ulrich Auer

Component BC-KRN-CST Client/Server Technology


Long text


An error in the SAP kernel of Releases 3.0C, 3.0D, 3.0E, 3.0F, 3.10 or 3.1G is corrected with Release 3.1H; however, the customer cannot execute an R/3 upgrade directly.

Additional key words

Downward-compatible kernel, DCK

Cause and preconditions


The kernel delivered by SAP in Release 3.1H is downwardly compatible with Releases 3.0C, 3.0D, 3.0E, 3.0F, 3.10 and 3.1G. Therefore, you can correct errors in the 3.0C/D/E/F/3.10/3.1G kernels by replacing the kernel with a 3.1H kernel; an R/3 upgrade is not necessary.

Important: Replace the kernels on ALL application servers of the affected system.

The general recommendation from SAP that you keep as up to date with the upgrades as possible is not affected by this downwards-compatible kernel, since it does not correct errors in the applications.

Definition of terms: In this case, 'kernel' refers to all executable programs contained in /sapmnt/<SAPSID>/exe and not just the executable disp+work.

After installing a 3.1H kernel, you must

o still apply the Hot Packages for your R/3 Release for corrections to ABAP/4 Repository objects (that is, apply the same Hot Packages as before);

o only use 3.1H patches for kernel corrections in future.

Caution: If the system has Release 3.0D or 3.0E and you are working with MM-SRV, you must implement the corrections attached to Note 88515.


o If you are using DBMS ORACLE under UNIX and the initial release is 3.0C, the DB must be upgraded to ORACLE Version 7.2.3 before an SAP 3.1H kernel can be applied. There is a guide and the software for this in the 3.1H upgrade.

Also read Note 52041.

At the end of September/start of October, it is planned to release ORACLE 7.3.3 with the 3.1H kernel for all R/3 releases since 3.0D.

o If you are using the operating system AIX, you must ensure that you are running at least AIX version 4.1.3.

o Make sure that the languages and character sets (code pages) are correctly set. Older kernels often did not notice missing or inconsistent settings; they then either acted correctly or incorrectly, but did not cause a termination. New kernels no longer accept contradictory settings. For further details read Note 83196. Check the following:


- The list of languages used in the profile parameter.

- The number of the character set that your system uses: table TCPDB, report RSCP0004, Note 15023.

- The territory, if you want to use English or German outside Western Europe, America or Africa: table TCP0D, report RSCP0020, Note 42305 (and 45790 if the database still has Release 3.0D or 3.0E).


You can install a 3.1H kernel from a regular 3.1H kernel CD by carrying out the following actions as user <sapsid>adm (or <SAPSID>OFR under AS/400) on the central instance host and then on all application servers with local executables:

o Stop the R/3 System (caution: do not stop the database!)

o Stop saposcol

Under UNIX:

cd /usr/sap/<SAPSID>/SYS/exe/run

./saposcol -k

rm saposcol

o Exit the SAProuter.

o Insert the kernel CD and mount it.

o Import the new kernel from the CD:

- Under UNIX:

Change to the directory /sapmnt/<SAPSID>/exe and save the old kernel with the following commands. (Caution: This is absolutely necessary for DB2 Common Server! Sections of the old kernel are copied back into the directory /sapmnt/<SAPSID>/exe when reprocessing with the shell script.)

mkdir oldkernel
cp -p run/* oldkernel
su - root
chown <sapsid>adm /sapmnt/<SAPSID>/exe/*

The directory "oldkernel" is deleted during the next regular upgrade.

Unpack the new kernel with the following commands:

/<sapcd>/UNIX/<OS>/CAR -xvf /<sapcd>/UNIX/<OS>/SAPEXE.CAR
/<sapcd>/UNIX/<OS>/CAR -xvf /<sapcd>/UNIX/<OS>/SAPEXEDB.CAR

Log on as the user <sapsid>adm and execute the following commands:

su - root
cd /sapmnt/<SAPSID>/exe


o Import the frontend software from the CD. (This is only absolutely necessary for initial Release 3.0C.)

- Under UNIX, you can download the frontend software from the presentation CD with the following command:

/<sapcd>/GUI/UNIX//R3UP readunixgui

o Very important: For individual executables (particularly disp+work), errors were corrected subsequently. Ensure that you apply these "kernel patches". They are available on SAPSERVX under general/R3server/patches/rel31H/OS/OPS/DBMS/* where OS = UNIX, NT or OS400.

See Note 19466 (Downloading kernel patch from SAPSERVX) or Note 49365 for AS/400.

o Only for AIX with ORACLE: If the users <sapsid>adm and ora<sapsid> have the environment variable MALLOCTYPE set to 3.1 in the user profiles .dbenv[_<hostname>].sh and .dbenv[_<hostname>].csh, delete this entry from the profiles and log on again.
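To find the entries before deleting them, something like the following can be used; the home directory paths passed in are assumptions for an example SID "C11" and must be adapted:

```shell
#!/bin/sh
# List the environment files that still set MALLOCTYPE, so the entries
# can be removed by hand.
find_malloctype() {
    # $1 = home directory of <sapsid>adm or ora<sapsid>
    grep -l MALLOCTYPE "$1"/.dbenv*.sh "$1"/.dbenv*.csh 2>/dev/null
}

find_malloctype /home/c11adm || :    # assumed path; empty output = nothing to remove
find_malloctype /home/orac11 || :    # assumed path
```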

o Reimporting additional programs

If you have installed additional programs in the executable directory, such as the RFC library, CPIC library, etc., these must be reinstalled from the CD "Presentation". To do this, proceed as follows:

- Under UNIX:

Execute the following commands as the user <sapsid>adm:

1. cd /usr/sap/<SAPSID>/SYS/exe/run
2. /<CD-Dir>/SDK/UNIX/<OS>/CAR -xvf /<CD-Dir>/SDK/UNIX/<OS>/<additional package>.CAR

o Now restart the R/3 System.

o Now restart the SAProuter.

o For initial Releases 3.0C or 3.0D: According to Note 47396, include an empty form routine F4_SELTAB in the ABAP/4 program RSDBRUNT, to prevent runtime errors for F4 on selection screens.

o Only for initial Releases 3.0C and 3.0D: After importing the kernel, the following message occurs in the system log:

'Load format of <program_name> invalid.'

You can ignore this message, as the load format is regenerated.

o Only for initial Releases 3.0C and 3.0D: In 3.0C and 3.0D, it was possible for Batch/CPIC user IDs to also log on online via RFC and then execute normal screens and transactions. This security gap is closed with this kernel; such connections therefore require a dialog user ID. For details, see Note 49730.

Resetting to the old kernel

If the system response time after installing the downwards-compatible kernel does not meet your expectations, the old 3.0C/D/E/F or 3.10/3.1G kernel can be activated again:


1. Stop the R/3 System (see above)

2. Recopy the old kernel

3. Reset profile changes

4. Start the R/3 System

You do not have to reset the database upgrades (DB patches) or the modifications that were executed in the system.

Only for initial Releases 3.0C and 3.0D: In Release 3.0E, the algorithm for password encryption changed. This means that the password must be reset for all users who were created in the system with the new kernel or whose password was changed under it.

Note for future upgrades

Only for initial Releases 3.0C, 3.0D or 3.0E:

o Up to Release 3.0F, the kernel is imported during a normal upgrade from the upgrade CDs, regardless of whether the current kernel is newer or not. The system must still be operated with the 3.1H kernel. If you execute an upgrade to 3.0D/E/F with initial Release 3.0C, 3.0D or 3.0E, proceed as follows: copy the 3.1H kernel to /usr/sap/put/exe after the PREPARE phase.

o From Release 3.1G, the downward-compatible kernel is retained during a regular upgrade if the active version is newer than the one on the upgrade CD.

Source code corrections

Internal system environment
Release 31G to 31H
Release 30C to 30F


Table Buffering

Only transparent tables and pooled tables can be buffered. Buffering is not possible for cluster tables.

The following two points indicate when a table should not be buffered:

* Data read by the application must always be up-to-date. Delays due to synchronization are unacceptable.

* Table data is frequently changed. In this case the work involved in synchronization can be greater than the improvement in performance gained by buffering. In general, buffering a table is not a good idea if more than one percent of the accesses involve changes.
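The one-percent rule above can be checked mechanically against access counters; a sketch with made-up counter values (transaction ST10, the table call statistics, is the usual source for such counts):

```shell
#!/bin/sh
# Apply the one-percent rule: do not buffer if more than 1% of all
# accesses are changes. The counts below are illustration values only.
READS=120000       # assumed read accesses to the table
CHANGES=1500       # assumed change accesses to the table
# changes per 10000 accesses; a value above 100 means more than one percent
RATIO=`expr $CHANGES \* 10000 / \( $READS + $CHANGES \)`
if [ $RATIO -gt 100 ]; then
    echo "change rate above 1 percent: do not buffer"
else
    echo "change rate within 1 percent: buffering may pay off"
fi
```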

Whether and how a table is buffered is defined in its technical settings:

* Buffering not allowed: Buffering is not allowed for this table, for example because application programs always require the most up-to-date data from the table or because it is too frequently changed.

* Buffering allowed but switched off: There is no commercial or technical reason not to buffer the table. Applications accessing the table run correctly whether it is buffered or not. Buffering for the table is not activated in the standard profile because it is not possible to estimate the table size and the access profile (frequency of the different ways of accessing the table). In the relevant installation buffering can be activated at any time by specifying a buffering type.

* Buffering active: The table is to be buffered. How this is achieved must in this case be determined by specifying a buffering type.

Tables can be buffered one of three ways in R/3:

* Full buffering: With full buffering, either the whole table or none of the table is located in the buffer. When a read access takes place, the entire table is loaded into the buffer.

* Generic buffering: When you access a record from a table that is generically buffered, all records with corresponding generic key fields are loaded into the buffer. In other words, the generic areas of a table are fully buffered. Within the buffer, the generic areas are treated as objects in their own right (like fully buffered tables).

* Single-record buffering: Only those records of a table actually being accessed are loaded into the buffer. This saves storage space in the buffer. However, this type of buffering involves more administrative tasks than full buffering. In addition, many more database accesses are needed to load the records of the table.

Tables with single-record buffering are located in the single-record table buffer TABLP; fully and generically buffered tables are both located in the full table buffer TABL.

Full buffering is recommended for the following:

* Tables up to 30 KB in size. If a table is accessed frequently, but all accesses are read accesses, this value can be exceeded. However, you should always pay attention to the buffer utilization factor.

* Larger tables with several frequently accessed rows. However, if the application program is able to formulate an extremely selective WHERE condition for these multiple accesses using a database index, it may be advisable to dispense with buffering.

* Tables where frequent attempts are made to access data not contained in the table (resulting in a "No record found" message). With full buffering, all records of a table are always contained in the buffer. A direct decision can then be made as to whether or not the table contains a record for a specific key.

When considering whether a table should be fully buffered, you should take into account the size of the table, the number of read accesses, and the number of write accesses. Tables best suited to full buffering are small and are subjected to a large number of read accesses, but are rarely written to.

A table should be buffered generically if only certain generic areas of the table are generally needed in an application server.

Generic buffering is recommended especially for the following tables:

* Client-dependent, fully buffered tables are automatically buffered generically (even if full buffering was explicitly selected). The client field is the generic key. It is assumed that work is not being carried out in all clients simultaneously on an application server.

* Language-dependent tables are another example where generic buffering is sensible. In general, only records of one language will be needed for an application server.

The individual generic areas are treated as independent, fully buffered tables.

Generic buffering only makes sense if the corresponding table is accessed with fully qualified generic key fields. If a field of the generic key is not assigned a value in a SELECT statement, it is read directly from the database, circumventing the buffer.

If you access a generic area that is not yet buffered with a fully specified generic key, the area is loaded into the buffer with a database call. If the table does not contain any records in the specified area ("No Record Found"), the area in the buffer is marked as non-existent. This eliminates further attempts to access the area.

In generic buffering, choosing an appropriate generic key is very important.

If a generic key is too short for a large table, very few large areas will be held in the buffer. When an access is made, too much data may then be loaded to the buffer.

If the generic key is too long, many small generic areas are buffered. If the full generic key is not specified, requests will circumvent the buffer resulting in database calls. If there are only a few records in each generic area, it may be more efficient to fully buffer the table.

Single-record buffering is recommended for large tables with a small number of frequently accessed records. The total size of records accessed should be between 100 and 200 KB.

In single-record buffering, each SELECT statement without the suffix SINGLE proceeds straight to the database. This applies even if the complete key is specified in the SELECT statement. In single-record buffering, therefore, the buffer can be used only for accesses with SELECT SINGLE.

If you access a record that is not yet buffered with SELECT SINGLE, there is a database call to load the record. This record is marked as nonexistent in the buffer if the table does not contain a record for the specified primary key ("No Record Found"). This prevents further attempts to access the record.