IBM Books

Planning Volume 2, Control Workstation and Software Environment


Hardware overview

The basic hardware components that can comprise an SP system are processor nodes, frames, switches, extension nodes, a control workstation, and network connectivity and I/O adapters, optionally supplemented by SP Expansion I/O Units. Each component is described in the sections that follow.

These components connect to each other through the SP administrative local area network (LAN). That network might be called the SP Ethernet admin LAN, the SP LAN, or the SP Ethernet. The SP nodes connect to your existing computer network through another LAN, making the SP system accessible from any network-attached workstation.

Hardware is described in Volume 1.

Keep in mind that this is merely a high-level overview explaining some physical features used as points of reference in later discussions. Each type of hardware has its own set of requirements. Be sure to read the book IBM RS/6000 SP: Planning Volume 1, Hardware and Physical Environment for physical specifications, connectivity, and requirements.

Figure 1 illustrates a basic SP suitable for parallel and serial batch technical computing in a departmental setting.

Figure 1. Basic SP configuration


Processor nodes

SP processor nodes are RS/6000 computers mounted in short or tall SP frames. Other IBM RS/6000 and eServer pSeries computers that are not mounted in an SP frame can connect to the SP system and function logically like SP processor nodes.

SP processor nodes, those mounted in SP frames, are available in three types: thin nodes, wide nodes, and high nodes. These are sometimes called SP rack-mounted nodes. The frame spaces into which nodes fit are called drawers. A tall frame has eight drawers, while a short frame has four drawers. Each drawer is further divided into two slots. One slot can hold one thin node. A single thin node in a drawer (one that is not paired with another thin node in the same drawer) must occupy the odd-numbered slot. A wide node occupies one drawer (two slots) and a high node occupies two drawers (four slots). The SP system is scalable from one to 128 processor nodes, which can be contained in multiple SP frames in standard configurations. The maximum number of high nodes supported ranges from 64 to 128, depending on which high nodes you have. Systems with from 129 to 512 nodes are available by special bid.

SP-attached servers are nodes that are not mounted in an SP frame. Generally, they are 24-inch or 19-inch rack-mounted nodes; some are in physical units that might resemble an SP frame. They connect directly to the control workstation and to the SP Ethernet administrative local area network (LAN). Some have limited hardware control and monitoring from the control workstation because they have no SP frame supervisor or SP node supervisor; others have hardware control and monitoring capabilities comparable to an SP frame. Except for the physical differences, after they are installed and running the PSSP software, they function just like SP processor nodes and interact with the other nodes in the system. With one exception, each server is managed by the PSSP software as though it occupies a separate SP frame.

The IBM eServer pSeries 690, which has physical components that can be assigned to logical partitions (LPARs), is the exception that can have multiple nodes in one frame. Each p690 server is a frame with features that are similar to an SP frame. Each server can have up to sixteen LPARs that are each seen by the PSSP software as individual nodes in that one frame, with a maximum of 48 total LPARs. Additional constraints apply in a system with a switch configuration.
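The LPAR counting rules above can be illustrated with a short sketch. This is a hypothetical helper for planning purposes only, not part of the PSSP software; it simply checks a proposed set of p690 servers against the per-frame limit of 16 LPARs and the system-wide limit of 48:

```python
# Hypothetical sketch of the p690 LPAR counting rules (not an IBM tool).
# Each p690 frame may define up to 16 LPARs, each seen by PSSP as a node,
# with at most 48 LPARs in total across all p690 servers in the system.

MAX_LPARS_PER_P690 = 16
MAX_TOTAL_P690_LPARS = 48

def check_p690_lpars(lpars_per_server):
    """lpars_per_server: list holding the LPAR count of each p690 server."""
    for i, n in enumerate(lpars_per_server):
        if not 1 <= n <= MAX_LPARS_PER_P690:
            return False, f"server {i}: {n} LPARs exceeds the per-frame limit"
    total = sum(lpars_per_server)
    if total > MAX_TOTAL_P690_LPARS:
        return False, f"{total} LPARs exceeds the system-wide limit"
    return True, f"{total} LPARs total"

# Example: three p690 servers partitioned into 16, 16, and 8 LPARs
# stay within both limits (40 LPARs total).
ok, msg = check_p690_lpars([16, 16, 8])
```

For instance, a fourth fully partitioned server would push the total to 56 LPARs and fail the system-wide check, even though each server individually respects the per-frame limit.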

The number of nodes from SP-attached servers counts toward the maximum number of nodes in the SP system. The number of SP-attached servers counts toward the maximum number of frames with nodes in the SP system.

Figure 2 illustrates a basic SP system that includes one SP-attached server.

Figure 2. SP with an SP-attached server


You can run PSSP 3.2 or later on clustered enterprise servers. The term clustered enterprise servers is used generically to mean a cluster of IBM eServer pSeries or RS/6000 computers, each running the PSSP software, connected to one control workstation that is also running PSSP, and connected to the SP Ethernet admin LAN, but with no SP rack-mounted nodes in the system. All the machine types supported as SP-attached servers can participate in a clustered enterprise server system configuration. PSSP 3.2 supports only switchless clustered enterprise server systems. As of PSSP 3.4, you can have a clustered enterprise server system that uses the SP Switch2 or the SP Switch. You are not required to have an SP frame or SP node in order to use the PSSP software, but you do need the appropriate frame for any SP switch you decide to use in the system.

Figure 3 illustrates a system of clustered enterprise servers. Unlike an SP frame, which carries the name RS/6000 SP to identify an SP system, such a system has no label to visually identify it as a clustered enterprise server system. It is only when the control workstation and each of the servers are appropriately connected and running the PSSP software that the term clustered enterprise servers applies, as explained here and used in the PSSP publications.

Figure 3. A system of clustered enterprise servers


Keep in mind

Unless otherwise explicitly stated, the information in this book about SP processor nodes applies also to nodes that are configured from SP-attached servers or in a clustered enterprise server system. Functionally they are all simply nodes in the system.

All the processor nodes currently available from IBM are symmetric multiprocessor (SMP) computers with varying levels of function, capacity, and performance. Each processor node includes memory, internal direct access storage devices (DASD) that are optional in some nodes, optional connectivity to external networks and DASD, and a method for Ethernet connection. The type of node and the optional equipment it contains can lead to other requirements.

Base your choice of processor nodes on the function and performance you require today and in the foreseeable future. Thin nodes are typically configured as compute nodes, while wide nodes are more often used as servers to provide high-bandwidth data access. High nodes are typically used for database operations and for applications with extensive use of floating point. SP-attached servers are particularly suitable in SP systems with large serial databases. If you do not require a full scale SP system, a system of clustered enterprise servers might be right for you. No rigid rule governs the logical configuration of a node. You can configure any physical node type for the logical functions that best serve your computing requirements.

Note:
Remember, this is an overview. Do not make your choices before reading the planning information about the servers that are currently available from IBM as nodes on which you can run the PSSP 3.4 software. The maximum number of servers supported in either system configuration is generally 32. More constraints apply to certain types of servers or in systems with a switch configuration. See the information in "Chapter 2, Defining the system that fits your needs" under the heading "Question 8: Which and how many nodes do you need?".

Frames

SP frames have spaces into which the nodes fit. These spaces are called drawers. A tall frame has eight drawers and a short frame has four drawers. Each drawer is further divided into two slots. One slot can hold one thin node or one SP Expansion I/O Unit. A wide node occupies one drawer (two slots) and a high node occupies two drawers (four slots). An internal power system is included with each frame. Each frame is equipped with the processor nodes and optional switches that you order.
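The drawer and slot arithmetic above can be sketched in a few lines of code. This is an illustrative model only, not an IBM tool; it returns the slots a node occupies given its type and starting slot, using the rules that a thin node takes one slot, a wide node one drawer (two slots), and a high node two drawers (four slots):

```python
# Illustrative model of SP frame slot occupancy (not an IBM tool).
# A tall frame has 8 drawers of 2 slots each (slots 1-16);
# a short frame has 4 drawers (slots 1-8). A drawer starts at an
# odd-numbered slot, so multi-slot nodes must begin on an odd slot.

SLOTS_PER_NODE = {"thin": 1, "wide": 2, "high": 4}

def occupied_slots(node_type, first_slot):
    """Return the slot numbers a node occupies, or raise if the placement is invalid."""
    width = SLOTS_PER_NODE[node_type]
    if width > 1 and first_slot % 2 == 0:
        # Wide and high nodes start on a drawer boundary (odd slot).
        raise ValueError("wide and high nodes must start in an odd-numbered slot")
    return list(range(first_slot, first_slot + width))

# A wide node placed at slot 3 fills drawer 2 (slots 3 and 4);
# a high node placed at slot 5 fills drawers 3 and 4 (slots 5 through 8).
```

The model deliberately omits frame-level checks (such as tall versus short frame capacity); it only demonstrates how node type maps to drawer and slot consumption.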

Multiple SP processor nodes can be mounted in a tall or short SP frame. The maximum number of SP frames with nodes supported in an SP system is 128. Frames that contain only switches or SP Expansion I/O Units can be numbered from 129 to 250 inclusive, allowing the maximum of 128 frames with nodes in a standard system. The maximum number of high nodes supported in a 128-frame SP system varies depending on which high nodes you have: the SP system supports up to 128 POWER3 SMP high nodes, while the older 604 series high nodes are limited to 64. If your system is fully populated with SP rack-mounted nodes, there is no room for SP-attached servers.

Servers, whether configured as SP-attached or in a cluster, are conceptually self-framed and count one-for-one toward the number of frames. The 128-frame maximum can include up to 32 SP-attached servers in a system with tall SP frames. A maximum of 32 servers are supported in one system of clustered enterprise servers.
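To make these counting rules concrete, here is a small hypothetical check (not a PSSP utility) that applies them: each SP-attached server counts as one frame with nodes, a standard system allows at most 128 frames with nodes, and at most 32 servers may be attached or clustered:

```python
# Hypothetical check of the frame and server counting rules described above.
MAX_FRAMES_WITH_NODES = 128   # standard system limit on frames with nodes
MAX_SERVERS = 32              # SP-attached or clustered enterprise servers

def check_counts(sp_frames_with_nodes, attached_servers):
    """Each SP-attached server is counted as one additional frame with nodes."""
    if attached_servers > MAX_SERVERS:
        return False
    return sp_frames_with_nodes + attached_servers <= MAX_FRAMES_WITH_NODES

# 100 SP frames with nodes plus 20 SP-attached servers fits within the limits;
# a fully populated 128-frame system leaves no room for any attached server.
```

This also reflects the remark in the Frames section above: a system already holding 128 SP frames with nodes cannot accept SP-attached servers, because the servers would push the frame count past the maximum.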

Switches

Switches are used to connect processor nodes, providing the message-passing network through which they communicate, with a minimum of four disjoint paths between any pair of nodes. In any complete system you can use only one type of switch, either the SP Switch2 or the SP Switch.

SP-attached servers can be connected to a switch in a tall SP frame. SP-attached servers are not supported with short SP frames.

Nodes in a clustered configuration are also supported with switches, but you need at least one tall SP frame to house the switch. Though that frame houses only the switch and no SP rack-mounted nodes, when an SP frame is made part of a clustered configuration, the system effectively becomes an SP system of SP-attached servers.

To consider whether you need a switch and which switch to choose, see Choosing a switch.

Adapters are required to connect any processor node or extension node to the switch subsystem. See the book IBM RS/6000 SP: Planning Volume 1, Hardware and Physical Environment for which adapter is required for each supported node.

Extension nodes

Extension nodes are non-processor nodes that extend the capabilities of the SP system, but cannot be used in the same ways as SP processor nodes.

A specific type of extension node is a dependent node. A dependent node depends on SP processor nodes for certain functions, but it implements much of the switch-related protocol that processor nodes use on the SP Switch.

A physical dependent node can support multiple dependent node adapters. If a dependent node contains more than one dependent node adapter, it can route data between SP system partitions. The only node of this type is the SP Switch Router, which is available only to enhance an SP system that uses the SP Switch. Data transmission is accomplished by linking the dependent node adapters in the SP Switch Router with valid switch ports on the SP Switch. If these SP Switches are located in different SP system partitions, data can be routed at high speed between the system partitions.

The SP Switch Router can be used to scale your SP system into larger systems through high-speed external networks such as an FDDI backbone. It can also dramatically speed up TCP/IP, file transfers, remote procedure calls, and relational database functions.

Control workstation

The SP system uses an IBM RS/6000 or pSeries workstation with a suitable hardware configuration, the PSSP software, and other optional software as a central point of control for managing and maintaining the SP processor nodes and related hardware and software (see Question 10: What do you need for your control workstation?). An authorized system administrator can log in to the control workstation from any other workstation on the SP Ethernet admin LAN to perform system management, monitoring, and control tasks.

The control workstation connects directly to each SP frame to provide hardware control functions. Each server that is SP-attached or in a cluster configuration connects directly to the control workstation. Depending on which machine types you choose to have as nodes in your system, the hardware control might be comparable to that with an SP frame or it might be minimal. Only some servers have features that are comparable to an SP frame and node supervisor.

The control workstation acts as a boot-install server for nodes in the SP system. For security, the control workstation can be set up as a Distributed Computing Environment (DCE) or Kerberos Version 4 (V4) authentication server. See Chapter 6, Planning for security for more information.

The High Availability Control Workstation option enables you to have a primary and secondary control workstation for automatic failover and reintegration in the event that the primary control workstation is not available. See Chapter 4, Planning for a high availability control workstation for more information.

Network connectivity and I/O adapters

Network connectivity for the SP system is supplied by various adapters, some built in, some optional, that can provide connection to I/O devices, networks of workstations, and mainframe networks. Ethernet, FDDI, token-ring, HIPPI, SCSI, FCS, and ATM are some types of adapters that can be used as part of an SP system.

The SP Ethernet admin LAN is the network that connects all nodes to each other and to the control workstation in the SP system. A 15-meter (50-foot) Ethernet cable is provided with each frame for wiring this network. Additional optional adapters such as Ethernet, FDDI, and token-ring are automatically configured on each node. Other optional adapters are supported and can be individually configured on each node.

Note:
See the book IBM RS/6000 SP: Planning Volume 1, Hardware and Physical Environment for information about required and optional adapters.

SP Expansion I/O Unit

An SP Expansion I/O Unit is designed to satisfy the needs of customers running applications with a greater demand for internal DASD, external DASD, and network connectivity than is available in the node alone. The unit expands the capacity of a node by providing eight PCI slots and up to four hard disks. These hard disks are considered internal DASD of the associated node.

The SP Expansion I/O Unit has the following characteristics and restrictions:

