PC Server 325 Rack Cluster Example 2

Figure 5 (below) shows a low-cost, high-availability, shared-disk cluster consisting of two rack models of the PC Server 325 and two Netfinity EXP10 enclosures.
In addition to its standard features, each PC Server 325 contains two 266 MHz Intel® Pentium® II microprocessors with 512 KB of level-2 cache (one microprocessor standard), 128 MB of ECC system memory (64 MB standard), two 4.51 GB hard disk drives, two IBM 100/10 PCI EtherJet Adapters, and one IBM ServeRAID II Ultra SCSI Adapter.
(See 'Parts List for the PC Server 325 Rack Cluster Example 2' for a complete list of the components used in this example.)

Note: Although this example shows ServeRAID II adapters, you also could use ServeRAID-3H adapters.
 

Figure 5. PC Server 325 Rack Cluster Example 2 

The capacity of the Netfinity Rack is 42U. Each server occupies 5U and each EXP10 enclosure occupies 3U.
You can house this 16U cluster and its support devices (such as console, keyboard, and uninterruptible power supplies) in one IBM Netfinity Rack or in an industry-standard, 19-inch rack that meets EIA-310-D standards and has a minimum depth of 71.12 cm (28 inches).
(See 'Selecting the Rack Enclosures' for more information.)
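
As a quick check of the space requirement, the following minimal Python sketch simply restates the arithmetic from the unit heights given above:

    # Rack space used by this example; unit heights are taken from the text.
    SERVER_U = 5        # each PC Server 325 rack model occupies 5U
    ENCLOSURE_U = 3     # each Netfinity EXP10 enclosure occupies 3U
    RACK_U = 42         # capacity of the IBM Netfinity Rack

    cluster_u = 2 * SERVER_U + 2 * ENCLOSURE_U
    print(cluster_u)            # 16U for the servers and enclosures
    print(RACK_U - cluster_u)   # 26U left for the console, UPS, and other devices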


The network-crossover cable, sometimes referred to as the cluster's heartbeat, provides the dedicated, point-to-point communication link between the servers.
This cable connects two IBM 100/10 PCI EtherJet Adapters (one in each server) and enables the servers to continuously monitor each other's functional status.
This example shows two separate connections to external, public networks.
The servers connect to Public Network 1 using the second IBM 100/10 PCI EtherJet Adapter in each server, and they connect to Public Network 2 using the Ethernet controllers on the system boards.
Using the public-network connections and the dedicated heartbeat link together ensures that a single network-hardware failure will not initiate a failover situation.

Notes:

  1.  You must use IBM 100/10 PCI EtherJet Adapters for the cluster's heartbeat connection.
  2.  You can use the integrated Ethernet controllers that come standard on some server models to connect the server to the public network; however, these integrated controllers are not certified for use as the cluster's heartbeat connection.
  3.  You must use a point-to-point, Category 5 crossover cable for the heartbeat connection. Connections through a hub are not supported.
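
The clustering software provides this peer monitoring itself; purely to illustrate the idea, the following Python sketch shows a minimal monitoring loop over a dedicated link. The addresses, port number, and miss threshold are assumptions for the illustration, not values used by the cluster:

    # Minimal heartbeat sketch: send a datagram to the peer at a fixed
    # interval and declare the peer failed after several missed beats.
    import socket
    import time

    LOCAL_ADDR = ("192.168.0.1", 5000)   # hypothetical crossover-link addresses
    PEER_ADDR = ("192.168.0.2", 5000)
    BEAT_INTERVAL = 1.0                  # seconds between heartbeats
    MISSED_LIMIT = 3                     # missed beats before assuming failure

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LOCAL_ADDR)
    sock.settimeout(BEAT_INTERVAL)

    missed = 0
    while True:
        sock.sendto(b"beat", PEER_ADDR)   # announce that this server is alive
        try:
            sock.recvfrom(16)             # wait for the peer's beat
            missed = 0
        except socket.timeout:
            missed += 1
            if missed >= MISSED_LIMIT:
                print("Peer unreachable; begin taking over its resources")
                break
        time.sleep(BEAT_INTERVAL)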


To maintain high availability, the two hard disk drives in each server are defined as RAID level-1 logical drives (Array A) using Channel 3 of the ServeRAID adapters.
Because these nonshared drives store the operating system and shared-disk clustering software needed during startup, these drives were defined first using the ServeRAID configuration program.
The internal SCSI cables remain attached to the CD-ROM drives, but the end connectors that were attached to the SCSI controllers on the system boards are now attached to the Channel 3 connectors on the ServeRAID adapters.
The hard disk drive attached to the end connector on the internal SCSI cable in each server has its termination set to Enabled.
The other hard disk drive in each server has its termination set to Disabled.

Note: The termination for the CD-ROM drive is permanently set to Disabled; you cannot enable termination on the CD-ROM drive.
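
Summarized as data, the termination settings for the internal (nonshared) SCSI devices look like this; the map below is purely descriptive, since the actual settings are made on the drives themselves rather than in software:

    # Termination on each server's internal (Channel 3) SCSI bus.
    internal_scsi_termination = {
        "hard disk on end connector": "Enabled",    # terminates the bus
        "other hard disk":            "Disabled",
        "CD-ROM drive":               "Disabled",   # fixed; cannot be enabled
    }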


The only difference between the hardware configuration of Server A and the hardware configuration of Server B is the SCSI ID settings for the ServeRAID adapters.
Channels 1 and 2 of the ServeRAID adapter in Server A are set to SCSI ID 7. Channels 1 and 2 of the ServeRAID adapter in Server B are both set to SCSI ID 6, because they share the same SCSI buses as Channels 1 and 2 of the ServeRAID adapter in Server A.
Channel 3 of both ServeRAID adapters connects to the nonshared drives in each server; it is set to SCSI ID 7 to avoid a conflict with the CD-ROM drive, which is set to SCSI ID 6.
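
The resulting assignments, and the two rules they must satisfy, can be summarized in a short Python sketch (the layout is illustrative only; the IDs are set with the ServeRAID configuration program, not with code like this):

    # SCSI initiator IDs for each ServeRAID adapter, by channel.
    serveraid_ids = {
        "Server A": {1: 7, 2: 7, 3: 7},
        "Server B": {1: 6, 2: 6, 3: 7},
    }
    CDROM_ID = 6    # CD-ROM on each server's Channel 3 bus

    # Channels 1 and 2 are shared buses, so the two adapters must use
    # different IDs on them.
    for channel in (1, 2):
        assert serveraid_ids["Server A"][channel] != serveraid_ids["Server B"][channel]

    # Channel 3 is private to each server and must only avoid the CD-ROM's ID.
    for server in serveraid_ids:
        assert serveraid_ids[server][3] != CDROM_ID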

The Netfinity EXP10 enclosures each contain ten hot-swap hard disk drives.
A SCSI cable (provided with each expansion enclosure) connects the SCSI Bus 1 OUT and SCSI Bus 2 IN connectors on the rear of the enclosures, forming one continuous SCSI bus in each enclosure.

Enclosure 1 contains ten 4.51 GB drives. Using auto-sensing cables, the SCSI Bus 1 IN connector is attached to Channel 1 of the ServeRAID adapter in Server A, and the SCSI Bus 2 OUT connector is attached to Channel 1 of the ServeRAID adapter in Server B.

Enclosure 2 contains ten 9.1 GB hot-swap hard disk drives. Using auto-sensing cables, the SCSI Bus 1 IN connector is attached to Channel 2 of the ServeRAID adapter in Server A and the SCSI Bus 2 OUT connector is attached to Channel 2 of the ServeRAID adapter in Server B.

Note: To help increase the availability of the shared disks and to enable the serviceability of a failing or offline server, you must use Netfinity EXP10 Auto-Sensing Cables, IBM Part Number 03K9352, to connect clustered servers to Netfinity EXP10 enclosures.


The EXP10 auto-sensing cables contain circuits that can automatically sense the functional status of the server.
When the circuitry in an auto-sensing cable detects that the server attached to it is failing or offline, the cable circuitry automatically enables termination for that end of the SCSI bus.
This helps increase the availability of the shared disks and enables the serviceability of the failing or offline server.

To help maintain high availability, eight of the 4.51 GB drives are grouped into two RAID level-5 logical drives (arrays B and C) in enclosure 1, and eight of the 9.1 GB drives are grouped into two RAID level-5 logical drives (arrays D and E) in enclosure 2.
To further increase the availability of the shared disks, each ServeRAID adapter has its own hot-spare (HSP) drives: one 4.51 GB and one 9.1 GB.
A hot-spare drive is a disk drive that is defined for automatic use in the event of a drive failure.
If a physical drive fails and it is part of a RAID level-1 or RAID level-5 logical drive, the ServeRAID adapter will automatically start to rebuild the data on the hot-spare drive.
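
For a rough idea of the usable capacity of these shared arrays, recall that a RAID level-5 array gives up one drive's worth of space to parity. Assuming the eight drives in each enclosure are split evenly into two four-drive arrays (the split is not stated here, so treat this as an illustration):

    # RAID level-5 usable capacity: (n - 1) drives' worth of space.
    def raid5_usable_gb(drives: int, drive_size_gb: float) -> float:
        return (drives - 1) * drive_size_gb

    print(raid5_usable_gb(4, 4.51))   # each of arrays B and C: about 13.5 GB
    print(raid5_usable_gb(4, 9.1))    # each of arrays D and E: about 27.3 GB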

Note: ServeRAID adapters cannot share hot-spare drives.
To maintain high availability and enable the automatic-rebuild feature, you must define a hot-spare drive for each ServeRAID adapter.
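
The rebuild rule described above can be sketched as follows; the names and data layout are illustrative only, not the adapter's firmware logic:

    # Pick a hot spare for a rebuild: it must belong to the same adapter
    # (spares are never shared) and be at least as large as the failed drive.
    def pick_hot_spare(adapter_spares, failed_drive_gb):
        for spare in adapter_spares:
            if spare["size_gb"] >= failed_drive_gb:
                return spare
        return None   # no eligible spare: the array runs degraded

    server_a_spares = [{"size_gb": 4.51}, {"size_gb": 9.1}]   # one per drive size
    print(pick_hot_spare(server_a_spares, 9.1))   # -> the 9.1 GB spare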


The SCSI ID assignments for the shared hot-swap drives are controlled by the backplanes inside the Netfinity EXP10 enclosure.
The IDs alternate between low and high addresses, which can be confusing; consider placing a label that lists the SCSI IDs across the front of the drive bays.
In this example configuration, the SCSI ID assignments from left (bay 1) to right (bay 10) are: 0 8 1 9 2 10 3 11 4 12.
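
The pattern is regular: odd-numbered bays carry the low IDs (0 through 4) and even-numbered bays the high IDs (8 through 12). The following short Python sketch reproduces the sequence and could be used to print bay labels:

    # Recreate the EXP10 bay-to-SCSI-ID pattern from the text.
    bay_ids = [(b // 2) if b % 2 == 0 else 8 + (b // 2) for b in range(10)]
    print(bay_ids)   # [0, 8, 1, 9, 2, 10, 3, 11, 4, 12]
    for bay, scsi_id in enumerate(bay_ids, start=1):
        print(f"Bay {bay}: SCSI ID {scsi_id}")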


Ideally, the servers and storage enclosures are connected to different electrical circuits; however, this is rarely possible. To help prevent the loss of data and to maintain the availability of the shared disks during a power outage or power fluctuation, always connect the servers and expansion enclosures to uninterruptible power supplies (UPS).

