Netfinity 5000 Rack Cluster Example

Figure 6 (below) shows a low-cost, high-availability, shared-disk cluster consisting of two rack models of the Netfinity 5000 and two Netfinity EXP15 enclosures.
In addition to its standard features, each Netfinity 5000 contains one IBM Netfinity ServeRAID-3L Ultra2 SCSI Adapter, one IBM Netfinity ServeRAID-3H Ultra2 SCSI Adapter, one IBM 100/10 PCI EtherJet Adapter, one additional 400 MHz microprocessor, one optional 175 Watt redundant power supply, one 256 MB memory kit, and two 9.1 GB hard disk drives.
(See 'Parts List for the Netfinity 5000 Cluster Example' for a complete list of the components used in this example.)

Note: Although this example shows ServeRAID-3H adapters, you also could use ServeRAID II adapters.
 

Figure 6. Netfinity 5000 Rack Cluster Example 

The network-crossover cable, sometimes referred to as the cluster's heartbeat, provides the dedicated, point-to-point communication link between the servers.
This cable connects the IBM 100/10 PCI EtherJet Adapters (one in each server) and enables the servers to continuously monitor each other's functional status.
The servers connect to the public network using the Ethernet controllers on the system boards.
Using the public-network connection and the dedicated heartbeat link together ensures that a single network-hardware failure will not initiate a failover situation.

Notes:
  1.  You must use IBM 100/10 PCI EtherJet Adapters for the cluster's heartbeat connection.
  2.  You can use the integrated Ethernet controllers that come standard on some server models to connect the server to the public network; however, these integrated controllers are not certified for use as the cluster's heartbeat connection.
  3.  You must use a point-to-point, Category 5 crossover cable for the heartbeat connection. Connections through a hub are not supported.


Server A and Server B are configured identically.
To maintain high availability, the two hard disk drives in each server are connected to that server's single-channel ServeRAID-3L adapter and are configured as a RAID level-1 logical drive.
In each server, the internal SCSI cable that comes attached to the SCSI controller on the system board has been moved from the system-board connector to the internal channel connector on the ServeRAID-3L adapter.
Because these nonshared drives store the operating system and shared-disk clustering software needed during startup, the ServeRAID-3L adapter is installed in PCI slot 5.

Note: When you install multiple hard-disk controllers, RAID controllers, or ServeRAID adapters in the same server, you must install the device that will manage the startup (boot) drives in a PCI slot that is scanned before subsequent hard-disk controllers or RAID adapters.
The Netfinity 5000 has two primary PCI buses: PCI bus 1 and PCI bus 2.
Expansion slot 5 is on PCI bus 1, expansion slots 1 through 4 are on PCI bus 2, and the system scans PCI bus 1 (slot 5) first.
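To illustrate the scan order described in this note, the following Python sketch models the slot-to-bus mapping given above. The slot and bus assignments come from the text; the sorting logic itself is only an illustration, not part of any system firmware or utility.

    # Illustrative model of the Netfinity 5000 PCI scan order described above.
    # Slot-to-bus assignments are taken from the text; everything else is a sketch.
    SLOT_TO_BUS = {1: 2, 2: 2, 3: 2, 4: 2, 5: 1}   # expansion slot -> PCI bus

    def scan_order(slot_to_bus):
        # PCI bus 1 is scanned before PCI bus 2, so slot 5 is reached first.
        return sorted(slot_to_bus, key=lambda slot: (slot_to_bus[slot], slot))

    print(scan_order(SLOT_TO_BUS))   # [5, 1, 2, 3, 4] -> install the boot controller in slot 5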


Other items in this example that increase the availability and reliability of the servers include the additional memory, microprocessors, and power supplies.
Each server comes with 64 MB of memory and supports up to 1 GB of system memory.
In this example, the additional 256 MB memory kits bring the total system memory for each server up to 320 MB, and the additional microprocessors enable symmetric multiprocessing for each server.
Each server also comes with two 175 Watt power supplies packaged in one 350 Watt unit.
The additional 175 Watt supply in each server provides N+1 power redundancy for up to 350 Watts.
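The memory and power figures above work out as follows; the short Python sketch restates the arithmetic from the text and is for illustration only.

    # Worked arithmetic for the upgrades described above (figures from the text).
    standard_memory_mb = 64                      # standard system memory per server
    memory_kit_mb = 256                          # additional 256 MB memory kit
    print(standard_memory_mb + memory_kit_mb)    # 320 MB total system memory

    supply_watts = 175                           # rating of each individual supply
    installed_supplies = 3                       # two packaged supplies plus one redundant supply
    load_watts = 350                             # maximum load per server
    # N+1 redundancy: after any single supply fails, the remaining supplies
    # still cover the full 350 Watt load.
    print((installed_supplies - 1) * supply_watts >= load_watts)   # True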


On both ServeRAID-3H adapters, Channel 3 is available for use as a quorum-arbitration link with the Microsoft Cluster Server software, or for future expansion with the Vinca clustering software.


The maximum storage capacity for each Netfinity EXP15 is 182 GB, using ten 18.2 GB hot-swap drives. However, this example shows ten 9.1 GB hot-swap hard disk drives in each enclosure. To help maintain high availability, the drives are grouped into four RAID level-5 logical drives (arrays A, B, C, and D).
To further increase the availability of the shared drives, each ServeRAID-3H adapter has its own hot-spare (HSP) drive.
A hot-spare drive is a disk drive that is defined for automatic use in the event of a drive failure.
If a physical drive fails and it is part of a RAID level-1 or RAID level-5 logical drive, the ServeRAID adapter will automatically start to rebuild the data on the hot-spare drive.
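As a rough illustration of RAID level-5 capacity, the sketch below computes the usable space of one array. The four-drive array size is only an assumption for the example, because the text does not state how the 9.1 GB drives are divided among arrays A through D.

    # Hedged sketch of RAID level-5 usable capacity: the equivalent of one
    # drive per array is consumed by distributed parity.
    def raid5_usable_gb(drive_count, drive_gb=9.1):
        return (drive_count - 1) * drive_gb

    # Assumed example only: a four-drive array of 9.1 GB hot-swap drives.
    print(round(raid5_usable_gb(4), 1))   # 27.3 GB of usable space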

Note: ServeRAID adapters cannot share hot-spare drives. To maintain high availability and enable the automatic-rebuild feature, you must define a hot-spare drive for each ServeRAID adapter.


Option Switch 1, on the rear of each EXP15 enclosure, is set to the 'Off' position, forming one continuous SCSI bus in each enclosure.

For EXP15 Enclosure 1, the Channel 1 connector on the ServeRAID-3H adapter in Server A is connected to the SCSI Bus 1 IN connector, and the Channel 1 connector on the ServeRAID-3H adapter in Server B is connected to the SCSI Bus 2 IN connector.

For EXP15 Enclosure 2, the Channel 2 connector on the ServeRAID-3H adapter in Server A is connected to the SCSI Bus 1 IN connector, and the Channel 2 connector on the ServeRAID-3H adapter in Server B is connected to the SCSI Bus 2 IN connector.
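The cabling just described can be summarized as a simple lookup table. The Python sketch below is only an informal restatement of those connections, not output from any configuration utility.

    # Informal summary of the shared SCSI cabling described above.
    cabling = {
        ("Server A", "Channel 1"): ("EXP15 Enclosure 1", "SCSI Bus 1 IN"),
        ("Server B", "Channel 1"): ("EXP15 Enclosure 1", "SCSI Bus 2 IN"),
        ("Server A", "Channel 2"): ("EXP15 Enclosure 2", "SCSI Bus 1 IN"),
        ("Server B", "Channel 2"): ("EXP15 Enclosure 2", "SCSI Bus 2 IN"),
    }
    for (server, channel), (enclosure, connector) in cabling.items():
        print(f"{server} ServeRAID-3H {channel} -> {enclosure} {connector}")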

The SCSI ID assignments for the shared hot-swap drives are controlled by the backplanes inside the Netfinity EXP15 enclosures.
When configured as one continuous SCSI bus, the SCSI IDs alternate between low and high addresses, which might cause some confusion.
To avoid confusion with the SCSI IDs, consider placing a label with the SCSI IDs across the front of the drive bays.
In this example configuration, the SCSI ID assignments for each enclosure from left (bay 1) to right (bay 10) are: 0 8 1 9 2 10 3 11 4 12.
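The alternating pattern can be generated mechanically; the short sketch below reproduces the bay-to-ID label suggested above for one enclosure and is for illustration only.

    # Reproduces the suggested bay label for one EXP15 enclosure configured as
    # a single continuous SCSI bus (IDs 0-4 interleaved with IDs 8-12).
    ids = [scsi_id for pair in zip(range(0, 5), range(8, 13)) for scsi_id in pair]
    for bay, scsi_id in enumerate(ids, start=1):
        print(f"bay {bay:2d} -> SCSI ID {scsi_id}")
    # Prints the pattern 0 8 1 9 2 10 3 11 4 12 from bay 1 through bay 10.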


Ideally, the servers and storage enclosures are connected to different electrical circuits; however, this is rarely possible.
To help prevent the loss of data and to maintain the availability of the shared disks during a power outage or power fluctuation, always connect the servers and expansion enclosures to uninterruptible power supplies (UPS).

The capacity of the Netfinity Rack is 42U.
Each server occupies 5U and each EXP15 enclosure occupies 3U.
You can house this 16U cluster and its support devices (such as console, keyboard, and uninterruptible power supplies) in IBM Netfinity Racks or in industry-standard, 19-inch racks that meet EIA-310-D standards and have minimum depths of 71.12 cm (28 inches).
(See 'Selecting the Rack Enclosures' for more information.)
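The rack-space figures above work out as follows; the sketch is illustrative arithmetic only.

    # Rack-space arithmetic from the text: two 5U servers and two 3U enclosures.
    rack_u = 42
    cluster_u = 2 * 5 + 2 * 3
    print(cluster_u)             # 16U occupied by the cluster itself
    print(rack_u - cluster_u)    # 26U remaining for console, keyboard, UPS units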

