PD hints - RAID controller offline

 

 

Failover methods

 

Controller behavior summary

 

PD hints – Implicit Fail Over (Controller Failure)

 

Gathering Logs

 

 

Failover methods

 

The DS300 appliance supports two methods of failover:

1)      Implicit failover, where the failover process is managed by the appliance itself.

2)      Explicit failover, where the failover process is commanded by the host computer.

 

 

Implicit Fail Over (Dual Controller units)

 

Implicit failover occurs when one of the storage controllers crashes. It is managed by the DSx00 itself through the loss of the heartbeat between the two controllers. In this case, the initiator detects a failure on the primary connections and attempts to direct traffic through the secondary ports. The DS300 automatically transitions the secondary ports to an active state, multiplexes the back-end storage, and ensures cache coherency.
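
The appliance handles all of this internally in firmware; purely as an illustration of the trigger logic, the following Python sketch models heartbeat-loss detection between the two controllers. The class and function names and the timeout value are assumptions for illustration, not the DS300's actual implementation.

import time

HEARTBEAT_TIMEOUT = 5.0  # seconds; illustrative value, not the DS300's actual setting

class ControllerMonitor:
    """Tracks the peer controller's heartbeat over the internal Ethernet link."""

    def __init__(self):
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self):
        # Called whenever a heartbeat message arrives from the peer controller.
        self.last_heartbeat = time.monotonic()

    def peer_failed(self):
        # Implicit failover is triggered when no heartbeat has been seen
        # within the timeout window.
        return time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT

def failover_loop(monitor, take_over):
    # Poll for heartbeat loss; on loss, the surviving controller activates its
    # standby port group, takes over the peer's storage, and ensures cache coherency.
    while True:
        if monitor.peer_failed():
            take_over()
            break
        time.sleep(0.5)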

 

Explicit Fail Over (Single and Dual Controller units)

 

Explicit failover occurs when the initiator detects a connection failure and directly commands the DSx00 appliance to fail over to the other controller. On a single controller unit, the initiator may instead switch the data path to the second port of the controller (assuming a redundant configuration and that the appropriate drivers are installed). Most fabric incidents (e.g., cable failures, switch failures) result in an explicit failover.
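
As a rough sketch of the initiator-side decision, the following Python fragment picks a surviving path after a connection failure. The path abstraction and the command_explicit_failover() call are hypothetical stand-ins for the host driver's behavior, not a real driver API.

def choose_failover_path(failed_path, paths, dual_controller):
    """Pick a surviving path after the initiator detects a connection failure.

    Each path object is assumed to expose .link_up, .controller, and a
    hypothetical .command_explicit_failover() driver call.
    """
    candidates = [p for p in paths if p is not failed_path and p.link_up]
    if not candidates:
        raise RuntimeError("no surviving path to the appliance")
    if dual_controller:
        # Prefer a port on the other controller and explicitly command failover.
        other = [p for p in candidates if p.controller != failed_path.controller]
        if other:
            other[0].command_explicit_failover()  # hypothetical driver call
            return other[0]
    # Single controller unit (or no alternate controller): use the second port.
    return candidates[0]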

 

Figure 1 shows an implicit failover in a dual controller configuration.

 

In normal operation, all storage is "visible" (dashed lines to the HDDs) through all four physical ports of the storage enclosure (assuming dual controllers). However, access to a physical storage volume is only available on the two ports of the respective controller (solid lines to the HDDs).

The DS300 appliance defines two port groups corresponding to the two controllers. Only three states are valid for the port groups: active/optimized, standby, and unavailable.

A target port group (controller) is in the active state for a logical unit when it can process I/O requests for it. A target port group is in the standby state for a logical unit when the corresponding controller does not have access to the back-end storage (SCSI HDDs). A target port group is in the unavailable state when the corresponding controller has crashed or is not present.
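
These rules can be expressed as a small state model per logical unit. The sketch below simply encodes the three states described above; the names are illustrative assumptions, not the appliance firmware.

from enum import Enum

class PortGroupState(Enum):
    # The only three valid states for a DS300 port group (per logical unit).
    ACTIVE_OPTIMIZED = "active/optimized"
    STANDBY = "standby"
    UNAVAILABLE = "unavailable"

def port_group_state(controller_present, controller_alive, has_backend_access):
    """Derive a port group's state for a logical unit from controller status."""
    if not controller_present or not controller_alive:
        return PortGroupState.UNAVAILABLE   # controller crashed or not installed
    if not has_backend_access:
        return PortGroupState.STANDBY       # no path to the back-end SCSI HDDs
    return PortGroupState.ACTIVE_OPTIMIZED  # can process I/O requests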

 

Figure 1: Controller implicit failover

 

 

Note: In a dual controller configuration, the controllers are either active/active or active/passive. However, the active/passive version is simply the active/active version with all storage configured on one controller.

 

Failover overview

 

Each iSCSI controller has one or more iSCSI ports and one management port, each with a unique IP address. Initiators establish iSCSI connections to TCP/IP ports on the controller. In a dual controller environment, each controller 'knows' the IP addresses being used by the other controller in the enclosure, along with the status of that controller. If one controller fails, the other controller will detect the failure, and any initiators with connections established to the failed controller will see those connections break. Any writes completed by the failed controller are guaranteed to have been saved to persistent storage; any read or write requests that have not been acknowledged will be lost.

 

When the working controller detects that the other controller has failed, any storage, iSCSI volumes, and associated metadata are failed over from the failed controller to the working controller. The working controller takes over the failed controller's iSCSI and management IP addresses and establishes them on its own Ethernet ports as 'alias interfaces'. It then sends out three ARP packets containing the moved IP addresses along with the new MAC addresses (i.e., the MAC addresses of the ports on the working controller). This forces switches and hosts to update their ARP tables with the new addresses.
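
The appliance performs this announcement internally; purely to illustrate the mechanism, the following Python sketch uses scapy to broadcast gratuitous ARP packets for a moved address. The interface name and addresses are placeholders.

def announce_moved_ip(moved_ip, new_mac, iface):
    """Broadcast gratuitous ARP so switches and hosts learn the new MAC for the IP."""
    from scapy.all import ARP, Ether, sendp  # requires the scapy package

    pkt = (
        Ether(dst="ff:ff:ff:ff:ff:ff", src=new_mac)
        / ARP(op=2,                 # "is-at" reply announcing the new binding
              hwsrc=new_mac, psrc=moved_ip,
              hwdst="ff:ff:ff:ff:ff:ff", pdst=moved_ip)
    )
    # The DS300 sends three such packets after taking over the peer's addresses.
    sendp(pkt, iface=iface, count=3, verbose=False)

# Example with placeholder values:
# announce_moved_ip("192.168.70.123", "00:11:22:33:44:55", "eth0")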

 

Any iSCSI initiator that was connected to the failed controller will, after seeing its connection fail, attempt to establish a new iSCSI connection using the IP address of the failed controller. Because the IP address has moved, it will actually establish a new connection on the working controller, which has now been assigned the storage volumes and metadata from the failed controller. Any I/O request to the failed controller that had not been completed will get re-issued and serviced by the working controller, and normal service will resume.
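 

From the initiator's point of view the sequence is simply reconnect-and-reissue. A minimal Python sketch follows; the open_connection callable and the submit() method are stand-ins for the host's iSCSI stack, not a real library API.

import time

def reconnect_and_reissue(target_ip, open_connection, pending_io, retry_interval=2.0):
    """Reconnect to the same target IP and re-issue unacknowledged I/O.

    The IP address has silently moved to the working controller, so retrying
    the original address lands on the surviving controller.
    """
    conn = None
    while conn is None:
        try:
            conn = open_connection(target_ip)   # stand-in for an iSCSI login
        except ConnectionError:
            time.sleep(retry_interval)          # must succeed within the OS I/O timeout
    for request in pending_io:                  # unacknowledged reads/writes
        conn.submit(request)                    # hypothetical re-issue call
    return conn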

 

So long as the failover happens more quickly than the upper-level operating system I/O timeout period (typically 30 seconds), applications using the failed-over volumes will not see any interruption of service.

 

Failback is the reverse process, and may be initiated manually or automatically according to a user-defined policy.

 

Controller behavior summary

 

            Devices (drives)

All drives always appear in the device lists of both controller A and controller B. Although drives are physically located on either the A or the B side of the enclosure, their location does not indicate controller ownership: drives installed on either side of the enclosure may be owned by either controller.

 

Arrays

Arrays do indicate ownership, both current and preferred. The preferred ownership is established at array creation and changes only if an array is moved to the other controller via management commands. Preferred ownership does not change on a controller failure, whether the failure is real, commanded, or simulated by pulling the controller. After any such failure, the controller has to be brought back online via the management tools (i.e., peer enable); this is not automatic.
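
The distinction between current and preferred ownership can be summarized in a few lines of Python. This is an illustrative model only; the names are assumptions.

from dataclasses import dataclass

@dataclass
class Array:
    name: str
    preferred_owner: str  # set at array creation; changed only by a management move
    current_owner: str    # changes on failover

def fail_over(array, surviving_controller):
    # A controller failure moves current ownership but never preferred ownership.
    array.current_owner = surviving_controller

def management_move(array, controller):
    # Only an explicit management command changes the preferred owner.
    array.preferred_owner = controller
    array.current_owner = controller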

 

iSCSI Fail Over

 

Port failover occurs when a controller port detects link down (such as a pulled cable). In this case, the following port reassignment takes place (see the sketch following the list):

 

Single port failure

The IP address will be moved to the other port on the same controller.

Double port failure (or controller failure)

Both IP addresses will be moved to the corresponding ports on the other controller.

Triple port failure

All IP addresses will be mapped to the remaining port.
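
These reassignment rules can be expressed compactly. The sketch below assumes a four-port model (ETH2/ETH3 on each of controllers A and B) and is illustrative only.

def reassign_ips(failed_ports, all_ports):
    """Decide where each failed port's IP address moves.

    failed_ports and all_ports are sets of port names such as
    {"A.ETH2", "A.ETH3", "B.ETH2", "B.ETH3"}.
    Returns a mapping {failed_port: surviving_port}.
    """
    surviving = sorted(all_ports - failed_ports)
    mapping = {}
    for port in sorted(failed_ports):
        controller, eth = port.split(".")
        same_ctrl = [p for p in surviving if p.startswith(controller + ".")]
        if same_ctrl:
            # Single port failure: move to the other port on the same controller.
            mapping[port] = same_ctrl[0]
        else:
            other = [p for p in surviving if p.endswith("." + eth)]
            # Double failure: move to the corresponding port on the other controller;
            # triple failure: all addresses end up on the one remaining port.
            mapping[port] = other[0] if other else surviving[0]
    return mapping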

 

            Controller iSCSI ipfailover modes

 

There are four supported ipfailover modes. You can set the failover policy for the iSCSI interfaces using the CLI, with the following options (a short model of the four modes follows the list):

           

local: The failover occurs on a port basis; that is, if a link down is detected on one of the controller ports, then the IP address of that port is mapped to the remaining port on the same controller (ETH2→ETH3 or ETH3→ETH2).

 

remote: If a link down is detected on any interface port of a controller, the controller is failed over to the other controller (ETH2→ETH2 and ETH3→ETH3). The host is then able to access all storage through the online controller.

 

both: The failover occurs on a port basis; that is, if a link down is detected on one of the controller ports, then the IP address of that port is mapped to the remaining port (ETH2→ETH3 or ETH3→ETH2). If that remaining port subsequently goes offline, the controller is failed over to the remaining controller, and all offline port IP addresses are mapped to the corresponding ports of the remaining controller (ETH2→ETH2 and ETH3→ETH3).

 

none: Failover is disabled; that is, no failover occurs from the local controller ports or from controller to controller ports.
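
As a compact illustration, the four modes can be modeled as a policy function. The names below are assumptions for illustration; this is not the appliance's CLI or firmware.

from enum import Enum

class IpFailoverMode(Enum):
    LOCAL = "local"    # port-to-port on the same controller only
    REMOTE = "remote"  # controller-to-controller on any link down
    BOTH = "both"      # port-to-port first, then controller-to-controller
    NONE = "none"      # failover disabled

def failover_target(mode, other_local_port_up):
    """Where does a failed port's IP address go under each policy?

    other_local_port_up: whether the second port on the same controller is up.
    Returns "local port", "other controller", or None (no failover).
    """
    if mode is IpFailoverMode.NONE:
        return None
    if mode is IpFailoverMode.REMOTE:
        return "other controller"
    if other_local_port_up:               # LOCAL or BOTH
        return "local port"
    return "other controller" if mode is IpFailoverMode.BOTH else None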

 

            Hot-spares

 

Hot-spares are assigned to an array. If an array is moved to the other controller (for any reason), the hot-spare moves with it.

 

PD hints – Implicit Fail Over (Controller Failure)

Use the ServeRAID Manager to determine which controller has failed. The failed controller is overlaid with a warning or fatal icon, and an event is generated. The information recorded for an event includes long and short textual descriptions of the error, along with a severity level, a time stamp, and details. Right-click on the flagged controller icon and select Properties to get the information for this controller. Figure 2 shows array MCTRLB owned by controller B (the HDD and controller icons are shaded). Figure 3 shows that controller B has failed and that controller A has assumed ownership of all arrays from controller B. The Event log displays the following information entry indicating that the heartbeat between the controllers has been interrupted:

Network interface link lost on interface1 (Interface1 is the internal Ethernet port that carries the heartbeat)

Figure 2: Array ownership

 

Figure 3: Controller failure - Array ownership switched to controller A

 

 

Steps to identify the failure mode of the controller (DS300)

  1. Check the controller LEDs for any abnormal indications. See Hints - Indicator lights and problem indications.
  2. Analyze the Event log.
  3. Make sure that the network is not down. ServeRAID uses out-of-band management of the controllers.
  4. Save the Support Archive file at this point, in case the problem cannot be resolved locally. IBM support will request this file to analyze the failure.
  5. Use the PD maps to assist you in solving the problem.
  6. When the problem is resolved, restart the controller. Make sure the storage can now be accessed from the host.
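
When analyzing the Event log (step 2), the heartbeat-loss entry quoted above is the key indicator of a controller failure. A small Python sketch along these lines can scan a saved copy of the log; the default file name follows the RaidEvt.log naming described under Gathering logs.

def find_heartbeat_loss(log_path="RaidEvt.log"):
    """Return event-log lines indicating loss of the inter-controller heartbeat."""
    hits = []
    with open(log_path, errors="replace") as log:
        for line in log:
            # interface1 is the internal Ethernet port that carries the heartbeat.
            if "Network interface link lost on interface1" in line:
                hits.append(line.rstrip())
    return hits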

 

 

Gathering logs

 

Support Archive

 

This is the most important set of logs that you can capture. It will be forwarded to IBM support for in-depth analysis. From it you can analyze the Event log, the Error log, and the controller configuration profile.

 

From ServeRAID Manager, right-click on the desired enclosure and select "Save support archive". The following files are saved:

RaidEvt.log - The event log

RaidErr.log - The error log

RaidCfg.log - This is the configuration profile for that enclosure

diagnostics.tgz file - Compressed file containing binary and text files for engineering analysis
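
If you want to confirm that a saved archive is complete before sending it to IBM support, a quick check along these lines works. The folder path is a placeholder; the file names are those listed above.

import os

EXPECTED = ["RaidEvt.log", "RaidErr.log", "RaidCfg.log", "diagnostics.tgz"]

def check_support_archive(folder):
    """Report any expected support-archive files missing from the given folder."""
    missing = [name for name in EXPECTED
               if not os.path.isfile(os.path.join(folder, name))]
    return missing  # an empty list means the archive looks complete

# Example (placeholder path):
# print(check_support_archive(r"C:\support_archive"))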

 

Management Station logs

 

From the Management Station, collect the following files from the C:\WINDOWS\Temp directory (or C:\WINNT\Temp):

 mgmtservice.log

mgmtservice.log.old (if it exists)

           

Event Log

 

From ServeRAID Manager, click the Event button on the tool bar. The Event log is displayed, and you can save it from the dialog's "File" menu. The file Event.txt is saved in the ServeRAID Manager install folder (typically C:\Program Files\IBM\ServeRAID Manager). This log contains the same information as RaidEvt.log.

 

Configuration Profile

 

This is the configuration profile for the enclosure.

From the ServeRAID console, right-click on the Host management station and select "Save Printable Configuration". The resulting file, RaidExt1.log, can be found in the ServeRAID install folder, in a sub-folder named after the host. This log is the same as the RaidCfg.log that is saved in the support archive file.

 

CLI Diagnostic Dump Command

 

The support archive can also be uploaded to a host using the diagnostic Dump command. See Using the CLI for additional information.

 

 
