The DS400 employs the Asymmetric Logical Unit Access (ALUA) method to access the physical drives. This means that only one controller at a time can access the physical disk drives. This is frequently referred to as an active/passive configuration.
Arrays are “owned” by their respective controllers. There are two levels of array ownership:

Current Owner: which controller is actively using the logical drive.
Preferred Owner: which controller should be using the logical drive.
Preferred ownership is established at array creation; all drives assigned to that array are then “masked” from the other controller. Preferred ownership is changed only if an array is moved to the other controller via management commands. It will not change on a controller failure, whether a real failure or a manual (controller pull) failure. After a failed controller is repaired or replaced, it has to be brought back online via management tools, i.e., peer enable or the ServeRAID Manager (SRM) GUI; this is not automatic.
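As an illustration of these ownership rules, the following is a minimal sketch (the class and method names are hypothetical, not part of the product firmware) that models how current ownership moves on a failover while preferred ownership only changes through an explicit management move:

# Minimal sketch of the DS400 array ownership rules described above.
# Class and method names are hypothetical, not part of the product firmware.

class Array:
    def __init__(self, name, preferred_owner):
        self.name = name
        self.preferred_owner = preferred_owner   # fixed at array creation
        self.current_owner = preferred_owner     # controller using the array now

    def fail_over(self, surviving_controller):
        # Controller failure (real or controller pull): only current ownership
        # moves; preferred ownership is left untouched.
        self.current_owner = surviving_controller

    def move(self, new_controller):
        # Management move (e.g., via ServeRAID Manager): the only operation
        # that changes preferred ownership.
        self.preferred_owner = new_controller
        self.current_owner = new_controller


array = Array("Array1", preferred_owner="A")
array.fail_over("B")
print(array.current_owner, array.preferred_owner)   # B A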
The masking of the drives from one controller to the other is accomplished by the cluster services module. This service exchanges information between controllers using an RPC service and a dedicated back-channel path. This path is the internal Ethernet port (ETH1) that is not externally accessible. The IP address for this port is 172.28.9.66 for controller A and 172.28.9.65 for controller B.
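As a hypothetical diagnostic sketch only (it assumes shell access to a controller's internal Linux environment, which is not normally exposed, and simply uses ping), the following checks whether the peer controller answers on its back-channel address:

# Hypothetical diagnostic sketch: ping the peer controller's dedicated
# back-channel (ETH1) address. Assumes access to the internal Linux
# environment, which is not normally customer-accessible.

import subprocess

BACKCHANNEL_IP = {"A": "172.28.9.66", "B": "172.28.9.65"}

def peer_reachable(local_controller):
    # Identify the peer and ping its back-channel address once.
    peer = "B" if local_controller == "A" else "A"
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", BACKCHANNEL_IP[peer]],
        capture_output=True,
    )
    return result.returncode == 0   # 0 means the peer answered

print(peer_reachable("A"))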
These systems will run in one of two modes:
1- Normal Mode, where cluster services is active and passing data between controllers; that is, both controllers are operational.
2- Failover Mode, where the cluster services module is active but not passing data between controllers.
Upon system boot-up, the Linux file system is mounted on each controller. Once the file system is loaded, the license key is validated. At this time, communication between controllers can be established across the back-channel. The firmware level of each controller is then compared; if they do not match, the down-level controller is updated by its peer. At this point the cluster services module is loaded. The configuration of each controller is compared and the newest configuration is used. Once this is complete, the RAID adapter loads and the drives and arrays are brought online for the owning controller. The virtualization stack then loads, and communication with ServeRAID Manager can be established.
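As a way of visualizing the two peer-negotiation rules above (the down-level controller is updated by its peer, and the newest configuration wins), here is a minimal, runnable sketch; the class and field names are hypothetical, and the version and timestamp values are purely illustrative:

# Illustrative sketch of the two boot-time comparison rules described above.
# Names and values are hypothetical; this is not firmware code.

from dataclasses import dataclass

@dataclass
class Controller:
    name: str
    firmware_level: str      # illustrative version string
    config_timestamp: int    # newer configuration = larger value

def negotiate(a, b):
    # Firmware compare: bring the down-level controller up to its peer's level.
    if a.firmware_level != b.firmware_level:
        newer = max(a, b, key=lambda c: c.firmware_level)
        older = b if newer is a else a
        older.firmware_level = newer.firmware_level   # "updated by its peer"

    # Configuration compare: both controllers adopt the newest configuration.
    newest = max(a.config_timestamp, b.config_timestamp)
    a.config_timestamp = b.config_timestamp = newest

a = Controller("A", "1.0.1", config_timestamp=1001)
b = Controller("B", "1.0.2", config_timestamp=1000)
negotiate(a, b)
print(a.firmware_level, b.firmware_level)       # both 1.0.2
print(a.config_timestamp, b.config_timestamp)   # both 1001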
Each controller has two FC ports, labeled FC0 and FC1. Each of these two ports provides access to the logical drives. On the owning controller, both FC0 and FC1 will return good status for the Test Unit Ready (TUR) command. On the alternate controller, the FC ports will return “path set – passive” (ASC/ASCQ 04/0B).
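As a hedged host-side example (assuming a Linux host with sg3_utils installed and that the logical drive's paths appear as SCSI generic devices; the device names below are placeholders), the following sketch issues a Test Unit Ready to each path and classifies it by exit status:

# Sketch: classify paths from a Linux host by issuing Test Unit Ready with
# sg_turs (part of sg3_utils). Device names below are placeholders.

import subprocess

def path_state(sg_device):
    # sg_turs issues a SCSI Test Unit Ready to the given device.
    result = subprocess.run(["sg_turs", sg_device], capture_output=True)
    # Exit status 0 means the TUR completed with good status (owning controller);
    # anything else is treated here as a passive/standby or failed path.
    return "active" if result.returncode == 0 else "passive-or-failed"

for dev in ["/dev/sg2", "/dev/sg3", "/dev/sg4", "/dev/sg5"]:
    print(dev, path_state(dev))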
You can remove access to a logical drive from any of those ports using the “logical manage” command. See Using the CLI for additional information.
The node and port names follow the convention shown in Table 1.
Table 1 - WWNN and WWPN assignment convention
Node and Port Name Convention | Enclosure        | FC0              | FC1
Node Name                     | 20000000d1262472 | NA               | NA
Controller A Port names       | NA               | 21000000d1262472 | 21010000d1262472
Controller B Port names       | NA               | 21020000d1262472 | 21030000d1262472
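Based on the single example in Table 1, the naming pattern appears to be that every name shares the enclosure's unique suffix: the node name starts with bytes 20 00, and each port name replaces those two bytes with 21 followed by a port index (00 and 01 for controller A FC0/FC1, 02 and 03 for controller B FC0/FC1). The following small sketch (the function name is hypothetical) derives the four port names from a node name under that assumption:

# Derive the four port names (WWPNs) implied by a node name (WWNN),
# following the pattern inferred from Table 1. Illustrative only.

def port_names(node_name):
    # Keep the unique suffix shared by all names and rewrite the first two
    # bytes: 0x21 followed by the port index (A-FC0=0, A-FC1=1, B-FC0=2, B-FC1=3).
    suffix = node_name[4:]
    labels = ["A-FC0", "A-FC1", "B-FC0", "B-FC1"]
    return {label: f"21{index:02x}{suffix}" for index, label in enumerate(labels)}

print(port_names("20000000d1262472"))
# {'A-FC0': '21000000d1262472', 'A-FC1': '21010000d1262472',
#  'B-FC0': '21020000d1262472', 'B-FC1': '21030000d1262472'}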