
Command and Technical Reference, Volume 2

verparvsd

Purpose

verparvsd - Verifies IBM Virtual Shared Disk system partitioning.

Syntax

verparvsd [-F] [-o output_file] layout_directory [new_partition ...]

Flags

-F
Returns success when only correctable virtual shared disk errors are found in the system partitioning operation. This flag corresponds to spapply_config -F: it is used only when spapply_config, itself invoked with -F, invokes verparvsd.

-o
Specifies the file where the System Data Repository (SDR) commands are placed to load the IBM Virtual Shared Disk data in the new system partitions. If -o is not specified, the output is placed in the /spdata/sys1/vsd/partitionVSDdata file.

Operands

layout_directory
Specifies the layout directory that describes the new system partitions to apply and that verparvsd verifies for IBM Virtual Shared Disk system partitioning. This operand is used as the first argument in the invocation of the spdisplay_config command. Refer to the spdisplay_config command for more details.

new_partition ...
Specifies the list of new system partitions to process. If some system partitions are unaffected by the system partitioning operation implied by the layout_directory and you do not want verparvsd to examine them, list only the affected system partitions. The verparvsd command verifies and processes only the system partitions passed as arguments. If no new system partitions are given, all system partitions in layout_directory are processed and analyzed. The spapply_config command invokes verparvsd listing only the new, changing system partitions.

Description

Use this command to verify that the system partitioning proposed in the layout_directory will work for all the existing IBM Virtual Shared Disk data. The spapply_config command invokes this command to partition the IBM Virtual Shared Disk data during a system partitioning operation. The verparvsd command extracts all IBM Virtual Shared Disk data from nodes involved in the system partitioning and writes SDR commands to the output file that will reload the IBM Virtual Shared Disk SDR data into the correct new system partitions. This file is executed during the system partitioning process to partition the IBM Virtual Shared Disk data.
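
For example, to place the generated SDR commands in a file of your choosing and review them before applying the system partitioning (the output path here is illustrative; the layout directory is the one used in the Examples section):

verparvsd -o /tmp/partitionVSDdata config.4_4_8/layout.6
more /tmp/partitionVSDdata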

The verparvsd command is not valid on a system with an SP Switch2 switch or on a switchless clustered enterprise server system.

The spapply_config command invokes this command and its output to effect IBM Virtual Shared Disk system partitioning. You can also invoke the command prior to invoking the spapply_config command to see how well suited the desired layout is for the existing IBM Virtual Shared Disk configuration as defined in the SDR.

This command only checks and processes the new system partitions listed on the command line. If some existing system partitions are to be unchanged in the system partitioning operation, do not list those system partition names on the command line. If no new system partitions are listed, the default is to process all system partitions in the layout directory.

This command checks whether the IBM Virtual Shared Disk data can be partitioned as specified by the layout directory without any problems. The command reports any problems it identifies, as well as how it would fix them.

The verparvsd command places global volume groups (GVGs) in the system partition containing their primary server node. Virtual shared disks are placed in the system partition of their GVG. HSDs are placed in the system partition containing their first virtual shared disk.

The verparvsd command looks for the following types of errors in each new system partition:

  1. Inconsistent VSD_max_buddy_buffer_size node attributes. The verparvsd command sets the VSD_max_buddy_buffer_size field for all virtual shared disk nodes in the system partition to the largest value of any node in the partition, and adjusts VSD_max_buddy_buffers so that each node's total buddy buffer stays the same size, or becomes only minimally larger than it was before (see the worked example after this list).
  2. A twin-tailed GVG with primary and secondary server nodes in different system partitions. GVGs are placed in the system partition of the primary server. If the secondary is in a different system partition, the verparvsd command will set the secondary server to NULL, making the GVG have only one server, the primary.
  3. An HSD with virtual shared disks in more than one system partition. The verparvsd command appends .BAD to the HSD's name. These HSDs would be unusable if the new system partition were applied and the VSD_adapter was css0.

    Conversely, if an HSD whose name ends in .BAD is found to have all of its virtual shared disks in one new system partition, the .BAD is removed from its name.

  4. Any duplicate GVG, virtual shared disk, or HSD name. The verparvsd command keeps the original name for the first occurrence it encounters, but generates unique names for any subsequent duplicates. New names follow these suggested naming conventions:
    GVG
    vg01n01 for a single-tailed GVG on node 1.
     
    vg01p01s02 for a twin-tailed GVG with primary server node 1 and secondary server node 2.
    VSD
    vsd01vg01n01 (that is, vsdnn followed by the GVG name)
    HSD
    hsd01 (that is, hsdnn)
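
As a worked example of the buddy buffer adjustment in error type 1 (the values are illustrative): if node 1 has a VSD_max_buddy_buffer_size of 65536 (64KB) with 16 buffers (a 1MB buddy buffer) and node 2 has 262144 (256KB) with 4 buffers (also 1MB), verparvsd sets VSD_max_buddy_buffer_size to 262144 on both nodes and adjusts node 1 to 4 buffers, leaving its total buddy buffer at 1MB.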

Files

/spdata/sys1/vsd/partitionVSDdata
The default location of the output file containing all the SDR commands to correctly system partition the IBM Virtual Shared Disk data.

Exit Values

The verparvsd command looks for the error types described previously in each new system partition and corrects them as specified. If no errors are found, or if the -F flag was specified and only correctable errors were found, the command succeeds; otherwise it fails.

In either case, verparvsd processes all the IBM Virtual Shared Disk data, generates a complete list of errors on standard error, and writes a complete SDR command list to the output file.

Security

You must have write access to the SDR to run this command.

Prerequisite Information

PSSP: Managing Shared Disks

Related Information

Commands: defhsd, defvsd, spapply_config, spdisplay_config, vsdnode, vsdvg

Examples

To see how well suited the configuration specified in the config.4_4_8/layout.6 layout directory is to your IBM Virtual Shared Disk configuration, enter:

verparvsd config.4_4_8/layout.6

vhostname

Purpose

vhostname - Sets or displays the virtual host name of the local host.

Syntax

vhostname [-s] [host_name]

Flags

-s
Trims any domain information from the printed name.

Operands

host_name
Sets the virtual host name to host_name.

Description

Use this command to display or set the virtual host name of the local host. Only users with root authority can set the virtual host name. The host_name is stored in the /etc/vhostname file.

When displaying the virtual host name, if the virtual host name has not been set and the /etc/vhostname file does not exist, vhostname returns the real host name from the kernel.

When setting the virtual host name, the /etc/vhostname file is created if it does not exist. If it does exist, its contents are overwritten with the new virtual host name.

To clear the virtual host name, remove the /etc/vhostname file.

Note:
You must have root authority to remove the /etc/vhostname file.

The virtual host name is used in failover situations where an application has associated the host name in the kernel of a particular machine with the service it provides. When such an application is restarted on a failover node that has a different host name, it may work incorrectly or not at all. If the application needs to associate a host name with a particular service and cannot handle multiple host names, a virtual host name can be provided. The application can call vhostname instead of hostname and get the host name of the node it normally runs on. This eliminates the need to change the real host name in the kernel on the failover node. Note that changing the real host name in the kernel can cause problems for other applications that rely on it to identify the physical machine.
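
For example, a failover-aware startup script can call vhostname instead of hostname (a minimal ksh sketch; the script and service are hypothetical):

#!/bin/ksh
# Obtain the service host name. vhostname returns the virtual host
# name if /etc/vhostname exists, or the real host name otherwise.
SERVICE_HOST=$(vhostname)
echo "Starting service on $SERVICE_HOST"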

Note:
The High Availability Cluster Multiprocessing (HACMP) event scripts provided with the High Availability Control Workstation (HACWS) option of the IBM Parallel System Support Programs for AIX (PSSP) set and clear the virtual host name in the HACMP pre- and post-event scripts. The administrator normally should not have to set or clear the virtual host name.

Files

/etc/vhostname
Contains the virtual host name.

Exit Values

0
Indicates that if an operand was used, the virtual host name was successfully set; if no operand was used, a virtual or real host name was printed.

1
Indicates that an error occurred.

Security

You must have root authority to use the host_name operand.

Related Information

Subroutines: getvhostname, setvhostname

AIX commands: hostname

AIX subroutines: gethostname, sethostname

Examples

  1. To display the virtual host name, enter:
    vhostname
    
  2. To set the virtual host name to spcw_prim, enter:
    vhostname spcw_prim
    
  3. To display the virtual host name and trim domain information for host donald.ibm.com, enter:
    vhostname -s
    

    A vhostname of donald prints out.

  4. To clear the virtual host name so it no longer exists, enter:
    rm /etc/vhostname
    

    Note:
    You must have root authority to remove the /etc/vhostname file.

vsdatalst

Purpose

vsdatalst - Displays IBM Virtual Shared Disk subsystem definition data from the System Data Repository (SDR).

Syntax

vsdatalst [-G] {-g | -n | -v} [-c]

Flags

-G
Displays information for all system partitions on the SP, not only the current system partition.

Only one of the following flags can be specified with each invocation of vsdatalst:

-g
Displays the following SDR virtual shared disk global volume group data:
global_group_name,
local_group_name,
primary_server_node,
secondary_server_node (only enabled with the Recoverable Virtual Shared Disk subsystem),
eio_recovery,
recovery,
CVSD server_list.

-n
Displays the following SDR virtual shared disk Node data:
node_number,
host_name,
adapter_name,
init_cache_buffer_count,
max_cache_buffer_count,
rw_request_count,
vsd_request_count,
min_buddy_buffer_size,
max_buddy_buffer_size,
max_buddy_buffers.

-v
Displays the following SDR virtual shared disk definition data:
vsd_name,
logical_volume_name,
global_group_name,
minor_number,
option (cache|nocache).

-c
Displays the following cluster information:
node_number
cluster_name

Operands

None.

Description

Use this command to display one of several kinds of information to standard output.

You can use the System Management Interface Tool (SMIT) to run the vsdatalst command. To use SMIT, enter:

smit list_vsd

and select the option for the kind of IBM Virtual Shared Disk SDR information you want to see.

Security

You must be in the AIX bin group to run this command.

Prerequisite Information

PSSP: Managing Shared Disks

Related Information

Commands: lsvsd, updatevsdnode, vsdnode

Examples

  1. To display SDR virtual shared disk global volume group data, enter:
    vsdatalst -g
    

    The system displays a message similar to the following:

    Note:
    backup or secondary_server_node is only enabled with the Recoverable Virtual Shared Disk subsystem.
    VSD Global Volume Group Information
    Global Volume     Local     Server Node  Numbers:  eio_
    Group name        VG name   primary      backup    recovery  Recovery
    ----------------  --------  -----------  --------  --------  --------
    hunter-rileysvg   rileysvg       1         0          0         0
    ppstest1-rootvg   rootvg         3         0          0         0
    tattooine-rootvg  rootvg         2         0          0         0
    
  2. To display SDR virtual shared disk node data, enter:
    vsdatalst -n
    

    The system displays a message similar to the following:

    VSD Node Information
                          Initial Maximum  VSD    rw    Buddy Buffer:
    node           VSD    cache   cache    req.   req.  min.  max.    size: #
    #    host_name adapt. buffers buffers  count  count size  size    maxbufs
    ---- --------- ------ ------- -------  -----  ----- ---- ----------------
     1   hunter     tr0     64      256     256    48   4096  65536     4
     2   tattooine  tr0     64      256     256    48   4096  65536     4
     3   ppstest1   tr0     64      256     256    48   4096  65536     4
    
  3. To display SDR virtual shared disk definition data, enter:
    vsdatalst -v
    

    The system displays a message similar to the following:

    VSD Table
    VSD name          logical volume  Global Volume Group     minor# option
    ----------------- --------------- ----------------------- ------ ------
    vsd.rlv01         rlv01           hunter-rileysvg              2 cache
    vsd.rlv02         rlv02           hunter-rileysvg              3 cache
    vsd.vsd1          vsd1            tattooine-rootvg             1 nocache
    vsd.vsdp1         vsd1            ppstest1-rootvg              4 nocache
    

vsdchgserver

Purpose

vsdchgserver - Switches the server function for one or more virtual shared disks from the node that is currently acting as the server node to the other.

Syntax

vsdchgserver
-g vsd_global_volume_group_name -p primary_node
 
[-b secondary_node] [-o EIO_recovery]

Flags

-g
Specifies the Global Volume Group name for the volume group that represents all the virtual shared disks defined on a particular node.

-p
Specifies the node number defined as the primary server node for the global volume group specified with the -g flag. The value of the -p option must be the same as the current acting server of the global volume group.

-b
Specifies the node number defined as the secondary server node for the global volume group specified with the -g flag. If the -b flag is not specified, the secondary_node is set to undefined in the System Data Repository (SDR). If the current secondary_node in the SDR is not defined and the -b flag is specified, the vsdchgserver command sets the secondary_node for the global volume group specified with the -g flag.

-o
Specified as 0, for no recovery on an EIO error, or 1, for recovery on an EIO error. The default is the current value defined in the SDR.

Operands

None.

Description

The vsdchgserver command allows the serving function for a global volume group defined on a primary node to be taken over by the secondary node, or to be taken over by the primary node from the secondary node. This allows an application to continue to use virtual shared disks in situations where the cable or adapter between the physical disks and one of the attached nodes is not working.

The Recoverable Virtual Shared Disk subsystem automatically updates the virtual shared disk devices if, and only if, the vsdchgserver command is used to flip the currently defined primary node and secondary node of the global volume group specified with the -g flag.
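
To confirm the new server assignments after switching, you can list the global volume group data afterward (an illustrative sequence, using the global volume group from the example below):

vsdchgserver -g node12vg -p 1 -b 2 -o 1
vsdatalst -g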

Security

You must be in the AIX bin group and have write access to the SDR to run this command.

Prerequisite Information

PSSP: Managing Shared Disks

Related Information

Refer to PSSP: Managing Shared Disks for information on how to use this command in writing applications.

Location

/usr/lpp/csd/bin/vsdchgserver

Examples

To change the primary server node for the global volume group node12vg to node 1 and the secondary node to node 2, with EIO recovery, enter:

vsdchgserver -g node12vg -p 1 -b 2 -o 1

vsddiag

Purpose

vsddiag - Displays information about the status of virtual shared disks.

Syntax

vsddiag

Flags

None.

Operands

None.

Description

This command displays information about virtual shared disks that can help you determine their status and collect information that helps IBM service representatives diagnose system problems.

Note:
The vsddiag command can only be used when no virtual shared disk I/O is in progress.

Security

You must have access to the virtual shared disk subsystem via the sysctl service to run this command.

Prerequisite Information

PSSP: Managing Shared Disks

Location

/usr/lpp/csd/bin/vsddiag

Related Information

Commands: vsdatalst, vsdsklst

Examples

To display information about the virtual shared disks in your system or system partition, enter:

vsddiag

If all virtual shared disks are created and configured correctly, the output is:

Checking server vsds
Checking VSD request sequence number.
Checking device drivers.
end of vsdl1diag:checkvsdl1 program.

If there are no virtual shared disks defined, the output is:

k5n02.ppd.pok.ibm.com
VSD_ERROR:3:No IBM Virtual Shared Disks are configured on this node.
k5n01.ppd.pok.ibm.com
VSD_ERROR:3:No IBM Virtual Shared Disks are configured on this node.
Checking server vsds
Checking VSD request sequence number.
Checking device drivers.
end of vsdl1diag:checkvsdl1 program.

If there is something wrong with the virtual shared disks, the output is:

k5n02.ppd.pok.ibm.com
VSD_ERROR:3:No IBM Virtual Shared Disks are configured on this node.
k5n01.ppd.pok.ibm.com
VSD_ERROR:3:No IBM Virtual Shared Disks are configured on this node.
Checking server vsds
Checking VSD request sequence number.
Checking device drivers.
vsdl1diag:checkvsdl1: 0034-619 Device driver on node 14 is not at the
same level as others on this SP system or system partition.
vsdl1diag:checkvsdl1: 0034-620 VSD Maximum IP Message Size on node 14 is
not at the same level as others on this SP system or system partition.

vsdelnode

Purpose

vsdelnode - Removes IBM Virtual Shared Disk information for a node or series of nodes from the System Data Repository (SDR).

Syntax

vsdelnode node_number ...

Flags

None.

Operands

node_number
Specifies the number attribute assigned to a node in the SDR.

Description

This command is used to remove IBM Virtual Shared Disk data for a node or series of nodes from the SDR.

The vsdelnode command makes the listed nodes no longer virtual shared disk nodes, so that no virtual shared disks can be accessed from them. The command is unsuccessful for any node that is a server for any global volume group.
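
Because the command fails for server nodes, you can first list the global volume group data to confirm that none of the nodes being deleted is a primary or secondary server (an illustrative check, using the nodes from the example below):

vsdatalst -g
vsdelnode 3 6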

You can use the System Management Interface Tool (SMIT) to run the vsdelnode command. To use SMIT, enter:

smit delete_vsd

and select the Delete Virtual Shared Disk Node Information option.

Security

You must be in the AIX bin group and have write access to the SDR to run this command.

Restrictions

If you have the Recoverable Virtual Shared Disk software installed and operational, do not use this command. The results may be unpredictable.

See PSSP: Managing Shared Disks.

Prerequisite Information

PSSP: Managing Shared Disks

Related Information

Commands: vsdatalst, vsdnode

Examples

To delete virtual shared disk node information for nodes 3 and 6, enter:

vsdelnode 3 6

vsdelvg

Purpose

vsdelvg - Removes virtual shared disk global volume group information from the System Data Repository (SDR).

Syntax

vsdelvg [-f] global_group_name ...

Flags

-f
Forces the removal of any virtual shared disks defined on this global volume group.

Operands

global_group_name
Specifies the volume group that you no longer want to be global to the system.

Description

Use this command to remove virtual shared disk global volume group information from the SDR. If any virtual shared disks are defined on a global volume group, the vsdelvg command is unsuccessful unless -f is specified. If -f is specified, any such virtual shared disks must be unconfigured and in the defined state on all virtual shared disk nodes before they can be deleted.
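
For example, before forcing the removal you might confirm the state of the affected virtual shared disks (a sketch; the -l flag of lsvsd, assumed here to list state information, and the volume group name are taken as given from the example below):

lsvsd -l
vsdelvg -f vg1n1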

You can use the System Management Interface Tool (SMIT) to run the vsdelvg command. To use SMIT, enter:

smit delete_vsd

and select the Delete Virtual Shared Disk Global Volume Group Information option.

Security

You must be in the AIX bin group and have write access to the SDR to run this command.

Prerequisite Information

PSSP: Managing Shared Disks

Location

/usr/lpp/csd/bin/vsdelvg

Related Information

Commands: undefvsd, vsdatalst, vsdvg

Examples

To delete the virtual shared disk information associated with global volume group vg1n1 from the SDR, enter:

vsdelvg vg1n1

vsdnode

Purpose

vsdnode - Enters IBM Virtual Shared Disk information for a node or series of nodes into the System Data Repository (SDR).

Syntax

vsdnode
node_number... adapter_name init_cache_buffer_count
 
max_cache_buffer_count vsd_request_count rw_request_count
 
min_buddy_buffer_size max_buddy_buffer_size max_buddy_buffers
 
vsd_max_ip_msg_size [cluster_name]

Flags

None.

Operands

node_number
Specifies the node or nodes whose virtual shared disk information is to be set as identified by the node_number attribute of the SDR node class.

adapter_name
Specifies the adapter name to be used for virtual shared disk communications for the nodes specified. The adapter name must already be defined to the nodes. Note that the nodes involved in IBM Virtual Shared Disk support must be fully connected so that proper communications can take place. Use css0 to specify that the IBM Virtual Shared Disk device driver transmits data requests over the SP Switch. The css0 adapter will be used the next time the IBM Virtual Shared Disk device driver is loaded.

init_cache_buffer_count
Specifies the number of 4KB blocks you want to assign to an optional cache if you do not use the switch as your adapter. The recommended value is 256.
Note:
IBM Virtual Shared Disk caching is no longer supported. This information will still be accepted for compatibility with previous releases, but the IBM Virtual Shared Disk device driver will ignore the information.

max_cache_buffer_count
Specifies the maximum number of buffers to be used for virtual shared disk caching for the nodes specified. The recommended initial value is 256. If you use the switch as your adapter, no cache buffer is allocated.
Note:
IBM Virtual Shared Disk caching is no longer supported. This information will still be accepted for compatibility with previous releases, but the IBM Virtual Shared Disk device driver will ignore the information.

vsd_request_count
This value is ignored, but a value must be specified for coexistence. The device driver dynamically allocates the structures it needs. The previous recommended value was 256.

rw_request_count
This value is ignored, but a value must be specified for coexistence. The device driver dynamically allocates the structures it needs. The previous recommended value was 48.

min_buddy_buffer_size
Specifies the smallest buddy buffer a server uses to satisfy a remote request to a virtual shared disk. This value must be a power of 2 and greater than or equal to 4096. IBM suggests using a value of 4096 (4KB). For a 512-byte request, 4KB is excessive; however, a buddy buffer is only used for the short period of time while a remote request is being processed at the server node.

max_buddy_buffer_size
Specifies the largest buddy buffer a server uses to satisfy a remote noncached request. This value must be a power of 2 and greater than or equal to the min_buddy_buffer_size. IBM suggests using a value of 262144 (256KB). This value depends on the I/O request size of applications using the virtual shared disks and the network used by the IBM Virtual Shared Disk software.

max_buddy_buffers
Specifies the number of max_buddy_buffer_size buffers to allocate. The buddy buffer is pinned kernel memory, allocated when the IBM Virtual Shared Disk device driver is first loaded and freed when the device driver is unconfigured from the kernel. The recommended value is in the range of 32 to 96 buddy buffers when max_buddy_buffer_size is set to 256KB (see the worked example following these operand descriptions).

Buddy buffers are only used on the servers. On client nodes you may want to set max_buddy_buffers to 1.

Note:
The statvsd command indicates whether remote requests are queueing while waiting for buddy buffers.

vsd_max_ip_msg_size
Specifies the maximum message size in bytes for virtual shared disks. If you use SMIT to define the virtual shared disk node, the default is 61440 (60KB), which is the recommended value for the switch.

cluster_name
A cluster name must be specified for server nodes that will serve concurrently accessed shared disks (CVSD). The cluster name can be any user-provided name. A node can belong to only one cluster. For example, in a concurrent access environment, the two servers for a CVSD must specify the same cluster name.
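
As a worked example of the buddy buffer parameters (using the values from the example at the end of this entry): with a max_buddy_buffer_size of 262144 (256KB) and max_buddy_buffers set to 32, the device driver pins 32 x 256KB = 8MB of kernel memory on a server node; a client node with max_buddy_buffers set to 1 pins only 256KB.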

Description

Use this command to make the specified nodes virtual shared disk nodes and to assign their IBM Virtual Shared Disk operational parameters. The operational parameters are: adapter name, initial cache buffer count, maximum cache buffer count, read/write request count, virtual shared disk request count, and buddy buffer parameters. If this information is the same for all nodes, run this command once. If the information is different for the nodes, run this command once for each block of nodes that should have the same virtual shared disk information.
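
For example, since buddy buffers are used only on servers, you might run the command once for the server nodes and once for the client nodes, differing only in max_buddy_buffers (an illustrative split; the other values match the example at the end of this entry):

vsdnode 1 2 css0 64 256 256 48 4096 262144 32 61440
vsdnode 3 4 5 6 7 8 css0 64 256 256 48 4096 262144 1 61440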

You can use the System Management Interface Tool (SMIT) to run the vsdnode command. To use SMIT, enter:

smit vsd_data

and select the IBM Virtual Shared Disk Node Information option.

Security

You must be in the AIX bin group and have write access to the SDR to run this command.

Prerequisite Information

PSSP: Managing Shared Disks

Location

/usr/lpp/csd/bin/vsdnode

Related Information

Commands: updatevsdnode, vsdatalst, vsdelnode

Refer to PSSP: Managing Shared Disks for defining virtual shared disk information in the SDR.

Examples

The following example adds SDR information for a css0 network and nodes 1 through 8.

vsdnode 1 2 3 4 5 6 7 8 css0 64 256 256 48 4096 262144 32 61440
 

vsdsklst

Purpose

vsdsklst - Produces output that shows you the disk resources used by the IBM Virtual Shared Disk subsystem across a system or system partition.

Syntax

vsdsklst [-v] [-d] {-a | -n node_number[,node_number2, ...]} [-G]

Flags

-v
Displays only disk utilization information about volume groups and the virtual shared disks associated with them.

-d
Displays only disk utilization information about volume groups and the physical disks associated with them.

-a
Displays specified information for all nodes in the system or system partition.

-n node_number
Lists one or more node numbers for which information is to be displayed.

-G
Displays global disk information (across system partitions).

Operands

None.

Description

Use this command to check disk utilization across a system or system partition.

Security

You must have access to the virtual shared disk subsystem via the sysctl service to run this command.

Prerequisite Information

PSSP: Managing Shared Disks

Location

/usr/lpp/csd/bin/vsdsklst

Related Information

Commands: vsdatalst

Examples

This command:

vsdsklst -dv -a

displays the following information on a system that has volume groups and virtual shared disks defined on nodes 1, 3, 5, 7, 10, and 12. Node 5 is temporarily inactive.

k7n12.ppd.pok.ibm.com
Node Number:12; Node Name:k7n12.ppd.pok.ibm.com
    Volume group:rootvg; Partition Size:4; Total:537; Free:315
        Physical Disk:hdisk0; Total:537; Free:315
    Volume group:vsdvg; Partition Size:4; Total:537; Free:533
        Physical Disk:hdisk1; Total:537; Free:533
        VSD Name:1HsD8n12{lv1HsD8n12}; Size:2
        VSD Name:1HsD20n12{lv1HsD20n12}; Size:2
k7n01.ppd.pok.ibm.com
Node Number:1; Node Name:k7n01.ppd.pok.ibm.com
    Volume group:rootvg; Partition Size:4; Total:537; Free:210
        Physical Disk:hdisk0; Total:537; Free:210
    Volume group:vsdvg; Partition Size:4; Total:537; Free:533
        Physical Disk:hdisk1; Total:537; Free:533
        VSD Name:1HsD1n1{lv1HsD1n1}; Size:2
        VSD Name:1HsD13n1{lv1HsD13n1}; Size:2
k7n05.ppd.pok.ibm.com
No response
k7n10.ppd.pok.ibm.com
Node Number:10; Node Name:k7n10.ppd.pok.ibm.com
    Volume group:rootvg; Partition Size:4; Total:537; Free:303
        Physical Disk:hdisk0; Total:537; Free:303
        VSD Name:vsdn10v1{lvn10v1}; Size:4
        VSD Name:vsdn10v2{lvn10v2}; Size:4
        VSD Name:vsdn10v3{lvn10v3}; Size:4
    Volume group:vsdvg; Partition Size:4; Total:537; Free:533
        Physical Disk:hdisk1; Total:537; Free:533
        VSD Name:1HsD6n10{lv1HsD6n10}; Size:2
        VSD Name:1HsD18n10{lv1HsD18n10}; Size:2
k7n03.ppd.pok.ibm.com
Node Number:3; Node Name:k7n03.ppd.pok.ibm.com
    Volume group:rootvg; Partition Size:4; Total:537; Free:269
        Physical Disk:hdisk0; Total:537; Free:269
        VSD Name:vsdn03v1{lvn03v1}; Size:4
        VSD Name:vsdn03v2{lvn03v2}; Size:4
        VSD Name:vsdn03v3{lvn03v3}; Size:4
    Volume group:vsdvg; Partition Size:4; Total:537; Free:533
        Physical Disk:hdisk1; Total:537; Free:533
        VSD Name:1HsD2n3{lv1HsD2n3}; Size:2
        VSD Name:1HsD14n3{lv1HsD14n3}; Size:2
k7n07.ppd.pok.ibm.com
Node Number:7; Node Name:k7n07.ppd.pok.ibm.com
    Volume group:rootvg; Partition Size:4; Total:537; Free:300
        Physical Disk:hdisk0; Total:537; Free:300
        VSD Name:vsdn07v1{lvn07v1}; Size:4
        VSD Name:vsdn07v2{lvn07v2}; Size:4
        VSD Name:vsdn07v3{lvn07v3}; Size:4
    Volume group:vsdvg; Partition Size:4; Total:537; Free:533
        Physical Disk:hdisk1; Total:537; Free:533
        VSD Name:1HsD4n7{lv1HsD4n7}; Size:2
        VSD Name:1HsD16n7{lv1HsD16n7}; Size:2

To view the output for a specific node, type:

vsdsklst -n 12

The output is:

k7n12.ppd.pok.ibm.com
Node Number:12; Node Name:k7n12.ppd.pok.ibm.com
    Volume group:rootvg; Partition Size:4; Total:537; Free:315
        Physical Disk:hdisk0; Total:537; Free:315
    Volume group:vsdvg; Partition Size:4; Total:537; Free:533
        Physical Disk:hdisk1; Total:537; Free:533
        VSD Name:1HsD8n12{lv1HsD8n12}; Size:2
        VSD Name:1HsD20n12{lv1HsD20n12}; Size:2

If both the rootvg and testvg volume groups are varied on, the system displays output similar to the following:

Node Number:12; Node Name:k21n12.ppd.pok.ibm.com
    Volume group:rootvg; Partition Size:4; Total:537; Free:47
        Physical Disk:hdisk0; Total:537; Free:47
        VSD Name:1HsD1n12[lv1HsD1n12]; Size:5
        VSD Name:1HsD2n12[lv1HsD2n12]; Size:5
        VSD Name:vsd4n12[lvvsd4n12]; Size:4
        VSD Name:vsd5n12[lvvsd5n12]; Size:4
        VSD Name:vsd6n12[lvvsd6n12]; Size:4
    Volume group:testvg; Partition Size:4; Total:537; Free:313
        Physical Disk:hdisk1; Total:537; Free:313
        VSD Name:vsd14n12[lvvsd14n12]; Size:4

If the testvg volume group is not varied on, the system displays output similar to the following:

Node Number:12; Node Name:k21n12.ppd.pok.ibm.com
    Volume group:rootvg; Partition Size:4; Total:537; Free:47
        Physical Disk:hdisk0; Total:537; Free:47
        VSD Name:1HsD1n12[lv1HsD1n12]; Size:5
        VSD Name:1HsD2n12[lv1HsD2n12]; Size:5
        VSD Name:vsd4n12[lvvsd4n12]; Size:4
        VSD Name:vsd5n12[lvvsd5n12]; Size:4
        VSD Name:vsd6n12[lvvsd6n12]; Size:4
    Volume group:testvg is not varied on.
        Physical Disk:hdisk1;

Instead of issuing this command directly, you should use the appropriate SMIT panels, which display the information in the best format. To view information about volume groups, type:

smit lsvg

To view information about logical volumes, type:

smit lslv

To view information about physical volumes, type:

smit lspv

vsdvg

Purpose

vsdvg - Defines a virtual shared disk global volume group.

Syntax

vsdvg
[-g global_volume_group] {-l server_list local_group_name | local_group_name primary_node [secondary_node] [eio_recovery]}

Flags

-g global_volume_group
Specifies a unique name for the new global volume group. This name must be unique across the system partition. It should be unique across the SP, to avoid any naming conflicts during future system partitioning operations. The suggested naming convention is vgxxnyy, where yy is the node number, and xx uniquely numbers the volume groups on that node. If this is not specified, the local group name is used for the global name. The length of the name must be less than or equal to 31 characters.

-l server_list
Defines the list of servers for CVSD. More than one server indicates that the global_volume_group is a concurrent volume group.

Operands

local_group_name
Specifies the name of a volume group that you want to indicate as being used for virtual shared disks. This name is local to the host upon which it resides. The length of the name must be less than or equal to 15 characters.

primary_node
Specifies the primary server node on which the volume group resides. The length of the name must be less than or equal to 31 characters. This can be specified in four different ways:

secondary_node
Specifies the secondary server node on which the volume group resides. The length of the name must be less than or equal to 31 characters.

This can be specified in four different ways:

Note:
This operand is used only by the Recoverable Virtual Shared Disk subsystem.

Description

Use this command to define volume groups for use by the IBM Virtual Shared Disk subsystem. This is done by specifying the local volume group name, the node on which it resides, and the name by which the volume group will be known throughout the cluster.

If eio_recovery is set to 1 and a disk error (EIO error) occurs, the IBM Recoverable Virtual Shared Disk system performs a full recovery by flipping the current primary node and the secondary node and retrying once on the new primary node.
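
For example, to register the local volume group myvg on node 17 under a global name that follows the suggested convention (the names here are illustrative):

vsdvg -g vg01n17 myvg 17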

You can use the System Management Interface Tool (SMIT) to run the vsdvg command. To use SMIT, enter:

smit vsd_data

and select the Virtual Shared Disk Global Volume Group Information option.

Security

You must be in the AIX bin group and have write access to the SDR to run this command.

Restrictions

The secondary_node operand is used only by the Recoverable Virtual Shared Disk subsystem.

Prerequisite Information

PSSP: Managing Shared Disks

Location

/usr/lpp/csd/bin/vsdvg

Related Information

Commands: vsdelvg

Examples

  1. The following example adds SDR information indicating that the volume group known as vg2n17 on node 17 is available for global access and is known to the cluster as vg2n17. Node 17 is the primary and only server.
    vsdvg vg2n17 17
    
  2. The following example with the Recoverable Virtual Shared Disk subsystem adds SDR information indicating that the volume group known as vg1p3s15 on nodes 3 and 15 is available for global access and is known to the cluster as vg1p3s15. 3 is the primary server node and 15 is the secondary server node.
    vsdvg vg1p3s15 3 15
    

vsdvgts

Purpose

vsdvgts - Reads the timestamp from the volume group descriptor area (VGDA) of the physical disks and sets the value in the System Data Repository (SDR).

Syntax

vsdvgts [-a] [volgrp]

Flags

-a
Specifies that the timestamps for this volume group for both primary and secondary nodes should be updated. If this flag is not specified, the timestamp is updated on the local node only.

Operands

volgrp
Specifies a volume group. If this operand is not specified, the timestamps for all the volume groups on this node are updated.

Description

Use this command to update the timestamp that the Recoverable Virtual Shared Disk subsystem uses to determine if a twin-tailed volume group has changed. When the subsystem detects a change, the recovery scripts export the volume group and then import the volume group.

This command can be used to avoid exporting the volume group and then importing the volume group during recovery in situations where the export and import operations are not really necessary. This command should be used very carefully.
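
For example, to update the timestamps on both the primary and secondary nodes for the same volume group (using the volume group from the example below), enter:

vsdvgts -a vsdvg1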

Exit Values

0
Indicates the successful completion of the command.

1
Indicates that the program was unable to read one or more timestamps.

Security

You must be in the AIX bin group and have write access to the SDR to run this command.

Implementation Specifics

This command is part of the Recoverable Virtual Shared Disk optional component of PSSP.

Prerequisite Information

See PSSP: Managing Shared Disks.

Location

/usr/lpp/csd/bin/vsdvgts

Examples

To update the timestamp associated with the virtual shared disk volume group vsdvg1 for just this node, enter:

vsdvgts vsdvg1

vsdvts

Purpose

vsdvts - Verifies that the IBM Virtual Shared Disk component works.

Syntax

vsdvts [-b block_size] [-n number_of_blocks] vsd_name [file]

Flags

-b
Specifies the block_size used on the read and write calls to the virtual shared disk. Because the virtual shared disk raw device is used, the block size must be a multiple of 512. The default block size is 4096.

-n
Specifies the number of blocks of the file to read. The default is to read 1MB of data from the file, so 1MB divided by block_size is the default number of blocks. Specifying 0 means to read as many full blocks of data as there are in the file. If more blocks are specified than are in the file, only the number of full blocks that exist will be used.

Operands

vsd_name
Specifies the virtual shared disk to be verified (that is, the disk that will be written and read with the data from the file). The virtual shared disk should be in the active state. Ensure that the virtual shared disk is large enough to hold all the data you plan to write to it. A virtual shared disk on a logical volume with one physical partition is large enough if all the vsdvts defaults are taken.

file
Specifies the file to be written to the virtual shared disk to verify its operation. The data is then read from the virtual shared disk and compared to this file to ensure the virtual shared disk read and write operations are successful. The default file is /unix.

Description

Attention

Data on vsd_name and its underlying logical volume is overwritten and, therefore, destroyed. Use this command after you have defined a virtual shared disk (including its underlying logical volume), but before placing application data on it.

Use this command to verify that the vsd_name is in the active state and then to write the specified part of file to the raw vsd_name device, /dev/rvsd_name. This command reads the data back from the virtual shared disk, then compares it to file. If the data is the same, the test is successful and vsdvts succeeds. Otherwise, vsdvts is unsuccessful. The dd command is used for all I/O operations.

Try vsdvts on both a server and a client node (that is, on the node with the logical volume and on one without it).
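
For example, to write and verify 128 blocks of 8KB each (1MB in total) from a file of your choosing (the file name here is illustrative; vsd1 is a newly created virtual shared disk as in the example below), enter:

vsdvts -b 8192 -n 128 vsd1 /tmp/vsdtest.dat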

Security

You must be in the AIX bin group and have write access to the SDR to run this command.

Prerequisite Information

PSSP: Managing Shared Disks

Related Information

Commands: vsdnode, vsdvg, defvsd, cfgvsd, startvsd, dd

The preceding commands are listed in their order of use.

Examples

To verify that the IBM Virtual Shared Disk component works, choose a newly created vsd that has no application data on it, say vsd1, and enter:

vsdvts vsd1

wrap_test

Purpose

wrap_test - Checks the function of a link.

Attention

ATTENTION - READ THIS FIRST: Do not activate the switch advanced diagnostic facility until you have read this section completely, and understand this material. If you are not certain how to properly use this facility, or if you are not under the guidance of IBM Service, do not activate this facility.

Activating this facility may result in degraded performance of your system. Activating this facility may also result in longer response times, higher processor loads, and the consumption of system disk resources. Activating this facility may also obscure or modify the symptoms of timing-related problems.

Syntax

wrap_test
{[-j jack] [-s switch_chip_id -p switch_chip_port]}
 
[-c cable_length] [-n {0|1}] [-h]

Flags

-j jack
Specifies the Frame-Switch-BulkHead-Jack connected to the suspected link.

-s switch_chip_id
Specifies the ID of a switch chip connected to the suspected link.

-p switch_chip_port
Specifies the number of the switch chip port connected to the suspected link.

-c cable_length
Specifies the length of the cable in meters. This flag is applicable only to links connecting two switches. The default value is 10 m.

-n {0|1}
Specifies the plane where the test will be run. If a plane is not specified, the default is 0.

-h
Displays usage information.

Operands

None.

Description

The wrap_test command checks the functionality of a suspected link and points to the faulty part of the link that should be replaced. You must specify either the Frame-Switch-BulkHead-Jack, or the switch_chip_id and switch_chip_port number that identify the switch chip port connected to the link. If the suspected link connects two switches, you can also specify the cable_length parameter, which helps the wrap test choose the correct technique for testing the cable.

If the link under test connects a switch to a node, you are required to fence the node before running the test. If the link under test connects two switches, the link will be disabled during the test.
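
For example, when testing a switch-to-node link you might fence the node first and unfence it afterward (a sketch assuming the PSSP Efence and Eunfence commands are available; the node and chip numbers are illustrative):

Efence 5
wrap_test -s 23 -p 3
Eunfence 5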

Security

When restricted root access (RRA) is enabled, this command can only be run from the control workstation.

Location

/usr/lpp/ssp/bin/spd/wrap_test

Examples

  1. To test the link connected to jack 6 of switch 17 of frame 1 with a 15 m cable (switch - switch link), enter:
    wrap_test -j E01-S17-BH-J6 -c 15
  2. To test the link connected to port 3 of switch chip 23, enter:
    wrap_test -s 23 -p 3
  3. To specify that you want the test performed on the second plane, enter:
    wrap_test -n 1

