You can use the PSSP dsh command from a single point of control to execute commands on:
You do not have to limit the target nodes to those nodes on your SP system. The dsh command uses the remote command process named and enabled by the RCMD_PGM and the DSH_REMOTE_CMD environment variables, as explained below. The dsh command can execute commands on any host in your network to which you can issue remote commands. Using dsh rather than an rsh loop offers better performance because the commands can run concurrently.
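For example, the following sketch runs a command on a host that is not part of the SP system; extserver1 is a hypothetical host name to which you can already issue remote commands:

dsh -w extserver1 uptime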
PSSP 3.4 supports three environment variables that let you choose whether the PSSP software uses the AIX rsh and rcp remote commands or a secure remote command process for parallel commands such as dsh, pcp, and others. The following are the environment variables and how to use them:
Like the restricted root access option, a secure remote command process can also be enabled by using the SP Site environment SMIT menu or the spsitenv command. It is extremely important to keep these environment variables consistent and set to the remote command process you want to use. See Secure remote command process for more information.
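For example, the following is a minimal sketch of setting the two variables named above in a ksh session. The values shown are assumptions: use the RCMD_PGM value and the secure remote command path that are documented for your PSSP level.

# values shown are assumptions; substitute those documented for your PSSP level
export RCMD_PGM=secrshell
export DSH_REMOTE_CMD=/bin/ssh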
The dsh command provides several ways for you to specify input:
The following dsh command targets three nodes, reads commands from stdin, and filters the output of the ps command remotely. The quotation marks around the pipe keep the local shell from interpreting it, so the grep command runs on each node:
dsh -w host1,host2,host3
dsh> ps -ef "|" grep root
The same command can be specified entirely on the dsh command line:

dsh -w host1,host2,host3 ps -ef "|" grep root
The dsh command sends the commands you specify to a set of nodes called the working collective. The dsh process builds the working collective from the first of these sources that exists:
If neither of these exists, dsh reports an error and issues no commands.
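For example, the following sketch defines the working collective through the WCOLL environment variable, assuming your PSSP level reads the working collective file that WCOLL names; /tmp/nodes is a hypothetical file listing one host name per line:

# /tmp/nodes is a hypothetical file containing host1, host2, and host3, one per line
export WCOLL=/tmp/nodes
dsh date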
You can have commands run on the working collective concurrently or in sequence. By default they run concurrently, but you can specify a maximum number of nodes on which commands execute concurrently, to prevent system degradation. When this maximum number, or fanout, is reached, dsh waits for results from the outstanding remote commands before issuing any further commands. Results are displayed as soon as they return from the remotely executed commands.
This example specifies a maximum of 8 concurrent commands even though the working collective is defined as the entire system partition.
dsh -f 8 -a cat /var/adm/SPlogs/sysman/"*"config.log"*" | pg
The dsh process displays the information returned from the commands, grouped by host name. The stdout of the remotely executed commands goes to the stdout of the dsh command; their stderr goes to the stderr of the dsh command.
All lines of data in the dsh command stderr and stdout results are prefixed by the name of the host that sent them. You can format dsh stderr and stdout lines by piping them to the dshbak command, which strips off the hostnames and displays the lines grouped by host in alphabetic sequence, as shown in this example:
dsh -w host1,host2,host3 cat /etc/passwd 2>&1 | dshbak
The following example uses a node group:
dsh -N bis_nodes cat /etc/bootptab 2>&1 | dshbak
Alternatively, the dshbak command can be specified with the -c option. This causes any identical output from two or more nodes to be shown only once, with the hostnames displayed above the output. Remember, however, that the most efficient way to filter large amounts of output from parallel commands is to filter on the nodes before the output is returned to the workstation from which the parallel command was issued.
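For example, the following sketch pipes a parallel oslevel command through dshbak -c, assuming the bis_nodes node group from the previous example; nodes that return identical output are reported together under a single copy of that output:

dsh -N bis_nodes oslevel 2>&1 | dshbak -c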
No special error recovery is provided for the dsh command on remote hosts. If dsh finds that a node in the working collective is down, no further commands are sent to that node unless you specify the -c (continue) flag on the command line. If hosts are down, the underlying remote command times out in approximately 2.5 minutes.
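For example, the following sketch of an interactive session uses the -c flag so that dsh keeps sending the commands entered at the dsh> prompt to every host in the working collective, even to a host it has found down (host names are hypothetical):

dsh -c -w host1,host2,host3
dsh> date
dsh> uptime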