ADSM/TSM Quick Facts, in alphabetical order, supplemented thereafter by topic discussions, as compiled by Richard Sims (r b s @ b u . e d u), Boston University (www.bu.edu), Office of Information Technology.
On the web at http://people.bu.edu/rbs/ADSM.QuickFacts
Last update: 2004/10/06

This reference was originally created for my own use as a systems programmer's "survival tool", to accumulate essential information and references that I knew I would have to refer to again, and quickly re-find. In participating in the ADSM-L mailing list, it became apparent that others had a similar need, and so it made sense to share the information. The information herein derives from many sources, including submissions from other TSM customers. Thus, the information is that which everyone involved with TSM has contributed to a common knowledge base, and this reference serves as an accumulation of that knowledge, largely reflective of the reality of working with the TSM product as an administrator. I serve as a compiler and contributor.

This informal, "real-world" reference is intended to augment the formal, authoritative documentation provided by Tivoli and allied vendors, as frequently referenced herein. See the REFERENCES area at the bottom of this document for pointers to salient publications. Command syntax is included for the convenience of a roaming techie carrying a printed copy of this document, and thus is not to be considered definitive or inclusive of all levels for all platforms: refer to the manuals for the syntax specific to your environment. Upper case characters shown in command syntax indicate that at least those characters are required, not that they have to be entered in upper case.

I realize that I need to better "webify" this reference, and intend to do so in the future. (TSM administration is just a tiny portion of my work, and many other things demand my time.)
In dealing with the product, one essential principle must be kept in mind, which governs the way the product operates and restricts the server administrator's control of that data: the data which the client sends to a server storage pool will always belong to the client - not the server. There is no provision on the server for inspecting or manipulating file system objects sent by the client. Filespaces are the property of the client, and if the client decides not to do another backup, that is the client's business: the server shall take no action on the Active, non-expiring files therein. It is incumbent upon the server administrator, therefore, to maintain a relationship with client administrators for information to be passed when a filespace is obsolete and discardable, when it has fallen into disuse. ? "Match-one" wildcard character used in Include/Exclude patterns to match any single character except the directory separator; it does not match to end of string. Cannot be used in directory or volume names. * "Match-all" wildcard character used in Include/Exclude patterns to match zero or more characters, but it does not cross a directory boundary. Cannot be used in directory or volume names. * (asterisk) SQL SELECT: to specify that all columns in a table are being referenced, which is to say the entirety of a row. As in: SELECT PLATFORM_NAME, COUNT(*) AS "Number of nodes" FROM NODES *.* Wildcard specification often seen in Windows include-exclude specifications. Note that *.* means any file name with the '.' character anywhere in the name, whereas * means any file name. *SM Wildcard product name first used on ADSM-L by Peter Jodda to generically refer to the ADSM->TSM product - which has become adroit, given the increasing frequency with which IBM is changing the name of the product. See also: ESM; ITSM & (ampersand) Special character in the MOVe DRMedia, MOVe MEDia, and Query DRMedia commands, CMd operand, as the lead character for special variable names. 
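As a rough illustration of the '*' versus '*.*' distinction above, shell 'case' globbing behaves similarly when applied to bare file names. This is only an approximation, not TSM's matcher: TSM's '*' additionally stops at directory separators, which plain shell pattern matching does not model.

```shell
# Approximation only: shell 'case' globbing, applied to bare file names,
# mirrors the '*' vs '*.*' distinction described above.
matches() {        # usage: matches PATTERN NAME  -> prints yes or no
  case "$2" in
    $1) echo yes ;;
    *)  echo no  ;;
  esac
}
matches '*.*' 'report.txt'   # yes: the name contains a '.'
matches '*.*' 'README'       # no:  '*.*' requires a '.' in the name
matches '*'   'README'       # yes: '*' matches any file name
```

The same distinction is why a Windows exclude of "*.*" silently misses extensionless files that a plain "*" would have caught.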
[ "Open character class" bracket character used in Include/Exclude patterns to begin the enumeration of a character class. That is, to wildcard on any of the individual characters specified. End the enumeration with ']'; which is to say, enclose all the characters within brackets. You can code like [abc] to represent the characters a, b, and c; or like [a-c] to accomplish the same thing. Within the character class specification, you can code special characters with a backslash, as in [abc\]de] to include the ']' char.
> Redirection character in the server administrative command line interface, when there is at least one space on each side of it, saying to replace the specified output file. There is no "escape" character to render this character "un-special", as a backslash does in Unix. Thus, you should avoid coding " > " in an SQL statement: eliminate at least one space on either side of it. Ref: Admin Ref "Redirecting Command Output"
>> Redirection characters in the server administrative command line interface, when there is at least one space on each side of them, saying to append to the specified output file. Ref: Admin Ref "Redirecting Command Output"
{} Use braces in a file path specification within a query or restore/retrieve to isolate and explicitly identify the file space name (or virtual mount point name) to *SM, in cases where there can be ambiguity. By default, *SM uses the file space with the longest name which matches the beginning of that file path spec, and that may not be what you want. For example: If you have two filespaces "/a" and "/a/b" and want to query "/a/b/somefile" from the /a file system, specify "{/a/}somefile". See: File space, explicit specification
|| SQL: Logical OR operator. Also effects concatenation, as in SELECT filespace_name || hl_name || ll_name AS "_______File Name________" Note that not all SQL implementations support || for concatenation: you may have to use CONCAT() instead.
- "Character class range" character used in Include/Exclude patterns to specify a range of enumerated characters, as in "[a-z]".
] "Close character class" character used in Include/Exclude patterns to end the enumeration of a character class.
\ "Literal escape" character used in Include/Exclude patterns to cause an enumerated character class character to be treated literally, as when you want to include a closing square bracket as part of the enumerated string ([abc\]xyz]).
... "Match N directories" characters used in Include/Exclude patterns to match zero or more directories. Example: "exclude /cache/.../*" excludes all directories (and files) under directory "/cache/".
... As a filespace name being displayed at the server, indicates that the client stored the filespace name in Unicode, and the server lacks the "code page" which allows displaying the name in its Unicode form.
/ (slash) At the end of a filespec, in Unix means "directory". A 'dsmc i' on a filespec ending in a slash says to back up only directories with matching names. To back up files under the directories, you need to have an asterisk after the slash (/*). If you specify what you know to be a directory name, without a slash, *SM will doggedly believe it to be the name of a file - which is why you need to maintain the discipline of always coding directory names with a slash at the end.
/... In ordinary include-exclude statements, is a wildcard meaning zero or more directories.
/... DFSInclexcl: is interpreted as the global root of DFS.
/.... DFSInclexcl: Match zero or more directories (in that "/..." is interpreted as the global root of DFS).
/* */ Used in Macros to enclose comments. The comments cannot be nested and cannot span lines. Every line of a comment must contain the comment delimiters.
= (SQL) Is equal to. The SQL standard specifies that the equality test is case sensitive when comparing strings.
!= (not equal) For SQL, you instead need to code "<>".
<> SQL: Means "not equal".
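The [abc] / [a-c] equivalence noted above can be demonstrated with shell 'case' matching, which supports the same character-class and range syntax (again only an approximation of TSM's matcher, for illustration):

```shell
# Approximation: shell 'case' also supports character classes and ranges,
# so it can illustrate the '[abc]' / '[a-c]' equivalence noted above.
cls() {        # usage: cls CHAR -> yes if CHAR is in the range a-c
  case "$1" in
    [a-c]) echo yes ;;
    *)     echo no  ;;
  esac
}
cls b   # yes: 'b' is within the range a-c
cls d   # no:  'd' falls outside the range
```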
$$ACTIVE$$ The name given to the provisional active policy set where definitions have been made (manually or via Import), but you have not yet performed the required VALidate POlicyset and ACTivate POlicyset to commit the provisional definitions, whereafter there will be a policy set named ACTIVE. Ref: Admin Guide See also: Import 0xdeadbeef Some subsystems pre-populate allocated memory with the hexadecimal string 0xdeadbeef (this 32-bit hex value is a data processing affectation) so as to be able to detect that an application has failed to initialize an acquired subset with binary zeroes. Landing on a halfword boundary can obviously lead to getting variant "0xbeefdead". 10.0.0.0 - 10.255.255.255 Private subnet address range, as defined in RFC 1918, commonly used via Network Address Translation behind some firewall routers/switches. You cannot address such a subnet from the Internet: private subnet addresses can readily initiate communication with each other and servers on the Internet, but Internet users cannot initiate contacts with them. See also: 172.16.0.0 - 172.31.255.255; 192.168.0.0 - 192.168.255.255 1500 Server port default number for serving clients. Specify via TCPPort server option and DEFine SERver LLAddress. 1501 Client port for backups (schedule). Note that this port exists only when the scheduled session is due: the client does not keep a port when it is waiting for the schedule to come around. 1510 Client port for Shared Memory. 1543 ADSM HTTPS port number. 1580 Client admin port. HTTPPort default. 1581 Default HTTPPort number for the Web Client TCP/IP port. 172.16.0.0 - 172.31.255.255 Private subnet address range, as defined in RFC 1918, commonly used via Network Address Translation behind some firewall routers/switches. You cannot address such a subnet from the Internet: private subnet addresses can readily initiate communication with each other and servers on the Internet, but Internet users cannot initiate contacts with them. 
See also: 10.0.0.0 - 10.255.255.255; 192.168.0.0 - 192.168.255.255
192.168.0.0 - 192.168.255.255 Private subnet address range, as defined in RFC 1918, commonly used via Network Address Translation behind Asante and other brand firewall routers/switches. You cannot address such a subnet from the Internet: private subnet addresses can readily initiate communication with each other and servers on the Internet, but Internet users cannot initiate contacts with them. See also: 10.0.0.0 - 10.255.255.255; 172.16.0.0 - 172.31.255.255
2 GB limit Through AIX 4.1, Raw Logical Volume (RLV) partitions and files are limited to 2 GB in size. It takes AIX 4.2 to go beyond 2 GB.
2105 Model number of the IBM Versatile Storage Server. Provides SNMP MIB software ibm2100.mib. www.ibm.com/software/vss
3420 IBM's legacy, open-reel, half-inch tape format, circa 1974. Records data linearly in 9 tracks (1 byte plus odd parity). Reels could hold as much as 2400 feet of tape. Capacity: 150 MB. Pigment: Iron. Models 4, 6, 8 handle up to 6250 bpi, with an inter-block gap of 0.3". Reel capacity: Varies according to block size - max is 169 MB for a 2400' reel at 6250 bpi.
3466 See also: Network Storage Manager (NSM)
3466, number of *SM servers Originally, just one ADSM server per 3466 box. But as of 2000, multiple, as in allowing the 3466 to perform DR onto another TSM server. (See http://www.storage.ibm.com/nsm/nsmpubs/nspubs.htm)
3466 web admin port number 1580. You can specify it as part of the URL, like http://______:1580 .
3480, 3490, 3490E, 3590, 3494... IBM's high tape devices (3480, 3490, 3490E, 3590, 3494, etc.) are defined in SMIT under DEVICES then TAPE DRIVES; not thru ADSM DEVICES. This is because they are shipped with the tape hardware, not with ADSM. Also, these devices use the "/dev/rmtX" format: all other ADSM tape drives are of the "/dev/mtX" format.
3480 IBM's first generation of this 1/2" tape cartridge technology, announced March 22, 1984 and available January, 1985. Used a single-reel approach and servo tracking pre-recorded on the tape for precise positioning and block addressing. Excellent start-stop performance. The cartridge technology would endure and become the IBM cartridge standard, prevailing into the 3490 and 3590 models for at least 20 more years. Tracks: 18, recorded linearly and in parallel until EOT encountered (not serpentine like later technologies), whereupon the tape would be full. Recording density: 38,000 bytes/inch Read/write rate: 3 MB/sec Rewind time: 48 seconds Tape type: chromium dioxide (CrO2) Tape length: 550 feet Cartridge dimensions: 4.2" wide x 4.8" high x 1" thick Cartridge capacity: Varies according to block size - max is 208 MB. Transfer rate: 3 MB/s Next generation: 3490 3480 cleaning cartridge Employs a nylon filament ribbon instead of magnetic tape. 3480 tape cartridge AKA "Cartridge System Tape". Color: all gray. Identifier letter: '1'. See also: CST; HPCT; Media Type 3480 tape drive definition Defined in SMIT under DEVICES then TAPE DRIVES; not thru ADSM DEVICES. This is because as an IBM "high tape device" it is shipped with the tape hardware, not with ADSM. Also, these devices use the "/dev/rmtX" format: all other ADSM tape drives are of the format "/dev/mtX". 3490 IBM's second generation of this 1/2" tape cartridge technology, circa 1989, using a single-reel approach and servo tracking pre-recorded on the tape for precise positioning. Excellent start-stop performance. Media type: CST Tracks: 18 (like its 3480 predecessor) recorded linearly and in parallel until EOT encountered (not serpentine like later technologies), whereupon the tape would be full. Transfer rate: 3 MB/sec sustained Capacity: 400 MB physical Tape type: chromium dioxide (CrO2) Tape length: 550 feet Note: Cannot read tapes produced on 3490E, due to 36-track format of that newer technology. 
Previous generation: 3480 Next generation: 3490E 3490 cleaning cartridge Employs a nylon filament ribbon instead of magnetic tape. 3490 EOV processing 3490E volumes will do EOV processing just before the drive signals end of tape (based on a calculation from IBM drives), when the drive signals end of tape, or when maxcapacity is reached, if maxcapacity has been set. When the drive signals end of tape, EOV processing will occur even if maxcapacity has not been reached. Contrast with 3590 EOV processing. 3490 not getting 2.4 GB per tape? In MVS TSM, if you are seeing your 3490 cartridges getting only some 800 MB per tape, it is probably that your Devclass specification has COMPression=No rather than Yes. Also check that your MAXCAPacity value allows filling the tape, and that at the 3490 drive itself that it isn't hard-configured to prevent the host from setting a high density. 3490 tape cartridge AKA "Enhanced Capacity Cartridge System Tape". Color: gray top, white base. Identifier letter: 'E' Capacity: 800 MB native; 2.4 GB compressed (IDRC 3:1 compression) 3490 tape drive definition Defined in SMIT under DEVICES then TAPE DRIVES; not thru ADSM DEVICES. This is because as an IBM "high tape device" it is shipped with the tape hardware, not with ADSM. Also, these devices use the "/dev/rmtX" format: all other ADSM tape drives are of the format "/dev/mtX". 3490E IBM's third generation of this 1/2" tape cartridge technology, using a single-reel approach and servo tracking pre-recorded on the tape for precise positioning. Excellent start-stop performance. Designation: CST-2 Tracks: 36, implemented in two sets of 18 tracks: the first 18 tracks are recorded in the forward direction until EOT is encountered, whereupon the heads are electronically switched (no physical head or tape shifting) and the tape is then written backwards towards BOT. Can read 3480 and 3490 tapes. Capacity: 800 MB physical; 2.4 GB with 3:1 compression. 
IDRC recording mode is the default, and so tapes created on such a drive must be read on an IDRC-capable drive. Transfer rate: Between host and tape unit buffer: 9 MB/sec. Between buffer and drive head: 3 MB/sec. Capacity: 800 MB physical Tape type: chromium dioxide (CrO2) Tape length: 800 feet Previous generation: 3490 Next generation: 3590
3490E cleaning cartridge Employs a nylon filament ribbon instead of magnetic tape.
3490E Model F 36-track head to read/write 18 tracks bidirectionally.
349x tape library use Define "ENABLE3590LIBRary" in the server options file. Ref: Installing the Server and Administrative Client.
3494 IBM robotic library with cartridge tapes, originally introduced to hold 3490 tapes and drives, but later to hold 3590 tapes and drives (same cartridge dimensions). Model HA1 is high availability: instead of just one accessor (robotic mechanism) at one end, it has two, one at each end. See also: Convenience Input-Output Station; Dual Gripper; Fixed-home Cell; Floating-home Cell; High Capacity Output Facility; Library audit; Library; 3494, define; Library Manager; SCRATCHCATegory; Volume Categories; Volume States
3494, access via web This was introduced as part of the IBM StorWatch facility in a 3494 Library Manager component called 3494 Tape Library Specialist, available circa late 2000. It is a convenience facility, that is read-only: one can do status inquiries, but no functional operations. If at the appropriate LM level, the System Summary window will show "3494 Specialist".
3494, add tape to 'CHECKIn LIBVolume ...' Note that this involves a tape mount.
3494, audit tape (examine its barcode to assure it is physically in the library) 'mtlib -l /dev/lmcp0 -a -V VolName' Causes the robot to move to the tape and scan its barcode. 'mtlib -l /dev/lmcp0 -a -L FileName' can be used to examine tapes en masse, by taking the first volser on each line of the file.
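The bulk form of the audit ('mtlib ... -a -L FileName') takes a file with one volser leading each line. A sketch of preparing such a file from a saved volume report; the report layout used here (volser in the first column) is an assumption for illustration, not actual mtlib output:

```shell
# Build the volser-list file that 'mtlib -a -L' expects (first token of
# each line is used as the volser). The report format is hypothetical.
cat > /tmp/volreport.txt <<'EOF'
A00001 3590 private
A00002 3590 private
A00003 3590 scratch
EOF
awk '{print $1}' /tmp/volreport.txt > /tmp/volsers.txt
# On a system with the 3494 library driver one might then run (not done here):
#   mtlib -l /dev/lmcp0 -a -L /tmp/volsers.txt
cat /tmp/volsers.txt
```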
3494, CE slot See: 3494 reserved cells
3494, change Library Manager PC In rare circumstances it will be necessary to swap out the 3494's industrial PC and put in a new one. A major consideration here is that the tape inventory is kept in that PC, and the prospect of doing a Reinventory Complete System after such a swap is wholly unpalatable in that it will discard the inventory and rebuild it - with all the tape category code values being lost, being reset to Insert. So you want to avoid that. (A TSM AUDit LIBRary can fix the category codes, but...) And as Enterprise level hardware and software, such changes should be approached more intelligently by service personnel, anyway. Realize that the LM consists of the PC, the LM software, and a logically separate database - which should be as manageable as all databases can be. If you activate the Service menu on the 3494 control panel, under Utilities you will find "Dump database..." and "Restore database...", which the service personnel should fully exploit if at all possible to preserve the database across the hardware change. (The current LM software level may have to be brought up to the level of the intended, new PC for the database transfer to work well.)
3494, change to manual operation On rare occasions, the 3494 robot will fail and you need to continue processing by switching to manual operation. This involves:
- Go to the 3494 Operator Station and proceed per the Using Manual Mode instructions in the 3494 OpGuide. Be sure to let the library Pause operation complete before entering Manual Mode.
- TSM may have to be told that the library is in manual mode. You cannot achieve this via UPDate LIBRary: you have to define another instance of your library under a new name, with LIBType=MANUAL. Then do UPDate DEVclass to change your 3590 device class to use the library in manual mode for the duration of the robotic outage.
- Either watch the Activity Log, doing periodic Query REQuest commands; or run 'dsmadmc -MOUNTmode'. REPLY to outstanding mount requests to inform TSM when a tape is mounted and ready. If everything is going right, you should see mount messages on the tape drive's display and in the Manual Mode console window, where the volser and slot location will be displayed. If a tape has already been mounted in Manual Mode, dismounted, and then called for again, there will be an "*" next to the slot number when it is displayed on the tape drive calling for the tape, to clue you in that it is a recent repeater.
3494, count of all volumes Via Unix command: 'mtlib -l /dev/lmcp0 -vqK'
3494, count of cartridges in Convenience I/O Station There seems to be no way to determine this. One might think of using the cmd 'mtlib -l /dev/lmcp0 -vqK -s ff10' to get the number, but the FF10 category code is in effect only as the volume is being processed on its way to the Convenience I/O. The 3494 Operator Station status summary will say: "Convenience I/O: Volumes present", but not how many. The only recourse seems to be to create a C program per the device driver manual and the mtlibio.h header file to inspect the library_data.in_out_status value, performing an And with value 0x20 and looking for the result to be 0 if the Convenience I/O is *not* all empty.
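The And-test just described can be sketched in shell arithmetic on a sample status value. The 0x20 mask and its interpretation follow the description above; the sample values are illustrative assumptions, not real output of the device driver's library_data.in_out_status field:

```shell
# Sketch of the in_out_status bit test described above. Per the text, a
# nonzero result of (status & 0x20) means the Convenience I/O Station is
# all empty; a zero result means it is not. Sample values only.
io_state() {
  if [ $(( $1 & 0x20 )) -ne 0 ]; then
    echo "all empty"
  else
    echo "not all empty"
  fi
}
io_state 0x20
io_state 0x00
```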
3494, count of CE volumes Via Unix command: 'mtlib -l /dev/lmcp0 -vqK -s fff6'
3494, count of cleaning cartridges Via Unix command: 'mtlib -l /dev/lmcp0 -vqK -s fffd'
3494, count of SCRATCH volumes (3590 tapes, default ADSM SCRATCH category code) Via Unix command: 'mtlib -l /dev/lmcp0 -vqK -s 12E'
3494, eject tape from See: 3494, remove tape from
3494, identify dbbackup tape See: dsmserv RESTORE DB, volser unknown
3494, inventory operations See: Inventory Update; Reinventory complete system
3494, list all tapes 'mtlib -l /dev/lmcp0 -qI' (or use options -vqI for verbosity, for more descriptive output)
3494, manually control Use the 'mtlib' command, which comes with the 3494 Tape Library Device Driver. Do 'mtlib -\?' to get usage info.
3494, monitor See: mtevent
3494, not all drives being used See: Drives, not all in library being used
3494, number of drives in Via Unix command: 'mtlib -l /dev/lmcp0 -qS'
3494, number of frames (boxes) The mtlib command won't reveal this. The frames show in the "Component Availability" option in the 3494 Tape Library Specialist.
3494, partition/share TSM SAN tape library sharing support is only for libraries that use SCSI commands to control the library robotics and the tape management. This does *not* include the 3494, which uses network communication for control. Sharing of the 3494/3590s thus has to occur via conventional partitioning or dynamic drive sharing (which is via the Auto-Share feature introduced in 1999). There is no dynamic sharing of tape volumes: they have to be pre-assigned to their separate TSM servers via Category Codes. Ref: Redpaper "Tivoli Storage Manager: SAN Tape Library Sharing"; Redbook "Guide to Sharing and Partitioning IBM Tape Library Data" (SG24-4409)
3494, ping You can ping a 3494 from another system within the same subnet, regardless of whether that system is in the LM's list of LAN-authorized hosts.
If you cannot ping the 3494 from a location outside the subnet, it may mean that the 3494's subnet is not routed - meaning that systems on that subnet cannot be reached from outside.
3494, remote operation See "Remote Library Manager Console Feature" in the 3494 manuals.
3494, remove tape from 'CHECKOut LIBVolume LibName VolName [CHECKLabel=no] [FORCE=yes] [REMove=no]' To physically cause an eject via AIX command, change the category code to EJECT (X'FF10'): 'mtlib -l /dev/lmcp0 -vC -V VolName -t ff10' The more recent Library Manager software has a Manage Import/Export Volumes menu, wherein Manage Insert Volumes claims ejectability.
3494, RS-232 connect to SP Yes, you can connect a 3494 to an RS/6000 SP via RS-232, though it is uncommon, slow, and of limited distance compared to using ethernet.
3494, status 'mtlib -l /dev/lmcp0 -qL'
3494, steps to set up in ADSM - Define the library - Define the drives in it - Add "ENABLE3590LIBRARY YES" to dsmserv.opt - Restart the server. (Startup message "ANR8451I 349x library LibName is ready for operations".)
3494 Cell 1 Special cell in a 3494: it is specially examined by the robot after the doors are closed. You would put here any tape manually removed from a drive, for the robot to put away. It will read the serial name, then examine the cell which was that tape cartridge's last home: finding it empty, the robot will store the tape there. The physical location of that cell: first frame, inner wall, upper leftmost cell (which the library keeps empty).
3494 cells, total and available 'mtlib -l /dev/lmcp0 -qL' lines: "number of cells", "available cells".
3494 cleaner cycles remaining 'mtlib -l /dev/lmcp0 -qL' line: "avail 3590 cleaner cycles"
3494 cleaning cartridge See: Cleaner Cartridge, 3494
3494 connectivity A 3494 can be simultaneously connected via LAN and RS-232.
3494 diagnosis See: trcatl
3494 ESCON device control Some implementations may involve ESCON connection to 3490 drives plus SCSI connection to 3590 drives.
The ESCON 3490 ATL driver is called mtdd and the SCSI 3590 ATL driver is called atldd, and they have shared modules between them. One thus may be hesitant to install atldd due to this "sharing". In the pure ESCON drive case, the commands go down the ESCON channel, which is also the data path. If you install atldd, the commands now first go to the Library Manager, which then reissues them to those drives. Thus, it is quite safe to install atldd for ESCON devices.
3494 inaccessible (usually after just installed) Check for the following:
- That the 3494 is in an Online state.
- In the server, that the atldd software (LMCPD) has been installed and that the lmcpd process is running.
- That your /etc/ibmatl.conf is correct: if a TCP/IP connection, specify the IP addr; if RS/232, specify the /dev/tty port to which the cable is attached.
- If a TCP/IP connection, that you can ping the 3494 by both its network name and IP address (to assure that DNS was correctly set up in your shop).
- If a LAN connection:
  - Check that the 3494 is not on a Not Routed subnet: such a router configuration prevents systems outside the subnet from reaching systems residing on that subnet.
  - A port number must be in your host /etc/services for it to communicate with the 3494. By default, the Library Driver software installation creates a port '3494/tcp' entry, which should match the default port at the 3494 itself, per the 3494 installation OS/2 TCP/IP configuration work.
  - Your host needs to be authorized to the 3494 Library Manager, under "LAN options", "Add LAN host". (RS/232 direct physical connection is its own authorization.) Make sure you specify the full host network name, including domain (e.g., a.b.com). If communications had been working but stopped when your OS was updated, assure that it still has the same host name!
- If an RS/232 connection: - Check the Availability of your Direct Attach Ports (RS-232): the System Summary should show them by number, if Initialized, in the "CU ports (RTIC)" report line. If not, go into Service Mode, under Availability, to render them Available. - Connecting the 3494 to a host is a DTE<->DTE connection, meaning that you must employ a "null modem" cable or connector adapter. - Certainly, make sure the RS-232 cable is run and attached to the port inside the 3494 that you think it is. - Try performing 'mtlib' queries to verify, outside of *SM, that the library can be reached. Presuming 3590 drives in the 3494, make sure your server options file includes: ENABLE3590LIBRARY YES 3494 Intervention Required detail The only way to determine the nature of the Int Req on the 3494 is to go to its Operator Station and see, under menu Commands->Operator intervention. There is no programming interface available to allow you to get this information remotely. 3494 IP address, determine Go to the 3494 control panel. From the Commands menu, select "LAN options", and then "LM LAN information". 3494 Manual Mode If the 3494's Accessor is nonfunctional you can operate the library in Manual Mode. Using volumes in Manual Mode affects their status: The 3494 redbook (SG24-4632) says that when volumes are used in Manual Mode, their LMDB indicator is set to "Manual Mode", as used to direct error recovery when the lib is returned to Auto mode. This is obviously necessary because the location of all volumes in the library is jeopardized by the LM's loss of control of the library. The 3494 Operator Guide manual instructs you to have Inventory Update active upon return to Auto mode, to re-establish the current location of all volumes. 
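Two of the checklist items above (the /etc/services port entry and the /etc/ibmatl.conf contents) lend themselves to a mechanical check. This sketch runs against sample copies of the files; the file contents, the library symbolic name, and the IP address are illustrative assumptions only:

```shell
# Check sample copies of the two config files named in the checklist.
# Real checks would read /etc/services and /etc/ibmatl.conf instead.
cat > /tmp/services.sample <<'EOF'
3494   3494/tcp   # 3494 Library Manager
EOF
cat > /tmp/ibmatl.conf.sample <<'EOF'
3494a  192.168.10.20  myhost
EOF
grep -q '3494/tcp' /tmp/services.sample && echo "3494/tcp entry present"
awk 'NF >= 2 {print "library " $1 " -> " $2}' /tmp/ibmatl.conf.sample
```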
3494 microcode level See: "Library Manager, microcode level"
3494 port number See: Port number, for 3494 communication
3494 problem: robot is dropping cartridges This has been seen where the innards of the 3494 have gone out of alignment, for any of a number of reasons. Re-teaching can often solve the problem, as the robot re-learns positions and thus realigns itself.
3494 problem: robot misses some fiducials - but not all During its repositioning operations, the robot attempts to align itself with the edges of each fiducial, but after dwelling on one it keeps on searching, as though it didn't see it. This operation involves the LED, which is carried on the accessor along with the laser (which is only for barcode reading). The problem is that the light signal involved in the sensing is too weak, which may be due to dirt, an aged LED, or a failing sensor. The signal is marginal, so some fiducials are seen, but not others.
3494 problems See also "3494 OPERATOR STATION MESSAGES" section at the bottom of this document.
3494 reserved cells A 3494 minimally has two reserved cells: 1 A 1 Gripper error recovery (1 A 3 if Dual Gripper installed). 1 A 20 CE cartridge (3590). 1 A 19 is also reserved for 3490E, if such cartridges participate. _ K 6 Not a cell, but a designation for a tape drive on wall _.
3494 scratch category, default See: DEFine LIBRary
3494 sharing Can be done with TSM 3.7+, via the "3494SHARED YES" server option; but you still need to "logically" partition the 3494 via separate tape Category Codes. Ref: Guide to Sharing and Partitioning IBM Tape Library Dataservers, SG24-4409. Redbooks: Tivoli Storage Manager Version 3.7.3 & 4.1: Technical Guide, section 8.2; Tivoli Storage Manager SAN Tape Library Sharing. See also: 3494SHARED; DRIVEACQUIRERETRY; MPTIMEOUT
3494 sluggish The 3494 may be taking an unusually long time to mount tapes or scan barcodes. Possible reasons:
- A lot of drive cleaning activity can delay mounts.
(A library suddenly exposed to a lot of dust could evidence a sudden surge in cleaning.) A shortage of cleaning cartridges could aggravate that. - Drive problems which delay ejects or positioning. - Library running in degraded mode. - lmcpd daemon or network problems which delay getting requests to the library. - See if response to 'mtlib' commands is sluggish. This can be caused by DNS service problems to the OS2 embedded system. (That PC is typically configured once, then forgotten; but DNS servers may change in your environment, requiring the OS2 config to need updating.) Use the mtlib command to get status on the library to see if any odd condition, and visit the 3494 if necessary to inspect its status. Observe it responding to host requests to gauge where the delay is. 3494 SNMP support The 3494 (beginning with Library Manager code 518) supports SNMP alert messaging, enabling you to monitor 3494 operations from one or more SNMP monitor stations. This initial support provides more than 80 operator-class alert messages covering: 3494 device operations Data cartridge alerts Service requests VTS alerts See "SNMP Options" in the 3494 Operator Guide manual. 3494 status 'mtlib -l /dev/lmcp0 -qL' 3494 Tape Library Specialist Provides web access to your 3494 LM. Requires that the LM PC have at least 64 MB of memory, be at LM code level 524 or greater, and have FC 5045 (Enhanced Library Manager). 3494 tapes, list 'mtlib -l /dev/lmcp0 -qI' (or use options -vqI for verbosity, for more descriptive output) 3494 TCP/IP, set up This is done during 3494 installation, in OS/2 mode, upon invoking the HOSTINST command, where a virtual "flip-book" will appear so that you can click on tabs within it, including a Network tab. After installation, you could go into OS/2 and there do 'cd \tcpip\bin' and enter the command 'tcpipcfg' and click in the Network tab. Therein you can set the IP address, subnet mask, and default gateway. 
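The per-category volume counts shown earlier ('mtlib -vqK -s <category>') query the library directly; a full tally can also be derived from a saved copy of the inventory listing. The simplified two-column input format (volser, category) used below is an assumption for illustration, not actual 'mtlib -qI' output:

```shell
# Tally volumes per category code from a saved, simplified inventory
# listing (volser, category). The layout is hypothetical.
cat > /tmp/inv.txt <<'EOF'
A00001 012E
A00002 012E
A00003 FFFD
EOF
awk '{n[$2]++} END {for (c in n) print c, n[c]}' /tmp/inv.txt | sort
```

This kind of offline tally is handy when the library is busy or unreachable, at the cost of the listing going stale.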
3494 volume, list state, class, 'mtlib -l /dev/lmcp0 -vqV -V VolName' volser, category 3494 volume, last usage date 'mtlib -l /dev/lmcp0 -qE -uFs -V VolName' 3494 volumes, list 'mtlib -l /dev/lmcp0 -qI' (or use options -vqI for verbosity, for more descriptive output) 3494SHARED To improve performance of allocation of 3590 drives in the 3494, introduced by APAR IX88531... ADSM was checking all available drives on a 3494 for availability before using one of them. Each check took 2 seconds and was being performed twice per drive, once for each available drive and once for the selected drive. This resulted in needless delays in mounting a volume. The reason for this is that in a shared 3494 library environment, ADSM physically verifies that each drive assigned to ADSM is available and not being used by another application. The problem is that if ADSM is the only application using the assigned drives, this extra time to physically check the drives is not needed. This was addressed by adding a new option, 3494SHARED, to control sharing. Selections: No (default) The 3494 is not being shared by any other application. That is, only one or more ADSM servers are accessing the 3494. Yes ADSM will select a drive that is available and not being used by any other application. You should only enable this option if you have more than two (2) drives in your library. If you are currently sharing a 3494 library with other applications, you will need to specify this option. See also: DRIVEACQUIRERETRY; MPTIMEOUT 3495 Predecessor to the 3494, containing a GM robot, like those used in car assembly. 3570 The IBM 3570 Tape Subsystem is based on the same technology as the IBM 3590 High Performance Tape Subsystem. It functionally expands the capability of tape to perform both write and read-intensive operations. It provides faster data access than other tape technologies, with a drive time to read/write data of eight seconds from cassette insertion.
The 3570 also incorporates a high-speed search function. The tape drive reads and writes data in a 128-track format, four tracks at a time. Data is written using an interleaved serpentine longitudinal recording format starting at the center of the tape (mid-tape load point) and continuing to near the end of the tape. The head is indexed to the next set of four tracks and data is written back to the mid-tape load point. This process continues in the other direction until the tape is full. Cartridge: The 3570 uses a unique, robust, twin-hub tape cassette that is approximately half the size of the 3490/3590 cartridge tapes, with a cassette capacity of 5 GB uncompressed and up to 15 GB per cassette with LZ1 data compaction. Also called "Magstar MP" (where the MP stands for Multi-Purpose), supported by the Atape driver. Think "3590, Jr." The tape is half-wound at load time, so can get to either end of the tape in half the time it would take if the tape were fully wound. Cartridge type letter: 'F' (does not participate in the volser). An early problem of "Lost tension" was common, attributed to bad tapes, rather than the tape drives. *SM library type: SCSI Library 3570 "tapeutil" for NT See: ntutil 3570, to act as an ADSM library Configure to operate in Random Mode and Base Configuration. This allows ADSM to use the second drive for reclamation. (The Magstar will not function as a library within ADSM when set to "automatic".) The /dev/rmt_.smc SCSI Media Changer special device allows library style control of the 3570. 3570/3575 Autoclean This feature does not interfere with ADSM: the 3570 has its own slot for the cleaner that is not visible to ADSM, and the 3575 hides the cleaners from ADSM. 3570 configurations Base: All library elements are available to all hosts. In dual drive models, it is selected from Drive 1 but applies to both drives. This config is primarily used for single host attachment.
(Special Note for dual drive models: In this config, you can only load tapes to Drive 1 via the LED display panel as everything is keyed off of Drive 1. However, you may load tapes to Drive 2 via tapeutil if the Library mode is set to 'Random'.) Split: This config is most often used when the library unit is to be twin-tailed between 2 hosts. In this config, the library is "split" into 2 smaller half size libraries, each to be used by only one host. This is advantageous when an application does not allow the sharing of one tape drive between 2 hosts. The "first/primary" library consists of: Drive 1 The import/export (priority) cell The rightmost magazine Transport Mechanism The "second" library consists of: Drive 2 The leftmost magazine Transport Mechanism 3570 Element addresses Drive 0 is element 16, Drive 1 is element 17. 3570 mode A 3570 library must be in RANDOM mode to be usable by TSM: AUTO mode is no good. 3570 tape drive cleaning Enable Autocleaning. Check with the library operator guide. The 3570 has a dedicated cleaning tape storage slot, which does not take one of the library slots. 3575 3570 library from IBM. Attachment via: SCSI-2. As of early 2001, customers report a problem of tape media snapping: the cartridge gets loaded into the drive by the library but it never comes ready: such a cartridge may not be repairable. Does not have a Teach operation like the 3494. Ref: Redbook: Magstar MP 3575 Tape Library Dataserver: Multiplatform Implementation. *SM library type: SCSI Library 3575, support C-Format XL tapes? In AIX, do 'lscfg -vl rmt_': A drive capable of supporting C tapes should report "Machine Type and Model 03570C.." and the microcode level should be at least 41A. 3575 configuration The library should be device /dev/smc0 as reflected in AIX command 'lsdev -C tape'...not /dev/lb0 nor /dev/rmtX.smc as erroneously specified in the Admin manuals. 3575 tape drive cleaning The 3575 does NOT have a dedicated cleaning tape storage slot.
It takes up one of the "normal" tape slots, reducing the Library capacity by one. 357x library/drives configuration You don't need to define an ADSM device for 357x library/drives under AIX: the ADSM server on AIX uses the /dev/rmtx device. Don't go under SMIT ADSM DEVICES but just run 'cfgmgr'. Once the rmtx devices are available in AIX, you can define them to ADSM via the admin command line. For example, assuming you have two drives, rmt0 and rmt1, you would use the following ADSM admin commands to define the library and drives: DEFine LIBRary mylib LIBType=SCSI DEVice=/dev/rmt0.smc DEFine DRive mylib drive1 DEVice=/dev/rmt0 ELEMent=16 DEFine DRive mylib drive2 DEVice=/dev/rmt1 ELEMent=17 (you may want to verify the element numbers but these are usually the default ones) 3575 - L32 Magstar Library contents, list 'tapeutil -f /dev/smc0 inventory' (Unix) 358x drives These are LTO Ultrium drives. Supported by IBM Atape device driver. See: LTO; Ultrium 3580 IBM model number for LTO Ultrium tape drive. A basic full-height, 5.25-inch drive SCSI enclosure; two-line LCD readout. Flavors: L11, low-voltage differential (LVD) Ultra2 Wide SCSI; H11, high-voltage differential SCSI. Often used with Adaptec 29160 SCSI card (but use the IBM driver - not the Adaptec driver). The 3580 Tape Drive is capable of data transfer rates of 15 MB per second with no compression and 30 MB per second at 2:1 compression. (Do not expect to come close to such numbers when backing up small files: see "Backhitch".) Review: www.internetweek.com/reviews00/ rev120400-2.htm The Ultrium 1 drives have had problems: - Tapes would get stuck in the drives. IBM (Europe?) engineered a field compensation involving installing a "clip" in the drive. This is ECA 009, which is not a mandatory EC; to be applied only if the customer sees frequent B881 errors in the library containing the drive. The part number is 18P7835 (includes tool). Takes about half an hour to apply.
One customer reports having the clip, but still problems, which seems to be inferior cartridge construction. - Faulty microcode. As evidenced in a late 2003 defect where certain types of permanent write errors, with a subsequent rewind command, cause an end of data (EOD) mark to be written at the BOT (beginning of tape). See also: LTO; Ultrium 3580 (LTO) cleaning cartridge life The manual specifies how much you should expect out of a cleaning cartridge: "The IBM TotalStorage LTO Ultrium Cleaning Cartridge is valid for 50 uses." (2003 manual) 3581 IBM model number for LTO Ultrium tape drive with autoloader. Houses one drive and seven slots: five in front, two in the rear. *SM library type: SCSI Library See also: Backhitch; LTO; Ultrium 3581, configuring under AIX Simply install the device driver and you should be able to see both the drive and medium changer devices as SCSI tape devices (/dev/rmt0 and /dev/smc0). When you configure the library and drive in TSM, use device type "LTO", not SCSI. Ref: TSM 4.1.3 server README file 3582 IBM LTO Ultrium cartridge tape library. Up to 2 Ultrium 2 tape drives and 23 tape cartridges. Requires Atape driver on AIX and like hosts: Atape level 8.1.3.0 added support for 3582 library. Reportedly not supported by TSM 5.2.2. See also: Backhitch; LTO; Ultrium 3583 IBM LTO Ultrium cartridge tape library. Formal name: "LTO Ultrium Scalable Tape Library 3583". (But it is only slightly scalable: look to the 3584 for higher capacity.) Six drives, 18 cartridges. Can have up to 5 storage columns, which the picker/mounter accesses as in a silo. Column 1 can contain a single-slot or 12-slot I/O station. Column 2 contains cartridge storage slots and is standard in all libraries. Column 3 contains drives. Columns 4 and 5 may be optionally installed and contain cartridge storage slots. Beginning with Column 1 (the I/O station column), the columns are ordered clockwise.
The three columns which can house cartridges do so with three removable magazines of six slots each: 18 slots per column, 54 slots total. Add two removable I/O station magazines through the door and one inside the door to total 72 cells, 60 of which are wholly inside the unit. (There are reports that 2 of those 60 slots are reserved for internal tape drive mounts, though that doesn't show up in the doc.) Model L72: 72 cartridge storage slots As of 2004 handles the Ultrium 2 or Ultrium 1 tape drive. The Ultrium 2 drive can work with Ultrium 1 media, but at lesser speeds (see "Tape Drive Performance" in the 3583 Setup and Operator Guide manual). Cleaning tapes should live in the reserved, nonaddressable slots at the top of silo columns (where the picker's bar code reader cannot look). http://www.storage.ibm.com/hardsoft/tape/pubs/pubs3583.html *SM library type: SCSI Library The 3583 had a variety of early problems such as static buildup: the picker would run fine for a while, until enough static built up, then it would die for no reason apparent to the user. The fix was to replace the early rev picker with a newer design. See also: 3584; Accelis; L1; Ultrium 3583, convert I/O station to slots Via Setup->Utils->Config. Then you have to get the change understood by TSM - and perhaps the operating system. A TSM AUDit LIBRary may be enough; or you may have to incite an operating system re-learning of the SCSI change, which may involve rebooting the opsys. 3583 cleaning cartridge Volser must start with "CLNI" so that the library recognizes the cleaning tape as such (else it assumes it's a data cartridge). The cleaning cartridge is stored in any slot in the library. Recent (2002/12) updates to firmware force the library to handle cleaning itself and hide the cleaning cartridges from *SM. 3583 door locked, never openable See description of padlock icon in the 3583 manual.
A basic cause is that the I/O station has been configured as all storage slots (rather than all I/O slots). In a Windows environment, this may be caused by RSM taking control of the library: disable RSM when it is not needed. This condition may be a fluke which power-cycling the library will undo. 3583 driver and installation The LTO/Ultrium tape technology was jointly developed by IBM, HP, and Seagate, and so IBM provides a native device driver. In AIX, it is supported by Atape; in Solaris, by IBMtape; in Windows, by IBMUltrium; in HP-UX, by atdd. 1. Install the Ultrium device driver, available from the ftp://ftp.software.ibm.com/storage/devdrvr/ directory 2. In NT, under Tape Devices, press ESC on the first panel. 3. Select the Drivers tab and add your library. 4. Select the 3583 library and click on OK. 5. Press Yes to use the existing files. 3583 "missing slots" If not all storage cells in the library are usable (the count of usable slots is short), it can be caused by a corrupt volume whose label cannot be read during an AUDit LIBRary. You may have to perform a Restore Volume once the volume is identified. 3584 The high end of IBM's mid-range tape library offerings. Formal name: LTO UltraScalable Tape Library Initially housed LTO Ultrium drives and cartridges; but as of mid 2004 also supports 3592 J1A. Twelve drives, 72 cartridges. Can also support DLT. Interface: Fibre Channel or SCSI Its robotics are reported to be much faster than those in the 3494, making for faster mounting of tapes. In Unix, the library is defined as device /dev/smc0, and by default is LUN 1 on the lowest-number tape drive in the partition - normally drive 1 in the library, termed the Master Drive by CEs. (Remove that drive and you suffer ANR8840E trying to interact with the library.) In AIX, 'lsdev -Cc tape' should show all the devices.
*SM library type: SCSI Library See also: LTO; Ultrium 3584 bar code reading The library can be set to read either just the 6-char cartridge serial ("normal" mode) or that plus the "L1" tape cartridge identifier as well ("extended" mode). 3584 cleaning cartridge Volser must start with "CLNI" or "CLNU" so that the library recognizes the cleaning tape as such (else it assumes it's a data cartridge). The cleaning cartridge is stored in any data-tape slot in the library (but certainly not the Diagnostic Tape slot). Follow the 3584 manual's procedure for inserting cleaning cartridges. Auto Clean should be activated. The cleaning tape is valid for 50 uses. When the cartridge expires, the library displays an Activity screen like the following: Remove CLNUxxL1 Cleaning Cartridge Expired 3590 IBM's fourth generation of this 1/2" tape cartridge technology, using a single-reel approach and servo tracking pre-recorded on the tape for precise positioning. Excellent start-stop performance. Uses magneto-resistive heads for high density recording. Introduced: 1995 Tape length: 300 meters (1100 feet) Tracks: 128, written 16 at a time, in serpentine fashion. The head contains 32 track writers: As the tape moves forward, 16 tracks are written until EOT is encountered, whereupon electronic switching causes the other 16 track writers in the heads to be used as the tape moves backwards towards BOT. Then, the head is physically moved (indexed) to repeat the process, until finally all 128 tracks are written as 8 interleaved sets of 16 tracks. Transfer rate: Between host and tape unit buffer: 20 MB/sec with fast, wide, differential SCSI; 17 MB/sec via ESCON channel interface. Between buffer and drive head: 9 MB/sec. Pigment: MP1 (Metal Particle 1) Note that "3590" is a special, reserved DEVType used in 'DEFine DEVclass'. Cartridge type letter: 'J' (does not participate in the volser). See publications references at the bottom of this document.
See also: 3590E Previous generation: 3490E Next generation: 3590E See also: MP1 3590, AIX error messages If a defective 3590 is continually putting these out, rendering the drive Unavailable from the 3494 console will cause the errors to be discontinued. 3590, bad block, dealing with Sometimes there is just one bad area on a long, expensive tape. Wouldn't it be nice to be able to flag that area as bad and be able to use the remainder of the tape for viable storage? Unfortunately, there is no documented way to achieve this with 3590 tape technology: when just one area of a tape goes bad, the tape becomes worthless. 3590, handling DO NOT unspool tape from a 3590 cartridge unless you are either performing a careful leader block replacement or a post-mortem. Unspooling the tape can destroy it! The issue is clearances: The spool inside the cartridge is spring-loaded so as to keep it from moving when not loaded. The tape drive will push the spool hub upward into the cartridge slightly, which disengages the locking. The positioning is exacting. If the spool is not at just the right elevation within the cartridge, the edge of the tape will abrade against the cartridge shell, resulting in substantial, irreversible damage to the tape. 3590, write-protected? With all modern media, a "void" in the sensing position indicates writing not allowed. IBM 3480/3490/3590 tape cartridges have a thumbwheel (File Protect Selector) which, when turned, reveals a flat spot on the thumbwheel cylinder, which is that void/depression indicating writing not allowed. So, when you see the dot, it means that the media is write-protected. Rotate the thumbwheel away from that to make the media writable. Some cartridges show a padlock instead of a dot, which is a great leap forward in human engineering. See also: Write-protection of media 3590 barcode Is formally "Automation Identification Manufacturers Uniform Symbol Description Version 3", otherwise known as Code 39.
It runs across the full width of the label. The two recognized vendors: Engineered Data Products (EDP) Tri-Optic Wright Line Tri-Code Ref: Redbook "IBM Magstar Tape Products Family: A Practical Guide", topic Cartridge Labels and Bar Codes. See also: Code 39 3590 Blksize See: Block size used for removable media 3590 capacity See: 3590 'J'; 3590 'K' See also: ESTCAPacity 3590 cleaning See: 3590 tape drive cleaning 3590 cleaning interval The normal preventive maintenance interval for the 3590 is once every 150 GB (about once every 15 tapes). Adjust via the 3494 Operator Station Commands menu selection "Schedule Cleaning", in the "Usage clean" box. The Magstar Tape Guide redbook recommends setting the value to 999 to let the drive incite cleaning, rather than have the 3494 Library Manager initiate it (apparently to minimize drive wear). Ref: 3590 manual; "IBM Magstar Tape Products Family: A Practical Guide" redbook 3590 cleaning tape Color: Black shell, with gray end notches 3590 cleaning tape mounts, by drive, display Put the 3494 into Pause mode; Open the 3494 door to access the given 3590's control panel; Select "Show Statistics Menu"; See "Clean Mounts" value. 3590 compression of data The 3590 performs automatic compression of data written to the tape, increasing both the effective capacity of the 10 GB cartridge and the effective write speed of the drive. The 3590's data compression algorithm is a Ziv-Lempel technique called IBMLZ1, more effective than the BAC algorithm used in the 3480 and 3490. Ref: Redbook "Magstar and IBM 3590 High Performance Tape Subsystem Technical Guide" (SG24-2506) See also: Compression algorithm, client 3590 Devclass, define 'DEFine DEVclass DevclassName DEVType=3590 LIBRary=LibName [FORMAT=DRIVE|3590B|3590C| 3590E-B|3590E-C] [MOUNTLimit=Ndrives] [MOUNTRetention=Nmins] [PREFIX=TapeVolserPrefix] [ESTCAPacity=X] [MOUNTWait=Nmins]' Note that "3590" is a special, reserved DEVType.
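As a worked example of the devclass syntax above, a hypothetical 3494/3590 definition sequence might look like the following. The library name, drive name, class name, and device paths are placeholders; verify the parameters against the Administrator's Reference for your server level:

```
/* Hypothetical names and devices - adjust for your site */
DEFine LIBRary my3494 LIBType=349X DEVice=/dev/lmcp0
DEFine DRive my3494 drive1 DEVice=/dev/rmt0
DEFine DEVclass 3590class DEVType=3590 LIBRary=my3494 FORMAT=DRIVE MOUNTRetention=5
```

FORMAT=DRIVE lets the drive pick its best recording format, but see the 3590B vs. 3590E entries below before using it in a library with mixed drive generations.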
3590 drive* See: 3590 tape drive* 3590 EOV processing There is a "full" volume status for 3590 volumes. 3590 volumes will do EOV processing when the drive signals end of tape, or when the maxcapacity is reached, if maxcapacity has been set. When the drive signals end of tape, EOV processing will occur even if maxcapacity has not been reached. Contrast with 3490 EOV processing. 3590 errors See: MIM; SARS; SIM; VCR 3590 exploded diagram (internals) http://www.thic.org/pdf/Oct00/ imation.jgoins.001003.pdf page 20 3590 Fibre Channel interface There are two fibre channel interfaces on the 3590 drive, for attaching to up to 2 hosts. Supported in TSM 3.7.3.6 Available for 3590E & 3590H drives but not for 3590B. 3590 'J' 3590 High Performance Cartridge Tape (HPCT), the original 3590 tape cartridge, containing 300 meters of half-inch tape. Predecessor: 3490 "E" Barcodette letter: 'J' Color of leader block and notch tabs: blue Compatible drives: 3590 B; 3590 E; 3590 H Capacity: 10 GB native on Model B drives (up to 30 GB with 3:1 compression); 20 GB native on Model E drives (up to 60 GB with 3:1 compression); 30 GB native on Model H drives (up to 90 GB with 3:1 compression); Notes: Has the thickest tape of the 3590 tape family, so should be the most robust. See also: 3590 cleaning tape; 3590 tape cartridge; 3590 'K'; EHPCT; HPCT 3590 'K' (3590 K; 3590K) 3590 Extended High Performance Cartridge Tape, aka "Extended length", "double length": 600 meters of thinner tape. Available: March 3, 2000 Predecessor: 3590 'J' Barcodette letter: 'K' Color of leader block and notch tabs: green Compatible drives: 3590 E; 3590 H Capacity: 40 GB native on 3590 E drives (up to 120 GB with 3:1 compression, depending upon the compressibility of the data); 60 GB native on Model H drives (up to 180 GB with 3:1 compression); Hardware Announcement: ZG02-0301 Notes: The double length of the tape spool makes for longer average positioning times.
Fragility: Because so much tape is packed into the cartridge, it tends to be rather close to the inside of the shell, and so is more readily damaged if the cartridge is dropped, as compared to the 3590 'J'. 3590 microcode level Unix: 'tapeutil -f /dev/rmt_ vpd' (drive must not be busy) see "Revision Level" value AIX: 'lscfg -vl rmt_' see "Device Specific.(FW)" Windows: 'ntutil -t tape_ vpd' Microcode level shows up as "Revision Level". 3590 Model B11 Single-drive unit with attached 10-cartridge Automatic Cartridge Facility, intended to be rack-mounted (IBM 7202 rack). Can be used as a mini library. Interface is via integral SCSI-3 controller with two ports. As of late 1996 it is not possible to perform reclamation between 2 3590 B11s, because they are considered separate "libraries". Ref: "IBM TotalStorage Tape Device Drivers: Installation and User's Guide", Tape and Medium Changer Device Driver section. 3590 Model B1A Single-drive unit intended to be installed in a 3494 library. Interface is via integral SCSI-3 controller with two ports. 3590 Model E11 Rack-mounted 3590E drive with attached 10-cartridge ACF. 3590 Model E1A 3590E drive to be incorporated into a 3494. 3590 modes of operation (Referring to a 3590 drive, not in a 3494 library, with a tape magazine feeder on it.) Manual: The operator selects Start to load the next cartridge. Accumulate: Take each next cartridge from the Priority Cell, return to the magazine. Automatic: Load next tape from magazine without a host Load request. System: Wait for Load request from host before loading next tape from magazine. Random: Host treats magazine as a mini library of 10 cartridges and uses Medium Mover SCSI cmds to select and move tapes between cells. Library: For incorporation of 3590 in a tape library server machine (robot). 3590 performance See: 3590 speed 3590 SCSI device address Selectable from the 3590's mini-panel, under the SET ADDRESS selection, device address range 0-F.
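The AIX microcode query above can be post-processed in a script. A sketch, pulling the FW level out of captured 'lscfg -vl rmtN' output; the sample text and the "E32E" value are illustrative only, and the field layout can differ by AIX level, so adjust the sed pattern to match your output:

```shell
# Sketch: extract the microcode (FW) level from captured
# 'lscfg -vl rmtN' output. Sample text below is illustrative.
lscfg_out='  Manufacturer................IBM
  Machine Type and Model......03590E1A
  Device Specific.(FW)........E32E'
fw=$(printf '%s\n' "$lscfg_out" | sed -n 's/.*Device Specific\.(FW)\.*\([A-Za-z0-9]*\).*/\1/p')
echo "Drive microcode level: $fw"
```

In live use, replace the here-string with `lscfg -vl rmt0` command substitution; looping over 'lsdev -C -c tape' output gives a quick microcode survey of all drives.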
3590 Sense Codes Refer to the "3590 Hardware Reference" manual. 3590 servo tracks Each IBM 3590 High Performance Tape Cartridge has three prerecorded servo tracks, recorded at time of manufacture. The servo tracks enable the IBM 3590 tape subsystem drive to position the read/write head accurately during the write operation. If the servo tracks are damaged, the tape cannot be written to. 3590 sharing between two TSM servers Whether by fibre or SCSI cabling, when sharing a 3590 drive between two TSM servers, watch out for SCSI resets during reboots of the servers. If the server code and hardware don't mesh exactly right, it's possible to get a "mount point reserved" state, which requires a TSM restart to clear. 3590 speed Note from 1995 3590 announcement, number 195-106: "The actual throughput a customer may achieve is a function of many components, such as system processor, disk data rate, data block size, data compressibility, I/O attachments, and the system or application software used. Although the drive is capable of a 9-20MB/sec instantaneous data rate, other components of the system may limit the actual effective data rate. For example, an AS/400 Model F80 may save data with a 3590 drive at up to 5.7MB/sec. In a current RISC System/6000 environment, without filesystem striping, the disk, filesystem, and utilities will typically limit data rates to under 4MB/sec. However, for memory-to-tape or tape-to-tape applications, a RISC System/6000 may achieve data rates of up to 13MB/sec (9MB/sec uncompacted). With the 3590, the tape drive should no longer be the limiting component to achieving higher performance."
See also IBM site Technote "D/T3590 Tape Drive Performance" 3590 statistics The 3590 tape drive tracks various usage statistics, which you can ask it to return to you, such as Drive Lifetime Mounts, Drive Lifetime Megabytes Written or Read, from the Log Page X'3D' (Subsystem Statistics), via discrete programming or with the 'tapeutil' command Log Sense Page operation, specifying page code 3d and a selected parameter number, like 40 for Drive Lifetime Mounts. Refer to the 3590 Hardware Reference manual for byte positions. See also: 3590 tape drive, hours powered on; 3590 tape mounts, by drive 3590 tape cartridge AKA "High Performance Cartridge Tape". See: 3590 'J' 3590 tape drive The IBM tape drive used in the 3494 tape robot, supporting 10Gbytes per cartridge uncompressed, or typically 30Gbytes compressed via IDRC. Uses High Performance Cartridge Tape. 3590 tape drive, hours powered on Put the 3494 into Pause mode; Open the 3494 door to access the given 3590's control panel; Select "Show Statistics Menu"; See "Pwr On Hrs" value. 3590 tape drive, release from host Unix: 'tapeutil -f /dev/rmt? release' after having done a "reserve" Windows: 'ntutil -t tape_ release' 3590 tape drive, reserve from host Unix: 'tapeutil -f /dev/rmt? reserve' Windows: 'ntutil -t tape_ reserve' When done, release the drive: Unix: 'tapeutil -f /dev/rmt? release' Windows: 'ntutil -t tape_ release' 3590 tape drive Available? (AIX) 'lsdev -C -l rmt1' 3590 tape drive cleaning The drive may detect when it needs cleaning, at which point it will display its need on its front panel, and notify the library (if so attached via RS-422 interface) and the host system (AIX gets Error Log entry ERRID_TAPE_ERR6, "tape drive needs cleaning", or TAPE_DRIVE_CLEANING entry - there will be no corresponding Activity Log entry). The 3494 Library Manager would respond by adding a cleaning task to its Clean Queue, for when the drive is free.
The 3494 may also be configured to perform cleaning on a scheduled basis, but be aware that this entails additional wear on the drive and makes the drive unavailable for some time, so choose this only if you find tapes going read-only due to I/O errors. Msgs: ANR8914I 3590 tape drive model number Do 'mtlib -l /dev/lmcp0 -D' The model number is in the third returned token. For example, in returned line: " 0, 00116050 003590B1A00" the model is 3590 B1A. 3590 tape drive serial number Do 'mtlib -l /dev/lmcp0 -D' The serial number is the second returned token, all but the last digit. For example, in returned line: " 0, 00116050 003590B1A00" the serial number is 11605. 3590 tape drive sharing As of TSM 3.7, two TSM servers can be connected, one to each port on a twin-tailed 3590 SCSI drive in the 3494, in a feature called "auto-sharing". Prior to this, individual drives in a 3494 library could only be attached to a particular server (library partitioning): each drive was owned by one server. 3590 tape drive status, from host 'mtlib -l /dev/lmcp0 -qD -f /dev/rmt1' 3590 tape drive use, define "ENABLE3590LIBRary" definition in the server options file. 3590 tape drives, list From AIX: 'mtlib -l /dev/lmcp0 -D' 3590 tape drives, list in AIX 'lsdev -C -c tape -H -t 3590' 3590 tape drives, not being used in a library See: Drives, not all in library being used 3590 tape mounts, by drive Put the 3494 into Pause mode; Open the 3494 door to access the given 3590's control panel; Select "Show Statistics Menu"; See "Mounts to Drv" value. See also: 3590 tape drive, hours powered on; 3590 statistics 3590 volume, verify Devclass See: SHow FORMAT3590 _VolName_ 3590B The original 3590 tape drives. Cartridges supported: 3590 'J' (10-30 GB), 'K' (20-60 GB) (Early B drives can use only 'J'.) Tracks: 128 total tracks, 16 at a time, in serpentine fashion. Number of servo tracks: 3 Interfaces: Two, SCSI (FWD) Previous generation: none in 3590 series; but 3490E conceptually. See also: 3590C 3590B vs.
3590E drives A tape labelled by a 3590E drive cannot be read by a 3590B drive. A tape labelled by a 3590B drive can be read by a 3590E drive, but cannot be written by a 3590E drive. The E model can read the B formatted cartridge. The E model writes in 256 track format only and can not write or append to a B formatted tape. The E model can reformat a B format tape and then can write in the E format. The B model can not read E formatted data. The B model can reformat an E format tape and then can write in the B format: the B model device must be a minimum device code level (A_39F or B_731) to do so. 3590C FORMAT value in DEFine DEVclass for the original 3590 tape drives, when data compression is to be performed by the tape drive. See also: 3590B; DRIVE 3590E IBM's fifth generation of this 1/2" tape cartridge technology, using a single-reel approach and servo tracking pre-recorded on the tape for precise positioning. Excellent start-stop performance. Cartridges supported: 3590 'J' (20-60 GB), 'K' (40-120 GB) Tracks: 256 (2x the 3590B), written 16 at a time, in serpentine fashion. The head contains 32 track writers: As the tape moves forward, 16 tracks are written until EOT is encountered, whereupon electronic switching causes the other 16 track writers in the heads to be used as the tape moves backwards towards BOT. Then, the head is physically moved (indexed) to repeat the process, until finally all 256 tracks are written as 16 interleaved sets of 16 tracks. Number of servo tracks: 3 Interfaces: Two, SCSI (FWD) or FC As of March, 2000 comes with support for 3590 Extended High Performance Cartridge Tape, to again double capacity. Devclass: FORMAT=3590E-C (not DRIVE) Previous generation: 3590B Next generation: 3590H 3590E? (Is a drive 3590E?) Expect to be able to tell if a 3590 drive is an E model by visual inspection: - Rear of drive (power cord end) having stickers saying "Magstar Model E" and "2x" (meaning that the EHPC feature is installed in the drive).
- Drive display showing like "E1A-X" (drive type, where X indicates extended) in the lower left corner. (See Table 5 in 3590 Operator Guide manual.) 3590EE Extra long 3590E tapes (double length), available only from Imation starting early 2000. The cartridge accent color is green instead of blue, and it has a 'K' label instead of 'J'. Must be used with 3590E drives. 3590H IBM's sixth generation of this 1/2" cartridge technology, using a single-reel approach and servo tracking pre-recorded on the tape for precise positioning. Excellent start-stop performance. Cartridges supported: 3590 'J' (30-90 GB), 'K' (60-180 GB) Capacity: 30GB native, ~90 GB compressed Tracks: 384 (1.5 times the 3590E) Compatibility: Can read, but not write, 128-track (3590) and 256-track (3590E) tapes. Supported in: TSM 5.1.6 Interfaces: Two, SCSI (FWD) or FC Devclass: FORMAT=3590E-C (not DRIVE) Previous generation: 3590E Next generation: 3592 (which is a complete departure, wholly incompatible) 3590K See: 3590 'K' 3590L AIX ODM type for 3590 Library models. 3592 The IBM TotalStorage Enterprise Tape Drive and Cartridge model numbers, introduced toward the end of 2003. The drive is only a drive: it slides into a cradle which externally provides power to the drive. The small form factor more severely limits the size of panel messages, to 8 chars. This model is a technology leap, akin to 3490->3590, meaning that though cartridge form remains the same, there is no compatibility whatever between this and what came before. Cleaning cartridges for the 3592 drive are likewise different. Rather than having a leader block, as in 3590 cartridges, the 3592 has a leader pin, located behind a retractable door. The 3592 cartridge is IBM's first one in the 359x series with an embedded memory chip (Cartridge Memory): Records are written to the chip every time the cartridge is unloaded from a 3592 J1A tape drive.
These records are then used by the IBM Statistical Analysis and Reporting System (SARS) to analyze and report on tape drive and cartridge usage and help diagnose and isolate tape errors. SARS can also be used to proactively determine if the tape media or tape drive is degrading over time. Cleaning tapes also have CM, emphatically limiting their usage to 50 cycles. The 3592 cartridges come in four types: - The 3592 "JA" long rewritable cartridge: the high capacity tape which most customers would probably buy. Native capacity: 300 GB (Customers report getting up to 1.2 TB.) Can be initialized to 60 GB to serve in a fast-access manner. Works with 3592 J1A tape drive. - The 3592 "JJ" short rewritable cartridge: the economical choice where lesser amounts of data are written to separate tapes. Native capacity: 60 GB. Works with 3592 J1A tape drive. - The 3592 "JW" long write-once (WORM) cartridge. Native capacity: 300 GB. - The 3592 "JR" short write-once (WORM) cartridge. Native capacity: 60 GB. Compression type: Byte Level Compression Scheme Swapping. With this type, it is not possible for the data to expand. (IBM docs also say that the drive uses LZ1 compression, and the Streaming Lossless Data Compression (SLDC) algorithm, and ELDC.) The TSM SCALECAPACITY operand of DEFine DEVclass can scale the native 300 GB capacity back to a low of 60 GB. The 3592 cartridges may live in either a 3494 library (in a new frame type - L22, D22, and D24 - separate from any other 3590 tape drives in the library); or a special frame of a 3584 library. Host connectivity: Dual ported switched fabric 2-Gbps Fibre Channel attachment (but online to only one host at a time). Physical connection is FC, but the drive employs the SCSI-3 command set for operation, in a manner greatly compatible with the 3590, simplifying host application support of the drive. As with the 3590 tape generation, the 3592 has servo information factory-written on the tape. 
(Do not degauss such cartridges. If you need to obliterate the data on a cartridge, perform a Data Security Erase.) Drive data transfer rate: up to 40 MB/s Data life: 30 years Barcode label: Consists of 8 chars, the first 6 being the tape volser, and the last 2 being the media type ("JA"). Tape vendors: Fuji, Imation (IBM will not be manufacturing tape) The J1A version of the drive is supported in the 3584 library, as of mid 2004. IBM brochure, specs: G225-6987-01 http://www.fuji-magnetics.com/en/company/news/index2_html Next generation: None, as of 2004/09 3599 An IBM "machine type / model" spec for ordering any Magstar cartridges: 3599-001, -002, -003 are 3590 J cartridges; 3599-004, -005, -006 are 3590 K cartridges; 3599-007 is the 3590 cleaning cartridge; 3599-011, -012, -013 are 3592 cartridges; 3599-017 is the 3592 cleaning cartridge. 3599 A product from Bow Industries for cleaning and retensioning 3590 tape cartridges. www.bowindustries.com/3599.htm 3600 IBM LTO tape library, announced 2001/03/22, withdrawn 2002/10/29. Models: 3600-109 1.8 TB autoloader 3600-220 2/4 TB tower; 1 or 2 drives 3600-R20 2/4 TB rack; 1 or 2 drives The 220 and R20 come with two removable magazines that can each hold up to 10 LTO data or cleaning cartridges. 3995 IBM optical media library, utilizing double-sided, CD-sized optical platters contained in protective plastic cartridges. The media can be rewritable (Magneto-Optical), CCW (Continuous Composite Write-once), or permanent WORM (Write-Once, Read-Many). Each side of a cartridge is an Optical Volume. The optical drive has a fixed, single head: the autochanger can flip the cartridge to make the other side (volume) face the head. See also: WORM 3995 C60 Make sure Device Type ends up as WORM, not OPTICAL. 3995 drives Define as /dev/rop_ (not /dev/op_). See APAR IX79416, which describes element numbers vs. SCSI IDs. 
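The 3592 barcode-label layout noted above (6-character volser followed by a 2-character media type) lends itself to trivial parsing, e.g. when reconciling library inventory listings against TSM volume names. A minimal sketch; the function name and sample label are illustrative only, not anything from TSM:

```python
# Split an 8-character 3592 barcode label into (volser, media type),
# per the 6+2 layout described in the "Barcode label" note above.
# split_3592_label and the sample label are illustrative assumptions.

def split_3592_label(label: str) -> tuple:
    label = label.strip().upper()
    if len(label) != 8:
        raise ValueError("3592 labels are 8 chars: 6-char volser + 2-char media type")
    return label[:6], label[6:]

volser, media = split_3592_label("A00001JA")
print(volser, media)  # A00001 JA
```

The media-type suffix ("JA", "JJ", "JW", "JR") then tells you which of the four cartridge types described above you are holding.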
3995 manuals http://www.storage.ibm.com/hardsoft/ opticalstor/pubs/pubs3995.html 3995 web page http://www.storage.ibm.com/hardsoft/ opticalstor/3995/maine.html http://www.s390.ibm.com/os390/bkserv/hw/ 50_srch.html 56Kb modem uploads With 56Kb modem technology, 53Kb is the fastest download speed you can usually expect, and 33Kb is the highest upload speed possible. And remember that phone line quality can reduce that further. Ref: www.56k.com 64-bit filesize support Was added in PTF 6 of the version 2 client. 64-bit ready? (Is ADSM?) Per Dave Cannon, ADSM Development, 1998/04/17, the ADSM server has always used 64-bit values for handling sizes and capacities. 7206 IBM model number for 4mm tape drive. Media capacity: 4 GB Transfer rate: 400 KB/S 7207 IBM model number for QIC tape drive. Media capacity: 1.2 GB Transfer rate: 300 KB/S 7208 IBM model number for 8mm tape drive. Media capacity: 5 GB Transfer rate: 500 KB/S 7331 IBM model number for a tape library containing 8mm tapes. It comes with a driver (Atape on AIX, IBMtape on Solaris) for the robot to go with the generic OST driver for the drive. That's to support non-ADSM applications, but ADSM has its own driver for these devices. Media capacity: 7 GB Transfer rate: 500 KB/S 7332 IBM model number for 4mm tape drive. Media capacity: 4 GB Transfer rate: 400 KB/S 7337 A DLT library. Define in ADSM like: DEFine LIBRary autoDLTlib LIBType=SCSI DEVice=/dev/lb0 DEFine DRive autodltlib drive01 DEVice=/dev/mt0 ELEMent=116 DEFine DRive autodltlib drive02 DEVice=/dev/mt1 ELEMent=117 DEFine DEVclass autodlt_class DEVType=dlt LIBRary=autodltlib DEFine STGpool autodlt_pool autodlt_class MAXSCRatch=15 8200 Refers to recording format for 8mm tapes, for a capacity of about 2.3 GB. 8200C Refers to recording format for 8mm tapes, for a capacity of about 3.5 GB. 8500 Refers to recording format for 8mm tapes, for a capacity of about 5.0 GB. 8500C Refers to recording format for 8mm tapes, for a capacity of about 7.0 GB. 
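The 8200/8200C/8500/8500C/8900 entries above amount to a small format-to-capacity table; when scripting rough capacity estimates for 8mm media, it can be kept as a simple lookup. A sketch using the approximate figures from those entries (the dictionary and function names are my own, not TSM's):

```python
# Approximate capacities (GB) for 8mm recording formats, as listed
# in the 8200/8200C/8500/8500C/8900 entries of this reference.
EIGHT_MM_FORMAT_GB = {
    "8200": 2.3,
    "8200C": 3.5,
    "8500": 5.0,
    "8500C": 7.0,
    "8900": 20.0,
}

def capacity_gb(fmt: str) -> float:
    """Look up the approximate native capacity for an 8mm format."""
    return EIGHT_MM_FORMAT_GB[fmt.upper()]

print(capacity_gb("8500c"))  # 7.0
```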
8900 Refers to recording format for 8mm tapes, for a capacity of about 20.0 GB. 8mm drives All are made by Exabyte. 8mm tape technology Yecch! Horribly unreliable. Tends to be "write only" - write okay, but tapes unreadable thereafter. 9710/9714 See: StorageTek 9840 See: STK 9840 9940b drive Devclass: - If employing the Gresham Advantape driver: generictape - If employing the Tivoli driver: ecartridge ABC Archive Backup Client for *SM, as on OpenVMS. The software is written by SSSI. It uses the TSM API to save and restore files. See also: OpenVMS ABSolute A Copy Group mode value (MODE=ABSolute) that indicates that an object is considered for backup even if it has not changed since the last time it was backed up; that is, force all files to be backed up. See also: MODE Contrast with: MODified. See also: SERialization (another Copy Group parameter) Accelis (LTO) Designer name for the next generation (sometimes misspelled "Accellis") 3570 tape, LTO. Cartridge is same as 3570, including dual-hub, half-wound for rapid initial access to data residing at either end of the tape (intended to be 10 seconds or less). Physically sturdier than Ultrium, Accelis was intended for large-scale automated libraries. But Accelis never made it to reality: increasing disk capacity made the higher-capacity Ultrium more realistic; and two-hub tape cartridges are wasteful in containing "50% air" instead of tape. Accelis would have had: Cartridge Memory (LTO CM, LTO-CM) chip is embedded in the cartridge: a non-contacting RF module, with non-volatile memory capacity of 4096 bytes, provides for storage and retrieval of cartridge, data positioning, and user specified info. Recording method: Multi-channel linear serpentine Capacity: 25 GB native, uncompressed Transfer rate: 10-20 MB/second. 
http://www.Accelis.com/ "What Happened to Accelis?": http://www.enterprisestorageforum.com/ technology/features/article.php/1461291 See also: 3583; LTO; MAM; Ultrium (LTO) ACCept Date TSM server command to cause the server to accept the current date and time as valid when an invalid date and time are detected. Syntax: 'ACCept Date' Note that one should not normally have to do this, even across Daylight Savings Time changes, as the conventions under which application programs are run on the server system should let the server automatically have the correct date and time. In Unix systems, for example, the TZ (Time Zone) environment variable specifies the time zone offsets for Daylight and Standard times. In AIX you can do 'ps eww ' to inspect the env vars of the running server. In a z/OS environment, see IBM site article swg21153685. See also: Daylight Savings Time Access Line-item title from the 'Query Volume Format=Detailed' report, which says how the volume may be accessed: Read-Only, Read/Write, Unavailable, Destroyed, OFfsite. Use 'UPDate Volume' to change the access value. If Access is Read-Only for a storage pool within a hierarchy of storage pools, ADSM will skip that level and attempt to write the data to the next level. Access TSM db: Column in Volumes table. Possible values: DESTROYED, OFFSITE, READONLY, READWRITE, UNAVAILABLE Access Control Lists (AIX) Extended permissions which are preserved in Backup/Restore. "Access denied" A message which may be seen in some environments; usually means that some other program has the file open in a manner that prevents other applications from opening it (including ADSM). Access mode A storage pool and storage volume attribute recorded in the ADSM database specifying whether data can be written to or read from storage pools or storage volumes. It can be one of: Read/write Can read or write volume in the storage pool. Set with UPDate STGpool or UPDate Volume. Read-only Volume can only be read. 
Set with UPDate STGpool or UPDate Volume. Unavailable Volume is not available for any kind of access. Set with UPDate STGpool or UPDate Volume. DEStroyed Possible for a primary storage pool (only), says that the volume has been permanently damaged. Do RESTORE STGpool or RESTORE Volume. Set with UPDate Volume. OFfsite Possible for a copy storage pool, says that volume is away and can't be mounted. Set with UPDate Volume. Ref: Admin Guide See also: DEStroyed Access time When a file was last read: its "atime" value (stat struct st_atime). The Backup operation results in the file's access timestamp being changed as each file is backed up, because as a generalized application it is performing conventional I/O to read the contents of the file, and the operating system records this access. (That is, it is not Backup itself which modifies the timestamp: it's merely that its actions incidentally cause it to change.) Beginning with the Version 2 Release 1 Level 0.1 PTF, UNIX backup and archive processes changed the ctime instead of user access time (atime). This was done because the HSM feature on AIX uses atime in assessing a file's eligibility and priority for migration. However, since the change of ctime conflicts with other existing software, with the Level 0.2 PTF, UNIX backup and archive functions now perform as they did with Version 1: atime is updated, but not ctime. AIX customers might consider getting around that by the rather painful step of using the 'cplv' command to make a copy of the file system logical volumes, then 'fsck' and 'mount' the copy and run backup; but that isn't very reliable. One thinks of maybe getting around the problem by remounting a mounted file system read-only; but in AIX that doesn't work, as lower level mechanisms know that the singular file has been touched. (See topic "MOUNTING FILE SYSTEMS READ-ONLY FOR BACKUP" near the bottom of this documentation.) 
Network Appliance devices can make an instant snapshot image of a file system for convenient backup, a la AFS design. Veritas Netbackup can restore the atime but at the expense of the ctime (http://seer.support.veritas.com/docs/240723.htm) See also: FlashCopy Accessor On a tape robot (e.g., 3494) is the part which moves within the library and carries the arm/hand assembly. See also: Gripper Accounting Records client session activities, with an accounting record written at the end of each client node session (in which a server interaction is required). The information recorded chiefly reflects volumetrics, and thus would be more useful for cross-charging purposes than for more illuminating uses. Note that a client session which does not require interaction with the server, such as 'q option', does not result in an accounting record being written. A busy system will create VOLUMINOUS accounting files, so use judiciously. See also: dsmaccnt.log; SUMMARY Accounting, query 'Query STatus', seek "Accounting:". Unfortunately, its output is meager, revealing only On or Off. See also: dsmaccnt.log Accounting, turn off 'Set ACCounting OFf' Accounting, turn on 'Set ACCounting ON' See also: dsmaccnt.log Accounting log Unix: Is file dsmaccnt.log, located in either the server directory or the directory specified on the DSMSERV_ACCOUNTING_DIR environment variable. MVS (OS/390): the recording occurs in SMF records, subtype 14. Accounting recording begins when 'Set ACCounting ON' is done and client activity occurs. The server keeps the file open, and the file will grow endlessly: there is no expiration pruning done by TSM; so you should cut the file off periodically, either when the server starts/ends, or by turning accounting off for the duration of the cut-off. Accounting log directory Specified via environment variable DSMSERV_ACCOUNTING_DIR (q.v.) in Unix environments, or NT Registry key. Introduced late in *SMv3. 
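Since TSM does no pruning of dsmaccnt.log, sites typically cut it off periodically and post-process it themselves. The records are comma-separated, one per client session; per the "Accounting record layout/fields" entry, field 24 is the session's media wait time. A rough parsing sketch - the sample data below is fabricated purely to exercise the parser, and real records carry many more meaningful fields (maintenance levels may also add fields, so consult the Admin Guide layout rather than hard-coding positions):

```python
import csv
import io

# Sum one numeric field across comma-separated accounting records.
# field_no defaults to 24 (media wait time, per the record-layout
# entry in this reference); records too short to hold it are skipped.
def total_media_wait(log_text: str, field_no: int = 24) -> int:
    total = 0
    for rec in csv.reader(io.StringIO(log_text)):
        if len(rec) >= field_no:
            total += int(rec[field_no - 1])
    return total

# Fabricated 24-field records: 23 placeholder fields, then media wait.
sample = "\n".join(",".join(["x"] * 23 + [str(w)]) for w in (12, 30, 0))
print(total_media_wait(sample))  # 42
```

A real reporting script would of course also pull out node name, date, and byte counts from their documented positions; the adsmacct.exec REXX script and the Perl program mentioned under "Accounting records processing" below do this more completely.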
Accounting record layout/fields See the Admin Guide for a description of record contents. Field 24, "Amount of media wait time during the session", refers to time waiting for tape mounts. Note that maintenance levels may add accounting fields. See layout description in "ACCOUNTING RECORD FORMAT" near the bottom of this functional directory. Accounting records processing There are no formal tools for doing this. The IBM FTP site's adsm/nosuppt directory contains an adsmacct.exec REXX script, but that's it. See http://people.bu.edu/rbs/TSM_Aids.html for a Perl program to do this. ACF 3590 tape drive: Automatic Cartridge Facility: a magazine which can hold 10 cartridges. Note that this does not exist as such on the 3494: it has a 10-cartridge Convenience I/O Station, which is little more than a pass-through area. ACL handling (Access Control Lists) ACL info will be stored in the *SM database by Archive and Backup, unless it is too big, in which case the ACL info will be stored in a storage pool, which can be controlled by DIRMc. See also: Archive; Backup; DIRMc; INCRBYDate Ref: Using the Unix Backup-Archive Clients (indexed under Access Permissions, describing ACLs as "extended permissions"). ACLs (Access Control Lists) and backup file mtime Changes to Unix ACLs do not change the file's mtime, so such a change will not cause the file to be backed up by date. ACLS Typically a misspelling of "ACSLS", but could be Auto Cartridge Loader System. ACS Automated Cartridge System ACSACCESSID Server option to specify the id for the ACS access control. Syntax: ACSACCESSID name Code a name 1-64 characters long. The default id is hostname. ACSDRVID Device Driver ID for ACSLS. ACSLOCKDRIVE Server option to specify if the drives within the ACSLS libraries are to be locked. Drive locking ensures the exclusive use of the drive within the ACSLS library in a shared environment. However, there are some performance improvements if locking is not performed. 
If the ADSM drives are not shared with other applications in the configuration then drive locking is not required. Syntax: ACSLOCKDRIVE [YES | NO] Default: NO ACSLS Refers to the STK Automated Cartridge System Library Software. Based upon an RPC client (SSI) - server (CSI) model, it manages the physical aspects of tape cartridge storage and retrieval, while data retrieval is separate, over SCSI or other method. Whenever TSM has a command to send to the robot arm, it changes the command into something that works rather like an RPC call that goes over to the ACSLS software, then ACSLS issues the SCSI commands to the robot arm. ACSLS is typically needed only when sharing a library, wherein ACSLS arbitrates requests; otherwise TSM may control the library directly. Performance: As of 2000/06, severely impaired by being single-threaded, resulting in long tape mount times as *SM queries the drive several times before being sure that a mount is safe. http://www.stortek.com/StorageTek/software/acsls/ Debugging: Use 'rpcinfo -p' on the server to look for the following ACSLS programs being registered in Portmap: program vers proto port 536871166 2 tcp 4354 300031 2 tcp 4355 then use 'rpcinfo -t ...' to reflect off the program instances. ACSQUICKINIT Server option to specify whether initialization of the ACSLS library should be quick or full during server startup. The full initialization matches the ACSLS inventory with the ADSM inventory and validates the locking for each ADSM-owned volume. It also validates the drive locking and dismounts all volumes currently in the ADSM drives. The full initialization takes about 1-2 seconds per volume and can take a long time during the server startup if the library inventory is large. ACSQUICKINIT bypasses all the inventory matching, lock validation and volume dismounting from the drive. 
The user must ensure the integrity of the ADSM inventory and drive availability: all ADSM volumes and drives are assumed to be locked by the same lock_id and available. This option is useful for server restart, and should only be used if all ADSM inventory and resources remain the same while the server is down. Syntax: ACSQUICKINIT [YES | NO] Default: NO ACSTIMEOUTX Server option to specify the multiple for the built-in timeout value for the ACSLS API. The built-in timeout value for the ACS audit API is 1800 seconds, and 600 seconds for all other APIs. If the multiple value specified is 5, the timeout value for the audit API becomes 9000 seconds and, for all other APIs, 3000 seconds. Syntax: ACSTIMEOUTX value Code a number from 1 - 100. Default: 1 Activate Policy Set See: ACTivate POlicyset; Policy set, activate ACTivate POlicyset *SM server command to specify an existing policy set as the Active policy set for a policy domain. Syntax: 'ACTivate POlicyset ' (Be sure to do 'VALidate POlicyset' beforehand.) You need to do an Activate after making management class changes. ACTIVE Column name in the ADMIN_SCHEDULES SQL database table. Possible values: YES, NO. SELECT * FROM ADMIN_SCHEDULES Active Directory See: Windows Active Directory Active file system A file system for which space management is activated. HSM can perform all space management tasks for an active file system, including automatic migration, recall, and reconciliation, and selective migration and recall. Contrast with inactive file system. Active files, identify in Select STATE='ACTIVE_VERSION' See also: Inactive files, identify in Select; STATE Active files, number and bytes Do 'EXPort Node NodeName \ FILESpace=FileSpaceName \ FILEData=BACKUPActive \ Preview=Yes' Message ANR0986I will report the number of files and bytes. 
An alternate method, reporting MB only, follows the definition of Active files, meaning files remaining in the file system - as reflected in a Unix 'df' command and: SELECT SUM(CAPACITY*PCT_UTIL/100) FROM FILESPACES WHERE NODE_NAME='____' This Select is very fast and obviously depends upon whole file system backups. (Selective backups and limited backups can throw it off.) See also: Inactive files, number and bytes Active files, report in terms of MB By definition, Active files are those which are currently present in the client file system, which a current backup causes to be reflected in filespace numbers, so the following yields reasonable results: SELECT NODE_NAME, FILESPACE_NAME, FILESPACE_TYPE, CAPACITY AS "File System Size in MB", PCT_UTIL, DECIMAL((CAPACITY * (PCT_UTIL / 100.0)), 10, 2) AS "MB of Active Files" FROM FILESPACES ORDER BY NODE_NAME, FILESPACE_NAME Caveats: The amount of data in a TSM server filespace will differ somewhat from the client file system where some files are excluded from backups, and more so where client compression is employed. But in most cases the numbers will be good. Active files for a user, identify via SELECT COUNT(*) AS "Active files count" - FROM BACKUPS WHERE - NODE_NAME='UPPER_CASE_NAME' AND - FILESPACE_NAME='___' AND OWNER='___' - AND STATE='ACTIVE_VERSION' Active policy set The policy set within a policy domain most recently subjected to an 'activate' to effectively establish its specifications as those to be in effect. This policy set is used by all client nodes assigned to the current policy domain. See policy set. Active Version (Active File) The most recent backup copy of an object stored in ADSM storage for an object that currently exists on a file server or workstation. An active version remains active and exempt from deletion until it is replaced by a new backup version, or ADSM detects during a backup that the user has deleted the original object from a file server or workstation. 
Note that active and inactive files may exist on the same volumes. See also: ACTIVE_VERSION; Inactive Version; INACTIVE_VERSION Active versions, keep in stgpool For faster restoral, you may want to retain Active files in a higher storage pool of your storage pool hierarchy. There has been no operand in the product to allow you to specify this explicitly; but you can roughly achieve that end via the Stgpool MIGDelay value, to keep recent (Active) files in the higher storage pool. Of course, if there is little turnover in the file system feeding the storage pool, Active files will get old and will migrate. ACTIVE_VERSION SQL DB: State value in Backups table for a current, Active file. See also: DEACTIVATE_DATE Activity log Contains all messages normally sent to the server console during server operation. This is information stored in the TSM server database, not in a separate file. Do 'Query ACtlog' to get info. Each time the server starts it begins logging with message: ANR2100I Activity log process has started. See also: Activity log pruning Activity log, create an entry As of TSM 3.7.3 you can, from the client side, cause messages to be added to the server Activity Log (ANE4771I) by using the API's dsmLogEvent. Another means, crude but effective: use an unrecognized command name, like: "COMMENT At this time we will be powering off our tape robot." It will show up on an ANR2017I message, followed by "ANR2000E Unknown command - COMMENT.", which can be ignored. See also: ISSUE MESSAGE Activity log, number of entries There is no server command to readily determine the amount of database space consumed by the Activity Log. The only close way is to count the number of log entries, as via batch command: 'dsmadmc -id=___ -pa=___ q act BEGINDate=-9999 | grep ANR | wc -l' or do: SELECT COUNT(*) FROM ACTLOG See also: Activity log pruning Activity log, search 'Query ACtlog ... 
Search='Search string' Activity log, Select entries more than an hour old SELECT SERVERNAME,NODENAME,DATE_TIME - FROM ACTLOG WHERE - (CAST((CURRENT_TIMESTAMP-DATE_TIME) - HOURS AS INTEGER)>1) Activity log, seek a message number 'Query ACtlog ... MSGno=____' or SELECT MESSAGE FROM ACTLOG WHERE - MSGNO=0988 Seek one less than an hour old: SELECT MESSAGE FROM ACTLOG WHERE - MSGNO=0986 AND - DATE_TIME<(CURRENT_TIMESTAMP-(1 HOUR)) Activity log, seek message text SELECT * FROM ACTLOG WHERE MESSAGE LIKE '%%' Activity log, seek severity messages in last 2 days SELECT * FROM ACTLOG WHERE \ (SEVERITY='W' OR SEVERITY='E' OR \ SEVERITY='D') AND \ DAYS(CURRENT_TIMESTAMP)- \ DAYS(DATE_TIME) <2 Activity log content, query 'Query ACtlog' Activity log pruning (prune) Occurs just after midnite, driven by 'Set ACTlogretention N_Days' value. The first messages which always remain in the Activity Log, related to the pruning, are ANR2102I and ANR2103I. Activity log retention period, query 'Query STatus', look for "Activity Log Retention Period" Activity log retention period, set 'Set ACTlogretention N_Days' Activity Summary Table See: SUMMARY table ACTLOG The *SM database Activity Log table. Columns: DATE_TIME, MSGNO, SEVERITY, MESSAGE, ORIGINATOR, NODENAME, OWNERNAME, SCHEDNAME, DOMAINNAME, SESSID ACTlogretention See: Set ACTlogretention AD See: Windows Active Directory Adaptive Differencing A.k.a. "adaptive sub-file backup" and "mobile backup", to back up only the changed portions of a file rather than the whole file. Is employed for files > 1 KB and < 2 GB. (The low-end limit (1024 bytes) was due to some strange behavior with really small files, e.g., if a file started out at 5 KB and then was truncated to 8 bytes. The solution was to just send the entire file if the file fell below the 1 KB threshold - no problem since these are tiny files.) Initially introduced for TSM4 Windows clients, intended for roaming users needing to back up data on laptop computers, over a telephone line. 
Note that the transfer speed thus varies greatly according to the phone line. See "56Kb modem uploads" for insight. (All 4.1+ servers can store the subfile data sent by the Windows client - providing that it is turned on in the server, via 'Set SUBFILE'.) Limitations: the differencing subsystem in use is limited to 32 bits, meaning 2 GB files. The developers chose 2 GB (instead of 4 GB) as the limit to avoid any possible boundary problems near the 32-bit addressing limit and also because this technology was aimed at the mobile market (read: Who is going to have files on their laptops > 2 GB?). As of 2003 there are no plans to go to 64 bits. Ref: TSM 3.7.3 and 4.1 Technical Guide redbook; Windows client manual; Whitepaper on TSM Adaptive Sub-file Differencing at http://www.ibm.com/ software/tivoli/library/whitepapers/ See also: Set SUBFILE; SUBFILE* ADIC Vendor: Advanced Digital Information Corporation - a leading device-independent storage solutions provider to the open systems marketplace. A reseller. www.adic.com ADMIN Name of the default administrator ID, from the TSM installation. Admin GUI There is none for ADSMv3: there is a command line admin client, and a web admin client instead. Administrative client A program that runs on a file server, workstation, or mainframe. This program allows an ADSM administrator to control and monitor an ADSM server using ADSM administrative commands. Contrast with backup-archive client. Administrative command line interface Beginning with the 3.7 client, the Administrative command line interface is no longer part of the Typical install, in order to bring it in line with the needs of the "typical" TSM user, who is an end user who does not require this capability. If you run a Custom install, you can select the Admin component to be installed. Administrative schedule A schedule to control operations affecting the TSM server. Note that you can't redirect output from an administrative schedule. 
That is, if you define an administrative schedule, you cannot code ">" or ">>" in the CMD. This seems to be related to the restriction that you can't redirect output from an Admin command issued from the ADSM console. Experience shows that an admin schedule will not be kicked off if a Server Script is running (at least in ADSMv3). The only restricted commands are MACRO and Query ACtlog, because... MACRO: Macros are valid only from administrative clients. Scheduling of admin commands is contained solely within the server and the server has no knowledge of macros. Query ACtlog: Since all output from scheduled admin commands is forced to the actlog, then scheduling a Query ACtlog would force the resulting output right back to the actlog, thereby doubling the size of the actlog. See: DEFine SCHedule, administrative Administrative schedule, run one time Define the administrative schedule with PERUnits=Onetime. Administrative schedules, disable See: DISABLESCheds Administrative schedules, prevent See: DISABLESCheds Administrator A user who is registered with an ADSM server as an administrator. Administrators are assigned one or more privilege classes that determine which administrative tasks they can perform. Administrators can use the administrative client to enter ADSM server commands and queries according to their privileges. Be aware that ADSM associates schedules and other definitions with the administrator who created or last changed them, and that removal or locking of the admin can cause the object to stop operating. In light of this affiliation, it is best for a shop to define a general administrator ID (much like root on a Unix system) which should be used to manage resources having sensitivity to the administrator ID. Administrator, add See: Administrator, register Administrator, lock out 'LOCK Admin Admin_Name' See also: Administrators, web, lock out Administrator, password, change 'UPDate Admin Admin_Name PassWord' Administrator, register 'REGister Admin ...' 
(q.v.) The administrator starts out with Default privilege class. To get more, the 'GRant AUTHority' command must be issued. Administrator, remove 'REMove Admin Adm_Name' Administrator, rename 'REName Admin Old_Adm_Name New_Name' Administrator, revoke authority 'REVoke AUTHority Adm_Name [CLasses=SYstem|Policy|STorage| Operator|Analyst] [DOmains=domain1[,domain2...]] [STGpools=pool1[,pool2...]]' Administrator, unlock 'UNLOCK Admin Adm_Name' Administrator, update info or password 'UPDate Admin ...' (q.v.) Administrator files Located in /usr/lpp/adsm/bin/ Administrator passwords, reset Shamefully, some sites lose track of all their administrator passwords, and need to restore administrator access. The one way is to bring the server down and then start it interactively, which is to say implicitly under the SERVER_CONSOLE administrator id. See: HALT; UPDate Admin Administrator privilege classes From highest level to lowest: System - Total authority Policy - Policy domains, sets, management classes, copy groups, schedules. Storage - Manage storage resources. Operator - Server operation, availability of storage media. Analyst - Reset counters, track server statistics. Default - Can do queries. Right out of a 'REGister Admin' cmd, the individual gets Default privilege. To get more, the 'GRant AUTHority' command must be issued. Administrators, query 'Query admin * Format=Detailed' Administrators, web, lock out You can update the server options file COMMMethod option to eliminate the HTTP and HTTPS specifications. See also: "Administrator, lock out" for locking out a single administrator. adsm The command used to invoke the standard ADSM interface (GUI), for access to Utilities, Server, Administrative Client, Backup-Archive Client, and HSM Client management. /usr/bin/adsm -> /usr/lpp/adsmserv/ezadsm/adsm. Contrast with the 'dsmadm' command, which is the GUI for pure server administration. ADSM ADSTAR Distributed Storage Manager. 
Consisted of Versions 1, 2, and 3 through Release 1. See also: IBM Tivoli Storage Manager; Tivoli Storage Manager; TSM; WDSF ADSM components installed AIX: 'lslpp -l "adsm*"' See also: TSM components installed ADSM monitoring products ADSM Manager (see http://www.mainstar.com/adsm.htm). Tivoli Decision Support for Storage Management Analysis. This agent program now ships free with TSM V4.1; however you do need a Tivoli Decision Support server. See redbook Tivoli Storage Management Reporting SG24-6109. See also: TSM monitoring products. ADSM origins See: WDSF ADSM server version/release level Revealed in server command Query STatus. Is not available in any SQL table via Select. ADSM usage, restrict by groups Use the "Groups" option in the Client System Options file (dsm.sys) to name the Unix groups which may use ADSM services. See also "Users" option. ADSM.DISKLOG (MVS) Is created as a result of the ANRINST job. You can find a sample of the JCL in the ADSM.SAMPLIB. ADSM.SYS The C:\adsm.sys directory is the "Registry Staging Directory", backed up as part of the system object backup (systemstate and systemservices objects), as the Backup client is traversing the C: DRIVE. ADSM.SYS is excluded from "traditional" incremental and selective backups ("exclude c:\adsm.sys\...\*" is implicit - but should really be "exclude.dir c:\adsm.sys", to avoid timing problems.) Note that backups may report adsm.sys\WMI, adsm.sys\IIS and adsm.sys\EVENTLOG as "skipped": these are not files, but subdirectories. You may employ "exclude.dir c:\adsm.sys" in your include-exclude list to eliminate the messages. (A future enhancement may implicitly do exclude.dir.) For Windows 2003, ADSM.SYS includes VSS metadata, which also needs to be backed up. See: BACKUPRegistry; NT Registry, back up; REGREST ADSM_DD_* These are AIX device errors (circa 1997), as appear in the AIX Error Log. ADSM logs certain device errors in the AIX system error log. Accompanying Sense Data details the error condition. 
ADSM_DD_LOG1 (0XAC3AB953) DEVICE DRIVER SOFTWARE ERROR Logged by the ADSM device driver when a problem is suspected in the ADSM device driver software. For example, if the ADSM device driver issues a SCSI I/O command with an illegal operation code the command fails and the error is logged with this identifier. ADSM_DD_LOG2 (0X5680E405) HARDWARE/COMMAND-ABORTED ERROR Logged by the ADSM device driver when the device reports a hardware error or command-aborted error in response to a SCSI I/O command. ADSM_DD_LOG3 (0X461B41DE) MEDIA ERROR Logged by the ADSM device driver when a SCSI I/O command fails because of corrupted or incompatible media, or because a drive requires cleaning. ADSM_DD_LOG4 (0X4225DB66) TARGET DEVICE GOT UNIT ATTENTION Logged by the ADSM device driver after receiving a UNIT ATTENTION notification from a device. UNIT ATTENTIONs are informational and usually indicate that some state of the device has changed. For example, this error would be logged if the door of a library device was opened and then closed again. Logging this event indicates that the activity occurred and that the library inventory may have been changed. ADSM_DD_LOG5 (0XDAC55CE5) PERMANENT UNKNOWN ERROR Logged by the ADSM device driver after receiving an unknown error from a device in response to a SCSI I/O cmd. There is no single cause for this: the cause is to be determined by examining the Command, Status Code, and Sense Data. For example, it could be that a SCSI command such as Reserve (X'16') or Release (X'17') was issued with no args (rest of Command is all zeroes). adsmfsm /etc/filesystems attribute, set "true", which is added when 'dsmmigfs' or its GUI equivalent is run to add ADSM HSM control to an AIX file system. Adsmpipe An unsupported Unix utility which uses the *SM API to provide archive, backup, retrieve, and restore facilities for any data that can be piped into it, including raw logical volumes. 
(In that TSM 3.7 can back up Unix raw logical volumes, there is no need for Adsmpipe to serve that purpose. However, it is still useful for situations where it is inconvenient or impossible to back up a regular file, such as capturing the output of an Oracle Export operation where there isn't sufficient Unix disk space to hold it for 'dsmc i'.) By default, files are stored on the server under filespace name "/pipe" (which can be overridden via -s). Do 'adsmpipe' to see usage. -f Mandatory option to specify the name used for the file in the filespace. -c To back up a file to the *SM server. -f here specifies the arbitrary name to be assigned to the file as it is to be stored in the *SM server. Input comes from Stdin. Messages go to Stderr. -x To restore a file from the *SM server. Do not include the filespace name in the -f spec. Output goes to Stdout. Messages go to Stderr. -t To list previous backup files. Messages go to Stderr. -m To choose a management class. The session will show up as an ordinary backup, including in accounting data. There is a surprising amount of crossover between this API-based facility and the standard B/A client: 'dsmc q f' will show the backup as type "API:ADSMPIPE". 'dsmc q ba -su=y /pipe/\*' will show the files. 'dsmc restore -su=y /pipe/' will restore the file. To get the software: go to http://www.redbooks.ibm.com/, search on the redbook title (or "adsmpipe"), and then on its page click Additional Material, whereunder lies the utility. That leads to: ftp://www.redbooks.ibm.com/redbooks/SG244335/ (The file may be labeled "adsmpipe.tar" but may in fact be a compressed file; so should actually have been named "adsmpipe.tar.Z".) Ref: Redbook "Using ADSM to Back Up Databases" (SG24-4335) .adsmrc (Unix client) The ADSMv3 Backup/Archive GUI introduced an Estimate function. It collects statistics from the ADSM server, which the client stores, by *SM server address, in the .adsmrc file in the user's Unix home directory, or Windows dsm.ini file. 
Client installation also creates this file in the client directory. Ref: Client manual chapter 3 "Estimating Backup processing Time"; ADSMv3 Technical Guide redbook See also: dsm.ini; Estimate; TSM GUI Preferences adsmrsmd.dll Windows library provided with the TSM 4.1 server for Windows. (Not installed with 3.7, though.) For Removable Storage Management (RSM). Should be in directory: c:\program files\tivoli\tsm\server\ as both: adsmrsm.dll and adsmrsmd.dll Messages: ANR9955W See also: RSM adsmscsi Older device driver for Windows (2000 and lower), for each disk drive. With Windows 2003 you instead use tsmscsi, installed on each drive, rather than having one device driver for all the drives. See manuals. adsmserv.licenses ADSMv2 file in /usr/lpp/adsmserv/bin/, installed with the base server code and updated by the 'REGister LICense' command to contain encoded character data (which is not the same as the hex strings you typed into the command). For later ADSM/TSM releases, see "nodelock". If the server processor board is upgraded such that its serial number changes, the REGister LICense procedure must be repeated - but you should first clear out the /usr/lpp/adsmserv/bin/adsmserv.licenses file, else repeated "ANR9616I Invalid license record" messages will occur. See: License...; REGister LICense adsmserv.lock The ADSM server lock file. It both carries information about the currently running server, and serves as a lock point to prevent a second instance from running. Sample contents: "dsmserv process ID 19046 started Tue Sep 1 06:46:25 1998". See also: dsmserv.lock ADSTAR An acronym: ADvanced STorage And Retrieval. In the 1992 time period, IBM under John Akers tried spinning off subsidiary companies to handle the various facets of IBM business. ADSTAR was the advanced storage company, whose principal product was hardware, but also created some software to help utilize the hardware they made. 
Thus, ADSM was originally a software product produced by a hardware company. Lou Gerstner subsequently became CEO, thought little of the disparate sub-companies approach, and re-reorganized things such that ADSTAR was reduced to mostly a name, with its ADSM product now being developed under the software division. ADSTAR Distributed Storage Manager (ADSM) A client/server program product that provides storage management services to customers in a multivendor computer environment. Advanced Device Support license For devices such as a 3494 robotic tape library. Advanced Program-to-Program Communications (APPC) An implementation of the SNA LU6.2 protocol that allows interconnected systems to communicate and share the processing of programs. See Systems Network Architecture Logical Unit 6.2 and Common Programming Interface Communications. Discontinued as of TSM 4.2. afmigr.c Archival migration agent. See also: dfmigr.c AFS You can use the standard dsm and dsmc client commands on AFS file systems, but they cannot back up AFS Access Control Lists for directories or mount points: use dsm.afs or dsmafs, and dsmc.afs or dsmcafs to accomplish complete AFS backups by file. The file backup client is installable from the adsm.afs.client installation file, and the DFS fileset backup agent is installable from adsm.butaafs.client. You may need to purchase the Open Systems Environment Support license for AFS/DFS clients. AFS and TSM 5.x There is no AFS support in TSM 5.x, as there is none specifically in AIX 5.x (AIX 4.3.3 being the latest). This seems to derive from the change in the climate of AFS, where it has gone open-source, thus no longer a viable IBM/Transarc product. AFS backups, delete You can use 'delbuta' to delete from AFS and TSM. Or: Use 'deletedump' from the backup interface to delete the buta dumps from the AFS backup database. The only extra step you need to do is run 'delbuta -s' to synchronize the TSM server. 
Do this after each deletedump run, and you should be all set. AFS backups, reality Backing up AFS is painful no matter how you do it... Backup by volume (using the *SM replacement for butc) is fast, but can easily consume a LOT of *SM storage space because it is a full image backup every time. To do backup by file properly, you need to keep a list of mount points and have a backup server (or set of clients) that has a lot of memory so that you can use an AFS memory cache - and using a disk cache takes "forever". AFSBackupmntpnt Client System Options file option, valid only when you use dsmafs and dsmcafs. (dsmc will emit error message ANS4900S and ignore the option.) Specifies whether you want ADSM to see an AFS mount point as a mount point (Yes) or as a directory (No): Yes ADSM considers an AFS mount point to be just that: ADSM will back up only the mount point info, and not enter the directory. This is the safer of the two options, but limits what will be done. No ADSM regards an AFS mount point as a directory: ADSM will enter it and (blindly) back up all that it finds there. Note that this can be dangerous, in that use of the 'fts crmount' command is open to all users, who through intent or ignorance can mount parts or all of the local file system or a remote one, or even create "loops". All of this is to say that file-oriented backups of AFS file systems are problematic. See also: DFSBackupmntpt Age factor HSM: A value that determines the weight given to the age of a file when HSM prioritizes eligible files for migration. The age of the file in this case is the number of days since the file was last accessed. The age factor is used with the size factor to determine migration priority for a file. It is a weighting factor, not an absolute number of days since last access. Defined when adding space management to a file system, via dsmhsm GUI or dsmmigfs command. 
See also: Size factor agent.lic file As in /usr/tivoli/tsm/client/oracle/bin/ Is the TDPO client license file. Lower level servers don't have server side licensing. TSM uses that file to verify on the client side. TDPO will not run without a valid agent.lic file. Aggregate See: Aggregates; Reclamation; Stored Size. Aggregate data transfer rate Statistic at end of Backup/Archive job, reflecting transmission over the full job time, which thus includes all client "think time", file system traversal, and even time the process was out of the operating system dispatch queue. Is calculated by dividing the total number of bytes transferred by the elapsed processing time. Both Tivoli Storage Manager processing and network time are included in the aggregate transfer rate. Therefore, the aggregate transfer rate is lower than the network transfer rate. Contrast with Network data transfer rate, which can be expected to be a much higher number because of the way it is calculated. Ref: B/A Client manual glossary. Aggregate function SQL: A function, such as Sum(), Count(), Avg(), and Var(), that you can use to calculate totals. In writing expressions and in programming, you can use SQL aggregate functions to determine various statistics on sets of values. Aggregated? In ADSMv3 'Query CONtent ... Format=Detailed': Reveals whether or not the file is stored in the server in an Aggregate and, if so, the position within the aggregate, as in "11/23". If not aggregated, it will report "No". See also: Segment Number; Stored Size Aggregates Refers to the Small Files Aggregation (aka Small File Aggregation) feature in ADSMv3. During Backup and Archive operations, small files are automatically packaged into larger objects called Aggregates, to be transferred and managed as a whole, thus reducing overhead (database and tape space) and improving performance. An Aggregate is a single file stored at the server. Space-managed (HSM) files are not aggregated, which lessens HSM performance. 
The TSM API certainly supports Aggregation; but Aggregation depends upon the files in a transaction all being in the same file space. TDPs use the API, but often work with very large files, which may each be a separate file space of their own. Hence, you may not see Aggregation with TDPs. But the size of the files means that Aggregation is not an issue for performance. The size of the aggregate varies with the size of the client files and the number of bytes allowed for a single transaction, per the TXNGroupmax server option (transaction size as number of files) and the TXNBytelimit client option (transaction size as number of bytes). Too-small values can conspire to prevent aggregation - so beware using TCPNodelay in AIX. As is the case with files in general, an Aggregate will seek the storage pool in the hierarchy which has sufficient free space to accommodate the Aggregate. An aggregate that cannot fit entirely within a volume will span volumes, and if the break point is in the midst of a file, the file will span volumes. Note that in Reclamation the aggregate will be simply copied with its original size: no effort will be made to construct output aggregates of some nicer size, ostensibly because the data is being kept in a size known to be a happy one for the client, to facilitate restorals. Files which were stored on the server unaggregated (as for example, long-retention files stored under ADSMv2) will remain that way indefinitely and so consume more server space than may be realized. (You can verify with Query CONtent F=D.) Version 2 clients accessing a v3 server should use the QUIET option during Backup and Archive so that files will be aggregated even if a media mount is required. Your Stgpool MAXSize value limits the size of an Aggregate, not the size of any one file in the Aggregate. 
See also: Aggregated?; NOAGGREGATES; Segment Number Ref: Front of Quick Start manual; Technical Guide redbook; Admin Guide "How the Server Groups Files before Storing" Aggregates and reclamation As expiration deletes files from the server, vacant space can develop within aggregates. For data stored on sequential media, this vacant space is removed during reclamation processing, in a method called "reconstruction" (because it entails rebuilding an aggregate without the empty space). Aggregation, see in database SELECT * FROM CONTENTS WHERE NODE_NAME='UPPER_CASE_NAME' ... In the report: FILE_SIZE is the Physical, or Aggregate, size. The size reflects the TXNBytelimit in effect on the client at the time of the Backup or Archive. AGGREGATED is either "No" (as in the case of HSM, or files Archived or Backup'ed before ADSMv3), or the relative number of the reported file within the aggregate, like "2/16". The value reflects the TXNGroupmax server limit on the number of files in an Aggregate, plus the client TXNBytelimit limiting the size of the Aggregate. Remember that the Aggregate will shrink as reclamation recovers space from old files within the Aggregate. AIT Advanced Intelligent Tape technology, developed by Sony and introduced in 1996 to handle the capacity requirements of large, data-intensive applications. This is video-style, helical-scan technology, wherein data is written in diagonal slashes across the width of the tape. Like 8mm tape, is less reliable than linear tape technologies because AIT tightly wraps the tape around various heads and guides at much sharper angles than linear tape, and its heads are mechanically active, making for higher wear on the tape, lowering reliability. 
Memory-in-Cassette (MIC) feature puts a flash memory chip in with the tape, for remembering file positions or storing a limited amount of data: the MIC chip contains key parameters such as a tape log, search map, number of times loaded, and application info that allow flexible management of the media and its contents. The memory size was 16 MB in AIT-1; is 64 MB in AIT-3. See: http://www.aittape.com/mic.html Cleaning: The technology monitors itself and invokes a built-in Active Head Cleaner as needed; a cleaning cartridge is recommended periodically to remove dust and build-up. Tape type: Advanced Metal Evaporated (AME) Cassette size: tiny, 3.5 inch, 8mm tape. Capacity: 36 GB native; 70 GB compressed (2:1). Sony claims their AIT drives of *all* generations achieve 2.6:1 average compression ratio using Adaptive Lossless Data Compression (ALDC), which would yield 90 GB. Transfer rate: 4 MB/s without compression, 10 MB/s with compression (in the QF 3 MB/s is written). Head life: 50,000 hours Media rating: 30,000 passes. Lifetime estimated at over 30 years. Ref: www.sony.com/ait www.aittape.com/ait1.html http://www.mediabysony.com/ctsc/pdf/spec_ait3.pdf http://www.tapelibrary.com/aitmic.html http://www.aittape.com/ait-tape-backup-comparison.html http://www.tape-drives-media.co.uk/sony/about_sony_ait.htm Technology is similar to Mammoth-2. See also: MAM; SAIT AIT-2 (AIT2) Next step in AIT. Capacity: 50 GB native; 100 GB compressed (2:1). Sony claims their AIT drives of *all* generations achieve 2.6:1 average compression ratio using Adaptive Lossless Data Compression (ALDC), which would yield 130 GB. Transfer rate: 6 MB/sec max without compression; 15 MB/s with. Technology is similar to Mammoth-2. AIT-3 (AIT3) Next Sony AIT generation - still using 8mm tape and helical-scan technology. Capacity: 100 GB without compression, 260 GB with 2.6:1 compression. 
MIC: 64 MB flash memory AIX 4.2.0 Per IBMer Andy Raibeck, 1998/10/12, responding to a question as to whether the ADSMv3 clients are supported under AIX 4.2.0: "AIX 4.2.0 is not a supported ADSM platform. We would have liked to support it, but the number of problems we had trying to get ADSM to run on 4.2.0 made it impractical." AIX 5L, 32-bit client The 32-bit B/A client for both AIX 4.3.3 & AIX 5L is in the package tivoli.tsm.client.ba.aix43.32bit (API client in tivoli.tsm.client.api.aix43.32bit, image client in tivoli.tsm.client.image.aix43.32bit, etc.). Many people seem to be confused by the "aix43" part of the names, looking for non-existent *.aix51.32bit packages. AIXASYNCIO and AIXDIRECTIO notes Direct I/O only works for storage pool volumes. Further, it "works best" with storage pool files created on a JFS filesystem that is NOT large file enabled. Apparently, AIX usually implicitly disables direct I/O on I/O transactions on large file enabled JFS due to TSM's I/O patterns. To ensure use of direct I/O, you have to use non-large file enabled JFS, which limits your volumes to 2 GB each, which is very restrictive. IBM recommends: AIXDIRECTIO YES AIXASYNCIO NO Asynchronous I/O supposedly has no JFS or file size limitations, but is only used for TSM database volumes. Recovery log and storage pool volumes do not use async I/O. AIX 5.1 documentation mentions changes to the async I/O interfaces to support offsets greater than 2 GB, however, which implies that at least some versions (32-bit TSM server?) do in fact have a 2 GB file size limitation for async I/O. I was unable to get clarity on this point in the PMR I opened. ALDC Adaptive Lossless Data Compression, a compression algorithm used in Sony AIT-2. IBM's ALDC employs their proprietary version of the Lempel-Ziv compression algorithm called IBM LZ1. Ref: IBM site paper "Design considerations for the ALDC cores". 
See also: ELDC; LZ1; SLDC ALL-AUTO-LOFS Specification for client DOMain option to say that all loopback file systems (lofs) handled by automounter are to be backed up. See also: ALL-LOFS ALL-AUTO-NFS Specification for client DOMain option to say that all network file systems (nfs) handled by automounter are to be backed up. See also: ALL-NFS ALL-LOCAL The Client User Options file (dsm.opt) DOMain statement default, which may be coded explicitly, to include all local hard drives, excluding /tmp in Unix, and excluding any removable media drives, such as CD-ROM. Local drives do not include NFS-mounted file systems. In 4.1.2, its default is to include the System Object (includes Registry, event logs, comp+db, system files, Cert Serv Db, AD, frs, cluster db - which of these the system object contains depends on whether the system is Pro, a DC, etc.). If you specify a DOMAIN that is not ALL-LOCAL, and want the System Object backed up, then you need to include SYSTEMOBJECT, as in: DOMAIN C: E: SYSTEMOBJECT See also: File systems, local; /tmp ALL-LOFS Specification for client DOMain option to say that all loopback file systems (lofs), except those handled by the automounter, are to be backed up. See also: ALL-AUTO-LOFS ALL-NFS Specification for client DOMain option to say that all network file systems (nfs), except those handled by the automounter, are to be backed up. See also: ALL-AUTO-NFS Allow access to files See: dsmc SET Access Always backup ADSMv3 client GUI backup choice to back up files regardless of whether they have changed. Equivalent to command line 'dsmc Selective ...'. You should normally use "Incremental (complete)" instead, because "Always" redundantly sends to the *SM server data that it already has, thus inflating tape utilization and *SM server database space requirements. Amanda The Advanced Maryland Automatic Network Disk Archiver. 
A free backup system that allows the administrator of a LAN to set up a single master backup server to back up multiple hosts to a single large capacity tape drive. AMANDA uses native dump and/or GNU tar facilities and can back up a large number of workstations running multiple versions of Unix. Recent versions can also use SAMBA to back up Microsoft Windows 95/NT hosts. http://www.amanda.org/ (Don't expect to find a system overview of Amanda. Documentation on Amanda is very limited.) http://sourceforge.net/projects/amanda/ http://www.backupcentral.com/amanda.html AMENG American English, the server language option for US English. See also: LANGuage; USEUNICODEFilenames Amount Migrated As from 'Query STGpool Format=Detailed'. Specifies the amount of data, in MB, that has been migrated, if migration is in progress. If migration is not in progress, this value indicates the amount of data migrated during the last migration. When multiple, parallel migration processes are used for the storage pool, this value indicates the total amount of data migrated by all processes. Note that the value can be higher than reflected in the Pct Migr value if data was pouring into the storage pool as migration was occurring. See also: Pct Migr; Pct Util ANE Messages prefix for event logging. See messages manual. aobpswd Password-setting utility for the TDP for Oracle. Connects to the server specified in the dsm.opt file, to establish an encrypted password in a public file on your client system. This creates a file called TDPO. in the directory specified via the DSMO_PSWDPATH environment variable (or the current directory, if that variable is not set). Thereafter, this file must be readable to anyone running TDPO. Use aobpswd to later update the password. Note that you need to rerun aobpswd before the password expires on the server. Ref: TDP Oracle manual APA AutoPort Aggregation APARs applied to ADSM on AIX system See: PTFs applied to ADSM on AIX system API Application Programming Interface. 
Available for TSM Backup, Archive, and HSM facilities plus associated queries, providing a library such that programs may directly perform common operations. As of 4.1, available for: AS/400, Netware, OS/2, Unix, Windows ADSM location: /usr/lpp/adsm/api The API cannot be used to access files backed up or archived with the regular Backup-Archive clients. Attempting to do so will yield "ANS4245E (RC122) Format unknown" (same as ANS1245E). Nor can files stored via the API be seen by the conventional clients. Nor can different APIs see each other's files. The only general information that you can query is file spaces and management classes. In the API manual, Chapter 4, Interoperability, briefly indicates that the regular command line client can do some things with data sent to the server via the API - but not vice versa. This is highly frustrating, as one would want to use the API to gain finely controlled access to data backed up by regular clients. The "Format unknown" problem is rather similar to the issue of trying to use a regular client of a given level to gain access to data backed up by another regular client at a higher level: the lower level client cannot decipher the advanced format which the higher level client used in storing the data. Thus, interoperability in general is limited in the product. LAN-free support: The TSM API supports LAN-free, as of TSM 4.2. Note that there is no administrative API. Performance: The APIs typically do not aggregate files as do standard TSM clients. Lack of aggregation is usually not detrimental to performance with APIs, though, in that they typically deal with a small number of large files. Encryption: As of late 2003, the API does not support encryption. Ref: Using the API. API, Windows Note that the TSM API for Windows handles objects as case insensitive but case preserving. This is an anomaly resulting from the fact that SQL Server allows case-sensitive database names. 
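The "case insensitive but case preserving" behavior noted for the Windows API can be sketched in a few lines. This is illustrative Python, not TSM code; the class and method names are hypothetical: lookups succeed regardless of case, while the originally supplied spelling is retained.

```python
# Illustrative sketch of "case insensitive but case preserving":
# names match regardless of case, but the original spelling is kept.
class CasePreservingStore:
    def __init__(self):
        self._items = {}  # lowercased name -> (original name, value)

    def put(self, name, value):
        self._items[name.lower()] = (name, value)

    def get(self, name):
        # Lookup ignores case differences.
        return self._items[name.lower()][1]

    def original_name(self, name):
        # The spelling used at store time is preserved.
        return self._items[name.lower()][0]

store = CasePreservingStore()
store.put("MyDatabase", "object-1")
print(store.get("MYDATABASE"))            # object-1
print(store.original_name("mydatabase"))  # MyDatabase
```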
API config file See the info in the "Using the API" manual about configuration file options appropriate to the API. Note that the API config file is specified on the dsmInit call. API header files See: dsmapi*.h API installed? AIX: There will be a /usr/lpp/adsm/api directory. APPC Advanced Program-to-Program Communications. Discontinued as of TSM 4.2. Application client A software application that runs on a workstation or personal computer and uses the ADSM application programming interface (API) function calls to back up, archive, restore, and retrieve objects. Contrast with backup-archive client. Application Programming Interface A set of functions that application (API) clients can call to store, query, and retrieve data from ADSM storage. Arch Archive file type, in Query CONtent report. Other types: Bkup, SpMg ARCHDELete A Yes/No parameter on the 'REGister Node' and 'UPDate Node' commands to specify whether the client node can delete its own archived files from the server. Default: Yes. See also: BACKDELete Archive The process of copying files to a long-term storage device. V2 Archive only archives files: it does *not* archive directories, or symbolic links, or special files!!! Just files. (Thus, Archive is not strictly suitable for making file system images. See the V2archive option in modern clients to achieve the same operation.) File permissions are retained, including Access Control Lists (ACLs). Symbolic links are followed, to archive the file pointed to by the symlink. Directories are not archived in ADSMv2, but files in subdirectories are recorded by their full path name, and so during retrieval any needed subdirectories will be recreated, with new timestamps. In contrast, ADSMv3 *does* archive directories. Archived data belongs to the user who performed the archive. Include/Exclude is not applicable to archiving: just to backups. 
When you archive a file, you can specify whether to delete the file from your local file system after it is copied to ADSM storage or leave the original file intact. Archive copies may be accompanied by descriptive information, may imply data compression software usage, and may be retrieved by archive date, object name, or description. Windows: "System Object" data (including the Registry) is not archived. Instead, you could use MS Backup to Backup System State to local disk, then use TSM to archive this. Contrast with Retrieve. See also: dsmc Archive; dsmc Delete ARchive; FILESOnly; V2archive For a technique on archiving a large number of individual files, see entry "Archived files, delete from client". Archive, delete the archived files Use the DELetefiles option. Archive, exclude files In TSM 4.1: EXCLUDE.Archive Archive, from Windows, automatic date in Description You can effect this from the DOS command line, like: dsmc archive c:\test1\ -su=y -desc="%date% Test Archive" Archive, latest Unfortunately, there is no command line option to return the latest version of an archived file. However, for a simple filename (no wildcard characters) you can do: 'dsmc q archive ' which will return a list of all the archived files, where the latest is at the bottom, and can readily be extracted (in Unix, via the 'tail -1' command). Archive, long term, issues A classic situation that site technicians have to contend with is site management mandating the keeping of data for very long term periods, as in five to ten years or more. This may be incited by requirements such as those made by Sarbanes-Oxley. In approaching this, however, site management typically neglects to consider issues which are essential to the data's long-term viability: - Will you be able to find the media in ten years? Years are a long time in a corporate environment, where mergers and relocations and demand for space cause a lot of things to be moved around - and forgotten. 
Will the site be able to exercise inventory control over long-term data? - Will anyone know what those tapes are for in the future? The purpose of the tapes has to be clearly documented and somehow remain with the tapes - but not on the tapes. Will that doc even survive? - Will you be able to use the media then? Tapes may survive long periods (if properly stored), but the drives which created them and could read them are transient technology, with readability over multiple generations being rare. Likewise, operating systems and applications greatly evolve over time. And don't overlook the need for human knowledge to be able to make use of the data in the future. To fully assure that frozen data and media kept for years would be usable in the future, the whole environment in which they were created would essentially have to be frozen in time: computer, OS, appls, peripherals, support, user procedures. That's hardly realistic, and so the long-term viability of frozen data is just as problematic. To keep long-term data viable, it has to move with technology. This means not only copying it across evolving media technologies, but also keeping its format viable. For example: XML today, but tomorrow...what? That said, if long-term archiving (in the generic sense) is needed, it is best to proceed in as "vanilla" a manner as possible. For example, rather than create a backup of your commercial database, instead perform an unload: this will make the data reloadable into any contemporary database. Keep in mind that it is not the TSM administrator's responsibility to assure anything other than the safekeeping of stored data. It is the responsibility of the data's owners to assure that it is logically usable in the future. 
Archive, prevent client from doing See: Archiving, prohibit Archive, space used by clients (nodes), on all volumes 'Query AUDITOccupancy [NodeName(s)] [DOmain=DomainName(s)] [POoltype=ANY|PRimary|COpy]' Note: It is best to run 'AUDit LICenses' before doing 'Query AUDITOccupancy' to assure that the reported information will be current. Archive and Migration If a disk Archive storage pool fills, ADSM will start a Migration to tape to drain it; but because the pool filled and there is no more space there, the active Archive session wants to write directly to tape; but that tape is in use for Migration, so the client session has to wait. Archive archives nothing A situation wherein you invoke Archive like 'dsmc arch "/my/directory/*"' and nothing gets archived. Possible reasons: - /my/directory/ contains only subdirectories, no files; and the subdirectories had been archived in previous Archive operations. - You have EXCLUDE.ARCHIVE statements which specify the files in this directory. Archive Attribute In Windows, an advanced attribute of a file, as seen under file Properties, Advanced. It is used by lots of other backup software to indicate whether a file was already backed up, and whether it has to be backed up the next time. As of TSM 5.2, the Windows client provides a RESETARCHIVEATTRibute option for resetting the Windows archive attribute for files during a backup operation. See also: RESETARCHIVEATTRibute Archive bit See: Archive Attribute Archive copy An object or group of objects residing in an archive storage pool in ADSM storage. Archive Copy Group A policy object that contains attributes that control the generation, destination, and expiration of archived copies of files. An archive copy group is stored in a management class. 
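Output saved from the 'Query AUDITOccupancy' command above can be totaled per node with a short script. This is a hedged sketch: the two-column node/MB layout assumed here is hypothetical, for illustration only; real output varies by server level and format settings.

```python
# Hypothetical sketch: sum per-node occupancy (MB) from captured
# 'Query AUDITOccupancy' output. The node/MB column layout assumed
# here is illustrative, not the actual server report format.
def total_mb_by_node(text):
    totals = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[1].replace(",", "").isdigit():
            node, mb = parts[0], int(parts[1].replace(",", ""))
            totals[node] = totals.get(node, 0) + mb
    return totals

sample = """\
NODE1    1,234
NODE2      567
NODE1       99
"""
print(total_mb_by_node(sample))  # {'NODE1': 1333, 'NODE2': 567}
```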
Archive Copy Group, define 'DEFine COpygroup DomainName PolicySet MGmtclass Type=Archive DESTination=PoolName [RETVer=N_Days|NOLimit] [SERialization=SHRSTatic|STatic|SHRDYnamic|DYnamic]' Archive descriptions Descriptions are supplementary identifiers which assist in uniquely identifying archive files. Descriptions are stored in secondary tables, in contrast to the primary archive table entries which store archive directory and file data information. Archive directory An archive directory is defined to be unique by: node, filespace, directory/level, owner and description. See also: CLEAN ARCHDIRectories Archive drive contents Windows: dsmc archive d:\* -subdir=yes Archive fails on single file Andy Raibeck wrote in March 1999: "In the case of a SELECTIVE backup or an ARCHIVE, if one or more files can not be backed up (or archived) then the event will be failed. The rationale for this is that if you ask to selectively back up or archive one or more files, the assumption is that you want each and every one of those files to be processed. If even one file fails, then the event will have a status of failed. So the basic difference is that with incremental we expect that one or more files might not be able to be processed, so we do not flag such a case as failed. In other cases, like SELECTIVE or ARCHIVE, we expect that each file specified *must* be processed successfully, or else we flag the operation as failed." Archive files, how to See: dsmc Archive Archive operation, retry when file in use Have the CHAngingretries (q.v.) Client System Options file (dsm.sys) option specify how many retries you want. Default: 4. Archive retention grace period The number of days ADSM retains an archive copy when the server is unable to rebind the object to an appropriate management class. Defined via the ARCHRETention parameter of 'DEFine DOmain'. Archive retention grace period, query 'Query DOmain Format=Detailed', see "Archive Retention (Grace Period)". 
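The SELECTIVE/ARCHIVE versus incremental failure semantics that Andy Raibeck describes above can be condensed into a sketch (function and status names are hypothetical; the real client's event reporting is more involved than this):

```python
def event_status(operation, file_results):
    """Sketch of the event-status logic described above.

    operation: 'incremental', 'selective', or 'archive'
    file_results: one boolean per file (True = processed OK)
    Returns 'completed' or 'failed'.
    """
    any_failed = not all(file_results)
    if operation == 'incremental':
        # Incremental tolerates individual files that cannot be
        # processed (in use, vanished, etc.), so it does not fail
        # the event on their account.
        return 'completed'
    # Selective backup and archive: every named file must succeed.
    return 'failed' if any_failed else 'completed'
```

So one busy file out of a thousand fails an archive event outright, while the same situation in an incremental backup still reports as completed.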
Archive storage pool, keep separate It is best to keep your Archive storage pool separate from others (Backup, HSM) so that restorals can be done more quickly. If Archive data was in the same storage pool as Backups, there would be a lot of unrelated data for the restoral to have to skip over. Archive users SELECT DISTINCT OWNER FROM ARCHIVES [WHERE node_name='UpperCase'] SELECT NODE_NAME,OWNER,TYPE,COUNT(*) AS "Number of objects" FROM ARCHIVES WHERE NODE_NAME='____' OR NODE_NAME='____' GROUP BY NODE_NAME,OWNER,TYPE Archive users, files count SELECT OWNER,count(*) AS "Number of files" FROM ARCHIVES WHERE NODE_NAME='UPPER_CASE_NAME' GROUP BY OWNER Archive vs. Backup Archive is intended for the long-term storage of individual files on tape, while Backup is for safeguarding the contents of a file system to facilitate the later recovery of any part of it. Returning files to the file system en masse is thus the forte of Restore, whereas Retrieve brings back individual files as needed. Retention policies for Archive files are rudimentary, whereas for Backups they are much more comprehensive. See also: http://www.storsol.com/cfusion/template.cfm?page1=wp_whyaisa&page2=blank_men Archive vs. Selective Backup, differences The two are rather similar; but... The owner of a backup file is the user whose name is attached to the file, whereas the owner of an archive file is the person who performed the Archive operation. Frequency of archive is unrestricted, whereas backup can be restricted. Retention rules are simple for archive, but more involved for backup. Archive files are deleteable by the end user; Backup files cannot be selectively deleted. ADSMv2 Backup would handle directories, but Archive would not: in ADSMv3+, both Backup and Archive handle directories. Retrieval is rather different for the two: backup allows selection of old versions by date; archive distinction is by date and/or the Description associated with the files. 
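The GROUP BY counting shown in the "Archive users" SELECTs above can also be reproduced outside the server, for example over rows you have exported from the ARCHIVES table by whatever means suits you; a rough Python equivalent (column name per this document, everything else illustrative):

```python
from collections import Counter

def count_by_owner(rows):
    """Count archive objects per owner, mimicking
    SELECT OWNER, COUNT(*) FROM ARCHIVES ... GROUP BY OWNER.

    rows: iterable of dicts, each with at least an 'OWNER' key.
    """
    return Counter(row['OWNER'] for row in rows)

# Hypothetical exported rows:
rows = [
    {'OWNER': 'joe'},
    {'OWNER': 'joe'},
    {'OWNER': 'ann'},
]
```

Here count_by_owner(rows) yields joe=2, ann=1, matching what the GROUP BY would report.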
ARCHIVE_DATE Column in *SM server database ARCHIVES table. Format: YYYY-MM-DD HH:MM:SS.xxxxxx Example: SELECT * FROM ARCHIVES WHERE ARCHIVE_DATE> '1997-01-01 00:00:00.000000' AND ARCHIVE_DATE< '1998-12-31 00:00:00.000000' Archived copy A copy of a file that resides in an ADSM archive storage pool. Archived file, change retention? The retention of individual Archive files cannot be changed: you can only Retrieve and then re-Archive the file. *SM is an enterprise software package, meaning that it operates according to site policies. It prohibits users from circumventing site policies, and thus will not allow users to extend archive retentions beyond their site-defined values. The product is also architected for security and privacy, providing the server administrator no means of retrieving, inspecting, deleting, or altering the contents or attributes of individual files. In terms of retention, all that the server administrator can do is change the retention policy for the management class, which affects all files in that class. See also: Archived files, retention period, update Archived files, count SELECT COUNT(*) AS "Count" FROM ARCHIVES WHERE NODE_NAME='' Archived files: deletable by client node? Whether the client can delete archived files now stored on the server. Controlled by the ARCHDELete parameter on the 'REGister Node' and 'UPDate Node' commands. Default: Yes. Query via 'Query Node Format=Detailed'. Archived files, delete from client Via client command: 'dsmc Delete ARchive FileName(s)' (q.v.) You could first try it on a 'Query ARchive' to get comfortable. Archived files, list from client See: dsmc Query ARchive Archived files, list from server 'SHow Archives NodeName FileSpace' Archived files, list from server, by volume 'Query CONtent VolName ...' Archived files, rebinding does not occur From the TSM Admin. 
manual, chapter on Implementing Policies for Client Data, topic How Files and Directories Are Associated with a Management Class: "Archive copies are never rebound because each archive operation creates a different archive copy. Archive copies remain bound to the management class name specified when the user archived them." (Reiterated in the client B/A manual, under "Binding and Rebinding Management Classes to Files".) Beware, however, that changing the retention setting of a management class's archive copy group will cause all archive versions bound to that management class to conform to the new retention. Note that you can use an ARCHmc to specify an alternate management class for the archive operation. Archived files, report by owner As of ADSMv3 there is still no way to do this from the client. But it can be done within the server via SQL, like: SELECT OWNER,FILESPACE_NAME,TYPE, ARCHIVE_DATE FROM ARCHIVES WHERE NODE_NAME='UPPER_CASE_NAME' - AND OWNER='joe' Archived files, report by year Example: SELECT * FROM ARCHIVES WHERE YEAR(ARCHIVE_DATE)=1998 Archived files, retention period Is part of the Copy Group definition. Is defined in DEFine DOmain to provide a just-in-case default value. Note that there is one Copy Group in a Management Class for backup files, and one for archived files, so the retention period is essentially part of the Management Class. Archived files, retention period, set The retention period for archive files is set via the "RETVer" parameter of the 'DEFine COpygroup' ADSM command. Can be set for 0-9999 days, or "NOLimit". Default: 365 days. Archived files, retention period, update While you cannot change the retention for an individual file, you can change it for all files bound to a given Management Class: 'UPDate COpygroup DomainName SetName ClassName Type=Archive RETVer=N_Days|NOLimit' where RETVer specifies the retention period, and can be 0-9999 days, or "NOLimit". Default: 365 days. 
Effect: Changing RETVer causes any newly-archived files to pick up the new retention value, and previously-archived files also get the new retention value, because of their binding to the changed management class. (The TSM database Archives table contains an Archive_Date column: there is no "Expiration_Date" column, and so the archived files conform to whatever the prevailing management class retention rules are at the time. So if you extend your retention policy, it pertains to all archive files, old and new.) Archived files, retention period, query See: 'Query COpygroup ... Type=Archive' Archived files, retrieve from client Via client dsmc command: 'RETrieve [-DEscription="..."] [-FROMDate=date] [-TODate=date] [-FROMOwner=owner] [-FROMNode=node] [-PIck] [-Quiet] [-REPlace=value] [-SErvername=StanzaName] [-SUbdir=No|Yes] [-TAPEPrompt=value] OrigFileName(s) [NewFileName(s)]' Archived files don't show up Some users have encountered the unusual problem of having archived files, and knowing they should not yet have expired, but the archived files do not show up in a client query, despite the query being performed by the owning user, etc. Analysis with a Select on the Archives table revealed the cause to be directories missing from the server storage pools, which prevented hierarchically finding the files in a client -subdir query. The fix was to re-archive the missing directories. Use ARCHmc (q.v.) to help avoid problems. ARCHIVES SQL: *SM server database table containing basic information about each archived object (but not its size). Along with BACKUPS and CONTENTS, constitutes the bulk of the *SM database contents. Columns: NODE_NAME, FILESPACE_NAME, TYPE, HL_NAME, LL_NAME, OBJECT_ID, ARCHIVE_DATE, OWNER, DESCRIPTION, CLASS_NAME. Archiving, prohibit Prohibit archiving by employing one of the following: In the *SM server: - LOCK Node, which prevents all access from the client - and which may be too extreme. 
- ADSMv2: Do not define an archive Copy Group in the Management Class used by that user. This causes the following message when trying to do an archive: ANS5007W The policy set does not contain any archive copy groups. Unable to continue with archive. - ADSMv3: Code NOARCHIVE in the include-exclude file, as in: "include ?:\...\* NOARCHIVE" which prevents all archiving. - 'UPDate Node ... MAXNUMMP=0', to be in effect during the day, to prevent Backup and Archive, but allow Restore and Retrieve. In the *SM client: - Employ EXCLUDE.ARCHIVE for the subject area. For example, you want to prevent your client system users from archiving files that are in file system /fs1: EXCLUDE.ARCHIVE /fs1/.../* Attempts to archive will then get: ANS1115W File '/fs1/abc/xyz' excluded by Include/Exclude list Retrieve and Delete Archive continue to function as usual. ARCHmc (-ARCHmc) Archive option, to be specified on the 'dsmc archive' command line (only), to select a Management Class and thus override the default Management Class for the client Policy Domain. (ADSM v3.1 allowed it in dsm.opt; but that's not the intention of the option.) Default: the Management Class in the active Policy Set. See "Archive files, how to" for example. As of ADSMv3.1 mid-1999 APAR IX89638 (PTF 3.1.0.7), archived directories are not bound to the management class with the longest retention. See also: CLASS_NAME; dsmBindMC ARCHRETention Parameter of 'DEFine DOmain' to specify the retention grace period for the policy domain, to protect old versions from deletion when the respective copy group is not available. Specified as the number of days (from date of archive) to retain archive copies. ARCserve Competing product from Computer Associates, to back up Microsoft Exchange Server mailboxes. Advertises the ability to restore individual mailboxes, but what they don't tell you is that they do it in a non-Microsoft supported way: they totally circumvent the MS Exchange APIs. 
The performance is terrible and the product as a whole has given customers lots of problems. See also: Tivoli Storage Manager for Mail ARCHSYMLinkasfile Archive option as of ADSMv3 PTF 7. If you specify ARCHSYMLinkasfile=No then symbolic links will not be followed: the symlink itself will be archived. If you specify ARCHSYMLinkasfile=Yes (the default), then symbolic links will be followed in order to archive the target files. Unrelated: See also FOLlowsymbolic Ref: Installing the Clients manual ARTIC 3494: A Real-Time Interface Coprocessor. This card in the industrial computer within the 3494 manages RS-232 and RS-422 communication, as serial connections to a host and command/feedback info with the tape drives. A patch panel with eight DB-25 slots mounted vertically in the left hand side of the interior of the first frame connects to the card. AS SQL clause for assigning an alias to a report column header title, rather than letting the data name or the expression used on the column's contents be the default column title. The alias then becomes the column name in the output, and can be referred to in GROUP BY, ORDER BY, and HAVING clauses - but not in a WHERE clause. The title string should be in double quotes. Note that if the column header widths in combination exceed the width of the display window, the output will be forced into "Title: Value" format. Sample: SELECT VOLUME_NAME AS - "Scratch Vols" FROM LIBVOLUMES WHERE STATUS='Scratch' results in output like: Scratch Vols ------------------ 000049 000084 AS/400 Visit: www.as400.ibm.com ASC SQL: Ascending order, in conjunction with ORDER BY, as in: GROUP BY NODE_NAME ORDER BY NODE_NAME ASC ASC/ASCQ codes Additional Sense Codes and Additional Sense Code Qualifiers involved in I/O errors. The ASC is byte 12 of the sense bytes, and the ASCQ is byte 13 (as numbered from 0). They are reported in hex, in message ANR8302E. ASC=29 ASCQ=00 indicates a SCSI bus reset (could be a bad adapter, cable, terminator, drive, etc.). 
The drives could be causing an adapter problem which in turn causes a bus reset, or a problematic adapter could be causing the bus reset that causes the drive errors. ASC=3B ASCQ=0D is "Medium dest element full", which can mean that the tape storage slot or drive is already occupied, as when a library's inventory is awry. Perform a re-inventory. ASC=3B ASCQ=0E is "Medium source element empty", saying that there is no tape in the storage slot as there should be, meaning that the library's inventory is awry. Perform a re-inventory. See Appendix B of the Messages manual. See also: ANR8302E ASR Automated System Recovery - a restore feature of Windows XP Professional and Windows Server 2003 that provides a framework for saving and recovering the Windows XP or Windows Server 2003 operating state, in the event of a catastrophic system or hardware failure. TSM creates the files required for ASR recovery and stores them on the TSM server. In the backup, TSM will generate the ASR files in the :\adsm.sys\ASR staging directory on your local machine and store these files in the ASR file space on the TSM server. Ref: Windows B/A Client manual, Appendix F "ASR supplemental information"; Redbook "TSM BMR for Windows 2003 and XP" Msgs: ANS1468E ASSISTVCRRECovery Server option to specify whether the ADSM server will assist the 3570/3590 drive in recovering from a lost or corrupted Vital Cartridge Records (VCR) condition. If you specify Yes (the default) and if TSM detects an error during the mount processing, it locates to the end-of-data during the dismount processing to allow the drive to restore the VCR. During the tape operation, there may be some small effect on performance because the drive cannot perform a fast locate with a lost or corrupted VCR. However, there is no loss of data. 
See also: VCR ASSISTVCRRECovery, query 'Query OPTions', see "AssistVCRRecovery" Association Server-defined schedules are associated with client nodes so that the client will be contacted to run them in a client-server arrangement. See 'DEFine ASSOCiation', 'DELete ASSOCiation'. Atape Moniker for the Magstar tape driver, which supports 3590, 3570, and 3575. Download from ftp.storsys.ibm.com, in the /devdrvr/ directory. In AIX, is installed in /usr/lpp/Atape/. Sometimes, Atape will force you to re-create the TSM tape devices; and a reboot may be necessary (as in the Atape driver rewriting AIX's bosboot area): so perform such upgrades off hours. See also: IBMtape Atape header file, for programming AIX: /usr/include/sys/Atape.h Solaris: /usr/include/sys/st.h HP-UX: /usr/include/sys/atdd.h Windows: , Atape level 'lslpp -ql Atape.driver' atime See: Access time; Backup ATL Automated Tape Library: a frame containing tape storage cells and a robotic mechanism which can respond to host commands to retrieve tapes from storage cells and mount them for reading and writing. atldd Moniker for the 3494 library device driver, "AIX LAN/TTY: Automated Tape Library Device Driver", software which comes with the 3494 on floppy diskettes. Is installed in /usr/lpp/atldd/. Download from: ftp://service.boulder.ibm.com/storage/devdrvr/ See also: LMCP atldd Available? 'lsdev -C -l lmcp0' atldd level 'lslpp -ql atldd.driver' ATS IBM Advanced Technical Support. They host "Lunch and Learn" conference call seminars. ATTN messages (3590) Attention (ATTN) messages indicate error conditions that customer personnel may be able to resolve. For example, the operator can correct the ATTN ACF message with a supplemental message of Magazine not locked. Ref: 3590 Operator Guide (GA32-0330-06) Appendix B especially. 
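Tying back to the "ASC/ASCQ codes" entry above: the two values live at bytes 12 and 13 of the SCSI sense data (numbered from 0) and are reported in hex. A small illustrative sketch of that extraction (TSM itself reports these via ANR8302E; the sense buffer below is fabricated):

```python
def asc_ascq(sense):
    """Extract ASC and ASCQ from a SCSI sense buffer and format
    them in hex, the way they appear in ANR8302E.

    sense: bytes-like object; bytes are numbered from 0, so the
    ASC is sense[12] and the ASCQ is sense[13].
    """
    asc, ascq = sense[12], sense[13]
    return 'ASC=%02X ASCQ=%02X' % (asc, ascq)

# A fabricated sense buffer whose bytes 12/13 carry 0x3B/0x0E
# ("Medium source element empty" - library inventory awry):
sense = bytes(12) + bytes([0x3B, 0x0E]) + bytes(4)
```

With that buffer, asc_ascq(sense) gives "ASC=3B ASCQ=0E", which per the entry above suggests a re-inventory of the library.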
Attribute See: Volume attributes Attributes of tape drive, list AIX: 'lsattr -EHl rmt1' or 'mt -f /dev/rmt1 status' AUDit DB Undocumented (and therefore unsupported) server command in ADSMv3+, ostensibly a developer service aid, to perform an audit on-line (without taking the server down). Syntax (known): 'AUDIT DB [PARTITION=partition-name] [FIX=Yes]' e.g. 'AUDIT DB PARTITION=DISKSTORAGE' as when a volume cannot be deleted. See also: dsmserv AUDITDB AUDit LIBRary (as in verifying 3494's volumes) Creates a background process which checks that *SM's knowledge of the library's contents is consistent with the library's inventory. This is a bidirectional synchronization task, where the TSM server acquires library inventory information and may subsequently instruct the library to adjust some volume attributes to correspond with TSM volume status info. Syntax: 'AUDit LIBRary LibName [CHECKLabel=Yes|Barcode]' where the barcode check was added in the 2.1.x.10 level of the server to make barcode checking an option rather than the implicit default, due to so many customers having odd barcodes (as in those with more than 6-char serials). Also, using CHECKLabel=Barcode greatly reduces time by eliminating mounts to read the header on the tapes - which is acceptable if you run a tight ship and are confident of barcodes corresponding with internal tape labeling. Sample: 'AUDit LIBRary OURLIB'. The audit needs to be run when the library is not in use (no volumes mounted): if the library is busy, the Audit will likely hang. Runtime: Probably not long. One user with 400 tapes quotes 2-3 minutes. Tip: With a 3494 or comparable library, you may employ the 'mtlib' command to check the category codes of the tapes in the library for reasonableness, and possibly use the 'mtlib' command to adjust errant values without resorting to the disruption of an AUDit LIBRary. This audit is performed when the server is restarted (no known means of suppressing this). 
In a 349X library, AUDit LIBRary will instruct the library to restore Scratch and Private category codes to match TSM's libvolumes information. This is a particularly valuable capability for when library category codes have been wiped out by an inadvertent Teach or Reinventory operation at the library (which resets category codes to Insert). AUDit LICenses *SM server command to start a background process which audits both the data storage used by each client node and the licensing features in use on the server. This process then compares the storage utilization and other licensing factors to the license terms that have been defined to the server to determine if the current server configuration is in compliance with the license terms. There is no "Wait" capability, so use with server scripts is awkward. Syntax: 'AUDit LICenses'. Will hopefully complete with messages ANR2825I License audit process 3 completed successfully - N nodes audited ANR2811I Audit License completed - Server is in compliance with license terms. Must be done before running 'Query AUDITOccupancy' for its output to show current values. Note that the time of the audit shows up in Query AUDITOccupancy output. Msgs: ANR2812W, ANR2834W, ANR2841W See also: Auditoccupancy; AUDITSTorage; License...; Query LICense; Set LICenseauditperiod; SHow LMVARS AUDIT RECLAIM Command introduced in v3.1.1.5 to fix a bug introduced by the 3.1.0.0 code. See also: RECLAIM_ANALYSIS AUDit Volume TSM server command to audit a volume, and optionally fix inconsistencies. If a disk volume, it must be online; if a tape volume, it will be mounted (unless TSM realizes that it contains no data, as when you are trying to fix an anomaly). What this does is validate file information stored in the database with that stored on the tape. It does this by reading every byte of every file on the volume and checking control information which the server imbeds in the file when it is stored. 
The same code is used for reading and checking the file as would be used if the file were to be restored to a client. (In contrast, MOVe Data simply copies files from one volume to another. There are, however, some conditions which MOVe Data will detect which AUDit Volume will not.) If a file on the volume had previously been marked as Damaged, and Audit Volume does not detect any errors in it this time, that file's state is reset. AUDit Volume is a good way to fix niggly problems which prevent a volume from finally reaching a state of Empty when some residual data won't otherwise disappear. Syntax: 'AUDit Volume VolName [Fix=No|Yes] [SKIPPartial=No|Yes] [Quiet=No|Yes]'. "Fix=Yes" will delete unrecoverable files from a damaged volume (you will have to re-backup the files). Caution: Do not use AUDit Volume on a problem disk volume without first determining, from the operating system level, what the problem with the disk actually is. Realize that a disk electronics problem can make intact files look bad, or inconsistently make them look bad. What goes on: The database governs all, and so location of the files on the tape is necessarily controlled by the current db state. That is to say, Audit Volume positions to each next file according to db records. At that position, it expects to find the start of a file it previously recorded on the medium. If not (as when the tape had been written over), then that's a definite inconsistency, and eligible for db deletion, depending upon Fix. The Audit reads each file to verify medium readability. (The Admin Guide suggests using it for checking out volumes which have been out of circulation for some time.) Medium surface/recording problems will result in some tape drives (e.g., 3590) doggedly trying to re-read that area of the tape, which will entail considerable time. A hopeless file will be marked Damaged or otherwise handled according to the Fix rules. 
The Audit cannot repair the medium problem: you can thereafter do a Restore Volume to logically fix it. Whether the medium itself is bad is uncertain: there may indeed be a bad surface problem or creasing in the tape; but it might also be that the drive which wrote it did so without sufficient magnetic coercivity, or the coercivity of the medium was "tough", or tracking was screwy back then - in which case the tape may well be reusable. Exercise via tapeutil or the like is in order. Audit Volume has additional help these days: the CRCData Stgpool option now in TSM 5.1, which writes Cyclic Redundancy Check data as part of storing the file. This complements the tape technology's byte error correction encoding to check file integrity. Ref: TSM 5.1 Technical Guide redbook DR note: Audit Volume cannot rebuild *SM database entries from storage pool tape contents: there is no capability in the product to do that kind of thing. Msgs: ANR2333W, ANR2334W See also: dsmserv AUDITDB AUDITDB See: 'DSMSERV AUDITDB' AUDITOCC SQL: TSM database table housing the data that Query AUDITOccupancy reports. Columns: NODE_NAME, BACKUP_MB, BACKUP_COPY_MB, ARCHIVE_MB, ARCHIVE_COPY_MB, SPACEMG_MB, SPACEMG_COPY_MB, TOTAL_MB (This separately reports primary and copy storage pool numbers, in contrast to 'Query AUDITOccupancy', which reports them combined.) Be sure to run 'AUDit LICenses' before reporting from it (as is also required for 'Query AUDITOccupancy'). See also: AUDITSTorage; Query AUDITOccupancy AUDit Volume performance Will be impacted if CRC recording is in effect. AUDITSTorage TSM server option. As part of a license audit operation, the server calculates, by node, the amount of server storage used for backup, archive, and space-managed files. For servers managing large amounts of data, this calculation can take a great deal of CPU time and can stall other server activity. You can use the AUDITSTorage option to specify that storage is not to be calculated as part of a license audit. 
Note: This option was previously called NOAUDITStorage. Syntax: "AUDITSTorage Yes|No" Yes Specifies that storage is to be calculated as part of a license audit. This is the default. No Specifies that storage is not to be calculated as part of a license audit. (Expect this to impair the results from Query AUDITOccupancy) Authentication The process of checking and authorizing a user's password before allowing that user access to the ADSM server. (Password prompting does not occur if PASSWORDAccess is set to Generate.) Authentication can be turned on or off by an administrator with system privilege. See also: Password security Authentication, query 'Query STatus' Authentication, turn off 'Set AUthentication OFf' Authentication, turn on 'Set AUthentication ON' The password expiration period is established via 'Set PASSExp NDays' (Defaults to 90 days). Authorization Rule A specification that allows another user to either restore or retrieve a user's objects from ADSM storage. Authorized User In the TSM Client for Unix: any user running with a real user ID of 0 (root) or who owns the TSM executable with the owner execution permission bit set to s. Auto Fill 3494 device state for its tape drives: pre-loading is enabled, which will keep the ACL index stack filled with volumes from a specified category. See /usr/include/sys/mtlibio.h Auto Migration, manually perform for file system (HSM) 'dsmautomig [FSname]' Auto Migrate on Non-Usage (HSM) In output of 'dsmmigquery -M -D', an attribute of the management class which specifies the number of days since a file was last accessed before it is eligible for automatic migration. Defined via AUTOMIGNOnuse in management class. See: AUTOMIGNOnuse Auto-sharing See: 3590 tape drive sharing AUTOFsrename Macintosh and Windows clients option controlling the automatic renaming of pre-Unicode filespaces on the *SM server when a Unicode-enabled client is first used. The filespace is renamed by adding "_OLD" to the end of its name. 
Syntax: AUTOFsrename Prompt | Yes | No AUTOLabel Parameter of DEFine LIBRary, as of TSM 5.2, to specify whether the server attempts to automatically label tape volumes for SCSI libraries. See: DEFine LIBRary Autoloader A strictly sequential tape magazine for 3480/3490 tape drives. Contrast with Library, which is random. Automatic Cartridge Facility 3590 tape drive: a magazine which can hold 10 cartridges. Automatic migration (HSM) The process HSM uses to automatically move files from a local file system to ADSM storage based on options and settings chosen by a root user on your workstation. This process is controlled by the space monitor daemon (dsmmonitord). Is governed by the "SPACEMGTECH=AUTOmatic|SELective|NONE" operand of MGmtclass. See also: threshold migration; demand migration; dsmautomig Automatic reconciliation The process HSM uses to reconcile your file systems at regular intervals set by a root user on your workstation. This process is controlled by the space monitor daemon (dsmmonitord). See: Reconciliation; RECOncileinterval AUTOMIGNOnuse Mgmtclass parameter specifying the number of days which must elapse since the file was last accessed before it is eligible for automatic migration. Default: 0 meaning that the file is immediately available for migration. Query: 'Query MGmtclass' and look for "Auto-Migrate on Non-Use". Beware setting this value higher than one or two days: if all the files are accessed, the migration threshold may be exceeded and yet no migration can occur; hence, a thrashing situation. See also: Auto Migrate on Non-Usage AUTOMOUNT (ADSMv2 only) Client System Options file (dsm.sys) option for Sun systems only. Specifies a symbolic link to an NFS mount point monitored by an automount daemon. There is no support for automounted file systems under AIX. Availability Element of 'Query STatus', specifying whether the server is enabled or disabled; that is, it will be "Disabled" if 'DISAble SESSions' had been done, else will show "Enabled". 
(In 'Query STatus' output, look for "Availability".) Average file size: ADSMv2: In the summary statistics from an Archive or Backup operation, the average size of the files processed. Note that this value is the true average, and is not the "Total number of bytes transferred" divided by "Total number of objects backed up" because the "transferred" number is often inflated by retries and the like. See also: Total number of bytes transferred AVG SQL function to yield the average of all the rows of a given numeric column. See also: COUNT; MAX; MIN; SUM B Unit declarator signifying Bytes. Example: "Page size = 4 KB" b Unit declarator signifying bits. Example: "Transmit at 56 Kb/sec" B/A Abbreviation for Backup/Archive, as when referring to the B/A Client manual. BAC Informal acronym for the Backup/Archive Client. BAC Binary Arithmetic Compression: algorithm used in the IBM 3480 and 3490 tape system's IDRC for hardware compression of the data written to tape. See also: 3590 compression of data Back up some files once a week See IBM doc "How to backup only some files once a week": http://www.ibm.com/support/docview.wss?uid=swg21049445 Back up storage pool See: BAckup STGpool BACKDELete A Yes/No parameter on the 'REGister Node' and 'UPDate Node' commands to specify whether the client node can delete its own backup files from the server, as part of a dsmc Delete Filespace. Default: No. See also: ARCHDELete Backed-up files, list from client 'dsmc Query backup "*" -FROMDate=xxx -NODename=xxx -PASsword=xxx' Backed-up files, list from server You can do a Select on the Backups or Contents table for the filespace; but there's a lot of overhead in the query. A lower overhead method, assuming that the client data is Collocated, is to do a Query CONTent on the volume it was more recently using (Activity Log, SHow VOLUMEUSAGE). A negative COUnt value will report the most recent files first, from the end of the volume. Backed-up files count (HSM) In dsmreconcile log. 
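To illustrate the "Average file size" caveat above - that retries inflate "Total number of bytes transferred", so dividing it by the object count overstates the true average - here is a tiny hypothetical calculation (the numbers are made up):

```python
def true_vs_naive_average(file_sizes, retried_bytes):
    """Contrast the true average file size with the naive
    'bytes transferred / objects' figure, which retries inflate.

    file_sizes: actual sizes of the files processed
    retried_bytes: extra bytes resent due to retries and the like
    Returns (true_average, naive_average).
    """
    true_avg = sum(file_sizes) / len(file_sizes)
    transferred = sum(file_sizes) + retried_bytes   # inflated total
    naive_avg = transferred / len(file_sizes)
    return true_avg, naive_avg
```

For two files of 100 and 300 bytes with 200 bytes resent in retries, the true average is 200 while the naive figure is 300 - a 50% overstatement.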
Backhitch Relatively obscure term used to describe the start/stop repositioning that some tape drives have to perform after writing stops, in order to recommence writing the next burst of data adjoining the last burst. This is time-consuming and prolongs the backup of small files. Lesser tape technologies such as DLT are notorious for this. This effect is sometimes called "shoe-shining", referring to the reciprocating motion. Redbook "IBM TotalStorage Tape Selection and Differentiation Guide" notes that LTO is 5x slower than 3590H in its backhitch; and "In a non-data streaming environment, the excellent tape start/stop and backhitch properties of the 3590 class provides much better performance than LTO." See Tivoli whitepaper "IBM LTO Ultrium Performance Considerations" Ref: IBM site Technote 1111444 See also: DLT and start/stop operations; "shoe-shining"; Start-stop; Streaming Backint SAP client; uses the TSM API and performs TSM Archiving rather than Backup. Msgs prefix: BKI See also: TDP for R/3 BACKRETention Parameter of 'DEFine DOmain' to specify the retention grace period for the policy domain, to protect old versions from deletion when the respective Copy Group is not available. You should, however, have a Copy Group to formally establish your retention periods: do 'Query COpygroup' to check. Specify as the number of days (from date of deactivation) to retain backup versions that are no longer on the client's system. Backup The process of copying one or more files, directories, and ACLs to a server backup type storage pool to protect against data loss. During a Backup, the server is responsible for evaluating versions-based retention rules, to mark the oldest Inactive file as expired if the new incoming version causes the oldest Inactive version to be "pushed out" of the set. (See: "Versions-based file expiration") ADSMv2 did not back up special files: character, block, FIFO (named pipes), or sockets. 
ADSMv3 *will* back up some special files: character, block, FIFO (named pipes); but ADSMv3 will *not* back up or restore sockets (see "Sockets and Backup/Restore"). More trivially, the "." file in the highest level directory is not backed up, which is why "objects backed up" is one less than "objects inspected". Backup types: - Incremental: new or changed files; Can be one of: - full: all new and changed files are backed up, and takes care of deleted files; - partial: simply looks for files new or changed since the last backup date, so it omits files new to the client that bear old dates, and deleted files are not expired. An example of a partial incremental is -INCRBYDate. Via 'dsmc Incremental'. (Note that the file will be physically backed up again only if TSM deems the content of the file to have been changed: if only the attributes (e.g., Unix permissions) have been changed, then TSM will simply update the attributes of the object on the server.) - Selective: you select the files. Via 'dsmc Selective'. Priority: Lower than BAckup DB, higher than Restore. Full incrementals are the norm, as started by 'dsmc incremental /FSName'. Use an Include-Exclude Options File if you need to limit inclusion. Use a Virtual Mount Point to start at other than the top of a file system. Use the DOMain Client User Options File option to define default filesystems to be backed up. (Incremental backup will back up empty directories. Do 'dsmc Query Backup * -dirs -sub=yes' on the client to find the empties, or choose Directory Tree under 'dsm'.) To effect backup, TSM examines the file's attributes such as size, modification date and time (Unix mtime), ownership (Unix UID), group (Unix GID), (Unix) file permissions, ACL, special opsys markers such as NTFS file security descriptors, and compares them to those attributes of the most recent backup version of that file. (Unix atime - access time - is ignored.) 
Ref: B/A Client manual, "Backing Up and Restoring Files" chapter, "Backup: Related Topics", "What Does TSM Consider a Changed File"; and under the description of Copy Mode. This means that for normal incremental backups, TSM has to query the database for each file being backed up in order to determine whether that file is a candidate for incremental backup. This adds some overhead to the backup process. TSM tries to be generic where it can, and in Unix does not record the inode number. Thus, if a 'cp -p' or 'mv' is done such that the file is replaced (its inode number changes) but only the ctime attribute is different, then the file data will not be backed up in the next incremental backup: the TSM client will just send the new ctime value for updating in the TSM database. Backup changes the file's access timestamp (Unix stat struct st_atime): the time of last "access" or "reference", as seen via the Unix 'ls -alu ...' command. The NT client uses the FILE_FLAG_BACKUP_SEMANTICS option when a file is opened, to prevent updating the Access time. See also: Directories and Backup; -INCRBYDate; SLOWINCREMENTAL; Updating--> Contrast with Restore. For a technique on backing up a large number of individual files, see entry "Archived files, delete from client". Backup, batched transaction buffering See: TXNBytelimit Backup, delete all copies Currently the only way to purge all copies of a single file on the server is to set up a new Management Class which keeps 0 versions of the file. Run an incremental while the file is still on the local FS and specify this new MC on an Include statement for that file. Next change the Include/Exclude so the file is now excluded. The next incremental will expire the file under the new policy, which will keep 0 inactive versions of the file. 
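The attribute comparison described above can be sketched as follows. This is an illustrative approximation only, not the actual client logic: the real client also weighs ACLs, NTFS security descriptors, and the copy group's MODE setting, and the function and attribute set here are assumptions for the sketch.

```python
import os

# Illustrative sketch of incremental-backup candidacy: compare a file's
# current attributes against those recorded for its last backup version.
# st_atime (access time) is deliberately ignored, and the inode number is
# not consulted - matching the behavior described in the entry above.
def is_backup_candidate(path, last_backup):
    """last_backup: dict of attributes recorded at the previous backup."""
    st = os.stat(path)
    current = {
        "size": st.st_size,
        "mtime": st.st_mtime,
        "uid": st.st_uid,
        "gid": st.st_gid,
        "mode": st.st_mode,
    }
    return current != last_backup
```

If only ownership or permissions differ, the real client merely updates the attributes of the object on the server rather than resending the file data; this sketch does not make that distinction.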
Backup, delete part of it ADSM doesn't provide a means for server commands to delete part of a backup; but you can effect it by emplacing an Exclude for the object to be deleted: the next backup will render it obsolete in the backups. Backup, exclude files Specify "EXclude" in the Include-exclude options file entry to exclude a file or group of files from ADSM backup services. (Directories are never excluded from backups.) Backup, full (force) You can get a full backup of a file system via one of the following methods (being careful to weigh the ramifications of each approach): - In the server, do 'UPDate COpygroup ... MODE=ABSolute' in the associated Management Class, which causes files to be backed up regardless of having been modified. (You will have to do a 'VALidate POlicyset' and 'ACTivate POlicyset' to put the change into effect.) Don't forget to change back when the backup is done. - Consider GENerate BACKUPSET (q.v.), which creates a package of the file system's current Active backup files. See: Backup Set; dsmc REStore BACKUPSET; Query BACKUPSETContents - At PC client: relabel the drive and do a backup. At Unix client: mount the file system read-only at a different mount point and do a backup. - As server admin, do 'REName FIlespace' to cause the filespace to be fully repopulated in the next backup (hence a full backup): you could then rename this just-created filespace to some special name and rename the original back into place. - Do a Selective Backup; like 'dsmc s -su=y FSname' in Unix. (In the NT GUI, next to the Help button there is a pull-down menu: choose option "always backup".) - Define a variant node name which would be associated with a management class with the desired retention policy, code an alternate server stanza in the Client System Options file, and select it via the -SErvername command line option. Backup, full, periodic (weekly, etc.) Some sites have backup requirements which do not mesh with TSM's "incremental forever" philosophy. 
For example, they want to perform incrementals daily, and fulls weekly and monthly. For guidance, see article "Performing Full Client Backups with TSM" on the IBM website. Backup, last (most recent) Determine the date of last backup via: Client command: 'dsmc Query Filespace' Server commands: 'Query FIlespace [NodeName] [FilespaceName] Format=Detailed' SELECT * FROM FILESPACES WHERE - NODE_NAME='UPPER_CASE_NAME' and look at BACKUP_START, BACKUP_END Backup, management class used Shows up in 'query backup', whether via command line or GUI. Backup, more data than expected going If you perform a backup and expect, say, 5 GB of data to go and instead find much more, it's usually a symptom of retries, as in files being open and changing during the backup. Backup, OS/2 OS/2 files have an archive byte (-a or +a). Some say that if this changes, ADSM will back up such files; but others say that ADSM uses the filesize-filedate-filetime combination. Backup, prohibit See: Backups, prevent Backup, selective A function that allows users to back up objects from a client domain that are not excluded in the include-exclude list and that meet the requirement for serialization in the backup copy group of the management class assigned to each object. Performed via the 'dsmc Selective' cmd. See: Selective Backup. Backup, space used by clients (nodes) on all volumes 'Query AUDITOccupancy [NodeName(s)] [DOmain=DomainName(s)] [POoltype=ANY|PRimary|COpy]' Note: You need to run 'AUDit LICenses' before doing 'Query AUDITOccupancy' for the reported information to be current. Backup, subfile See: Adaptive Differencing; Set SUBFILE; SUBFILE* Backup, successful? Consider something like the following to report on errors, to be run via schedule: /* FILESERVER BACKUP EXCEPTIONS */ Query EVent DomainName SchedName BEGINDate=TODAY-1 ENDDate=TODAY-1 EXceptionsonly=YES Format=Detailed >> /var/log/backup-problems File will end up with message: "ANR2034E QUERY EVENT: No match found for this query." 
if no problems (no exceptions found). Backup, undo There is no way to undo standard client Incremental or Selective backups. Backup, which file systems to back up Specify a file system name via the "DOMain option" (q.v.) or specify a file system subdirectory via the "VIRTUALMountpoint" option (q.v.) and then code it like a file system in the "DOMain option" (q.v.). Backup, which files are backed up See the client manual; search the PDF (Backup criteria) for the word "modified". In the Windows client manual, see: - "Understanding which files are backed up" - "Copy mode" - "Resetarchiveattribute" (TSM does not use the Windows archive attribute to determine if a file is a candidate for incremental backup.) - And, Windows Journal-based backup. It is also the case that TSM respects the entries in Windows Registry subkey HKLM\System\CurrentControlSet\Control\ BackupRestore\FilesNotToBackup (No, this is not mentioned in the client manual; it is in the 4.2 Technical Guide redbook. File \Pagefile.sys should be in this list.) Always do 'dsmc q inclexcl' in Windows to see the realities of inclusion. Note that there is also a list of Registry keys not to be restored, in KeysNotToRestore. Unix: See the criteria listed under the description of "Copy mode" (p.128 of the 5.2 manual). Backup copies, number of Defined in Backup Copy Group. Backup Copy Group A policy object that contains attributes which control the generation, destination, and expiration of backup versions of files. A backup copy group belongs to a management class. 
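The expiration control that a backup copy group's VERExists limit implies can be sketched minimally (illustrative only; the real server also applies VERDeleted, and the day-based RETExtra and RETOnly settings, none of which this sketch models):

```python
# Illustrative sketch: when a new backup version arrives, versions beyond
# the copy group's VERExists limit are marked for expiration, oldest first.
def add_version(versions, new_version, verexists):
    """versions: list of retained versions, newest first."""
    versions = [new_version] + versions
    kept = versions[:verexists]
    expired = versions[verexists:]
    return kept, expired

kept, expired = add_version(["v3", "v2", "v1"], "v4", verexists=3)
print(kept)     # ['v4', 'v3', 'v2']
print(expired)  # ['v1'] - the oldest Inactive version is "pushed out"
```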
Backup Copy Group, define 'DEFine COpygroup DomainName PolicySet MGmtclass [Type=Backup] DESTination=Pool_Name [FREQuency=Ndays] [VERExists=N_Versions|NOLimit] [VERDeleted=N_Versions|NOLimit] [RETExtra=N_Versions|NOLimit] [RETOnly=N_Versions|NOLimit] [MODE=MODified|ABSolute] [SERialization=SHRSTatic|STatic| SHRDYnamic|DYnamic]' Backup Copy Group, update 'UPDate COpygroup DomainName PolicySet MGmtclass [Type=Backup] [DESTination=Pool_Name] [FREQuency=Ndays] [VERExists=N_Versions|NOLimit] [VERDeleted=N_Versions|NOLimit] [RETExtra=N_Versions|NOLimit] [RETOnly=N_Versions|NOLimit] [MODE=MODified|ABSolute] [SERialization=SHRSTatic|STatic| SHRDYnamic|DYnamic]' BAckup DB TSM server command to back up the TSM database to tape (backs up only used pages, not the whole physical space). This operation is essential when LOGMode Rollforward is in effect, as this is the only way that the Recovery Log is cleared. 'BAckup DB DEVclass=DevclassName [Type=Incremental| Full|DBSnapshot] [VOLumenames=VolNames| FILE:File_Name] [Scratch=Yes|No] [Wait=No|Yes]' The VOLumenames list will be used if there is at least one volume in it which is not already occupied; else TSM will use a scratch tape per the default Scratch=Yes. Note that the DevClass can be of DEVType FILE...which could allow you to have a large-capacity hard drive inside a fire-proof enclosure so as to produce a secure backup for disaster with no extra effort. DBSnapshot Specifies that you want to run a full snapshot database backup, to make a "point in time" image for possible later db restoral (in which the Recovery Log will *not* participate). The entire contents of a database are copied and a new snapshot database backup is created without interrupting the existing full and incremental backup series for the database. If roll-forward db mode is in effect, and a snapshot is performed, the recovery log keeps growing. Before doing one of these, be aware that the latest snapshot db backup cannot be deleted! 
Priority: Higher than filespace Backup, so will preempt it if conflict. The Recovery Log space represented in the backup will not be reclaimed until the backup finishes: the Pct Util does not decrease as the backup proceeds. The tape used *does* show up in a 'Query MOunts'. Note that unlike in other ADSM tape operations, the tape is immediately unloaded when the backup is complete. If using scratch volumes, beware that this function will gradually consume all your scratch volumes unless you do periodic pruning ('DELete VOLHistory'). If specifying volsers to use, they must *not* already be assigned to a DBBackup or storage pool: if they are, ADSM will instead try to use a scratch volume, unless Scratch=No. Example: 'BAckup DB DEVclass=LIBR.DEVC_3590 VOL=000050 Type=full Scratch=No' You should free old dbbackup volumes: 'DELete VOLHistory TOD=-N T=DBB' where "-N" should specify a value like -7, saying to delete any older than 7 days, meaning you keep the latest 7 days' worth for safety. It is best to schedule this deletion to occur immediately prior to doing BAckup DB: in this way you can assure that a tape will be available, even if the scratch pool was exhausted. Messages: ANR1360I when output volume opened; ANR1361I when the volume is closed; ANR4554I tracks progress; ANR4550I at completion (reports number of pages backed up). Incremental DB Backup does *not* automatically write to the last tape used in a full backup: it will write to a scratch tape instead. (And each incremental writes to a new tape.) Queries: Do either: 'Query VOLHistory Type=DBBackup' or 'Query LIBVolume' to reveal the database backup volume. (A 'Query Volume' is no help because it only reports storage pool volumes, and by their nature, database backup media are outside ADSM storage.) See: Database backup volume, pruning. By using the ADSMv3 Virtual Volumes capability, the output may be stored on another ADSM server (electronic vaulting). 
See also: DELete VOLHistory BAckup DB performance As of mid-2001, BAckup DB is still a plodding task. Data rates, even with the best disk, tape, and CPU hardware, are only 3 - 4 MB/sec, which is well below hardware speeds. Thus, the TSM database system itself is the drag on performance. BAckup DB to a scratch 3590 tape in the 3494 Perform like the following example: 'BAckup DB DEVclass=LIBR.DEVC_3590 Type=Full' BAckup DB to a specific 3590 tape in the 3494 Perform like the following example: 'BAckup DB DEVclass=LIBR.DEVC_3590 Type=Full VOLumenames=000050 Scratch=No' BAckup DEVCONFig ADSM server command to back up the device configuration information which ADSM uses in standalone recoveries. Syntax: 'BAckup DEVCONFig [Filenames=___]' (No entry is written to the Activity Log to indicate that this was performed.) See also DEVCONFig server option. Backup failure message "ANS4638E Incremental backup of 'FileSystemName' finished with 2 failure" Backup files An elemental concept in *SM relates to its database orientation: each file is unique by nodename, filespace, and filename. Together, the nodename, filespace name, and filename constitute the database key for managing the file. Backup files: deletable by client node? Controlled by the BACKDELete parameter on the 'REGister Node' and 'UPDate Node' commands. Default: No (which thus prohibits a "DELete FIlespace" operation from the client). Query via 'Query Node Format=Detailed'. Backup files, management class binding By design, you cannot have different backup versions of the same file bound to different management classes. All backup versions of a given file are bound to the same management class. Backup files, delete *SM provides no inherent method to do this, but you can achieve it by the following paradigm: 1. Update Copygroup Verexists to 1, ACTivate POlicyset, do a fresh incremental backup. This gets rid of all but the last (active) version of a file. 2. 
Update Copygroup Retainonly and Retainextra to 0; ACTivate POlicyset; EXPIre Inventory. This gets ADSM to forget about inactive files. 3. If the files are "uniquely identified by the sub-directory structure above the files" add those dirs to the exclude list. Do an Incremental Backup. The files in the excluded dirs get marked inactive. The next EXPIre Inventory should then remove them from the tapes. See also: Database, delete table entry Backup files, list from server 'Query CONtent VolName ...' Backup files, retention period Is part of the Copy Group definition. Is defined in DEFine DOmain to provide a just-in-case default value. Note that there is one Copy Group in a Management Class for backup files, and one for archived files, so the retention period is essentially part of the Management Class. Backup files, versions 'SHOW Versions NodeName FileSpace' Backup files for a node, list from server SELECT NODE_NAME, FILESPACE_NAME, - HL_NAME, LL_NAME, OWNER, STATE, - BACKUP_DATE, DEACTIVATE_DATE FROM - BACKUPS WHERE - NODE_NAME='UPPER_CASE_NAME' (Be sure that the node name is upper case.) Backup generations See "Backup version" Backup Image See: dsmc Backup Image Backup laptop computers Look into CoreData's Remoteworx for ADSM software, which detects and transmits only the byte-level data changes for each backup file, to an ADSM client PC running Windows. See www.coredata.com. Backup objects for day, query at server SELECT * FROM BACKUPS WHERE - NODE_NAME='UPPER_CASE_NAME' AND - FILESPACE_NAME='___' AND - DATE(BACKUP_DATE)='2000-01-14' Backup of HSM-managed files Use one server for HSM plus the Backup of that HSM area: this allows ADSM to effect the backup (of large files) by copying from one storage pool tape to another, without recalling the file to the host file system. 
In the typical backup of an HSM-managed file system, ADSM will back up all the files too small to be HSM-migrated (4095 bytes or less); and then any files which were in the disk level of the HSM storage pool hierarchy, in that they had not yet migrated down to the tape level; and then copy across tapes in the storage pool. If Backup gets hung up on a code defect while doing cross-tape backup, you can circumvent by doing a dsmrecall of the problem file(s). The backup will then occur from the file system copy. Be advised that cross-pool backup can sometimes require three drives, as files can span tapes. With only two drives, you can run into an "Insufficient mount points available" condition (ANR0535W, ANR0567). Backup Operation Element of report from 'Query VOLHistory' or 'DSMSERV DISPlay DBBackupvolumes' to identify the operation number for this volume within the backup series. Will be 0 for a full backup, 1 for the first incremental backup, etc. See also: Backup Series Backup operation, retry when file in use Have the CHAngingretries (q.v.) Client System Options file (dsm.sys) option specify how many retries you want. Default: 4. Backup performance Many factors can affect backup performance. Here are some things to look at: - Client system capability and load at the time of backup. - If Expiration is running on the server, performance is guaranteed to be impaired, due to the CPU and database load involved. - Use client compression judiciously. Be aware that COMPRESSAlways=No can cause the whole transaction and all the files involved within it to be processed again, without compression. This will show up in the "Objects compressed by:" backup statistics number being negative (like "-29%"). (To see how much compression is costing, compress a copy of a typical, large file that is involved in your backups, outside of TSM, performing the compression with a utility like gzip.) 
Beware that using client compression and sending that data to tape drives which also compress data can result in prolonged time at the tape drive as its algorithms struggle to find patterns in the patternless compressed data. - Using the MEMORYEFficientbackup option considerably reduces performance. - The client manual advises: "A very large include-exclude list may decrease backup performance." - A file system that does compression (e.g., NTFS) will prolong the job. - Backing up a file system which is networked to this client system rather than native to it (e.g., NFS, AFS) will naturally be relatively slow. - If you activated client tracing in the past, make sure that you did not leave it active, as its overhead will dramatically slow client performance. - File system topology: conventional directories with more than about 1000 files slow down all access, including ADSM. (You can gauge this by doing a Unix 'find' command in large file systems and appreciate just how painful it is to have too many files in one directory.) - Consider using MAXNUMMP to increase the number of drives you may simultaneously use. - Your Copy Group SERialization choice could be causing the backup of active files to be attempted multiple times. - May be waiting for mount points on the server. Do 'Query SEssion F=D'. - Examine the Backup log for things like a lot of retries on active files, and inspect the timestamp sequence for indications of problem areas in the file system. - If an Incremental backup is slow while a Selective or Incrbydate is fast, it can indicate a client with insufficient real memory or other processes consuming memory that the client needs to process an Active files list expeditiously. - If the client under-estimates the size of an object it is sending to the server, there may be performance degradation and/or the backup may fail. See IBM site TechNote 1156827. - Defragment your hard drive! You can regain a lot of performance. 
(This can also be achieved by performing a file-oriented copy of the file system to a fresh disk, which will also eliminate empty space in directories.) - If a Windows system, consider running DISKCLEAN on the filesystem. - In a PC, routine periodic executions of a disk analyzer (e.g., CHKDSK, or a more thorough commercial product) are vital to find drive problems which can impair performance. - Do your schedule log, dsmerror log, or server Activity Log show errors or contention affecting progress? - Avoid using the unqualified Exclude option to exclude a file system or directory, as Exclude is for *files*: subdirectories will still be traversed and examined for candidates. Instead, use Exclude.FS or Exclude.Dir, as appropriate. - TSM Journaling may help a lot. - The number of versions of files that you keep, per your Backup Copy Group, entails overhead: during a Backup, the server has the additional work of checking retention policies, as the arrival of a new version of a file may cause the oldest one in the storage pool to be marked for expiration. See also: DEACTIVATE_DATE - If AIX, consider using the TCPNodelay client option to send small transactions right away, before filling the TCP/IP buffer. - If running on a PC, disable anti-virus and other software which adds overhead to file access. - Backups of very large data masses, such as databases, benefit from going directly to tape, where streaming can often be faster than first going to disk, with its rotational positioning issues. And speed will be further increased by hardware data compression in the drive. - If backups first go to a disk storage pool, consider making it RAID type, to benefit from parallel striping across multiple, separate channels & disk drives. But avoid RAID 5, which is poor at sequential writing. - Make sure your server BUFPoolsize is sufficient to cache some 99% of requests (do 'q db f=d'), else server performance plummets. 
- Maximize your TXNBytelimit and TXNGroupmax definitions to make the most efficient use of network bandwidth. - Balance access of multiple clients to one server and carefully schedule server admin tasks to avoid waiting for tape mounts, migration, expirations, and the like. Migration in particular should be avoided during backups: see IBM site TechNote 1110026. - Make sure that LARGECOMmbuffers Yes is in effect in your client (the default is No, except for AIX). - The client RESOURceutilization option can be used to boost the number of sessions. - If server and client are in the same system, use Shared Memory in Unix and Named Pipes in Windows. - If client accesses server across network, examine TCP/IP tuning values and see if other unusual activity is congesting the network. - See if your client TCPWindowsize is too small - but don't increase it beyond a recommended size. (63 is good for Windows.) - Is your ethernet card in Autonegotiate mode? Shame on you! - Beware the invisible: networking administrators may have changed the "quality of service" rating - perhaps per your predecessor - so that *SM traffic has reduced priority on that network link. - If it is a large file system and the directories are reasonably balanced, consider using VIRTUALMountpoint definitions to allow backing up the file system in parallel. - A normal incremental backup on a very large file system will cause the *SM client to allocate large amounts of memory for file tables, which can cause the client system to page heavily. Make sure the system has enough real memory, and that other work running on that system at the same time is not causing contention for memory. Consider doing Incrbydate backups, which don't use file tables, or perhaps "Fast Incrementals". - Consider it time to split that file system into two or more file systems which are more manageable. - Look for misconfigured network equipment (adapters, switches, etc.). - Are you using ethernet to transfer large volumes of data? 
Consider that ethernet's standard MTU size is tiny, fine for messaging but not well suited to large volumes of data, making for a lot of processor and transmission overhead in transferring the data in numerous tiny packets. Consider the Jumbo Frame capability in some incarnations of gigabit ethernet, or a transmission technology like fibre channel, which is designed for volume data transfers. That is, ethernet's capacity does not scale in proportion to its speed increase. - If warranted, put your *SM traffic onto a private network (like a SAN does) to avoid competing with other traffic in getting your data through. - If you have multiple tape drives on one SCSI chain, consider dedicating one host adapter card to each drive in order to maximize performance. - If your computer system has only one bus, it could be constrained. (RS/6000 systems can have multiple, independent buses, which distribute I/O.) - Tape drive technologies which don't handle start-stop well (e.g., DLT) will prolong backups. See: Backhitch - Automatic tape drive cleaning and retries on a dirty drive will slow down the action. - Tapes whose media is marginal may be tough for the tape drive to write, and the drive may linger on a tape block for some time, laboring until it successfully writes it - and may not give any indication to the operating system that it had to undertake this extra effort and time. (As an example, with a watchable task: Via 'Query Process' I once observed a Backup Stgpool taking about four times as long as it should in writing a 3590 tape, the Files count repeatedly remaining constant over 20 seconds as it struggled to write modest-sized files.) - If you mix SCSI device types on a single SCSI chain, you may be limiting your fastest device to the speed of the slowest device. For example, putting a single-ended device on a SCSI chain with a differential device will cause the chain speed to drop to that of the single-ended device. 
- In Unix, use the public domain 'lsof' command to see what the client process is currently working on. - In Solaris, use the 'truss' command to see where the client is processing. - Is cyclic redundancy checking enabled for the server/client (*SM 5.1)? This entails considerable overhead. - Exchange 2000: Consider un-checking the option "Zero Out Deleted Database Pages" (requires restarting the Exchange Services). See IBM article ID# 1144592 titled "Data Protection for Exchange On-line Backup Performance is Slow" and Microsoft KB 815068. - A Windows TSM server may be I/O impaired due to its SCSI or Fibre Channel block size. See IBM site Technote 1167281. If none of the above pan out, consider rerunning the problem backup with client tracing active. See CLIENT TRACING near the bottom of this document. See also: Backup taking too long; Client performance factors Backup performance with 3590 tapes Writing directly to 3590 tapes, rather than having an intermediate disk, is 3X-4X faster: 3590s stream the data where disks can't. Ref: ADSM Version 2 Release 1.5 Performance Evaluation Report. BACKup REgistry During Incremental backup of a Windows system, the Registry area is backed up. However, in cases where you want to back up the Registry alone, you can do so with the BACKup REgistry command. The command backs up Registry hives listed in Registry key HKEY_LOCAL_MACHINE\System\ CurrentControlSet\Control\Hivelist Syntax: BACKup REgistry Note that in current clients there are no operands, to guarantee system consistency. Earlier clients had modifying parameters: BACKup REgistry ENTIRE Backs up both the Machine and User hives. BACKup REgistry MACHINE Backs up the Machine root key hives (registry subkeys). BACKup REgistry USER Backs up User root key hives (registry subkeys). 
See also: BACKUPRegistry Backup Required Before Migration In output of 'dsmmigquery -M -D', an (HSM) attribute of the management class which determines whether it is necessary for a backup copy (Backup/Restore) of the file to exist before it can be migrated by HSM. Defined via MIGREQUIRESBkup in the management class. See: MIGREQUIRESBkup Backup retention grace period The number of days ADSM retains a backup version when the server is unable to rebind the object to an appropriate management class. Defined via the BACKRETention parameter of 'DEFine DOmain'. Backup retention grace period, query 'Query DOmain Format=Detailed', see "Backup Retention (Grace Period)". Backup Series Element of report from 'Query VOLHistory' or 'DSMSERV DISPlay DBBackupvolumes' to identify the TSM database backup series of which the volume is a part. Each backup series consists of a full backup and all incremental backups that apply to that full backup, up to the next full backup of the TSM database. Note: After a DSMSERV LOADDB, the Backup Series number will revert to 1. When doing DELete VOLHistory, be sure to delete the whole series at once, to avoid the ANR8448E problem. See also: BAckup VOLHistory Backup sessions, multiple See: RESOURceutilization Backup Set TSM 3.7+ facility to create a collection of a client node's current Active backup files as a single point-in-time amalgam (snapshot) on sequential media, to be stored and managed as a single object in a format tailored to and restorable on the client system whose data is therein represented. The GENerate BACKUPSET server command is used to create the set, intended to be written to sequential media, typically of a type which can be read either on the server or client such that the client can perform a 'dsmc REStore BACKUPSET' either through the TSM server or by directly reading the media from the client node. The media is often something like a CD-ROM, JAZ, or ZIP. Note that you cannot write more than one Backup Set to a given volume. 
If this is a concern, look into server-to-server virtual volumes. (See: Virtual Volumes) Also known by the misleading name "Instant Archive". Note that the retention period can be specified when the backup set is created: it is not governed by a management class. Also termed "LAN-free Restore". The consolidated, contiguous nature of the set speeds restoral. ("Speeds" may be an exaggeration: while Backup Sets are generated via TSM db lookups, they are restored via lookups in the sequential media in which the Backup Set is contained, which can be slow.) Backup Sets are frozen, point-in-time snapshots: they are in no way incremental, and nothing can be added to one. But there are several downsides to this approach: The first is that it is expensive to create the Backup Set, in terms of time, media, and mounts. Second, the set is really "outside" of the normal TSM paradigm, further evidenced by the awkwardness of later trying to determine the contents of the set, given that its inventory is not tracked in the TSM database (which would represent too much overhead). You will not see a directory structure for a backupset. Note that you can create the Backup Set on the server as devtype File and then FTP the result to the client, as perhaps to burn a CD - but be sure to perform the FTP in binary mode! Backup Sets are not a DR substitute for copy storage pools in that Backup Sets hold only Active files, whereas copy storage pools hold all files, Active and Inactive. There is no support in the TSM API for the backup set format. Further, Backup Sets are unsuitable for API-stored objects (TDP backups, etc.) in that the client APIs are not programmed to later deal with Backup Sets, and so cannot perform client-based restores with them. Likewise, the standard Backup/Archive clients do not handle API-generated data. 
See: Backup Set; GENerate BACKUPSET; dsmc Query BACKUPSET; dsmc REStore BACKUPSET; Query BACKUPSET; Query BACKUPSETContents Ref: TSM 3.7 Technical Guide redbook Backup Set, amount of data Normal Backup Set queries report the number of files, but not the amount of data. You can determine the latter by realizing that a Backup Set consists of all the Active files in a file system, and that is equivalent to the file system size and percent utilized as recorded at last backup, reportable via Query FIlespace. Backup Set, list contents Client: 'Query BACKUPSET' Server: 'Query BACKUPSETContents' See also: dsmc Query BACKUPSET Backup set, on CD In writing Backup Sets to CDs you need to account for the amount of data exceeding the capacity of a CD... Define a devclass of type FILE and set the MAXCAPacity to under the size of the CD capacity. This will cause the data to span TSM volumes (FILEs), resulting in each volume being on a separate CD. Be mindful of the requirement: The label on the media must meet the following restrictions: - No more than 11 characters - Same name for file name and volume label. This might not be a problem for local backupset restores but is mandatory for server backupsets over a devclass with type REMOVABLEFILE. The creation utility DirectCD creates a random CD volume label beginning with the creation date, which will not match the TSM volume label. Ref: Admin Ref; Admin Guide "Generating Client Backup Sets on the Server" & "Configuring Removable File Devices" Backup set, remove from Volhistory A backup set which expires through normal retention processing may leave the volume in the volhistory. There is an undocumented form of DELete VOLHistory to get it out of there: 'DELete VOLHistory TODate=TODAY [TOTime=hh:mm:ss] TYPE=BACKUPSET VOLume=______ [FORCE=YES]' Note that VOLume may be case-sensitive. Backup Set and CLI vs. GUI In the beginning (early 2001), only the CLI could deal with Backup Sets. The GUI was later given that capability.
However: The GUI can be used only to restore an entire backup set. The CLI is more flexible, and can be used to restore an entire backup set or individual files within a backup set. Backup Set and TDP The TDPs do not support backup sets - because they use the TSM client API, which does not support Backup Sets. Backup Set and the client API The TSM client API does not support Backup Sets. Backup Set restoral performance Some specific considerations: - A Backup Set may contain multiple filespaces, and so getting to the data you want within the composite may take time. (Watch out: If you specify a destination other than the original location, data from all file spaces is restored to the location you specify.) - There is no table of contents for backup sets: The entire tape or set has to be read for each restore or query - which explains why a Query BACKUPSETContents is about as time-consuming as an actual restoral. See also "Restoral performance", as general considerations apply. Backup Set volumes not checked in SELECT COUNT(VOLUME_NAME) FROM VOLHISTORY WHERE TYPE='BACKUPSET' AND VOLUME_NAME NOT IN (SELECT VOLUME_NAME FROM LIBVOLUMES) Backup Sets, report SELECT VOLUME_NAME FROM VOLHISTORY WHERE TYPE='BACKUPSET' Backup Sets, report number SELECT COUNT(VOLUME_NAME) FROM VOLHISTORY WHERE TYPE='BACKUPSET' Backup skips some PC disks (skipping) Possible causes: - Options file updated to add disk, but scheduler process not restarted. - Drive improperly labeled. - Drive was relabeled since PC reboot or since ADSM client was started. - The permissions on the drive are wrong. - Drive attributes differ from those of drives which *will* back up. - Give ADSM full control to the root on each drive (may have been run by SYSTEM account, lacking root access). - Msgmode is QUIET instead of VERBOSE, so you see no messages if nothing goes wrong. - ADSM client code may be defective such that it fails if the disk label is in mixed case, rather than all upper or lower.
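Tying together the "Backup Set volumes not checked in" Select above: if such a volume is needed back in the library for a server-mediated restore, it can be checked in as a private volume. A sketch only - the library name LIBNAME and volume name VOLNAME are placeholders, and syntax should be verified against your level's Administrator's Reference:

```
SELECT VOLUME_NAME FROM VOLHISTORY WHERE TYPE='BACKUPSET' AND VOLUME_NAME NOT IN (SELECT VOLUME_NAME FROM LIBVOLUMES)
CHECKIn LIBVolume LIBNAME VOLNAME STATus=PRIvate CHECKLabel=Yes
```

The Select identifies backup set volumes known to the volume history but absent from the library inventory; the CHECKIn then makes a chosen volume mountable again.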
Backup skips some Unix files An obvious cause for this occurring is that the file matches an Exclude. Another cause: The Unix client manual advises that skipping can occur when the LANG environment variable is set to C, POSIX (limiting the valid characters to those with ASCII codes less than 128), or other values with limitations for valid characters, and the file name contains characters with ASCII codes higher than 127. Backup "stalled" Many ADSM customers complain that their client backup is "stalled". In fact, it is almost always the case that it is processing, simply taking longer than the person thinks. In traditional incremental backups, the client must get from the server a list of all files that it has for the filespace, and then run through its file system, comparing each file against that list to see if it warrants backup. That entails considerable server database work, network traffic, client CPU time, and client I/O...which is aggravated by overpopulated directories. Summary advice: give it time. BAckup STGpool *SM server operation to create a backup copy of a storage pool in a Copy Storage Pool (by definition on serial medium, i.e., tape). Syntax: 'BAckup STGpool PrimaryPoolName CopyPoolName [MAXPRocess=N] [Preview=No|Yes|VOLumesonly] [Wait=No|Yes]' Note that storage pool backups are incremental in nature so you only produce copies of files that have not already been copied. (It is incremental in the sense of adding new objects to the backup storage pool. It is not exactly like a client incremental backup operation: BAckup STGpool itself does not cause objects to be identified as deletable from the *SM database. It is Expire Inventory that rids the backup storage pool of obsolete objects.) Order of backup: most recent data first, then work back in time. BAckup STGpool copies data: it does not examine the data for issues...you need to use AUDit Volume for that, optionally using CRC data. 
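As a worked example of the BAckup STGpool syntax above - pool names DISKPOOL, TAPEPOOL, and COPYPOOL are illustrative, not defaults:

```
BAckup STGpool DISKPOOL COPYPOOL MAXPRocess=2 Wait=No
BAckup STGpool TAPEPOOL COPYPOOL Preview=Yes
```

The first form incrementally copies the disk pool's as-yet-uncopied files to the copy storage pool using two processes; the second merely reports the files, bytes, and primary volumes that a backup of the tape pool would involve.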
Only one backup may be started per storage pool: attempting to start a second results in error message "Backup already active for pool ___". MAXPRocess: Specify only as many as you will have available mount points or drives to service them (DEVclass MOUNTLimit, less any drives already in use or unavailable (Query DRive)). Each process will select a node and copy all the files for that node. Processes that finish early will quit. The last surviving process should be expected to go on to other nodes' data in the storage pool. If you don't actually get that many processes, it could be due to the number of mount points or there being too few nodes represented in the stgpool data. Elapsed time cannot be less than the time to process the largest client data set. Beware using all the tape drives: migration is a lower priority process and thus can be stuck for hours waiting for BAckup STGpool to end, which can result in irate Archive users. MAXPRocess and preemption: If you invoked BAckup STGpool to use all drives and a scheduled Backup DB started, the Backup DB process would pre-empt one of the BAckup STGpool processes to gain access to a drive (msg ANR1440I): the other BAckup STGpool processes continue unaffected. (TSM will not reinitiate the terminated process after the preempting process has completed.) Preview: Reveals the number of files and bytes to be backed up and a list of the primary storage pool volumes that would be mounted. You cannot back up a storage pool on one computer architecture and restore it on another: use Export/Import. If a client is introducing files to a primary storage pool while that pool is being backed up to a copy storage pool, the new files may get copied to the copy storage pool, depending upon the progress that the BAckup STGpool has made. Preemption: BAckup STGpool will wait until needed tape drives are available: it does not preempt Backups or HSM Recalls or even Reclamation.
By using the ADSMv3 Virtual Volumes capability, the output may be stored on another ADSM server (electronic vaulting - as archive type files). Msgs: ANR1212I, ANR0986I (reports process, number of files, and bytes), ANR1214I (reports storage pool name, number of files, and bytes), ANR1221E (if insufficient space in copy storage pool) See also: Aggregates BAckup STGpool, estimate requirements Use the Preview option. BAckup STGpool, how to stop If you need to stop the backup prematurely, you can do one of: - CANcel PRocess on each of its processes. But: you need to know the process numbers, and so can't, for example, make the stop an administrative schedule. - UPDate STGpool ... ACCess=READOnly This will conveniently cause all the backup processes to stop after they have finished with the file they are currently working on. In the Activity Log you will find message ANR1221E, saying that the process terminated because of insufficient space. (Updating the storage pool back to READWrite before a process stops will prevent the process from stopping: it has to transition to the next file for it to see the READOnly status.) BAckup STGpool, minimize time To minimize the time for the operation: - Perform the operation when nothing else is going on in ADSM; - Maximize your TSM database Cache Hit Pct. (standard tuning); - Maximize the 'BAckup STGpool' MAXPRocess number to: The lesser of the number of tape drives or nodes available when backing up disk pools (which needs tape drives only for the outputs); The lesser of either half the number of tape drives or the number of nodes when backing up tape pools (which needs tape drives for both input and output). - If you have an odd number of tape drives during a tape pool backup, one drive will likely end up with a tape lingering in it after stgpool backup is done with that tape, and ADSM's rotational re-use of the drive will have to wait for a dismount.
So for the duration of the storage pool backup, consider having your DEVclass MOUNTRetention value set to 1 to assure that the drive is ready for the next mount. - If you have plenty of tapes, consider marking previous stgpool backup tapes read-only such that ADSM will always perform the backup to an empty tape and so not have to take time to change tapes when it fills last night's. BAckup STGpool, order within hierarchy When performing a Backup Stgpool on a storage pool hierarchy, it should be done from the top of the hierarchy to the bottom: you should not skip around (as for example doing the third level, then the first level, then the second). Remember that files migrate downward in the hierarchy, not upward. If you do the Backup Stgpool in the same downward order, you will guarantee not missing files which may have migrated in between storage pool backups. BAckup STGpool taking too long Can be due to tapes whose media is marginal, tough for the input tape drive to read or the output tape drive to write, causing lingering on a tape block for some time, laboring until it successfully completes the I/O - and may not give any indication to the operating system that it had to undertake this extra effort and time. To analyze: Observe via 'Query Process', ostensibly seeing the Files count repeatedly remaining constant as a file of just modest size is copied. But is it the input or output volume? To determine, do 'UPDate Volume ______ ACCess=READOnly' on the output volume: this will cause the BAckup STGpool to switch to a new output volume. If subsequent copying suffers no delay, then the output tape was the problem; else it was probably the input volume that was troublesome. While the operation proceeds, return the prior output volume to READWrite state, which will tend to cause it to be used for output when the current output volume fills, at which time a different input volume is likely. If copying becomes sluggish again, then certainly that volume is the problem.
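The MAXPRocess sizing advice under "BAckup STGpool, minimize time" above amounts to a small calculation. An illustrative sketch - the function name and the floor-of-one behavior are mine, not anything TSM provides:

```python
def suggested_maxprocess(tape_drives, nodes, source_on_tape):
    """Suggest a BAckup STGpool MAXPRocess value.

    Disk pool backup: tape drives are needed only for output,
    so take the lesser of the drive count or the node count.
    Tape pool backup: each process needs an input drive and an
    output drive, so take the lesser of half the drives or the
    node count.
    """
    if source_on_tape:
        candidate = min(tape_drives // 2, nodes)
    else:
        candidate = min(tape_drives, nodes)
    return max(1, candidate)  # always allow at least one process

print(suggested_maxprocess(4, 10, source_on_tape=False))  # 4
print(suggested_maxprocess(4, 10, source_on_tape=True))   # 2
```

Remember that each process copies one node's files at a time, so elapsed time is still bounded by the largest single node's data.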
BAckup STGPOOLHierarchy There is no such command - but there should be: The point of a storage pool hierarchy is that if a file object is in any storage pool within the hierarchy, that is "there". In concert with this concept, there should be a command which generally backs up the hierarchy to backup storage. The existing command, BAckup STGpool, is antithetical, in that it addresses a physical subset of the whole, logical hierarchy: it is both a nuisance to have to invoke against each primary storage pool in turn, and problematic in that a file which moves in the hierarchy might be missed by the piecemeal backup. Backup storage pool See also: Copy Storage Pool Backup storage pool, disk? (disk buffer for Backup) Beware using a disk as the 1st level of a backup storage pool hierarchy. TSM storage hierarchy rules specify that if a given file is too big to fit into the (remaining) space of a storage pool, it should instead go directly down to the next level (presumably, tape). What can happen is that the disk storage pool can get full because migration cannot occur fast enough, and the backup will instead try to go directly to tape, which can result in the client session getting hung up on a Media Wait (MediaW status). Mitigation: Use MAXSize on the disk storage pool, to keep large files from using it up quickly. However, many clients back up large files routinely, so you end up with the old situation of clients waiting for tape drives. Another problem with using this kind of disk buffering for Backups is that the migration generates locks which interfere with Backup, worse on a multiprocessor system. If TSM is able to migrate at all, it will be thrashing trying to keep up, continually re-examining the storage pool contents to fulfill its migration rules of largest file sizes and nodes.
Lastly, you have to be concerned that your backup data may not all be on tape: being on disk, it represents an incomplete tape data set, and jeopardizes recoverability of that filespace, should the disk go bad. See also: Backup through disk storage pool Backup success message "Successful incremental backup of 'FileSystemName'", which has no message number. Backup successful? You can check the 11th field of the dsmaccnt.log. BACKup SYSTEMObject See: dsmc BACKup SYSTEMObject Backup table See: BACKUPS Backup taking too long (seems like it "hangs"; hung, freezes, sluggish, slow) Sometimes it may seem that the backup client is hung, but almost always it is active. To determine why it's taking as long as it is, you need to take a close look at the system and see if it or TSM is really hung, or simply slow or blocked. Examination of the evolutionary context of the client might show that the number of files on it has been steadily increasing, and so the number in TSM storage, and thus an increasingly burdensome inventory obtained from the server during a dsmc Incremental. The amount of available CPU power and memory at the time are principal factors: it may be that the system's load has evolved whereas its real memory has not, and it needs more. Use your opsys monitoring tools to determine if the TSM client is actually busy in terms of CPU time and I/O in examination of the file system: the backup may simply still be looking for new files to send to server storage. The monitor should show I/O and CPU activity proceeding. In the client log, look for the backup lingering in a particular area of the file system, which can indicate a bad file or disk area, where a chkdsk or the like may uncover a problem. You could also try a comparative INCRBYDate type backup and see if that does better, which would indicate difficulty dealing with the size of the inventory. TSM Journaling may also be an option. Consider doing client tracing to identify where the time is concentrated.
(See "CLIENT TRACING" section at bottom of this document.) If not hung, then one or more of the many performance affectors may be at play. See: Backup performance Backup through disk storage pool (disk buffer) It is traditional to back up directly to tape, but you can do it through a storage pool hierarchy with a disk storage pool ahead of tape. Advantages: - Immediacy: no waiting for tape mount. - No queueing for limited tape drives when collocation is in effect. - 'BAckup STGpool' can be faster, to the extent that the backup data is still on disk, as opposed to a tape-to-tape operation. Disadvantages: - ADSM server is busier, having to move the data first to disk, then to tape (with corresponding database updates). - There can still be some delays for tape mounts, as migration works to drain the disk storage pool. - Backup data tends to be on disk and tape, rather than all on tape. (This can be mitigated by setting migration levels to 0% low and 0% high to force all the data to tape.) - A considerable amount of disk space is dedicated to a transient operation. - With some tape drive technology you may get better throughput by going directly to tape because the streaming speed of some tape technology is by nature faster than disk. With better tape technology, the tape is always positioned, ready for writing, whereas the rotating disk has to wait for its spot to come around again. And, the compression in tape drive hardware can result in the effective write speed exceeding even the streaming rate spec. - If the disk pool fills, incoming clients will go into media wait and will remain tape-destined even if the disk pool empties. - In *SM database restoral, part of that procedure is to audit any disk storage pool volumes; so a good-sized backup storage pool on disk will add to that time. See also: Backup storage pool, disk? Backup version An object, directory, or file space that a user has backed up that resides in a backup storage pool in ADSM storage.
The most recent is the "active" version; older ones are "inactive" versions. Versions are controlled in the Backup Copy Group definition (see 'DEFine COpygroup'). "VERExists" limits the number of versions, with the excess being deleted - regardless of the RETExtra which would otherwise keep them around. "VERDeleted" limits versions kept of deleted files. "RETExtra" is the retention period, in days, for all but the latest backup version. "RETOnly" is the retention period, in days, for the sole remaining backup version of a file deleted from the client file system. Note that individual backups cannot be deleted from either the client or server. See Active Version and Inactive Version. Backup version, make unrecoverable First, optionally, move the file on the client system to another directory. 2nd, in the original directory replace the file with a small stub of junk. 3rd, do a selective backup of the stub as many times as you have 'versions' set in the management class. This will make any backups of the real file unrestorable. 4th, change the options to stop backing up the real file. There is a way to "trick" ADSM into deleting the backups: Code an EXCLUDE statement for the file, then perform an incremental backup. This will cause existing backup versions to be flagged for deletion. Next, run EXPIre Inventory, and voila! The versions will be deleted. Backup via Schedule, on NT Running backups on NT systems through "NT services" can be problematic: If you choose Logon As and assign it an ADMIN ID with all the necessary privileges you can think of, it still may not work. Instead, double-click on the ADSM scheduler and click on the button to run the service as the local System Account. BAckup VOLHistory ADSM server command to back up the volume history data to an opsys file. Syntax: 'BAckup VOLHistory [Filenames=___]' (No entry is written to the Activity Log to indicate that this was performed.) 
Note that you need not explicitly execute this command if the VOLumeHistory option is coded in the server options file, in that the option causes ADSM to automatically back up the volume history whenever it does something like a database backup. However, ADSM does not automatically back up the volume history if a 'DELete VOLHistory' is performed, so you may want to manually invoke the backup then. See also: Backup Series; VOLUMEHistory Backup MB, over last 24 hours SELECT SUM(BYTES)/1000/1000 AS "MB_per_day" FROM SUMMARY WHERE ACTIVITY='BACKUP' AND (CURRENT_TIMESTAMP-END_TIME)HOURS <= 24 HOURS Backup vs. Archive, differences See "Archive vs. Selective Backup". Backup vs. Migration, priorities Backups have priority over migration. Backup without expiration Use INCRBYDate (q.v.). Backup without rebinding In AIX, accomplish by remounting the file system on a special mount point name; or, on a PC, change the volume name/label of the hard drive. Then back up with a different, special management class. This will cause a full backup and create a new filespace name. Another approach would be to do the rename on the other end: rename the ADSM filespace and then back up with the usual management class, which will cause a full backup to occur and regenerate the former filespace afresh. Backup won't happen See: Backup skips some PC disks BACKUP_DIR Part of Tivoli Data Protection for Oracle. Should be listed in your tdpo.opt file. It specifies the client directory which will be used for storing the files on your server. If you list the filespaces created for that node on the server after a successful backup, you will see one filespace with the same name as your BACKUP_DIR. Backup-archive client A program that runs on a file server, PC, or workstation that provides a means for ADSM users to back up, archive, restore, and retrieve objects. Contrast with application client and administrative client.
BackupDomainList The title under which DOMain-named file systems appear in the output of the client command 'Query Options'. BackupExec Veritas Backup Exec product. A dubious aspect is the handling of open files, per a selectable option: it copies a 'stub' to tape, allowing for it to skip the file. Apparently, most of the time when you restore the file, it's either a null file or a partial copy of the original, either way being useless. http://www.BackupExec.com/ BACKUPFULL In 'Query VOLHistory' or 'DSMSERV DISPlay DBBackupvolumes' or VOLHISTORY database TYPE output, this is the Volume Type to say that volume was used for a full backup of the database. BACKUPINCR In 'Query VOLHistory' or VOLHISTORY database TYPE output, this is the Volume Type to say that volume was used for an incremental backup of the database. BACKUPRegistry Option for NT systems only, to specify whether ADSM should back up the NT Registry during incremental backups. Specify: Yes or No Default: Yes The Registry backup works by using an NT API function to write the contents of the Registry into the adsm.sys directory. (The documentation has erroneously been suggesting that the system32\config Registry area should be Excluded from the backup: it should not). The files written have the same layout as the native registry files in \winnt\system32\config. You can back up just the Registry with the BACKup Registry command. In Windows 2000 and beyond, you can use the DOMain option to control the backup of system objects. Ref: redbook "Windows NT Backup and Recovery with ADSM" (SG24-2231): topic 4.1.2.1 Registry Backup BACKUPS SQL: TSM database table containing info about all active and inactive files backed up. Along with ARCHIVES and CONTENTS, constitutes the bulk of the *SM database contents. Columns: NODE_NAME, FILESPACE_NAME, STATE (active, inactive), TYPE, HL_NAME, LL_NAME, OBJECT_ID, BACKUP_DATE, DEACTIVATE_DATE, OWNER, CLASS_NAME. 
Notes: Does not contain information about file sizes or the volumes which the objects are on (see the Contents table). In a Select, you can do CONCAT(HL_NAME, LL_NAME) to stick those two components together, to make the output more familiar; or concatenate the whole path by doing: SELECT FILESPACE_NAME || HL_NAME || LL_NAME FROM BACKUPS. See: DEACTIVATE_DATE; OWNER; STATE; TYPE Backups, count of bytes received Use the Summary table, available in TSM 3.7+, like: SELECT SUM(BYTES) AS Sum_Bytes FROM ADSM.SUMMARY WHERE (DATE(END_TIME) = CURRENT DATE - 1 DAYS AND TIME(END_TIME) >= '20.00.00') OR (DATE(END_TIME) = CURRENT DATE) AND ACTIVITY = 'BACKUP' See also: Summary table Backups, parallelize Going to a disk pool first is one way; then the data migrates to tape. To go directly to tape: You may need to define your STGpool with COLlocation=FILespace to achieve such results; else *SM will try to fill one tape at a time, making all other processes wait for access to the tape. Further subdivision is afforded via VIRTUALMountpoint. (Subdivide and conquer.) That may not be a good solution where what you are backing up is not a file system, but a commercial database backup via agent, or a buta backup, where each backup creates a separate filespace. In such situations you can use the approach of separate management classes, so as to have separate storage pools, but still using the same library and tape pool. If you have COLlocation=Yes (node) and need to force parallelization during a backup session, you can momentarily toggle the single, current output tape from READWrite to READOnly to incite *SM to have multiple output tapes.
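If you capture the output of a Select such as the one under "Backups, count of bytes received" via the administrative client in comma-delimited mode, a small script can post-process it. A sketch only - the sample lines are purely illustrative of data rows, and the digit test is a crude way of skipping the banner text that real dsmadmc output carries:

```python
import csv
import io

def sum_bytes(report_text):
    """Sum the first column (BYTES) of comma-delimited Select output,
    ignoring any non-numeric banner or header lines."""
    total = 0
    for row in csv.reader(io.StringIO(report_text)):
        if row and row[0].strip().isdigit():
            total += int(row[0].strip())
    return total

sample = "IBM Tivoli Storage Manager\n123456789\n987654321\n"
print(sum_bytes(sample) / 1e6)  # megabytes
```

The same pattern serves for any numeric column you extract from the SUMMARY, VOLHISTORY, or BACKUPS tables.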
Backups, prevent There are times when you want to prevent backups from occurring, as when a restoral is running and fresh backups of the same file system would create version confusion in the restoral process, or where client nodes tend to inappropriately use the TSM client during the day, as in kicking off Backups at times when drives are needed for other scheduled tasks. You can prevent backups in several ways: In the *SM server: - LOCK Node, which prevents all access from the client - and which may be too extreme. - 'UPDate Node ... MAXNUMMP=0', to be in effect during the day, to prevent Backup and Archive, but allow Restore and Retrieve. In the *SM client: - In the Include-Exclude list, code EXCLUDE.FS for each file system. In general: - If the backups are performed via client schedule: Unfortunately, client schedules lack the ACTIVE= keyword such that we can render them inactive. Instead, you can do a temporary DELete ASSOCiation to divorce the node from the backup schedule. - If the backups are being performed independently by the client: Do DISAble SESSions after the restoral starts, to allow it to proceed but prevent further client sessions. Or you might do UPDate STGpool ... ACCess=READOnly, which would certainly prevent backups from proceeding. See also: "Restorals, prevent" for another approach Backups go directly to tape, not disk Some shops have their backups first go as intended to a disk storage pool, with migration to tape. But they may find backups going directly to tape. Possible causes: - The file exceeds the STGpool MAXSize. - The file exceeds the physical storage pool size. - The backup occurred choosing a management class which goes to tape. - Maybe only some of the data is going directly to tape: the directories. Remember that *SM by default stores directories under the Management Class with the longest retention, modifiable via DIRMc. - Your storage pool hierarchy was changed by someone.
- See also "ANS1329S" discussion about COMPRESSAlways effects. - Your client (perhaps DB2 backup) may be overestimating the size of the object being backed up. - Um, the stgpool Access mode is Read/Write, yes? A good thing to check: Do a short Select * From Backups... to examine some of those files, and see what they are actually using for a Management Class. Backups without expiration Use INCRBYDate (q.v.). Backupset See: Backup Set baclient Shorthand for Backup-Archive Client. bak DFS command to start the backup and restore operations that direct them to buta. See also: buta; butc; DFS bakserver BackUp Server: DFS program to manage info in its database, serving recording and query operations. See also "buserver" of AFS. Barcode See CHECKLabel Barcode, examine tape (to assure that it is physically in library) 'mtlib -l /dev/lmcp0 -a -V VolName' causes the robot to move to the tape and scan its barcode. 'mtlib -l /dev/lmcp0 -a -L FileName' can be used to examine tapes en masse, by taking the first volser on each line of the file. Bare Metal Restore (BMR) Grudgingly performed by TSM, if at all; it is basically left to 3rd party providers such as The Kernel Group (see www.tkg.com/products.html). Redbook: "ADSM Client Disaster Recovery: Bare Metal Restore" (SG24-4880) See also: BMR Users group: TSM AIX Bare Metal Restore Special interest group. Subscribe by sending email to TSMAIXBMR-subscribe@yahoogroups.com or via the yahoogroups web interface at http://www.yahoogroups.com Bare Metal Restore, Windows? BMR of Windows is highly problematic, due to the Registry orientation of the operating system and hardware dependencies. I.e., don't expect it to work. As one customer put it: "Windows is the least transportable and least modular OS ever." Batch mode Start an "administrative client session" to issue a single server command or macro, via the command: 'dsmadmc -id=YOURID -pa=YOURPW CMDNAME', as described in the ADSM Administrator's Reference.
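Extending the Batch mode entry above: batch mode suits cron-driven reporting, since each invocation issues one command and exits. A sketch only - the ID, password, and file names are placeholders, and option spellings (e.g. -outfile) should be verified against the Administrator's Reference for your level:

```
dsmadmc -id=YOURID -pa=YOURPW -outfile=actlog.rpt 'Query ACtlog BEGINDate=TODAY'
dsmadmc -id=YOURID -pa=YOURPW "SELECT VOLUME_NAME FROM VOLHISTORY WHERE TYPE='BACKUPSET'"
```

Quoting matters: keep the whole server command as one shell argument, and note the nested single quotes the SQL literal requires.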
BCV EMC disk: Business Continuance Volumes. BEGin EVentlogging Server command to begin logging events to one or more receivers. A receiver for which event logging has begun is an active receiver. When the server is started, event logging automatically begins for the console and activity log and for any receivers that are started automatically based on entries in the server options file. You can use this command to begin logging events to receivers for which event logging is not automatically started at server startup. You can also use this command after you have disabled event logging to one or more receivers. Syntax: 'BEGin EVentlogging [ALL|CONSOLE|ACTLOG|EVENTSERVER|FILE|FILETEXT|SNMP|TIVOLI|USEREXIT]' See: User exit Benchmark Surprisingly, many sites simply buy hardware and start using it, and then maybe wonder if it is providing its full performance potential. What should happen is that the selection of hardware should be based upon performance specifications published by the vendor; then, once it is made operational at the customer site, the customer should conduct tests to measure and record its actual performance, under ideal conditions. That is a benchmark. Going through this process gives you a basis for accepting or rejecting the new facilities and, if you accept them, you have a basis for later comparing daily performance to know when problems or capacity issues are occurring. .BFS File name extension created by the server for FILE type scratch volumes which contain client data. Ref: Admin Guide, Defining and Updating FILE Device Classes See also: FILE Billing products Chargeback/TSM, an optional plugin to Servergraph/TSM (www.servergraph.com). Bindery A database that consists of three system files for a NetWare 3.11 server. The files contain user IDs and user restrictions. The Bindery is the first thing that ADSM backs up during an Incremental Backup. ADSM issues a Close to the Bindery, followed by an Open (about 2 seconds later).
This causes the Bindery to be written to disk, so that it can be backed up. Binding The process of associating an object with a management class name, and hence a set of rules. See "Files, binding to management class" Bit Vector Database concept for efficiently storing sparse data. Database records usually consist of multiple fields. In some db applications, only a few of the fields may have data: if you simply allocate space for all possible fields in database records, you will end up with a lot of empty space inflating your db. To save space you can instead use a prefacing sequence of bits in each database record which, left to right, correspond to the data fields in the db record, and in the db record you allocate space only for the data fields which contain data for this record. If the bit's value is zero, it means that the field had no data and does not participate in this record. If the bit's value is one, it means that the field does participate in the record and its value can be found in the db record, in the position relative to the other "one" values. Example: A university database is defined with records consisting of four fields: Person name, College, Campus address, Campus phone number. But not all students or staff members reside on campus, so allocating space for the last two fields would be wasteful. In the case of staff member John Doe, the last three fields are unnecessary, and so his database record would have a bit vector value of 1000, meaning that only his name appears in the database record. Bitfile Internal terminology denoting an Aggregate. Sometimes seen like "0.29131728", which is notation specifying an OBJECT_ID HIGH portion (0) and an OBJECT_ID LOW portion (29131728). (OBJECT_ID appears in the Archives and Backups database tables.) Note that in the BACKUPS table, the OBJECT_ID is just the low portion. See also: OBJECT_ID Bkup Backup file type, in Query CONtent report. 
Other types: Arch, SpMg Blksize See: Block size used for removable media Block size used for removable media (tape, optical disc) blksize: *SM sets the block size of all its tape/optical devices internally. Setting it in smit has no effect, except for tar, dd, and any other applications that do not set it themselves. ADSM uses variable blocking on all tapes, i.e. blocksize is 0. Generally however, for 3590 it will attempt to write out a full 256K block, which is the largest allowed blocksize with variable blocking. Some blocks, e.g. the last block in a series, will be shorter. AIX: use 'lsattr -E -l rmt1' to verify. DLT: ADSMv3 sets blksize to 256KB. Ref: IBM site Technote 1167281 BMR Bare Metal Restore. The Kernel Group has a product of that name. However, as of 2001/02 TKG has not been committing the resources required to develop the product, as reflected in the product's lack of SSA disk, raw volume, and Windows 2000 support. URL: http://www.tkg.com/products.html See also: Bare Metal Restore BOOKS Client User Options file (dsm.opt) option for making the ADSM online publications available through the ADSM GUI's Help menu, View Books item. The option specifies the command to invoke, which in Unix would be 'dtext'. Books, online, installing Follow the instructions contained in the booklet which accompanies the Online Product Library CD-ROM. Books, online, storage location Located in /usr/ebt/adsm/ More specifically: /usr/ebt/adsm/books Books, online, using If under the ADSM GUI: Click on the Help menu, View Books item. From the Unix prompt: 'dtext', which invokes the DynaText hypertext browser: /usr/bin/dtext -> /usr/ebt/bin/dtext. Books component product name "adsmbook.obj" As in 'lslpp -l adsmbook.obj'. BOT A Beginning Of Tape tape mark. See also: EOT BPX-Tcp/Ip The OpenEdition sockets API is used by the Tivoli Storage Manager for MVS 3.7 when the server is running under OS/390 R5 or greater. 
Therefore, "BPX-Tcp/Ip" is displayed when the server is using the OpenEdition sockets API (callable service). "BPX" are the first three characters of the names of the API functions that are being used by the server. Braces See: {}; File space, explicit specification BRMS AS/400 (iSeries) Backup Recovery and Media Services, a fully automated backup, recovery, and media management strategy used with OS/400 on the iSeries server. The iSeries TSM client is referred to as the BRMS Application Client to TSM. The BRMS Application Client function is based on a unique implementation of the TSM Application Programming Interface (API) and does not provide functions typically available with TSM Backup/Archive clients. The solution is integrated into BRMS and has a native iSeries look and feel. There are no TSM command line or GUI interfaces. The BRMS Application client is not a Tivoli Backup/Archive client nor a Tivoli Data Protection Client. You can use BRMS to save low-volume user data on distributed iSeries systems to any Tivoli Storage Manager (TSM) server. You can do this by using a BRMS component called the BRMS Application Client, which is provided with the base BRMS product. The BRMS Application Client has the look and feel of BRMS and iSeries. It is not a TSM Backup or Archive client. There is little difference in the way BRMS saves objects to TSM servers and the way it saves objects to media. A TSM server is just another device that BRMS uses for your save and restore operations. BRMS backups can span volumes. There is reportedly a well-known throughput bottleneck with BRMS. (600Kb/s is actually quite a respectable figure for BRMS.) Ref: In IBM webspace you can search for "TSM frequently asked questions" and "TSM tips and techniques" which talk of BRMS in relation to TSM. BU Seldom used abbreviation for backup. Buffer pool statistics, reset 'RESet BUFPool' BUFFPoolsize You mean: BUFPoolsize BUFPoolsize Definition in the server options file. 
Specifies the size of the database buffer pool in memory, in KBytes (i.e. 8192 = 8192 KB = 8 MB). A larger buffer pool can keep more database pages in the memory cache and lessen I/O to the database. As the ADSM (3.1) Performance Tuning Guide advised: While increasing BUFPoolsize, care must be taken not to cause paging in the virtual memory system. Monitor system memory usage to check for any increased paging after the BUFPoolsize change. (Use the 'RESet BUFPool' command to reset the statistics.) Note that a TSM server, like servers of all kinds, benefits from the host system having abundant real memory. Skimping is counter-productive. The minimum value is 256 KB; the maximum value is limited only by available virtual memory. Evaluate performance by looking at 'Query DB F=D' output Cache values. A "Cache Hit Pct." of 98% is a reasonable target. Default: 512 (KB) To change the value, either directly edit the server options file and restart the server, or use SETOPT BUFPoolsize and perform a RESet BUFPool. You can have the server tune the value itself via the SELFTUNEBUFpoolsize option. Ref: Installing the Server See also: SETOPT BUFPoolsize; LOGPoolsize; RESet BUFPool; SELFTUNEBUFpoolsize BUFPoolsize server option, query 'Query OPTion' Bulk Eject category 3494 Library Manager category code FF11 for a tape volume to be deposited in the High-Capacity Output Facility. After the volume has been so deposited its volser is deleted from the inventory. bus_domination Attribute for tape drives on a SCSI bus. Should be set "Yes" only if the drive is the only device on the bus. buserver BackUp Server: AFS program to manage info in its database, serving recording and query operations. See also "bakserver" of DFS. Busy file See: Changed buta (AFS) (Back Up To ADSM) is an ADSM API application which replaces the AFS butc. The "buta" programs are the ADSM agent programs that work with the native AFS volume backup system and send the data to ADSM. 
(The AFS buta and DFS buta are two similar but independent programs.) The buta tools only backup/restore at the volume level, so to get a single file you have to restore the volume to another location and then grovel for the file. This is why ADSM's AFS facilities are preferred. The "buta" backup style provides AFS disaster recovery. All of the necessary data is stored to restore AFS partitions to an AFS server, in the event of loss of a disk or server. It does not allow AFS users to backup and restore AFS data, per the ADSM backup model. All backup and restore operations require operator intervention. ADSM management classes do not control file retention and expiration for the AFS file data. Locking: The AFS volume is locked in the buta backup, but you should be backing up clone volumes, not the actuals. There is a paper published in the Decorum 97 Proceedings (from Transarc) describing the buta approach. As of AFS 3.6, butc itself supports backups to TSM, via XBSA (q.v.), meaning that buta will no longer be necessary. License: Its name is "Open Systems Environment", as per /usr/lpp/adsm/bin/README. The file backup client is installable from the adsm.afs.client installation file, and the AFS volume backup agent is installable from adsm.butaafs.client. Executables: /usr/afs/buta/. See publication "AFS/DFS Backup Clients", SH26-4048 and http://www.storage.ibm.com/software/ adsm/adafsdfs.htm . There's a white paper available at: http://www.storage.ibm.com/software/ adsm/adwhdfs.htm Compare buta with "dsm.afs". See also: bak; XBSA buta (DFS) (Back Up To ADSM) is an ADSM API application which replaces the DFS butc. The "buta" programs are the ADSM agent programs that work with the native DFS fileset backup system and send the data to ADSM. (The AFS buta and DFS buta are two similar but independent programs.) 
The buta tools only backup/restore at the fileset level, so to get a single file you have to restore the fileset to another location and then grovel for the file. This is why ADSM's AFS facilities are preferred. Each dumped fileset (incremental or full) is sent to the ADSM server as a file whose name is the same as that of the fileset. The fileset dump files associated with a dump are stored within a single file space on the ADSM server, and the name of the file space is the dump-id string. The "buta" backup style provides DFS disaster recovery. All of the necessary data is stored to restore DFS aggregates to a DFS server, in the event of loss of a disk or server. It does not allow DFS users to backup and restore DFS data, per the ADSM backup model. All backup and restore operations require operator intervention. ADSM management classes do not control file retention and expiration for the DFS file data. Locking: The DFS fileset is locked in the buta backup, but you should be backing up clone filesets, not the actuals. License: Its name is "Open Systems Environment", as per /usr/lpp/adsm/bin/README. The file backup client is installable from the adsm.dfs.client installation file, and the DFS fileset backup agent is installable from adsm.butadfs.client. Executables: in /var/dce/dfs/buta/ . See publication "AFS/DFS Backup Clients", SH26-4048 and http://www.storage.ibm.com/software/ adsm/adafsdfs.htm . There's a white paper available at: http://www.storage.ibm.com/software/ adsm/adwhdfs.htm Compare buta with "dsm.dfs". See also: bak butc (AFS) Back Up Tape Coordinator: AFS volume dumps and restores are performed through this program, which reads and writes an attached tape device and then interacts with the buserver to record them. Butc is replaced by buta to instead perform the backups to ADSM. As of AFS 3.6, butc itself supports backups to TSM through XBSA (q.v.), meaning that buta will no longer be necessary. 
See also: bak butc (DFS) Back Up Tape Coordinator: DFS fileset dumps and restores are performed through this program, which reads and writes an attached tape device and then interacts with the buserver to record them. Butc is replaced by buta to instead perform the backups to ADSM. See also: bak bydate You mean -INCRBYDate (q.v.). C: vs C:\* specification C: refers to the entire drive, while C:\* refers to all files in the root of C: (and subdirectories as well if -SUBDIR=YES is specified). A C:\* backup will not cause the Registry System Objects to be backed up, whereas a C: backup will. Cache (storage pool) When files are migrated from disk storage pools, duplicate copies of the files may remain in disk storage ("cached") as long as TSM can afford the space, thus making for faster retrieval. As such, this is *not* a write-through cache: the caching only begins once the storage pool HIghmig value is exceeded. ADSM will delete the cached disk files only when space is needed. This is why the Pct Util value in a 'Query Volume' or 'Query STGpool' report can look much higher than its defined "High Mig%" threshold value (Pct Util will always hover around 99% with Cache activated). Define HIghmig lower to assure the disk-stored files also being on tape, but at the expense of more tape action. When caching is in effect, the best way to get a sense of "real" storage pool utilization is via 'Query OCCupancy'. Note that the storage pool LOwmig value is effectively overridden to 0 when CAChe is in effect, because once migration starts, TSM wants to assure that everything is cached. You might as well define LOwmig as 0 to avoid confusion in this situation. Performance: Requires additional database space and updating thereof. Can also result in disk fragmentation due to lingering files. 
Is best used for the disks which may be part of Archive and HSM storage pools, because of the likelihood of retrievals; but avoid use with disks leading a backup storage pool hierarchy, because such disks serve as buffers and so caching would be a waste of overhead. With caching, the storage pool Pct Migr value does not include cached data. See also the description of message ANR0534W. CAChe Disk stgpool parameter to say whether or not caching is in effect. Note that if you had operated with CAChe=Yes and then turned it off, turning it off doesn't clear the cached files from the diskpool - you need to also do one of the following: - Fill the diskpool to 100%, which will cause the cached versions to be released to make room for the new files; or - Migrate down to 0, then do MOVe Data commands on all the disk volumes, which will free the cached images. Cache Hit Pct. Element of 'Query DB F=D' report, reflecting server database performance. (Also revealed by 'SHow BUFStats'.) The value should be up around 98%. (You should periodically do 'RESet BUFPool' to reset the statistics counts to assure valid values, particularly if the "Total Buffer Requests" from Query DB is negative (counter overflow).) If the Cache Hit Pct. value is significantly less, then the server is being substantially slowed in having to perform database disk I/O to service lookup requests, which will be most noticeable in degrading backups being performed by multiple clients simultaneously. Your ability to realize a high value in this cache is affected by the same factors as any other cache: the more new entries in the cache - as from lots of client backups - the less likely it is that any of those resident in the cache may serve a future reference, and so the lookup has to go all the way back to the disk-based database, meaning a "cache miss". It's all probability, and the inability to predict the future. Increase BUFPoolsize in dsmserv.opt . Note: You can have a high Cache Hit Pct. 
and yet performance still suffering if you skimp on real memory in your server system, because all modern operating systems use virtual memory, and in a shortage of real memory, much of what had been in real memory will instead be out on the backing store, necessitating I/O to get it back in, which entails substantial delay. See topic "TSM Tuning Considerations" at the bottom of this document. See also: RESet BUFPool Cache Wait Pct. Element of 'Query DB F=D' report. Specifies, as a percentage, the number of requests for a database buffer pool page that was unavailable (because all database buffer pool pages are occupied). You want the number to be 0.0. If greater, increase the size of the buffer pool with the BUFPoolsize option. You can reset this value with the 'RESet BUFPool' command. Caching, turn off 'UPDate STGpool PoolName CAChe=No' If you turn caching off, there's no reason for ADSM to suddenly remove the cache images and lose the investment already made: that stuff is residual, and will go away as space is needed. CAD See: Client Acceptor Daemon Calibration Sensor 3494 robotic tape library sensor: In addition to the bar code reader, the 3494 accessor contains another, more primitive vision system, based upon infrared rather than laser: it is the Calibration Sensor, located in the top right side of the picker. This sensor is used during Teach, bouncing its light off the white, rectangular reflective pads (called Fiducials) which are stuck onto various surfaces inside the 3494. This gives the robot its first actual sensing of where things actually are inside. CANcel EXPIration TSM server command to cancel an expiration process if there is one currently running. This does NOT require the process ID to be specified, and so this command can be scheduled using the server administrative command scheduling utility to help manage expiration processing and the time it consumes. 
TSM will record the point where it stopped, in the TSM database, which will be the point from which it resumes when the next EXPIre Inventory is run. As such, this may be preferable to CANcel PRocess. Msgs: ANR0813I when stopped by CANcel PRocess See also: Expiration, stop CANcel PRocess TSM server command to cancel a background process. Syntax: 'CANcel PRocess Process_Number' Notes: Processes waiting on resources won't cancel until they can get that resource - at which point they will go away. For example, a Backup Stgpool process which is having trouble reading or writing a tape, and is consumed with retrying the I/O, cannot be immediately cancelled. When a process is canceled, it often has to wait for lock requests to clear prior to going away: SHOW LOCKS may be used to inspect. CANcel REQuest *SM server command to cancel pending mount requests. Syntax: 'CANcel REQuest [requestnum|ALl] [PERManent]' where PERManent causes the volume status to be marked Unavailable, which prevents further mounts of that tape. CANcel RESTore ADSMv3 server command to cancel a Restartable Restore operation. Syntax: 'CANcel RESTore Session_Number|ALl' See also: dsmc CANcel Restore; Query RESTore CANcel SEssion To cancel an administrative or client session. Syntax: 'CANcel SEssion [SessionNum|ALl]' A client conducting a dsm session will get an alert box saying "Stopped by user", though it was actually the server which stopped it. An administrative session which is canceled gets regenerated... adsm> cancel se 4706 ANS5658E TCP/IP failure. ANS5102I Return code -50. ANS5787E Communication timeout. Reissue the command. ANS5100I Session established... ANS5102I Return code -50. SELECT command sessions are a problem: depending on the complexity of the query it is quite possible for the server to hang, and Tivoli has stated that the Cancel may not be able to cancel the Select, such that halting and restarting the server is the only way out of that situation. 
Ref: Admin Guide, Monitoring the TSM Server, Using SQL to Query the TSM Database, Issuing SELECT Commands. Msgs: ANS4017E Candidates A file in the .SpaceMan directory of an HSM-managed file system, listing migration candidates (q.v.). The fields on each line: 1. Migration Priority number, which dsmreconcile computes based upon file size and last access. 2. Size of file, in bytes. 3. Timestamp of last file access (atime), in seconds since 1970. 4. Rest of pathname in file system. Capacity Column in 'Query FIlespace' server command output, which reflects the size of the object as it exists on the client. Note that this does *not* reflect the space occupied in ADSM. See also: Pct Util Cartridge devtype, considerations When using a devclass with DEVType=Cartridge, 3590 devices can only read. This is to allow customers who used 3591's (3590 devices with the A01 controller) to read those tapes with a 3590 (3590 devices with the A00 controller). The 3591 device emulates a 3490, and uses the Cartridge devtype. 3590's use the 3590 devtype. You can do a Help Define Devclass, or check the readme for information on defining a 3590 devclass, but it is basically the same as Cartridge, with a DEVType=3590. The 3591 devices exist on MVS and VM only, so the compatibility mode is only valid on these platforms. On all other platforms, you can only use a 3590 with the 3590 devtype. Cartridge System Tape (CST) A designation for the base 3490 cartridge technology, which reads and writes 18 tracks on half-inch tape. Sometimes referred to as MEDIA1. Contrast with ECCST and HPCT. See also: ECCST; HPCT; Media Type CAST SQL: To alter the data representation in a query operation: CAST(Column_Name AS ___) See: TIMESTAMP Categories See: Volume Categories Category code, search for volumes 'mtlib -l /dev/lmcp0 -qC -s ____' will report only volumes having the specified category code. Category code control point Category codes are controlled at the ADSM LIBRary level. 
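For scripting such category-code queries, here is a minimal Python sketch of validating a category code and composing the corresponding mtlib search command. The helper function and its defaults are hypothetical (not part of TSM or the mtlib distribution), and the category-name table lists only the few codes mentioned in this document; the full list is in the 3494 documentation.

```python
# A few 3494 category codes mentioned elsewhere in this document;
# this table is illustrative, not exhaustive.
KNOWN_CATEGORIES = {
    "FF00": "Insert",
    "FF11": "Bulk Eject",
    "FFF6": "CE volume",
}

def category_search_command(code, lmcp="/dev/lmcp0"):
    """Return the mtlib command line that lists volumes in one category.

    Hypothetical helper: validates that 'code' is 4 hex digits, then
    builds the 'mtlib -qC -s' invocation described in the entry above.
    """
    code = code.upper()
    if len(code) != 4 or any(c not in "0123456789ABCDEF" for c in code):
        raise ValueError("category code must be 4 hex digits: %r" % code)
    return "mtlib -l %s -qC -s %s" % (lmcp, code.lower())
```

The command string can then be handed to a shell or to subprocess; validating the hex code first avoids silently querying a nonexistent category.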
Category code of one tape in library, list Via Unix command: 'mtlib -l /dev/lmcp0 -vqV -V VolName' In TSM: 'Query LIBVolume LibName VolName' indirectly shows the Category Code in the Status value, which you can then see in numerical terms by doing 'Query LIBRary [LibName]'. Category code of one tape in library, set Via Unix command: 'mtlib -l /dev/lmcp0 -vC -V VolName -t Hexadecimal_New_Category' (Does not involve a tape mount.) No ADSM command performs this function, nor does the 3494 control panel provide a means for doing it. By virtue of doing this outside of ADSM, you should do 'AUDit LIBRary LibName' afterward for each ADSM-defined library name affected, so that ADSM sees and registers the change. In TSM: 'UPDate LIBVolume LibName VolName STATus=[PRIvate|SCRatch]' indirectly changes the Category Code to the Status value reflected in 'Query LIBRary [LibName]'. Category Codes Ref: Redbook "IBM Magstar Tape Products Family: A Practical Guide" (SG24-4632), Appendix A Category codes of all tapes in library, list Use AIX command: 'mtlib -l /dev/lmcp0 -vqI' for fully-labeled information, or just 'mtlib -l /dev/lmcp0 -qI' for unlabeled data fields: volser, category code, volume attribute, volume class (type of tape drive; equates to device class), volume type. (The -v flag adds verbosity, for more descriptive output.) The tapes reported do not include the CE tape or cleaning tapes. In TSM: 'Query LIBVolume [LibName] [VolName]' indirectly shows the Category Code in the Status value, which you can then see in numerical terms by doing 'Query LIBRary [LibName]'. Category Table (TSM) /usr/tivoli/tsm/etc/category_table Contains a list of tape library category codes, like: FF00=inserted. (unassigned, in ATL) CC= Completion Code value in I/O operations, as appears in error messages. See the back of the Messages manuals for a list of Completion Codes and suggested handling. 
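The 'mtlib -qI' inventory listing above lends itself to simple post-processing, e.g. tallying how many volumes sit in each category. A minimal Python sketch, assuming each output line carries whitespace-separated fields in the order the entry lists them (volser, then category code); verify that layout against a real listing from your library before relying on it:

```python
from collections import Counter

def count_by_category(inventory_lines):
    """Return {category_code: volume_count} from 'mtlib -qI' output lines.

    Assumes field 1 is the volser and field 2 the hex category code,
    per the field order described in the entry above.
    """
    counts = Counter()
    for line in inventory_lines:
        fields = line.split()
        if len(fields) >= 2:
            counts[fields[1].upper()] += 1
    return dict(counts)
```

Feeding it the captured output of 'mtlib -l /dev/lmcp0 -qI' gives a quick scratch-vs-private census without a TSM session.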
CCW Continuous Composite WORM, as in a type of optical WORM drive that can be in the 3995 library. CD See also: DVD... CD for Backup Set See: Backup set, on CD CDRW (CD-RW) support? Tivoli Storage Manager V5.1, V4.2 and V4.1 for Windows and Windows 2000 supports removable media devices such as Iomega JAZ, Iomega ZIP, CD-R, CD-RW, and optical devices provided a file system is supplied on the media. The devices are defined using a device class of device type REMOVABLEFILE. (Ref: Tivoli Storage Manager web pages for device support, under "Platform Specific Notes") With CD-ROM support for Windows, administrators can also use CD-ROM media as an output device class. Using CD-ROM media as output requires other software which uses a file system on top of the CD-ROM media. ADAPTEC Direct CD software is the most common package for this application. This media allows other software to write to a CD by using a drive letter and file names. The media can be either CD-R (read) or CD-RW (read/write). (Ref: Tivoli Storage Manager for Windows Administrator's Guide) CE (C.E.) IBM Customer Engineer. CE volumes, count of in 3494 Via Unix command: 'mtlib -l /dev/lmcp0 -vqK -s fff6' Cell (tape library storage slot) For libraries containing their own supervisor (e.g., 3494), TSM does not know or care where volumes are stored in the library: it merely asks the library to mount them as needed. See: Element; HOME_ELEMENT; Library... SHow LIBINV Cell 1 See: 3494 Cell 1 Central Scheduling A function that allows an *SM administrator to schedule backup, archive, and space management operations from a central location. The operations can be scheduled on a periodic basis or on an explicit date. Shows up in server command Query STATus output as "Central Scheduler: Active". 
(It is not documented in the manuals what controls its Active/Inactive state) Changed Keyword at end of a line in client backup log indicating that the file changed as it was being backed up, as: Normal File--> 1,544,241,152 /SomeFile Changed Backup may be reattempted according to the CHAngingretries value. In the dsmerror.log you may see an auxiliary message for the retry: " truncated while reading in Shared Static mode." See also: CHAngingretries; Retry; SERialization CHAngingretries (-CHAngingretries=) Client System Options file (dsm.sys) option to specify how many additional times you want *SM to attempt to back up or archive a file that is "in use", as discovered during the first attempt to back it up, when the Copy Group SERialization is SHRSTatic or SHRDYnamic (but not STatic or DYnamic). Note that the option controls retries: if you specify "CHAngingretries 3", then the backup or archive operation will try a total of 4 times - the initial attempt plus the three retries. Be aware that the retry will be right after the failed attempt: *SM does not go on to all other files and then come back and retry this one. Option placement: within server stanza. Spec: CHAngingretries { 0|1|2|3|4 } Default: 4 retries. Note: It may be futile to attempt to retry, in that if the file is large it will likely be undergoing writing for a long time. Note: Does not control number of retries in presence of read errors. This option's final effect depends upon the COpygroup's SERialization "shared" setting: Static prohibits retries if the file is busy; Dynamic causes the operation to proceed on the first try; Shared Static will cause the attempt to be abandoned if the file remains busy, but Shared Dynamic will cause backup or archiving to occur on the final attempt. See also: Changed; Fuzzy Backup; Retry; SERialization CHAngingretries, query The 'dsmc q o' command will *not* reveal the value of this option: you have to examine the dsm.sys options file. 
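The retry arithmetic described above - CHAngingretries N means the initial attempt plus N immediate retries, i.e. N+1 tries in all - can be sketched in a few lines of Python. This is an illustration of the counting, not TSM's implementation; backup_once is a hypothetical stand-in for one attempt that returns False when the file changed while being read.

```python
def backup_with_retries(backup_once, changing_retries=4):
    """Return (succeeded, attempts_made) for one busy file.

    Mirrors the CHAngingretries semantics: one initial attempt plus
    'changing_retries' immediate retries, stopping at first success.
    Default of 4 retries matches the option's documented default.
    """
    attempts = 0
    for _ in range(1 + changing_retries):   # initial try + N retries
        attempts += 1
        if backup_once():
            return True, attempts
    return False, attempts                   # file stayed busy; give up
```

So with "CHAngingretries 3" a file that never stabilizes is tried 4 times; whether the final try sends the file anyway or abandons it is then governed by the copy group SERialization, as the entry notes.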
CHAR SQL function to return a string (aligned left). Syntax: CHAR(expression[,n]) See also: LEFT() CHECKIn LIBVolume TSM server command to check a *labeled* tape into an automated tape library. (For 3494 and like libraries, the volume must be in Insert mode.) 'CHECKIn LIBVolume LibName VolName STATus=PRIvate|SCRatch|CLEaner [CHECKLabel=Yes|No|Barcode] [SWAP=No|Yes] [MOUNTWait=Nmins] [SEARCH=No|Yes|Bulk] [CLEANINGS=1..1000] [VOLList=vol1,vol2,vol3 ...] [DEVType=3590]' (Omit VolName if SEARCH=Yes. You can do CHECKLabel=Barcode only if SEARCH=Yes.) Note that this command is not relevant for LIBtype=MANUAL. Note that SEARCH=Bulk will result in message ANR8373I, which requires doing 'REPLY'. Command output, redirect Use ' >> ' (redirection). Command output, suppress Use the Client System Options file (dsm.sys) option "Quiet". See also: VERBOSE Command routing ADSMv3: Command routing allows the server that originated the command to route the command to multiple servers and then to collect the output from these servers. Format: Server1[,ServerN]: server cmd Commands, uncommitted, roll back 'rollback' COMMIT TSM server command used in a macro to commit command-induced changes to the TSM database. Syntax: COMMIT See also: Itemcommit Committing database updates The Recovery Log holds uncommitted database updates. See: CKPT; LOGPoolsize COMMMethod Server Options File operand specifying one or more communications methods which clients may use to reach the server. Should specify at least one of: HTTP (for Web admin client) IPXSPX (discontinued in TSM4) NETBIOS (discontinued in TSM4) NONE (to block external access to the server) SHAREDMEM (shared memory, within a single computer system) SNALU6.2 (APPC - discontinued in TSM4) SNMP TCPIP (the default, being TCP, not UDP) (Ref: Installing the Server, Chap. 5) COMMMethod Client System Options file (dsm.sys) option to specify the one communication method to use to reach each server. 
Should specify one of: 3270 (discontinued in TSM4) 400comm HTTP (for Web Admin) IPXspx NAMEdpipe NETBios PWScs SHAREdmem (shared memory, within a single computer system) SHMPORT SNAlu6.2 TCPip (is TCP, not UDP) Be sure to code it, once, on each server stanza. See also: Shared memory COMMmethod server option, query 'Query OPTion'. You will see as many "CommMethod" entries as were defined in the server options file. Common Programming Interface Communications (CPIC) A programming interface that allows program-to-program communication using SNA LU6.2. See Systems Network Architecture Logical Unit 6.2. Discontinued as of TSM 4.2. COMMOpentimeout Definition in the Server Options File. Specifies the maximum number of seconds that the ADSM server waits for a response from a client when trying to initiate a conversation. Default: 20 seconds. Ref: Installing the Server... COMMTimeout Definition in the Server Options File. Specifies the communication timeout value in seconds: how long the server waits during a database update for an expected message from a client. Default: 60 seconds. Too small a value can result in ANR0481W session termination and ANS1005E. A value of 3600 is much more realistic. A large value is necessary to give the client time to rummage around in its file system, fill a buffer with files' data, and finally send it - especially for Incremental backups of large file systems having few updates, where the client is out of communication with the server for large amounts of time. If client compression is active, be sure to allow enough time for the client to compress large files. Ref: Installing the Server... See also: IDLETimeout; SETOPT; Sparse files, handling of, Windows COMMTimeout server option, query 'Query OPTion' Communication method "COMMmethod" definition in the server options file. The method by which a client and server exchange information. The UNIX application client can use the TCP/IP or SNA LU6.2 method. 
The Windows application client can use the 3270, TCP/IP, NETBIOS, or IPX/SPX method. The OS/2 application client can use the 3270, TCP/IP, PWSCS, SNA LU6.2, NETBIOS, IPX/SPX, or Named Pipe method. The Novell NetWare application client can use the IPX/SPX, PWSCS, SNA LU6.2, or TCP/IP methods. See IPX/SPX, Named Pipe, NETBIOS, Programmable Workstation Communication Service, Systems Network Architecture Logical Unit 6.2, and Transmission Control Protocol/Internet Protocol. Communication protocol A set of defined interfaces that allows computers to communicate with each other. Communications timeout value, define "COMMTimeout" definition in the server options file. Communications Wait (CommW, commwait) "Sess State" value in 'Query SEssion' for when the server was waiting to receive expected data from the client or waiting for the communication layer to accept data to be sent to the client. An excessive value indicates a problem in the communication layer or in the client. Recorded in the 23rd field of the accounting record, and the "Pct. Comm. Wait Last Session" field of the 'Query Node Format=Detailed' server command. See also: Idle Wait; Media Wait; RecvW; Run; SendW; Start CommW See: Communications Wait commwait See: Communications Wait Competing products ARCserve; Veritas; www.redisafe.com; www.graphiumsoftware.com Compile Time (Compile Time API) Refers to a compiled application, which may employ a Run Time API (q.v.). The term "Compile Time API" may be employed with a TDP, which is a middleware application which employs both the TDP subject API (database, mail, etc.) plus the TSM API. Compress files sent from client to server? Can be defined via COMPRESSIon option in the dsm.sys Client System Options file. Specifying "Yes" causes *SM to compress files before sending them to the *SM server. Worth doing if you have a fast client processor. COMPRESSAlways Client User Options file (dsm.opt) option to specify handling of a file which *grows* during compression. 
(COMPRESSIon option must be set for this option to come into play.) Default: v2: No, do not send the object if it grows during compression. v3: Yes, do send if it grows during compression. Notes: Specifying No can result in wasted processing... The TXNGroupmax and TXNBytelimit options govern transaction size, and if a file grows in compression when COMPRESSAlways=No, the whole transaction and all the files involved within it must be processed again, without compression. This will show up in the "Objects compressed by:" backup statistics number being negative (like "-29%"). Messages: ANS1310E; ANS1329S See also IBM site TechNote 1156827. Compression Refers to data compression, the primary objective being to save storage pool space, and secondarily data transfer time. TSM compression is governed according to REGister Node settings, client option settings (COMPRESSIon), and Devclass Format. Object attributes may also specify that the data has already been compressed such that TSM will not attempt to compress it further. Drives: Either client compression or drive compression should be used, but not both, as the compression operation at the drive may actually cause the data to expand. EXCLUDE.COMPRESSION can be used to defeat compression for certain files during Archive and Backup processing. Ref: TSM Admin Guide, "Using Data Compression" See also: File size COMPression= Operand of REGister Node to control client data compression: No The client may not compress data sent to the server - regardless of client options. Each client session will show: "Data compression forced off by the server" in the headings, just under the Server Version line of the client log. Yes The client must always compress data sent to the server - regardless of client options. Each client session will show: "Data compression forced on by the server" in the headings, just under the Server Version line of the client log. 
Client The client may choose whether or not to compress data sent to the server, via client options. Default: COMPression=Client COMPRESSIon (client compression) Client System Options file (dsm.sys) option. Code in a server stanza. Specifying "Yes" causes *SM to compress files before sending them to the TSM server, during Backup and Archive operations, for storage as given - if the server allows the client to make a choice about compression, via "COMPression=Client" in 'REGister Node'. Conversely, the client has to uncompress the files in a restoral or retrieval. (The need for the client to decompress the data coming back from the server is implicit in the data, and thus is independent of any client option.) Worth considering if you have a fast client processor and the storage device does not do hardware compression (most tape drives do). Compression increases data communication throughput and takes less space if the destination storage pool is Disk - but is less desirable if the storage pool is tape, in that the tape drive is better for doing compression, in hardware. Beware: if the file expands during compression then TSM will restart the entire transaction - which could involve resending other files, per the TXNGroupmax / TXNBytelimit values. The slower your client, the longer it takes to compress the file, and thus the longer the exposure to this possibility. Check at client by doing: 'dsmc Query Option' for ADSM or 'dsmc show options' for TSM. The dsmc summary will contain the extra line: "Compression percent reduction:", which is not present without compression. Note that during the operation the progress dots will be fewer and slower than if not using compression. With "COMPRESSIon Yes", the server COMMTimeout option becomes more important - particularly with large files - as the client takes considerable time doing decompression. How long does compression take?
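For a crude ballpark, you can time a stand-in compressor on a sample file; gzip here is only an analogy for the client's compressor, and the 10 MB sample is a made-up illustration:

```shell
# Ballpark the CPU cost of client-side compression by timing gzip on a
# sample file. gzip is only a stand-in: its algorithm and speed differ
# from the TSM client's compressor, so treat the result as rough.
dd if=/dev/zero of=sample.dat bs=1024 count=10240 2>/dev/null   # 10 MB sample
start=$(date +%s)
gzip -c sample.dat > sample.dat.gz
echo "compression took $(( $(date +%s) - start )) seconds"
ls -l sample.dat sample.dat.gz    # compare sizes, too
rm -f sample.dat sample.dat.gz
```

Running this against a copy of a real file from your backups gives a more representative number than the synthetic sample.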
One way to get a sense of it is to, outside of TSM, compress a copy of a typical, large file that is involved in your backups, performing the compression with a utility like gzip. Where the client options call for both compression and encryption, compression is reportedly performed before encryption - which makes sense, as encrypted data is effectively binary data, which would either see little compression, or even expansion. And, encryption means data secured by a key, so it further makes sense to prohibit any access to the data file if you do not first have the key. See also: Sparse files, handling of, Windows Compression, by tape drive Once the writing of a tape has begun with or without compression, that method will persist until the tape is full. Changing Devclass FORMAT will affect only newly used tapes. Compression, client, control methods Client compression may be controlled by several means: - Client option file spec. - Client Option Set in the server. (Do 'dsmc query options' to see what's in effect, per options file and server side Option Set.) - Mandated in the server definition of that client node. If compression is in effect by any of the above methods, it will be reflected in the statistics at the end of a Backup session ("Objects compressed by:"). Compression algorithm, client Is Lempel-Ziv (LZ1), the same as that used in pkzip, MVS HAC, and most likely unix as well, and yes the data will normally grow when trying to compress it for a second time, as in a client being defined with COMPRESSAlways=Yes and a compressed file being backed up. Per the 3590 Intro and Planning Guide: "Data Compression is not recommended for encrypted data. Compressing encrypted data may reduce the effective tape capacity." This would seem to say that any tough binary data, like pre-compressed data from a *SM client, would expand rather than compress, due to the expectations and limitations of the algorithm. Compression being done by client node?
(before it sends files to server for backup and archive) Controlled by the COMPression parameter on the 'REGister Node' and 'UPDate Node' commands. Default: Client (it determines whether to compress files). Query from ADSM server: 'Query Node Format=Detailed'. "Yes" means that it will always compress files sent to server; "No" means that it won't. Query from client: 'dsmc Query Option' for ADSM, or 'dsmc show options' for TSM; look for "Compression". Is also seen in result from client backup and archive, in "Objects compressed by:" line at end of job. Compression being done by *SM server on 3590 tape drives? Controlled via the DEVclass "FORMAT" operand. Compression being done by tape drive? Most tape drives can perform hardware compression of data. (The 3590 can.) Find out via the AIX command: '/usr/sbin/lsattr -E -l rmt1' where "rmt1" is a sample tape drive name. TSM will set compression according to your DEVclass FORMAT=____ value. You can use SMIT to permanently change this, or do explicit: 'chdev -l rmt1 compress=yes|no'. You can also use the "compress" and "nocompress" keywords in the 'tapeutil' or 'ntutil' command to turn compression on and off for subsequent *util operations (only). Configuration file An optional file pointed to by your application that can contain the same options that are found in the client options file (for non-UNIX platforms) or in the client user options file and client system options file (for UNIX platforms). If your application points to a configuration file and values are defined for options, then the values specified in the configuration file override any value set in the client options files. Configuration Manager See: Enterprise Configuration and Policy Management Connect Agents Commercial implementations of the ADSM API to provide high-performance, integrated, online backups and restores of industry-leading databases. TSM renamed them to "Data Protection" (agents) (q.v.).
See http://www.storage.ibm.com/ software/adsm/addbase.htm Console mode See: -CONsolemode; Remote console -CONsolemode Command-line option for ADSM administrative client commands ('dsmadmc', etc.) to see all unsolicited server console output. Sometimes referred to as "remote console". Results in a display-only session (no input prompt - you cannot enter commands). And unlike the Activity Log, no date-timestamps lead each line. Start an "administrative client session" via the command: 'dsmadmc -CONsolemode'. To have Operations monitor ADSM, consider setting up a "monitor" admin ID and a shell script which would invoke something to the effect of: 'dsmadmc -ID=monitor -CONsolemode -OUTfile=/var/log/ADSMmonitor.YYYYMMDD' and thus see and log events. Note that ADSM administrator commands cannot be issued in Console Mode. See also: dsmadmc; -MOUNTmode Ref: Administrator's Reference Consumer session The session which actually performs the data backup. (To use an FTP analogy, this is the "data channel".) Sometimes called the "data thread". Contrast with: Producer session See also: RESOURceutilization Contemporary Cybernetics 8mm drives 8510 is dual density (2.2gig and 5gig). (That brand was subsumed by Exabyte: see http://www.exabyte.com/home/ products.html for models.) Content Manager CommonStore CommonStore seamlessly integrates SAP R/3 and Lotus Domino with leading IBM archive systems such as IBM Content Manager, IBM Content Manager OnDemand, or TSM. The solution supports the archiving of virtually any kind of business information, including old, inactive data, e-mail documents, scanned images, faxes, computer printed output and business files. You can offload, archive, and e-mail documents from your existing Lotus Notes databases onto long-term archive systems. You can also accomplish a fully auditable document management system with your Lotus Notes client. 
http://www.ibm.com/software/data/commonstore/ CONTENTS (SQL) The *SM database table which is the entirety of all filespaces data. (As such, Select queries against this table are quite expensive.) Along with the Archives and Backups tables, constitutes the bulk of the *SM database contents. Columns: VOLUME_NAME, NODE_NAME (upper case), TYPE (Bkup, Arch, SpMg), FILESPACE_NAME (/fs), FILE_NAME (/subdir/name), AGGREGATED (n/N), FILE_SIZE, SEGMENT (n/N), CACHED (Yes/No) Whereas the Backups table records a single instance of the backed up file, the Contents table records the primary storage pool instance plus all copy storage pool instances. Note that no timestamp is available for the file objects: that info can be obtained from the Backups table. But a major problem with the Contents table is the absence of anything to uniquely identify the instance of its FILE_NAME, to be able to correlate with the corresponding entry in the Backups table, as would be possible if the Contents table carried the OBJECT_ID. The best you can do is try to bracket the files by creation timestamp as compared with the volume DATE_TIME column from the Volhistory table and the LAST_WRITE_DATE from the Volumes table. See also: Query CONtent Continuation and quoting Specifying things in quotes can always get confusing... When you need to convey an object name which contains blanks, you must enclose it in quotes. Further, you must nest quotes in cases where you need to use quotes not just to convey the object to *SM, but to have an enclosing set of quotes stored along with the name. This is particularly true with the OBJECTS parameter of the DEFine SCHedule command for client schedules. In its case, quoted names need to have enclosing double-quotes stored with them; and you convey that composite to *SM with single quotes. Doing this correctly is simple if you just consider how the composite has to end up...
Wrong: OBJECTS='"Object 1"'- '"Object 2"' Right: OBJECTS='"Object 1" '- '"Object 2"' That is, the composite must end up being stored as: "Object 1" "Object 2" for feeding to and proper processing by the client command. The Wrong form would result in: "Object 1""Object 2" mooshing, which when illustrated this way is obviously wrong. The Wrong form can result in an ANS1102E error. Ref: "Using Continuation Characters" in the Admin Ref. Continuing server command lines (continuation) Code either a hyphen (-) or backslash (\) at the end of the line and continue coding anywhere on the next line. Continuing client options (continuation) Lines in the Client System Options File and Client User Options File are not continued per se: instead, you re-code the option on successive lines. For example, the DOMain option usually entails a lot of file system names; so code a comfortable number of file system names on each line, as in: DOMain /FileSystemName1, ... DOMain /FileSystemName7, ... Count() SQL function to calculate the number of records returned by a query. Note that this differs from Sum(), which computes a sum from the contents of a column. Convenience Eject category 3494 Library Manager category code FF10 for a tape volume to be ejected via the Convenience I/O Station. After the volume has been so ejected its volser is deleted from the inventory. Convenience Input-Output Station (Convenience I/O) 3494 hardware feature which provides 10 access slots in the door for inputting cartridges to the 3494 or receiving cartridges from it. May also be used for the transient mounting of tapes for immediate processing, not to become part of the repository. The Convenience I/O Station is just a basic pass-through area, and should not be confused with the more sophisticated Automatic Cartridge Facility magazine available for the 3590 tape drive. We find that it takes some 2 minutes, 40 seconds for the robot to take 10 tapes from the I/O station and store them into cells.
When cartridges have been inserted from the outside by an operator, the Operator Panel light "Input Mode" is lit. It changes to unlit as soon as the robot takes the last cartridge from the station. When cartridges have been inserted from the inside by the robot, the Operator Panel light "Output Mode" is lit. The Operator Station System Summary display shows "Convenience I/O: Volumes present" for as long as there are cartridges in the station. See also the related High Capacity Output Facility. Convenience I/O Station, count of cartridges in See: 3494, count of cartridges in Convenience I/O Station CONVert Archive TSM 4.2 server command to be run once on each node to improve the efficiency of a command line or API client query of archive files and directories using the Description option, where many files may have the same description. Previously, an API client could not perform an efficient query at all and a Version 3.1 or later command line client could perform such a query only if the node had signed onto the server from a GUI at least once. Syntax: CONVert Archive NodeName Wait=No|Yes Msgs: ANR0911I COPied COPied=ANY|Yes|No Operand of 'Query CONtent' command, to specify whether to restrict query output either to files that are backed up to a copy storage pool (Yes) or to files that are not backed up to a copy storage pool (No). Copy Group A policy object assigned to a Management Class specifying attributes which control the generation, destination, and expiration of backup versions of files and archived copies of files. It is the Copy Group which defines the destination Storage Pools to use for Backup and Archive. ADSM Copygroup names are always "STANDARD": you cannot assign names, which is conceptually pointless anyway in that there can only be one copygroup of a given type assigned to a management class. 'Query Mgm' does not reveal the Copygroups within the management class, unfortunately: you have to do 'Query COpygroup'.
Note that Copy Groups are used only with Backup and Archive. HSM does not use them: instead, its Storage Pool is defined via the MGmtclass attribute "MIGDESTination". See "Archive Copy Group" and "Backup Copy Group". Copy group, Archive type, define See: DEFine COpygroup, archive type Copy group, Backup type, define See: DEFine COpygroup, backup type Copy group, Archive, query 'Query COpygroup [CopyGroupName] Type=Archive' (defaults to Backup type copy group) Copy group, Backup, query 'Query COpygroup [CopyGroupName] [Type=Backup]' (defaults to Backup type copy group) Copy group, delete 'DELete COpygroup DomainName PolicySet MgmtClass [Type=Backup|Archive]' Copy group, query 'Query COpygroup [CopyGroupName]' (defaults to Backup type copy group) COPy MGmtclass Server command to copy a management class within a policy set. (But a management class cannot be copied across policy domains or policy sets.) Syntax: 'COPy MGmtclass DomainName SetName FromClass ToClass' Then use 'UPDate MGmtclass' and other UPDate commands to tailor the copy. Note that the new name does not make it into the Active policy set until you do an ACTivate POlicyset. Copy Storage Pool A special storage pool, consisting of serial volumes (tapes) whose purpose is to provide space to have a surety backup of one or more levels in a standard Storage Pool hierarchy. The Copy Storage Pool is employed via the 'BAckup STGpool' command (q.v.). There cannot be a hierarchy of Copy Storage Pools, as can be the case with Primary Storage Pools. Be aware that making such a Copy results in that much more file information being tracked in the database...about 200 bytes for each file copy in a Copy Storage Pool, which is added to the file's existing database entry rather than creating a separate entry. Copy Storage Pools are typically not collocated because it would mean a mount for every collocated node or file system, which could be a lot.
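The "about 200 bytes for each file copy" figure above lends itself to a quick capacity estimate; a sketch, where the file-copy count is a made-up example:

```shell
# Back-of-envelope estimate of additional database space consumed by a
# copy storage pool, using the ~200 bytes per file copy cited above.
# The 5,000,000 file-copy count is hypothetical.
files=5000000
bytes=$((files * 200))
echo "approx $((bytes / 1024 / 1024)) MB of additional database space"
# prints: approx 953 MB of additional database space
```

The same arithmetic applies per copy storage pool, so two copy pools roughly double the database cost.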
Note that there is no way to readily migrate copy storage pool data, as for example when you want to move to a new tape technology and want to transparently move (rather than copy) the current data. Ref: Admin Guide topic Estimating and Monitoring Database and Recovery Log Space Requirements Copy Storage Pool, define See: DEFine STGpool (copy) Copy Storage Pool, delete node data You cannot directly delete a node's data from a copy storage pool; but you can circuitously effect it by using MOVe NODEdata to shift the node's data to separate tapes in the copy stgpool (temporarily changing the stgpool to COLlocate=Yes), and then doing DELete Volume on the newly written volumes. Copy storage pool, files not in Invoke 'Query CONtent' command with COPied=No to detect files which are not yet in a copy storage pool. Copy Storage Pool, moving data You don't: if you move the primary storage pool data to another location you should have done a 'BAckup STGpool' which will create a content-equivalent area, whereafter you can delete the volumes in the old Copy Storage Pool and then delete the old Copy Storage Pool. Note that neither the 'MOVe Data' command nor the 'MOVe NODEdata' command will move data from one Copy Storage Pool to another. Copy Storage Pool, restore files directly from Yes: if the primary storage pool is unavailable or one of its volumes is destroyed, data can be obtained directly from the copy storage pool. Ref: TSM Admin Guide chapter 8, introducing the Copy Storage Pool: ...when a client attempts to retrieve a file and the server detects an error in the file copy in the primary storage pool, the server marks the file as damaged. At the next attempt to access the file, the server obtains the file from a copy storage pool. Ref: TSM Admin Guide, chapter Protecting and Recovering Your Server, Storage Pool Protection: An Overview... "If data is lost or damaged, you can restore individual volumes or entire storage pools from the copy storage pools.
TSM tries to access the file from a copy storage pool if the primary copy of the file cannot be obtained for one of the following reasons: - The primary file copy has been previously marked damaged. - The primary file is stored on a volume that is UNAVailable or DEStroyed. - The primary file is stored on an offline volume. - The primary file is located in a storage pool that is UNAVailable, and the operation is for restore, retrieve, or recall of files to a user, or export of file data." Copy Storage Pool, restore volume from 'RESTORE Volume ...' Copy Storage Pool & disaster recovery The Copy Storage Pool is a secondary recovery vehicle after the Primary Storage Pool, and so the Copy Storage Pool is rarely collocated for optimal recovery as the Primary pool often is. This makes for a big contention problem in disaster recovery, as each volume may be in demand by multiple restoral processes due to client data intermingling. A somewhat devious approach to this problem is to define the Devclass for the Copy Storage Pool with a FORMAT which disables data compression by the tape drive, thus using more tapes, and hence reducing the possibility of collision. Consider employing multiple management classes and primary storage pools with their own backup storage pools to distribute data and prevent contention at restoral time. If you have both high and low density drives in your library, use the lows for the Copy Storage Pool. Or maybe you could use a Virtual Tape Server, which implicitly stages tape data to disk. Copy Storage Pool volume damaged If a volume in a Copy Storage Pool has been damaged - but is not fully destroyed - try doing a Move Data first, to rebuild the data, rather than just deleting the volume and doing a fresh BAckup STGpool. Why?
If you did the above and then found the primary storage pool volume also bad, you would have unwittingly deleted your only copies of the data, which could have been retrieved from that partially readable copy storage pool volume. So it is most prudent to preserve as much as possible first, before proceeding to try to recreate the remainder. Copy Storage Pool volume destroyed If a volume in a Copy Storage Pool has been destroyed, the only reasonable action is to make this known to ADSM by doing 'DELete Volume' and then do a fresh 'BAckup STGpool' to effectively recreate its contents on another volume. (Note that Copy Storage Pool volumes cannot be marked DEStroyed.) Copy Storage Pools current? The Auditocc SQL table allows you to quickly determine if your Copy Storage Pools have all the data in the Primary Storage Pools, by comparing: BACKUP_MB to BACKUP_COPY_MB ARCHIVE_MB to ARCHIVE_COPY_MB SPACEMG_MB to SPACEMG_COPY_MB If the COPY value is higher, it indicates that you have the same data in multiple Copy Storage Pools, as in an offsite pool. COPY_TYPE Column in VOLUMEUSAGE SQL table denoting the types of files: BACKUP, ARCHIVE, etc. Copygroup See: Copy Group COPYSTGpools TSM 5.1+ feature providing the possibility to simultaneously store a client's files into each copy storage pool specified for the primary storage pool where the client's files are written. The simultaneous write to the copy pools only takes place during backup or archive from the client. In other words, when the data enters the storage pool hierarchy. It does not take place during data migration from an HSM client nor on a LAN-free backup from a Storage Agent. Naturally, if your storage pools are on tape, you will need a tape drive for the primary storage pool action and the copy storage pool action: 2 drives. Your mount point usage values must accommodate this.
Maximum length of the copy pool name: 30 chars Maximum number of copy pool names: 10, separated by commas (no intervening spaces) This option is restricted to only primary storage pools using NATIVE or NONBLOCK data format. The COPYContinue parameter may also be specified to further govern operation. Note: The function provided by COPYSTGpools is not intended to replace the BACKUP STGPOOL command. If you use the COPYSTGpools parameter, continue to use BACKUP STGPOOL to ensure that the copy storage pools are complete copies of the primary storage pool. There are cases when a copy may not be created. COUNT(*) SQL statement to yield the number of rows satisfying a given condition: the number of occurrences. There should be as many elements to the left of the count specification as there are specified after the GROUP BY, else you will encounter a logical specification error. Example: SELECT OWNER,COUNT(*) AS "Number of files" FROM ARCHIVES GROUP BY OWNER SELECT NODE_NAME,OWNER,COUNT(*) AS "Number of files" FROM ARCHIVES GROUP BY NODE_NAME,OWNER See also: AVG; MAX; MIN; SUM COUrier DRM media state for volumes containing valid data and which are in the hands of a courier, going offsite. Their next state should be VAULT. See also: COURIERRetrieve; MOuntable; NOTMOuntable; VAult; VAULTRetrieve COURIERRetrieve DRM media state for volumes empty of data, which are being retrieved by a courier. Their next state should be ONSITERetrieve. See also: COUrier; MOuntable; NOTMOuntable; VAult; VAULTRetrieve CPIC Common Programming Interface Communications. .cpp Name suffix seen in some messages. Refers to a C++ programming language source module. CRC Cyclic Redundancy Checking. Available as of TSM 5.1: provides the option of specifying whether a cyclic redundancy check (CRC) is performed during a client session with the server, or for storage pools. The server validates the data by using a cyclic redundancy check which can help identify data corruption. 
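As a generic illustration of what CRC validation buys (using the POSIX cksum command, which is not TSM's internal CRC implementation):

```shell
# Generic CRC demonstration, unrelated to TSM internals: any change to
# the stored bytes yields a different checksum, which is how a CRC
# recorded at store time can later expose corruption.
printf 'stored object data' > obj
good=$(cksum < obj | awk '{print $1}')
printf 'stored object dama' > obj      # simulate on-media corruption
now=$(cksum < obj | awk '{print $1}')
if [ "$good" != "$now" ]; then
  echo "CRC mismatch: data corrupted"
fi
rm -f obj
```

The principle is the same in TSM: the checksum computed when the data was stored is compared against one computed when the data is read back.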
The CRC values are validated when AUDit Volume is performed and during restore/retrieve processing, but not during other types of data movement (e.g., migration, reclamation, BAckup STGpool, MOVe Data). It is important to realize that the CRC values are stored when the data first enters TSM, via Backup or Archive, to be stored in a storage pool which has CRCdata specified. The CRC info is thereby stored with the data and is associated with it for the life of that data in the TSM server, and will move with the data even if the data is moved to a storage pool where CRC recording is not in effect. Likewise, if data was not originally stored with CRC, it will not attain CRC if moved into a CRCed storage pool. Activated: VALIdateprotocol of DEFine SERver; CRCData operand of DEFine STGpool; REGister Node VALIdateprotocol operand; Verified: "Validate Protocol" value in Query SERver; "Validate Data?" value in Query STGpool Ref: IBM site TechNote 1143615 Cristie Bare Machine Recovery IBM-sponsored complementary product for TSM: A complete system recovery solution that allows complete recovery of a machine from normal TSM backups. http://www.ibm.com/software/tivoli/products/storage-mgr/cristie-bmr.html Cross-client restoral See: Restore across clients Cross-node restoral See: Restore across clients CSQryPending Verb type as seen in ANR0444W message. Reflects client-server query for pending scheduled tasks. CST See: Cartridge System Tape See also: ECCST; HPCT; Media Type CST-2 Designation for 3490E (q.v.). Ctime and backups The "inode change time" value (ctime) reflects when some administrative action was performed on a file, as in chown, chgrp, and like operations. When ADSM Backup sees that the ctime value has changed, it will back up the file again. This can be problematic for HSM-managed files, in that such backup requires copying from tape to tape, and there may be too few drives available during the height of nightly backups, which could cause the backup to fail then.
So try to avoid mass chgrp and like operations on HSM-managed files. CURRENT_DATE SQL: Should be the current date, like "2001-09-01". But in ADSM 3.1.2.50, the month number was one more than it should be. Examples: SELECT CURRENT_DATE FROM LOG SELECT * FROM ACTLOG WHERE DATE(DATE_TIME)=CURRENT_DATE See also: Set SQLDATETIMEformat CURRENT_TIME SQL: The current time, in HH:MM:SS format. See also: Set SQLDATETIMEformat CURRENT_TIMESTAMP SQL: The current date and time, like YYYY-MM-DD HH:MM:SS or YYYYMMDDHHMMSS. See also: Set SQLDATETIMEformat CURRENT_USER SQL: Your administrator userid, in upper case. D2D Colloquialism for Disk-to-Disk, as in a disk backup scheme where the back store is disk rather than tape. D2D backup Really an ordinary backup, where the TSM server primary storage pool is of random access devtype DISK rather than serial access FILE or one of the various tape drive types. See also: DISK D2T Colloquialism for Disk-to-Tape, as in a disk backup scheme where the back store is tape - the traditional backup medium. Damaged files These are files in which the server found errors when a user attempted to restore, retrieve, or recall the file; or when an 'AUDit Volume' is run, with resulting Activity Log message like: "ANR2314I Audit volume process ended for volume 000185; 1 files inspected, 0 damaged files deleted, 1 damaged files marked as damaged." TSM knows when there is a copy of the file in the Backup Storage Pool, from which you may recover the file via 'RESTORE Volume', if not 'RESTORE STGpool'. If the client attempts to retrieve a damaged file, the TSM server knows that the file may instead be obtained from the copy stgpool and so goes there. The marking of a file as Damaged will not cause the next client backup to again back up the file, given that the supposed damage may simply be a dirty tape drive.
Doing an AUDit Volume Fix=Yes on a primary storage pool volume may cause the file to be deleted therefrom, and the next backup to store a fresh copy of the file into that storage pool. Msgs: ANR0548W See also: Destroyed Damaged files, list from server 'Query CONtent VolName ... DAmaged=Yes' (Interestingly, there is no "Damaged" column available to customers in the Contents table in the TSM SQL database.) DAT Digital Audio Tape, a 4mm format which, like 8mm, has been exploited for data backup use. It is a relatively fragile medium, intended more for convenience than continuous use. Note that *SM Devclass refers to this device type as "4MM" rather than "DAT". A DDS cartridge should be retired after 2000 passes, or 100 full backups. A DDS drive should be cleaned every 24 hours of use, with a DDS cleaning cartridge. Head clogging is relatively common. Recording formats: DDS2 and DDS3 (Digital Data Storage). DDS2 - for DDS2 format without compression DDS2C - for DDS2 with hardware compression DDS3 - for DDS3 format without compression DDS3C - for DDS3 format with hardware compression Data access control mode One of four execution modes provided by the 'dsmmode' command. Execution modes allow you to change the space management related behavior of commands that run under dsmmode. The data access control mode controls whether a command can access a migrated file, sees a migrated file as zero-length, or receives an input/output error if it attempts to access a migrated file. See also execution mode. Data channel In a client Backup session, the part of the session which actually performs the data backup. Contrast with: Producer session See: Consumer session Data mover A named device that accepts a request from TSM to transfer data and can be used to perform outboard copy operations. As used with a Network Attached Storage (NAS) file server. Related: REGISTER NODE TYPE=NAS Data ONTAP Microkernel operating system in NetApp systems.
Data Protection Agents Tivoli name for the Connect Agents that were part of ADSM. More common name: TDP (Tivoli Data Protection). The TDPs are specialized programs based upon the TSM API to back up a specialized object, such as a commercial database, like Oracle. As such, the TDPs typically also employ an application API so as to mingle within an active database, for example. You can download the TDP software from the TSM web site, but you additionally need a license and license file for the software to work. See also: TDP Data thread In a client Backup session, the part of the session which actually performs the data backup. Contrast with: Producer session See: Consumer session Data transfer time Statistic in a Backup report: the total time TSM requires to transfer data across the network. Transfer statistics may not match the file statistics if the operation was retried due to a communications failure or session loss. The transfer statistics display the bytes attempted to be transferred across all command attempts. Beware that if this value is too small (as when sending a small amount of data) then the resulting Network Data Transfer Rate will be skewed, reporting a higher number than the theoretical maximum. Look instead to the Elapsed time, to compute sustained throughput. Ref: Backup/Archive Client manual, "Displaying Backup Processing Status". Database The TSM Database is a proprietary database, governing all server operations and containing a catalog of all stored file system objects. All data storage operations effectively go through the database. The TSM Database contains: - All the administrative definitions and client passwords; - The Activity Log; - The catalog of all the file system objects stored in storage pools on behalf of the clients; - The names of storage pool volumes; - In a No Query Restore, the list of files to participate in the restoral; - Digital signatures as used in subfile backups. Named in dsmserv.dsk, as used when the server starts. 
(See "dsmserv.dsk".) Customers may perform database queries via the SELECT command (q.v.) and via the ODBC interface. The TSM database is dedicated to the purposes of TSM operation. It is not a general purpose database for arbitrary use, and there is no provided means for adding or thereafter updating arbitrary data. Why a proprietary db, and not something like DB2? Well, in the early days of ADSM, DB2's platform support was limited, so this product-specific, universal database was developed. It is also the case that this db is optimized for storage management operations in terms of schema and locking. But the problem with the old ADSM db is that it is very limited in features, and so a DB2 approach is being re-examined. See also: Database, space taken for files; DEFine SPACETrigger; ODBC Database, back up Perform via ADSM server command 'BAckup DB' (q.v.). To back up to a 3590 tape in the 3494, choose a tape which is not already defined to a storage pool. Note that there is no query command to later directly reveal which tape a database backup was written to: you have to do 'Query VOLHistory Type=DBBackup'. Database, back up unconventionally An unorthodox approach for supporting point-in-time restorals of the ADSM database that came to mind would be to employ standard *SM database mirroring and at an appointed time do a Vary Off of the database volume(s), which can then be image-copied to tape, or even be left as-is, with a replacement disk area put into place (Vary On) rotationally. In this way you would never have to do a Backup DB again.
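The image-copy step of that rotational-mirror idea might be sketched as follows, with a dummy file standing in for a varied-off mirror volume (the VARy steps happen inside the server, so they appear only as comments):

```shell
# Sketch of the rotational-mirror scheme above. A small dummy file
# stands in for the (much larger) database mirror volume.
# In dsmadmc, first:  VARy OFFline <mirror volume>
printf 'database mirror bytes' > dbmirror2          # stand-in volume
dd if=dbmirror2 of=dbmirror2.img bs=512 2>/dev/null  # image copy to safe media
cmp -s dbmirror2 dbmirror2.img && echo "point-in-time image captured"
# In dsmadmc, then:  VARy ONline <mirror volume>
# (or swap in a fresh disk area rotationally, per the idea above)
rm -f dbmirror2 dbmirror2.img
```

In real use the image would go to tape rather than a local file, and the copy is only valid if the volume stays varied off for the duration.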
Database, back up to a scratch 3590 tape in the 3494 Perform like the following example: 'BAckup DB DEVclass=OURLIBR.DEVC_3590 Type=Full' Database, back up to a specific 3590 tape in the 3494 Perform like the following example: 'BAckup DB DEVclass=OURLIBR.DEVC_3590 Type=Full VOLumenames=000049 Scratch=No' Database, "compress" See: dsmserv UNLOADDB (TSM 3.7) Database, content and compression The TSM Server database has a b-tree organization with internal references to index nodes and siblings. The database grows sequentially from the beginning to end, and pages that are deleted internally are re-used later when new information is added. The only utility that can compress the database so that "gaps" of deleted pages are not present is the database dump/load utility. After extensive database deletions, due to expiration processing or filespace/volume delete processing, pages in the midst of the database space may become free, but pages closer to the beginning or end of the database remain allocated. To reduce the size of your database, sufficient free pages must exist at the end of the linear database space that is allocated over your database volumes. A database dump followed by a load will remove free pages from the beginning of the database space to minimize free space fragmentation and may allow the database size to be reduced. Database, convert second primary volume to volume copy (mirror) 'REDuce DB Nmegabytes' 'DELete DBVolume 2ndVolName' 'DEFine DBCopy 1stVolName 2ndVolName' Database, create 'dsmfmt -db /adsm/DB_Name Num_MB' where the final number is the desired size for the database, in megabytes, and is best defined in 4MB units, in that 1 MB more (the LVM Fixed Area, as seen with SHow LVMFA) will be added for overhead if a multiple of 4MB, else more overhead will be added. For example: to allocate a database of 1GB, code "1024": ADSM will make it 1025. Database, defragment See: dsmserv UNLOADDB (TSM 3.7) Database, defragment? 
You can gauge how much your TSM database is fragmented by doing Query DB and comparing the Pct Util against the Maximum Reduction: a "compacted" database with a modest utilization will allow a large reduction, but a "fragmented" one will be much less reducible. Database, delete table entry See: Backup files, delete; DELRECORD; File, selectively delete from *SM storage Database, designed for integrity The design of the database updating for ADSM uses 2-phase commit, allowing recovery from hardware and power failures with a consistent database. The ADSM Database is composed of 2 types of files, the DB and the LOG, which should be located on separate volumes. Updates to the DB are grouped into transactions (a set of updates). A 2-phase commit scheme works the following way; for the discussion, assume we modify DB pages 22, 23: 1) start transaction 2) read 22 from DB and write to LOG 3) update 22' in DB and write 22' to log 4) same as 2), 3) for page 23 5) commit 6) free LOG space Database, empty If you just formatted the database and want to start fresh with ADSM, you need to access ADSM from its console, via SERVER_CONSOLE mode (q.v.). From there you can register administrators, etc., and get started. Database, enlarge You can extend the space which may be used within database "volumes" (actually, files) by using the 'EXTend DB' command. If your existing files are full, you *cannot* extend the files themselves: they are fixed in size. Instead, you have to add a volume (file), as follows: - Create and format the physical file by doing this from AIX: 'dsmfmt -db /adsm/dbext1 100' which will create a 101 MB file, with 1 MB added for overhead. - Define the volume (file) to ADSM: 'DEFine DBVolume /adsm/dbext1' The space will now show up in 'Query DBVolume' and 'Query DB', but will not yet be available for use. - Make the space available: 'EXTend DB 100' Note that doing this may automatically trigger a database backup, with message ANR4552I, depending. 
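The six numbered steps above can be illustrated with a toy write-ahead-log sketch (purely illustrative Python, not the server's actual implementation), showing why a crash before the commit point leaves a recoverable database:

```python
# Toy illustration of the 2-phase-commit scheme described above.
# `db` is a plain dict standing in for DB pages on disk; the `log`
# holds old page images so a crash mid-transaction can be undone.

def update_pages(db: dict, updates: dict) -> None:
    log = {}
    # Steps 2-4: for each page, save the OLD image to the LOG,
    # then write the new image to the DB.
    for page, new_value in updates.items():
        log[page] = db.get(page)      # old image -> LOG
        db[page] = new_value          # new image -> DB
    # Step 5: commit. Had we crashed before reaching this point,
    # recovery would copy the old images in `log` back into the DB,
    # undoing the partial transaction.
    log.clear()                       # step 6: free LOG space

db = {22: "old22", 23: "old23"}
update_pages(db, {22: "new22", 23: "new23"})
```

The essential property is that the old page images reach the log before the new images overwrite them in the database, so either the whole transaction survives or none of it does.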
Database, extend usable space 'EXTend DB N_Megabytes' The extension is a physical operation, so a shell "filesize" limit could disrupt the operation. Note that doing this may automatically trigger a database backup, with message ANR4552I, depending. Database, maximum size Per APAR IC15376, the ADSM database should not exceed 500 GB. Per the TSM 5.1 Admin Guide: 530 GB. Ref: Server Admin Guide, Increasing the Size of the Database or Recovery Log topic, in Notes. See: SHow LVMFA, which reveals that the max is actually 531.2 GB. (See the reported "Maximum possible DB LP Table size".) See also: Volume, maximum size Database, mirror See: MIRRORRead LOG Database, mirror, create Define a volume copy via: 'DEFine DBCopy Db_VolName Copy_VolName' Then you can do an 'EXTend DB N_Megabytes' (which will automatically kick off a full database backup). Database, mirror, delete 'DELete DBVolume Db_VolName' (It will be almost instantaneous) Message: ANR2243I Database, number of filespace objects See: Objects in database Database, query 'Query DB [Format=Detailed]' Database, rebuild from storage pool tapes? No: in a disaster situation, the ADSM server database *cannot* be rebuilt from the data on the storage pool tapes, because the tape files have meaning only per the database contents. Database, reduce by duress Sometimes you have to minimize the size of your database in order to relocate it or the like, but can't Reduce DB sufficiently as it sits. If so, try: - Prune all but the most recent Activity Log entries. - Delete any abandoned or useless filespaces to make room. (Q FI F=D will help you find those which have not seen a backup in many a day, but watch out for those that are just Archive type.) - Delete antique Libvol entries. - If still not enough space, an approach you could possibly use would be to Export and delete any dormant node data, to Import after you have moved the db, to bring that data back. 
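The dsmfmt overhead rule described above (1 MB of LVM Fixed Area added when the requested size is a multiple of 4 MB) can be approximated as follows; the handling of non-multiples is an assumption, since the text only says "more overhead will be added":

```python
def allocated_db_mb(requested_mb: int) -> int:
    """Approximate the on-disk size of a dsmfmt-created DB volume.

    Per the rule above: sizes that are a multiple of 4 MB get exactly
    1 MB of LVM Fixed Area overhead. Other sizes are rounded up to the
    next 4 MB boundary first - an assumption, as the text only notes
    that more overhead is added in that case.
    """
    usable = requested_mb if requested_mb % 4 == 0 else (requested_mb // 4 + 1) * 4
    return usable + 1

# The documented examples: 'dsmfmt -db ... 100' yields a 101 MB file,
# and asking for 1 GB (1024 MB) yields 1025 MB.
print(allocated_db_mb(100))   # 101
print(allocated_db_mb(1024))  # 1025
```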
Database, reduce space utilized You can end up with a lot of empty space in your database volumes. If you need to reclaim, you can employ the technique of successively adding a volume to the database and then deleting the oldest volume, until all the original volumes have been treated. This will consolidate the data, and can be done while *SM is up. Note that free space within the database is a good thing, for record expansion. Database, remove volume 'DELete DBVolume Db_VolName' That starts a process to migrate data from the volume being deleted to the remaining volumes. You can monitor the progress of that migration by doing 'q dbv f=d'. Database, reorganize See: dsmserv UNLOADDB (TSM 3.7) Database, space taken per client node This is difficult to determine (and no one really cares, anyway), but here's an approach: The Occupancy info provides the number of filespace objects, by type, in primary and copy storage pools. The Admin Guide topic "Estimating and Monitoring Database and Recovery Log Space Requirements" provides numbers for space utilized. The product of the two would yield an approximate number. Database, space taken for files From Admin Guide chapter Managing the Database and Recovery Log, topic Estimating and Monitoring Database and Recovery Log Space Requirements: - Each version of a file that ADSM stores requires about 400 to 600 bytes of database space. (This is an approximation which anticipates average usage. Consider that for Archive files, the Description itself can consume up to 255 chars, or contribute less if not used.) - Each cached or copy storage pool copy of a file requires about 100 to 200 bytes of database space. - Overhead could increase the required space up to an additional 25%. These are worst-case estimations: the aggregation of small files will substantially reduce database requirements. Note that space in the database is used from the bottom, up. 
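The estimation arithmetic above can be expressed as a few lines of Python (a rough sketch using the mid-points of the quoted ranges; the function name and defaults are illustrative, not from TSM):

```python
def estimate_db_mb(versions: int, copies: int,
                   bytes_per_version: int = 500, bytes_per_copy: int = 150,
                   overhead: float = 0.25) -> float:
    """Rough TSM database space estimate, in MB.

    Uses the mid-points of the ranges quoted above (400-600 bytes per
    stored file version, 100-200 bytes per cached/copy-pool copy) and
    adds the full 25% overhead. A worst-case figure: aggregation of
    small files substantially reduces the real requirement.
    """
    raw_bytes = versions * bytes_per_version + copies * bytes_per_copy
    return raw_bytes * (1 + overhead) / (1024 * 1024)

# e.g. 10 million versions, each with one copy-pool copy:
print(round(estimate_db_mb(10_000_000, 10_000_000)))  # roughly 7.5 GB
```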
Ref: Admin Guide: Estimating and Monitoring Database and Recovery Log Space Requirements. Database, verify and fix errors See: 'DSMSERV AUDITDB' Database allocation on a disk For optimal performance and minimal seek times: - Use the center of a disk for TSM space. This means that the disk arm is never more than half a disk away from the spot it needs to reach to service TSM. - You could then allocate one biggish space straddling the center of the disk; but if you instead make it two spaces which touch at the center of the disk, you gain benefit from TSM's practice of creating one thread per TSM volume, so this way you can have two and thus some parallelism. Database Backup To capture a backup copy of the ADSM database on serial media, via the 'BAckup DB' command. Database backups are not portable across platforms - they were not designed to be so - and include a lot of information that is platform specific: use Export/Import to migrate across platforms. By using the ADSMv3 Virtual Volumes capability, the output may be stored on another ADSM server (electronic vaulting). See also: dsmserv RESTORE DB Database backup, latest SELECT DATE_TIME AS - "DATE TIME ",TYPE, - MAX(BACKUP_SERIES),VOLUME_NAME FROM - VOLHISTORY WHERE TYPE='BACKUPFULL' OR - TYPE='BACKUPINCR' Database backup, query volumes 'Query VOLHistory Type=DBBackup'. The timestamp displayed is when the database backup started, rather than finished. Another method: 'Query DRMedia DBBackup=Yes COPYstgpool=NONE' Note that using Query DRMedia affords you the ability to very selectively retrieve info, and send it to a file, even from a server script. Database backup, delete all 'DELete VOLHistory TODate=TODAY TOTime=NOW Type=DBBackup' (Note that TSM will not allow you to delete your last database backup, for safety reasons. You can circumvent this, and free a "trapped" tape, by doing a placebo db backup to devclass type File.) Database backup in progress? 
Do 'Query DB Format=Detailed' and look at "Backup in Progress?". Database backup trigger, define See: DEFine DBBackuptrigger Database backup trigger, query 'Query DBBackuptrigger [Format=Detailed]' Database backup volume Do 'Query VOLHistory Type=DBBackup', if the ADSM server is up, or 'Query OPTions' and look for "VolumeHistory". If ADSM is down, you can find that information in the file specified on the "VOLUMEHistory" definition in the server options file (dsmserv.opt). See "DSMSERV DISPlay DBBackupvolumes" for displaying information about specific volumes when the volume history file is unavailable. See "DSMSERV RESTORE DB Preview=Yes" for displaying a list of the volumes needed to restore the database to its most current state. Database backup volume, pruning If you do not have DRM: Use 'DELete VOLHistory TODate=SomeDate TOTime=SomeTime Type=DBBackup' to manage the number of database backups to keep. If you have DRM: 'Set DRMDBBackupexpiredays __' Database backup volumes, identifying Seek "BACKUPFULL" or "BACKUPINCR" in the current volume history backup file - a handy way to find them, without having to go into ADSM. Or perform server query: select volume_name from volhistory - where (upper(type)='BACKUPFULL' or - upper(type)='BACKUPINCR') Database backup volumes, identifying historical Unfortunately, when a 'DELete VOLHistory' is performed the volsers of the deleted volumes are not noted. But you can get them two other ways: 1. Have an operating system job capture the volsers of the BACKUPFULL, BACKUPINCR volumes contained in the volume history backup file (named in the server VOLUMEHistory option) before and after the db backup, then compare. 2. Do 'Query ACtlog BEGINDate=-N MSGno=1361' to pick up the historical volsers of the db backup volumes at backup completion to check against those no longer in the volume history. Database backups (Oracle, etc.) Done with TSM via the Tivoli Data Protection (TDP) products. 
See: TDP See also: Adsmpipe Database buffer pool size, define "BUFPoolsize" definition in the server options file. Database buffer pool statistics, reset 'RESet BUFPool' Database change statistics since last backup 'Query DB Format=Detailed' Database consumption factors - All the administrative definitions are here; eliminate what is no longer needed. - The Activity Log is contained in the database: control amount retained via 'Set ACTlogretention N_Days'. The Activity Log also logs administrator commands, Events, client session summary statistics, etc., which you may want to limit. - The database is at the mercy of client nodes or their filespaces being abandoned, and client file systems and disks being renamed such that obsolete filespaces consume space. - Volume history entries consume some space: eliminate what's obsolete via 'DELete VOLHistory'. - More than anything, the number of files cataloged in the database consumes the most space, and your Copy Group retention policies govern the amount kept. Nodes which have a sudden growth in file system files will inflate the db via Backup. See: "Many small files" problem - Restartable Restores consume space in that the server is maintaining state information in the database (the SQL RESTORE table). Generally control via server option RESTOREINTERVAL, and reclaim space from specific restartable restores via the server command CANCEL RESTORE. Also, during such a restore the server will need extra database space to sort filenames in its goal to minimize tape mounts during the restoral, and so there will be that surge in usage. - Complex SELECT operations will require extra database space to work the operation. - When you Archive a file, the directory containing it is also archived. When the -DEscription="..." 
option is used, to render the archived file unique, it also causes the archived directory to be rendered unique, and so you end up with an unexpectedly large number of directories in the *SM database, even though they are all effectively duplicates in terms of path. - The size of the Aggregate in Small Files Aggregation is also a factor: the more small files in an aggregate, the lower the overhead in database cataloging. As the 3.1 Technical Guide puts it, "The database entries for a logical file within an aggregate are less than entries for a single physical file." See: Aggregate - Make sure that clients are not running Selective backups or Archives on their file systems (i.e., full backups) routinely instead of Incremental backups, as that will rapidly inflate the database. Likewise, be very careful of coding MODE=ABSolute in your Copy Group definitions. - Talk to client administrators about excluding useless files from backup, like temp directories and web browser cache files. - Make sure that 'EXPIre Inventory' is being run regularly - and that it gets to run to completion. Note that API-based clients, such as the TDP series and HSM, require their own, separate expiration handling: failing to do that will result in data endlessly piling up in the storage pools and database. - Not using the DIRMc option can result in directories being needlessly retained after their files have expired, in that the default is for directories to bind to the management class with the longest retention period (RETOnly). - Realize that long-lived data that was stored in the server without aggregation will be output from reclamation likewise unaggregated, thus using more database space than if it were aggregated. (See: Reclamation) - With the Lotus Notes Agent, *SM is cataloging every document in the Notes database (.NSF file). - Beware the debris left around from the use of DEFine CLIENTAction (q.v.). - Windows System Objects are large and consist of thousands of files. 
- Wholesale changes of ACLs (Access Control Lists) in a file system may cause all the files to be backed up afresh. - Daylight Savings Time transitions can cause defective TSM software to back up every file. - Use of DISK devclass volumes can use more db space. (See Admin Guide table "Comparing Random Access and Sequential Access Disk Devices".) In that the common cause of db growth is file deluge from a client node, simple ways to inspect are: produce a summary of recent *SM accounting records; harvest session-end ANE* records from the Activity Log; and do a Query Content with a negative count value on recently written storage pool tapes. (Ideally, you should be running accounting record summaries on a regular basis as a part of system management.) Database file It is named within file: /usr/lpp/adsmserv/bin/dsmserv.dsk (See "dsmserv.dsk".) Database file name (location) Is defined within file: /usr/lpp/adsmserv/bin/dsmserv.dsk (See "dsmserv.dsk".) The name gets into that file via 'DEFine DBVolume' (not by dsmfmt). ADSM seems to store the database file name in the ODM, in that if you restart the server with the name strings within dsmserv.dsk changed, it will still look for the old file names. Database file name, determine 'Query DBVolume [Format=Detailed]' Database filling indication Activity log will contain message ANR0362W when utilization exceeds 80%. Database fragmentation, gauge Try the following to report: SELECT CAST((100 - ( CAST(MAX_REDUCTION_MB AS FLOAT) * 256 ) / (CAST(USABLE_PAGES AS FLOAT) - CAST(USED_PAGES AS FLOAT) ) * 100) AS DECIMAL(4,2)) AS PERCENT_FRAG FROM DB Database full indication ANR0131E diagnosticid: Server DB space exhausted. Database growth See: Database consumption factors Database location See "Database file name" Database log pages, mode for reading, define "MIRRORRead DB" definition in the server options file. Database log pages, mode for writing, define "MIRRORWrite DB" definition in the server options file. 
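The fragmentation SELECT above can be cross-checked outside the server: with the 4 KB database page size there are 256 pages per MB, so the reducible pages are MAX_REDUCTION_MB * 256, and the fragmentation percentage is the share of free pages that are NOT reclaimable at the end of the volume. A Python rendering of the same arithmetic (illustrative values only):

```python
def db_percent_frag(max_reduction_mb: int, usable_pages: int,
                    used_pages: int) -> float:
    """Same arithmetic as the SELECT above: with 4 KB pages there are
    256 pages per MB, so MAX_REDUCTION_MB * 256 is the count of free
    pages reclaimable from the end of the database space; the rest of
    the free pages are 'trapped' in the middle, i.e. fragmentation."""
    free_pages = usable_pages - used_pages
    return 100.0 - (max_reduction_mb * 256 / free_pages) * 100.0

# Example: 102400 usable pages (400 MB), 51200 used, and Query DB
# reporting a Maximum Reduction of only 100 MB -> half the free
# space is trapped mid-database: 50% fragmentation.
print(db_percent_frag(100, 102400, 51200))  # 50.0
```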
Database max utilization stats, reset 'RESet DBMaxutilization' Resets the Max. Pct Util number, which is seen in a 'Query DB', to be the same as the current Pct Util value. Database page size 'Query DB Format=Detailed', "Page Size (bytes):" Currently: 4096 Database performance - Locate the database on disks which are separate from other operating system services, and choose fast disks and connection methods (like Ultra SCSI). - Spread over multiple physical volumes (disks) rather than consolidating on a single large volume: TSM gives a process thread to each volume, so performance can improve through parallelism. And, of course, you always benefit by having more disk arms to access data. - Avoid RAID striping, as this will slow performance. (Striping is for distributing I/O across multiple disks. This slows down db operations because striping involves a relatively costly set-up overhead to get multiple disk working together to handle the streaming type writing of a lot of data. DB operations constitute many operations involving small amounts of data, and thus the overhead of striping is detrimental.) - Do 'Query DB F=D' and look at the Cache Hit Pct. The value should be up around 98%. If less, consider boosting the server BUFPoolsize option. - Assure that the server system has plenty of real memory so as to avoid paging in serving database needs. See also: Server performance Database robustness The *SM database is private to the product. Unfortunately, it is not a robust database, and as long as it remains proprietary it will likely be the product's Achilles heel. Running multiple, simultaneous, intense database-updating operations (Delete Filespace, Delete Volume) has historically caused problems, including database deadlocks, server crashes, and even database damage. AVOID DOING SO!! Database size issues See: Database consumption factors Database space utilization issues So your database seems bloated. Is there something you can do? 
The ADSM database will inevitably grow with the number of files being backed up and the number of backup versions retained and their retention periods. Beyond the usual, the following are pertinent to database space utilization: - Make sure you are running expiration regularly. - The Activity Log is in the database. Examine your 'Set ACTlogretention' value and look for runaway errors that may have consumed much space. - Look for abandoned File Spaces, the result of PC users renaming their disks or file systems and then doing backups under the new name. - Volume History information tends to be kept forever: you need to periodically run 'DELete VOLHistory'. And with that command you should also be deleting old DBBackup volumes to reclaim tapes. - Using verbose descriptions for Archive files will eat space. (Each can be up to 255 chars.) - Consider coercing client systems to exclude rather useless files from backups, such as temp files and web browser cache files. Database space required for HSM files Figure 143 bytes + filename length. Database Space Trigger ADSM V3.1.2 feature which allows setting a trigger (%) and when reached, will dynamically create a new volume, define it to the database and extend the db. Database volume (file) Each database volume (file) contains info about all the other db and log files. See also: dsmserv.dsk Database volume, add 'DEFine DBVolume Vol_Ser' Database volume, delete 'DELete DBVolume Vol_Ser' Database volume, query 'Query DBVolume [VolName] [Format=Detailed]' Database volume, vary back on 'VARy ONline VolName' after message ANR0202W, ANR0203W, ANR0204W, ANR0205W. Always look into the cause before attempting to bring the possibly defective volume back. Database volume usage, verify If your *SM db volumes are implemented as OS files (rather than rlv's) you can readily inspect *SM's usage of them by looking at the file timestamps, as the time of last read and write will be thereby recorded. 
Databases, backing up Is performed via ADSM Connect Agents and TSM Data Protection (agents). For supported list, see the Clients software list (URL available at the bottom of this document). For others you'll have to seek another source. General note: Backing up active databases using simple incremental backup, from outside the database, is problematic because part of the database is on disk and part is in memory, and perhaps elsewhere (e.g., recovery log). Unlike a sequential file, which is updated either by appending to it or by replacing it, a database gets updated in random locations inside of it - often "behind" the backup utility, which is reading the database as a sequential file. Furthermore, many databases consist of multiple, interrelated files, and so it is impossible for an external backup utility to capture a consistent image of the data. Thus, it's advisable to back up databases using an API-based utility which participates in the database environment to back it up from the inside, and thus get a consistent and restorable image. Alternately, some database applications can themselves make a backup copy of the database, which can then be backed up via TSM incremental backup. Ref: redbook Using ADSM to Back Up Databases (SG24-4335) DATE SQL: The month-day-year portion of the TIMESTAMP value, of form MM/DD/YYYY. Sample usage: SELECT NODE_NAME, PLATFORM_NAME, - DATE(LASTACC_TIME) FROM NODES SELECT DATE(DATE_TIME) FROM VOLHISTORY - WHERE TYPE='BACKUPFULL' See also: TIMESTAMP Date, per server ADSM server command 'SHow TIME' (q.v.). See also: ACCept Date DATE_TIME SQL database column, as in VOLHISTORY, being a timestamp (date and time), like: 2001-07-30 09:30:07.000000 See also: CURRENT_DATE; DATE DATEformat, client option, query Do ADSM 'dsmc Query Option' or TSM 'show options' and look at the "Date Format" value. A value of 0 indicates that your opsys dictates the format. 
See also: TIMEformat DATEformat, client option, set Definition in the client user options file. Specifies the format by which dates are displayed by the *SM client. NOTE: Not usable with AIX or Solaris, in that they use NLS locale settings (see /usr/lib/nls/loc in AIX, and /usr/lib/localedef/src in Solaris). Do 'locale' in AIX to see its settings. "1" - format is MM/DD/YYYY (default) "2" - format is DD-MM-YYYY "3" - format is YYYY-MM-DD "4" - format is DD.MM.YYYY "5" - format is YYYY.MM.DD Default: 1 Query: ADSM 'dsmc Query Options' or TSM 'dsmc show options' and look at the "Date Format" value. A value of 0 indicates that your opsys dictates the format. Advisory: Use 4-digit year values. Various problems have been encountered when using 2-digit year values, such as Retrieve not finding files which were Archived using a RETV=NOLIMIT (so date past 12/31/99). DATEformat, server option, query 'Query OPTion' and look at the "DateFormat" value. DATEformat, server option, set Definition in the server options file. Specifies the format by which dates are displayed by the ADSM server (except for 'Query ACtlog' output, which is always in MM/DD/YY format). "1" - format is MM/DD/YYYY (default) "2" - format is DD-MM-YYYY "3" - format is YYYY-MM-DD "4" - format is DD.MM.YYYY "5" - format is YYYY.MM.DD Default: 1 Ref: Installing the Server... DAY(timestamp) SQL function to return the day of the month from a timestamp. See also: HOUR(); MINUTE(); SECOND() Day of week in Select See: DAYNAME Daylight Savings Time You should not have to do anything in TSM during a Daylight Savings Time transition: that should be handled by your computer operating system, and all applications running in the system will pick up the adjusted time. In a z/OS environment, see IBM site article swg21153685. See also: ACCept Date; NTFS and Daylight Savings Time DAYNAME(timestamp) SQL function to return the day of the week from a timestamp. Example: SELECT ... FROM ... 
WHERE DAYNAME(current_date)='Sunday' See also: HOUR(); MINUTE(); SECOND() DAYS SQL "labeled duration": a specific unit of time as expressed by a number (which can be the result of an expression) followed by one of the seven duration keywords: YEARS, MONTHS, DAYS, HOURS, MINUTES, SECONDS, or MICROSECONDS (q.v.). The number specified is converted as if it were assigned to a DECIMAL(15,0) number. A labeled duration can only be used as an operand of an arithmetic operator in which the other operand is a value of data type DATE, TIME, or TIMESTAMP. Thus, the expression HIREDATE + 2 MONTHS + 14 DAYS is valid, whereas the expression HIREDATE + (2 MONTHS + 14 DAYS) is not. In both of these expressions, the labeled durations are 2 MONTHS and 14 DAYS. DAYS(timestamp) SQL function to get the number of days from a timestamp (since January 1, Year 1). DB2 backups Is not a TDP, but like them it utilizes the TSM client API to store the data on the TSM server. It is best to invoke the client while sitting within the client directory. Instead of, or in addition to, that, you may want to set the following environment variables: Basic client: DSM_CONFIG=: DSM_DIR=: DSM_LOG=: API client: DSMI_CONFIG=: DSMI_DIR=: DSMI_LOG=: Each backup is its own filespace, whose name is that of the DB2 database plus a timestamp. See redbook: "Using ADSM to Back Up Databases", SG24-4335-03 and "Managing VLDB Using DB2 UDB EEE", SG24-5105-00. DB2 backups, delete You have to manually inactivate the backups using the db2adutl delete command. Sample tasks: 'db2adutl query full' will list your db2 backups; 'db2adutl delete full older than N days' will delete. DB2 backups, query Like: db2adutl query full (You cannot use 'dsmc query backup' because the backups were stored via the TSM client API.) DB2 log handling The DB2 database backup does not pick up the DB2 logs: use the user exit program provided by DB2 to archive (not backup) the inactive log files. 
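Incidentally, the DAYS() epoch noted above (counting from January 1, Year 1) matches Python's date.toordinal(), which can be handy for sanity-checking SELECT date arithmetic offline (a cross-check sketch; the epoch equivalence follows the DB2 convention that TSM's SQL mimics):

```python
from datetime import date

def days_value(y: int, m: int, d: int) -> int:
    """Days counted with 0001-01-01 as day 1 - the same epoch as the
    SQL DAYS() function described above (per the DB2 convention)."""
    return date(y, m, d).toordinal()

print(days_value(1, 1, 1))    # 1
print(days_value(1, 12, 31))  # 365
```

So the difference of two DAYS() values (e.g., elapsed days between VOLHISTORY timestamps) can be reproduced by subtracting two toordinal() results.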
DB2 restore command Like: db2 restore db db0107 use tsm .DBB File name extension created by the server for FILE type scratch volumes which contain TSM database backup data. Ref: Admin Guide, Defining and Updating FILE Device Classes DBBACKUP In 'Query VOLHistory', volume type for sequential access storage volumes used for database backups. Also under 'Volume Type' in /var/adsmserv/volumehistory.backup . DBBackup tapes vanishing with DRM Watch out that you don't delete database volume history with the same number of days as the DRM "Set DRMDBBackupexpiredays" command: just when ADSM DRM is changing the status of the db tapes to "vault retrieve" you are also deleting them from the volume history and therefore never see them as "vault retrieve". DBBackuptrigger The Database Backup Trigger: to define when TSM is to automatically run a full or incremental backup of the TSM database, based upon the Recovery Log filling, when running in Rollforward mode. (As opposed to getting message ANR0314W in Normal mode.) At triggering time, TSM also automatically deletes any unnecessary recovery log records - which may take valuable time. Msgs: ANR4553I See: DEFine DBBackuptrigger; Set LOGMode DBDUMP In 'Query VOLHistory', Volume Type to say that volume was used for an online dump of the database (pre ADSM V2R1). Also under 'Volume Type' in /var/adsmserv/volumehistory.backup . .dbf See: Oracle database factoids DBPAGESHADOW TSM 4.1 server option. Provides a means of mirroring the last batch of information written to the server database. If enabled, the server will mirror the pages to the file specified by DBPAGESHADOWFILE option. On restart, the server will use the contents of this file to validate the information in the server database and if needed take corrective action if the information in the actual server database volumes is not correct as verified by the information in the page shadow file. 
In this way, if an outage occurs that affects both mirrored volumes, the server can recover pages that have been partially written. See the dsmserv.opt.smp file for an explanation of the DBPAGESHADOW and DBPAGESHADOWFILE options. Note that the DBPAGESHADOWFILE description differs from what is documented in the TSM publications. This option does NOT prepend the server name to the file name: the file name used is simply the name specified on the option. DBPAGESHADOWFILE TSM 4.1 server option. Specifies the name of the database page shadowing file. See: DBPAGESHADOW DBSnapshot See: BAckup DB; DELete VOLHistory; "Out of band"; Query VOLHistory DBSnapshot, delete This is performed with the command 'DELete VOLHistory ... Type=DBSnapshot'. However, TSM insists that the latest snapshot database backup cannot be deleted! A way to get around this would be to perform another DBSnapshot, this time directed at a File type of output devclass. This would allow you to delete the tape volume from TSM and re-use it, and you could then delete the file at the operating system level. This presumes that you have enough disk space for the file. You might be able to get away with making the file /dev/null if you are on Unix. D/CAS Circa 1990 Data CASsette tape technology using a specially notched Philips audio cassette cartridge and 1/8" tape, full width. Variations: D/CAS-43 50 MB Tape vendors: Maxell 184720 D/CAS-86 100 MB 600 feet length, 16,000 ftpi Tape vendors: Maxell CS-600XD DCR Design Change Request DDS* Digital Data Storage: the data recording format for 4mm (DAT) tapes, as in DDS1, DDS2, DDS3. See: DAT DDS2 tapes Can be read by DDS2 and DDS3 drives. DEACTIVATE_DATE *SM SQL: Column in the BACKUPS table, being the date and time that the object was deactivated; that is, when it went from being an Active file to Inactive. Example: 2000-08-16 02:53:27.000000 The value is naturally null for Active files (those whose STATE is ACTIVE_VERSION). 
It may also be null for Inactive files (INACTIVE_VERSION): this is the case for old files marked for expiration based on number of versions (rather than retention periods), so marked during client Backup processing (Incremental or Selective). Note that such marked files can be seen in a server Select, but cannot be seen from client queries. During expiration if the TSM server encounters an inactive version without a deactivation date, then TSM expires this object. Looked at another way, if client backup processing does not occur, version-oriented expiration cannot occur. See also: dsmc Query Backup Deadlocks in server? 'SHow DEADLocks' (q.v.) Msgs: ANR0390W Debugging See "CLIENT TRACING" and "SERVER TRACING" at bottom of this document. DEC SQL function to convert a string to a decimal number. Syntax: DEC(String,Precision,Scale) String Is the string to be converted Precision Is the length for the portion before the decimal point. Scale Is the length for the portion after the decimal point. DEC Alpha client Storage Solutions Specialists provides an ADSM API called ABC. See HTTP://WWW.STORSOL.COM. DEFAULT The generic identifier for the default management class, as shows up in the CLASS_NAME column in the Archives and Backups SQL tables. Note that "DEFAULT" is a reserved word: you cannot define a management class with that name. See also: CLASS_NAME; Default management class Default management class The management class *SM assigns to a storage pool file if there is no INCLUDE option in effect which explicitly assigns a management class to specified file system object names. Hard links are bound to the default management class in that they are not directories or files. Note that automatic migration occurs *only* for the default management class; for the incl-excl named management class you have to manually incite migration. 
Default management class, establish 'ASsign DEFMGmtclass DomainName SetName ClassName' Default management class, query 'Query POlicyset' and look in the Default Mgmt Class Name column or 'Query MGmtclass' and look in the Default Mgmt Class column DEFAULTServer Client System Options file (dsm.sys) option to specify the default server. This is a reference to the SErvername stanza which is coded later in the file: it is *not* the actual server name, which is set via SET SERVERNAME. The stanza name is restricted to 8 characters (not 64, as the manual says). HSM migration will use this value unless MIgrateserver is specified. DEFine Administrator You mean: REGister Admin DEFine ASSOCiation Server command to associate one or more client nodes with a client schedule which was established via 'DEFine SCHedule'. Syntax: 'DEFine ASSOCiation Domain_Name Schedule_Name Node_name [,...]' Note that defining a new schedule to a client does not result in it promptly "seeing" the new schedule, when SCHEDMODe PRompted is in effect: you need to restart the scheduler so that it talks to the server and gets scheduled for the new task. Related: 'DELete ASSOCiation' DEFine BACKUPSET Server command to define a client backup set that was previously generated on one server and make it available to the server running this command. The client node has the option of restoring the backup set from the server running this command rather than the one on which the backup set was generated. Any backup set generated on one server can be defined to another server as long as the servers share a common device type. The level of the server to which the backup set is being defined must be equal to or greater than the level of the server that generated the backup set. You can also use the DEFINE BACKUPSET command to redefine a backup set that was deleted on a server. Syntax: 'DEFine BACKUPSET Client_NodeName BackupSetName DEVclass=DevclassName VOLumes=VolName[,VolName...]
[RETention=Ndays|NOLimit] [DESCription=____]' See also: GENerate BACKUPSET DEFine CLIENTAction TSM server command to schedule one or more clients to perform a command, once. This results in the definition of a client schedule with a name like "@1", PRIority=1, PERUnits=Onetime, and DURunits to the number of days set by the duration period of the client action. It also does DEFine ASSOCiation to have the operation handled by the specified nodenames. 'DEFine CLIENTAction [NodeName[,Nodename]] [DOmain=DomainName] ACTion=ActionToPerform [OPTions=AssociatedOptions] [OBJects=ActionObjects] [Wait=No|Yes]' where ACTion is one of: Incremental Selective Archive REStore RETrieve IMAGEBACkup IMAGEREStore Command Macro For OBJects: Normally code within double quotes; but if you need to code quotes within quotes, enclose the whole in single quotes and the internals as double quotes. Example: DEFine CLIENTAction NODEA - ACTion=Command - OBJects='mail -s "Subject line, body empty" joe /dev/null' Where ACTion=Command, you can code OBJects with multiple operating system commands, separated by the conventional command separator for that environment. For example, in Unix, you can cause a delayed execution by coding a 'sleep' ahead of the command, as in: OBJects='sleep 20; date'. If there is any question about the invoked commands being in the Path which the scheduler process may have been started with, by all means code the commands with full path specs, which will avoid 127 return code issues. The Wait option became available in TSM 4.1. Note that a Command is run under the account under which the TSM server was started (in Unix, usually root). Timing: How soon the action is performed is at the mercy of your client SCHEDMODe spec: POlling is at the client's whim, and will result in major delay compared to PRompted, where the server initiates contact with the client (when it gets around to it - *not* necessarily immediately). 
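The nested-quoting rule for OBJects above (double quotes normally; single quotes outside with double quotes inside when quotes must be embedded) can be sketched with a small helper. This is purely illustrative - quote_objects is an invented name, not a TSM utility:

```python
def quote_objects(value):
    """Build an OBJects= value per the quoting convention described
    above: normally wrap in double quotes, but if the value itself
    contains double quotes, wrap the whole thing in single quotes.
    (Hypothetical helper, for illustration only.)"""
    if '"' in value:
        if "'" in value:
            raise ValueError("cannot nest both quote types")
        return "'" + value + "'"
    return '"' + value + '"'

# Plain value: double quotes suffice.
print("OBJects=" + quote_objects("/home /etc"))
# Embedded double quotes force single quotes on the outside.
print("OBJects=" + quote_objects('mail -s "Subject line, body empty" joe'))
```

The same convention applies to DEFine CLIENTOpt option values: single quotes around the whole value, double quotes for sub-values within it.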
When using PRompted, watch out for PRESchedulecmd and POSTSchedulecmd, which would thus get invoked every time. Housekeeping: Because of the schedule clutter left behind, you should periodically run 'DELete SCHedule Domain_Name @*', which gets rid of the temporary schedule and association. Msgs: ANR2510I, ANR2561I See also: DEFine SCHedule, client; SET CLIENTACTDuration DEFine CLIENTOpt Server command to add a client option to an option set. Syntax: DEFine CLIENTOpt OptionSetName OptionName 'OptionValue' [Force=No|Yes] [SEQnumber=number] Force will cause the server-defined option to override that in the client option file - for singular options only...not additive options like Include-Exclude and DOMain. Additive options will always be seen by the client (as long as it is at least V3), and will be logically processed ahead of the client options. Code the OptionValue in single quotes to handle multi-word values, and use double-quotes within the single quotes to further contain sub-values. Example: DEFine CLIENTOpt SETNAME INCLEXCL 'Exclude "*:\...\Temporary Internet Files\...\"' SEQ=0 DEFine CLOptset Examples: DEFine cloptset ts1 desc='Test option sets' COMMIT DEFine CLIENTOpt ts1 CHAngingretries 1 seq=10 DEFine CLIENTOpt ts1 COMPRESSAlways=Yes Force=Yes SEQnumber=20 DEFine CLIENTOpt ts1 INCLEXCL "exclude /tmp/.../*" DEFine CLIENTOpt ts1 INCLEXCL "include ""*:\My Docs\...\*""" COMMIT DEFine COpygroup Server command to define a Backup or Archive copy group within a policy domain, policy set, and management class. Does not take effect until you have performed 'VALidate POlicyset' and 'ACTivate POlicyset'.
DEFine COpygroup, archive type 'DEFine COpygroup DomainName PolicySet MgmtClass Type=Archive DESTination=PoolName [RETVer=N_Days|NOLimit] [SERialization=SHRSTatic|STatic| SHRDYnamic|DYnamic] DEFine COpygroup, backup type 'DEFine COpygroup DomainName PolicySet MgmtClass [Type=Backup] DESTination=Pool_Name [FREQuency=Ndays] [VERExists=N_Versions|NOLimit] [VERDeleted=N_Versions|NOLimit] [RETExtra=N_Versions|NOLimit] [RETOnly=N_Versions|NOLimit] [MODE=MODified|ABSolute] [SERialization=SHRSTatic|STatic| SHRDYnamic|DYnamic]' DEFine DBBackuptrigger Server command to define settings for the database backup trigger. Syntax: 'DEFine DBBackuptrigger DEVclass=DevclassName [LOGFullpct=N] [INCRDEVclass=DevclassName] [NUMINCremental=???]' where: LOGFullpct Specifies the Recovery Log percent fullness threshold at which an automatic backup is triggered, 1 - 99. Default: 50 (%). Choose a value which gives the backup a chance to complete before the Log fills. NUMINCremental Specifies the maximum number of Incrementals that will be performed before a Full is done. Code 0 - 32, where 0 says to only do Fulls. Default = 6. See also: DBBackuptrigger DEFine DBCopy Server command to define a volume copy (mirror) of a database volume. Syntax: 'DEFine DBCopy Db_VolName Copy_VolName' DEFine DBVolume Server command to define an additional volume for the database. Syntax: 'DEFine DBVolume Vol_Ser Formatsize=#MB Wait=No|Yes' Messages: ANR2429E DEFINE DBVolume: Maximum database capacity exceeded. Note that you benefit from having more DB volumes. See: Database performance DEFine DEVclass Server command to define a device class for storage pools, and associating it with a previously defined library, if applicable. Note that the device class DISK is pre-defined in TSM, as used in DEFine STGpool for random access devices. 
See also: Devclass DEFine DEVclass (3590) 'DEFine DEVclass DevclassName DEVType=3590 LIBRary=LibName [FORMAT=DRIVE|3590B|3590C| 3590E-B|3590E-C] [MOUNTRetention=Nmins] [PREFIX=ADSM|TapeVolserPrefix] [ESTCAPacity=X] [MOUNTWait=Nmins] [MOUNTLimit=DRIVES|Ndrives|0]' DEFine DEVclass (File) 'DEFine DEVclass DevclassName DEVType=FILE [MOUNTLimit=1|Ndrives|DRIVES] [MAXCAPacity=4M|maxcapacity] [DIRectory=currentdir|dirname]' Note that "3590" is a special, reserved DEVType. Specifying MOUNTLimit=DRIVES allows *SM to adapt to the number of drives actually available. (Do not use for External Libraries (q.v.).) DEFine DOmain Server command to define a policy domain. Syntax: 'DEFine DOmain DomainName [description="___"] [backretention=NN] [archretention=NN]' Since a client node is assigned to one domain name, it makes sense for the domain name to be the same as the client node name (i.e., the host name). DEFine DRive Server command to define a drive to be used in a previously-defined library. Syntax: 'DEFine DRive LibName DriveName DEVIce=/dev/??? [ONLine=Yes|No] [CLEANFREQuency=None|Asneeded|N] [ELEMent=SCSI_Lib_Element_Addr]' where ONLine says whether a drive should be considered available to *SM. The TSM Admin Ref manual specifically advises: "Each drive is assigned to a single library." DO NOT attempt to define a physical drive to more than one library! Doing so will result in conflicts which will render drives offline. Thus, with a single library, you cannot use the same drives for multiple scratch pools, for example. To get around this: say you have both 3590J tapes and 3590Ks, but want the lesser tapes used for offsite volumes. What you can do is use DEFine Volume to assign the 3590Js to the offsite pool - which will go on to use the general scratch pool only when its assigned volumes are used up. Example: 'DEFine DRive OURLIBR OURLIBR.3590_300 DEVIce=/dev/rmt1' TSM will get the device type from the library's Devclass, which will subsequently turn up in 'Query DRive'.
It is not necessary to perform an ACTivate POlicyset after the Define. In a 3494, how does TSM communicate with the Library Manager to perform a mount on a specific drive if the LM knows nothing about the opsys device spec? In a preliminary operation, TSM issues an ioctl() MTDEVICE request, after having performed an open() on the /dev/rmt_ name to obtain a file descriptor, to first obtain that Device Number from the Library Manager, and thereafter uses that physical address for subsequent mount requests. For an example, see /usr/lpp/Atape/samples/tapeutil.c . DEFine LIBRary Server command to define a Library. Syntax for 3494: 'DEFine LIBRary LibName LIBType=349x - DEVIce=/dev/lmcp0 PRIVATECATegory=Np_decimal SCRATCHCATegory=Ns_decimal' The default Private category code: 300 (= X'12C'). The default Scratch category code: 301 (= X'12D'). With 3494 libraries and 3590 tapes, the defined Scratch category code is for 3490 type tapes, and that value + 1 will be used for your 3590 tapes. Server option ENABLE3590LIBRARY must also be defined for 3590 use. In choosing category code numbers, be aware that the 'mtlib' command associated with 3494s reports category code numbers in hexadecimal: you may want to choose values which come out to nice, round numbers in hex, and code their decimal equivalents in the DEFine LIBRary. Realize also that choosing category codes is a major commitment: you can't change them in UPDate LIBRary. AUTOLabel is new in TSM 5.2, for SCSI libraries, to specify whether the server attempts to automatically label tape volumes. Requires checking in the tapes with CHECKLabel=Barcode on the CHECKIn LIBVolume command. "No" Specifies that the server does not attempt to label any volumes. "Yes" says to label only unlabeled volumes. OVERWRITE is to attempt to overwrite an existing label - only if both the existing label and the bar code label are not already defined in any server storage pool or volume history list. 
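Because 'mtlib' reports category codes in hexadecimal while DEFine LIBRary takes decimal, it pays to double-check the conversions when choosing codes. A quick Python sanity check; the X'400' choice is purely illustrative, not a recommendation:

```python
# Documented defaults: Private = 300 decimal = X'12C',
# Scratch = 301 decimal = X'12D'.  With 3590 tapes, the server uses
# the defined scratch category + 1 for the 3590 scratch category.
assert int("12C", 16) == 300
assert int("12D", 16) == 301

# Picking a "round" hex value and coding its decimal equivalent on
# DEFine LIBRary (illustrative choice only):
scratch_hex = "400"
print(f"SCRATCHCATegory={int(scratch_hex, 16)}")            # decimal 1024
print(f"3590 scratch category: X'{int(scratch_hex, 16) + 1:X}'")
```

Remember that the choice is a commitment: category codes cannot be changed via UPDate LIBRary.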
DO NOT attempt to define multiple libraries to simultaneously use the same drives. See comments under DEFine DRive. See also: ENABLE3590LIBRARY; Query LIBRary; SCRATCHCATegory; UPDate LIBRary DEFine LOGCopy Server command to define a volume copy (mirror) of a recovery log volume. Syntax: 'DEFine LOGCopy RecLog_VolName Mirror_Vol' DEFine LOGVolume Server command to define an additional recovery log volume. Syntax: 'DEFine LOGVolume RecLog_VolName' Messages: ANR2452E DEFine MGmtclass Server command to define a management class within a policy set. Syntax: 'DEFine MGmtclass DomainName SetName ClassName [SPACEMGTECH=AUTOmatic| SELective|NONE] [AUTOMIGNOnuse=Ndays] [MIGREQUIRESBkup=Yes|No] [MIGDESTination=poolname] [DESCription="___"]' Note that except for DESCription, all of the optional parameters are Space Management Attributes for HSM. DEFine PATH TSM server command to define a path, and thus access, from a source to a destination - a new requirement as of TSM 5.1, to support server-free backups. The source and destination must be defined before the path. Additional info: http://www.ibm.com/support/docview.wss?uid=swg21083662 See also: DEFine DRive; Paths DEFine POlicyset Server command to define a policy set within a policy Domain. Syntax: 'DEFine POlicyset Domain_Name SetName [DESCription="___"]' DEFine SCHedule, administrative Server command to define an administrative schedule. Syntax: 'DEFine SCHedule SchedName Type=Administrative CMD=CommandString [ACTIVE=No|Yes] [DESCription="___"] [PRIority=5|N] [STARTDate=MM/DD/YYYY|TODAY] [STARTTime=NNN] [DURation=N] [DURunits=Minutes|Hours|Days| INDefinite] [PERiod=N] [PERUnits=Hours|Days|Weeks| Months|Years|Onetime] [DAYofweek=ANY|WEEKDay|WEEKEnd| SUnday|Monday|TUesday| Wednesday|THursday| Friday|SAturday] [EXPiration=Never|some_date]' The schedule name can be up to 30 chars. In CMD=CommandString: string length is limited to 512 chars; you cannot specify redirection (> or >>).
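The CMD= constraints just noted (512-character limit, no > or >> redirection) can be pre-checked before defining an administrative schedule. A hypothetical pre-flight helper, illustrative only - validate_admin_cmd is not a TSM facility:

```python
def validate_admin_cmd(cmd):
    """Sanity-check a CMD= string for DEFine SCHedule
    Type=Administrative, per the documented limits: at most 512
    characters, and no output redirection ('>' or '>>').
    Returns a list of problems; empty means the string looks OK."""
    problems = []
    if len(cmd) > 512:
        problems.append(f"command is {len(cmd)} chars; limit is 512")
    if ">" in cmd:
        problems.append("redirection (> or >>) is not allowed")
    return problems

print(validate_admin_cmd("BAckup DB DEVclass=DBTAPE Type=Full"))  # []
print(validate_admin_cmd("query actlog > out.txt"))
```

If you need command output captured, schedule a server Script instead, or wrap the work in a client schedule with ACTion=Command.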
Macros cannot be scheduled (as they reside on the client, not the server), but you can schedule (server) Scripts. DEFine SCHedule, client Server command to define a schedule which a client may use via server command 'DEFine ASSOCiation'. Syntax: 'DEFine SCHedule DomainName SchedName [DESCription="___"] [ACTion=Incremental|Selective| Archive|REStore| RETrieve|Command|Macro] [OPTions="___"] [OBJects="___"] [PRIority=N] [STARTDate=NNN] [STARTTime=HH:MM:SS|NOW] [DURation=N] [DURunits=Hours|Minutes|Days| INDefinite] [PERiod=N] [PERUnits=Days|Hours|Weeks| Months|Years|Onetime] [DAYofweek=ANY|WEEKDay|WEEKEnd| SUnday|Monday|TUesday| Wednesday|THursday| Friday|SAturday] [EXPiration=Never|some_date]' The schedule name can be up to 30 chars. Use PERUnits=Onetime to perform the schedule once. ACTion=Command allows specifying that the schedule processes a client operating system command or script whose name is specified via the OBJECTS parameter. Be careful not to specify too many objects, or use wildcards, else msg ANS1102E can result. See also "Continuation and quoting". Note that because TSM has no knowledge of the workings of the invoked command, it can only interpret rc 0 from the invoked command as success and any other value as failure, so plan accordingly. OBJects specifies the objects (file spaces or directories) for which the specified action is performed. OPTions specify options to the dsmc command, just as you would when manually invoking dsmc on that client platform, including leading hyphen as appropriate (e.g., -subdir=yes). Once the schedule is defined, you need to bind it to the client node name: see 'DEFine ASSOCiation'. Then you can start the scheduler process on the client node. See also: DEFine CLIENTAction; DURation; SET CLIENTACTDuration; SHow PENDing DEFine SCRipt ADSMv3 server command to define a Server Script. Syntax: 'DEFine SCRipt Script_Name ["Command_Line..." 
[Line=NNN] | File=File_Name] [DESCription=_____]' Command lines are best given in quotes, and can be up to 1200 characters long. The description length can be up to 255. The DEFine will fail if there is a syntax error in the script, such as a goto target lacking a trailing colon or target label longer than 30 chars, with msg ANR1469E. It is probably best to create and maintain scripts in files in the server system file system, as the line-oriented revision method is quite awkward. See also: Server Scripts; UPDate SCRipt DEFine SERver To define a Server for Server-to-Server Communications, or to define a Tivoli Storage Manager storage agent as if it were a server. Syntax: For Enterprise Configuration, Enterprise Event Logging, Command Routing, and Storage Agent: 'DEFine SERver ServerName SERVERPAssword=____ HLAddress=ip_address LLAddress=tcp_port [COMMmethod=TCPIP] [URL=url] [DESCription=____] [CROSSDEFine=No|Yes]' For Virtual Volumes: 'DEFine SERver ServerName PAssword=____ HLAddress=ip_address LLAddress=tcp_port [COMMmethod=TCPIP] [URL=____] [DELgraceperiod=NDays] [NODEName=NodeName] [DESCription=____]' See also: Query SERver; Set SERVERHladdress; Set SERVERLladdress DEFine SPACETrigger ADSMv3 server command to define settings for triggers that determine when and how the server resolves space shortages in the database and recovery log. It can then allocate more space for the database and recovery log when space utilization reaches a specified value. After allocating more space, it automatically extends the database or recovery log to make use of the new space. Note: Setting a space trigger does not mean that the percentage used in the database and recovery log will always be less than the value specified with the FULLPCT parameter. TSM checks usage when database and recovery log activity results in a commit. Deleting database volumes and reducing the database does not cause the trigger to activate. 
Therefore, the utilization percentage can exceed the set value before new volumes are online. Mirroring: If the server is defined with mirrored copies for the database or recovery log volumes, TSM tries to create new mirrored copies when the utilization percentage is reached. The number of mirrored copies will be the same as the maximum number of mirrors defined for any existing volumes. If sufficient disk space is not available, TSM creates a database or recovery log volume without a mirrored copy. Syntax: DEFine SPACETrigger DB|LOG Fullpct=__ [SPACEexpansion=N_Pct] [EXPansionprefix=______] [MAXimumsize=N_MB] Msgs: ANR4410I; ANR4411I; ANR4412I; ANR4414I; ANR4415I; ANR4430W; ANR7860W See also: Query SPACETrigger DEFine STGpool (copy) DEFine STGpool PoolName DevclassName POoltype=COpy [DESCription="___"] [ACCess=READWrite|READOnly| UNAVailable] [COLlocate=No|Yes|FIlespace] [REClaim=PctOfReclaimableSpace] [MAXSCRatch=N] [REUsedelay=N] PoolName can be up to 30 characters. See also: MAXSCRatch DEFine STGpool (disk) Server command to define a storage pool. Syntax for a random access storage pool: 'DEFine STGpool PoolName DISK [DESCription="___"] [ACCess=READWrite|READOnly| UNAVailable] [MAXSize=MaxFileSize] [NEXTstgpool=PoolName] [MIGDelay=Ndays] [MIGContinue=Yes|No] [HIghmig=PctVal] [LOwmig=PctVal] [CAChe=Yes|No] [MIGPRocess=N]' PoolName can be up to 30 characters. Note that MIGPRocess pertains only to disk storage pools. See also: DISK; MIGContinue DEFine STGpool (tape) Server command to define a storage pool. Syntax for a tape storage pool: 'DEFine STGpool PoolName DevclassName [DESCription="___"] [ACCess=READWrite|READOnly| UNAVailable] [MAXSize=NOLimit|MaxFileSize] [NEXTstgpool=PoolName] [MIGDelay=Ndays] [MIGContinue=Yes|No] [HIghmig=PctVal] [LOwmig=PctVal] [COLlocate=No|Yes|FIlespace] [REClaim=N] [MAXSCRatch=N] [REUsedelay=N] [OVFLOcation=______]' PoolName can be up to 30 characters. 
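The HIghmig/LOwmig parameters in the storage pool definitions above act as a hysteresis pair: migration to the NEXTstgpool begins when pool utilization reaches the high threshold and continues until it falls to the low one. A simplified Python model of that behavior - illustrative only; the server's actual migration decisions involve additional factors such as MIGDelay and MIGContinue:

```python
def migration_state(pct_utilized, highmig, lowmig, migrating):
    """Simplified model of the HIghmig/LOwmig hysteresis: migration
    starts when utilization reaches HIghmig and, once running,
    continues until utilization falls to LOwmig."""
    if migrating:
        return pct_utilized > lowmig    # keep going until low-water mark
    return pct_utilized >= highmig      # start only at high-water mark

# With HIghmig=90, LOwmig=70:
print(migration_state(85, 90, 70, migrating=False))  # False: below high
print(migration_state(92, 90, 70, migrating=False))  # True: start
print(migration_state(75, 90, 70, migrating=True))   # True: keep going
print(migration_state(70, 90, 70, migrating=True))   # False: reached low
```

The gap between the two thresholds is what prevents migration from thrashing on and off around a single utilization value.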
Note that once a storage pool is defined, it is thereafter stuck with the specified devclass: you cannot change it with UPDate STGpool. (You are left with doing REName STGpool, and then redefine the original name to be as you want it, whereafter you can do Move Data to transfer contents from old to new.) The OVFLOcation value will appear in message ANR8766I telling of the place for the ejected volume, so use capitalization and wording which makes it stand out in that context. See also: MAXSCRatch; MIGContinue DEFine Volume Server command to define a volume in a storage pool (define to a storage pool). Syntax: 'DEFine Volume PoolName VolName [ACCess=READWrite|READOnly| UNAVailable|OFfsite] [LOcation="___"]' Resulting msg: ANR2206I Note that a volume can belong to only one storage pool. A storage pool which normally uses scratch volumes may also have specific volumes defined to it: the server will use the defined volume first. (Ref: Admin Guide, "How the Server Selects Volumes with Collocation Enabled") If a 3590 tape, do 'CHECKIn' after. Defined Volume A volume which is permanently assigned to a storage pool via DEFine Volume. Contrast with Scratch Volumes, which are dynamically taken for use in storage pools, whereafter they leave the storage pool to return to Scratch state. Ref: Admin Guide, "Scratch Volumes Versus Defined Volumes". See also: Scratch Volume Degraded Operation 3494 state wherein the library is basically operational, but an auxiliary aspect of it is inoperative, such as the Convenience I/O Station. delbuta DFS: ADSM-provided command (Ksh script) to delete a fileset backup (dump) from both ADSM storage (via 'dsmadmc ... DELete FIlespace') and the DFS backup database (via 'bak deletedump'). 'delbuta {-a Age|-d Date|-i DumpID|-s} [-t Type] [-f FileName] [-n] [-p] [-h]' where you can specify removal by age, creation date, or individual Dump ID. 
You can further qualify by type ('f' for full backups, 'i' for incrementals, 'a' for incrementals based upon a parent full or incremental); or by a list contained within a file. Use -n to see a preview of what would be done, -p to prompt before each deletion, -h to show command usage. Where: /var/dce/dfs/buta/delbuta Ref: AFS/DFS Backup Clients manual, chapter 7. Delete ACcess See: dsmc Delete ACcess DELETE ARCHCONVERSION Process seen in the server the first time a node goes into the Archive GUI when the archive data needs to be converted, as when upgrading clients between certain (unknown) levels. The conversion operation can be very time-consuming, depending upon the amount of archive data in server storage which needs to be converted. Msgs: ANS5148W Delete ARchive See: dsmc Delete ARchive DELete ASSOCiation ADSM Server command to remove the association between one or more clients with a schedule. Syntax: 'DELete ASSOCiation Domain_Name Schedule_Name Node_name [,...]' Related: 'DEFine ASSOCiation', 'Query ASSOCiation'. DELete BACKUPSET Server command to delete a backup set prior to its natural expiration. A Backup Set's retention period is established when the set is created, and it will automatically be deleted thereafter. Syntax: 'DELete BACKUPSET Node_Name Backup_Set_Name [BEGINDate=____] [BEGINTime=____] [ENDDate=____] [ENDTime=____] [WHERERETention=N_Days|NOLimit] [WHEREDESCription=____] [Preview=No|Yes]' Note that the node name and backup set name are required parameters: you may use wildcard characters such as "* *" in those positions. And in using wildcards in these positions you may be able to get around the restriction of not being able to delete the last backupset. See also: DELete VOLHistory DELete DBVolume TSM server command to delete a database volume, which is performed asynchronously, by a process. ADSM will automatically move any data on the volume to remaining database space, thus consolidating it. 
Deletion is only logical: the physical database volume/file remains intact. The best approach is to delete volumes in the reverse order that you added them so as to minimize the possibility of data being moved more than once in the case of multiple volume deletions. Also, it is best to first Reduce the database and then delete the volume. Syntax: "DELete DBVolume VolName". DELete DEVclass ADSM server command to delete a device class. Syntax: 'DELete DEVclass DevclassName' DELete DRive TSM server command to delete a drive from a library. Syntax: 'DELete DRive LibName Drive_Name' Example: 'DELete DRive OURLIBR OURLIBR.3590_300' Notes: A drive that is in use - busy - cannot be deleted (you will get error ANR8413E or the like). All paths related to a drive must be deleted before the drive itself can be deleted. Use SHOW LIBrary to verify status. Msgs: ANR8412I DELete FIlespace (from server) TSM server command to delete a client file space. The deletion of objects is immediate: no later Expire Inventory is required. The deletion of the filespace takes place file by file, and can run for days for large filespaces. Syntax: 'DELete FIlespace NodeName FilespaceName [Type=ANY|Backup| Archive|SPacemanaged] [Wait=No|Yes] [OWNer=OwnerName] [NAMETYPE=SERVER|UNIcode|FSID] [CODEType=BOTH|UNIcode| NONUNIcode]' By default, results in an asynchronous process being run in the server to effect the database deletions, which you can monitor via Query PRocess. You need to wait for this to finish before, say, doing a fresh incremental backup on this filespace name. Use Wait to make the deletion synchronous. For Windows filespaces, you may have to add NAMETYPE=UNICODE to get it to work. WARNING: DO NOT RUN MORE THAN ONE DELETE FILESPACE AT A TIME!!! Doing so could jeopardize your *SM database. See entry on "Database robustness".
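The "can run for days" caution is easy to quantify with a back-of-envelope estimate. The ~50 objects/second rate below is one customer-observed figure; actual rates vary widely by server hardware and database size, so treat this strictly as an illustration:

```python
def delete_filespace_eta_hours(num_objects, objects_per_second=50):
    """Rough duration estimate for a DELete FIlespace, assuming an
    illustrative per-object deletion rate (observed rates vary
    widely; measure on your own server before relying on this)."""
    return num_objects / objects_per_second / 3600

# A filespace of 10 million objects at ~50 deletions/second:
print(f"{delete_filespace_eta_hours(10_000_000):.1f} hours")  # 55.6 hours
```

An estimate like this helps decide whether to use Wait=Yes or let the asynchronous process run and monitor it via Query PRocess.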
Also, do not run a DELete FIlespace when clients are active, as the entirety of the Delete could end up in your Recovery Log as client updates prevent the administrative updates from being committed. Note that "Type=ANY" removes only Backup and Archive copies, not HSM file copies: you have to specify "SPacemanaged" to effect the more extreme measure of deleting HSM filespaces. Note also that the deletion will be an intense database operation, which can result in commands stalling. Moreover, competing processes - especially for the same node - will likely need access to the same database blocks, and collide with the message "ANR0390W A server database deadlock situation...". For this reason it is best to run only one DELete FIlespace at a time. If interrupted: Files up to that point are gone. If a pending Restore is in effect, this operation should not work. Speed: rather time-consuming - we've seen about 50 files/second. See also: Delete Filespace (from client) Delete Filespace (from client) ADSM client command: 'dsmc Delete Filespace', which will present a selection menu of file spaces (though this requires "BACKDELete=Yes" on 'REGister Node', which is contrary to the default, so that you may need to do it from the server). Results in an *asynchronous* process being run in the server to effect the database deletions and inventory expiration: you must wait for this to finish before, say, doing a fresh incremental backup on this filespace name. Speed: rather time-consuming - we've seen about 50 files/second. If a pending Restore is in effect, this operation should not work. See also: DELete FIlespace (from server) Delete Filespace fails to delete it You may be intending to delete a node, and are pursuing the preliminary steps of deleting its filespaces. The Delete Filespace may seem happy, but doing a Query Filespace thereafter shows that the filespace has not gone away. This is likely a server software defect: a server level upgrade may correct it.
Beyond that, you might try doing Delete Filespace from the client, selecting the filespace by relative number, and see if that makes it go away. (From the server side, 'DELete FIlespace *' may work - but you may not want all that node's filespaces deleted!) If not, do SELECT * FROM VOLUMEUSAGE WHERE NODE_NAME="__" and see if any volumes show up, where the volumes may be in a wacky state you may be able to correct; or you may be able to delete the volumes, assuming collocation by node such that no other nodes' data are on the volume, or where you can first perform a Move to separate out the node's data on that volume. Your only other choice would be an appropriate audit operation - which is dicey stuff: you should contact TSM Support. DELete LIBRary ADSM server command to delete a library. Prior to doing this, all the library's assigned drives must be deleted. WARNING!! Deleting a library causes all of its volumes to be checked out! If you unfortunately do this, you will need to use the 'mtlib' AIX command to fix the Category codes, and then use 'AUDit LIBRary' to reconcile ADSM with the library reality. DELete LOGVolume ADSM server command to delete a Recovery Log volume. ADSM will automatically start a process to move any data on the volume to remaining Recovery Log space, thus consolidating it. To delete a log volume, Query LOG needs to show a Maximum Extension value at least as large as the volume being deleted. Deletion is only logical: the physical recovery volume/file remains intact. The best approach is to delete volumes in the reverse order that you added them so as to minimize the possibility of data being moved more than once in the case of multiple volume deletions. Syntax: 'DELete LOGVolume VolName'. Delete Node You mean 'REMove Node'. DELETE OBJECT See: File, selectively delete from *SM storage; File Space, delete selected files DELete SCHedule, administrative Server command to delete an administrative schedule.
Syntax: 'DELete SCHedule SchedName Type=Administrative' See also: DEFine SCHedule DELete SCHedule, client Server command to delete a client schedule. Syntax: 'DELete SCHedule DomainName SchedName [Type=Client]' See also: DEFine SCHedule DELete SCRipt Server command to delete a server script or one line from it. Syntax: 'DELete SCRipt Script_Name [Line=Line_Number]' Deleting a whole script causes the following prompt to appear: Do you wish to proceed? (Yes/No) (There is no prompt when simply deleting a line.) Deleting a line does not cause lines below it to "slide up" to take the old line number: all lines retain their prior numbers. Msgs: ANR1457I Delete selected files from ADSM storage See: Filespace, delete selected files DELete VOLHistory TSM server command to delete non-storage pool volumes, such as those used for database backups and Exports. Syntax: 'DELete VOLHistory TODate=MM/DD/YYYY|TODAY |TODAY-Ndays TOTime=HH:MM:SS|NOW |NOW+hrs:mins|NOW-hrs:mins Type=All|DBBackup [DEVclass=___] |DBSnapshot [DEV=___] |DBDump|DBRpf|EXPort |RPFile [DELETELatest=No|Yes] |RPFSnapshot [DELETELatest=No|Yes] |STGNew |STGReuse|STGDelete' There is no provision for deleting a single volume, sadly. As of ADSMv3, you will get an error if you try to delete all DBBackup copies: you must keep at least 1, per APARs IX86694 and IX86661. This is also the case for DBSnapshot volumes: the latest cannot be deleted. Do not use this command to delete DBB volumes that are under the control of DRM: DRM itself handles that per Set DRMDBBackupexpiredays. (If you are paying for and using DRM, let it do what it is supposed to: meddling jeopardizes site recoverability.) Do not expect *SM to delete old DBBackup entries reflecting Incremental type 'BAckup DB' operations until the next full backup is performed.
That is, the full and incrementals constitute a set, and you should not expect to be able to delete critical data within the set: the whole set must be of sufficient age that it can entirely go (msg ANR8448E). "Type=BACKUPSET" is not documented but may work, being a holdover from version 4.1 days. Also, there was a bug in the 4.2 days that prevented some backupsets from being deleted with the DELete BACKUPSET command; you could delete them with 'DELete VOLHistory Type=BACKUPSET Volume= TODate=' Msgs: ANR2467I (reports number of volumes deleted, but not volnames) See also: Backup Series; Backup set, remove from Volhistory DELete Volume TSM server command to delete a volume from a storage pool and, optionally, the files within the volume, if the volume is not empty. Syntax: 'DELete Volume VolName [DISCARDdata=No|Yes]' Specifying DISCARDdata=Yes will cause the removal of all database information about the files that were backed up to that tape, and so the next incremental backup will take all such files afresh.
Notes: No Activity Log or dsmerror.log entry will be written as a result of this action. Volumes whose Access is Unavailable cannot be deleted. If a pending Restore is in effect, this operation should not work. "ANS8001I Return code 13" indicates that the command was invoked without "DISCARDdata=Yes" and the volume still contains data. Messages: ANR1341I See also: DELete VOLHistory "deleted" In backup summary statistics, as in "Total number of objects deleted:". Refers to the number of files expired because not found (or excluded) in the backup operation. Those files will be flagged in the body of the report with "Expiring-->". Deleted files, rebind See: Inactive files, rebind Deleted from storage pool, messages ANR1341I, ANR2208I, ANR2223I DELetefiles (-DELetefiles) Client option to delete files from the client file system after Archive has stored them on the server. Can also be used with the restore image command and the incremental option to delete files from the restored image if they were deleted from the file space after the image was created. Note particularly the statement that the operation will not delete the file until it is stored on the server. This affects when, in the sequence of operations, the file will actually be deleted. Remember that *SM batches Archive data into Aggregates, as defined by transaction sizings (TXN* options), and so the file(s) will not be deleted until the transaction is completed. DANGER!!: If your server runs with Logmode Normal, you may lose files if the server has to be restored, because all transactions since the last server database backup will be lost! Before using DELetefiles in a site, carefully consider all factors. What about directories? The Archive operation has no capability for deleting directories, for several reasons...
First, directories may be the home of objects other than the files being deleted (e.g., symbolic links, special files, unrelated files); and second, in the time it takes to archive files from any given directory, new files may have been introduced into it. If you want directories deleted, you need to do so thereafter, with an operating system function. See also: Total number of objects deleted Dell firmware advisory Customers report serious quality problems with Dell firmware, as for the Dell Powervault 136T. Beware. DELRECORD Undocumented, unsupported command noted in some APARs for deleting TSM db table entries. Usage undefined. See also: Database, delete table entry Delta file As used in subfile backups. Msgs: ANS1328E Demand Migration The process HSM uses to respond to an out-of-space condition on a file system. HSM migrates files to ADSM storage until space usage drops to the low threshold set for the file system. If the high threshold and low threshold are the same, HSM attempts to migrate one file. Density See: Tape density DES See: ENCryptkey; PASSWORDDIR -DEScription="..." Used on 'dsmc Archive' or 'dsmc Query ARchive' or 'dsmc Retrieve' to specify a text string describing the archived file, which can be used to render it unique among archived files of the same name. Wildcard characters may be used. Be aware that rendering the file unique in this way also implicitly renders the path directory unique such that it will also be archived again if there isn't one of the same description already stored in the server. That is, the given description is also applied to the path directory. If you do not specify a description with the archive command, the default is to provide a tagged date, in the form "Archive Date: __________", where the date value inserted is the system date, always 10 characters long. (If your date format uses a two-digit year, there will be two blank spaces at the end of the date.) Note that only the date is provided - not the time of day.
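The default Archive description format described above can be illustrated with a small sketch (this merely mimics the documented "Archive Date: " tag plus 10-character date field; it is not the actual client code, and the function name is mine):

```python
from datetime import date

def default_archive_description(d, fmt="%m/%d/%Y"):
    """Build the default archive description: the tag "Archive Date: "
    plus the system date, padded with blanks to 10 characters (so a
    two-digit-year date format leaves two trailing spaces)."""
    return "Archive Date: " + d.strftime(fmt).ljust(10)

print(default_archive_description(date(2004, 10, 6)))
# Archive Date: 10/06/2004
print(repr(default_archive_description(date(2004, 10, 6), "%m/%d/%y")))
# 'Archive Date: 10/06/04  '
```

The trailing blanks are why two otherwise-identical archives made on the same day get the same default description, and thus are not rendered unique.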
Description, on an Archive file Is set via -DEScription="..." in the 'dsmc archive' operation. Note that you cannot change the archive file Description after archiving. DESTination A Copy Group attribute that specifies the storage pool to which a file is backed up, archived, or migrated. At installation, ADSM provides three storage destinations named BACKUPPOOL, ARCHIVEPOOL, and SPACEMGTPOOL. Destination for Migrated Files In output of 'dsmmigquery -M -D', an (HSM) attribute of the management class which specifies the name of the ADSM storage pool in which the file is stored when it is migrated. Defined via MIGDESTination in management class. See: MIGDESTination DEStroyed Access Mode for a primary storage pool volume saying that it has been permanently damaged, and needs a 'RESTORE STGpool' or 'RESTORE Volume' (which itself will mark the volume DEStroyed, msg ANR2114I). Set: 'UPDate Volume ... ACCess=DEStroyed'. (Note that Copy Storage Pool volumes cannot be marked DEStroyed.) If there is a storage pool backup for the volume, access to files that were on the volume causes *SM to automatically obtain them instead from the copy storage pool. Note that marking volumes as "Destroyed" does not affect the status of the files on the volumes: the next Incremental Backup job will not back up those files afresh. All that the Destroyed mode does is render the volume unmountable. But the volume or storage pool RESTORE operation should still be performed, to repopulate the primary storage pool with the files. See: Copy Storage Pool, restore files directly from See also: RESTORE Volume /dev/fsm The HSM File Space Manager character special file, apparently created when HSM comes up. Should look like: crw-rw-rwT 1 root sys 255, 0 Dec 5 12:28 /dev/fsm If you need to re-create it, do: 'mknod /dev/fsm c 255 0' 'chmod 1666 /dev/fsm' /dev/lb_ SCSI library supported by *SM device driver, such as the 9710.
/dev/lmcp0 3494 Library Manager Control Point special device, established by configuring and making this "tape" device Available via SMIT, as part of installing the atldd (automated tape library device driver). (Specifically, 'mkdev -l lmcp0' creates the dev in AIX.) /dev/mt_ In Unix systems, tape drives that are used by *SM, but not supported by *SM device drivers. AIX usage note: When alternating use of the drive between AIX and *SM, make one available and the other unavailable, else you will have usage problems. For example, if the drive was most recently used with *SM, do: rmdev -l mt0; mkdev -l rmt0; and then the inverse when done. /dev/rmt_ Magnetic tape drive supported as a GENERICTAPE device. /dev/rmt_.smc For controlling the SCSI Medium Changer (SMC), as on 3570, 3575, 3590-B11 Automatic Cartridge Facility. /dev/rmt_.smc, creation When running 'cfgmgr -v' to define a 3590 library, the 3590's mode has to be in "RANDOM" for the rmt_.smc file to be created. /dev/rop_ Optical drives supported by ADSM. /dev/vscsiN See "vscsi". Devclass The device class for storage pools: a storage pool is assigned to a device class. The device class also allows you to specify a device type and the maximum number of tape drives that it can ask for. For random access (disk), the Devclass must be the reserved name "DISK". For tape, the Devclass is whatever you choose, via 'DEFine DEVclass'. Used in: 'DEFine DBBackuptrigger', 'DEFine STGpool', 'Query Volume' See also: Query DEVclass; SHow DEVCLass Devclass, 3590, define See "DEFine DEVclass (3590)". Devclass, rename There is no command to do this: you have to define a new devclass, reassign to it, then delete the old name. Devclass, verify all volumes in See: SHow FORMATDEVCLASS _DevClass_ DEVCLASSES SQL table for devclass definitions.
Columns: DEVCLASS_NAME, ACCESS_STRATEGY (Random, Sequential), STGPOOL_COUNT, DEVTYPE, FORMAT, CAPACITY, MOUNTLIMIT, MOUNTWAIT, MOUNTRETENTION, PREFIX, LIBRARY_NAME, DIRECTORY, SERVERNAME, RETRYPERIOD, RETRYINTERVAL, LAST_UPDATE_BY, LAST_UPDATE (YYYY-MM-DD HH:MM:SS.000000) DEVCONFig Definition in the server options file, dsmserv.opt (/usr/lpp/adsmserv/bin/dsmserv.opt). Specifies the name of the file(s) that should receive device configuration information and thus become backups when such information is changed by the server. Use 'BAckup DEVCONFig' to force updating of the file(s). Default: none Ref: Installing the Server... See also: Device config... DEVCONFig server option, query 'Query OPTion' devconfig.out In TSM v5 and higher the first line of the file must be: SET SERVERNAME ADSM Device Specified via "DEVIce=DeviceName" in 'DEFine DRive ...' device category As seen in 'mtlib -l /dev/lmcp0 -f /dev/rmt2 -qD' on a 3494. See: Category Codes Device class See: Devclass Device config file considerations During a *SM DB restore, if your libtype is set to manual in your devconfig file, check that SHARED=NO is not part of the DEFINE LIBR statement. See also: DEVCONFig Device config file, determine name 'Query OPTions', look for "Devconfig" Device config info, file(s) to receive as backup, define "DEVCONFig" definition in the server options file, dsmserv.opt (/usr/lpp/adsmserv/bin/dsmserv.opt). The files will end up containing all device configuration info that administrators set up, in ADSM command format, such as "DEFine DEVclass..." and "DEFINE LIBRARY" command lines. Device configuration, backup manually 'BAckup devconfig' causes the info to be captured in command line format in files defined on DEVCONFIG statements in the server options file, dsmserv.opt (/usr/lpp/adsmserv/bin/dsmserv.opt).
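As an illustration of the DEVCONFIG statements just mentioned, a dsmserv.opt fragment might look like the following (the file names here are hypothetical; the option can be coded more than once, so that multiple copies of the device configuration are maintained, ideally on separate disks):

```
* dsmserv.opt fragment - hypothetical file names
DEVCONFIG /adsm/devcnfg1.out
DEVCONFIG /othervol/devcnfg2.out
```

'BAckup DEVCONFig' would then refresh both files.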
Device configuration, restore Occurs as part of the process involved in the following commands (run from the AIX command line): 'dsmserv restore db' 'dsmserv loaddb' 'DSMSERV DISPlay DBBackupvolumes' Device drivers, tape drives Under Unix: Drives which are used with a name of the form "/dev/rmtX" employ tape device drivers supplied with the operating system, which in AIX are stored in /usr/lib/drivers. These are defined in SMIT under DEVICES then TAPE DRIVES. For example, IBM "high tape device" drives such as 3590 have their driver software shipped with the tape hardware. Drives used with a name of the form "/dev/mtX" employ tape device drivers supplied by ADSM itself. These are defined in SMIT under ADSM DEVICES. And their library will be /dev/lb0. DEVNOREADCHECK Undocumented VM opsys option: allows the server to ignore the RING IN/NO RING status of the input tape. DEVType Operand of 'DEFine DEVclass', for specifying the device type. Recognized: FILE, 4MM, 8MM, QIC, 3590, CARTridge, OPTical. Note: Devtypes can change from one TSM version to another such that they cannot be carried across in an upgrade. The upgrade may nullify such DEVTypes. Thus, in performing an upgrade it is wise to check your DEVclasses. df of HSM file system (AIX) Performing a 'df' command on the HSM server system with the basic HSM-managed file system name will cause the return of a header line plus two data lines, the first being the JFS file system and the second being the FSM mounted over the JFS. However, if you enter the file system name with a slash at the end of it, you will get one data line, being just the FSM mounted over the JFS. dfmigr.c Disk file migration agent. See also: afmigr.c DFS The file backup client is installable from the adsm.dfs.client installation file, and the DFS fileset backup agent is installable from adsm.butadfs.client. You need to purchase the Open Systems Environment Support license for AFS/DFS clients. The DCE backup utilities are located in /opt/dcelocal/bin.
See 'buta', 'delbuta'. DFS backup to Solaris IBM reportedly has no plans to support this type of client. DFSBackupmntpnt Client System Options file option, valid only when you use dsmdfs and dsmcdfs. (dsmc will emit error message ANS4900S and ignore the option.) Specifies whether you want ADSM to see a DFS mount point as a mount point (Yes, which is the default) or as a directory (No): Yes ADSM considers a DFS mount point to be just that: ADSM will back up only the mount point info, and not enter the directory. This is the safer of the two options, but limits what will be done. No ADSM regards a DFS mount point as a directory: ADSM will enter it and (blindly) back up all that it finds there. Note that this can be dangerous, in that use of the 'fts crmount' command is open to all users, who through intent or ignorance can mount parts or all of the local file system or a remote one, or even create "loops". Default: Yes By default, when doing an incremental backup on any DFS mount point or DFS virtual mount point, TSM does not traverse the mount points: it will only back up the mount point metadata. To back up a mount point as a regular directory and traverse the mount point, set DFSBackupmntpnt No before doing the backup. If you want to back up a mount point as a mount point and also back up the data below it, first back up the parent directory of the mount point and then back up the mount point separately as a virtual mount point. See also: AFSBackupmntpnt DFSInclexcl Client System Options file option, valid only when you use dsmdfs and dsmcdfs. (dsmc will emit error message ANS4900S and ignore the option.) Specifies the path and file name of your DFS include-exclude options file. DHCP database, back up Do not attempt to back this up directly: it can be made to produce a backup copy of its database periodically (system32/dhcp/backup), and then that copy can be backed up with TSM incremental backup.
You also can make a copy of the DHCP registry setup info in a REG file for backup. The key is located in HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\DHCPServer\Configuration. Ref: http://support.microsoft.com/support/kb/articles/Q130/6/42.asp Diamond icon in v3 GUI Restore A four-sided diamond icon to the left of a file in the v3 GUI shown in a Restore selection tree display indicates that the file is Inactive. Shown to the left of a directory, indicates that the directory contains inactive files. DIFFESTIMATE Option in the TDPSQL.CFG file. Prior to performing a database backup, the TDP for SQL client must 'reserve' the required space in the storage pool. It *should* get the estimate right for full backups and transaction log backups because the space used in the database and transaction logs is available from SQL Server. But: For differential backups, there is no way of knowing how much data is to be backed up until the backup is complete. The TDP for SQL client therefore uses the percentage specified in the DIFFESTIMATE option to calculate a figure based on the total space used. E.g., for a database of 50GB with a DIFFESTIMATE value of 20, TDP will reserve 10GB (20% of 50GB). A "Server out of data storage space" error will arise if the actual backup exceeds the calculated estimate. If the storage pool is not big enough to accommodate the larger backup, or if other backup data prevents further space being reserved, this error will occur. Setting DIFFESTIMATE to 100 will ensure that there is always sufficient space available, but will prevent space in your primary storage pool being utilised by other clients and may force the backup to occur to the next storage pool in the hierarchy unnecessarily. It is worth setting DIFFESTIMATE to the maximum proportion of the data you can envisage ever being backed up during a differential backup. Directories, empty, and Selective Backup Selective Backup does not back up empty directories.
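The DIFFESTIMATE reservation arithmetic in the entry above reduces to a simple percentage; a minimal sketch (the function name is mine, not a TDP API):

```python
def reserved_space_gb(total_space_used_gb, diffestimate_pct):
    """Space the TDP for SQL client reserves for a differential backup:
    DIFFESTIMATE percent of the total space used by the database."""
    return total_space_used_gb * diffestimate_pct / 100.0

# The entry's example: a 50GB database with DIFFESTIMATE 20
print(reserved_space_gb(50, 20))  # 10.0 (GB)
```

If the actual differential data exceeds that figure, the "Server out of data storage space" error described above results.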
Directories, empty, restoring See: Restore empty directories Directories and Archive ADSM Archive does not save directory structure: the only ADSM facility which does is Incremental Backup (Selective Backup does not, either). See also: DIRMc Directories and Backup A normal Incremental Backup will *not* back up directories whose timestamp has changed since the last backup. This is because it would be pointless to do so: *SM already has the information it needs about the directory itself in order to recreate it, and restoral of a directory reconstructs it, with contemporary datestamps. An -INCRBYDate Backup, in contrast, *will* back up pre-existing directories whose timestamps it sees as newer, because it knows nothing about them having been previously backed up, by virtue of simple date comparison. See also: Directory performance; DIRMc Directories and binding to management class The reason that directories are bound to the management class with the longest retention is that there is no guarantee that the files within the directory will all be bound to the same management class. A simple example: suppose I have a directory called C:\ANDY with two files in it, like this: C:\ ANDY\ PRODFILE.TXT TESTFILE.TXT and that the include/exclude list specifies two different management classes: INCLUDE C:\ANDY\PRODFILE.TXT MC90DAYS INCLUDE C:\ANDY\TESTFILE.TXT MC15DAYS So which management class should C:\ANDY be bound to? The question becomes even more interesting if a new file is introduced to the C:\ANDY directory and an include statement binds it to, say, the MC180DAYS management class. Binding directories to the management class with the longest retention (RETOnly) is how TSM can assure that the directory is restorable no matter which management class the files under that directory are bound to. If all management classes have the same retention, TSM will choose the one first in alphabetical order.
(APAR IY11805 talked about first choosing by most recently updated mgmtclass definition, but that appears false.) Ordinary directory entries - those with only basic info - will be stored in the database, but entries with more info may end up in a storage pool. The way around this is to use DIRMc to bind the directories to a management class that resides on disk. Alternatively one could create the disk management class such that it has the longest retention, and thus negate the need to code DIRMc. One "gotcha": be careful when creating new management classes or updating existing management classes. You will always want to ensure that the *disk* management class has the longest retention. Directories and Restore Whereas ordinary restore operations reinstate the original file permissions, directory permissions are only restored when using the SUbdir=Y option of 'dsmc' or the Restore Subdirectory Branch function of dsm GUI. Directories may be in the *SM db When a file system is restored, you may see *SM rebuild the directory structure long before any tapes are mounted. It can do this when the directory structure is basic such that it can be stored as a database object (much like many empty files can be). In such cases, there is no storage pool space associated with directories, and no tape use. With more complex directory structures (Unix directories with Access Control Lists, Windows directories, and the like), the extended information associated with directories exceeds the basic database attributes data structure, and so the directory information needs to be stored in a storage pool. That is where the DIRMc option comes in: it allows you to control the management class that will get associated with the directory information that needs to get stored in a storage pool. See also: DIRMc Directories missing in restore Perhaps you backed them up with a DIRMc which resolved to a shorter retention than the files in the directories.
(Later ADSM software should prevent this.) This is why in the absence of DIRMc, directories are bound to the copygroup with the longest retention period - to prevent such loss. Directories visible in restore, but files not shown Simplest cause: In a GUI display, you need to click on the folder/directory to open it, to see what's inside. This could otherwise be a permissions thing: you are attempting to access files that were backed up by someone other than you, and which do not belong to you. Directory--> Leading identifier on a line out of incremental Backup, reflecting the backup of a directory entry. Note that with basic directory structures, as on Unix systems, *SM is able to store directory info in the server database itself because the info involves only name and basic attributes: the contents of a directory are the files themselves, which are handled separately. Thus, directory backups usually do not have to be in a storage pool. Note that the number of bytes reflected in this report line is the size of the directory as it is in the file system. Because *SM is storing just name and attributes, it is the actual amount that *SM stores, rather than the file system number, that will contribute to the "Total number of bytes transferred:" value in the summary statistics from an Archive or Backup operation. Hence that total will probably be less than the sum of the numbers shown on "Directory-->" lines of the report, in that *SM stores only the name and attributes of directories. See also: Rebinding--> Directory performance Conventional directories are simply flat, sequential files which contain a list of file names which cross-reference to the physical data on the disk. As primitive data structures, directories impede performance, as lookups are serial, take time, and involve lockouts as the directory may be updated.
As everyone finds, on multiple operating systems, the more files you have in a directory, the worse the performance for anything in your operating system going after files in that directory. The gross rule of thumb is that about 1000 files is the most that is realistic in a directory. Use subdirectories to create a topology which is akin to an equilateral triangle for best performance. Also, from a 2.1 README: "Tens of thousands of files in a single random-ordered directory can cause performance slowdowns and server session timeouts for the Backup/Archive client, because the list of files must be sorted before *SM can operate on them. Try to limit the number of files in a single random-ordered directory, or increase the server timeout period." Directory permissions restored incorrectly Occurred in some V2 levels. Per ADSM, "it is working as designed and was documented in IC07282". Circumvent by using dsmc restore with -SUbdir=Yes on the command line or dsm Restore by Subdirectory Branch in the GUI to restore the directory with the correct permissions. Directory separator character '/' for Unix, DOS, OS/2, and Novell. See also ":" volume/folder separator for Macintosh. Directory timestamp preservation, Windows *SM easily preserves the timestamp of restored directories through use of the Windows API function SetFileTime(). DIRMc Client System Options file (dsm.sys) backup option to specify the Management Class to use for directories. (For Backup only; not for Archive. See ARCHMc for Archive.) Syntax: DIRMc ManagementClassName Placement: Must be within server stanza With some client types (e.g., Unix), the directory structure is simple enough that directory information can be stored in the ADSM database such that storage pool space is not required for it: the use of DIRMc does not change this.
However, where a client uses richer directories or when an ACL (Access Control List) is associated with the directory, there is too much information, and so it *does* need to be stored in a storage pool. (Note that this same principle pertains to all simple objects, and thus empty files as well.) The DIRMc option was originated because, without it, the directories would be bound to the management class that has a backup copygroup with the longest retention period (see below). In many sites that was causing directories to go directly to tape, resulting in excessive tape mounts and prolonged retrievals. (Additional note: Beyond being bound to the management class with the longest backup retention, if multiple management classes have the same creation date, directories will be bound to the management class earliest in alphabetical order, per APAR IY11805.) Performance: You could use DIRMc to put directory data into a separate management class such that it could be on a volume separate from the file data and thus speed restorals, particularly if the volume is disk. (In a file system restoral, the directory structure is restored first.) Systems known to have data-rich directory information which must go to a storage pool: DFS (with its ACLs), Novell, Windows NTFS. Default: the Management Class in the active Policy Set which has the longest retention period (RETOnly); and in the case of there being multiple management classes with the same RETOnly, the management class whose name is highest in collating sequence gets picked. (The number of versions kept is not a factor.) Thus, in the absence of DIRMc, database and storage pool consumption can be aggravated by retaining directories after their files have expired. If used, be sure to choose a management class which retains directories as long as the files in them.
NOTE: As of ADSMv3, DIRMc is not as relevant as it once was, because of Restore Order processing (q.v.), which creates an interim, surrogate directory structure and restore/retrieves the actual directory information whenever it is encountered within the restore order (the order in which data appears on the backup media). However, the restoral ultimately has to retouch those surrogate directories, and you don't want that to happen by wading through a set of data tapes unrelated to the restored data (where the dirs ended up by virtue of longest retention). So use of DIRMc is still desirable for file systems whose directories end up in storage pools. See also: Directories may be in the *SM db; Restore Order DIRMc, query In ADSM do 'dsmc Query Options': under GENERAL OPTIONS see "dirmc". In TSM do 'dsmc show options' and inspect the "Directory MC:" line. If your client options do not specify an override, the value will say 'DEFAULT'. -DIrsonly Client option, as used with Retrieve, to process directories only - not files. DISAble Through ADSMv2, the command to disable client sessions. Now DISAble SESSions. DISAble EVents ADSMv3+ server command to disable the processing of one or more events to one or more receivers (destinations). Syntax: 'DISAble EVents ALL[,CONSOLE][,ACTLOG] [,EVENTSERVER][,FILE] [,SNMP][,TIVOLI][,USEREXIT] EventName[,ALL][,INFO] [,WARNING][,ERROR][,SEVERE] NODEname=NodeName[,NodeName...] SERVername=ServerName [,ServerName]' where: TIVOLI Is the Tivoli Management Environment (TME) as a receiver. Example: 'DISAble EV ACTLOG ANE4991 *' DISAble SESSions Server command to prevent client nodes from starting any new Backup/Archive sessions. Current client node sessions are allowed to complete. Administrators can continue to access the server. Duration: Does not survive across a TSM server restart: the status is reset to Enable. Determine status via 'Query STatus' and look for "Availability".
Msgs: ANR2097I See also: DISAble; DISABLESCheds; ENable SESSions DISABLESCheds Server option to specify whether administrative and client schedules are disabled during a TSM server recovery scenario. Syntax: DISABLESCheds Yes | No Default: No Query: Query OPTion, "DisableScheds" Disaster recovery See: Copy Storage Pool and disaster recovery Disaster Recovery Manager See: DRM Disaster recovery, short scenario, AIX system - Restore the server node from a mksysb image; - Restore the other volume groups (including the ones used for the adsm database, log, storage pool, etc.) from a savevg; - Follow the instructions & run the scripts so wonderfully prepared by DRM. (The DRM script knows everything about the database size, volhist, which volumes were considered offsite, etc.) DISK Predefined Devclass name for random access storage pools, as used in 'DEFine STGpool DISK ...'. Beware their use, as a frequently changing population of many files can result in fragmentation as time passes, and a high penalty in disk access overhead. With DISK, TSM keeps track of each (4 KB) block in the DISK volumes, which means maintaining a map of all the blocks, and searching and updating that map in each storage pool reference. Realize that Reclamation occurs on serial media, and thus not for DISK, meaning that the space formerly occupied by small files in a multi-file Aggregate cannot be reclaimed. REUsedelay is not applicable to DISK volumes: your data will probably not be recoverable because the space vacated by expired files, where whole Aggregates expired, is reused on disk, whereas such space remains untouched on tape. Restoral performance may be impaired if using random-access DISK rather than sequential-access FILE or tape: you may see only one restore session instead of multiple. That is, with DISK there is no Multi-session Restore.
See: http://www-1.ibm.com/support/docview.wss?uid=swg21144301 DISK storage pools are best used only as the first point of arrival on a TSM system: the data must migrate to sequential access storage (FILE, tape) to be safe. Ref: Admin Guide table "Comparing Random Access and Sequential Access Disk Devices" See also: D2D; FILE; Multi-session restore Disk Pacing Term to describe AIX's control of Unix's traditional inclination to buffer any amount of file data, no matter how large. The AIX limitation thus prevents memory overloading. Disk stgpool not being used See: Backups go directly to tape, not disk Disk storage pool See: Storage pool, disk See also: Backup storage pool, disk?; Backup through disk storage pool Disk Table The TSM database and recovery log volumes, as can be reported via 'SHow LVMDISKTABLE' (q.v.). DiskXtender A hierarchical storage product by Legato. For it to work with TSM, you need to have file dsm.opt in the DX home directory. DISKMAP ADSM server option for Sun Solaris. Specifies how ADSM performs I/O to a disk storage pool. Either: Yes To map client data to memory (default); No Write client data directly to disk. The more effective method for your current system needs to be determined by experimentation. Disks supported ADSM supports any disk storage device which is supported by the operating system. Dismount tape, whether mounted by ADSM or other Via Unix command: 'mtlib -l /dev/lmcp0 -d -f /dev/rmt?' 'mtlib -l /dev/lmcp0 -d -x Rel_Drive#' (but note that the relative drive method is unreliable). Msgs: "Demount operation Cancelled - Order sequence." probably means that the drive is actively in use by TSM, despite your impression. See also: Mount tape Dismount tape which was mounted by *SM 'DISMount Volume VolName' (The volume must be idle, as revealed in 'Query MOunt'.) DISMount Volume *SM server command to dismount an idle, mounted volume. Syntax: 'DISMount Volume VolName'.
If the volume is in use, ADSM gives message ANR8348E DISMOUNT VOLUME: Volume ______ is not "Idle". See also: Query MOunt DISPLAYLFINFO See: Storage Agent and logging/accounting -DISPLaymode ADSMv3 dsmadmc option for report formatting, with output being in either "list" or "table" form. Prior to this, the output from Administrative Query commands was displayed in a tabular format or a list format, depending on the column width of the operating system's command line window, which made it difficult to write scripts that parsed the output from the Query commands, as the output format was not predictable. Choices: LISt The output is in list format, with each line consisting of a row title and one data item, like... Description: Blah-blah TABle The output is in tabular format, with column headings. See also: -COMMAdelimited; SELECT output, columnar instead of keyword list; -TABdelimited DISTINCT SQL keyword, as used with SELECT, to yield only distinct (unique) entries, to eliminate multiple column entries of the same content. Form: SELECT DISTINCT FROM Note that DISTINCT has the effect of taking the first occurrence of each row, so is no good for use with SUM(). DLT Digital Linear Tape. Single-hub cartridge with 1/2" tape where the external end is equipped with a plastic leader loop (which has been the single largest source of DLT failures). Data is recorded on DLTtape in a serpentine linear format. DLT technology has lacked servo tracks on the tape as Magstar and LTO have, making for poor DLT start-stop performance as it has to fumble around in repositioning, which can greatly prolong backups, etc. DLT is thus intended to be a streaming medium, not start-stop. Super DLTtape finally provides servo tracking, in the form of Laser Guided Magnetic Recording (LGMR), which puts optical targets on the backside of the tape.
http://www.dlttape.com/ http://www.overlanddata.com/PDFs/104278-102_A.pdf http://www.cartagena.com/naspa/LTO1.pdf See also: SuperDLT DLT and repositioning DLT (prior to SuperDLT) lacks absolute positioning capability, and so when you need to perform an operation (Audit Volume) which is to skip a bad block or file, it must rewind the tape and then do a Locate/Seek. DLT and start/stop operations *SM does a lot of start/stop operations on a tape, and DLT has not been designed for this (until SuperDLT). Whenever the DLT stops, it has to back up the tape a bit ("backhitch") before moving forward to get the tracking right. Sometimes, it seems, it doesn't get it right anyway, resulting in I/O errors. A lot of repositioning "beats up" the drive, and can result in premature failure. See: Backhitch DLT barcode label specs Can be found in various vendor manuals, such as the Qualstar TLS-6000 Technical Services Manual, section 2.3.1, at www.qualstar.com/146035.htm#pubpdf DLT cartridge inspection/leader repair See Product Information Note at www.qualstar.com/146035.htm#pubpdf DLT cleaner tape When a DLT cleaning tape is used, the drive writes a tape mark 1/20th of the way down the tape. The next cleaning uses up 1/20 more tape. When you have used it 20 times, putting it back in the drive doesn't clean anything. You can degauss it to erase the tape marks and then reuse it up to 3 times, though that can result in the tape head being dirtied rather than cleaned. DLT drives All are made by Quantum. Quantum bought the technology from DEC, which at the time called them TKxx tape drives. DLT Forum Is on the Quantum Web Site: http://www.dlttape.com/index_wrapper.asp DLT IV media specs 1/2 inch data cartridge Metal particle formulation for high durability. 1,828 feet length 30 year archival storage life 1,000,000 passes MTBF 35 GB native capacity on DLT 7000, 20GB on DLT 4000 40 GB native capacity on DLT 8000 DLT Library sources http://www.adic.com DLT media life DLT tapes are spec'd at 500,000 passes. 
In general, the problem that usually occurs with DLT is not tape wear, but contamination. The cleaner the environment, the better chance the tapes will have of achieving their full wear life...some 38 years. Streaming will prematurely wear the tapes. DLT tapes density DLT 4000 are 20GB native, 40GB "typical compression". Manually load a tape and look very carefully at the density lights on the DLT drive. DLT tapes can do 35GB, but for backwards compatibility they can do lower densities. The drive decides on the density when the tape is first written to, and that density is used forever more. It is possible to "reformat" the media to a higher density: 0. Make sure there is no ADSM data on the tape and the volume has been deleted from the library and ADSM volume list. Mark the drive as "offline" in ADSM. 1. Mount the tape manually in the drive. 2. Use the "density select" button to choose 35GB. 3. At the UNIX system: 'dd if=/dev/zero of=/dev/rmt/X count=100' (/dev/rmt/X is the real OS device driver for the drive) 4. Dismount the tape. 5. Mark the drive as online. 6. Get ADSM to relabel the tape. This works because the DLT drive will change the media density IF it is writing at the beginning of the tape. This should result in getting > 35GB on DLT tapes. DLT vs. Magstar (3590, 3570) drives DLT tapes are clumsy and fragile; with a DLT the queue-up time is much longer than with any of the Magstars, and the search time is even worse; DLT drive heads wear faster. DLT also writes data to the very edges of the tape, causing the tape edges to wear. Both have cartridges consisting of a single spool, with the tape pulled out via a leader. DLTs are prone to load problems, especially as the drive and tape wear: there is a little hook in the drive that must engage a plastic loop in the tape leader, and when the hook comes loose from its catch, a service call is required to get it repaired. And, of course, the plastic leader loop breaks. 
Customers report Magstar throughput much faster than DLT, helped by the servo tracks on tape that DLT lacks. Magstar MPs are optimized for start-stop actions, and that is much of what ADSM will do to a drive. DLT is optimized for data streaming. If an MP tape head gets off alignment during a write operation, the servo track reader on the drive stops writing and adjusts. DLT aligns itself during the load of the tape. If it gets off track during a write it has no way to correct, and could overwrite data. New technology DLT drives can read older DLT tapes, whereas Magstar typically does not support backward compatibility. DLT4000 Capacity: 20GB native, 40GB "typical compression". Transfer rate: 1.5 MB/sec DLT7000 Digital Linear Tape drives, often found in the STK 9370. Can read DLT4000 tapes. Tape capacity: 35 GB. Transfer rate: 5 MB/sec Beware that they have had power supply problems (there are 2 inside each drive): low voltage on those power supplies will cause drives to fail to unload. And always make sure to be at the latest stable microcode level. See also: SuperDLT DLT7000 cleaning There is a cleaning light, and it comes on for two different things: "clean requested", and "clean required". There is a tiny cable that goes from the drives back to the robot. With hardware cleaning on, that is how the "clean required" gets back to the robot and causes it to mount the cleaning tape. A "clean request" doesn't. That is, the light coming on does not always result in cleaning being done. DLT7000 compression DLT7000 reportedly come configured to maximize data thruput, and will automatically fall out of compression to do this. If you want to maximize data storage, then you need to modify the drive behavior. See the hardware manual. DLT7000 tape labels Reportedly must be a 1703 style label and have the letter 'd' in the lower left corner. DLT8000 Digital Linear Tape drives. DLT type IV or better cartridges must be used. Can read DLT4000 tapes. Tape capacity: 40 GB. 
Transfer rate: 6 MB/sec DM services Unexplained Tivoli internal name for HSM under TSM, as seen in numerous references in the Messages manual series 9000 messages, apparently because it would be too confusing for its Tivoli Space Manager to have the acronym "TSM". "DM" probably stands for Data Migrator. .DMP File name extension created by the server for FILE type scratch volumes which contain Database dump and unload data. Ref: Admin Guide, Defining and Updating FILE Device Classes DNSLOOKUP TSM 5.2+ compensatory server option for improving the performance of Web Admin and possibly other client access by specifying: DNSLOOKUP NO Background: DNS lookup control is provided in web (HTTPD) servers in general. (In IBM software, the control name is DNSLOOKUP; in the popular Apache web server, the control is HostnameLookups.) Web servers by default perform a reverse-DNS query on the requesting IP address before servicing the web request. This reverse-DNS query (C gethostbyaddr call) is used to retrieve the host and domain name of the client, which is logged in the access log and may be used in various ways. The problem comes when DNS service is impaired. It may be the case that your OS specifies multiple DNS servers, and one or more of them may not actually be DNS servers, or may be down, or unresponsive. This can result in a delay of up to four seconds before rotating to the next DNS server. Other causes of delay involve use of a firewall, or DHCP with no DNS server (list) specified. You can gauge whether you have such a DNS problem through the use of the 'nslookup' or 'host' commands. Note that DNS lookup problems affect the performance of all applications in your system, and should be investigated, as the use of gethostbyaddr is common. With DNSLOOKUP NO specified, only the IP address is recorded. 
See also: Web Admin performance issues Documentation, feed back to IBM Send comments on manuals, printed and online, to: starpubs@sjsvm28.vnet.ibm.com Domain See: Policy Domain DOMain Client User Options file (dsm.opt) option to specify the default file systems in your client domain which are to be eligible for incremental backup, as when you do 'dsmc Incremental' and do not specify a file system. DOMain is ignored in Archive and Selective Backup. The DOMain statement can be coded repeatedly: the effect is additive. That is, coding "DOMain a:" followed by "DOMain b:" on the next line is the same as coding "DOMain a: b:". Note that Domains may also be specified in the client options set defined on the server, which are also additive, preceding what is coded in the client's options file. When a file system is named via DOMain, all of its directories are always backed up, regardless of Include/Exclude definitions: the Include/Exclude specs affect only eligibility of *files* within directories. AIX: You cannot code a name which is not one coded in /etc/filesystems (as you might try to do in alternately mounting a file system R/O): you will get an ANS4071E error message. Default: all local filesystems, except /tmp. (Default is same as coding "ALL-LOCAL", which includes all local hard drives, excluding /tmp, and excludes any removable media drives, such as CD-ROM, and excludes loopback file systems and those mounted by Automounter. Local drives do not include NFS-mounted file systems.) Verify: 'dsmc q fi' or 'dsmc q op'. Override by specifying file systems on the 'incremental' command, as in: 'dsmc Incremental /fs3' Note that instead of a file system you can code a file system subdirectory, defined previously via the VIRTUALMountpoint option. Do not confuse DOMain with Policy Domain: they are entirely different! 
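The additive behavior of DOMain can be illustrated with a small dsm.opt fragment; the file system names here are examples only, not a recommendation:

```
* Hypothetical dsm.opt lines (an asterisk in column 1 is a comment).
DOMain /home
DOMain /var /usr
* The two DOMain lines above have the same effect as the single line:
* DOMain /home /var /usr
```

Verify the resulting domain with 'dsmc q op' (ADSM) or 'dsmc show options' (TSM), as noted above.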
See also: File systems, local; SYSTEMObject Domain list, in GUI From the GUI menu, choose "edit" -> preferences; there you'll find a "backup" tab which will give you access to your domain options, and a self-explanatory "include-exclude" tab. -DOMain=____ Client command line option to specify file system name(s) which augment those specified on the Client User Options file DOMain statement(s). For example: If your options file contains "DOMain /fs1 /fs2" and you invoke a backup with -DOMain="/fs3 /fs4" then the backup will operate on /fs1, /fs2, /fs3, and /fs4. Note that both DOMain and -DOMain are ignored if you explicitly list file systems to be backed up, as with 'dsmc i /fs7 /fs8'. DOMAIN.Image Client Options File (dsm.opt) option for those clients supporting Image Backups. Specifies the mounted file systems and raw logical volumes to be included by default when Backup Image is performed without file system or raw logical volume arguments. Syntax: DOMAIN.Image Name1 [Name2 ...] See also: dsmc Backup Image; MODE domdsm.cfg The default name for the TDP For Domino configuration file. Values in that file are established via the 'domdsmc set' command. Note that if the file contains invalid values, TDP will use default values. "Preference" info, by default, comes from this cfg - not domdsm.opt. Remember that dsm.opt is the TSM API config file. You can point to an alternate configuration file using the DOMI_CONFIG environment variable. domdsmc query dbbackup TDP Domino command to report on previously backed up Domino database instances. If it fails to find any, it may be that the domdsmc executable does not have the set-user-id bit on: perform Unix command 'chmod 6771' to turn it on. See IBM KB article 1109089. Domino See: domdsm.cfg; Lotus Domino; Tivoli Storage Manager for Mail Domino backup There are two *guaranteed* ways to get a consistent Domino database backup: 1) Shut down the Domino server and back up the files, as via the B/A client. 
2) Use Data Protection for Domino, which uses the Domino backup and restore APIs. This can be done while the Domino Server is up, even if the database is changing during backup. Some customers point to the TSM 5.1 Open File support and believe they can use that instead; but if a database is "open", you cannot absolutely guarantee that the database will be in a consistent state during the point in time the "freeze" happens, because not all of the database may be on the disk - some may still be in memory. The Domino transaction logging introduced in Domino 5 makes sure that the database can be made consistent even after a crash. Domino restoral considerations When performing a restoral with TDP Notes, the restored physical files are seen to have contemporary timestamps, rather than reflecting the timestamps of the backups. This is because the external, physical file timestamps don't matter, and receive no special attention: what matters are the timestamps internal to the Domino database, which is what the TDP is concerned with. DOS/Win31 client Available in ADSM v.2, but not v.3. dpid2 daemon Serves as a translator between SMUX and DPI (SNMP Multiplexor Protocol and Distributed Protocol Interface) traffic. Make sure that it is known to the snmp agent, as by adding a 'smux' line to /etc/snmpd.conf for the dpid2 daemon; else /var/log could fill with msgs: dpid2 lost connection to agent dpid2 smux_wait: youLoseBig [ps2pe: Error 0] Dr. Watson errors (Windows) May be caused by having old options in your options file, which are no longer supported by the newer client. DRIVE FORMAT value in DEFine DEVclass to indicate that the maximum capabilities of the tape drive should be used. Note that this is not as reliable or as definitive as more specific values. See also: 3590B; 3590C; FORMAT Drive A drive is defined to belong to a previously-defined Library. Drive, define to Library See: 'DEFine DRive' Drive, update 'UPDate DRive ...' (q.v.) 
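To supplement the 'DEFine DRive' pointer above, a hedged sketch of an admin macro; the library, drive, server, and device names here are hypothetical, and the Path definition applies to TSM 5.1+:

```
/* Hypothetical macro: define a drive in an existing library,      */
/* then (TSM 5.1+) the path by which the server reaches it.        */
DEFine DRive OURLIB DRIVE1
DEFine PATH SERVER1 DRIVE1 SRCType=SERVer DESTType=DRive LIBRary=OURLIB DEVIce=/dev/rmt1
```

Run such a file via 'dsmadmc ... macro FileName', then check the results with 'Query DRive' and 'Query PATH'.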
Drive, vary online/offline 'UPDate DRive ...' (q.v.) Drive cleaning, excessive Can be caused by bad drive microcode, as seen with DLT7000. The microcode does not record the calibration track onto the tapes correctly, so the drives detect a weak signal and think that cleaning is needed. Drive mounts count See: 3590 tape mounts, by drive Drive status, from host 'mtlib -l /dev/lmcp0 -f /dev/rmt1 -qD' DRIVEACQUIRERETRY TSM4.1 server option for 3494 sharing. Allows an administrator to set the number of times the server will retry to acquire a drive. Possible values: 0 To retry forever. This is the default. -1 To never retry. 1 to 9999 The number of times the server will retry. See also: 3494SHARED; MPTIMEOUT Driver not working - can't see tape drives Has occurred in the case of an operating system like Solaris 2.7 booted in 64-bit mode, but the driver being 32-bit. DRIVES SQL table. Elements, as of ADSMv3: LIBRARY_NAME: FSERV.LIB DRIVE_NAME: FSERV.3590_500 DEVICE_TYPE: 3590 DEVICE: /dev/rmt5 ONLINE: YES ELEMENT: ACS_DRIVE_ID: LAST_UPDATE_BY: LAST_UPDATE: CLEAN_FREQ: Later, TSM added the columns... DRIVE_STATE ALLOCATED_TO Note: Does not reveal the media mounted on a drive. Drives, maximum to use at once See: MOUNTLimit Drives, not all in library being used (Insufficient mount points; some sessions waiting for mount points) As in you find processes waiting for drives (do 'Query SEssion F=D' and find ANR0535W, ANR0567W), though you believe you have enough drives in the library to handle the requests... - Most obviously, do 'Query DRive' and make sure all are online. - In the server, do 'SHow LIBrary' and see if it thinks all the drives are available. Inspect the "mod=" value: if you have a mixture of model numbers, some of your drives might not get used. A further consideration is that using new drives with old server software (as with inappropriate definitions such that TSM thinks they are older drives) could result in erratic behavior, as in perhaps balky dismounting, etc. 
Review TSM documentation on how to best define such devices for use in your library, and appropriate levels of software and device drivers. - If all your drives get rotationally used, but all cannot be used simultaneously, then it's a DEVclass MOUNTLimit problem (and be aware that MOUNTLimit=DRIVES is not always reliable, so may be better to explicitly specify the number). - If not all drives get rotationally used, some have a problem: Attempt to use 'mtlib' and 'tapeutil'/'ntutil' commands on those. - Check your client MAXNUMMP value. - Watch out for the devclass for your drives somehow having changed and thus being incompatible with your storage pools. - If just certain drives never get used, then there is a problem specific to those drives... - If a 3494 or like library, look for an Intervention Required condition, caused by a load/unload failure or similar, which takes the drive out of service. - At the library manager station, check the availability status of the drives. (They can be logically made unavailable there.) - Check the front panel of the drives, looking for "ONLINE=0" or like anomaly. - In AIX, do 'lsdev -C -c tape -H -t 3590' and see if all drives have status of Available. - Are you trying to use a new tape technology with a server level which doesn't support it such that the drive devclass is GENERICTAPE rather than the actual type, needed to mount and use the tapes that go with that drive technology? - In a more obscure case, a 3494/3590 customer reports this being caused by the cleaning brush on the drive not functioning correctly: replaced, cleaned, no more problem. - 5.1 changed things so that we now have to define a Path for libraries and drives, which may be at the root of your difficulty. Do Query PATH in addition to Query DRive, and possibly SHow LIBRary, to seek out any missing defs or bad states. - Assure that your MAXscratch value is appropriate. Keep in mind that various TSM tasks simply cannot be done in parallel. 
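One of the checks in the list above is scanning 'lsdev' output for drives that are not Available. A minimal sketch, run here against fabricated sample output rather than a live AIX system:

```shell
# Fabricated sample of 'lsdev -C -c tape -t 3590' output; on a real
# AIX host you would pipe lsdev itself into the awk instead.
cat <<'EOF' > /tmp/lsdev.sample
rmt0 Available 04-08-00-0,0 IBM 3590 Tape Drive and Medium Changer
rmt1 Defined   04-08-00-1,0 IBM 3590 Tape Drive and Medium Changer
EOF
# Print the name of any drive whose status column is not "Available"
awk '$2 != "Available" {print $1}' /tmp/lsdev.sample   # -> rmt1
```

Any drive this reports deserves the device-level checks described above (front panel, library manager station, mtlib/tapeutil).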
Drives, number of in 3494 Via Unix command: 'mtlib -l /dev/lmcp0 -qS' Drives, query 'Query DRive [LibName] [DriveName] [Format=Detailed]' DRM TSM Disaster Recovery Manager. In AIX environment, does 2 major things: 1. Automates (mostly) the vaulting process for moving/tracking copy storage pool tapes and DB backup tapes offsite and onsite. If you have a tape robot and do a lot of tape vaulting you can either: a) Have a very expensive ADSM administrator do all the checking and status updates daily for vaulting tapes; b) Have a very expensive UNIX dude write scripts to automate the process (and of course maintain them); or c) Pay for DRM and get the function ready to go out of the box. 2. Generates the "recovery plan" file that is a concatenated series of scripts and instructions that tell you how to rebuild your *SM server in an offsite, DR environment (which is the first thing you have to do in a disaster situation - you have to get your *SM server back up at your recovery site, before you can start using *SM to recover your appls.) Ref: Admin Guide manual; Tivoli Storage Management Concepts redbook Competing product: AutoVault, at CodeRelief.com - a very inexpensive alternative, no TSM hooks. See also: ORMSTate DRM, add primary, copy stgpools SET DRMPRIMSTGPOOL SET DRMCOPYSTGPOOL DRM, prevent from checking tape label To keep DRM from checking the tape label before ejecting a tape: Set DRMCHECKLabel No DRM and ACS libraries DRM won't do checkouts from ACS libraries. (You can write scripts to work around it.) DRM considerations Numerous customers report encountering inconsistencies with DRM, as in doing Query DRMedia and finding 18 of 50 offsite volumes not listed. This may have to do with changing status of vault retrieve volumes which somehow are not checked-in in time. When the volume history is truncated to the point where this state change was made, the volume is 'lost'. - Make sure that you use DRM to expire *SM database backup volumes. 
- Watch out for human error: In using MOVe DRMedia to return tapes, if a volser is mistyped for a volume that is still physically offsite but has just gone to vault retrieve state, the volume will be deleted and left at the vault: it's not in a DRM state anymore and you have to do a manual inventory to find it. - The offsite vendor can mistakenly omit a tape to be returned and ops runs MOVe DRMedia anyway and the tape is "lost". - A volume inadvertently left in the tape library and not sent offsite cannot be returned. - A MOVe DRMedia done by mistake, or an automated script which is not in tune with retention policies, can result in inconsistencies. As always, keeping good records will help uncover and rectify problems. If an automated library, after you explode the DRM files, you may have to edit DEVICE.CONFIGURATION.FILE to put the actual location and volser of your DB backup tape in it. That's so the DR script (and the server) can find it. DRMDBBackupexpiredays See: Set DRMDBBackupexpiredays DRMEDIA SQL: TSM database table recording disaster recovery media, which is to say database backup volumes and copy storage pool volumes. Columns, with samples: VOLUME_NAME: 000004 STATE: MOUNTABLE (always this unless MOVe DRMedia is done) UPD_DATE: 2000-11-12 15:11:29.000000 LOCATION: STGPOOL_NAME: OUR.STGP_COPY LIB_NAME: OUR.LIB VOLTYPE: CopyStgPool DBBackup dscameng.txt American English message text file. The DSM_DIR client environment variable should point to the directory where the file should reside. dsierror.log *SM API error log (like dsmerror.log) where information about processing errors is written. Because buta is built upon the API, use of buta also causes this log to be created. The DSMI_LOG client environment variable should point to the directory where you want the dsierror.log to reside. If unspecified, the error log will be written to the current directory. The error log for client root activity (HSM migration, etc.) will be /dsierror.log. 
See also: DSMI_LOG; "ERRORLOGRetention"; tdpoerror.log ____.dsk VMware virtual disk files, such as win98.dsk, linux.dsk, etc. Backing up such files per se is not the best idea, and is worse if the .dsk area is active. The best course is to run the backup from within the guest operating system. dsm The GUI client for backup/archive, restore/retrieve. Contrast with 'dsmc' command, for command line interface. AIX: /usr/lpp/adsm/bin/dsm IRIX: /usr/adsm/dsm Solaris: /opt/IBMDSMba5/solaris/dsm and symlink from /usr/sbin/dsmc Beware: ADSM install renders this cmd setGID bin, which thwarts superuser uses. Assure setGID chmod'ed off. Ref: Using the UNIX Backup-Archive Client, chapter 1. DSM_CONFIG Client environment variable to point to the Client User Options file (dsm.opt) for users who create their own rather than depend upon the default file /usr/lpp/adsm/bin/dsm.opt. Ref: "Installing the Clients" manual. See also: -optfile DSM_DIR Officially, the client environment variable to point to the directory containing dscameng.txt, dsm.sys, dsmtca, and dsmstat. But is also observed by /etc/rc.adsmhsm as the directory from which HSM should run installfsm, dsmrecalld, and dsmmonitord. Ref: "Installing the Clients" manual. DSM_LOG Client environment variable to point to the *directory* where you want the dsmerror.log to reside. (Remember to code the directory name, not the file name.) If undefined, the error log will be written to the current directory. Beware symbolic links in the path, else suffer ANS1192E. Advice: Avoid using this if possible, because it forces use of a single error log file, which can make for permissions usage problems across multiple users, and muddy later debugging in having the errors from all manner of sessions intermixed in the file. Ref: "Installing the Clients" manual. 
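The DSM_DIR, DSM_CONFIG, and DSM_LOG variables described above might be set in a shell profile along these lines; the paths are examples only and must be adjusted for your installation:

```shell
# Hypothetical settings for a TSM 3.7+ AIX client install path.
export DSM_DIR=/usr/tivoli/tsm/client/ba/bin   # dsm.sys, dsmtca, message files
export DSM_CONFIG=$HOME/dsm.opt                # a user's own client options file
export DSM_LOG=$HOME/tsmlogs                   # a *directory* for dsmerror.log
```

Note that DSM_LOG names a directory, not a file, per the DSM_LOG entry above.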
See also: ERRORLOGName option dsm.afs The dsm.afs backup style provides the standard ADSM user interface and backup/restore model to AFS users, which unlike plain dsm will back up AFS Access Control Lists for directories. Users can have control over the backup of their data, and can restore individual files without requiring operator intervention. Individual AFS files are maintained by the ADSM system, and the ADSM management classes control file retention and expiration. Additional information is needed in order to restore an AFS server disk. Contrast with buta, which operates on entire AFS volumes. dsm.ini (Windows client) The ADSMv3 Backup/Archive GUI introduced an Estimate function. It collects statistics from the ADSM server, which the client stores, by server, in the dsm.ini file in the backup-archive client directory. (Comparable file in the Unix environment is .adsmrc.) Client installation also creates this file in the client directory. Ref: Client manual chapter 3 "Estimating Backup processing Time"; ADSMv3 Technical Guide redbook This file is also being used, in at least a provisional manner, to make the GUI configurable, as in limiting what an end user can do. See IBM site Solution swg21109086. See also: .adsmrc; Estimate; TSM GUI Preferences dsm.opt file See Client User Options file. AIX: /usr/lpp/adsm/bin/dsm.opt. IRIX: /usr/adsm/dsm.opt. Solaris: /usr/bin (so located due to the Solaris packaging mechanism wherein an install will delete old files, and /usr/bin was deemed "safe" - but not really the best choice) The DSM_CONFIG client environment variable may point to the options file to use, instead of using the options file in the default location. dsm.opt.smp file Sample Client User Options file. Use this to create your first dsm.opt file. dsm.sys file See: Client System Options File. 
AIX: /usr/lpp/adsm/bin/dsm.sys IRIX: /usr/adsm/dsm.sys Solaris: /usr/bin (so located due to the Solaris packaging mechanism wherein an install will delete old files, and /usr/bin was deemed "safe" - but not really the best choice) The DSM_DIR client environment variable may be used to point to the directory where the file to be used resides. Beware there being multiple dsm.sys files, as in AIX maybe having: /usr/tivoli/tsm/client/api/bin/dsm.sys /usr/tivoli/tsm/client/api/bin64/dsm.sys /usr/tivoli/tsm/client/ba/bin/dsm.sys dsm.sys.smp file Sample Client System Options file. Use this to create your first dsm.sys file. In /usr/lpp/adsm/bin dsmaccnt.log This is the ADSM server accounting file on an AIX system, which is written to after 'Set ACCounting ON' is done. The file is located in the directory from which the server is started, which is typically /usr/lpp/adsmserv/bin/. See also: Accounting... dsmadm The GUI command for server administration of Administrators, Central Scheduler, Database, Recovery Log, File Spaces, Nodes, Policy Domains, Server, and Storage Pools. Contrast with the 'adsm' command, which is principally for client management. dsmadmc *SM administrative client command line mode for server cmds, available as a client on all *SM systems where the *SM client software has been installed. (On Windows clients, dsmadmc is not installed by default: you have to perform a Custom install, marking the admin command line client for installation. After a basic install, you can go back and install dsmadmc by reinvoking the install, choosing Modify type, there marking just the admin command line client for installation. See IBM doc item 1083434.) The dsmadmc command starts an "administrative client session" to interact with the server from a remote workstation, as described in the *SM Administrator's Reference. In Unix, the version level preface and command output all go to Stdout. 
Note that the dsmadmc command is neutral: you can use it on any platform type to communicate to a TSM server on any platform type. The dsmadmc invoker does not have to be a superuser. To enter console mode (display only): 'dsmadmc -CONsolemode' To enter mount mode (monitor mounts): 'dsmadmc -MOUNTmode' To enter batch mode (single command): 'dsmadmc -id=____ -pa=____ Command...' 'dsmadmc -id=____ -pa=____ macro Name' To enter interactive mode: 'dsmadmc -id=YourID -pa=YourPW' Options: -CONsolemode Run in Console mode, to display TSM server msgs but allow no input. -DATAOnly=[No|Yes] (TSM 5.2+) To suppress the display of headers (product version, copyright, ANS8000I command echo, column headers) and ANS8002I trailer. Error messages are not suppressed. -DISPLaymode=[LISt|TABle] The interface is normally adaptive, displaying output in tabular form if the window is wide enough, otherwise reverting to Identifier:Value form. This option allows you to force query output to one or the other, regardless of the window width. Note that, regardless of window width, query commands may be programmed with a fixed column width. -ID=____ Specify administrator ID. -Itemcommit Say that you want to commit commands inside a macro as each command is executed. This prevents the macro from failing if any command in it encounters "No match found" (RC 11) or the like. See also: COMMIT -MOUNTmode Run in Mount mode, to display all mount messages, such as ANR8319I, ANR8337I, ANR8765I. No input allowed. -NOConfirm Say you don't want TSM to request confirmation before executing vital commands. Example: Select, "This SQL query might generate a big table, or take a long time. Do you wish to continue ? Y/N" -OUTfile=____ All terminal commands and responses are to be captured in the named file, as well as be displayed on the screen. The file will not reflect command input prompting but will record the cmd. Use this rather than Unix 'dsmadmc | tee ', which doesn't work. 
-PASsword=____ Specify admin password. -Quiet Don't display Stdout msgs to screen; but Stderr will. -SERVER=____ Select a server other than the one in this system's client options file. (Not avail. in Windows: use -TCPServeraddress instead.) -COMMAdelimited Specifies that any tabular output from a server query is to be formatted as comma-separated strings rather than in readable format. This option is intended to be used primarily when redirecting the output of an SQL query (SELECT command). The comma-separated value format is a standard data format which can be processed by many common programs, including spreadsheets, data bases, and report generators. Note that where values themselves contain commas, TSM will enclose the value in quotes, e.g. "123,456". -TABdelimited Specifies that any tabular output from a server query is to be formatted as tab-separated strings rather than in readable format. This option is intended to be used primarily when redirecting the output of an SQL query (SELECT command). The tab-separated value format is a standard data format which can be processed by many common programs, including spreadsheets, databases, and report generators. Tabs make parsing easier compared to commas, in that it is not uncommon for values to contain commas. You can also specify any option allowed in the client options file. Alas, there is no option to specify a file containing a list of commands to be invoked. The dsmadmc client command is obviously useless if the server is not up. See my description of the ANS8023E message. Notes: Prior to TSM 5.2 and the -DATAOnly option, there is no way to suppress headers or ANS800x messages that appear in the output - you are left to remove them after the fact. You might use ODBC, but that accesses just the TSM db, not any TSM commands. You can suppress the "more..." scrolling prompt only by running a command in batch mode (adding the command to the end of the line) and piping the output to cat... dsmadmc SomCmd | cat. 
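The delimiter options above matter when values themselves contain commas; a small sketch against simulated output rows (sample data, not real server output):

```shell
# Simulated one-row outputs as dsmadmc might emit them (fabricated).
# With -COMMAdelimited, a value containing a comma arrives quoted:
comma_row='NODE1,"123,456",AIX'
# With -TABdelimited, the same value needs no quoting, so a plain
# cut on the default tab delimiter recovers the field intact:
tab_row=$(printf 'NODE1\t123,456\tAIX')
printf '%s\n' "$tab_row" | cut -f2    # -> 123,456
```

This is why the entry above observes that tabs make parsing easier than commas.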
Install note: dsmadmc may not install by default (see the Windows note above). Ref: Admin Ref chapter 3: "Using Administrative Client Options". See also: -Itemcommit dsmapi*.h *SM API header files, for compiling your own API-based application: dsmapifp.h dsmapips.h dsmapitd.h In TSM 3.7, lives in /usr/tivoli/tsm/client/api/bin/sample/ They are best included in C source modules in the following order: #include "dsmapitd.h" #include "dsmapifp.h" #include "dapitype.h" #include "dapiutil.h" #include "dsmrc.h" See also: libApiDS.a dsmapitca The ADSM API Trusted Communication Agent. For non-root users, the ADSM client uses a trusted client (dsmtca) process to communicate with the ADSM server via a TCP session. This dsmtca process runs setuid root, and communicates with the user process (API) via shared memory, which requires the use of semaphores. The DSM_DIR client environment variable should point to the directory where the file should reside. dsmattr HSM: Command to set or display the recall mode for a migrated file. Syntax: 'dsmattr [-RECAllmode=Normal|Migonclose| Readwithoutrecall] [-RECUrsive] FileName(s)|Dir(s)' See "Readwithoutrecall". dsmautomig (HSM) Command to start threshold migration for a file system. dsmmonitord checks the need for migration every 5 minutes (or as specified via the CHEckthresholds option in the Client System Options file (dsm.sys)) and if needed will automatically invoke dsmautomig to do threshold migrations. Query: ADSM 'dsmc Query Options' or TSM 'dsmc show options', look for "checkThresholds". Note that persistent dsmautomig invocations are an indication that HSM thinks the file system is running out of space, despite what a 'df' may show. Deleting files or extending the file system has been shown to stop these "dry heaves" dsmautomig invocations. See "dsmmonitord", "automatic migration", "demand migration". dsmBeginQuery API function. dsmBindMC API call to bind the file object to a management class. 
It does so by scanning the Include/Exclude list for a spec matching the object, wherein you may have previously coded a management class for a filespec. What the call returns reflects what it has found - which is to say that the dsmBindMC call does not itself specify the Management Class. You'll end up with the default management class if the dsmBindMC processing did not find a spec for the object in the Include/Exclude list. It would be nice if there were a call which were as definitive as the -ARCHMc spec for the command line client, but such is not the case. dsmc Command-line version of the client for backup-restore, archive-retrieve. Invoking simply 'dsmc' puts you into the command line client, in interactive mode (aka "loop mode"). Contrast with the 'dsm' command, for the graphical interface (GUI). To direct to another server, invoke like this: 'dsmc q fi -server=Srvr', or 'dsmc i -server=Srvr /home'. (Note that the options *must* be coded AFTER the operation.) AIX: /usr/lpp/adsm/bin/dsmc IRIX: /usr/adsm/dsmc NT: Reference the B/A Client manual for Windows, section "Starting a Command Line Session", where you can Start->Programs->TSM folder->Command Line icon; or use the Windows command line to shuffle over to the TSM directory and issue the 'dsmc' command. Solaris: /opt/IBMDSMba5/solaris/dsmc, and symlink from /usr/sbin/dsmc Note that you can run a macro file with dsmc: put various commands like Incremental into a file, then run as 'dsmc macro MacroFilename'. Beware: ADSM install renders this cmd setGID bin, which thwarts superuser uses. Make sure the setGID bit has been chmod'ed off. Ref: Using the UNIX Backup-Archive Client, chapter 7. See also: dsmc LOOP dsmc and wildcards (asterisk) New TSM users, in a Unix environment at least, may not realize that how they use a wildcard can produce results wholly different from what they expect. 
For example: A novice user goes into a directory and wants to see all the files that are in the backup storage pool for that directory, so they enter: dsmc query backup * But what does that really do? The asterisk is exposed to the Unix shell that is controlling the user session, and it expands the asterisk into a list of all the files in the directory. So the query will end up trying to ask the TSM server for information on the files currently in the directory - which may have no correlation with what is in the backup storage pool. (This theoretical example sidesteps the TSM complication that it may disallow such wildcarding, with error message ANS1102E; but we're trying to explore a point here.) So how do you then pose the request to the TSM server that it show all backed up files from the directory? By one of the following constructs (where this is a Unix example): dsmc query backup '*' dsmc query backup \* dsmc query backup "*" By quoting or escaping the asterisk, the shell passes it, intact, to the dsmc command, which responds by formulating an API request to the TSM server for all files contained within the stored filespace for this directory. And this yields the expected results. The rule here may be expressed as: * refers to the file system '*' refers to the filespace Note that the above does *not* apply to the Windows environment: the Windows command processor does not expand wildcards, but rather just passes them on to the invoked program as-is. dsmc Archive To archive named files. Syntax: 'Archive [-ARCHMc=managementclass] [-DELetefiles] [-DEscription="..."] [-SErvername=StanzaName] [-SUbdir=No|Yes] [-TAPEPrompt=value] FileSpec(s)' The number of FileSpecs is limited to 20; see "dsmc command line limits". Wildcard characters in the FileSpec(s) can be passed to the Archive command for it to expand them: this avoids the shell implicitly expanding the names, which can result in the command line arguments limit being exceeded. 
For example: instead of coding: dsmc Archive myfiles.* code: dsmc Archive 'myfiles.*' or... dsmc Archive myfiles.\* Note that the archive operation will succeed even if you don't have Unix permissions to delete the file after archiving. It is important to understand that an Archive operation is deemed "explicit": that you definitely want all the specified files sent...WITHOUT EXCEPTION. Because of this, message ANS1115W and a return code 4 will be produced if you have an Exclude in play for an included object. (Due to the preservational nature of Archive, you very much want to know if some file was not preserved.) It is advisable to make use of the DEscription, as it renders the archived object unique - but be aware that doing so also forces the path directories to be archived once more, if the description is unique. Archiving a file automatically archives the directories in the path to it. As of ADSMv3.1 mid-1999 APAR IX89638 (PTF 3.1.0.7), archived directories are not bound to the management class with the longest retention. Note that you cannot change the archive file Description after archiving. See also: DELetefiles; dsmc Archive dsmc Backup Image TSM3.7+ client command to create an image backup of one or more file spaces that you specify. Available for major Unix systems (AIX, Sun, HP). This is a raw logical volume backup, which backs up a physical image of a volume rather than individually backing up the files contained within it. This is achieved with the TSM API (which must be installed). This backup is totally independent of ordinary Backup/Restore, and the two cannot mingle. Image backups need to be run as "root". Syntax: 'dsmc Backup Image File_Spec' where File_Spec identifies either the name of the file system that occupies the logical volume (more specifically, the mount point directory name), when that file system is mounted; or the name of the logical volume itself, when it has no mounted file system. 
If the volume contains a file system, you must specify by file system name: that allows you to supplement the image backup with Incremental or Selective backups via the MODE option. It also assures that the mounted file system, if any, is dismounted before the image backup is performed. The client and server both must be at least 3.7. Advisory: When a file system is specified, the operation will try to unmount the file system volume, remount it read-only, perform the backup, and then remount it as it was. This can be disruptive, and is problematic if the backup is interrupted. Use the Include.Image option to include an image for backup, or to assign a specific management class to an image object. Syntax: 'dsmc Backup Image [Opts] Filespec(s)' Ref: Redbook "Tivoli Storage Manager Version 3.7 Technical Guide"; IBM online info item swg21153898 Msgs: ANS1063E; ANS1068E See also: MODE dsmc Backup NAS Contacts the TSM EE server for it to initiate an image backup of one or more file systems belonging to a Network Attached Storage (NAS) file server. The NAS file server performs the outboard data movement. A server process starts in order to perform the backup. See also: NDMP; NetApp dsmc BACKup SYSTEMObject Windows client command to back up all valid system objects, allowing you to perform a backup of System Objects separate from ordinary files. Note that an Incremental Backup will ordinarily also back up System Objects. Verification: The backup log will show messages like "Backup System Object: Event log", "Backup System Object: Registry". Note that this command cannot be scheduled. dsmc CANcel Restore ADSMv3 client command to cancel a Restore operation. 
See also: CANcel RESTore dsmc command line limits By default, the number of FileNames which can be specified on the dsmc command line is limited to 20 (message ANS1102E); and the TSM backup-archive client's command-line parsing is limited to 2048 total bytes (message ANS1209E The input argument list exceeds the maximum length of 2048 characters.). The intent is to protect hapless customers from themselves - but that of course penalizes everyone, deprives the product of the flexibility that its Enterprise status warrants, and prevents it from scaling to the capabilities of the operating system environment which the customer chose for large-scale processing. (In AIX, at least, the command line length limit is defined by the ARG_MAX value in /usr/include/sys/limits.h: exceeding that results in the typical shell error "arg list too long".) As of the TSM 5.2.2 Unix client, this limitation is relieved in the form of the -REMOVEOPerandlimit command line option. In other environments, there are some circumventions you can employ: - Use the -FILEList option. - In the Unix environment, use the 'xargs' command to efficiently invoke the command with up to 20 filespecs per invocation, via the -n20 option. Within an interactive session (which you invoked by entering 'dsmc' with no operands): a physical line may not contain more than 256 characters, and may be continued to a maximum of 1500 characters. Ref: B/A Clients manual, "Entering client commands" See also: -FILEList; -REMOVEOPerandlimit dsmc Delete ACcess TSM client command to revoke access to files that you previously allowed others to access via 'dsmc SET Access'. Syntax: 'dsmc Delete ACcess [options]' You will be presented with a list from which to choose. (As such, this is a quick, convenient way to display all access permissions.) dsmc Delete ARchive TSM client command to delete Archived files from TSM server storage. 
Syntax: 'dsmc Delete ARchive [options] FileSpec' In more detail: 'dsmc Delete ARchive [-NOPRompt] [-DEscription="..."] [-PIck] [-SErvername=StanzaName] [-SUbdir=No|Yes] FileSpec(s)' If you do not qualify the deletion with a unique Archive file description, all archived files of that name will be deleted. The number of FileSpecs is limited to 20; see "dsmc command line limits". The delete actually only marks the entries for deletion: it is Expire Inventory which actually removes the entries and reclaims space. But the marking is irreversible: there is no means provided for customers to un-mark the files; and the marking does not show up in the Archives table. Thus, a Select on the Archives table continues to show the files exactly as before the Delete Archive. dsmc Delete Filespace ADSM client command to delete filespaces from *SM server storage. Syntax: 'dsmc Delete Filespace [options]' You will be presented with a list of filespaces to choose from. dsmc EXPire TSM client command to inactivate the backup objects you specify in the file specification or with the filelist option. The command does not remove workstation files: if you expire a file or directory that still exists on your workstation, the file or directory is backed up again during the next incremental backup unless you exclude the object from backup processing. If you expire a directory that contains active files, those files will not appear in a subsequent query from the GUI. However, these files will display on the command line if you specify the proper query with a wildcard character for the directory. dsmc Help Client command line interface command to see help topics on the use of dsmc commands and options, plus message numbers. (Note that you have to scroll down to see everything.) When you invoke 'dsmc Help', there is no interaction with the TSM server. dsmc Incremental The basic command line client command to perform an incremental backup. 
Syntax: 'Incremental [options] FileSpec(s)' FileSpec(s): Most commonly will be file system name(s). If you want to back up just a directory, how you specify the directory will make a difference... In specifying a file system name, you enter just the name, like "/home", and TSM will pursue backing up the full file system. But if you specify a directory name like /home/user1, only that single directory entry will be backed up: you need to specify /home/user1/ (with a trailing slash) to tell TSM explicitly that you want it to back up the directory *and* what is contained in it, rather than just that one object. The number of FileSpecs is limited to 20; see "dsmc command line limits". Note that whereas scheduled backups result in each line being timestamped, this does not happen with command line incremental backups. (Neither running the command as a background process, nor redirecting the output, will result in timestamping the lines.) See also: dsmc Selective dsmc LOOP To start a loop-mode (interactive) client session. Same as entering just 'dsmc'. dsmc Query ACcess TSM client command to display a list of users whom you have given access rights to your Backup and/or Archive files, via dsmc SET ACcess, so that they can subsequently perform Restore or Retrieve using -FROMNode, -FROMOwner, etc. 'dsmc Query ACcess [-scrolllines] [-scrollprompt]' See also: dsmc SET Access dsmc Query ACTIVEDIRECTORY Windows TSM 4.1 client command to provide information about backed up Active Directory. Ref: Redpiece "Deploying the Tivoli Storage Manager Client in a Windows 2000 Environment" dsmc Query ARchive *SM client command to list specified Archive files. 
Syntax: 'dsmc Query ARchive [-DEscription="___"] [-FROMDate=date] [-TODate=date] [-FROMNode=nodename] [-FROMOwner=ownername] [-SCROLLPrompt=value] [-SCROLLLines=number] [-SErvername=StanzaName] [-SUbdir=No|Yes] FileSpec(s)' The number of FileSpecs is limited to 20; see "dsmc command line limits". Wildcard characters in the filename(s) can be passed to the command for it to expand them: this avoids the shell implicitly expanding the names, which can result in the command line arguments limit being exceeded. For example: instead of coding: dsmc Query ARchive myfiles.* code: dsmc Query ARchive 'myfiles.*' or... dsmc Query ARchive myfiles.\* Displays: File size, archive date and time, file name, expiration date, and file description (but not file owner). Performing a wide search for your archive files is a challenge. You'd like to say "look for all my archive files, beginning at the root of the mounted file systems". But it doesn't want to comply. What you have to do is restrict the search to a file system. For example, if your file activity is in /home, you can do: dsmc q archive /home/ -subdir=yes -desc="whatever" Note the foolishness of these client commands: unless you code a slash (/) or slash-asterisk (/*) at the end of the directory name, the commands assume that you are looking for an individual *file* of that name, and turn up nothing! Note: Root can see the archive files owned by others, but the query does not reveal file owners. Note that you can query across nodes, but only if the file system architectures are compatible. See also: dsmc Query Backup across architectural platforms dsmc Query Backup *SM client command to list specified backup files, issued as: 'dsmc Query Backup [options] FileSpec' Options: -DIrsonly: Display only directory names for backup versions of your files, as in: 'dsmc Query Backup -dirs -sub=yes FileSpec'. -FROMDate=date -FROMTime=time -INActive To include Inactive files in the operation. 
All Active files will be displayed first, and then the Inactive ones. Note that files marked for expiration cannot be seen from the client, but can be seen in a server Select on the BACKUPS table. -SCROLLPrompt=Yes -SCROLLLines=number -SErvername=StanzaName -SUbdir=Yes -TODate=date -TOTime=time -DATEFORMAT, -FROMNode, -FROMOWNER, -NODename, -NUMBERFORMAT, -PASsword, -QUIET, -TIMEFORMAT, -VERBOSE The number of FileSpecs is limited to 20; see "dsmc command line limits". Note that it is not possible to use a filespec which is the top of your file system (e.g., "/" in Unix) and have dsmc report all files, regardless of filespace. It can't do that: you have to base the query on filespaces. Wildcards: Use only opsys (shell) wildcard characters, which can only be used in the file name or extension. They cannot be used to specify destination files, file systems, or directories. In light of this, you would best do 'Query Filespace' first to see what file systems were being backed up, rather than frustrate yourself trying to use wildcards which get you nowhere. This query command will display file size, backup timestamp, management class, active/inactive, and file name; but there is no way to get file details such as username, group info, file timestamps, or even the type of file system object (to be able to distinguish between directories and files, for example): neither the -verbose nor -description CLI options help get more info. In contrast to the CLI, the GUI will provide such further info, via its View menu, "File details" selection - but this operates on one file at a time. Note that the speed of this query command in returning results bears no relationship to the speed of a restoral of the same files, both because of further *SM database lookup requirements and media handling. 
See also: dsmc and wildcards; DEACTIVATE_DATE dsmc Query Backup across architectural platforms Cross-platform querying of files only works on those platforms that understand the other's file systems, such as among Windows, DOS, NT, and OS/2; or among AIX, IRIX, and Solaris - and even there incompatibilities may exist. Macs can't be either the source or the target in moves from another platform. A succinct way to express the schism is to say that there are the "slash" and "backslash" camps, and that their files cannot mingle. See also: Restore across architectural platforms dsmc Query BACKUPSET *SM client command to query a backup set from a local file or the server, to see metadata about the Backup Set: its name, generation date, retention, and description. You must be superuser to query a backupset from the server. Syntax: 'Query BACKUPSET [Options] BackupsetName|LocalFileName' Note that there is no way from the client to query the contents of a backup set. See also: Backup Set; Query BACKUPSETContents dsmc Query CERTSERVDB Windows TSM 4.1 client command. Ref: Redpiece "Deploying the Tivoli Storage Manager Client in a Windows 2000 Environment" dsmc Query CLUSTERDB Windows TSM 4.1 client command. Ref: Redpiece "Deploying the Tivoli Storage Manager Client in a Windows 2000 Environment" dsmc Query COMPLUSDB Windows TSM 4.1 client command. Ref: Redpiece "Deploying the Tivoli Storage Manager Client in a Windows 2000 Environment" dsmc Query Filespace TSM client command to report filespaces known to the server for this client. The "Last Incr Date" column reflects the date of the last successful, full Incremental backup. If its value is null, it could be the result of: - The filespace having been created by Archive activity only. - Doing backups other than complete Incremental type (e.g., Selective, or Incremental on a subdirectory in the file system). - The Incremental backup having been interrupted. 
- The Incremental backup suffering from files changing during backup when you don't have Shared Dynamic copy serialization active, or from files selected for backup disappearing from the client before the backup can be done. - It's a filespace for odd backup types such as buta. Syntax: 'dsmc Query Filespace [-FROMNode=____]' See also: Query FIlespace dsmc Query INCLEXCL TSM 4.1: Formalized client command to display the list of Include-Exclude statements that are in effect for the client, in the order in which they are processed during Backup and Archive operations. This is the best way to interpret your include-exclude statements, as it reports your client-based and server-based (Cloptset) specifications together. Report columns: Mode: Incl or Excl. Function: Archive or All. Pattern: '#' appears at the front where '*' was coded for "all drives". Source File: where the include or exclude is defined: dsm.opt = your client; Server = Cloptset; Operating System = Windows Registry value. This command is valid for all UNIX, all Windows, and NetWare clients. Historical notes: Was introduced in ADSMv3.PTF6 as an undocumented client command, like 'dsmc Query OPTION'. In TSM 3.7, Tivoli management decided that, because it was unsupported, it should not be a Query, but rather a Show command, being consistent with undocumented and unsupported SHow commands in the server. That command persisted into TSM 4.1.2, where the capability was formalized as the 'dsmc Query INCLEXCL' command. Customers still using it in older client levels need to realize that because it was "unsupported", it would not necessarily be capable of recognizing newer Exclude options, like EXCLUDE.FS (as was discovered). For example, if you have no EXCLUDE.FS statements coded and don't get the message "No exclude filespace statements defined.", then the Query code is behind the times. 
See also: dsmc SHow INCLEXCL dsmc Query Mgmtclass ADSM client command to display info about the management classes available in the active policy set for the client. 'dsmc Query Mgmtclass [-detail] [-FROMNode=____]' where -detail reveals Copy Group info, which includes retention periods. dsmc Query Options Undocumented ADSM client command, contributed by developers, to report combined settings from the Client System Options file and Client User Options file. In ADSMv3, also shows the merged options in effect (those from dsm.opt and the cloptset). TSM: Replaced by 'show options'. dsmc Query RESTore ADSM client command to display a list of your restartable restore sessions, as maintained in the server database. Reports: owner, replace, subdir, preservepath, source, destination. Restartable sessions are indicated by negative numbers, and their Restore State is reported as "restartable". See also: RESTOREINTERVAL dsmc Query SChedule ADSM client command to display the events scheduled for your node. dsmc Query SEssion ADSM client command to display info about your ADSM session: current node name, when the session was established, server info, and server connection. dsmc Query SYSTEMInfo TSM 5.x Windows client meta command to provide a comprehensive report on the TSM Windows environment - options files, environment variables, files implicitly and explicitly excluded, etc. Creates a dsminfo.txt file. dsmc Query SYSTEMObject TSM 4.1 Windows client command to provide information about backed up System Objects. Ref: Redpiece "Deploying the Tivoli Storage Manager Client in a Windows 2000 Environment" dsmc Query Tracestatus ADSM client command to display a list of available client trace flags and their current settings. Ref: Trace Facility Guide dsmc REStore Client command to restore file system objects. 
'dsmc REStore [FILE] [options] SourceFilespec [DestinationFilespec]' Allowable options: -DIrsonly, -FILESOnly, -FROMDate, -FROMNode, -FROMOwner, -FROMTime, -IFNewer, -INActive, -LAtest, -PIck, -PITDate, -PITTime, -PRESERvepath, -REPlace, -RESToremigstate, -SUbdir, -TAPEPrompt, -TODate, -TOTime. The number of SourceFilespecs is limited to 20; see "dsmc command line limits". If you are restoring a directory, it is important that you specify the SourceFilespec with a directory indicator (slash (/) in Unix, backslash (\) in Windows), else the restore will conduct a prolonged search for what it presumes to be a file rather than a directory. This is particularly important for point-in-time restorals, where the client does a lot of filtering. See also: dsmc and wildcards; Restore... dsmc REStore BACKUPSET Client command to restore a Backup Set from the server, a local file, or a local tape device. The location of the Backup Set may be specified via -LOCation. The default location is server. Use client cmd 'dsmc Query BACKUPSET' to get metadata about the backup set. Use server cmd 'Query BACKUPSETContents' to either check the contents of the Backup Set or gauge access performance (which excludes the destination disk performance factors involved in a client dsmc REStore BACKUPSET). dsmc REStore REgistry TSM command to restore a Windows Registry. But it will restore only the most recent one, rather than an inactive version. You can manually restore an older version by using the GUI to restore the files to their original location, the adsm.sys directory. Start the Registry restore within the GUI with the command Restore Registry in the menu Utilities, or within the ADSM CLI with REGREST ENTIRE. Be sure that you check the Activate Key after Restore box in the dialog window. The ADSM client tries to restore the latest version of the files into the adsm.sys directory, but this time you do not allow it to replace the files on your disk. This will guarantee that the 'older' files will remain on the disk. 
The last dialog window which appears is a confirmation that the registry restore is completed and activated as the current registry. The machine must be rebooted for the changes to take effect. See also: REGREST dsmc RETrieve *SM client command to retrieve a previously Archived file. Syntax: 'dsmc RETrieve [options] SourceFilespec [DestFilespec]' where you may specify files or directories. Allowable options: -DEScription, -DIrsonly, -FILESOnly, -FROMDate, -FROMNode, -FROMOwner, -FROMTime, -IFNewer, -PIck, -PRESERvepath, -REPlace, -RESToremigstate, -SUbdir, -TAPEPrompt, -TODate, -TOTime. The number of SourceFilespecs is limited to 20; see "dsmc command line limits". dsmc SCHedule See: Scheduler, client, start manually dsmc Selective TSM client command to selectively back up files and/or directories that you specify. Syntax: 'dsmc Selective [-Options...] FileSpec(s)' Allowable options: -DIrsonly, -FILESOnly, -VOLinformation, -CHAngingretries, -Quiet, -SUbdir, -TAPEPrompt When files are named, the directories that contain them are also backed up, unless the -FILESOnly option is present. The number of FileSpecs is limited to 20; see "dsmc command line limits". To specify a whole Unix file system, enter its name with a trailing slash. You must be the owner of a file in order to back it up: having read access is not enough. (You get "ANS1136E Not file owner" if you try.) Your include-exclude specs apply to Selective backups. It is important to understand that a Selective backup is deemed "explicit": that you definitely want all the specified files backed up...WITHOUT EXCEPTION. Because of this, message ANS1115W and a return code 4 will be produced if you have an Exclude in play for an included object. Relative to Incremental backups, Selective backups are "out of band": they do not participate in the Incremental continuum, in several ways: - In a selective backup, copies of the files are sent to the server even if they have not changed since the last backup. 
This might result in having more than one copy of the same file on the server, and can result in old Inactive versions of the file being pushed out of existence, per retention versions policies. - The backup date will not be reflected in 'Query Filespace F=D', or in 'dsmc Query Filespace'. If you change the management class on an Include, Selective backup will cause rebinding of only the current, Active file being backed up: it will not rebind previously backed up files, as an unqualified Incremental will. See also: Selective Backup dsmc SET Access *SM client command to grant another user, at the same or different node, access to Backup or Archive copies of your files, which they would do using -FROMNode and -FROMOwner. Syntax: 'dsmc SET Access {Archive|Backup} {filespec...} NodeName [User_at_NodeName] [Options...]' The filespec should identify files, and not just name a directory. The access permissions are stored in the TSM database. Thus, the original granting client system may vanish and the grantee can still access the files. There is no check for either the node or user being known to the *SM server - though the node needs to be registered with the *SM server for that node and its user to subsequently access the data that you are authorizing access to, else error ANS1353E will be encountered. Note that this applies only to *your* specific files, even if you are root. That is, if you are root and attempt to grant file system access to root at another node, you will *not* be able to see files created by other users as you would as root on the native system. Inverse: 'dsmc Delete ACcess'. See also: dsmc Query ACcess; -FROMNode; -FROMOwner; -NODename dsmc SET Password *SM client command to change the ADSM password for your workstation. If you do not specify the old and new password parameters, you are prompted once for your old password and twice for your new password. 
Syntax: 'dsmc SET Password OldOne NewOne' dsmc SHow INCLEXCL TSM: Undocumented client command, contributed by developers, to evaluate your Include-Exclude options as TSM thinks of them. This command is invaluable in revealing the mingling of server-defined Include/Exclude statements and those from the client options file. Beware: In that this operation is unsupported, it may not be capable of recognizing newer Exclude options. For example, if you have no EXCLUDE.FS statements coded and don't get the message "No exclude filespace statements defined.", then the SHow code is behind the times. Shortcoming: Does not reveal the management class which may be coded on Include lines...you have to browse your options file. Read the report from the top down. Remember that Include/Exclude's defined in the server Client Option Set in effect for this node will precede those defined on the client (additive). Report elements: No exclude filespace statements defined Means that there are no "EXCLUDE.FS" options defined in the client options file. No exclude directory statements defined Means that there are no "EXCLUDE.DIR" options defined in the client options file. No include/exclude statements defined Means that there are no "INCLExcl" options defined in the client options file. (Message shows up even in client platforms where INCLExcl is not a defined client option.) ADSM: 'dsmc Query INCLEXCL'. dsmc SHOW Options TSM client command to reveal all options in effect for this client. Note that output is more comprehensive than what is returned from the dsm GUI's Display Options selection. For example, this command will report InclExcl status whereas the GUI won't. ADSM: 'dsmc query options' (The ADSM query option command was an undocumented command developed for internal use. In support of this, the command was changed in TSM to a show option command so that it fell in line with the standard ADSM/TSM conventions for non-supported commands.) 
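For scripting around the shell status values that dsmc has been observed to leave behind (per the "dsmc status values (AIX)" entry), a hedged sketch; the run_backup function here is a made-up placeholder standing in for a real dsmc invocation, since no TSM client or server is available to run against:

```shell
# Placeholder for something like: dsmc query filespace
# Exits with status 2, the value observed when a query matched no objects.
run_backup() { return 2; }
rc=0
run_backup || rc=$?
case $rc in
  0)   echo "command worked; objects reported" ;;
  2)   echo "command failed; no objects to report" ;;
  168) echo "no server access: no password established" ;;
  *)   echo "unexpected status: $rc" ;;
esac
# prints: command failed; no objects to report
```

Given the advisory that these codes are not guaranteed to be meaningful, such branching is best treated as a heuristic rather than a contract.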
dsmc status values (AIX) Do not depend upon 'dsmc' to yield meaningful return codes (see advisory under "Return codes"). However, observation shows that the dsmc command typically returns the following shell status values. 0 The command worked. In the case of a server query (Query Filespace) there were objects to be reported. 2 The command failed. In the case of a server query (Query Filespace) there were no objects to be reported. 168 The command failed for lack of server access due to no password established for "password=generate" type access and invoked by non-root user such that no password prompt was issued. Accompanied by message ANS4503E. (Don't confuse these Unix status values with TSM return codes.) dsmc.afs Command-line dsm.afs dsmc.nlm won't unload (Novell Netware) Have option "VERBOSE" in the options file, not "QUIET". Then, rather than unload the nlm at the Netware console, go into the dsmc.nlm session and press 'Q' to quit. dsmcad See: Client Acceptor Daemon (CAD) DSMCDEFAULTCOMMAND Undocumented ADSM/TSM client option for the default subcommand to be executed when 'dsmc' is invoked with no operands. Normally, the value defaults to "LOOP", which is what you are accustomed to in invoking 'dsmc', that being the same as invoking 'dsmc LOOP'. Conceivably, you might change it to something like HELP rather than LOOP; but probably nothing else. Placement: in dsm.opt file (not dsm.sys) dsmcdfs Command-line interface for backing up and restoring DFS fileset data, which this command understands as such, and so will properly back up and restore DFS ACLs and mount points, as well as directories and files. See also: dsmdfs dsmccnm.h ADSM 3.1.0.7 introduced a new performance monitoring function which includes this file. See APAR IC24370 See also: dsmcperf.dll; perfctr.ini dsmcperf.dll ADSM 3.1.0.7 introduced a new performance monitoring function which includes this file. 
See APAR IC24370 See also: dsmccnm.h; perfctr.ini dsmcrash.log, dsmcrash.dmp TSM 5.2+ failure analysis data capture files. The object is to provide for "first failure data capture" of crashes by capturing the info via IBM facilities the first time the crash occurs. Dr. Watson itself does a nice job of this, but TSM should not depend upon Dr. Watson being installed or configured to capture the needed info. dsmcsvc.exe This is the NT scheduler service. It has nothing to do with the Web client or the old Web shell client. Use 'DSMCUTIL LIST' to get a list of installed services. dsmcutil.exe Scheduler Service Configuration Utility in Windows. Allows *SM Scheduler Services installation and configuration on local and remote Windows machines. The Scheduler Service Configuration Utility runs on Windows only and must be run from an account that belongs to the Administrator/Domain Administrator group. Syntax: 'dsmcutil Command Options' Example: update the node name and password to new node: 'dsmcutil update /name:"your service name" /node:newnodename /password:password' ADSMv2 name (dsmcsvci.exe in ADSMv3). Use 'DSMCUTIL LIST' to get a list of installed NT services. The /COMMSERVER and /COMMPORT options are used to override values in the client options file used by the service. They correspond to different client options depending on the communications method being used (and yes, there is a /CommMethod dsmcutil option). For TCP/IP, they correspond to -TCPServername and -tcpPort, respectively. Written by Pete Tanenhaus. Ref: Installing the Clients; dsmcutil.hlp file in the BAclient dir. dsmcsvci.exe ADSMv3 name (dsmcutil.exe in ADSMv2). dsmdf HSM command to display all file systems which are under the control of HSM. Does not display any which are not. Note that running the AIX 'df' command will show the file system twice - first as a device-and-filesystem and then as filesystem-and-filesystem, where the latter reflects the FSM overlay.
Much the same comes out of an AIX 'mount' command. Invoke 'dsmmighelp' for assistance with all the HSM commands. dsmdfs GUI interface for backing up and restoring DFS fileset data, which this command understands as such, and so will properly back up and restore DFS ACLs and mount points, as well as directories and files. Its look and usage is exactly the same as 'dsm'. Notes: Do not try to select the type "AGFS" for backup - that is the aggregate. Instead, go into the type "DFS" file system. You should also define some VIRTUALMountpoints to be able to directly select within the "/..." file system. See also: dsmcdfs dsmdu HSM command to display *SM space usage for files and directories under the control of HSM, in terms of 1 KB blocks; that is, the true size of all files in a directory, whether resident or migrated. Syntax: 'dsmdu [-a] [-s] [Dir_Name(s)]' where -a shows each file -s reports just a sum total Dir_Name(s) One or more directories to report on. If omitted, defaults to the current dir. Contrast with the Unix 'du -sk' command, which can only report on files currently present in the directory, such that migrated files throw it off. Invoke 'dsmmighelp' for assistance with all the HSM commands. dsmerror.log Where information about processing errors is written. The DSM_LOG client environment variable may be used to specify a directory where you want the dsmerror.log to reside. If unspecified, the error log for a dsm or dsmc client session will be written to the current directory. ADSM doesn't want you to have dsmerror.log be a symlink to /dev/null: if it finds that case, it will actually remove the symbolic link and replace it with a real dsmerror.log file! (See messages ANS1192E and ANS1190E.) The error log for client root activity (HSM migration, etc.) will be /dsmerror.log. In Macintosh OS X, the default error log name is instead "TSM Error Log".
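One way to keep per-session error logs separate while still collecting them in one place is a wrapper around dsmc that computes a per-user error log path. A minimal sketch (the /var/log/tsm directory, the function name, and echoing rather than exec'ing the real client are illustrative assumptions, not TSM defaults):

```shell
#!/bin/sh
# Sketch: give each user a separate error log in one shared directory.
# /var/log/tsm is an assumed all-writable directory, not a TSM default.
LOGDIR=/var/log/tsm

build_errlog_path() {
    # One log per user keeps simultaneous sessions from mingling output.
    echo "$LOGDIR/dsmerror.$1.log"
}

ERRLOG=$(build_errlog_path "$(id -un)")
# A real wrapper would hand the path to the client, e.g.:
#   exec dsmc -errorlogname="$ERRLOG" "$@"
echo "$ERRLOG"
```

Only the per-user path construction is the point here; the -ERRORLOGname option itself is covered elsewhere in this document.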
Don't try to use a single dsmerror.log for all sessions in the system: It's unusual and unhealthy, from both logical and physical standpoints, to mingle the error logging from all sessions - which may involve simultaneous sessions. In such an error log, you want a clear-cut sequence of operations and consequences reflected. If you want all error logs to go to a single directory, consider creating a wrapper script for dsmc, named the same or differently, which will put all error logs into a single, all-writable directory, with an error log path spec which appends the username, for uniqueness and singularity. The wrapper script would invoke dsmc with the -ERRORLOGname= option spec. Advisory: Exclude dsmerror.log from backups, to prevent wasted time and possible problems. See: DSM_LOG; ERRORLOGName; ERRORLOGRetention; dsierror.log dsmerror.log ownership The error log file will be owned by the user that initiated the client session. However, if another user subsequently invokes the client, it can try and fail to gain access to that file because of permissions problems. You could make the file "public writable", but that is problematic in mixing error logging, making for later confusion in inspection of that log. Each user should end up with a separate error log, per invocation from separate "current directory" locations. Try to avoid using the DSM_LOG client environment variable, which would force use of a single error log file for the environment. dsmfmt TSM server-provided command for AIX, to format file system "volumes", which can be spaces to contain the TSM database, recovery log, storage pool, or a file which serves as a random access storage pool. Not for AIX raw logical volumes or Solaris raw partitions: they do not need to be formatted by TSM, and the dsmfmt command has no provision for them (it only accepts file names). But note that Solaris raw partitions need to be formatted in OS terms.
Note that dsmfmt does *not* update the dsmserv.dsk file to add the new server component: that happens under a dsmserv invocation. Located in /usr/lpp/adsmserv/bin/. The command *creates* the designated file, so the file must not already exist. Unix note: There is no man page! Ref: Administrator's Reference manual, Appendix A. The size to be specified is the desired size, in MB, not counting the 1 MB overhead that dsmfmt will add (so if you say 4 MB, you will get a 5 MB resultant file). So the size should always be an odd number. To format a database volume: 'dsmfmt -db DBNAME SizeInMB-1MB' To format a recovery log volume: 'dsmfmt -log LOGNAME SizeInMB-1MB' To format a file as a storage pool: 'dsmfmt -data NAME SizeInMB-1MB' The name given the file is the name to be used for the storage volume when it is later defined to the server. What the utility does is not exciting: it writes the chars "Eric" repeatedly to fill the space. Beware the shell "filesize" limit preventing formatting of a large file. dsmfmt errno 27 (EFBIG - File too large) It may be that your Unix "filesize" limit prohibits writing a file that large. Do 'limit filesize' to check. If that value is too small, try 'unlimit filesize'. If that doesn't boost the value, you need to change the limit value that the operating system imposes upon you (in AIX, change /etc/security/limits). Another cause: the JFS file system is not configured to allow "large files" (greater than 2 GB), per Large File Enabled. Do 'lsfs -q' and look for the "bf" value: if "false", it is not in effect. dsmfmt errno 28 (ENOSPC - No space left on device) No more disk blocks are left in the file system. Most commonly, this occurs because you simply did not plan ahead for sufficient space. In an AIX JFS enabled for Large Files, free space fragmentation may be the problem: there are not 32 contiguous 4 KB blocks available. dsmfmt "File size..."
error With a very large format (e.g., 80 GB), the following error message appears: "File size for /directory/filename must be less than 68,589,453,312 bytes." You may be exceeding file size limits for your operating system, or in Unix may be exceeding the filesize resource limit for your process. dsmfmt performance Dsmfmt is I/O intensive. Beware doing it on a volume or RAID or path which is also being used for other I/O intensive tasks such as OS paging. dsmfmt.42 Version of dsmfmt for AIX 4.2, so as to support volumes > 2GB in size. In such a system, dsmfmt should be a symlink to dsmfmt.42 . Be sure to define the filesystem as "large file enabled". dsmhsm ADSM HSM client command to invoke the Xwindows interface. Note that there is no 'dsmhsmc' command for line-mode HSM commands. There are instead individual commands such as 'dsmdf', 'dsmdu', 'dsmrm', etc. Invoke 'dsmmighelp' for assistance with all the HSM commands. DSMI_CONFIG ADSM API: Environment variable pointing to the Client User Options file (dsm.opt). Note that it should point at the options file itself, not the directory that it resides in. Ref: "AFS/DFS Backup Clients" manual. DSMI_DIR ADSM API: The client environment variable to point to the directory containing dscameng.txt, dsm.sys, and dsmtca. Ref: "AFS/DFS Backup Clients" manual. DSMI_LOG ADSM API: Client environment variable to point to the *directory* where you want the dsierror.log to reside. (Remember to code the directory name, not the file name.) If undefined, the error log will be written to the current directory. Ref: "Installing the Clients" manual. DSMI_ORC_CONFIG TDP for Oracle environment variable, to point to the client user options file (dsm.opt). dsmInit() TSM API function to start a session from the TSM client to the TSM server. There can only be one active session open at a time within one client process. dsmlabel To label a tape, or optical disk, for use in a storage pool. 
(Tapes must be labeled to prevent overwriting tapes which don't belong to ADSM, and to control tapes once ADSM has used them (and their re-use when they become empty).) Syntax: 'dsmlabel -drive=/dev/XXXX [-drive...] -library=/dev/lmcp0 [-search] [-keep] [-overwrite] [-format] [-help] [-barcode] [-trace]'. where the drive must be one which was specifically ADSM-defined, via SMIT. You can specify up to 8 drives, to more quickly perform the labeling. It will iteratively prompt for label volsers so you can do lots of tapes. Type just 'dsmlabel' for full help. "-format" is effective only on optical cartridges. -barcode Use the barcode reader to select volumes: will cause the first six characters of the barcode to be used as the volume label. Dsmlabel does not change Category Codes. If you Ctrl-C the job, it will end after the current tape is done. Tapes new to a 3494 tape library will have a category code of Insert both before and after the dsmlabel operation. Ref: Administrator's Reference manual See also: 'LABEl LIBVolume'; "Tape, initialize for use with a storage pool". Newly purchased tapes should have been internally labeled by the vendor, so there should be no need to run the 'dsmlabel' utility. dsmls HSM command to list files in a directory and show file states. Syntax: 'dsmls [-n] [-R] [Filespec...]' where: -n Omits column headings from report. -R Traverses subdirectories. Note that it does not expand wildcard specifications itself, so you CANNOT code something like: dsmls /filesys/files.199803\* In report: Resident Size: Shows up as '?' if the path used is a symlink, because HSM is uncertain as to the actual filespace name. File State: m = migrated m (r) = migrated, with recallmode set to Readwithoutrecall '?' if the path used is a symlink. Note that the premigrated files are reported from the premigrdb database located in the .SpaceMan directory. Note that the command does not report when the file was migrated.
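The dsmls File State values just described can be decoded mechanically when post-processing a report; a minimal sketch (the state strings are taken from the description above; anything else is treated as unrecognized):

```shell
#!/bin/sh
# Decode a dsmls "File State" column value, per the states described above.
decode_state() {
    case "$1" in
        'm')     echo "migrated" ;;
        'm (r)') echo "migrated, recallmode set to Readwithoutrecall" ;;
        '?')     echo "path is a symlink: state uncertain" ;;
        *)       echo "unrecognized state: $1" ;;
    esac
}
decode_state 'm (r)'
```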
dsmmigfs Add, dsmmigfs Update HSM: Command to add or update space management for a file system, or to query it. 'dsmmigfs Add [-OPTIONS] FSname' causes: 1. Creates .SpaceMan dir in the filesys 2. Updates /etc/adsm/SpaceMan/config/dsmmigfstab to add the filesys definition to HSM, with selected options 3. Updates the /etc/filesystems stanza for the filesys: a "nodename" entry is added, "mount" is changed to "false", and "adsmfsm=true" is added. 4. Mounts FSM over the AIX filesys. 5. Activates HSM management of it. But it does not result in that Filespace becoming known in the ADSM server: the first migration or backup will do that. Add/Update options: -HThreshold=N Specifies the high threshold for migration from the HSM-managed file system to the HSM storage pool. -Lthreshold=N Specifies the low threshold for migration from the HSM-managed file system to the HSM storage pool. (A low value is good for loading a file system, but not for keeping many files recalled.) -Pmpercentage=N The percentage of space in the file system that you want to contain premigrated files that are listed next in the migration candidates list for the file system. -Agefactor=N The age factor to assign to all files in the file system. -Sizefactor=N The size factor to assign to all files in the file system. -Quota=N The max number of megabytes (MB) of data that can be migrated and premigrated from the file system to ADSM storage pools. Default: the same number of MB as allocated for the file system itself. -STubsize=N The size of stub files left on the file system when HSM migrates files to ADSM storage. Hints: Specifying a low Lthreshold value helps in file system loading by keeping migration active, to prevent the message ANS4103E condition. dsmmigfs Deactivate/REActivate/REMove HSM: Command to deactivate, reactivate, or remove space management for a file system.
Syntax: 'dsmmigfs Deactivate FileSystemName' 'dsmmigfs REActivate FileSystemName' 'dsmmigfs REMove FileSystemName' dsmmigfs GLOBALDeactivate/GLOBALREActivate HSM: Command to deactivate or reactivate space management for all file systems on the client system. Syntax: dsmmigfs GLOBALDeactivate dsmmigfs GLOBALREActivate dsmmigfs Query HSM: Command to query space management settings for named or all HSM-controlled file systems. Syntax: 'dsmmigfs Query [FileSystemName(s)]' dsmmigfs REMove HSM: Command to remove space management from a file system. Syntax: 'dsmmigfs REMove [FileSysName(s)]' or use the GUI cmd 'dsmhsm'. This will perform a Reconcile, Expire, and then unmount of the FSM, also involving an update of /etc/filesystems in AIX. Make sure you are not sitting in that directory at the time, or the unmount will fail with messages ANS9230E and ANS9078W. It is best to do this *before* doing a Delete Filespace: if you do it after, you will have to do the Del Filespace twice to finally get rid of the file space. dsmmigfstab HSM: file system table naming the AIX file systems which are to be managed by HSM. Located in /etc/adsm/SpaceMan/config. Add file systems to the list via the dsmhsm GUI, or the 'dsmmigfs add FileSystemName' command. Query via: 'dsmmigfs query [FileSystemName...]' dsmmighelp HSM: Command to display usage information on its command repertoire. dsmmigquery HSM: Command to display space management information, such as migration candidates, recall list. 'dsmmigquery [-Candidatelist] [-SORTEDMigrated] [-SORTEDAll] [-Help] [file systems]' 'dsmmigquery [-Mgmtclass] [-Detail] [-Options]' Caution: defaults to current directory, so be sure to specify file system name. dsmmigrate HSM: Command to migrate selected files from a local file system to an ADSM storage pool. Syntax: 'dsmmigrate [-R] [-v] FileSpec(s)' where... -R Specifies recursive pursuit of subdirectories. -v Displays the name and size of each file migrated.
If using a wildcard, it is faster to allow dsmmigrate to expand it per its own processing order, as in invoking like: 'dsmmigrate \*.gz' with the asterisk quoted so that ADSM expands it rather than the shell. To migrate all files in a file system: 'dsmmigrate /file/system/\*' To perform a dsmmigrate on a file, you must be the file's owner, else suffer ANS9096E. Note: For a large file system this may take some time, and depending upon the ADSM server configuration you might get message ANS4017E on the client, which would mean that the server waited up to its COMMTimeout value for the client to come back with something for the server to do, but nada, so the server dismissed the session. (Issue the server command 'Query OPTion' to see the prevailing CommTimeOut value, in seconds.) Dsmmigrate will typically generate dsmerror.log data in the current directory when given a wildcard and some of the files need not be migrated. dsmmigundelete HSM: Command to recreate deleted stub files, to reinstate file instances which were inadvertently deleted from the HSM-managed file system. (This command operates on whole file systems: you cannot specify single files.) This operation depends upon the original directory structure being intact: it will not recreate a stub file where the file's directory is missing. Thus, this command cannot be used as a generalized restoral method. The stub contains information ADSM needs to recall the file, plus some amount of user data. ADSM needs 511 bytes, so the amount of data which can also reside in the stub is the defined stub size minus the 511 bytes. When you do a dsmmigundelete, ADSM simply puts back enough data to recreate the stubs, with 0 bytes of user data (since you don't want us going out to tapes to recover the rest of the stub). When the file gets recalled, then migrated again, we once again have user data that we can leave in the stub, so the stub size goes back to its original value.
This goes to show that the leading file data in the stub file is a copy of what's in the full, migrated file. See also: Leader data dsmmode HSM: Command to set one or more execution modes which affect the HSM-related behavior of commands: -dataaccess controls whether a migrated file can be retrieved. -timestamp controls whether the file's atime value is set to the current time when accessed. -outofspace controls whether HSM returns an error code rather than trying to recover from out-of-space conditions. -recall controls how a migrated file is recalled: Normal or Migonclose. Note, however, that the outofspace parameter will *not* prevent commands like 'cp' from encountering "No space left on device" conditions. dsmmonitord HSM monitoring daemon, started by /etc/inittab's "adsmsmext" entry invoking /etc/rc.adsmhsm. It is busy: every 2 seconds it looks for file-system-full conditions so as to start migration; and every 5 minutes it performs threshold migration (or at the interval specified via the CHEckthresholds option in the Client System Options file (dsm.sys)). This daemon also runs dsmreconcile (from either the directory specified via DSM_DIR or the directory whence dsmmonitord was invoked) according to the interval defined via the RECOncileinterval option in the Client System Options file (dsm.sys), and automatically before performing threshold migration if the migration candidates list for a file system is empty. Be aware that this daemon does not help if the user attempts to recall a file of a size which causes the local file system to be exhausted: what happens is that the user gets an "ANS9285K Cannot complete remote file access" error message - which says nothing about this. Full usage (as found in the binary): 'dsmmonitord [-s seconds] [-t directory] [-v]' dsmmonitord PID Is remembered in file: /etc/adsm/SpaceMan/dsmmonitord.pid dsmnotes The backup client command for the Lotus ConnectAgent.
Sample usage: 'dsmnotes incr d:\notes\data\mail\johndoe.nsf' DSMO_PSWDPATH See: aobpswd dsmperf.dll You mean: dsmcperf.dll (q.v.) dsmq HSM: Command to display all information, for all files currently queued for recall. Columns: ID Recall ID DPID The PID of the dsmrecall daemon. Start Time When it started INODE Inode number of the file being recalled. Filesystem File system involved. Original Name Name of file that was migrated. dsmrecall HSM: Command to explicitly demigrate (recall) files which were previously migrated. Syntax: 'dsmrecall [-recursive] [-detail] Name(s)' The -detail option alas shows details only upon completion of the full operation: it does not reveal progress. If using a wildcard, it is *much* faster to allow dsmrecall to expand it per its own processing order: having the shell expand it forces dsmrecall to get the files off tape in collating order, rather than the order it knows them to be on the tape(s) - so invoke like: 'dsmrecall somefiles.199807\*' with the asterisk quoted so *SM expands it rather than the shell. Note that during a recall, as the recalled file is being written back to disk, its timestamp will be "now"; thereafter it will be set to the file's original timestamp. Dsmrecall will typically not generate dsmerror.log data in the current directory when given a wildcard and some of the files need not be recalled. In the presence of msg "ANR8776W Media in drive DRIVE1 (/dev/rmt1) contains lost VCR data; performance may be degraded.", it may be faster to do a Restore of the files to a temp area, if you simply want to reference the data. dsmrecalld HSM daemon to perform the recall of migrated files. It is started by /etc/inittab's "adsmsmext" entry invoking /etc/rc.adsmhsm. Control via the MINRecalldaemons and MAXRecalldaemons options in the Client System Options file (dsm.sys).
Default: 20 Full usage (as found in the binary): dsmrecalld [-t timeout] [-r retries] [{-s | -h}] [{-i | -n}] [-v] -t timeout in seconds; only valid with -s -r number of times to retry recall; only valid with -s -s soft recall, will time out; default -h hard recall, will not time out -i interruptable, can be cancelled; default -n non-interruptable, cannot be cancelled dsmrecalld PID Is remembered in file: /etc/adsm/SpaceMan/dsmrecalld.pid dsmreconcile HSM: Client root user command to synchronize client and server and build a new migration candidates list for a file system. Is usually run automatically by dsmmonitord, invoking dsmreconcile once for each controlled file system, at a frequency (mostly) controlled by the RECOncileinterval Client System Options file (dsm.sys) option. Can also be run manually as needed. Syntax: 'dsmreconcile [-Candidatelist] [-Fileinfo] [FileSystemName(s)]' Note that HSM will also run reconciliation automatically before performing threshold migration if the migration candidates list for a file system is empty. Msgs: "Note: unable to find any candidates in the file system." can indicate that all files have been migrated. See also: Expiration (HSM); MIGFILEEXPiration; Migration candidates list (HSM). dsmreg.lic ADSMv2 /usr/lpp/adsmserv/bin executable module for converting given license codes into encoded hex strings which are then written to the adsmserv.licenses file. See: adsmserv.licenses; License...; REGister LICense dsmrm HSM: Command to remove a recall process from the recall queue. dsmsched.log The schedule log's default name, as it resides in the standard ADSM directory. Can be changed via the SCHEDLOGname Client System Options file (dsm.sys) option. To verify the name: in ADSM, do 'dsmc q o' and look for SchedLogName; in TSM, do 'dsmc show opt'. Obviously, you need write access to the directory in which the log is to be produced in order to have a log.
See: SCHEDLOGname dsmscoutd HSM 5+ Scout Daemon, which seeks migration candidates. Its operation is governed by the Maxcandidates value. dsmserv Command in /usr/lpp/adsmserv/bin/ to start the ADSM server. This is something which would be done by the /usr/lpp/adsmserv/bin/rc.adsmserv shell script being executed via the "autosrvr" line which ADSM installation added to the /etc/inittab file. Command-line options: -F To overwrite shared memory when restarting the server after a server crash. Code before other options. noexpire Suppress inventory expiration, otherwise specified via EXPINterval. -o FileName Specifies the server options file to be used, as when running more than one server. quiet Start the server as a daemon program. The server runs as a background process, and does not read commands from the server console. Output messages are directed to the SERVER_CONSOLE. Note that there is no option for preventing client sessions from starting, which can be inconvenient in some circumstances, like restarting after a hinky problem. Performance: dsmserv performs regular fsync() calls. When used for stand-alone operations like database restorals, the run time can be 6 hours with the syncing and 15 minutes without. Since dsmserv is an unstripped module, there is the opportunity to CSECT-replace the fsync by statically linking in a dummy fsync function which simply returns (keeping dsmserv from getting fsync from the shared library). See also: Processes, server; dsmserv.42 Ref: ADSM Installing the Server... TSM Admin Guide chapter on Managing Server Operations; Starting, Halting, and Restarting the Server dsmserv AUDITDB A salvage command for when *SM is down with a bad database or disk storage pool volume, to look for structural problems and logical inconsistencies. Run this command *before* starting the server, typically after having reloaded the database.
Syntax: 'DSMSERV AUDITDB [ADMIN|ARCHSTORAGE|DISKSTORAGE| INVENTORY|STORAGE] [FIX=No|Yes] [Detail=No|Yes] [LOGMODE=NORMAL|ROLLFORWARD] [FILE=ReportOutputFile]' The various qualifiers represent partial database treatments. Reportedly, running with no qualifiers does everything represented in the partial qualifiers. ARCHDESCRIPTIONS [FIX=Yes] To fix a corrupted database as evidenced in message 'Error 1246208 deleting row from table "Archive.Descriptions"'. DISKSTORAGE: Causes disk storage pool volumes to be audited. FIX=No: Report, but do not fix, any logical inconsistencies found. If the audit finds inconsistencies, re-issue the command specifying FIX=Yes before making the server available for production work. Because AUDITDB must be run with FIX=Yes to recover the database, the recommended usage in a recovery situation is FIX=Yes the first time. FIX=Yes: Fix any inconsistencies and issue messages indicating the actions taken. Detail=No: Test only the referential integrity of the database, to just reveal any problems. This is the default. Detail=Yes: Test the referential integrity of the database and the integrity of each database entry. LOGMODE=NORMAL: Allows you to override your server's Rollforward logmode, to avoid running out of recovery log space. (Note that Logmode is controlled via the Set command, which you obviously cannot perform when you cannot bring your server up because it has the problem you are addressing.) Tivoli recommends opening a problem report with them before running this audit - under their guidance. Per their advisory: "If errors are encountered during normal production use of the server that suggest that the database is damaged, the root cause of the errors must be determined with the assistance of IBM Support. Performing DSMSERV AUDITDB on a server database that has structural damage to the database tables may result in the loss of more data or additional damage to the database."
Be aware that such an audit cannot correct all problems: it will fail on an inconsistency in the database, as one example. If your database is TSM-mirrored, you should first set the MIRRORREAD DB server option to VERIFY: this will force the server to compare database pages across the mirrored volumes, and if an inconsistency is found on a given mirror volume, that volume will be marked as stale and it will be forced to resynchronize with a remaining valid volume. Runtime: Beware that this command is not optimized, and can take a very long time to run, proportional to the amount of data to be audited. Some customers report it running over 4 days for an 8 GB database! (Processing time has been observed to be non-linear, as in one customer finding it taking over 3 days to get halfway through the database, then finishing less than a day later.) If coming from a TSM v4 system, you may see dramatically lesser runtimes if you first run CLEANUP BACKUPGROUP. Consult the Readme and Support if unsure. Msgs: ANR0104E; ANR4142I; ANR4206I; ANR4306I Ref: Admin Ref, Appendix See also: AUDit DB (online cmd) See also separate TSM DATABASE AUDITING samples towards the bottom of this doc. dsmserv AUDitdb, interrupt? There's no vendor documentation saying whether an AUDitdb can be stopped (as in killing its process), safely. The process reportedly disregards Ctrl-C (SIGINT) and simple 'kill' command (SIGTERM): only a 'kill -9' (SIGKILL) terminates the process. Customer reports of having stopped the process tell of no (known) ill effects; but that is non-deterministic: hold onto that backup tape! dsmserv AUDitdb archd fix=yes Undocumented ADSM initial command to correct a corrupted database as evidenced in message 'Error 1246208 deleting row from table "Archive.Descriptions"'. dsmserv DISPlay DBBackupvolumes Stand-alone command to display database backup volume information when the volume history file (e.g., /var/adsmserv/volumehistory.backup) is not available. 
Full syntax: 'DSMSERV DISPlay DBBackupvolumes DEVclass=DevclassName VOLumenames=VolName[,VolName...]' Example: 'DSMSERV DISPlay DBBackupvolumes DEVclass=OURLIBR.DEVC_3590 VOLumenames=VolName[,VolName...]' Note that this command will want to use a tape drive - one specified in the file named by the DEVCONFig dsmserv.opt parameter - to mount the tape R/O. (Drive must be free, else get ANR8420E I/O error.) You can use this command form to try to identify the database backup tapes when the volume history file is absent, not up to date, or lacking DBBACKUP entries. The command requires the devconfig file - which may also have been lost - and entails going hunting through a possibly large number of tapes until you finally find the latest dbbackup tape. See also: dsmserv RESTORE DB, volser unknown dsmserv DUMPDB ADSM database salvage function, to be used in conjunction with DSMSERV LOADDB (q.v.). See also: STAtusmsgcnt dsmserv DUMPDB and LOADDB These are part of a salvage utility that was a stop-gap solution for ADSM version 1 until the database backup and recovery functions could be added in ADSM version 2. Unless you are on ADSM version 1 (which is unsupported except for the VSE server), you should be using BAckup DB and DSMSERV RESTORE DB functions to back up/recover your database (and also for migrating the ADSM server to a different hardware server of the same operating system type). The circumstances under which you might use DUMPDB and LOADDB today are very rare, would probably involve the absence of regular ADSM database backups (regular database backups using BAckup DB are obviously recommended), and are probably recommended only under the direction of IBM ADSM service support. See also: dsmserv LOADDB; LOADDB dsmserv EXTEND LOG FileName N_MB Stand-alone command to extend the Recovery Log to a new volume when its size is insufficient for ADSM start-up. (Note that you are to add a new volume, *not* extend the existing one.)
The new volume should have been separately prepared by running 'dsmfmt -log ...'. The extend operation will run dsmserv for the short time that it takes to extend the log and format the new volume, plus add the new volume name to the dsmserv.dsk file, whereafter the stand-alone server process shuts down. Thereafter you may bring up the server normally. dsmserv FORMAT Ref: Administrator's Reference, TSM Utilities appendix. dsmserv INSTALL Changed to DSMSERV FORMAT in ADSMv3. Ref: Administrator's Reference, Appendix D. dsmserv LOADDB Stand-alone command to reload the ADSM database after having done 'DSMSERV DUMPDB' and 'DSMSERV INSTALL'. After a DUMPDB, it is best to perform the LOADDB to a database having twice the capacity of the amount that was dumped... As the Admin Guide says: "The DSMSERV LOADDB utility may increase the size of the database. The server packs data in pages in the order in which they are inserted. The DSMSERV DUMPDB utility does not preserve that order. Therefore, page packing is not optimized, and the database may require additional space." See topic "ADSM DATABASE STRUCTURE AND DUMPDB/LOADDB" at the bottom of this file for further information. This operation takes a looooooong time: it slows as it gets further along, with tremendous disk activity. Example: 'DSMSERV LOADDB DEVclass=OURLIBR.DEVC_3590 VOLumenames=VolName[,VolName...]' Note: After the reload, the next BAckup DB will restart your Backup Series number as 1. See also: Backup Series; STAtusmsgcnt dsmserv RESTORE DB A set of commands for restoring the *SM server database, under varying conditions. If the database and/or recovery log volumes are destroyed, use dsmfmt to prepare replacements AT LEAST EQUAL IN CAPACITY to the originals. (Failure to make them equal in capacity can result in server failure.) DO NOT reformat the recovery log volume if doing a rollforward recovery: you need its data for the recovery. Second, you have to initialize them by running DSMSERV INSTALL.
Then you can run the RESTORE DB command. You would be wise to set server config file option DISABLESCheds before proceeding. With most forms of Restore DB, you will also need a copy of the volume history file and your server options file with its pointer to the vol history. This makes the RESTORE DB process simpler as you can just specify a date rather than having to work out which backup is on what volser. The -todate=xx/xx/xxxx -totime=xx:xx options allow you to select which database backup(s) to restore from; NOT a point at which the recovery log should be rolled forward to. ==> Do NOT restart the server between the install and the restore db command: doing this would delete all the entries in the volume history file! Do's and Don'ts: Realize that Restore DB was designed to restore back onto the same machine where the image was taken: that is, Restore DB is not intended to serve as a cross-platform migration mechanism. You can do 'DSMSERV RESTORE DB' across systems of the same architecture: see the Admin Guide, Managing Server Operations, Moving the Tivoli Storage Manager Server, for the rules. It is illegal, risky, and in some cases logically impossible to employ Restore DB to migrate the *SM database across platforms, which is to say different operating systems and hardware architectures. (See IBM site TechNote 1137678.) The same considerations apply in this issue as in moving any other kind of data files across systems and platforms: - Character set encodings may differ: ASCII vs. EBCDIC; single-byte vs. double-byte. - Binary byte order may differ: "big-endian" vs. "little-endian", as in the classic Intel architecture conventions v. the rest of the world. - Binary unit lengths may differ: as in 32-bit word lengths vs 64-bit. - The data may contain other environmental dependencies. Simply put, the architectures and software levels of the giving and taking systems must be equivalent. In general, use Export/Import to migrate across systems.
(One customer reported successfully migrating from AIX to Solaris via Restore DB; but the totality of success is unknown, and it might succeed only with very specific levels of the two operating systems and *SM servers.) See also IBM site TechNote 1111554 ("Post Database Restore Steps"). See also: Export dsmserv RESTORE DB, volser unknown TSM provides a command to assist with the situation where you need to perform a TSM database restoral and the volume history information has been lost, as in a disk failure. See: dsmserv DISPlay DBBackupvolumes The command requires the devconfig file - which may also have been lost - and entails going hunting through a possibly large number of tapes until you finally find the latest dbbackup tape. What you really need in such circumstances is something to dramatically reduce the number of volumes to search through... One 3494 user reported combined loss of the *SM database and volume history backup file, leaving no evidence of what volume to use in restoring the database. That's a desperate situation, calling for desperate measures... If you know the approximate time period of when your dbbackup was taken, you can narrow it down to a few tape volumes and then try each in a db restore: only one tape in a given time period can be a dbbackup, and the others ordinary data, which db restore should spit out... Go to your 3494 operator panel. Activate Service Mode. In the Utilities menu, choose View Logs. Go into the candidate TRN (transactions) log. Look for MOUNT_COMPLETE, DEMOUNT_COMPLETE entries in your time period. The volser is in angle brackets, like <001646001646>, wherein the volser is 001646. (Watch out for the 3494 PC clock being mis-set.) dsmserv RESTORE DB Preview=Yes Stand-alone command to display a list of the volumes needed to restore the database to its most current state, without performing the restoral operation. 
You must be in the directory with the dsmserv.opt file, else will get ANR0000E message; so do: 'cd /usr/lpp/adsmserv/bin' 'DSMSERV RESTORE DB Preview=Yes' dsmserv runfile Command for the *SM server to run a single procedure encoded into a file, and halt upon completing that task. Syntax: dsmserv runfile FileName where the file contains one or more TSM server commands, one per line (akin to a TSM macro). This command is most commonly run to load the provided sample scripts: dsmserv runfile scripts.smp and to initialize web admin definitions: dsmserv runfile dsmserv.idl Ref: Admin Ref manual; Quick Start manual See also: Web Admin dsmserv UNLOADDB TSM 3.7 Stand-alone command to facilitate defragmentation (reorganization) of the TSM database, via unload-reload: the database is unloaded in key order so that a later reload preserves that order. (The operation does not "compress" the db, as an early edition of the TSM Admin Guide stated, but rather reclaims empty space by compacting database records - putting them closer together.) Syntax: DSMSERV UNLOADDB DEVclass=DevclassName [VOLumenames=Volnameslist] [Scratch=Yes|No] [CONSISTENT=Yes|No] where: CONSISTENT Specifies whether server transaction processing should be suspended so that the unloaded database is a transactionally-consistent image. Default: Yes The procedure: - Shut down the server. - dsmserv unloaddb devclass=tapeclass scratch=yes - Halt that server instance. - Reinitialize the db and recovery log as needed, as in: dsmserv format 1 log1 2 db1 db2 - Reload the database: dsmserv loaddb devclass=tapeclass volumenames=db001,db002,db003 - Consider doing a DSMSERV AUDITDB to fix any inconsistencies before putting the database back into production.
Ref: Admin Guide topic "Optimizing the Performance of the Database and Recovery Log"; Admin Ref appendix A The Tivoli documentation is superficial, failing to provide information as to how long you can expect your database to be out of commission, the risks involved, the actual benefits, or how long you can expect them to last. For execution, there is no documentation saying what constitutes success or failure, what messages may appear, or what to do if the operation fails. Is it worth it? Customers who have tried the operation report improvements of about 10% immediately after the reload, and very long runtimes (maybe days). It is probably not worth it. dsmserv UPGRADEDB Updates some of the database meta-data; invoked ('dsmserv UPGRADEDB') - only if it needs to be invoked - when the server is down. Conventionally, a product upgrade from one release to the next will require an UPGRADEDB; but when going between PTFs and patches of the same release an UPGRADEDB should not be required. It does not have to convert any database data - and thus the operation is insensitive to the size of the actual database and should take seconds to execute regardless of the database size. All your policies, devices, etc. will be preserved. Note that upgrades which do not involve any change in data formats will not utilize an Upgradedb. Upgrades that do involve data format changes will usually perform the Upgradedb automatically - or in some cases tell the customer that it needs to be done. So, usually you do not have to manually invoke an Upgradedb. Naturally, server upgrades are performed when the server is down. DSMSERV_ACCOUNTING_DIR Server environment variable to specify the directory in which the dsmaccnt.log accounting file will be written. If the directory doesn't exist, or the environment variable is not set, the current directory is used for the accounting file. NT note: a Registry key instead specifies this location.
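As a minimal sketch of using DSMSERV_ACCOUNTING_DIR (the directory path here is a made-up example, not a recommended location), the accounting destination would be set in the environment of the shell that starts the server:

```shell
# Hypothetical sketch: create a dedicated directory for dsmaccnt.log and
# export DSMSERV_ACCOUNTING_DIR before starting dsmserv; if the variable
# is unset, the accounting file lands in the current directory instead.
DSMSERV_ACCOUNTING_DIR=/tmp/tsm-acct        # example path only
export DSMSERV_ACCOUNTING_DIR
mkdir -p "$DSMSERV_ACCOUNTING_DIR"
# nohup ./dsmserv quiet &                   # then start the server
echo "accounting will be written under $DSMSERV_ACCOUNTING_DIR"
```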
DSMSERV_CONFIG Server environment variable to point to the Server Options file. DSMSERV_DIR Server environment variable to point to the directory containing the server executables. DSMSERV_OPT Server environment variable to point to the server options file. dsmserv.42 Version of dsmserv for AIX 4.2, so as to support ADSM file system volumes > 2GB in size. In such a system, dsmserv should be a symlink to dsmserv.42 . Be sure to define the filesystem as "large file enabled". dsmserv.cat ADSM V.3 message catalog installed in /usr/lib/nls/msg/en_US. dsmserv.dsk File which names the database and recovery log files/volumes, each on its own line, as referenced by the server when it starts. Created: Via 'dsmserv format', as specified in the Quick Start manual. Updated: Each time you define or delete server volumes. (Humans should never have to touch this file.) Where: AIX: /usr/lpp/adsmserv/bin/ Sun: /opt/IBMadsm-s/bin/ At start-up, dsmserv.dsk is used to find ONE data base or recovery log volume: the rest of the volumes are located through a structure in the first 1 MB that is added to each of the data base and recovery log volumes. That is, each db and log file contains info about all the other db and log files, so in a pinch you could start the server by creating a minimal dsmserv.dsk file containing just one db and log file name: the server will thereafter update dsmserv.dsk with all the log and db file names. dsmserv.err Server error log, in the server directory, written when the server crashes, ostensibly when the server is being run in the foreground. Seen to contain messages: ANR7833S, ANR7834S, ANR7837S, ANR7838S See also: dsmsvc.err DSMSERV.IDL See: Web Admin (webadmin) dsmserv.lock The TSM server lock file. It both carries information about the currently running server, and serves as a lock point to prevent a second instance from running. Sample contents: "dsmserv process ID 19046 started Tue Sep 1 06:46:25 1998".
Msgs: ANR7804I See also: adsmserv.lock dsmserv.opt Server Options File, normally residing in the server directory. Specifies a variety of server options, one of the most important being the TCP port number through which clients reach the server, as coded in their Client System Options File. Note that the server reads the file from top to bottom during restart. Some options, like COMMmethod, are additive, while others are unique specifications. For unique options, the last one specified in the file is the one used. Updating: Because the server reads its options file only at start time, changes made to the file via a text editor will not go into effect until the next server restart. Use the SETOPT command (q.v.) to both update the file and put some options into effect. (Beware, however, that the command appends to the file, which can result in there being multiple, redundant options in the file which you will want to clean up.) The DSMSERV_CONFIG environment variable, or the -o option of the 'dsmserv' command, can be used to specify an alternate location for the file. Ref: Admin Ref manual, appendix "Server Options Reference" See also: Query OPTion dsmserv's, number of See: Processes, server dsmsetpw HSM: Command to change the ADSM password for your client node. dsmsm HSM: Space monitor daemon process which runs when there are space-managed file systems defined in /etc/adsm/SpaceMan/config/dsmmigfstab dsmsm PID HSM: Is remembered in file: /etc/adsm/SpaceMan/config/dsmmigfstab.pid dsmsnmp ADSMv3: SNMP component. Must be started before the ADSM server. dsmsta Storage Agent. dsmstat Monitors NFS mounted filesystems to be potentially backed up. DSM_DIR also points to this. See: NFSTIMEout dsmsvc.err Server error log, in the server directory, written when the server crashes, ostensibly when the server is being run in the background. See also: dsmserv.err DSMSVC.EXE Service name of the web server bound to TCP port 1580.
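Illustrating the dsmserv.dsk description above - the file contents and volume paths here are entirely hypothetical - a minimal recovery copy of the file needs only one log volume name and one db volume name, one per line; the server then rediscovers the remaining volumes from the control area on each volume and rewrites the file itself:

```shell
# Hypothetical minimal dsmserv.dsk (volume paths are examples only):
# one recovery log volume and one database volume, each on its own line.
cat > /tmp/dsmserv.dsk <<'EOF'
/usr/lpp/adsmserv/bin/log1
/usr/lpp/adsmserv/bin/db1
EOF
wc -l < /tmp/dsmserv.dsk
```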
dsmtca Trusted Communication Agent, aka Trusted Client Agent program. Employing the client option PASSWORDAccess Generate causes dsmtca to run as root. For non-root users, the ADSM client uses a trusted client (dsmtca) process to communicate with the ADSM server via a TCP session. This dsmtca process runs setuid root, and communicates with the user process (dsmc) via shared memory, which requires the use of semaphores. So for non-root users, when you start a dsmc session, it hands data to dsmtca as an intermediary to send to the server. The DSM_DIR client environment variable should point to the directory where the file should reside. dsmulog You can capture *SM server console messages to a user log file with the *SM dsmulog utility. You can invoke the utility with the ADSMSTART shell script which is provided as part of the ADSM AIX server package. You can have the server messages written to one or more user log files. When the dsmulog utility detects that the server it is capturing messages from is stopped or halted, it closes the current log file and ends its processing. (/usr/lpp/adsmserv/bin/) Ref: Admin Guide; Admin Ref; /usr/lpp/adsmserv/bin/adsmstart.smp dsmwebcl.log The Web Client log, where all Web Client messages are written. (Error messages are written to the error log file.) Location: Either the current working directory or the directory you specify with the DSM_LOG environment variable. See also: Web client Dual Gripper 3494 feature to add a second gripper to the cartridge picker ("hand") so that it can hold one cartridge to be stored and grab one for retrieval. This feature makes possible "Floating-home Cell" so that cartridges need not be assigned fixed cells. "Reach" factors result in the loss of the top and bottom two rows of your storage cells, so consider carefully if you really need a dual gripper. (Except in a very active environment with frequent tape transitions, storage cells are preferred over having a dual gripper.)
The gripper is not controlled by host software: it is a 3494 Library Manager optimizer function (i.e., microcode). The dual gripper is only used during periods of high (as determined by the LM) activity. Dual Gripper usage statistics Gripper usage info is available from the 3494's Service Mode... Go to the Service menu thereunder, and select View Usage Info. DUMPDB See: DSMSERV DUMPDB dumpel.exe Windows: Dump Event Log, a Windows command-line utility that dumps an event log for a local or remote system into a tab-separated text file. This utility can also be used as a filter. DURation In schedules: The DURation setting specifies the size of the window within which the scheduled event can begin - or resume. For example, if the scheduled event starts at 6 PM and has a DURation of 5 hours, then the event can start anywhere from 6 PM to 11 PM. Perhaps more importantly, if the scheduled event is preempted (msg ANR0487W), ADSM will know enough to restart the event if resources (i.e., tape drives) become available within the window. DVD as server serial media Backups can be performed to DVD, in place of tape. The Admin Guide manual provides some guidance in configuring for this. One Windows customer reports success in a somewhat different way: Use the Windows program called DLA (Drive Letter Assignment) from Veritas, often included in the burner software; or use a package like IN-CD from Nero. You can then format the DVD (or CD) like a diskette. Then define a device-class of removable file and a manual library. Now you can write directly on the CD or DVD. See also: CD... DYnamic An ADSM Copy Group serialization mode, as specified by the 'DEFine COpygroup' command SERialization=DYnamic operand spec. This mode specifies that ADSM accepts the first attempt to back up or archive an object, regardless of any changes made during backup or archive processing. See: Serialization. Contrast with Shared Dynamic, Shared Static, and Static. See also CHAngingretries option.
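The DURation example above is simple clock arithmetic; as a sketch:

```shell
# The scheduled event may begin anywhere within [start, start+DURation]:
# a 6 PM start with DURation=5 gives a startup window of 6 PM to 11 PM.
start_hour=18                       # 6 PM, in 24-hour form
duration=5                          # DURation=5 (hours)
window_end=$(( start_hour + duration ))
echo "startup window: ${start_hour}:00 - ${window_end}:00"
```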
DynaText The hypertext utility in ADSMv2 to read the online Books on most platforms supporting ADSM: all Unixes, Macintosh, Microsoft Windows. Obsolete, with the advent of HTML and PDF. 'E' See: 3490 tape cartridge; Media Type E-fix IBM term for an emergency software patch created for a single customer's situation. As such, e-fixes should not be adopted by other customers. See also: Patch levels E-Lic Electronic Licensing - A key file that is on the CD, but not located on any download sites. Thus you must have the CD loaded in most cases before being able to use the downloaded filesets. EBU Enterprise Backup UTILITY used with Oracle 7 databases. Involves a Backup Catalog. See "RMAN" for Oracle 8 databases. ECCST Enhanced Capacity Cartridge System Tape; a designation for the 3490E cartridge technology, which reads and writes 36 tracks on half-inch tape. Sometimes referred to as MEDIA2. Contrast with CST and HPCT. See also: CST; HPCT; Media Type .edb Filename suffix for MS Exchange Database. Related: .pst Editor ADSMv3 client (dsm.opt or dsm.sys) option controlling the command line interface editor, which allows you to recall a limited number of previously-issued commands (up to 20) via the keyboard (up-arrow, down-arrow), and edit them (up-arrow, Delete, Insert keys). Specify: Yes or No Default: Yes Ref: B/A Client manual, Using Commands, Using Previous Commands EHPCT 3590 Extended High Performance Cartridge Tape, as typically used in 3590E drives. See: 3590 'K' See also: CST; HPCT Eject tape from 3494 library Via TSM server command: 'CHECKOut LIBVolume LibName VolName [CHECKLabel=no] [FORCE=yes] [REMove=Yes]' where the default REMove=Yes causes the ejection. Via Unix command you can effect this by changing the category code to EJECT (X'FF10'): 'mtlib -l /dev/lmcp0 -vC -V VolName -t ff10' Ejections, "phantom" Tapes get ejected from the tape library without TSM having done it.
Customers report the following causes: - Drive incorrectly configured by installation personnel. Reads fail, and the drive (erroneously) signals the library manager that the tape is so bad that it should be spit out. - Excessive SCSI chain length. Caused severe errors such that the tape was rejected. Ejects, pending Via Unix command: 'mtlib -l /dev/lmcp0 -qS' Elapsed processing time Statistic at end of Backup/Archive job, recording how long the job took, in hours, minutes, and seconds, in HH:MM:SS format, like: 00:01:36. This is calculated by subtracting the starting time of a command process from the ending time of the completed command process. Shows up in server Activity Log on message ANE4964I. ELDC Embedded Lossless Data Compression compression algorithm, as used in the 3592. See also: ALDC; LZ1; SLDC Element Term used to describe some part of a SCSI Library, such as the 3575. The element number allows addressing of the hardware item as a subset of the SCSI address. An element number may be used to address a tape drive, a tape storage slot, or the robotics of the library. In such libraries, the host program (TSM) is physically controlling actions and hence specific addressing is necessary. In libraries where there is a supervisor program (e.g., 3494), actions are controlled by logical host requests to the library, rather than physical directives, and so element addressing is not in effect. In TSM, an element is described in the 'DEFine DRive' command ELEMent parameter. Note that element numbers do not necessarily start with 1. See also: HOME_ELEMENT Element address SCSI designation of the internal elements of a SCSI device, such as a small SCSI library, where each slot, drive, and door has its own element address as a subset of the library's SCSI address. Element addresses have fixed assignments, per the device manufacturer: your definitions must conform to them. 
If a SCSI library drive cannot be used within TSM but can be used successfully via external means (e.g., the Unix 'tar' command), that could indicate incorrect Element addresses. Another symptom of an element mismatch is if TSM will mount a tape but be unable to use it and/or dismount it. Element addresses, existing You can probably use the 'tapeutil' or 'ntutil' command: open first device and then do Element Inventory (14). Or use 'lbtest' (q.v.): Select 6 to open the library, 8 to get the element count and 9 to get the inventory. Scroll back to the top of the 9 listing to find the drives and element addresses associated with SCSI IDs. In AIX, note that the 'lsdev' command is typically of no help in identifying the element address from the SCSI ID and drive - there is no direct correlation. Example of using lbtest: Library with three drives mt1, mt2 and mt3 (drives can be either rmtX or mtX devices). The slot addresses are 5, 6, and 7. It is believed that mt1 goes with element 5. To test this theory a tape needs to be loaded in the drive located at slot 5 either manually or using lbtest. To use lbtest do the following: - Invoke lbtest - Select 1: Manual test - Select 1: Set device special file (e.g., /dev/lb0) - Prompt: "Return to continue:" Press Enter - Select 6: open - Select 8: ioctl return element count (shows the number of drives, slots, ee ports and transports) - Select 9: ioctl return all library inventory (Will show the element address of all components. Next to element address you will see indications of FULL or EMPTY.) - Select 11: move medium transport element address: Source address moving from: (select any slot with tape) Destination address move to: (in this case it would be 5) Invert option: Select 0 for not invert - Select 40: execute command (which does AIX command `tctl -f /dev/mt1 rewoffl`) If the command is successful, the drive and element match.
If you get the message "Driver not ready", try /dev/mt2 and so on until it is successful: the process of elimination. - Select 11: move medium Source address will be 5 and destination will be 6 for the next drive. - Select 40: execute command - Repeat selections 11 and 40 for each remaining drive. - After the last drive has been verified, select 11 to return the tape to its slot, select 99 to return to the opening menu, and select 9 to quit. Element number See: Element address Empty Typical status of a tape in a 'Query Volume' report, reflecting a sequential access volume that either had just been acquired for use from the Scratch pool, or had been assigned to the storage pool via DEFine Volume, and data has not yet been written to the volume. Can also be caused when the empty tapes are not in the library by virtue of MOVe MEDia: another MOVe MEDia would have to be done to get them to go to scratch, because if the tapes are out of the library and go to scratch you will lose track of them. See also: Pending Empty directories, backup Empty directories are only backed up during an Incremental backup, not in a Selective backup. (Some portions of the ADSM documentation suggest that empty directories are not backed up: this is incorrect - they are backed up.) Empty directories, restoring See "Restore and empty directories". Empty file and Backup The backup of an empty file does not require storage pool space or a tape mount: it is the trivial case where all the info about the empty file can be stored entirely in the database entry. However, if supplementary data such as an Access Control List (ACL) is attached to the file, it means that the entry is too data-rich to be entirely stored in the database and so ends up in a storage pool. EMTEC European Multimedia Technologies Former name: BASF Magnetics, which changed its name to EMTEC Magnetics after it was sold by BASF AG in 1996. Starting in 2002, all famous BASF-brand audio, video and data media products will bear the name "EMTEC".
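The drive/element elimination above boils down to a loop over the candidate device special files; a self-contained sketch, with try_unload standing in for the real `tctl -f /dev/mtX rewoffl` probe (the device names and the "matching" drive are invented so the sketch can run anywhere):

```shell
# try_unload fakes the rewind-offline probe for demonstration purposes;
# in real life it would be:  tctl -f "$1" rewoffl
try_unload() { [ "$1" = "/dev/mt2" ]; }

matched=""
for dev in /dev/mt1 /dev/mt2 /dev/mt3; do
    if try_unload "$dev"; then      # success: the tape is in this drive
        matched="$dev"
        break
    fi
done
echo "element 5 pairs with drive special file: $matched"
```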
Emulex LP8000 Fibre Channel Adapter Needs to be configured as "fcs0" device for it to work with the TSM smit menus. If inadvertently defined as an lpfc0 device, it suggests that you have loaded the "emulex" device driver instead, which corresponds to the filesets devices.pci.lpfc.diag and devices.pci.lpfc.rte, filesets which are provided by Emulex. In order to have the device recognized as a fcs0 device instead of lpfc0 device, you need to remove those two filesets and rerun cfgmgr. You of course will need to have the proper IBM AIX fibre channel filesets installed. Those filesets are discussed in the TSM server readme. http://www.emulex.com/ts/fc/docs/frame8k.htm ENable Through ADSMv2, the command to enable client sessions. Now ENable SESSions. ENable SESSions TSM server command to permit client node Backup and Archive sessions, undoing the prohibition of a prior DISAble SESSions command. Note that the Disable status does not survive across an AIX reboot: the status is reset to Enable. Determine status via 'Query STatus' and look for "Availability". Msgs: ANR2096I See also: DISAble SESSions; ENable ENABLE3590LIBRary Definition in the server options file (dsmserv.opt). Specifies the use of 3590 tape drives within 349x tape libraries. Default: No? Msgs: ANR8745E Ref: Installing the Server... ENABLE3590LIBRary server option, query 'Query OPTion' ENABLELanfree TSM client option to specify whether to enable an available LAN-free path to a storage area network (SAN) attached storage device. A LAN-free path allows backup, restore, archive, and retrieve processing between the Tivoli Storage Manager client and the SAN-attached storage device. See also: LanFree bytes transferred ENABLEServerfree TSM client option to specify whether to enable SAN-based server-free image backup which off-loads data movement processing from the client and server processor and from the LAN during image backup and restore operations.
Client data is moved directly from the client disks to SAN-attached storage devices by a third-party copy function initiated by the Tivoli Storage Manager server. The client disks must be SAN-attached and accessible from the data mover, such as a SAN router. If SAN errors occur, the client fails over to a direct connection to the server and moves the data via LAN-free or LAN-based data movement. See also: Server-free; Serverfree data bytes transferred Encryption of client-sent data New in TSM 4.1. Uses a standard 56-bit DES routine to provide the encryption. The encryption support uses a very simple key management method, where the key is a textual password. The key is only used at the client; it is not transferred or stored at the server. Multiple keys can be used, but only the key entered when the ENCryptkey client option was set to SAVE is stored. Information stored in the file stream on the server indicates that encryption was used and which type. Unlike the TSM user password, the encryption key password is case-sensitive. If the password is lost or forgotten, the encrypted data cannot be decrypted, which means that the data is lost. Where the client options call for both compression and encryption, compression is reportedly performed before encryption - which makes sense, as encrypted data is effectively binary data, which would either see little compression, or even expansion. And, encryption means data secured by a key, so it further makes sense to prohibit any access to the data file if you do not first have the key. Performance hit: Be well aware that encrypting network traffic comes at a substantial price, in lowering throughput. See: ENCryptkey ENCryptkey TSM 4.1 Windows option, later extended to other clients, specifying whether to save the encryption key password to the Registry in encrypted format. (Saving it avoids being prompted for the password when invoking the client, much like "PASSWORDAccess generate" saves the plain password.)
Syntax: ENCryptkey Save|Prompt where Save says to save the encryption key password while Prompt says not to save it, such that you are prompted in each invocation of the client. Where stored: Unix: The encryption key and password are encrypted and stored in the TSM.PWD file, in a directory determined by the PASSWORDDIR option. Windows: Registry Default: Save See also: /etc/security/adsm/; INCLUDE.ENCRYPT; EXCLUDE.ENCRYPT End of volume (EOV) The condition when a tape drive reaches the physical end of the tape. Unlike disks, which have fixed, known geometries, tape lengths are inexact. In writing a tape, its end location is known only by running into it. End-of-volume message ANR8341I End-of-volume reached... Enhanced Virtual Tape Server 1998 IBM product: To optimize tape storage resources, improve performance, and lower the total cost of ownership. See also: Virtual Tape Server Enrollment Certificate Files Files provided by Tivoli, with your server shipment, containing server license data. Filenames are of the form _______.lic . See: REGister LICense Enterprise Configuration and Policy Management TSM feature which makes possible providing Storage Manager configuration and policy information to any number of managed servers after having been defined on a configuration server. The managed servers "subscribe" to profiles owned by the configuration manager, and thereafter receive updates made on the managing server. The managed server cannot effect changes to such served information: it is only a recipient. Ref: Admin Guide, chapter on "Working with a Network of IBM Tivoli Storage Manager Servers" Enterprise Management Agent The TSM 3.7 name for the Web Client. Environment variables See: DSM_CONFIG, DSM_DIR, DSM_LOG, DSMSERV_ACCOUNTING_DIR, VIRTUALMountpoint In AIX, you can inspect the env vars for a running process via: ps eww Ref: Admin Guide, "Defining Environment Variables"; Quick Start, "Defining Environment Variables" EOS End of Service.
IBM term for discontinuance of support for an old product. Their words: "Defect support for Tivoli products will generally be provided only for the current release and the most recent prior release. A prior release will be eligible for service for 12 months following general availability of the current release. These releases will be supported at the latest maintenance ("point release") level. Usually, there will be 12 months' notice of EOS for a specific release. At the time of product withdrawal, notice of the EOS date for the final release will be given. At the time a release reaches EOS, it will no longer be supported, updated, patched, or maintained. After the effective EOS date, Tivoli may elect, at its sole discretion, to provide custom support beyond the EOS date for a fee." See also: WDfM EOT An End Of Tape tape mark. See also: BOT EOV See: End of volume EOV message ANR8341I End-of-volume reached... ERA codes (from 3494) See MTIOCLEW (Library Event Wait) Unsolicited Attention Interrupts table in the rear of the SCSI Device Drivers manual. Erase tape See: Tape, erase errno The name of the Unix system standard error number, as enumerated in header file /usr/include/sys/errno.h . Some *SM messages explicitly refer to it by its name, some by generic return code. errno 2 Common error indicating "no such file or directory", often caused by specifying a file name without using its full path, such that the operation seeks the file in the current directory rather than a specific place. Error handler See: ERRORPROG Error log A text file (dsmerror.log) written on disk that contains ADSM processing error messages. Beware symbolic links in the path, else suffer ANS1192E. See also: DSM_LOG; ERRORLOGname; ERRORLOGRetention Error log, operating system AIX has a real hardware error log, reported by the 'errpt' command. Solaris records various hardware problems in the general /var/log/messages log file. 
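The "errno 2" case above is easy to reproduce at a shell prompt - a bare (relative) file name is resolved only against the current directory:

```shell
# A missing file referenced by relative name yields Unix errno 2,
# whose standard message text is "No such file or directory".
rc=0
cat ./surely-no-such-file-here 2> /tmp/errno2.msg || rc=$?
echo "cat exit status: $rc"
grep 'No such file or directory' /tmp/errno2.msg
```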
Error log, query ADSM 'dsmc Query Options' or TSM 'dsmc show options', look for "Error log". Error log, specify location The DSM_LOG Client environment variable may be used to specify the directory in which the log will live. ADSMv3: add this to dsm.sys: * Error log errorlogname /var/adm/log/dsmerror.log errorlogretention 14 D Error log size management Use the client option ERRORLOGRetention to prune old entries from the log, and to potentially save old entries. Error messages language "LANGuage" definition in the server options file. Error number In messages, usually refers to the error number returned by the operating system. In Unix, this is the "errno" (q.v.). Error Recovery Cell See "Gripper Error Recovery Cell" ERRORLOGname Macintosh, Novell, and Windows options file and command line option for specifying the name of the TSM error log file (dsmerror.log), where error messages are written. (Note that it is the name of a file, not a directory.) Beware symbolic links in the path, else suffer ANS1192E. See also: DSM_LOG; dsmerror.log; ERRORLOGRetention ERRORLOGRetention Client System Options file (dsm.sys) option (not Client User Options file, as the manual may erroneously say) to specify the number of days to keep error log entries, and whether to save the pruned entries (in file dsmerlog.pru). Syntax: ERRORLOGRetention [N | days] [D | S] where: N Do not prune the log (default). days Number of days of log to keep. D Discard the error log entries (the default) S Save the error log entries to same-directory file dsmerlog.pru Placement: Code within server stanza. Default: Keep logged entries indefinitely. See also: SCHEDLOGRetention ERRORPROG Client System Options file (dsm.sys) option to specify a program which ADSM should execute, with the message as an operand, if a severe error occurs during HSM processing. Can be as simple as "/bin/cat". Code within the server stanza. ERT Estimated Restore Time See also: Estimate ESM Enterprise Storage Manager, as in ADSM or TSM.
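A trivial ERRORPROG-style handler can be sketched as follows (the function and log path are invented for illustration - TSM simply executes the named program with the severe-error message text as its operand, and "/bin/cat" is the minimal case):

```shell
# Hypothetical stand-in for an ERRORPROG script: timestamp the severe
# HSM error message (passed as arguments) into a private log file.
log=/tmp/hsm-errors.log
errorprog() { echo "$(date '+%Y/%m/%d %H:%M:%S') $*" >> "$log"; }

errorprog "ANS9999E sample severe HSM error text"   # sample message only
tail -1 "$log"
```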
ESTCAPacity The estimated capacity of volumes in a Device Class, as specified in the 'DEFine DEVclass' command. This is almost always just a human reference value, having no impact on how much data TSM actually puts onto a tape - which is as much as it can. Note that the value "latches" for a given volume when use of the volume first begins. Changing the ESTCAPacity value will apply to future volumes, but will not change the estimated capacity of prevailing volumes (as revealed in a 'Query Volumes' report). After a reclamation, the ESTCAPacity value for the volume returns to the base number for the medium type. Estimate The ADSMv3 Backup/Archive GUI introduced an Estimate function. At the conclusion of backups, this implicit function collects statistics from the *SM server, which the client stores, by *SM server address, in the .adsmrc file in the user's Unix home directory, or Windows dsm.ini file. In a later operation, the GUI user may invoke the Estimate function to get a sense of what will be involved in a subsequent Backup, Archive, Restore, or Retrieve: The client can then estimate the elapsed time for the operation on the basis of the saved historical information. A user can then choose to cancel the operation before it starts if the amount of data selected or the estimated elapsed time for the operation is excessive. The information provided: Number of Objects Selected: The number of objects (files and directories) selected for an operation such as backup or restore. Calculated Size: The Estimate function calculates the number of bytes the currently selected objects occupy by scanning the selected directories or requesting file information from the *SM server. Estimated Transfer Time: The client estimates the elapsed time for the operation on the basis of historical info, calculating it by using the average transfer rate and average compression rate from previous operations. 
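The Estimated Transfer Time arithmetic just described can be sketched as follows. This is a simplification under assumed names: the client actually keeps per-server historical statistics in .adsmrc or dsm.ini, while this sketch just applies one saved average rate and compression ratio:

```python
def estimate_transfer_time(selected_bytes, avg_bytes_per_sec,
                           avg_compression_ratio=1.0):
    """Estimate elapsed seconds for an operation from historical stats,
    roughly as the GUI Estimate function does: scale the calculated size
    of the selected objects by the average compression ratio from prior
    operations, then divide by the average transfer rate."""
    if avg_bytes_per_sec <= 0:
        raise ValueError("no usable historical transfer rate")
    return (selected_bytes * avg_compression_ratio) / avg_bytes_per_sec
```

A user could compare such an estimate against an acceptable window and cancel the operation before it starts, as the entry above describes.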
See also: .adsmrc; dsm.ini Estimated Capacity A column in a 'Query STGpool' report telling of the estimated capacity of the storage pool. The value is dependent upon the stgpool MAXSCRatch value having been set: If the stgpool has stored data on at least one scratch volume, the estimated capacity includes the maximum number of scratch volumes allowed for the pool. (For tape stgpools, the EstCap number is a rather abstract value, amortized over all the tapes in a library - which typically have to be available for use in other storage pools as well, and so is usually meaningless for any single stgpool. See "Pct Util, from Query STGpool" for observations on deriving the amount of data contained in the stgpool.) TSM uses estimated capacity to determine when to begin reclamation of stgpool volumes. Estimated Capacity A column in a 'Query Volumes' report telling of the estimated capacity of a volume, which is as specified via the ESTCAPacity operand of the 'DEFine DEVclass' command. The value reported is the "logical capacity": the content after 3590 hardware compression. If the files were well compressed on the client, then little or no further compression can be done by the drives, and thus the value will be closer to the physical capacity. Experience shows that the capacity value is not assigned to a volume until the first data is actually written to it. Ref: TSM Admin Guide, "How TSM Fills Volumes" See also: ESTCAPacity; Pct Util /etc/.3494sock Unix domain socket file created by the Library Manager Control Point daemon (lmcpd). /etc/adsm/ Unix directory created for storing control information. All Unix systems have the HSM SpaceMan subdirectory in there. Non-AIX Unix systems have their encrypted client password file in there for option PASSWORDAccess GENERATE. The 3.7 Solaris client (at least, GUI) is reported to experience a Segmentation Fault failure due to a problem in the encrypted password file.
Removing the problem file from the /etc/adsm/ directory (or, the whole directory) will eliminate the SegFault. (Naturally, you have to perform a root client-server operation like 'dsmc q sch' to cause the password file to be re-established.) See also: /etc/security/adsm; Password, client, where stored on client; PASSWORDDIR /etc/adsm/SpaceMan/status HSM status info, which is the symlink target of the .SpaceMan/status entry in the space-managed file system. /etc/ibmatl.conf Library Manager Control Point Daemon (lmcpd) configuration file in Unix. Defines the 3494 libraries that this host system will communicate with. Each active line in the file consists of three parts: 1. Library name: Is best chosen to be the network name of your library, such as "LIB1" in a "LIB1.UNIVERSITY.EDU" name. In AIX, the name must be the one that was tied to the /dev/lmcp_ device driver during SMIT configuration. In Solaris, this is the arbitrary symbolic name you will specify on the DEVIce operand of the DEFine LIBRary TSM server command, and use with the 'mtlib' command -l option to work with the library. 2. Connection type: If RS-232, the name of the serial device, such as /dev/tty1. If TCP/IP, the IP address of the library. (Do not code ":portnumber" as a suffix unless you have configured the 3494 to use a port number other than "3494", as reflected in /etc/services.) 3. Identifier: The 1-8 character name you told the 3494 in Add LAN Host to call this host system (Host Alias). The file may be updated at any time; but the lmcpd does not look at the file except when it starts, so it needs to be restarted to see the changes. Ref: "IBM SCSI Tape Drive, Medium Changer, and Library Device Drivers: Installation and User's Guide" manual (GC35-0154) See also: Library Manager Control Point Daemon /etc/ibmatl.pid Library Manager Control Point (LMCP) Daemon PID number file. The lmcpd apparently keeps it open and locked, so it is not possible for even root to open and read it.
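The three-field /etc/ibmatl.conf line format described above can be handled with a short parsing sketch like this (the sample content and function name are hypothetical, for illustration only; comment and blank lines are skipped, as in the real file):

```python
def parse_ibmatl_conf(text):
    """Parse /etc/ibmatl.conf-style content into a list of
    (library_name, connection, host_alias) tuples.
    Lines starting with '#' and blank lines are ignored."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Field 1: library name; field 2: tty device or IP address;
        # field 3: the Host Alias given to the 3494 in Add LAN Host.
        name, connection, alias = line.split()[:3]
        entries.append((name, connection, alias))
    return entries

sample = """
# name     connection      identifier
lib1       192.168.1.50    myhost
lib2       /dev/tty1       myhost
"""
```

Remember that lmcpd reads this file only at startup, so a script that rewrites it would still need the daemon restarted for changes to take effect.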
/etc/mnttab in Solaris Prior to Solaris 8, /etc/mnttab was a mounts table file. As of Solaris 8, it is a mount point for the mnttab file system! The name should be excluded from backups (in dsm.opt code "Domain -/etc/mnttab"), as it does not have to be restored: the OS will re-create it. /etc/security/adsm/ AIX default directory where ADSM stores the client password. Overridable via the PASSWORDDIR option. ADSMv3: Should contain one or more files whose upper case names are the servers used by this client, and whose contents consist of an explanatory string followed by an encrypted password for reaching that server. TSMv4: File name is TSM.PWD. This password file is established by the client superuser performing a client-server command which requires password access, such as 'dsmc q sched'. See also: Client password, where stored on client; ENCryptkey; /etc/adsm; PASSWORDDIR Ethernet card, force use of specific You may have multiple ethernet cards in a computer and want client sessions to use a particular card. (In networking terms, the client is "multi-homed".) This can be effected via the client TCPCLIENTAddress option, in most cases; but watch out for the server-side node definition having a hard-coded HLAddress specification. Event ID NN (e.g. Event ID 11) An NT Event number, as can be seen in the NT Event Viewer. A handy place to search for their meaning: http://www.eventid.net/search.asp Event ID: 17055 As when backing up an MS SQL db. Apparently the backup process was interrupted and this caused the BAK file to become corrupt. This also makes it impossible to restore from the BAK file, another reported symptom. The BAK files were deleted and recreated and things worked thereafter. Event Logging An ADSM feature. You can define event receivers using FILEEXIT or USEREXIT support and collect real time event data. You can then create your own parsing utilities (or borrow someone's) to sort the data and arrange the results to suit your needs.
This avoids the Query Event command, which is compute intensive and requires a generous amount of server resources. Event Logging is one way to alleviate expensive queries against your server. See: BEGin EVentlogging; Disable Events; ENable EVents; Query ENabled Event records, delete old 'DELete event Date [Time]' Event records retention period, query 'Query STatus', look for "Event Record Retention Period" Event records retention period, set 'Set EVentretention N_Days' Default: Installation causes it to be set to 10 days. Event return codes Return codes in the Event Log can be other than what you might expect... If a client schedule command is executed asynchronously, then it is not possible for TSM to track the outcome, in which case the event will be reported as "Complete" with return code 0. To get a true return code, run the command synchronously, where possible, as in using Wait=Yes. If the command is a Server Script that includes several commands which are simply stacked to run in sequence, each of those commands may or may not end with return code 0; but if the script ultimately exits with a return code of 0, the event will be reported as "Complete" with return code 0. The obvious treatment here is to write the Script to examine the return code from each invoked command and exit early when a result is non-zero. Again, such commands must be synchronous. See also: Return codes Event server See: TEC EVENTS table SQL table. Columns: SCHEDULED_START, ACTUAL_START, DOMAIN_NAME, SCHEDULE_NAME, NODE_NAME, STATUS, RESULT, REASON. More reliable than the SUMMARY table, but getting at data can be a challenge. You need to specify values for the SCHEDULED_START and/or ACTUAL_START columns in order to get older data from the EVENTS table: SELECT * FROM EVENTS WHERE SCHEDULED_START>'06/13/2003'. Restriction: Dates must be explicit, not computed or relative; so the construct "scheduled_start>current_timestamp - 1 day" won't work (see APAR IC34609).
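Because the EVENTS table requires explicit dates (the relative "current_timestamp - 1 day" construct fails, per APAR IC34609), a front-end script can compute the literal date itself before issuing the SELECT. A minimal sketch, with the MM/DD/YYYY literal format following the example above; the function name and any dsmadmc invocation are assumptions:

```python
from datetime import date, timedelta

def events_query(days_back, today=None):
    """Build a SELECT against the EVENTS table with an explicit
    SCHEDULED_START date literal computed client-side, since the
    server rejects relative timestamp arithmetic in this query."""
    today = today or date.today()
    start = today - timedelta(days=days_back)
    literal = "%02d/%02d/%04d" % (start.month, start.day, start.year)
    return "SELECT * FROM EVENTS WHERE SCHEDULED_START>'%s'" % literal

# The resulting string could then be passed to an administrative
# client session (e.g. via dsmadmc) to retrieve older event records.
```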
For a developer, the EVENTS table is a little tricky. Unlike BACKUPS, NODES, ACTLOG, etc., which have a finite number of records, the EVENTS table is unbounded. If you do a Query EVent with date criteria beyond your event record retention setting, you'll get a status of Uncertain. If you do a Query EVent for future dates, you get a status of Future. When the Query EVent function was "translated" to the SELECTable EVENTS table, the question as to what constitutes a complete table (i.e. SELECT * FROM EVENTS) needed to be addressed. Since EVENTS is unbounded, the table is theoretically infinite in size. So the developers decided to mirror Query EVent behavior and thus get only the records for today, by default. Note that SELECT does not support the reporting of Future events from the EVENTS table, but it will show you Uncertain records that go past your event record retention. See also: APAR IC34609 re timestamps Events, administrative command schedules, query 'Query EVent ScheduleName Type=Administrative' Events, client schedules, query 'Query EVent DomainName ScheduleName' to see all of them. Or use: 'Query EVent * * EXceptionsonly=Yes' to see just problems, and if none, get message "ANR2034E QUERY EVENT: No match found for this query." EVENTSERVer Server option to specify whether, at startup, the server should try to contact the event server. Code "Yes" or "No". Default: Yes Exabyte 480 8mm library with 4 drives and 80 tape slots. A rotating cylindrical silo sits above the four tape drives. *EXC_MOUNTWait It is an Exchange Agent only option that tells the Exchange Agent to wait for media (tape) mounts when necessary. Values: Yes, No. excdsm.log The TDP for Exchange log file, normally located in the installation directory for TDP for Exchange (unless you changed it). Exchange Microsoft Exchange, a mail agent. Exchange stores all mailboxes in one file (information store) ... therefore you can't restore individual mailboxes.
(More specifically, there is no "brick level" backup/restore due to the absence of a native "backup and restore" API from Microsoft (as of Exchange 5.5 and 2000; a subsequent version may provide the API capability). In Exchange 2000, you can somewhat mitigate having to do mailbox restores if you use the deleted mailbox retention option (or an option named something very similar). This will allow you to recover a mailbox within X days of its deletion, based on this setting. Exchange 2003 should have "Recovery Storage Group" that will allow you to restore an individual mailbox "database" (not a single mailbox, just the mailbox database) into a special storage group without impacting the live server. You can then connect to it and use ExMerge to export the individual mailbox. Still lacking, but something. Ref: In www.adsm.org, search on "brick", and in particular see Del Hoobler's postings.) Backed up by Tivoli Storage Manager for Mail (q.v.). If you have version 1.1.0.0 of the ADSMConnect Exchange Agent, then you MUST be running the backup as Exchange Site Service Account. This account, by default, has the correct permissions to back up the Exchange Server. Performance: Tivoli's original testing showed that "/buffers:3,1024" seemed to produce the best results. Redbook: Connect Agent for Exchange. See also: ARCserve; TDP for Exchange; TXNBytelimit; TXNGroupmax Exchange, delete old backups With TDP for Exchange version 1, look at the "EXCDSMC /ADSMAUTODELETE" command. With TDP for Exchange version 2, you do not have to worry about deletions because it has the added function of TSM policy management that will handle expiration of old backups automatically. Exchange, restore a single mailbox? *SM can only do this if Microsoft provides an API that makes it possible, and Microsoft DOES NOT have mailbox/item level backup and restore APIs for any version of Exchange including the new Exchange 2000.
There are vendors who have coded solutions using APIs (like MAPI) that are not intended for backup and restore. These solutions tend to take large amounts of time for backups and full restores... (Try restoring a 50Gig IS or storage group from an item level backup and restore.) Microsoft itself claims to have tried to come up with a way to provide some type of item level restore support via the backup and restore APIs but has not succeeded because of the architecture of the JET database (the database that is the heart of Exchange.) Microsoft contends that customers should take advantage of deleted item level recovery and the new deleted mailbox level recovery of Exchange 2000 to solve these problems. Ref: "TDP for Microsoft Exchange Server Installation and User's Guide" manual, appendix B topic "Individual Mailbox Restore". A third party vendor, Ontrack Software (www.ontrack.com) has a software product called PowerControls which claims to read a .edb full backup to extract a single mailbox. Exchange, restore across servers? It can be done. One customer says: The trick is to specify the TSM-nodename of the FROM-server when you restore on the TO-server. For instance: tdpexcc restore "Storage Group C" FULL /Mountwait=Yes /MountDatabases=Yes /excserver= /fromexcserver= /TSMPassword= /tsmnode= Another says: Go to the restore server and do a restore of the mail (make sure erase existing logs is CHECKED!), but DO NOT restore the DIRECTORY, only the information store, private and public. Then after the restore restart the services for exchange and go into the Administrator program (see TechNet article ID Q146920 for full details). Go into Server Objects, and then select Consistency Adjuster. Under the Private Information Store section make sure Synchronize with the directory is checked, click All Inconsistencies and away you go. This will rebuild the whole user directory list and all the mail.
Naturally, be sure that your operating system, Exchange, and TDP levels are all the same across the server systems, and do the deed only after having a full backup. Here are some Microsoft docs explaining some issues to keep in mind: http://www.microsoft.com/exchange/techinfo/deployment/2000/MailboxRecover.asp http://www.microsoft.com/exchange/techinfo/deployment/2000/E2Krecovery.asp Exchange, restoring You can restore the Exchange Db to a different computer, provided it is within the same Exchange Org.; but only the info store - not the directory. Performance: An Exchange restore will almost always be slower than backup because it is writing to disk and, more importantly, it is replaying transaction logs. Use Collocation by filespace, to keep the data for your separate storage groups on different tapes to facilitate running parallel restores. Exchange 2000 SRS, back up via CLI To backup the Exchange 2000 Site Replication Service via the command line, do like: tdpexcc backup "SRS Storage" full /tsmoptfile=dsm.opt /logfile=exsch.log /excapp=SRS >> excfull.log Exchange 2003 (Exchange Server 2003) Requires Data Protection for Exchange version 5.2.1 at a minimum. See: http://www.ibm.com/support/entdocview.wss?uid=swg21157215 Exchange Agent Only deals with Information Store (IS) and Directory (DIR) data. The Message Transfer Agent (MTA) is not dealt with at all. The Exchange Agent has 4 backup types: Full, Copy, Incremental, Differential: "Full" and "Copy" backups contain the database file, all transaction logs, and a patch file. "Incremental" and "Differential" backups contain the transaction logs and a patch file, but not the database file. Each backup will show which type it is in the backup history list on the Restore Tab. See also: TDP for Exchange Exchange databases There are 2/3 databases in Exchange... - The Directory, dir.edb, which stores the users/groups/etc. - The Public Database, pub.edb, which stores public folders and such.
- The Private Database, priv.edb, which stores the private mailboxes and such. Exchange product files Seagate had a product for backing up open Exchange files. It uses ADSM as a backup device (through the API). Then Seagate sold the backup software division to Veritas, so see: http://www.veritas.com/products/stormint Exclude The process of specifying a file or group of files in your include-exclude options file with an exclude option to prevent them from being backed up or migrated. You can exclude a file from backup and space management, backup only, or space management only. Note that exclusion operates ONLY ON FILES! Any directories which ADSM finds as it traverses the file system will be backed up. The other implication of this is that ADSM will always traverse directories, even if you don't want it to, so it can waste a lot of time. To avoid directory traversal, use EXCLUDE.DIR, or consider using virtual mount points instead to specify major subdirectories to be processed, and omit subdirectories to be ignored. Note that excluding a file for which there are prior backups has essentially the same effect as if the file had been deleted from the client: all the backup versions suddenly become expired. EXclude Client option to specify files that should be excluded from TSM Archive, Backup, or HSM services. Placement: Unix: Either in the client system options file or, more commonly, in the file named on the INCLExcl option. Other: In the client options file. You cannot exclude in Restorals. Remember that upper/lower case matters! For backup exclusion, code as: 'EXclude.backup pattern...' For HSM exclusion, code as: 'EXclude.spacemgmt pattern...' To exclude from *both* backup and HSM: 'EXclude pattern...' As to "pattern"... /dir/* covers all files in dir and /dir/.../* covers all files in all subdirs of dir, so both cover all files below dir. Further, /dir/.../* includes /dir/*, so only one exclude is necessary to exclude a whole branch. 
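The pattern semantics just described - "*" matching only within one directory level, and "..." spanning any number of intervening directories, so that /dir/.../* subsumes /dir/* - can be modeled by translating a pattern to a regular expression. This is a behavioral sketch under assumed semantics, not the client's actual matching code:

```python
import re

def pattern_to_regex(pattern):
    """Translate an include/exclude-style pattern to a regex:
    '*'   matches any characters except '/', within one directory level;
    '?'   matches a single non-'/' character;
    '...' (between slashes) matches zero or more directory levels."""
    out = []
    i = 0
    while i < len(pattern):
        if pattern.startswith("/.../", i):
            out.append("/(.*/)?")   # zero or more intervening directories
            i += 5
        elif pattern[i] == "*":
            out.append("[^/]*")
            i += 1
        elif pattern[i] == "?":
            out.append("[^/]")
            i += 1
        else:
            out.append(re.escape(pattern[i]))
            i += 1
    return re.compile("^" + "".join(out) + "$")
```

This shows why only one exclude is needed for a whole branch: /dir/.../* matches files directly in /dir as well as files at any depth beneath it.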
Effects: The file(s) are expired in the next backup. Note that with DFS you need to use four dots (as in /dir/..../*). Messages: ANS4119W See also: EXCLUDE.DIR; EXCLUDE.File; EXCLUDE.FS; etc. Exclude a drive You can code your client Domain statement to omit the drive you don't want backed up. Note that specification like 'EXCLUDE.Dir "C:\"' should not be used to try to exclude the root of a drive. Exclude and retention (expiration) When you exclude files or directories, it has the same effect as if the objects were no longer on the client system: the backup versions will be eligible for expiration. Exclude archive files In TSM 4.1: EXCLUDE.Archive In earlier levels, a circumvention is to include them to a special management class that does not exist. You will then get an error message and the files will not be archived. Exclude from Restore There is no Exclude option to exclude file system objects during a Restore. To try to circumvent, you might create a dummy object of that name in the file system and then tell the Restore not to replace files. Exclude ignored? See: Include-Exclude "not working" EXCLUDE.Archive TSM 4.1+: Exclude a file or a group of files that match the pattern from Archiving (only). This does not preclude the archiving of directories in the path of the file - but in any case, this should not be an issue, in that TSM does not archive directories that it knows to already be in server storage. There is no Exclude that excludes from both Archive and Backup. EXCLUDE.Backup Excludes a file or a group of files from backup services only. There is no Exclude that excludes from both Backup and Archive. Effects: The file(s) are expired in the next backup. EXCLUDE.COMPRESSION Can be used to defeat compression for certain files during Archive and Backup processing. Where used: To alleviate the problem of server storage pool space being mis-estimated and backups thus failing because already-compressed files expand during TSM client compression.
So you would thus code like: EXCLUDE.COMPRESSION *.gz EXCLUDE.Dir (ADSM v.3+) Specifies a directory (and files and subdirectories) that you want to exclude from Backup services only, thus keeping *SM from scanning the directory for files and subdirectories to possibly back up. (The simpler EXCLUDE does *not* prevent the directory from being traversed to possibly back up subdirectories.) The pattern is a directory name, not a file specification. Wildcards *are* allowed. In Unix, specify like: EXCLUDE.Dir /dirname or EXCLUDE.Dir /dirnames* In Windows, note that you cannot do like "EXCLUDE.Dir G:" to exclude a drive: you need to have "EXCLUDE.Dir G:\*". Use this option when you have both the backup-archive client and the HSM client installed. Do not attempt to specify like 'EXCLUDE.Dir "C:\"' to try to exclude the root of a drive. Effects: The directory and all files below it are expired in the next backup. Note that EXCLUDE.Dir takes precedence over all other Include/Exclude statements, regardless of relative positions. Note that EXCLUDE.Dir cannot be overridden with an Include. EXCLUDE.Dir *does not* apply if you perform a Selective backup of a single file under that directory; but it does apply if the Selective employs wildcard characters to identify files under that directory. EXCLUDE.ENCRYPT TSM 4.1 Windows option to exclude files from encryption processing. See also: ENCryptkey; INCLUDE.ENCRYPT EXCLUDE.File Excludes files, but not directories, that match the pattern from normal backup services, but not from HSM services. Effects: The file(s) are expired in the next backup. EXCLUDE.File.Backup Excludes a file from normal backup services. EXCLUDE.FS (ADSM v.3+) Specifies a filespace/filesystem that you want to exclude from Backup services. (This option applies only to Backup operations - not Archive or HSM.) This option is available in the Unix client, but not the Windows client (as of TSM 5.2.2). In TSM (not ADSM) the filespace may be coded using a pattern. 
Effects: The specified file system(s) are skipped, as though they were not specified on the command line of the Domain option. (Note that the file systems are *not* expired, as lesser EXCLUDEs do.) Note that EXCLUDE.FS takes precedence over all other Include statements and non-EXCLUDE.FS Exclude statements, regardless of relative positions. But: Does it make sense to exclude a file system? Or should you instead not include it in the first place, as in not coding it in a DOMain statement or as a dsmc command object? (Make sure that you *do* have a DOMain statement coded in your options file!) With client schedules, an alternative is to use the OBJects parameter to control the file systems to back up. See also: dsmc Query INCLEXCL; dsmc SHow INCLEXCL EXCLUDE.HSM No, there is no such thing. What you want to do is simply EXCLUDE, which excludes the object from both Backup and HSM. Exclude.Restore An ad hoc, undocumented addition you may stumble upon in the TSM 5.2 client. It is there only for use under the direction of IBM Service: there is no assurance that it will work as you expect, or in all cases. AVOID IT. Executing Operating System command or script: Message in client schedule log, referring to a command being run per either the PRESchedulecmd, PRENschedulecmd, POSTSchedulecmd, or POSTNschedulecmd option; or by the DEFine SCHedule ACTion=Command spec where OBJects="___" specifies the command name.
The dsmmode command provides four execution modes - a data access control mode that controls whether a migrated file can be accessed, a time stamp control mode that controls whether the access time for a file is set to the current time when the file is accessed, an out-of-space protection mode that controls whether HSM intercepts an out-of-space condition on a file system, and a recall mode that controls whether a file is stored on your local file system when accessed, or stored on your local file system only while it is being accessed, and then migrated back to ADSM storage when it is closed. .EXP File name extension created by the server for FILE type scratch volumes which contain Export data. Ref: Admin Guide, Defining and Updating FILE Device Classes See also: FILE EXPINterval Definition in the Server Options file. Specifies the number of hours between automatic inventory expiration runs, after first running it when the server comes up. Setting the interval to 0 sets the process to manual, and then you must enter the 'EXPIre Inventory' command to start the process. Default: 24 hours Automatic expiration can be suppressed by starting 'dsmserv' with the "noexpire" command line option. You can also code "EXPINterval 0". Ref: Installing the Server... See also: SETOPT EXPInterval server option, change 'SETOPT EXPINterval ___' while up, or change dsmserv.opt file EXPINterval for next start-up. EXPInterval server option, query 'Query OPTion', look for "ExpInterval". Expiration The process by which objects are deleted from storage pools because their expiration date or retention period has passed. Backed up or archived objects are marked for deletion based on the criteria defined in the backup or archive copy group ('Query COpygroup'). File objects are evaluated for removal at Expiration time either by having been marked as expired at Backup time (per your retention policy Versions rules) or per the retention periods specified in the Backup Copy Group. 
The expiration process has two phases: 1. Data expiration on ITSM database. 2. Data expiration on tapes. (Freeing tapes to Scratch can seem to be delayed as this is under way.) The order in which expiration occurs has been observed to be the same as types are listed in the ANR0812I message: backup objects, archive objects, DB backup volumes (DRMDBBackupexpiredays), recovery plan files (DRM). Avoid doing expirations during incremental backups - the backups will be degraded. Beware that as a database operation, the expiration will require Recovery Log space. If the expiration is massive, the Recovery Log will fill, and so you should have DBBackuptrigger configured. If SELFTUNEBUFpoolsize is in effect, the Bufpool statistics are reset before the expiration. Messages: ANR4391I, ANR0811I, ANR0812I, ANR0813I See also: DEACTIVATE_DATE; dsmc EXPire; EXPInterval; SELFTUNEBUFpoolsize Expiration (HSM) The retention period for HSM-migrated files is controlled via the MIGFILEEXPiration option in the Client System Options file (governing their removal from the migration area after having been modified or deleted in the client file system) such that the storage pool image is obsolete. The client system file is, of course, permanent and does not expire. Possible values: 0-9999 (days). Default: 7 (days). The value can be queried via: 'dsmc Query Option' in ADSM or 'dsmc show options' in TSM; look for "migFileExpiration". Expiration, invocation Invoked automatically per Server Options file option EXPInterval; Invoke manually: 'EXPIre Inventory'. Expiration, stop (cancel) 'CANcel PRocess Process_Number' will cause the next Expire Inventory to start over. 'CANcel EXPIration' is simpler, and will cause the expiration to checkpoint so that the next Expire Inventory will resume. You may also want to change the EXPINterval server option to "EXPINterval 0" to prevent further expirations, at their assigned intervals - though this means having to take down the server. 
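The time-based part of the expiration policy described above - an Inactive object becomes eligible for removal once its deactivation date plus the copy group retention period has passed - can be sketched as follows. This is a simplification under stated assumptions: real policy also involves versions-based rules (applied at Backup time) and the RETOnly/RETExtra distinction, which this sketch ignores:

```python
from datetime import date, timedelta

def eligible_for_expiration(deactivate_date, retain_days, today):
    """True if an Inactive object's age since deactivation has reached
    the copy group retention period (time-based policy only)."""
    return today >= deactivate_date + timedelta(days=retain_days)
```

This is the same arithmetic an administrator performs by hand when taking the DEACTIVATE_DATE from a SELECT on the Backups table and adding the prevailing retention period.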
See also: CANcel EXPIration Expiration date for a Backup file Perform a SELECT on the Backups table to get the DEACTIVATE_DATE, and then add your prevailing backup retention period. Expiration date for an Archive file Perform a SELECT on the Archives table to get the ARCHIVE_DATE, and then add your prevailing archive retention period. Expiration happening? 'Query ACtlog BEGINDate=-999 s=expira' should reveal ANR0812I messages reflecting deletions. Expiration happening outside schedule When you have an administrative schedule performing 'EXPIre Inventory', you want to defeat automatic expirations which otherwise occur via the ExpInterval server option. Expiration messages, control "EXPQUiet" server option (q.v.). Expiration not happening - Is your EXPINterval server option set to a good value, or do you have an administrative schedule doing Expire Inventory regularly? - Retention periods defined in the Copy Group define how long storage pool files will be retained: if you have long retentions then you won't see data expiring any time soon. - Did the management class to which the files were bound disappear? (You can query a few files to check.) If so, the default management class copy group values pertain; or, if no such default copy group, then the DEFine DOMain grace period prevails. See also: Grace period Expiration performance Some things to consider: - Boosting BUFPoolsize to a high value will cut run time substantially. - Avoid running when other database-intensive operations are scheduled. (The "What else is running?" question.) - Standard operating system configuration issues: CPU speed, memory size, disk and paging space performance, contention with other system processes, etc. - Look for TSM db disk problems in the operating system error log. - Performing the expiration with SKipdirs=Yes with less than TSM server level 5.1.5.1 will result in not just directories being skipped in Expiration, but also the files within those directories!
This causes files to build up in the TSM server. Reverting to SKipdirs=No will gradually fix the performance problem. - The more versions you have of a file in server storage, and the longer your Backup Copy Group retention policies, the longer Expiration will take, because time-based policy processing occurs during Expiration (in contrast with versions-based processing, which occurs at client Backup time). Ref: IBM site Solution 1141810: "How to determine when disk tuning is needed for your ITSM server". See also: Database performance Expiration period, HSM See: Expiration (HSM); MIGFILEEXPiration Expiration process As reported in Query Process, like: Examined 14784 objects, deleting 14592 backup objects, 16 archive objects, 0 DB backup volumes, 0 recovery plan files; 0 errors encountered. Notes: - Backup and Archive objects may be deleted in concert: it is not the case that expiration will go through all Backup objects first, then move on to Archive object deletions. Expiration processes, list 'SELECT STATUS FROM PROCESSES WHERE PROCESS='Expiration'' Expiration slow (ADSMv3) APAR PQ26279 describes a major ADSM software defect in which expiration was overly slow in initial and later runs. Expire files by name See: dsmc EXPire EXPIre Inventory *SM server command to manually start inventory expiration processing, via a background process, to remove outdated client Archive, Backup, and Backupset objects from server storage pools according to the terms specified by the Copy Group retention and versions specifications for the management classes to which the objects are bound. EXPIre Inventory processes Backup files according to having been marked as expired at Backup time, per retention versions rules; or by examining Inactive files according to retention time values. Expiration naturally removes the storage pool object instance, as well as the appropriate database reference.
Expiration is also employed by the server to remove expired server state settings such as Restartable Restore. (The name "Expire Inventory" is misleading, as the function performed by the command is actually database deletion, by virtue of deleting files previously marked expired during Backup, and those computed at Expire Inventory time as having outlived the time-based retention policy.) EXPIre Inventory can be cancelled. Syntax: 'EXPIre Inventory [Quiet=No|Yes] [Wait=No|Yes] [DUration=1-2147483648_Mins] [SKipdirs=No|Yes]' DUration can be defined to limit how long the task runs. (Note: At the end of the duration, the expiration will stop and the point where it stopped is recorded in the TSM database, which will be the point from which it resumes when the next EXPIre Inventory is run.) SKipdirs is per APAR IY06778, due to the revised expiration algorithm experiencing performance degradation while expiring archive objects. (The problem with deleting archive directories, is that TSM must not delete the directory object if there are still files dependent upon it. So, to delete an archive directory, TSM needs to see if ANY files referenced that directory using another set of database calls. This other set of database calls is where the extra time was being spent.) SKipdirs is thus a formalized circumvention for a design change which wasn't properly thought through or tested. The intent of SKipdirs=Yes initially was to allow EXPIre Inventory to bypass all the directories created by Archive. This was a circumvention until the CLEANUP ARCHDIR utilities could be run to clear out these objects. However, until the fix in TSM server level 5.1.5.1, SKipdirs=Yes can also prevent Backup directories and the files under them from being deleted, resulting in ever longer EXPIre Inventory executions and database bloat. SKipdirs=Yes should *not* be used perpetually. Note that API-based clients, such as the TDPs, require their own, separate expiration handling (actually, deletion). 
Likewise, HSM handles expiration of its own files separately: see MIGFILEEXPiration. How long it takes: The time is proportional to the amount of data ready to be expired. (It is not the case that it plows through the entire *SM database at each invocation, seeking things ready to be expired.) Expire inventory works through the nodes in the order they were registered. This is a disruptive operation which can cause *SM processing to slow to a crawl, so run off-hours so that it will not conflict with things. Reclamation should be disabled during the Expiration ('UPDate STGpool PoolName REClaim=100') so that it doesn't get kicked off prematurely and waste resources in copying data that will be expired as expiration proceeds. WARNING: Expiration quickly consumes space in the Recovery Log, and can exhaust it if the amount of data expiration is great. The DUration operand is there to help keep this from happening. Msgs: ANR0812I; ANR0813I; ANR4391I to record each filespace processed when started in non-quiet mode. See also: CANcel EXPIration; dsmc EXPire; Expiration, stop; Expiring.Objects; Restartable Restore; Server Options file option EXPInterval EXPIre Inventory, placement EXPIre Inventory is best kicked off at the end of a daily (e.g., morning) administration job so that it will reduce tape occupancy levels so that following Reclamation work can run efficiently thereafter. EXPIre Inventory, results Message ANR0812I reports the number of objects removed upon normal conclusion, and ANR0813I for abnormal conclusion. An historic shortcoming is lack of reporting of the number of bytes involved. You can compensate for this by doing 'AUDit LICenses' and 'Select * From Auditocc' before and after the 'EXPIre Inventory'. Expire processing order It looks like Expire processing occurs in the order that you add your client nodes to the *SM server. 
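The "Expiration date" arithmetic described above (DEACTIVATE_DATE or ARCHIVE_DATE plus the prevailing retention period) can be sketched as follows. This is an illustrative Python fragment, not anything TSM itself provides: the function name and inputs are hypothetical, and the server of course performs this computation internally.

```python
from datetime import datetime, timedelta

def estimated_expiration(base_date, retention_days):
    """Estimate when Expire Inventory can remove an object:
    for a Backup file, base_date is the Backups table DEACTIVATE_DATE;
    for an Archive file, base_date is the Archives table ARCHIVE_DATE;
    retention_days is the prevailing retention period from the
    governing Copy Group.  (Hypothetical helper, illustration only.)"""
    return base_date + timedelta(days=retention_days)

# A file deactivated 2004-09-01 under a 30-day retention policy
# becomes eligible for expiration on or after 2004-10-01.
print(estimated_expiration(datetime(2004, 9, 1), 30).date())  # 2004-10-01
```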
Expiring--> Leads the line of output from a Backup operation, as when Backup finds that a file has been removed from the file system since the last Backup. The file will be rendered Inactive in server storage. The previously Active copy in server storage is "deactivated". Note that no server storage space is freed until Expire Inventory processing occurs. See also: Updating-->; Normal File-->; Rebinding--> Expiring file HSM: A migrated or premigrated file that has been marked for expiration and removal from *SM storage. If a stub file or an original copy of a premigrated file is deleted from a local file system, or if the original copy of a premigrated file is updated, the corresponding migrated or premigrated file is marked for expiration the next time reconciliation is run. It expires and is removed from *SM storage after the number of days specified with the MIGFILEEXPiration option have elapsed. See: MIGFILEEXPiration Expiring.Objects An internal server table to record what is available for expiration at any given point in time. It's maintained "on-the-fly" as new objects come into the system and the existing objects get moved to Inactive or available for expiration. The records contain the pertinent information for the server to complete the deletion. So, instead of walking the inventory tables at EXPIre Inventory time and performing lengthy calculations then as to what objects can go, that workload is distributed over time. On larger systems, it greatly speeds up the process of figuring out what can be deleted and what can't. Fluctuations in expire time are due to external events, such as a filesystem that had purged a lot of files, retention policies changed, etc. Export *SM server meta command encompassing a family of object exports which allow parts of the server to be written to removable media (tape) so that the data can be transferred to another server - even one of a different architecture (supposedly). 
The produced tape will end up in the LIBVolumes list with a Last Use type of "Export". Note that Export will write out Backup files first, before other types, and exports first from things directly resident in its database (directories, empty files, etc). Export apparently uses *SM database space for scratch pad use, as database usage will increase when only Export is running. One cute thing you can do for an abandoned filespace is to Export it to a file, archive the file, and delete the filespace such that the data is preserved but all the database space reflecting the individual files is reclaimed. Export is sometimes advocated for getting long-term storage data out of the TSM server, to reduce overhead. This is effective, but lost are all the advantages of TSM database inventory tracking of the data, where it is then up to you to somehow keep track of what you wrote to what export tape and how to get it back. Message ANR0617I will summarize how well the export went: SUCCESS or INCOMPLETE. Watch for message ANR0627I saying that files were skipped, as can happen when input tapes suffer I/O errors. (Export will nicely go on to completion, getting as much data as it can.) To export from one *SM server's storage pools to another, use the ADSMv3+ Virtual volumes facility (see chapter 13 of the Admin Guide). Note: Your success in exporting from one server to another is probabilistic, as the vendor would do little testing in this area. Exporting across platforms is dicey at best. (Be particularly cautious with EBCDIC vs. ASCII platforms.) You will probably have the best chance when the receiving server is at the same level or higher compared to the exporting server. Ref: Admin Guide, Managing Server Operations, Moving the Tivoli Storage Manager Server See also: dsmserv RESTORE DB; IMport EXPORT In 'Query VOLHistory', Volume Type to say that volume was used to record data for export. Also under 'Volume Type' in /var/adsmserv/volumehistory.backup .
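As a checking aid for the ANR0617I/ANR0627I messages just described, a script might scan saved Activity Log output for the Export verdict. A minimal Python sketch; the message numbers are real, but the sample message wording below is made up, not verbatim:

```python
def export_outcome(actlog_lines):
    """Return (verdict, files_skipped) from captured Activity Log lines:
    ANR0617I carries the SUCCESS/INCOMPLETE summary; ANR0627I warns
    that files were skipped (e.g., input tape I/O errors)."""
    verdict, skipped = None, False
    for line in actlog_lines:
        if "ANR0617I" in line:
            verdict = "INCOMPLETE" if "INCOMPLETE" in line else "SUCCESS"
        elif "ANR0627I" in line:
            skipped = True
    return verdict, skipped

# Sample lines (illustrative wording only):
sample = [
    "ANR0627I EXPORT NODE: 3 files were skipped during the export.",
    "ANR0617I EXPORT NODE: Processing completed with status INCOMPLETE.",
]
print(export_outcome(sample))  # ('INCOMPLETE', True)
```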
EXPort Node TSM server command to export client node definitions to serial media (tape). Syntax: 'EXPort Node [NodeName(s)] [FILESpace=FileSpaceName(s)] [DOMains=DomainName(s)] [FILEData=None|All|ARchive| Backup|BACKUPActive| ALLActive|SPacemanaged] [Preview=No|Yes] [DEVclass=DevclassName] [Scratch=Yes|No] [VOLumenames=VolName(s)] [USEDVolumelist=file_name]' Note that exporting to a device type of SERVER allows exporting the data to another ADSM server, via virtual volumes (electronic vaulting). Hint: Using Preview=Yes is a handy way of determining the amount of data owned by a node. Consider doing a LOCK Node first! Export via FTP rather than tape Keep in mind that you can export to a devclass of type FILE, and then FTP the resultant file to the other system for Importation. Export-Import across libraries In some cases, customers want to perform an Export-Import from one library to another of the same type, usually at different sites, to rebuild the TSM server at the other site. The TSM manuals have been without information on how to approach this... - Do 'LOCK Node' on all involved client nodes to prevent inadvertent changes to the data you intend to export, and nullify all administrative schedules which could interfere with the long-running Export. - Perform an Export of all data. Carefully check the results of the operation to assure that all the data successfully made it to tape. (The volumes will show up in VOLHistory as Volume Type "EXPORT".) - Perform a CHECKOut LIBVolume to eject the volumes. - Transport the tapes to the new site. - Flick the read/write tab on the tapes to read-only before inserting into the new library, as you'll want to assure that this vital data is not obliterated until you're sure that the new TSM system is complete and stable. - Insert the tapes into the new library. - Perform a CHECKIn LIBVolume with STATus=PRIvate. - Perform Import. Check that the amount of data imported matches that in the Export.
- At some later time, perform a CHECKOut LIBVolume of the read-only volumes and change their tab to read-write to enable their re-use, then perform a CHECKIn LIBVolume as STATus=SCRatch. Leave the old TSM system and library intact until the new TSM system is complete: it is not unknown for there to be problems with Export-Import. Export-Import across servers You may get stuck with a situation where you have an old server and a new server and no common tape hardware nor means of disconnecting tape drives from one system to attach to the other, in performing a traditional Export-Import. In that case, if you're running Unix, a "trick" you might try is to do the export over the network, using File devices which are in reality FIFO special files: on the sending system the FIFO is read by an 'r**' command which sends the data over the network to a program on the receiving system, which in turn feeds the FIFO that Import is reading there. On the sending and receiving systems do: mkfifo fifo On the sending system do: cat fifo | rsh othersys 'cat > fifo' And then have the sending *SM system do an Export Node to a File type device and a VOlumename being the file name of fifo, and have the receiving TSM system do an Import from a File type device where VOlumename is fifo on that system. (Note: This is an unproven concept, but should work.) Export-Import Node A method of copying a node from one ADSM server to another, retaining the same Domain and Node names. (If the node imports with a Domain name which is odd to your ADSM server, you can thereafter do an 'UPDate Node' to reassign the node to a more suitable Domain in your server.) Note that this migrates the Filespace data, but the file system stays where it is; and so Export-Import is inappropriate for when you want to transfer an HSM file system from one ADSM server host to another (use cross-node restore instead).
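The named-pipe mechanism underlying the trick above can be demonstrated locally in Python (no rsh and no TSM involved): one side writes the "volume" into a FIFO while the other side concurrently reads it, so the data never needs an intermediate disk copy. A POSIX-only sketch; all names are made up.

```python
import os
import tempfile
import threading

def fifo_roundtrip(payload: bytes) -> bytes:
    """Write payload into a FIFO in one thread while reading it in
    another, mimicking Export writing a 'volume' that Import consumes
    concurrently on the other end."""
    workdir = tempfile.mkdtemp()
    path = os.path.join(workdir, "fifo")
    os.mkfifo(path)
    received = []

    def reader():
        # Stands in for Import reading the FIFO "volume";
        # open() blocks until the writer side opens the FIFO.
        with open(path, "rb") as f:
            received.append(f.read())

    t = threading.Thread(target=reader)
    t.start()
    with open(path, "wb") as f:  # stands in for Export writing the "volume"
        f.write(payload)
    t.join()
    os.unlink(path)
    os.rmdir(workdir)
    return received[0]
```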
EXPQUiet Server option to control the verbosity of expiration messages: No (default) allows verbosity; Yes minimizes output. ext3 file system support (Linux client) The TSM 5.1.5 client for Linux provides support for ext3 file systems. Prior to that, one could effect backups via dsmc by defining the file systems of interest as VIRTUALMountpoint's: subsequent restoral can be performed via either dsmc or dsm. The filespace will be recorded as type EXT2 on the server. EXTend DB ADSM server command to extend the database "assigned space" to use more of the "available space". Causes a process to be created which physically formats the additional space (because it takes so long). 'Query DB' will immediately show the space being available, though the formatting has not completed. Syntax: 'EXTend DB N_Megabytes' Note that doing this may automatically trigger a database backup, with message ANR4552I, depending upon your DBBackuptrigger values. EXTend LOG TSM server command to extend the Recovery Log "assigned space" to use more of the "available space". Causes a process to be created which physically formats the additional space (because it takes so long). 'Query LOG' will immediately show the space being available, though the formatting has not completed. Syntax: 'EXTend LOG N_Megabytes' Results in ANR0307I formatting progress messages to appear in the Activity Log. Caution: In some cases, customers have found that with Logmode Rollforward, the next db backup after the extension fails to clear the Recovery Log. Restarting the server is the only known way to clear that situation. See also: dsmserv EXTEND LOG EXTernal Operand of 'DEFine LIBRary' server command, to specify that a mountable media repository is managed by an external media management system. External Library A collection of drives managed by a media management system that is not part of ADSM, as for example some mainframe tape management system. (A 3494 that is used directly by *SM is *not* an External Library.)
EZADSM Early name for the ADSM Utilities. Name obsoleted in ADSM 2.1.0. Failed Status in Query EVent output indicating that the scheduled event did occur but the client reports a failure in executing the operation, and successive retries have not succeeded. See also: Missed; Total number of objects failed FAS Fabric-Attached Storage, as employed in NetApp brand network attached storage product. FC Fibre Channel. Current 3590 drives can be attached to hosts via Fibre Channel or SCSI. FCA Fibre Channel Adapter card. fcs0 See: Emulex LP8000 Fibre Channel Adapter FDR/UPSTREAM Backup/restore product from Innovation Data Processing, which they say is a comprehensive, powerful, high performance storage management solution for backup of most of the open systems LAN/UNIX platforms and S/390 Linux data to OS/390 or z/OS mainframe backup server. UPSTREAM will provide automated operations with fast, reliable and verifiable backups/restores/archival and file transfers that can be automatically initiated and controlled from either client or the mainframe backup server. UPSTREAM provides unique data reduction techniques including online database agents offering maximum safety with superior disaster recovery protection. Supports Windows and AIX. (The vendor's website is poor.) FFFA volume category code, 3494 Reflects a tape which was manually removed from the 3494, by opening the door and removing the tape from a cell, instead of otherwise ejecting it. To remove the Library Manager entry for the volume, to allow the cell to be reused, change the Category Code to FFFB. See: Volume Categories Fibre Channel adapter, mixing disk and tape on same one FC HBA IBM's official statement concerning the sharing of tape and disk on a single adapter, as of 2003/05: "...Using a single Fibre Channel host bus adapter (HBA) on a host server for concurrent tape and disk operations is generally not recommended.
In high performance, high stress situations with dissimilar I/O devices, stability problems can arise. IBM is focused on assuring configuration interoperability. In so doing, IBM tests single HBA configurations to determine interoperability. Certain customer environments using AIX with the IBM FC Switch (2109) connecting both ESS (2105) and Magstar 3590 Tape have demonstrated acceptable interoperability. For customers that are considering sharing a single HBA with concurrent disk and tape operations, it is strongly recommended that the sales team conduct a Pre-Sales Solutions Assurance Review with members of the Techline or ATS team to review the issues and concerns. IBM and IBM's partners will continue evaluating other configurations and make specific statements regarding interoperability as available." Ref: IBM Ultrium Device Drivers Installation and User's Guide, as one place. Synopsis: You risk a hang or data corruption, not that it certainly won't work. See also: HBA FibreChannel and number of tape drives A rule of thumb is that there should not be more than three tape drives per FibreChannel path. FICON IBM term, used with S/390, for Fiber Connection of devices. A follow-on to ESCON. Ref: redbook "Introduction to IBM S/390 FICON" (SG24-5176) FID messages (3590) Failure ID message numbers, which appear on the 3590 drive panel. FID 1 These messages indicate device errors that require operator and service representative, or service representative only, action. The problem is acute. The device cannot perform any tasks. FID 2 These messages report a degraded device condition. The problem is serious. The customer can schedule a service call. FID 3 These messages report a degraded device condition. The problem is moderate. The customer can schedule a service call. FID 4 These messages report a service circuitry failure. The device requires service, but normal drive function is not affected. The customer can schedule a service call.
Ref: 3590 Operator Guide (GA32-0330-06) Appendix B especially. Fiducials White, light-reflective rectangles attached to the corners of tape drives and cell racks in a 3494 tape robot, for the infrared sensor on the robot head to determine exactly where such elements are, when in Teach mode. Ref: "IBM 3590 High Performance Tape Subsystem User's Guide" (GA32-0330-0) FILE In DEFine DEVclass, is a DEVType which refers to a disk file in a file system of the *SM server computer, which is regarded as a form of sequential access media - which implicitly means singular access, which is to say that a FILE is dedicated to a single active Session, where no other Sessions can use the FILE volume - including multi-session processes. (This is in contrast to the DISK device class, which is random access, and can be simultaneously used by multiple Sessions.) Naturally, there is no library or drive defined for FILE. FILE type volumes may be either Scratch or Defined type. For Scratch type, when the server needs to allocate a scratch "volume" (file), it creates a new file in the directory specified in the DEFine. For scratch volumes used to store client data, the file created by the server has a file name extension of .BFS. For scratch volumes used to store export data, a file name extension of .EXP is used. For example, if you define a device class with a DIRECTORY of /ADSMSTOR and the server needs a scratch volume in this device class to store export data, the file which the server creates might then be named /ADSMSTOR/00566497.EXP . Scratch type FILE volume size is controlled by the Devclass MAXCAPacity value: when a volume is filled, another is created and used. The number of such volumes is limited by the Stgpool MAXSCRatch value: if inadequate, you will ultimately encounter "out of space" stgpool error messages. When emptied, Scratch type FILE volumes are deleted from the file system, giving back the space they occupied.
Instead of Scratch, you may do DEFine Volume to pre-assign volumes in the FILE pool, in conjunction with setting MAXSCRatch=0. This allows you to attain predictable results, as in spreading I/O load over multiple OS disks. Properties: - FILE type devices are sequential media, and are treated in many respects like tape. - No prep (labeling, formatting) is required. - They require mountpoints, are mounted and dismounted, etc. - Volume name must be unique, as it is a file system file name. - MOUNTLimit may be used to limit the number of simultaneous volumes in use in the pool, and thus limit processes: when limit reached, new processes wait for FILEs. MOUNTLimit=DRIVES is not valid in that there are no "drives". - There should be no actual manual intervention required in their use. FILE devs may be used for a variety of purposes, including electronic vaulting. Ref: Admin Guide table "Comparing Random Access and Sequential Access Disk Devices" See also: DISK; SERVER; Sequential devices; Storage pool space and transactions See also IBM site Technote 1141492 FILE devclass performance As a sequential pseudo device, FILE benefits from several real and conceptual performance advantages, over DISK (random access) class: - There is only the need to keep track of where files start within the FILE area, rather than map blocks as in DISK class. - Access is linear, without TSM having to hop around seeking the next piece of the series. - Access is dedicated rather than shared, eliminating contention. However, there are inconvenient realities in this pretense: - The FILE area is built upon a file system's disk blocks - which can be expected to be scattered about on the disk. - The disk will often be shared, and so there is real contention involved. FILE is tape emulation: there are certain TSM functionality advantages, but don't fool yourself into believing that FILE is truly sequential. File, delete from filespace See: File Space, delete selected files File, expirable? 
See: SHow Versions File, find on a set of volumes SELECT VOLUME_NAME FROM CONTENTS WHERE - NODE_NAME='UPPER_CASE_NAME' AND - FILESPACE_NAME='{fsname}' AND - FILE_NAME='{path.without.fsname}' File, find when only filename known There may be times when you know the name of a file, but not what directory (or perhaps even filespace) it is in. In the TSM server you can do: SELECT * FROM BACKUPS WHERE [FILESPACE_NAME="FSname" AND] LL_NAME="TheFileName" (Remember that for client systems where filenames are case-insensitive, such as Windows, TSM stores them as UPPER CASE, so search for them the same way.) File, in storage pool When TSM stores files in storage pools, if the current storage pool sequential volume fills as the file is being written, the remainder of the file will be stored on another volume: the file will span volumes. (If the file is within an Aggregate, the Aggregate necessarily spans volumes as well.) A file cannot span Aggregates. If the file size meets or exceeds Aggregate size, the file is not Aggregated. See: Aggregated?; Segment Number File, management class bound to The management class to which any given file is bound can most readily be checked via 'dsmc q backup ...' or a GUI restore looksee on the client, or via a more consumptive Select performed on the server Backups table. File, selectively delete from *SM storage - standard method There is no supported way currently to dispose of an individual file from server storage via a server operation: but you may accomplish it from the client side, by one of the following methods: 1. The crude approach: Create an empty, dummy file of the same name, back up the empty surrogate as many times as your retention generations value, to assure that all copies of the original are gone. (The backup of an empty file does not require storage pool space or a tape mount: it is the trivial case where all the info about the empty file can be stored entirely in the database entry.) 2.
Use a special management class with null retention values... - On the server, define a special management class with VERDeleted=0 and RETOnly=0; - On the client, code an Include to tie the specific file to that special management class; - On the client, create a dummy file in the same place in the file system that the bogey file existed; - Perform a Selective Backup on that file name. *SM will then expire the "old" version of the file, and the low retention will cause Expiration to delete it the next day. File, selectively delete from *SM storage - unsupported method Unsupported and possibly dangerous: First you need to find out the object id(s) for the object(s) that you want to delete. You can find this out from the backup or archive tables using SELECT. Then it is just a simple matter of using the DELETE OBJECT command. There is one trick though. The OBJECT_ID field from the backup and archive tables is a single number. However, the object ID required by DELETE OBJECT takes 2 numbers as parameters, an OBJECT_ID HIGH and an OBJECT_ID LOW. The HIGH value has been seen to always be zero. So, if you want to delete object 193521018 for example, just do DELETE OBJECT 0 193521018. (Note that this command is a *SM construct, as opposed to the pure SQL Delete statement.) Further warning: This command does exactly and only what it says: it deletes an object - regardless of context. It does not update all the necessary tables to fully remove an object from the TSM server. If you use this command, you risk creating a database inconsistency and thus future problems. See also: File Space, delete selected files File, split over two volumes? Do SELECT FILE_NAME FROM CONTENTS WHERE VOLUME_NAME='______' AND SEGMENT<>'1/1' to find the name of the file spread over two volumes. Then do: SELECT VOLUME_NAME FROM CONTENTS WHERE FILE_NAME='see.above' AND SEGMENT='2/2' to find the other volume. File, what volume is it on?
The painful way, depending upon your file population: SELECT VOLUME_NAME FROM CONTENTS - WHERE FILE_NAME='_______' Or: Restore or retrieve the file to a temp area, and see what tape was mounted. Or: Mark the storage pool Unavailable for a moment, attempt a restoral or retrieval, unmark, and look in the server Activity Log for what volume it could not get. See also: Restoral preview File(s), always back up during an incremental backup Accomplish this by creating a parallel Management Class definition pointing to a parallel Backup Copy Group definition which contains "MODE=ABSolute", and then have an Include statement for that file refer to the parallel Management Class. File age For migration prioritization purposes, the number of days since a file was last accessed. File aggregation See: Aggregates File attributes, in TSM storage File attributes are not available at the server via SQL Select queries: the attribute information is only available via the same kind of client you used to back up the file, and then only in the GUI client. That is, if you used the Windows client to back up a file, only the Windows client GUI can get the file attributes. While the server certainly does store the attributes given to it by the client, the TSM server does not provide the server administrator with that view of the database. Nor is there any way to get them in their "raw" (uninterpreted) format. This is partly because such data is something only the client admin need be concerned about, and partly because the way the attributes are stored is platform-specific such that extra server programming would be needed to properly interpret the attributes in the context of the client architecture. ODBC issues Select requests, so its view of the server DB is likewise limited (and slow). See also: dsmc Query Backup File in use during backup or archive Have the CHAngingretries (q.v.) Client System Options file (dsm.sys) option specify how many retries you want. Default: 4.
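The SEGMENT field logic used in the "File, split over two volumes?" queries above amounts to a simple test, sketched here in Python (a hypothetical helper, for illustration only):

```python
def spans_volumes(segment):
    """True if a CONTENTS.SEGMENT value such as '1/2' or '2/2'
    indicates the file (or its Aggregate) spans volumes;
    '1/1' means the file sits wholly on one volume."""
    _, total = segment.split("/")
    return int(total) > 1

# '2/2' is the second of two segments, so the file spans volumes:
print(spans_volumes("2/2"))  # True
print(spans_volumes("1/1"))  # False
```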
File name (location) of database, recovery log Are defined within file: /usr/lpp/adsmserv/bin/dsmserv.dsk (See "dsmserv.dsk".) File names as stored in server Client operating system file names are stored in the server according to the conventions of the operating system and file system. Unix file names are case-sensitive, and so they are stored as-is. Windows, following the MS-DOS convention, has file names which are case-insensitive, and so TSM follows the convention of that environment by storing them in upper case. File server A dedicated computer and its peripheral storage devices that are connected to a local area network that stores both programs and files that are shared by users on the network. File size For migration prioritization purposes, the size of a file in 1-KB blocks. Revealed in server 'Query CONtent VolName F=D'. TSM records the size of a file as it goes to a storage pool. If the client compresses the file, TSM records the compressed size in its database. If the drive compresses the file, TSM is unaware of the compression. See also: FILE_SIZE; File attributes File size, maximum, for storage pool See "MAXSize" operand of DEFine STGpool. File size, maximum supported There was a historic limitation in the ADSM server and client that the maximum file size for backup and archive could not exceed 2 GB. That restriction was lifted in the server around 8/96; and in the client PTF 6, for platforms AIX 4.2, Novell NetWare, Digital UNIX, and Windows NT. See also: Volume, maximum size File Space (Filespace) A logical space on the *SM server that contains a group of files that were stored as a logical unit, as in backup files, archived files. A file space typically consists of the files backed up or archived for a given Unix file system, or a directory apportionment thereof defined via the Unix VIRTUALMountpoint option. In Windows, the file system defined by volume name or UNC name.
File Spaces are the middle part of the unique *SM name associated with file system objects, where node name is the higher portion and the remainder of the path name is the lower portion. By default, clients can delete archive file spaces, but not backup file spaces, per server REGister Node definitions. CAUTION: The filespace name you see in character form in the server may not accurately reflect reality, in that the clients may well employ different code pages (Windows: Unicode) than the server. The hexadecimal representation of the name in Query FIlespace is your ultimate reference. File Space, backup versions 'SHOW Versions NodeName FileSpace' File Space, delete in server 'DELete FIlespace NodeName FilespaceName [Type=ANY|Backup| Archive|SPacemanaged] OWNer=OwnerName' Note that "Type=ANY" removes only Backup and Archive copies, not HSM file copies. File Space, delete from client From client, dsmc Delete Filespace is a gross, overall operation which deletes all aspects of the filespace (providing that the node's ARCHDELete and BACKDELete specifications allow it). Doing DELete FIlespace from the server allows greater selectivity as to the type of data to be deleted. File Space, delete selected files TSM does not provide a means for customers to delete specific files from filespaces, as you might want to do if last night's backup sent virus-infected files to the server. TSM is a strict, policy-based data assurance facility for an enterprise, where the server administrator is provided no means for monkeying with individual files...which belong to the clients, who should be guaranteed that their data lives according to the agreed rules. One thing you can do is force individual filenames to be pushed out of the filespace via special policy specifications: Add an Include statement for these files in your client options, specifying a special management class with a COpygroup retention period of 0 (zero) days, and then run a special backup. 
See also: DELETE OBJECT; File, selectively delete from *SM storage File Space, explicit specification Use braces to enclose and thus isolate the file space portion of a path, as in: 'dsmc query archive -SUbdir=Yes "{/a/b}/c/*"' This will explicitly identify the file space name to TSM, keeping it from guessing wrong in cases where the file system portion of the path is not resident on the system where the command is invoked, you lack access to it, or the like. (TSM assumes that the filespace is the one with the longest name which matches the beginning of the filespec. So if you have two filespaces "/a" and "/a/b", you need to specify "{/a/}somefile" to distinguish.) Ref: (Unix) Backup/Archive client manual: Understanding How TSM Stores Files in File Spaces File Space, move to another node within same server The 'REName FIlespace' command cannot do this. (The product does not provide an easy means for reattributing file spaces to other nodes - largely, I think, because it would be too easy for naive customers to get into trouble in assigning a file space to an operating system which did not support the kind of file system represented in the file space.) You can perform it via the following (time-consuming) technique, which temporarily renames the sending node to the receiving node: Assume nodes A & B, and you want to move filespace F1 from A to B... 1. REName Node B B_temp 2. REName Node A B 3. EXPort Node B FILESpace=f1 FILEData=All DEVType=3590 VOL=123456 (wait for the export to complete) 4. REName Node B A 5. REName Node B_temp B 6. IMport Node B Replacedefs=No DEVType=3590 VOLumenames=123456 Alternately, you could do the converse: temporarily rename the receiving node to the exported file space node name for the purposes of receiving the import. File Space, number of files in The Query FIlespace server command does not reveal the number; and Query OCCupancy counts only the number of file space objects which are stored in storage pools. File Space, on what volumes?
Unfortunately, there is no command such that you can specify a file space and ask ADSM to show you what volumes its files reside upon. You have to do 'Query CONtent VolName' on each volume in turn and look for files, which is tedious. File Space, remove In performing filespace housekeeping, it's wise to do a Rename Filespace rather than an immediate Delete: hang on to the renamed oldie for at least a few days, and only after no panic calls, do DELete FIlespace on that renamee. Alternately, you could Export the filespace and reclaim that tape after a prudent period; but that takes time, and the panicked user would have to await an equally prolonged Import before their data could be had. If you don't exercise prudence in this fashion, recovering a filespace would involve a highly disruptive, prolonged TSM db restoral to a prior time, Export, then restoral back to current time followed by an import. No one wants to face a task like that. File Space, rename 'REName FIlespace NodeName FSname Newname' A step to be performed when an HSM-managed file system is renamed. File Space, timestamp when Backup file written to 'SHow Versions NodeName FileSpace' File Space locking TSM will lock a filespace as it performs some operations, which can result in conflicts. See IBM site TechNote 1110026. File Space name Remember that it is case-sensitive. For ADSM V3 Windows clients after 3.1.0.5, the filespace name is based on the Windows UNC name for each drive, rather than on the drive label. So if somebody changed the Windows NT networking ID, that would change the UNC name, and force a full backup again. Per the API manual Interoperability chapter: Intel platforms automatically place filespace names in uppercase letters when you register or refer to them. However, this is not true for the remainder of the object name specification.
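For the "on what volumes?" question above, an ADSMv3+ server can at least narrow the search via a SELECT on the VOLUMEUSAGE table. A hedged sketch - the node and filespace names here are examples, and remember it reports volumes used per node/filespace, not per file:

```
SELECT DISTINCT VOLUME_NAME FROM VOLUMEUSAGE -
    WHERE NODE_NAME='UPPER_CASE_NODENAME' -
    AND FILESPACE_NAME='/somefs'
```

'Query CONtent' on each reported volume remains the only way to see the individual files.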
File Space name, list 'Query CONtent VolName' File Space name *_OLD A filespace name like "\\acadnt1\c$_OLD" is an indication of having a Unicode enabled client where the node definition allows "Auto Filespace Rename = Yes": TSM can't change filespaces on the fly to Unicode so it renames the non-Unicode filespaces to ..._old, creates new Unicode filespaces, and then does a "full" backup for the filespaces. When your retention policies permit, you can safely delete the old filespaces. See AUTOFsrename in the Macintosh and Windows B/A clients manuals. File Space number See: FSID File Space reporting From client: 'dsmc q b -SUbdir=Yes -INActive {filespacename}:/dir/*' > filelist.output File Space restoral, preview tapes needed Old way: 'SHow VOLUMEUSAGE NodeName' to get the tapes used by a node, then run 'Query CONtent VolName NODE=NodeName FIlespace=FileSpaceName' on each volume in turn. ADSMv3: SELECT VOLUME_NAME FROM - VOLUMEUSAGE WHERE - NODE_NAME='UPPER_CASE_NAME' - AND FILESPACE_NAME='____' AND - COPY_TYPE='BACKUP' AND - STGPOOL_NAME='' File Spaces, abandoned Clients may rename file systems and disk volumes, thus giving the backed-up filespaces new identities and leaving behind the old filespaces for the TSM system administrator to deal with. To TSM, there is no difference between a file system which hasn't been backed up for five years and one which has not been backed up for five hours: the data belongs to the client, and the TSM server's role is to simply do the client's bidding. This is where system administration is needed... The standard treatment is to periodically look for abandoned filespaces (look at last client access time in Query Node, and Query FIlespace last backup date), notify the clients, and delete them if the client says to or there is no response within a reasonable time. Watch out for filespaces which are just used for archiving, such that backups are not reflected.
See "Export" for a technique to preserve abandoned filespaces but eliminate their burden on the server db. File Spaces, report backups Not so easy: the information is in the database, though getting it is tedious. The Actlog table can be mined for ANE* messages reflecting backups (including transfer rates), and with that timestamp you can go at the Backups table to determine the filespace name, and from the filenames gotten there you could brave the Contents table to get sizes (which records aggregates or filesizes, whichever is larger). File Spaces, summarize usage 'SELECT n.node_name,n.platform_name, - COUNT(*) AS "# Filespaces", - SUM(f.capacity) AS "MB Capacity" - FROM nodes n,filespaces f - WHERE f.node_name=n.node_name - GROUP BY n.node_name,n.platform_name - ORDER BY 2,1' File spaces not backed up in 5 days SELECT FILESPACE_NAME AS "Filespace", \ NODE_NAME AS "Node Name", \ DAYS(CURRENT_DATE)-DAYS(BACKUP_END) \ AS "Days since last backup" FROM \ FILESPACES WHERE (DAYS(BACKUP_END) \ < (DAYS(CURRENT_DATE)-5)) Or: SELECT * FROM FILESPACES WHERE - CAST((CURRENT_TIMESTAMP-BACKUP_END)DAYS AS DECIMAL(3,0))>5 File State The state of a file that resides in a file system to which space management has been added. A file can be in one of three states - resident, premigrated, or migrated. See also: resident file; premigrated file; migrated file File system, add space management HSM: 'dsmmigfs add FSname' or use the GUI cmd 'dsmhsm' File system, deactivate space management HSM: 'dsmmigfs deactivate FSname' or use the GUI cmd 'dsmhsm' File system, display HSM: 'dsmdf [FSname]' or 'ddf [FSname]' File system, expanding An HSM-managed file system can be expanded via SMIT or discrete commands, while it is active - no problem.
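The Actlog mining mentioned under "File Spaces, report backups" above might start from a sketch like this. The message number and node name are illustrative assumptions - substitute the ANE statistics messages your client level actually emits:

```
SELECT DATE_TIME, NODENAME, MESSAGE FROM ACTLOG -
    WHERE ORIGINATOR='CLIENT' -
    AND NODENAME='UPPER_CASE_NODENAME' -
    AND MSGNO=4952
```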
File system, force migration HSM: 'dsmautomig [FSname]' File system, Inactivate all files When a TSM client is retiring, it may be desirable to render all its files Inactive, and allow them to age out gracefully, rather than do a wholesale filespace deletion. Such an inactivation is best done by either emptying the client file system and then doing a last Incremental backup, or by creating an empty file system on the client and then temporarily renaming the TSM server filespace to match for the final Incremental. A tedious alternative is to use the client EXPire command on all the client's Active objects. In doing this, you want the retention policy to have date-based expiration, as files controlled by versions-only expiration will remain in the retired filespace indefinitely. File system, query space management HSM: 'dsmmigfs query FSname' or use the GUI cmd 'dsmhsm' File system, reactivate space management HSM: 'dsmmigfs reactivate FSname' or use the GUI cmd 'dsmhsm' File system, remove space management HSM: 'dsmmigfs remove FSname' (q.v.) File system, restrict incremental backup to Use "DOMain" option in the Client User Options file to restrict incremental backup to certain drives or file systems. File system, update space management HSM: 'dsmmigfs update FSname' or use the GUI cmd 'dsmhsm' File system incompatibility The *SM client is programmed to know what kind of file systems your operating system can handle - and, by logical extension, what kinds it cannot. When you attempt to perform cross-node operations to, for example, inspect the files backed up by a node running a different operating system than yours, the client will not show you anything. The big problem here is the client's failure to say anything useful about its refusal, leaving the customer scratching his head.
See also: message ANS4095E File System Migrator (FSM) A kernel extension that is mounted over an operating system file system when space management is added to the file system (over JFS, in AIX). The file system migrator intercepts all file system operations and provides any space management support that is required. If no space management support is required, the operation is passed through to the operating system (e.g., AIX) for it to perform the file system operations. (Note that this perpetual intercept adds overhead, which delays customary file system tasks like 'find' and 'ls -R'.) In the AIX implementation of FSM, HSM installation updates the /etc/vfs file to add its virtual file system entry like: fsm 15 /sbin/helpers/fsmvfsmnthelp none (HSM prefers VFS number 15.) File system restoral, preview tapes needed Unfortunately, there is no command to accomplish this. You could instead try 'SHow VOLUMEUSAGE NodeName' to get a list of the Primary Storage Pool tapes used by a node, then run 'Query CONtent VolName NODE=NodeName FIlespace=FileSpaceName' on each volume in turn to identify the volumes. In ADSMv3+ you can exploit the "No Query Restore" feature, which displays the volume name to be mounted, which you can then skip. See: No Query Restore File system size 'Query Filespace' shows its size in the "Capacity" column, and its current percent utilization under "Pct Util". File system state The state of a file system that resides on a workstation on which ADSM HSM is installed. A file system can be in one of these states - native, active, inactive, or global inactive. File system type used by a client 'Query FIlespace', "Filespace Type". Reveals types such as JFS (AIX), FSM:JFS (HSM under AIX), FAT (DOS, Windows 95), NFS3, NTFS (Windows NT), XFS (IRIX).
File system types supported, Macintosh See the Macintosh Backup-Archive Clients Installation and User's Guide, topic "Supported file systems" (Table 10) File system types supported, Unix See the Unix Backup-Archive Clients Installation and User's Guide, topic "File system and ACL support". (Table 47) File system types supported, Windows See the Windows Backup-Archive Clients Installation and User's Guide, topic "Performing an incremental, selective, or incremental-by-date backup". File systems, local The "DOMain ALL-LOCAL" client option causes *SM to process all local file systems during Incremental Backup. For special, non-Backup processing, your client may need to definitively acquire the list of all local file systems. In Unix, you can use the 'df' or 'mount' commands and massage the output. A cuter/sneakier method is to have TSM tell you the file system names: have "DOMain ALL-LOCAL" (or omit DOMain) in your dsm.opt file, and then do 'dsmc query opt'/'dsmc show opt' and parse the returned DomainList. Rightly, /tmp is not included in the returned list. If you don't want to disturb your system dsm.opt file, you can simply define environment variable DSM_CONFIG to name an empty file, like: setenv DSM_CONFIG /dev/null or use the -OPTFILE command line arg (but this arg is not usable with all commands). And to avoid having that environment variable setting left in your session, you can execute the whole in a Csh sub-shell, by enclosing in parens: (setenv DSM_CONFIG /dev/null ; dsmc show opt ) You might use the PRESchedulecmd to weasel such an approach for you. File systems to back up Specify a file system name via the "DOMain option" (q.v.) or specify a file system subdirectory via the VIRTUALMountpoint option (q.v.) and then code it like a file system in the "DOMain option" (q.v.). File systems supported See: File system types supported File systems under HSM control End up enumerated in file /etc/adsm/SpaceMan/config/dsmmigfstab by virtue of running 'dsmmigfs'.
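The Unix "massage the output" approach mentioned above can be sketched like this - an illustrative sketch assuming a POSIX 'df -P'; the pseudo file systems worth excluding vary by platform, so adjust the pattern:

```shell
# List local mount points: POSIX 'df -P' puts the mount point in
# field 6. Exclude /tmp (as DOMain ALL-LOCAL does) and common
# pseudo file systems; extend the pattern for your platform.
local_fs=$(df -P 2>/dev/null | awk 'NR > 1 { print $6 }' \
    | grep -E -v '^/(tmp|proc|dev|sys)(/|$)' | sort -u)
echo "$local_fs"
```

Network file systems (NFS and kin) would still need filtering by file system type - 'mount' output, or a 'df' that reports types, can supply that where available.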
FILE_NAME ADSMv3 SQL: The full-path name of a file, being a composite of the HL_NAME and LL_NAME, like: /mydir/.pinerc FILE_SIZE ADSMv3 SQL: A column in the CONTENTS table, supposedly reflecting the file size. Unfortunately the SQL access we as customers have to the TSM database is a virtual view, which deprives us of much information. Here, FILE_SIZE is the size of the Aggregate (of small files), not the individual file, except when the file is very large and thus not aggregated (greater than the client TXNBytelimit setting), and except in the case of HSM, which does not aggregate. So, in a typical Contents listing involving small files, you will see something like "AGGREGATED: 3/9", and all 9 files having the same FILE_SIZE value, which is the size of the Aggregate in which they all reside. Only when you see "AGGREGATED: No" is the FILE_SIZE the actual size of the file. Note also that the CONTENTS table is a dog to query, so it is hopeless in a large system. See also: File attributes FILEEXit Server option to allow events to be saved to a file -- NOTE: Events generated are written to file exit when generated, but AIX may not perform the actual physical write until sometime later - so events may not show up in the file right after they are generated by the server/client. Be sure to enable events to be saved (ENABLE EVENTLOGGING FILE ...) in addition to activating the file exit receiver. Syntax: FILEEXit [YES | NO] [APPEND | REPLACE | PRESERVE] -FILEList= TSM v4.2+ option for providing to the dsmc command a list of files and/or directories, both as a convenience and to overcome the long-imposed default restriction of 20 on the number of filespecs which may appear on the command line. The basic rules are: - one object name per line in the file; - no wildcards; - names containing spaces should be enclosed in double-quotes; - specifying a directory causes only the directory itself to be processed, not the files within it.
Invalid entries are skipped, resulting in a dsmerror.log entry. Processing performance (per 4.2 Tech Guide redbook): The entries in the filelist are processed in the order they appear in the filelist. For optimal processing performance, you should pre-sort the filelist by filespace name and path. See also: dsmc command line limits; -REMOVEOPerandlimit Files, backup versions 'SHOW Versions NodeName FileSpace' Files, binding to management class Files are associated with a Management Class in a process called "binding" such that the policies of the Management Class then apply to the files. Binding is done by: Default management class in the Active policy set. Backup: DIRMc option Archive: ARCHMc option on the 'dsmc archive' command (only) INCLUDE option of an include-exclude list Using a different management class for files previously managed by another management class causes the files to be rebound to the rules of the new management class - which can cause the elimination of various inactive versions of files and the like, depending upon the change in rules; so be careful in order to avoid disruption. Ref: Admin Guide Files, maximum transferred as a group between client and server "TXNGroupmax" definition in the server options file. Files, number of in storage pools, query See: Query OCCupancy Files sent in current or recent client session Sometimes, a current or recent session had some impact on the server, and the TSM administrator would like to identify the particulars of the files involved. It is usually well known what TSM storage pool volume they went to, and so a simple way to report them is: 'Query CONtent VolName COUnt=-N F=D' where -N is some likely number which will encompass the recently arrived files of interest - which is most likely to work when the files are large. This may be even simpler if you have a disk storage pool as the initial reception area for Archive, Backup, or HSM client operation.
This technique is a handy way to spot-check a set of tapes and see what they were last used for. (The Query Content command is targeted at a volume and limited in scope, so no server overhead, and results are nearly instantaneous.) Files in a volume, list 'Query CONtent VolName ...' Files in database See: Objects in database Fileserver and user-executed restorals Shops may have a fileserver and dependent workstations, perhaps of differing architectures. Backups occur from the fileserver, but how to make it possible for users - who are not on the fileserver - to perform their own restorals? Possibilities: - For each user, have the fileserver do a 'dsmc SET Access' to allow the workstation users to employ -FROMNode and -FROMOwner to perform restorals to their workstations...whence the data would flow back to the fileserver over NFS, which may be tolerable. - Allow rsh access to the fileserver so that via direct command or interface the users could invoke ADSM restore. - Fabricate a basic client-server mechanism with a root proxy daemon on the fileserver performing the restoral for the user, and feeding back the results. (A primitive mechanism could even be mail-based, with the agent on the fileserver using procmail or the like to receive and operate upon the request.) - Have the fileserver employ two different nodenames with ADSM: one for its own system work, and the other for the backup of those client user file systems. This would allow you to give the users a more innocent, separate password which they could use (or embed in a shell script you write for them) to perform ADSM restorals from their workstations using the -nodename option. The data in this case would flow to the ADSM client on the workstation, and then back to the fileserver via NFS, which may be tolerable.
The nuisance here is setting up and maintaining ADSM client environments on the workstations...which could be made easier if you further exploited your NFS to have the executables and options files shared from the fileserver (where they would reside, but could not be executed because of the server being Sun and client code being AIX, say). -FILESOnly ADSMv3+ client option, as used with Restore and Retrieve, to cause the operation to bring back only files, not their accompanying directories. However, in Archive, directories in the path of the source file specification *will* be archived. During Restore and Retrieve, surrogate directories will be constructed to emplace the original structure of the file collection. Ref: TSM 4.2 Technical Guide See also: Restore Order; V2archive Filespace See: File Space Filespace number See: FSID Filespace Type Element of 'Query FIlespace' server command, reflecting the type of file system which ADSM found when it was *first* backed up. (Change from, for example, FAT to NTFS, and there will be no change in Filespace Type.) Sample types (type - platform): JFS - AIX; FSM:JFS - AIX HSM; ext2 - LINUX; NFS3 - IRIX; XFS - IRIX; FAT32 - Windows 95; NTFS - WinNT; AUTOFS - IRIX. See also: Platform FileSpaceList Entry in ADSM 'dsmc Query Options' or TSM 'dsmc show options' report which reveals the Virtual Mount Points defined in dsm.sys. Names are reported under this label if defined as a Virtual Mount Point *and* something is actually there. As such this is a good way of determining if an incremental backup will work on this name. FILESPACES *SM SQL table for the node filespace. Columns: NODE_NAME, FILESPACE_NAME, FILESPACE_TYPE, CAPACITY, PCT_UTIL, BACKUP_START, BACKUP_END See also: Query FIlespace for field meanings. FILETEXTEXIT TSM server option to specify a file to which enabled events are routed. Each logged event is a fixed-size, readable line.
Syntax: FILETEXTEXIT [No|Yes] File_Name REPLACE|APPEND|PRESERVE Parameters: Yes Event logging to the file exit receiver begins automatically at server startup. No Event logging to the file exit receiver does not begin automatically at server startup. When this parameter has been specified, you must begin event logging manually by issuing the BEGIN EVENTLOGGING command. file_name The name of the file in which the events are stored. REPLACE If the file already exists, it will be overwritten. APPEND If the file already exists, data will be appended to it. PRESERVE If the file already exists, it will not be overwritten. Filling Typical status of a tape in a 'Query Volume' report, reflecting a sequential access volume which is currently being filled with data. (In searching the manuals, note that the phrase "partially filled" is often used instead of "filling".) Note that this status can pertain even though the volume shows 100% utilized: the utilization has reached the estimated capacity but not yet the end of the volume. Note that "Filling" will not immediately change to "Full" on a filled volume if the Segment at the end of the volume spans into the next volume: writing of the remainder of the segment must complete on the second volume before the previous volume can be declared "Full". This necessitates the mounting and writing of a continuation volume, which might be thwarted by volume availability (MAXSCRatch, etc.). Note also that it is not logical for a non-mounted Filling status tape to be used when the current tape fills with a spanned file: files which span volumes must always continue at the front of a fresh volume. It would not be logical for a file to span from the end of one volume into the midst of another volume. Thus, a Filling tape will most often be used when an operation begins, not as it continues. Historically, *SM has always kept as many volumes in filling status as you have mount points defined to the device class for that storage pool.
So if your device class has a MOUNTLimit of 2, you'll always see 2 volumes in filling status (barring volumes that encounter an error). So when one Filling tape goes full, it would start another one. Advisory: Your scratch pool capacity can dwindle faster than you would expect, by tapes in Filling status having just a small amount of data on them, perhaps never again called upon for further filling. This can be caused by a worthy Filling tape dismounting when an operation like Move Data starts: it would otherwise use that Filling tape, but because it is dismounting, *SM instead uses a fresh tape, and that new tape will probably be used for further operations, leaving the old Filling tape essentially abandoned; so your usable tape complement shrinks. Reclamation: Filling volumes can be reclaimed as readily as Full volumes, per the reclaim threshold you set. Ref: Admin Guide, chapter 8, How the Server Selects Volumes with Collocation Enabled; ... Disabled See also: Full; Pct Util Firewall and idle session A firewall between the TSM client and server can result in the session being disconnected after, say, an hour of idle time (as in a long MediaWait). The real solution, of course, is to resolve the wait problems. You might also set the TCP keepalive interval to below the value of your firewall timeout before a session starts, or change the SO_KEEPALIVE setting on the socket for a current session (if possible). Firewall support For web-based access, TSM 4.1 introduced the option WEBPorts. The client scheduler operating in Prompted mode does not work when the server is across a firewall; but it does work when operating in Polling mode. To enable the Backup-Archive client, Command Line Admin client, and the Scheduler (running in polling mode) to run outside a firewall, the port specified by the server option TCPPort (default 1500) must be opened within the firewall. The server cannot log events to a Tivoli Enterprise Console (T/EC) server across a firewall.
Consider investigating VPN methods or SAN in general. Ref: Quick Start manual, "Connecting with IBM Tivoli Storage Manager across a Firewall". See: Port numbers, for ADSM client/server; SESSIONINITiation; WEBPorts Firmware IBM term for microcode. Firmware, for 3570, 3590 May be in a secure directory on the ADSM web site, index.storsys.ibm.com. (login:code3570 passwd: mag5tar). Fixed-home Cell 3494 concept wherein a cartridge is assigned to a fixed storage cell: its home will not change as it is used. This is necessitated if the Dual Gripper feature is not installed. fixfsm (HSM) /usr/lpp/adsm/bin/fixfsm, a ksh script for recreating .SpaceMan files when there is a corruption or loss problem in that HSM control area, including loss of the whole directory. Ref: Redbook "Using ADSM HSM", page 52 and appendix D. Fixtest Synonymous with "patch"; indicates that the code has not been fully tested. If your TSM version has a nonzero value in the 4th part of the version number (i.e. the '8' in '5.1.5.8') then it is a fixtest (or patch). See also: Version numbering FlashCopy Facility on the IBM ESS (Shark) which purports to facilitate backups by creating a backup image of a file system. It performs the operation by making a block-by-block copy of an entire volume. The IBM doc talks of having to unmount the file system before taking the copy - which is impossible in most sites - but that is actually an advisory to ensure the consistency of the involved data. Floating-home Cell 3494 Home Cell Mode wherein a cartridge need not be assigned to a fixed storage cell: its home will change as it is used. This is made possible via the Dual Gripper feature. See: Home Cell Mode FMR Field Microcode Replacement, as in updating the firmware on a drive. In the case of a tape drive, when the CE does this he/she arrives with a tape (FMR tape); but it can often be done via host command. .fmr Filename suffix for FMR (q.v.). IBM changed to a .ro suffix in 2003. Folder separator character ':'. 
(Macintosh) See also: "Directory separator" for Unix, DOS, OS/2, and Novell. FOLlowsymbolic Client User Options file (dsm.opt) (or 'dsmc -FOLlowsymbolic') option to specify whether ADSM is to restore files to symbolic directory links, and to allow a symbolic link to be used as a Virtual Mount Point (q.v.). Default: No Implications in restoring a symbolic link which pointed to a directory, and the symlink already exists: If FOLlowsymbolic=Yes, the symbolic link is restored and overlays the existing one; else ADSM displays an error msg. You may also be thinking of ARCHSYMLinkasfile. FOLlowsymbolic, query ADSM 'dsmc Query Options' or TSM 'show options' and look for "followsym". Font to use with the dsm GUI It ignores the -fn flag. Use the work-around of using X resources to set the font the GUI should use. Try invoking the GUI like this: dsm -xrm '*fontList: fixed' This lets the GUI come up with the font "fixed" being used for all panels. To use another font, simply replace "fixed" with that font's name (the command 'xlsfonts' gives a list of fonts available on your system). Alternatively, you can put a line like "dsm*fontList: fixed" into your .Xdefaults file ("dsm" is the GUI's X class name), and source this file using 'xrdb -merge ~/.Xdefaults'. This sets the default font to be used for all dsm sessions. forcedirectio Solaris UFS mount option: For the duration of the mount, forced direct I/O will be used - data is transferred directly between user address space and the disk. If the filesystem is mounted using noforcedirectio (the default), data is buffered in kernel address space when the user address space application moves data. forcedirectio is a performance option that is of benefit only in large sequential data transfers. Reported value: One customer saw a throughput enhancement factor of 5 - 15.
Ref: Solaris mount_ufs man page Format See: Dateformat; -DISPLaymode; MessageFormat; Numberformat; Timeformat Format= Operand of many TSM queries, to specify how much information to return: Standard The default, to return a basic amount of information. Detailed To return full information. FORMAT= Operand of DEFine DEVclass, to define the manner in which TSM is to tell the DEVType device to operate. For example, a 3590 drive can be specified to operate in either basic mode or compress mode. Advice: Avoid the temptation to employ the "FORMAT=DRIVE" specification, available for many device types, which says to operate at the highest format of which the device is capable. This is non-specific, and has historically been the subject of defect reports where it would not yield the highest operating format. Specify exactly what you want, to get what you want. Format command /usr/lpp/adsmserv/bin/dsmfmt Free backup products See: Amanda http://www.backupcentral.com/free-backup-software2.html FREQuency A Copy Group attribute that specifies the minimum interval, in days, between successive backups. Note that this unit refers to day thresholds, not 24-hour intervals. -FROMDate (and -FROMTime) Client option, as used with Restore and Retrieve, to limit the operation to files Backed up or Archived on or after the indicated date. Used on RESTORE, RETRIEVE, QUERY ARCHIVE and QUERY BACKUP command line commands, usually in conjunction with -TODATE (and -TOTIME) to limit the files involved. The operation proceeds by the server sending the client the full list of files, for the client to filter out those meeting the date requirement. A non-query operation will then cause the client to request the server to send the data for each candidate file to the client, which will then write it to the designated location. In ADSMv3, uses "classic" restore protocol rather than No Query Restore protocol. Contrast with "PITDate".
See: No Query Restore /FROMEXCSERV=server-name TDP Exchange option for doing cross-Exchange server restores... where you are doing a restore from a different Exchange Server... and need to specify the Exchange Server name that the backup was taken under. -FROMNode Used on ADSM client QUERY ARCHIVE, QUERY BACKUP, Query Filespace, QUERY MGMTCLASS, RESTORE, and RETRIEVE command line to display, retrieve, or restore files belonging to another user on another node. (Root can always access the files of other users, so doesn't need this option.) The owner of the files must have granted you access by doing 'DSMC SET Access'. Contrast with -NODename, which gives you the ability to gain access to your own files when you are at another node. The Mac 3.7 client README advises that using FROMNode with a large number of files incurs a huge performance penalty, and advises using NODename instead. dsm GUI equivalent: Utilities menu, "Access another node" Related: -FROMOwner. See also: VIRTUALNodename -FROMOwner Used on QUERY ARCHIVE, QUERY BACKUP, QUERY FILESPACE, RESTORE, and RETRIEVE client commands, when invoked by an ordinary user, to operate upon files owned by another user. Wildcard characters may be used. Root can always access the files of other users, but would want to use this option to limit the operation to the files owned by this user, as in querying just that user's archive files in a file system. The owner of the files must have granted you access by doing 'DSMC SET Access'. As of ADSM3.1.7, non root users can specify -FROMOwner=root to access files owned by the root user if the root user has granted them access. Related: -FROMNode. -FROMTime (and -TOTime) Client option, used with Restore and Retrieve, to limit the operation to files backed up on or after the indicated time. Used on RESTORE, RETRIEVE, QUERY ARCHIVE and QUERY BACKUP command line commands, usually in conjunction with -FROMDate (and -TODate) to limit the files involved.
The operation proceeds by the server sending the client the full list of files, for the client to filter out those meeting the time requirement. A non-query operation will then cause the client to request the server to send the data for each candidate file to the client, which will then write it to the designated location. FRU Field-Replaceable Unit. A term that hardware vendors use to describe a part that can be replaced "in the field": at the customer site. FSID (fsID) File Space ID: a unique numeric identifier which the server assigns to a filespace, under a node, when it is introduced to server storage. (FSIDs are not unique across nodes - only within nodes.) Is referenced in commands like DELete FIlespace, REName FIlespace. The fsID of a file space can be displayed via the GUI: on the main window, select the File details option from the View menu. May appear in messages ANR0800I, ANR0802I, ANR4391I. fslock.pid A file in the .SpaceMan directory of an HSM-managed file system, containing the ASCII PID of the current or last dsmreconcile process. FSM See: File System Migrator Fstypes Windows option file or command line option to specify which type of file system you want to see on the ADSM server when you view file spaces on another node. Use this option only when you query, restore, or retrieve files from another node. Choices: FAT File Allocation Table drives. RMT-FAT Remote FAT drives. HPFS High-Performance File System drives (OS/2 and Windows NT). RMT-HPFS Remote HPFS drives. NTFS Windows NT File System drives. RMT-NTFS Remote NTFS drives. FTP site index.storsys.ibm.com (Better to use direct FTP than WWW.) Go into directory "adsm". Full Typical status of a tape in a 'Query Volume' report, reflecting a sequential access volume which has been used to the point of having filled. Over time, you will see the Pct Util for the volume drop. This reflects the logical deletion of files on the volume per expiration rules.
But the very nature of serial media is such that there is no such thing as either the physical deletion of files in the midst of the volume or re-use of space in its midst. So the physical tape remains unchanged as the logical Pct Util value declines: in real, physical terms, the tape is still full as per having been written to the End Of Tape marker. Hence, the volume will retain the "Full" status until either all files on it expire, or you reclaim it at a reasonably low percentage. Remember that you do not want to quickly re-use volumes that became full, but rather want to age them, both to even out the utilization of tapes in your library, and to assure that physical data is still in place should you be forced to restore your *SM database to earlier than latest state. Msgs: When tape fills: ANR8341I End-of-volume reached... See also: Filling; Pct Util Full backup See: Backup, full Full volumes, report avg capacity by storage pool SELECT STGPOOL_NAME AS STGPOOL, CAST(MEAN(EST_CAPACITY_MB/1024) AS DECIMAL(5,2)) AS GB_PER_FULL_VOL FROM VOLUMES WHERE STATUS='FULL' GROUP BY STGPOOL_NAME Fuzzy backup A backup version of an object that might not accurately reflect what is currently in the object because ADSM backed up the object while the object was being modified. See: SERialization Fuzzy copy An archive copy of an object that might not accurately reflect what is currently in the object because ADSM archived the object while the object was being modified. GE Excessive abbreviation of GigE, which is Gigabit Ethernet. GEM Tivoli Global Enterprise Manager. GENerate BACKUPSET TSM3.7 server command to create a copy of a node's current Active data as a single point-in-time amalgam. The output is intended to be written to sequential media, typically of a type which can be read either on the server or client such that the client can perform a 'dsmc REStore BACKUPSET' either through the TSM server or by directly reading the media from the client node.
Syntax: 'GENerate BACKUPSET Node_Name Backup_Set_Name_Prefix [*|FileSpaceName[,FileSpaceName]] DEVclass=DevclassName [SCRatch=Yes|No] [VOLumes=VolName[,Volname]] [RETention=365|Ndays|NOLimit] [DESCription=___________] [Wait=No|Yes]' It is wise to set a unique DESCription value to facilitate later identification and searching. See: Backup Set; dsmc REStore BACKUPSET; Query BACKUPSETContents GENERICTAPE DEVclass DEVType for when the server does not recognize either the type of device or the cartridge recording format - never the best situation. See also: ANS1312E Ghost (Norton product) and TSM You can use Ghost as a quick way to install the recovery system that is used to run TSM restores of the real system. Sites that use Ghost this way generally put the recovery system and its TSM client software in a separate partition rather than non-standard folders in the production partition. GIGE Nickname for Gigabit Ethernet. global inactive state The state of all file systems to which space management has been added when space management is globally deactivated for a client node. When space management is globally deactivated, HSM cannot perform migration, recall, or reconciliation. However, a root user can update space management settings and add space management to additional file systems. Users can access resident and premigrated files. GPFS General Parallel File System (GPFS) is the product name for Almaden's Tiger Shark file system. It is a scalable cluster file system for the RS/6000 SP. Tiger Shark was originally developed for large-scale multimedia. Later, it was extended to support the additional requirements of parallel computing. GPFS supports file systems of several tens of terabytes, and has run at I/O rates of several gigabytes per second. http://www.almaden.ibm.com/cs/gpfs.html Grace period The default retention period for files where the management class to which they were bound disappears, and the default management class does not have a copy group for them.
Per DEFine DOMain. See: ARCHRETention, BACKRETention Grant Access You mean SET Access. See: dsmc SET Access GRant AUTHority *SM server command to grant an administrator one or more administrative privilege classes. Syntax: 'GRant AUTHority Adm_Name [CLasses=SYstem|Policy|STorage| Operator|Analyst|Node] [DOmains=domain1[,domain2...]] [STGpools=pool1[,pool2...]] [AUTHority=Access|Owner] [DOmains=____|NOde=____]' When you specify CLASSES=POLICY, you specify a list of policy domains the admin id can control. That admin can do things ONLY for the nodes in the specified domain(s): lock/unlock, register, associate, change passwords. But the admin won't be allowed to do any things on the server end, like checkin/checkout, manage storage pools, or mess with admin schedules, or even create new domains; you need SYSTEM for that. A limitation with POLICY is the inability to Cancel sessions for the nodes in its domain. See also: Query ADmin; REGister Admin; REMove Admin; UPDate Admin Graphical User Interface (GUI) A type of user interface that takes advantage of a high-resolution monitor, includes a combination of graphics, the object-action paradigm, and the use of pointing devices, menu bars, overlapping windows, and icons. See: dsm, versus dsmc Gripper On a tape robot (e.g., 3494) is the "hand" part, carried on the Accessor, which grabs and holds tapes as they are moved between storage cells and tape drives. See also: Accessor Gripper Error Recovery Cell 3494: Cartridge location 1 A 3 if Dual Gripper installed; 1 A 1 if Dual Gripper *not* installed. Also known as the "Error Recovery Cell". Ref: 3494 Operator Guide. Group By SQL operator to specify groups of rows to be formed if aggregate functions (AVG, COUNT, MAX, SUM, etc.) are used. SQL clause that allows you to group records (rows) that have the same value in a specified field and then apply an aggregate function to each group. 
For example, here we report the number of files and megabytes, by node, in the Occupancy table, for primary storage pools: SELECT NODE_NAME, SUM(NUM_FILES) as - "# Files", SUM(PHYSICAL_MB) as - "Physical MB" FROM OCCUPANCY WHERE - STGPOOL_NAME IN (SELECT DISTINCT - STGPOOL_NAME FROM STGPOOLS WHERE - POOLTYPE='PRIMARY') GROUP BY - NODE_NAME The Group By causes the Sums to occur for each node in turn. Groups Client System Options file (dsm.sys) option to name the Unix groups which may use ADSM services. It is a means of restricting ADSM use to certain groups. Default: any group can use ADSM. GroupWise Novell Nterprise product for communication and collaboration, a principal component being mail. Its backup is perhaps best accomplished with St. Bernard's Open File Manager. One thing you want to be careful of with GroupWise is how your policies are set up... It has been reported that GroupWise stores its messages in uniquely named files - which it would periodically reorganize, deleting the old uniquely named files and creating new ones. See also GWTSA. GUI Graphical User Interface; as opposed to the CLI or WCI. GUI, control functionality The TSM client GUI, in Windows, may be configured to limit the services available to the end user. See IBM site Solution swg21109086. GUI client Refers to the window-oriented client interface, rather than the command-line interface. Note that the GUI is a convenience facility: as such its performance is inferior to that of the command line client, and so should not be used for time-sensitive purposes such as disaster recovery. (So says the B/A Client manual, under "Performing Large Restore Operations".) As of 2004, the GUI is currently designed to query the server for all jobs when the GUI starts up, and then depend on events from the server to keep in sync when jobs are printed and new jobs are submitted.
It is possible for the GUI to get out of sync with reality: the GUI will remove a job instance from its repertoire if a query for the job fails to find it (which additionally keeps 5010-505 "cannot find" messages out of the server error.log). GUI vs. CLI By design, the GUI client is different in its manner of operation than the CLI client, because the nature of the GUI means that it needs to provide responses faster. Before v3, the GUI worked much like the CLI, obtaining all information about the area being queried before returning any. That was problematic, in the obvious delay, and client memory utilization (where a *SM client schedule process itself may be hanging on to a lot of memory). As of v3, the GUI asked the server for only as much data as it needed to fulfill its immediate display request (a top level set of directories, or the immediate contents of a selected directory). That discipline, however, makes PIT restorals problematic, in that the GUI's pursuit of just what exists within the PIT timeframe can mean that it will not obtain and display directories which you know to be involved, because they had been backed up outside the timeframe. (APAR IC24733 addresses this artifact, to say that it is working as designed.) Thus, for PIT restorals, you may be better off using the CLI. GUID (TSM 4.2+) The Globally Unique IDentifier (GUID) associates a client node with a physical system. The GUID is (currently) not used for functional purposes, but is only there for potential reporting purposes. When you install the Tivoli software: On Unix, the tivguid program is run to generate a GUID which is stored in the /etc/tivoli directory; On Windows, the tivguid.exe program is run to generate a GUID which is stored in the Registry. The GUID is a 16-byte code that identifies an interface to an object across all computers and networks. 
The identifier is unique because it contains a time stamp and a code based on the network address that is hard-wired on the host computer's LAN interface card. The GUID for a client node on the server can change if the host system is corrupted, if the file entry is lost, or if a user uses the same node name from different host systems. You can perform the following functions from the command line: - Create a new GUID 'tivguid -Create' - View the current GUID 'tivguid -Show' - Write a specific value - Create another GUID even if one exists. Do 'tivguid -Help' for usage. Ref: Unix client manual (body and glossary); IBM site entry swg21110521 GUIFilesysinfo Client option that determines whether information such as filesystem capacity is displayed on the initial GUI screen for all filesystems (GUIF=All, the default), or only for local filesystems (GUIF=Local). GUIF=Local is useful if the remote filesystems displayed are often unreachable: ADSM must otherwise wait for the remote filesystem information, or a timeout, before displaying the initial GUI screen, which can delay its appearance. This option can be specified in dsm.sys or dsm.opt, or on the command line when invoking the GUI. GUITREEViewafterbackup Specifies whether the client is returned to the Backup, Restore, Archive, or Retrieve window after a successful operation completes. Specify where: Client options file (dsm.opt) and the client system options file (dsm.sys). Possibilities: No - default; Yes. GWTSA GroupWise Target Service Agent - a NetWare TSA module used to make an online backup of GroupWise. See also: GroupWise HALT ADSM server command to shut down the server. This is an abrupt action. If possible, perform a Disable beforehand and give time for prevailing sessions to finish.
A Unix alternative, for when you are locked out and want to halt the server cleanly, is to send the server process a SIGTERM signal: 'kill -15 <pid>' ( = 'kill -TERM <pid>' = 'kill <pid>', TERM being the default signal sent by kill). See also: Server "hangs"; Server lockout Hard drives list See: File systems, local Hard links (hardlinks) Unix: When more than one directory entry in a file system points to the same file system inode, as achieved by the 'ln' command. The directory entries are just names which associate themselves with a certain inode number within the file system. They are equivalent, which is to say that one is not the "original, true" entry and that the later one is "just a link". The "hard links" condition is known only because the inode block contains a count of links to the inode. When one of its multiple names is deleted, the link count is reduced by one, and the inode goes away only if the link count reaches zero. When you back up a file that contains a hard link to another file, TSM stores both the link information and the data file on the server. If you back up two files that contain a hard link to each other, TSM stores the same data file under both names, along with the link information. When you restore a file that contains hard link info, TSM attempts to reestablish the links. If only one of the hard-linked files is still on your workstation, and you restore both files, TSM hard-links them together. Of course, if the hard link was broken since the backup such that the multiple names became files unto themselves, then it will not be possible to restore the hardlink name. Ref: Using the Backup-Archive Clients manual, "Understanding How Hard Links Are Handled". HAVING SQL operand, as in: "... HAVING COUNT(*)>10" HBA Host Bus Adapter, a term commonly used with Fibre Channel to refer to the interface card. Performance/impact: FibreChannel is high speed traffic, where an HBA such as a 6228 can eat the entire available bandwidth of a PCI bus; so each card should be on a separate PCI bus, with very little else on the bus.
IBM recommends: "It is highly recommended that Tape Drives and Tape Libraries be connected to the system on their own host bus adapter and not share with other devices types (DISK, CDROM, etc.)." The redpaper IBM TotalStorage: FAStT Best Practices Guide further says: "It is often debated whether one should share HBAs for disk storage and tape connectivity. A guideline is to separate the tape backup from the rest of your storage by zoning and move the tape traffic to a separate HBA and create a separate zone. This avoids LIP resets from other loop devices to reset the tape device and potentially interrupt a running backup." HDD Hard Disk Drive Header files for 3590 programming /usr/include/sys/mtio.h /usr/include/sys/Atape.h Helical scan tape technology Magnetic tape is tightly wound around and passes over a drum, at an angle. Inside the drum and protruding from a slot cut into it is a rotating arm with read/write heads on both ends of the arm. The heads contact the tape in "slash" strokes, the effect being like a helix. This recording technique allows higher density than if the tape were linearly passed over a single head: it is most commonly found used in VCRs, where analog video frames are conveniently recorded in the slashes. The technique was extended to data recording in 8mm form - where it achieved notoriety because of high error rates and unreadable tapes. Helical scanning is rough on tapes, resulting in oxide shedding and head clogging: frequent cleaning is essential. In contrast, linear tape technology does not employ sharp angles or mechanically active heads, and so its tapes enjoy much longer, reliable lives. As found in Exabyte Mammoth and Sony AIT (both 8mm tape technologies). Help files for client May have to do: 'setenv HELP /usr/lpp/adsm/bin' Hidden directory See: .SpaceMan Hierarchical storage management client A program that runs on a workstation or file server to provide space management services.
It automatically migrates eligible files to ADSM storage to maintain specific levels of free space on local file systems, and automatically recalls migrated files when they are accessed. It also allows users to migrate and recall specific files. Hierarchy See: Storage Pool Hierarchy High Capacity Output Facility 3494 hardware area, located on the inside of the control unit door, consisting of a designated column of slots within the 3494 from which the operator can take Bulk Ejects by opening the door. To change it, you need to perform a Teach Current Configuration, which involves going through a multi-step configuration review, followed by a 3494 reboot; then you need to force a partial reinventory, for the Library Manager to review the cells involved. See also the related Convenience I/O Station. High Performance Cartridge Tape The advanced cartridges used in the IBM 3590 tape drive. High threshold HSM: The percentage of space usage on a local file system at which HSM automatically begins migrating eligible files to ADSM storage. A root user sets this percentage when adding space management to a file system or updating space management settings. Contrast with low threshold. See "dsmmigfs". High-level address Refers to the IP address of a server. See also: Low-level address; Set SERVERHladdress; Set SERVERLladdress HIghmig Operand of 'DEFine STGpool', to define when ADSM can start migration for the storage pool, as a percentage of the storage pool occupancy. Can specify 0-100. Default: 90. To force migration from a storage pool, use 'UPDate STGpool' to reduce the HIghmig value (with HI=0 being extreme). See also: Cache; LOwmig HIPER Seen in IBM APARs; refers to a situation which is High Impact, PERvasive. Hivelist See: BACKup REgistry Hives High level keys HL_NAME SQL: The high level name of an object, being the directory in which the object resides.
Simply put, it is everything between the filespace name and the file name, which is to say all the intervening directories. In most cases, the FILESPACE_NAME will not have a trailing slash, the HL_NAME will have a leading and trailing slash, and the LL_NAME will have no slashes. Unix examples: For file system /users, directory name /users: FILESPACE_NAME="/users", HL_NAME="/", LL_NAME="". For file system /users, directory name /users/mgmt/: FILESPACE_NAME="/users", HL_NAME="/", LL_NAME="mgmt". For file system /users, file name /users/mgmt/phb: FILESPACE_NAME="/users", HL_NAME="/mgmt/", LL_NAME="phb". For file system filename /usr/docs/Acrobat3.0/Introduction.pdf the FILESPACE_NAME="/usr/docs", HL_NAME="/Acrobat3.0/", LL_NAME="Introduction.pdf". Note: The Contents table has a FILE_NAME column which is a composite of the HL_NAME and LL_NAME, like: /mydir/ .pinerc which makes it awkward to use the output of that table to further select in the Backups table, for example. See also: FILE_NAME; LL_NAME HLAddress REGister Node specification for the client's IP address, being a hard-coded specification of the address to use, as opposed to the implied address discovered by the TSM server during client sessions (which may be specified on the client side via the TCPCLIENTAddress option). See also: LLAddress; IP addresses of clients; SCHEDMODe PRompted Hole in the tape test An ultimate test of tape technology error correction ability: a (1.25mm) hole is punched through the midst of data-laden tape, and then the tape is put through a read test. 3590 tape technology passes this extreme test. ("Magstar Data Integrity Tape Experiment") Ref: Redbook "IBM TotalStorage Tape Selection and Differentiation Guide"; http://www4.clearlake.ibm.com/hpss/Forum /2000/AdobePDF/Freelance-Graphics-IBM- Tape-Solutions-Hoyle.pdf Home Cell Mode 3494 concept determining whether cartridges are assigned to fixed storage slots (cells) or can be stored anywhere after use (Floating-home Cell).
Query via 3494 Status menu selection "Operational Status". Home Element Column in 'Query LIBVolume' output. See: HOME_ELEMENT HOME_ELEMENT TSM DB: Column in LIBVOLUMES table containing the Element address of the SCSI library slot containing the tape. (Does not apply to libraries which contain their own supervisor, such as the 3494, where TSM does not physically control actions.) Type: Integer Length: 10 See also: Element Host name You mean "Server name" or "Node name"? (q.v.) Hot backup Colloquial term referring to performing a backup on an object, such as a database, which is undergoing continual updating as a conventional, external backup of that object proceeds. The restorability of the object backed up that way is questionable at best. The more reasonable approach involves performing the backup from inside the object, as for example a database API which can capture data for backup but do so in conjunction with ongoing processing. Another approach is an operating system API which performs continual, real-time backup. HOUR(timestamp) SQL function to return the hour value from a timestamp. See also: MINUTE(); SECOND() HOURS See: DAYS HP-UX file systems HP-UX uses the Veritas File System (VxFS), also referred to as the Journaled File System (JFS). VxFS provides Logical Volume Manager (LVM) tools to administer physical disks and allow administrators to manage storage assets. In general, one or more physical disks are initialized as physical volumes and are allocated to Volume Groups. Storage from the Volume Group is made available to a host by creating one or more Logical Volumes. Once allocated, Logical Volumes can be used for HP-UX file systems or used as raw (logical) devices for DBMS. Information about the Volume Group and Logical Volume are stored on each physical volume. HPCT High Performance Cartridge Tape. Contrast with CST and ECCST. See also: 3590 'J'; EHPCT HSM Hierarchical Storage Management. Currently called "TSM for Space Management".
A TSM client option available in AIX and Solaris. Its nature calls for operating system modifications, typically in the form of kernel extensions. (Was once available for SGI as well, but that was withdrawn. IBM intended HSM for many platforms, but as they approached the task they found that various parties were being licensed to likewise modify the operating system to their needs: in that this uncoordinated approach would lead to inevitable conflicts, IBM reduced its ambitions.) Started by /etc/inittab's "adsmsmext" entry invoking /etc/rc.adsmhsm . See also: DM HSM, add file system to it Employ the GUI, or the command: 'dsmmigfs add FileSystemName' The file system name ends up being added to the list /etc/adsm/SpaceMan/config/dsmmigfstab HSM, command format Control via the OPTIONFormat option in the Client User Options file (dsm.opt): STANDARD for long-form, else SHORT. Default: STANDARD HSM, display Unix kernel messages? Control via the KERNelmessages option in the Client System Options file (dsm.sys). Default: Yes HSM, exclude files Specify "EXclude.spacemgmt pattern..." in the Include-exclude options file entry to exclude a file or group of files from HSM handling. HSM, for Windows It's Legato DiskXtender, an IBM-blessed TSM companion product. (Formerly from OTG Software, bought by Legato.) http://portal1.legato.com/products/ disxtender/ In past history: Eastman Software had an HSM for NT product called OPEN/stor, being replaced in 1998 by Advanced Storage for Windows NT (y2k compliant). As of mid-98, OPEN/stor became Storage Migrator 2.5 (version 2.5 includes the ADSM option as part of the base product) HSM, insufficient space in file system You can run into a situation where it looks like there should be room in the HSM-controlled file system to move in a given file, but attempting to do so results in an error indicating insufficient space to complete the operation. 
This may be due to fragmentation of the disk space: the query you performed to report the amount of free space is misleading because it includes partially free blocks of space, whereas the file copy operation wants whole, empty blocks. In AIX, for example, the default file system block size is 4 KB. A file containing 1 byte of data requires a minimum storage unit of one 4 KB block where 4095 bytes are empty; but those 4095 bytes can only be used for the expansion of that file, not the introduction of a new file. In AIX, a fragmentation problem at data movement time can be determined by examining the AIX Error Log, as via the 'errpt' command, for JFS_FS_FRAGMENTED entries. HSM, recall daemons, max number Control via the MAXRecalldaemons option in the Client System Options file (dsm.sys). Default: 20 HSM, recall daemons, min number Control via the MINRecalldaemons option in the Client System Options file (dsm.sys). Default: 3 HSM, reconciliation interval Control via the RECOncileinterval option in the Client System Options file (dsm.sys). Default: 24 hours HSM, reconciliation processes, max number Control via the MAXRCONcileproc option in the Client System Options file (dsm.sys). Default: 3 HSM, start manually In Unix: '/etc/rc.adsmhsm &' HSM, threshold migration, query Via the AIX command: 'dsmmigfs Query [FileSysName]' HSM, threshold migration, set Control via the AIX command: 'dsmmigfs Add|Update -hthreshold=N' for the high threshold migration percentage level. Use: 'dsmmigfs Add|Update -lthreshold=N' for the low threshold migration percentage level. HSM, retention period for migrated files (after modified or deleted in client file system) Control via the MIGFILEEXPiration option in the Client System Options file (dsm.sys).
Default: 7 (days) HSM, space used by clients (nodes), on all volumes 'Query AUDITOccupancy [NodeName(s)] [DOmain=DomainName(s)] [POoltype=ANY|PRimary|COpy]' Note: It is best to run 'AUDit LICenses' before doing 'Query AUDITOccupancy' to assure that the reported information will be current. HSM, threshold migration, max number of processes Control via the MAXThresholdproc option in the Client System Options file (dsm.sys). Default: 3 HSM active on a file system? 'dsmdf FSname', look in "FS State" column for "a" for active, "i" for inactive, or "gi" for global inactive. HSM and Aggregation HSM did not begin utilizing Aggregation when that capability came into being in ADSMv3, and HSM still does not use it. The rationale for not using Aggregation is that the HSM design transfers each file in its own transaction, which is due to a number of reasons, such as that HSM in general will be migrating "large" files as these are favored during candidates search (unless the size factor is 0) and will thus be migrated before any of the smaller files. The effect is increased server overhead as well as greater tape utilization. HSM backup, offsite copypool only Some implementations seek to have only an offsite (copypool) image of the HSM data, seeking to avoid the use of tapes for an onsite backup image. An approach: Via dsmmigfs, define the stub size to be 512 to eliminate leading file data from the stub, to force all files to be eligible for migration. Employ a relatively low HThreshold value on the HSM file system, to cause most files to migrate naturally. Preparatory to daily TSM server administration tasks, schedule a 'dsmmigrate -R' on the file system, allowing enough time for it to finish. As part of daily TSM server administration, do Backup Stgpool on the disk & tape stgpools to which that HSM data migrates, to an appropriate offsite stgpool.
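The 4 KB block arithmetic behind the "HSM, insufficient space in file system" entry above can be sketched as a quick shell calculation. This is an illustration only; the 4096-byte block size is the AIX JFS default cited in that entry, and the function names are made up here:

```shell
# Illustration only: bytes a file actually consumes on a filesystem with
# 4 KB allocation blocks. A 1-byte file still occupies a whole 4096-byte
# block, which is why "free space" reports can mislead.
blocks_for() { echo $(( ($1 + 4095) / 4096 )); }        # whole blocks needed
alloc_for()  { echo $(( $(blocks_for "$1") * 4096 )); } # bytes allocated
```

For example, alloc_for 1 and alloc_for 4096 both yield 4096, while alloc_for 4097 yields 8192: the 4095 "empty" bytes in a partly used block are not available to a new file.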
HSM candidates list 'dsmmigquery FSname' HSM commands, list help 'dsmmighelp' HSM configuration directory /etc/adsm/SpaceMan/config HSM daemons dsmmonitord and dsmrecalld. Their PIDs are remembered in files /etc/adsm/SpaceMan/dsmmonitord.pid and /etc/adsm/SpaceMan/dsmrecalld.pid HSM disaster recovery (offsite) issues For *SM offsite disaster recovery, what should go offsite? Should you send copies of HSM storage pool backups, or copies of backup storage pools reflecting HSM file system backups - or both? HSM storage pools contain only data which has migrated from the HSM file system to TSM server storage - which is *never* small (<4 KB) files. Because HSM storage pool copy tapes are inherently incomplete, they cannot fully recover HSM in the event of a disaster. However, one would *like* to depend upon HSM copy storage pool tapes because restoring the server storage pool is so easy. Depending upon HSM file system backup storage pool data for disaster recovery is more appropriate in that it is a complete image of the data: files of all sizes, migrated or not. While complete, a backup image of HSM is problematic for disaster recovery in that there is little chance that it can all fit into the HSM file system upon restoral. To accomplish such a restoral, you will need an aggressive migration from the file system to the HSM storage pool, which has the opportunity to run as the restoral takes time to transition from one tape to another. (Note that a Backup storage pool tape set is far too awkward to depend upon as a resource for restoring a bad HSM primary tape storage pool: depend upon HSM backup storage pool tapes only for file recovery and disaster recovery.) HSM error handling Specify a program to execute via the ERRORPROG option in the Client System Options file (dsm.sys). Can be as simple as "/bin/cat". **WARNING** If ADSM loses its mind (as when it obliterates its own client password), this can result in tens of thousands of mail messages being sent.
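The ERRORPROG program named under "HSM error handling" above can be anything executable; a minimal sketch is one that appends to a log file rather than generating mail, sidestepping the mail-flood scenario warned of there. The function name, log path, and the assumption that error text arrives via arguments and stdin are all illustrative, not product-defined:

```shell
# Hypothetical ERRORPROG stand-in: append whatever is passed (arguments
# and stdin) to a log file instead of mailing it.
hsm_errorprog() {
  log=${HSM_ERRLOG:-/tmp/hsm-err.log}   # illustrative default path
  { echo "args: $*"; cat; } >> "$log"
}
```

In practice the dsm.sys ERRORPROG value would point at a script of this shape sitting on disk.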
HSM file, recall Is implicit by use of the file, or you can use the dsmrecall command (q.v.). HSM file system, back up Performing a 'dsmc Incremental' on an HSM file system results in basic backup files. If a file is Migrated, a backup of it results in just the single instance of the file in the Backups table: there will be no backup image of the stub file. HSM file system, mount Make sure your current directory is not the mount point directory, then: 'mount FSname' # Mount the JFS 'mount -v fsm FSname' # Mount the FSM (The second command will result in msg "ANS9309I Mount FSM: ADSM space management mounted on FSname".) HSM file system, mounting from an NFS client You can have an HSM-managed file system available to remote systems via NFS; but there are procedural considerations: - Attempting to mount the file system too early in server start-up could result in having the (empty) server mount point directory being mounted. What's worse: a 'df' on the client misleads with historical information. - AIX's normal exports sequence will result in the JFS file system being exported from the server. You need to do another 'exportfs' command after HSM mounts its FSM VFS over the JFS file system, else on the client you get: mount ServerName:/FSname MtPoint mount: access denied for ServerName:/FSname mount: giving up on: ServerName:/FSname Permission denied So try '/usr/bin/exportfs -v FSname'. Note that this can sometimes take up to 10 minutes to take effect (some problem with mountd). HSM file system, move to another ADSM server The simplest method is to set up a replacement HSM file system in the new environment and perform a cross-node restore (-VIRTUALNodename=FormerClient) to populate the new file system, specifying -SUbdir=Yes to recreate the full directory structure, and -RESToremigstate=No to move all the data across.
This method depends upon the feasibility of using a datacomm line for so much data, being able to use a tape drive on the source TSM server for a prolonged period, and the receiving HSM file system parameters being set to perform migration and dsmreconcile in time to make space for the incoming data. Another approach is to: Perform a final backup of the HSM file system in its original location. EXPort Node of that backup filespace. Define the HSM file system and HSM storage pool in its new environment. IMport Node to plant the backup filespace. Perform a full file system restoral in the new environment (dsmc restore -SUbdir=Yes -RESToremigstate=Yes (the default anyway)) to recreate the directory structure, restore small files, and recreate stub files. This basically follows the HSM file system recovery procedures outlined in the HSM manual and HSM redbook (q.v.). The big consideration to this approach is that Export and Import are very slow. HSM file system, move to another client, same server The following method is anecdotally reported, but is undocumented: import volume group; mount the HSM file system; dsmmigfs import HSM file system, remove Make sure that the file system is all but empty, in that the following REMove will cause a full recall. 'dsmmigfs REMove FSname', which... - runs reconciliation for the filesys; - evaluates space for total recall; - recalls all files - has the server eliminate migrated file images from server storage - unmounts the FSM from the JFS filesys. You then do: 'umount FSname' # Unmount the JFS 'rmfs -r FSname' to remove the file system, LV, and mount point. Remove name from /etc/exports.HSM; Update /usr/lpp/adsm/bin/dsm.opt, and restart dsmc schedule process, if any; Update /usr/lpp/adsm/bin/rc.adsmhsm, if filesys named there.
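Before the 'dsmmigfs REMove' described above triggers its full recall, it can be prudent to gauge how many files are involved. A generic sketch (this counts what the OS sees, so HSM stub files count as ordinary files; use dsmdu/dsmls for HSM-aware sizes, and note the function name and path handling are illustrative):

```shell
# Rough pre-REMove check: count the regular files (including HSM stubs)
# under a given mount point. 'tr' strips the padding some wc versions emit.
count_objects() { find "$1" -type f | wc -l | tr -d ' '; }
```

A large count warns that the REMove's recall pass, and the space it needs, will be substantial.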
HSM file system, rename  'dsmmigfs deactivate FSname'
    'umount FSname'  # Unmount the FSM
    'umount FSname'  # Unmount the JFS
    Change name in /etc/filesystems;
    Change name in /etc/exports.HSM;
    Rename mount point;
    Change name in /etc/adsm/SpaceMan/config/dsmmigfstab;
    In ADSM server: 'REName FIlespace NodeName FSname NewFSname'
    'mount NewFSname'; 'mount -v fsm NewFSname';
    'dsmmigfs reactivate NewFSname'
    '/usr/sbin/exportfs NewFSname'  # To export the FSM
    Update /usr/lpp/adsm/bin/dsm.opt
    Update /usr/lpp/adsm/bin/rc.adsmhsm, if filesys named there.
HSM file system, restore as stub files  Use -RESToremigstate=Yes (the default) (restore in migrated state) to restore the files such that the data ends up in TSM server filespace and the client file system gets stub files. (Naturally, files too small to participate in HSM migration are fixed residents in the file system, and physical restoral must occur.) Can specify either on the dsmc command line, or in the Client User Options file (dsm.opt). Example:
    'dsmc restore -RESToremigstate=Yes -SUbdir=Yes /FileSystem'
    To query, do 'dsmc Query Option' in ADSM or 'dsmc show options' in TSM and look for "restoreMigState".
    See also: dsmmigundelete; Leader data
HSM file system, unmount  Do this when the file system is dormant. Make sure your current directory is not the mount point directory, then:
    'umount FSname'  # Unmount the FSM
    'umount FSname'  # Unmount the JFS
HSM file systems, list  'dsmmigfs query [FileSystemName...]'
    The file systems end up enumerated in file /etc/adsm/SpaceMan/config/dsmmigfstab by virtue of running 'dsmmigfs add'.
HSM files, database space required  Figure 143 bytes + filename length.
HSM files, restore as stubs (migrated files) or as whole files  Control via the RESToremigstate Client User Options file (dsm.opt) option. Specify "RESToremigstate Yes" to restore as stubs (the default, usual method); or just say "No", to fully restore the files to the local file system in resident state.
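The 143-bytes-plus-filename-length figure above can be turned into a quick capacity estimate. A hedged sketch: the per-file constant is the rule of thumb from this entry, not an exact server metric.

```shell
# Estimate TSM database space for HSM-migrated files: reads one filename
# per line on stdin and prints the estimated total bytes, at
# 143 bytes + filename length per file (rule of thumb above).
estimate_hsm_db_bytes() {
    awk '{ total += 143 + length($0) } END { print total + 0 }'
}
```

For example, feed it the output of 'find /migfs -type f -print' to size a whole file system's potential contribution to the database.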
HSM files, actual sizes The Unix 'du -k ...' command can be used to display the sizes of files as they sit in the Unix file system; but it obviously knows not of HSM and cannot display actual data sizes for files migrated from an HSM-controlled file system. Use the ADSM HSM 'dsmdu' command to display the true sizes. See: dsmdu HSM files, seek in database SELECT * FROM SPACEMGFILES WHERE - NODE_NAME='UPPER_CASE_NAME' AND - FILESPACE_NAME='___' AND FILE_NAME='___' This will report state (Active, Inactive), migration date, deletion date, and management class name. It will not report owner, size, storage pool name or volumes that the file is stored on. HSM for Netware Product "FileWizard 4 TSM" from a company called Knozall Systems. http://www.knozall.com/hsm.htm HSM for Windows See: HSM, for Windows HSM installed? In AIX, do: lslpp -l "adsm*" or: lslpp -l "tsm*" and look for "HSM". HSM kernel extension loaded? '/usr/lpp/adsm/bin/installfsm -q /usr/lpp/adsm/bin/kext' See also: installfsm HSM kernel extension management See: installfsm HSM Management Class, select HSM uses the Default Management Class which is in force for the Policy Domain, which can be queried from the client via the dsmc command 'Query MGmtclass'. You may override the Default Management Class and select another by coding an Include-Exclude file, with the third operand on an Include line specifying the Management Class to be used for the file(s) named in the second operand. HSM migration behavior Observations via 'dsmls' show that files migrate as follows: 1. They sit in the file system for some time, as Resident (r). 2. When space is needed, migration candidates are migrated (m). In addition, the Premigration Percentage causes a certain additional amount to be premigrated (p). Note that the premigrated files are recorded in the premigrdb database located in the .SpaceMan directory. 
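As a monitoring aid for the r/m/p progression above, the state flags in 'dsmls' output can be tallied. A sketch only: the column position of the state letter is an assumption here - verify it against your client's dsmls output before relying on it.

```shell
# Tally dsmls-style listing lines by HSM file state (r/m/p).
# Assumes the state letter is the next-to-last column, with the file
# name last; pipe 'dsmls FSname' output into this.
tally_hsm_states() {
    awk 'NF >= 2 { count[$(NF-1)]++ }
         END { for (s in count) print s, count[s] }' | sort
}
```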
HSM migration candidates list empty See: HSM migration not happening HSM migration not happening Possible causes: - The file system is not actively under HSM control. - The management class operand SPACEMGTECHnique is NONE or SELective. Check via client 'dsmmigquery -M -D'. - The files are predominantly smaller than the stub size defined for the HSM file system (usually 4KB). - If your file system usage level is not over the defined migration threshold, there is no need for migration. - dsmmonitord not running (started by rc.adsmhsm) so as to run dsmreconcile and create a migration candidates list (verifiable via 'dsmmigquery -c FSnm') - By default, migration requires that a backup have been done first, per the MGmtclass MIGREQUIRESBkup choice. (Look for msg ANS9297I.) - Assure that your storage pool migration destinations are defined as you think they are. - Assure that the destination storage pool Access is Read/Write, and that its volumes are online. - Another cause of this problem is there being binary (as in a Newline) embedded in a space-managed file name. Look for such an oddity in the migration candidates list. - Try a manual dsmreconcile. That may say "Note: unable to find any candidates in the file system.": try doing 'dsmmigrate -R Fsname' and see what messages result. - If there is a migration candidates list, manually run dsmautomig and see if that works; else try a manual dsmmigrate on a selected file and see if that works. HSM migration processes, number The 4.1.2 HSM client introduces the new parameter MAXMIGRATORS (q.v.). HSM quota HSM: The total number of megabytes of data that can be migrated and premigrated from a file system to ADSM storage. The default is "no quota", but if activated, the default value is the same number of megabytes as allocated for the file system itself. HSM quota, define Defined when adding space management to a file system, via the dsmhsm GUI or the 'dsmmigfs add -quota=NNN Fsname' command. 
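The manual steps in the checklist above can be partially mechanized. A hedged helper: the commands are those cited in this entry, and the RUN variable is an addition here so you can dry-run the sequence with RUN=echo.

```shell
# Walk the manual HSM-migration diagnostics from the checklist above.
# RUN=echo prints the commands instead of executing them.
RUN=${RUN:-}

hsm_migration_diag() {
    fs=$1
    $RUN dsmmigquery -M -D      # check mgmtclass SPACEMGTECHnique settings
    $RUN dsmmigquery -c "$fs"   # is there a migration candidates list?
    $RUN dsmreconcile "$fs"     # rebuild the candidates list
    $RUN dsmautomig "$fs"       # then try threshold migration by hand
}
```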
HSM quota, update Can be done via the dsmhsm GUI or the 'dsmmigfs update -quota=NNN Fsname' command. HSM rc file /etc/rc.adsmhsm, which is a symlink to /usr/lpp/adsm/rc.adsmhsm, a Ksh script. Invoked by /etc/inittab's "adsmsmext" entry. As provided by IBM, the script has no "#!" first line to cause it to be run under Ksh if invoked simply by name. HSM recall Priority: Will preempt a BAckup STGpool. HSM recall processes, cancel 'dsmrm ID [ID ...]' HSM recall processes, current 'dsmq' HSM server Specified on the MIgrateserver option in the Client System Options file (dsm.sys). Default: the server named on the DEFAULTServer option. HSM status info Stored in: /etc/adsm/SpaceMan/status which is the symlink target of the .SpaceMan/status entry in the space-managed file system. HSM threshold migration interval Defaults to once every 5 minutes. Specify a value on the CHEckthresholds option in the Client System Options file (dsm.sys). HTTP A COMMmethod defined in the Server Options File, for the Web-browser based administrative interface. You need to code both: COMMmethod HTTP HTTPPort 1580 HTTPport Client System Options File (dsm.sys) option specifying the TCP/IP port address for the Web Client. Code a value from 1000 - 32767. Default: 1581 Windows advisory: The HTTPport in the options file may not actually be what controls the port number: there may be an HttpPort value in the registry, which will take precedence for the port on which to listen. The registry entry is: HKEY_LOCAL_MACHINE\SYSTEM\ControlSetXX \Services\ADSM Client Acceptor \Parameters\HttpPort . The "dsm.opt" file will be looked at if this HttpPort Registry entry does not exist: if there is no HTTPport value specified in the dsm.opt, the default value of 1581 will be used. The HttpPort value in the Registry can be updated with the dsmcutil command: dsmcutil update cad /name:"NameOfCadService " /httpport:#### Surprise: The HTTPport value also controls the Client Acceptor (dsmcad) port number! 
Ref: www.ibm.com/support/entdocview.wss?uid=swg21079454
    See also: WEBPorts
HTTPPort  Server options file option specifying the port number for the HTTP communication method. Default: 1580
HTTPS  ADSMv3 COMMmethod defined in the Server Options File, for a Web-browser based administrative interface using the Secure Sockets Layer (SSL) communications protocol. You need to code both:
    COMMmethod HTTPS
    HTTPSPort 1580
    Note: Not required for the Web proxy and is not supported by TSM.
HTTPSPort  Server options file option specifying the port number for the HTTPS communication method, which uses the Secure Socket Layer (SSL). Default: 1543
Hyperthreading  See: Intel hyperthreading & licensing
I/O error messages  ANR1414W at TSM server start-up time, reporting a volume "read-only" due to previous write error.
    ANR8359E Media fault ... (q.v.)
I/O errors reading a tape  Errors are sometimes encountered when reading tapes. Sometimes, simply repeating the read will cause the error to disappear. With tapes which have been unused for a long time, or stored under unfavorable conditions, you may want to retension the tape before trying to read it. See: Retension
IBM media problems  Call (888) IBM-MEDIA about the problem you have with media purchased from IBM.
IBM Tivoli Storage Manager  Formal name of product, as of 2002/04, previously called Tivoli Storage Manager (and before that, ADSTAR Distributed Storage Manager, derived from WDSF).
IBM TotalStorage  New name, supplanting "Magstar" in 2002.
IBMtape  The 3590/LTO/Ultrium device driver for Solaris systems. ftp://ftp.software.ibm.com/storage/devdrvr/Solaris/
    See also: Atape
ICN  IBM Customer Number. The 7-digit number under which you order IBM software, and through which you obtain IBM support under contract.
Idle timeout value, define  "IDLETimeout" definition in the server options file.
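For the "simply repeating the read" advice above, a generic retry wrapper may help. Plain shell, not TSM-specific; the command to retry and the attempt count are up to you.

```shell
# Retry a command up to N times, returning success on the first good
# attempt; e.g. 'retry 3 dd if=/dev/rmt0 of=image bs=262144'.
retry() {
    n=$1; shift
    i=0
    while ! "$@"; do
        i=$((i + 1))
        [ "$i" -ge "$n" ] && return 1
    done
    return 0
}
```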
Idle wait (IdleW, IdleWait)  "Sess State" value in 'Query SEssion' output for when the server end of the session is idle, waiting for a request from the client. Recorded in the 22nd field of the accounting record, and the "Pct. Idle Wait Last Session" field of the 'Query Node Format=Detailed' server command, where slower clients typically have larger numbers. Can result when a client has asked for a mass of information from the server (as in an incremental backup), the server has sent it to the client, and the client is now very busy sorting it and scanning file systems for files which need to be backed up, comparing against the list of already-backed-up files provided by the server. In the midst of a Backup session, idle wait time accrues while the client is running through the file system seeking the next changed file to back up - and changed files may be few and far between in a given file system. Naturally, a client system busy doing other things will deprive the TSM backup of CPU time and result in file system contention (made worse by virus checking). Also keep in mind that the client doesn't send data to the server until it has a transaction's worth. Retries are another impediment to getting back to the server. If the server expects a response and the client is too busy for a long time, IDLETimeout can occur.
    See also: Communications Wait; Media Wait; SendW; Start
IDLETimeout  Definition in the server options file. Specifies the number of minutes that a client session can be idle before its session will be canceled.
    Allowed: 1 (minute) to infinity
    Default: 15 (minutes)
    Too small a value can result in server message ANR0482W. A value of 60 is much more realistic. See IBM site topic "Why are sessions being terminated due to timeouts?" (swg21161949).
    See also: COMMTimeout; SETOPT
IDLETimeout server option, query  'Query OPTion'
IDRC  Improved Data Recording Capability.
Technology built into the 3590 tape drive to compress and compact data, from two to five times that of uncompacted data (the typical compression factor being 3x). IE Usually, Internet Explorer; but sometimes an unfortunately short abbreviation of Include/Exclude. -IFNewer Client option, used with Restore and Retrieve, to cause replacement of an existing file with the file from the server storage pool if that server file is newer than the existing file. Note that this is part of a full replacement type restore ("-REPlace=All|Yes|Prompt") and won't work if using "-REPlace=No". That is, it is part of a "fill in voids and update old files" restoral. WARNING: -REP=All|Yes -IFNewer was horrendously inefficient: it essentially does a -REP=ALL, mounting every tape and moving every file, and at the last second, only replaces it if newer. Ref: APARs IX87650 (server), IC23158 (client), IX89496 (client). Use -FROMDate, -FROMTime, and -PITDate instead, which result in database selection being done in the server, minimizing the movement of data. See also: -LAtest IGNORESOCKETS Testflag, per APAR IX80646, to give the ability to skip socket files during Restore. Works for all platforms except AIX 4.2 and HP-UX, which always skip socket files. Do not attempt to use during Backup. See also: Sockets, Testflag Image Backup (aka Snapshot Backup) The 3.7 facility for backing up a logical volume (partition) as a physical image, on the AIX, HP, and Sun client platforms. In TSM 5.1, available on Windows 2000, where the Logical Volume Storage Agent (LVSA) is available, which can take a snapshot of the volume while it is online. This image backup is a block by block copy of the data. Optionally only occupied blocks can be copied. If the snapshot option is used (rather than static) then any blocks which change during the backup process are first kept unaltered in an Original Block File. 
In this way the client is able to send a consistent image of the volume as it was at the start of the snapshot process to the Tivoli Storage Manager server. Subsequently available on Windows XP (which is built upon Windows 2000). TSM 5.2 built upon this: its Open File Support uses this Snapshot mechanism.
    See also: Open File Support; Raw logical volume, back up; Snapshot
Immediate Client Actions utility  After using, stop and restart the scheduler service on the client, so it can query the server to find out its next schedule, which in this case would be the immediate action you created. Otherwise you will need to wait till the client checks for its next schedule on its own. Also affected by the server 'Set RANDomize' command.
Imperfect collocation  Occurs when collocation is enabled, but there are insufficient scratch tapes to maintain full separation of data, such that data which otherwise would be kept separate has to be mingled within remaining volume space.
    See also: Collocation
Import  To import into a TSM server the definitions and/or data from another server where an Export had been done. Notes:
    Code -volumenames in the order they were created.
    If the server encounters a policy set named ACTIVE on the tape volume during the import process, it uses a temporary policy set named $$ACTIVE$$ to import the active policy set. After each $$ACTIVE$$ policy set has been activated, the server deletes that $$ACTIVE$$ policy set from the target server. TSM uses the $$ACTIVE$$ name to show you that the policy set which is currently activated for this domain is the policy set that was active at the time the export was performed. After doing the Import, review the policy results and perform VALidate POlicyset and ACTivate POlicyset as needed.
IMport Node  *SM server command to import data previously EXPorted from a *SM server. The process will retain the exported domain and node name.
Syntax: 'IMPort Node DEVclass=DevclassName VOLumenames=VolName(s) [NodeName(s)] [FILESpace=________] [DOmains=____] [FILEData=None|ALl|ARchive| Backup|BACKUPActive| ALLActive| SPacemanaged] [Preview=No|Yes] [Dates=Absolute|Relative] [Replacedefs=No|Yes]' where NodeName, FILESpace, and DOmains are used to select from the input. Dates= Specifies whether the recorded backup or archive dates for client node file copies are set to the values specified when the files were exported (Absolute), or are adjusted relative to the date of import (Relative). Default: Absolute. Backup data will be put into the tape pool, and HSM data will be put into the HSM disk storage pool. Note that the exported domain name will typically not exist on the import system (nor would you want it to) and so the import operation will attempt to assign all to domain name STANDARD - after which you can perform an UPDate Node to reassign the node to an appropriate domain name in the importing system. Note that the volumes to be imported need to be checked in to the receiving server before use. If Import finds a filespace of the same name already on the receiving server, it will rename the incoming filespace to have a digit at the end of the name. A message reflecting this should appear in the Activity Log. (See "Importing File Data Information", "Understanding How Duplicate File Spaces Are Handled" in the Admin Guide.) Alas, there has been no merging capability in Import. There is Rename Filespace capability in the server, to adjust things to suit your environment, where you could make it match a file system name so that users could therein retrieve their imported data. Look for ANR0617I "success" message in the Activity Log to verify that the import has worked. DO NOT perform Query OCCupancy while Import is running: it has been seen to result in: ANR9999D imutil.c(2555): Lock acquisition (ixLock) failed for Inventory node 17. 
Messages: ANR0798E, ANR1366W, ANR1368W Improved Data Recording Capability See: IDRC IN SQL clause to include a particular set of data that matches one of a list of values. The set is specified in parentheses. Literals may appear in the set, enclosed in single quotes. WHERE COLUMN_NAME - IN (value1,value2,value3) See also: NOT IN IN USE Status of a tape drive in 'Query MOunt' output when a tape drive is committed to a session involving a client. -INActive 'dsmc REStore' option to cause ADSM to display both the active and inactive versions of files in the selection generated via -Pick. Inactive, when a file went Do a Select on the Backups table, where the DEACTIVATE_DATE tells the story. Inactive file, restore See example under "-PIck". Inactive file system HSM: A file system for which you have deactivated space management. When space management is deactivated for a file system, HSM cannot perform migration, recall, or reconciliation for the file system. However, a root user can update space management settings for the file system, and users can access resident and premigrated files. Contrast with active file system. Inactive files, identify in Select STATE='INACTIVE_VERSION' See also: Active files, identify in Select; STATE Inactive files, list via SQL SELECT HL_NAME, LL_NAME, - DATE(BACKUP_DATE) as bkdate, - DATE(DEACTIVATE_DATE) AS DELDATE, CLASS_NAME FROM ADSM.BACKUPS WHERE - STATE = 'INACTIVE_VERSION' AND - TYPE = 'FILE' AND - NODE_NAME = 'UPPER_CASE_NAME' AND - FILESPACE_NAME = 'Case_Sensitive_Name' Inactive files, number and bytes Do 'Query OCCupancy NodeName FileSpaceName Type=Backup' Total the number of files and bytes, for all stored data, Active and Inactive. Do 'EXPort Node NodeName FILESpace=FileSpaceName FILEData=BACKUPActive Preview=Yes' Message ANR0986I will report the number of files and bytes for Active files. Subtract these numbers from those obtained in Query OCCupancy, yielding values for Inactive files. 
See also: Active files, number and bytes
Inactive files, rebind  There is no command to rebind Inactive files (those which have been deleted from the client but which are retained in TSM server storage). But there is a simple technique to effect rebinding of the Inactive files:
    1. Temporarily restore the Inactive filenames, or create an empty file of the same name.
    2. Perform an unqualified Incremental backup. (A Selective backup binds the backed up files to the new mgmtclass, but not the Inactive files.)
    3. Remove the temp files.
    Consider instead changing retention policies within the existing management class, as long as the change is safe to pertain to all the file systems bound to that management class.
Inactive files, restore  In the command line client (dsmc), use the -INActive option.
Inactive files, restore selectively  Restoring one or more Inactive files is awkward in that they all have the same name, and name is the standard way to identify files to restore. You can use the GUI or -PIck option to point out specific instances of Inactive files to be restored. Example of CLI-only:
    'dsmc restore -inactive -pick' then select one file from the list.
    But this requires a human selection process. To accomplish the same thing via a purely command line (batch) operation: First perform a query of the backup files, including the inactive ones. Then invoke the restoral as 'dsmc restore -INActive -PITDate=____ FileName Dest', where -PITDate serves to uniquely identify the instance of the Inactive version of the file. Also use -PITTime, if there was more than one backup on a given day.
See also: -PITDate; -PITTime
Inactive files for a user, identify via Select  SELECT COUNT(*) AS -
    "Inactive files count" FROM BACKUPS -
    WHERE NODE_NAME='UPPER_CASE_NAME' AND -
    FILESPACE_NAME='___' AND OWNER='___' -
    AND STATE='INACTIVE_VERSION'
Inactive Version (Inactive File)  A copy of a backup file in ADSM storage that either is not the most recent version or whose corresponding original object has been deleted from the client file system. For example: you delete a file, then do a backup - the latest backup copy of the file is now the Inactive Version, and would have to be restored from there. Inactive backup versions are eligible for expiration according to the management class assigned to the object. Note that active and inactive files may exist on the same volumes. Query from client:
    'dsmc Query Backup -SUbdir=Yes -INActive {filespacename}:/dir/*'
    (where "-INActive" causes *both* active and inactive versions to be reported).
    See also: Active Version
INACTIVE_VERSION  SQL DB: State value in Backups table for a host-deleted, Inactive file.
    See also: ACTIVATE_DATE
INCLEXCL  TSM server-defined option for clients of all kinds (though the name may lead you to think it's just for Unix), via 'DEFine CLIENTOpt'. Each INCLEXCL contains an Include or Exclude statement in a set of such statements to be applied to the clients using the option set. The Include and Exclude specifications coded in the server logically precede and are additive to client-defined Include and Exclude options. Example:
    DEFine CLIENTOpt INCLEXCL EXCLUDE.FS /home
    See: DEFine CLIENTOpt
INCLExcl  Client System Options file (dsm.sys) option to name the file which contains Include-Exclude specifications. Must be coded within a server stanza. Current status can be obtained via the command 'dsmc Query Option' in ADSM or 'dsmc show options' in TSM. Note that if this file is changed, the client scheduler needs to be restarted to see the change.
Historical: This option was for many years available for use only in Unix clients.
INCLExcl ignored?  See: Include-Exclude "not working"
INclude  Client option to specify files for inclusion in backup processing, archive processing (as of TSM 3.7), image processing, and HSM services; and to also specify the management class to use in storing the files on the server.
    Placement: Unix: Either in the client system options file or, more commonly, in the file named on the INCLExcl option. Other: In the client options file.
    Note that Include applies only to files: you cannot specify that certain directories be included. Code as:
    'INclude pattern...' or
    'INclude pattern... MgmtClass'
    (Note that the INclude option does not provide the .backup and .spacemgmt qualifiers which the EXclude option does.)
    Coding an Include does not imply that other file names are excluded: the rule is that an Include statement assures that files are not excluded, but that other files will be implicitly included.
    Technique suggestion: Rather than have a bunch of management classes and cause client administrators to set up somewhat intricate Include statements, it may be preferable to create multiple Domains on the TSM server with a tailored default management class in each, and then change the client Node definition to use that Domain.
    See also: INCLExcl; INCLUDE.FILE; INCLUDE.IMAGE
INCLExcl not working  See: Include-Exclude "not working"
INCLUDE.ENCRYPT  TSM 4.1 Windows option to include files for encryption processing. (The default is that no files are encrypted.)
    See also: ENCryptkey; EXCLUDE.ENCRYPT
INCLUDE.FILE  Variation on the INclude statement, to include a specified file in backup operations.
INCLUDE.FS  Windows (only) Include spec for Open File Support/Snapshot backups. Note that this spec is not in Unix.
INCLUDE.IMAGE  Variation on the INclude statement, for AIX, HP-UX, and Solaris systems, to include a specified filespace or logical volume in backup operations.
Note that INCLUDE.IMAGE stands alone, being independent of all other Include specifications.
Include-exclude list  A list of INCLUDE and EXCLUDE options that include or exclude selected objects for backup. An EXCLUDE option identifies objects that should not be backed up. An INCLUDE option identifies objects that are exempt from the exclusion rules or assigns a management class to an object or a group of objects for backup or archive services. The include-exclude list is defined either in the file named on the INCLEXCL option of the Client System Options File (Unix systems) or in the client options file. Wildcards are allowed: * ... []
    The include/exclude list is processed from bottom to top, and exits satisfied as soon as a match is found.
    Ref: Installing the Clients
Include-exclude list, validate  ADSMv3: dsmc Query INCLEXCL
    TSM: dsmc SHow INCLEXCL
Include-Exclude list, verify  Via manual, command line action:
    ADSM: 'dsmc Query INCLEXCL' (v3 PTF6)
    TSM: 'dsmc SHOW INCLEXCL'
    There is no way to definitively have the scheduler show you if it is seeing and honoring the include-exclude list, as there is no Action=Query in the server DEFine SCHedule command. The best you can do is have the scheduler invoke the Query Inclexcl command to demonstrate that the include-exclude options set was in effect at the time the schedule was run.
    1. Add to your options file: PRESchedulecmd "dsmc query inclexcl"
    2. Invoke the scheduler to redirect output to a file (as in Unix example 'dsmc schedule >> logfile 2>&1').
    3. Inspect the logfile.
Include-Exclude "not working"  Possible causes:
    - Not coded within a server stanza.
    - Scheduler process not restarted after client options file change.
    - Exclude not coded *before* the file system containing it is named on an Include, remembering that the Include-Exclude list is processed bottom-up.
    - Not supported for your opsys.
- Unix: The InclExcl option must be coded in your dsm.sys file, and it must be within the server stanza you are using; and, of course, the file that it specifies must exist and be properly coded and have appropriate permissions.
    - Perhaps 'DEFine CLIENTOpt' has been done on the server, specifying INCLEXCL options for all clients which, though they logically precede client-defined Include-Exclude options, may interfere with client expectations.
    See also: Include-Exclude list, verify
Include-Exclude options file  For Unix systems: a file, created by a root user on your system, that contains statements which ADSM uses to determine whether to include or exclude certain objects in Backup and Space Management (HSM) operations, and to override the associated management classes to use for backup or archive. Each line contains Include or Exclude as the first line token, and named files as the second line token(s). Include statements may also contain a third token specifying the management class to be used for backup, to use other than the Default Management Class. The file is processed from the bottom, up, and stops processing, satisfied, as soon as it finds a match. The file is named in the Client System Options File (dsm.sys) for Unix systems, but on other systems the Include statements are located in the dsm.opt file itself. An Exclude option can be used to exclude a file from backup and space management, backup only, or space management only. An Include option can be used to include specific files for backup and space management, and optionally specify the management class to be used. Automatic migration occurs only for the Default Management Class; you have to manually incite migration if coded in the include-exclude options file.
    Caution: If you change your Include/Exclude list or file so that a previously included file is now excluded, any pre-existing backup versions of that file are expired the next time an incremental backup is run.
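To illustrate the bottom-up, first-match-wins processing described above, here is a small simulation in plain shell. It is purely illustrative: real TSM patterns (the "..." construct, mgmtclass operands, the exclude.backup/exclude.spacemgmt variants) are richer than these simple shell globs, and a shell '*' here crosses directory boundaries where TSM's does not.

```shell
# Decide include/exclude for a path against rules given in file order,
# e.g. incl_excl_decide /tmp/keep "exclude:/tmp/*" "include:/tmp/keep".
# Rules are scanned bottom-up; the first match wins; unmatched files
# are implicitly included (as in TSM).
incl_excl_decide() {
    _path=$1; shift
    _hit=$(printf '%s\n' "$@" | sed -n '1!G;h;$p' |   # reverse the lines
        while IFS=: read -r _action _pattern; do
            case $_path in
                $_pattern) printf '%s\n' "$_action"; break ;;
            esac
        done)
    printf '%s\n' "${_hit:-include}"
}
```

Note how the later (lower) include rule shields a specific file from the broader exclude above it - the same layering the entry above describes.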
Include-Exclude options file, query  Use the client 'dsmc Query Option' in ADSM or 'dsmc show options' in TSM, and look for "InclExcl:".
Include-exclude order of precedence  As of ADSMv3, Include-Exclude specifications may come from the server as well as the client, and are taken in the following order:
    1. Specifications received from the server's client options set, starting with the highest sequence number.
    2. Specifications obtained from the client options file, from bottom to top.
    Note that, whether from the server or client, Include-Exclude statements are "additive", and cannot be overridden by a Force=Yes specification in the DEFine CLIENTOpt. Do 'dsmc Query Inclexcl' to see the full collection of Include-Exclude statements in effect, in the order in which they are processed during backup and archive operations.
    Ref: Admin Guide "Managing Client Option Files"
    See: DEFine CLIENTOpt; DEFine CLOptset; Exclude; INCLEXCL; Include
-INCRBYDate  Option on the 'dsmc incremental' command to request an incremental backup by date: the client only asks the server for the date and time of the last incremental backup, for comparing against the client file's last-modified (mtime) timestamp. (A Unix inode administrative change (ctime, as via chmod, chown, chgroup) does not count.) In computer science terms, this is almost a "stateless" backup. This method eliminates the time, memory, and transmission path usage involved in capturing a files list from the server in an ordinary Incremental Backup. Because only the last backup date is considered in determining which files get backed up, any OS environment factors which affect the file but do not change its date and time stamps are not recognized. If a file's last changed date and time is after that of the last backup, the file is backed up. Otherwise it is not, even if the file's name is new to the file system.
Because Incrbydate operates by relative date, there obviously must have been a previous complete Incremental backup to have established a filespace last backup date. Files that have been deleted from the file system since the last incremental backup will not expire on the server, because the backup did not involve a list comparison that would allow the client to tell the server that a previously existing file is now gone. Because this backup knows nothing about what was backed up before, it backs up a lot of directories afresh, because their timestamps have changed as their contents have changed - so that may be a time loss detracting from the other gains in this technique, unless changes to files within directories cause the timestamps on the directories to be updated such that a normal incremental would have backed them up anyway. Further things Incrbydate does not do: - Does not rebind backup versions to a new management class if you change the management class. - In Windows, does not back up files whose attributes have changed, unless the modification dates and times have also changed. - Ignores the copy group frequency attribute of management classes: the backup is unconditional. An Incrbydate backup of a whole file system will cause the filespace last backup timestamp to be updated. Prevailing retention rules are honored as usual in an -INCRBYDate backup. Because they do not change the last changed date and time, changes to access control lists (ACL) are not backed up during an incremental by date. Relative speed: In Windows, an Incrbydate backup will be slower than a full incremental backup with journaling active. Recommendation: Incrbydate backups are best suited to file systems with stable populations which are regularly updated, and which have few directories. Mail spool file systems are good candidates. 
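The mtime-versus-last-backup comparison described above can be mimicked with standard Unix tools. An analogy only: TSM keeps the last-backup timestamp in the server database, not in a local stamp file, so the stamp-file approach below is an assumption for illustration.

```shell
# List regular files modified since a recorded stamp file - roughly what
# an incremental-by-date pass selects (ignoring the ctime/ACL caveats
# discussed above).
files_changed_since() {
    # $1 = file system root, $2 = stamp file touched at last backup time
    find "$1" -type f -newer "$2" -print
}
```

This also makes the entry's point visible: a file older than the stamp is skipped even if its name is new to the file system.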
Incremental backup See: dsmc Incremental Incremental backup, file systems to back up See: DOMain option Incremental backup, force when missed by client Run the backup from the client, if you have access. Else create a backup schedule on the server (define schedule) of a small window including the current time, then associate the schedule with the client (DEFine ASSOCiation). "Incremental forever" Often cited as the mantra of the TSM product, it is a capability rather than a dictum. The basic scheme of the product is to back up any new or changed files. You don't necessarily have to ever perform a "full" backup - but of course the cost is having your backups spread over perhaps many tapes (mitigated by Reclamations), which can aggravate restoral times. But you are free to adopt any combination of full and incremental backups as dictated by economics and your restoral objectives. INCRTHreshold TSM 4.2+ option, for Windows. Specifies the threshold value for the number of directories in any journaled file space that might have active objects on the server, but no equivalent object on the workstation. GUI: "Threshold for non-journaled incremental backups" Ref: Windows client manual; TSM 4.2 Informix database backup Have the Informix DBA do a DB export, then have ADSM back up this export. Or use the SQL BackTrack product. See also: TDP for Informix Informix database backup, query 'dsmc query backup /InstanceName/InstanceName/*/*' Initialize tapes See: Label tapes initserv.log TSMv4 server log file which will log errors in initializing the server. inode A data structure that describes the individual files in an operating system. There is one inode for each file. The number of inodes in a file system, and therefore the maximum number of files a file system can contain, is set when the file system is created. Hardlinked files share the same inode. inode number A number that specifies a particular inode in a file system.
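The hard-link point in the inode entries above can be demonstrated on any Unix system with a few lines of Python (illustrative only; `same_inode` is a name made up for this sketch):

```python
import os
import tempfile

def same_inode(path_a: str, path_b: str) -> bool:
    """True if two paths refer to the same inode on the same device,
    i.e. they are hard links to one file."""
    sa, sb = os.stat(path_a), os.stat(path_b)
    return (sa.st_dev, sa.st_ino) == (sb.st_dev, sb.st_ino)

with tempfile.TemporaryDirectory() as d:
    original = os.path.join(d, "file")
    link = os.path.join(d, "link")
    open(original, "w").close()
    os.link(original, link)            # hard link shares the inode
    print(same_inode(original, link))  # -> True
```

A separate copy of the file, by contrast, would get its own inode and report False.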
Insert category 3494 Library Manager category code FF00 for a tape volume added to the 3494 inventory. The 3494 reads the external label on the volume, creates an inventory entry for the volume, and assigns the volume to this category as it stores the tape into a library cell. The "LIBVolume" command set is the only TSM means of detecting and handling Insert volumes. You can have TSM adopt INSERT category cartridges via a command like: 'CHECKIn LIBVolume 3494Name DEVType=3590 SEARCH=yes STATus=SCRatch' Insert category tapes, count Via Unix environment command: 'mtlib -l /dev/lmcp0 -vqK -s ff00' Insert category tapes, list Via Unix environment command: 'mtlib -l /dev/lmcp0 -qC -s ff00' (There is no way to list such tapes from TSM.) Install directory, Windows ADSM: \program files\ibm\adsm TSM: \program files\tivoli\tsm installfsm HSM kernel extension management program, /usr/lpp/adsm/bin/installfsm, as invoked in /etc/rc.adsmhsm by /etc/inittab. Syntax: 'installfsm [-l|-q|-u] Kernel_Extension' where: -l Loads the named kernel extension. -q Queries the named kernel extension. -u Unloads the named kernel extension. Examples: (be in client directory) installfsm -l kext installfsm -q kext installfsm -u kext Msgs: ANS9281E Instant Archive An unfortunate, misleading name for what is in reality a Backup Set - which has nothing to do with the TSM Archive facility. The Instant Archive name derives from the property of the Backup Set that it is a permanent, self-contained, immutable snapshot of the Active files set. See: Backup Set; Rapid Recovery Intel hyperthreading & licensing In some modern Intel processors, fuller use of the computing components is made by multi-threading in hardware, which can currently make a single physical processor function like two. Does this affect IBM's licensing charges, which are based upon processor count? What we are hearing is No.
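The mtlib count commands above amount to filtering the library's volume inventory by category code. As a rough sketch of that idea in Python - not mtlib itself, and the 'volser category' inventory line format used here is an assumption invented for the example:

```python
def count_category(inventory_lines, category_hex: str) -> int:
    """Count inventory lines whose second field matches the given
    category code, compared case-insensitively.  Assumes each line
    is 'volser category ...', a format invented for this sketch."""
    want = category_hex.lower()
    return sum(1 for line in inventory_lines
               if line.split()[1].lower() == want)

# Hypothetical inventory: two Insert (FF00) tapes, one 3590 Scratch.
sample = [
    "VOL001 FF00",  # Insert category
    "VOL002 012E",  # 3590 Scratch category
    "VOL003 FF00",  # Insert category
]
print(count_category(sample, "ff00"))  # -> 2
```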
Interfaces to ADSM Typically the 'adsm' command, used to invoke the standard ADSM interface (GUI), for access to Utilities, Server, Administrative Client, Backup-Archive Client, and HSM Client management. /usr/bin/adsm -> /usr/lpp/adsmserv/ezadsm/adsm. 'dsmadm': to invoke GUI for pure server administration. 'dsmadmc': to invoke command line interface for pure server administration. 'dsm': client backup-archive graphical interface. 'dsmc': client backup-archive command line interface. 'dsmhsm': client HSM Xwindows interface. Interposer An electrical connector adapter which connects between the cable and the SCSI device. Most commonly seen on Fast-Wide-Differential chains, as with a chain off the IBM 2412 SCSI adapter card. The interposer is part FC 9702. Inventory expiration runs interval, define "EXPInterval" definition in the server options file. Inventory Update A 3494 function invoked from the Commands menu of the operator station, to re-examine the tapes in the library and add any previously unknown ones to the library database. The 3494 will accept commands while it is doing this, so you could request a mount during the inventory. Contrast with "Reinventory complete system". IP address of client changes On occasion, your site may need to reassign the IP address of your computer, which is a TSM client. Per discussion in topic "IP addresses of clients", under some circumstances the TSM server has the client's IP address stored in its database, for client schedule purposes. The server would thus be stuck on the old client address, and keep trying and failing (i.e., timeout) to reach the client at its old address. (Or, worse, it might *succeed* in entering into a session with whatever computer has taken the old IP address!) How to get the server to recognize the new IP address?
Given that the IP address is remembered only for nodes associated with a schedule, performing a 'DELete ASSOCiation' should cause the server to forget the IP address of the client and cause it to capture its actual, new IP address after a fresh 'DEFine ASSOCiation' and next scheduler communication with the client. (Note that neither stopping and starting the scheduler on the client, nor performing other interactive functions will cause the server to adopt the new IP address. The TCPCLIENTAddress option might be used to accomplish the change, but the option is actually for multi-homed (multiple ethernet carded) clients, to force use of one of its other IP addresses.) IP address of server See: 'DEFine SERver', HLAddress parameter; TCPServeraddress IP addresses of clients The TSM server stores the IP address of nodes in its database, but ONLY when the address is specified on the HLAddress parameter for the node definition, or for nodes associated with a schedule when running in Server Prompted (SCHEDMODe PRompted) mode. That is, for ordinary client contacts, the IP address used is not important: it is only when the server has to initiate contact with the client that it is important enough to be stored in the server. The IP addresses are readily available in the TSM 3.7 server table "Summary" (up to the number of days specified via Set SUMmaryretention), and are recorded in the Activity Log on message ANR0406I when clients contact the server to start sessions. TSM 5.x now provides the IP addresses in the Nodes table (if the above considerations apply), so you can perform 'Query Node ... F=D' to see them. Otherwise they can be found (not in a very readable format), by the following procedure (using undocumented debugging commands): 1. 'SHOW OBJDir': This will generate a list of objects in the database. Search for "Schedule.Node.Addresses". Note the value for "homeAddr". 2. 
'SHOW NODE ': This will give you a list of the IP-addresses which have registered for running scheduled processes (by running the DSMC SCHEDULE program on the client node). See also: SCHEDMODe; TCPPort IPX/SPX Internetwork Packet Exchange/Sequenced Packet Exchange. IPX/SPX is Novell NetWare's proprietary communication protocol. IPXBuffersize *SM server option. Specifies the size (in kilobytes) of the IPX/SPX communications buffer. Allowed range: 1 - 32 (KB) Default: 32 (KB) IPXSErveraddress Old TSM 4.2 option for Novell clients for using IPX communication methods to interact with the TSM server. IPXSocket *SM server option. Specifies the IPX socket number for an ADSM server. Allowed range: 0 - 32767 Default: 8522 IPXBufferSize server option, query 'Query OPTion' IPXSocket server option, query 'Query OPTion' -Itemcommit Command-line option for ADSM administrative client commands ('dsmadmc', etc.) to say that you want to commit commands inside a macro as each command is executed. This prevents the macro from failing if any command in it encounters "No match found" (RC 11) or the like. See also: COMMIT; dsmadmc iSeries backups There is no TSM client per se for the iSeries. However, there is an interface to TSM based upon the TSM API called the BRMS Application Client. See also: BRMS ISSUE MESSAGE TSM 3.7+ server command to use with return code processing in a script, to issue a message from a server script to help determine where the problem is with a command in the script. Syntax: 'ISSUE MESSAGE Message_Severity Message_Text' Message_Severity Specifies the severity of the message. The message severity indicators are: E = Error. ANR1498E is displayed in the message text. I = Information. ANR1496I is displayed in the message text. S = Severe. ANR1499S is displayed in the message text. W = Warning. ANR1497W is displayed in the message text. Message_Text Specifies the description of the message.
See also: Activity log, create an entry ITSM IBM Tivoli Storage Manager - the name game evolves in 2002. See also: TSM ITSM for Databases Is the third generation name and new licensing scheme for the database backup agents in 2003: - TDP for Informix - TDP for MS SQL - TDP for Oracle ITSM For Hardware See: Tivoli Storage Manager For Hardware "JA" The 7th and 8th chars on a 3592 tape cartridge, identifying the media type, being the first generation of the 3592. Japanese filenames See: Non-English filenames Jaz drives (Iomega) Can be used for ADSMv3 server storage pools, via 'DEFine DEVclass ... DEVType=REMOVABLEfile'. Be advised that Jaz cartridges have a distinctly limited lifetime. See articles about it on the web: search on "Click of Death". JBB Journal-based backups (q.v.). JDB See: Journal-based backups (JBB) JFS buffering? No! The ADSM server bypasses JFS buffering on writes by requesting synchronous writes, using O_SYNC on the open(). There is no problem using JFS for the ADSM server database recovery log and storage pool volumes: this is the recommended method. JNLINBNPTIMEOUT Journal Based Backups Testflag, implemented in the 5.1.6.2 level fixtest, to allow a client to specify a timeout value that the client will wait for a connection to the journal daemon to become free (that is, the currently running jbb session to finish). Use by adding to your Windows dsm.opt file like: testflag jnlinbnptimeout:600 where the numeric value is in seconds. (TSM 5.2 will better address timeouts.) Join (noun) An SQL operation where you specify retrieving data from more than one table at a time by specifying FROM a comma-separated set of table names, using table-qualified column names to report the results. 
Example: SELECT MEDIA.VOLUME_NAME, MEDIA.STGPOOL_NAME, VOLUMES.PCT_UTILIZED FROM MEDIA, VOLUMES Note that processing tends to occur by repeatedly looking through the multiple tables, which is to say that you will experience a multiplicative effect: if the columns being reported occur in multiple tables, you need to use matching to avoid repetitive output, as in: WHERE MEDIA.VOLUME_NAME=VOLUMES.VOLUME_NAME So, if you had 100 volumes, this would prevent the query from reporting 100x100 times for the same set of volumes. See also: Subquery Journal-based backups (JBB) TSM 4.2+: Client journaling improves overall incremental backup performance for Windows NT and Windows 2000 clients (including MS Clustered systems) by using a client-resident journal to track the files to be backed up. The journal engine keeps track of changed files as they are changed, as a journal daemon monitors file systems specified in the jbb config file. When the incremental backup starts, it just backs up the files that the journal has flagged as changed. (Thus, the journal grows in size only as a result of host file update activity: backups only act upon the contents of the journal - they do not add to it.) When objects are processed (backed up or expired) during a journal based backup, the b/a client notifies the journal daemon to remove the journal entries which have been processed - which releases space internal to the journal: the journal size itself is not reduced. In such backups, the server inventory does not need to be queried, and therein lies the performance advantage. Journal-based backups eliminate the need for the client to scan the local file system or query the server to determine which files to process. It also reduces network traffic between the client and server. Because archive and selective backup are not based on whether a file has changed, there is no server inventory query to begin with, and therefore the journal engine offers no advantage.
The journal engine is not used for these operations. Default installation directory: C:\Program Files\Tivoli\TSM\baclient The number of journal entries corresponds with the amount of file system change activity, and the size of journal entries depends primarily on the fully qualified path length of objects which change (so file systems with very deeply nested dir structures will use more space). Every journal entry is unique, meaning that there can only be one entry per file/directory of the file system being journaled (each entry represents the last change activity of the object). When a journal based backup is performed and journal entries are processed by the B/A client (backed up or expired), the space the processed journal db entries occupy is marked as free and will be reused, but the actual disk size of the journal db file never shrinks. Note that this design is intentionally independent of the Windows 2000 NTFS 5 journalled file system so as to be usable in NT as well, with the possibility of expansion to other platforms in the future. The first time you run a backup after enabling the journal service, you will still see a regular full incremental backup performed, done to synchronize the journal database with the TSM server database. Thereafter the backups should use the journaled backup method, unless the journal db and server db become out of sync (for more info, see the PreserveDbOnExit option in the client manual appendix on configuring the journal service). Relative speed: A JBB is typically faster than an Incrbydate backup. Ref: TSM 4.2 Technical Guide redbook; search IBM db for "TSM Journal Based Backup FAQ" (swg21155524). KB Knowledge Base. Vendors often name their customer-searchable databases this. Go to www.ibm.com and use the Search box to find articles in IBM's KB. KEEPMP= TSM 3.7+ server REGister Node parameter to specify whether the client node keeps the mount point for the entire session. Code: Yes or No.
Default: No Ref: TSM 3.7 Technical Guide, 6.1.2.3 See also: MAXNUMMP; REGister Node Kernel extension (server) /usr/lpp/adsmserv/bin/pkmonx, as loaded by: '/usr/lpp/adsmserv/bin/loadpkx -f /usr/lpp/adsmserv/bin/pkmonx', usually by being an entry in /etc/inittab, as put there by /usr/lpp/adsmserv/bin/dsm_update_itab. (See the Installing manual.) NOTE: The need for the kernel extension is eliminated in ADSM 2.1.5, which implements "pthreads", as supported by AIX 4.1.4. Kernel extension (server), load Can be done manually as root via: '/usr/lpp/adsmserv/bin/loadpkx -f /usr/lpp/adsmserv/bin/pkmonx' or: 'cd /usr/lpp/adsmserv/bin' './loadpkx -f pkmonx' but more usually via an entry in /etc/inittab, as put there by /usr/lpp/adsmserv/bin/dsm_update_itab. Alternately you can: '/usr/lpp/adsmserv/bin/rc.adsmserv kernel' Messages: Kernel extension now loaded with kmid = 21837452. Kernel extension successfully initialized. Then you can start the server. Ref: Installing the Server... Kernel extension (server), loaded? As root: '/usr/lpp/adsmserv/bin/loadpkx -q /usr/lpp/adsmserv/bin/pkmonx' May say: "Kernel extension is not loaded" or "Kernel extension is loaded with kmid = 21834876." (See the Installing manual.) Kernel extension (server), unload Make sure all dsm* processes are down on the server, and then do: As root: '/usr/lpp/adsmserv/bin/loadpkx -u /usr/lpp/adsmserv/bin/pkmonx' KERNelmessages Client System Options file (dsm.sys) option to specify whether HSM-related messages issued by the Unix kernel during processing (such as ANS9283K) should be displayed. Specify Yes or No. Because of kernel nature, a change in this option doesn't take effect until the ADSM server is restarted. Default: Yes KEY= In ANR830_E messages, is Byte 2 of the sense bytes from the error, as summarized in the I/O Error Code Descriptions for Server Messages appendix in the Messages manual. 
To further explain some values: 7 Data protect: as when the tape cartridge's write-protect thumbwheel or slider has been thrown to the position which the drive will sense to disallow writing on the tape. Should be accompanied in message by ASC=27, ASCQ=00, and msg ANR8463E. Kilobyte 1,024 bytes. It is typically only disk drive manufacturers that express a kilobyte as 1,000 bytes. Software and tape drive makers typically use a 1,024 value. The TSM Admin Ref manual glossary, and the 3590 Hardware Reference manual, for example, both define a kilobyte as 1,024. L_ (e.g., L1) LTO Ultrium tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. Ref: IBM LTO Ultrium Cartridge Label Specification L1 Ultrium Generation 1 Type A, 100 GB tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. Ref: IBM LTO Ultrium Cartridge Label Specification L2 Ultrium Generation 2 Type A, 200 GB tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. Ref: IBM LTO Ultrium Cartridge Label Specification L3 Ultrium Generation 3 Type A, 400 GB tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. Ref: IBM LTO Ultrium Cartridge Label Specification L4 Ultrium Generation 4 Type A, 800 GB tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. Ref: IBM LTO Ultrium Cartridge Label Specification LA Ultrium Generation 1 Type B, 50 GB tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. Ref: IBM LTO Ultrium Cartridge Label Specification Label all tapes in 3494 library having category code of Insert The modern way is to use the LABEl LIBVolume command, to both label and checkin the volumes.
To just label, issue the following operating system command: 'dsmlabel -drive=/dev/XXXX -library=/dev/lmcp0 -search -keep [-overwrite]' LABEl LIBVolume TSM server command (new with ADSMv3). Allows you to label and checkin a single tape, a range of tapes, or any new tapes in an automated library, all in one easy step. Note that there is no "checkin" phase for LIBtype=MANUAL. (The command task is serial: one volume is labeled at a time.) Syntax: 'LABEl LIBVolume libraryname volname|SEARCH=Yes|SEARCH=BULK [VOLRange=volname1,volname2] [LABELSource=Barcode|Prompt] [CHECKIN=SCRatch|PRIvate] [DEVTYPE=CARTRIDGE|3590] [OVERWRITE=No|Yes] [VOLList=vol1,vol2,vol3 ... -or- FILE:file_name]' The SEARCH option will cause TSM to issue an initial query to compile a list of Insert tapes, which it will then process. (If you thereafter add more tapes to the library while the command is in its labeling phase, those Inserts will not be processed: you will have to reissue the command later.) The operation tends to use available drives rotationally, to even wear. Failing to specify OVERWRITE=Yes for a previously labeled volume results in error ANR8807W. This command will not wait for a drive to become available, even if one or more drives have Idle tapes or are in a Dismounting state. TSM is smart enough to not relabel a volume that is in a storage pool or the volume history file, and had been taken out of the library and put back in (thus getting an Insert category code): msg ANR8816E will result. Did the command succeed? It will end with message ANR0985I; but that message will always indicate success, even if there were problems and no tapes were labeled. Look for adjoining problem messages like ANR8806E. Advisory: Query for a reply number for the Checkin command (make sure the tape you want to check in is in the I/O slot): key in 'q request' and it will ask you to enter a reply # (i.e., 'reply 001'). Your tape should then check in.
Warning: The foolish command will proceed to do its internal CHECKIn LIBVolume even if the labeling fails (msg ANR8806E) - in ADSMv3, at least! Note that a MOVe MEDia will hang if a LABEl LIBVolume is running. Note that if any tape being processed suffers an I/O error (Write), it will be skipped and, in the case of a 3494, its Category Code will remain FF00 (Insert). Msgs: ANR8799I to reflect start; ANR8801I & ANR8427I for each volume processed; ANR0985I; ANR8810I; ANR8806E. Note that there is no logged indication as to the drive on which the volume was mounted. Label prefix, define Via "PREFIX=LabelPrefix" in 'DEFine DEVclass ...' and 'UPDate DEVclass ...'. Label prefix, query 'Query DEVclass Format=Detailed' Label tapes Use the 'dsmlabel' utility. Newly purchased tapes should have been barcoded and internally labeled by the vendor, so there should be no need to run the 'dsmlabel' utility. But you still need to do an ADSM 'CHECKIn' (q.v.). Label tapes in a 3570 Do something like: 'dsmlabel -drive=/dev/rmt1,16 -library=/dev/rmt1.smc' Labelling a tape... Will destroy ALL data remaining on it, because a new EOD will be written immediately after the labels. (It is the standard for writing on tapes in general that an EOD is written at the conclusion of writing.) Disk/disc media are typically different, as in the case of R/W Optical drives. If you inadvertently relabel a data tape, try to restore data on the volume: Run a Q CONTENT volumename to get a list of file names, then try to restore each file individually (make sure to try several files, especially those located at the end of the tape): this may allow you to read past the tape mark. LABELSource Operand in 'LABEl LIBVolume' and other ADSM server commands, used *only* for SCSI libraries, as in "LABELSource=BARCODE". Note that 3494s do not need this operand since the label is ALWAYS the barcode. LAN configuration of 3494 Perform under the operator "Commands" menu of the 3494 operator station.
Lan-Free Backup Introduced in TSM V3.7. Relieves the load on the LAN by introducing the Storage Agent. This is a small TSM server (without a Database or Recovery Log), termed a Storage Agent, which is installed and run on the TSM client machine. It handles the communication with the TSM server over the LAN but sends the data directly to SAN attached tape devices, relieving the TSM server from the actual I/O transfer. See also: Lan-Free Restore; Server-free Ref: TSM 3.7.3+4.1 Technical Guide redbook; TSM 5.1 Technical Guide LAN-Free Data Transfer The optional Managed System for SAN feature for the LAN-free data transfer function effectively exploits SAN environments by moving back-end office and IT data transfers from the communications network to a data network or SAN. LAN communications bandwidth then can be used to enhance and improve service levels for end users and customers. http://www.tivoli.com/products/index/storage_mgr/storage_mgr_concepts.html See also: Network-Free... Lan-Free license file mgsyssan.lic Lan-Free Restore TSM 3.7 feature designed to get around network limitations when clients need to be quickly restored, and they are physically near the server. Client backups occur as usual, over the network each day (optimally, over a Storage Area Network). Once on the server, a "Backup Set" can be produced from the current Active files, constituting a point-in-time bundle on media which can be read at the client site. Then, when a mass restoral is necessary at the client, the compatible media can be transported from the server location to the client location (or could have been sent there as a matter of course each day) and the client can be restored on-site from that bundled image. See: Backup Set LanFree bytes transferred Client Summary Statistics element: The total number of data bytes transferred during a lan-free operation. If the ENABLELanfree client option is set to No, this line will not appear.
LANGuage Definition in the server options file and Windows Client User Options File. Specifies the language to use for help and error messages. Note that whereas the Windows client sports a LANGuage client option, the Unix client has no such option, instead relying upon the LANG environment variable, in that OS's environmental language support. Default: en_US (AMENG) for USA. If the client is running on an unsupported language/locale combination, such as French/Canada or Spanish/Mexico, the language will default to US English. Note that the language option does not affect the Web client, which employs the language associated with the locale of the browser. If the browser is running in a locale that TSM does not support, the Web client displays in US English. Ref: Just about every TSM manual discusses language. LANGuage server option, query 'Query OPTion' Laptop computers, back up See "Backup laptop computers". LARGECOMmbuffers ADSMv3 client system options file (dsm.sys) option (in ADSMv2 was "USELARGebuffers"). Specifies whether the client will use increased buffers to transfer large amounts of data between the client and the server. You can disable this option when your machine is running low on memory. Specify Yes or No. Msgs: ANS1030E See also: MEMORYEFficientbackup Default: Yes for AIX; No for all others Last 8 hours, SQL time ref You can form a "within last 8 hours" spec in a SELECT by using the form: [Whatever_Timestamp] >(CURRENT_TIMESTAMP-8 hours) Last Backup Completion Date/Time Column in 'Query FIlespace Format=Detailed'. This field will be empty if the backup was not a full incremental, or it was but did not complete, or if the filespace involves Archive activity rather than Backup. As of TSM 5.1: If the command specified by the PRESchedulecmd or POSTSchedulecmd option ends with a nonzero return code, TSM will consider the command to have failed. Last Backup Start Date/Time Column in 'Query FIlespace Format=Detailed'. 
This field will be empty if the backup was not a full incremental, or it was but did not complete, or if the filespace involves Archive activity rather than Backup. As of TSM 5.1: If the command specified by the PRESchedulecmd or POSTSchedulecmd option ends with a nonzero return code, TSM will consider the command to have failed. Last Incr Date See: dsmc Query Filespace Last night's volumes See: Volumes used last night LASTSESS_SENT SQL: Field in NODES table is for data sent for *any* TSM client operation, whether it be Archive, Backup, or even just a Query. -LAtest 'dsmc REStore' option to restore the most recent backup version of a file, be it active or inactive. Without this option, ADSM searches only for active files. See also -IFNewer. LB Ultrium Generation 1 Type C, 30 GB tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. Ref: IBM LTO Ultrium Cartridge Label Specification lbtest AIX, NT library test program for use with SCSI libraries using the special device /dev/lb0 or /dev/rmtX.smc. Beware using it when TSM is also going after the library, as TSM will fail when it cannot open it. Where it is: Windows: /utils directory Unix: server/bin directory Syntax: Windows: lbtest -dev lbx.0.0.y UNIX: lbtest <-f batch-input-file> <-o batch-output-file> <-d special-file> <-p passthru-device> Unix example: lbtest -dev /dev/lbxx Windows example: c:>lbtest -dev lbx.0.0.y where x is the SCSI address and y is the port number - values available from the server utilities diagnostic screen. Once in lbtest, select manual test, select open, select return element count and then do what you want. Make sure you have your command window scrolling as the stuff goes by awful fast. Ref: There is no documentation provided by Tivoli for this TSM utility.
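The relative-time predicate shown in the "Last 8 hours, SQL time ref" entry above - some_timestamp > (CURRENT_TIMESTAMP - 8 hours) - has a straightforward analogue outside SQL. A small Python sketch, with invented function and variable names, just to illustrate the arithmetic:

```python
from datetime import datetime, timedelta

def within_last_hours(stamp: datetime, hours: int, now: datetime) -> bool:
    """Python analogue of the SQL predicate
    some_timestamp > (CURRENT_TIMESTAMP - N hours)."""
    return stamp > now - timedelta(hours=hours)

# Hypothetical reference point and timestamps:
now = datetime(2004, 10, 6, 12, 0, 0)
print(within_last_hours(datetime(2004, 10, 6, 7, 0, 0), 8, now))   # -> True
print(within_last_hours(datetime(2004, 10, 5, 12, 0, 0), 8, now))  # -> False
```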
LC Ultrium Generation 1 Type D, 10 GB tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. Ref: IBM LTO Ultrium Cartridge Label Specification LD Ultrium Generation 2 Type B, 100 GB tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. Ref: IBM LTO Ultrium Cartridge Label Specification LE Language Environment. LE Ultrium Generation 2 Type C, 60 GB tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. Ref: IBM LTO Ultrium Cartridge Label Specification Leader data HSM: Leading bytes of data from a migrated file that are replicated in the stub file in the local file system. (The migrated file contains all the file's data; but the leading data of the file is also stored in the stub file for the convenience of limited-access commands such as the Unix 'head' command.) The amount of leader data stored in a stub file depends on the stub size specified. The required data for a stub file consumes 511 bytes of space. Any remaining space in a stub file is used to store leader data. If a process accesses only the leader data and does not modify that data, HSM does not need to recall the migrated file back to the local file system. See also: dsmmigundelete; RESToremigstate LEFT(String,N_chars) SQL function to take the left N characters of a given string. Sample usage: SELECT * FROM ADMIN_SCHEDULES WHERE LEFT(SCHEDULE_NAME,4)='BKUP' See also: CHAR() Legato Is bundled with DEC Unix. LF Ultrium Generation 2 Type D, 20 GB tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser.
The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. Ref: IBM LTO Ultrium Cartridge Label Specification LG Ultrium Generation 3 Type B, future tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. Ref: IBM LTO Ultrium Cartridge Label Specification LH Ultrium Generation 3 Type C, future tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. Ref: IBM LTO Ultrium Cartridge Label Specification LI Ultrium Generation 3 Type D, future tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. Ref: IBM LTO Ultrium Cartridge Label Specification libApiDS.a The *SM API library. In TSM 3.7, lives in /usr/tivoli/tsm/client/api/bin See also: dsmapi* Libraries, multiple of same type, avoiding operator confusion Sites may end up with multiple libraries of the same type. How to keep operators from returning offsite tapes to the wrong library? One approach is color-coding: apply solid-color gummed labels to the cartridges and frame the library I/O portal with the same color, making it all but impossible for the operator to goof. Choose yellow and purple, and put Big Bird and Barney pictures onto each library to enhance operator comprehension. Library A composite device consisting of serial media (typically, tapes), storage cells to house them, and drives to read them. A library has its own, dedicated scratch tape pool (dedicated per category code assignment during Checkin, or the like).
In TSM, a Library is a logical definition: there may be multiple logical Library definitions for a physical library (as needed when a library contains multiple drive types), with each instance having its own, dedicated scratch tape pool. LIBRary TSM keyword for defining and updating libraries. Note that in TSM a library definition cannot span multiple physical libraries. Library (LibName) A collection of Drives for which volume mounts are accomplished via a single method, typically either manually or by robotic actions. LibName comes into play in Define Library such that Checkin will assign desired category codes to new tapes. LibName is used in: AUDit LIBRary, CHECKIn, CHECKOut, DEFine DEVclass, DEFine DRive, DEFine LIBRary. Is target of: DEFine DEVclass and: DEFine DRive Ref: Admin Guide See also: SCSI Library Library, 3494, define Make sure that the 3494 is online. For a basic definition: 'DEFine LIBRary LibName LIBType=349x - DEVIce=/dev/lmcp0' which takes default category codes of decimal 300 (X'12C') for Private and decimal 301 (X'12D') for 3490 Scratch, with 302 (X'12E') implied for 3590 Scratch. For a secondary definition, for another system to access the 3494, you need to define categories to segregate tape volumes so as to prevent conflicting use. That definition would entail: 'DEFine LIBRary LibName LIBType=349x - DEVIce=/dev/lmcp0 PRIVATECATegory=Np_decimal SCRATCHCATegory=Ns_decimal' where the Np and Ns values are unique, non-conflicting Private and Scratch category codes for this Library. (Note that defined category codes are implicitly assigned to library tapes when a Checkin is done.) See also: SCRATCHCATegory Ref: Admin Guide Library, add tape to 'CHECKIn LIBVolume ...' 
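The category codes above appear in decimal in DEFine LIBRary but in hexadecimal elsewhere (e.g., X'12E', or mtlib's -s operand). A minimal shell sketch of the conversion; the helper name is hypothetical, not part of TSM or the library driver:

```shell
# Convert a decimal 3494 category code, as coded on DEFine LIBRary
# (e.g., SCRATCHCATegory=302), to the hexadecimal form used elsewhere,
# such as mtlib's -s option (302 decimal = X'12E').
cat_hex() {
  printf '%X\n' "$1"
}
cat_hex 300   # 12C - default Private category
cat_hex 302   # 12E - implied 3590 Scratch category
```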
Library, audit See: AUDit LIBRary Library, count of all volumes Via Unix command: 'mtlib -l /dev/lmcp0 -vqK' Library, count of cartridges in Convenience I/O Station See: 3494, count of cartridges in Convenience I/O Station Library, count of CE volumes Via Unix command: 'mtlib -l /dev/lmcp0 -vqK -s fff6' Library, count of cleaning cartridges Via Unix command: 'mtlib -l /dev/lmcp0 -vqK -s fffd' Library, count of SCRATCH volumes Via Unix command: 'mtlib -l /dev/lmcp0 -vqK -s 12E' (3590 tapes, default ADSM SCRATCH category code) Library, define drive within 'DEFine DRive LibName Drive_Name DEVIce=/dev/??? [ELEMent=SCSI_Lib_Element_Addr]' Note that ADSM will automatically figure out the device type, which will subsequently turn up in 'Query DRive'. Library, multiple drive types Drives with different device types are supported in a single physical library if you perform a DEFine LIBRary for each type of drive. If distinctively different drive device types are involved (such as 3590E and 3590H), you define two libraries. Then you define drives and device classes for each library. In each device class definition, you can use the FORMAT parameter with a value of DRIVE, if you choose. Living with this arrangement involves the awkwardness of having to apportion your scratch tapes complement between the two TSM library definitions. Ref: Admin Guide "Configuring an IBM 3494 Library for Use by One Server" Library, query 'Query LIBRary [LibName] [Format=Detailed]' Note that the Device which is reported is *not* one of the Drives: it is instead the *library device* by which the host controls the library, rather than the conduit for getting data to and from the library volumes. Does not reveal drives: for the drives assigned to a library you have to do 'Query DRive', which amounts to a bottom-up search for the associated library. Note that there is also an unsupported command to show the status of the library and its drives: 'SHow LIBrary'. 
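The kind of per-category tallying those mtlib counting commands perform can be illustrated offline. A hedged sketch over simplified inventory-listing data; the two-column sample layout and volser values are fabricated for illustration - real 'mtlib -qI' output carries more fields (volser, category code, attribute, class, type), so check your actual output format:

```shell
# Tally volumes per category code from simplified, 'mtlib -qI'-style
# listing data (here: column 1 = volser, column 2 = category code;
# a hypothetical simplification of the real output).
count_by_category() {
  awk '{count[$2]++} END {for (c in count) print c, count[c]}' | sort
}
printf 'VOL001 012E\nVOL002 012E\nVOL003 012C\n' | count_by_category
```

Against the three sample lines this prints one "category count" pair per category, e.g. the two 012E (Scratch-like) volumes on one line and the single 012C volume on another.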
Library, remove tape from 'CHECKOut LIBVolume LibName VolName [CHECKLabel=no] [FORCE=yes] [REMove=no]' Library, SCSI See: SCSI Library Library, use as both automatic and manual Define the library as two libraries: one automatic, the other manual: def library manual libtype=manual def drive manual mtape device=_____ Then when you want to use the drive as a manual library you do: UPDate DEVclass ____ LIBRary=MANUAL And to change back: UPDate DEVclass ____ LIBRary=Automat Library Client A TSM server which accesses a library managed by a separate TSM server, with data transfer over a server-to-server communication path. Specified via DEFine LIBRary ... SHAREd See also: Library Manager Library debugging If a library is not properly responding to *SM, here are some analysis ideas: - Do 'q act' in the *SM server to see if it is reporting an error. - If the opsys has an error log, see if any errors recorded there. If the lib has its own error log, inspect. Maybe the library gripper or barcode reader is having a problem. - Try to identify what changed in the environment to cause the difference since the problem appeared. - Is the library in a proper mode to service requests (i.e., did some operator leave a switch in a wrong position or change configuration?). For example, a 9710 must have the FAST LOAD option enabled. - Examine response outside of *SM, via the mtlib, lbtest or other command appropriate to your library, emulating the operation as closely as possible. Be next to the lib to actually see what's happening. - Check networking between *SM and the library: If a direct connection, check cabling and connectors; If networked and on different subnets, maybe an intermediary router problem, or that the library resides in a subnet which is Not Routed (cannot be reached from outside). - Is there a shortage of tape drives, as perhaps tapes left in drives after *SM was not shut down cleanly? - Perform *SM server queries (e.g., 'q pr') as a sluggish request is pending. 
Do 'Query REQuest' for manual libs to see if a mount is pending. Maybe the server is in polling mode waiting on a tape mount: do 'SHow LIBrary' to see what it thinks. - If CHECKIn is hanging, try it with CHECKLabel=No and see if faster, which skips tape loading and barcode review. Library full situation You can have *SM track volumes that are removed from a full library, if you employ the Overflow Storage Pool method. Ref: Admin Guide, "Managing a Full Library" See: MOVe MEDia, Overflow Storage Pool Library Manager TSM concept for a TSM server which controls device operations when multiple IBM TSM servers share a storage device, per 'DEFINE LIBRary ... SHAREd'. Device operations include mount, dismount, volume ownership, and library inventory. See also library client. Library Manager The PC and application software residing in a 3494 or like robotic tape library, for controlling the robotic mechanism and otherwise managing the library, including the database of library volumes with their category codes. Library Manager, microcode level Obtain at the 3494 control panel: First: In the Mode menu, activate the Service Menu (will result in a second row of selections appearing in menu bar at top of screen). Then: under Service, select View Code Levels, then scroll down to "LM Patch Level", which will show a number like "512.09". Library Manager Control Point (LMCP) The host device name through which a host program (e.g., TSM or the 'mtlib' command) accesses the unique 3494 library that has been associated with that device name, as via AIX SMIT configuration. The LMCP is used to perform the library functions (such as mount and demount volumes). In AIX, the library is accessed via a special device, like /dev/lmcp0. In Solaris, it is more simply the arbitrary symbolic name that you code in the /etc/ibmatl.conf file's first column. 
That is, in Solaris you simply reference the name you chose to stuff into the file: it is not some peculiar name that is generated via the install programs. The "SCSI...Device Drivers: Programming Reference" manual goes into details and helps make this clearer. Library Manager Control Point Daemon (lmcpd) A process which is always running on the AIX system through which programs on that system interact with the one or more 3494 Tape Libraries which that host is allowed to access (per definitions in the 3494 Library Manager). The executable is /etc/lmcpd. In AIX, the lmcpd software is a device driver. In Solaris, it is instead Unix-domain sockets. The /etc/ibmatl.conf defines arbitrary name "handles" for each library, and each name is tied to a unique lmcp_ device in the /dev/ directory, via SMIT definitions. The daemon listens on port 3494, that number having been added to /etc/services in the atldd install process. There is one daemon and one control file in the host, through which communication occurs with all 3494s. This software is provided on floppy disk with the 3494 hardware. Installs into /usr/lpp/atldd. Updates are available via FTP to the storsys site's .devdrvr dir. It used to be started in /etc/inittab: lmcpd:234:once:/etc/methods/startatl But later versions caused it to be folded into the /etc/objrepos and /etc/methods/ database system such that it is started by the 'cfgmgr' that is done at boot time. Restart by doing 'cfgmgr' (or, less disruptively, 'cfgmgr -l lmcp0'); or simply invoke '/etc/lmcpd'. Configuration file: /etc/ibmatl.conf If the 3494 is connected to the host via TCP/IP (rather than RS-232), then a port number must be defined in /etc/services for the 3494 to communicate with the host (via socket programming). By default, the Library Driver software installation creates a port '3494/tcp' entry in /etc/services, which matches the default port at the 3494 itself. If to be changed, be sure to keep both in sync. 
Ref: "IBM SCSI Tape Drive, Medium Changer, and Library Device Drivers: Installation and User's Guide" manual (GC35-0154) See also: /etc/.3494sock; /etc/ibmatl.conf Library Manager Control Point Daemon (lmcpd) PID In /etc/ibmatl.pid; but may not be able to read because "Text file busy". Library not using all drives Examine the following: - Mount limit on device class. - 'SHow LIBrary'; make sure all Online and Available. - If AIX, 'lsdev -C -c tape -H -t 3590' and make sure all Available (do chdev if not). - At library console, assure drives are Available. - If AIX, use errpt to look for hardware problems. - Examine drive for being powered on and not in problem state. Library offline? Run something innocuous like: mtlib -l /dev/lmcp0 -qL If offline, will return: Query operation Error - Library is Offline to Host. and a status code of 255. Library sharing In a LAN+SAN environment, the ability for multiple TSM servers to share the resources of a SAN-connected library. Control communication occurs over the LAN, and data flow over the SAN. One server controls the library and is called the Library Manager Server; requesting servers are called Library Client Servers. (Note that this arrangement does not fully conform to the SAN philosophy, in that peer-level access is absent.) Library sharing contrasts with library partitioning, where the latter subdivides and dedicates portions of the library to each. Ref: Admin Guide, "Multiple Tivoli Storage Manager Servers Sharing Libraries" Library space shortage An often cited issue is the tape library being "full", hindering everything. This typically results from site management not being realistic and skimping on resources, though that jeopardizes the mission of data backup and leaves the administrators in the lurch. Potential remediations: - Expand the library to give it the capacity it needs for reasonable operation. - Go for higher density tape drives and tapes, to increase library capacity without physical expansion. 
- Buy tape racks and employ a discipline which keeps dormant tapes outside the library, available for mounting via request. Library storage slot element address See: SHow LIBINV Library volumes, list Use opsys command: 'mtlib -l /dev/lmcp0 -vqI' for fully-labeled information, or just 'mtlib -l /dev/lmcp0 -qI' for unlabeled data fields: volser, category code, volume attribute, volume class (type of tape drive; equates to device class), volume type. (or use options -vqI for verbosity, for more descriptive output) The tapes reported do not include CE tape or cleaning tapes. LIBType Library type, as operand of 'DEFine LIBRary' server command. Legal types: MANUAL - tapes mounted by people SCSI - generic robotic autochanger 349X - IBM 3494 or 3495 Tape Lib. EXTERNAL - external media management LIBVolume commands The only TSM commands which recognize and handle tapes whose (3494) Category Code is Insert. See: 'CHECKIn LIBVolume', 'CHECKOut LIBVolume', 'LABEl LIBVolume', 'Query LIBVolume', 'UPDate LIBVolume'. Libvolume, remove Use CHECKOut. See also: DELete VOLHistory LIBVOLUMES *SM database table to track volumes which belong to it and which are contained in the named library. Columns: LIBRARY_NAME, VOLUME_NAME, STATUS, LAST_USE, HOME_ELEMENT, CLEANINGS_LEFT Libvolumes, count by Status 'SELECT STATUS,COUNT(*) AS \ "Library Counts" FROM LIBVOLUMES \ GROUP BY STATUS' Libvolumes which are Scratch, count 'SELECT COUNT(*) FROM LIBVOLUMES WHERE STATUS='Scratch' License See also: adsmserv.licenses; dsmreg.lic; Enrollment Certificate Files License, register 'REGister LICense' command. See: REGister LICense License, TSM 4 TSMv4 introduced the Tivoli 'Value-Based Pricing' model, which changed the license options and files: You no longer buy the network enabling license. Instead, the cost of the base server is tiered based on the hardware you are running on. The client license cost is also tiered based on the hardware type and size. 
Client licenses were also split into two flavors: a managed LAN system - which is basically what we had prior to v4.1 - and a managed SAN system. The end result is basically the same, but the accounting is different. License, unregister See: Unregister licenses See also notes under REGister LICense. License audit period, query 'Query STatus', see License Audit Period 'SHow LMVARS' also reveals it. License audit period, set 'Set LICenseauditperiod N_Days' License file ADSMv2: It is /usr/lpp/adsmserv/bin/adsmserv.licenses which is a plain file containing hexadecimal strings generated by invoking the 'REGister LICense' command per the sheet of codes received with your order. (The adsmserv module invokes the outboard /usr/lpp/adsmserv/bin/dsmreg.lic to perform the encoding.) ADSMv3 and TSM: The runtime file is the "nodelock" file in the server directory. CPU dependency: The generated numbers incorporate your CPU ID, and so if you change processors (or motherboard) you must regenerate this file. If to be located in a directory other than the ADSM server code directory, this must be specified to the server via the DSMSERV_DIR environment variable. Ref: Admin Guide; README.LIC file included in your installation License filesets (AIX), list 'lslpp -L' and look for tivoli.tsm.license.cert tivoli.tsm.license.rte License info, get See: LICENSE_DETAILS; 'Query LICense' LICENSE_DETAILS table SQL table added to TSM 4.1. Columns: LICENSE_NAME One of the usual TSM license feature names, as in: SPACEMGMT, ORACLE, MSSQL, MSEXCH, LNOTES, DOMINO, INFORMIX, SAPR3, ESS, ESSR3, EMCSYMM, EMCSYMR3, MGSYSLAN, MGSYSSAN, LIBRARY NODE_NAME Either the name of a Backup/Archive client or the name of a library. LAST_USED The time the library was last initialized or the last time that client session ended using that feature. License Wizard One of the Windows "wizards" (see the Windows server Quick Start manual) See: Unregister licenses LICENSE_DETAILS TSM 4.1 SQL table. 
Columns: LICENSE_NAME Varchar L=10 NODE_NAME Varchar L=64 LAST_USED Last access Timestamp LICENSE_NAME is the name of a license feature, being one of: SPACEMGMT, ORACLE, MSSQL, MSEXCH, LNOTES, DOMINO, INFORMIX, SAPR3, ESS, ESSR3, EMCSYMM, EMCSYMR3, MGSYSLAN, MGSYSSAN, LIBRARY where "MGSYS" is Managed Systems. NODE_NAME will be either the name of a Backup/Archive client or the name of a library. LAST_USED will be set to the time the library was last initialized or the last time that client session ended using that feature. (The datestamp may be more than 30 days ago; an 'AUDit LICense' will not remove the entry.) See also: 'Query LICense' LICenseauditperiod See: License audit period... Licenses ADSMv3: Held in the server directory as file "nodelock". See: nodelock Licenses, audit See: 'AUDit licenses' Licenses, insufficient Archives are denied with msg ANR0438W Backups are denied with msg ANR0439W HSM is denied with msg ANR0447W DRM is denied with msg ANR6750E Licenses, unregister See: Unregister licenses See also notes under REGister LICense. Licenses and dormant clients There is sometimes concern that having old, dormant filespaces hanging around for a dormant client may take up a client license. If your server level is at least 4.1, doing Query LICense will reveal: Managed systems for Lan in use: x Managed systems for Lan licensed: y where the "in use" value is the thing. From the 4.1 Readme: With this service level the following changes to in use license counting are introduced. - License Expiration. A license feature that has not been used for more than 30 days will be expired from the in use license count. This will not change the registered licenses, only the count of the in use licenses. Libraries in use will not be expired, only client license features. - License actuals update. The number of licenses in use will now be updated when the client session ends. An audit license is no longer required for the number of in use licenses to get updated. 
(Sadly, this information was not carried over into the manuals.) The above information was further confused by APAR IC32946. See also: AUDit LICenses; Query LICense Licensing problems Can be caused by having the wrong date in your operating system such that TSM thinks the license is not valid. Lightning bolt icon In web admin interface, in a list of nodes: That is a link to the backup/archive GUI interface for the clients. It means you specified its URL for the Client acceptor piece. Clicked, it should bring up that node's web client. You can use that to perform client functions. For it to work: - The client acceptor and remote client agent must be installed on the node. - The client acceptor must be started but leave the remote client agent alone in manual. - The node must be findable on the network, by name or numeric address. You may need to go into the node and update it with the correct URL for it to work correctly. This gives you a common management point to perform backup/restore procedures. Linux client support for >2 GB files As of TSM 4.2.1, the TSM Linux client can back up large files, as made possible as of Linux kernel 2.4. LINUX support, ADSM (client only) As of 1998/08, a NON-Supported version of the ADSM Linux client was available pre-compiled (no source code) on ftp.storsys.ibm.com FTP server in the /adsm/nosuppt directory: file adsmv3.linux.tar.Z (now gone). IBM says: "The TSM source code is not in the public domain." Reportedly worked well with RedHat 5.0. Back then, there was also: http://bau2.uibk.ac.at/linux/mdw/HOWTO/mini/ADSM-Backup LINUX support, TSM client As of 2000/04/27, a formally supported Linux client is available through the TSM clients site. Installs into /opt/tivoli/tsm/client/. 
File system support, per the README: "The functionality of the Tivoli Storage Manager Linux client is designed and tested to work on file systems of the common types EXT2, NFS (see under known problems and limitations for supported environment), and ISO9660 (cdrom). Backup and archive for other file system types is not excluded. They will be tolerated and performed in compatibility mode. This means that features of other file systems types may not be supported by the Linux client. These file system type information of such file systems will be forced to unknown." The RedHat TSM Client reportedly needs the 4.2.2.1 client level or higher: the 4.1 client does not support the Reiser file system. LINUX support, TSM server Into mid 2003, implementing a TSM Linux server remains problematic: - Requires very specific (often older) kernel levels. - Device support is spotty. LINUX support, TSM web client You may experience a Java error when trying to use the web client interface (via IE 6.0 SP1 with JRE 1.4.2_03). The Unix Client manual, under firewall support, notes that the two TCP/IP ports for the remote workstation will be assigned to two random ports - which may be blocked by Linux's iptables. You'll want to choose two ports and explicitly open them in iptables. For example: In dsm.sys: webports 1582 1583 In /etc/sysconfig/iptables: -A RH-Lokkit-0-50-INPUT -p tcp -m tcp --dport 1582 --syn -j ACCEPT -A RH-Lokkit-0-50-INPUT -p tcp -m tcp --dport 1583 --syn -j ACCEPT and then restart dsmcad and iptables (/etc/rc.d/init.d/iptables restart). LJ Ultrium Generation 4 Type B, future tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. Ref: IBM LTO Ultrium Cartridge Label Specification LK Ultrium Generation 4 Type C, future tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. 
The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. Ref: IBM LTO Ultrium Cartridge Label Specification LL Ultrium Generation 4 Type D, future tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. Ref: IBM LTO Ultrium Cartridge Label Specification LL_NAME SQL: Low level name of a filespace object, meaning the "filename" portion of the path...the basename. Unix example: For path /tmp/xyz, the FILESPACE_NAME="/tmp", HL_NAME="/", and LL_NAME="xyz". (Remember that for client systems where filenames are case-insensitive, such as Windows, TSM stores them as UPPER CASE.) See also: HL_NAME LLAddress REGister Node specification for the client's port number, being a hard-coded specification of the port to use, as opposed to the implied port number discovered by the TSM server during client sessions (which may be specified on the client side via the TCPCLIENTPort option). See also: HLAddress LM Library Manager. LMCP See Library Manager Control Point LMCP Available? 'lsdev -C -l lmcp0' lmcpd See: Library Manager Control Point Daemon. lmcpd, restart '/etc/kill_lmcpd' '/etc/lmcpd' lmcpd, shut down '/etc/kill_lmcpd' lmcpd level 'lslpp -ql atldd.driver' lmcp0 Library Manager Control Point, only for 3494 libraries. lmcp0, define Library Manager Control Point to AIX '/etc/methods/defatl -ctape -slibrary -tatl -a library_name='OIT3494'' LOADDB See "DSMSERV LOADDB". Local Area Network (LAN) A variable-sized communications network placed in one location. It connects servers, PCs, workstations, a network operating system, access methods, and communications software and links. Local file systems See: File systems, local LOCK Admin ADSM server command to prevent an administrator from accessing the server, without altering privileges. 
Syntax: 'LOCK Admin Adm_Name' Note: Cannot be used on the SERVER_CONSOLE administrator id. Inverse: UNLOCK Admin LOCK Node TSM server command to prevent a client node from accessing the server. Syntax: 'LOCK Node NodeName'. A good thing to do before Exporting a node. Inverse: UNLOCK Node lofs (LOFS) "Loopback file system", or "Loopback Virtual File System": a file system created by mounting a directory over another local directory, also known as mount-over-mount. A LOFS can also be generated using an automounter. Under SGI IRIX, an AUTOFS (automount) file system. Loopback file systems provide access to existing files using alternate pathnames. Once such a virtual file system is created, other file systems can be mounted within it without affecting the original file system. An example: mount -t lo -o ro /real/files /anon/ftp/files To check your mount: mount -p Then put the new info from mount -p into your /etc/fstab. See also: all-lofs; all-auto-lofs Log See: Recovery log Log buffer pool See: LOGPoolsize Log command output To log command output, invoke the ADSM server command as in: 'dsmadmc -OUTfile=SomeFilename ...". See also: Redirection of command output Log file name, determine 'Query LOGVolume [Format=Detailed]' Log pinning See: Recovery Log pinning %Logical ADSM v.3 Query STGpool output field, later renamed to "Pct Logical" (q.v.). Logical file A client file stored in one or more server storage pools, either by itself or as part of an aggregate file (small files aggregation). See also: Aggregate file; Physical file Logical occupancy The space required for the storage of logical files in a storage pool. Because logical occupancy does not include the unused space created when logical files are deleted from aggregates (small files aggregation), it may be less than physical occupancy. See also: physical file; logical file Logical volume See: Raw Logical Volume Logical volume backups Available in ADSM 3.7. 
A way to obtain a physical image of the overall volume, rather than traversing the file system contained in the volume. Advantages: - Fast backup and restoral, in not having to diddle with thousands of files. - Minimal TSM db activity: just one entry to account for the single image, not thousands to account for all the files in it. - Simple way to snapshot your system for straightforward point-in-time restorals. Disadvantages: - Image integrity: no way to know or deal with contained files or vendor databases being open or active. Logmode See: Set LOGMode Logmode, query 'Query STatus', look for "Log Mode" near bottom. Logmode, set Set LOGMode Loop mode Term used for invocation of the command line client in interactive mode. See: dsmc LOOP Loopback file system See: lofs LOwmig Operand of 'DEFine STGpool', to define when *SM can stop migration for the storage pool, as a percentage of the storage pool occupancy. Can specify 0-99. Default: 70. To force migration from a storage pool, use 'UPDate STGpool' to reduce the LOwmig value. You could reduce it all the way to 0; but if a backup or like task is writing to the storage pool, the migration task will not end until the backup ends; so a value of 1 may be better as a dynamic minimum. When migration kicks off, it will drain to below this level if CAChe=Yes in your storage pool because caching occurs only with migration, and at that point ADSM wants to cache everything in there. It is also the case that Migration fully operates on the entirety of a node's data, before re-inspecting the LOwmig value; thus, the level of the storage pool may fall below the LOwmig value. See: Migration LOGPoolsize Definition in the server options file. Specifies the size of the Recovery Log buffer pool, in Kbytes. A large buffer pool may increase the rate by which Recovery Log transactions are committed to the database. To see if you need to increase the size of this value, do 'Query LOG Format=Detailed' and look at "Log Pool Pct. 
Wait": if it is more than zero, boost LOGPoolsize. Default: 512 (KB); minimum: 128 (KB) See also: COMMIT Ref: Installing the Server... LOGPoolsize server option, query 'Query OPTion', see LogPoolSize LOGWARNFULLPercent Server option: Specifies the log utilization threshold at which warning messages will be issued. Syntax: 'LOGWARNFULLPercent percentage', where the percentage is that of log utilization at which warning messages will begin. After messages begin, they will be issued for every 2% increase in log utilization until utilization drops below this percentage. Code as: 0 - 98. Default: 90 See also: SETOPT Long filenames in Netware restorals From the TSM Netware client manual: "If files have been backed up from a volume with long name space loaded, and you attempt to restore them to a volume without long name space, the restore will fail." Long-term data archiving See: Archive, long term, issues Long-term data retention See: Archive, long term, issues Lotus Domino Mail server package, backed up by Tivoli Storage Manager for Mail (q.v.). Domino release 5 introduced new backup APIs, exploited by TDP for Lotus Domino. In Domino, every user has her own mail box database, so it can be individually restored. However, you cannot restore just a single document: you have to restore the DB and copy the document over. See also: TDP... Lotus Domino and compression The bytes read/written/transferred messages from TDP for Domino will be the same whether compression is on or off. Those messages are all based on the number of bytes read and do not take into account any compression being done by the TSM API. You would need to query the occupancy on the server to see any difference. Lotus Notes Agent Note that *SM catalogs every document in the Notes database (.NSF file). Low threshold A percentage of space usage on a local file system at which HSM automatically stops migrating files to ADSM storage during a threshold or demand migration process. 
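The warning cadence described above for LOGWARNFULLPercent (a first message at the threshold, then one per additional 2% of log utilization) can be sketched as follows; the function is purely illustrative, not a TSM facility:

```shell
# Print the log-utilization percentages at which warnings would appear,
# per the LOGWARNFULLPercent description: one at the threshold, then
# one for every additional 2% utilization.
warn_points() {
  p=$1
  while [ "$p" -le 100 ]; do
    echo "$p"
    p=$((p + 2))
  done
}
warn_points 90   # prints 90 92 94 96 98 100, one per line
```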
A root user sets this percentage when adding space management to a file system or updating space management settings. Contrast with high threshold. See: dsmmigfs Low-level address Refers to the port number of a server. See also: High-level address; Set SERVERHladdress; Set SERVERLladdress LOwmig Operand of 'DEFine STGpool', to define when *SM can stop migration for the storage pool, as a percentage of the storage pool estimated capacity. When the storage pool reaches the low migration threshold, the server does not start migration of another node's files. Because all file spaces that belong to a node are migrated together, the occupancy of the storage pool can fall below the value you specified for this parameter. You can set LOwmig=0 to permit migration to empty the storage pool. Can specify 0-99. Default: 70. To force migration from a storage pool, use 'UPDate STGpool' to reduce the HIghmig value (with HI=0 being extreme). See also: Cache; HIghmig lpfc0 See: Emulex LP8000 Fibre Channel Adapter LRD In Media table, the Last Reference Date (YYYY-MM-DD HH:MM:SS.000000). LTO Linear Tape - Open. In 1997 IBM formed a partnership with HP and Seagate on an open tape standard called LTO or Linear Tape Open. LTO will be based on Magstar MP. (Conspicuously missing from the partnership is Quantum, the sole maker of DLT drives: LTO was devised as a mid-range tape technology in avoiding paying royalties to Quantum. Quantum subsequently advanced to SuperDLT to compete with LTO.) Employs servo tracking for precise positioning. Comes in two flavors, with different cartridges: Accelis (based upon IBM 3570) and Ultrium (based upon IBM 3590). The Accelis and Ultrium formats use the same head / media track layout / channel / servo technology, and share many common electronic building blocks and code blocks. Accelis is optimized for quick access to data while Ultrium is optimized for capacity. 
Note that Accelis was abandoned in favor of Ultrium, expecting that customers would want higher capacity rather than high performance. Cartridge Memory (LTO CM, LTO-CM) chip is embedded in both Accelis and Ultrium cartridges. A non-contacting RF module, with non-volatile memory capacity of 4096 bytes, provides for storage and retrieval of cartridge, data positioning, and user specified info. Capacity and speed are intended to double in each succeeding generation of the technology. Performance: LTO is streaming technology. If you cannot keep the data flowing at tape speed, it has to stop, back up, and restart to get the tape up to speed again, which makes for a substantial performance penalty. LTO seems, as a product, to be positioned between the competing DLT and the complementary, higher-priced 3590 and STK 9x40. SAN usage: Initially supported via SDG (SAN Data Gateway). Visit: http://lto-technology.com/ http://www.lto-technology.com/newsite/index.html http://www.ultrium.com http://www.storage.ibm.com/hardsoft/tape/lto/index.html http://www.cartagena.com/naspa/LTO1.pdf http://www.overlanddata.com/PDFs/104278-102_A.pdf http://www.ibm.com/storage/europe/pdfs/lto_mag.pdf See also: 3583; Accelis; MAM; TXNBytelimit and tape drive buffers; Ultrium LTO bar code format - Quiet zones (at each end of the bar code). - A start character (indicating the beginning of the label). - A six-character volume label. - A two-character cartridge media-type identifier (L1), which identifies the cartridge as an LTO cartridge ('L') and indicates that the cartridge is the first generation of its type ('1'). - A stop character (indicating the end of the label) When read by the library's bar code reader, the bar code identifies the cartridge's volume label to the tape library. The bar code volume label also indicates to the library whether the cartridge is a data, cleaning, or diagnostic cartridge. LTO cleaning cartridge See: Ultrium cleaning cartridge LTO drive cleaning Seldom required. 
At each tape unload the LTO drives have a small mechanical brush that runs over the heads. This seems to reduce the need for cleaning.
LTO performance See Tivoli whitepaper "IBM LTO Ultrium Performance Considerations". Note that performance can be impaired if the LTO-CM memory chip (aka Medium Auxiliary Memory: MAM) has failed. A worse problem is one which was divulged 2004/09/13, where bad LTO 1 and 2 microcode will cause the CM index to be corrupted. Without the index, the drive has to grope its way through the data to find what it needs to access, and performance is severely impaired. The LTO architecture is designed to automatically re-build this index if it should become corrupted. However, when this corrupted-index condition is detected, performance suffers while the index is re-built, because the tape must be re-read from beginning to end. A corrupted index may be fixed the next time it is used, only to be corrupted again at a future time: installing corrected drive microcode is the only solution. LTO customers should use TapeAlert, which spells out drive problems.
LTO tape errors Can be caused by the cartridge having been dropped. (The LTO cartridges are not as rugged as 3480/3490/3590 tape cartridges.)
LTO tape serial number The barcode may have "SU3689L1", wherein the serial number is "SU3689" - it does not include the "L1".
LTO vs. 3590 An LTO drive is 5 inches tall and roughly twice as long as the data cartridge; the motor is lightweight, and there is no tape 'buffer' between the cart and the internal reel. The motor on a 3590 is much larger and heavier, and there is a vacuum column buffer between the cart and the internal reel. The net result is that the 3590 needs to get one reel or the other up to speed and has several inches of tape to accelerate AND has a much more powerful motor to do it. The LTO drive, with a lighter motor, has no tape buffer and needs to get both reels and all the tape moving.
It is also the case that LTO is designed for streaming: the start-stop operation associated with small files is greatly detrimental to LTO performance (see: Backhitch). See also: LTO vs. 3590 LTO1 drives, IBM Those are 3580 Ultrium 1 drives. See: 3580 LTO-2 (lto2) See: Ultrium 2 LuName server option, query 'Query OPTion' LVM Fixed Area The 1 MB reserved control area on a *SM database volume, as accounted for in the creating 'dsmfmt -db' operation. See also: SHow LVMFA LVSA Logical Volume Snapshot Agent. For making an image backup of a Windows 2000 volume while the volume continues to be available for other processing. TSM will create the OBF (Old Blocks File) there, and perform the backup from there. Default location: C:\TSMLVSA See also: Image Backup; OBF; Open File Support; SNAPSHOTCACHELocation LZ1 IBM's proprietary version of Lempel-Ziv encoding called IBM LZ1. Macintosh, shut down after backups Put into the ADSM prefs file: "SCHEDCOMpleteaction Shutdown" Macintosh backup file names Macintosh has traditionally used the colon character (:) rather than slash (/) or backslash (\) as its directory designation character. Interestingly, this persists into OS X, where the user interface makes the directory character seem to be the usual Unix slash (/); but OS X invisibly translates that to and from its usual colon (:). So, if you do Query CONtent or the like at the TSM server, you will see the actual colons separating file path components. Macintosh client components The following components are in the Macintosh client package: Backup: The interactive GUI for backup, restore, archive, retrieve. ~2.8MB Scheduler daemon: A background appl that operates in sleep mode until it is time to run a schedule, then starts the Scheduler program. ~120KB Scheduler program: Communicates with the server for the next schedule to run, and performs the scheduled action, such as a backup or restore, at the scheduled time. 
~1.5MB
Macintosh disaster recovery Simply take some kind of removable disk (Syquest, ZIP, ...) with enough capacity and put a minimal version of MacOS (with TCP/IP support) and ADSM on it.
Macintosh files, back up from NT Yes, ADSM can do this, via NT "Services for Macintosh". NT can access Macintosh file systems, and from NT you can then back them up. BUT: ADSM version 2 cannot handle the resource fork portion of the files (ADSM v3 can). V.2 restorals thus bring the files back as "flat files". See: Services for Macintosh; USEUNICODEFilenames
Macintosh files, restore to NT The Mac files must be restored to a directory managed by "Services for Macintosh". Also make sure that Services for Macintosh is up and running.
Macintosh icons, effects of moving In the Mac client V3 manual, Chapter 3, page 13, it says: "Simply moving an icon makes the file appear changed. ADSM records the change in icon position to minimize the problem of multiple icons occupying the same space after the files are restored. If only the attributes of a file or folder have changed, and not the data, only the attributes are backed up. You may have multiple versions of the same file with the only difference between them being the icon position or color."
Macintosh OS X scheduler Via dsmcad. It's started from the script /Library/StartupItems/dsmcad/dsmcad when Mac OS X boots. You should see a /usr/bin/dsmcad running. If checking with the GUI client, you'll need to use 'TSM Backup for Administrators' rather than the plain 'TSM Backup': the latter will only show other users' backed up directories, not their files.
MACRO TSM server command used to invoke a user-programmed set of TSM commands, as a package, with variable substitution. Syntax: 'MACRO MacroName [Substitutionvalues]' where the macro file name is case-sensitive and Substitutionvalues fill in percent-signed numbers, in numerical order by invocation order. Example of variables: %1, %2, %3.
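By way of illustration, here is a minimal macro exercising the substitution mechanism just described (the macro file name, node name, and domain used here are hypothetical). Contents of upd-node.mac:

```text
/* upd-node.mac - move a node to another policy domain, then verify. */
/* %1 = node name; %2 = target policy domain.                        */
UPDate Node %1 DOMain=%2
Query Node %1 Format=Detailed
```

Invoked from dsmadmc as 'MACRO upd-node.mac mynode standard', whereby %1 is replaced by "mynode" and %2 by "standard".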
Note that you cannot run a macro via an Administrative Schedule - but you can via a Client Schedule, via ACTion=Macro with OBJects naming the macro...which means that the schedule must be associated with a node and that its dsmc sched process causes the macro to run. (Consider instead using Server Scripts.) Redirection: Works The TSM manuals are obscure as to where macro files are supposed to be located. In actuality, they can be: - In the directory where the dsmadmc command was invoked, whereby you can invoke the macro simply by its base name, as in: MACRO mymacro - In any system directory, whereby you need to invoke the macro by full path name, as in: MACRO /usr/local/adsm/mymacro One convenient practice would be to create a standard macros directory, and then 'cd' there before invoking 'dsmadmc', thus allowing you to invoke the macros with short names. Note that you do not need eXecute permission to be set on macro files, in that ADSM will load and interpret them. An unusual factor is that TSM keeps going back to the macro as it performs it, even if the macro is simple and certainly involves no looping: changing the content of the macro during a "more..." screen transition, for example, will result in an "ANR2000E Unknown command" error message. Ref: Admin Guide chapter "Automating Server Operations", Using Macros See also: /* */; Server scripts Magic Number You will run into occasional TSM server messages referring to "magic number". This amounts to a checksum number which TSM generated and stored in the database at the time it put the file object into its storage pool (wrote it to media), to assure data integrity. When at some time in the future TSM may be called upon to retrieve the object from that media, it generates a checksum from the retrieved file data and checks that it matches what it originally had for the object. 
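The store-time/retrieve-time verification idea can be sketched in a few lines. This is purely illustrative: TSM's actual checksum algorithm and database layout are internal, and CRC32 here is our stand-in.

```python
import zlib

def store(media, db, name, data):
    """Write an object to 'media' and record a checksum
    (a "magic number") for it in 'db' at store time."""
    media[name] = data
    db[name] = zlib.crc32(data)

def retrieve(media, db, name):
    """Read the object back and verify it against the stored checksum;
    a mismatch means the media copy is silently corrupted."""
    data = media[name]
    if zlib.crc32(data) != db[name]:
        raise IOError("checksum mismatch: media copy of %r is corrupted"
                      % name)
    return data
```

The point of the sketch: the hardware and OS may read the bad data without complaint; only the independently stored checksum exposes the discrepancy.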
An error indicates that the data could be read from the media without hardware/OS detection of an error, but nevertheless there is a discrepancy. The data is thus deemed corrupted and hopeless: you need to perform a Restore Volume or the like to get a usable copy of the object. How did the data go bad? The most likely cause is between TSM and the tape head: faulty hardware, erroneous firmware, bad SCSI cables, network infrastructure problems, and the like can all result in bad data ending up on the media.
Magstar Product line acronym: Magnetic storage and retrieval. Name supplanted in 2002 by IBM TotalStorage. See also: IBM TotalStorage; TotalStorage
Magstar MP IBM's name for its 3570 and 3575 technology.
MAILprog Client System Options file (dsm.sys) option to specify who gets mail, and via what mailer program, when a password expires and a new password is generated. Can be used when PASSWORDAccess Generate is in effect. Code within the SErvername section of definitions. Format: "Mailprog /mail/pgmname User_Id" See also: PASSWORDAccess; PASSWORDDIR
MAKESPARSEFILE See: Sparse files, handling of
MAM Medium Auxiliary Memory: An Auxiliary Memory residing on a medium, for example, a tape cartridge. Some tape technologies - e.g., AIT and LTO (Ultrium) - use cartridges equipped with Medium Auxiliary Memory (MAM), a non-volatile memory used to record medium identification and usage info. This is typically accessed via an RF interface and does not require reading the tape itself. In a library not equipped with a mobile MAM reader, it is necessary to load the cartridge into the drive to read the MAM via the drive's MAM reader. Ref: http://www.t10.org/ftp/t10/document.99/99-347r0.pdf
Mammoth tape drive Exabyte 8mm (helical scan) tape drive with SCSI-2 fast interface, wide or narrow, with SE or differential as an option. Capacity: 20 GB, native/uncompressed; 40 GB compressed. Transfer rate: 10.5 GB per hour, native/uncompressed; 360 MB/min compressed rate. Technology is similar to AIT-1.
Mammoth-2 tape drive Exabyte 8mm tape drive (helical scan). Form factor: half-height, 5.25" Capacity: 60 GB Transfer rate: 12 MBps Cartridge tape contains a section of cleaning fabric which the drive uses as needed. Technology is similar to AIT-2.
Managed Server See: Enterprise Configuration and Policy Management
MANAGEDServices Windows client option for having CAD cause the client scheduler, and web client, to run rather than have them hang around as memory-holding processes. Syntax: MANAGEDServices {[schedule] [webclient]} See also: CAD
Management class A policy object that contains a collection of (HSM) space management attributes and backup and archive Copy Groups. The space management attributes contained in a Management Class determine whether HSM-managed files are eligible for automatic or selective migration. The attributes in the backup and archive Copy Groups determine whether a file is eligible for incremental backup and specify how ADSM manages backup versions of files and archived copies of files. The management class is typically chosen for users by the node root administrator (via 'ASsign DEFMGmtclass') but can alternately be selected as the third token on the INCLUDE line in the include-exclude options file, or via the DIRMc Client Systems Option File option, or the ARCHMc 'dsmc archive' command line option. However, automatic migration occurs *only* for the default management class; for the incl-excl named management class you have to manually incite migration.
Management class, choose Is accomplished by specifying the management class as the third token on a client Include option.
Format: Include FileSpec MgmtClassName To have all backups use the management class, code: Include * MgmtClassName To have specific file systems use the management class, do like: Include /fsname/.../* MgmtClassName Ref: Client B/A manual Management class, copy See: COPy MGmtclass Management class, default As the name implies, this is the management class which will be used by default. Can be overridden via the third token on the INCLUDE line in the include-exclude options file. However, automatic migration occurs *only* for the default management class; for the incl-excl named management class you have to manually incite migration. Management class, default, establish 'ASsign DEFMGmtclass DomainName SetName ClassName' To make this change effective you then need to do: 'ACTivate POlicyset DomainName SetName' Management class, define 'DEFine MGmtclass DomainName SetName ClassName [SPACEMGTECH=AUTOmatic| SELective|NONE] [AUTOMIGNOnuse=Ndays] [MIGREQUIRESBkup=Yes|No] [MIGDESTination=poolname] [DESCription="___"]' Note that except for DESCription, all of the optional parameters are Space Management Attributes for HSM. Management class, delete 'DELete MGmtclass DomainName SetName ClassName' Management class, query 'Query MGmtclass [[[DomainName] [SetName] [ClassName]]] [f=d]' See also: Management classes, query Management class, SQL queries It is: CLASS_NAME Management class, update See: UPDate MGmtclass Management class for HSM, select HSM uses the Default Management Class which is in force for the Policy Domain, which can be queried from the client via the dsmc command 'Query MGmtclass'. You may override the Default Management Class and select another by coding an Include-Exclude file, with the third operand on an Include line specifying the Management Class to be used for the file(s) named in the second operand. Management class used by a client 'dsmc query mgmtclass' or 'dsmc query options' in ADSM ('dsmc show options' in TSM). 
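A worked include-exclude fragment following the Format above (the file specs and management class names here are hypothetical). Remember that the client evaluates the list from the bottom up and stops at the first match, so the catch-all Include belongs at the top:

```text
* Bind everything to class STANDARD by default,
* but /home files to class YEARRET; skip core files.
Include *           STANDARD
Include /home/.../* YEARRET
Exclude /home/.../core
```

With this ordering, /home/smith/report matches the /home pattern first (bottom-up) and is bound to YEARRET; anything outside /home falls through to STANDARD.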
Management class used in backup Shows up in 'dsmc query backup', whether via command line or GUI. Management classes, display in detail 'dsmmigquery -M -D' Management classes, query from client 'dsmc Query Mgmtclass [-DETail]' Reports the default management class and any management classes specified on INCLude statements in the Include/Exclude file. Management classes, unused, identify You can perform queries like the following, for Archives and Backups: SELECT DOMAIN_NAME, CLASS_NAME FROM MGMTCLASSES WHERE CLASS_NAME NOT IN (SELECT DISTINCT(CLASS_NAME) FROM ARCHIVES) MANUAL (libtype) See: Manual library Manually Ejected category 3494 Library Manager category code FFFA for a tape volume which was in the inventory but in a re-inventory was not found in the 3494. Thus, the 3494 thinks that someone reached in and removed it. This category is typically induced by having to extricate a damaged tape from the robot. See "Purge Volume" category to eliminate such an entry. Manual library No, it's not a library full of manuals; it's a library whose volumes are to be mounted manually, by people responding to mount messages. It is distinguished by LIBType=MANUAL in DEFine LIBRary; and the tape device will be of "mt" type, rather than "rmt" (*SM driver). A shop running this type of operation will usually have an operations terminal running the *SM administrative client in Mount Mode (dsmadmc -mountmode), simply for the operators to see and respond to mount requests. Outstanding mount requests can be checked via Query REQuest. Such requests are answered with the REPLY command acknowledging a specific request number, to signify that the action requested has been performed by the operator such that *SM can proceed. 
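By analogy with the Archives query above, the Backups variant would be as follows (assuming the standard BACKUPS table, whose CLASS_NAME column records the binding; beware that scanning BACKUPS can be very long-running on a large server):

```sql
SELECT DOMAIN_NAME, CLASS_NAME FROM MGMTCLASSES
 WHERE CLASS_NAME NOT IN
       (SELECT DISTINCT(CLASS_NAME) FROM BACKUPS)
```

A class reported by both queries is bound to no stored object at all, and is a candidate for cleanup.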
Manuals See: TSM manuals "Many small files" problem The name of the challenge where backups involve a large number of small files, which stresses the TSM database due to the heavy updating and number of database entries, and the client's memory and processing power in performing an Incremental backup. See "Database performance" for ways to mitigate the impact on the TSM database and optimize performance. Other possible approaches: - To somewhat reduce Backup time, consider using -INCRBYDate backup, which eliminates getting a long list of files from the server, massaging it in client memory, and then comparing as the file system is traversed. (But see the INCRBYDate entry for side effects.) - Another Backup time reduction scheme: With some client file systems it may be known in what area updating occurs, as in the case of a company doing product testing which creates thousands of results files in subdirectories named by product and date. Here you can tailor your backup to go directly at those directories and skip the rest of the file system, where you know that little or nothing has changed. - Journal-Based Backups may be a good alternative on Windows. - Consider 'dsmc Backup Image' (q.v.), to back up the physical image of a volume (raw logical volume) rather than individually backing up the files within it. - Some customers pre-combine many small files on the client system, as with the Unix 'tar' command or personal computer file bundling packages, thus reducing the quantity to a single bundle file. - If regulations require you to keep files for a certain period, consider using Backup Sets rather than doing full backups. - Consider a "divide and conquer" approach, using parallel backup processes to operate on separate areas of a file system housing many small files, to reduce the overall time to perform the backup. 
You may employ a 'dsmc i' for each major top-level directory, to back up into the same TSM server filespace, or use the VIRTUALMountpoint option to cause the file system to be treated as multiple filespaces. Naturally, this can be effective only if your disk and I/O path can meet the demands. Your retention policies need to be reasonable: don't arbitrarily retain a year's worth of versions, but rather keep as much as is really needed to recover files. Make sure you are running regular, unlimited expirations, else your TSM database will balloon. The backup of small files is also problematic with tape drives with poor start-stop characteristics (see Backhitch). The condition of the directory in which the small files exist can also slow things down: see "Backup performance". Consider turning on client tracing to identify the specific problem area.
Master Drive An informal name for the first, SMC drive in a SCSI library, such as the 3584. (Remove that drive and you suffer ANR8840E trying to interact with the library.)
MATCHAllchar Client option to specify a character to be used as a match-all wildcard character. The default is an asterisk (*).
MATCHOnechar Client option to specify a character to be used as a match-one wildcard character. The default is a question mark (?).
MAX SQL statement to yield the largest number from all the rows of a given numeric column. See also: AVG; COUNT; MIN; SUM
MAXCAPacity Devclass keyword for some devices (principally, File) to specify the maximum size of any data storage files defined to a storage pool categorized by this device class. MAXCAPACITY, if set to other than 0, determines the maximum amount of data ADSM will put to a tape. ESTCAPACITY, if MAXCAPACITY is not set, is an estimate used for some calculations for reclamation and display, but does not determine when a tape is full.
On VM and MVS servers MAXCAPACITY is the maximum amount of data that ADSM will put on a tape, but if the tape becomes physically full, or has certain errors, it will be marked full before it reaches that capacity. The capacity reported by ADSM does not consider compression. If client compression is used, or if the data is not very compressible (backups of zip files, for example) then ADSM will report a full tape with a smaller capacity. Most tape manufacturers give their tape capacity assuming compression (I think normally around 3/1), so if you are sending already compressed data, you will not be able to reach the stated capacities.
MAXCMDRetries Client System Options file (dsm.sys) option to specify the maximum number of times you want the client scheduler to attempt to process a scheduled command which fails. Default: 2 Do not confuse with the Copy Group SERialization parameter, which governs attempts on a busy file, not session reattempts.
Maximum command retries 'Query STatus'
Maximum mounts See: MOUNTLimit
Maximum Scheduled Sessions 'Query STatus' output reflecting the number of schedule sessions possible, as controlled by the 'Set MAXSCHedsessions' command as a percentage of the Maximum Sessions value seen in 'Query STatus'. Default: 50% of Maximum Sessions.
MAXMIGRATORS HSM: New in 4.1.2 HSM client, per the IP22148.README.HSM.JFS.AIX43 file: Starting with this release, dsmautomig starts parallel sessions to the TSM server, allowing more than one file to be migrated at a time. The number of parallel migration sessions is governed by the dsmautomig process-specific option that can be configured in the dsm.sys file: MAXMIGRATORS (default = 1, min = 1, max = 20). Make sure that sufficient resources are available on the TSM server for parallel migration. Avoid setting the MAXMIGRATORS option higher than the number of sessions on the TSM server that can be used for storing data.
maxmountpoint You mean MAXNUMMP (q.v.)
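The Maximum Scheduled Sessions arithmetic can be sketched as follows (the floor rounding is our assumption, not documented behavior; 'Query STatus' shows the server's actual figure):

```python
def max_scheduled_sessions(maxsessions, sched_pct=50):
    """Number of sessions available to the scheduler: the
    'Set MAXSCHedsessions' percentage (default 50) applied to the
    MAXSessions server option value. Floor rounding is assumed."""
    return maxsessions * sched_pct // 100

# With the defaults (MAXSessions 25, 50%), this yields 12.
```

The remainder of the MAXSessions total is left for non-scheduled (client-initiated and administrative) sessions.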
MAXNUMMP TSM 3.7+ server REGister Node, UPDate Node parameter to limit the number of concurrent mount points, per node, for Archive and Backup operations. Prevents a client from taking too many tape drives at one time. Affects parallelization. Code 0 - 999. Default: 1 Warning: A value of 0 will result in an ANS1312E message and immediate termination of a backup/archive session; but restore/retrieve will not be impeded. Warning: Upgrading to 3.7, with its attendant database conversion, results in the MAXNUMMP value being 0! Ref: TSM 3.7 Technical Guide, 6.1.2.3 See also: KEEPMP; MOUNTLimit; Multi-session client; REGister Node
MAXPRocess Operand in 'BAckup STGpool', 'MOVe NODEdata', 'RESTORE STGpool', and 'RESTORE Volume' to parallelize the operation - tempered by the number of tape drives. Note that the "process" implications in the name hark back to the days when server tasks were performed by individual processes: in these modern times, MAXPRocess is figurative and actually governs the number of threads.
MAXRecalldaemons Client System Options file (dsm.sys) option to specify the maximum number of dsmrecalld daemons which may run at one time to service HSM recall requests. Default: 20
MAXRECOncileproc Client System Options file (dsm.sys) option to specify the maximum number of reconciliation processes which HSM can start automatically at one time. Default: 3
MAXSCRatch Operand in 'DEFine STGpool' to govern the use of scratch tapes in the storage pool. Specifies the maximum number of scratch volumes that may be taken for the storage pool, cumulatively. That is, each volume taken from the scratch pool is still known as a scratch volume, as reflected in the Query Volume "Scratch Volume?" value, and will return to the scratch pool when emptied. The MAXSCRatch value is thus the storage pool's quota limit.
Setting MAXSCRatch=0 prevents use of scratch volumes, an intentional special case when you want to have the storage pool use only volumes specifically assigned to it, via 'DEFine Volume'. If MAXSCRatch is greater than 0 and you have also DEFine'd volumes into the storage pool, the DEFine'd volumes will be used first, then scratches. Msgs: ANR1221E
MAXSCRatch, query 'Query STGpool ... Format=Detailed'; look for the value associated with "Maximum Scratch Volumes Allowed".
MAXSCRatch and collocation ADSM will never allocate more than 'MAXSCRatch' volumes for the storage pool: collocation becomes defeated when the scratch pool is exhausted, as ADSM will then mingle clients. When a new client's data is to be moved to the storage pool, ADSM will first try to select a scratch tape, but if the storage pool already has 'MAXSCRatch' volumes then it will select the tape with the lowest utilization in the storage pool.
MAXSessions Server options definition (dsmserv.opt). Specifies the number of simultaneous client sessions. The MAXSessions value is incremented by prompted sessions, polling sessions, and admin sessions. When an attempt is made to prompt a client there is a 1 minute delay for response from that client. The next client to be prompted is not prompted until either the first client responds or the 1 minute delay elapses. So if you have many prompted clients, be sure your schedule starttime duration is large enough to accommodate 1 minute delays. Typically the client will start as soon as prompted, so you may have prompted clients that are not "loaded" and consequently the entire delay is used waiting for a client that is not going to respond. Even if you are maxed out on the MAXSessions value, you can always start more administrative clients. Default: 25 client sessions Ref: Installing the Server...
See also: Multi-session Client; "Set MAXSCHedsessions %sched", whereby part of this total MAXSessions value is devoted to Schedule sessions; SETOPT MAXSessions server option, query 'Query OPTion', see "Maximum Scheduled Sessions". MAXSize STGpool operand to define the maximum size of a Physical file which may be stored in this pool. (Remember that Physical size refers to the size of an Aggregate, not the size of a Logical file from the client file system. See "Aggregates".) Limiting the size of a file eligible for a given pool in a hierarchy causes larger files to skip that storage pool and try the next one down in the hierarchy. If the file is too big for any pool in the hierarchy, it will not be stored. The file's size, as reported by the operating system, is compared to the storage pool's MAXSize value PRIOR TO compression. Value can be specified as "NOLIMIT" (which is the default), or a number followed by a unit type: K for kilobytes, M for megabytes, G for gigabytes, T for terabytes. Examine current values via server command 'Query STGpool Format=Detailed'. Msgs: ANS1310E See also: Storage pool space and transactions MAXThresholdproc Client System Options file (dsm.sys) option to specify the maximum number of HSM threshold migration processes which can start automatically at one time. Default: 3 Maximum sessions, define "MAXSessions" definition in the server options file. Maximum sessions, get 'Query STatus' MB Megabyte: To be considered equal to 1024x1024 = 1,048,576 in TSM. (Note that disk makers base their sizings on 1000, not 1024.) MBps Megabytes per second, a data rate typically used with tape drives. Mbps Megabits per second, a data rate typically associated with data communication lines. Media Access Status Element of Query SEssion F=D report. "Waiting for access to output volume ______ (___ seconds)" may reflect the volume name that the session was waiting for when it started - but that may no longer be the actual volume needed. 
For example: an Archive session fills the disk storage pool in a hierarchy where tape is the next level, and so a migration process is incited...and so the client is waiting on the tape which the migration process is migrating to. Then that tape fills. Migration goes on to a fresh tape, but the archive session still shows waiting for access to the original tape. When neither Query Process nor Query Session F=D show the volume identified in "Waiting for access...", it can be due to a backup of HSM-managed space where that volume is feeding the backup directly from the storage pool rather than the client, as HSM backups operate where the HSM space is on the *SM server. Query Session F=D shows only the output volume, not the implicit input. "Current output volume(s): ______,(470 Seconds)" is an undocumented form, which seems to reflect how long the tape has been idle, as for example when the client is looking for the next candidate file to back up. This impression is reinforced by the Seconds value dropping back to zero periodically. If that HSM backup cannot mount either the input or output volumes for lack of drives, the field will report two "Waiting for mount point..." instances, which looks odd but makes perfect sense.
Media fault message ANR8359E Media fault ... (q.v.)
Media Type IBM 34xx tape cartridges have an external one-character ID, as follows: '1' Cartridge System Tape (CST): 3490 'E' Enhanced Capacity Cartridge System Tape (ECCST): 3490E 'J' Magstar 3590 tape cartridge (HPCT) 'K' Magstar 3590 tape cartridge (EHPCT) See also: CST; ECCST; HPCT
Media TSM db table intended to report volumes managed via the MOVe MEDia cmd. Columns: VOLUME_NAME, STATE (MOUNTABLEINLIB, MOUNTABLENOTINLIB), UPD_DATE (YYYY-MM-DD HH:MM:SS.000000), LOCATION, STGPOOL_NAME, LIB_NAME, STATUS (EMPTY, FILLING, FULL), ACCESS (READONLY, etc.), LRD (YYYY-MM-DD HH:MM:SS.000000). (LRD is Last Reference Date.)
MEDIA1 A less-used designation for 3490 base cartridge technology. See CST.
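The one-character media-type IDs above amount to a simple lookup table; a sketch (the function name is ours):

```python
# External one-character cartridge ID -> media designation,
# per the IBM 34xx table above.
MEDIA_TYPES = {
    '1': "Cartridge System Tape (CST): 3490",
    'E': "Enhanced Capacity Cartridge System Tape (ECCST): 3490E",
    'J': "Magstar 3590 tape cartridge (HPCT)",
    'K': "Magstar 3590 tape cartridge (EHPCT)",
}

def media_type(id_char):
    """Return the media designation for an external cartridge ID."""
    return MEDIA_TYPES.get(id_char, "unknown media type")
```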
MEDIA2 A less-used designation for 3490E cartridge technology. See ECCST.
MEDIA3 A less-used designation for 3590 cartridge technology.
mediaStorehouse 199901 product from Manage Data Inc. which functions as an ADSM proxy client to service backup and restore of network-client data via CORBA wherever the user currently happens to be (based upon userid). www.managedata.com
Media Wait (MediaW) "Sess State" value in 'Query SEssion' for when a sequential volume (tape) is to be mounted to serve the needs of that session with a client and the session awaits completion of that mount. This could mean waiting either for a mount point or a volume in use by another session or process. Another cause is the tape library being unavailable, as in a 3494 in Pause mode. Recorded in the 24th field of the accounting record, and the "Pct. Media Wait Last Session" field of the 'Query Node Format=Detailed' server command. See also: Communications Wait; Idle Wait; SendW; Run; Start
Medium changer, list contents Unix: 'tapeutil -f /dev/____ inventory' Windows: 'ntutil -t tape_ inventory' See: ntutil; tapeutil
Medium Mover (SCSI commands) 3590 tape drive: Allows the host to control the movement of tape cartridges from cell to cell within the ACF magazine, treating it like a mini library of volumes.
Megabyte See: MB
Memory limits See: Unix Limits
Memory-mapped I/O You mean Shared Memory (q.v.)
MEMORYEFficientbackup ADSMv3+ Client User Options file (dsm.opt) option to specify a more memory-conserving algorithm for processing incremental backups, backing up one directory at a time, and using less memory. This obviously occurs at (great) expense of backup performance. Choices: No Your client node uses the faster, more memory-intensive method when it processes incremental backups. Yes Your client node uses the method that uses less memory when processing incremental backups - BUT WITH A BIG PERFORMANCE PENALTY. Note: This option can also be defined on the server.
Msgs: ANS1030E See also: LARGECOMmbuffers Message explanation You can do 'help MsgNumber' to get info about a message. For example: with message ANR8776W, you can simply do 'help 8776'. Message filesets (TSM AIX server) tivoli.tsm.msg.en_US.devices tivoli.tsm.msg.en_US.server tivoli.tsm.msg.en_US.webhelp Message interval "MSGINTerval" definition in the server options file. MessageFormat Definition in the server options file. Specifies the message headers in all lines of a multi-line message. Possible option numbers: 1 - Only the first line of a multi-line message contains the header. 2 - All lines of a multi-line message contain headers. Default: 1 Ref: Installing the Server... MessageFormat server option, query 'Query OPTion' Messages, suppress Use the Client System Options file (dsm.sys) option "Quiet". See also: VERBOSE MGMTCLASSES SQL Table for Management Classes. Columns: DOMAIN_NAME, SET_NAME, CLASS_NAME, DEFAULT, DESCRIPTION, SPACEMGTECHNIQUE, AUTOMIGNONUSE, MIGREQUIRESBKUP, MIGDESTINATION, CHG_TIME, CHG_ADMIN, PROFILE MGSYSLAN Managed System for LAN license. MIC Memory-in-Cassette: Sony's non-volatile memory chip in their AIT cartridge. See: AIT; MAM Microcode, acquire Call 1-800-IBM-SERV and request the latest microcode for your device. Microcode, install Can use tapeutil or ntutil (Tape Drive Service Aids): select "Microcode Load"... - position to equivalent /dev/rmtx and hit Enter; - at "Enter Filename" enter the filename of your new firmware; - press F7 - download of firmware to the drive begins; successful download will be displayed (message "Operation completed successfully!") - press F10 and enter q to exit tapeutil/ntutil. Microcode in tape drive Run /usr/lpp/adsmserv/bin/mttest... 
select 1: manual test select 1: set device special file e.g.: /dev/rmt0 select 20: open select 46: device information or select 37: inquiry MICROSECONDS See: DAYS Microsoft Cluster Server Environment scheduled backups, verify See IBM article swg21109932 Microsoft Exchange See: Exchange; TDP for Exchange MIGContinue ADSMv3 Stgpool keyword to specify whether ADSM is allowed to migrate files that have not exceeded the MIGDelay value. Default: Yes. Because of the MIGDelay parameter, it is now possible for ADSM to complete a migration process and not meet the low migration threshold. This can occur if the MIGDelay parameter value prevents *SM from migrating enough files to satisfy the low migration threshold. The MIGContinue parameter allows system administrators to specify whether ADSM is allowed to migrate additional files. Exploitation note: This setting allows a very nice archival scheme to be implemented. Say you run a time sharing system, and when users leave you archive their home directories as a tar file in a storage pool. But you only want to keep the most recent year's worth of data there, and want anything older to be written to separate tapes that can be ejected from the tape library when they fill. You can set MIGDelay=365 and MIGContinue=No. This will keep recent files in the "current" storage pool and, when you drop the HIghmig value to cause migration to the "oldies" storage pool below it, files more than a year old will go there. Neat. See also: MIGDelay; Migration MIGDelay ADSMv3+ Stgpool keyword to specify the minimum number of days that a file must remain in a storage pool before the file becomes eligible for migration from the storage pool. The number of days is counted from the day that the file was stored in the storage pool or retrieved by a client, whichever is more recent. (The NORETRIEVEDATE server option prevents retrieval date recording.) This parameter is optional. Allowable values: 0 to 9999. 
Default: 0, which means migration is not delayed, which causes migration to be determined purely in terms of occupancy level. See also: MIGContinue; NORETRIEVEDATE MIGFILEEXPiration Client System Options file (dsm.sys) HSM option to specify the number of days that copies of migrated/premigrated files are kept on the server after they are modified on or deleted from the client file system. That is, the no-longer-viable migrated copy of the file in the HSM server is removed while the original remains intact on the client and a new, migrated copy of a modified file may now be present on the ADSM server. Note that the expiration clock starts ticking after reconciliation is run on the file system; and that HSM takes care of its own expiration, rather than it being done in EXPIre Inventory. Default: 7 (days) MIGPRocess Operand of 'DEFine STGpool' and 'UPDate STGpool' to specify the number of processes to be used for migrating files from the (disk) storage pool to a lower storage pool in the hierarchy of storage pools. (You cannot specify this operand on sequential (tape) storage pools, in that tape is traditionally a final destination.) Default: 1 process. Note that it pertains to migrating from a disk storage pool down to tape: you cannot specify migration *from* tape. Migration occurs with one process per node, moving *all* of the data for one node before going on to the data for another node. Nodes are processed in order of decreasing amount of data in the disk storage pool. See APAR IX77884. This means that if only one node session is active, you will get just one migration process, regardless of the MIGPRocess value. %Migr (ADSMv2 server) See: Pct Migr Migrate files (HSM) 'dsmmigrate Filename(s)' migrate-on-close recall mode A mode that causes HSM to recall a migrated file back to its originating file system only temporarily. If the file is not modified, HSM returns the file to a migrated state when it is closed. 
However, if the file is modified, it becomes a resident file. You can set the recall mode for a migrated file to migrate-on-close by using the dsmattr command, or set the recall mode for a specific execution of a command or series of commands to migrate-on-close by using the dsmmode command. Contrast with normal recall mode and read-without-recall recall mode. Migrated file A file that has been copied from a local file system to ADSM storage and replaced with a stub file on the local file system. Contrast with resident file and premigrated file. See also: Leader data; Stub file Migrated file, accessibility 'dsmmode -dataACCess=n' (normal) makes migrated files appear resident, and allows them to be retrieved. 'dsmmode -dataACCess=z' makes migrated files appear to be zero-length, and prevents them from being retrieved. Migrated file, display its recall mode 'dsmattr Filename' Migrated file, set its recall mode (HSM) 'dsmattr -recallmode=n|m|r Filename' where recall mode is one of: - n, for Normal - m, for migrate-on-close - r, for read-without-recall Migrated files, HSM, list from client 'dsmls' 'dsmmigquery -SORTEDMigrated' (this takes some time) Migrated files, HSM, list from server 'Query CONtent VolName ... Type=SPacemanaged' Migrated files, HSM, count In dsmreconcile log. MIgrateserver HSM: Client System Options file (dsm.sys) option to specify the name of the ADSM server to be used for HSM services (file migration - space management). Code at the head of the dsm.sys file, not in the server stanzas. Cannot be overridden in dsm.opt or via command line. Using -SErvername on the command line does not cause MIgrateserver to use that server. Default: server named on DEFAULTServer option. 
Migration A concept which occurs in several places in ADSM: Storage pools: Refers to migrating files from one level to a lower level in a storage pool hierarchy when the Pct Migr value (Query STGpool report) reaches the specified threshold percentage (HIghmig), mitigated by other control values such as MIGDelay and NORETRIEVEDATE. Occurs with one process per node (regardless of the MIGPRocess value), moving *all* of the data for one node before going on to the data for another node - or before again checking the LOwmig value. Nodes are processed in order of decreasing amount of data in the disk storage pool. Priority: Will wait for a Move Data process to complete, and then take a tape drive before any additional waiting Move Data processes start. By using the ADSMv3 Virtual Volumes capability, the output may be stored on another ADSM server (electronic vaulting). HSM: The process of copying a file from a local file system to ADSM storage and replacing the file with a stub file on the local file system. See also: threshold migration; demand migration; selective migration See: DEFine STGpool; HIghmig; LOwmig; MIGDelay; NORETRIEVEDATE Migration, Auto, manually perform for file system (HSM): 'dsmautomig [FSname]' Migration, prevent at start-up To prevent migration from occurring during a problematic TSM server restart, add the following (undocumented) option to the server options file: NOMIGRRECL Migration, storage pool files General ADSM concept of migrating a storage pool's files down to the next storage pool in a hierarchy when a given pool exceeds its high threshold value. Migration, storage pool files, query 'Query STGpool [STGpoolName]' Migration, storage pool files, set The high migration threshold is specified via the "HIghmig=N" operand of 'DEFine STGpool' and 'UPDate STGpool'. The low migration threshold is specified via the "LOwmig=N" operand. 
Note that LOwmig is effectively overridden to 0 when CAChe=Yes is in effect for the storage pool, because ADSM wants to cache everything once migration is triggered. Migration and reclamation As a TSM server pool receives data, the server checks to see if migration is needed. This migration causes cascading checks as the next stgpool in the hierarchy receives data. When the bottom of the storage pool hierarchy is reached, the migration checking thread will initiate reclamation checking against this lowest level stgpool if it is a sequential stgpool. If there are multiple sequential storage pools within the storage pool hierarchy, reclamation processing will start on the lowest hierarchy position and proceed to the next level storage pool in the hierarchy. Migration candidate considerations Too small? A file will not be a (HSM) candidate for migration if its size is smaller than the stub file size (as revealed in 'dsmmigfs query'). Management class proper? As installed, HDM will not migrate files unless they have been backed up. Migration candidates, list (HSM) 'dsmmigquery FSname' Migration candidates list (HSM) A prioritized list of files that are eligible for automatic migration at the time the list is built. Files are prioritized for migration based on the number of days since they were last accessed (atime), their size, and the age and size factors specified for a file system. Note that time of last access is a measure of demand for the file, so is used as a basis rather than modification time. Can be rebuilt by the client root user command: 'dsmreconcile [-Candidatelist] [-Fileinfo]' See: candidates Migration in progress? 'Query STGpool ____ Format=Detailed' "Migration in Progress?" value. Migration not happening That is, migration from a higher level storage pool to a lower one in a storage pool hierarchy is not happening. - The presence of server option NOMIGRRECL will prevent it. 
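The HIghmig/LOwmig trigger behavior described in the storage-pool migration entries above can be sketched as a simple state check. This is an illustrative Python sketch only (the function and parameter names are hypothetical, not TSM's); real TSM migration decisions also weigh MIGDelay, MIGContinue, and caching, as noted above:

```python
def migration_should_run(pct_migr, highmig, lowmig, running):
    """Decide whether storage pool migration should be active.
    pct_migr: current Pct Migr value for the pool (0-100).
    highmig/lowmig: the HIghmig and LOwmig thresholds.
    running: True if a migration process is already underway.
    (With CAChe=Yes, LOwmig is effectively 0, per the note above.)"""
    if not running:
        return pct_migr >= highmig   # migration starts at the high threshold
    return pct_migr > lowmig         # and continues until the low threshold
```

For example, with HIghmig=90 and LOwmig=70, a pool at 95% starts migrating, and an active migration keeps going until the pool drops to 70%.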
Migration not happening (HSM problem) See: HSM migration not happening Migration performance The migration of data from one storage pool to a lower one - particularly to tape - is limited by: - Your collocation specification, which can cause many tapes to be mounted as files are "delivered" to their appropriate places in the next storage pool. - The *SM database is in the middle of the action, so its cache hit ratio performance is important with many small files. - Long mount retention periods can prolong processing in having to wait for an idle tape to be dismounted before the next one can be mounted. - The MOVEBatchsize and MOVESizethresh server option values will govern how much data moves in each server transaction. - The performance of your tape technology is also a factor. - In moving from disk to tape, realize that the conflicting characteristics of the two media can hamper performance... Disk is a bit-serial medium which has to perform seeks to get to data. Tape is a byte-parallel medium which is always ready to write when in streaming mode, where its transfer rate is typically much faster than disk. If the tape has to wait for the disk to provide data, the tape drive is forced into start/stop mode, which particularly worsens throughput in some tape technologies. - With caching in effect, there will be more disk seek time to step over older cached files in migrating new files, while the receiving tape drive waits. See: MOVEBatchsize, MOVESizethresh Migration Priority A number assigned to a file in the Migration Candidates list (candidates file), computed by: - multiplying the number of days since the file was last accessed by the age factor; - multiplying the size of the file in 1-KB blocks times the size factor; - adding those two products to produce the priority score (Migration Priority). This ends up in the first field of the candidates file line. 
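The priority computation just listed reduces to a one-liner; this illustrative Python sketch (function and parameter names are hypothetical) mirrors it:

```python
def migration_priority(days_since_access, size_kb, age_factor, size_factor):
    """HSM migration priority score for one file, per the formula above:
    (days since last access * age factor) + (size in 1-KB blocks * size factor).
    The resulting score is what appears in the first field of the
    candidates file line."""
    return days_since_access * age_factor + size_kb * size_factor
```

So with equal age and size factors of 1, a 2048 KB file untouched for 30 days scores 2078; raising the age factor favors stale files over large ones.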
See: candidates Migration processes, number of Code on "MIGPRocess=N" keyword of 'DEFine STGpool' and 'UPDate STGpool'. Default: 1. See: MIGPRocess Migration storage pool (HSM) Specified via 'DEFine MGmtclass MIGDESTination=StgPl' or 'UPDate MGmtclass MIGDESTination=StgPl'. Default destination: SPACEMGPOOL. Migration vs. Backup, priorities Backups have priority over migration. MIGREQUIRESBkup (HSM) Mgmtclass parameter specifying that a backup version of a file must exist before the file can be migrated. Default: Yes Query: 'Query MGmtclass' and look for "Backup Required Before Migration". See also: Backup Required Before Migration; RESToremigstate MIM (3590) Media Information Message. Sent to the host system. AIX: appears in Error Log. Severity 1 indicates high temporary read/write errors were detected (moderate severity). Severity 2 indicates permanent read/write errors were detected (serious severity). Severity 3 indicates tape directory errors were detected (acute severity). Ref: "3590 Operator Guide" manual (GA32-0330-06) esp. Appendix B "Statistical Analysis and Reporting System User Guide" See also: SARS; SIM MIN SQL statement to yield the smallest number from all the rows of a given numeric column. See also: AVG; COUNT; MAX; SUM MINRecalldaemons Client System Options file (dsm.sys) option to specify the minimum number of dsmrecalld daemons which may run at one time to service HSM recall requests. Default: 3 See also: MAXRecalldaemons MINUTE(timestamp) SQL function to return the minutes value from a timestamp. See also: HOUR(), SECOND() MINUTES See: DAYS Mirror database Define a volume copy via: 'DEFine DBCopy Db_VolName Copy_VolName' MIRRORRead DB server option, query 'Query OPTion' MIRRORRead LOG|DB Normal|Verify Definition in the server options file. Specifies the mode used for reading recovery log pages or database pages. 
Possibilities: Normal: read one mirrored volume to obtain the desired page; Verify: read all mirror volumes for a page every time a recovery log or database page is read, and if an invalid page is encountered, to resync with valid page from other volume (decreases performance but assures readability). This should be in effect when a (standalone) dsmserv auditdb is run. Default: Normal Ref: Installing the Server... MIRRORRead LOG server option, query 'Query OPTion' MIRRORWrite DB server option, query 'Query OPTion' MIRRORWrite LOG|DB Sequential|Parallel Definition in the server options file. Specifies how mirrored volumes are accessed when the server writes pages to the recovery log or database during normal processing. "Sequential" is "conditional mirroring" such that data won't be written to a mirror copy until successfully written to the primary. Default: Sequential for DB; Parallel for LOG Comments: *SM Sequential mirroring *is* better than RAID because of the danger of partial page writes - which *do* occur in the real world as hardware and human defects evidence themselves. RAID will perform the partial writing in parallel, thus resulting in a corrupted database if the writing is interrupted, whereas *SM Sequential mirroring will leave you with a recoverable database - by simple resync, not "recovery". That is, RAID is just as problematic as *SM Parallel mirroring. Mirroring of the *SM database is much debated. You could let the hardware or operating system perform mirroring instead, but you lose the advantages of the *SM application mirroring - which also include being able to put the mirrors on any arbitrary volume, not in a single Volume Group as AIX insists. Ref: Installing the Server... MIRRORWrite LOG server option, query 'Query OPTion' Missed Status in Query EVent output indicating that the scheduled startup window for the event has passed and the schedule did not begin. 
When you have SCHEDMODe PRompted and have a client schedule set up for the node, then it is missed if the server couldn't contact the client within the time window. The dsmsched.log will typically show "Scheduler has been stopped." One mundane cause of Missed is that the client scheduler process already has a (long-running) session underway, as in the case of a backup which runs much longer than expected because of a lot of new data in the file system, which runs well past the start time for the next session. See also: Failed; Schedule, missed Mobile Backups See: Adaptive differencing; SUBFILE* MODE A TSM server Copy Group attribute that specifies whether a backup should be performed for an object that was not modified since the last time it was backed up. (MODE=MODified|ABSolute) Specifying a Management Class with MODE=ABSolute is a technique for performing a full backup of a file system. See also: ABSolute; MODified MODE (-MODE) Client option used in conjunction with Backup Image to specify the type of file system style backup that should be used to supplement the last image backup. Choices: Selective The default. Causes the usual image backup to be performed, to distinguish from the Incremental choice. (The name of this choice is unfortunate in that it invites confusion with the standard TSM Selective backup, which this choice has nothing to do with. The name of this choice should have been "Image".) Incremental Only back up files whose modification timestamp is later than that of the last image backup. This is accomplished via an -INCRBYDate backup, whose nature means that deleted files cannot be detected and head toward expiration on the server, nor can files whose attributes have changed be detected for backup. If there was no prior image backup, this Incremental choice will be ignored as an erroneous specification, and a full image backup will be performed, as if Selective had instead been the choice. 
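The limitation of the Incremental (-INCRBYDate) choice described above - only modification times are compared, so deletions and attribute-only changes go unnoticed - can be sketched as follows (illustrative Python only; the names are hypothetical, not part of the client):

```python
def incrbydate_candidates(mtimes, last_image_time):
    """Select files modified after the last image backup.
    mtimes: dict mapping file name -> modification timestamp.
    Files deleted since the image backup simply aren't in mtimes,
    and attribute-only changes don't move mtime, so neither is
    detected - exactly the limitation noted above."""
    return sorted(name for name, mtime in mtimes.items()
                  if mtime > last_image_time)
```

A file deleted after the image backup never appears in the result, so nothing on the server marks it for expiration.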
See also: dsmc Backup Image MODified A backup Copy Group attribute that indicates that an object is considered for backup only if it has been changed since the last backup. An object is considered changed if the date, size, owner, or permissions have changed. (Note that the file will be physically backed up again only if TSM deems the content of the file to have been changed: if only the attributes (e.g., Unix permissions) have been changed, then TSM will simply update the attributes of the object on the server.) See also: MODE Contrast with: ABSolute See also: SERialization (another Copy Group parameter) Monitoring products See: TSM monitoring products MONTHS See: DAYS Mount in progress Server command: 'SHow ASM' Mount limit See: MOUNTLimit Mount message See: TAPEPrompt Mount point, keep over whole session? The 'REGister Node' operand KEEPMP controls this. Mount point queue Server command: 'SHow ASQ' Mount point wait queue IBM internal term for how ADSM prioritizes server tasks needing tapes. MOVe Datas have a higher priority than some other tasks. Mount points Defined globally via the DEVclass MOUNTLimit; restricted thereunder via the REGister Node parameters KEEPMP and MAXNUMMP, governing the number of mount points available for other sessions. See: KEEPMP; MAXNUMMP; MOUNTLimit Mount points, maximum See: MOUNTLimit Mount points, report active 'SHow MP' Mount request timeout message ANR8426E on a CHECKIn LIBVolume. Mount requests, pending 'Query REQuest' (q.v.). Via Unix command: 'mtlib -l /dev/lmcp0 -qS' Mount requests, service console See: -MOUNTmode Mount Retention Output field in report from 'Query DEVclass Format=Detailed'. Value is defined via MOUNTRetention operand of 'DEFine DEVclass' command. See also: KEEPMP; MAXNUMMP; MOUNTLimit; MOUNTRetention Mount retention period, change See: MOUNTRetention Mount tape Via Unix command: 'mtlib -l /dev/lmcp0 -m -f /dev/rmt? 
-V VolName' # Absolute drive name 'mtlib -l /dev/lmcp0 -m -x Rel_Drive# -V VolName' # Relative drive# (but note that the relative drive method is unreliable). Note that there is no ADSM command to explicitly mount a tape: mounts are implicit by need. Once mounted, it takes 20 seconds for the tape to settle and become ready for processing. See also: Dismount tape Mount tape, time required For a 3590 tape drive: If a drive is free, it takes a nominal 32 seconds for the 3494 robot to move to the storage cell containing the tape, carry the tape to the drive, load the tape, and have it wind within the drive. Wind-on time itself is about 20 seconds. Note that if you have two tape drives and your mount request is behind another which is just starting to be processed, you should expect your mount to take twice as long, or about 64 seconds. To rewind, dismount, mount a new tape in that drive, and position it can take 120 seconds. If a mount is taking an unusually long time, it could mean that the library has a cleaning tape mounted, cleaning the drive. Or the tape could be defective, giving the drive a hard time as it tries to mount the tape. MOuntable DRM media state for volumes containing valid data and available for onsite processing. See also: COUrier; COURIERRetrieve; NOTMOuntable; VAult; VAULTRetrieve MOUNTABLEInlib State for a volume that had been processed by the MOVe MEDia command: the volume contains valid data, is mountable, and is in the library. See also: MOVe DRMedia MOUNTABLENotinlib State for a volume that had been processed by the MOVe MEDia command: the volume may contain valid data, is mountable, but is not in the library (is in its external, overflow location). See msg ANR1425W. See also: MOVe DRMedia Mounted, is a tape mounted in a drive? The 3494 Database "Device" column will show a drive number if the tape is mounted, and a Cell number of "_ K 6", where '_' is the wall number. If the Cell number says "Gripper", the tape is in the process of being mounted. 
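Using the nominal 3590/3494 figures quoted above (roughly 32 seconds per mount cycle when a drive is free; about 120 seconds when a mounted tape must first be rewound and dismounted), a back-of-envelope wait estimator might look like this. Purely an illustrative Python sketch with hypothetical names - actual times vary with library activity, cleaning cycles, and defective media, as noted:

```python
def estimated_mount_wait(requests_ahead=0, drive_occupied=False,
                         mount_cycle=32, swap_cycle=120):
    """Rough seconds until your tape is loaded, per the nominal
    figures above: a free drive costs one mount cycle; a drive
    holding a stale tape costs a rewind/dismount/mount/position
    cycle; each request queued ahead adds another mount cycle."""
    base = swap_cycle if drive_occupied else mount_cycle
    return base + requests_ahead * mount_cycle
```

This reproduces the text's examples: 32 seconds for an immediate mount, about 64 seconds when one request is ahead of yours.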
Mounted volumes Server command: 'SHow ASM' MOUNTLimit (mount limit) Operand in 'DEFine DEVclass', to specify the maximum number of concurrent mounts. Affects BAckup STGpool, etc. It should be set no higher than the number of physical drives you have available. In ADSMv3+, you can specify "MOUNTLimit=DRIVES", and ADSM will then dynamically adjust the MOUNTLimit. Default: 1. -MOUNTmode Command-line option for *SM administrative client commands ('dsmadmc', etc.) to have all mount messages displayed at that terminal. No administrative commands are accepted. See also: -CONsolemode; dsmadmc Ref: Administrator's Reference MOUNTRetention Devclass operand, to specify how long, in minutes (0-9999), to retain an idle sequential access volume before dismounting it. Default: 60 (minutes). The value should be long enough to allow for re-use of same mounted tape within a reasonable time, but not so long that the tape could end up trapped in the drive upon an operating system shutdown which does not give *SM the opportunity to dismount it. (Always shut *SM down cleanly if possible.) Another reason to keep mount retention fairly short is that having a tape left in a drive only delays a mount for a new request, in that the stale tape must be dismounted first: this is a big consideration in restorals, particularly of a large quantity of data as for a whole file system, in which case it would be worth minimizing the MOUNTRetention when such a job runs. Also, the drive mechanism stays on while tape is mounted, so adds wear. Keep mount retention short when collocation is employed, to prevent waiting for dismounts, given the elevated number of mounts involved. But keep the retention value sufficient to cover client think time during file system backups. Msgs: ANR8325I for dismount when MOUNTRetention expires. See also: KEEPMP; MAXNUMMP; MOUNTLimit MOUNTRetention, query 'Query DEVclass Format=Detailed' and look for "Mount Retention" value. Mounts, current 'SHow MP'. 
Or Via Unix command: 'mtlib -l /dev/lmcp0 -qS' for the number of mounted drives; 'mtlib -l /dev/lmcp0 -vqM' for details on mounted drives. Mounts, maximum See: MOUNTLimit Mounts, monitor Start an "administrative client session" to control and monitor the server from a remote workstation, via the command: 'dsmadmc -MOUNTmode'. If having a human operator perform mounts, consider setting up a "mounts operator" admin ID and a shell script which would invoke something to the effect of: 'dsmadmc -ID=mountop -MOUNTmode -OUTfile=/var/log/ADSM-mounts.YYYYMMDD' and thus log all mounts. Ref: Administrator's Reference Mounts, pending Via ADSM: 'Query REQuest' (q.v.). Via Unix command: 'mtlib -l /dev/lmcp0 -qS' Mounts, historical SELECT * FROM SUMMARY WHERE ACTIVITY='TAPE MOUNT' Mounts count, by drive See: 3590 tape mounts, by drive MOUNTWait DEVclass and CHECKIn LIBVolume command operand specifying the number of minutes to wait for a tape to mount, on an allocated drive. Note that this pertains only to the time taken for a tape to be mounted by tape robot or operator once a tape mount request has been issued, and has been honored by the library. Example: a task requires a tape volume which is not in the library. It does not pertain to a wait for a tape *drive* when for example one incremental backup is taking up all tape drives and another incremental backup comes along needing a tape drive. Default: 60 min. Advice: The MOUNTWait value should be larger than the MOUNTRetention to assure that idle volumes have a chance to dismount and free drives before the MOUNTWait time expires. MOVe Data Server command to move a volume's viable data to volume(s) within the same sequential access volume storage pool (default) or a specified sequential access volume storage pool. (MOVe Data cannot be used on DISK devtype (Random Access) storage pools.) 
The source storage pool may be a disk pool, with the target being the defined NEXTstgpool, whereby MOVe Data essentially will accomplish what migration does, but physically rather than logically. Copy storage pool volume contents can only be moved to other volumes in the same copy storage pool: you cannot move copy storage pool data across copy storage pools. MOVe Data can effectively reclaim a tape by compacting the data onto another volume. Syntax: 'MOVe Data VolName [STGpool=PoolName] [RECONStruct=No|Yes] [Wait=No|Yes]' RECONStruct is new with TSM 5.1, and allows the vacated space within aggregates to be reclaimed, thus allowing Move Data to be the equivalent of Reclamation. The reconstruction does incur more time. And, again, this can be done only on sequential access storage pools. The "from" volume gets mounted R/O. By default, data is moved by copying Aggregates as-is: unlike Reclamation, MOVe Data does not reclaim space where logical files expired and were logically deleted from *within* an Aggregate. (Per 1998 APAR IX82232: RECONSTRUCTION DOES NOT OCCUR DURING MOVE DATA: "MOVe Data by design does not perform reconstruction of aggregates with empty space. Although this was discussed during design, it was decided to only perform reconstruction during reclamation. A major reason for this decision was performance as reconstruction of aggregates requires additional overhead that MOVe Data does not; hence requires additional time to complete.") Like Reclamation, MOVe Data brings together all the pieces of each filespace, which means it has to skip down the tape to get to each piece. (The portion of a filespace that is on a volume is called a Cluster.) In addition, if the target storage pool is collocated, each cluster may ask for a new output tape, and TSM isn't smart enough to find all the clusters that are bound for a particular output tape and reclaim them together. 
Instead it is driven by the order of filespaces on the input tape, so the same output tape may be mounted many times. In doing a MOVe Data, *SM attempts to fill volumes, so it will select the most full available volume in the storage pool. Note that the data on the volume will be inaccessible to users until the operation completes. During the move, the 'Query PRocess' "Moved Bytes" reflects the data in uncompressed form. Ends with message ANR1141I (which fails to report byte count). May be preempted by higher priority operation - see message ANR1143W - but may not preempt the lower priority reclamation process (msg ANR2420E). (Move Data has a higher priority on what IBM internally refers to as the Mount point wait queue.) See also: AUDit Volume; NOPREEMPT; Pct Util; Reclamation Move Data, find required volumes Move Data would obviously involve the subject volume itself, and any volumes containing files that spanned into (the front of) or out of (the back of) the volume. This would be identifiable by the Segment number in Query CONtent _volname_, or the corresponding Select, being other than 1/1. For spanning files, you would then have to perform a Content table search on the related segment. (A tape in Filling status would obviously have no span-out-of segment on another volume.) Move Data, offsite volumes When (copy storage pool) volumes are marked "ACCess=OFfsite", TSM knows not to use those volumes, to instead use onsite copy storage pool volumes containing the same data (from the same primary storage pool). Naturally, the files on one offsite volume may be found on any number of onsite volumes, so multiple mounts may be expected, accompanied by a bunch of TSM "think time" between volumes. See also: ANR1173E MOVe Data and caching disk volumes Doing a Move Data on a cached disk pool volume has the effect of clearing the cache. 
This is obvious, when you think about it, as the cache represents data that is already in the lower storage pool in the hierarchy...that data has been "pre-moved". MOVe Data performance Move Data operations can be expected to involve considerable repositioning as the source tape is processed, to skip over full-expired Aggregates. Whether your tape technology is good at start-stop operations will affect your throughput. See also: BUFPoolsize; MOVEBatchsize; MOVESizethresh MOVe DRMedia DRM server command to move disaster recovery media offsite and back onsite. Will eject the volumes out of the library before transitioning the volumes to the destination state. Syntax: 'MOVe DRMedia VolName [WHERESTate=MOuntable| NOTMOuntable|COUrier| VAULTRetrieve|COURIERRetrieve] [BEGINDate=date] [ENDDate=date] [BEGINTime=time] [ENDTime=time] [COPYstgpool=StgpoolName] [DBBackup=Yes|No] [REMove=Yes|No|Bulk] [TOSTate=NOTMOuntable| COUrier|VAult|COURIERRetrieve| ONSITERetrieve] [WHERELOcation=location] [TOLOcation=location] [CMd=________] [CMDFilename=file_name] [APPend=No|Yes] [Wait=No|Yes]' Do not do a MOVe DRMedia where a MOVe MEDia is called for. REMove=BUlk is not supposed to result in a Reply required on SCSI libraries, but may: the workaround is Wait=Yes. MOVe MEDia ADSMv3 command to deal with a full library by moving storage pool volumes to an external "overflow" location, typically named on the OVFLOcation operand of Primary and Copy Storage Pools. (Think "poor man's DRM".) Unlike with Checkout, the volume remains requestable and ultimately mountable, via an outstanding mount request. (Note that, internally, MOVe MEDia actually performs a Checkout Libvolume, as indicated in its ANR6696I message.) 
Syntax: 'MOVe MEDia VolName STGpool=PoolName [Days=NdaysSinceLastUsage] [WHERESTate=MOUNTABLEInlib| MOUNTABLENotinlib] [WHERESTATUs=FULl,FILling,EMPty] [ACCess=READWrite|READOnly] [OVFLOcation=________] [REMove=Yes|No|Bulk] [CMd="command"] [CMDFilename=file_name] [APPend=No|Yes] [CHECKLabel=Yes|No]' By default, moving a volume out of the library causes it to be made ReadOnly, and moving it back into the library causes it to be made ReadWrite. If you are moving a volume back into a library (MOUNTABLENotinlib) and it is not empty, you must specify WHERESTATUs=FULl for the command to work, else get ANR6691E error. OVFLOcation can be used to override that specification had by the storage pool. Do not do a MOVe MEDia where a MOVe DRMedia is called for. This command moves whole volumes, not the data within them. Note that a MOVe MEDia will hang if a LABEl LIBVolume is running. After doing MOVe MEDia to move the volume back into the library: - The volume will be READWrite, rather than the READOnly that is conventional for a moved-out volume; - Query MEDia no longer shows the volume (Query Volume does), until CHECKIn is done; - You must do a CHECKIn LIBVolume to get the volume back into play. What happens when there are more than 10 tapes to go to the 3494 Convenience I/O Station? TSM moves one at a time, then an Intervention Required shows up ("The convenience I/O station is full"): when you empty the I/O station, the Int Req goes away, and TSM resumes ejecting tapes. No indication of the condition shows up in the Activity Log. Watch out for ANR8824E message condition where the request to the library is lost: the volume will probably have actually been ejected from the library, but the MOVe MEDia updating of its status to MOUNTABLENotinlib would not have occurred, leaving it in an in-between state. 
Msgs: ANR8762I; ANR2017I; ANR0984I; ANR0609I; ANR0610I; ANR6696I; ANR8766I; ANR6683I; ANR6682I; ANR0611I; ANR0987I (completion) See also: Overflow Storage Pool; OVFLOcation; Query REQuest Ref: Admin Guide, "Managing a Full Library" MOVe NODEdata TSM 5.1+ server command to move data for all filespaces for one or more nodes. As with the 'MOVe Data' command, when the source storage pool is a primary pool, you can move data to other volumes within the same pool or to another primary pool; but when the source storage pool is a copy pool, data can only be moved to other volumes within that copy pool (so the TOstgpool parameter is not usable). This command can operate upon data in a storage pool whose data format is NATIVE or NONBLOCK. As of 2003/11 the Reference Manual fails to advise what the Tech Guide does: that the Access mode of the volumes must be READWRITE or READONLY, which precludes OFFSITE and any possibility of onsite volumes standing in for the offsite vols. Cautions: As of 2003/05, the command may report success though that was not the case, as in specifying a non-existent filespace. Ref: TSM 5.1 Technical Guide MOVEBatchsize Definition in the server options file. Specifies the maximum number of client files that can be grouped together in a batch within the same server transaction for storage pool backup/restore, migration, reclamation, or MOVe Data operations. Specify 1-1000 (files). Default: 40 (files). TSM: If the SELFTUNETXNsize server option is set to Yes, the server sets the MOVEBatchsize option to its maximum value to optimize server throughput. Beware: A high value can cause severe performance problems in some server architectures when doing 'BAckup DB'. MOVEBatchsize, query 'Query OPTion'; look for "MoveBatchSize". MOVESizethresh Definition in the server options file.
Specifies a threshold, in megabytes, for the amount of data moved as a batch within the same server transaction for storage pool backup/restore, migration, reclamation, or MOVe Data operations. Specify 1-500 (MB). Default: 500 (megabytes). TSM: If the SELFTUNETXNsize server option is set to Yes, the server sets the MOVESizethresh option to its maximum value to optimize server throughput. MOVESizethresh and MOVEBatchsize Server data is moved in transaction units whose capacity is controlled by the MOVEBatchsize and MOVESizethresh server options. MOVEBatchsize specifies the number of files that are to be moved within the same server transaction, and MOVESizethresh specifies, in megabytes, the amount of data to be moved within the same server transaction. When either threshold is reached, a new transaction is started. MOVESizethresh, query 'Query OPTion'; seek "MoveSizeThresh". MP1 Metal Particle 1 tape oxide formulation type, as used in the 3590. Lifetime: According to Imation studies (http://www.thic.org/pdf/Oct00/ imation.jgoins.001003.pdf) "All Studies Conclude that Advanced Metal Particle (MP1) Magnetic Coatings Will Achieve a Projected Magnetic Life of 15-30 Years. Media will lose 5% - 10% of its magnetic moment after 15 years. Media resists chemical degradation even after direct exposure to extreme environments." MPTIMEOUT TSM4.1 server option for 3494 sharing. Specifies the maximum time in seconds the server will retry before failing the request. The minimum and maximum values allowed are 30 seconds and 9999 seconds. Default: 30 seconds See also: 3494SHARED; DRIVEACQUIRERETRY MSCS Microsoft Cluster Server. MSGINTerval Definition in the server options file. Specifies the number of minutes that the ADSM server waits before sending subsequent messages to a tape operator requesting a tape mount, as identified by the MOUNTOP option. Default: 1 (minute) Ref: Installing the Server...
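The MOVEBatchsize / MOVESizethresh interplay described above can be sketched in a few lines. This is a hypothetical Python illustration of the rule (not TSM code): files accumulate in one transaction until either the file-count or the megabyte threshold is reached, at which point a new transaction starts. The function name and defaults mirror the option defaults (40 files, 500 MB).

```python
def batch_files(file_sizes_mb, movebatchsize=40, movesizethresh=500):
    """Group files into transaction-sized batches, illustrating how
    MOVEBatchsize (file count) and MOVESizethresh (megabytes) bound
    a server transaction: whichever threshold is hit first closes
    the current batch and a new one begins."""
    batches, current, current_mb = [], [], 0
    for size in file_sizes_mb:
        current.append(size)
        current_mb += size
        if len(current) >= movebatchsize or current_mb >= movesizethresh:
            batches.append(current)
            current, current_mb = [], 0
    if current:
        batches.append(current)
    return batches
```

For example, 100 files of 1 MB each batch by count (40, 40, 20), while two 300 MB files batch by size (one 600 MB transaction, closed when the 500 MB threshold is exceeded).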
MSGINTerval server option, query 'Query OPTion' MSI (.msi file suffix) Designates the Microsoft Software Installer. Note that such files are on the CD-ROM, not in the online download area (which has .exe, .TXT, and .FTP files). If you copy the files from the CD for alternate processing, be aware that Microsoft does not support running an MSI from a mapped network drive when you are connected to a server via remote desktop to a terminal server. MSI (Microsoft Installer) return codes See item 21050782 on the IBM web site ("Microsoft Installer (MSI) Return Codes for Tivoli Storage Manager Client & Server"). msiexec command Invokes the Microsoft Software Installer as for example msiexec /i "Z:\tsm_images\TSM_BA_Client \IBM Tivoli Storage Manager Client.msi" to install from the CD-ROM or network drive containing the installation image. See: Windows client manual mt See: /dev/mt MT0, MT1 Tape drive identifiers on Windows 2000. Example: MT0.0.0.2 for a 3590E drive in a 3494 library. mt_._._._ Designation for a tape drive in a Windows configuration, using Fibre Channel, as in mt0.0.0.5, where the encoding means "magnetic tape device, Target ID 0, Lun 0, Bus 0", with the final digit being auto assigned by Windows based on the time of first detection. mtadata Exchange server: Message Transfer Agent data, as in \exchsrvr\mtadata mtevent Command provided with 3494 Tape Library Device Driver, being an interface to the MTIOCLEW function, to wait for library events and display them. Usage: mtevent -[ltv?] -l[filename] Library special filename, i.e. "/dev/lmcp0". -t[timeout] Wait for asynchronous library event, for the specified # of seconds. If omitted, the program will wait indefinitely. -? this help text. NOTE: The -l argument is required. mtlib Command provided with 3494 Tape Library Device Driver to manually interact with the Library Manager. For environments: AIX, SGI, Sun, HP-UX, Windows NT/2000. Do 'mtlib -\?'
to get usage info - but beware that its output fails to show the legal combinations of options as the Device Drivers manual does. -L is used to specify the name of a file containing the volsers to be processed - and only with the -a and -C operands. This is handy for resetting Category Code values in a 3494 library, via something like: 'mtlib -l /dev/lmcp0 -C -L filename -t"012C"' -v (verbose) will identify each element of the output, which makes things clearer than the "quick" output which is produced in the absence of the -v option. Specify category codes as hex numbers. (Remember that this is a library physical command: it knows nothing about TSM or what is defined in your TSM system.) If the command fails because "the library is offline to the host", it indicates either that the host is not defined in the 3494's LAN Hosts allowance list, or that the host is not on the same subnet as the 3494 in the unusual case that the subnet is defined as Not Routed. A mount (-m) may take a considerable time and then yield: "Mount operation Error - Internal error" due to the tape being problematic, but the mount will probably work. Ref: "IBM SCSI Tape Drive, Medium Changer, and Library Device Drivers: Installation and User's Guide" (GC35-0154) mttest Undocumented command for performing ioctl query and set operations on a tape drive. /usr/lpp/adsmserv/bin/mttest. Syntax: 'mttest <-f batch-input-file> <-o batch-output-file> <-d special-file>' MTU Maximum Transmission Unit: the hardware buffer size of an Ethernet card, as revealed by 'netstat -i'. This is the maximum size of the frame/packet that can be transmitted by the adapter. (Larger packets need to be subdivided to be transmitted.) The standard Ethernet MTU size is 1500. Note that this maximum packet size is a constraining factor for processes which use ethernet. For example, a single process can max out a 10Mb ethernet card, but it can only drive a 100Mb card about 2.5x faster because the measly packet size is so constraining.
To make full use of higher-speed ethernets, then, one must have multiple processes feeding them. (10Mb, 100Mb, and gigabit ethernet all use the same format and frame size.) See: TCPNodelay Multi-homed client See: TCPCLIENTAddress Multi-session Client (Multi session client) TSM 3.7 facility which multi-threads, to start multiple sessions, in order to transfer data more quickly. This will work for the following program components: Backup-archive client (including Enterprise Management Agent, formerly Web client) Backup and Archive functions. This new functionality is completely transparent: there is no need to switch it on or off. The TSM client will decide if a performance improvement can be gained by starting an additional session to the server. This can result in as many as five sessions running at one time to read files and send them to the server. (So says the B/A client manual, under "Performing Backups Using a GUI", "Displaying Backup Processing Status".) Types of threads: - Compare: For generating the list of backup or archive candidate files, which is handed over to the Data Transfer thread. There can be one or more simultaneous Compare threads. - Data Transfer: Interacts with the client file system to read or write files in the TSM operation, performs compression/decompression, handles data transfer with the server, and awaits commitment of data sent to the server. There can be one or more simultaneous Data Transfer threads. - Monitor: The multi-session governor. Decides if multiple sessions would be beneficial and initiates them. The number of sessions possible is governed by the RESOURceutilization client option setting and server option MAXSessions. Mitigating factors: Using collocation, only one data transfer session per file space will write to tape at one time: all other data transfer sessions for the file space will be in Media Wait state.
Under TSM 3.7 Unix, with "PASSWORDAccess Generate" in effect, a non-root session is single-threaded because the TCA does not support multiple sessions. Multi-session Client is supported with any server version; but if the server is below 3.7, the limit is 2 sessions. Considerations: Multiple accounting records for multiple simultaneous sessions from one command invocation. Ref: TSM 3.7 Technical Guide, 6.1 See also: MAXNUMMP; MAXSessions; RESOURceutilization; TCA; Threads, client Multi-Session Restore TSM 5.1 facility which allows the backup-archive clients to perform multiple restore sessions for No Query Restore operations, increasing the speed of restores. (Both server and client must be at least 5.1.) This is similar to the multiple backup session feature. Elements: - RESOURceutilization parameter in dsm.sys - MAXNUMMP setting for the node definition in the server - MAXSessions parameter in dsmserv.opt The efficacy of MSR is obviously limited by the number of volumes which can be used in parallel. From an IBM System Journal article: "During a large-scale restore operation (e.g., entire file space or host), the TSM server notifies the client whether additional sessions may be started to restore data through parallel transfer. The notification is subject to configuration settings that can limit the number of mount points (e.g., tape drives) that are consumed by a client node, the number of mount points available in a particular storage pool, the number of volumes on which the client data are stored, and a parameter on the client that can be used to control the resource utilization for TSM operations. The server prepares for a large-scale restore operation by scanning database tables to retrieve information on the volumes that contain the client's data. Every distinct volume found represents an opportunity for a separate session to restore the data. 
The client automatically starts new sessions, subject to the afore-mentioned constraints, in an attempt to maximize throughput." Additional info: http://www.ibm.com/support/ docview.wss?uid=swg21109935 See also: DISK; Storage pool, disk, performance Multi-threaded session See: Multi-session Client Multiple servers See: Servers, multiple Multiple sessions See: MAXNUMMP; Multi-session Client; RESOURceutilization Multiprocessor usage TSM uses all the processors available to it, in a multi-processor environment. One customer cited having a 12-processor system, and TSM used all of them. MVS Multiple Virtual Storage: IBM's mainframe operating system, descended from OS/MFT and OS/MVT (multiple fixed or variable number of tasks). Because the operating system was so tailored to a specific hardware platform, MVS was a software product produced by the IBM hardware division. MVS evolved into OS/390, for the 390 hardware series. MVS server performance Turn accounting off and you will likely see a dramatic improvement in performance. Especially boost the TAPEIOBUFS server option. See also: Server performance Named Pipe In general: A type of interprocess communication which allows message data streams to be passed between peer processes, such as between a client and a server. Windows: The name of the facility by which the TSM client and server processes can directly intercommunicate when they are co-resident in the same computer, to enhance performance by not going through data communications methods to transfer the data. The governing option is NAMedpipename. See also: Restore to tape, not disk NAMedpipename (-NAMedpipename=) Windows client option for direct communication between the TSM client and server processes when they are running on the same computer or across connected domains, thus avoiding the overhead of going through data communication methods (e.g., TCP/IP).
This depends upon a file system object which the server and client will both reference in order to communicate - which can be a point of vulnerability, in contrast to traditional networking (ANS1865E). Syntax: NAMedpipename \\.\pipe\SomeName -NAMedpipename=\\.\pipe\SomeName Default: Originally: \pipe\dsmserv Later: \\.\pipe\Server1 See also: COMMMethod; NAMEDpipename; Shared Memory NAMEDpipename Windows server option for direct communication between the TSM server and client processes when they are running on the same computer or across connected domains, thus avoiding the overhead of going through data communication methods (e.g., TCP/IP). This depends upon a file system object which the server and client will both reference in order to communicate - which can be a point of vulnerability, in contrast to traditional networking (ANS1865E). And note that the involvement of Windows Domain itself can mean networking, which can obviate the advantage. Syntax: NAMEDpipename name Default: Originally: \pipe\dsmserv Later: \\.\pipe\Server1 See also: COMMMethod; NAMedpipename; Shared Memory Names for objects, coding rules Content: the following characters are legal in object names: A-Z 0-9 _ . - + & (It is best not to use the hyphen because ADSM uses it when continuing a name over multiple lines in a query, which would be visually confusing.) Length: varies per type of object. Ref: Admin Ref NAS See: Network Appliance See also IBM site Solution 1105834 NATIVE Refers to storage pool DATAFormat definition, where NATIVE is the default. TSM operations use storage pools defined with a NATIVE or NONBLOCK data format (which differs from NDMP). DATAFormat=NATive specifies that the data format is the native TSM server format and includes block headers. NATIVE is required: - To back up a primary storage pool; - To audit volumes; - To use CRCData. See also: NONBLOCK native file system A file system to which you have not added space management. 
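The character rules listed under "Names for objects, coding rules" above can be captured in a quick validity check. The following is a hypothetical Python sketch (not part of any TSM tooling): the pattern is taken directly from the characters the entry lists as legal, and per-object-type length limits are deliberately not checked, since they vary.

```python
import re

# Characters the reference lists as legal in *SM object names:
# A-Z, 0-9, underscore, period, hyphen, plus, ampersand.
VALID_NAME = re.compile(r'^[A-Z0-9_.\-+&]+$')

def is_valid_object_name(name):
    """Content-only check of an object name against the listed
    legal characters; commands fold names to upper case, so the
    comparison is done case-insensitively."""
    return bool(VALID_NAME.match(name.upper()))
```

Per the entry's advice, a name like TAPEPOOL_1 passes, while embedded blanks fail; hyphens pass the check but are best avoided because of query line-continuation display.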
NDMP Network Data Management Protocol: a cross-vendor standard for enterprise data backups, to tape devices. Its creation was led by Network Appliance and Legato Systems. The backup software orchestrates a network connection between an NDMP-equipped NAS appliance and an NDMP tape library or backup server. The appliance uses NDMP to stream its data to the backup device. The NDMP support in TSM works only with tape drives as the backup target, and there are no plans to extend NDMP support to disk. As of 2004/01, NDMP backs up at volume level only. Originally, only SCSI libraries were supported for NDMP operations. Support for ACSLS libraries was introduced in 5.1.1 and support for 349x libraries came in 5.1.5. To perform NDMP operations with TSM, tape drives must be accessible to the NAS device. This means that there must be a SCSI or FC connection between the filer and drive(s) and a path must be defined in TSM from the NAS data mover to the drive(s). Some or all of the drives can also be accessed by the TSM server, provided that there is physical connectivity and a path definition from the TSM server to those drives. This does not mean that data is funneled through the TSM server for NDMP operations. It simply allows sharing drives for NDMP and conventional TSM operations. In fact, if the library robotics is controlled directly by the TSM server (rather than through a NAS device), it is possible to share drives among NAS devices, library server, storage agents and library clients. Data flow for NDMP operations is always directly between the filer and the drive and never through the TSM server. The TSM server handles control and metadata, but not bulk data flow. The TSM server does not need to be on a SAN, but if you want to share drives between the TSM server and the NAS device, a SAN allows the necessary interconnectivity.
See: dsmc Backup NAS; Network Appliance (NAS) backups Nearline storage A somewhat odd, ad hoc term to describe on-site, nearby storage pool data; as opposed to offsite versions of the data. NetApp Network Appliance, Inc. Long-time provider of network attached storage. Company was founded by guys who helped develop AFS. www.netapp.com NetTAPE NetTAPE provides facilities such as remote tape access, centralized operator interfaces, and tape drive and library sharing among applications and systems. Reportedly a shaky product as of late 1997. Ref: redbook 'AIX Tape Management' (SG24-4705-00) NETBIOS Network Basic Input/Output System. An operating system interface for application programs used on IBM personal computers that are attached to the IBM Token-Ring Network. NETBIOSBuffersize *SM server option. Specifies the size of the NetBIOS send and receive buffers. Allowed range: 1 - 32 (KB). Default: 32 (KB) NetbiosBufferSize server option, query 'Query OPTion' NetbiosSessions server option, query 'Query OPTion' NETTAPE IBM GA-product that allows dynamic sharing of tape drives among many applications. NetWare Novell product. Has historically not had virtual memory, and so tends to be memory-constrained, which hinders *SM backups and restorals. See also: nwignorecomp NetWare backup recommendation Code "EXCLUDE sys:/.../*.qdr/.../*.*" to omit the queues on the SYS volume. NetWare Loadable Module (NLM) Novell NetWare software that provides extended server functionality. Support for various ADSM and NetWare platforms are examples of NLMs. Netware restore, won't restore, saying incoming files are "write protected" Reason unknown, but specifying option "-overwrite" has been seen to resolve. Netware restore fails on long file name See: Long filenames in Netware restorals Netware restore performance - Make sure your ADSM client software is recent! (To take advantage of "No Query Restore" et al.
But beware that No Query Restore is not used for NetWare Directory Services (NDS).) - Avoid client or Netware compression of incoming data (and no virus scanning of each incoming file). - If you have a routed network environment, have this line in SYS:ETC\TCPIP.CFG : TcpMSSinternetlimit OFF - Use TXNBytelimit 25600 in the DSM.OPT file, and TXNGroupmax 256 in the ADSM server options file. - Set up a separate disk pool that does not migrate to tape, and use DIRMc to send directory backups to it. - Consider using separate management classes for directories, to facilitate parallel restorals. - Disable scheduled backups of that filespace during its restoral. - Try to minimize other work that the server has to do during the restoral (expirations, reclamations, etc.). - And the usual server data storage considerations (collocation, etc.). Data spread out over many tapes means many tape mounts and lots of time. - Consider tracing the client to see where the time is going: traceflags INSTR_CLIENT_DETAIL tracefile somefile.txt (See "CLIENT TRACING" section at bottom of this document.) - During the session, use ADSM server command 'Q SE' to gauge where time is going; or afterwards, review the ADSM accounting record idle wait, comm wait, and media wait times. Network Appliance (NAS) backups Lineage: Tivoli originally announced that TSM version 4.2 would provide backup and restore of NAS filers - 3Q 2001. The product was "TDP for NDMP" (5698-DPA), a specialized client that interfaces with the Network Data Management Protocol (NDMP). Full volume image backup/restore will be supported. File level support is announced for TSM version 5.1 - 1Q 2002. TDP for NDMP was then folded into TSM Enterprise Edition, which was withdrawn from marketing 2002/11/12, supplanted by TSM Extended Edition (5698-ISX). Note that options COMPRESSION and VALIDATEPROTOCOL are not valid for a node of Type=NAS. The name of the NAS node must be the same as the data mover.
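The accounting-record review suggested at the end of the "Netware restore performance" list above reduces to simple arithmetic: compare the recorded wait times against total session time to see where the time went. A hypothetical Python sketch (the field names here are illustrative; consult the accounting-record documentation for the actual dsmaccnt.log field positions):

```python
def session_time_breakdown(elapsed_secs, idle_wait, comm_wait, media_wait):
    """Return the fraction of a session's elapsed time spent in each
    wait state (idle, communications, media), plus the remainder
    attributable to actual data movement and processing."""
    waits = {"idle": idle_wait, "comm": comm_wait, "media": media_wait}
    breakdown = {k: v / elapsed_secs for k, v in waits.items()}
    breakdown["other"] = 1.0 - sum(breakdown.values())
    return breakdown
```

A session dominated by the "media" fraction points at tape mounts and positioning (collocation, tape spread); a large "comm" fraction points at the network or a slow client.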
Netware timestamp peculiarities The Modified timestamp on a Netware file is attached to the file, and remains constant as it may move, for example, from a vendor development site to a customer site. The Created timestamp is when the file was planted in the customer file system. Thus, the Created timestamp may be later than the Modified timestamp. Network card selection on client See: TCPCLIENTAddress Network data transfer rate Statistic at end of Backup/Archive job, reflecting the raw speed of the network layer: just the time it took to transfer the data to the network protocol handler (expressed that way to emphasize that *SM does not know if the data has actually gone over the network). The data transfer rate is calculated by dividing the total number of bytes transferred by the data transfer time. The time it takes to process objects is not included in the network transfer rate. Therefore, the network transfer rate is higher than the aggregate transfer rate. Corresponds to the Data Verb time in an INSTR_CLIENT_DETAIL client trace. Contrast with Aggregate data transfer rate. Beware that if the Data transfer time is too small (as when sending a small amount of data) then the resulting Network Data Transfer Rate will be skewed, reporting a higher number than the theoretical maximum. This reflects the communications medium rapidly absorbing the initial data in its buffers, which it has yet to actually send. That is, ADSM handed off the data and considers it logically sent, having no idea as to whether it has been physically sent. This also explains why, at the beginning of a backup session, you see some number of files seemingly sent to the server before an ANS4118I message appears saying that a mount is necessary (for backup directly to tape), rather than appearing after the first file. Thus, to see meaningful transfer rate statistics you need to send a lot of data so as to counter the effect of the initial buffering.
Ref: B/A Client manual glossary See also: Data transfer time; TCPNodelay Network performance Many network factors can affect performance: - Technology generation: Are you still limited to 10 Mbps or 100, when Gigabit Ethernet is available, with its faster basic speed and optional larger frame sizes? - Are you using an ethernet switch rather than a router to improve subnet performance (and security)? - Are your network buffers sized adequately? In AIX, particularly do 'netstat -v' and see if the "No Receive Pool Buffer Errors" count is greater than zero: if so, boost the Receive Pool Buffer Size. (A value of 384 is no good: needs to be 2048.) Network Storage Manager (NSM) The IBM 3466 storage system which combines a tape robot and AIX system in one package, wholly maintained by IBM. The IBM Network Storage Manager (NSM) is an integrated data storage facility that provides backup, archive, space management, and disaster recovery of data stored in a network computing environment. NSM integrates ADSM server functions and AIX with an RS/6000 RISC rack mounted processor, Serial Storage Architecture (SSA) disk subsystems, tape library (choose a type) and drives, and network communications, into a single server system. Network transfer rate See: Network data transfer rate Network-Free Rapid Recovery Provides the ability to create a backup set which consolidates a client's files onto a set of media that is portable and may be directly readable by the client's system for fast, "LAN-free" (no network) restore operations. The portable backup set, synthesized from existing backups, is tracked and policy-managed by the TSM server, can be written to media such as ZIP, Jaz drives, and CD-ROM volumes, for use by Windows 2000, Windows NT, AIX, Sun Solaris, HP-UX, NetWare backup-archive client platforms. In addition, for the Windows 2000, Windows NT, AIX, Sun Solaris (32-bit) and HP-UX backup-archive clients, the backup sets can be copied to tape devices.
TSM backup-archive clients can, independent of the TSM server, directly restore data from the backup set media using standard operating system device drivers. Ref: Redbook "Tivoli Storage Manager Version 3.7: Technical Guide" (SG24-5477), see CREATE BACKUPSET. http://www.tivoli.com/products/index/ storage_mgr/storage_mgr_concepts.html Newbie Someone who is new to all this stuff. NEXTstgpool Parameter on 'DEFine STGpool' to define the next primary storage pool to use in a hierarchy of storage pools. (Copy storage pools are not eligible for hierarchical arrangement.) This can be used creatively to cause lower storage pools to be used as overflow areas rather than migration areas, by defining the HIghmig value to be 100 percent. This would be used in cases where storage pool filling has to keep up with incoming data, and could not if migration were used. NFS client backup prohibition You can establish a site policy that file systems should not be backed up from NFS clients (they will be done from the NFS server). Violators can be detected via an ADSM server 'Query Filespace' command (Filespace Type), whereupon you could delete the filespace outright or rename it for X days before deleting it, with warning mail to the perpetrator, and a final 'Lock Node' if no compliance. NFSTIMEout Client system options file (dsm.sys) or command line option to deal with error "ANS4010E Error processing '': stale NFS handle". Specifies the amount of time in seconds the server waits for an NFS system call response before it times out. If you do not have any NFS-mounted filesystems, or you do not want this time-out option, remove or rename the dsmstat file in the ADSM program directory. Syntax: "NFSTIMEout TimeoutSeconds". Note: This option can also be defined on the server. NIC selection on client See: TCPCLIENTAddress NLB Microsoft Network Load Balanced NLS National Language Support, standard in ADSMv3.
The message repository is now called dsmserv.cat, which on AIX is found in /usr/lib/nls/msg/en_US (for the English version; other languages are found in their respective directories). The dsmameng.txt file still exists in the ADSM server working directory and is used if the dsmserv.cat file is not found. See also: Language No Query Restore ADSMv3+: Facility to speed restorals by eliminating the preliminary step of the server having to send the client a voluminous list of files matching its restoral specs, for the client to traverse the list and then sort it for server retrieval efficiency ("restore order"). That is, in a No Query Restore the client knows specifically what it needs and can simply ask the server for it, so there is no need for the server to first send the client a list of everything available. Both client and server have to be at Version 3+ in order to use No Query Restore. It is used automatically for all restores unless one or more of the following options are used: INActive, Pick, FROMDate, FROMTime, LAtest, TODate, TOTime. Also, No Query Restore is not used for NetWare Directory Services (NDS). Note that NQR has nothing to do with minimizing tape mounts for restore: for a given restore, TSM mounts each needed tape once and only once, retrieving files as needed in a single pass from the beginning of the tape to the end. A big consideration in NQR is that the client specification may be so general that the server ends up sending the client far more files than it needs. IBM used the term "No Query Restore" in their v3 announcements, but did not use it in their v3.1 manuals: usage was implied. Later manuals reinstated No Query Restore as a specific action, and documented it. IBM now refers to the v2 method of restoral as "Classic Restore". The most visible benefit of no query restore is that data starts coming back from the server sooner than it does with "classic" restore.
With classic restore, the client queries the server for all objects that match the restore file specification. The server sends this info to the client, then the client sorts it so that tape mounts will be optimized. However, the time involved in getting the info from the server, then sorting it (before any data is actually restored), can be quite lengthy - and may incite client timeout at the server. NQR has the *SM server do the work: the client sends the restore file specs to the server, the server figures out the optimal tape mount order, and then starts sending the restoral data to the client. The server can do this faster, and thus the time it takes to start actually restoring data is reduced. (A consideration is that while the server is busy figuring this out, no activity is visible from the client, which may concern the user.) Ref: Backup/Archive Client manual, chapter 3 (Backing Up and Restoring), "Restore: Advanced Considerations"; Redbook "ADSM Version 3 Technical Guide" (SG24-2236). See also: No Query Restore, disable; Restart Restore; Restore Order No Query Restore, disable Whereas this v3 feature was supposed to improve performance, it has had performance impacts of its own. To disable, perform the restoral with -traceflags=DISABLENQR, or by specifying option "TESTFLAG DISABLENQR" in dsm.opt. See "DISABLENQR" in "CLIENT TRACING". No-Query Restore See: No Query Restore NOAGGREGATES Temporary server options file option, to compensate for early v.3 defect. Is intended for customers who have a serious shortage of tapes. If you use this option, any new files backed up or archived to your server will not be aggregated. When the volumes on which these files reside are reclaimed, you will not be left with empty space within aggregates. The downside is that these files will never become aggregated, so you will miss the performance benefits of aggregation for these files.
If you do not use the NOAGGREGATES option, files will continue to be aggregated and empty space may accumulate within these aggregates; this empty space will be eliminated during reclamation after you have run the data movement/reclamation utilities. NOARCHIVE ADSMv3 option for the include-exclude file, to prohibit Archive operations for the specified files, as in: "include ?:\...\* NOARCHIVE" to prohibit all archiving. NOAUDITStorage Server options file option, introduced by APAR PN77064 (PTF UN87800), to suppress the megabyte counting for each of the clients during an "AUDit LICenses" event, and thus reduce the time required for AUDit LICenses. Obsolete: now AUDITSTorage Yes|No. See: AUDITSTorage NOBUFPREFETCH Undocumented server option to disable the buffer prefetcher - at the expense of performance. (Useful where the 'SHow THReads' command reveals sessions hung on a condition in TbKillPrefetch, where the prefetcher is looping because of a design defect.) Node See: Client Node Node, add administrator Do 'REGister Admin', then 'GRant AUTHority' Node, define See: 'REGister Node' Node, delete See: 'REMove Node' Node, disable access 'LOCK Node NodeName' Node, lock 'LOCK Node NodeName' Node, move across storage pools Use 'MOVe Data', specifying a different storage pool; then reassign the node to the new stgpool's domain. But if a node shares tapes with other nodes: reassign it to the new stgpool, then let the data expire off of the old stgpool. Node, move to another Policy Domain 'UPDate Node NodeName DOmain=_____' In doing this, note: - If the receiving domain does not have the same management classes as were used in the old domain, the node's files will be bound to the receiving domain's default management class, which could have an adverse effect upon retention periods you expect. But in all cases, check the receiving domain Copypool retention policies before doing the move.
- If the node was associated with a schedule, it will lose it, so be sure to examine all scheduling values. Node, number used See: Tapes, number used by a node Node, prevent data from expiring A request comes in from the owner of a client that because of subpoena or the like, its data must not expire; but that client has been using the same management class as is used for the backup of all clients. How to satisfy this request? 1. Use 'COPy DOmain' to create a copy of the policy domain the node is in. 2. Update the retention parameters in the copy group in the new domain. 3. Activate the appropriate policy set. 4. Use 'UPDate Node' to move the node to the new policy domain. Node, prohibit access 'LOCK Node NodeName' Node, prohibit storing data on server See: Client, prevent storing data on server Node, remove See: 'REMove Node' Node, space used for Active files 'Query OCCupancy' does not reveal this, as it reports all space. A simple way to get the information is to 'EXPort Node NODENAME FILEData=BACKUPActive Preview=Yes'. Node, space used on all volumes 'Query AUDITOccupancy NodeName(s) [DOmain=DomainName(s)] [POoltype=ANY|PRimary|COpy]' Note: It is best to run 'AUDit LICenses' before doing 'Query AUDITOccupancy' to assure that the reported information will be current. Also try the unsupported command 'SHow VOLUMEUSAGE NodeName' Node, volumes in use by 'SHow VOLUMEUSAGE NodeName' or: 'SELECT DISTINCT VOLUME_NAME,NODE_NAME FROM VOLUMEUSAGE' or: 'SELECT NODE_NAME,VOLUME_NAME FROM VOLUMEUSAGE WHERE - NODE_NAME='UPPER_CASE_NAME' Node, volumes needed to restore ADSMv3: SELECT FILESPACE_NAME,VOLUME_NAME - FROM VOLUMEUSAGE WHERE - NODE_NAME='UPPER_CASE_NAME' AND - COPY_TYPE='BACKUP' AND - STGPOOL_NAME='' Node conversion state An *SM internal designation. Node state 5 is Unicode, for Unicode enabled clients, which is to say platforms in which Unicode is supported. (Within Unicode-enabled clients, it is the filespace which specifically employs Unicode.)
May be seen on ANR4054I and ANR9999D messages. Node name A unique name used to identify a workstation, file server, or PC to the server. Should be the same as returned by the AIX 'hostname' command. Is specified in the Client System Options file and the Client User Options file. Node name, register 'REGister Node ...' (q.v.) (register a client with the server) Be sure to specify the DOmain name you want, because the default is the STANDARD domain, which is what IBM supplied rather than what you set up. There must be a defined and active Policy Set. Node name, remove 'REMove Node NodeName' Node name, rename (Windows) See: dsmcutil.exe Node name, update registration 'UPDate Node ...' (q.v.) (register a client with the server) Node must not be currently conducting a session with the server, else command fails with error ANR2150E. Node names in a volume, list 'Query CONtent VolName ...' Node names known to server, list 'Query Node' Node password, update from server See: Password, client, update from server Node sessions, byte count SELECT NODE_NAME, SUM(LASTSESS_RECVD) - AS "Total Bytes" FROM NODES - GROUP BY NODE_NAME nodelock File in the server directory, housing the license information generated by the ADSMv3 and TSM REGister LICense operation. The *SM server must have access to this file in order to run. If the server processor board is upgraded such that its serial number changes, this file must be removed and regenerated: remove the file first, then perform 'REGister LICense' again. See also: adsmserv.licenses; REGister LICense nodename /etc/filesystems attribute, set to "-", which is added when 'dsmmigfs' or its GUI equivalent is run to add ADSM HSM control to an AIX file system. The dash tells the mount command to call the HSM mount helper. NODename Client System Options file operand to specify the node name by which the client is registered to the server.
Placement: within a server stanza The intention of this option is to firmly specify the identity of the client where the client may have multiple identities, as in a multi-homed ethernet config. If your client system has only a single identity, it is best if this option is not used, letting the node name default to the natural system name. If you *do* code NODename, it is best that it be in upper case. If "PASSWORDAccess Generate" is in effect, you *cannot* use NODename because the password directory entry (e.g., as in /etc/security/adsm/) must be there for that node, and thus you must not have the choice of saying that you are some arbitrary node name. PASSWORDAccess Generate does not work if you code NODename. If in Unix you put it in dsm.opt, then ADSM assumes you want to be the "virtual root user", which gives you access to all of that node's data, requiring you to enter a password. Instead, put NODename in the dsm.sys file. If you are attempting to use NODename for cross-node restorals, DO NOT change your client options file to code the name of the originating node: remember that the options file is for all invocations of client functions, not just the one task you are performing, so your modification could yield incorrect results in incidental client invocations other than your own. Also, it is too easy to forget that this options file change was made. You should instead use the -NODename=____ invocation override form of the option. Note that as long as the Nodename remains the same, changes in the client's IP address (as in switching network providers) will not incite a password prompt. See also: PASSWORDAccess; TCPCLIENTAddress; VIRTUALNodename -NODename=____ (Employed on some clients (Netware and Windows), which otherwise would use -VIRTUALNodename if available there.) Command line equivalent, but override of the same options file definition, used when you want to restore or retrieve your own files when you are on other than your home nodename. 
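Since -NODename overrides the options-file definition, it helps to see what that definition's placement looks like. A minimal dsm.sys server stanza is sketched below (the server name, address, port, and node name are hypothetical examples, not values from this document):

```
SErvername        TSMPROD
   COMMMethod        TCPip
   TCPPort           1500
   TCPServeraddress  tsm.example.edu
   NODename          MYNODE
*  Per the caveat above, do not combine NODename with
*  "PASSWORDAccess Generate" in the same stanza.
```

An asterisk in column 1 marks a comment line in the options file.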
Beware that specifying this causes ADSM to ask you for the password of that node, and thereafter regards you as a virtual root user. Worse, it will cause the password to be encrypted and stored on the machine where invoked. Thus anyone else can subsequently access your node's data, presenting a potential security issue. Unless that is your intent, use VIRTUALNodename instead of NODename. Note that when overriding the node name this way, with the ADSM server, a 'Query SEssion' will show the session as coming from the node whose name you have specified, rather than the actual identity of the client. Contrast with -FROMNode, which is used to gain access to another user's files. See also: -PASsword; VIRTUALNodename NODES SQL table containing all the information about each registered node. Columns: NODE_NAME, PLATFORM_NAME, DOMAIN_NAME, PWSET_TIME, INVALID_PW_COUNT, CONTACT, COMPRESSION, ARCHDELETE, BACKDELETE, LOCKED, LASTACC_TIME, REG_TIME, REG_ADMIN, LASTSESS_COMMMETH, LASTSESS_RECVD, LASTSESS_SENT, LASTSESS_DURATION, LASTSESS_IDLEWAIT, LASTSESS_COMMWAIT, LASTSESS_MEDIAWAIT, CLIENT_VERSION, CLIENT_RELEASE, CLIENT_LEVEL, CLIENT_SUBLEVEL, CLIENT_OS_LEVEL, OPTION_SET, AGGREGATION, URL, NODETYPE, PASSEXP. Note that the table is indexed by NODE_NAME, so seeking on an exact match is faster than on a "LIKE".
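The index note above can be illustrated outside of TSM (a toy in-memory analogy, not the server's actual code): an exact key goes straight through the index, while a LIKE-style pattern forces a test of every row.

```python
from fnmatch import fnmatch

# Toy NODES "table": the dict doubles as an index on NODE_NAME.
# (Node names and platforms are made up for illustration.)
nodes = {"ALPHA": "AIX", "BETA": "WinNT", "BETTY": "Solaris"}

# Exact match: one indexed probe.
exact = nodes.get("BETA")

# LIKE 'BET%': the index on the full key does not help --
# every key must be tested against the pattern.
like = {k: v for k, v in nodes.items() if fnmatch(k, "BET*")}

print(exact)          # WinNT
print(sorted(like))   # ['BETA', 'BETTY']
```

The same reasoning applies to any indexed column: prefer WHERE NODE_NAME='EXACTNAME' over LIKE when you know the full (upper-case) name.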
Nodes, registered 'Query DOmain Format=Detailed' Nodes, registered, number SELECT COUNT(NODE_NAME) - AS "Number of registered nodes" - FROM NODES Nodes, report MB and files count SELECT NODE_NAME, SUM(LOGICAL_MB) AS - Data_In_MB, SUM(NUM_FILES) AS - Num_of_files FROM OCCUPANCY GROUP BY - NODE_NAME ORDER BY NODE_NAME ASC Nodes not doing backups in 90 days 'SELECT NODE_NAME, CONTACT, \ LASTACC_TIME, REG_TIME, DOMAIN_NAME \ FROM NODES WHERE DOMAIN_NAME='FL_INTL'\ AND DAYS(CURRENT_TIMESTAMP)-\ DAYS(LASTACC_TIME)>90 ORDER BY \ LASTACC_TIME DESC > SomeFilename' Nodes without filespaces There will always be nodes which have registered with the server but which have yet to send data to the server. The following will report them: SELECT NODE_NAME AS - "Nodes with no filespaces:", - DATE(REG_TIME) AS "Registered:", - DATE(LASTACC_TIME) AS "Last Access:" - FROM NODES WHERE NODE_NAME NOT IN - (SELECT NODE_NAME FROM FILESPACES) NOMIGRRECL Undocumented server option to prevent migration and reclamation at server start-up time. Note that there is no server Query that will evidence the use of this option: the server options file has to be inspected. Non-English filenames (NLS support) The TSM product is a product of the USA, written in an English language environment, originally and predominantly for English language customers using an alphabet comprised of the characters found in the basic ASCII character set. Trying to use TSM in a non-English environment is a stretch, as customers who have tried it have found and reported in ADSM-L. The product has experienced many, protracted problems with non-English alphabets, as seen in numerous APARs - and some debacles ("the umlaut problem" - see message ANS1304W). As of mid-2001, there is no support for mixed, multi-national languages, as for example a predominantly English language client which stores some files whose names contain multi-byte character sets (e.g., Japanese).
Customers find, for example, that to back up Japanese filenames you must run the Windows client on a Japanese language Windows server. Some customers circumvent the whole problem on their English language systems by copying the non-English files into a tar archive or zip file having an English name, which then backs up without problems. Another approach is to use NT Shares across English and non-English client systems, to back up as appropriate. NONBLOCK Refers to storage pool DATAFormat definition, where NATIVE is the default. TSM operations use storage pools defined with a NATIVE or NONBLOCK data format (which differs from NDMP). DATAFormat=NONblock specifies that the data format is the native TSM server format, but does not include block headers. See also: NATIVE NOPREEMPT ADSMv3 Server Options file (dsmserv.opt) entry to prevent preemption. TSM allows certain operations to preempt other operations for access to volumes and devices. For example, a client data restore operation preempts a client data backup for use of a specific device or access to a specific volume. When preemption is disabled, no operation can preempt another for access to a volume, and only a database backup operation can preempt another operation for access to a device. The effect is to cause high-priority tasks like Restores to wait for resources, rather than preempt a lower-priority task so as to execute asap. See also: Preemption; DEFine SCHedule NORETRIEVEDATE Server option to specify that the retrieve date of a file in a disk storage pool is not to be updated when the file is restored or retrieved by a client. This option can be used in combination with the MIGDelay storage pool parameter to control when files are migrated. If this option is not specified, files are migrated only if they have been in the storage pool the minimum number of days specified by the MIGDelay parameter.
The number of days is counted from the day that the file was stored in the storage pool or retrieved by a client, whichever is more recent. By specifying this option, the retrieve date of a file is not updated and the number of days is counted only from the day the file entered the disk storage pool. If this option is specified and caching is enabled for a disk storage pool, reclamation of cached space is affected. When space is needed in a disk storage pool containing cached files, space is obtained by selectively erasing cached copies. Files that have the oldest retrieve dates and occupy the largest amount of space are selected for removal. When the NORETRIEVEDATE option is specified, the retrieve date is not updated when a file is retrieved. This may cause cached copies to be removed even though they have recently been retrieved by a client. See also: MIGDelay Normal File--> Leads the line of output from a Backup operation, as when backup is incited by the file's mtime (file modification time) having changed, or if a chown or chgrp effected a change. See also: Updating-->; Expiring-->; Rebinding--> Normal recall mode A mode that causes HSM to copy a migrated file back to its originating file system when it is accessed. If the file is not modified, it becomes a premigrated file. If the file is modified, it becomes a resident file. Contrast with migrate-on-close recall mode and read-without-recall recall mode. NOT IN SQL clause to exclude a particular set of data that matches one of a list of values: WHERE COLUMN_NAME - NOT IN (value1,value2,value3) See also: IN "Not supported" Vendor parlance indicating that a certain level or mix of hardware/software is not supported by the vendor. It may mean that the vendor knows that the level is not viable by virtue of design; but more usually indicates that an older level of software was not deemed worth the expenditure to test compatibility, rather than having tested and having found incompatibilities. 
It is common for customers to inadvertently or intentionally use unsupported software and encounter no problems. Usually, usage of such software which "stays near the center of the path" can do okay; it's when the usage gets near the edges of complexity that functional problems are more likely to arise. NOTMOuntable DRM media state for volumes containing valid data, located onsite, but TSM is not to use it. This value can also be the default Location if Set DRMNOTMOuntablename has not been run. See also: COUrier; COURIERRetrieve; MOuntable; MOVe DRMedia; Set DRMNOTMOuntablename; VAult; VAULTRetrieve Novell See also: Netware Novell and TSM problems Novell customers report that problems using TSM (or, for that matter, many other applications) under Novell Netware are almost universally due to Novell irregularities and Novell's failure to communicate OS changes to developers. Novell (Netware) performance The standard Backup considerations apply, including too many files in one directory. See also: PROCESSORutilization Novell trustee rights With Novell your trustee rights are normally set on a directory level. If this is the case with your Novell systems, then just use the -dirsonly option when doing a restore. TSM backs up rights and IRFs only at a directory level, not a file level. Trustee Rights are not seen by the client workstation user who maps the drive. Client workstations should not be doing the backups: they should be done from the Novell system. .NSF file Lotus Notes database file. NSM See: Network Storage Manager NT Microsoft Windows New Technology operating system, the predecessor of Windows 2000. See: Windows NT .NTF files (Lotus Notes) and backup By default, the Lotus Notes Connect Agent will not back up .NTF files: you have to specifically request them to get them backed up. NTFS NT File System. Is understood by OS/2. Unlike FAT, NTFS directories are complex, and cannot be stored in the *SM database, instead having to go into a storage pool.
NTFS and Daylight Savings Time Incredibly, NTFS file timestamps are offsets from GMT rather than absolute values - and hence the perceived timestamps on all files in the NTFS will change in DST transitions. (Another reason that NT systems cannot be regarded as serious contenders for server implementations.) http://support.microsoft.com/support/kb/articles/q129/5/74.asp NTFS and permissions changes If someone happens to make a global change to the permissions (security information) of files in an NTFS, the next Backup will cause the files to be backed up afresh...which is warranted, as the attributes are vital to the files. The fresh backup will occur if any of the NTFS file security descriptors are changed: Owner Security Identifier (SID), Group SID, Discretionary Access Control List (ACL), and System ACL. Possible mitigations (all of which have encumbrances and side effects): - Perform -INCRBYDate backups. - In Windows Journal-Based Backups, you may employ the NotifyFilter. - Subfile backups should avoid wholesale backups, if you happen to use them. - Another approach to mitigation is to follow MS's AGLP (AGDLP for AD) rules: assign users to Global Groups, add Global Groups to Local (DOMAIN Local in AD) and only assign permissions to the local groups. You create the appropriate local groups (e.g., read access, write, etc.) and only assign permissions once to these groups. Any user changes are done through removal of users from the Global Groups, or of Global Groups from local groups, which doesn't trigger any ACL changes on the files, so no extra backups are done. As for initial security lockdown, this should be done at server setup. NTFS and security info in restorals NTFS object security information is stored with the object on the server and will be restored when the individual NTFS object is restored. "Security" in Windows NTFS and what gets restored: Inherited: The only security info is "provide same access as the parent directory is providing".
TSM will restore the "checkmarked" inheritance. It *will not* restore the parent's ACL, or the ACL of the parent's parent, ... up to the origin of the inherited ACL. As a result you have restored the ability to inherit, but not *what* to inherit. Explicitly specified: There is a list of users along with a set of allowed operations. TSM will restore the "no inheritance" mode and the list of defined privileges. This is probably what you want in a restoral. Mixed permissions: Both access inherited from the parent plus some explicitly specified additions/deletions/changes to the ACL. TSM restores both the "inheritance" mode and the explicit access. As a result, the explicitly defined entities will have their access intact but the others are left to the mercy of the ACL inherited from the parent directory. If the whole drive is restored, file/directory specific ACL elements are restored together with their parents'. All this should explain why sometimes you see the ACL "restored", sometimes "not restored" and sometimes "partially restored". NTFS security info as stored in TSM Because of the amount of information involved in NTFS security data, it is too much to be stored in the TSM database, as simple file attribute data can otherwise be, and so NTFS security info has to go into a TSM storage pool. The NTFS security info is stored as part of the file data - an implication being that if just the security info is changed, the file itself has to be backed up afresh as well. NTuser.dat The NT current profile of each user registered to use the NT system. When you log on to NT, the contents of NTUSER.DAT are loaded into the HKEY_CURRENT_USER Registry key, where that copy persists only for the duration of the user session. So a TSM backup captures that as part of Registry backup; and you can do 'dsmc REStore REgistry USER CURUSER' to get your profile back. If the user is not logged in at the time of the backup, the file will be backed up from where it sits.
If the user is logged in at the time, the file will be in use by the system, and will be backed up as part of the Registry, which is to say that the API used by the client for Registry backup will make a copy in the adsm.sys directory, and back that up. (The above assumes that the backup is run by Administrator: if run by an ordinary user, there is no access to either source of NTUSER.DAT data: it has to be skipped as busy.) C:\adsm.sys\Registry\\Users contains a directory for each id, and each id that was logged on at the time of the backup will have a file with a name like: S-1-5-21-1417001333-436374069-854245398-1000 This is the logical equivalent of NTUSER.DAT. To restore it requires an extra step, though: When doing a bare metal restore, you restore the files, then the Registry; then you reboot; then you log on under that user's account. Since you don't have a restored copy of NTUSER.DAT, you will see the default profile. Run: dsmc REStore REgistry USER CURUSER which reloads the profile stuff from adsm.sys into the registry. Then you reboot again, and on the way down it will write the profile out to NTUSER.DAT again, and you are back in business. When you come back up, you have your restored/customized profile. If using the 4.1.2 client, the names in adsm.sys have changed, and the backed up user profile for each user is actually called NTUSER.DAT. And you can't restore individual Registry keys. So after you do the bare-metal restore of files & Registry as ADMINISTRATOR, you drag that person's NTUSER.DAT from the adsm.sys directory back to where it is supposed to be, before that account logs on again. In running standard TSM backups, be sure to run the TSM Scheduler Service under the Local System account, not a user account, to avoid the inevitable problem of finding the user profile (NTuser.dat) locked.
Note that if a user has no NTUSER.DAT User Profile, upon login Windows creates a new one, using the default User Profile (which is stored on the System drive (typically, C:) in Documents and Settings\Default User\. It is vital, therefore, that "NTUSER.DAT" not be a blanket Exclude, as a Windows PC recovery could then result in there being no default User Profile. NTUSER.DAT is normally excluded from Journal Based Backup. ntutil Like 'tapeutil' for Unix, this utility for Windows NT or 2000 controls tape motion once a tape is mounted. It is part of the Magstar Device Drivers for NT available at the ADSM ftp server and its mirrors (ftp.storsys.ibm.com, under devdrvr/WinNT, within IBMmag.*, a self-extracting file that contains NTUTIL.EXE). With ntutil you can control some operations on a 3570. Syntax: 'ntutil <-f InputFile> <-o OutputFile> <-d SpecialFile> <-t>' Invoke simply as 'ntutil' to enter interactive mode. There is documentation in the manual "IBM SCSI Tape Drive, Medium Changer, and Library Device Drivers: Installation and User's Guide", available from the same ftp location. Also in Appendix A of the 3590 Maintenance Information manual. Null String Nullifying various operands in TSM requires that you code what is called a Null String, instead of a text value. A Null String is a string which contains nothing, and is coded as two adjacent quotes with nothing in between: "" . Number of Times Mounted Report line from 'Query Volume'. The number reported is since the tape came out of the scratch pool, and does not reflect the number of mounts over its lifetime. Above ADSM, your tape library may track tape mounts over the life of the tape's residency in the library, as the 3494 does in its Database menu selection. ADSM provides no means of resetting this number (a Checkout/Checkin sequence does not do it). NUMberformat Client User Options file (dsm.opt) option to select the format in which number references will be displayed.
"1" - format is 1,000.00 (default) "2" - format is 1,000,00 "3" - format is 1 000,00 "4" - format is 1 000.00 "5" - format is 1.000,00 "6" - format is 1'000,00 NUMberformat Definition in the server options file. Specifies the format by which numbers are displayed by the ADSM server: "1" - format is 1,000.00 (default) "2" - format is 1,000,00 "3" - format is 1 000,00 "4" - format is 1 000.00 "5" - format is 1.000,00 "6" - format is 1'000,00 Default: 1 Ref: Installing the Server... NUMberformat server option, query 'Query OPTion' nwignorecomp ADSM client 2.1.07 supports the "nwignorecomp yes" parameter in the opt file. This will prevent ADSM from backing up the file if the only change to it is Netware compression. NWWAIT Netware option. As of TSM 5.2, this option was renamed to NWEXITNLMPROMPT. OBF Old Blocks File, as used in Windows 2000 image backup of volumes. In Server-free TSM backups, the terminology is "Original Blocks File". See also: LVSA; SNAPSHOTCACHELocation Object A collection of data managed as a single entity. OBJECT_ID (ObjectID) Decimal number object identifier in the ARCHIVES and BACKUPS tables. More generally, Object IDs are the surrogate database keys to which alphanumeric filenames are mapped. The Object IDs are 64-bit values, but the higher half is usually 0, making the ID effectively a 32-bit value. See also: Bitfile; SHow BFObject; SHow INVObject OBJects Operand in client 'DEFine SCHedule' ADSM server command which allows specification of names to be operated upon by the ACTion. Here you would define file systems to be backed up when ACTion=Incremental, which would otherwise take the filesystem names from the Client User Options File (dsm.opt) DOMain names. Objects compressed by: Element in a Backup statistics summary reporting how compressible the data was, as determined by the client as it was required to compress the data during the backup, per client or server options. 
Is computed as the sum of the size of the files as they reside in the client file system, minus the number of bytes sent to the server, divided by the size of the files as they reside in the client file system. If negative (like "-29%"), then the data is expanding during compression, as can happen when it is already compressed. In this case, see if you have the option COMPRESSAlways coded as Yes, and consider instead making it No. See also: COMPRESSAlways objects deleted: Element of Backup summary statistics, reflecting the number of file system objects that the Backup process found gone since the last backup, by virtue of comparing the file system contents against the list of objects that the client got from the server at the beginning of the backup job. Note that there will necessarily be no objects deleted if running a Selective backup or an Incremental with -INCRBYDate. Objects in database, by nodename SELECT SUM(NUM_FILES) AS \ "Number of filespace objects", \ NODE_NAME FROM OCCUPANCY GROUP BY \ NODE_NAME ORDER BY \ "Number of filespace objects" DESC' Objects in database, total SELECT SUM(NUM_FILES) AS \ "Total filespace objects" FROM OCCUPANCY Objects Updated "Total number of objects updated" element in a backup statistics summary. The Objects Updated field displays the number of files or directories whose contents did not change but whose attributes or ACLs had changed. The server updates its information about the attributes or ACLs without the objects themselves having to be sent to the server. OBSI Open Backup Stream Interface. OBSI is an SQL-BackTrack component that provides the interface between BackTrack and a storage device or storage management system (like ADSM). OCCUPANCY SQL table reflecting the filespace objects inventory as it resides in storage pools (which is not necessarily all of the file system objects). Columns: NODE_NAME Node name, upper case.
TYPE 'Bkup', 'Arch', 'Spmg' FILESPACE_NAME STGPOOL_NAME NUM_FILES Number of files in storage pools. PHYSICAL_MB LOGICAL_MB See also: Query OCCupancy Occupancy of storage pool See: Query OCCupancy ODBC Open DataBase Connectivity (Microsoft). A standard low-level application programming interface (API) designed for use from the C language for accessing a variety of DBMSs with the same source code. It uses Structured Query Language (SQL) as its database access language. ADSMv3 provides an ODBC interface in the Windows client (only), which enables the SQL client to perform Selects (only) on the TSM database, with output therefrom to be manipulated by other ODBC compliant applications. This is beneficial in offloading SQL processing from the TSM server. Sample applications which can use this: Lotus Approach, Microsoft Access, Excel. Because Selects are employed, ODBC has the same limited view of the TSM DB as the server administrator has, meaning that file attributes, etc. cannot be seen. It is also just as slow. Ref: ADSM Version 3 Technical Guide redbook; TSM 5.1 Technical Guide redbook, Appendix A. ODBC driver Is supplied by your DB supplier. ODBC tracing For ODBC, there are two types: 1. ODBC Driver Manager trace, which is enabled via the "Tracing" tab in the ODBC Data Source Administrator. 2. TSM-specific ODBC driver tracing, which is enabled in the TSM-specific ODBC driver configuration dialog (the one whose title is "Configurate a TSM Data Source"). OEM Oracle Enterprise Manager Off-Line Copy Status of a database volume in a 'Query DBVolume' display. Investigate why it's offline. If it looks like it should be okay, do a 'VARy ONline VolName' to get it back. Don't tarry, as you are in jeopardy while the mirrored copy is down. OFfsite Access Mode for a Copy Storage Pool volume saying that it is away and can't be mounted. The Offsite designation serves to both identify the disaster recovery intent of the volumes and prohibit their incidental mounting.
(They should be mounted only to recover from a disaster, after being brought back onsite.) Special characteristics for Offsite: - Mount requests are not generated; - In reclamation or Move Data operations conducted on Offsite volumes, the files represented on those volumes are taken from available on-site storage pools; - Empty offsite scratch volumes are not deleted from the offsite copy storage pool. Set with 'DEFine Volume' and 'UPDate Volume ... ACCess=OFfsite'. You would typically do this after a 'BAckup STGpool' such that the volumes which it created could be removed to an offsite location after library ejection via CHECKOut. (It is best to do this with a Copy Storage Pool separate from the one which you would keep on-site for immediate, non-disaster recoveries.) Offsite, how to send volumes Can query first, as: 'Query Volume * ACCess=READWrite,READOnly STatus=FILling,FULl STGpool=copypoolname' Mark all newly created copy storage pool volumes unavailable: 'UPDate Volume * ACCess=OFfsite LOcation="Sent offsite." WHERESTGpool=CopypoolName WHEREACCess=READWrite,READOnly WHERESTatus=FILling,FULl' Then eject each volume: 'CHECKOut LIBVolume LibName VolName [CHECKLabel=no] [FORCE=yes]' Later, to bring back: 'CHECKIn LIBVolume LibName VolName STATus=PRIvate DEVType=3590' 'UPDate Volume VolName ACCess=READWrite' (Alternately, consider using the MOVe MEDia command, which replaces the UPDate Volume and CHECKOut LIBVolume steps.) Offsite reclamation See: Offsite volume reclamation Offsite REUSEDELAY It is recommended that you set the REUsedelay parameter for your copy storage pool to be at least as long as the oldest database backup you intend to keep. This will ensure that reclaimed volumes are retained long enough to guarantee the recovery of expired files. Offsite volumes that you see are in the PENDING state are empty but are awaiting release based on the REUsedelay value.
(From Admin Guide, Chapter 11 "Managing Storage Pools", "Reclamation and MOVE DATA Command Processing".) Offsite tapes, eject Consider acquiring the ADSM DRM facility and using its command 'MOVE DRMEDIA', which will eject the volumes out of the library before transitioning the volumes to the destination state. Offsite tapes, empty? Do 'Query Volume ACCess=OFfsite STatus=EMPty' to identify. Also note that at start-up, TSM writes messages like the following to the Activity Log: ANR1423W Scratch volume 000052 is empty but will not be deleted - volume access mode is "offsite". Offsite tapes, empty, return 'UPDate Volume * ACCess=READWrite (for copy storage pool tapes) WHERESTGpool='name of offsite pool' WHERESTatus=EMPTY WHEREACCess=OFfsite' This will automatically delete empty offsite volumes from ADSM and, if you are using a tape management system, flag them to be returned. Offsite volume reclamation When you do perform offsite reclamation, it is recommended that you turn on reclamation for copy storage pools during your storage pool backup window and before marking the copy storage pool volumes as OFfsite. Next, turn off reclamation and then mark any newly created volumes as OFfsite. This sequence will keep partially filled offsite volumes as-is, preventing them from essentially being copied to onsite volumes. (See Admin Guide, Managing Storage Pools, "Reclaiming Space in Sequential Access Storage Pools", "Reclamation for Copy Storage Pools", "Reclamation of Offsite Volumes".) Because the volume involved is not present, its file complement has to be obtained from onsite tapes in order to effect reclamation. The process is designed so that all files needed from a particular primary volume are obtained at the same time, regardless of which volume reclamations need these files.
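The lookup-and-gather phase described above can be sketched as follows (the volume and file names are hypothetical; the real server works from its database, not an in-memory map): for each file still on the offsite volume, find the onsite volume holding its primary copy, then group the files by onsite volume so each tape is mounted once and drained in a single pass.

```python
from collections import defaultdict

# Which onsite volume holds the primary copy of each file that is
# still valid on the offsite volume being reclaimed (made-up data).
onsite_copy_of = {
    "fileA": "VOL001",
    "fileB": "VOL002",
    "fileC": "VOL001",
    "fileD": "VOL003",
}

# Group needed files by onsite volume: one mount per tape, with all
# files needed from that tape collected in the same pass.
mount_plan = defaultdict(list)
for f, vol in sorted(onsite_copy_of.items()):
    mount_plan[vol].append(f)

for vol in sorted(mount_plan):
    print(vol, mount_plan[vol])
# VOL001 ['fileA', 'fileC']
# VOL002 ['fileB']
# VOL003 ['fileD']
```

The per-file lookup is why offsite reclamation can take a while to visibly begin: the whole plan is built before any tape is mounted.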
Note that it may take some time for the reclamation to actually start, in that the server has to perform a lookup for every file on the offsite volume to determine what onsite volumes they are on, so as to gather all the input tapes into an efficient, ordered collection. This is obviously rather expensive, so it's best to let offsite tapes get as empty as possible by themselves, and do reclamation only if and when the tape supply is low. An offbeat approach to emptying nearly-empty volumes is to simply do a DELete Volume on them: being copy storage pool volumes, the deleted contents would be recreated on a fresh, local tape by the next BAckup STGpool. Note that this may be ill-advised in that you are eliminating your safety copy of client data. See also: ANR1173E Offsite volume recovery An offsite (copy storage pool) volume has evidenced a bad spot. How to recover its data? RESTORE Volume is not an option, as it is for primary storage pool volumes. You might proceed to perform a DELete Volume, to let the next BAckup STGpool recreate the contents of that volume - but that would be prudent only if you also have an onsite copy storage pool, as you would otherwise be gambling that the primary tapes are perfect. That is, the offsite volume you may be eager to delete may contain the only viable copy of some client data. If there is no onsite copy storage pool, the most prudent course would be to do a MOVe Data on the bad offsite volume, and then DELete Volume after all data (or as much as can be) has been moved. Offsite volume now onsite, but reclamation happening like offsite A volume returned from offsite had its Access mode changed from Offsite to Readwrite or Readonly; but reclamation of the volume is occurring like an offsite reclamation, using volumes from original storage pools which contain the files on the "offsite" volume.
Possible causes: - The Access mode of the offsite storage pool itself is perhaps Unavailable, rather than Readwrite or Readonly; - When the volume was returned to onsite, it was not Checked In. - If using DRM, you should not be trying to do on-site reclamation: you need to let reclamation empty the volumes, then request the return of volumes that are in a DRM state of VAULTRetrieve (empty). Upon their return, one way to handle them is 'MOVe DRMedia VolName WHERESTate=VAULTR TOSTate=ONSITERetrieve' for each. The volumes will then be available for Checkin as scratch tapes. OFS See: Open File Support OnBar Informix DB: Online Backup And Restore. OnBar is a utility that comes with the online product starting with the 7.21.UC1 version. This utility has the ability to: - Perform parallel backups and restores of the online product. - Perform automatic and continuous backups of the logical logs. - Use 3rd Party Storage Managers to store the online backups. OnBar keeps track of all the backup objects in its SYSUTILS table: the name and object ID from the storage manager. See also: TDP for Informix Online documentation (Books) Located in /usr/ebt/adsm/ From the Unix prompt: 'dtext', which invokes the DynaText hypertext browser: /usr/bin/dtext -> /usr/ebt/bin/dtext. Open File Support (OFS) TSM 5.2 facility for backing up open files. OFS is not a default install option: you would have to perform a custom install to get it. OFS cannot be turned on and off via options: once there, it is always there - you would need to use the setup wizard to remove OFS. The Windows INCLUDE.FS option can be used to specify whether a drive uses open file support. When using open file support, the entire volume is backed up via the snapshot method - not just open files. The idea is to capture the entire volume at a moment in time (hence the photographic term "snapshot").
While the backup is running, disk writes are intercepted by the LVSA, held until the LVSA can copy the original data (at the block level) to the snapshot cache, then allowed to go through. When it is time for TSM to back up the changed file, TSM backs up the original data from the snapshot cache, not the changed data. Performance: There is some additional overhead, which will vary with the amount of data being changed during the course of the backup. See also: Image Backup; Snapshot Open registration Clients can be registered with the server by the client root user. This is not the installation default. Can be selected via the command: 'Set REGistration Open'. Ref: Installing the Clients Contrast with "Closed registration". Open Systems Environment Name of licensing needed for AFS/DFS volume/fileset backup. If you try to use buta but lack the license, you will get error message: ANR2857E Session 19 with client AFSBKP has been rejected; server is not licensed for Open Systems Environment clients. If have license, start-up shows: ANR2856I Server is licensed to support Open Systems Environment clients. OpenVMS Is supported as a client using the client software called STORServer ABC (Archive Backup Client). http://www.storserver.com http://www.rdperf.com/RDHTML/ABC.HTML See also: ABC Operating system used by a client Shows up in 'Query Node' Platform. -OPTFILE ADSMv3+ client option for specifying the User Options File to use for the session. (In Unix, this means the client user options file: you cannot use -OPTFILE to point to an alternate client system options file.) Note that this command line option cannot be used with all commands, while the DSM_CONFIG environment variable method always works. And, obviously, this option which specifies an options file cannot be specified in the options file. See also: DSM_CONFIG; Platform Optical disc performance vs. tape Thus far, the performance of optical volumes/libraries is far below that of tapes, whether SCSI 1 or II. 
Ref: performance measurements in Redbook "AIX storage management" (GG24-4484), page 43/44. OPTIONFormat (HSM) Client User Options file (dsm.opt) option to specify the format users must use when issuing HSM client commands: STANDARD (long names) or SHORT. Default: STANDARD Options, client, query ADSM: 'dsmc Query Option' TSM: 'dsmc show options' Options, server, query 'Query OPTion' Options file, Windows Use 'dsmcutil update' and use "/optfile" to specify a different option file for any of the installed TSM services. .ora Filename suffix for Oracle files. Oracle backup See: TDP for Oracle Oracle database factoids Oracle .dbf files are initially allocated at a pre-specified size and populated with long runs of zero bytes. Some of the zero bytes are replaced with real data as applications write to the database. A .dbf file with a generous allocation may still consist mostly of long runs of zero bytes even after it has been in use for a while. Compression algorithms can achieve results much better than the typical three to one when working on long runs of zero bytes: such files compress down to nearly nothing. Order By SQL operation to sort the data in a query. This is expensive, so don't use unless you have to. ORM Offsite Recovery Media: media that is kept at a different location to ensure its safety if a disaster occurs at the primary location of the computer system. The media contains data necessary to recover the TSM server and clients. The offsite recovery media manager, which is part of DRM, identifies recovery media to be moved offsite and back onsite, and tracks media status. ORMSTate UPDate VOLHistory operand, to specify a change to the Offsite Recovery Media state of a database backup volume. The ORMSTATE options correspond to the DRM STATE shown in the Q DRMEDIA output. Orphaned stub file (HSM) A stub file for which no migrated file can be found on the ADSM server your client node is currently contacting for space management services. 
Reconciliation detects orphaned files and writes their names to the .Spaceman/orphan.stubs file. A stub file can become orphaned, for example, if you modify your client system options file to contact a different server for space management than the one to which the file was migrated. OS Operating System (Unix, Windows, etc.). .OST Filename extension, "Off Site Tape", for Backup Sets where Devclass is type FILE. Out-of-band database backup TSM 3.7 facility to be able to make a full backup of the TSM database, as for offsite purposes, without interfering with the prevailing full+incremental backup series. This backup can be used to restore the server db to a point in time. Out-of-space protection mode One of four execution modes provided by the 'dsmmode' command. Execution modes allow you to change the HSM-related behavior of commands that run under 'dsmmode'. The out-of-space protection mode controls whether HSM intercepts out-of-space conditions. See also: execution mode. Originating file system The file system from which a file was migrated. When a file is recalled using normal or migrate-on-close recall mode, it is always returned to its originating file system. "Out of band" Refers to an action which does not participate within an established regimen. In TSM, examples are: - Selective backups, as opposed to Incremental backups. - BAckup DB ... Type=DBSnapshot -OUTfile Command-line option for ADSM administrative client commands ('dsmadmc', etc.) to capture interactive command results in the file named in "-OUTfile=FileName". Note that this output is "narrow". Alternately, you can selectively redirect the output of commands by using ' > ' and ' >> '. Note that this output is supposed to be "wide" - but the output of some commands like 'q stg' is still narrow.
See also: Redirection of command output Ref: Administrator's Reference Output width See: -COMMAdelimited; -DISPLaymode; SELECT output, column width; Set SQLDISPlaymode; -TABdelimited OVFLOcation Keyword for Primary and Copy Storage Pool definitions specifying a string identifying the location where volumes will go when they are ejected from the (full) library when processed by the MOVe Media command. See: MOVe Media, Overflow Storage Pool Overflow Storage Pool An overflow storage pool can be used for both primary and copy storage pools and allows, when a library becomes full, the removal and tracking of some of the volumes to an overflow location. An overflow storage pool is not a physical storage pool; it is a location name where volumes are physically moved to, having been removed from a physical library. Ref: Admin Guide, "Managing a Full Library" See also: MOVe MEDia; OVFLOcation; Query MEDia OVFLOwlocation You mean: OVFLOcation Owner The owner of backup-archive files sent from a multi-user client node, such as AIX. OWNER SQL: Column in BACKUPS table. Is the owner of the file as defined on the client system. In Unix, this would normally be the username of the owner. If the username is not defined in the passwd system, such that 'ls -l' shows the owner as a UID number instead of a username, then the same numeric will show up in the OWNER column. Paging space This is not really an *SM topic, but it can affect *SM server functionality, so I include these notes... Paging space is in effect "tidal" space for real memory. It is the space which makes virtual memory possible. As such, its size needs to be proportional to real memory size for it to be meaningful - and for the system to be able to function. Sadly, we've seen some operating systems set up by people who don't understand virtual memory, and TSM suffers as a result.
For example, we've seen a major AIX-based TSM system, with a hefty 12 GB of real memory, given 2 GB of paging space...as if the person who did it was referring to a worksheet which the site has been using for the past eight years for setting up any AIX system, regardless of size. Such a system is effectively being put into a "virtual=real" state where it's like the system is supposed to run in real memory only - which it architecturally can't...and will crash processes as there's no room. (AIX will issue a SIGDANGER signal to processes for them to voluntarily quit, before it gets drastic or fails utterly.) In general, it is healthy for paging space to be about twice the size of real memory. A specific recommendation from the AIX performance Analysis group (in APAR IX88159): total paging space = 512MB + (memory size - 256MB) * 1.25 Parallel backups See: Backups, parallelize Partial Incremental Is an Incremental which operates without a list of the Active files having been obtained from the *SM server, and thus does not necessarily back up all files in a file system, does not cause expiration or rebinding of files on the server, and ignores the frequency attribute of the Copy Group. Types: INCRBYDate, which operates only upon the date of the last Incremental backup; and Subset Incremental, which addresses only the file system objects which you specify. Partition disks, should you? The question comes up as to whether disks used for the TSM Database and Recovery Log should be used as whole disks or partitioned ("logical volumes", in AIX parlance). If you have smallish disks (2-4GB), by all means use them as whole disks, as that's a nice, modular size. With the larger disks more common today, it is better to partition them into units of about 4 GB each. This modular approach yields greater TSM parallelism in multiple TSM threads, and allows you to add dbvols in nice unit sizes as the db grows. 
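The paging space sizing formula from APAR IX88159 (see "Paging space", above) can be checked with a quick sketch; Python is used here purely as illustration of the arithmetic:

```python
# Sketch of the APAR IX88159 paging space recommendation quoted above:
#   total paging space = 512MB + (memory size - 256MB) * 1.25

def recommended_paging_mb(real_memory_mb):
    """Recommended total paging space, in MB, per APAR IX88159."""
    return 512 + (real_memory_mb - 256) * 1.25

# For the 12 GB (12288 MB) system in the example above, this yields
# 15552 MB - far more than the 2 GB that the mis-configured site allotted.
print(recommended_paging_mb(12288))
```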
The basic advantage of partitioning also pertains: it isolates the effects of a surface fault, which then affects only that partition instead of the whole disk, as it would if the disk were unpartitioned. This makes it far less painful and time-consuming to swap that LV out of a mirrored set and swap in one of those nice replacement LVs you have set aside. PASsword ADSMv2: Macintosh and Windows clients only. ADSMv3: All clients. The PASsword option specifies an ADSM password. If this option is not used and your administrator has set authentication on, you are prompted for a password when you start an ADSM session. Ostensibly, this password would serve to satisfy the first requirement for a password in the Generate case, and every occurrence in Prompt mode. But if it's changed in the server, the client has to be brought into sync. -PASsword Option you can code on the client command line ('dsmc', 'dsmadmc', etc.) to specify the client password for interacting with the server. Example: 'dsmadmc -id=MyId -pas=MyPw'. Note that you will not have to do this for basic 'dsmc' operation when "PASSWORDAccess Generate" is active for your client, except when you are performing cross-client operations, where you have to specify the password of the alien client. But you *do* have to specify it when invoking 'dsmadmc' because the password involved is not that of the node, but rather for the administrator specified via -ID=____. The -PASsword option is ignored when PASSWORDAccess Generate is in effect: you cannot provide it on the command line to establish the client-stored password. The client is supposed to alter the argv[] strings so that the password is not revealed to other users in the system when they run the 'ps -efl' command. Where you do have to specify -PASsword=____, an issue for interpreted scripts is that the password apparently has to be coded into the script, thus exposing it in that way.
This can be circumvented by coding the password itself in a file which is accessible only to the authorized user, or group of authorized users, and having the script read the password from that file. Another approach is to engineer a rather trivial proxy agent which would accept a TSM command string you provided, which it would itself invoke with the password it knows about, and then pass back the results. Such an agent could be a command where the password is encrypted into the binary, or a minor daemon. For query-only processing you might define an administrator ID with only query capability, and not be concerned about the password being known. This lessens concerns, but is nevertheless a privacy/security issue in all the server information being potentially available to anyone. See also: -NODename Password, administrator, change/reset 'UPDate Admin Admin_Name PassWord' See also: Administrator passwords, reset Password, client, change at client dsmc SET Password Password, client, establish without contacting the server Windows: You can establish the client password in the registry without contacting the TSM server by issuing the command: dsmcutil updatepw /node:nodename /password:xxx /validate:no This is particularly necessary when client option SESSIONINITiation SERVEROnly is in effect, or the equivalent spec is in effect on the server side in the Node's definition, such that the client cannot initiate a session with the server. Password, client, reset at server 'UPDate Node NodeName PassWord' Password, client, rules 1-64 chars: A-Z, 0-9, -, ., +, & allowed; % not allowed. Password, client, update from server 'UPDate Node NodeName PassWord' Node must not be currently conducting a session with the server, else the command fails with error ANR2150E. Password, client, where stored on client When "PASSWORDAccess Generate" is selected in the Client System Options File, the encrypted password is stored on the client as follows: Unix: Per the PASSWORDDIR option.
Defaults: AIX ADSM: /etc/security/adsm/SrvrName AIX TSM: In the baclient directory in a file called X.pwd where X is a long alphanumeric name made up by dsm*. Other Unixes: /etc/adsm/SrvrName Macintosh: Per the PASSWORDDIR option. Default: In the install directory. Windows: In Registry key HKEY_LOCAL_MACHINE\SOFTWARE\IBM\ADSM \CurrentVersion\BackupClient\Nodes \ Data name: Password The encrypted password is stored in the Registry on a per-node basis (a separate password is generated for each node used to connect to the server). The SHOWPW command of the DSMCUTIL utility may be used to decrypt the password for a specified node and display it in clear text. 2000: Under SOFTWARE, string ADSM The password was established in the server 'REGister Node' command, and becomes set on the client when a non-trivial command such as 'dsmc Query SCHedule' is run ('dsmc Query Option' is too trivial) by the "superuser" (root in Unix; Administrator in Windows). Note that if you have multiple server stanzas in your options file and have "PASSWORDAccess Generate", you will be prompted once for each as you use it, and it will be stored under that server stanza name. Note that if you upgrade the operating system (e.g., from Windows NT to Windows 2000), the place where the password was stored will likely be replaced, obliterating the previously stored passwords. See also: /etc/adsm; /etc/security/adsm; PASSWORDDIR Password, change client's 'dsmsetpw' (an HSM command) NT: 'dsmcutil updatepw' Password authentication Require password for administrators and client nodes to access the server per REGister Node and Set AUthentication. Password authentication, turn off Via TSM server command: 'Set AUthentication OFf' Password authentication, turn on Via TSM server command: 'Set AUthentication ON' Password expiration, node Per REGister Node, PASSExp= .
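The character rules listed above under "Password, client, rules" can be expressed as a small validity check. A sketch only: it assumes the quoted rule set (1-64 chars; letters, digits, -, ., +, & allowed; % not) is complete, and it accepts lowercase letters on the assumption that TSM treats passwords case-insensitively:

```python
import re

# Sketch of the client password rules quoted above under
# "Password, client, rules". Lowercase letters are accepted here on the
# assumption that TSM folds password case; verify against your level.
_PASSWORD_RE = re.compile(r'^[A-Za-z0-9.+&-]{1,64}$')

def valid_tsm_password(pw):
    """True if pw satisfies the quoted character and length rules."""
    return bool(_PASSWORD_RE.match(pw))
```

For example, a candidate containing '%' or running past 64 characters would be rejected before you ever bother the server with a REGister Node or UPDate Node.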
Password expiration period, query In server: 'Query STatus', look for "Password Expiration Period" Password expiration period, set 'Set PASSExp N_Days' 1-9999 days. (Defaults to 90 days.) Password length, query Do 'Query STatus', view "Minimum Password Length" Password length, set See: Set MINPwlength Password security The *SM (encrypted) password is not sent in the clear: During authentication, the client sends the server a message that is encrypted using the password as the key. The server knows what the decrypted message should be, so if the wrong password was used to encrypt the message, then the authentication will fail. PASSWORDAccess Option for Client System Options File (PASSWORDAccess Generate) to specify how your *SM client node password is to be handled. Code within a server stanza (under the appropriate SErvername spec). "Prompt" will cause a prompt for the password every time the server is accessed. This is the default - but should not be used with HSM. If used with Shared Memory access (COMMMethod SHAREDMEM), the client must either be root or be the same UID under which the server is running. "Generate" suppresses password prompting, causing the password to be encrypted and stored locally (in /etc/security/adsm/SrvrName), and a new password to be generated when the old one expires. Causes dsmtca (q.v.) to run as root. Use this when HSM or the web client are involved. "Generate" should be used with Shared Memory access (COMMMethod SHAREDMEM) when the client is not root or does not match the UID under which the server is running. To establish the password: As superuser, perform any client-server operation, like 'dsmc q f'. Note that if you have multiple server stanzas in your options file, you will be prompted once for each as you use it. (If the generated password file turns out to be zero-length, look for its file system being full.) Generate is unsuitable for use with various APIs, such as TDP for Domino with 'DOMDSMC /ADSMNODE', as a security feature.
(To use Generate, you would have to code NODENAME in dsm.opt.) TDP for Oracle similarly prohibits Generate. When "Generate" is in effect, you cannot use the NODename option, because of the need to reference the /etc/security/adsm password, so you must not have the option to fake the node name. APAR IC11651 claims that if PASSWORDAccess is set to Generate in dsm.sys, then dsm.opt should *not* contain a NODE line. See also: ENCryptkey; MAILprog; PASSWORDDIR PASSWORDDIR Option for Client System Options File to override the natural directory which the TSM client should use to store the encrypted password file when the PASSWORDAccess option is set to GENERATE. Default: Is the most appropriate place for the given operating system: AIX: /etc/security/adsm/SrvrName Other Unixes: /etc/adsm/SrvrName NT: Registry. See also: /etc/adsm; /etc/security/adsm; Password, client, where stored on client; PASSWORDDIR Patch levels E-fix: An emergency software patch created for a single customer's situation. Limited Availability (LA) patch: A limited release of a patch just before it is generally available. General-availability (GA) patch: Intended to be distributed to all users. These patches have completed the verification process. Ref: Tivoli Field Guide: An Approach to Patches Path, drives SQL query SELECT COUNT(*) AS - "Number of Free Drives" from drives - WHERE DRIVE_NAME NOT IN (SELECT - DESTINATION_NAME FROM PATHS WHERE - ONLINE='NO') AND ONLINE='YES' AND - DRIVE_STATE IN ('EMPTY','UNKNOWN') Paths As of TSM 5.1, the procedure for defining a tape library or tape drive changed: it is now necessary to define a data path for all libraries and drives, including local libraries and drives. The path definitions are necessary for the server-free product enhancements. Pct Logical Header in Query STGpool F=D output. Specifies the logical occupancy of the storage pool as a percentage of the total occupancy.
Logical occupancy represents space occupied by files which may or may not be part of an Aggregate. A value under 100% indicates that there is vacant space within the Aggregates, which Reclamation can reclaim in its compaction of Aggregates. A high value is desirable and means that a small fraction of the total occupancy in your storage pool is vacant space used by logical files that have been deleted from within aggregates. There are various reasons why this value may appear to remain high, including: - Most of the storage pool occupancy is attributed to non-aggregated files that were stored using a pre-Version 3 server; - You are not getting much aggregation because client files are very large or because your settings for the client TXNBytelimit option or the server TXNGroupmax option are too small; - If logical files within aggregates are closely related, they may all tend to expire at the same time, so entire aggregates get deleted rather than leaving aggregates with vacant space. - Reclamation of sequential storage pools removes vacant space within aggregates and raises the %Logical value for that pool. See also: Logical file Pct Migr Header in Query STGpool output. Estimates the percentage of data in the storage pool that can be migrated; that is, migratable. It is this value that is used to determine when to start or stop migration. Pct Migr indicates the amount of space occupied by committed files, as contrasted with the Pct Util value, which can reflect allocated, pending file occupancy when a client data transaction is in progress. Caching: Pct Migr does *not* include space occupied by cached copies of files. For example, an archive storage pool that is 99% full with a Pct Migr of 15.1 means that 15.1% of the data is new: an image of it has not yet been migrated down to the next storage pool in the hierarchy, such that what's in this higher level storage pool would represent caching.
The other 83.9% of the files are old, and were previously migrated with the cached image left in the storage pool. A value of 0.0% says that all data has already been migrated. For a disk storage pool, a high Pct Util and a low Pct Migr reflects caching, with the data being in both places. For sequential devices (tape), reflects the number of volumes containing viable data; and Pct Util shows how much of that space is actually used. See also: Cache; Migration; Pct Util - Query STGpool Pct. Reclaimable Space Report element from Query Volume. (SQL: PCT_RECLAIM) This is how much of the volume is empty and reclaimable, reflecting all empty space: - places where whole Aggregates have been logically deleted; - where space within Aggregates has been freed. Contrast with Pct Util, which does not account for voids within Aggregates. Pct. Reclaimable Space is more in tune with what Reclamation will address: space within aggregates. Unfortunately, though percent reclaimable space may be high for some volumes, their percent utilization may be high as well, which will make for a lot of data movement during reclamation. Frustratingly, volumes further down in percent reclaimable space levels may have far smaller percent utilizations, and would reclaim much faster. The Pct. Reclaimable Space figure climbs as a reclamation or MOVe Data proceeds. Seeing the reclaimable space go from a considerable value to 0 in a MOVe Data operation suggests that all the reclaimable space was whole Aggregates, as in the case of a tape volume containing predominantly large files, with almost no possibility of space being logically freed within Aggregates. Pct Util, from Query FIlespace Column in 'Query FIlespace' server command output, which reflects the percent utilization of the object as it exists on the client, such as how full a Unix file system is. Note that this does *not* reflect the space occupied in TSM.
See also: Capacity Pct Util, from Query STGpool Column in 'Query STGpool' server command output. Specifies, as a percentage, the space used in the storage pool. Disk: Reflects the total number of disk blocks currently allocated by TSM. Space is allocated for backed-up, archived, or space-managed (HSM) files that are eligible for server migration, cached files that are copies of previously migrated files, and files that reside on any volumes that are offline. Note that the Pct Util value has few decimal places, which limits the accuracy of values computed with it, as in multiplying times the Estimated Capacity value to hope to yield the amount of data stored in the stgpool. Remember that the value is a percent number: to use it in computation, you must adjust. For example: for a Pct Util of 0.2, its corresponding computational value is 0.002 . The Pct Util value from the query corresponds to the PCT_UTILIZED value from a 'Select from Stgpools' - and note that the PCT_UTILIZED value has been seen to be lower than the Pct Util value (e.g., 0.2 for the query, 0.1 for the Select). Note that Pct Util can be higher than the value of Pct Migr (the migration control percentage) when a client data transaction such as a Backup is in progress. The Pct Util value reflects the amount of space actually allocated (while the transaction is in progress). Contrast with the value for Pct Migr, which only represents the space occupied by *committed* files. At the conclusion of the transaction, Pct Util and Pct Migr become synchronized. See also: Pct Migr Pct Util, from Query Volumes (SQL: PCT_UTILIZED) In a Query Volumes report, reflects the space taken up by unexpired data: non-aggregated files or, if aggregated, the amount of space occupied by whole aggregates (regardless of any empty, expired space within them, yielding a somewhat inflated number versus Pct. Reclaimable Space). Pct Util is more in tune with what MOVe Data will address: whole aggregates.
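The percent-to-fraction caution under "Pct Util, from Query STGpool", above, is worth a tiny worked example (pool sizes here are hypothetical):

```python
# Sketch: estimating the data stored in a storage pool from
# Query STGpool output. Pct Util is a percentage, so divide by 100
# before multiplying against Estimated Capacity.

def stored_mb(est_capacity_mb, pct_util):
    """Approximate MB stored, given Estimated Capacity and Pct Util."""
    return est_capacity_mb * (pct_util / 100.0)

# A Pct Util of 0.2 corresponds to a computational factor of 0.002,
# so a 1,000,000 MB pool at 0.2% holds roughly 2,000 MB.
print(stored_mb(1000000, 0.2))
```

Remember the caveat above: the one-decimal rounding of Pct Util limits the accuracy of any figure derived this way.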
When the Volume Status is Filling: the value is *SM's computation of the amount of data written versus the volume's estimated capacity. (Note that if you have short retention periods, you can have the unusual situation of files expiring as the tape fills, and so can also exhibit characteristics of Full volumes, as below.) When the Volume Status is Full: the value will be 100% at the time that *SM encountered End Of Tape (EOT) when writing the volume, and thereafter will reflect the amount of data *logically* remaining on the volume after file expirations. (The volume itself remains unmodified since that time, and in the real, physical sense it really is full.) See also: Filling; Full; Pct Migr Tapes get marked "full" when *SM hits the end of volume. If you are getting media errors, this could happen prematurely. Note that the value has only one decimal position (e.g., 95.1), which may be insufficient to reflect a tiny amount of data on a tape: that is, there may still be data on the tape though the Pct Util is 0.0 . Beware Migration, at some TSM levels, not updating the Pct Util values for involved tapes until after Migration has concluded! Note: Disk pool space is allocated by a backup session, in anticipation of the requirements of the backup session. It will show up as percent utilized and not percent migratable. See also: DLT Pending Typical status of a tape in a 'Query Volume' report (not 'Query LIBVolume'), reflecting a sequential access volume which has been purged of all data (it's empty), but which is waiting for the STGpool REUsedelay number of days to elapse before it can be re-used. Offsite volumes should have a REUsedelay value at least as long as the oldest database backup to be kept, to guarantee the recovery of expired files. Pending volumes are re-evaluated every hour, beginning 60 minutes after the server is started.
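The Pending-to-scratch timing just described can be modeled simply. A simplification, not the server's actual internals: the "Date Became Pending" value comes from 'Query Volume ... F=D', REUsedelay from the storage pool definition, and the hourly re-evaluation means the real return may lag this earliest-eligible time:

```python
from datetime import datetime, timedelta

# Sketch: earliest time a Pending volume can return to scratch,
# per the REUsedelay behavior described above. The server actually
# re-evaluates Pending volumes hourly, so the real return may be later.

def scratch_eligible_after(date_became_pending, reusedelay_days):
    """Earliest datetime the volume can leave Pending, per REUsedelay."""
    return date_became_pending + timedelta(days=reusedelay_days)

# e.g., pending since 2004-10-01 with REUsedelay=5 -> eligible 2004-10-06.
```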
(Changing the REUsedelay value to 0 does not cause the Pending volumes to immediately return to scratch: it will happen in the next hourly examination.) To return a volume to the Scratch pool before the REUsedelay expires (as when you're desperate for scratches and cannot wait for the REUsedelay period), just do 'DELete Volume ______'. ('UPDate Volume' cannot return a volume to Scratch status.) ('DELete Volume' cannot succeed on a Pending volume while the Space Reclamation process that cleared it is still running, clearing other volumes that it also found reclaimable.) Messages: ANR1342I when volume becomes pending; ANR1341I when automatically deleted from stg pool per REUsedelay. See also: Empty Pending, when volume became 'Query Volume ______ F=D' examine "Date Became Pending" value. Pending volumes 'Query Volume STatus=PENDing' Percent utilization of storage pool(s) 'Query STGpool [STGpoolName]' See also: Query OCCupancy perfctr.ini ADSM 3.1.0.7 introduced a new performance monitoring function which includes this file. See APAR IC24370 See also: dsmccnm.h; dsmcperf.dll Performance topics See: 3590 performance; Backup performance; Database performance; Directory performance; DNSLOOKUP; Expiration performance; Migration performance; MOVe Data performance; MVS server performance; Netware restore performance; NT performance; Reclamation performance; Restoral performance; Server performance; Storage pool, disk, performance; Storage pool volumes and performance; Sun client performance; Tape drive performance; Tape drive throughput; V2archive; Web Admin performance issues Phantom tape ejections See: Ejections, "phantom" Phantom volume, remove See: Storage pool volume, long gone, delete Physical file A file, stored in one or more storage pools, consisting of either a single logical file, or a group of logical files packaged together (an aggregate file, in small files aggregation).
See also: aggregate file; logical file Physical occupancy The occupancy of physical files in a storage pool. This is the actual space required for the storage of physical files, including the unused space created when logical files are deleted from aggregates (small files aggregation). See also: Physical file; Logical file; Logical occupancy Physical Space Occupied (MB) Report column from Query OCCupancy server command: The amount of physical space occupied by the file space. Physical space includes empty space within aggregate files, from which files may have been deleted or expired. PING SERVER ADSMv3 server command to test the connection between the local server and a remote one. Syntax: 'PING SERVER ServerName' -PIck Client option, as used with Restore and Retrieve, to present a numbered list of objects matching the file specification you entered, allowing you to select or "pick" from the list just those objects you want back. Each object that you select will get an 'x' mark next to it. When all desired have been selected, enter 'O' (ok) to proceed with the restoral. Note that if in invocation you entered a destination specification, you can pick only one item from the list, which is the singular object to go to that destination. -PIck is of particular value when you need to restore an Inactive version of a file from among many such versions. To perform such an operation, restoring to an alternate name so as to preserve the original, do like: dsmc restore -ina -pick currentfilename currentfilename.old See also: Inactive files, restore selectively PIT Abbreviation for Point In Time (restoral). See: Point-In-Time restoral; GUI vs. CLI -PITDate Point-In-Time Date option in ADSMv3, to restore Active files (only) up to the date specified. (The format of the date must be that specific to your system, per the prevailing DATEformat. You can perform a Query Restore to see the format in use.) Will use the No Query Restore protocol. 
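A hedged sketch of such a point-in-time restoral invocation (paths and date are hypothetical; the date and time must match your DATEformat/TIMEformat settings). The command is echoed as a dry run, since dsmc may not be present where you test — drop the echo to execute:

```shell
# Hedged sketch: restore a directory tree as of a point in time.
# Paths and date/time values are hypothetical samples.
pitdate="10/01/2004"
pittime="17:00:00"
echo dsmc restore -pitdate=$pitdate -pittime=$pittime -subdir=yes \
    "/home/user/*" /tmp/restored/
```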
PITDate will consider every backup made *until* the indicated date. Performance note: Is reported to cause every tape to be mounted and every file to be moved though few may actually be needed for replacing client files. Contrast with "FROMDate" and "TODate". See also: Inactive files, restore selectively; Point-In-Time restoral -PITTime Client option, used with the PITDate option, to establish a point-in-time for which you want to display or restore the latest version of your backups. Files or images that were backed up on or before the date and time you specified, and which were not deleted before the date and time you specified, are processed. Backup versions that you create after this date and time are ignored. This option is ignored if the -PITDate option is not specified. Syntax: PITTime time where the time specifies a time on a specified date. If you do not specify a time, the time defaults to 23:59:59. Specify the time in the format you selected with the TIMEformat option. When you include the TIMEformat option in a command, it must precede the FROMTime, PITTime, and TOTime options. See also: Inactive files, restore selectively; Point-In-Time restoral Planet Tivoli A technical, solutions-oriented, systems management conference that offers attendees an in-depth look at the Tivoli management solution and the industry surrounding it: your opportunity to mingle with your industry peers. Go to http://www.tivoli.com/news/, click on Planet Tivoli in sidebar. Platform As in 'Query FIlespace' report. The platform designation reflects the operating system under which the client node last contacted the server. There is no command to change this value. For dsm and dsmc clients, reflects the operating system name (e.g., "AIX", "IRIX", "Linux", "SUN SOLARIS", "WinNT"). For the API, reflects the name of the application used in the dsmInit() call. 
Note that inadvertently accessing the server with a nodename associated with a different platform type can cause real problems: re-accessing it from the original platform may reset the platform designation; but the problem access may have caused the server to latch onto an inappropriate "level" designation, which cannot be reversed like the platform designation can (see msg ANR0428W). See also: Query Node Point-In-Time restoral (PIT) ADSMv3 feature for Query and RESTORE. Recovers a file space or a directory to a previous condition, as used to eliminate data corruption known to have occurred at a certain time, by restoring to before that time. It operates by restoring specified file system objects known at that time. It is vital that your retention values for both files and directories cover the age to which you want to recover. (A capricious DIRMc setting could cause needed directories to not be available for the restoral.) Note that a Point-In-Time restoral does NOT remove new-name objects that were created after that point in time: it does not reinstantiate the file system to what it entirely looked like at that time, but rather just brings back files which were backed up at that time. Point-In-Time restoral is supported on the file space, directory, or file level. IMPORTANT: Use the command line interface (CLI) version of the client to perform Point-In-Time restoral, rather than the GUI! See "GUI vs. CLI". In concert with that, the Admin Guide advises: "Performing full incremental backups is important if clients want the ability to restore files to a specific time. Only a full incremental backup can detect whether files have been deleted since the last backup. If full incremental backup is not done often enough, clients who restore to a specific time may find that many files that had actually been deleted from the workstation get restored. As a result, a client's file system may run out of space during a [PIT] restore process." See: GUI vs. 
CLI; -PITDate, -PITTime Policy domain A policy object that contains one or more policy sets and management classes which control how ADSM manages the files which you back up and archive. Client nodes are associated with a policy domain. See policy set, management class, and copy group. Policy domain name associated with a client node, query 'Query Node' shows node name and the Policy Domain Name associated with it. Policy domain name associated with a client node, set Done via 'REGister Node ...' (q.v.). Policy domain, copy 'COPy DOmain FromDomain ToDomain' Name can be up to 30 characters. Policy domain, define 'DEFine DOmain DomainName [DESCription="___"] [BACKRETention=NN] [ARCHRETention=NN]' Since a client node is assigned to one domain name, it makes sense for the domain name to be the same as the client node name (i.e., the host name). Policy domain, define Policy Set in 'DEFine POlicyset Domain_Name SetName [DESCription="___"]' Policy domain, delete 'DELete DOmain DomainName' Policy domain, policy set which has been activated, query 'Query DOmain' will show the Activated Policy Set currently in effect. Policy domain, query 'Query DOmain' for basic info. 'Query DOmain f=d' for detailed info. Policy domain, update 'UPDate DOmain DomainName [description="___"] [backretention=NN] [archretention=NN]' Policy set A policy object that contains a group of management class definitions that exist for a policy domain. At any one time, there can be many policy sets within a policy domain, but only one policy set can be active. So what good is that? Not much, really. It gives you a really gross means of switching from one Policy Set to another via administrator action, but no means of selecting one or another from the client end. See: Active Policy Set; Management Class Policy set, activate To activate a policy set, specify a policy domain and policy set name. Be sure that you have done: 'VALidate POlicyset DomainName PolicysetName' beforehand. 
When you activate a policy set, the server: - Performs a final validation of the contents of the policy set - Copies the original policy set to the active policy set Command: 'ACTivate POlicyset DomainName SetName' Policy set, active, update You cannot update the ACTIVE policy set. After a policy set has been activated, the original and the ACTIVE policy sets are two separate objects: updating the original policy set has no effect on the ACTIVE policy set. To change the ACTIVE policy set you must do the following: - Copy the ACTIVE policy set to a policy set with another name (or just use the one from whence the ACTIVE one came, as 'q domain' shows). - Update that policy set. - Validate that policy set. - Activate that policy set, to have the server use the changes. Policy set, copy 'COPy policyset DomainName OldSet NewSet' Policy set, define 'DEFine POlicyset Domain_Name SetName [DESCription="___"]' Policy set, delete 'DELete policyset DomainName Setname' Policy set, query 'Query policyset [DomainName [Setname]] [f=d]' Policy set, rename There is no command to simply rename a policy set; you have to: - 'COPy policyset DomainName OldSet NewSet' - 'UPDate policyset DomainName NewSet DESCription="___"' - 'VALidate POlicyset DomainName NewSet' - 'ACTivate POlicyset DomainName NewSet' - 'DELete policyset DomainName OldSet' Policy set, update The policy set to be updated cannot be the ACTIVE policy set. 'UPDate policyset DomainName SetName DESCription="___"' Policy set, validate 'VALidate POlicyset DomainName PolicysetName' There must be a default management class defined for the Policy Set. Polling See: SCHEDMODe Port number, for 3494 communication Installation of the LMCP should result in a /etc/services entry looking like: "lmcpd 3494/tcp # IBM Automated Tape Library Daemon", to permit TCP/IP communication via a TCP port number common between the AIX host and the 3494. By default, port '3494' is used, which matches the default at the 3494 itself. 
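The "Policy set, active, update" copy/update/validate/activate sequence above lends itself to a dsmadmc macro; a hedged sketch (the domain/set names, the particular UPDate COpygroup change, and the dsmadmc credentials are all hypothetical):

```shell
# Hedged sketch: capture the policy set update sequence in a macro file,
# then run it with something like:
#   dsmadmc -id=admin -password=xxxxx macro /tmp/polupdate.mac
# MYDOMAIN, WORKSET, and VERExists=3 are placeholder examples only.
cat > /tmp/polupdate.mac <<'EOF'
COPy POlicyset MYDOMAIN ACTIVE WORKSET
UPDate COpygroup MYDOMAIN WORKSET STANDARD STANDARD VERExists=3
VALidate POlicyset MYDOMAIN WORKSET
ACTivate POlicyset MYDOMAIN WORKSET
EOF
cat /tmp/polupdate.mac
```

Keeping the steps in a macro file documents the change and makes it repeatable, rather than typing the sequence interactively.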
If to be changed, be sure to keep both in sync. Also, if using other than the default (3494) you need to code the port number in /etc/ibmatl.conf . Port number, in 3494 LAN Status menu When you define LAN host specification via the 3494 console, that results in an assigned port number (100, 101, 102) which is visible in the LAN Status display. The number is purely for internal identification, for the 3494's own purposes, and has nothing to do with TCP/IP port numbers (as you would find in Unix's /etc/services). Port number for a session See: Session port number Port numbers (ports) Internet network addressing and access is unique by: 1. Host 2. Port number 3. Protocol (UDP, TCP) Port numbers range from 0 to 65535 with 0-1023 being for root use. In the Internet world, port numbers are formally assigned (see http://www.iana.org/assignments/ port-numbers) but within a site the numbers may be used as needed. Note that TSM has not formally registered its port numbers - which have been taken for other purposes, internationally - which in some unusual contexts may cause a non-TSM application to attempt to interact with the TSM server, with resultant protocol mismatch failure (perhaps msg ANR0444W, ANR0484W). Port numbers, for TSM client/server TSM conventionally uses the following TCP/IP port numbers, for TCP communication: 1500 Server port default number for all session types. Use the TCPADMINPort server option to specify a port to separately handle sessions other than client sessions (admin, server-to-server, SNMP subagent, storage agent, library client, managed server, event server sessions). Use the TCPPort server option to specify a port to separately handle just client sessions. (The distinction between the two options facilitates firewall configuration.) Startup msg: ANR8200I. Settable via server option TCPADMINPort. Specify via TCPPort server option and DEFine SERver LLAddress and SET SERVERLladdress. 
This is also the default port number for the client to contact the TSM server, settable via the client TCPPort option. See also client option LANFREETCPport. 1501 Client default port for backups (schedule) on which the client listens for sessions from the TSM server. Per server's Node definition, LLaddress spec. Settable via client option TCPCLIENTPort. Note that this port exists only when the scheduled session is due: the client does not keep a port when it is waiting for the schedule to come around. 1510 Client port for Shared Memory, settable via client option SHMPort. (Startup msg ANR8285I). The TSM Storage Agent also listens on this shared memory port by default, settable via client option LANFREEShmport. 1510 Server TCP/IP port number when using Shared Memory, settable via server option SHMPort. 1521 SNMP subagent default port, settable via server option SNMPSUBAGENTPORT. 1543 ADSM HTTPS port number. 1580 Administrative web interface default (settable via server option HTTPPort). 1580 Client admin port. 1581 Client port default to respond to web administrative interface or Web Client. Settable via client option HTTPport. The Trusted Communication Agent client will use a non-privileged port number (>1023). Port 1500 is for the initial communication with the server, but once established, a separate session is forked off with its own port: when the client connects to the assigned port, the server rolls the client over to another random port to keep the initial port open for further connections. To avoid this 'random' choice, consider using Polling Mode scheduling for clients outside the firewall: the clients will then only use the TCP port specified in the client options file. Establishing separate sessions allows multiple client sessions to be established to the server at one time. When the *SM client establishes a session with the server, it randomly selects a socket (port) number that it calls out on. 
The adsm server then uses that client port number for return transmissions. If using server-initiated backups, you can set the client's port number for the server to use in the client's system options file. If you do this, then you will have to set up the client's TCP/IP to reserve that port number. The "tcpport" option is how the initial port number is specified. A separate session is forked once the initial contact is made, but there is no way to predetermine what port number will be used: the attempts will increment the port number until an established connection is made (or the client times out). The Tivoli Event Client port may be set via the server option TECPort. Note that the client port number shows up on msg ANR0406I when the session starts, like the "4330" in: (Tcp/Ip 100.200.300.400(4330)). See also: DEFine SERver; Firewall support; HTTPport; TCPCLIENTPort; TCPPort; WEBPorts POSTNschedulecmd Like POSTSchedulecmd, but don't wait. See: POSTSchedulecmd POSTSchedulecmd Client System Options file (dsm.sys) option to specify a command to be run after running a schedule, and wait for it to complete. (Cannot be used on the command line.) If you don't want to wait for the post-schedule command to complete, code POSTNschedulecmd instead. In Unix, the command is run as a child process of the dsmc parent. Placement: code within server stanza. Code the command string within either single or double quotes: you can then code either double or single quotes inside as needed. Avoid coding this option with a blank or null value, as it may cause the scheduled command to fail. Caution: This option is perhaps best used with SCHEDMODe POlling, where triggering is under the control of the client. Using SCHEDMODe PRompted can be problematic as DEFine CLIENTAction tasks can hit the client at random, and have nothing to do with work that you set the option up to do. Example need: To restart a database server after backing up the database. 
Verify via 'dsmc query options' in ADSM or 'dsmc show options' in TSM; look for "PostSchedCmd". See also: PRESchedulecmd Pre-fetch See: NOBUFPREFETCH Pre-labeled tapes, a good idea? One can order tapes pre-labeled (standard tape labels written on the media, and a barcode which presumably matches); but is that a viable thing to do? There have been reports of customers satisfied with the performance of the pre-labeled tapes they received from a vendor - and some who have had bad experiences. (The label should be ANSI standard, ASCII.) The reality is that you simply do not know for certain that the supposedly pre-labeled tapes have been pre-labeled or that it was done compatibly. It costs little to have TSM label tapes for you, and you will be assured of proper results by having it do so. Remember that you as the TSM technician are ultimately responsible for results - not the vendor, or the site personnel who ordered the tapes. Error msgs: ANR8353E ANR8355E ANR8472I ANR8780E ANR8783E Precedence, Include-exclude order See: Include-exclude order of precedence Precedence of operations See Admin Guide topic "Preemption of Client or Server Operations". Preemption (pre-emption) TSM gives priority to more important processes, as when a Restore requires as input a tape that is currently being read by a BAckup STGpool: the storage pool backup process is terminated to relinquish the volume to the Restore. APAR IX72372 added v3 Admin Guide topic "Preemption of Client or Server Operations", which lists operations and priority order. Control: NOPREEMPT option in server options file (dsmserv.opt). Note that you can define a PRIority value on an administrative schedule - which defaults to a middle priority value of 5. Note that preemption may seem not to work, in that TSM is pursuing completion of a unit of work before interrupting that process, such as reclamation of a tape with a single, very large backup file on it. 
It has also been observed that a high-priority operation (e.g., data restore) will only pre-empt a process / session with the same devclass. When a client backup schedule is interrupted by preemption, it will usually be able to resume where it left off, as seen in its backup log containing message "ANS1809E Session is lost; initializing session reopen procedure." Msgs: ANR0487W; ANR0492I; ANR1440I Ref: Admin Guide, "Preemption of Client or Server Operations" See also: NOPREEMPT Preferences, GUI In the GUI, Preferences may be choices corresponding to client options. You thus can refer to a combination of the GUI Help function and the client manual for information. Prefixes See: Client component identifiers Premigrated file A file that has been copied to ADSM storage, but has not been replaced with a stub file on the local file system. An identical copy of the file resides both on the local file system and in ADSM storage. When free space is needed, HSM verifies that the file has not been modified and replaces the copy on the local file system with a stub file. HSM premigrates files after automatic migration is complete if there are additional files eligible for migration, and the premigration percentage is set to allow remigration. Contrast with migrated file and resident file. Premigrated files database A database that contains information about each file that has been premigrated to ADSM storage. The database is stored in a hidden directory named .SpaceMan in each file system to which space management has been added. HSM updates the premigrated files database whenever it premigrates and recalls files and during reconciliation. If the database becomes corrupted, it can be recreated by doing the following: - cd .SpaceMan - bkurfile premigrdb.dir premigrdb.pag - Run '/usr/lpp/adsm/bin/fixfsm' (a ksh script). 
See: fixfsm - Run 'dsmreconcile' Premigration The process of copying files that are eligible for migration to ADSM storage, but leaving the original file intact on the local file system. Premigration candidates 'dsmmigquery FileSystemName' Premigration Database Is the premigrdb.dir and premigrdb.pag file set located in the .SpaceMan directory. The 'dsmls' command reports from this when it lists premigrated (p) files. premigration percentage A space management setting that controls whether the next eligible candidates in a file system are premigrated following threshold or demand migration. The default for the premigration percentage is the difference between the percentage specified for the high threshold and the percentage specified for the low threshold for a file system. premigrdb See "Premigrated files database" PRENschedulecmd Client System Options file (dsm.sys) option to specify a command to be run before running a schedule. (Cannot be used on the command line.) In Unix and Macintosh, TSM will not wait for the command to complete before proceeding. Contrast with PRESchedulecmd. Placement: code within server stanza. Prepending to SQL output Use the "||" SQL specification, as in the example: SELECT 'MOVe Data ' || VOLUME_NAME - FROM VOLUMES WHERE PCT_UTILIZED < 50 PRESchedulecmd Client System Options file (dsm.sys) option to specify a command to be run before running a schedule. (Cannot be used on the command line.) For Unix, Macintosh, DOS, Windows, and OS/2, ADSM waits for the command to complete before continuing with processing. If you don't want ADSM to wait, in Unix and Macintosh you can code PRENschedulecmd instead. Code the command string within either single or double quotes: you can then code either double or single quotes inside as needed. In Unix, the command is run as a child process of the dsmc parent. Placement: code within server stanza. Avoid coding this option with a blank or null value, as it may cause the scheduled command to fail. 
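Putting the PRESchedulecmd/POSTSchedulecmd placement and quoting notes above together, a dsm.sys server stanza might look like the following sketch (the server name, address, and script paths are hypothetical samples, not recommendations):

```
SErvername  tsmserv1
   COMMMethod         TCPip
   TCPServeraddress   tsm.example.edu
   PRESchedulecmd     "/usr/local/scripts/db_shutdown.sh"
   POSTSchedulecmd    "/usr/local/scripts/db_startup.sh"
```

Per the notes above: code each option only once, within the server stanza, with the command string quoted; never with a blank or null value.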
Example need: to shut down a database server before backing up the database. Verify via 'dsmc query options' in ADSM or 'dsmc show options' in TSM; look for "PreSchedCmd". Caution: This option is perhaps best used with SCHEDMODe POlling, where triggering is under the control of the client. Using SCHEDMODe PRompted can be problematic as DEFine CLIENTAction tasks can hit the client at random, and have nothing to do with work that you set the option up to do. Evidence of pre-schedule execution will show up in the SCHEDLOGname-d file under "Executing Operating System command or script". Note that messages or textual reporting produced by the invoked command do not show up in the scheduler log, but will show up in the redirected output of the scheduler invocation. That is, if you invoke the scheduler to redirect output to a file (as in Unix example 'dsmc schedule >> logfile 2>&1'), the output will show up there. If the PRESchedulecmd returns a non-zero return code, the scheduled event will not run - because it has every reason to believe that steps preparatory to the scheduled action have not succeeded. Use this approach to perform some perfunctory operation before the schedule runs. To instead conditionally perform some action, schedule a script to run, which will internally invoke 'dsmc i' or similar client command if all is well. You cannot validly code this option more than once in the file: if you do, no error will result, but only the last occurrence of the option will be used. See also: POSTSchedulecmd PRENschedulecmd -PRESERvepath Client option, as used with Restore and Retrieve, to specify how much of the source path to reproduce as part of the target directory path when you get files back, but to a new location. Parameters: subtree Creates the lowest level source directory as a subdirectory of the target directory. This is the default. complete Restores the entire path, starting from the root, into the specified directory. 
The entire path includes all the directories *except* for the filespace name. nobase Restores the contents of the source directory without the lowest level, or base, directory into the specified destination directory. none Restore all selected source files to the target directory. No part of the source path at or above the source directory is reproduced at the target. Primary Storage Pool vs. Copy Storage Pool total storage size Customers will sometimes compare their Primary Storage Pool contents against the corresponding Copy Storage Pool contents (after recent BAckup STGpool) and, despite the number of files matching, the total size as reported in the Physical Space Occupied value differs between the two storage pools. This causes concern. But realize that Physical Space includes empty space within aggregate files, from which files may have been deleted or expired. See also: Aggregates; Physical occupancy; Query OCCupancy Prioritization See: NOPREEMPT; Preemption Priority of TSM server processes See: Preemption Priority Score See: Migration Priority Private, make tape a private volume Via TSM command: 'UPDate LIBVolume LibName VolName STATus=PRIvate' Private Status value reported in 'Query LIBVolume'. A tape just checked in as Private will have a null Last Use because there was no last use. (Make sure you label new volumes, to prevent new Checkins from getting a status of Private rather than the desired Scratch.) A tape will be forced to Private status when there is an I/O failure on a Scratch volume, as *SM sets it to Private to keep from thrashing on the scratch mount. Look in the Activity Log for the message "ANR8778W Scratch vol ... changed to Private Status to prevent re-access". If in 'Query LIBVol': - Last Use is blank: expect that the volume was last used for a DUMPDB operation if the volume is a long-term resident of the library. - Last Use is "Data": Could be a Backup Set. 
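Forcing a batch of library volumes to Private status, as above, can be scripted; a hedged sketch (the library name, volume names, and admin credentials are hypothetical; the dsmadmc commands are echoed as a dry run — remove the echo to execute):

```shell
# Hedged sketch: dry-run loop marking a list of volumes Private.
# LIB3494, VOL001/VOL002, and the credentials are placeholders.
for vol in VOL001 VOL002; do
    echo dsmadmc -id=admin -password=xxxxx \
        "UPDate LIBVolume LIB3494 $vol STATus=PRIvate"
done
```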
Private, make tape Via ADSM command: 'UPDate LIBVolume LibName VolName STATus=PRIvate' PRIVATE category code 'Query LIBRary' reveals the decimal category code number. See also: Volume categories Private subnets See: 10.0.0.0 - 10.255.255.255; 172.16.0.0 - 172.31.255.255; 192.168.0.0 - 192.168.255.255 PRIVATECATegory Operand of 'DEFine LIBRary' server command, to specify the decimal category number for private volumes in the repository, which are to be mounted by name (volser). Default value: 300. /proc (Solaris) Like /tmp, it is a pseudo file system, this one providing access to the state of each active process in the system. The process info. monitored in the /proc file system changes as the process moves through its life cycle. Due to its nature, this file system is not worth backing up or restoring. Process, cancel 'CANcel PRocess NN' Process numbering Begins at 1 with each *SM server restart. Process start time Not revealed in Query PRocess: you have to do 'SELECT * FROM PROCESSES' and look at START_TIME. PROCESSES TSM SQL table. Fields: PROCESS_NUM Integer process number. PROCESS Process name, like "Backup Storage Pool". START_TIME Like "2002-12-19 07:19:15.000000" FILES_PROCESSED Integer. Value may also appear in STATUS. BYTES_PROCESSED Integer. Value may also appear in STATUS. STATUS Free-form text describing the status of the process. Processes, maximum There seems to be no way to define how many processes may be active at one time within the server - which is too bad in that such would be handy in causing serialization for commands which result in processes, like 'BAckup STGpool'. See MAXPRocess value on commands like 'BAckup STGpool', 'RESTORE Volume', etc. Processes, server (dsmserv's) When the ADSM server starts, in AIXv3 it will start a lot of processes, and in AIXv4 it will start one process with numerous threads. In either case these are ADSM threads. 
There will be one thread for each volume in your ADSM system (database, recovery log, storage pool) and so you are better off with multiple, smaller volumes than one large one, as parallelization will improve. There are other threads for each of the comm methods for accepting new conversations, migration and reclamation watchdog threads that will start these processes when needed, a deadlock detector, the server console, expiration watchdog to start expiration at the appropriate interval, the schedule manager, etc. These threads do not stop and restart. New threads (processes) are created and terminated as needed for client sessions, tape mounts and dismounts, server processes, etc. Do 'SHow THReads' to see 'em. To see the threads in AIXv4, use the -m option of the 'ps' command, as in 'ps -eflm'. See also: dsmserv; Storage pool volumes and performance Processor usage See: Multiprocessor usage PROCESSORutilization N Novell-only (Netware-only) option to control the percentage of CPU time allotted to ADSM, in 100ths of seconds. Said to be the single biggest impact parameter in the Novell dsm.opt file. Producer session The session that is responsible for querying and reporting results to the server. (To use an FTP analogy, this is the "control channel".) Contrast with: Consumer session See also: RESOURceutilization Programmable Workstation Communication Services (PWSCS) A product that provides transparent, high-performance communications between programs running on workstations or on host systems. .PST filename suffix and access A filename with that suffix is a Microsoft Outlook or Exchange personal folder. Such personal files do not support shared access. When Outlook opens a PST, it locks it for exclusive access ("Open Exclusive"). No other user can touch that file until it is physically closed by the person who opened it. This is due, in large part, to the database format Outlook uses: contacts, calendar entries, messages, journal entries, etc. 
are stored in one big flat-file. If you attempt to share such files, the first person to open the file gains exclusive access to it, meaning that the owner of the file may be locked out of using her/his own file. Outlook does release the locks periodically (by default, after 15 minutes of inactivity), meaning that you can have Outlook open, and your .pst files won't always stay locked if they are not actively in use. (See MS KB article 222328, "OL2000: (CW) How to Change File LockTimeout Value for PST Inactivity".) The Outlook client can be configured to release the PST file after some period of inactivity so that another application can open and read it, even though the Outlook client is running: The "MSPST.INI" file controls this... DisconnectDelay=60 // Seconds till disconnect. Default is 15 min. DisconnectDisable=2 // 0 = disallow disconnect to occur, 2 = allow disconnect. Default is 2. Multiple .pst files can exist on the PC and not be opened by Outlook. Also, if there is more than one Outlook mail profile on the PC and they both have .pst files, then one of the .pst files will be available for backup. Related: .edb PTFs applied to ADSM on AIX system 'lslpp -l adsm\*' Purge Volume category 3494 Library Manager category code FFFB to delete a Library Manager database entry, as when a tape ends up with a "Manually Ejected" FFFA category code because it was unusable, such that this useless 3494 database entry remains. See also: Volume, delete from Library Manager database PWSCS Programmable Workstation Communication Services. QFS Solaris: A high-performance file system that enables file sharing in a SAN. It eliminates performance bottlenecks resulting from applications using very large file sizes. QIC Quarter Inch Cartridge tape technology, using a twin-spool, flat cartridge, usually with an aluminum base plate and plastic enclosure, housing tape a quarter of an inch wide. 
See also: 7207 Query, restrict access See QUERYAUTH Query ACtlog TSM server command to report info from the Activity log. Syntax: 'Query ACtlog [BEGINDate=___] [BEGINTime=___] [ENDDate=___] [ENDTime=___] [MSGno=___] [Search=SearchString] [ORiginator=ALL|SErver|CLient] [NODEname=node_name] [OWNERname=owner_name] [SCHedname=schedule_name] [Domainname=domain_name] [SESsnum=session_number]' Defaults to reporting the latest hour's activity. MSGno and Search can be used together for more effective results. Note: In AIX TSM, the date format is MM/DD/YY, regardless of the server Dateformat setting. This is reportedly a function of the international NLS locale setting in AIX: there is no ready way for it to be any other format. Note: This command cannot be scheduled. Query ADmin ADSM server command to display info about administrators. Syntax: 'Query ADmin [Adm_Name|*] [CLasses=SYstem|Policy|STorage| Operator|Analyst] [Format=Detailed]' Also: GRant AUTHority, revoke admin. Query ARchive See: dsmc Query ARchive Query ASSOCiation *SM server command to display the client nodes associated with one or more client schedules, as for Backup and Archive operations. Syntax: 'Query ASSOCiation [[DomainName] [ScheduleName]]' Query AUDITOccupancy *SM server command to display info about the client node data storage utilization. The numbers reported will by default include both primary and copy storage pool contents, but may be selected separately. Syntax: 'Query AUDITOccupancy NodeName(s) [DOmain=DomainName(s)] [POoltype=ANY|PRimary|COpy]' By default, the command shows you the occupancy of all nodes in all domains for all storage pools; but the resulting report itself doesn't provide any indications of what is being included. Report details: The fixed report granularity of a MB can easily lead to misunderstanding: a 0 MB value does not necessarily mean that nothing is there; it may instead mean that the amount present is too far below a megabyte to register as an integer value. 
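Given that 1 MB granularity, it can help to cross-foot a saved Query OCCupancy report against the AUDITOccupancy numbers; a hedged sketch (the sample report lines and column positions are hypothetical — adjust the awk field selection to your actual report layout):

```shell
# Hedged sketch: sum the last (size, MB) column of a saved
# 'Query OCCupancy NODE1' report. The sample lines are hypothetical.
awk 'NF && $NF ~ /^[0-9.]+$/ {sum += $NF} END {printf "%.2f MB\n", sum}' <<'EOF'
NODE1   Bkup   BACKUPPOOL   /home     1,234     120.50
NODE1   Bkup   TAPEPOOL     /home    50,000    4500.25
EOF
# prints: 4620.75 MB
```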
You will find that the number for Backup Storage Used, for example, is equal to the sum of the Physical Space Occupied values from Query OCCupancy for all the backup data storage pools for that node. Note: It is best to run 'AUDit LICenses' before doing 'Query AUDITOccupancy' to assure that the reported information will be current. Alternately, you may perform 'SELECT * FROM AUDITOCC'. Also try the unsupported command 'SHow VOLUMEUSAGE NodeName'. See also: AUDITOCC Query Backup See: dsmc Query Backup Query BACKUPSET TSM server command to display information about one or more Backup Sets: Node Name, Backup Set Name, Date/Time, Retention Period, Device Class Name, Description (but not the volumes constituting the set). Syntax: 'Query BACKUPSET [*|NodeName[,NodeName]] [*|BackupsetName[,BackupsetName]] [BEGINDate=____] [BEGINTime=____] [ENDDate=____] [ENDTime=____] [WHERERETention=Ndays|NOLimit] [WHEREDESCription=____] [WHEREDEVice=DevclassName]' See also: dsmc Query BACKUPSET Query BACKUPSETContents TSM server command to display information about the contents of a Backup Set: its files and directories. Syntax: 'Query BACKUPSETContents NodeName BackupSetName' Note that there is no provided means for the client CLI or GUI to obtain such information. Considerations: Processing this command can consume considerable time, network resources, and mount points. (The command has to look inside the Backup Set to report its contents, meaning that it has to mount the media and plow through the data.) See also: Backup Set; dsmc Query BACKUPSET; GENerate BACKUPSET Query CLOptset TSM server command to query a client option set defined on the server for all clients. Syntax: 'Query CLOptset Option_Set_Name DESCription=Description' Query CONtent TSM server command to display info about one or more files currently residing in a storage pool volume.
Syntax: 'Query CONtent VolName [COUnt=N|-N] [NODE=NodeName] [FIlespace=____] [Type=ANY|Backup|Archive| SPacemanaged] [DAmaged=ANY|Yes|No] [COPied=ANY|Yes|No] [Format=Detailed]' A positive COUnt value shows the first N files on the volume, listed in forward order; a negative COUnt value shows the last N files on the volume, latest first. The reported Segment Number reveals whether the file spans volumes (where "1/1" says it's wholly contained on the volume). COPied is for reporting on whether files have been backed up to a copy storage pool. Displays: Node Name (in upper case), Type (Arch, Bkup, SpMg), Filespace, and Client's Name for File. Use "F=D" to additionally display Stored Size, Segment Number, and Cached Copy. (Does not reveal owner.) Performance: The more files on the volume, the longer the query takes, if you impose no count limit: with a modest limit, there is no significant server overhead, as made apparent by the nearly instantaneous results. If you have collocation enabled and each node's files fit on one tape, you can do 'Q CON VolName count=1' to determine which node's files are on each tape, as for generating pull lists for export node processes, etc. Note that Query CONtent will not report the contents of a volume which *SM has just started writing if it is spanning a transaction from a volume it has just filled, as during Copy Stgpool. See also: Damaged; Span volumes, files that, find; Stored Size Query COpygroup *SM server command to display info about one or more Copy Groups, where retention periods are defined. 'Query COpygroup [DomainName] [SetName] [ClassName] [Type=Archive] [Format=Detailed]' Query DB Server command to display allocation and statistical information about the server database: Available Space, Assigned Capacity, Maximum Extension, Maximum Reduction, Page Size, Total Usable Pages, Used Pages, Pct Util, Max.
Pct Util, Physical Volumes (count), Buffer Pool Pages, Total Buffer Requests, Cache Hit Pct, Backup In Progress?, Type of Backup In Progress, Incrementals Since Last Full, Changed Since Last Backup (MB), Percentage Changed, Last Complete Backup Date/Time. Syntax: Query DB [Format=Detailed] Query DBBackuptrigger Server command to display the current settings for the database backup trigger, used in Rollforward mode. Syntax: Query DBBackuptrigger [Format=Detailed] See: Recovery Log; Set LOGMode Query DEVclass ADSM server command to display info about one or more device classes. Syntax: 'Query DEVclass [DevClassName] [Format=Standard|Detailed]' See also: SHow DEVCLass Query DRive TSM server command to display information about a drive located in a server-attached library: the state of a drive, whether it is online, offline, unavailable or being polled by the server. Syntax: 'Query DRive [* [LibName [DriveName]]] [Format=Standard|Detailed]' Notes: This command reports only whether the drive is said to be online to TSM. An online drive is not necessarily usable or operational. Do 'SHow LIBrary' to get more detailed information, supplemented by operating system drive inquiries, including use of the 'mtlib' command with 3494s. An "Unavailable Since" condition usually indicates a hardware problem, as per msg ANR8848W. Query DRMedia Server command to display information about database backup and copy storage pool volumes, or create a file of executable commands to process the subject volumes. (You do not need DRM to use this handy command.) 
Syntax: 'Query DRMedia [*|VolName] [WHERESTate=All|MOuntable| NOTMOuntable|COUrier|VAult| VAULTRetrieve|COURIERRetrieve| REmote] [BEGINDate=date] [ENDDate=date] [BEGINTime=time] [ENDTime=time] [COPYstgpool=pool_name] [Source=DBBackup|DBSnapshot| DBNone] [Format=Standard|Detailed|Cmd] [WHERELOCation=location] [CMd="command..."] [CMDFilename=file_name] [APPend=No|Yes]' By default, this will display all copy storage pool volumes and database backup volumes. You can cause it to show only db backup volumes by invoking with COPYstgpool having a non-existent copy storage pool name, as in "COPYstgpool=NONE". Note: The Source operand was "DBBackup=Yes|No" in ADSMv3. CMd specifies a command to be generated for each volume found by the Query. The command can be up to 255 characters long, and may be coded as multiple lines via the handy &NL substitution variable. Other substitution variables: &VOL The volume name. &VOLDSN The file name that the server writes into media labels. &LOC The volume's Location. Note that, whereas redirection under an administrative client session is relative to the system where the admin client is running, the CMDFilename spec is relative to the TSM server system. This command is particularly valuable in compensating for the inability to use redirection in server scripts, as when you would like to perform a Select to obtain the volname of the latest db backup, for massaging into a CHECKOut LIBVolume command, to eject that volume for offsite storage. See also: DRMEDIA; MOVe DRMedia; Query MEDia; Set DRMCMDFilename; Set DRMCOPYstgpool Query DRMSTatus TSM server command to query parameters defined to the TSM Disaster Recovery Manager. Reports: recovery plan prefix, plan instructions prefix, replacement volume postfix, primary storage pools, copy storage pools, courier name, vault site name, DB backup series expiration days, recovery plan file expiration days, check label yes/no, process FILE device type yes/no, command file name. 
See also: Set DRMDBBackupexpiredays Query EVent (for admin schedules) TSM server command to display scheduled and completed events. Syntax: 'Query EVent SchedName Type=Administrative [BEGINDate=NNN] [BEGINTime=Time] [ENDDate=Date] [ENDTime=Time] [EXceptionsonly=No|Yes] [Format=Standard|Detailed]' Query EVent (for client schedules) TSM server command to display scheduled and completed events. Syntax: 'Query EVent DomainName SchedName [Nodes=NodeName(s)] [BEGINDate=NNN] [BEGINTime=Time] [ENDDate=Date] [ENDTime=Time] [EXceptionsonly=No|Yes] [Format=Standard|Detailed]' Remember that event log entries are retained only as long as specified via 'Set EVentretention' (q.v.). In the report... Status Is the status of the event at the time that Query EVent was issued: In Progress Customers report seeing this in a failure of the client (such as the scheduler service/daemon freezing or dying). Query EVent notes A status of "Uncertain" usually means that the schedule event record has been deleted by automatic pruning functions: it is no longer in the database, per "Set EVentretention". It may be that you asked for information that is too old. You can use the Set EVentretention command to keep schedule event records around longer so that you can query their status. Query EVent shows only the latest status for each event. If a scheduled operation is executed successfully, the status will indicate that the event was successful, although previous attempts at this event may have been unsuccessful. A status of "(?)" may only prevail in TSM 4.x: it reflects being unable to get the schedule state from the client prior to the error in communications. Check the TSM client(s) in question for completion of the scheduled event (through the client dsmsched.log and dsmerror.log). If the scheduled backup failed, rerun the scheduled event or perform a manual incremental backup to ensure the backup of the data.
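The advice above to check dsmsched.log for event completion can be scripted. A minimal sketch, assuming the typical B/A client log wording ("Scheduled event '...' completed successfully" / "failed"); the sample log lines and event name are invented:

```python
import re

# Sketch: scan a client dsmsched.log for the outcome of the last scheduled
# event. The message texts matched here are assumptions based on typical
# B/A client log wording; adjust the patterns to your client level.
def last_event_status(log_text):
    """Return (status, event_name) for the last event logged, else (None, None)."""
    status = (None, None)
    for line in log_text.splitlines():
        m = re.search(r"Scheduled event '([^']+)' (completed successfully|failed)",
                      line)
        if m:
            outcome = 'completed' if m.group(2).startswith('completed') else 'failed'
            status = (outcome, m.group(1))
    return status

# Invented sample log lines:
sample_log = (
    "10/05/04 22:00:01 Scheduled event 'DAILY_INC' failed.  Return code = 12.\n"
    "10/06/04 22:00:04 Scheduled event 'DAILY_INC' completed successfully.\n"
)
print(last_event_status(sample_log))  # ('completed', 'DAILY_INC')
```

Only the last status matters, mirroring Query EVent's behavior of showing the latest status for each event.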
See "UPDate SCHedule, client" for the reason that prior event records may disappear. Query FIlespace *SM server command to display information about file spaces. Syntax: 'Query FIlespace [NodeName] [FilespaceName] [Format=Detailed]' The reported Filespace Name will appear as "..." if it is Unicode and your server cannot interpret that code page. In such a case, you can perform the command with Format=Detailed and transliterate the Hexadecimal Filespace Name. The Capacity and Pct Util values reported reflect the Unix file system size and utilization when TSM last looked, as you would see in a Unix 'df' command, for example. The values will be zero for AUTOFS filespaces and API client work, such as Oracle TDP backups. Query FIlespace does not reveal how much data has been stored by a node. (Use 'Query OCCupancy' to see space consumed in *SM server storage.) The "Last Backup Date" reflects only Incremental Backup executions... If "Last Backup" is: empty It indicates that there is nothing to report, as in the filespaces having been created in the server by virtue of Archive activity. Or look to see if the filespace type indicates that it was created by an API, which is inherently separate from regular backups. Or it may have been created by a Selective backup. stagnant It would seem that the client has not been doing unqualified Incremental backups - that is, backing up whole file systems without modifying options. The value will be stagnant if the client is doing only Selective Backups, or if the client is doing qualified Incremental backups (where 'dsmc i /fsname/*' is an erroneous form, which should instead be 'dsmc i /fsname'). See also: "..."; dsmc Query Filespace; FILESPACES Query INCLEXCL See: dsmc Query INCLEXCL Query LIBRary *SM server command to display info about libraries you created via 'DEFine LIBRary', including Category Codes for Scratch and Private type volumes.
Syntax: 'Query LIBRary [LibName] [Format=Standard|Detailed]' In the output, don't forget that with 3494 libraries and 3590 tapes, the defined Scratch category code is for 3490 type tapes, and that value + 1 is for your 3590 tapes. See also: DEFine LIBRary Query LIBVolume TSM server command to display info about one or more volumes that have been previously checked into an automated tape library and are physically still in it, whether they are currently scratch volumes or volumes now assigned to a storage pool. Syntax: 'Query LIBVolume [LibName] [VolName]' Note that this command is not relevant for LIBtype=MANUAL. For each library, reports volume names, volume status (Private/Scratch), and Last Use (Data/DbBackup/Export/...). There is no date/time information: that is in the Volhistory table. Note that the volume status implies the category code, as can be numerically determined via 'Query LIBRary [LibName]'. If Status shows as "Private" and Last Use is blank, it may be that the volume was last used for a DUMPDB operation or, more commonly, the volume is empty and Defined to a storage pool. Note that volumes checked out of the library (especially Offsite tapes) will not show up in 'Query LIBVolume': do 'Query Volume' instead. Query LICense TSM server command to display license audit, license terms, and compliance information. Reports: - Date and time of last AUDit LICenses - Number of registered client nodes - Number of client node licenses - For each component, two lines reporting whether it is in use, and whether it is licensed. Note that it is possible for the number of licenses in use to be greater than zero while the number licensed is zero: this is an artifact of someone trying to use such a license (and obviously failing). TSM is simply recording the attempt. In such a case, the number in use value should automatically return to zero some 30 days after the attempt to use it: if it doesn't clear, run 'AUDit LICenses'.
See also: AUDit LICenses; AUDITSTorage; LICENSE_DETAILS; Licenses and dormant clients Query LOG Server command to display allocation information and statistics about the Recovery Log: Available Space, Assigned Capacity, Maximum Extension, Maximum Reduction, Page Size, Total Usable Pages, Used Pages, Pct Util, Max. Pct Util, Physical Volumes (count), Log Pool Pages, Log Pool Pct Util, Log Pool Pct Wait, Cumulative Consumption, Consumption Reset Date/Time. Syntax: 'Query LOG [Format=Detailed]' The Log Pool Pct Wait value should always be zero for a healthy situation. See also: RESet LOGConsumption; RESET LOGMaxutilization Query MEDia ADSMv3 server command to display information about the sequential access primary and copy storage pool library volumes moved by the MOVe MEDia command. (Actually, it will report on all library volumes, but via operands can be restricted to volumes with specific Move Media attributes. The global capabilities of this command can be used as an alternative to Query Volume, as in reporting all volumes dedicated to storage pools which are empty. But there is a basic requirement that the storage pool(s) involved be managed by an automated library.) Syntax: 'Query MEDia [*|VolName] STGpool=PoolName|* [Days=Ndays] [WHERESTATUs=FULl|FILling|EMPty] [WHEREACCess=READWrite|READOnly] [WHERESTate=All|MOUNTABLEInlib| MOUNTABLENotinlib] [WHEREOVFLOcation=location] [CMd="command"] [CMDFilename=FileName] [APPend=No|Yes] [Format=Standard|Detailed|Cmd]' Days is the number of elapsed days since the most recent of the read or write date for the volume. A checked-in volume will be reported as "Mountable in library". A checked-out volume will be reported as "Mountable not in library". See also: MOVe MEDia; Overflow Storage Pool; OVFLOcation; Query DRMedia Query MGmtclass ADSM server command to get info about one or more Management Classes.
Syntax: 'Query MGmtclass [[[DomainName] [SetName] [ClassName]]] [F=D]' See also: Management classes, query Query MOunt TSM server command to get info on mounted volumes (tapes). Syntax: 'Query MOunt [Vol_Ser]' Report will be in mount request order, not drive or volume order. Report messages: ANR8329I IDLE: The tape is currently not read or written. ANR8330I IN USE: The tape is being read or written. ANR8331I DISMOUNTING: Just what it says. Notes: Does not reflect drives in use by LABEl LIBVolume. Does not return information on tapes mounted by other means on drives "owned" by TSM (as via the 'mtlib' command, manual mounts, etc.). SQL equiv: There is no Mount(s) table; but doing a Select from the Drives table yields comparable info, though not RW or RO status. See also: DISMount Volume Query Node Note that the Platform value is set the first time the client uses TSM, and that value persists though the actual platform type may change. There is no command to change this value. In any case, it is just a nicety: the actual platform type is dynamically recognized, as can be seen via 'Query SESsion'. See also: Platform Query OCCupancy Find the number of file system objects and the amount of space they take in storage pools (utilization). The Space values reported reflect the amount of data which the server knows about, which means the number of MB received from the client *after* client compression, and the number of MB written to a storage device (tape drive) *before* it may have performed its own compression. Syntax: 'Query OCCupancy [NodeName] [FileSpaceName] [STGpool=PoolName] [Type=ANY|Backup|Archive| SPacemanaged]' Note that this command displays info about files stored in storage-pools, and thus does not reflect objects which require no storage pool space, such as zero-length files and directories from Unix clients: they are just attributes, which can be stored solely in the TSM database. 
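As an illustration of tallying such occupancy per node, here is a hedged Python sketch. It assumes tab-delimited rows such as 'dsmadmc -dataonly=yes -tab' would produce for a SELECT of NODE_NAME, STGPOOL_NAME, PHYSICAL_MB from the OCCUPANCY table; the sample rows themselves are invented:

```python
from collections import defaultdict

# Sketch: total per-node storage from OCCUPANCY-style rows, e.g. as from
#   dsmadmc -id ... -pa ... -dataonly=yes -tab \
#     "select node_name, stgpool_name, physical_mb from occupancy"
# The sample rows below are invented for illustration.
def occupancy_by_node(tab_rows):
    """Sum the PHYSICAL_MB column (3rd field) per node (1st field)."""
    totals = defaultdict(float)
    for row in tab_rows:
        node, _pool, physical_mb = row.split('\t')
        totals[node] += float(physical_mb)
    return dict(totals)

sample_rows = ["NODEA\tBACKUPPOOL\t120.5",
               "NODEA\tTAPEPOOL\t9100.0",
               "NODEB\tBACKUPPOOL\t64.2"]
print(occupancy_by_node(sample_rows))  # {'NODEA': 9220.5, 'NODEB': 64.2}
```

The same totals can of course be had server-side with a GROUP BY in the Select; the script form is handy when combining occupancy with data from outside TSM.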
Query OCCupancy does not report cached files or space occupied by these files. Only migratable files are included. Report details: "Physical Space Occupied" and "Logical Space Occupied" refer to the ADSMv3 Small File Aggregation feature: the physical file can be an aggregate file (composed of logical files), with empty space resulting from expiration of logical files. "Logical Space Occupied" is the amount of space occupied by logical files in the file space, which amounts to the Physical Space value minus the "holes" created by expired files within Aggregates: the space actually used to store files, excluding empty space within aggregates. "Number of Files" is the number of logical files stored in the stgpool. This number DOES NOT necessarily equate to the number of file system objects stored for this filespace in the storage pool (see points raised above). Avoid doing a Query OCCupancy while an intense database operation, such as an Import, is running: that may cause an ANR9999D condition. See also: OCCUPANCY; Symbolic links Query Option (dsmc client command) Undocumented client command to reveal all options in effect for this client. Note that output is more comprehensive than what is returned from the dsm GUI's Display Options selection. For example, this command will report INCLExcl status whereas the GUI won't. TSM: show options Query OPTion TSM server command to reveal all options in effect for this server, as coded in the server options file. Syntax: 'Query OPTion [* | Option_Name]' where you can specify one option name or a wildcard specification. Note that this command will not show values currently in effect by virtue of self-tuning (per SELFTUNE* options). See also: Query STatus Query PRocess *SM server command to see what processes have been started to internally process long-running commands.
Note that the Process Number reported is ADSM's relative process number, and is not the same as the AIX process number of the dsmserv process doing the work. Syntax: 'Query PRocess [ProcessNum]' Note: Odd formatting after an upgrade might be due to not installing all the message repositories. See also: CANcel PRocess; Expiration process Query REQuest ADSM server command to display info about pending mount requests. Syntax: 'Query REQuest [requestnum]'. Obviously, if you have an automated tape library, there will be no mount requests. See also: CANCEL REQUEST; REPLY Query RESTore TSM server command to display information about restartable restore sessions. Syntax: 'Query RESTore [NodeName] [FilespaceName] [Format=Detailed]' See also: Query Backup Query SCHedule (administrative) Server command to query an administrative schedule. Syntax: 'Query SCHedule [Schedule_Name] [Type=Administrative] [Format=Standard|Detail]' Query SCHedule (client) Server command to query a client schedule. Syntax: 'Query SCHedule [Domain_Name=*|Schedule_Name] [Type=Client] [Nodes=NodeName[,NodeName]] [Format=Standard|Detail]' Query SERver ADSMv3 server command to display information about a server definition. 'Query SERver [ServerName] [Format=Detailed]' See also: DEFine SERver; Set SERVERHladdress; Set SERVERLladdress Query SEssion ADSM server command to display info about current sessions with ADSM client nodes. Syntax: 'Query SEssion [SessionNumber] [Format=Detailed]' 'Query SEssion [SessionNumber] [MINTIMethreshold=minutes] [MAXTHRoughput=kBs] [Format=Standard|Detail|Gui]' The MINTIMethreshold and MAXTHRoughput parameters act as filters on the Query SEssion output for client nodes. They can be used to set up time and throughput thresholds with which to automatically cancel sessions which have become a bottleneck to the server, via the THROUGHPUTTimethreshold and THROUGHPUTDatathreshold server options.
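The threshold idea behind those parameters can be illustrated outside TSM. A sketch, with invented session numbers, elapsed seconds, and KB counts (TSM itself applies such cutoffs internally via the options named above):

```python
# Sketch of the MINTIMethreshold/MAXTHRoughput idea: flag sessions whose
# data rate falls below a cutoff once they have run long enough.
# All session numbers and byte counts below are invented sample values.
def slow_sessions(sessions, min_minutes, max_kb_per_sec):
    """sessions: iterable of (session_number, elapsed_seconds, kbytes_moved)."""
    flagged = []
    for sess_num, seconds, kbytes in sessions:
        if seconds >= min_minutes * 60 and kbytes / seconds < max_kb_per_sec:
            flagged.append(sess_num)
    return flagged

sample_sessions = [(101, 7200, 360000),    # 50 KB/s over 2 hours -> slow
                   (102, 600, 300000),     # too short to judge, ignored
                   (103, 7200, 7200000)]   # 1000 KB/s -> fine
print(slow_sessions(sample_sessions, min_minutes=30, max_kb_per_sec=100))  # [101]
```

The short session is deliberately ignored, just as MINTIMethreshold prevents judging a session before it has had time to ramp up.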
Note that the Detailed report's only additional information is to reveal any tapes in use for the session, as in: "Media Access Status: Current output volume: 000043." A Media Access Status of "Waiting for mount" can be due to the library not being in automated operation state. The "Date/Time First Data Sent" value reflects when the Consumer session began sending client data to the TSM server for storage in storage pools, after the Producer session set up processing and garnered the filespace Active files inventory from the server. See also: CommW; Consumer session; IdleW; Media Access; MediaW; Producer session; RecW; Run; SendW; Status See also: SHow NUMSESSions; SHow SESSions Query SPACETrigger ADSMv3 server command to report the settings for the database or recovery log space triggers. Syntax: Query SPACETrigger DB|LOG [Format=Standard|Detailed] See: DEFine SPACETrigger Query SQLsession Server command to display the current values of the SQL session attributes as defined by Set SQLDATETIMEformat, Set SQLDISPlaymode, and Set SQLMATHmode. Report: Column Display Format, Date-Time Format, Arithmetic Mode, Cursors Allowed Query STatus ADSM server command to display info about the general server parameters, such as those defined by the SET commands. See also: Query OPTion Query STGpool *SM server command to display info about one or more storage pools. Syntax: 'Query STGpool [STGpoolName] [POoltype=PRimary|COpy|ANY] [Format=Detailed]' Obviously, there is no need for column entries for migration where the stgpool has no next level in the stgpool hierarchy. (Column entries for migration may persist where there had been a next stgpool, but was removed.) Query SYStem ADSMv3+ command to show much the same info as the previous unsupported command 'SHOW CONFIGuration', but sticks to information valuable to customers. This is a relatively time-consuming command, as query commands go - which can make it useful as an artificial delay in server scripts and macros. 
Query TAPEAlertmsg TSM 5.2+ server command to display the current Set TAPEAlertmsg setting. See also: Set TAPEAlertmsg; TapeAlert Query Trace ADSM client command (dsmc Query Trace) to display the current state of ADSM tracing, as per Client User Options File (dsm.opt) options. See "CLIENT TRACING" section at bottom of this document. Query VOLHistory ADSM server command to show VOLUME HISTORY data from db and export. Syntax: 'Query VOLHistory [BEGINDate=date] [ENDDate=date] [BEGINTime=time] [ENDTime=time] [Type=All|BACKUPSET|DBBackup| DBDump|DBRpf|DBSnapshot|EXPort| RPFile|RPFSnapshot|STGDelete| STGNew|STGReuse]' Note the lack of selectivity by volume: you can compensate for this by instead doing: Select * FROM VOLHISTORY WHERE VOLUME_NAME='______'. The timestamp displayed is when the operation started, rather than finished. Does not show Checked-in volumes: the volumes reported are those which at one time had been assigned to a storage pool. Query Volume Shows storage pool volumes (not Scratch volumes, DB backup tapes, Backupset tapes, or Export tapes). Syntax: 'Query Volume [VolName] [ACCess=READWrite|READOnly| UNAVailable|OFfsite| DEStroyed] [STatus=ONline|OFfline|EMPty| PENding|FILling|FULl] [STGpool=*|PoolName] [DEVclass=DevclassName] [Format=Detailed]' VolName may employ wildcard characters: if omitted, all volumes are reported. The "Estimated Capacity" value is the "logical capacity" of the volume: if 3590 hardware compression is active, the value reflects contents after compression. The better compressed that files were on the client (as with 'gzip -9'), the less compression will be possible, and the closer the value will be to physical capacity. Note that STatus=EMPty will report only volumes which have been explicitly assigned to a storage pool via DEFine Volume and which are devoid of data: it will *not* report scratch volumes, because the command is for reporting storage pool volumes and scratches are only potentials, not assigned to a storage pool.
You can instead do: SELECT * FROM LIBVOLUMES WHERE STATUS='Scratch' QUERYAUTH ADSM server option for specifying the level of authority that is required for issuing server QUERY or SELECT commands. Refer to the information on the QUERYAUTH parameter in the sample server options file for more details. QUERYSCHedperiod Client System Options file (dsm.sys) option to specify the number of hours the client scheduler should wait between attempts to contact the *SM server for scheduled work. Default: 12 (hours) Syntax: "QUERYSCHedperiod N_Hours". This option applies only when the SCHEDMODe option is set to POlling (not PRompted), and the client SCHEDULE command is running. The server can override this: see 'Set QUERYSCHedperiod' Quiet (-Quiet) Client System Options file (dsm.sys) option or command line option to suppress the output of most ADSM commands. Of particular value for backup and restoral performance: eliminates the overhead of formulating and writing progress messages. Default: Verbose See also: Verbose Quiet (server command line option) See: dsmserv QUIT Command to leave an administrative client session (dsmadmc). Cannot be used for SERVER_CONSOLE sessions. Quota See: HSM quota "Quotas" on storage used Client node storage utilization might be enforced not by charging users, but by limiting them according to agreed-to limits. This could be achieved by having a mechanism which literally or effectively performs 'Query Occupancy' and/or 'Query Auditoccupancy' to see how much clients have stored. You can do a 'Cancel Session' on the unruly, or even do a 'Lock Node', and send them mail about their behavior. See also: Client sessions, limit amount of data q.v. Abbreviation for the Latin phrase "quod vide", meaning "which see", which in reference works is a referral to another definition.
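The '"Quotas" on storage used' approach above can be sketched in a few lines: compare per-node occupancy totals (as gathered from Query OCCupancy or Query AUDITOccupancy) against agreed limits and report the offenders. The node names and limits here are invented; the real follow-up would be a 'Cancel Session', 'Lock Node', or mail to the owner, issued separately:

```python
# Sketch of storage "quota" enforcement: given per-node usage in MB
# (e.g. summed from Query OCCupancy output) and agreed limits, report
# which nodes exceed their quota. All names and numbers are invented.
def over_quota(usage_mb, quota_mb):
    """Return {node: (used_mb, limit_mb)} for nodes exceeding their quota."""
    return {node: (used, quota_mb[node])
            for node, used in usage_mb.items()
            if node in quota_mb and used > quota_mb[node]}

usage = {'NODEA': 950000.0, 'NODEB': 12000.0}
quotas = {'NODEA': 500000.0, 'NODEB': 50000.0}
print(over_quota(usage, quotas))  # {'NODEA': (950000.0, 500000.0)}
```

Nodes absent from the quota table are deliberately skipped, so that unmanaged nodes are not flagged by accident.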
Rapid Recovery The ultimate objective of the Instant Archive function - to be able to quickly restore your client files using a Backup Set that had been created on the TSM server, without the need for a network connection, via media which your workstation can read. See also: Backup Set; Instant Archive Raw logical volume TSM AIX terminology for a volume which is used as addressable blocks by TSM for database, recovery log, storage pools: the volume does not contain a file system. The absence of a file system affords the opportunity for greater performance in *most* aspects of TSM operation. (There is no read-ahead in RLV processing, as there is in JFS file processing, so storage pool migration will be slower with RLV than with JFS.) As always, performance is subject to many vagaries, such as OS settings, hardware capabilities and their operating attributes, etc. RLVs are much simpler to set up (no need to format volumes and create file systems) - which makes RLVs the way to go for disaster recovery scenarios, where time is of the essence. However, the use of raw volumes is discouraged in some IBM doc: the Admin Guide topic "The Advantages of Using Journaled File System Files" offers specific warnings against use of raw logical volumes. In contradiction, however, the TSM Performance Tuning Guide recommends using RLVs. (The issue is taken up in APAR IC41481.) Raw logical volumes are handled via their /dev/rlv____ name. (Note that all logical volumes have a /dev/rlv, so be careful about using one in TSM.) They are created in AIX via the 'mklv' command. Note that TSM caches database and recovery log pages in memory, lessening implicit advantages of raw volumes for recent data. Note also that JFS does caching as well, which further increases performance with a file system (but at the expense of AIX system paging, in that AIX filesystem caches participate in virtual memory). The biggest undocumented issue with raw volumes is in "visibility"... 
Site administration typically involves a bunch of people who are not always cognizant of everything: without a file system on the volume, its purpose and usage is far less apparent than a volume with a well-defined and readily viewable file system. This greatly increases the probability of "accidents"...very expensive accidents, such as thinking that the logical volume is unused, and trying to create a file system on it. (In AIX, the 'lslv' command - if used - would show the logical volume as being Open.) And the naive may seek to extend the size of the LV at the OS level. (Protect against this by making the LV a fixed, non-extendable size.) With AIX there is no locking when using RLV. *SM deals with this by implementing locks using files in the /tmp directory (ref: msg ANR7805E). System housekeeping must not delete these lock files between system reboots. Formatting? Not for raw logical volumes: they do not need to be formatted, and the dsmfmt command has no provision for them (it only accepts file names). Beware: *SM overlays the first 512 bytes of a raw logical volume, where the Logical Volume Control Block (LVCB) usually resides, making the logical volume unusable for export-import and like operations. Although this might seem fatal, it is not the case. Once the LVCB is overwritten, you can still do the following: - Expand a logical volume - Create mirrored copies of the logical volume - Remove the logical volume - Create a journaled file system to mount the logical volume. Do not use AIX volume mirroring with RLVs: AIX uses space in the LVCB to manage the mirroring, which overlays ADSM data. Performance: ADSM spreads its activity across logical volumes assigned to it. Avoid adding RAID striping, as this will slow performance. A technique to employ if running multiple TSM servers in the same system with RLVs is to run the TSM instances as non-root and give ownership of the /dev RLV special files to separate non-root users. 
Ref: IBM site Technotes 1173045, 1152712 See also: Raw partition Raw logical volume, back up TSM 3.7 introduces the ability for *SM to back up raw logical volumes, via what is known as "Logical Volume Backup" and "Image Backup". (The unsupported Adsmpipe utility used to fill this role, but is now officially obsolete for that purpose.) If your logical volumes are for use with Oracle/Sybase/Informix, there are intelligent backup agents for TSM which provide better functionality and application intelligence than the lv backup. Ref: 3.7 UNIX client manual under BACKUP IMAGE; or redbook Tivoli Storage Manager Version 3.7: Technical Guide (SG24-5477), Chapter 3, Section: "Logical volume backup". See: 'dsmc Backup Image' Raw Logical volume, change lvname You may have to do this in reconstructing a replacement for a destroyed logical volume. AIX command: 'chlv -n NewLvName OldLvName' Raw Logical volume, dsmfmt? You do not format logical volumes: the dsmfmt command is used only for files to be used as ADSM volumes. Raw Logical Volume, query See: SHow LVM; SHow LVMCOPYTABLE; SHow LVMFA; SHow LVMVOLS Raw Logical Volume, size limit Through AIX 4.1, Raw Logical Volume (RLV) partitions and files are limited to 2 GB in size. It takes AIX 4.2 to go beyond 2 GB. Raw Logical volume in Sun/Solaris Watch out for two gotchas: 1. You cannot use the first cylinder of a physical disk: the first blocks hold the partition table and volume label. ADSM does not skip the first sector and so would overwrite the volume label. 2. ADSM checks if there is a file system on the disk before using it. It does this by trying to mount the partition as a file system! If the mount succeeds, the define fails. New disks from Sun ship partitioned and with empty file systems on them. Solution: make the partition start on cylinder 1. You could also do: 'dd if=/dev/zero of=/dev/rdsk/.... count=1024' to destroy the first superblocks so the mount fails.
Msg: ANR2404E Raw partition TSM Solaris term for a disk partition used by TSM as randomly addressable blocks, for database and storage pool volumes: the OS volume does not contain a file system. You do not have to format the volume in TSM terms, but you do in OS terms. Watch out for cylinder 0. Ref: Admin Guide Raw partition, back up See: Raw logical volume, back up Raw volume support in Linux As of 2004/05, there is no support for raw volume usage in Linux as on other Unix platforms. rc.adsmhsm See: HSM rc file read-without-recall recall mode [no recall; no-recall; norecall; Readwithoutrecall; Read without recall] A mode that causes HSM to read a migrated file from ADSM storage without storing it back on the local file system. The last piece of information read from the file is stored in a buffer in memory on the local file system. However, if a process that accesses the file writes to or modifies the file or uses memory mapping, HSM copies the file back to the local file system. Or, if the migrated file is a binary executable file, and the file is executed, HSM copies the file back to the local file system. You can change the recall mode for a migrated file to read-without-recall by using the 'dsmattr' command. Contrast with normal recall mode and migrate-on-close recall mode. CAUTION: Readwithoutrecall has been seen to cause problems with NFS-exported file systems, as in file access stalling on the NFS client. ReadElementStatus SCSI command for some SCSI libraries (e.g., StorageTek 9714) to obtain information about the storage slots in the library. You can run that SCSI command by using the lbtest facility, selecting options 1, 6, 8, and 9. The output from option 9 will be for each slot and will reveal the address, among other things. READOnly Access Mode saying that you can only read the Storage Pool or Volume. Set with 'UPDate STGpool' or 'UPDate Volume'.
ADSM will spontaneously change a volume's Access Mode to READOnly if it encounters a failure of a Write operation (message ANR1411W)...which could be the result of dirty tape heads...which can occur if a manual library has not been manually cleaned or in an automatic library the automatic cleaning has been disabled or cleaning cartridges have been exhausted. Tapes in READOnly state are so noted when the ADSM server starts. See also: Pending READWrite Access Mode saying that you can read or write the Storage Pool or Volume. Set with 'UPDate STGpool' or 'UPDate Volume'. Reason code Appears in various TSM error messages, such as ANR8216W. TSM generalizes terms because it has to accommodate multiple environments. In Unix the "reason code" is the Unix errno value (refer to /usr/include/sys/errno.h). Rebind deleted files See: Inactive files, rebind Rebinding The process of associating a backed-up file with a new management class name. Rebinding occurs: - When you code a new management class on the Include statements governing subject files and do an unqualified Incremental backup. (A Selective backup binds the backed up files to the new mgmtclass, but not the Inactive files.) - When the management class associated with a backup file is deleted. - If you boost the retention of a copy group to which files are *not* currently bound, or decrease the retention of the copy group to which files *are* bound. What's happening: directories are by default bound to the management class/copygroup with the longest retention (RETOnly), in the absence of DIRMc specification, and so they "move" to the longest retention management class. Rebinding does *not* occur: - For Archive files. - For partial Incremental backups. - For Inactive files where the client file system no longer contains that filename for a backup to operate on.
Rebinding does not necessarily occur: - For directories, which want to be bound to the mgmtclass with the longest retention period, unless DIRMc specifically tells them otherwise. If you added an Include statement to your client options file to specify use of a new management class and are perplexed to find no rebinding to it upon the next backup, it may be the case that you have a client option set on the TSM server, whose include-exclude statements take precedence over your local file. Watch out for Windows cluster servers with multiple options files: you need to be careful to code the mgmtclass on the right set of Include statements. See also: Archived files, rebinding does not occur Rebinding--> Leads the line of output from a Backup operation, as when a filespace has moved from one TSM server to another, or perhaps the management class has changed, as via Include spec. The rebinding of directories reflects their fresh backup. The rebinding indicator does not identify the management class to which the object is rebound: that can be identified in the Backups table. Note that rebinding does not apply to Archived files: see "Archived files, rebinding does not occur". See also: Directory-->; Expiring-->; Normal File-->; Updating--> Recall (HSM) The process of copying a migrated file from an ADSM Space-Managed Storage Pool back to its originating client file system. Set recall modes with the HSM command 'dsmmode -recall=Normal|Migonclose' for overall HSM action; or 'dsmattr -RECAllmode=Normal|Migonclose |Readwithoutrecall File_Name' for a specific file or files. Contrast with Restore and Retrieve. See also: Transparent Recall; Selective Recall; Recall Mode Recall information (HSM) 'dsmq' command. Recall list (HSM) 'dsmmigquery FSname' Recall Mode (HSM) 1) One of four execution modes provided by the dsmmode command. Execution modes allow you to change the HSM-related behavior of commands that run under dsmmode.
The recall mode controls whether an unmodified, recalled file is returned to a migrated state when it is closed. 2) A mode assigned to a migrated file with the dsmattr command that determines how the file is processed when it is recalled. It determines whether the file is stored on the local file system, is migrated back to ADSM storage when it is closed, or is read from ADSM storage without storing it on the local file system. Recall mode of migrated file, set 'dsmattr -recallmode=n|m|r Filename' (HSM) where recall mode is one of: - n, for Normal - m, for migrate-on-close - r, for read-without-recall Recall process, remove from recall queue 'dsmrm Recallid' as determined by doing 'dsmq'. Recall processes, display 'dsmq' Recall queue, remove a process from 'dsmrm Recallid' as determined by doing 'dsmq'. REClaim= Keyword on 'DEFine STGpool' and 'UPDate STGpool' that specifies the amount of reclaimable space on a volume (as a percentage) at which point reclamation should kick off, to copy the tape's contents and thus reclaim that space. That is, the value is the percentage of empty space on the volume, including empty space within Aggregates. The conventional value is 60 (%), such that volumes should undergo reclamation when their Pct. Reclaimable Space values reach 60%. The REClaim value should be 50 (%) or greater such that two volumes could be combined into one. Important note: Due to occasional I/O errors, tapes will be thrown into Readonly state, and their Pct Util may be quite low, like 3.0%. Such tapes are quite usable, but often go unnoticed, leaving you short of scratches - and reclamation won't reclaim them because their Pct. Reclaimable Space is low. You should periodically perform 'Query Volume ACCess=READOnly STatus=Filling' and do a MOVe Data on such volumes to replenish your scratch pool.
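The note that the REClaim value should be 50 (%) or greater, so that two volumes can be combined into one, is simple arithmetic. A sketch with an example threshold (the numbers are invented, and this is not a TSM command):

```shell
# Illustration of the 50% rule: residual data per volume is (100 - R)%
# of a volume's worth of data, where R is Pct. Reclaimable Space, and
# two input volumes fit on one output only if their residuals total <= 100%.
RECLAIM=60                         # example REClaim threshold
RESIDUAL=$((100 - RECLAIM))        # % of a volume still occupied
COMBINED=$((2 * RESIDUAL))         # two input volumes copied to one output
echo "residual per volume: ${RESIDUAL}%, combined on output: ${COMBINED}%"
[ "$COMBINED" -le 100 ] && echo "two volumes can be combined into one"
```

With REClaim below 50, the two residuals would exceed one volume's capacity, defeating the point of reclamation.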
Reclaim pool See: RECLAIMSTGpool RECLAIM_ANALYSIS ADSMv3 SQL: Provisional database table created by the AUDIT RECLAIM command, which fixed problems created by defects in the early levels of the V3 server. See also: AUDIT RECLAIM Reclaimable space Do 'Query Volume [VolName] F=D' and look at the "Pct. Reclaimable Space" for each volume. Reclaimable volumes See: Storage pool, reclaimable volumes RECLAIMSTGpool=poolname ADSMv3: DEFine STGpool operand. (single drive reclamation) Specifies another storage pool as a target for reclaimed data. This parameter is primarily for use with a storage pool that has only one drive in its library. It allows the volume being reclaimed to be mounted in its library, with the data then moved to the specified reclaim storage pool. That pool must be an existing primary sequential storage pool. This parameter is optional; however, if used, all data will be reclaimed to that storage pool regardless of the number of drives in that library. The reclaim storage pool itself must be defined as a primary storage pool. There are no restrictions on this storage pool's definition, but it should be defined with a NEXTSTGPOOL= value that will migrate its data back into the data storage hierarchy. Because its primary function is to collect reclaimed data, its NEXTSTGPOOL= value should be the same storage pool from which the data was reclaimed. When you have just a single drive, you should have your disk STGpool MIGPRocess=1 and DEVclass MOUNTLimit=1. Ref: Admin Guide "Reclaiming Volumes in a Storage Pool with One Drive" Reclamation Files on tape volumes may expire per standard rules or by virtue of the owning filespace having been deleted. With abundant tapes, one may be able to simply let the contents of tape volumes expire and recycle tapes with no effort. But in most sites that's not possible: tapes are needed, and the remaining contents of volumes have to be copied to newer, compacted volumes to create needed scratches.
This is Reclamation. Volumes are chosen by the oldest "Date Last Written", not Pct Util or Pct. Reclaimable Space. It copies the remaining data on a volume to a volume that is in a Filling state, or an empty volume if no partials are present. Emptied volumes return to where they came from: the scratch pool or, if the volume had been defined to the storage pool, then it remains defined to the storage pool. The volume being reclaimed is mounted R/O, and the volume to receive the data is obviously mounted R/W. Reclamation is not something you want to do: it ties up drives, takes time, and entails additional wear on drives and media. Do it only when your scratch tape pool reaches a comfortable minimum. (There is some consideration that delaying reclamation can mean longer restoral times as compared to data on reclaimed, compacted tapes; but reclamation typically involves your oldest tapes and data, so it's usually not an issue.) ADSMv3+: When logical files are reclaimed from within an Aggregate, the Aggregate is compacted to reclaim space. Note that, in contrast, MOVe Data by default does not reclaim space where logical files were logically deleted from *within* an Aggregate. (As of TSM 5.1 there is a RECONStruct option which does allow aggregate-internal space to be reclaimed.) If the volume being reclaimed is *not* aggregated (as in the case of a volume produced under ADSMv2, or where too-small TXNGroupmax and TXNBytelimit values conspire to effectively prevent aggregation) the files are simply transferred as-is: the output likewise *not* aggregated. Thus, in some cases, a Move Data (which does no aggregate tampering) may be just as effective as a reclamation. If you are in a hurry to produce needed scratch tapes, use Move Data rather than Reclamation. Reclamation also brings together all the pieces of each filespace, which means it has to skip down the tape to get to each piece. (The portion of a filespace that is on a volume is called a Cluster.) 
In addition, if the target storage pool is collocated, each cluster may ask for a new output tape, and TSM isn't smart enough to find all the clusters that are bound for a particular output tape and reclaim them together. Instead it is driven by the order of filespaces on the input tape, so the same output tape may be mounted many times. The nature of collocation means that reclamation of a collocated storage pool will not harvest needed scratches as quickly as reclamation of a non-collocated copy storage pool. If an Expire Inventory is running and has reduced the Pct Util of a volume below the reclamation threshold, Reclamation will not occur until the Expire is done. The reclamation thread wakes up at least once per hour to see if there is work to do (more frequently when the reclamation threshold is lower). Beware that the reclamation process may be single-threaded such that multiple MOVe Data commands may be advantageous. Note that after a reclamation, the 3590 ESTCAPacity value returns to its base number of "10,240.0" MB. When Reclamation is running, a Backup cannot start if the Reclamation is using tape drives that it needs. Messages: ANR1040I for each volume being reclaimed; ANR1044I specifying required tapes; ANR8324I for tape mounts; ANR1041I at end. See also: Cluster; MOVe Data; Pct. Reclaimable Space Reclamation, activate Do: 'UPDate STGpool PoolName REClaim=NN' making the NN percentage less than 100%. REClaim specifies the percentage of reclaimable space left on a volume for when reclamation will occur for it. When will it start? Experience is that for copy storage pools, it starts immediately; for primary storage pools, "in a little while". At what point should you reclaim tapes? In an ideal world, you would never have to: you would have sufficient tapes and library capacity such that content attrition alone would empty and return tapes automatically. In the real world, we have to perform reclamation.
The best approach is to perform reclamation only when the number of scratches falls below a comfortable level. This maximizes data elimination through attrition and then acts on the residual data on media, while minimizing occupancy and wear on drives. You should avoid using a REClaim value of less than 60 (%) - a value of 60 meaning that a volume will undergo reclamation when its Pct. Reclaimable Space value reaches 60% or more. If you're going lower than that, you are overly constrained. Note that the REClaim value should be 50 or greater such that two volumes could be combined into one. The anticipated reclamation process may take considerable time to start, particularly on collocated storage pools with a large number of volumes: it takes much less time to start on non-collocated copy storage pools which have a comparable amount of data. Reclamation, deactivate At a minimum you need to do: 'UPDate STGpool PoolName REClaim=100'. Now take action based upon stgpool type: - Primary storage pool: Reclamation for primary stgpools is performed on a volume by volume basis. That is, each volume is reclaimed as its own reclamation process. When reclamation of a single, primary stgpool volume completes, the TSM Server will check the reclamation threshold for that stgpool before looking for additional volumes to reclaim. If the reclamation threshold has been increased to 100%, no further volumes in the primary stgpool will be reclaimed. - Copy storage pool: With these, all eligible volumes are reclaimed as part of a single process. Because of this, the only time TSM checks the reclamation threshold for the copy stgpool is when the reclamation process begins. At that time, all of the eligible volumes are queued up to be reclaimed: the TSM Server does not check the reclamation threshold again until that composite process ends. Setting the reclamation percentage to 100% prevents any new reclamation processes from starting, but does not stop any running ones.
You can usually force a reclamation of either pool type to end by issuing a CANcel PRocess on it. (The cancel will not take effect until at least the current aggregate is completed.) For an onsite storage pool, the new REClaim value is observed as the next volume is handled. For an offsite storage pool, the new REClaim value is *not* observed prior to the conclusion of the current process. Ref: Admin Guide manual topic "Choosing a Reclamation Threshold", "Lowering the Migration Threshold". Reclamation, offsite Volumes are not ordered by any externally visible parameter. The processing order will appear to be arbitrary. Possibly, *SM looks at all the data on all the eligible tapes, then tries to mount each input tape required (from your onsite pool) just once - which compares with working on all the eligible offsite tapes at the same time. You don't need to bring back offsite volumes in order to do reclamation on them. The valid files remaining on sparsely filled offsite volumes are copied from the original copies of the files. These original copies of the files are in the primary storage pools onsite...thus no offsite volumes need to be brought back to do reclamation. A new set of copy stgpool volumes is created which contain all the valid files reclaimed from the offsite volumes: the reclamation of an offsite storage pool effectively brings the data back onsite. You must then be sure to send these freshly-written volumes offsite. (Because of this exposure, you may want to avoid inciting reclamation of offsite volumes, and instead simply let their contents dissipate over time.) The reclaimed offsite volumes go into a holding state (Pending) for as long as you specify with the REUsedelay parameter (on define copy storage pool), meaning that in the event of a disaster, the restored TSM db will probably again point to data on those offsite volumes, which because of the db restoral would no longer be Pending. 
Note that all eligible offsite storage pool volumes are reclaimed in a continuous operation which remains blind to administrative changes to the reclamation threshold: If you change the REClaim value while that process is running, it will have no effect. In contrast, the reclamation of onsite volumes will look at the value as it goes to reclaim the next volume. Reclamation, pre-emption Space Reclamation will be pre-empted if an HSM recall needs a tape; will see msg ANR1080W in the Activity Log. Reclamation, prevent Do: 'UPDate STGpool PoolName REClaim=100'; more drastically, achieve it by setting DEVclass MOUNTLimit=1. Reclamation, prevent at start-up To prevent reclamation from occurring during a problematic TSM server restart, add the following (undocumented) option to the server options file: NOMIGRRECL Reclamation and migration See: Migration and reclamation Reclamation and the single tape drive See: RECLAIMSTGpool Reclamation in progress? 'Query STGpool ____ Format=Detailed' "Reclamation in Progress?" value. Reclamation not clearing some offsite tapes You've done Reclamation, but some offsite volumes still show small percent utilizations - not being fully reclaimed. This may be due to TSM checking for files which span volumes, to prevent an endless chain of reclamation. Reclamation not happening (reclamation not working) Be aware that with a large storage pool, it can take a substantial amount of time for TSM to start the reclamation... sometimes, hours. Beyond that, possible problem areas: - No volumes have a Pct. Reclaimable Space value at least as high as the Stgpool REClaim value. - Two mount points are not simultaneously available. (Check your DEVclass MOUNTLimit value and the actual viability of your drives.) - With large storage pools it can take a while for Reclamation to initiate - perhaps longer than the window that it is allotted by server administration schedules.
- Do the subject volumes themselves have good Access values, which allow them to be mounted and reclaimed? Volumes which are offsite cannot be reclaimed if they have no represented data onsite. - A small Pct Util value may involve storage pool files which span volumes, and reclamation may not be happening because the volumes that the files span to/from are in a state which precludes their use. Use 'Query CONtent F=D' on suspect volumes, looking for Segment numbering other than 1/1 in the first and/or last files, which indicates spanning from/to other volumes. Do 'MOVe Data' on one such volume and see what happens. - The presence of server option NOMIGRRECL will prevent it. Check your Activity Log for errors. Note that tapes are candidates for reclamation whether they are Full or Filling. Reclamation performance Is governed by the MOVEBatchsize and MOVESizethresh options, which help tune the performance of server processes that involve the movement of data between storage media. (There was a problem in TSM 4.2 where those options were not being honored for disk-to-tape reclamation where disk caching was turned on: it has since been fixed.) Number of processes: There can be only one per stgpool, as the product is currently designed. (You can instead perform multiple MOVe Data operations - but MOVe Data is not the same as reclamation.) If using LTO Ultrium, slow reclamation performance can reveal an ugly LTO firmware defect, in which the CM index is corrupted. See: LTO performance Reclamation process, cancel The cancel will take effect when it reaches a point to safely stop the reclamation. The system will finish the last process started, and once it is complete, stop. Reclamation processes, number of Only one reclamation process per storage pool runs at a time - and then only when the Reclamation Threshold value for the storage pool is less than 100%. Most server operations do not support multiple parallel processes.
The only exceptions are migration from disk pools, backup storage pool, restore storage pool, and restore volume. Reclamation stalls awaiting tape mounts It cannot get the tape drive(s) it needs to perform the mount(s), which can be due to the drive(s) being busy with other tapes, or busy with a cleaning cartridge, or that the drive names changed across an AIX reboot wherein tape drives were added or removed. REConcile Volumes TSM server command to reconcile differences between virtual volume definitions on the source server and archive files on the target server. TSM finds all volumes of the specified device class on the source server and all corresponding archive files on the target server. The target server inventory is also compared to the local definition for virtual volumes to see if inconsistencies exist. 'REConcile Volumes [* | '-device_class_name-'] [Fix=No|Yes]' RECOncileinterval Client System Options file (dsm.sys) option to specify how often *SM automatically reconciles HSM-controlled file systems, by running dsmreconcile. Default: 24 hours Note that unless you run dsmreconcile, HSM file expiration will not occur, and HSM files whose stubs were deleted from the HSM file system will build up in *SM server storage. RECOncileinterval, query Via ADSM 'dsmc Query Options' or TSM 'dsmc show options'. Look for "reconcileInterval". Reconciliation (HSM) The process of synchronizing a file system to which you have added space management with the ADSM server you contact for space management services and building a new migration candidates list for the file system. Initiated by: - Automatically via the dsmreconcile daemon, at intervals specified via the RECOncileinterval option in the Client System Options File. - Automatically before performing threshold migration if the migration candidates list for a file system is empty. - Manually: The client root user can start reconciliation manually at any time, via the 'dsmreconcile' command.
Reconciliation interval (HSM) Control via the RECOncileinterval option in the Client System Options file (dsm.sys). Default: 24 hours Reconciliation processes (HSM), max Control via the MAXRCONcileproc option in the Client System Options file (dsm.sys). Default: 3 Query via client 'dsmc Query Options' in ADSM or 'dsmc show options' in TSM; Look for "maxReconcileProc". Reconstruction See: Aggregates and reclamation; MOVe Data Recover volume See: AUDit Volume; RESTORE Volume; Volume, bad, handling Recovery Log The Recovery Log houses in-flight transactions, either: - until they are committed to the TSM database, when LOGMode Normal is in effect; - until the next database backup is performed, when LOGMode Rollforward is in effect. Note that changes are initially housed in the Recovery Log buffer pool, which means that the Recovery Log and Database on disk are not always consistent. Space must be available in the Recovery Log for a session to be established (else get msg ANS1364E). Be aware that more space will be needed as the TXNBytelimit client option and the MOVEBatchsize, MOVESizethresh, and TXNGroupmax server option values are increased. Also, longer tapes make Reclamation run longer and require more Recovery Log space. The backup of large files will keep Recovery Log space from being committed. ADVISORY: EXPIre Inventory quickly consumes Recovery Log space. Use its DUration parameter to limit the amount of time that the expiration runs. See also: Transactions, minimize number Named in /usr/lpp/adsmserv/bin/dsmserv.dsk, as used when the server starts. (See "dsmserv.dsk".) Installation default is to create it 9MB in size. A database backup will reportedly empty the log. 
See also: LOGPoolsize; DEFine SPACETrigger Recovery Log, analysis To see what caused the Recovery Log to fill, issue internal commands: q se f=d q log f=d q pr SHow dbtxn SHow THReads SHow logseg SHow locks SHow logv SHow txnt SHow dbvars SHow dbtxnt Recovery Log, checkpoint Consider doing this if the Recovery Log is inflated by a flurry of activity. See: CKPT Recovery Log, compressed records Only occurs when the Recovery Log is in Normal mode (as opposed to Rollforward). Msgs: ANR2362E Recovery Log, convert second primary volume to volume copy (mirror) 'REDuce LOG Nmegabytes' 'DELete LOGVolume 2ndVolName' 'DEFine LOGCopy 1stVolName 2ndVolName' Recovery Log, create 'dsmfmt -log /adsm/DB_Name Num_MB' where the final number is the desired size for the log volume, in megabytes, and is best defined in 4MB units, in that 1 MB more will be added for overhead if a multiple of 4MB, else more overhead will be added. For example: to allocate a volume of 1GB, code "1024": ADSM will make it 1025. Recovery Log, define additional volume 'DEFine LOGVolume RecLog_VolName' Recovery Log, define volume copy (mirror) 'DEFine LOGCopy RecLog_VolName Mirror_Vol' Recovery Log, delete volume 'DELete LOGVolume VolName' Will cause TSM to start a process to move data from that volume to the remaining Recovery Log volumes. Recovery Log, extend Via ADSM server command: 'EXTend LOG N_Megabytes'. Causes a process to be created which physically formats the additional space (because it takes so long). If the server is down, use the Unix command line: 'dsmserv extend log volname size' where "volname" is typically the name of a dsmfmt-formatted file with which you want to augment the existing recovery log. See also: dsmserv EXTEND LOG Recovery Log, maximum size Before TSM 4.2: Per APAR IC15376, the recovery log should not exceed 5.5 GB (5440 MB).
But APAR IY09200 says that the maximum size is 5420 MB; and the max usable is 5416 MB (because of how calculations are performed which store data structures in a certain fixed area in the first 1 MB of each DB and LOG volume). Msgs: ANR2452E and ANR2429E Ref: Server Admin Guide, topic Increasing the Size of Database or Recovery Log, in Notes. See: SHow LVMFA, which reveals that the max is 5.3GB, not 5.5. (See the reported "Maximum possible LOG 1 LP Table size".) As of TSM 4.2 (June 2001): The maximum size of the recovery log is increased to 13 GB. (Note that automatic expansion of the Recovery Log, by DBBackuptrigger, will not go beyond 12 GB, to provide wiggle room.) Advisory: It is best to not run with a maximum value because you may run into the very ugly ANR7837S situation where your Recovery Log is full and, being at the maximum, you can't add space to get your server restarted. And consider running in Normal rather than Rollforward mode: many customers are doing that to avoid log filling problems. If you run in Rollforward mode, use DBBackuptrigger. (The max size is apparently in the TSM source code as #define LOG_MAX_MAXSIZE.) Recovery Log, mirror, create Define a volume copy via: 'DEFine LOGCopy RecLog_VolName Mirror_Vol' Recovery Log, mirror, delete 'DELete LOGVolume RecLog_VolName' (It will be nearly instantaneous) Messages: ANR2263I Recovery Log, Pct. Utilized A defect in v4.1 prevents this value from going to zero after a database backup. Circumvention: do 'ckpt'. Another customer reports that setting Logmode to Normal, then back to Rollforward allows the next incremental or full to clear the log. If neither works, Halt and restart the server. Recovery Log, query 'Query LOG [Format=Detailed]' Recovery Log, reduce Via ADSM server command: 'REDuce LOG N_Megabytes'.
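The sizing note under "Recovery Log, create" (above) can be illustrated with trivial arithmetic. This models only the multiple-of-4MB case described there; the overhead added for other sizes is not specified in the text and is not modeled.

```shell
# Illustration of the dsmfmt sizing note: for a requested size that is
# a multiple of 4 MB, 1 MB of overhead is added (e.g. 1024 -> 1025).
# The behavior for non-multiples adds more overhead and is not modeled.
REQUEST=1024                          # MB requested, a multiple of 4
ACTUAL=$REQUEST
[ $((REQUEST % 4)) -eq 0 ] && ACTUAL=$((REQUEST + 1))
echo "request ${REQUEST} MB -> allocated ${ACTUAL} MB"
```

Hence the advice to request sizes in 4MB units, so the overhead stays at a predictable 1 MB.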
Recovery Log allocation on a disk See: Recovery Log performance Recovery Log buffer pool See: LOGPoolsize Recovery Log consumption stats, reset 'RESet LOGConsumption' Recovery Log filling - Assure that your copy group MODE is not ABSolute, which would force full backups every time, and thus burden the Recovery Log. - Review your client systems to assure that the backups they are doing are true Incrementals, to minimize the amount of data backed up each day. - Look into having your clients spread their backups out over time, to prevent Recovery Log congestion. (In particular, make sure that clients are not needlessly running backups in parallel.) - Check your server Set RANDomize setting to assure that you are staggering the start of scheduled backups. - Consider having massive clients break up their backups into multiple pieces, as via VIRTUALMountpoint and the like. - Use DBBackuptrigger. - Watch out for clients backing up very large files or commercial databases, as that constitutes a single, very large transaction, which burdens the Recovery Log. - Do sufficient BAckup DB operations over the day (like, 1 full, multiple incrementals) to keep recovery log space low. Keep in mind that TSM server processes like Expiration consume a lot of Recovery Log space. - Assure that no other TSM processes (like Expiration) are running during high-load backup periods. And likewise assure that the server system is not burdened with work that interferes with the ability of TSM to deal with its load at that time. - Look into your server LOGPoolsize, as it governs the rate at which Recovery Log transactions are committed to the database. - Tune your TSM server and database to assure that database commits can occur rapidly when they do occur. - Assure that the computer and operating system in which the TSM server runs is properly configured and tuned to assure that TSM can promptly attend to its database.
- If your server is "maxed out", you should consider splitting the load to another server. - The active client may not be sending commits often enough. (Clients with NICs set to Autonegotiate may end up with dismal, erroneous datacomm rates and so "pin" the log due to not getting to a commit point.) - A TSM database volume on a very slow or troubled disk can be an affector. See also: Recovery Log pinning Recovery Log location Is held within file: /usr/lpp/adsmserv/bin/dsmserv.dsk (See "dsmserv.dsk".) Gets into that file via 'DEFine LOGVolume' (not by dsmfmt). ADSM seems to store the database file name in the ODM, in that if you restart the server with the name strings within dsmserv.dsk changed, it will still look for the old file names. Recovery Log max utilization stats, reset 'RESet LOGMaxutilization' Recovery Log mode, query 'Query STatus', look for "Log Mode" near bottom. Recovery Log mode, set See: Set LOGMode Recovery Log pages, mode for reading, define "MIRRORRead LOG" definition in the server options file. Recovery Log pages, mode for writing, define "MIRRORWrite LOG" definition in the server options file. Recovery Log performance As its name implies, the Recovery Log is more of a serially written thing rather than randomly accessed. As such, it is less sensitive to disk position than the TSM DB for server performance. Some guidelines: - Obviously, don't share the Recovery Log disk(s) with other high-activity functions. - For best dealings with disk problems, spread the Recovery Log over multiple volumes rather than making it all one, large volume: if there is a disk surface defect, it will be isolated to one small volume rather than taking out your whole, large Recovery Log volume. Via TSM mirroring, you can swap in another modest volume to take the place of the failed area.
(TSM creates one thread per volume, which helps parallelization in places where benefits can be had; but with the nature of the Recovery Log file, thread counts don't matter.) Recovery Log pinning/pinned A phenomenon of long-running transactions which causes Recovery Log space to be greatly consumed... The nominally occupied region of the recovery log is bounded by head and tail pointers. The head pointer moves forward as new transactions are started. The tail pointer moves forward when the oldest existing transaction ends. Both pointers wrap around to the beginning of the log when they reach its end. While the copying of a huge file is in progress, there will be one or more log entries relating to that operation just ahead of the tail pointer. There will be a huge area filled with log entries for transactions that have started and ended since the copying of the huge file started. There will be a small area just behind the head pointer containing log entries for the remaining pending transactions and possibly some entries for recently ended transactions. That huge area in the middle is considered to be occupied log space. When the copying of the huge file ends, the tail pointer will advance to the end of the area containing recent transactions and the utilization will drop suddenly. The other activities running concurrently with the copying of the huge file are generating the transactions that keep moving the head pointer forward. Expiration is probably the biggest generator of transactions. Look also for lingering client sessions which eventually time out and are cancelled, like "ANR0481W Session ___ for node ____ (WinNT) terminated - client did not respond within 7800 seconds." Ref: IBM site article swg21054574 See also: CKPT; SHow LOGPINned Recovery Log statistics, reset 'RESet LOGConsumption' resets the statistic on the amount of recovery log space that has been consumed since the last reset. 'RESet LOGMaxutilization' resets the max utilization statistic for the recovery log. 
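The head/tail pointer behavior described under "Recovery Log pinning/pinned" above can be sketched with a toy model. This is purely an illustration of the ring-buffer concept, not TSM internals; wraparound is omitted for clarity.

```python
# Toy model of recovery-log head/tail pointers (illustrative only).
class RecoveryLog:
    def __init__(self):
        self.head = 0        # advances as transactions write log entries
        self.open_tx = {}    # transaction id -> position of its first entry

    def begin(self, tx):
        self.open_tx[tx] = self.head   # entry written just behind the head
        self.head += 1

    def commit(self, tx):
        del self.open_tx[tx]

    def utilization(self):
        # Tail = position of the oldest entry still owned by an open
        # transaction; everything between tail and head is "occupied".
        tail = min(self.open_tx.values(), default=self.head)
        return self.head - tail

log = RecoveryLog()
log.begin("huge-file")       # long-running transaction starts first
for i in range(100):         # many short transactions come and go...
    log.begin(i)
    log.commit(i)
print(log.utilization())     # 101: the long transaction pins the tail
log.commit("huge-file")
print(log.utilization())     # 0: tail advances, utilization drops suddenly
```

Note how the short transactions all committed, yet log utilization kept growing until the pinning transaction ended - the behavior described in the entry above.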
Recovery Log volume (file) Each Recovery Log volume (file) contains info about all the other db and log files. See also: dsmserv.dsk Recovery Log volume, add 'DEFine LOGVolume VolName' Recovery Log volume, move The best approach to relocating Recovery Log volumes is to "leap-frog": add a new volume, then 'DELete LOGVolume' on the old volume. It is best to disable sessions and processes in the meantime, to prevent a mass of data from going into the Recovery Log. Note: TSM keeps track of Recovery Log volume pathnames in its database; so you can't expect to change names in the dsmserv.dsk file and then simply bring up the server: that will result in ANR7807W and ANR0259E messages. Recovery Log volume, remove 'DELete LOGVolume VolName' You may have to do a 'REDuce LOG' beforehand to take the space away from ADSM if it was previously told that that much space was available to it. (Msg ANR2445E) Recovery Log volume usage, verify If your *SM Recovery Log volumes are implemented as OS files (rather than rlv's) you can readily inspect *SM's usage of them by looking at the file timestamps, as the time of last read and write will be thereby recorded. RecvW (sometimes "RECW") "Sess State" value from 'Query SEssion' saying that the server is waiting to receive an expected message from the client. See also: Communications Wait; Idle Wait; Media Wait; Run; SendW; Start Recycle bin (Windows), excluding Exclude.dir '?:\...\RECYCLE*' Redbooks IBM practical usage guides, named for their red covers, are "how to" books, written by very experienced IBM, Customer and Business Partner professionals from around the world. Redpieces Are redbooks that are under development - made available this way to make the information available in advance of formal publication. Redpapers Are smaller technical documents also reflecting information gained during work on a particular topic. Redpieces and Redpapers are only available on the web. Redbooks can be ordered where desired. 
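The "leap-frog" relocation described under "Recovery Log volume, move" above might look like the following server command sequence (the volume pathnames and size are hypothetical; dsmfmt the new volume file first on platforms that require it):

```
dsmfmt -m -log /tsm/log2.dsm 512    (format a new 512 MB log volume file)
DEFine LOGVolume /tsm/log2.dsm      (add the new volume)
DELete LOGVolume /tsm/log1.dsm      (drop the old one; may need REDuce LOG first)
```

Per the advisory above, disable sessions and processes while doing this so that the log stays quiet during the move.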
Redirection of command output The ADSM server allows command output to be redirected, as in capturing output in a file. Use ' > ' to create a file afresh or ' >> ' to append to a file. Be sure to have spaces around the angle-brackets. Be aware that ADSM tends to inflate the width of such redirected output, way beyond what you are accustomed to in terminal display. For narrower output, use the "-OUTfile=SomeFilename" option on the dsmadmc invocation. Examples: 'q cont vol27 > temp' 'q cont vol28 >> temp' Note that you can't redirect output from an administrative schedule, however. Ref: Admin Ref REDuce DB nnn Reduce the amount of space that can be used in the ADSM server database, where "nnn" is the number of megabytes, which must be in multiples of 4. This command may be employed when you get message ANR2434E when attempting a DELete DBVolume. The amount of reduction possible is reflected in the "Maximum Reduction" value from 'Query DB' output, which in turn reflects the number of 4 MB partitions which have no database pages currently in them. Advisory: Reducing the DB can only be done when logmode is normal. So temporarily: Set LOGMode Normal Then reduce the DB and set the logmode back to roll-forward: Set LOGMode Rollforward Be aware that this will immediately trigger a full backup of the DB. See also: EXTend DB REDuce LOG nnn Reduce the amount of space that can be used in the ADSM server recovery log, where "nnn" is the number of megabytes, which must be in multiples of 4. The amount of reduction possible is reflected in the "Maximum Reduction" value from 'Query LOG' output, which in turn reflects the number of 4 MB partitions which have no log pages currently in them. The LOGMode must be Normal for this operation to be possible. Perform a 'Set LOGMode Normal' if necessary. See also: EXTend LOG RedWood Name for the StorageTek helical tape cartridge system. Utilizes parallel 1-by-1 CTU design to eliminate traditional queueing delays. 
More than 11 MB/sec head-to-tape data physical transfer rate. Cartridge holds 50GB. Unknown is the tape search speed: helical tape typically sacrifices such speed for density, and is inferior to the speed of linear tape technology. REGBACK NT Registry backup tool (from the NT Resource Kit). REGister Admin ADSM server command to define an administrator to the server. 'REGister Admin Adm_Name Adm_Passwd [PASSExp=0-9999Days] [CONtact="Full name, etc..."] [FORCEPwreset=No|Yes]' where a PASSExp value of 0 means that the password never expires. FORCEPwreset=Yes will induce ANR0425W. After registering, you need to 'GRant AUTHority'. REGister LICense TSM server command which enables the server for a given number of licenses, per your contract. Updates the nodelock file in the server directory. Syntax: 'REGister LICense HexLicenseNumbers|FILE=_____ Number=NumberOfLicenses' FILE may specify files like "10client.lic" that appear in your server directory. Or you might directly enter the hex numbers supplied in the printed material that came with your shipment (though it is better to first enter them into files). You may use wildcards with FILE to grab all desired files in the current directory. Advisory: Assure that the permissions on the license files prevent unauthorized people from reading them. Note that NT deviates in requiring coding as "FILE(____)". Note that you must invoke REGister LICense as many times as it takes to add up to the total number of licenses you bought. Note that this command is an interface to a license manager package (originally a 3rd party product, but since purchased by Tivoli) - one which does little exception handling and/or returns inadequate information to the TSM server code. This inadequacy results in the following observed problems: REGister LICense will result in no change (and no error message) if the file system that the server directory is in is full. (Message ANR9627E is supposed to appear if the file system is full.) 
The operation can also fail in the same manner if the server system date is wacky, or the input license files specify a different server level. If the server processor board is upgraded such that its serial number changes, the REGister LICense procedure must be repeated: remove the nodelock file first. REGister LICense relies on the computer's date/time. When registering a license or restarting the ITSM server the "LicenseStartDate" is compared to the computer's date/time. "LicenseStartDate" is hardcoded in each of the ITSM server's license files. If the computer's date/time is set to before the "LicenseStartDate" that license will not be registered, and you can end up with message ANR2841W. Further, Query LICense will not show that license registered. (Note, of course, that LicenseStartDate values may differ, so you may see mixed results.) Msgs: ANR2841W See also: Unregister licenses REGister Node ADSM server command to register a node. Syntax: 'REGister Node NodeName Password [PASSExp=Expires0-9999Days] [CONtact=SomeoneToContact] [DOmain=DomainName] [COMPression=Client|Yes|No] [ARCHDELete=Yes|No] [BACKDELete=No|Yes] [CLOptset=______] [FORCEPwreset=No|Yes] [Type=Client|Server] [URL=____] [KEEPMP=No|Yes] [MAXNUMMP=1|UpTo999] [USerid=|NONE |SomeName]' where: FORCEPwreset Force the next/first usage to incite changing the password. This is particularly valuable for the server administrator to set an initial password which the client admin can change to be something known only to that person. PASSExp value of 0 means that the password never expires - unless overridden by the Set PASSExp value. COMPression=Yes Requires that the client compress its files before sending to the server. Results in the following scheduler message: "Data compression forced on by the server" URL Specifies the URL address that is used in your Web browser to administer the TSM client. 
By default, this command automatically creates an administrative user ID whose name is the nodename, with client owner authority over the node. This administrative user ID may be used to access the Web backup-archive client from remote locations through a Web browser. If an administrative user ID already exists with the same name as the node being registered, an administrative user ID is not automatically defined. You can suppress creation of such an administrative user ID via USerid=NONE. This process also applies if your site uses open registration. Be sure to specify the DOmain name you want, because the default is the STANDARD domain, which is what IBM supplied rather than what you set up. There must be a defined and active Policy Set. Note that this is how the client node gets a default policy domain, default management class, etc. Msgs: ANR0422W for when a non-registered node attempts to use TSM. Opposite: REMove Node See also: MAXNUMMP; Password; Set AUthentication Registered nodes, number 'Query DOmain Format=Detailed' Registered nodes, query 'Query Node' Registration The process of identifying a client node or administrator to the server by specifying a user ID, password, and contact information. For client nodes, a policy domain, compression status, and deletion privileges are also specified. See "Open Registration", "Closed Registration". Registration, make Closed Can be selected via the command: 'Set REGistration Closed'. Registration, make Open Can be selected via the command: 'Set REGistration Open'. Registration, query 'Query STatus' ADSM server command, look for "Registration:" value (as in Closed or Open). Registry (Windows) backup See: Backup Registry, BACKUPRegistry REGREST Standalone Windows utility to restore the registry file created with the Windows BACKUP REGISTRY command. Provided in the Windows Server Resource Kit. NTBackup will backup the registry as part of the System State. 
REGBACK and REGREST are Resource Kit utilities to backup and restore the Registry without the rest of the System State. See also: dsmc REStore REgistry Reinventory complete system A 3494 function invoked from the Commands menu of the operator station, to freshly inventory all storage components. Normally protected with sysadmin password. WARNING!!! This function will cause the category codes of all tapes in the library to be reset, to Insert!! (The re-inventory processes cause the existing library manager volume database to be deleted, a new database initialized, and records added for all the cartridges within the library.) You should perform this operation only when first installing the 3494, but *never* thereafter. If you inadvertently execute this destructive operation, you can perform a TSM AUDit LIBRary, which will fix the category codes. Contrast with "Inventory Update". Relabelling a tape... Will destroy ALL data remaining on it, because a new label will be written at the beginning of the tape, rendering all data after the labels inaccessible. Release tape drive from host Unix: 'tapeutil -f /dev/rmt? release' Windows: 'ntutil -t tape_ release' after having done a "reserve". Remote Client Agent A.k.a. TSM Remote Client Agent Windows component of the client as used by the web client. See also: Client Acceptor Daemon; Scheduler Ref: "Starting the Web Client" in the Installing the Clients manual Remote console See: -CONsolemode Remote Desk Top Connection See: TDP for Domino (TDP Domino), Terminal Services restriction Removable volumes, show See: SHow ASACQUIRED REMove Admin TSM server command to remove an administrator from the system. 'REMove Admin Adm_Name' See also: REGister Admin; REName Admin REMove Node Server command to delete a defined node. You should have removed all of the node's filespaces and backup sets prior to removing the node itself. 
Syntax: 'REMove Node NodeName' See also: DELete BACKUPSET; DELete FIlespace -REMOVEOPerandlimit TSM 5.2.2 Unix client option to remove the artificial limit of 20 operands on the command line of Archive, Incremental, and Selective commands. Note that this option must appear on the command line: it is not valid in an options file. REName Admin Server command to rename an administrator. Syntax: 'REName Admin Old_Adm_Name New_Name' REName FIlespace Server command to rename a FIlespace. Syntax: 'REName FIlespace NodeName FSname Newname' Note that you can only rename a filespace within a node: you cannot rename it so that it is under another node. Advisory: Be careful that the new name does not conflict with an existing host file system, particularly if the file system types differ. CAUTION: The filespace name you see in character form in the server may not accurately reflect reality, in that the clients may well employ different code pages (Windows: Unicode) than the server. The hexadecimal representation of the name in Query FIlespace is your ultimate reference. REName Node Server command to rename a node. Syntax: 'REName Node OldNodeName NewNodeName' The new name must not already exist, else you get error "ANR2147E RENAME NODE: Node is already registered." Notes: The node's filespaces are, of course, brought along to be under the new name. REName STGpool ADSMv3 server command to rename a storage pool. Syntax: 'REName STGpool PoolName NewName' REPAIR STGVOL Special command, to be used under the instructions of TSM Support, to repair TSM database issues relating to storage pool problems from various causes, including from storage pool simultaneous write (COPYSTGPOOL=), as described in APAR IC37275, involving extraneous rows in the DS.Segments table or the AS.Segments table. See also: ANR0102E Note that repair tools are not rigorously developed, and may have problems, as a search of the IBM site reveals; hence the importance of running such only under IBM supervision. 
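Per the REMove Node entry above, retiring a node is typically a sequence along these lines (node name hypothetical; wildcard support depends on your server level):

```
Query FIlespace PAYROLL1        (see what the node owns)
DELete FIlespace PAYROLL1 *     (remove its filespaces)
DELete BACKUPSET PAYROLL1 *     (remove any backup sets)
REMove Node PAYROLL1
```

The filespace deletions can run long, as each entails expiring all the node's stored objects.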
REPlace (-REPlace=) Client User Options file (dsm.opt) or (REPlace=No) 'dsmc' command option to specify handling when a file to be Restored or Retrieved already exists at the client location. Choices: Prompt, for choice of overwriting; All, to overwrite any existing files, including those read-only; Yes, to overwrite any existing files, except read-only files; No, do not overwrite any existing files, as when restarting an interrupted restoral. (Expect to see msgs like "File ____ exists, skipping", which reflects the server having gone through the effort to retrieve the file and send it to the client, only to have it be skipped by the client.) No-replace is based solely on the file name: the relative content of the file, its size, and timestamps are not factors. Command line example: -REPlace=Yes If the file system is to be NFS-served, "Prompt" should not be in effect because the NFS client won't get the prompt. See also: IFNewer Report width See: -COMMAdelimited; -DISPLaymode; SELECT output, column width; Set SQLDISPlaymode; -TABdelimited Reporting products (reports) See: TSM monitoring products REQSYSauthoutfile ADSM server option, as of 199908, to provide additional control related to the administrative authority required to issue selected commands that cause the ADSM server to write information to an external file. Choices: Yes Specifies that system authority is required for administrative commands that cause the server to write to an external file: - MOVE and QUERY DRMEDIA when CMD specified; - MOVE and QUERY MEDIA when CMD specified; - BACKUP VOLHISTORY when FILENAMES specified; - BACKUP DEVCONFIG when FILENAMES specified; - TRACE BEGIN when a file name is specified; - QUERY SCRIPT when OUTPUTFILE specified. Yes is the default. No Specifies that system authority is not required for administrative commands that cause the server to write to an external file (i.e., there is no change to the privilege class required to execute the command). 
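The REPlace choices above amount to a simple decision rule, sketched here as a toy function. This illustrates the documented behavior only, and is not actual client code; note that the decision is name-based, so file content, size, and timestamps do not appear as inputs.

```python
# Toy decision rule for the client REPlace option (illustrative sketch).
def should_overwrite(replace, exists, read_only):
    """replace: 'All' | 'Yes' | 'No' ('Prompt' omitted: it asks the user)."""
    if not exists:
        return True              # nothing in the way: always restore
    if replace == "All":
        return True              # overwrite even read-only files
    if replace == "Yes":
        return not read_only     # overwrite, except read-only files
    return False                 # 'No': never overwrite existing files

print(should_overwrite("No", exists=True, read_only=False))   # False: skipped
print(should_overwrite("Yes", exists=True, read_only=True))   # False
print(should_overwrite("All", exists=True, read_only=True))   # True
```

The False cases correspond to the "File ____ exists, skipping" messages noted above: the server still retrieves and sends the file, and only then does the client skip it.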
Reserve A special device command to retain control of a tape drive or the like in an environment where the drive is shared by multiple hosts, over multiple open-close processing sequences. In AIX, this is accomplished at the driver level by issuing an ioctl() to perform an SIOC_RESERVE command. Msgs: ANR8376I Reserve tape drive from host Unix: 'tapeutil -f /dev/rmt? reserve' Windows: 'ntutil -t tape_ reserve' When done, release the drive: Unix: 'tapeutil -f /dev/rmt? release' Windows: 'ntutil -t tape_ release' Reserve/Release A facility available via the 3590 device driver whereby an accessing system can dedicate (reserve) a tape drive to itself for the duration of processing a tape, and thereafter release it. In this way all the drives in a 3494 may be serially shared by all the RS/6000s which access the 3494. Ref: 3494/3590 device drivers manual discussion of SIOC_RESERVE and SIOC_RELEASE. RESet BUFPool Server command to reset the database buffer pool statistics, as reported by 'Query DB Format=Detailed'. Do this after changing BUFPoolsize. RESet DBMaxutilization Server command to reset the maximum utilization statistic (Max. Pct Util) for the database, as reported from 'Query DB'. RESet LOGConsumption Server command to reset the statistic on the amount of recovery log space that has been consumed since the last reset, as shown in a 'Query LOG Format=Detailed' report. RESet LOGMaxutilization Server command to reset the max utilization statistic (Max. Pct Util) for the recovery log, as seen in 'Query LOG'. RESETARCHIVEATTRibute TSM 5.2 Windows client option to allow resetting the Windows archive attribute for files during a backup operation. Specify Yes or No. Default: No, do not reset the Windows archive attribute for files during a backup operation. resident file A file that resides on a local file system. It has not been migrated or premigrated, or it has been recalled from ADSM storage and modified. When first created, all files are resident. 
Contrast with premigrated file and migrated file. RESOURCETimeout TSM 4.2+ server option to specify how long the server waits for a resource before cancelling the acquisition of a resource. At timeout, the request for the resource will be cancelled, with msg ANR0530W. See also msg ANR0538I. Specify: 1 - N (minutes) Default: 10 in TSM 4.2; 60 in 5.1 (per APAR PQ56967). RESOURceutilization [1-10] TSM 3.7+ client system options file (dsm.sys) option to regulate the level of resources the TSM server and client can use during Multi-Session Backup and Archive processing, later extended in TSM 5.1 for Multi-Session Restore. Specifies the number of sessions opened between the TSM server and client. Code: 1 - 10. Default: 2 With a value of 2, one Producer (control) session is used for querying the TSM server and reporting final results to the TSM server, and one Consumer (data) session is used for transferring file data. With a value of 1, you get a single, combined Producer+Consumer session. In IBM parlance, this prevents "thread switching". With numbers higher than two you may get some multiple combination: with 5 there may be 2 Producer sessions and 3 Consumer sessions. Each Consumer session results in its own entry in the accounting log and summary table, as reported by the associated Producer session. Note that IDLETimeout still pertains: if the IDLETimeout limit is reached before the 2nd session has finished backing up the filespace, the 'communication' session (1st session) is terminated and any additional file systems are not backed up, and/or the summary statistics are not transmitted. For example, a setting of "RESOURceutilization 1" uses fewer system resources than a setting of "RESOURceutilization 10". Notes: the full exploitation of multiple sessions is possible only if you have both TSM 3.7 client AND server. 
Ref: TSM 3.7 Technical Guide redbook TSM 5.1 Technical Guide redbook RESTArt Restore ADSM v.3 client command to restart a restoral from where it left off, as when the restoral was interrupted. This is available in restorals in which ADSM is keeping track of the files involved in the restoral (see "No Query Restore"). Note that you *must* either restart an interrupted restoral, or perform a CANcel Restore, else further backups and restorals of the filespace are inhibited. See also: RESTOREINTERVAL Restartable Restore ADSMv3+ facility to prevent having to start over when a restoral was interrupted, as by a data communications (network) problem or a media (disk, tape) or file problem. Is an extension of No Query Restore (NQR) in that the server, rather than the client, is maintaining the list of files involved in the restoral, thus facilitating restart after client session demise. NQR does the sorting of client files on the server machine and thus can keep a record of the list of files to restore and which ones have already been restored. RR cannot prevail where NQR cannot be used, as in the use of any of the following options (or their GUI equivalents): -latest -inactive -pick -fromdate or -todate -fromtime or -totime Falls under the more general category Fault Tolerance. Note that having a Restartable Restore pending blocks that filespace from any other action (backup, reclamation, BAckup STGpool, etc.) until the restore is finished: the filespace is locked. RR state is preserved in the *SM database and thus prevails across *SM server restart. Removal: The RR state is normally removed, and the filespace unlocked, by: - Successful conclusion of the restore. - Cancellation. The RR state is also removed by some server processes after the RESTOREINTERVAL has elapsed. 
Server data movement operations such as storage pool migration, reclamation processes, expiration processing, and MOVE DATA commands remove the restartable restore state from the ADSM database when they run. Ref: ADSM v3 Technical Guide redbook See also: dsmc Query RESTore; Query RESTore; RESTArt Restore; RESTOREINTERVAL Restoral, tapes needed See: Restoral preview Restoral performance Overall, consider that restoral (slow restoral) performance is inherently limited by the choices you made in configuring your TSM backup scheme. Further, the manner in which you request TSM to perform the restoral can have a dramatic impact upon performance. Consider also that establishing a file in a file system takes considerably more time than simply reading an established one, as during backup. Detailed factors: - A restoral via command line invocation (CLI) runs faster than a restoral invoked via the GUI. (See: GUI client) - In a Unix or like environment where a shell will expand exposed wildcards, prevent that from happening: let TSM expand wildcards, and thus figure out the best order in which to restore objects. This helps minimize tape mounts and rewinding. Likewise, use a single restoral operation to restore as many objects as possible, rather than multiple commands. - Avoid use of the client COMPRESSIon option for backups, as the client will have to uncompress every file being restored! - A file system that does compression (e.g., NTFS) will prolong the job. - Restoring to a file system which is networked to this client system rather than native to it (e.g., NFS, AFS) will naturally be relatively slow. - Use Collocation...to the extent that you can afford it in Backups. Collocation by FILespace will optimize restorals but cost a lot in tapes and tape mount time. - Beyond Collocation: have your storage pools defined so that Archive, Backup, HSM each have their own primary storage pools, to keep them separate. 
Intermingling will cause Backup data to get spread out and thus prolong Restorals. - Consider using MAXNUMMP to increase the number of drives you may simultaneously use. - In Unix clients where sparse files are rarely restored, consider adding MAKESPARSEFILE NO to dsm.opt. See: Sparse files, handling of - Use the Quiet (-Quiet) option to eliminate the overhead of formulating and writing progress messages. - ADSMv3 Small File Aggregation helps speed restorals. - Perform full backups periodically to create a complete, contiguous image of the filespace. See: Backup, full - Planning your storage pool hierarchy can make restorals a lot faster by keeping newer (more likely Active) data in an upper level storage pool and migrating older (more likely Inactive) data to a lower level storage pool via the MIGDelay control. - Employ two different node names and management classes for the same client so as to have a storage pool for only Active data as well as a more conventional storage pool of Active and Inactive data. See IBM site Technote 1148497. - ADSMv3 "No Query Restore" speeds restorals by eliminating the preliminary step of the server having to send the full repertoire of file objects it has for the client, and the need for the client to traverse the list if it already knows what needs to be restored. (But note: There have been performance problems with No Query Restore itself. IBM created the DISABLENQR client trace option to compensate. See notes at end of this file.) - If your operating system has data-rich directories such that they cannot be contained within the *SM database (as they can with most Unix systems), consider using DIRMc to keep them in a disk storage pool, to eliminate tape operations in the initial, directories portion of a restoral. - Minimize other server activity during the restoral period. Suppress some administrative schedules, which could interfere with resources available to the restore. 
(In particular, note that 'BAckup DB' can pre-empt other processes when it needs tape drives.) - Maximize your buffer sizes; but watch out for performance penalty at certain TCPBufsize sizes (q.v.). - Minimize your MOUNTRetention value for the duration of the restoral so as to avoid a new tape mount having to wait for a lingering tape to be dismounted from that drive. (Note that TSM does not call for a next mount as it's finishing work on the current tape, so there is always wasted time waiting for the next mount.) - May be waiting for mount points on the server. Do 'Query SEssion F=D'. - Automatic tape drive cleaning and retries on a dirty drive will slow down the action in a very unaccountable way. - Tapes written years ago, or tapes whose media is marginal, may be tough for the tape drive to read, and the drive may linger on a tape block for some time, laboring to read it - and may not give any indication to the operating system that it had to undertake this extra effort and time. - Tape/drive difficulties during Backup cause TSM to continue the Backup on another tape, which results in spread data. Later returning the problem tape to read-write state for further backup use unfortunately further spreads the data. - Make sure that if you activated client tracing in the past that you did not leave it active, as its overhead will dramatically slow client performance. - If CRC data is associated with the storage pool data, the CRC is validated during the restoral, which adds some time. - Unix: Consider disabling sync for that file system for the duration of the restoral. There is also the public domain 'fastfs' program for Solaris systems, to speed restorals through use of delayed I/O. - Restoral works by reconstructing the file system directory structure first. The directories for many operating systems reside in the *SM database itself; but if yours goes to a storage pool, make the storage pool disk (as via DIRMc). 
- When restoring a single file, DO NOT use -SUbdir=Yes, because it may cause the directory tree to be restored (see APAR IC21360) - In Novell Netware: Try boosting the PROCESSORutilization value. - Is your tape drive technology fast in real-world start-stop processing, as opposed to streaming? That's what's involved in restoring smaller files distributed over a tape, with the positioning required. (DLT has been distinguished by poor start-stop performance.) - Tape length: Longer tapes are nice for increased data storage, but obviously make for longer positioning times. - If using ethernet (particularly 100 Mb), make sure your adapter cards are not set for Auto Negotiation. See the topic "NETWORK PERFORMANCE (ETHERNET PERFORMANCE)" near the bottom of this document. - Beware the invisible: networking administrators may have changed the "quality of service" rating - perhaps per your predecessor - so that *SM traffic has reduced priority on that network link. - If using MVS, be aware that its TCP/IP has a history of inferior performance, partly because it is an adjunct to the operating system, rather than built in. - Make sure there is no virus-scanning software running: it will take time to examine every incoming file! - If you have multiple tape drives on one SCSI chain, consider dedicating one host adapter card to each drive in order to maximize performance. - If you mix SCSI device types on a single SCSI chain, you may be limiting your fastest device to the speed of the slowest device. For example, putting a single-ended device on a SCSI chain with a differential device will cause the chain speed to drop to that of the single-ended device. - If using a database TDP, your host configuration may be self-defeating: a single drive containing your transaction log and trying to satisfy the current running server log entries and trying to restore and replay the old transaction log entries is one very busy drive, with much arm movement trying to satisfy all demands. 
In any database scenario, distributing I/O demands makes for much better performance. - Restorals of TDP for MSSQL (q.v.) may take a long time because the database "container" has to be recreated (formatted) before the restoral of content can occur. - Depending upon the nature of the restoral and storage pool collocation you may be able to invoke multiple 'dsmc RESTore' commands to parallelize the task, without running into volume contention in the TSM server. - A primary storage pool volume needed for the restoral is marked as being present in the library, but is not, and a MOUNTWait timeout has to occur before the restoral process goes on to mount a copy storage pool volume instead. - With a JFS file system (e.g., in AIX), a jfslog which is at the edge of the volume rather than in the middle will reduce performance. - The v5 client provides the option of multiple restore streams. - If using an IBM ESS 2105 (Shark), avoid using AIX LVM striping: the ESS stripes write operations internally, and redundantly striping with AIX will increase the number of write I/O operations, which can negatively affect performance. See also: Backup taking too long; Client performance factors; Restore Order; Server performance For additional info, search the APAR database for "adsm restore performance". Restoral preview You may be disappointed to find that there is no restoral preview in the product - an option you may seek for restoral planning: you embark upon restorals with no fore-awareness of the number of tapes or which volumes will be involved. This seeming shortcoming derives from the file-oriented philosophy of the product - that you should not be concerned about where files are on their storage media. You might think that this would have been in the earlier incarnations of the product, in the days before automatic tape libraries, when operators had to respond to tape mount requests; but it didn't get implemented then. 
Now that TSM is an enterprise type product, the presumption is that you would by definition always have all needed tapes available in your library anyway. A Preview capability would tell us: - What volumes would be required; - If all the volumes are available (onsite, offsite, volumes Unavailable, files Damaged, etc.); - If sufficient drives are available, and how many would be used; - The amount of data that will be restored. In the absence of a restoral preview capability in the product, there are no good alternatives. Some will advise getting a list of volumes from the Volumeusage table (via Select, or SHow VOLUMEUsage), but that's a false recommendation in that the list will be that of all primary storage pool volumes in use by the node - not just those which a restoral will need. Select queries in the server, to identify the tapes containing files to be restored, are prohibitively time-consuming in the Contents table (far slower than the client itself can obtain the info); and doing a dummy restoral to a trash area to identify the tapes is wasteful, and not possible if the volumes are offsite - which is why you wanted the preview in the first place. Restoral timestamps, Unix The product reinstates the original atime and mtime as they were at the time of backup. In doing so, the ctime (inode admin change time) is necessarily changed to the restoral time, which is typically fine as ctime is of no consequence except in security investigations. Note that the product backs up files if they are changed; so if you read a file after the backup, it will not be backed up again because its mtime remains unchanged, though the atime value is changed by the reading. A restoral in effect resets the atime value. Restoral tips, NT There are some basic rules when trying to restore directories and files to an NT system, specifically regarding permissions. 1. File Permissions are ALWAYS restored 2. Directory permissions are restored when the original directory still exists 3.
Directory permissions are only restored on non-existing directories if the command line interface is used, together with the -SUbdir=Yes option. 4. Restoring files to a temporary destination and then moving them will only keep the permissions when moved on the same logical drive. (NT rule) When you share a directory the sharing information is not written to the shared directory. So when you restore the directory, it won't get shared automatically. Restoral volumes, determine See: Restoral preview Restorals, prevent The product does not provide a way to disallow restorals, given that the ability to recover data is a fundamental requirement of the product. However, one way to achieve it is to have backups performed only via client schedules, with SCHEDMODe PRompted, and do UPDate Node ___ SESSIONINITiation=SERVEROnly. See also: Archives, prohibit; Backups, prevent Restore The process of copying a backup version of a file from ADSM storage to a local file system. You can restore a file to its original location or a new location. The backup copy in the storage pool is not affected. Priority: Lower than Restore. ADSMv2 Restore works as follows... Phase 1: Get info from the server about all filespace files which qualify for the restoral; Phase 2: Create those file system objects involving descriptions rather than data... Directories are restored first, directly from the ADSM database info about the directory. If the directory exists, it is not restored - the existing directory is used. If the directory does not exist: For the command line client: the directory is restored with backed up attributes if SUbdir=Yes. For the GUI: Restore by Subdirectory Branch: the directory is created and restored with backed up attributes. Restore by File Specification/Restore by Tree: the directory is created with default directory attributes. (Note that directory reconstruction occurs WITHOUT a session with the server!)
Empty (zero-length) files are restored after directories and before any files containing data... Phase 3: Restore data-laden files... Files are restored with their backed up permissions when REPlace=Yes or All. If REPlace=No, *SM does not restore the existing files. Option Verbose shows name and size information for files backed up and restored, not permission information. ADSMv3 Restore works as documented in the B/A Client manual, under "No Query Restore". When a restore is running, a 'Query Mount' will show the tape mounted R/O. Note that restoral will by necessity change directory and symbolic link dates as it reestablishes them; and symbolic links may be created under "root" rather than their original creator if the operating system lacks the lchown() system call. Unicode note: The server allows only a Unicode-enabled client to restore files from a Unicode-enabled file space. WARNING: When a Restore is occurring, prevent new backup processes from running, which could create new backup file versions that could conflict with and screw up the restoral. (See: Backups, prevent.) Contrast with Backup, Retrieve, Recall. See also: dsmc REStore RESTORE Server database SQL table involved in Restartable Restore processing. See also: RESTOREINTERVAL Restore, client no longer exists Sometimes the client system that had backed up data has disappeared, but the enterprise wants to restore some data that had been on it. Refer to: Backup-Archive Clients manual, "Restore: Advanced Considerations" Restore, handling of existent file on client Use the Client User Options file (dsm.opt) option REPlace to specify handling. Restore, number of tape drives used The manuals are unspecific about this, but TSM uses one tape drive per client session in performing restorals.
The most said about this is in the Performing Large Restore Operations topic of the client Backup-Archive manual, which advocates starting multiple restore commands to use multiple tape drives - but does not say that only one tape drive will be used if only one command is issued. Note, however, that having multiple drives will not be productive if the data needed is on a single tape, as there is no tape sharing. See also: MAXNUMMP, as it affects the number of drives the client can use; KEEPMP for keeping the mount point through the session. Restore, tape mounted multiple times Though TSM in most cases mounts tapes only once during a restoral, there may be occasions where you see it mounting a tape more than once. This has been observed where files span volumes: the tape from which a file spans is mounted to get the first part of the file, then the tape containing the rest of the file is mounted, plus other files. But TSM may need to go back to that first tape for other files. Restore, using "GUI" Users with Xterminals can simply use the 'dsm' command and be presented with a nice graphical interface. Beware that the final report will not reveal the elapsed time. (Users with dumb tty terminals can have a similar capability via the "-pick" option, which presents a list, as in: 'dsmc restore -pick /home -SUbdir=Yes') See also: -PIck Restore, volumes needed See: Restoral preview Restore across architectural platforms Cross-platform restores only work on those platforms that understand the other's file systems, such as among Windows, DOS, NT, and OS/2; or among AIX, IRIX, and Solaris (the "slash" and "backslash" camps). For cross-platform restores to be possible, the respective clients would both have to support the same file system type, meaning both that the client software was programmed to do so and that it was formally documented that it really could do so, in the client manual. Simply look in the Unix Client manual, under "File system and ACL support" vs.
the Windows Client under "Performing an incremental, selective, or incremental-by-date backup". See also: Platform; Query Backup across architectural platforms Restore across clients (nodes) (cross-node restoral) You can restore files across clients if you know the proper client password, and in invoking the restoral command you use option -VIRTUALNodename in Unix, or -NODename in Netware and Windows. That is, files belonging to client C_owner can be accessed from client C_other if you invoke the TSM client program (dsm or dsmc) from client C_other and know client C_owner's password. Sample CLI session, as invoked on client C_other to access C_owner files: 'dsmc restore -NODename=C_owner -PASsword=xxx ...' or use the GUI from client C_other as: 'dsm -NODename=C_owner' and more securely supply that client password at the prompt. This technique is a way for root to get files across systems, and operates upon all files - root's as well as those of all other users. Note that a 'Query SEssion' in the server shows the session active for the node specified by -NODename, rather than the actual identity of the client. Requirements: The source and destination file system architectures must be equivalent, and the level of the restoring client software must be at least the same level as the software on the client which did the backup. Ref: Backup-Archive Clients manual, "Restore or Retrieve Files to Another Workstation" See also: NODename; VIRTUALNodename Restore across nodes See: Restore across clients Restore across servers You can restore files across servers if you know the proper client password. That is, for client C1 whose natural files are on server S1, you can instead go after files stored by client C2 on server S2 if you know that other client's password and redirect to that server. Sample syntax: 'dsmc restore -server=S2 -NODename=C2 -PASsword=xxx' or use the GUI as: 'dsm -server=S2 -NODename=C2' and more securely supply that client password at the prompt.
This technique is a way for root to get files across systems and clusters, and operates upon all files - root's as well as those of all other users. Note: The other server must be defined in the Client System Options file (/usr/lpp/adsm/bin/dsm.sys). Restore and management class When a Backup is done on a file, you can employ any of a number of management classes to accomplish it. Thereafter, you can see the management class used for that backup when you either do a 'dsmc q backup' or use the GUI. The management class reflected in a restoral is, like file size, an informational value only - not a selectable one, as date is. RESTORE DB See: DSMSERV RESTORE DB Restore directly from Copy Storage Pool See: Copy Storage Pool, restore files directly from Restore empty directories To ensure that you can restore empty directories, you must back them up at least once with an incremental backup. Also, ADSM restores empty directories when you use the subdirectory path method. You should also note that if a directory and its contents are deleted, and you use ADSM to restore the directory and data, all associated ACPs will be restored. If the contents of a directory are deleted but the directory is not, and ADSM is used to recover the data, all ACPs associated with the data will be recovered, but the ACPs associated with the directory will not be recovered. Directory ACPs are recovered only when a directory is newly created during restore from the ADSM backup copy. Do 'dsmc Query Backup * -dirs -sub=yes' on the client to find the empties, or choose Directory Tree under 'dsm'. Example: Restore the empty directory /home/joe/empty-dir: 'dsmc restore -dir /home/joe/empty-dir' It will yield message "ANS4302E No objects on server match query", but will nevertheless restore the empty directory. Restore failing on "file not found" problem A way around it is to create a file by that name, do a selective backup to fulfill its existence, and then retry the full restore.
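The three-step "file not found" workaround just described can be sketched as a shell sequence. This is illustrative only: the dsmc function below is a stand-in that merely records what the real client would be asked to do, and the filename is a hypothetical temporary file.

```shell
# Sketch of the "file not found" workaround: create the missing file,
# selectively back it up, then retry the full restore.
# 'dsmc' here is a stand-in that just logs the real client commands.
log=$(mktemp)
dsmc() { echo "dsmc $*" >> "$log"; }

file=$(mktemp)                                   # stands in for the problem filename
dsmc selective "$file"                           # back it up to fulfill its existence
dsmc restore -SUbdir=Yes "$(dirname "$file")/"   # then retry the full restore

cat "$log"
rm -f "$file"
```

With the real client, the same two dsmc invocations would be issued against the actual path the restore complains about.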
Restore fails in Netware on long file name See: Long filenames in Netware restorals Restore Order (Restoral Order) From APAR IC24321: ADSM V3 CLIENTS ALWAYS RESTORE OR RETRIEVE DIRECTORIES EVEN WHEN PARMS SUCH AS REPLACE=NO OR -IFNEWER ARE USED (1999/07). "During ADSM restore and/or retrieve processing the objects being restored/retrieved are being returned from the server to the client in "restore order". This concept of "restore order" is that the objects are returned in the order on which they appear on the given media. This avoids restore/retrieve performance issues of sequential volume "thrashing" (positioning back and forth on a sequential volume) and multiple mounts of the same sequential media. The "restore order" considers where objects exist on sequential media and brings them back in this order so that the media can be moved from beginning to end. One of the side effects of this type of processing involves the restore/retrieve of directories. When a file needs to be restored/retrieved into a directory that does not exist yet (because its restore order is down further) the ADSM client must build a skeleton [surrogate] directory to place this file under. When the client then encounters the directory in the restore order it will overwrite this skeleton it originally put down. At this time the ADSM client is not designed to track which directories it lays down as skeletons and which were already there. This means that the client restore/retrieves directories whenever it encounters them within the restore order. This is true regardless of REPlace=No being specified. Or regardless of -ifnewer being used and the directory being restored being older. The ADSM client needs a design change in this area to track which directories it puts down as skeletons and which it does not. It needs to only restore those where it put down the skeleton.
The requirement to not replace existing directories when -REPlace=No is in effect involves a design change in ADSM restore/retrieve processing that is beyond the scope of a PTF fix. However, ADSM Development agrees with the need for this requirement, and has accepted it for implementation in a future version of the product." MY NOTE: Clients like AIX which have simple directory structures have their directories in the *SM database rather than storage pools, and so they would not be on sequential media and hence would be immune to this problem. Restore performance See: Restoral performance Restore runs out of disk space? If it looks like there is sufficient file system space and yet this occurs, it's likely that files are being restored for a user whose disk quota is being exceeded. RESTORE STGpool *SM server command to restore files from one or more copy storage pools to a primary storage pool. Syntax: 'RESTORE STGpool PrimaryPool [COPYstgpool=PoolName] [NEWstgpool=NewPrimaryPool] [MAXPRocess=1|N] [Preview=No|Yes] [Wait=No|Yes]' Attempts to minimize tape mounts and positioning for the Copy Storage pool volumes from which files are restored. Depending on how scattered these files are in your Copy Storage pool, quite a bit of CPU and database activity may be required to locate the necessary files and to restore them in the optimal order. File aggregation in ADSM V.3 should help significantly. RESTORE STGpool vs. RESTORE Volume The Restore Stgpool and Restore Volume commands are very closely related. Under the covers, most of the code is the same. The major differences are: - Restore Stgpool restores primary files that have previously been marked as damaged because of a detected data-integrity error. This is done regardless of whether the volume has been designated as destroyed. - Restore Volume allows you to specify the volume name(s) rather than using UPDate Volume to designate the destroyed volume(s). 
For restoring a small number of volumes, the Restore Volume is more convenient, particularly if you are not interested in restoring damaged files on other volumes. For restoring damaged files or a large number of destroyed volumes, Restore Stgpool is preferable. Restore to different node See: Restore across clients Restore to tape, not disk The Restore function wants to write the subject file to disk (which is cheap and capacious these days). But sometimes you simply don't have enough disk space to accommodate standard retrieval of very large files. Here is a Unix technique for instead restoring the files, one at a time, and putting each directly to tape: In one window, do: mkfifo fifo; # Create Named Pipe, # called "fifo". dd if=fifo of=/dev/rmt1 # Tape drive # of your choice, tape in it. In another window, do: dsmc restore -REPlace=Yes SubjectFilename fifo This will restore the desired backup file and, instead of restoring it to its natural name, will direct it to "fifo". The "-REPlace=Yes" will quell the restore's fear of replacing the file which, as a FIFO type special file, will instead result in the data being sent to whatever is reading the named pipe, which in this case is the 'dd' command, which passes it to tape. When the restoral ends, the 'dd' command will end and the file's data will be on that tape. Record on the tape's external label the identity of the data written to the tape. To later extract the data from the tape, again use the 'dd' command, specifying the chosen tape drive via "if" and an output file via "of". Since this is plain data on a non-labeled tape, an operating system other than Unix should be able to get the data from the tape just as easily. Note that the inverse is not possible: you cannot have a FIFO as input to a dsmc backup operation. (TSM will detect the named object as being a special file and back it up as such, which is to say send its description to the server, rather than try to read it as a file.)
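The named-pipe technique above can be demonstrated end to end without a TSM client or tape drive. In this sketch, a plain file (tape.img) stands in for /dev/rmt1, and 'cat' stands in for the 'dsmc restore' that writes into the pipe; the mkfifo/dd plumbing is the same as in the real procedure.

```shell
# Demonstration of the named-pipe data flow described above, using
# stand-ins: tape.img in place of /dev/rmt1, 'cat' in place of
# 'dsmc restore -REPlace=Yes SubjectFilename fifo'.
workdir=$(mktemp -d)
cd "$workdir"

printf 'restored file contents\n' > source.dat   # stands in for the backup data

mkfifo fifo                          # the named pipe the restore writes into
dd if=fifo of=tape.img 2>/dev/null & # reader: passes pipe data to the "tape"
ddpid=$!

cat source.dat > fifo                # writer: stand-in for the dsmc restore

wait "$ddpid"                        # dd ends when the writer closes the pipe
cmp -s source.dat tape.img && echo "data reached the tape image intact"
```

The real procedure differs only in that dd's output file is the tape device and the writer is the dsmc restore command.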
RESTORE Volume Server command to recover a primary storage pool volume (disk or tape) from data backed up to the copy storage pool, by restoring the data to one or more other volumes in the same (or designated) storage pool. At the beginning of the operation the Access Mode of the volume is changed to DEStroyed. When restoration is complete, the destroyed volume is logically empty and so is automatically deleted from the database and given Status Scratch. 'RESTORE Volume VolName(s) [COPYstgpool=CopyPool] [NEWstgpool=NewPoolName] [MAXPRocess=1|N] [Preview=No|Yes] [Wait=No|Yes]' 'RESTORE Volume VolName Preview=Yes' will give you (among other information) a list of copy storage pool volumes needed to restore your primary volume. (Note: If you perform the Preview when expirations and reclamations are running, the volumes can change.) As the invoked restore proceeds, performing successive Query Volume commands on the bad volume will show it progressively emptying. The operation attempts to minimize tape mounts and positioning for the Copy Storage pool volumes from which files are restored by first assembling a list of restoral files by volume. Depending on how scattered these files are in your Copy Storage pool, quite a bit of database activity may be required to locate the necessary files and then restore them in the optimal order, so you can expect the restoral to take hours!! (File aggregation helps.) Primary Storage Pools are often collocated whereas it is impractical to collocate Copy Storage Pools (because of the very many mounts that would be required in a BAckup STGpool operation). Because of the collocation incongruity, the files needed to restore a volume will inevitably be spread over many copy storage pool volumes, making for a lot of mounts.
(And if the client/filesystem involved only backs up a small amount of data per day, you will find the data spread over a VERY large number of Copy Storage Pool tapes, dwarfed by data from much more active clients/filesystems.) Because of this, it is of great advantage to first perform a Move Data to get as much viable data as possible off the volume before invoking the Restore Volume. The restore may request an offsite volume, as seen in Query REQuest. If you CANcel REQuest on that, the restore will continue, not stop - and it may realize that calling for the offsite volume was unnecessary, and proceed with an onsite copy storage pool volume instead. But it may instead end "successfully" though the data represented on those offsite tapes was not restored. Repeat the Restore Volume to use onsite tapes to complete it. Note that an interrupted Restore can be reinvoked to continue where it left off. You can gauge the progress of the recovery by doing 'Query Volume' on the subject volume, whose Pct Util will approach zero as its contents are recovered to other volume(s). Likewise, 'Query CONtent' will show the contents of the volume dwindling as the restore proceeds. And, obviously, Query ACtlog can be done to follow progress. Msgs: ANR2114I, ANR2110I See also: Collocation and RESTORE Volume RESTOREINTERVAL ADSMv3 server option specifying how long a restartable restore can be saved in the server database. "RESTOREINTERVAL n_minutes" where the value can be 0-10080 minutes (maximum = 1 week). Default: 1440 (1 day). See also: dsmc Query RESTore; Restartable Restore; RESTORE; SETOPT RESToremigstate (-RESToremigstate=) Client User Options file (dsm.opt) option and dsmc option to specify whether restorals of HSM-migrated files should return just the stub files (Yes), thus restoring them to their migrated state; or to fully restore the files to the local file system in resident state (No). Default: Yes Files with ACLs are always fully restored!
Typically used on the restoral command... 'dsmc restore -RESToremigstate=Yes -SUbdir=Yes /FileSystem' The restoral will report the full size of the file being restored; but no volume mount is needed to accomplish it, the statistics show 0 bytes transferred, and a dsmls afterward will show only the stub file (511 bytes). You should always explicitly specify -RESToremigstate=___ on the command line, because if you don't and it is coded in your options file contrary to what you intend, you will get perplexing results. Realize that Yes can only work if the file had been migrated and *then* backed up, for the stub to have been created and backed up. A file which has not been migrated obviously does not have a stub file: Backup will back up the file in the same way as for a non-HSM file system. And, naturally, small files (less than or equal to the stub size) cannot participate in migration and must be physically restored. It is important to understand that Yes only causes the TSM record portion of the stub files (first 511 bytes) to be reinstated: it reinstates neither the Leader Data within the stub file nor the file data in the HSM storage pool, and so is no good for restoring HSM files across TSM servers. Moreover, the stub file is *recreated*, but not *restored*, which is to say that it ends up with the default attributes for HSM files: any pre-existing attributes you may have specially set (migrate-on-close, read-without-recall) are lost. Specifying No causes a full restoral to occur, which actually restores the stub and its original attributes, plus the file data. See also: dsmmigundelete; Leader Data; MIGREQUIRESBkup RESToremigstate, query 'dsmc Query Option' in ADSM or 'dsmc show options' in TSM; look for "restoreMigState". RESTORES SQL table for currently active client restoral operations, introduced in v3 for Restartable Restores. Is what is inspected by the client 'dsmc Query RESTore' command and the server 'Query RESTore' command.
Restoring to renamed disk volumes on OS/2, NT, and the like One day you back up your files when your PC volume name is "DATA". Later that day you rename the volume to "APPS". If you wanted to restore the previously backed up data, you could change the volume name back; or you could simply specify the filespace name in curly braces, i.e.: RESTORE {OLDNAME}\* instead of RESTORE D:\* . Restrict server access Use the Groups and Users options (q.v.). Retain Extra Versions Backup copy group attribute reflecting the specification "RETExtra" (q.v.). Retain Only Version Backup copy group attribute reflecting the specification "RETOnly" (q.v.). Retension Term to describe "relaxing" a tape... Retensioning a tape means to wind to the end of the tape and then rewind to the beginning of the tape to even the tension throughout the tape. Doing this can reduce errors that would otherwise be encountered when reading the tape. When tapes are read or written, that occurs at a much lower speed than the rewind preceding tape ejection. Whereas normal read-write speeds wind the tape relatively evenly and gently, rewinding is more stressful, and can result in the tape being stretched somewhat, or even compressed in the inner part of the spool. The bit spacing is thus slightly altered. It therefore helps to let the tape "unwind and relax", to help return the tape to a more natural condition. Reading a tape without retensioning it, itself respools the tape and causes some relaxation such that after a read error, a second read attempt may work fine. In Unix, retensioning can be performed via 'tctl ... retension'. See also the man page on the rmt Special File: you can specify a device suffix number to cause automatic retensioning. In the case of TSM, you could conceivably redefine your tape drive to use one of the dot-number suffixed variants of the device name, and achieve automatic retensioning before reading.
This may be particularly desirable when you have to read a large number of tapes that have been in offsite storage. Retention The amount of time, in days, that inactive backed up or archived files are retained in the storage pool before they are deleted. The following copy group attributes define retention: RETExtra (retain extra versions), RETOnly (retain only version), RETVer (retain version). Retention period for archived files Is part of the Copy Group definition (RETVer). There is one Copy Group in a Management Class for backup files, and one for archived files, so the retention period is essentially part of the Management Class. Changing the retention setting of a management class's archive copy group will cause all archive versions bound to that management class to get the new retention. Retention period for archived files, change 'UPDate COpygroup DomainName SetName ClassName Type=Archive RETVer=N_Days|NOLimit' where RETVer specifies the retention period, and can be 0-9999 days, or "NOLimit". Effect: Changing RETVer causes any newly-archived files to pick up the new retention value, and previously-archived files also get the new retention value, because of their binding to the changed management class. Default: 365 days. Retention period for archived files, query 'Query COpygroup DomainName SetName ClassName Type=Archive' Retention period for archived files, query ADSM server command: 'Query COpygroup [DomainName] [SetName] Type=Archive [Format=Detailed]' Retention period for archived files, set The retention period for archive files is set via the "RETVer" parameter of the 'DEFine COpygroup' ADSM command. Can be set for 0-9999 days, or "NOLimit". Default: 365 days. Retention period for backup files, change 'UPDate COpygroup DomainName SetName ClassName RETExtra=N_Days|NOLimit RETOnly=N_Days|NOLimit' where RETExtra and RETOnly specify the retention periods, each 0-9999 days, or "NOLimit". Defaults: RETExtra=30 days, RETOnly=60 days.
Retention period for backup files, query ADSM server command: 'Query COpygroup [DomainName] [SetName] [Format=Detailed]' Retention period for event records in the server database 'Set EVentretention N_Days' Retention period for HSM-managed files They are permanently retained in the sense that they are client file system files and thus are implicitly permanent. What *do* expire are the migrated copies of these files, on the ADSM server. That is controlled by the MIGFILEEXPiration option in the Client System Options File (dsm.sys), whose value can be queried via: 'dsmc Query Option' in ADSM or 'dsmc show options' in TSM. You can code 0-9999 days. Default: 7 days. Retention period for migrated (HSM) files (after modified or deleted in client file system) Control via the MIGFILEEXPiration option in the Client System Options file (dsm.sys). Default: 7 RETExtra Backup Copy Group operand defining the retention period, in days, for Inactive backup versions (i.e., all but the latest backup version). The RETExtra "clock" does not start ticking until the backup version goes Inactive, by virtue of another Backup having been run to create a new Active version which displaces the prior Active version. That is, if you back up a file on January 15, 1997, but don't back it up again until March 1, 1997, the RETExtra retention period for the first backup version counts from March 1, not January 15. When the file is deleted from the client and a subsequent Backup makes this known to the server, all the RETExtra copies will persist, and will continue their expiration countdown: they do not immediately disappear because the client file was deleted. A RETExtra=NOLIMIT setting will cause the next-most recent copy to also be kept indefinitely (until the next backup version is created, in which case it is expired per the VERExists/VERDeleted settings).
For files still present on the client, Inactive versions will be discarded by either the VERExists versions count or the RETExtra retention period - whichever comes first. RETExtra is not an independent value: it should be considered a subset of RETOnly. See also: RETOnly, VERDeleted, VERExists RETExtra, query 'Query COpygroup', look for "Retain Extra Versions". RETOnly Backup Copy Group operand defining the retention period, in days, for the sole remaining Inactive version of a backed-up file. The scenario is: A client file that changes over time is backed up and accumulates multiple Inactive copies, as well as the Active copy, which is an image of the file that prevails on the client. The Inactive versions age, and will be deleted from server storage once older than the RETExtra value. Because the file still exists on the client, the RETOnly value is ignored. Once the file is deleted from the client, there will be only Inactive versions in server storage. When the number of Inactive versions drops to 1, the RETOnly value is considered, and the final version will be kept only as long as its increasing age is less than RETOnly. This is to say that the RETOnly "clock" for the final backup has in effect been ticking since that final version of the file went Inactive. The RETOnly value is intended to allow you to keep the final version of the file longer than the RETExtra value, if desired. Example: RETExtra=45 and RETOnly=45... The final Inactive version will be on the server for no more than 45 days. If you wanted to keep it for 45 days longer, you would have to code RETOnly=90. It does not make sense for the RETOnly value to be less than the RETExtra value, given that both refer to the singular age of one file, whose aging has been in progress for some time. RETOnly is not an independent value: it should be considered a superset of RETExtra. (When searching the Admin Guide manual, search on "Retain Only Versions".)
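The RETOnly aging described above can be illustrated with a small shell helper (hypothetical, not a TSM command): given the days since the sole remaining version went Inactive, it reports whether a given RETOnly value would still retain it.

```shell
# Hypothetical helper illustrating the RETOnly clock described above:
# the sole remaining Inactive version is kept only while its age
# (days since it went Inactive) is less than RETOnly.
final_version_status() {
  days_inactive=$1
  retonly=$2
  if [ "$days_inactive" -lt "$retonly" ]; then
    echo "retained ($((retonly - days_inactive)) days remaining)"
  else
    echo "eligible for expiration"
  fi
}

# The example from the text: coding RETOnly=90 keeps the final version
# 45 days beyond what RETExtra=45 alone would allow.
final_version_status 60 90
final_version_status 95 90
```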
RETOnly, query 'Query COpygroup', look for "Retain Only Version". Retrieval performance In performing a Retrieve of Archive data, many of the same factors are at play as listed in "Restoral performance". Some specifics: - If CRC data is associated with the storage pool data, the CRC is validated during the retrieval, which adds some time. Retrieve The process of copying an archived copy of a file from ADSM storage to a local file system. You can retrieve a file to its original location or a new location. The archive copy in the storage pool is not affected. ADSMv2 did not archive directories, but files in subdirectories were recorded by their full path name, and so during retrieval any needed subdirectories will be recreated, with new timestamps. ADSMv3+ *does* archive directories. Files which had been pointed to by symbolic links will be recreated as files having the name of the symlink. Contrast with Archive, Restore, Recall. See: dsmc RETrieve Retrieve to tape, not disk The Retrieve function wants to write the de-Archived file to disk. But sometimes you simply don't have enough disk space to accommodate standard retrieval of very large files. Here is a Unix technique for instead retrieving the files, one at a time, and putting each directly to tape: In one window, do: mkfifo fifo; # Create Named Pipe, # called "fifo". dd if=fifo of=/dev/rmt1 # Tape drive # of your choice, tape in it. In another window, do: dsmc retrieve -REPlace=Yes -DEscription="___" ArchivedFilename fifo This will retrieve the desired archived file and, instead of retrieving it to its natural name, will direct it to "fifo". The "-REPlace=Yes" will quell the retrieve's fear of replacing the file which, as a FIFO type special file, will instead result in the data being sent to whatever is reading the named pipe, which in this case is the 'dd' command, which passes it to tape.
When the retrieval ends, the 'dd' command will end and the file's data will be on that tape. Record on the tape's external label the identity of the data written to the tape. To later extract the data from the tape, again use the 'dd' command, specifying the chosen tape drive via "if" and an output file via "of". Since this is plain data on a non-labeled tape, an operating system other than Unix should be able to get the data from the tape just as easily. Note that the inverse is not possible: you cannot have a FIFO as input to a dsmc Archive operation. (TSM will detect the named object as being a special file and archive it as such, which is to say send its description to the server, rather than try to read it as a file.) Retrieve, handling of existent file Use the REPlace option in the Client User Options file (dsm.opt) on the client to specify handling. Retry Conventionally refers to retrying a backup operation, for one of the following reasons:
1. The file is in use and, per the Shared definitions in the COpygroup definition, the operation is to be retried. In the dsmerror.log you may see an auxiliary message for this retry: " truncated while reading in Shared Static mode."
2. The file exceeds the capacity of a storage pool in the hierarchy such that the backup has to be retried with a storage pool lower in the hierarchy.
3. The backup is direct-to-tape and the tape is not mounted: the client will send the data to the server, which rejects the operation until the tape is mounted, and then the client resends the file(s).
4. In backing up an HSM file system, the file being backed up is a migrated file and so a mount of its storage pool volume is required.
The Retry inflates the summary statistic "Total number of bytes transferred" in the cases where the file is actually re-sent to the server.
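The named-pipe plumbing behind the retrieve-to-tape technique above can be demonstrated in miniature with ordinary files - a sketch in which a background reader stands in for the 'dd ... of=/dev/rmt1' step and a plain write stands in for the dsmc retrieve:

```python
# Demonstrates the FIFO hand-off used in the retrieve-to-tape recipe:
# whatever the "retrieve" writes into the named pipe is drained by the
# reader on the other end (here, plain Python instead of dsmc and dd).
import os
import tempfile
import threading

def fifo_handoff(payload: bytes) -> bytes:
    workdir = tempfile.mkdtemp()
    fifo = os.path.join(workdir, "fifo")
    os.mkfifo(fifo)                      # same as the 'mkfifo fifo' step
    received = []

    def reader():                        # stands in for 'dd if=fifo ...'
        with open(fifo, "rb") as f:
            received.append(f.read())

    t = threading.Thread(target=reader)
    t.start()
    with open(fifo, "wb") as f:          # stands in for 'dsmc retrieve'
        f.write(payload)                 # data flows to the reader,
    t.join()                             # never landing on disk as a file
    os.unlink(fifo)
    os.rmdir(workdir)
    return received[0]
```

The point of the technique is exactly this property: the "retrieved" data never occupies file system space, because the reader consumes it as it arrives.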
See also: Changed Retry # 1 In a backup session client log, indicates that the file has been found to have changed as it was being backed up (you will see a preceding "Normal File--> ...Changed" entry), and that per the CHAngingretries client option, the backup of the file is being retried. The dsmerror.log will typically have a corresponding entry like " truncated while reading in Shared Static mode.". Retry drive access See: DRIVEACQUIRERETRY RETRYPeriod Client System Options file (dsm.sys) option to specify the number of minutes you want the client scheduler to wait between attempts to process a scheduled command that fails, or between unsuccessful attempts to report results to the server. Default: 20 minutes Return codes (status codes) In product releases prior to 5.1, there were no return codes that customers could test from the command line client. Per IBM then: "The return code from the execution of any of the ADSM executables (except the ADSM API) cannot be relied upon, and is not consistent and is therefore not documented. We do log errors in the error log and the schedule log, and these are what you should rely upon." As of 5.1, however, reliable, documented return codes are available, as per the B/A client manual "Return codes from the command line interface". The return code is based upon the severity letter at the end of the 'ANSnnnn_' message labels: I: 0, W: 8, E: 12. RC 4 indicates skipped files (not "failure"). RC 12 may occur if the client nodename and/or IP address are different from last session time. Ref: swg21114982 ('HELP QUERY EVENT' will also explain the return code values.) You cannot configure which messages will generate which return code. API return codes are documented in the manual "Using the Application Program Interface" (SH26-4123), and in the TSM Messages manual. Return codes, Windows Are documented in the WINERROR.H file. RETVer Archive Copy Group attribute, specifying how long to keep an archive copy.
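The 5.1+ severity-letter scheme described above lends itself to a trivial lookup - a hypothetical helper for a log-scanning script, not part of any TSM tool:

```python
# Maps the severity letter ending an ANSnnnnX message label to the
# documented 5.1+ command-line client return codes. RC 4 (skipped
# files) is set by the client itself, not derived from any message.
SEVERITY_RC = {"I": 0, "W": 8, "E": 12}

def rc_for_label(label: str) -> int:
    """e.g. a message label ending in 'I' corresponds to RC 0."""
    return SEVERITY_RC[label[-1]]
```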
REUsedelay STGpool option which says how many days must elapse after all files have been deleted from a volume before the volume can be reused. The REUsedelay is designed to prevent a sequence of events like the following:
  1. ADSM database is backed up
  2. Reclamation moves contents of tape A to tape B
  3. Tape A is rewritten with new files
  4. ADSM database suffers failure
  5. ADSM database is restored from backup mentioned above
After this sequence of events the db will have certain files recorded as being on tape A even though the files have actually been overwritten. Avoiding this situation calls for a REUsedelay value which matches the retention period for backups of the ADSM database (typically from a few days to a couple weeks). So, no useful purpose is served by setting REUsedelay to a value dramatically larger than the retention period for database backups. A volume subject to REUsedelay will show a Status of "Pending". Server internals will take care of finally deleting the pending volume from the stgpool when its time is up. This examination is believed to be in *SM's internal hourly process. Messages: ANR1342I, then ANR1341I when the deletion actually occurs, that many days later. Default: 0 (days). See also: Reclamation REUsedelay, query 'Query STGpool PoolName Format=Detailed' for "Delay Period for Volume Reuse". REUsedelay, thwart To return a volume to the Scratch pool before the REUsedelay expires, just do 'DELete Volume ______'. (Note that 'UPDate Volume' won't do it.) REVoke AUTHority ADSM server command to revoke one or more privilege classes from an administrator. Syntax: 'REVoke AUTHority Adm_Name [CLasses=SYstem|Policy|STorage| Operator|Analyst] [DOmains=domain1[,domain2...]] [STGpools=pool1[,pool2...]]' Also: GRant AUTHority, Query ADmin RIM DBMS Interface Module. Ref: Redbook "Using Databases with Tivoli Applications and RIM" (SG24-5112) RMAN The Oracle 8 Recovery Manager (backup/restore utility), to back up an Oracle database to tape, unto itself.
Ships with all versions of Oracle 8. Replaced EBU from Oracle 7. TSM (ADSM ConnectAgent; TSM Data Protection) provides an interface between RMAN and *SM to allow backups straight to your *SM Server. Each backup has a unique filespace name based upon the backup timestamp. In Solaris: RMAN looks for a library named libobk.so which is installed when you install TDPO. TDPO uses the TSM API to connect to the TSM server to send/receive data. RMAN uses backuppiece names to back up its data, which basically means that DP for Oracle only receives a logical name related to the data. For this, DP for Oracle has to virtualize the filespace name and highlevel name on the TSM Server. By default the backuppieces are stored under the name \adsmorc\orcnt\ where backuppiece is the name that Oracle associates with the backed up data. You can seek the objects on the TSM server by using Query FIlespace. Be aware that RMAN is not very robust in reporting errors from initialization problems. RMM Removable Media Manager; an IBM tape management system. RMSS IBM: Removable Media Storage Systems See also: SSD RMSS device driver rmt*.smc See: /dev/rmt_.smc Roll-off Another term for Expiration, referring to file objects aging out and going away. Rollforward See: Set LOGMode RPFILE DRM Recovery Plan File object volume type. See: DELete VOLHistory; EXPIre Inventory; Query RPFContent; Query RPFile; Query VOLHistory; Set DRMRPFEXpiredays; Volume Type RSM Removable Storage Management: an industry-standard API. RSM prevents TSM from direct control of the library as far as media handling is concerned. TSM is not able to label, check in, or check out tape volumes; these operations must be performed by RSM through the Windows Management Console. See also: adsmrsmd.dll RTFM Old data processing colloquialism chiding the individual to Read The F*ing Manual. More genteelly translated as Read That Fine Manual.
Run "Sess State" value from 'Query SEssion' saying that the server is executing a client request (and not waiting to send data). See also: Communications Wait; Idle Wait; Media Wait; SendW; Start RUn Server command to run Scripts. Syntax: 'RUn Script_Name Substitution_Value(s) Preview=No|Yes Verbose=No|Yes' Run Time API (Runtime API) Refers to the TSM API runtime library. See also: Compile Time SAIT Sony Advanced Intelligent Tape, an enterprise tape storage technology, a follow-on to AIT. Utilizes a half-inch, single-reel cartridge and provides over twice the uncompressed capacity of the nearest linear half-inch tape drive. The drive is sized for a 5.25" bay. The first generation of SAIT tape drives (SAIT-1) provides up to 1.3 terabytes (TB) of compressed capacity (500 gigabytes (GB) uncompressed) and a transfer rate of up to 78 megabytes (MB) per second compressed (30 MB/sec uncompressed). Supported as of TSM 5.2.2. www.aittape.com/pdf/Sony_SAIT_FAQs.pdf Samba file serving complexities Samba is a way for a Unix system to function like a Windows Share server. By default, Samba simply delivers the files to the Unix file system with file names and contents in their native Windows code page. If you want the Samba server to provide file backup service as a Windows server would, you have a problem, in that TSM provides Unicode capability for Windows, but not Unix. Attempting to perform a 'dsmc i' on Unix for those files yields error "unrecognized symbols for current locale, skipping...". A way around this is to have all new files incoming to the Samba server get readable filenames, via smb.conf specs, like: client code page = 862 character set = ISO8859-8 (which are for Hebrew). A complication is that Samba's code page specs are singular, pertaining to all clients using the Samba instance. That is, all clients must use the same language for the scheme to work. 
To determine what code page a Windows or DOS client is using, open a DOS command prompt and type the command 'chcp'. This will report the code page number. The default for USA MS-DOS and Windows is page 437. The default for western European releases of the above operating systems is code page 850. SAN Storage Area Network, a somewhat loosely defined approach to isolating backup traffic to its own Fibre Channel network and providing peer-level storage servers. As of 2000, an immature technology with little standardization or interoperability. See http://www.computerworld.com/cwi/story/0,1199,NAV47_STO48238,00.html SAN Data Gateway A SAN device to which the 3590 drives in a 3494 library can be attached, for access by a host. If there is a question about the device addresses after hardware work, for example, the Gateway can re-scan its SCSI chains (after deleting them from TSM and the operating system, to be followed by reacquisition by the OS and TSM following the re-scan). SANergy Ref: TSM 3.7.3+4.1 Technical Guide redbook; TSM 4.2 Technical Guide redbook SARS Statistical Analysis and Reporting System, in 3590 tape technology. SARS analyzes and reports on tape drive and tape cartridge performance to help you determine whether the tape cartridge or the hardware in the tape drive is causing errors, determine if the tape media is degrading over time, and determine if the tape drive hardware is degrading over time. Manual: "Statistical Analysis and Reporting System User Guide", available at www.storage.ibm.com/hardsoft/tape/pubs/pubs3590.html SCHEDCMDUser TSM 4.2+ Unix (only) client option to specify the name of a valid user on the system where a scheduled command is executed. If this option is specified, the command is executed with the authorization of the specified user. Otherwise, it is executed with the scheduler authorization.
Default: Run schedule under root (UID 0) For Windows, you can use a different user for the TSM client scheduler as long as your user has the following rights:
  - Back up files and directories
  - Restore files and directories
  - Manage auditing and security logs
You can use 3 different tools:
  1) The setup wizard in the B/A client GUI, where you may choose an account other than the usual System.
  2) Using the dsmcutil command, you can use the /ntaccount:ntaccount and the /ntpassword:ntpassword parameters when creating the scheduler: dsmcutil install /name:"TSM Scheduler Service" /node:ALPHA1 /password:nodepw /autostart:yes /ntaccount:ntaccount /ntpassword:ntpassword
  3) If the service already exists, you can set the desired user via Services, Properties - Log On tab.
SCHEDCOMPLETEaction Macintosh client Preferences file option to specify what action to take after a schedule has been completed. Choices: Quit Tells the scheduler application to quit once a schedule has completed. SHUTdown Causes your Mac to be shut down once a schedule has completed. SCHEDLOGname Client System Options file (dsm.sys) option to specify the schedule log. Must be coded within the server stanza. Default: the installation directory and a file name of "dsmsched.log". Best if it is a normal place, like: /var/log/adsmclient/adsmclient.log Beware symbolic links in the path, else suffer ANS1194E. SCHEDLOGRetention Client System Options file (dsm.sys) option to specify the number of days to keep schedule log entries and whether to save the pruned entries. Syntax: SCHEDLOGRetention [N | days] [D | S] where: N Do not prune the log (default). days Number of days of log to keep. D Discard the pruned schedule log entries (the default). S Save the pruned schedule log entries to same-directory file dsmsched.pru Placement: Code within server stanza. Possibly define a low number to prune old entries, to keep the file size modest. 'SCHEDLOGRetention 2 s' causes pruned entries to be saved (s) to a dsmsched.pru file.
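The pruning behavior can be pictured with a small sketch - an illustrative model of keep-N-days pruning with the optional save, not the client's actual code:

```python
# Model of SCHEDLOGRetention-style pruning: entries younger than
# keep_days stay in the log; older ones are dropped, or set aside
# (the "S" choice) as the candidates for a companion dsmsched.pru file.
import datetime

def prune_log(entries, keep_days, today, save=False):
    """entries: list of (datetime.date, text) pairs."""
    kept = [(d, t) for d, t in entries if (today - d).days < keep_days]
    pruned = [(d, t) for d, t in entries if (today - d).days >= keep_days]
    return kept, (pruned if save else [])
```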
See also: ERRORLOGRetention; SCHEDLOGname SCHEDMODe (in client) Client System Options file (dsm.sys) option, to be coded in each server stanza, to specify which *SM schedule mode to use: POlling, for the client scheduler to query the *SM server for scheduled work at intervals prescribed by the QUERYSCHedperiod option; or PRompted, for the client scheduler to wait for the *SM server to contact the client when scheduled work needs to be done. This choice is available only with TCP/IP: all other communication methods use POlling. See firewall notes below. Pictorially, the tickling direction is: Polling: client --> server Prompted: client <-- server On Polling: With Polling, the server never has to contact the client: the client initiates all the communication. Despite the name, POlling does not continually interrupt the server (the QUERYSCHedperiod option limits this), and is what to use when randomizing schedule start time via the server 'Set RANDomize' command. Note that in polling, the server does not need the IP address or port number of the client. Polling is a good method to use with DHCP network access, with its varying IP addressing, as TSM never has to "remember" a client's network address that way. Note that the long intervals between polling make this method problematic for when schedules are added or revised on the server, particularly for those from DEFine CLIENTAction. On Prompted: The effect of this choice is that the client process sits dormant, and that at a scheduled time, the server will contact the client, to tickle it into initiating a session with the server. That is, it is not the case that the server unto itself conducts a session with the client, but rather that the client is merely given a wake-up call to conduct a conventional session with the server. Prompted mode does not ordinarily work across a firewall: use POlling instead, unless you employ SESSIONINITiation SERVEROnly. 
How the server knows the address and port number in order to reach the client: The basic approach is that when a client contacts the server, the client IP address and port number are "registered" and stored on the server. Alternately, the server may be explicitly told to use an IP address and port number per overriding node definitions in the server, per the HLAddress and LLAddress values. When it is time to prompt that client, the appropriate IP address and port numbers are used. If HLAddress/LLAddress are not used and the IP address changes for that client, or its option file is updated to specify a new TCPCLIENTPort number, then the client schedule process must be stopped and restarted in order for the new values to be "registered" with the server, for it to be able to subsequently contact the client. Prompted mode log entries: "Waiting to be contacted by the server." See also: IP addresses of clients; QUERYSCHedperiod; SESSIONINITiation; Set QUERYSCHedperiod; Set SCHEDMODes; TCPPort Ref: Tivoli Field Guide "Using the Tivoli Storage Manager Central Scheduler" SCHEDMODe (in client), query 'dsmc Query Option' in ADSM or 'dsmc show options' in TSM; SchedMode value. SCHEDMODes (in server) *SM server definition of the central scheduling modes which the server allows. Set via: 'Set SCHEDMODes [ANY|POlling|PRompted]' Query via: 'Query STatus', inspect "Scheduling Modes". Schedule A time-based action for the server (Administrative Schedule) or client (Client Schedule) to perform. An Administrative Schedule is used to perform things like migration, reclamation, database backup. A Client Schedule is used to perform one of three things: ADSM client functions such as backup/restore or archive/retrieve; or a host operating system command; or a macro (by its file name, but not the ADSM MACRO command). See "Schedule, Client" for detailed info.
Schedule, associate with a client 'DEFine ASSOCiation Domain_Name Schedule_Name ClientNode [,...]' Schedule, Administrative A server-defined schedule used to perform a server command. Controlled by 'DEFine SCHedule' to define the particulars of the schedule. Don't forget to code "ACTIVE=Yes". Note that administrative schedules are associated with the administrator who last defined or updated them: the schedule will not run if that administrator is no longer valid (removed, renamed, locked). Schedule, Administrative, one time DEFine SCHedule with PERUnits=Onetime. Schedule, Client A server-defined schedule used to perform one of three things: ADSM client functions such as backup/restore or archive/retrieve; or a client operating system command; or a macro (by its file name, but not the ADSM MACRO command). Controlled by 'DEFine SCHedule' to define the particulars of the schedule and then 'DEFine ASSOCiation' to associate the node with the schedule. Thereafter you have to invoke 'dsmc schedule' on the client for the Client Schedule to become active: it is a client-server mechanism and requires the participation of both parties. The minimum period between startup windows for a Client Schedule is 1 hour. A Client Schedule is kind of an ADSM substitute for using cron on the Unix client in order to perform the action. The Client Schedule start time will be randomized if 'Set RANDomize' is active in the server. See also: DEFine CLIENTAction; DEFine SCHedule; SET CLIENTACTDuration; Weekdays schedule, change the days Schedule, Client, Archive type One awkwardness with scheduling Archive operations via client schedules is the Description field: defined with the OPTions keyword, it becomes an unvarying value, which defeats the selectability that the Description field is for. The only recourse seems to be to omit it, which causes the archive date to be stored instead, like "Archive Date: 07/11/01".
Multiple archives per day will not be unique, but archives on separate days will. Schedule, Client, one time DEFine SCHedule with PERUnits=Onetime, or use 'DEFine CLIENTAction' Schedule, define See: DEFine SCHedule Schedule, define to AIX SRC 'mkssys -s adsm -p /usr/lpp/adsm/bin/dsmc -u 0 -a "sched -q -pas=foobar" -O -S -n 15 -f 9' then you can start it by calling "startsrc -s adsm". Schedule, dissociate from client 'DELete ASSOCiation DomainName SchedName NodeName[, Nodename]' Schedule, interval Defined via the PERiod parameter in 'DEFine SCHedule', in the server. See also: QUERYSCHedperiod Schedule, missed At the end of the start duration for a given schedule, the schedule manager looks for nodes associated with the schedule which never "started" (probably caused by the client scheduler not being active at the known IP address). These get marked as "missed". At the same time that this "check" is performed the schedule manager also checks for nodes which are in a "started" or "re-started" state. For these nodes, there is a check done to determine if there is an active session for the node/schedule combination. If there is no session (most likely caused by some sort of timeout) then the schedule is marked as "failed" in the server schedule event table. Here is the "catch": Although the client may reconnect after this time and complete the activity, the event table will NOT be updated to note this. This case is what most administrators might be seeing. There has to be some sort of garbage cleanup for clients that never do re-connect. If you see a lot of this, you should consider updating your IDLETimeout and COMMTimeout periods to longer values. Also consider a longer duration for the schedule. While the duration is used for a start period and not the time the scheduled activity must complete in, the end of the duration is used as a sanity check for prompted sessions that have "disappeared".
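The end-of-window bookkeeping described above amounts to a small state decision - an illustrative model of the event states, not actual server code:

```python
# Model of the schedule manager's end-of-start-window check:
# a node that never started is Missed; one that started but has no
# live session is Failed; later completion does NOT update the table.
def event_state(started, has_active_session, completed):
    if completed:
        return "Completed"
    if not started:
        return "Missed"
    return "Started" if has_active_session else "Failed"
```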
Missed schedules are often caused by wrong or expired passwords, or an outdated MAXSessions server option value. Msgs: ANR2571W et al See also: Missed Schedule, query from client 'dsmc Query Schedule'. Shows schedule name, description, type, next execution, etc. Schedule, randomize starts See: Set RANDomize Schedule, run command after Use the POSTSchedulecmd Client System Options file option to specify the command to be run. Schedule, run command before Use the PRESchedulecmd Client System Options file option to specify the command to be run. Schedule Randomization Percentage Output field in 'Query STatus' report. See 'Set RANDomize' for details. Schedule Log, prevent creation That log is controlled by the SCHEDLOGName option. If running Unix, you can define the name as /dev/null to avoid creating a log file. Schedule Log name The schedule log's default name, as it resides in the standard ADSM directory, is dsmsched.log. Can be changed via the SCHEDLOGname Client System Options file (dsm.sys) option. Query via 'dsmc q o' and look for SchedLogName. Beware symbolic links in the path, else suffer ANS1194E. See: SCHEDLOGname Schedule log name, query ADSM: 'dsmc Query Options' TSM: 'dsmc SHOW Options' look for "SchedLogName". Schedule log name, set Controlled via the SCHEDLOGname Client System Options file (dsm.sys) option (q.v.). Schedule Log pruning Messages: ANS1483I, ANS1485E Schedule Randomization Percentage 'Query STatus', look for "Schedule Randomization Percentage" Schedule retry period Controlled via the RETRYPeriod Client System Options file (dsm.sys) option (q.v.). Schedule Service Windows: Employs the NT 'at' command to schedule commands and programs to be run at certain times. In NT4: Go into My Computer; select Scheduled Tasks; open Add Scheduled Task; select program to be run. Note that this just runs the TSM schedule command: you additionally need to define a client schedule in the TSM server.
Alternative: Specify the 'dsmc schedule' command in your Startup folder. Beginning with TSM 4.1 and the use of Microsoft Installer, the Schedule Service is not automatically configured at package installation time: configure via dsmcutil or run the setup wizards from the Backup/Archive GUI. See also: PRENschedulecmd; PRESchedulecmd Scheduled commands Their output cannot be redirected: it must go to the Activity Log. Scheduled events, start and stop times, actual 'Query EVent * * Format=Detailed' will reveal. If the events would all be backups, you could also determine by: 'Query FIlespace [NodeName] [FilespaceName] Format=Detailed' Scheduler, client See also: CAD; MANAGEDServices Scheduler, client, looping Assure that dsmerror.log and dsmsched.log are Excluded from backups. Scheduler, client, Windows, restart automatically Settings -> Control Panel -> Administrative Tools -> Services : Select the service, open its properties, then adjust Recovery as desired. Scheduler, client, start You run the client program, telling it to run in Schedule mode, basically: /usr/lpp/adsm/bin/dsmc schedule Note that the client options files are read only when the dsmc program starts: changes made to the files after that point will not be observed by the program. You have to restart dsmc for such file changes to be picked up. In contrast, the client option set in the server is handed to the client scheduler each time it runs a schedule, and so the scheduler does not have to be restarted when cloptset changes are made. Ref: Installing the Clients Scheduler, client, start automatically Unix: Add line to the client /etc/inittab file to start it at boot time. For AIX: adsm::once:/usr/lpp/adsm/bin/dsmc sched > /dev/null 2>&1 # ADSM Scheduler Windows: Make a shortcut to the scheduler EXE program and put the shortcut into the Startup folder: this causes the scheduler to start whenever a person logs on. Ref: Installing the Clients.
Scheduler, client, start automatically in OS/2 Add to startup.cmd: 'start "Adsm Scheduler" c:\adsm\dsmc schedule /password=actualpassword'. Add "/min" after the word "start" to have it run in a minimized window. Scheduler, client, start manually Under bsh: 'dsmc schedule > /dev/null 2>&1 < /dev/null &' or use nohup: 'nohup dsmc schedule > /dev/null 2>&1 < /dev/null &' By redirecting both Stdout and Stderr you avoid a SIGTTOU condition ("background write attempted from control terminal"); and forcing a null input you avoid situations where the command hangs awaiting input. But if the command may be trying to tell you that something is wrong (as when your client password is expired), and you are suppressing that information, then you will not know what is going on. It is healthier to direct Stdout and Stderr to a log file. On Unix you could alternately do: 'echo "/usr/lpp/adsm/bin/dsmc sched -quiet" | at now' At least do 'dsmc q o' under ADSM or 'dsmc show options' under TSM to check your options, if not invoke 'dsmc schedule' out in the open to capture any messages, then cancel it. Interesting note: If you start the scheduler simply as 'dsmc schedule', it displays a novel countdown timer, at least when SCHEDMODe PRompted is in effect. You may not want to leave a superuser terminal session sitting around like this, but it can be a valuable way to help narrow down a scheduler problem. See also: dsmc Scheduler, find in Windows (NT) regedit " adsm scheduler " Scheduler, max retries Specify via the MAXCMDRetries option in the Client Systems Options file (dsm.sys). Default: 2 Scheduler, max sessions 'Set MAXSCHedsessions %sched' Scheduler, number of times retry cmds 'Set MAXCMDRetries [N]' Scheduler, windows, not installed TSM4 does not install the Scheduler as part of the client install. You can use the dsmcutil program to install it, or do it from the GUI. Scheduler "not working" Things to look for: - Is your node actually registered on the server?
If so, has a LOCK Node been done on it, or a global DISAble SESSions been done on the server (msg ANR2097I)? For that matter, is the server running?
- Are you starting the scheduler process on the client as superuser?
- In Unix, remember that the scheduler process is a background process, and so it behooves you to redirect Stdin, Stdout, and Stderr. (See: Scheduler, client, start ...)
- In Unix, beware having "dsmc sched" in /etc/inittab with 'respawn', as the dsmc process may respawn itself, and init may also respawn it, resulting in port contention. Consider using dsmcad instead.
- If using PASSWORDAccess Generate, did you perform the required initial superuser session to plant the client password on the client? Did the password expiration period as defined in REGister Node or Set PASSExp run out?
- If the PRESchedulecmd returns a non-zero return code, the scheduled event will not run.
- Is the scheduler process actually present? If present, is the process runnable? (In Unix, a 'kill -STOP' prevents it from running.)
- Has a schedule been defined on the server, and has a DEFine ASSOCiation been done to have your node perform it?
- Is the server reachable from your client, and vice versa (network, firewall issues).
- The client schedule type - polling or prompted - will dictate the direction in which to pursue analysis.
- Be sure to check client dsmerror.log files for indications.
- You might also check for lingering client sessions, which may exhaust your eligible license count.
- For problem isolation, consider running it as 'dsmc SCHedule', leaving the superuser terminal session in a foreground state like this for a day or so (in a physically secure room).
- To debug an apparent TSM server failure to schedule, define a client schedule that runs every hour, with ACTion=Command and OBJects specifying a client command which will simply log the scheduled invocation, such as the Unix command 'date >> /var/log/debug'.
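The last item above suggests a do-nothing scheduled command that merely records each invocation. The probe can be any small program; a Python stand-in for 'date >> /var/log/debug' (the log path here is just a placeholder) might be:

```python
# Scheduled-command probe: append a timestamp on every invocation so
# you can verify that the server is actually firing the schedule.
import datetime

def log_invocation(path):
    with open(path, "a") as f:
        f.write(datetime.datetime.now().isoformat() + "\n")
```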
Scheduler Service See: Schedule Service Schedules, administrative, list Via server commands: 'Query SCHedule Type=Administrative' - or - 'SELECT * FROM ADMIN_SCHEDULES' Schedules, client, list Via server commands: 'Query SCHedule' - or - 'SELECT * FROM CLIENT_SCHEDULES' Schedules, pending TSM server: 'SHOW PENDING' Schedules in effect 'Query ASSOCiation [DomainName [SchedName]]' Scheduling Mode A mode that determines whether your client node queries a *SM server for scheduled work (client-polling) or waits to be contacted by the server when it is time to perform scheduled services (server-prompted). If using TCP/IP, best to use the "server prompted" scheduling mode. The client options file will have to have an option coded that says SCHEDMODe PRompted. The default mode of scheduling is "client polling". Scheduling Modes See: SCHEDMODes; Set SCHEDMODes Scout daemon The dsmscoutd HSM process. See: dsmscoutd Scraper Device that Magstar hardware engineering added to new 3590 drives in 1999, to attempt to remove dirt from the tapes by staying in contact with the tape as it moved by. Ended up being discontinued because friction heat would distort the tape's plastic base, and the scraper itself would become a source of dirt as it built up on the scraper. Scratch See: MAXSCRatch Scratch, make tape a scratch Via ADSM command: 'UPDate LIBVolume LibName VolName STATus=SCRatch' Via Unix command: 'mtlib -l /dev/lmcp0 -vC -V VolName -t 12e' This is just a 3494 Library Manager database change: ADSM does not see it, and it will not be reflected in 'Query LIBVolume' output. SCRATCH category, change tape to Via Unix command: 'mtlib -l /dev/lmcp0 -vC -V VolName -t 12e' which may be done if a tape already prepared via the ADSM 'CHECKIn' command somehow gets a wrong category, such as INSERT. If tape not previously prepared via the ADSM 'CHECKIn' command, you should do that, which also prepares the tape label. 
SCRATCH category code 'Query LIBRary' reveals the decimal category code number. Scratch tape Term used to refer to a tape available for general writing for a storage pool. The number of scratch tapes eligible for a storage pool is specified via: 'DEFine STGpool MAXSCRatch=NNN' where the default is 0, with the expectation then being that you would dedicate volumes to the pool via 'DEFine Volume STGpool VolName'. If scratch volumes are used, they are automatically deleted from the storage pool when they become empty. Scratch tape, 3490, add to 3494 library containing 3490 and 3590 tape drives: 'CHECKIn LIBVolume LibName VolName STATus=SCRatch [CHECKLabel=no] [SWAP=yes] [MOUNTWait=Nmins] [SEARCH=yes]' Note that this involves a tape mount. Newly purchased tapes should have been internally labeled by the vendor, so there should be no need to run the 'dsmlabel' utility. Scratch tape, 3590, add to 3494 library containing 3490 and 3590 tape drives: 'CHECKIn LIBVolume LibName VolName STATus=SCRatch [CHECKLabel=no] [SWAP=yes] [MOUNTWait=Nmins] [SEARCH=yes] [DEVType=3590]' Note that this involves a tape mount. Newly purchased tapes should have been internally labeled by the vendor, so there should be no need to run the 'dsmlabel' utility. Scratch tape, 3590, add to 3494 library containing only 3590 tape drives: 'CHECKIn LIBVolume LibName VolName STATus=SCRatch DEVType=3590 [CHECKLabel=no] [SWAP=yes] [MOUNTWait=Nmins] [SEARCH=yes]' Note that this involves a tape mount. Scratch tape, add to library (as in 3494): 'CHECKIn LIBVolume LibName VolName STATus=SCRatch [CHECKLabel=no] [SWAP=yes] [MOUNTWait=Nmins] [SEARCH=yes] [DEVType=3590]' Note that this involves a tape mount. Newly purchased tapes should have been internally labeled by the vendor, so there should be no need to run the 'dsmlabel' utility.
Scratch tapes, list See: Scratch volumes, list Scratch Volume A volume which is checked into a library, and is assigned a library Category Code which makes it eligible for dynamic use in a given server storage pool. After that volume's contents have evaporated, the volume leaves the storage pool and returns to eligible status. Contrast this with volumes which are Defined into a storage pool and stay there. Ref: Admin Guide, "Scratch Volumes Versus Defined Volumes". Also, an element of Query Volume command output. Its value is Yes if the volume came from a scratch pool (and will return there when the volume empties). See also: Defined Volume Scratch volume added to stgpool Msg: ANR1340I Scratch volume ______ now defined in storage pool ________. This is when *SM itself adds the volume to the storage pool, when it needs more writable space. Corollary msg: ANR1341I Does not correspond to adding a volume to a storage pool via DEFine Volume, whose message is ANR2206I. SCRATCH volumes, count of in 3494 (3590 tapes, default ADSM SCRATCH category code x'12E') Via Unix command: 'mtlib -l /dev/lmcp0 -vqK -s 12E' Scratch volumes, list In server: SELECT VOLUME_NAME, STATUS FROM LIBVOLUMES WHERE STATUS='SCRATCH' In Unix: mtlib -l /dev/lmcp0 -qC -s ___ where the scratch category must be supplied, in hex SCRATCHCATegory Operand of 'DEFine LIBRary' server command, to specify the decimal category number for scratch volumes in the repository. Default value: 301. 3494: As the model number implies, the 3494 was introduced to contain 3490 tapes; 3590 support is an extension of that origin. Thus, the scratch category number you define is for 3490 tapes, though they are essentially non-existent today. 3590 scratches are implied to be one number higher: SCRATCHCATegory+1. So you must make allowances to avoid conflicts, particularly with the Private category number.
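The category arithmetic above can be made concrete in a library definition. A hedged sketch, in which the library name, device name, and category numbers are all hypothetical - the point is only that with SCRATCHCATegory=400 the 3590 scratch category is implicitly 401, so the Private category is placed clear of both:

```
/* 3490 scratch = 400; 3590 scratch implied = 401;            */
/* so the Private category is set to 402 to avoid collision.  */
DEFine LIBRary OUR3494 LIBType=349X DEVIce=/dev/lmcp0 -
   SCRATCHCATegory=400 PRIVATECATegory=402
```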
Scratches, list SELECT LIBVOLUMES.VOLUME_NAME, - LIBVOLUMES.STATUS, - LIBVOLUMES.LIBRARY_NAME FROM - LIBVOLUMES LIBVOLUMES WHERE - (LIBVOLUMES.STATUS='Scratch') Scratches, number left SELECT COUNT(LIBVOLUMES.VOLUME_NAME) - AS "Scratch volumes" FROM LIBVOLUMES - WHERE (LIBVOLUMES.STATUS='Scratch') Or, with a 3494 you can externally query from the opsys command line, based upon the category code of your scratches: 'mtlib -l /dev/lmcp0 -qC -s ScratchCode' and then count the number of lines. Scripts See: Server Scripts scripts.smp See: SQL samples SCROLLLines Client System Options file (dsm.sys) option to specify the number of lines you want to appear at one time when ADSM displays lists of information on screen. Default: 20 lines SCROLLPrompt Client User Options file (dsm.opt) option to specify whether you want long displays to stop and prompt you to continue, or to just pump out a whole response without stopping. Default: No. Specify 'No' if using the Webshell, which needs to process ADSM command output and balks at such prompts. SCRTCH MVS, OS/390 generic designation for a Scratch volume. SCSI IDs in use, list AIX cmd: 'lsdev -C -s scsi -H'. SCSI Library A library lacking an internal supervisor such that the TSM server must physically manage its actions, and must keep track of volume locations. Current SCSI libraries include: 3570; 3575; 3581; 3583; 3584. For SCSI libraries, the server maintains certain information to detect library firmware bugs. If the customer expands or otherwise changes the configuration of the library, there is a procedure the customer must follow; otherwise the internal checks of the server will prevent the initialization of the library. See also: Element; SHow LIBINV SDG SAN Data Gateway: for connecting a host with Fibre Channel to tape drives with Ultra SCSI connections, the SDG bridges the two connection technologies.
Ref: TSM 5.1 Technical Guide redbook See also: Server-free SECOND(timestamp) SQL function to return the seconds value from a timestamp. See also: HOUR(), MINUTE() Secondary Server Attachment You can obtain a license for attaching a second server to a Library. It is not a functional thing, but rather just a marketing thing to reduce the cost of a second ADSM license for another server. If so licensed, you get the following message at server startup: ANR2859I Server is licensed for Secondary Server Attachment. Ref: Administrator's Guide. Shows up in 'Query LICense' output. SECONDS See: DAYS Security in *SM First, *SM was not designed for physically insecure environments. Userid/Password: Rather rudimentary, in that there is no distinction between upper and lower case. But it uses a "double-handshake" authentication process that's pretty robust and relatively tough to crack. Client data: Can be stored in encrypted file systems (EFS). Client-server communication: Can be encrypted. (See TSM 3.7.3 + 4.1 Technical Guide redbook) Tapes: They are in proprietary, undefined format, with no customer tools for directly interpreting them. See: Set INVALIDPwlimit; Set MINPwlength SEGMENT Column in SQL database CONTENTS table. See: Segment Number Segment Number For files that span sequential volumes, identifies the portion of the file that is on the given volume, as revealed via the Query CONtent server command or a SELECT * FROM CONTENTS. (For volumes in random-access storage pools, no value is displayed for this field.) See also: Aggregated; Query CONtent; Span volumes, files that, find Segmentation violation ("Segfault") Also known as Signal 11 (SIGSEGV). Program failure in Unix caused by a programming error: the program attempts to write to a region of memory to which it does not have access, as in writing past the end of an array due to failure to check bounds. You need to upgrade to a level of the program where the defect is fixed.
You may be able to temporarily avoid the failure if you can identify the circumstances under which it occurs and stay away from that scenario. The problem may occur during an incremental backup, where the Unix client is working a large list of Active files gotten from the server. In some cases, you can prevent the segfault by increasing the stack limit using the 'ulimit -s' command. If the server crashed, there may be a dsmserv.err file with some indications in it. See also: MEMORYEFficientbackup SELECT *SM command to perform an SQL Query of the TSM Database, introduced in ADSMv3. Syntax: SELECT [ALL | DISTINCT] column1[,column2] FROM table1[,table2] [WHERE "conditions"] [GROUP BY "column-list"] [HAVING "conditions"] [ORDER BY "column-list" [ASC | DESC] ] Note that this implementation of Select is primitive, with a major shortcoming being the absence of a LIMIT qualifier to keep the search from plowing through the whole table when, for example, only the first occurrence of a value is desired. This Select form also differs from common SQL in requiring the specification of FROM - which thus prevents use of Select in *SM to evaluate basic expressions, as you might do "SELECT 2+2" to compute 4, or do SELECT CURRENT_TIME to see that value. (You can neatly work within this requirement and get what you want, by using a trivial *SM table as the FROM value, as in: SELECT CURRENT_TIMESTAMP FROM LOG where table LOG serves as a placebo.) Note that the *SM database is not an SQL database per se: SQL Select was added on top of it to provide customers the ability to report information in a flexible manner. The SQL tables that you process via Select do not actually exist: they are effectively constructed as your Select runs (hence the TSM db work space margin requirement).
While less flexible, the pre-programmed server commands which report from the (actual) database are much faster in that they are optimized to go directly at the actual database format, and don't have to go through the artificial SQL interface. Note that various info is not available through the SQL interface - particularly that which is accessible via client queries where the data content is specific to the client operating environment (OS, file system, etc.). Generally speaking, if there is no (supported) TSM server command which reports certain information, there will be no SQL access to it, either. Impact: The Select command may require work space to service the query, which it takes from the TSM database itself - and so you need a decent amount of free space to do more complex Selects. The SQL functions can also be performed via the ODBC interface which is provided in Windows clients (only). Appendix A in the TSM Technical Guide redbook perpetually carries ODBC usage info. See also: Events table; ODBC; SQL ... SELECT, date/time Select ... \ WHERE DATE(DATE_TIME)='mm/dd/ccyy' SELECT, example of defining headers SELECT CLIENT_VERSION AS "C-Vers", - CLIENT_RELEASE AS "C-Rel", - CLIENT_LEVEL AS "C-Lvl", - CLIENT_SUBLEVEL AS "C-Sublvl", - PLATFORM_NAME AS "OS" , - COUNT(*) AS "Nr of Nodes" FROM NODES - GROUP BY - CLIENT_VERSION,CLIENT_RELEASE,- CLIENT_LEVEL,CLIENT_SUBLEVEL,- PLATFORM_NAME SELECT, example of pattern search SELECT * FROM ACTLOG WHERE MESSAGE LIKE '%%' SELECT, example using dates SELECT * FROM ACTLOG WHERE DATE_TIME \ >'1999-12-22 00:00:00.000000' AND DATE_TIME <'1999-12-23 00:00:00.000000' SELECT, exclusive case To report columns which are in one table but not in another, use the NOT IN operators. 
For example, to report TSM database backup volumes which have been checked out of the library: SELECT DATE_TIME AS - "Date_______Time___________",TYPE, - BACKUP_SERIES,VOLUME_NAME - FROM VOLHISTORY WHERE - (TYPE='BACKUPFULL' OR TYPE='BACKUPINCR') AND VOLUME_NAME NOT IN (SELECT VOLUME_NAME FROM LIBVOLUMES) SELECT, generate commands from See: SELECT, literal column output SELECT, literal column output You can cause literal text to appear in every row of a column, which is one way to generate lines containing commands which operate on various database "finds". The form is: 'Cmdname' AS " " ... where Cmdname will appear on every line. For example, here we generate Update Libvolume commands for scratches: SELECT 'UPDATE LIBV OUR_LIB' AS - " ", - VOLUME_NAME, ' STATUS=SCRATCH' FROM - LIBVOLUMES WHERE STATUS='Scratch' - > /tmp/select.output Inversely, you may employ a literal to occupy only the title of the first column of a report, to name the report - given that TSM's limited SQL excludes the ability to have a page title, as the TTITLE operator would do. Example: SELECT '' AS "Title" ... SELECT, restrict access See: QUERYAUTH SELECT, speed vs. client speed You will inevitably realize that the B/A client can obtain filespace and file information much faster than it can be obtained via the server Select command. The gist of the matter is that Select is a virtualized convenience for us server administrators to look at the data in the database, whereas the client "knows the inside scoop" and can more directly go after the data. Select is much more generalized, and entails more overhead. SELECT, terminate prematurely The SELECT may run for a ridiculously long time, and you want it gone rather than waiting for it to end. Entering 'C' to cancel is ineffectual because it merely waits for the operation to end. You need to do a CANcel SEssion from another dsmadmc invocation in order to get rid of it. This will terminate the SELECT, but not force you out of the original dsmadmc.
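The "literal column output" technique pairs naturally with a little Unix post-processing, to turn the redirected Select output into a runnable macro. A hedged sketch: the here-document below merely simulates a captured /tmp/select.output (in real use it would come from a redirected dsmadmc session), and the file names and 'UPDATE LIBV OUR_LIB' text follow the hypothetical example above.

```shell
# Simulate the captured Select output; a real capture would also
# include a session banner and column headers around the commands.
cat > /tmp/select.output <<'EOF'
Session established with server OUR_SRV: AIX-RS/6000

UPDATE LIBV OUR_LIB A00001  STATUS=SCRATCH
UPDATE LIBV OUR_LIB A00002  STATUS=SCRATCH
EOF

# Keep only the generated commands, discarding everything else.
grep '^UPDATE LIBV' /tmp/select.output > /tmp/fix_scratch.mac

# The result could then be run as: dsmadmc ... macro /tmp/fix_scratch.mac
cat /tmp/fix_scratch.mac
```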
SELECT, yesterday ...DAYS(CURRENT_DATE)-DAYS(DATE_TIME)=1 SELECT output, column width The width of a column is governed by its header; so you can use that to cause your columns to be widened to keep column content from wrapping across lines. You define a column header via the SQL "AS". SELECT output, columnar instead of keyword list Issuing Select (and Query) commands from the dsmadmc prompt may result in the report being in Keyword: Value sets instead of tabular, columnar output. This can be controlled via the explicit dsmadmc -DISPLaymode= option, but is also the implicit result of the combination of the number of database entry fields (columns) you choose to report, the column width of each, and the width of your window. *SM *wants to* display the results in tabular form, and is helped in doing so by reducing the number of fields reported and/or their column width (via the AS ____ construct). Widening your window will also help. (In an xterm window, you can aid this by the use of smaller fonts: hold down the Ctrl key and then press down the right mouse button, and from the VT Fonts list choose a smaller font.) You can demonstrate the adaptation by doing 'SELECT * FROM AUDITOCC' in a narrow window, which will result in Keyword: Value sets; then widen it to get tabular output. See also: dsmadmc; -DISPLaymode Selective Backup A function that allows users to back up objects (files and directories) from a client domain that are not excluded in the include-exclude list and that meet the requirement for serialization in the backup copy group of the management class assigned to each object. A selective backup of filenames will also result in their containing directory being backed up. Performed via the 'dsmc Selective' cmd. "Selective" backs up files regardless of whether they have changed since the last backup, and so could result in more backup copies of the file(s) than usual. In computer science terms, this is a "stateless" backup.
Note that the selective backups participate in your version limits. Note that a Selective backup does not back up empty directories, and it does not change the "Last Incr Date" as seen in 'dsmc Query Filespace', nor the backup dates in 'Query FIlespace' (because it is not an incremental backup). Rebinding: A Selective backup binds the backed up files to the new mgmtclass, but not the Inactive files: you must perform an unqualified Incremental backup to get the latter. Example: dsmc s -subdir=y FSname See also: dsmc Selective Selective Backup, more overhead than Archive Running a Selective Backup can be expected to entail more overhead than a comparable Archive operation, in that more complex retention policies are involved in Backup policies than in Archive. Remember that Archive retention is based purely upon time, whereas Backup involves both time and versions decisions. File expiration candidates processing based upon versions (number of same file) is performed during client Backups (in contrast to time-based retention rules, which are processed during a later, separate Expiration). The more versions you keep, the more work the server is distracted with at Backup time. Selective Backup fails on single file See: Archive fails on single file Selective migration HSM: Concerns copying user-selected files from a local file system to ADSM storage and replacing the files with stub files on the local file system. Is governed by the "SPACEMGTECH=AUTOmatic|SELective|NONE" operand of MGmtclass. Contrast with threshold migration and demand migration. Selective recall The process of copying user-selected files from ADSM storage back to a local file system. Contrast with transparent recall. Syntax: 'dsmrecall [-recursive] [-detail] Name(s)' SELFTUNEBUFpoolsize TSM server option to specify whether TSM can automatically tune the database buffer pool size. If you specify YES, TSM resets the buffer cache hit statistics at the start of expiration processing.
After expiration completes, if cache hit statistics are less than 98%, TSM increases the database buffer pool size to try to increase the cache hit percentage. The default is NO. SELFTUNETXNsize TSM server option to specify whether TSM can automatically change the values of the TXNGroupmax, MOVEBatchsize, and MOVESizethresh server options. TSM sets the TXNGroupmax option to optimize client-server throughput and sets the MOVEBatchsize and MOVESizethresh options to their maximum to optimize server throughput. Default: NO. SendW "Sess State" value from 'Query SEssion' saying that the server is waiting to send data to the client (waiting for data already sent to be delivered to the client node, as in waiting for the client to respond to the send). If you see the session continually in SendW state but the Wait Time is "0 S" and the Bytes Sent keeps increasing, then it is not the case that the session is stuck in SendW state. Rather, that is just the dominant state. See also: Communications Wait; Idle Wait; Media Wait; RecvW; Run Sense Codes, 3590 Refer to the "3590 Hardware Reference" manual. Sequential devices Tape is an obvious, physical example of a sequential access medium, in which data can only be appended after the position where data was last written to the tape (in-midst updating not possible). TSM also supports sequential device definition on disk, via the FILE device class. See also: FILE SERialization (backing up open files) A copy group attribute that specifies whether an object can be modified during a backup or archive operation, and what to do if it is. Specified by the SERialization parameter in the 'DEFine COpygroup' command. This parameter affects only upcoming operations: it has no effect upon data already stored on the server.
See: Changed; CHAngingretries; Dynamic; Fuzzy Backup; Shared Dynamic; Shared Static; Static SERVER Device type used for a special device class where the volumes are virtual (Virtual Volumes) and exist on another *SM server as archived files. The data which may be stored across servers can include DBBackup volumes. See also: FILE server A program that runs on a mainframe, workstation, or file server that provides shared services such as backup, archive, and space management to various other (often remote) programs called clients. Server, HSM, specify Specified on the MIgrateserver option in the Client System Options file (dsm.sys). Default: the server named on the DEFAULTServer option. Server, merge into another server As of TSM 4.1, there is no way to merge one server into another server, as you might want to do in transferring a retiring server system's data and library to another server. Your only options are: - Export from the old server and Import into the other; - Run the old server as a parallel instance on the same platform where the other server lives, via database restore. (Doing this without Export-Import requires that both servers be of the same operating system type.) Server, move to another architecture This will most likely have to be performed via Export/Import, including both the server proper and all the client data, rather than simply moving the "server" portion of things and having the new server architecture use the old server data tapes as-is. However, you *might* be able to accomplish the move via Restore DB: one customer reports successfully moving a server from AIX to Solaris via this method. Note that this is a very gray area, completely unspecified by Tivoli. One could conceivably run into problems even when moving between like architecture machines, such as from 32-bit Solaris to 64-bit Solaris.
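For the Export/Import route described above, the outline is roughly as follows - a hedged sketch only, in which the device class and volume names are hypothetical, and which leaves aside the physical transport of the export tapes and the sizing of the new server:

```
/* On the retiring server (exports definitions plus, with   */
/* FILEData=ALL, the stored client data):                   */
EXPort SERVER FILEData=ALL DEVclass=EXPTAPE Scratch=Yes
/* Move the export volumes to the new server, then there:   */
IMport SERVER FILEData=ALL DEVclass=EXPTAPE VOLumenames=EXP001
```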
Server, move to same architecture You can rather easily move the TSM server from one system to another, of the same architecture, as when upgrading to a more powerful server. Essentially, all you have to do is move or copy the current TSM server database, recovery log, and storage pool volumes, as is, retaining the same path names. You can do 'DSMSERV RESTORE DB' across systems of the same architecture. Server, prevent all access The 'COMMmethod NONE' server option will prevent all communication with the server. Server, prevent client access Temporarily changing the server options file TCPPort value to a hoked value will prevent client access - the clients utilize a value coded on their client options file TCPPort option (default: 1500), which would prevent them from talking to the server when its value is different. Server, recover to new disk space You may have to recover the *SM server after the loss of the disks upon which its Database and Recovery Log resided. If you keep good records, you know how much disk space was involved, in order to recreate the space at the operating system level. But if you don't know the sizes, you can allocate a larger area: the 'dsmserv restore' command will decrease the DB and Recovery Log to their original sizes, and whatever is left over will become the Maximum Extension. Server, restarting after killing, things to watch out for After a server is restarted, do 'Query DBVolume' and 'Query LIBVolume', in that a mirror copy could have become de-synced. Server, run as non-root (in Unix) The *SM server is conventionally run by user root, to be able to do anything it needs to. However, it is possible to run the *SM server under other than root...
Much of the issue of doing so is in the ownership of the server directory and its contained files: adsmserv.licenses (ADSM, not TSM) adsmserv.lock (ADSM, not TSM) dsmaccnt.log dsmerror.log dsmlicense dsmserv.dsk dsmserv.err dsmserv.opt nodelock rc.adsmserv Likewise, adjust ownership/permissions of dbvols, logvols and diskpool volumes. You must also assure that the username under which the server is to run has high enough Unix Resource Limits (as in AIX /etc/security/limits), not artificially lower-limited by the shell under which the server is started. Not accounting for this can result in BUF087 failure of the server (msg ANR7838S). Downsides: Cannot use Shared Memory. Server, select from client In the Unix environment, a client may choose the server to contact, by using the SErvername in the Client User Options file, or by doing: 'dsm -SErvername=StanzaName' 'dsmc incremental -SErvername=StanzaName' to identify the stanza in dsm.sys which points to the server by network and port addresses. Server, shut down 'HALT' command, after doing a 'DISAble' to prevent new sessions, 'Query SEssions' to see what's active, and 'CANcel SEssion' if you can't wait for running stuff to finish. You should also 'DISMount' any mounted tapes because the 'HALT' does not dismount them. Note that this does not shut down HSM processes such as dsmmonitord and dsmrecalld, as these are file-system oriented and need to remain active. In Unix, it is conventional to shut down applications in /etc/rc.shutdown, wherein you could code a dsmadmc invocation of HALT. Note that Unix TSM servers conventionally respond to SIGTERM to terminate cleanly. See also: HALT Server, split? When the load on one TSM server becomes excessive, it's time to split out to another server. Decision factors: - Expire Inventory remains a single-process task, and may run far too long to be acceptable. - BAckup DB takes too long.
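The shutdown sequence in the "Server, shut down" entry can be collected into a small dsmadmc macro - a hedged sketch only, since the CANcel SEssion and DISMount steps depend upon what Query SEssion and Query MOunt actually show at the time:

```
/* Quiesce and stop the server                                   */
DISAble              /* refuse new client sessions               */
Query SEssion        /* see what is still active                 */
/* CANcel SEssion SessNum and DISMount Volume VolName as needed  */
HALT                 /* stop the server (does not dismount)      */
```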
Server, start automatically Conventionally, the installation of the product installs a server start-up method in a place standard for the given operating system, such as /etc/inittab for AIX: autosrvr:2:once: /usr/lpp/adsmserv/bin/rc.adsmserv >>/var/log/adsmserv/adsmserv.log 2>&1 Server, start manually The following steps start the ADSM server proper: - Make sure that the disks containing the TSM db, Recovery Log, and storage pools are varied online to the operating system. - In Unix, make sure your Resource Limits - particularly filesize - are sufficient to handle the CPU, memory, and file sizes the server will need. - Now invoke the server: In Unix: 'cd /usr/lpp/adsmserv/bin' './dsmserv quiet' (run in background) - or - './dsmserv' (run interactively) or alternately do: '/usr/lpp/adsmserv/bin/rc.adsmserv &' Do *not* do './dsmserv &', because without the "quiet" option it will be constipated, needing to output to the tty. Do 'Query DBVolume' and 'Query LIBVolume' after restart to assure that all mirrored copies are synced. If you use HSM, go start it as well. (See: HSM, start manually) Server, stop See: Server, shut down Server command line access 'dsmadmc ...' Server development site Is Tucson, AZ. Server directory (executables, license file, etc.) Named in the DSMSERV_DIR environment variable; defaults to: AIX: /usr/lpp/adsmserv/bin/ Sun: /opt/IBMadsm-s/bin/ If another directory is to be used, the environment variable must be set thus. Ref: Install manual Server disappeared, handling You find your host system up for some time, but your TSM server has disappeared. What should you do? First, try to determine why... - Look for the server process, to assure that it really has gone away. (If the process is present, see if it is in some way stopped, and what's causing it.) - Look at the last-modified dates of your recovery log, per file names in dsmserv.dsk, to get a sense of when it went away.
- Look for a core/dump file in the server directory, which certainly shows when it went away. - In Unix, you can look at the /var/adm/pacct files, via 'acctcom' or like command, to see when the dsm* processes went away. - In AIX, do 'errpt -a|more' and look for a record of the dsmserv process having failed. Look for any hardware errors (disk problems, etc.) that would have precipitated the TSM failure. - Check the file systems that the server uses to assure that they have not filled. - Your system should be set up to direct the output of the server start-up to a log file, which you can examine. Note that the real indications of the problem are trapped in the Activity Log, which you can't see until the server is restarted. Server file locations Are held within file: /usr/lpp/adsmserv/bin/dsmserv.dsk (See "dsmserv.dsk".) Server files Located in /usr/lpp/adsmserv/bin/ Server "hangs" First, check the obvious: inspect your process table to see if the server process is in a Stopped state: in Unix *maybe* someone did a 'kill -STOP' on it (use 'kill -CONT' to resume it). If not that, and if you have an automated tape library, you could perhaps see if a tape was mounted by the server and perhaps deduce what the server was doing. Also use 'netstat' and/or the public domain 'lsof' command to see what TCP/IP connections were active with the server. Check for datacomm hardware problems which may be causing TCP/IP connections to stop/hang and thus clog the server. Look for an unusually high packet rate: it is not impossible for someone to conduct a "denial of service" bombardment of the server port. See also: HALT; Server lockout Server installation date/time 'Query STatus', look for "Server Installation Date/Time". Server IP address The *SM server IP address is whatever it is... There is no server option for defining its address. Clients will point to the *SM server through their option TCPServeraddress. 
Note that some libraries communicate with the server over TCP/IP, and may have the server network address configured into them. If you change the server IP address, you will have to go around to all the clients to update their TCPServeraddress values. (That option obviously cannot be a server-based clientopt.) Don't forget to update your library, too, if needed. You may be able to avoid the chore of changing all the clients if it is possible for you to define a DNS CNAME or Virtual IP for your server which serves the old IP address, as well as the new, native one. Changing the server network address has no effect on storage pool data: your next client backup, to the new IP address, will be as incremental as ever. Server lockout, TCP/IP Connection Problem The server may be irrevocably hung if it is rejecting TCP/IP connections. If Unix, you might try using the client on the server system to access it, changing the client options file to specify COMMMethod SHAREDMEM to try getting in via that alternate communications method. See also: HALT Server looping, 'hung' client sessions If possible, do Query Session for the Sess State value: anything odd, or client hitting on server? Look for any peculiar client conditions which might have triggered it, like a client which was Win95 yesterday but is Linux today, or clients of differing versions hitting the server. Use operating system facilities to identify the looping process or thread, as ADSM dedicates processes or threads to specific resources, which may help pinpoint the problem. Server name Defaults to "ADSM". Server name, get 'Query STatus' Server name, set 'Set SERVername Some_Name' This sets the name which the server feeds back to the client when the client contacts the server by the network and port address contained in its dsm.sys stanza.
Changing this name does not affect the client's ability to find the server, because that is set in the Client System Options File by physical addressing; however, a client with "PASSWORDAccess Generate" has the server name stored with the encrypted password (stored in /etc/security/adsm/), so the client root will have to redo the password. Assigning arbitrary server names allows you to run multiple servers, or to uniquely identify servers on multiple systems. The ADSM "Test Drive" works this way. Server operating system type If you do a client-server command like 'dsmc q sch', the system type should show up in the "Session established with server" line. Server options, query 'Query OPTion' Server options file A text file specifying options for the ADSM server. Defaults to /usr/lpp/adsmserv/bin/dsmserv.opt . If another filename is to be used, the DSMSERV_CONFIG environment variable must be set thus, or specify on -o option of 'dsmserv' command. Changes in this options file are not recognized until the server is restarted. See also: SETOPT Ref: Install manual. Ref: Installing the Server... Server performance - Choose a fast-processor computer for your server system, preferably one with multiple CPUs, and possibly multiple I/O backplanes. - Employ fast interface cards in your server system, and do not mix fast and slow devices on one interface where speed will be governed by the slowest device on the chain (as is the case with SCSI). - Assure that your server system has an abundance of real memory, which is vital to the performance of any kind of server. - Do a 'Query DB Format=Detailed' and check the Cache Hit Pct. If it is less than 98%, add database buffers; in the server options file increase the BUFPoolsize value. See: BUFPoolsize The Cache Wait Pct (q.v.) value should always be zero.
- Do 'Query LOG Format=Detailed' and check that the Log Pool Pct Wait value is zero: if otherwise, something in your operating system environment or hardware configuration is hampering access. - If your server is running in a system where other things are running, realize that it can be impeded by the mix, particularly if it is assigned a priority (and, in Unix, a Nice value) which makes it the same or worse than other processes running in that system. - Investigate server options AUDITSTorage, MOVEBatchsize, MOVESizethresh, TXNGroupmax. - In AIX, check Threads performance factors. From TSM 4.1 README: "Possible performance degradation due to threading: On some systems, TSM for AIX may exhibit significant performance degradation due to TSM using user threads instead of kernel threads. This may be an AIX problem; however, to avoid the performance degradation you should set the following environment variables before you start the server: export AIXTHREAD_MNRATIO=1:1 export AIXTHREAD_SCOPE=S" - Where clients co-reside in the same system, use Shared Memory in Unix or Named Pipes in Windows. See also: MVS server performance Server PID 'SHow THReads' Server processes, number of See: Processes, server Server restart date/time 'Query STatus', look for "Server Restart Date/Time". Server script, cancel There has been no way to terminate a script as a whole, as TSM provides no "handle" for the script itself. However, you can program your script to include potential break points which will cause it to exit upon a condition which you can externally set. For example, you have a daily script called DAILY, and in it you code the test: Query SCRipt DAILY-CANCEL if (RC_OK) exit Now, to get the running script to cancel, you do simply: COPy SCRipt DAILY DAILY-CANCEL When the script finishes its current action and performs the test, it will find the "cancel" version of the script to exist and will exit, whereupon you can then DELete SCRipt DAILY-CANCEL.
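The DAILY/DAILY-CANCEL break-point technique might be set up as in this hedged sketch, where the two BAckup commands merely stand in for whatever real work the daily script does, and the pool and device class names are hypothetical:

```
DEFine SCRipt DAILY 'BAckup STGpool BACKUPPOOL COPYPOOL Wait=Yes' Line=1
UPDate SCRipt DAILY 'Query SCRipt DAILY-CANCEL' Line=6
UPDate SCRipt DAILY 'if (RC_OK) exit' Line=11
UPDate SCRipt DAILY 'BAckup DB DEVclass=TAPECLASS Type=Full' Line=16
/* To cancel a run:      COPy SCRipt DAILY DAILY-CANCEL   */
/* After it has exited:  DELete SCRipt DAILY-CANCEL       */
```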
Server script, delay There are occasions in server scripts where you need to introduce a delay between operations; but there is no "Sleep" command or the like. The most effective way, I have found, is to use the SHOW VOLUMEUSAGE command, which is well known to take time but produce little output, so is a good candidate. (I did think about doing a 'Query ACtlog BEGINDate=-999 Search=garbage', which would certainly take time; but that would be recursive, each day adding more and more finds of "garbage" from all preceding days.) Server Script, issue OS command from There is no way to directly issue an operating system command from a Server Script. However, it can be done indirectly, by taking advantage of client schedules, which can issue OS commands. The best way is to use a one-time client schedule. Note that some commands, like 'Query MEDia' and 'Query DRMedia', can generate commands which can be written to an OS file, which can then be defined and run as a script invoked from the running script, to, for example, send email about a certain volume. Conversely, you can invoke server functions from outside the server, as in having a Perl script run dsmadmc, and thereby achieve more sophisticated processing. See: DEFine CLIENTAction Server Scripts Facility introduced in ADSMv3 to store administrative scripts in the *SM database, which can be conditionally 'RUn' to perform administrative tasks. The Scripts facility is a lot like Macros, except that Scripts are stored in the TSM database instead of in the client file system, and scripts provide some conditional logic capability. Server Scripts can be run from Administrative Schedules - but restrictions on them prohibit using redirection. Disallowed characters: Do not use Tab characters!! Server Scripts insidiously report lines containing them as errors!!
Continuation character: - Statements: IF EXIT GOTO IF coding: IF (Curr_RC) __Action__ where the return code tested is from a preceding server command, per any of the possible RC_* values summarized in appendix B of the Admin Ref manual; and Action may be a GOTO or any server command. GOTO coding: The GOTO specifies a labeled target, as in "GOTO step_1" and "step_1:". The label may appear on a line by itself or heading a line which includes another element, such as a server command or EXIT. Comments: Code in C style: /* */ Redirection: Not possible! To compensate, consider using commands like Query MEDia and Query DRMedia, which can create an output file by parameter. What's lacking: No Else, no Not (no negation, as in "if (! ok)"). Line numbering: When you DEFine SCRipt, the line numbers are assigned starting at 1, then each line is five more than the previous one, so you end up with lines numbered: 1, 6, 11, 16, 21, etc. This will probably remind you of the old Dartmouth BASIC language, where the gaps afforded you modest room to insert lines in between those, with UPDate SCRipt. Loops: Dangerous - because there is no way to query or cancel a server script, meaning that a loop could be infinite and impair your server without you having a good way to detect or do anything about it. Naming: Keep the script name as short as feasible! Every line of output resulting from the execution of the script is reported in the Activity Log on ANR2753I messages - prefaced by the name of the script. Long script names make for a lot of log inflation, particularly in causing output to span lines. Beware revising a running script, as it appears that the server executes scripts by interpretation, line by line. There is no way to interrupt a multi-command script. This causes customers to shy away from server scripts.
Scripts cannot be run from the server console, for some obvious reasons: (a) some of the scripts create a lot of output; (b) if you started some foreground process, your console would be busy for all other applications for as long as the script was running. Ref: Admin Guide, Automating Server Operations, Tivoli Storage Manager Server Scripts; Admin Ref appendix on Return Codes See also: DEFine SCRipt; RUn Server scripts, move between servers Do 'Query SCRIPT scriptname FORMAT=RAW OUTPUTFILE=____' to a file, move the file to the other system, and then do a 'DEFine SCRIPT ... FILE=____' to take that file as input. Still, the best overall approach is to maintain your complex server scripts external to the TSM server and re-import after editing. In a more elaborate way, this can be achieved through TSM's Enterprise Configuration, with a Configuration Manager server and Managed Server. Server session via command line Start an "administrative client session" to interact with the server from a remote workstation, via the command: 'dsmadmc', as described in the ADSM Administrator's Reference. Server Specific Info Is the NetWare Directory Services info; i.e., Users and Groups. Server stanza A portion of the Client System Options file, typically starting with the keyword "SErvername", which governs communicating with that one server. An ADSM client may communicate with more than one server, and thus can have multiple server stanzas within the file. The server with which the client usually interacts will be coded on the DEFAULTServer line, in the section of the file which precedes the server stanzas. (Note that the "server names" in this file are just arbitrary names for the stanzas, though they are typically the actual names of the servers. It is the TCPServeraddress which actually identifies the server to communicate with.) Many client options pertain to a given server and so must appear within each respective server stanza.
The Client Options Reference topic of the Backup-Archive Clients manual lists the options which may precede server stanzas in the options file. Server startup (dsmserv) Begins in /etc/inittab, which invokes /usr/lpp/adsmserv/bin/rc.adsmserv, which does 'dsmserv quiet' to start the primary daemon process, which in turn spawns as many children as it needs to do its work. See also: Processes, server Server startup, prevent interference During extraordinary server restarts, you may need to suppress normal activities - which you may do by adding the following options to the dsmserv.opt file prior to server restart: DISABLESCheds Yes NOMIGRRECL (NOMIGRRECL is an undocumented option to suppress migration and reclamation.) Server startup action A site may want the *SM server to perform a certain action after the server is restarted. The product has no provision for a start-up action. The simplest way to do it is to modify the server start-up script (e.g., rc.adsmserv) to incorporate a delayed dsmadmc to incite the action after the server has gotten settled in. Server startup considerations It takes some minutes for the ADSM server to become fully ready when it is restarted: client sessions may be disallowed or delayed during this time. During start-up, the DB mirrors have to be re-synced. When the server comes up, Expire Inventory is always started automatically. Realize that the database buffer cache that a long-running server had built up is gone and has to be rebuilt when a server is restarted, which can result in some slower service than when the server has been up for some time. Server startup resources The server needs the following at startup: 1. Access to the option files: found via the DSMSERV_OPT environment variable, or in the current directory 2. Access to dsmserv.dsk: must come from the current directory 3. Access to auxiliary modules: found via the DSMSERV_DIR environment variable, or in the current directory 4.
System needs access to the code: via explicit path information or through the PATH environment variable Server status 'Query STatus' - or - SELECT * FROM STATUS Note that arrangement and content may vary in the results from the two commands above. Server TCP/IP port number, query 'Query STatus' report entry: The TCP/IP port on which the server listens for client requests. See also: TCPPort server option Server TCP/IP port number, set Hard-code in the TCPPort server option (q.v.). Server version number From a server session (dsmadmc) you do: 'Query STatus'. Server version/release number & paying You have to pay for a new license to use a new version or release level of the product. For example, you have to pay to acquire and use TSM 4.1. When 4.2 comes out, you have to pay again. Only maintenance fixes within a release are free, downloadable from the Tivoli web site. SERVER_CONSOLE Special administrator established by ADSM server installation which allows administration from the server console (only), by virtue of starting ADSM from the server console and remaining in control of it. This is what you need to use in the case of having formatted a database and thus starting with it empty of any definitions. From there you can establish initial site definitions (register administrators, etc.). If your TSM server is already up and running via a normal rc.adsmserv start, you cannot normally use SERVER_CONSOLE to access it: The SERVER_CONSOLE user ID does not have a password. Therefore, you cannot use the user ID from an administrative client unless you set authentication off. An administrator with system privilege can revoke or grant new privileges to the SERVER_CONSOLE user ID.
However, you cannot do any of the following to it: - Register or update - Lock or unlock - Rename - Remove - Route commands from it Msgs: ANS8034E Ref: Admin Guide, "Managing the Server Console"; Admin Ref, "Using the Server Console" Server-free backup Offloads your server systems by having the SAN perform Backups and Restores - of volume images. (Server-free does not operate at the file level.) Exploits the capabilities of network storage and peer-level device communication on a SAN for the data to move from one storage device in the SAN to another without going through a server, eliminating server work. The SAN knows where the data is and where it is going and handles the transport without the assistance of the client node. Uses the SCSI-3 Extended Copy command to do full-volume backup and restore: the TSM server issues the command, which is carried out by the SAN's data mover. Initially implemented on Windows 2000 - as Server-free is a special form of the standard Windows 2000 Image Backup. Supports Raw and NTFS volumes, but not FAT volumes. Available in a TSM 5.1 PTF made available 3Q2002. Server-free operations made necessary the introduction of Path definitions for TSM tape libraries and tape drives. Ref: TSM 5.1 Technical Guide See also: LAN-free; OBF; SDG Server-to-server ADSM Version 3 enables multiple ADSM servers within an enterprise to be configured and administered from a central location. ADSM Version 3 server-to-server communications provides the foundation for configuring multiple ADSM Version 3 servers in an enterprise. Ref: ADSMv3 Technical Guide redbook, 6.1 ADSM Server-to-Server Implementation and Operation redbook (SG24-5244) See: DEFine SERver; Set SERVERHladdress; Set SERVERLladdress "server-to-server" module Supports Virtual Volumes and thus electronic vaulting, exports/imports directly between servers, etc. Note that this module is an extra charge. Ref: Redbook: ADSM Server-to-Server Implementation and Operation (SG24-5244).
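A minimal server-to-server setup with cross definition might look like the following, issued on SERVER_A (the server names, addresses, and password here are illustrative only - see the Admin Ref for full syntax):

```
Set SERVERName SERVER_A
Set SERVERPAssword secretpw
Set SERVERHladdress 192.168.1.10
Set SERVERLladdress 1500
Set CROSSDefine ON
DEFine SERver SERVER_B SERVERPAssword=secretpw HLAddress=192.168.1.20 LLAddress=1500 CROSSDEFine=YES
```

With CROSSDEFine=YES, SERVER_B in turn defines SERVER_A back, using the Set SERVER* values established above.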
Server-to-server IP address and The DEFine SERver command specifies Port number these via the HLAddress and LLAddress operands, respectively. The port number is usually the same as the usual TCPPort server option value. See also: Set SERVERHladdress; Set SERVERLladdress Serverfree data bytes transferred Client Summary Statistics element: The total number of data bytes transferred during a server-free operation. If the ENABLEServerfree client option is set to No, this line will not appear. See also: Server-free SERVERHladdress See: Query SERver; Set SERVERHladdress SERVERLladdress See: Query SERver; Set SERVERLladdress SErvername (Unix only) Client System Options file (dsm.sys) option which leads and labels the stanza (distinct subsection) in that file which contains the TCP network address, port number, and other specs which pertain only to the set of definitions which you want to prevail in accessing that server. Note that this name is a STANZA NAME ONLY: IT IS *NOT* NECESSARILY THE NAME OF THE SERVER AS DEFINED ON THE SERVER BY THE 'SET SERVERNAME' COMMAND THERE! Name length: 1 - 64 characters. The stanza name may initially be "server_a", as installed. This stanza name may then be referenced by DEFAULTServer statement at the head of the Client System Options file, or by a SErvername statement in the Client User Options file (dsm.opt), or by the dsm/dsmc -SErvername command line option. This stanza name thus serves as a level of indirection in identifying and reaching the server. Once reached by the physical addresses in the stanza, the server returns its actual name in the ANS5100I message returned in a dsmadmc session. See also: DEFAULTServer; SET SERVERNAME -SErvername=StanzaName Same as SErvername, but for command line. Using -SErvername on the command line does not cause MIgrateserver to use that server. Ref: "Using the UNIX Backup-Archive Clients" and "Installing the Clients". 
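A skeletal dsm.sys illustrating the stanza structure described above (the stanza names, host names, and ports are illustrative only):

```
DEFAULTServer  prod
SErvername     prod
   COMMMethod         TCPip
   TCPServeraddress   tsmprod.example.com
   TCPPort            1500
   PASSWORDAccess     Generate
SErvername     test
   COMMMethod         TCPip
   TCPServeraddress   tsmtest.example.com
   TCPPort            1600
```

Invoking 'dsmc -SErvername=test ...' then selects the second stanza rather than the DEFAULTServer one.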
Servers The Client System Options File, /usr/lpp/adsm/bin/dsm.sys, lists all servers which client users may contact via either the default Client User Options File (/usr/lpp/adsm/bin/dsm.opt) or an override file named by the DSM_CONFIG environment variable or via -OPTFILE on the command line. If the invoker does not specify a server, the first one coded in the Client System Options File is used. Servers, multiple, on one machine Advantages: (two servers on one system) 1. Less hardware to manage, as compared to multiple servers on multiple systems. 2. Attached tape resources can be shared 3. Disk resources can be moved between instances without an outage. 4. Multiple interfaces can be shared 5. One TSM server license 6. Can be implemented in a few hours 7. Works around application bottlenecks 8. Cheaper Disadvantages: 1. Harder to upgrade 2. Memory allocation can be an issue Refer to "Server startup resources" for general info on where the server looks for its resources. The server instance is determined by the directory wherein it is started. So... - Create a separate server directory, with its own config files and symlinks to the executable modules. - Create the new ADSM server database and recovery log. (These will be referred to by the dsmserv.dsk file which will reside in that directory.) - The dsmserv.opt TCPport option should specify a unique port number. Clients which are to use that server should have their TCPPort client option specify that port number. - Customize your client option files to point to the appropriate server. Note that you can set environment variables DSMSERV_OPT, DSMSERV_DIR, and PATH to point to resources. Ref: Admin Guide section "Running Multiple Servers on a Single Machine" Service Volume category 3494 Library Manager category code FFF9 for a tape volume which has a unique service volser, for CE use. Host systems are not made aware of Service Volumes, because of their engineering nature. 
Services for Macintosh NT facility for serving Mac files. ADSM can back them up from the NT; but the 3.7 and 4.1 client README file says: "Mac file support is available only for files with U.S. English characters in their names (i.e. names that do not contain accents, umlauts, Japanese characters, Chinese characters, etc.)." See also: unicode; USEUNICODEFilenames "Sess State" Entry in 'Query SEssion' output; reveals the current communications state of the server. Possible values: End The session is ending. IdleW Waiting for client's next request. MediaW The session is waiting for access to a serially usable volume (e.g., tape). RecvW Waiting to receive an expected message from the client. Run The server is executing a client request. SendW The server is waiting to send data to the client. Start The session is starting (authentication is in progress). See also the individual explorations of each of the above states in this QuickFacts. Session A period of time in which a user can communicate with an ADSM server to perform backup, archive, restore, and retrieve requests, or to perform space management tasks such as migrating and recalling selected files. HSM sessions occur for the system where the file system is resident. Session, cancel 'CANcel SEssion Session_Number|ALl' Session files What files is a session currently sending? Do 'Query SEssion F=D' to get the current output volume, then on that do 'Query CONtent ______ COUnt=-5' to see the most recent five files. Session numbering Begins at 1 with each *SM server restart. Session port number Shows up on msg ANR0406I when the session starts, like: (Tcp/Ip 100.200.300.400(4330)). Session start time Not revealed in Query SEssion: you have to do 'SELECT * FROM SESSIONS' and look at START_TIME. Session timeout problem during backup Try increasing IDLETimeout value, or choose "SLOWINCREMENTAL YES" option (q.v.) for those clients supporting it. Session type 'SHow SESSion', which reports Backup and Archive sessions. 
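For example, rather than wading through 'SELECT * FROM SESSIONS' for session start times, the query can be narrowed to just client (node) sessions, using the column names listed under the SESSIONS table entry in this document:

```
SELECT SESSION_ID, CLIENT_NAME, STATE, START_TIME FROM SESSIONS WHERE SESSION_TYPE='Node'
```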
SESSION_TYPE SQL: Column in SESSIONS table, identifying the session type, as "Admin" or "Node". SESSIONINITiation TSM 5.2+ client option to control (-SESSIONINITiation=) whether the server or client initiates sessions. The overriding purpose of this option is to prevent users on the client system from initiating sessions with the TSM server. It is also used with firewalls to allow the server to initiate scheduled sessions with the client, to perform backups and the like (which could not be done prior to 5.2, with SCHEDMODe PRompted; but the mechanism by which this is achieved is not described in any IBM doc thus far. One can deduce that 5.2 changes the paradigm such that the server contact initiates the full session, rather than inciting the client to contact the server as in the old Prompted paradigm.) Usage: Use with the client schedule command. Can be used on the command line. Not usable with the API. Placement: In client system options file (dsm.sys). Syntax: SESSIONINITiation [Client|SERVEROnly] where Client Specifies that the client will initiate sessions with the server by communicating on the TCP/IP port defined with the TCPPort server option. This is the default. SERVEROnly Specifies that the client understands it to be the case that the server will not accept client requests for sessions. All sessions must be initiated by the server - prompted scheduling on the port defined on the client with its TCPCLIENTPort option. So...if the client cannot initiate actions, then how can a Restore be accomplished? Via a client schedule on the TSM server, via DEFine SCHedule or DEFine CLIENTAction with ACTion=REStore. Caution: This option disables a lot of functionality, and should be activated only after having fully set up the client and tested its general interoperability as intended after the option is in effect.
(See APAR IC37509) Ref: Tivoli Field Guide "Using the Tivoli Storage Manager Central Scheduler" SESSIONINITiation TSM 5.2+ server option to control whether the server or client initiates sessions. Though often couched in terms of firewall use, the overriding purpose of this option is to prevent people on the client system from initiating sessions with the TSM server. Note that this option does not perform any firewall magic: firewalls are principally intended to keep the server from being accessed via various port numbers, whereas communications out from the server are generally uninhibited. Syntax: SESSIONINITiation=[Client|SERVEROnly] where Client Specifies that the client will initiate sessions with the server by communicating on the TCP/IP port defined with the TCPPort server option. This is the default. SERVEROnly Specifies that the server will not accept client requests for sessions. All sessions must be initiated by server-prompted scheduling on the port defined for the client with the REGISTER or UPDATE NODE commands. Set the node's HLADDRESS and LLADDRESS values as appropriate. Note that if you put SERVEROnly into effect for a node, it behooves you to put the equivalent client option into effect, to avoid confusion on the client side. SESSIONS SQL Table. Columns and samples: SESSION_ID: 6692 START_TIME: 2002-12-06 09:20:05.000000 COMMMETHOD: Tcp/Ip STATE: Run WAIT_SECONDS: 0 BYTES_SENT: 1333085 BYTES_RECEIVED: 3488 SESSION_TYPE: Node CLIENT_PLATFORM: AIX CLIENT_NAME: SYSTEM7 OWNER_NAME: MEDIA_STATE: Current output volume: 001647. (The following columns are in TSM 5:) INPUT_MOUNT_WAIT: INPUT_VOL_WAIT: INPUT_VOL_ACCESS: OUTPUT_MOUNT_WAIT: OUTPUT_VOL_WAIT: OUTPUT_VOL_ACCESS: LAST_VERB: CSResults VERB_STATE: Recv Sessions, client, number of See: RESOURceutilization Sessions, maximum, define "MAXSessions" value in the server options file (dsmserv.opt). Sessions, maximum, query 'Query STatus', look for "Maximum Sessions". 
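Putting SERVEROnly into effect for a node thus involves coordinated settings on both sides; a sketch (the node name, address, and port below are illustrative only):

```
* Server side: the server will initiate all sessions to this node
UPDate Node PAYROLL SESSIONINITiation=SERVEROnly HLAddress=192.168.1.50 LLAddress=1501
* Client side, in the dsm.sys server stanza (matching port):
SESSIONINITiation SERVEROnly
TCPCLIENTPort     1501
```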
Sessions, multiple See: RESOURceutilization Sessions, prevent If the server is up, 'DISAble SESSions' will prevent client nodes from starting any new Backup/Archive sessions. See also: DISAble SESSions; DISABLESCheds; Server, prevent client access SET Access See: dsmc SET Access Set ACCounting On ADSM server command to create per-session records, including KB data volumes sent from client. Set ACTlogretention TSM server command to specify the retention period, in days, for Activity Log records. Syntax: 'Set ACTlogretention N_Days'. Default: 1 day. Will result in messages ANR2102I Activity log pruning started ANR2103I Activity log pruning completed in the Activity Log. Remember that the Activity Log lives in the TSM server database, so be conscious of how much space that can take over so many days. Important: It is absolutely vital that you somehow have at least six months' worth of Activity Log records, in that you need to be able to look back at what happened to specific volumes, etc. You can accomplish this by simply leaving the Activity Log records around that long, or you can periodically capture old records before they are pruned, as via 'Query ACtlog BEGINDate=-999 > SomeFile'. Set AUthentication Server command, with System privilege, to specify whether administrators and client nodes need a password to access the server. Choices: ON Administrators and client nodes need a password to access the server. This is the default. OFF Administrators and client nodes do not need a password to access the server. See also: REGister Node Set CLIENTACTDuration TSM server command to specify the number of days that a schedule, defined with the DEFine CLIENTAction command, is to live as a server definition. TSM automatically deletes the schedules and associations with nodes from the database when the scheduled start date plus the specified number of days have passed the current date.
Records for the event are deleted regardless of whether the client has processed the schedule. Syntax: Set CLIENTACTDuration Ndays See also: DEFine CLIENTAction Set CONTEXTmessaging ON Server command to get additional info when ANR9999D messages occur. The added context includes server component info: process name, thread name, session id, transaction data, locks that are held, and database tables that are in use. 'Set CONTEXTmessaging ON|OFf' Set DRMCHECKLabel TSM DRM command to control whether a tape's media label is read and verified before it is checked out of the library. Set DRMCHECKLabel Yes|No The default is Yes. Set DRMCMDFilename Server command to name a file that can contain the commands created when the MOVe DRMedia or Query DRMedia commands are issued without specifying a CMDFilename. Syntax: 'Set DRMCMDFilename file_name' If you are not licensed for DRM, this command will work but will complain about the absence of a license, msg ANR6752W. Set DRMCOPYstgpool Server command for DRM, to specify names of the copy storage pools to be recovered after a disaster. TSM uses these names if the PREPARE command does not include the COPYSTGPOOL parameter. If the MOVe DRMedia or Query DRMedia command does not include the COPYSTGPOOL parameter, the command processes the volumes in the MOUNTABLE state that are in the copy storage pool named by the SET DRMCOPYSTGPOOL command. At installation, all copy storage pools are eligible for DRM processing. Syntax: 'Set DRMCOPYstgpool Copy_Pool_Name[,Copy_Pool_Name]' Do 'Set DRMCOPYstgpool ""' to nullify specific names and allow all copy storage pools to participate. Use the Query DRMSTatus command to display the current settings. Set DRMDBBackupexpiredays DRM parameter; tells *SM how long to keep the DB backup tapes that it is managing before finally expiring them. Stipulations for this to work: - The age of the last volume of the series has exceeded the expiration value set by this command.
- For volumes that are not virtual volumes, all volumes in the series are in VAULT state. - The volume is not part of the most recent database backup series (BACKUPFULL + BACKUPINCRs). Also watch out for a BACKUPINCR which is on disk, which may thwart expiration: do MOVe DRMedia to deal with those and allow dbbackups to expire. Do not use DELete VOLHistory on DB backup volumes when DRM is in charge. Use Query DRMSTatus to check. Syntax: Set DRMDBBackupexpiredays Ndays where Ndays can be 0 - 9999 The DBBackup volumes remain until the specified number of days has passed and an Expiration is run. This necessarily overrules any retention you may think you are doing in DELete VOLHistory which intends to keep the volumes longer. Set DRMNOTMOuntablename Command to specify the name of the offsite location for storing the media. At installation, the name is set to NOTMOUNTABLE. Use the Query DRMSTatus command to see the location name. The location name is used by the MOVe DRMedia command to set the location of volumes that are moving to the NOTMOUNTABLE state. 'Set DRMNOTMOuntablename location' where the location name can be up to 255 chars. If this Set command has not been issued, the default location is NOTMOUNTABLE. Set DRMRPFEXpiredays DRM parameter to specify when recovery plan files are eligible for expiration. Syntax: Set DRMRPFEXpiredays Ndays Set INVALIDPwlimit TSM server command to define the maximum number of logon attempts allowed before the node involved is locked. Code: 0 - 9999. Default: 0, meaning no checking See also: Set MINPwlength; Set PASSExp Set INVALIDPwlimit attempts ADSMv3 server command to set a limit on the number of invalid password attempts a prospective session may make. Set LICenseauditperiod Specifies the period, in days, between automatic license audits performed by the ADSM server. Syntax: 'Set LICenseauditperiod N_days' where N_days can be 1-30. Default: 30 days.
See also: Query STatus Set LOGMode Server command to set the mode for saving log records, which in turn determines the point to which the database can be recovered. Syntax: 'Set LOGMode Normal|Rollforward' Normal The Recovery Log keeps only uncommitted transactions. Database recovery involves restoring from the most recent db backup only: all transactions since that time are lost!! (This is particularly bad where users do Archive with the DELetefiles option: the user files will be lost!) No automatic backups are possible. TSM db mirroring is thus very important in this case, to reduce the possibility of database loss. Because of the potential for data loss, Normal mode is undesirable, antithetical to the intention of the product. Rollforward The Recovery Log keeps *all* transactions since the last database backup. Database recovery involves the most recent db backup and the intact Recovery Log contents such that all activity up to the current time is preserved. Automatic db backups are performed (via DBBackuptrigger). Note that TSM db mirroring is valuable, but not as essential in this case; but Recovery Log mirroring is more important. Other factors in choice: Rollforward makes sense when the time it takes to run an incremental backup is much less than what it takes to run a full backup. If you have the time to perform full backups at least once a day, Normal mode may be a choice for you. In either case, it is always best to use TSM mirroring for the database and recovery log. And, in either case, allocate a capacious recovery log, as a complex mix of clients can result in a lot of uncommitted transaction space. If currently using Rollforward, you can Set LOGMode Normal, then switch back (which triggers a full db backup). Note that switching from Normal to Rollforward doesn't take effect until the next full database backup, which is necessary in order to have a baseline from which the log can be used to recover a database. 
Perspective: Many customers report having given up on Rollforward, given its limited advantages and the big problem of the Recovery Log filling, with little hope of DBBackuptrigger curing the problem in a timely manner. Default: Normal Msgs: ANR2362E Ref: Admin Guide, "Database and Recovery Log Protection" and "Auditing a Storage Pool Volume" See also: DBBackuptrigger Set MAXSCHedsessions %sched ADSM server command to regulate the number of sessions that the server can use for processing scheduled work, as a percentage of the total number of server sessions available (MAXSessions). Roughly speaking, this regulates the percentage of "batch" sessions to "interactive" sessions. See also: MAXSessions Set MINPwlength TSM server command to set the minimum length of a password. Privilege level required: System Syntax: 'Set MINPwlength length' Specify a length from 0 - 64, where 0 means that the password length is not checked. Default: 0 See also: Set INVALIDPwlimit; Set PASSExp Set PASSExp *SM server command to specify password expiration periods. 'Set PASSExp N_Days [Node=nodelist] [Admin=adminlist]' Note that this value can override a zero PASSExp value in REGister Node. Set Password See: dsmc set password Set QUERYSCHedperiod Server command to regulate how often client nodes contact the server to obtain scheduled work when it is running in SCHEDMODe POlling operation. This can be used to universally override the client QUERYSCHedperiod option value. Syntax: Set QUERYSCHedperiod N_hours In the absence of this server setting, clients are free to hit the server as often as they like. Check server value with 'Query STatus'. Set RANDomize TSM server command to specify the degree to which schedule start times are randomized within the temporal startup window of each schedule, for clients using the client-polling mode ("SCHEDMODe POlling" option - but not "SCHEDMODe PRompted"). Syntax: 'Set RANDomize Randomize_Percent'.
To verify: 'Query STatus', look for "Schedule Randomization Percentage" value. Set SCHEDMODes Server command to determine how the clients communicate with the server to begin scheduled work. Each client must be configured to select the scheduling mode in which it operates. This command is used with the SET RETRYPERIOD command to regulate the time and the number of retry attempts to process a failed command. Syntax: Set SCHEDMODes ANY|POlling|PRompted Default: ANY See also: SCHEDMODe Set SERVERHladdress To set the high-level address (IP address) of a server. TSM uses the address when you issue a DEFine SERver command with CROSSDEFine=YES. Syntax: 'Set SERVERHladdress ip_address' See also: DEFine SERver; Set SERVERLladdress Set SERVERLladdress To set the low-level address (port number) of a server. TSM uses the address when you issue a DEFine SERver command with CROSSDEFine=YES. Syntax: 'Set SERVERLladdress tcp_port' See also: DEFine SERver; Set SERVERHladdress Set SERVername TSM server command to set the name which the server feeds back to the client when the client contacts the server by the network and port address contained in its client options file stanza. Syntax: 'Set SERVername Some_Name' The name can be up to 64 characters, and must be unique across the Tivoli server network. Note that the name is that used between the server and client, and has nothing to do with the server's name in the physical network namespace (as in the DNS name in a TCP/IP network). Changing this name does not affect the client's ability to find the server, because that is set in the Client System Options File by physical addressing; however, a client with "PASSWORDAccess Generate" has the client password, as known to that server, stored encrypted in a file given the name of the server (/etc/security/adsm/SrvrName); so the client root will have to redo the password, or rename the file. THIS CAN HAVE FAR-REACHING RAMIFICATIONS.
Assigning arbitrary server names allows you to run multiple servers, or to uniquely identify servers on multiple systems. The ADSM "Test Drive" works this way. Set SERVERPAssword To set the password for communication between servers to support enterprise administration and enterprise event logging and monitoring. Syntax: 'Set SERVERPAssword password' Set SERVERURL To specify a Uniform Resource Locator (URL) address for accessing the server from the web browser interface. TSM uses this address when a server is defined and cross definition is permitted. 'Set SERVERURL url' Query: Query STatus, see "Server URL" Set SQLDATETIMEformat To control the format in which SQL date, time, and time stamp data are displayed. See your SQL documentation for details about these formats. Syntax: 'Set SQLDATETIMEformat [Iso|Usa|Eur|Jis|Local]' Where: Iso Specifies the International Standards Organization (ISO) format. ISO is the default. Usa Specifies the IBM USA standard format. Eur Specifies the IBM European standard format. Jis Specifies the Japanese Industrial Standard Christian Era. Currently the JIS format is the same as the ISO format. Local Site-defined. Currently, the LOCAL format is the same as the ISO format. See also: Query SQLsession Set SQLDISPlaymode To control how SQL data types are displayed. Syntax: 'Set SQLDISPlaymode [Narrow|Wide]' Where: Narrow Specifies that the column display width is set to 18. Any wider string is forced onto multiple lines at the client. This is the default. Wide Specifies that the column display width is set to 250. See also: -COMMAdelimited; -DISPLaymode; -TABdelimited See also: Query SQLsession Set SQLMATHmode to round or truncate decimal numbers for SQL arithmetic. Syntax: 'Set SQLMATHmode Truncate|Round' Default: Truncate See also: Query SQLsession Set SUBFILE TSM 4.1+ server command to allow clients to back up subfiles. Product installation sets it to No; set it to Client to allow such backups. Do Query STatus in the server to check. 
See also: Adaptive Differencing; SUBFILE*
Set SUMmaryretention  TSM 3.7 server command to specify the number of days to keep information in the SQL activity Summary table. Syntax: Set SUMmaryretention Ndays where Ndays specifies the number of days to keep information in the activity summary table. Specify 0 to 9999. 0 means to not keep data; 1 says to keep the activity summary table for the current day only. Query via: Query STatus See also: Summary table
Set TAPEAlertmsg  TSM 5.2+ server command to control the handling of TapeAlert problem indications from a library or tape drive which supports that technology. Syntax: 'Set TAPEAlertmsg ON|OFf' See also: Query TAPEAlertmsg; TapeAlert
SETOPT  ADSMv3 server command which allows changing server options without restarting the server. It actually updates the dsmserv.opt file as well, but: it appends the specified option to the end of the file rather than changing the option where it appears in the file; and it fails to add a newline at the end of the line that it adds. Nor does it even check the current value: for example, you can specify the very same value that an option currently has, and the foolish command will add a needless duplicate to the file. Suffice to say, the programming of this command is embarrassingly primitive. Note also that performing a SETOPT does *not* result in TSM re-examining the other options in the file. (You cannot use SETOPT to cause TSM to adopt changes you manually made to the file.) As of ADSMv3 you can operate on: AUDITSTorage, COMMTimeout, DATEformat, EXPINterval, EXPQUiet, IDLETimeout, MAXSessions, NUMberformat, RESTOREINTERVAL, TIMEformat. As of TSM3.7 you can also operate on: BUFPoolsize. Msgs: ANR2119I The ________ option has been changed in the options file.
Share Point Name  See: UNC
SHRDYnamic (Shared Dynamic)  An ADSM Copy Group serialization mode, as specified by the 'DEFine COpygroup' command SERialization=SHRDYnamic operand spec.
This mode specifies that if an object changes during backup or archive and continues to be changed after a number of retries, the *last* retry commits the object to the ADSM server whether or not it changed during backup or archive. Contrast with DYnamic, which sends it on the first attempt. See also: CHAngingretries
Shared memory  To conduct a *SM client-server session, within a single Unix computer system, via a shared memory area instead of data communications methods. (In Windows, the comparable mechanism is Named Pipe.) The shared memory communications options were added with the V2 level 6 or 7 ADSM AIX server and the V2 level 3 (?) AIX client. COMMMethod SHAREDMEM SHMPORT 1510 The SHMPORT must be the same for both the client and the server: that is a TCP/IP port that is used between the client and the server for the initial handshake. Of course the client and the server must be running on the same machine, because a shared memory region on the machine is used for the communications. Restrictions: The client MUST: 1 - run as ROOT (as must the server); or 2 - run under the same userid as the server; or 3 - use PASSWORDAccess Generate (attempting to use PASSWORDAccess Prompt results in rejection with an error message). Overall control of shared memory in your computer system is in accordance with its hardware architecture and operating system design; see appropriate doc. Use of the shared memory protocol in at least AIX results in a temporary file named /tmp/adsm.shm.xxxxx being created, then deleted at the end of the session. If the operating system is rebooted or the TSM server is halted, the files may not be deleted, and so external measures need to be implemented to do so. If you use the same two parameters (COMMMethod and SHMPORT) on your client (on the same machine as the server), you'll get a shared memory connection. You don't really need to specify SHMPORT on either the client or server unless you deviate from the default value of 1510.
A server 'Query SEssion' will show the "Comm. Method" being "ShMem", rather than "Tcp/Ip". Note that there is no shared memory communication between client sessions. Ref: B/A Client, "COMMMethod". Msgs: ANR8285I, ANS1474E See also: Named Pipe; NAMedpipename
Shared Static  See: SHRSTatic
SHRSTatic  An *SM copy group serialization mode, as specified by the SERialization parameter in the 'DEFine COpygroup' command. This mode specifies that a backup or archive operation is not to succeed for an object found to have been modified during the operation. (The object being "open" during this time doesn't matter; detection of the file attributes indicating modification does matter.) After the operation, TSM will check the object and, if it discovers the object to have been modified, TSM will reattempt the operation a number of times (see below), and the following message will be written to the dsmerror.log: "File '_____' truncated while reading in Shared Static mode." If the object has been modified after every attempt, the object is not backed up or archived. How it works (as of 1997): *SM will send the file to the server. Only AFTER it has sent the file to the server will it then go back to the client and look at the attributes to see if they have changed since the beginning of backing up the file. If they have changed, then it determines the file was open while it was backed up and will retry (if you have Shared Static) immediately, i.e. it will send the file AGAIN, and then check AGAIN. It will repeat this process for the specified number of retries (CHAngingretries). *SM will NOT be backing up any other files at this time - all other file backups wait until the processing for this file is done. This could mean that the file has been sent to the server up to 4 times.
See also: CHAngingretries; Serialization Contrast with: ABSolute; Dynamic; Static
SHMPORT  See: Shared memory
"shoe-shining"  Term most commonly used to refer to the reciprocating motion of linear serpentine tape (3590, 3580) as it records to the end of tape, switches head tracks, and records back toward the starting point, repeated until all possible tracks are used, as needed. Also refers to "backhitch" (q.v.). Helical scan tape technology vendors (Sony AIT) deride linear tape "shoe-shining" as causing much more wear to tapes than their technology - but the claim is specious, given the higher stresses involved in helical scan. See also: Backhitch
SHow commands  Unsupported, undocumented commands to reveal various supplementary info, mostly that of internals of no interest to customers. Running some of them can impose a substantial burden on the server. And they are typically session executables (not processes) which cannot be canceled. They often yield internals data meaningful only to developers: the Select command can often yield information far more useful to customers. In general, these are not things that customers should run, except under the direction of TSM support personnel. IBM documents some SHow commands at: http://publib.boulder.ibm.com/tividd/td/TSMM/SC32-9103-00/en_US/HTML/info_show_cmds.html
SHow AGGREGATE __  Undocumented *SM server command to show ???
SHow Archives NodeName FileSpace  Undocumented *SM server command to show archives for a given Node filespace, revealing full path name, when archived, and management class. Sample output: /usr1 : / graphics (MC: SERV.MGM) Inserted 10/27/1998 14:55:03 Beware doing this on a large filespace, because the server will have to process the whole thing. Note: does not show archiver, owner, or object size. See also: SHow Versions
SHow ASAcquired  Undocumented *SM server command to show acquired removable volumes.
SHow ASMounted  Undocumented *SM server command to show mounted (or mount in progress) volumes.
SHow ASQueued Undocumented *SM server command to show the mount point queue. SHow ASVol Undocumented *SM server command to show acquired removable volumes. SHow BACKUPSET Undocumented TSM server command to show Backup Set info. SHow BFVars Undocumented *SM server command to show Bitfile Services Global Variables. SHow BFObject 0 Undocumented *SM server command to show a Bitfile Services Object. Example, for ObjectID 0.43293636: SHow BFObject 0 43293636 The object may not be found... SHow BFObject 0 43293699 Bitfile Object: 0.43293699 Bitfile Object NOT found. See also: SHow INVObject SHow BFStats ___ Undocumented *SM server command to show Bitfile Services Statistics. SHow BUFClean Undocumented *SM server command to show Database Buffer Pool - Hot Clean List. SHow BUFDirty Undocumented *SM server command to show Database Buffer Pool - Dirty Pages Table SHow BUFStats Undocumented *SM server command to show Database Buffer Pool Statistics, including Cache Hit Percentage. SHow BUFVars Undocumented *SM server command to show database buffer pool global variables. SHow BVHDR ___ Undocumented *SM server command to show ??? SHow CART Undocumented *SM server command to show Cart Info from mounted volumes. SHow CCVars Undocumented *SM server command to show Central Configuration Variables SHow CONFIGuration Undocumented *SM server command to show Configuration: Time, Status, Domain, Node, Option, Process, Session, DB, DBVolume, Log, Logvolume, Devclass, Stgpool, Volumes, Mgmtclass, Copygroups, Schedules, Associations, Bufvars, Csvars, Dbvars, Lvm, Lvmcopytable, Lvmvols, Ssvars, Tmvars, Txnt, Locks, Format3590, Formatdevclass. ADSMv3 provides the 'Query SYStem' command, which provides much the same info. SHow CSVars Undocumented *SM server command to show client schedule variables. 
SHow DAMAGE To show damaged files in a stgpool Example: SHOW DAMAGE STGP1 **Damaged files for storage pool STGP1, pool id 4 Bitfile: 0.7726069, Type: PRIMARY Volume ID: 1168, Volume Name: NT1681 Segment number: 1, Segment start: 14, Segment Size: 0.26218147 UX142ORA : /ORAohmspt12// al_509156970_454_1 636679436 Bitfile: 0.7726072, Type: PRIMARY Volume ID: 1168, Volume Name: NT1681 Segment number: 1, Segment start: 15, Segment Size: 0.262719 UX142ORA : /ORAsoddev33// al_509157087_93_1 636679436 Found 2 damaged bitfiles. SHow DBBACKUPVOLS Undocumented *SM server command to show info on the latest full+incremental database backup volumes. SHow DBPAGEHDR ___ Undocumented *SM server command to show ??? SHow DBPAGELSN ___ Undocumented *SM server command to show ??? SHow DBTXNSTATS Undocumented *SM server command to show Database Transaction Statistics. SHow DBTXNTable Undocumented *SM server command to show the Database Transaction Table. SHow DBVars Undocumented *SM server command to show database Service Global Variables. SHow DEADLock Undocumented *SM server command to show any deadlocks that exist. SHow DEVCLass Undocumented *SM server command to show sequential device classes. SHow DEVelopers Undocumented *SM server command to show Server Development Team + Server Contributors. (Don't expect it to be current.) SHow DISK Undocumented *SM server command to show DISKfiles data. SHow DSFreemap ___ Undocumented *SM server command to show ??? SHow DSOnline Undocumented *SM server command to show storage pool datasets (volumes) online. SHow DSVol Undocumented *SM server command to show disk storage pool datasets (volumes). SHow DUPLICATES Undocumented *SM server command to scan the database for duplicates. Warning: Runs a long time and uses a lot of system resources; and there is no way to stop it! SHow FORMAT3590 _VolName_ Undocumented *SM server command to verify that the Devclass Format spec for a given volume is correct. 
Yields Activity Log message like: ANR9999D asvolut.c(2086): No change required for volume _VolName_. SHow FORMATDEVCLASS _DevClass_ Undocumented *SM server command to verify that volumes in a given device class are correct in the db. Yields Activity Log message like: ANR9999D asvolut.c(2293): All volumes in _DevClass_ device class have correct entries in *SM database. SHow ICCTL Undocumented *SM server command to show control info about current image copy (db backup)? SHow ICHDR Undocumented *SM server command to show info about latest image copy (db backup)? SHow ICVARS Undocumented *SM server command to show Image Copy Global Variables. SHow IMVARS Undocumented *SM server command to show Inventory Global Variables. SHow INCLEXCL See: dsmc SHow INCLEXCL SHow INVObject 0 Undocumented *SM server command to show an inventory object, reporting its nodename, filespace, management class, etc. Example, for ObjectID 0.43293636: SHow INVObject 0 43293636 OBJECT: 0.43293636 (Backup): Node: ACSN08 Filespace: /u2. /csg/rbs/ tempThis Type: 2 CG: 1 Size: 0.0 HeaderSize: 0 BACKUP OBJECTS ENTRY: State: 1 Type: 2 MC: 1 CG: 1 /u2 : /csg/rbs/ tempThis (MC: DEFAULT) Active, Inserted 08/01/03 07:58:58 EXPIRING OBJECTS ENTRY: Expiring object entry not found. See also: SHow BFObject SHow LANGUAGES Undocumented *SM server command to show ??? SHow LIBINV Undocumented *SM server command to show the library's inventory: lib, vol, stat, use, mounts, swap, data. May show library storage slot element address, as for an STK 9710 lib. SHow LIBrary Undocumented *SM server command to show the status of the library and its drives, being the output of SIOC_INQUIRY and other operations. Meaning of fields: type= Device type, like 8 for 3590. mod= Device type modifier, like 17 for 3590. busy=0 means the drive is not mounted or even acquired by *SM. busy=1 should reflect *SM using the drive (Query MOunt). But this could result from drive maintenance. 
Fix by trying 'cfgmgr' AIX command, or killing the lmcpd AIX process and then doing 'cfgmgr' or '/etc/lmcpd'. online=0 means the drive is "offline", as when 'rmdev -l rmt_' had been done in AIX. In Version 2, this will only be if the polled=1. In V3, you can update a drive to be offline, in which case the polled flag will be 0. polled=1 means that *SM could not use the drive for one of three reasons: - The drive is loaded with a Non-*SM volume (eg a cleaner cartridge, or a volume from the other *SM server); - The drive is unavailable to the library manager (usually set this way by load/unload failures) - The drive cannot be opened (some other application has it open, or there's some connection problem, etc) polled=1 means the server is polling the drive every 30 seconds to see when the above three conditions all clear. (It also means that the online flag should be 0.) When the conditions clear, it turns online back to 1 and the drive should now be available to be acquired. Note that if no tape drive is currently available, *SM will wait rather than dispose of client and administrative tasks. Note that the relative positions of the drives in the list can change over one server's uptime. SHow LMVARS Undocumented *SM server command to show License Manager variables. SHow LOCKS Undocumented *SM server command to show Lock hash table contents. SHow LOCKTABLE Undocumented *SM server command to show Lock hash table contents. SHow LOCKs Same as 'SHow LOCKTABLE' SHow LOG Undocumented *SM server command to show Log status information. SHow LOGPAGE ___ Undocumented *SM server command to show ??? SHow LOGPINned Undocumented *SM server command to show contributors to Recovery Log "pinning". But you may figure out the culprit simply by doing Query SEssion. Ref: IBM site article swg21054574 See: Recovery Log pinning/pinned SHow LOGREADCACHE Undocumented *SM server command to show the Log Read Cache. 
SHow LOGRESET  Undocumented *SM server command to show Logging service statistical variables reset.
SHow LOGSEGTABLE  Undocumented *SM server command to show the Log Segment Table.
SHow LOGSTATS  Undocumented *SM server command to show log statistics.
SHow LOGVARS  Undocumented *SM server command to show Log Global Variables.
SHow LOGWRITECACHE  Undocumented *SM server command to show the Log Write Cache.
SHow LSN ___  Undocumented *SM server command to show ???
SHow LSNFMT ___  Undocumented *SM server command to show ???
SHow LVM  Undocumented *SM server command to show logical volume manager info: server disk volumes.
SHow LVMCKPTREC  Undocumented *SM server command to show LVM checkpoint record contents.
SHow LVMCOPYTABLE  Undocumented *SM server command to show copy table status (database and log volumes).
SHow LVMCT  Same as 'SHow LVMCOPYTABLE'
SHow LVMDISKNAME ___  Undocumented *SM server command to show ???
SHow LVMDISKNUM ___  Undocumented *SM server command to show ???
SHow LVMDNU ___  Same as 'SHow LVMDISKNUM'
SHow LVMDISKTABLE  Undocumented *SM server command to show Disk Table Entries (database and log volumes).
SHow LVMDNA ___  Same as 'SHow LVMDISKNAME'
SHow LVMDT  Same as 'SHow LVMDISKTABLE'
SHow LVMFIXEDAREA  Undocumented *SM server command to show the "LVM fixed area" on each database and recovery log volume (the extra 1MB that you have to add to these volumes). This command also reveals the maximum possible size for the *SM Database and Recovery Log.
SHow LVMFA  Same as 'SHow LVMFIXEDAREA'
SHow LVMIOSTATS  Undocumented *SM server command to show ???
SHow LVMLP  Undocumented *SM server command to show DB Logical Partition Information
SHow LVMPAGERANGE ___  Undocumented *SM server command to show ???
SHow LVMPR ___  Same as 'SHow LVMPAGERANGE'
SHow LVMRESET  Undocumented *SM server command to ???
SHow LVMVOLS  Undocumented *SM server command to show database and recovery log volume usage.
SHow MEMU Undocumented *SM server command to show internal memory pool utilization numbers. In the report... "Freeheld bytes" reflects what the TSM server needs. "MaxQuickFree bytes" should be greater than Freeheld. Doing 'Show Memu SET MAXQUICK _____' will actually set the MaxQuickFree to the given bytes value. SHow MESSAGES Undocumented *SM server command to show ??? SHow MP Undocumented *SM server command to show allocated Mount Points; that is, drives currently in use, and their status (Alloc, Clean, Idle, Open, Opening, Reserved, Waiting). (Use SHow LIBrary to see all drives.) SHow NODE Undocumented *SM server command to show what's in a database node (not to be confused with a client node). SHow NODEHDR ___ Undocumented *SM server command to show a subset of SHow NODE: just the header info, not the records. SHow NUMSESSIONS Undocumented *SM server command to show number of client sessions. Response is like: Number of client sessions: 2 See also: Query SEssion; SHow SESSions SHow OBJ (SHow OBJects) Undocumented *SM server command to show Defined Database Object info: homeAddr=, create=, destroy=, savePointNum=, info-> . SHow OBJDir Undocumented *SM server command to show Defined Database Object Names and their corresponding Home Address in parentheses. SHow OBJHDR Undocumented *SM server command to show a more expanded view of what SHow OBJDIR displays: Type, Name, homeAddr, create, destroy,savePointNum, openList. SHow OPENHDR Same as 'SHow OPENobjects' SHow OPENobjects Undocumented *SM server command to show open Objects. Show Options See: dsmc show options SHow OUTQUEUES Undocumented *SM server command to show ??? SHow PENDing Undocumented *SM server command to show pending administrative and client schedules. Reveals nodes which use "SCHEDMODe POlling" as well as "SCHEDMODe PRompted". Reports: Domain, Schedule name, Node name, Next Execution, Deadline. 
SHow RAWNODE  Undocumented *SM server command to show a database node (not to be confused with a client node) in dump format (raw data).
SHow RECLAIM ___  Undocumented *SM server command to show ???
SHow RESQUEUE  Undocumented *SM server command to show storage service ???
SHow SESSions  Undocumented *SM server command to show Session information, including whether it is Backup (including backing up or restoring) or Archive (including archiving or retrieving). SessType values (perceived): 4 = HSM, or an ADSMv2 backup session; 5 = Backup; 7 = Administrator. The "bytes" value is actually the number on the right side of the seeming decimal point; so in "0.1889841210", the bytes value is some 1.8 GB. The number may also be negative, as in "0.-1596708786", with repeated command issuances showing the negative value decreasing, which is indicative of a register overflow condition: the bytes value is more than can be contained in a C int. See also: Query SEssion; SHow NUMSESSions
SHow SLOTs  Undocumented *SM server command to show slot definitions in a SCSI library, such as a 3583.
SHow SMPBIT  Undocumented *SM server command to show ???
SHow SMPHDR  Undocumented *SM server command to show ???
SHow SPAcemg FileSpace  Undocumented *SM server command to show all SPACEMGMT (HSM) Files for node. Beware: output can be enormous.
SHow SQLTABLES  Undocumented *SM server command to show mapped SQL tables.
SHow SSLEASED  Undocumented *SM server command to show storage service ???
SHow SSOPENSEGS  Undocumented *SM server command to show storage service open segments.
SHow SSPOOL  Undocumented *SM server command to show storage service pool info.
SHow SSSESSION  Undocumented *SM server command to show Storage Service sessions.
SHow SSVARS  Undocumented *SM server command to show Storage Service Global Variables: *ClassId, *PoolId, *VolId.
SHOW STORAGE USAGE  Dsmadm GUI selectable; is equivalent to 'Query AUDITOccupancy NodeName'.
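Several SHow reports express 64-bit quantities as two 32-bit halves around a seeming decimal point - the session "bytes" value above, the SHow DAMAGE "Segment Size", and ObjectIDs like 0.43293636. A hedged Python sketch of decoding such a value, assuming a negative right half is simply a wrapped signed 32-bit integer (consistent with the overflow behavior described above, but an assumption nonetheless):

```python
def decode_hi_lo(value):
    """Decode a TSM 'hi.lo' display value (e.g. "0.1889841210") into one
    integer. A negative right half is taken to be a wrapped signed 32-bit
    quantity, per the C int overflow described above."""
    hi_part, lo_part = value.split(".", 1)
    hi, lo = int(hi_part), int(lo_part)
    if lo < 0:
        lo += 2 ** 32          # undo the signed 32-bit wraparound
    return hi * 2 ** 32 + lo
```

Under this assumption, "0.-1596708786" decodes to 2698258510 bytes, about 2.7 GB.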
SHow SYSTEMOBJECT  Undocumented TSM4 server command to show Windows System Objects.
SHow TBLSCAN ___  Undocumented *SM server command to show ???
SHow THReads  Undocumented server command to show all the server's threads. Thread names are fairly descriptive. For example, each non-admin client session will have a SessionThread; if a Move Data is running, its thread name will be AfMoveDataThread. Thread 0 is main, followed by an LvmDiskServer thread for each disk volume, then others. Report begins with server PID, thread table size, active threads count, zombie threads count, cached descriptors count. Fields:
tid  Thread id.
ktid  Kernel thread ID, as reported by the "tid" operand of the AIX 'ps' -o option, like: 'ps -mefl -o pid,ppid,bnd,scount,sched,thcount,tid,comm'
ptid  Associated Process thread ID.
det  Probably refers to whether the thread was created in Detached state. Most threads show det=1, except main, TbPrefetchThread, SmAdminCommandThread, AdmSQLTimeCheckThread.
zomb  Presumably refers to being a zombie (child whose parent isn't listening for its end). Value usually 0. "Zombie threads" count at beginning of report tells you how many in total.
join  Probably indicates that pthread_join() was invoked to suspend processing of the calling thread until the target thread completes. Value always seen 0.
result  ?? Value always seen 0.
sess  Session number, if a SessionThread
Thread names: LvmDiskServer = Logical Volume Manager, with one thread per DB and Recovery Log volume. Note that there is no indication as to which thread is running or how much CPU time it is accumulating, hence no way to readily isolate problem threads. See also: Processes, server
SHow TIME  Undocumented *SM server command to show the current server date and time.
SHow TMVARS  Unsupported *SM server command to show Transaction Manager Global Variables + Restart Record.
SHow TRANSFERSTATS ___  Undocumented *SM server command to show ???
SHow TREEstats _TableName_  Undocumented *SM server command to show statistics on an SQL table tree. Add up leaf-nodes and non-leaf-nodes for the number of pages used. Beware that this command scans the database trees, which can take a long time. Example: show tree Activity.Log
SHow TXNstats  Unsupported *SM server command to show Transaction manager statistics.
SHow TXNTable  Undocumented *SM server command to show Transaction hash table contents.
SHow VERIFYEXP  Undocumented *SM server command, to be used only as directed by IBM support... Verifies expiration table entries and may correct potentially corrupt entries. It is not guaranteed to fix all entries. If this doesn't clean up the problem (i.e., you still see signs of the problem afterwards), then an AUDITDB operation is likely the only corrective action available. IBM Support may be contacted in response to a message like ANR9999D imexp.c(4694): ThreadId<25> Backup Entry for object 0.129710882 could not be found in Expiration Processing, whereupon guided use of this command may be warranted. Further cautions: Takes a long time to run (like Audit DB) and will tax the capacity of the Recovery Log.
SHow Versions  Unsupported *SM server command to show the version of every Backup file in a filespace, the management class used to back it up, whether it is Active or Inactive, and when it occurred (timestamp). However, object size is not revealed. Syntax: 'SHow Versions NodeName FileSpace [Nametype=________]' where Nametype=unicode may be needed for Unicode filespaces. Example: SHow Versions ournode /home /home : / netinst (MC: OURLIBR.MGMT) Active, Inserted 06/03/1997 16:36:46 Employing the Select command on the Backups table can produce comparable results. A Deactivated date of year 1900 is a "negative infinity" setting to denote that a file is eligible for immediate expiration/deletion processing. See also: SHow Archives
SHow VIRTVOL ___  Unsupported *SM server command to show ???
SHow VOLUMEUSAGE NodeName  Unsupported TSM server command to display Primary Storage Pool volumes being used by a given Node for backup data. Does not reflect Copy Storage Pools, or volumes used only for Archive data or HSM data. That is, the command will report volumes which contain backup data, or a mix of Backup and Archive data for a node, but not volumes which contain only Archive data. (A Select on the VOLUMEUSAGE table *will* show copy storage pool volumes.) Sample output: adsm> SHow VOLUMEUSAGE ____ SHOW VOLUMEUSAGE started. Volume 000042 in use by node ____. Volume 000043 in use by node ____. SHOW VOLUMEUSAGE completed. You could subsequently go on to issue a 'Query CONtent' command to find out what's on the tape. IBM intends to replace this with a similar, supported command.
SHow VOLUSE  Same as 'SHow VOLUMEUSAGE'
Shut down server  'HALT' command, after doing a 'DISAble' to prevent new sessions, 'Query SEssion' to see what's active, and 'CANcel SEssion' if you can't wait for running stuff to finish.
Signal 11  See: Segmentation violation
SIM (3590)  Service Information Message. Sent to the host system. AIX: appears in Error Log. Ref: "3590 Operator Guide" manual (GA32-0330-06), esp. Appendix B, "Statistical Analysis and Reporting System User Guide" See also: MIM; SARS
Single Drive  Some customers attempt to implement a *SM server with a single (tape) drive. That is extremely awkward, and discouraged. Do all you can to add a second removable storage device (tape, optical) to your installation. Remember that the second drive does not have to be of the same type as the first for purposes like BAckup STGpool: that drive can be cheaper and of lower performance, with less costly media.
Single Drive copy storage pool  A *SM server with a single drive needs special configuration to accomplish a BAckup STGpool to tape.
The best approach is to utilize disk (disk is cheap) for the primary backup stgpool, then do a BAckup STGpool from that disk to the single sequential drive, then migrate the disk data to the next stgpool in the hierarchy, which would be the same single sequential drive.
Single Drive Reclamation  See: RECLAIMSTGpool
Single Drive Reclamation Process  Redbook "AIX Tape Management" (SG24-4705), script in appendix C.
SIngular  Perhaps you mean "distinct", as in SELECT operations.
Size  See: FILE_SIZE
Size factor  HSM: A value that determines the weight given to the size of a file when HSM prioritizes eligible files for migration. The size of the file in this case is the size in 1-KB blocks. The size factor is used with the age factor to determine migration priority for a file. Defined when adding space management to a file system, via the dsmhsm GUI or dsmmigfs command. See also: Age factor
Size limit  See: MAXSize
Size of file for storage pool  See "MAXSize" operand of DEFine STGpool.
SKIPNTPermissions  Windows option to allow bypassing processing of NTFS security information. Select this option for incremental backups, selective backups, or restores. Use this option with the following commands: Archive, Incremental, Restore, Retrieve, Selective. Choices: No = The NTFS security information is backed up or restored; this is the default. Yes = The NTFS security information is not backed up or restored with files. (Consider carefully.) Also, with Yes, the SKIPNTSecuritycrc option does not apply.
SKIPNTSecuritycrc  Windows NT client option: Computes the security cyclic redundancy check (CRC) for a comparison of NTFS security information during an incremental or selective backup, archive, restore, or retrieve operation. Performance, however, might be slower because the program must retrieve all the security descriptors. Use this option with the following commands: Archive, Incremental, Restore, Retrieve, Selective. Choices: No = The security CRC is generated during a Backup; this is the default.
Yes = The security CRC is not generated during a Backup. All the permissions are backed up, but the program will not be able to determine if the permissions are changed during the next incremental backup. When SKIPNTPermissions Yes is in effect, the SKIPNTSecuritycrc option does not apply. The security info is stored in a variable length buffer. It is not part of the attributes structure that is used to compare to see whether anything has been changed, to back it up again as part of incremental backup. What is stored in the attrib structure is the security CRC, which is the checksum value of the buffer. If the security info is backed up but not the CRC, *SM won't be able to detect changes that were made to the security attributes. *SM does store the size of the four security structures (owner SID, group SID, DACL & SACL), but the size alone doesn't tell if it was changed. So the downside of setting SKIPNTSecuritycrc=Y is that TSM can only detect whether the actual size of any of the four security structures has changed.
skipped  ANS4940E message indication that a file was skipped during Backup because it changed, per the CHAngingretries option.
Skipped files  Somewhat peculiar and misleading product terminology referring to files that span multiple storage pool volumes - they skip from one volume to another. As used in the AUDit Volume command's SKIPPartial keyword. See also: Span volumes, files that, find
In the context of a client backup, see "Backup skips ..."
SLDC  Streaming Lossless Data Compression algorithm, as used in the 3592. See also: ALDC; ELDC; LZ1
Slot (tape library storage cell)  See: Element; HOME_ELEMENT
Slow performance with multiple client accesses  An individual client backup may take 10 minutes; but if multiple clients simultaneously do backups, the backup time turns to hours. This can occur if the database cache is too small. Inspect your "Cache Hit Pct" number: if it is down around 80% then disk access is dominating, slowing everything down.
Increase BUFPoolsize in dsmserv.opt.
SLOWINCREMENTAL  Option (Client System or Client User Option) for personal computers (Macintosh, Novell, Windows (only)) to perform "slow incremental backups", which means to back up one directory at a time instead of first generating a full list of all directories and files. Specify "SLOWINCREMENTAL YES" to so choose. The default for all systems except Macintosh is "SLOWINCREMENTAL NO", so as to speed the backup itself. You may want "SLOWINCREMENTAL YES" in cases where the node session times out as the server is busy so long compiling that list before starting the first transmission.
Small Files Aggregation  ADSMv3 feature to group small files into a larger aggregate to improve the efficiency of backup and restoral operations, by reducing overhead. If the TXNBytelimit client option or TXNGroupmax server option values are too small, or client files are very large, you may not get much aggregation. Ref: Admin Guide: "Aggregate file".
SMC  SCSI Medium Changer, as on a 3590-B11, as used via Unix device /dev/rmt_.smc; and on the 3583 and 3584. ("Medium Changer" is also referred to as an "Autochanger".) In Unix, the associated device is /dev/smc0, /dev/smc1, etc. The smc* special file provides a path for issuing commands to control the medium changer robotic device. Though the term originated with SCSI cable connections, the terminology has been carried into Fibre Channel as well. Mounts within an SMC are specified by slot number, which means that, unlike fully automated libraries having a library manager, TSM must keep track of what slots its volumes are in, and this is reflected in Query LIBVolume output, where the Home Element should identify the slot. An AUDit LIBRary should refresh TSM's knowledge of volume locations. See also: 3590 TAPE DRIVE SPECIAL DEVICE FILES at the bottom of this document.
SMIT and ADSM  ADSM adds its own selection category to SMIT, as in Devices -> ADSM Devices.
smpapi_*  Like "smpapi_setup".
These are functions provided in the TSM sample API program. The source files themselves are named dapi*.c SNA LU6.2 Systems Network Architecture Logical Unit 6.2. Snapshot Backup Actually, Windows 2000 & XP image backup. See: Image Backup SNAPSHOTCACHELocation For TSM 5.1 Windows 2000 & XP image backups, in conjunction with INCLUDE.IMAGE; or for TSM 5.1 Windows 2000 & XP open file backups, in conjunction with Windows INCLUDE.FS. Specifies the location of a pre-formatted volume which will house the Old Blocks File (OBF), which holds the original blocks that other processing changes on the volume which is the subject of the image backup or open file archive. The default is the system drive (typically, C:), C:\tsmlvsa . Note that the OBF file cannot be on the same volume that is being backed up. One approach to handling this is via INCLUDE.FS C: fileleveltype=dynamic See also: LVSA; OBF SNMP ADSMv3 provides SNMP support. Implement by doing: - Configure dsmserv.opt for SNMP - Configure /etc/snmpd.conf - Start /usr/lpp/adsmserv/bin/dsmsnmp - Start the ADSM server (in that order!) - Register admin SNMPADMIN with a password and analyst privileges. See: dsmsnmp. Ref: ADSMv3 Technical Guide redbook, section 9.3 SNMP MIB files AIX: /usr/lpp/adsmserv/bin/adsmserv.mib 3494: Note that the atldd package does not itself provide MIB files for the 3494. See the IBM Magstar 3494 Tape Library Guide redbook (search on SNMP) and the 3494 Tape Library Operator's Guide manual. Note that the latter manual says: "The Library Manager code does not contain any SNMP Management Information Base (MIB) support." SNMPD Later releases of AIX V4.2.1 all have a DPI V2 compliant snmpd built in. The snmpd component is in fileset bos.net.tcp.client. You can download fixes from http://198.17.57.66/aix.us/aixfixes?lang=english.
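The "Configure dsmserv.opt for SNMP" step above boils down to a handful of server options. A sketch of such a stanza follows; the hostname, community name, and port values are illustrative assumptions only, and available options vary by server level, so check the Admin Reference for your environment:

```
* dsmserv.opt SNMP sketch - values are illustrative, not prescriptive
COMMMETHOD            SNMP
SNMPSUBAGENT          HOSTNAME myhost COMMUNITYNAME public
SNMPSUBAGENTPORT      1521
SNMPHEARTBEATINTERVAL 5
SNMPMESSAGECATEGORY   SEVERITY
```

Remember the documented ordering: dsmsnmp must be started before the server, or the subagent connection fails.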
Sockets and Backup/Restore ADSM will back up and restore special files, but (per the v3 client README file), *not* sockets: sockets are skipped during backup; and they are skipped during restore, even if they were backed up with earlier levels of the ADSM software. AIX 4.2 and HP-UX do not support creating socket files, and always skip socket files in Restore operations. Note: Early v3 software attempted to back up and restore sockets; but there were too many problems, and that functionality was removed. See also: IGNORESOCKETS Solaris errno values Do 'man -s 2 intro' on Solaris. See also IBM site Technote 1143564. Solaris restorals, speed up Employ the "fastfs" attribute, which causes directory updates to be buffered in memory rather than be written to disk as each is changed, which can dramatically slow a restoral. Risk: A hardware problem, power outage, or other system disruption will cause all the buffered data to be lost, so best to use this only for file systems which are lost causes to begin with. ftp.wins.uva.nl:/pub/solaris/fastfs.c.gz Space management Another term for describing the services performed by HSM: The process of keeping sufficient free storage space available on a local file system for new data and making the most efficient and economical use of distributed storage resources. Space management attributes HSM: Attributes contained in a Management Class that specify whether automatic migration is allowed for a file, whether selective migration is allowed for a file, how many days must elapse since a file was last accessed before it is eligible for automatic migration, whether a current backup version of a file must exist on your migration server before the file can be migrated, and the ADSM storage pool to which files are migrated. In fact, most of the attributes in a 'DEFine MGmtclass' and 'UPDate MGmtclass' are for HSM. 
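The backup-time behavior described above (sockets silently passed over) can be pictured with a small directory walker that, like the client, stats each object and skips anything that is a socket. This is an illustrative sketch, not actual client code:

```python
import os
import socket
import stat
import tempfile

def backup_candidates(top):
    """Yield regular files under 'top', skipping socket files,
    the way the v3+ client does during Backup."""
    for dirpath, dirnames, filenames in os.walk(top):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mode = os.lstat(path).st_mode
            if stat.S_ISSOCK(mode):
                continue    # sockets are skipped, per the client README
            yield path

# Demonstration: one plain file and one Unix-domain socket file.
d = tempfile.mkdtemp()
open(os.path.join(d, "plain.txt"), "w").close()
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.bind(os.path.join(d, "daemon.sock"))      # creates a socket file

print(sorted(os.path.basename(p) for p in backup_candidates(d)))
# -> ['plain.txt']
```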
Space management for Windows See: HSM, for Windows Space management information (HSM) 'dsmmigquery FSname' Space management settings Settings that specify the stub file size, quota, age factor, size factor, high threshold, low threshold, and the premigration percentage for a file system. A root user selects space management settings when adding space management to a file system or when updating space management. Space Management Technique (HSM) Management Class specification (SPACEMGTECHnique) governing HSM file migration... See: SPACEMGTECHnique Space monitor daemon HSM: Daemon (hsmsm) that checks space usage on all file systems for which space management is active, and automatically starts threshold migration when space usage on a file system equals or exceeds its high threshold. How often the space monitor daemon checks space usage is determined by the CHEckthresholds option in your client system options file. In addition, the space monitor daemon starts reconciliation for your file systems at the intervals specified with the RECONCILEINTERVAL option in your client system options file. Space reclamation See "Reclamation". Space used by clients (nodes) on all volumes 'Query AUDITOccupancy [NodeName(s)] [DOmain=DomainName(s)] [POoltype=ANY|PRimary|COpy]' Note: It is best to run 'AUDit LICenses' before doing 'Query AUDITOccupancy' to assure that the reported information will be current. Space used on a volume 'Query Volume' Space used in storage pools, query 'Query OCCupancy [NodeName] [FileSpaceName] [STGpool=PoolName] [Type=ANY|Backup|Archive| SPacemanaged]' .SpaceMan Hidden directory in a space-managed (HSM) file system, containing files: candidates: list of migration candidates. Created by 'dsmreconcile -c FSname' fslock.pid: PID of a dsm process which is using the file system, e.g. dsmautomig, dsmreconcile, etc. orphan.stubs: Names files for which a stub file exists, but no migrated file; from reconciliation. status: symlink to point to file which records stats.
premigrdb.dir, premigrdb.pag: the premigrated files database, accessed via dbm_* calls. logdir: Directory to record info about files in the process of migrate or recall. Ref: HSM Clients manual. This hidden directory is implicitly excluded from space management. SpaceMan The Space Management component of ADSM, more commonly known as HSM, which is an optional feature. Started by /etc/inittab's "adsmsmext" entry invoking /etc/rc.adsmhsm . SPACEMGTECHnique (HSM) MGmtclass operand governing HSM file migration... AUTOmatic says that files may migrate automatically or by selective command; SELective says only by selective command; NONE says no migration allowed. Default: NONE, as per the usual customer case of HSM not being installed. Check via client 'dsmmigquery -M -D' command. See also: Space Management Technique Span volumes, files that, find In general: SELECT * FROM CONTENTS WHERE SEGMENT>1 For specific volumes: Query CONtent VolName COUnt=1 Query CONtent VolName COUnt=-1 to see the first, and last, files on volumes suspected of harboring known spanners. See also: Segment Number; Skipped files; Spanning Spanning TSM fills tapes as much as possible, which means that as it encounters EOV when writing a file, it will split the file at that point and continue writing the remainder of it on another volume. Each piece of the file is called a Segment. Experience shows a high probability that the last file on a volume will span to the next volume. See "Filling" for ramifications for Filling volumes. Sparse files, handling of Sparse files are those which contain empty space; that is, portions of the file are implicit per positional addressing and consume no disk space. (In Unix, at least, there is no inode or other flag to identify a file as sparse: sparseness is implicit, and not always deterministic.)
Sparse files are in general problematic in that any ordinary reading of the file will result in the full, effective content of the file being presented, with the internal skip space being expanded with padding characters (bytes whose value is 0). TSM tries to properly detect sparse files and handle them appropriately: At Backup time: The TSM client attempts to discern if the file is sparse, and sets a Sparse flag if it believes that the file is sparse. At Restore time: The Sparse flag is normally honored, and restoral proceeds accordingly... If a block of a file consists only of bytes with value zero, this block is not restored as a physical disk block. For sparse files with large holes in the address space this obviously improves restoral performance. However, all this data scanning is costly, and aggravates restoral time for a sparse file with minimal holes. Further: the Backup client may have misinterpreted a plain file as sparse and so flagged it in TSM server storage, which substantially prolongs restoral time. This can be remedied by setting the undocumented dsm.opt option MAKESPARSEFILE NO or using -makesparsefile=no on the CLI. Per the 4.1 Solaris Readme (only): If files have been backed up as sparse files and need to be restored as normal files (non-sparse files), this should be done by the internal (undocumented) option MAKESPARSEFILE NO in dsm.opt or -makesparsefile=no, which is supported by the command line client only. The option is only necessary for files where the existence of physical disk blocks is required. This is the case in some rare situations for system files like ufsboot which is needed during boot time. The boot file loader of the operating system accesses physical disk blocks directly and does not support sparse files. See also "Sparse file processing" in recent server README files. Historical note: ADSMv2,3 supported an intentionally undocumented option called MAKESPARSEFILE which explicitly requested that sparse files be restored as sparse.
APAR IC19767 notes that the client now handles this automatically. Sparse files, handling of, Windows Backup: TSM will back up a sparse file as a regular file if Client compression is off (COMPRESSIon No). Enable file compression (COMPRESSIon Yes) when backing up sparse files to minimize network transmission time and to maximize server storage space. (However, if your tape drive hardware does compression, the only savings will be network transmission time.) Restore: When restoring sparse files to a non-NTFS file system, set the TSM server communication time out value (COMMTimeout, and even IDLETimeout) to the maximum value of 255 to avoid client session timeout. Splitting files across volumes See: Span SpMg Space Management (HSM) file type, in Query CONtent report. Other types: Arch, Bkup. Spreadsheet, import TSM db data into See ODBC in Appendix A of the TSM Technical Guide redbook. SQL See: Select SQL: Re-cast Like CAST(BYTES_SENT AS DECIMAL(18,0)) SQL: Selecting from multiple tables In one Select you can retrieve column entries from tables via specificity: using "Tablename.Columnname" format to explicitly identify your objectives. Sample: SELECT DISTINCT contents.node_name, contents.volume_name, archives.archive_date, archives.description FROM contents,archives ... SQL: Equal symbology = SQL: Greater Than symbology > SQL: Greater Than Or Equal To symbology >= SQL: Less Than symbology < SQL: Less Than Or Equal To symbology <= SQL: Not Equal symbology <> SQL: NOT LIKE To filter out things not matching a pattern. For example, to omit storage pool names which end with the string "OFFSITE", code: STGPOOL_NAME NOT LIKE '%OFFSITE' where % is a wildcard character. SQL: Experiment with expressions The Select statement is a generalized thing, and you can take advantage of that to experiment with the formulation of expressions.
Unlike real-world SQL, the TSM Select statement requires that a table be specified with FROM: you can supply a placebo table which always has only one entry, to yield just one row in your output. Such a table is Log. Here's an example to display the current timestamp: SELECT CURRENT_TIMESTAMP FROM LOG Here's an example to display the timestamp of three days ago: SELECT CURRENT_TIMESTAMP-(3 DAYS) FROM LOG SQL: Sorting On the Select statement, use the ORDER BY parameter specification, specifying the sort column by name or relative numeric position. SQL: String encoding Enclose in single quotes, like 'Joe'. SQL: Wildcard character Is the percent sign (%), to represent zero or more occurrences of any possible character (number, letter, or punctuation). See sample in: SQL: NOT LIKE SQL, last 24 hours Here's an example of seeking table entries less than a day old where the table has a timestamp column named "DATE_TIME": ... WHERE DATE_TIME>(CURRENT_TIMESTAMP-(1 DAY)) SQL, number format Select command output does not conform to server NUMberformat settings. There is no provision for special formatting of numbers. Your only recourse is to post-process the results. SQL, rounding result Do like: SELECT NODE_NAME, CAST(SUM(CAPACITY * (PCT_UTIL/100)) AS DECIMAL(yy,z)) as Percent_Utilized FROM FILESPACES GROUP BY NODE_NAME where yy is the total number of digits (the precision), and z is the number of places to the right of the decimal point. Note that places to the right are padded with zeros; places to the left are not. SQL, specify a set to match in Use the IN keyword, like: "select ... where stgpool_name in ('BACKUPPOOL', 'TAPEPOOL', 'ANOTHERTAPEPOOL')". SQL BackTrack Non-Tivoli backup product from BMC Software, for backing up various database types. To back up to TSM, it uses the TSM API to store backups of physical files or logical exports using pseudo-filenames that include time stamps, so every time you do an SQL BackTrack backup ADSM is given a new set of unique objects.
Thus there is never more than one 'version' of a 'file'. So versions-exists can safely be set to 1 and retain-extra can be set to zero (recall that retain-extra affects the retention of the 2nd, 3rd, etc. oldest versions of a file, of which there are none in this case). The versions-deleted is set to 0 so that when SQL BackTrack tells ADSM to delete an object, which it does after the two weeks you've set it to, ADSM will mark it for expiration the next time expiration is run (within 24 hours typically). The retain-only is set to zero for the same reason; once SQL BackTrack decides to delete the file, it is of no use to retain that last-good-version any longer. Ref: www.bmc.com SQL backup See: TDP for Microsoft SQL. SQL column width See: SELECT output, column width; Set SQLDISPlaymode SQL efficiencies Instead of using the construct: columname='A' or columname='B' use: columname in ('A', 'B') The latter will run in about half the time. SQL in ADSMv3 Used via 'Select' command. See available information by doing: SELECT * FROM SYSCAT.TABLES SELECT * FROM SYSCAT.TABLES WHERE - TABNAME='___' Shows table names, column count, index column count, whether unique, and table description. See example under Select in the Admin Ref. SELECT * FROM SYSCAT.COLUMNS SELECT * FROM SYSCAT.COLUMNS WHERE - COLNAME='___' Shows table name, column name, column number, type, length, description SELECT * FROM SYSCAT.ENUMTYPES Shows type index, name, values, description Or use the Web Admin and run the script Q_TABLES, then run Q_COLUMNS with the desired table name as parameter.
You can use the following technique to send the output to a file, with commas between elements, for absorbing into your favorite spreadsheet program for manipulation and pretty printing: dsmadmc -id=id -pa=password -comma -out="syscat.tables.csv" "select * from syscat.tables" dsmadmc -id=id -pa=password -comma -out="syscat.columns.csv" "select * from syscat.columns" dsmadmc -id=id -pa=password -comma -out="syscat.enumtypes.csv" "select * from syscat.enumtypes" Ref: Admin Guide; "Using the ADSM SQL Interface", http://www.uni-karlsruhe.de/~rz57/ADSM/3rd/handouts/raibeck.ps (a PostScript file, to print or view with a utility like the free Ghostscript or GSview) SQL node choice IBM recommends that you do NOT use the same ADSM node for the base ADSM client and SQL Agent. The SQL Agent has its own special policy requirements due to the nature of the design, i.e. each backup object is always unique. There can also be coordination issues when defining the various needed schedules. IBM also recommends that you keep the options file separate. In fact, the design of the GUI requires that the options file be kept in the SQL Agent install directory. You can use the same node, but we do not recommend it. SQL report formatting See: SELECT output, column width; Set SQLDISPlaymode; SQL column width SQL samples Shipped with the server is a scripts.smp file, containing a lot of interesting examples of SQL coding for TSM. These sample scripts can be visually inspected and adapted; or loaded at TSM install time via 'dsmserv runfile scripts.smp', or loaded anytime thereafter into a running server via 'macro scripts.smp'. SQL settings See: Set SQLDATETIMEformat; Set SQLDISPlaymode; Set SQLMATHmode; Query SQLsession SQL string comparisons Are done on a byte-for-byte basis, so they are case sensitive. Use the LCASE and UCASE functions as needed to force a name to either.
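Many of the Select constructs in the surrounding entries (the % wildcard, NOT LIKE, IN) behave the same way in any small SQL engine, so you can experiment with them outside TSM. Here is a sketch using Python's bundled sqlite3; note that TSM string comparisons are case-sensitive whereas SQLite's LIKE is not, so all-uppercase values are used here to keep the two consistent. The table and values are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE volumes (volume_name TEXT, stgpool_name TEXT)")
con.executemany("INSERT INTO volumes VALUES (?,?)",
                [("A00001", "BACKUPPOOL"),
                 ("A00002", "TAPEPOOL"),
                 ("A00003", "BACKUP_OFFSITE")])

# NOT LIKE with the % wildcard: omit pools whose names end in OFFSITE.
onsite = [r[0] for r in con.execute(
    "SELECT stgpool_name FROM volumes "
    "WHERE stgpool_name NOT LIKE '%OFFSITE' ORDER BY 1")]

# IN to match a set, instead of chained OR comparisons.
chosen = [r[0] for r in con.execute(
    "SELECT volume_name FROM volumes "
    "WHERE stgpool_name IN ('BACKUPPOOL', 'TAPEPOOL') ORDER BY 1")]

print(onsite)   # -> ['BACKUPPOOL', 'TAPEPOOL']
print(chosen)   # -> ['A00001', 'A00002']
```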
SQL TDP See: TDP for Microsoft SQL Server SQLDISPlaymode See: Set SQLDISPlaymode /SQLSECURE TDP for SQL V1 function which allows use of Windows "authentication" (userid and password) to communicate with the SQL Server. TDP for SQL V2 improves upon this by allowing SQLUSERID and SQLPASSWORD to be stored in the Registry so that both GUI and command-line can be used without having to enter the userid/password; and you also have the choice of using Windows "authentication" for communicating with the SQL Server. See the "/SQLAUTHENTICATION=INTEGRATED" option. ssClone An internal server facility created so as to avoid an HSM file recall during a backup operation, by performing an "inline server copy". SSD IBM: Storage Systems Division; or Storage Subsystems Division; or Storage Subsystems Development SSD RMSS device driver IBM higher-end tape drive opsys driver software, as for the 3590 and 358x tape drive series, with different names for different platforms, such as Atape, atdd, IBMtape, IBMUltrium, IBMmag. Found at: FTP://ftp.software.ibm.com/storage/devdrvr SSL See: HTTPS Staggered start for client schedules See: Schedule, Client Stale Shows up as Copy Status in 'Query DBVolume' or 'Query LOGVolume' command output, indicating that a Vary On is in progress to bring a volume back into service. Start "Sess State" value from 'Query SEssion' saying that the session is starting. See also: Communications Wait; Idle Wait; Media Wait; RecvW; Run; SendW Start-stop In tape technology, refers to providing data to a tape drive irregularly such that recording must stop, halting the transport of the media until data is again available for recording, whereupon the media is again set into motion. This is the kind of recording most frequently found in reality. Drives which exhibit inferior start-stop performance can greatly prolong TSM backup operations.
The underlying problem with file system backup and drives with mediocre start-stop performance is in the "sputtering" way that Backup will send files as it encounters them in traversing the file system. Enlarged transaction buffering will help with this. A frontal disk storage pool serving as a consolidation buffer also does the trick. A more labor-intensive method would be to have a non-TSM (i.e., home-grown) tool run through the file system to collect the names of all the candidate files and then initiate a backup with the -FILEList option, to in effect cause streaming, eliminating all the time gaps in candidate discovery ("squeeze the air out"). It's a more desperate measure, but it may suit some installations. Contrast with: Streaming See also: Backhitch Start-up window for client schedules See: Schedule, Client State MOVe MEDia command states are MOUNTABLEInlib and MOUNTABLENotinlib (q.v.). Not to be confused with volume Status. STATE SQL: Column in BACKUPS table, identifying the backup state: 'ACTIVE_VERSION' or 'INACTIVE_VERSION' See also: Active files, identify in Select; Inactive files, identify in Select STARTTime A 'DEFine SCHedule' operand. It is by schedule, not by node. The only way to give a node a unique starttime would be to define a schedule and have only that node associated with it. Static A copy group serialization value that specifies that an object must not be modified during a backup or archive operation. If the object is in use during the first attempt, *SM will not back up or archive the object. See serialization. Contrast with Dynamic, Shared Static, and Shared Dynamic. STATUS table TSM SQL table containing most of the information contained in a Query STatus report (but not server version/release). 
Columns: SERVER_NAME, SERVER_HLA, SERVER_LLA, SERVER_URL, SERVER_PASSSET, INSTALL_DATE, RESTART_DATE, AUTHENTICATION, PASSEXP, INVALIDPWLIMIT, MINPWLENGTH, WEBAUTHTIMEOUT, REGISTRATION, AVAILABILITY, ACCOUNTING, ACTLOGRETENTION, LICENSEAUDITPERIOD, LASTLICENSEAUDIT, LICENSECOMPLIANCE, SCHEDULER, MAXSESSIONS, MAXSCHEDSESSIONS, EVENTRETENTION, CLIENTACTDURATION, RANDOMIZE, QUERYSCHEDPERIOD, MAXCMDRETRIES, RETRYPERIOD, SCHEDMODE, LOGMODE, DBBACKTRIGGER, ACTIVERECEIVERS, CONFIG_MANAGER, REFRESH_INTERVAL, LAST_REFRESH, CROSSDEFINE. STATUS (volume status) The status of volumes, as in the underlying database fields reported by the customer-visible Media and Volumes tables. Value is one of: EMPty, FILling, FULl, OFfline, ONline, PENding Status info, get 'Query STatus' Status values See: dsmc status values STAtusmsgcnt TSM server option specifying the number of records (times 1000) that will be processed between status messages during DSMSERV DUMPDB and DSMSERV LOADDB commands. Stem See: Stub STGDELETE In 'Query VOLHistory', Volume Type to say that volume was a sequential access storage pool volume that was deleted. Also under 'Volume Type' in /var/adsmserv/volumehistory.backup . STGNEW In 'Query VOLHistory', Volume Type to say that volume was a sequential access storage pool volume that was added. Also under 'Volume Type' in /var/adsmserv/volumehistory.backup . STGPOOLS SQL table of server storage pools. Columns: STGPOOL_NAME, POOLTYPE, DEVCLASS, EST_CAPACITY_MB, PCT_UTILIZED, PCT_MIGR, PCT_LOGICAL, HIGHMIG, LOWMIG, MIGPROCESS, NEXTSTGPOOL, MAXSIZE, ACCESS, DESCRIPTION, OVFLOCATION, CACHE, COLLOCATE, RECLAIM, MAXSCRATCH, REUSEDELAY, MIGR_RUNNING, MIGR_MB, MIGR_SECONDS, RECL_RUNNING, RECL_VOLUME, CHG_TIME, CHG_ADMIN, RECLAIMSTGPOOL, MIGDELAY, MIGCONTINUE STGREUSE In 'Query VOLHistory', Volume Type to say that volume was a sequential access storage pool volume that was reused. Also under 'Volume Type' in /var/adsmserv/volumehistory.backup .
This Type is unusual, and has been associated with ANR0102E problems. STK Short id for Storage Technology Corp. http://www.storagetek.com/ They have a Customer Resource Center for the submission of questions. STK 9710 APAR IX75639 advised of ANR8420E I/O errors occurring on STK9710 while accessing DLT 7000 drive: errpt indicates SCSI Adapter errors. Correct by enabling the FAST DRIVE LOAD option on the STK 9710 Lib, which seems to be a requirement for this Lib/Drive to work with ADSM. (Set the FAST DRIVE LOAD via the front panel.) STK 9730 A model in the "TimberWolf" family. Is a rack-mountable, SCSI-based automated library about the size of a workstation. Without tape drives, the 9730 weighs 50 kg (110 lbs.) and is the least expensive library in the series, available with 18 or 30 cells, and 1-4 DLT drives. May be driven by ACSLS. Customer experience varies: some find problematic hardware with DLT7000 drives, as of 9/98. See "DLT7000". STK 9840 StorageTek tape drive technology, using cartridge of same form factor as IBM 3480/3490/3590, which is to say 1/2" tape, but dual-hub (diagonally opposite). Used in STK PowerHorn lib. Customers report this technology to be "rock solid". Capacity: 20 GB basic, 60 GB compressed (LZ1 method, 3:1) Recording method: linear serpentine, 288 tracks, servo tracking Load time: 12 seconds to 1st data transfer Average access time: 11 seconds Throughput: 10 MB/sec sustained. Tape speed: read/write @ 2 m/s; search @ 8 m/s Rewind time: 16 s max Cartridge: essentially square; mid-point load; dual hub (dual spool), on corner-to-corner diagonal of cartridge; metal particle tape. TSM definition: DEFine DEVclass DEVType=ECARTridge FORMAT=9840|9840C www.storagetek.com/products/tape/9840/ STK L700e StorageTek floor-standing tape library in a silo design. 678 cartridge slot capacity, extendable to 1344. 
Supports up to 12 StorageTek high-performance T9840 and/or high-capacity T9940 tape drives or up to 20 DLT, SDLT or LTO Ultrium tape drives; or mix any of these drives in different combinations. There is a web interface to the library. Slot 10 of a STK L700 Library is the upper import/export slot of its bulk station. Connectivity: a native 2Gb Fibre Channel optical interface. AIX handles as: Resource Name: lb0 Resource Class: library Resource Type: TSM-FCSCSI Storage Agent LAN-free backups, introduced in TSM 3.7, relieve the load on the LAN by introducing the Storage Agent. This is a small TSM server (without a Database or Recovery Log) which is installed and run on the TSM client machine. It handles the communication with the TSM server over the LAN but sends the data directly to SAN-attached tape devices, relieving the TSM server from the actual I/O transfer. Ref: TSM 5.1 Technical Guide See: Lan-Free Backup; Server-free Storage Agent and logging/accounting The Storage Agent operates unto itself, and does not produce logs or accounting records, and so there are no entries in either the TSM server Summary table or accounting records to identify Storage Agent actions. As of TSM 5.2 there exists TSM server option DISPLAYLFINFO to cause Storage Agent identification. With it, records for Storage Agent activity will appear in the Summary table and TSM server accounting records, tagged with "NodeName(StorageAgentName)" instead of just NodeName. This allows you to benefit from further information and distinguish ordinary, direct client-server sessions from those performed through a Storage Agent. Storage pool A named set of storage volumes that is used as the destination for Backup, Archive, or HSM migrate operations. May be arranged in a hierarchy, for downward migration according to age. The storage pool is assigned to a Devclass. Can also be Copy Storage Pools to provide backup of one or more levels of the hierarchy.
Can be an AIX file, prepped with the dsmfmt cmd, which serves as a random-access storage pool; or a raw logical volume. Files within a given storage pool are not segregated by management class: files belonging to different management classes may exist on the same volume. Is target of: DEFine COpygroup ... DESTination=PoolName and: DEFine STGpool ... NEXTstgpool=PoolName and: DEFine Volume PoolName VolName Note that storage pools cannot span libraries. Storage pool, assign You do 'DEFine STGpool' to assign it to a Devclass; then do 'DEFine COpygroup' to make it part of a Copy Group in a Management Class, which is under a Policy Set, which needs to be Activated. Storage pool, back up Have a Copy Storage Pool, and perhaps nightly issue the command: 'BAckup STGpool PrimaryPoolName CopyPoolName [MAXPRocess=N] [Preview=Yes|VOLumesonly]' Storage pool, Copy Storage Pool, See: DEFine STGpool (copy) Storage pool, disk You may, of course, allocate storage pools on disk. In *SM database restoral, part of that procedure is to audit any disk storage pool volumes; so a good-sized backup storage pool on disk will add to that time. Considerations: - Because there is no reclamation for random access storage pools: - disk fragmentation is a concern; - aggregates are not rebuilt, so as objects within an aggregate expire, that space is not freed until all objects in the aggregate have expired. This can cause inefficient utilization of the disk space over time. - FILE device classes could be used, but represent configuration and performance concerns. - While such an environment is technically possible, it is not the intended *SM usage model, and IBM does not recommend it at this time. See: Backup through disk storage pool Storage pool, disk, define See: DEFine STGpool (disk) Storage pool, disk, performance There have been reports that reading from a disk storage pool is done a file at a time and not buffered, "because it is a random access device".
This dramatically impedes the performance of BAckup STGpool and Reclamation. Another drawback from using disk storage pools is that they nullify the advantages of multi-session restore. From the Client manual, in the description of the RESOURceutilization option: "If all of the files are on disk, only one session is used. There is no multi-session for a pure disk storage pool". See also: Multi-Session Restore Storage pool, HSM, define 'DEFine MGmtclass MIGDESTination=StgPl' Default destination: SPACEMGPOOL. Storage pool, HSM, update 'UPDate MGmtclass MIGDESTination=StgPl'. If this updated MGmtclass is in the active policy set, you will need to re-ACTivate the POlicyset for the change to become active. Storage pool, last used date/time Alas, *SM does not allow customers to determine when the storage pool was last used for reading or writing: there is no command to query for this information. Storage pool, number of files in, query 'Query OCCupancy [NodeName] [FileSpaceName] [STGpool=PoolName] [Type=ANY|Backup|Archive| SPacemanaged]' Storage pool, outside library See: Overflow Storage Pool; OVFLOcation Storage pool, reclaimable volumes SELECT VOLUME_NAME,STGPOOL_NAME,- PCT_UTILIZED FROM VOLUMES WHERE - STATUS='FULL' AND PCT_RECLAIM>50 Storage pool, rename ADSMv3: 'REName STGpool PoolName NewName' Storage pool, restore 'RESTORE STGpool PrimaryPoolName' Storage pool, skip during writing and go to next in hierarchy You can cause this to happen by making its ACCess=READOnly; or change the MAXSize to a silly, low value. See: UPDate STGpool Storage pool, space used 'Query OCCupancy [NodeName] [FileSpaceName] [STGpool=PoolName] [Type=ANY|Backup|Archive| SPacemanaged]' Storage pool, tape, define See: DEFine STGpool (tape) Storage pool, tape, prevent usage 'UPDate DEVclass DevclassName MOUNTLimit=0' Storage pool, volumes in 'Query Volume STGpool=Pool_Name' Storage Pool Count As seen in Query DEVclass report.
Is the number of storage pools that are assigned to the device class, via 'DEFine STGpool'. Storage pool device class A storage pool is defined with a single device class. Thus, it is not possible to have both FILE and tape participate in the stgpool, as you might want to do to effect a copy storage pool where you have only a single tape drive. Storage pool disk volume which no longer exists, delete In the history of a TSM server you might end up with some storage pool disk volumes which physically no longer exist, but which are still known to TSM. They are non-existent, and in TSM are offline. How do you clean them out? Trying to create an imposter volume so that you can delete it is virtually impossible, because content simply doesn't match TSM expectations. A Delete Volume fails. One customer reports success in using Restore Volume: it restores some data and then deletes the old, original volume. Obviously, though, you want TSM administration procedures in place to avoid getting into this situation. Storage pool hierarchy, defining Use either 'DEFine STGpool' or 'UPDate STGpool' and use "NEXTstgpool=PoolName" to define the next storage pool down in the hierarchy. So if you had "diskpool" and "tapepool", you would define the latter to be the next level by doing: 'UPDate STGpool diskpool NEXTstgpool=tapepool' Storage pool logical volume, max size Under AIX 4.1, ADSM storage pool logical volumes are limited to 2GB in size, as are files, because of AIX programming restrictions. AIX 4.2 relieves that limit. Storage pool migration, query 'Query STGpool [STGpoolName]' Storage pool migration, set The high migration threshold is specified via the "HIghmig=N" operand of 'DEFine STGpool' and 'UPDate STGpool'. The low migration threshold is specified via the "LOwmig=N" operand.
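The high/low threshold interplay can be modeled in a few lines: migration begins when the pool's migratable percentage reaches HIghmig, and continues until it drops to LOwmig. A simplified, hypothetical model (the real server decides per pool and migration process; the hysteresis between the two thresholds is the point here):

```python
def migration_action(pct_migr, highmig, lowmig, running):
    """Decide whether storage pool migration should start, continue,
    or stop, given the pool's migratable-data percentage (Pct Migr)
    and the HIghmig/LOwmig thresholds. Simplified illustration only."""
    if not running:
        return "start" if pct_migr >= highmig else "idle"
    return "stop" if pct_migr <= lowmig else "continue"

print(migration_action(92, highmig=90, lowmig=70, running=False))  # -> start
print(migration_action(75, highmig=90, lowmig=70, running=True))   # -> continue
print(migration_action(68, highmig=90, lowmig=70, running=True))   # -> stop
```

The gap between the thresholds is deliberate: it keeps migration from thrashing on and off as the pool hovers near a single trigger value.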
Storage pool naming If you employ disciplined, methodical naming conventions for your storage pools, you will make your life a lot easier when it comes to performing administration, as various commands (e.g., Query MEDia) allow you to specify the storage pool name with wildcard characters. Example: You have a hierarchy of disk and tape for your three kinds of data, plus a local copy storage pool and an offsite pool... Disk: POLSET1.STGP_ARCHIVE_DISK POLSET1.STGP_BACKUP_DISK POLSET1.STGP_HSM_DISK Tape: POLSET1.STGP_ARCHIVE_3590 POLSET1.STGP_BACKUP_3590 POLSET1.STGP_HSM_3590 Copy: POLSET1.STGP_ARCHIVE_COPY POLSET1.STGP_BACKUP_COPY POLSET1.STGP_HSM_COPY Offsite: POLSET1.STGP_ARCHIVE_OFFSITE POLSET1.STGP_BACKUP_OFFSITE POLSET1.STGP_HSM_OFFSITE The commonality in the names facilitates the use of wildcards to seek, for example, full volumes in the Offsite pool set that can be ejected from your library and be sent offsite. Storage pool occupancy by node SELECT STGPOOL_NAME, - SUM(NUM_FILES) AS "Total Files", - SUM(PHYSICAL_MB) AS "Physical MB",- SUM(LOGICAL_MB) AS "Logical MB" - FROM OCCUPANCY - WHERE NODE_NAME='UPPER_CASE_NAME' - GROUP BY STGPOOL_NAME Storage pool space and transactions TSM has two basic media types for storing data: random (disk) and sequential (tape). Because of the different characteristics of the two types of media, TSM manages each differently, particularly when data is to move to the next storage pool... Disk volumes defined to a *SM storage pool have a fixed size, allowing the server to determine the capacity of the storage pool. Since these volumes are created and managed by TSM, it is able to determine at the beginning of a transaction whether there is enough space in the disk storage pool to contain the data to be stored. If there is not, the storage pool is approaching fullness, and migration should be run to move data to make room for new data entering the TSM storage hierarchy.
However, if migration is disabled, or the file exceeds the maximum file size allowed for files in the disk pool (MAXSize), TSM will move new data to the next storage pool in the hierarchy. This is only possible because TSM knows the capacity of the disk pool and manages the allocation of the disk volumes.
  Sequential storage media and storage pools are different in several ways. First, sequential (tape) media is variable length, and its drives are capable of compression to increase the amount of data each tape can store. This prevents TSM from knowing the absolute capacity of the storage pool or tapes, and so when the transaction begins it is not possible to determine how much data a storage pool tape will receive. TSM can only check to ensure that the file does not exceed the maximum file size for this sequential storage pool. If TSM is able to allocate a volume, it proceeds to store data on it. Secondly, sequential storage pools tend to be open-ended, or are capable of adding volumes to the pool. Again, TSM cannot know how much these volumes are capable of holding, and so cannot determine if the transaction data will fit on the volume. However, TSM is typically able to continue storing data if the volume fills, by allocating another sequential volume. Again, as with disk storage pools, if the sequential storage pool becomes full, migration will move data to the next pool to make room for new data.
  FILE volumes are a combination of disk and sequential media. TSM allocates these volumes on disk media but treats them as sequential. Hence, TSM does not presume to know the amount of space of the scratch file volumes. Typically, those using the FILE devclass will allow enough scratch volumes to handle their daily workload, and allow migration to ensure enough space is available in the pool.
If there are files larger than the FILE volumes and it is necessary to store the data in the next storage pools, then it is recommended that the storage pool be changed to a Disk pool rather than a File pool.
  Ref: APAR IY00820
  Msgs: ANS1329S
  See also: MAXSize

Storage pool volume, query
  'Query Volume Vol_Name'

Storage pool volume, long gone, delete
  If you're fully following TSM procedures and no server defects affect operations, you should not encounter situations where you end up with a phantom storage pool volume: one that the storage pool thinks it still has, but that has long been gone from the TSM system. If you do end up in that situation, here are possible ways to proceed:
  If a disk volume:
  - Halt the TSM server;
  - From the server directory do: 'dsmserv auditdb diskstorage fix=yes'
  If a removable volume:
  - 'CHECKOut LIBVolume ... FORCE=Yes'
  Later, to bring back:

Storage pool volumes, count
  SELECT STGPOOL_NAME,count(*) FROM -
  VOLUMES GROUP BY STGPOOL_NAME

Storage pool volumes, how used
  There is no definitive information on how TSM uses multiple volumes in a storage pool, as during Backup. Users report a "write each file to one volume" pattern: when the file size is huge (e.g., a full DB2 backup), disk volumes get filled one at a time; but in the case of a number of modest-sized files, TSM seems to spread them over all the volumes.

Storage pool volumes, query
  'Query Volume [STGpool=Pool_Name]'
  SELECT STGPOOL_NAME, COUNT(*) AS -
  "# Vols." FROM VOLUMES GROUP BY -
  STGPOOL_NAME

Storage pool volumes and performance
  For DISK (random access) volumes, *SM spreads its activity out over multiple volumes, so you're better off with more small disks than a few larger ones. *SM creates a (LvmDiskServer) thread for each volume (see "Processes, server (dsmserv's)"), so you get more parallelization. The size of your aggregates, as governed by TXNGroupmax and TXNBytelimit, affects the speed of operation across storage pools.
See also: MOVEBatchsize

Storage pools, number of
  SELECT COUNT(STGPOOL_NAME) AS -
  "Number of storage pools" FROM STGPOOLS

Storage pools, query
  'Query STGpool'
  Reports pool names, device class, capacity, %utilization, migration, and next storage pool.

Storage pools and database backup
  Do not use macros to schedule backup of your storage pools and database, because they would inappropriately run in parallel (in that the Backup server command generates a parallel process). Instead, do the following in this order:
  1) Back up your storage pools
  2) Update the volumes to change the access to OFfsite for your newly-created copy storage pool volumes
  3) Back up your database
  4) Back up your devconfig and volume history files (external to ADSM)

StorageTek 9710
  With ADSM V2, a 3rd-party product called ADSM Enhanced Server was required to support the 9710: running STK's ACSLS (which is server software for the robot), ADSM talks to ACSLS via the Enhanced Server code. Starting with ADSM V3, you can run an STK 9710 with ADSM using only the IBM drivers, OR you can use ADSM V3 and talk to the 9710 via ACSLS.

StorageTek 9710 and 9714, labeling tapes (I/O errors)
  If having problems labelling tapes, check that the library is in "Fastload" mode, which ADSM needs.

StorageTek 9710/9714 Library Audit time
  Make sure FAST LOAD is enabled on the 9710 to minimize AUDit LIBRary time (it can cause mount processing delays if it is disabled). And use the Checklabel=barcode option on the AUDit LIBRary command so that it won't mount each tape and read the header. The audit then takes only 1-2 minutes at most.

StorageTek 9730
  As of 1998, StorageTek had software available so that ADSM would see the library as a 9710.

Stored Size
  In ADSMv3 'Query CONtent ... Format=Detailed': The size of the physical file, in bytes. If the file is a logical file that is stored as part of an aggregate, this value indicates the size of the entire aggregate.
The inability to see the actual size of files from the server is a major annoyance in being able to produce reports and examine problems. This information SHOULD be possible to get from the server: after all, when you do a query from the client you certainly see actual file sizes.

StorWatch
  1998 IBM product: storage resource management software products integrated with storage hardware.

Streaming
  In tape technology, refers to providing data to a tape drive continuously such that recording is continuous: the media never stops moving. This is relatively rare in reality, except in applications such as media copying and real-time data acquisition (e.g., scientific experiments and field studies). Contrast with: Start-stop

STRMNTBRMS
  The BRMS maintenance task, in the backup of Domino data on AS400/iSeries, that handles expiration of backup data etc.

Stub file
  A file that replaces the original file on a local file system when the file is migrated to ADSM storage. A stub file contains the information necessary to recall a migrated file from the server storage pool (HSM file management overhead). This information consumes 511 bytes. Because file systems are usually allocated in blocks larger than that, HSM exploits the blksize-511 byte area to store a copy of the leading data from the (migrated) file, for convenience of limited inspection via operating system commands like the Unix 'file' and 'head' commands.
  See also: dsmmigundelete; Leader data

Stub file size (HSM)
  The size of a file that replaces the original file on a local file system when the file is migrated to ADSM storage. The size specified for stub files determines how much leader data can be stored in the stub file. The default for stub file size is the block size defined for a file system minus 1 byte. Define via 'dsmmigfs -STubsize=NNN'. The stub contains information ADSM needs to recall the file, plus some amount of user data.
ADSM needs 511 bytes, so the amount of data which can also reside in the stub is the defined stub size minus the 511 bytes. When you do a dsmmigundelete, ADSM simply puts back enough data to recreate the stubs, with 0 bytes of user data (since you don't want ADSM going out to tapes to recover the rest of the stub). When the file gets recalled, then migrated again, we once again have user data that we can leave in the stub, so the stub size goes back to its original value.

Stub files, in restoral
  -RESToremigstate=Yes (default) will restore the files only as stubs.

Stub files, recreate
  'dsmmigundelete FSname'

Sub-file backups
  A.k.a. "Adaptive differencing" and "adaptive sub-file backup". Available as of TSM 4.1, in the Windows client (intended for laptop computer users), and supported by all TSM 4.1 servers. Operates by creating a /cache subdirectory under the /baclient directory. (Make sure you exclude that from backups!) Made possible by doing 'Set SUBFILE' on the TSM server. Can control what gets backed up by using include.subfile, exclude.subfile.
  Caveats:
  - Limited to 2 GB files, max.
  - If the delta file grows beyond a fixed size of the base, the file is backed up again to create a new base, which is a network load.
  - Reduces the amount of data backed up, but restorals are still voluminous: a restore requires the base and the last delta file - which leads to extra tape mounts without collocation.
  - Backups mysteriously stop when the client subfile cache becomes corrupted. Fix that by deleting the entire cache directory and letting the client build a new one on the next backup.
  - The stats in dsmsched.log show the size of the original file, not the size of the subfile that actually got backed up.
  - Only the backup-complete stats will reveal how much data was actually sent.
  See also: Adaptive differencing; Set SUBFILE

SUBFILEBackup (-SUBFILEBackup=)
  V4 Windows client option for the options file or command line, specifying whether adaptive subfile backup is used.
(This option can also be defined on the server.)
  Syntax: SUBFILEBackup No | Yes
  Default: No

SUBFILECACHEPath (-SUBFILECACHEPath=)
  V4 Windows client option for the options file or command line, specifying the path where the client cache resides for adaptive subfile backup processing. The cache directory houses reference files and the small database which manages them. If a path is not specified, TSM creates a path called \cache under the directory where the TSM executables reside. The parent pathname of the pathname specified by the subfilecachep option must exist. For example, if c:\temp\cache is specified, c:\temp must already exist. Note: This option can also be defined on the server.
  Syntax: SUBFILECACHEP Path_Name

SUBFILECACHESize (-SUBFILECACHESize=)
  V4 Windows client option for the options file or command line, specifying the client cache size for adaptive subfile backup. Note: This option can also be defined on the server.
  Syntax: SUBFILECACHES Size_in_MB
  where the size can be from 1 - 1024 MB.
  Default: 10 (MB)

SUbdir (-SUbdir=)
  Client User Options file (dsm.opt) option or dsmc option to specify whether directory operations should include subdirectories, on commands: ARCHIVE, Delete ARchive, Query ARCHIVE, Query BACKUP, RESTORE, RETRIEVE, SELECTIVE.
  Note: When restoring a single file, DO NOT use -SUbdir=Yes, because it may cause the directory tree to be restored (see APAR IC21360).
  Specify: Yes or No
  Default: No

SUbdir, query
  'dsmc Query Options' in ADSM or 'dsmc show options' in TSM; look for "subdir".

Subquery
  An SQL operation where a Select is done within a Select: the internal Select is a Subquery. The Subquery is like a subroutine, and as such must have the same number and type of columnar results as the Where condition which calls it. The Subquery extracts a set of data from the table it processes, from which the higher query can select elements according to its query.
See also: Join

Subscription
  See: Enterprise Configuration and Policy Management

SUBSTRing
  SQL function. Format: SUBSTR(column_name, first_position, length) = 'string'. You can use this in SELECT or in WHERE. The separators are always ",", and you may need to put a blank after each comma.

SUG
  Abbreviation for an APAR closure reason, indicating that it was closed as a Suggestion for future functionality. Some issues in software may extend beyond the current architecture, or into other areas of the product, and cannot feasibly be addressed as an isolated work item. Instead, they will be addressed in the longer-term development of the product, to be worked into the overall architecture in a careful, deliberated manner, with all parties in the development area aware.

SUM
  SQL statement to yield the total of all the rows of a given numeric column. Example:
  SELECT SUM(NUM_FILES) AS -
  "Number of filespace objects" FROM OCCUPANCY
  See also: AVG; COUNT; MAX; MIN; ORDER BY

SUMMARY table
  SQL table added in TSM 3.7, as described in that server's Readme file. The activity summary table contains statistics about each client session and server process, saved for as many days as specified in Set SUMmaryretention (q.v.). It is a summary of the whole session - which contrasts with TSM accounting records, where there may be multiple threads in a session and an accounting record for each, which makes for separate pieces of information.
  Table contents:
  1. START_TIME Start Time
  2. END_TIME End Time
  3. ACTIVITY Process or Session Activity Name: 'EXPIRATION', 'FULL_DBBACKUP', 'MIGRATION', 'BACKUP', 'RESTORE', 'TAPE MOUNT', 'RECLAMATION', 'STGPOOL BACKUP', 'RETRIEVE'
  4. NUMBER Process or Session Number
  5. ENTITY Associated user or stgpool(s) associated with the activity
  6. COMMETH Communications Method
  7. ADDRESS Network address
  8. SCHEDULE_NAME Schedule Name
  9. EXAMINED Number of objects (files and/or dirs) examined by the process/session
  10.
AFFECTED Number of objects affected (moved, copied or deleted) by the process/session
  11. FAILED Number of objects that failed in the process or session
  12. BYTES Bytes processed
  13. IDLE Seconds that the session or process was idle
  14. MEDIAW Seconds that the session or process was waiting for access to media (volume mounts)
  15. PROCESSES Number of processes used
  16. SUCCESSFUL As of 2003/04, this is not useful for determining the success of a client operation. It corresponds to the "Normal server termination indicator" in the TSM server accounting records, which basically says that the session between client and server ended normally.
  Beware that the SUMMARY table has been the subject of many APARs and attempted fixes, so it may not be fully reliable. As one customer put it: the Summary Table is a notoriously dubious source of information. It was broken again in TSM 5.1 (see APAR IC33455). For monitoring client status, use Query EVent Format=Detailed or the EVENTS table; or use the TSM accounting records.
  See also: Accounting; Set SUMmaryretention

Sun client level software
  You can run 2.5.1 client code on a Solaris 2.6 machine without problem.

Sun client performance
  Try setting "DISKMAP NO" in dsmserv.opt. This setting can improve performance with larger disk pools and with some disk sub-systems. To get the best disk storage pool performance on Sun, IBM recommends using raw partitions (see the reference manual or the help on "define vol" and the notes on the disk device class).

Sun system, restoring via ADSM
  Use Solaris jumpstart to rebuild from ADSM backups. The ADSM client code is loaded into the mini-root that Solaris runs when the box is network booted. This client code can then contact the ADSM server and restore the directories / /opt /usr and so on.
Beware that mount point directories cannot appear, in that they are overlaid by mounts when the backup is performed.

Sun system raw partitions
  When creating the partition with the /etc/format utility, do not include cylinder 0 (zero) in the partition intended for use as a raw partition. Note that Solaris 2.5.1 limits partition size to 2 GB.

Sun third-party hardware - watch out
  Sun sells various third-party hardware, such as FibreChannel HBAs. Customers report finding that Qlogic HBAs bought from Sun would not work with the IBMTape driver, for example; but purchased directly from Qlogic, the card would work fine. Sun substituted microcode to operate with Sun disks - not others.

SuperDLT (SDLT)
  New in 2000. Capacity: 110 GB native; 220 GB with 2:1 compression. Brings servo-positioning to DLT via the Laser Guided Magnetic Recording (LGMR) system; its Pivoting Optical Servo (POS) system uses optical servo tracks on the back coating of the tape. This gives DLT better start-stop performance than its previous incarnation, and eliminates the need for pre-formatting tapes. Backward read compatible with DLT 4000, DLT 7000 and DLT 8000 drives, using DLTtape IV media. http://www.dltape.com/superdlt

SuperDLT-2
  Next generation of SDLT, with 160 GB native capacity (320 GB with 2:1 compression).

Superuser
  The supreme, most powerful account in an operating system. In Unix, it is "root"; in Windows, it is the System account.

SWAP
  Secure Web Admin Proxy

Sybase backups
  See the product SQL-Backtrack for Sybase from BMC Software (http://www.bmc.com). You'll also need the TSM OBSI module.

Symbolic link
  A Unix file system object which serves as an "alias" to another file by symbolically naming the target file. It is created by the 'ln -s' command. The nature of the data involved in a symbolic link means that it will not be stored solely in the TSM database, as directories and empty files can be: the symbolic link will become a storage pool object.
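The point that a symlink's only data is the target pathname string (which is why it must live in a storage pool rather than purely in the database) can be seen with a minimal Unix sketch; the file names here are arbitrary examples.

```python
# A Unix symbolic link's content is just the target pathname string;
# that string is what a backup product must capture and store.
import os
import tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "real_file")
link = os.path.join(d, "alias")
open(target, "w").close()
os.symlink(target, link)          # equivalent of 'ln -s real_file alias'

assert os.path.islink(link)
assert os.readlink(link) == target   # the link's stored "data" is the path
```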
Symbolic links (Unix) and handling by ADSM operations
  Backup (incremental or selective): backs up the symlink itself, and not the target of the symbolic link, unless SUbdir=Yes is in effect, in which case it will back up the symbolic link plus any files and directories that it points to.
  Restore, when symlink was to a file: restores the symlink, regardless of whether the file it points to still exists.
  Restore, when symlink was to a directory:
  - Without the files in the directory, and the symbolic link does not exist in the file system: nothing is returned.
  - Along with the files in the directory, and the symbolic link does not exist on your file system: TSM builds the directory and puts the files in that directory. If the subdir option is set to yes, TSM recursively restores all subdirectories of the directory.
  - And the symbolic link already exists: the result depends on how the FOLlowsymbolic option is set; if it is set to:
    Yes  The symbolic link is restored and overwrites the directory. If FOLlowsymbolic=Yes is in effect, a symbolic link can be used as a virtual mount point.
    No   TSM displays an error message. (No is the default.)
  Archive: backs up the target of the symlink, under the name of the symlink.
  See also: ARCHSYMLinkasfile; FOLlowsymbolic

Symbolic link restoral characteristics
  Symbolic links are restored with the same owner and group they had at Backup time; but their timestamp is that of restoral time rather than Backup time, in that symbolic links have to be regenerated rather than physically restored.

SYMbolicdestination
  Client System Options file (dsm.sys) option to specify a symbolic ADSM server name. For SNA communication. Default: none

System Files
  The pagefile, Registry, etc. The Windows Client manual stipulates that you should exclude System Files per se from backups: they are separately backed up as system objects and should not be backed up as ordinary files. A dsm.smp sample exclude list is provided with the install.
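An exclude list works by pattern matching against each candidate file path. The sketch below is a toy model of include-exclude evaluation (TSM reads the list bottom-up and stops at the first match); it is not TSM code, the paths and patterns are hypothetical, and note that Python's fnmatch lets '*' cross directory boundaries, unlike TSM's match-all character.

```python
# Toy model of include-exclude list evaluation: the list is read
# bottom-up, and the first matching rule decides the file's fate.
import fnmatch

def backed_up(path, rules):
    """rules: list of ('include'|'exclude', pattern), in file order."""
    for action, pattern in reversed(rules):   # bottom-up scan
        if fnmatch.fnmatch(path, pattern):
            return action == "include"
    return True   # no rule matched: backed up by default

# Hypothetical list: exclude all .tmp files, but re-include one of them.
rules = [("exclude", "*.tmp"),
         ("include", "/home/*/important.tmp")]
assert not backed_up("/home/a/scratch.tmp", rules)   # excluded
assert backed_up("/home/a/important.tmp", rules)     # include wins (later rule)
```

The ordering matters: because the later include rule is seen first in the bottom-up scan, it overrides the blanket exclude above it.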
System Files, list of
  There is no list of system files: you simply enumerate them via a Windows lookup, as TSM does via the Windows API function SfcGetNextProtectedFile(). TSM 5.2's client relays such information to you as a convenience feature, via its Query SYSTEMInfo command.

SYSTEM OBJECT
  Name of the filespace created in TSM backups of the Windows system state. "System Object" data (including the Registry) cannot be the subject of TSM Archive operations. Instead, you could use MS Backup to back up System State to local disk, then use TSM to Archive this.
  Ref: "Determining what files get backed up as part of your system objects" http://www.ibm.com/support/entdocview.wss?uid=swg21141874

System Object, restore to different machine
  The receiving machine must have the same machine hostname, and it must have identical hardware, as you are restoring the Registry, which includes hardware information. See redbook "Deploying the Tivoli Storage Manager Client in a Windows 2000 Environment".

System Objects
  See: Windows NT System Objects

System privilege, grant
  'GRant AUTHority Adm_Name CLasses=SYstem'

System Protected
  Under Windows 2000, Microsoft implemented the concept of "system protected" files. Win2K keeps a catalog of all the files it considers "system and boot files", and they are flagged as "system protected". Those files are considered part of the Win2K "system state", and are all backed up and restored as a set. When you run backups via the scheduler on Win2K, TSM gets the whole Microsoft-defined "system state", which includes the "system protected files", plus Active Directory, plus COM+ DB, plus Registry, and a bunch of other stuff, depending on whether it's Win2K or Win2K Pro. When you run backups via the GUI on Win2K, you must specifically select SYSTEM OBJECT to get a backup of "system state".
Ref: "TSM 3.7.3 and 4.1 Technical Guide" redbook

System State (Windows)
  Windows 2000 logical grouping of the key system files and databases which in combination define the state of the Windows system. Constituents: Active Directory, Boot Files, COM+ Class Registry, Registry, Sys Vol. Does not include things like the Removable Storage Management database.

SYSTEMObject
  Windows: The designated name of the System Objects. In 5.2 you can exclude System Objects from backups by coding: DOMain -SYSTEMObject

Systems Network Architecture Logical Unit 6.2 (SNA LU6.2)
  A set of rules for data to be transmitted in a network. Application programs communicate with each other using a layer of SNA called Advanced Program-to-Program Communication (APPC). Discontinued as of TSM 4.2.

-TABdelimited
  dsmadmc option for reporting, with output being tab-delimited. Contrast with -COMMAdelimited. See also: -DISPLaymode

Tables, SQL
  'SELECT * FROM SYSCAT.TABLES'

Tape, add to automated library
  'CHECKIn LIBVolume ...'
  Note that this involves a tape mount.

Tape, audit (examine its barcode to assure physically in library)
  'mtlib -l /dev/lmcp0 -a -V VolName'
  Causes the robot to move to the tape and scan its barcode.
  'mtlib -l /dev/lmcp0 -a -L FileName' can be used to examine tapes en masse, by taking the first volser on each line of the file.

Tape, bad, handling
  See: Volume, bad, handling

Tape, erase
  There are times that you need to actually erase a tape: either to satisfy legal requirements, or in retiring a tape, or to obliterate data on TSM tapes whose contents have expired or been copied. The tapeutil/ntutil commands have an Erase function, readily usable from the command line or prompting. See: ntutil; tapeutil

Tape, identify physically in library
  There may be times when you are unsure as to which is actually tape XXXXXX in the library. Some ways to find out:
  - If the library provides a means to query its database, try to locate the tape by cell that way.
You may also be able to tell by looking at the statistics for the number of times the tape has been mounted.
  - Cause the tape to be mounted as you watch, which certainly establishes which volume the systems think it is. You can do this from ADSM: 'AUDit Volume VolName Fix=No'; or outside of ADSM use something like: 'mtlib -l /dev/lmcp0 -m -f /dev/rmt? -V VolName'

Tape, initialize for use with a storage pool
  For a simple, manually-mounted tape: 'dsmlabel -drive=/dev/mt0', where the drive must be one which was specifically ADSM-defined. It will iteratively prompt for volsers so you can do a bunch of tapes at once.
  For a robotic tape library: 'dsmlabel -drive=/dev/mt0 -library=/dev/lmcp0'

Tape, number of times mounted
  'Query Volume ______ Format=Detailed'
  "Number of Times Mounted" value (q.v.).

Tape, remove from automated library (as in 3494)
  'CHECKOut LIBVolume LibName VolName [CHECKLabel=no] [FORCE=yes] [REMove=no]'

Tape checkin date
  There is no way to determine when a tape was checked into the library: ADSM doesn't track it in volume stats, and libraries like the 3494 don't record it as part of database inventory info.

Tape contention handling technique
  TSM really likes to fill a storage pool tape before starting on a new one, and sometimes this can result in contention. For example, consider an Archiving user whose session was waiting on a tape that is busy as input to a BAckup STGpool operation that would be reading from that tape for some time. To keep the user from waiting further, you can do 'UPDate Volume ... ACCess=READOnly', which TSM immediately recognizes, allowing the archive session to proceed with another output volume. Then do 'UPDate Volume ... ACCess=READWrite' to put the contended volume back into its original state, and everyone is happy.
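Why the READOnly trick works can be pictured crudely: output volume selection considers only READWRITE volumes, so changing the busy volume's access makes the waiting session fall through to a fresh volume. This is a toy model only, not TSM's actual volume-selection logic; all names and the dictionary layout are hypothetical.

```python
# Toy model of output volume selection for a sequential storage pool.
def next_action(volumes):
    """Return what a writing session does given the pool's volumes."""
    for v in volumes:
        if v["access"] != "READWRITE":
            continue          # READONLY/UNAVAILABLE volumes are skipped
        return "WAIT" if v["busy"] else v["name"]
    return "MOUNT_SCRATCH"    # no usable volume: a new one is mounted

pool = [{"name": "A00001", "access": "READWRITE", "busy": True}]
assert next_action(pool) == "WAIT"            # session hangs on the busy tape
pool[0]["access"] = "READONLY"                # 'UPDate Volume ... ACCess=READOnly'
assert next_action(pool) == "MOUNT_SCRATCH"   # session proceeds on a new volume
```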
Tape density, achieving
  Generally speaking, rewriting a tape from its beginning, as relabeling does, is the only opportunity to change a tape's density (which is to say that it is not possible to change to a different density in the midst of a computer tape, as one might on a home VCR). This has been true of computer tapes since the early days of open reel tapes. Actually achieving the desired density is a function of the application causing the drive to switch density, which in turn must be supported and allowed by the hardware. In TSM terms, this is a function of the Devclass definitions. Operating system commands are usually available to verify tape drive attribute selections, to assure that the application has triggered the values you expect. If not, check the hardware to assure that it can inherently achieve that value, that there is no operator setting preventing it, and that the media allows it. (Tape cartridges have sensing indentations by which drives can determine what a given tape can do.) If your OS attributes check finds settings not as desired, the less usual cause could be a TSM defect.

Tape device driver
  ADSM relies on the Atape driver for the Magstar family of tapes and libraries (on AIX), but it relies on the device driver shipped with ADSM for all others (DLT, 8mm, 4mm, QIC, optical drives, STK drives).

Tape drive, define for use by ADSM
  Note that ADSM must use its own device drivers for most tape drives (e.g., 8mm), except for devices such as the 3494 which supply their own drivers. Refer to ADSM Device Configuration. Do: 'lsdev -C -s scsi -H' to list SCSI devices and identify their adapters. Then do the following via SMIT: Select DEVICES. Select ADSM Devices. Select Tape Drive. Select Add a Tape Drive. Select the ADSM-SCSI-MT. Select the adapter to which the device is attached. All this will generate a command like: 'mkdev -c adsmtape -t ADSM-SCSI-MT -s scsi -p scsi0 -w 60'. The resulting tape drive is what is needed by the 'dsmlabel' command.
Tape drive, make available (online) to *SM
  'UPDate DRive LibName Drive_Name ONLine=Yes'

Tape drive, make offline to host
  AIX: 'rmdev -l DeviceName' Example: rmdev -l rmt2
  This desensitizes the operating system to maintenance being done on the attached drive, for example. Experience shows that it is usually unnecessary to do this, however.

Tape drive, make online to host
  AIX: 'mkdev -l DeviceName'

Tape drive, make unavailable (offline) to *SM
  'UPDate DRive LibName Drive_Name ONLine=No'
  3494: You can also go to the Operator Station, and in the Service Mode panel called Availability, render the drive offline. This will be recognized by *SM, as reflected in msg ANR8775I and 'SHow LIBrary' command output. Note that this operation is immediate, and would disrupt anything operating on the drive (the request is not queued until the drive is free).

Tape drive, when it went offline
  'SHow LIBrary' report element "offline time/date" reflects this.

Tape drive, 3590, release from host
  Unix: 'tapeutil -f /dev/rmt_ release'
  Windows: 'ntutil -t tape_ release'
  after having done a "reserve".

Tape drive, 3590, reserve from host
  Unix: 'tapeutil -f /dev/rmt_ reserve'
  Windows: 'ntutil -t tape_ reserve'
  When done, release the drive:
  Unix: 'tapeutil -f /dev/rmt_ release'
  Windows: 'ntutil -t tape_ release'

Tape drive availability and ADSM
  If no tape drives are currently available (as reflected in SHow LIBrary), ADSM will wait until one becomes available, rather than dispose of client and administrative jobs.

Tape drive cleaning
  The most insidious cause of tape processing problems (outright I/O errors and time-consuming read/write retries) is dirty tape drives. Tape libraries are not air-sealed (nor are tape cartridges): any crud that floats around in your environment will eventually end up in the tape drives and cartridges. And all the mounts and dismounts will spread the contaminants to other tapes and drives.
All tape libraries provide for some kind of cleaning, be it automatic or manual, usually via a cleaning cartridge: make sure that your library has such, that cleaning is activated, and that it is being done.
  Cleaning tape is necessarily abrasive, because it is a dry cleaning method. As such, the cleaning process wears down the tape head a bit. If that concerns you, keep it in perspective: the objective is reliable reading and writing, not making the (replaceable) heads last decades.
  Beyond cleaning cartridges, your shop should periodically use a HEPA vacuum cleaner to clean out the interior of the library, where dust and dirt will accumulate and be agitated by the motion of the robotics. Another issue is the manual handling of cartridges, where dirty hands and miscellaneous human detritus will get on and into cartridges. Tapes which go offsite have further opportunities for contamination. Consider placing a portable air cleaner or two alongside your library, particularly if it is in a dusty or high-traffic area. Computer rooms are not Clean Rooms.
  See: cleaning (such as "3590 cleaning")

Tape drive parameters, query
  Use the 'tapeutil'/'ntutil' command "Query/Set Parameters" selection.
  Or: AIX: 'lsattr -EHl rmt1' or 'mt -f /dev/rmt1 status'

Tape drive parameters, set
  Use the 'tapeutil'/'ntutil' command "Query/Set Parameters" selection. But be aware that TSM sets things the way that it wants, so it is best not to interfere.

Tape drive performance
  See: Tape drive throughput

Tape drive status, from host
  'mtlib -l /dev/lmcp0 -f /dev/rmt1 -qD' to query by device name (-f), or 'mtlib -l /dev/lmcp0 -x 0 -qD' to query by relative tape drive in library (-x 0, -x 1, etc.) (but note that the relative drive method is unreliable).

Tape drive throughput
  See the "THROUGHPUT MEASUREMENT" topic near the bottom of this doc.
See also: Migration performance; MOVe Data performance

Tape drive Vital Product Data
  Unix: 'tapeutil -f /dev/rmt0 vpd'
  Windows: 'ntutil -t tape vpd'
  Microcode level shows up as "Revision Level".

Tape drives, in 3494, list
  From AIX: 'mtlib -l /dev/lmcp0 -D'

Tape drives, list available ADSM tape drives
  'lsdev -C -c tape -H'

Tape drives, list supported ADSM tape drives
  'lsdev -P -c adsmtape -F "type subclass description" -H'

Tape drives, maximum that ADSM can ask for
  Devclass controls it.

Tape drives, not all being used in a library
  See: Drives, not all in library being used

Tape drives, where they are specified in ADSM
  They are defined via 'DEFine DRive', and are associated with an already-defined library, as in: 'DEFine DRive 8MMLIB 8mmdrive DEVIce=/dev/mt0'. Do 'Query DRive' to list them.

Tape ejections, phantom
  See: Ejections, "phantom"

Tape history, query
  'Query VOLHistory'

Tape I/O error message
  ANR8359E Media fault ... (q.v.)

Tape labels
  ADSM wants tapes to have VOL1, HDR1, and HDR2 labels. The tapes you get "pre-labeled" from a tape vendor may have only VOL1, HDR1; so it's always best to label the tapes yourself, regardless.
  Ref: APAR IX77477

Tape leak
  A term I invented to describe the product's propensity for using a fresh tape when a Filling tape is busy, resulting in Filling tapes which will probably never be used again, resulting in a perplexing dwindling of scratch tapes. A full discussion of this is found in the topic 'Shrinking (dwindling) number of available scratch tapes ("tape leak")' near the bottom of this document.

Tape library, list volumes
  Use the AIX command 'mtlib -l /dev/lmcp0 -vqI' for fully-labeled, more descriptive output, or just 'mtlib -l /dev/lmcp0 -qI' for unlabeled data fields: volser, category code, volume attribute, volume class (type of tape drive; equates to device class), volume type. The tapes reported do not include the CE tape or cleaning tapes.
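The unlabeled '-qI' output can be split into the five fields named above with a trivial parser. The sample line below is hypothetical, purely to illustrate the field order; real mtlib output formatting may differ.

```python
# Split a whitespace-separated mtlib inventory line into the five fields
# listed above. The sample values are hypothetical, for illustration only.
def parse_inventory(line):
    volser, category, attribute, vol_class, vol_type = line.split()
    return {"volser": volser, "category": category,
            "attribute": attribute, "class": vol_class, "type": vol_type}

sample = "A00001 012C 00 10 1"      # hypothetical field values
rec = parse_inventory(sample)
assert rec["volser"] == "A00001"
assert rec["category"] == "012C"
```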
Tape lifetime See: MP1 TAPE MOUNT Activity value in the TSM SUMMARY table. Query: SELECT DATE(START_TIME), DRIVE_NAME, VOLUME_NAME FROM SUMMARY WHERE ACTIVITY='TAPE MOUNT' Tape operator The TSM server supports sending mount messages to a special session via: 'dsmadmc -MOUNTmode'. See: -MOUNTmode Tape performance See: Tape drive throughput Tape pool, steps in defining Define tape drive(s) via SMIT. (They need to be specially defined for ADSM: the /dev/rmt? drives already defined in your system are *not* eligible for use by ADSM.) Tape pool, 8mm, steps in defining You should first have established an 8mm tape drive to use, via SMIT. (See "Tape drive, define for use by ADSM".) Define library, as in: DEFine LIBRary 8mmlib LIBType=manual (Note that DEVice is not coded for manual.) Define device class, as in: 'DEFine DEVclass 8mmclass DEVType=8mm LIBRary=8mmlib MOUNTLimit=1 ESTCAPacity=2300M' Define the sequential storage pool: DEFine STGpool 8mmpool 8mmclass DESCription="___" Define the tape drive(s) to use: DEFine DRive 8MMLIB 8mmdrive DEVIce=/dev/mt0 Define specific tape volumes for pool: DEFine Volume 8mmpool VolName [ACCess=READWrite|READOnly| UNAVailable|OFfsite] You also need to label the tapes, via 'dsmlabel' (q.v.). Tape recovery procedure See: Volume, bad, handling Tape reliability ("tape is tape") Tape is still being used because it is relatively cheap, un-delicate, and capacious. But it is not the ultimate in reliability. Unlike hermetically sealed disk technology, the tape medium is exposed to the environment, is pulled and stressed, and abrades as it rubs past transport guides and tape heads. (By its nature, flexible magnetic media has to be in contact with the read-write head.) Moreover, in manufacturing, the quality of the medium cannot be as readily assured by inspection as can disk platters. All this means that when using tape you cannot unilaterally depend upon it, and it behooves you to have a secondary copy of important data. 
See http://www.sresearch.com/library.htm Tape security We sometimes have site managers asking how secure *SM data tapes are, wondering if someone may be able to harvest data from *SM scratch tapes, and whether *SM expiration erases the old data. By data processing definition, tapes - like disks or any other media (including paper) - are supposed to be physically secure, as in kept in a room that non-authorized people cannot enter, and that the people in the room are trustworthy. That is the fundamental protection for tapes written by any application. Expiration is a logical process, not physical: nothing goes near the tape in the process. Only the "catalog entry" for the expired data is obliterated, while the tape remains intact. Being an append-only medium, there is no potential for partial erasure of tape contents. You can wholly write over the tape with binary zeroes when it is empty if you like, to obliterate prior contents; but next use effects obliteration anyway. Note that *SM tape data format is unpublished: even we as *SM administrators don't know how to physically access it. Tape storage pool, define See: 'DEFine STGpool' Tape technology Newsgroups comp.arch.storage and comp.data.administration tend to have such discussions. Tape volume, assign to a storage pool 'DEFine Volume Poolname VolName' The alternative to dedicating tape volumes to a storage pool is to define the STGpool with "MAXSCRatch=NNN", to use scratch volumes instead. 
Tape volume, eject from library to Convenience I/O Station Via Unix command you can effect this by changing the category code to EJECT (X'FF10'): 'mtlib -l /dev/lmcp0 -vC -V VolName -t FF10' Tape volume, set Category code in library Via Unix command: 'mtlib -l /dev/lmcp0 -vC -V VolName -t Hexadecimal_New_Category' Tape volumes, consolidate Use the ADSM server 'MOVe Data' command to move data from one volume in a storage pool to other volumes in it, as in the case of ADSM happening to write a few files on a new tape when the other tape(s) in the storage pool are mostly empty. This operation eliminates the wasteful use of the second volume, as in: 'MOVe Data 000994'. TapeAlert A patented technology and standard of the American National Standards Institute (ANSI) that defines conditions and problems that are experienced by tape drives. The technology enables a server to read TapeAlert flags from a tape drive through the SCSI interface. The server reads the flags from Log Sense Page 0x2E. You will find TapeAlert summarized in the IBM 358x Setup and Operator Guide manuals, with flag values. In TSM terms, TapeAlert is a software application supported by TSM 5.2+ that provides detailed device diagnostic information using a standard interface that makes it easy to detect problems which could have an impact on backup quality. It is a standard mechanism for tape and library devices to report hardware errors. From the use of worn-out tapes to defects in the device hardware, TapeAlert enables TSM to issue easy-to-understand warnings of errors as they arise, and suggests a course of action to remedy the problem. To take advantage of TapeAlert, you need TapeAlert-compatible tape drives or libraries. See also: Set TAPEAlertmsg TAPEIOBUFS TSM 3.7 server option for MVS (only). Specifies how many tape I/O buffers the server can use to write to or read from tape media. The default is 1. 
Syntax: TAPEIOBUFS number_of_buffers The number_of_buffers specifies the number of I/O buffers that the server can use to write to or read from tape media. You can specify an integer from 1 to 9, where 1 means that no overlapped BSAM I/O is used. For a value greater than 1, the server can use up to that number of buffers to overlap the I/O with BSAM. Note: The server determines the value based on settings for the TXNBYTELIMIT client option and the MOVEBATCHSIZE, MOVESIZETHRESH, TXNGROUPMAX, and USELARGEBUFFERS server options. The server uses the maximum number of buffers it can fill before reaching the end of the data transfer buffer or the end of the transaction. A larger number of I/O buffers may increase I/O throughput but require more memory. The memory required is determined by the following formula: number_of_buffers x 32K x mount limit Performance: Boosting the number can obviously improve throughput. tapelog Command to view the AIX /var/adm/ras/Atape.rmt?.dump? file. Syntax: 'tapelog {-l DeviceName | -f FileName}'. Ref: IBM SCSI Tape manual, Chapter 9. Src: /usr/lpp/Atape/samples/tapelog.c TAPEPrompt (-TAPEPrompt=) Client User Options file (dsm.opt) or command line option to specify whether to wait for a tape to be mounted if required for an interactive backup, archive, restore, or retrieve process; or to prompt the user for a choice. Is not in effect for a schedule type operation. Specify: No or Yes Specifying No makes operations more transparent, but does not account for the mount delay. HSM: "No" must be chosen for HSM, because of its implicit action, and because an NFS client of an exported HSM file system obviously will not get the prompt. See client message ANS4116I as with HSM actions; ANS4117I; and ANS4118I as with incremental backup. Default: Yes, prompt the user when a tape mount is required. Note that the DEVclass MOUNTWait value does not pertain to a wait for a tape drive to be free. 
Note: Specifying Yes does not cause the needed volume to be identified to the client; it merely gives you the opportunity to decline mounting. Tapes, label all in 3494 library having category code of Insert The modern way is to use the LABEl LIBVolume command, to both label the tapes and check them in; or: 'dsmlabel -drive=/dev/XXXX -library=/dev/lmcp0 -search -keep [-overwrite]' Tapes, number to restore a node SHow VOLUMEUSAGE Node_Name Tapes, number used by a node SELECT NODE_NAME AS '_NodeName_', - COUNT(DISTINCT VOLUME_NAME) AS - "Number of tapes used" FROM - VOLUMEUSAGE GROUP BY NODE_NAME Tapes, prevent usage See: Storage pool, tape, prevent usage Tapes in library, list (including Category codes) Use AIX command: 'mtlib -l /dev/lmcp0 -vqI' for fully-labeled, more descriptive information, or just 'mtlib -l /dev/lmcp0 -qI' for unlabeled data fields: VolSer, category code, volume attribute, volume class (type of tape drive; equates to device class), volume type. The tapes reported do not include the CE tape or cleaning tapes. Tapes in use for a session 'Query SEssion [SessionNumber] Format=Detailed' Tapes needed in a restoral See: Restoral preview Tapes supported ADSM supports a specified repertoire of tape drives, which must be accessed through its own device drivers. Exception: For IBM 1/2" tape drives, ADSM uses the device drivers supplied with the hardware. Tapes used by a node See: Volume usage, by node tapeutil 3490/3590 tape utility for Unix, provided as part of the Magstar Device Drivers, available at ftp.storsys.ibm.com, under devdrvr. For an interactive session, simply invoke by name and follow the menu. For a batch session, invoke with operands as from 'tapeutil -\?'. There is no man page, but there is complete documentation in the manual "IBM TotalStorage Tape Device Drivers: Installation and User's Guide", available from the same ftp location. "Device Info" returns iocinfo info, including devtype, devsubtype, tapetype, and block size. 
"Erase" will erase the full length of the tape. Experience shows that this operation will experience no write problems on a bad tape though prior and subsequent TSM writing will result in I/O errors; so just because Erase is happy doesn't mean the tape is fine. "Inquiry" returns a block of info akin to that from the AIX 'lscfg' command. "Read and Write Tests" by default will write 20 blocks of 204800 bytes, write 2 file marks, backspace 2 file marks, backspace 20 records, read the written data, and forward spacing file mark. Src: /usr/lpp/Atape/samples/tapeutil.c "tapeutil", for NT See: ntutil TB Terabytes, usually being 1024 ** 4. TCA See: Trusted Communication Agent TCP_ADDRESS (TSM 4.2+) SQL NODES table entry for the TCP/IP address of the client node as of the last time that the client node contacted the server. The field is blank if the client software does not support reporting this information to the server. Corresponds to the Query Node field "TCP/IP Address". Derives from the GUID value TCP_NAME (TSM 4.2+) SQL NODES table entry for the host name of the client node as of the last time that the client node contacted the server. The field is blank if the client software does not support reporting this information to the server. Corresponds to the Query Node field "TCP/IP Name". TCP/IP Transmission Control Protocol/Internet Protocol. Consists of two main protocols: TCP, for session-oriented (stream) connections, as used by ADSM and TSM; and UDP, for "connectionless" operations, as in send a packet and hope they got it. TCP/IP access to server, disable The 'COMMmethod NONE' server option will TCP/IP address of server See: TCPServeraddress prevent all communication with the server. TCP/IP and OS/390 (MVS) In the OS/390 environment, TCP/IP is a separate task, not integral to the operating system as in Unix. Thus, it is essential that TCP/IP be up before the *SM server is started, and should not be brought down before the *SM server. 
TCP/IP port number of client The client needs a TCP port number when it needs to be contacted by the server, during SCHEDMODE PROMPTED. Default = 1501. Change via the TCPCLIENTPort Client System Options file (dsm.sys) option. See: TCPPort TCP/IP port number of client, get 'dsmc Query Options' in ADSM or 'dsmc show options' in TSM; see "TcpClientPortNumTcpPort" value. TCP/IP port number of client, set TCPCLIENTPort Client System Options file (dsm.sys) option. See: TCPCLIENTPort TCP/IP port number of server The TCPPort value. Default = 1500. TCP/IP port number of server, get 'Query OPTions', "TcpPort" value. TCP/IP port number of server, set "TCPPort" definition in the server options file. TCP/IP window size of server, get 'Query OPTions', "TCPWindowsize" value. TCP/IP window size of server, set "TCPWindowsize" definition in the server options file. TCPADMINPort, -TCPADMINPort TSM 5.1+ client command line or options file option to specify a separate TCP/IP port number on which the TSM server is waiting for requests for administrative client sessions, allowing secure administrative sessions within a private network, as used for firewalls. Placement: Unix: dsm.sys, within a server stanza. Windows: dsm.opt. Syntax: TCPADMINPort nnnn Default: The value of the TCPPort option. Note that the port may not be used for ordinary client sessions: it is for administrative sessions only. TCPADMINPort TSM 5.1+ server option, corresponding to the same-named client option, to specify the port number on which the server TCP/IP communication driver is to wait for requests for sessions other than client sessions. This includes administrative sessions, server-to-server sessions, SNMP subagent sessions, storage agent sessions, library client sessions, managed server sessions, and event server sessions. 
Perspective: Using different port numbers for the options TCPPORT and TCPADMINPORT enables you to create one set of firewall rules for client sessions and another set for the other session types listed above. By using the SESSIONINITIATION parameter of REGISTER and UPDATE NODE, you can close the port specified by TCPPORT at the firewall, and specify nodes whose scheduled sessions will be started from the server. If the two port numbers are different, separate threads will be used to service client sessions and the other session types. If you allow the two options to use the same port number (by default or by explicitly setting them to the same port number), a single server thread will be used to service all session requests. Client sessions which attempt to use the port specified by TCPADMINPORT will be terminated (if TCPPORT and TCPADMINPORT specify different ports). Administrative sessions are allowed on either port, but by default will use the port specified by TCPADMINPORT. TCPBuffsize Client System Options file (dsm.sys) option to specify the size for the ADSM internal communications buffer, in kilobytes. Code from 1 to 32 (KB). Placement: Within a server stanza. Default: 8 (KB) TCPBufsize Server Options file (dsmserv.opt): Specifies the size, in kilobytes, of the buffer used for TCP/IP send requests. During a Restore, client data moves from the ADSM session component to a TCP communication driver. Syntax: "TCPBufsize nn", where nn is in the range 0-32 (default: 4) Performance (particularly restorals): This option affects whether or not the ADSM server sends the data to the client directly from the session buffer or copies the data to the TCP buffer. A 32K buffer size forces ADSM to copy data to its communication buffer and flush the buffer when it fills, which entails overhead. TCPBufsize server option, query 'Query OPTion', "TCPBufsize" value. 
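The rationale above - separate ports so that client and administrative traffic can be firewalled and threaded independently - can be illustrated generically. This is not TSM code; the port assignments, handler, and role names below are invented for the illustration:

```python
# Generic illustration of serving two session types on two ports,
# each on its own thread, so a firewall can treat them differently
# (analogous in spirit to TCPPORT vs. TCPADMINPORT; not TSM code).
import socket
import socketserver
import threading

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        data = self.rfile.readline()
        # Tag the reply with this listener's role so the split is visible
        self.wfile.write(b"%s: %s" % (self.server.role, data))

def start_server(role: bytes) -> socketserver.TCPServer:
    srv = socketserver.TCPServer(("127.0.0.1", 0), EchoHandler)  # ephemeral port
    srv.role = role
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv

client_srv = start_server(b"client")   # think: TCPPORT traffic
admin_srv = start_server(b"admin")     # think: TCPADMINPORT traffic

def ask(port: int, msg: bytes) -> bytes:
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(msg + b"\n")
        return s.makefile("rb").readline()

print(ask(client_srv.server_address[1], b"backup"))  # b'client: backup\n'
print(ask(admin_srv.server_address[1], b"query"))    # b'admin: query\n'
```

With two listeners, firewall rules can admit each port from a different network zone, which is exactly the separation the TCPADMINPORT option is meant to enable.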
TCPCLIENTAddress (-TCPCLIENTAddress=) Client System Options file (dsm.sys) or command line option for when your client node has more than one network address (multi-homed) and you want the *SM server to communicate with the client using this network address, rather than whatever address it may have previously stored from client communication. Note that the address specified is the Service IP address: the IP address used for primary traffic to and from the node. The specified address can be a name or dotted number. Use only with SCHEDMODE PRompted. Default: use whatever address the client responds with. See also: HLAddress; NODename TCPCLIENTPort Client System Options file (dsm.sys) option to specify the TCP port number that the server should use to communicate with the client, when Schedule is active. Use only with SCHEDMODE PRompted. Default: 1501 (being TCPPort+1) See also: LLAddress TCPNodelay AIX (only) Client System Options file (dsm.sys) option to specify whether small transactions should be sent immediately or be buffered before sending. Ordinarily, TSM buffers small transactions until the TXNBytelimit is reached, and then the whole buffer is sent. Sending immediately improves continuity and throughput, but at the expense of more packets being sent and, ostensibly, smaller Aggregates. Default: No, buffer before sending See also: TXNBytelimit; TXNGroupmax TCPNodelay Server Options file (dsmserv.opt): Specifies whether the server allows data packets that are less than the TCP/IP maximum transmission unit (MTU) size to be sent out immediately over the network, to a client (in client-server sessions) or another server (the target server, in server-to-server virtual volume operations); or whether small stuff should be buffered before sending. Default: Yes (send immediately) TCPNodelay, query in client 'dsmc Query Options' in ADSM or 'dsmc show options' in TSM; see "TcpNoDelay". TCPNodelay, query in server 'Query OPTions', see "TCPNoDelay". 
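At the operating-system level, the behavior the TCPNodelay options govern broadly corresponds to the standard TCP_NODELAY socket option, which disables Nagle coalescing of small packets. A minimal illustration of setting and reading that socket option, unrelated to TSM itself:

```python
# Demonstrates the standard TCP_NODELAY socket option: when set,
# small packets are sent immediately rather than being coalesced
# (Nagle's algorithm). Illustration only; not TSM code.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
# Some platforms report a nonzero value other than 1, so test != 0
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0)  # True
s.close()
```

The trade-off is the same one the entry describes: lower latency for small sends, at the cost of more (and smaller) packets on the wire.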
TCPPort Client System Options file (dsm.sys) or command line option to specify the port address of a server when using TCP/IP. (Unfortunately, the name of this option is ambiguous and leads to confusion: it really should have been called TCPSERVERPort, to be as specific as the existing TCPCLIENTPort option.) Code within a server stanza. Default: 1500. Note that TCPPort+1 (1501) is used by the *SM Client Scheduler (dsmc schedule) when using SCHEDMODE PRompted to listen for the "prompt" from the Server to initiate a scheduled operation. When you start up a client with SCHEDMODE PRompted, it contacts the server on TCPPORT (1500) and registers its IP address. It then disconnects and (only at the appointed schedule time!) listens on port TCPPORT+1 for the server to contact it. TCPPort Server option. Defines the TCP/IP port upon which the server listens for client requests. Default: 1500. Note that the *SM server can only have one such port defined for clients. A way around this is to use a front-end which serves a different port and relays to the *SM real port. The Unix netcat facility is one such method. Tip: Temporarily coding a hoked value when you need to bring the server up for maintenance tasks will surely keep those pesky clients out, as they use their client option file TCPPort value. See also: TECPort TCPPort server option, query 'Query OPTion' tcpQueryAddress A name which may pop up in TSM server problems, being a function in tcpcomm.c to handle reverse DNS lookups, via the gethostbyaddr system call. The "tcpinfo" traceclass can be used in a server trace to inspect TCP/IP DNS performance issues. TCPServeraddress Client System Options file (dsm.sys) or command line option to specify the TCP/IP address for a *SM server, as either a name or dotted IP address number. Placement: Within a server stanza. 
Usage: Where you have a single NIC in the client, or don't care how outgoing TSM client traffic is routed, specify the server location as a network hostname. In a multi-homed ethernet portal environment, where the client has multiple NICs or one NIC with multiple portals each on a different subnet, specify the TSM server network location as an IP address in this option to have outgoing TSM client traffic go through a specific subnet rather than the default route. (You should confer with your network people to achieve optimal throughput. Plan and configure for it: It is very bad form to capriciously decide to send large amounts of data over a subnet which may be intended for other purposes. Keep in mind the difference between LAN and SAN.) Note: The servername which may be coded here has nothing to do with the server name established within the server via Set SERVername, as the former is a network address and the latter is just a name that the server tells the client during session initialization. Note: There is no speed advantage to coding 127.0.0.1 (localhost) when both the client and server are on the same system: communication has to go through the local protocol stack in both cases. Advisories: Code an IP address rather than a hostname. This will avoid two problems: (1) access problems when Domain Name Service is flakey, and (2) lack of certainty where the server hostname is defined in DNS with multiple IP addresses. See also: -SERVER; Set SERVERHladdress; Set SERVERLladdress; -TCPPort TCPWindowsize client option Client System Options file (dsm.sys) option to specify the size, in KB, to be used for the TCP/IP sliding window for the client node: the size of the buffer used when sending or receiving data. Code a value from 1 to 2048 (KB), but remember that your operating system TCP/IP buffer size must be at least as large: - In AIX, do not exceed the sb_max system value as seen with the Query: 'no -a -o=sb_max' command. 
(Note that sb_max is expressed in bytes and TCPWindowsize is expressed in KB. So if "sb_max" shows as 65535, then TCPWindowsize must be 64 or less.) - In HP-UX, the limit is the kernel parameter STRMSGSZ, which is expressed in KB. - Solaris: max TCPWindowsize is 1024. - Windows NT4: max supported is 64KB-1byte, so specify "63". The client checks to assure that the value specified is not too high: if it is, an error message saying so results. You should respond by either reducing the TSM value or increasing the opsys value. TCPWindowsize 0 *may* work in some systems, meaning to use the operating system settings. Default: 16 (KB) TCPWindowsize server option Specifies the amount of data to send or receive before TCP/IP exchanges acknowledgements with the client node in client-server sessions. Also pertains to the target server in server-to-server (virtual volume) operations. The actual window size used in a session will be the smaller of the server and client window sizes. Larger window sizes may improve performance at the expense of memory usage. A value of 0 causes the operating system default to be used, avoiding conflicts. TCPWindowsize server option, query 'Query OPTion', see "TCPWindowsize". TCPWindowsize server option, set Definition in the server options file (dsmserv.opt), to specify the size of the TCP sliding window: the amount of data to send or receive before TCP/IP exchanges acknowledgements with the client node. The actual window size used in a session will be the smaller of the server and client window sizes. Larger window sizes may improve performance at the expense of memory usage. Allowed range: 0 - 2048. 0 indicates that the default window size set for AIX should be used. Values from 1 to 2048 indicate that the window size is in the range of 1 KB to 2 MB. Default: 0, which indicates that ADSM should accept the AIX default window size. 
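Two rules from the TCPWindowsize entries above lend themselves to a quick sanity check: the session uses the smaller of the client and server window sizes, and on AIX the chosen KB value must fit within the byte-valued sb_max. A small sketch (the helper names are invented for illustration, not part of any TSM tool):

```python
# Illustrative helpers for the TCPWindowsize rules described above
# (hypothetical function names; not part of TSM).
def session_window_kb(client_kb: int, server_kb: int) -> int:
    # The session uses the smaller of the two configured windows
    return min(client_kb, server_kb)

def fits_sb_max(window_kb: int, sb_max_bytes: int) -> bool:
    # AIX sb_max is in bytes; TCPWindowsize is in KB
    return window_kb * 1024 <= sb_max_bytes

print(session_window_kb(64, 16))   # 16
print(fits_sb_max(63, 65535))      # True: 63 KB fits under 65535 bytes
print(fits_sb_max(128, 65535))     # False: opsys buffer is too small
```

This is why cranking up only the client value buys nothing if the server (or the operating system buffer limit) stays small.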
TDP Tivoli Data Protection: the equivalent of the former ADSM Agents, for backing up databases. The TDPs operate as a middleman between the database API, which gives them access to the data, and the *SM server. A TDP install will also install the TSM API, which it needs to communicate with the *SM server. Cost: The various "TDP for____" packages are not free, unlike the basic clients: you must order each TDP through your normal channels, and thus obtain a "Paid in Full" license for use with the TSM server. In operation, a TDP does not perform locking on the database that it is accessing, because it is just a guest visiting the database via the API of the application controlling the database. See: Data Protection Agents TDP and retries The number of retries during backup has historically been hardcoded into the software. This may change as the software evolves. TDP backups overview When initiating a TDP backup, the tdpo.opt file is referenced. That further involves dsm.opt, and dsm.sys in multi-user systems. The tdpo.opt and the dsm.opt should be in directory /opt/tivoli/tsm/client/oracle/bin64/ (omit the "64" if running 32-bit mode). For 64-bit operation, the dsm.sys has to be in the /opt/tivoli/tsm/client/api/bin64/ directory. A recommendation is to create a separate domain for your database backups, and set the management class retentions to 1-0-0-0. You can also set it as the default management class. TDP compression Reportedly only eliminates whitespace. TDP for Domino (TDP Domino) Backup product for Lotus Domino mail servers, replaced 2002/04 by Tivoli Storage Manager for Mail (q.v.). Works at the database level, and thus provides fast backup and restore of the entire database as compared with the document-oriented TDP for Notes. But in order to restore a single document, you need to restore the database to an alternate name and copy out the document you want. 
Also, any particular database backup only consists of two TSM server objects instead of possibly thousands with TDP for Notes. Summary info is stored in the TSM server Activity Log in the form of ANE4991I messages. Each time a "DOMDSMC INCREMENTAL" backup is run it should be picking up "new" databases (as well as logged databases whose DBIID has changed, or non-logged databases whose internal time stamp has changed). Expiration: An active log backup will never expire: you need to inactivate the log backup, which can happen only when no active database backup requires that log and the INACTIVATELOGS command is then run. It is possible for a private folder to be stored on the desktop rather than in the database, as with folder types: "Shared, desktop private on first use" "Shared, private on first use" Tracing: Turn on by adding this to the invocation: /TRACEFLAG=ALL /TRACEFILE=filename.txt You cannot run Domino Server third party products reliably through Windows Terminal Services (or Remote Desktop Connection): Domino itself does not support it. This is documented in the IBM support knowledge base article 1083052, which can be sought at www.ibm.com, and Lotus TechDoc 186006. TDP for Exchange Tivoli Data Protection for Exchange. Backup product for Microsoft Exchange mail servers, replaced 2002/04 by Tivoli Storage Manager for Mail (q.v.). Backs up Exchange Server database files (.EDB, .STM) and log files (.LOG) according to Microsoft specifications, stored in TSM server storage pools as Backup type files. Will create only one session for each instance that you run. If there is an error during a backup, it will retry up to a maximum of 4 attempts. The level of granularity for Exchange backups is at the Storage Group level, meaning that separate Storage Groups can be backed up simultaneously. 
Example: start tdpexcc backup SG1 full start tdpexcc backup SG2 full start tdpexcc backup SG3 full start tdpexcc backup SG4 full Naturally, all other elements of the backup must be sufficient for such parallelism to be meaningful. With some TDPs you may need to separately install the TSM API; but for this one the API code is included: you do not need to install any TSM BA client components unless you decide to use the TSM BA client scheduler. DSM.OPT: You don't need to put anything in the DSM.OPT file under Exchange Server 5.5: by default, DP will back up the Information Store and Directory. Scheduling: Must be a Command type schedule, which launches the TDP on the client machine. See the sample batch files shipped with the TDP. See manual. Version 2.2 released 2001/03, supporting Exchange 2000. As a new version number, it must be purchased: it cannot be downloaded. During a backup, each page of the database is examined for the correct checksum to verify that the data on the page is valid. TDP for Exchange (actually the Exchange backup/restore API itself) won't allow you to back up a corrupted database. When doing a full backup, this TDP will "inactivate" any previous incrementals that exist. TDP for Exchange performs incremental and differential backups by backing up the full transaction log files to TSM. They are all placed into a single TSM backup object. During restore, the individual log files will be extracted from the single TSM object and be written back to disk. A brief history of versions: 1.1.0 1998/04 1.1.1 1999/11 2.2.0 2001/03 5.1.5 2002/10 5.2.1 2003/09 The version jumped from 2 to 5 to align with the base TSM products. TDP for Exchange, API level, query 'tdpexcc query tsm' TDP for Exchange, port numbers Normal client port is 1501. TDP for Informix For Informix database backup. Is an API which implements the Open Group Backup Services application program interface (Open Group XBSA) functions. 
This TDP does not provide a CLI or GUI because such an interface is provided by Informix. Backups and restores are driven through Informix with a utility that Informix provides, called ON-Bar. You can use the BA client to query the backup data. Note that a dsmc Query Filespace will show 0 MB because that field is not used with the TDP. A dsmc Query Backup will also work if you can interpret the object naming scheme. (Use of the B/A client query commands typically works but is not "supported".) Object expiration: For database backups, general TSM policy is used. Log backups are uniquely named... you can use an Informix tool called onsmsync to control their expiration. Ref: IBM KB article "Managing Informix logs that are saved on the TSM Server" TDP for Lotus Domino vs. Notes TDP for Lotus Notes and TDP for Lotus Domino are not compatible with each other. With Lotus Domino Server R5, Lotus provided an API solely for the purposes of backup and restore, which is performed at the database level. Domino R4 did not have this...and so the technique for backing up and restoring on Domino R4 was very different. It was done at the item level. TDP for Lotus Notes >Product discontinued 2001/09/30.< Backs up at the document level. Good aspect: you have restore granularity down to the document level. Bad aspect: each document takes one TSM server object, so backing up or restoring an entire database with many documents could take a while and cause large TSM Server database extents. You can physically accomplish the task of backing up the Lotus Notes database using the ordinary TSM Backup/Archive client while the Notes server is running - but it may not be restorable, because the database was in transition during the backup. Hence the need for TDP. 
Stripes: A separate TSM Server session is created for each stripe, which then waits for the SQL Server to send data to each stripe. The SQL Server determines which data goes to which stripe, and writes the data to it. Environment variables: not used License file: agent.lic Options file (default): dsm.opt located in the TDP installation directory, or as specified by the /TSMOPTFile=____ command line parameter. Watch out for blanks in the path name when it is coded in the Object spec of a client schedule: enter the path name such that it ends up in double quotes in the schedule (by enclosing the double-quoted string in single quotes). Return codes: Look in the tdpsql.log and/or dsierror.log to find out the cause. Also see return codes in the API manual. Notes: The TDP, as an API, does not support things like the MS "RESTORE ... VERIFYONLY" operation. 5.2.1 will install and run with the PAID license from the 2.2.1 product. Retention periods: You cannot extend the retention period of a single backup but leave all of the others as they were: the same management class settings apply to all versions of a particular file. You can change the retention period of ALL of the current backups by changing the management class settings or by binding the backups to a new management class by using the INCLUDE statement and running a new backup. Inactivating old backups: Deleted databases do not "automatically" get inactivated: it is up to you to manually inactivate them, which you can do... Via the CLI, use the TDPSQLC INACTIVATE command, which is very similar to the RESTORE command (TDPSQLC HELP INACTIVATE). Via the GUI, go to "View", "Inactivate Tab", and you will see a new tab show up which allows you to choose the database backups that you would like to inactivate. Expiration of data: V1 of the MSSQL backups product performed its own expired data deletion; but thereafter the product conforms to standard TSM server policies. 
(The BACKDEL parameter is for deletion of temporary TSM Server objects used in unique situations such as a change in management class.) Restoral times: Before 2005: The larger the database being restored, the more time is required, as the DB file "container" is recreated on disk, with pre-formatting, before its contents can be restored. For example, a customer reports a 22 GB db restoral taking hours. The TDP waits around for the MSSQL work to complete before the TDP can proceed. (Boosting the COMMTimeout value is advised.) This should improve in SQL Server 2005, where SQL Server must still allocate the file space before doing a database restore, but the time-consuming initialization of database pages is no longer required. For a discussion of data striping, see IBM site Technote 1145253. See also: DIFFESTIMATE TDP for NDMP The TSM server uses NDMP to connect to the NAS file server to initiate, control, and monitor a file system backup or restore operation. This TDP used to be an add-on, separately priced and licensed product for performing NDMP backup and restore for Network Attached Storage file servers. As of TSM 5, it is incorporated into TSM Extended Edition. In 'REGister Node', Type=NAS is used. Ref: Admin Guide See also: nasnodename; NDMP; NetApp TDP for Oracle Operates between RMAN and the TSM server to effect Oracle backups. All objects are stored on the TSM server as Backup objects (not Archive, so an Archive Copy Group is not required). Error logs: dsierror.log, created by the TSM API; tdpoerror.log, created by the TDP proper. (tdpoerror.log is created in the local directory; may be $ORACLE_HOME/dbs/tdpoerror.log.) PASSWORDAccess settings: - In Unix, must be set to Prompt... Oracle specifies that a 3rd party vendor (in this case, TDP for Oracle) cannot spawn a child process (which in the TSM case would be the TCA). The TDP is not an executable, so it is not able to have a child process. 
Thus for Unix, there is no child process capability for the dsmtca module to retrieve the password. Therefore, TDP for Oracle on the Unix operating systems must use PASSWORDAccess Prompt. IBM recommends that you set TDPO_NODE in the tdpo.opt file, to be a node name different from the computer name. - In Windows, must be set to Generate. Do not set TDPO_NODE in the tdpo.opt file. Backuppiece: The Rman specifications state that only one copy of a backuppiece will exist at one time on Media Manager (DP for Oracle). So Oracle/Rman first tries to delete the backuppiece that it is about to create on the TSM Server. Unfortunately, Oracle/Rman also specifies that the delete routine act as a single and separate operation, so when Oracle tries to delete a backuppiece that does not exist, that is an error and DP for Oracle returns that error. There is no way for DP for Oracle to determine if the deletion is a true delete of a backuppiece or if Oracle is checking for backuppiece existence prior to backup. Consider: Changing the filespace name to something other than adsmorc... In the event that you have multiple Oracle instances on the same client, it is much more manageable when they each have a unique name. For example, if the database is discontinued, you can simply delete the filespace for that database. (The filespace name can be set in the tdpo.opt file). You will need to create a unique tdpo.opt file for each database. See also: RMAN TDP for Oracle and multi-stream backup Oracle can employ what it calls Channels to effect parallel backups. The effect within the TSM server depends upon your TSM storage pool collocation setting. With COLlocate=No, multi-streaming will occur and parallel backup will occur to your multiple tape drives. With COLlocate=Yes, multi-streaming will not occur: all the sessions wait for the same tape volume. 
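The serializing effect of collocation on multi-channel backups can be sketched with a toy model (Python; purely illustrative - the locks stand in for tape volumes, and none of this is TSM or RMAN code):

```python
import threading, time

def run_streams(num_streams, volume_locks):
    """Each backup 'stream' must hold the lock of its target volume
    while writing. Returns the peak number of streams writing at once."""
    peak = [0]; active = [0]; meter = threading.Lock()
    def stream(i):
        lock = volume_locks[i % len(volume_locks)]
        with lock:                      # wait for the tape volume
            with meter:
                active[0] += 1
                peak[0] = max(peak[0], active[0])
            time.sleep(0.05)            # simulate writing data
            with meter:
                active[0] -= 1
    threads = [threading.Thread(target=stream, args=(i,)) for i in range(num_streams)]
    for t in threads: t.start()
    for t in threads: t.join()
    return peak[0]

# COLlocate=Yes analog: every stream contends for the same volume -> serialized
serialized_peak = run_streams(3, [threading.Lock()])
# COLlocate=No analog: one volume per drive -> streams proceed in parallel
parallel_peak = run_streams(3, [threading.Lock() for _ in range(3)])
```

With one shared volume, peak concurrency stays at 1 no matter how many channels Oracle opens; with a volume per drive, the streams overlap.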
Collocation is typically desirable for restoral performance - but its value is minimized as very large backup files tend to occupy few tapes anyway. And in a commercial database restoral, you would often want all the db components restored together, and all backups from that point in time would be clustered together on tapes anyway, where any space taken by unrelated backups would be on either side of the Oracle backup data and would not much matter. If you do want collocation for Oracle backups, you can take the approach of defining a separate tape storage pool with COLlocate=No for the clients that run multiple stream backups; or you can employ a primary disk storage pool ahead of tape, where a DISK type storage pool does not collocate. TDP for R/3 (SAP) For automatic password handling (client option file PASSWORDAccess Generate), the encrypted password will be stored in the R/3 configuration file (init.bki), and the password can be set via the following - only if you have not already set that encrypted client password via the standard TSM client: Unix: backint -p /oracle/SID/dbs/init.utl -f password Windows: backint -p :\orant\database\init.utl -f password Example: backint -p initSYS.utl -f password See also: Backint TDP for SQL See: TDP for Microsoft SQL Server TDP maintenance and licenses "PTF" or "fixtest" versions of the Data Protection clients do not include a license file, meaning that they won't run at all without the ".lic" file: you need to have a "Paid in Full" license or a "Try and Buy" license. You can obtain a Try and Buy license through your IBM representative. TDPO Tivoli Data Protection for Oracle tdpo. file In Unix, the TDP file in which the node password is written, for PASSWORDAccess Generate. (In Windows, the password is stored in the Registry.) TDPO_AVG_SIZE TDP Oracle tdpo.opt option to specify the average size of an object sent to the TSM server, to influence where the sent object goes first in the storage pool hierarchy. 
The value should be large enough to accommodate the largest objects sent, but not to be so large that no objects would go to a first level disk storage pool (instead going to the next level tape storage pool). This option was discontinued in TDP 2.2.1 as being counterproductive. TDPO_FS Tivoli Data Protection for Oracle option to specify a file space name on the TSM server which TDP for Oracle uses for backup, delete, and restore operations. Name length: 1 to 1024 characters Default: adsmorc tdpoerror.log TDP for Oracle error log. As of 2.2.1, TDP Oracle no longer uses the Tivoli Storage Manager API error log file, dsierror.log. tdpsdan.txt The TDP for SQL Danish language message repository. See also: ANS0102W Teach A tape library operation wherein the robotic mechanism carefully explores the internals of the library, learning what elements (tape storage racks, tape drives) are present, and their exact locations in space (usually via infrared reflector patches). TEC Tivoli Enterprise Console; or, Tivoli Event Console. Aka T/EC. Tivoli Enterprise Console product is a powerful, rules-based event management application that integrates network, systems, database, and application management. It offers a centralized, global view of your computing enterprise while ensuring the high availability of your application and computing resources. It collects, processes, and automatically responds to common management events, such as a database server that is not responding, a lost network connection, or a successfully completed batch processing job. It acts as a central collection point for alarms and events from a variety of sources, including those from other Tivoli software applications, Tivoli partner applications, custom applications, network management platforms, and relational database systems. 
Ref: TSM Admin Guide, "Logging Tivoli Storage Manager Events to Receivers" See also: TECHost; TECBegineventlogging; TECPort; Data Protection Agents TEC events Refers to the events sent from a monitored system to the Tivoli Enterprise Console server. TECBegineventlogging Server option to activate the Tivoli Enterprise Console receiver during startup. This is analogous to issuing a BEGIN EVENTLOGGING TIVOLI on the server console. This specifies whether event logging for the Tivoli receiver should begin when the server starts up. If the TECHost option is specified, TECBegineventlogging defaults to Yes. Syntax: TECBegineventlogging Yes|No Yes Specifies that event logging begins when the server starts up and if a TECHost option is specified. No Specifies that event logging should not begin when the server starts up. To later begin event logging to the Tivoli receiver (if the TECHOST option has been specified), you must issue the BEGIN EVENTLOGGING command. Technical Guide redbook Each new version of TSM is typically accompanied by a Technical Guide redbook which nicely explains all the new features in that version. View at http://www.redbooks.ibm.com . In addition, in the frontmatter of the manuals is a Summary of Changes which enumerates the technical improvements in that release of the software. TECHost Server option to specify the Tivoli Enterprise Console server host for the Tivoli event server. Syntax: TECHost TECPort Server option to specify the Tivoli Enterprise Console port number on which the Tivoli event server is listening. This option is only required if the Tivoli event server is on a system that does not have a Port Mapper service running (portmap process). Syntax: TECPort where the port number must be between 0 and 32767. See also: TCPPort Testflag Nomenclature for a provisional client software developer's flag, which can be specified as a dsm.opt option (e.g., "TESTFLAG NODETACH") or like in Trace: 'dsmc i -traceflags=_______ ...' 
to cause some unusual action in the client. Threads See: Processes, server; SHow THReads Threads, client The TSM client uses the producer-consumer multithreading model. In a standard Incremental backup: When the producer thread gets a file specification to be processed, it queries the TSM server for information about existing backups for that file spec. The server sends the query results back to the client. The producer thread uses the query results to determine which files have changed since the last backup, then builds transactions (representing files to be backed up) to be processed by the consumer thread. The consumer thread then backs up the files in each transaction. Since it is the consumer thread that does the actual backup work (i.e. the transfer of the data to the server), you see its session with a large number of bytes received. An idle producer thread is typically due to it not being given any more file specs to process, so it isn't querying the TSM server. Once the consumer thread is done with its work (and there are no more file specifications to process), then the consumer and producer threads will close out their server sessions. If the producer session is timed out via the server's IDLETimeout setting, it will re-establish itself if necessary. The client's main thread is responsible for giving the producer thread file specs to process. The producer thread doesn't close out its session after processing each file spec, for performance reasons: if the file specs are coming in fairly quickly, then the overhead of stopping/restarting sessions could impact performance. In theory, the producer could close its session after a certain period of inactivity. See also: Multi-session Client Threshold for non-journaled incremental backups Windows client GUI preference introduced in TSM 4.2. Corresponds to the INCRTHreshold option. 
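The producer-consumer division of labor described under "Threads, client" above can be sketched as follows (Python; a generic illustration of the pattern, not TSM client code - the function name and the "changed since last backup" lookup are invented for the example):

```python
import queue, threading

def run_backup(filespecs, changed_since_last):
    """Toy producer-consumer model: the producer inspects file specs and
    queues 'transactions' of changed files; the consumer 'backs up'
    (here: records) each transaction."""
    txns = queue.Queue()
    backed_up = []

    def producer():
        for spec in filespecs:
            # In the real client this is a server query plus comparison;
            # here a simple membership test stands in for "changed".
            changed = [f for f in spec if f in changed_since_last]
            if changed:
                txns.put(changed)
        txns.put(None)  # signal: no more transactions

    def consumer():
        while True:
            txn = txns.get()
            if txn is None:
                break
            backed_up.extend(txn)  # stand-in for sending data to the server

    p = threading.Thread(target=producer)
    c = threading.Thread(target=consumer)
    p.start(); c.start(); p.join(); c.join()
    return backed_up

run_backup([["a", "b"], ["c"]], {"a", "c"})  # -> ["a", "c"]
```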
Ref: Windows client manual; TSM 4.2 Technical Guide redbook Threshold migration The process of moving files from a local file system to ADSM storage based on the high and low thresholds defined for the file system. Threshold migration is started automatically by HSM and can be started manually by a root user. Contrast with demand migration and selective migration. Threshold migration (HSM), max number of simultaneous processes Control via the MAXThresholdproc option in the Client System Options file (dsm.sys). Default: 3 Threshold migration (HSM), query Via the AIX command: 'dsmmigfs Query [FileSysName]' Threshold migration of a file system (HSM), force Via Unix command: 'dsmautomig FSname' Threshold migration of a file system (HSM), set levels Control via the AIX command: 'dsmmigfs Add|Update -hthreshold=N' for the high threshold migration percentage level. Use: 'dsmmigfs Add|Update -lthreshold=N' for the low threshold migration percentage level. THROUGHPUTDatathreshold Server option: Specifies the throughput threshold that a client Consumer session must achieve to prevent being cancelled after a specified number of minutes (plus media wait time). The time threshold starts at the time a client first sends data to the server for storage (as opposed to setup or session housekeeping data). Syntax: THROUGHPUTDatathreshold Nkbpersec where the number of KB per second specifies the throughput that client sessions must achieve to prevent cancellation after THROUGHPUTTimethreshold minutes have elapsed. This threshold does not include time spent waiting for media mounts. A value of 0 prevents examining client sessions for insufficient throughput. Throughput is computed by adding send and receive byte counts and dividing by the length of the session. The length does not include time spent waiting for media mounts and starts at the time a client sends data to the server for storage. Code: 0 - 99999999. Default: 0 Note: Interactive sessions, i.e. 
command line and graphical interface clients, will be affected by these parameters as calculations are cumulative across multiple operations. When a session is cancelled for being over the throughput time threshold and under the throughput data threshold, the following message will appear: ANR0488W Session xx for node yy ( zz ) terminated - transfer rate is less than ww kilobytes per second and more than vv minutes have elapsed since first data transfer xx = session number yy = node name zz = platform name ww = transfer rate in kilobytes per second vv = elapsed time since first data transfer See also: Consumer session; SETOPT THROUGHPUTTimethreshold Server option: Specifies the time threshold for a Consumer session after which it may be cancelled for insufficient throughput. Syntax: THROUGHPUTTimethreshold Nmins where the minutes specify the threshold for examining client sessions and cancelling them if the throughput threshold is not met (see the THROUGHPUTDatathreshold option). This threshold does not include time spent waiting for media mounts. The time threshold starts at the time a client first sends data to the server for storage (as opposed to setup or session housekeeping data). A value of 0 prevents examining client sessions for insufficient throughput. Code: 0 - 99999999 (minutes). Default: 0 (which disables it) See also: Consumer session; SETOPT tid Thread ID. Time of day, per server ADSM server command 'SHow TIME' (undocumented) Time zone See: ACCept Date TIMEformat, client option, query 'dsmc Query Option' in ADSM or 'dsmc show options' in TSM; see "Time Format" value. 0 indicates that your opsys dictates the format. TIMEformat, server option, query 'Query OPTion' and look at the "TimeFormat" value. TIMEformat, client option, set Definition in the client user options file. Specifies the format by which time is displayed by the ADSM client. NOTE: Not usable with AIX or Solaris, in that they use NLS locale settings. 
Code: 1 for 23:00:00 2 for 23,00,00 3 for 23.00.00 4 for 12:00:00AM/PM Default: 1 Query: 'dsmc Query Options' in ADSM or 'dsmc show options' in TSM and look at the "Time Format" value. A value of 0 indicates that your opsys dictates the format. See also: DATEformat TIMEformat, server option, set Definition in the server options file. Specifies the format by which time is displayed by the ADSM server: 1 for 23:00:00 2 for 23,00,00 3 for 23.00.00 4 for 12:00:00AM/PM Default: 1 Ref: Installing the Server... Timeout values See: COMMTimeout; IDLETimeout; MOUNTWait; THROUGHPUTTimethreshold; Client sessions, limit time TIMESTAMP SQL: A typename in the ADSM database. In report form, it looks like: 2000-05-10 22:37:37.000000 Portions of it can be accessed via a CAST(... AS ___) where ___ can be one of DATE, DAY, DAYNAME, DAYOFWEEK, DAYOFYEAR, DAYS, DAYSINMONTH, DAYSINYEAR, HOUR, MINUTE, MONTH, MONTHNAME, QUARTER, SECOND, TIME, TIMESTAMP, WEEK, YEAR. Sample of seeking date > 7 days old: SELECT * FROM ADSM.FILESPACES WHERE CAST((CURRENT_TIMESTAMP-BACKUP_END)DAY AS DECIMAL(18,0))>7 See also: HOUR(); MINUTE(); SECOND(). Timestamp Control Mode (HSM) One of four execution modes provided by the dsmmode command. Execution modes allow you to change the space management related behavior of commands that run under dsmmode. The timestamp control mode controls whether commands preserve the access time for a file or set it to the current time. See also: execution mode Tivoli The name of the enterprise management software company, acquired by IBM, and then given responsibility for the * Storage Manager product. Tivoli Data Protection for Exchange See: TDP for Exchange Tivoli Storage Manager Formally called IBM Tivoli Storage Manager, as of 2002/04. Tivoli Storage Manager for Databases Consolidates former products as of 2002/05: Tivoli Storage Manager for Databases: Tivoli Data Protection for Informix, Tivoli Data Protection for Oracle, and Tivoli Data Protection for Microsoft SQL. 
Relies on the backup application program interfaces (APIs) provided by several different database packages to store backup data in the TSM server: Microsoft SQL Server, Oracle, and IBM Informix. (A TSM backup client is also available for IBM DB2 databases, but this client is included with the DB2 software; it is not part of the Tivoli Storage Manager for Databases product.) Ref: May 2002 whitepaper "Comprehensive, flexible backup and recovery for relational databases". www.tivoli.com/products/index/ storage-mgr-db/ See also: TDP for Informix Tivoli Storage Manager for Hardware Various hardware storage subsystems provide facilities which specifically help make backups more efficient, such as Flash Copy on the IBM ESS. This provides a means for TSM to perform backups from the snapshots, rather than contending with the file system or database at the operating system or database system level. There are, of course, ramifications and caveats. This adjunct product is currently for DB2 and Oracle database backups. http://www.ibm.com/software/tivoli/ products/storage-mgr-hardware/ Tivoli Storage Manager for Mail A software module for IBM Tivoli Storage Manager that automates the data protection of email servers running either Lotus Domino or Microsoft Exchange. This single facility replaces the two prior, separate products as of 2002/04: Tivoli Data Protection for Lotus Domino, and Tivoli Data Protection for Microsoft Exchange Server. www.tivoli.com/products/index/ storage-mgr-mail/ Tivoli.com The Tivoli web site, until 2003/02/01, when it was absorbed into IBM.com for corporate consistency. \tivoli\tsm\Server\adsmdll.dll Like: C:\tivoli\tsm\Server\adsmdll.dll At least through TSM 4.2, this is the TSM client module on Windows. TLM Generically, Tape Library Manager. Product: Backup and disaster recovery product from Connected. TLS-NNNN Qualstar company Tape Library System model number, where NNNN identifies the specific model. The first N is the DLT series identifier. 
The second N specifies the number of drives in the library. The final NN is the maximum number of cartridges within magazines. TME Tivoli Management Environment. An integrated suite of systems management applications for a distributed client/server environment. /tmp The Unix temporary files file system. *SM has never wanted to back up the /tmp file system, or any files in it, via Incremental or Selective: there is an implied Exclude in effect for /tmp, even if you don't specify one. Some customers report being able to get around this by coding /tmp in the client DOMain option. Likewise, HSM does not allow you to add /tmp to its repertoire of controlled file systems, as that doesn't make sense. See: ALL-LOCAL; DOMain; Raw logical volume; Shared memory /tmp/.8000001e.1a0e The kind of filename created by mail reader Pine, owned by a user, containing the PID of the pine process. -TODate (and -FROMDate) Client option, as used with Restore and Retrieve, to limit the operation to Active and Inactive files up to and including the specified date. Used on RESTORE, RETRIEVE, QUERY ARCHIVE and QUERY BACKUP command line commands, usually in conjunction with -TOTime (and -FROMTime). The operation proceeds by the server sending the client the full list of files, for the client to filter down to those meeting the date requirement. A non-query operation will then cause the client to request the server to send the data for each candidate file to the client, which will then write it to the designated location. See also: DATEformat Contrast with: -PITDate Total number of bytes transferred: In the summary statistics from an Archive or Backup operation, or the Activity Log message ANE4961I which records the client operation stats, the sum of all bytes transferred. The value will be reported in a form suiting its magnitude, as in samples: "114.45 MB", "1.53 GB". 
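That magnitude-based rendering can be sketched roughly as follows (Python; an illustration of the idea only - the client's exact thresholds and rounding rules are not documented here):

```python
def format_transfer(num_bytes):
    """Render a byte count the way the summary statistics do: pick the
    largest binary unit that keeps the value readable (illustrative
    sketch; assumes 1024-based units and two decimal places)."""
    for unit, factor in (("GB", 1024**3), ("MB", 1024**2), ("KB", 1024)):
        if num_bytes >= factor:
            return "%.2f %s" % (num_bytes / factor, unit)
    return "%d B" % num_bytes

format_transfer(120013250)  # a value in the "114.45 MB" range
```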
Note that in Unix and other systems with simple directory structures, the number will probably be less than the sum reflected by including the numbers shown on "Directory-->" lines of the report, in that *SM stores only the name and attributes of directories in its database. Note also that Retry operations may inflate this value, if they result in the file being re-sent to the server, as in the case of the beginning of a direct-to-tape backup, when the tape is not yet mounted (message ANS1114I). Total number of objects updated Client Summary Statistics element: The total number of objects updated. These are files whose attributes, such as file owner or file permissions, have changed. Total number of objects deleted: In the summary statistics from an Archive or Backup operation, or the Activity Log message ANE4957I which records the client operation stats. This is a count of the objects deleted from the client disk file system after being successfully sent to the server storage pool in an Archive operation where -DELetefiles is used. The number is zero for all Backup commands. Total number of objects expired Client Summary Statistics element: Objects that have been expired either because they no longer exist on the TSM client, have been excluded by the client, or have been rebound to a new management class which retains fewer versions. Total number of objects failed: In the summary statistics from an Archive or Backup operation, or the Activity Log message ANE4961I which records the client operation stats. Reflects problems encountered during the job. Refer to the dsmerror.log for problem details. Message ANS1802E will appear at the end of the backup of the file system having the problem. Message ANS1228E usually points out the file that failed. The failure cause most typically is files being active during backup, as per message ANS4037E (consider boosting your CHAngingretries value). 
Or, message ANS4005E points out a file which was deleted before it could be backed up. You can also search the body of the job for messages other than ANS1898I progress messages. See also messages ANS4228E, ANS4312E. Could be the inability to use a tape that is stuck in a drive, or that the drive is disabled. If the number failed equals the number examined, it is likely a client defect, as in APAR IC41440. Total number of objects inspected: In the summary statistics from an Archive or Backup operation, or the Activity Log message ANE4952I which records the client operation stats. Reflects the number of file system objects eligible for inspection - which is reduced in Backup or Archive according to the Include/Exclude options you coded for either operation. When using journal-based backup, the number of objects inspected may be less than the number of objects backed up. In Unix, the "." file in the highest level directory is not backed up, which is why "objects backed up" is one less than "objects inspected". Total number of objects rebound Client Summary Statistics element: Total number of objects rebound to a different management class. Total Storage Expert (TSE) Can co-exist with TSM; but be aware that TSE is a Java application, and as such is a resource hog. TotalStorage See: IBM TotalStorage -TOTime (and -FROMTime) Client option, as used with Restore and Retrieve, to limit the operation to files up to and including the specified time. Used on RESTORE, RETRIEVE, QUERY ARCHIVE and QUERY BACKUP command line commands, usually in conjunction with -TODate (and -FROMDate) to limit the files involved. The operation proceeds by the server sending the client the full list of files, for the client to filter down to those meeting the time requirement. A non-query operation will then cause the client to request the server to send the data for each candidate file to the client, which will then write it to the designated location. 
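The client-side filtering just described can be sketched as follows (Python; the list layout, field names, and function name are invented for illustration - the real client applies the window to the object list the server returns):

```python
from datetime import datetime

def filter_by_window(server_list, fromdate=None, todate=None):
    """Keep only the (name, backup_time) entries that fall within the
    -FROMDate/-TODate window, inclusive on both ends, as the options
    documentation describes."""
    kept = []
    for name, when in server_list:
        if fromdate is not None and when < fromdate:
            continue
        if todate is not None and when > todate:
            continue
        kept.append(name)
    return kept

candidates = [("a.txt", datetime(2004, 9, 1)), ("b.txt", datetime(2004, 10, 5))]
filter_by_window(candidates, todate=datetime(2004, 9, 30))  # keeps only "a.txt"
```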
See also: TIMEformat TPname Client System Options file (dsm.sys) option to specify a symbolic name for the transaction program name. For SNA. Discontinued as of TSM 4.2. TPNProfilename server option, query 'Query OPTion' TRACE Server command for tracing server operation to capture data relating to a problem situation. You should do so only as instructed by IBM Support, noting that tracing can add overhead and itself jeopardize full, stable operation. Example: adsm> trace enable PVR MMS (use PVR for suspected drive problems, MMS for suspected robotics problems. PVR generates a LOT of output) adsm> trace begin tsmtrace.out ...replicate your problem situation... adsm> trace end Capture the results, from the Activity Log, via like: adsm> q actlog begintime=