ADSM/TSM Quick Facts in alphabetical order, supplemented thereafter by topic discussions as compiled by Richard Sims (r b s @ b u . e d u), Boston University (www.bu.edu), Office of Information Technology On the web at http://people.bu.edu/rbs/ADSM.QuickFacts Last update: 2004/10/06 This reference was originally created for my own use as a systems programmer's "survival tool", to accumulate essential information and references that I knew I would have to refer to again, and quickly re-find it. In participating in the ADSM-L mailing list, it became apparent that others had a similar need, and so it made sense to share the information. The information herein derives from many sources, including submissions from other TSM customers. Thus, the information is that which everyone involved with TSM has contributed to a common knowledge base, and this reference serves as an accumulation of that knowledge, largely reflective of the reality of working with the TSM product as an administrator. I serve as a compiler and contributor. This informal, "real-world" reference is intended to augment the formal, authoritative documentation provided by Tivoli and allied vendors, as frequently referenced herein. See the REFERENCES area at the bottom of this document for pointers to salient publications. Command syntax is included for the convenience of a roaming techie carrying a printed copy of this document, and thus is not to be considered definitive or inclusive of all levels for all platforms: refer to manuals for the syntax specific to your environment. Upper case characters shown in command syntax indicate that at least those characters are required, not that they have to be entered in upper case. I realize that I need to better "webify" this reference, and intend to do so in the future. (TSM administration is just a tiny portion of my work, and many other things demand my time.) In dealing with the product, one essential principle must be kept in mind, which governs the way the product operates and restricts the server administrator's control of that data: the data which the client sends to a server storage pool will always belong to the client - not the server. There is no provision on the server for inspecting or manipulating file system objects sent by the client. Filespaces are the property of the client, and if the client decides not to do another backup, that is the client's business: the server shall take no action on the Active, non-expiring files therein. It is incumbent upon the server administrator, therefore, to maintain a relationship with client administrators so that information can be passed when a filespace is obsolete and discardable, when it has fallen into disuse. ? "Match-one" wildcard character used in Include/Exclude patterns to match any single character except the directory separator; it does not match to end of string. Cannot be used in directory or volume names. * "Match-all" wildcard character used in Include/Exclude patterns to match zero or more characters, but it does not cross a directory boundary. Cannot be used in directory or volume names. * (asterisk) SQL SELECT: to specify that all columns in a table are being referenced, which is to say the entirety of a row. As in: SELECT PLATFORM_NAME, COUNT(*) AS "Number of nodes" FROM NODES *.* Wildcard specification often seen in Windows include-exclude specifications. Note that *.* means any file name with the '.' character anywhere in the name, whereas * means any file name.
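As an illustrative sketch of the wildcard entries above (the paths and patterns here are hypothetical examples, not from the manuals), in a Windows client include-exclude list:
   exclude c:\temp\...\*        excludes every file under c:\temp, including
                                files with no '.' in their names (e.g. README)
   exclude c:\temp\...\*.*      excludes only files whose names contain a '.'
                                somewhere (README would NOT match)
   exclude c:\logs\log?.txt     '?' matches exactly one character: log1.txt
                                matches, log12.txt does not
Refer to the client manual for your platform for the authoritative pattern rules.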
*SM Wildcard product name first used on ADSM-L by Peter Jodda to generically refer to the ADSM->TSM product - which has become adroit, given the increasing frequency with which IBM is changing the name of the product. See also: ESM; ITSM & (ampersand) Special character in the MOVe DRMedia, MOVe MEDia, and Query DRMedia commands, CMd operand, as the lead character for special variable names. [ "Open character class" bracket character used in Include/Exclude patterns to begin the enumeration of a character class. That is, to wildcard on any of the individual characters specified. End the enumeration with ']'; which is to say, enclose all the characters within brackets. You can code like [abc] to represent the characters a, b, and c; or like [a-c] to accomplish the same thing. Within the character class specification, you can code special characters with a backslash, as in [abc\]de] to include the ']' char. > Redirection character in the server administrative command line interface, if at least one space on each side of it, saying to replace the specified output file. There is no "escape" character to render this character "un-special", as a backslash does in Unix. Thus, you should avoid coding " > " in an SQL statement: eliminate at least one space on either side of it. Ref: Admin Ref "Redirecting Command Output" >> Redirection characters in the server administrative command line interface, if at least one space on each side of it, saying to append to the specified output file. Ref: Admin Ref "Redirecting Command Output" {} Use braces in a file path specification within a query or restore/retrieve to isolate and explicitly identify the file space name (or virtual mount point name) to *SM, in cases where there can be ambiguity. By default, *SM uses the file space with the longest name which matches the beginning of that file path spec, and that may not be what you want. For example: If you have two filespaces "/a" and "/a/b" and want to query "/a/b/somefile" from the /a file system, specify "{/a/}somefile". See: File space, explicit specification || SQL: Logical OR operator. Also effects concatenation, as in SELECT filespace_name || hl_name || ll_name AS "_______File Name________" Note that not all SQL implementation support || for concatenation: you may have to use CONCAT() instead. - "Character class range" character used in Include/Exclude patterns to specify a range of enumerated characters as in "[a-z]". ] "Close character class" character used in Include/Exclude patterns to end the enumeration of a character class. \ "Literal escape" character used in Include/Exclude patterns to cause an enumerated character class character to be treated literally, as when you want to include a closing square bracket as part of the enumerated string ([abc\]xyz]). ... "Match N directories" characters used in Include/Exclude patterns to match zero or more directories. Example: "exclude /cache/.../*" excludes all directories (and files) under directory "/cache/". ... As a filespace name being displayed at the server, indicates that the client stored the filespace name in Unicode, and the server lacks the "code page" which allows displaying the name in its Unicode form. / (slash) At the end of a filespec, in Unix means "directory". A 'dsmc i' on a filespec ending in a slash says to backup only directories with matching names. To back up files under the directories, you need to have an asterisk after the slash (/*). 
If you specify what you know to be a directory name, without a slash, *SM will doggedly believe it to be the name of a file - which is why you need to maintain the discipline of always coding directory names with a slash at the end. /... In ordinary include-exclude statements, is a wildcard meaning zero or more directories. /... DFSInclexcl: is interpreted as the global root of DFS. /.... DFSInclexcl: Match zero or more directories (in that "/..." is interpreted as the global root of DFS). /* */ Used in Macros to enclose comments. The comments cannot be nested and cannot span lines. Every line of a comment must contain the comment delimiters. = (SQL) Is equal to. The SQL standard specifies that the equality test is case sensitive when comparing strings. != (not equal) For SQL, you instead need to code "<>". <> SQL: Means "not equal". $$ACTIVE$$ The name given to the provisional active policy set where definitions have been made (manually or via Import), but you have not yet performed the required VALidate POlicyset and ACTivate POlicyset to commit the provisional definitions, whereafter there will be a policy set named ACTIVE. Ref: Admin Guide See also: Import 0xdeadbeef Some subsystems pre-populate allocated memory with the hexadecimal string 0xdeadbeef (this 32-bit hex value is a data processing affectation) so as to be able to detect that an application has failed to initialize an acquired storage area with binary zeroes. Landing on a halfword boundary can obviously lead to getting the variant "0xbeefdead". 10.0.0.0 - 10.255.255.255 Private subnet address range, as defined in RFC 1918, commonly used via Network Address Translation behind some firewall routers/switches. You cannot address such a subnet from the Internet: private subnet addresses can readily initiate communication with each other and servers on the Internet, but Internet users cannot initiate contacts with them. See also: 172.16.0.0 - 172.31.255.255; 192.168.0.0 - 192.168.255.255 1500 Server port default number for serving clients. Specify via TCPPort server option and DEFine SERver LLAddress. 1501 Client port for backups (schedule). Note that this port exists only when the scheduled session is due: the client does not keep a port open when it is waiting for the schedule to come around. 1510 Client port for Shared Memory. 1543 ADSM HTTPS port number. 1580 Client admin port. HTTPPort default. 1581 Default HTTPPort number for the Web Client TCP/IP port. 172.16.0.0 - 172.31.255.255 Private subnet address range, as defined in RFC 1918, commonly used via Network Address Translation behind some firewall routers/switches. You cannot address such a subnet from the Internet: private subnet addresses can readily initiate communication with each other and servers on the Internet, but Internet users cannot initiate contacts with them. See also: 10.0.0.0 - 10.255.255.255; 192.168.0.0 - 192.168.255.255 192.168.0.0 - 192.168.255.255 Private subnet address range, as defined in RFC 1918, commonly used via Network Address Translation behind Asante and other brand firewall routers/switches. You cannot address such a subnet from the Internet: private subnet addresses can readily initiate communication with each other and servers on the Internet, but Internet users cannot initiate contacts with them. See also: 10.0.0.0 - 10.255.255.255; 172.16.0.0 - 172.31.255.255 2 GB limit (2 GB limit) Through AIX 4.1, Raw Logical Volume (RLV) partitions and files are limited to 2 GB in size. It takes AIX 4.2 to go beyond 2 GB.
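Relating to the $$ACTIVE$$ entry above, a minimal sketch of committing a provisional policy set (the domain and policy set names here are hypothetical):
   VALidate POlicyset MYDOMAIN MYPOLSET
   ACTivate POlicyset MYDOMAIN MYPOLSET
After activation, querying the domain's policy sets will show one named ACTIVE, reflecting the committed definitions.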
2105 Model number of the IBM Versatile Storage Server. Provides SNMP MIB software ibm2100.mib . www.ibm.com/software/vss 3420 IBM's legacy, open-reel, half-inch tape format, circa 1974. Records data linearly in 9 tracks (1 byte plus odd parity). Reels could hold as much as 2400 feet of tape. Capacity: 150 MB Pigment: Iron Models 4,6,8 handle up to 6250 bpi, with an inter-block gap of 0.3". Reel capacity: Varies according to block size - max is 169 MB for a 2400' reel at 6250 bpi. 3466 See also: Network Storage Manager (NSM) 3466, number of *SM servers Originally, just one ADSM server per 3466 box. But as of 2000, multiple, as in allowing the 3466 to perform DR onto another TSM server. (See http://www.storage.ibm.com/nsm/nsmpubs/nspubs.htm) 3466 web admin port number 1580. You can specify it as part of the URL, like http://______:1580 . 3480, 3490, 3490E, 3590, 3494... IBM's high tape devices (3480, 3490, 3490E, 3590, 3494, etc.) are defined in SMIT under DEVICES then TAPE DRIVES; not thru ADSM DEVICES. This is because they are shipped with the tape hardware, not with ADSM. Also, these devices use the "/dev/rmtX" format: all other ADSM tape drives are of the "/dev/mtX" format. 3480 IBM's first generation of this 1/2" tape cartridge technology, announced March 22, 1984 and available January, 1985. Used a single-reel approach and servo tracking pre-recorded on the tape for precise positioning and block addressing. Excellent start-stop performance. The cartridge technology would endure and become the IBM cartridge standard, prevailing into the 3490 and 3590 models for at least 20 more years. Tracks: 18, recorded linearly and in parallel until EOT encountered (not serpentine like later technologies), whereupon the tape would be full. Recording density: 38,000 bytes/inch Read/write rate: 3 MB/sec Rewind time: 48 seconds Tape type: chromium dioxide (CrO2) Tape length: 550 feet Cartridge dimensions: 4.2" wide x 4.8" high x 1" thick Cartridge capacity: Varies according to block size - max is 208 MB. Transfer rate: 3 MB/s Next generation: 3490 3480 cleaning cartridge Employs a nylon filament ribbon instead of magnetic tape. 3480 tape cartridge AKA "Cartridge System Tape". Color: all gray. Identifier letter: '1'. See also: CST; HPCT; Media Type 3480 tape drive definition Defined in SMIT under DEVICES then TAPE DRIVES; not thru ADSM DEVICES. This is because as an IBM "high tape device" it is shipped with the tape hardware, not with ADSM. Also, these devices use the "/dev/rmtX" format: all other ADSM tape drives are of the "/dev/mtX" format. 3490 IBM's second generation of this 1/2" tape cartridge technology, circa 1989, using a single-reel approach and servo tracking pre-recorded on the tape for precise positioning. Excellent start-stop performance. Media type: CST Tracks: 18 (like its 3480 predecessor) recorded linearly and in parallel until EOT encountered (not serpentine like later technologies), whereupon the tape would be full. Transfer rate: 3 MB/sec sustained Capacity: 400 MB physical Tape type: chromium dioxide (CrO2) Tape length: 550 feet Note: Cannot read tapes produced on 3490E, due to 36-track format of that newer technology. Previous generation: 3480 Next generation: 3490E 3490 cleaning cartridge Employs a nylon filament ribbon instead of magnetic tape.
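A quick check, for illustration, that such a drive is configured and Available to AIX before referencing it in *SM (device names will differ on your system; the same commands appear under the 3590 entries later in this document):
   lsdev -Cc tape           list all configured tape devices (rmt0, rmt1, ...)
   lsdev -C -l rmt0         show the Available/Defined state of one drive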
3490 EOV processing 3490E volumes will do EOV processing just before the drive signals end of tape (based on a calculation from IBM drives), when the drive signals end of tape, or when maxcapacity is reached, if maxcapacity has been set. When the drive signals end of tape, EOV processing will occur even if maxcapacity has not been reached. Contrast with 3590 EOV processing. 3490 not getting 2.4 GB per tape? In MVS TSM, if you are seeing your 3490 cartridges getting only some 800 MB per tape, it is probably that your Devclass specification has COMPression=No rather than Yes. Also check that your MAXCAPacity value allows filling the tape, and that at the 3490 drive itself that it isn't hard-configured to prevent the host from setting a high density. 3490 tape cartridge AKA "Enhanced Capacity Cartridge System Tape". Color: gray top, white base. Identifier letter: 'E' Capacity: 800 MB native; 2.4 GB compressed (IDRC 3:1 compression) 3490 tape drive definition Defined in SMIT under DEVICES then TAPE DRIVES; not thru ADSM DEVICES. This is because as an IBM "high tape device" it is shipped with the tape hardware, not with ADSM. Also, these devices use the "/dev/rmtX" format: all other ADSM tape drives are of the format "/dev/mtX". 3490E IBM's third generation of this 1/2" tape cartridge technology, using a single-reel approach and servo tracking pre-recorded on the tape for precise positioning. Excellent start-stop performance. Designation: CST-2 Tracks: 36, implemented in two sets of 18 tracks: the first 18 tracks are recorded in the forward direction until EOT is encountered, whereupon the heads are electronically switched (no physical head or tape shifting) and the tape is then written backwards towards BOT. Can read 3480 and 3490 tapes. Capacity: 800 MB physical; 2.4 GB with 3:1 compression. IDRC recording mode is the default, and so tapes created on such a drive must be read on an IDRC-capable drive. Transfer rate: Between host and tape unit buffer: 9 MB/sec. Between buffer and drive head: 3 MB/sec. Capacity: 800 MB physical Tape type: chromium dioxide (CrO2) Tape length: 800 feet Previous generation: 3490 Next generation: 3590 3490E cleaning cartridge Employs a nylon filament ribbon instead of magnetic tape. 3490E Model F 36-track head to read/write 18 tracks bidirectionally. 349x tape library use, define "ENABLE3590LIBRary" definition in the server options file. Ref: Installing the Server and Administrative Client. 3494 IBM robotic libary with cartridge tapes, originally introduced to hold 3490 tapes and drives, but later to hold 3590 tapes and drives (same cartridge dimensions). Model HA1 is high availability: instead of just one accessor (robotic mechanism) at one end, it has two, at each end. See also: Convenience Input-Output Station; Dual Gripper; Fixed-home Cell; Floating-home Cell; High Capacity Output Facility; Library audit; Library; 3494, define; Library Manager; SCRATCHCATegory; Volume Categories; Volume States 3494, access via web This was introduced as part of the IBM StorWatch facility in a 3494 Library Manager component called 3494 Tape Library Specialist, available circa late 2000. It is a convenience facility, that is read-only: one can do status inquiries, but no functional operations. If at the appropriate LM level, the System Summary window will show "3494 Specialist". 3494, add tape to 'CHECKIn LIBVolume ...' Note that this involves a tape mount. 
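For example (a sketch only - the library and volume names are hypothetical, and available parameters vary by server level), after placing the cartridge in the library:
   CHECKIn LIBVolume MY3494 VOL001 STATus=SCRatch
The named volume is mounted so its label can be verified (the tape mount noted above); use STATus=PRIvate for a volume whose contents must be preserved, or SEARCH=Yes to check in all eligible volumes already inside the library without naming each. See the Admin Ref for the parameters applicable to your server level.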
3494, audit tape (examine its barcode 'mtlib -l /dev/lmcp0 -a -V VolName' to assure physically in library) Causes the robot to move to the tape and scan its barcode. 'mtlib -l /dev/lmcp0 -a -L FileName' can be used to examine tapes en mass, by taking the first volser on each line of the file. 3494, CE slot See: 3494 reserved cells 3494, change Library Manager PC In rare circumstances it will be necessary to swap out the 3494's industrial PC and put in a new one. A major consideration here is that the tape inventory is kept in that PC, and the prospect of doing a Reinventory Complete System after such a swap is wholly unpalatable in that it will discard the inventory and rebuid it - with all the tape category code values being lost, being reset to Insert. So you want to avoid that. (A TSM AUDit LIBRary can fix the category codes, but...) And as Enterprise level hardware and software, such changes should be approached more intelligently by service personnel, anyway. Realize that the LM consists of the PC, the LM software, and a logically separate database - which should be as manageable as all databases can be. If you activate the Service menu on the 3494 control panel, under Utilities you will find "Dump database..." and "Restore database...", which the service personnel should fully exploit if at all possible to preserve the database across the hardware change. (The current LM software level may have to be brought up to the level of the intended, new PC for the database transfer to work well.) 3494, change to manual operation On rare occurrences, the 3494 robot will fail and you need to continue processing, by switching to manual operation. This involves: - Go to the 3494 Operator Station and proceed per the Using Manual Mode instructions in the 3494 OpGuide. Be sure to let the library Pause operation complete before entering Manual Mode. - TSM may have to be told that the library is in manual mode. You cannot achieve this via UPDate LIBRary: you have to define another instance of your library under a new name, with LIBType=MANUAL. Then do UPDate DEVclass to change your 3590 device class to use the library in manual mode for the duration of the robotic outage. - Either watch the Activity Log, doing periodic Query REQuest commands; or run 'dsmadmc -MOUNTmode'. REPLY to outstanding mount requests to inform TSM when a tape is mounted and ready. If everything is going right, you should see mount messages on the tape drive's display and in the Manual Mode console window, where the volser and slot location will be displayed. If a tape has already been mounted in Manual Mode, dismounted, and then called for again, there will be an "*" next to the slot number when it is displayed on the tape drive calling for the tape, to clue you in that it is a recent repeater. 3494, count of all volumes Via Unix command: 'mtlib -l /dev/lmcp0 -vqK' 3494, count of cartridges in There seems to be no way to determine Convenience I/O Station this. One might think of using the cmd 'mtlib -l /dev/lmcp0 -vqK -s ff10' to get the number, but the FF10 category code is in effect only as the volume is being processed on its way to the Convenience I/O. The 3494 Operator Station status summary will say: "Convenience I/O: Volumes present", but not how many. The only recourse seems to be to create a C program per the device driver manual and the mtlibio.h header file to inspect the library_data.in_out_status value, performing an And with value 0x20 and looking for the result to be 0 if the Convenience I/O is *not* all empty. 
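Relating to the "3494, change to manual operation" entry above, a command sketch of the library/devclass switch (names are hypothetical):
   DEFine LIBRary MAN3494 LIBType=MANUAL
   UPDate DEVclass 3590CLASS LIBRary=MAN3494
   ... operate in Manual Mode, replying to mount requests ...
   UPDate DEVclass 3590CLASS LIBRary=LIB3494      (when the robot is back)
Watch for mount requests with 'Query REQuest' and answer them with 'REPLY request_number', or run 'dsmadmc -MOUNTmode' to see the mount messages as they occur.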
3494, count of CE volumes Via Unix command: 'mtlib -l /dev/lmcp0 -vqK -s fff6' 3494, count of cleaning cartridges Via Unix command: 'mtlib -l /dev/lmcp0 -vqK -s fffd' 3494, count of SCRATCH volumes Via Unix command: (3590 tapes, default ADSM SCRATCH 'mtlib -l /dev/lmcp0 -vqK -s 12E' category code) 3494, eject tape from See: 3494, remove tape from 3494, identify dbbackup tape See: dsmserv RESTORE DB, volser unknown 3494, inventory operations See: Inventory Update; Reinventory complete system 3494, list all tapes 'mtlib -l /dev/lmcp0 -qI' (or use options -vqI for verbosity, for more descriptive output) 3494, manually control Use the 'mtlib' command, which comes with 3494 Tape Library Device Driver. Do 'mtlib -\?' to get usage info. 3494, monitor See: mtevent 3494, not all drives being used See: Drives, not all in library being used 3494, number of drives in Via Unix command: 'mtlib -l /dev/lmcp0 -qS' 3494, number of frames (boxes) The mtlib command won't reveal this. The frames show in the "Component Availability" option in the 3494 Tape Library Specialist. 3494, partition/share TSM SAN tape library sharing support is only for libraries that use SCSI commands to control the library robotics and the tape management. This does *not* include the 3494, which uses network communication for control. Sharing of the 3494/3590s thus has to occur via conventional partitioning or dynamic drive sharing (which is via the Auto-Share feature introduced in 1999). There is no dynamic sharing of tape volumes: they have to be pre-assigned to their separate TSM servers via Category Codes. Ref: Redpaper "Tivoli Storage Manager: SAN Tape Library Sharing". Redbook "Guide to Sharing and Partitioning IBM Tape Library Data" (SG24-4409) 3494, ping You can ping a 3494 from another system within the same subnet, regardless of whether that system is in the LM's list of LAN-authorized hosts. If you cannot ping the 3494 from a location outside the subnet, it may mean that the 3494's subnet is not routed - meaning that systems on that subnet cannot be reached from outside. 3494, remote operation See "Remote Library Manager Console Feature" in the 3494 manuals. 3494, remove tape from 'CHECKOut LIBVolume LibName VolName [CHECKLabel=no] [FORCE=yes] [REMove=no]' To physically cause an eject via AIX command, change the category code to EJECT (X'FF10'): 'mtlib -l /dev/lmcp0 -vC -V VolName -t ff10' The more recent Library Manager software has a Manage Import/Export Volumes menu, wherein Manage Insert Volumes claims ejectability. 3494, RS-232 connect to SP Yes, you can connect a 3494 to an RS/6000 SP via RS-232, though it is uncommon, slow, and of limited distance compare to using ethernet. 3494, status 'mtlib -l /dev/lmcp0 -qL' 3494, steps to set up in ADSM - Define the library - Define the drives in it - Add "ENABLE3590LIBRARY YES" to dsmserv.opt - Restart the server. (Startup message "ANR8451I 349x library LibName is ready for operations".) 3494 Cell 1 Special cell in a 3494: it is specially examined by the robot after the doors are closed. You would put here any tape manually removed from a drive, for the robot to put away. It will read the serial name, then examine the cell which was that tape cartridge's last home: finding it empty, the robot will store the tape there. The physical location of that cell: first frame, inner wall, upper leftmost cell (which the library keeps empty). 3494 cells, total and available 'mtlib -l /dev/lmcp0 -qL' lines: "number of cells", "available cells". 
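Relating to the "3494, steps to set up in ADSM" entry above, a concrete sketch of those steps (library, drive, and devclass names, and device paths, are examples only):
   In dsmserv.opt:  ENABLE3590LIBRARY YES
   DEFine LIBRary LIB3494 LIBType=349X DEVIce=/dev/lmcp0
   DEFine DRive LIB3494 DRIVE1 DEVIce=/dev/rmt0
   DEFine DRive LIB3494 DRIVE2 DEVIce=/dev/rmt1
   DEFine DEVclass 3590CLASS DEVType=3590 LIBRary=LIB3494
   (then restart the server so the option takes effect)
The devclass definition is beyond the minimal steps listed in the entry; see "3590 Devclass, define" later in this document for the full syntax.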
3494 cleaner cycles remaining 'mtlib -l /dev/lmcp0 -qL' line: "avail 3590 cleaner cycles" 3494 cleaning cartridge See: Cleaner Cartridge, 3494 3494 connectivity A 3494 can be simultaneously connected via LAN and RS-232. 3494 diagnosis See: trcatl 3494 ESCON device control Some implementations may involve ESCON connection to 3490 drives plus SCSI connection to 3590 drives. The ESCON 3490 ATL driver is called mtdd and the SCSI 3590 ATL driver was called atldd, and they have shared modules between them. One thus may be hesitant to install atldd due to this "sharing". In the pure ESCON drive case, the commands go down the ESCON channel, which is also the data path. If you install atldd, the commands now first go to the Library Manager, which then reissues them to those drives. Thus, it is quite safe to install atldd for ESCON devices. 3494 inaccessible (usually after Check for the following: just installed) - That the 3494 is in an Online state. - In the server, that the atldd software (LMCPD) has been installed and that the lmcpd process is running. - That your /etc/ibmatl.conf is correct: if a TCP/IP connection, specify the IP addr; if RS/232, specify the /dev/tty port to which the cable is attached. - If a TCP/IP connection, that you can ping the 3494 by both its network name and IP address (to assure that DNS was correctly set up in your shop). - If a LAN connection: - Check that the 3494 is not on a Not Routed subnet: such a router configuration prevents systems outside the subnet from reaching systems residing on that subnet. - A port number must be in your host /etc/services for it to communicate with the 3494. By default, the Library Driver software installation creates a port '3494/tcp' entry, which should matches the default port at the 3494 itself, per the 3494 installation OS/2 TCP/IP configuration work. - Your host needs to be authorized to the 3494 Library Manager, under "LAN options", "Add LAN host". (RS/232 direct physical connection is its own authorization.) Make sure you specify the full host network name, including domain (e.g., a.b.com). If communications had been working but stopped when your OS was updated, assure that it still has the same host name! - If an RS/232 connection: - Check the Availability of your Direct Attach Ports (RS-232): the System Summary should show them by number, if Initialized, in the "CU ports (RTIC)" report line. If not, go into Service Mode, under Availability, to render them Available. - Connecting the 3494 to a host is a DTE<->DTE connection, meaning that you must employ a "null modem" cable or connector adapter. - Certainly, make sure the RS-232 cable is run and attached to the port inside the 3494 that you think it is. - Try performing 'mtlib' queries to verify, outside of *SM, that the library can be reached. Presuming 3590 drives in the 3494, make sure your server options file includes: ENABLE3590LIBRARY YES 3494 Intervention Required detail The only way to determine the nature of the Int Req on the 3494 is to go to its Operator Station and see, under menu Commands->Operator intervention. There is no programming interface available to allow you to get this information remotely. 3494 IP address, determine Go to the 3494 control panel. From the Commands menu, select "LAN options", and then "LM LAN information". 3494 Manual Mode If the 3494's Accessor is nonfunctional you can operate the library in Manual Mode. 
Using volumes in Manual Mode affects their status: The 3494 redbook (SG24-4632) says that when volumes are used in Manual Mode, their LMDB indicator is set to "Manual Mode", as used to direct error recovery when the lib is returned to Auto mode. This is obviously necessary because the location of all volumes in the library is jeopardized by the LM's loss of control of the library. The 3494 Operator Guide manual instructs you to have Inventory Update active upon return to Auto mode, to re-establish the current location of all volumes. 3494 microcode level See: "Library Manager, microcode level" 3494 port number See: Port number, for 3494 communication 3494 problem: robot is dropping cartridges This has been seen where the innards of the 3494 have gone out of alignment, for any of a number of reasons. Re-teaching can often solve the problem, as the robot re-learns positions and thus realigns itself. 3494 problem: robot misses some fiducials - but not all During its repositioning operations, the robot attempts to align itself with the edges of each fiducial, but after dwelling on one it keeps on searching, as though it didn't see it. This operation involves the LED, which is carried on the accessor along with the laser (which is only for barcode reading). The problem is that the light signal involved in the sensing is too weak, which may be due to dirt, an aged LED, or a failing sensor. The signal is marginal, so some fiducials are seen, but not others. 3494 problems See also "3494 OPERATOR STATION MESSAGES" section at the bottom of this document. 3494 reserved cells A 3494 minimally has two reserved cells: 1 A 1 Gripper error recovery (1 A 3 if Dual Gripper installed). 1 A 20 CE cartridge (3590). 1 A 19 is also reserved for 3490E, if such cartridges participate. _ K 6 Not a cell, but a designation for a tape drive on wall _. 3494 scratch category, default See: DEFine LIBRary 3494 sharing Can be done with TSM 3.7+, via the "3494SHARED YES" server option; but you still need to "logically" partition the 3494 via separate tape Category Codes. Ref: Guide to Sharing and Partitioning IBM Tape Library Dataservers, SG24-4409. Redbooks: Tivoli Storage Manager Version 3.7.3 & 4.1: Technical Guide, section 8.2; Tivoli Storage Manager SAN Tape Library Sharing. See also: 3494SHARED; DRIVEACQUIRERETRY; MPTIMEOUT 3494 sluggish The 3494 may be taking an unusually long time to mount tapes or scan barcodes. Possible reasons: - A lot of drive cleaning activity can delay mounts. (A library suddenly exposed to a lot of dust could evidence a sudden surge in cleaning.) A shortage of cleaning cartridges could aggravate that. - Drive problems which delay ejects or positioning. - Library running in degraded mode. - lmcpd daemon or network problems which delay getting requests to the library. - See if response to 'mtlib' commands is sluggish. This can be caused by DNS service problems to the OS/2 embedded system. (That PC is typically configured once, then forgotten; but DNS servers may change in your environment, requiring the OS/2 config to be updated.) Use the mtlib command to get status on the library to see if there is any odd condition, and visit the 3494 if necessary to inspect its status. Observe it responding to host requests to gauge where the delay is. 3494 SNMP support The 3494 (beginning with Library Manager code 518) supports SNMP alert messaging, enabling you to monitor 3494 operations from one or more SNMP monitor stations.
This initial support provides more than 80 operator-class alert messages covering: 3494 device operations Data cartridge alerts Service requests VTS alerts See "SNMP Options" in the 3494 Operator Guide manual. 3494 status 'mtlib -l /dev/lmcp0 -qL' 3494 Tape Library Specialist Provides web access to your 3494 LM. Requires that the LM PC have at least 64 MB of memory, be at LM code level 524 or greater, and have FC 5045 (Enhanced Library Manager). 3494 tapes, list 'mtlib -l /dev/lmcp0 -qI' (or use options -vqI for verbosity, for more descriptive output) 3494 TCP/IP, set up This is done during 3494 installation, in OS/2 mode, upon invoking the HOSTINST command, where a virtual "flip-book" will appear so that you can click on tabs within it, including a Network tab. After installation, you could go into OS/2 and there do 'cd \tcpip\bin' and enter the command 'tcpipcfg' and click in the Network tab. Therein you can set the IP address, subnet mask, and default gateway. 3494 volume, list state, class, volser, category 'mtlib -l /dev/lmcp0 -vqV -V VolName' 3494 volume, last usage date 'mtlib -l /dev/lmcp0 -qE -uFs -V VolName' 3494 volumes, list 'mtlib -l /dev/lmcp0 -qI' (or use options -vqI for verbosity, for more descriptive output) 3494SHARED To improve performance of allocation of 3590 drives in the 3494, introduced by APAR IX88531... ADSM was checking all available drives on a 3494 for availability before using one of them. Each check took 2 seconds and was being performed twice per drive, once for each available drive and once for the selected drive. This resulted in needless delays in mounting a volume. The reason for this is that in a shared 3494 library environment, ADSM physically verifies that each drive assigned to ADSM is available and not being used by another application. The problem is that if ADSM is the only application using the assigned drives, this extra time to physically check the drives is not needed. This was addressed by adding a new option, 3494SHARED, to control sharing. Selections: No (default) The 3494 is not being shared by any other application. That is, only one or more ADSM servers are accessing the 3494. Yes ADSM will select a drive that is available and not being used by any other application. You should only enable this option if you have more than two (2) drives in your library. If you are currently sharing a 3494 library with other applications, you will need to specify this option. See also: DRIVEACQUIRERETRY; MPTIMEOUT 3495 Predecessor to the 3494, containing a GM robot like those used in car assembly. 3570 The IBM 3570 Tape Subsystem is based on the same technology as the IBM 3590 High Performance Tape Subsystem. It functionally expands the capability of tape to perform both write- and read-intensive operations. It provides faster data access than other tape technologies, with a drive time to read/write data of eight seconds from cassette insertion. The 3570 also incorporates a high-speed search function. The tape drive reads and writes data in a 128-track format, four tracks at a time. Data is written using an interleaved serpentine longitudinal recording format starting at the center of the tape (mid-tape load point) and continuing to near the end of the tape. The head is indexed to the next set of four tracks and data is written back to the mid-tape load point. This process continues in the other direction until the tape is full.
Cartridge: The 3570 uses a unique, robust, twin-hub tape cassette that is approximately half the size of the 3490/3590 cartridge tapes, with a cassette capacity of 5 GB uncompressed and up to 15G per cassette with LZ1 data compaction. Also called "Magstar MP" (where the MP stands for Multi-Purpose), supported by the Atape driver. Think "3590, Jr." The tape is half-wound at load time, so can get to either end of the tape in half the time than if the tape were fully wound. Cartridge type letter: 'F' (does not participate in the volser). An early problem of "Lost tension" was common, attributed to bad tapes, rather than the tape drives. *SM library type: SCSI Library 3570 "tapeutil" for NT See: ntutil 3570, to act as an ADSM library Configure to operate in Random Mode and Base Configuration. This allows ADSM to use the second drive for reclamation. (The Magstar will not function as a library within ADSM when set to "automatic".) The /dev/rmt_.smc SCSI Media Changer special device allows library style control of the 3570. 3570/3575 Autoclean This feature does not interfere with ADSM: the 3570 has its own slot for the cleaner that is not visible to ADSM, and the 3575 hides the cleaners from ADSM. 3570 configurations Base: All library elements are available to all hosts. In dual drive models, it is selected from Drive 1 but applies to both drives. This config is primarily used for single host attachment. (Special Note for dual drive models: In this config, you can only load tapes to Drive 1 via the LED display panel as everything is keyed off of Drive 1. However, you may load tapes to Drive 2 via tapeutil if the Library mode is set to 'Random'.) Split: This config is most often used when the library unit is to be twin-tailed between 2 hosts. In this config, the library is "split" into 2 smaller half size libraries, each to be used by only one host. This is advantageous when an application does not allow the sharing of one tape drive between 2 hosts. The "first/primary" library consists of: Drive 1 The import/export (priority) cell The right most magazine Transport Mechanism The "second" library consists of: Drive 2 The leftmost magazine Transport Mechanism 3570 Element addresses Drive 0 is element 16, Drive 1 is element 17. 3570 mode A 3570 library must be in RANDOM mode to be usable by TSM: AUTO mode is no good. 3570 tape drive cleaning Enable Autocleaning. Check with the library operator guide. The 3570 has a dedicated cleaning tape tape storage slot, which does not take one of the library slots. 3575 3570 library from IBM. Attachment via: SCSI-2. As of early 2001, customers report problem of tape media snapping: the cartridge gets loaded into the drive by the library but it never comes ready: such a cartridge may not be repairable. Does not have a Teach operation like the 3494. Ref: Red book: Magstar MP 3575 Tape Library Dataserver: Muliplatform Implementation. *SM library type: SCSI Library 3575, support C-Format XL tapes? In AIX, do 'lscfg -vl rmt_': A drive capable of supporting C tapes should report "Machine Type and Model 03570C.." and the microcode level should be at least 41A. 3575 configuration The library should be device /dev/smc0 as reflected in AIX command 'lsdev -C tape'...not /dev/lb0 nor /dev/rmtX.smc as erroneously specified in the Admin manuals. 3575 tape drive cleaning The 3575 does NOT have a dedicated cleaning tape storage slot. It takes up one of the "normal" tape slots, reducing the Library capacity by one. 
357x library/drives configuration You don't need to define an ADSM device for 357x library/drives under AIX: the ADSM server on AIX uses the /dev/rmtx device. Don't go under SMIT ADSM DEVICES but just run 'cfgmgr'. Once the rmtx devices are available in AIX, you can define them to ADSM via the admin command line. For example, assuming you have two drives, rmt0 and rmt1, you would use the following adsm admin commands to define the library and drives: DEFine LIBRary mylib LIBType=SCSI DEVice=/dev/rmt0.smc DEFine DRive mylib drive1 DEVice=/dev/rmt0 ELEMent=16 DEFine DRive mylib drive2 DEVice=/dev/rmt1 ELEMent=17 (you may want to verify the element numbers but these are usually the default ones) 3575 - L32 Magstar Library contents, list Unix: 'tapeutil -f /dev/smc0 inventory' 358x drives These are LTO Ultrium drives. Supported by IBM Atape device driver. See: LTO; Ultrium 3580 IBM model number for LTO Ultrium tape drive. A basic full-height, 5.25-inch drive in a SCSI enclosure; two-line LCD readout. Flavors: L11, low-voltage differential (LVD) Ultra2 Wide SCSI; H11, high-voltage differential SCSI. Often used with Adaptec 29160 SCSI card (but use the IBM driver - not the Adaptec driver). The 3580 Tape Drive is capable of data transfer rates of 15 MB per second with no compression and 30 MB per second at 2:1 compression. (Do not expect to come close to such numbers when backing up small files: see "Backhitch".) Review: www.internetweek.com/reviews00/rev120400-2.htm The Ultrium 1 drives have had problems: - Tapes would get stuck in the drives. IBM (Europe?) engineered a field compensation involving installing a "clip" in the drive. This is ECA 009, which is not a mandatory EC; to be applied only if the customer sees frequent B881 errors in the library containing the drive. The part number is 18P7835 (includes tool). Takes about half an hour to apply. One customer reports still having problems with the clip installed, which seems to be due to inferior cartridge construction. - Faulty microcode. As evidenced in a late 2003 defect where certain types of permanent write errors, with subsequent rewind command, cause an end of data (EOD) mark to be written at the BOT (beginning of tape). See also: LTO; Ultrium 3580 (LTO) cleaning cartridge life The manual specifies how much you should expect out of a cleaning cartridge: "The IBM TotalStorage LTO Ultrium Cleaning Cartridge is valid for 50 uses." (2003 manual) 3581 IBM model number for LTO Ultrium tape drive with autoloader. Houses one drive and seven slots: five in front, two in the rear. *SM library type: SCSI Library See also: Backhitch; LTO; Ultrium 3581, configuring under AIX Simply install the device driver and you should be able to see both the drive and medium changer devices as SCSI tape devices (/dev/rmt0 and /dev/smc0). When you configure the library and drive in TSM, use device type "LTO", not SCSI. Ref: TSM 4.1.3 server README file 3582 IBM LTO Ultrium cartridge tape library. Up to 2 Ultrium 2 tape drives and 23 tape cartridges. Requires Atape driver on AIX and like hosts: Atape level 8.1.3.0 added support for the 3582 library. Reportedly not supported by TSM 5.2.2. See also: Backhitch; LTO; Ultrium 3583 IBM LTO Ultrium cartridge tape library. Formal name: "LTO Ultrium Scalable Tape Library 3583". (But it is only slightly scalable: look to the 3584 for higher capacity.) Six drives, 18 cartridges. Can have up to 5 storage columns, which the picker/mounter accesses as in a silo. Column 1 can contain a single-slot or 12-slot I/O station.
Column 2 contains cartridge storage slots and is standard in all libraries. Column 3 contains drives. Columns 4 and 5 may be optionally installed and contain cartridge storage slots. Beginning with Column 1 (the I/O station column), the columns are ordered clockwise. The three columns which can house cartridges do so with three removable magazines of six slots each: 18 slots per column, 54 slots total. Add two removable I/O station magazines through the door and one inside the door to total 72 cells, 60 of which are wholly inside the unit. (There are reports that 2 of those 60 slots are reserved for internal tape drive mounts, though that doesn't show up in the doc.) Model L72: 72 cartridge storage slots As of 2004 handles the Ultrium 2 or Ultrium 1 tape drive. The Ultrium 2 drive can work with Ultrium 1 media, but at lesser speeds (see "Tape Drive Performance" in the 3583 Setup and Operator Guide manual). Cleaning tapes should live in the reserved, nonaddressable slots at the top of silo columns (where the picker's bar code reader cannot look). http://www.storage.ibm.com/hardsoft/tape/pubs/pubs3583.html *SM library type: SCSI Library The 3583 had a variety of early problems such as static buildup: the picker would run fine for a while, until enough static built up, then it would die for no reason apparent to the user. The fix was to replace the early rev picker with a newer design. See also: 3584; Accelis; L1; Ultrium 3583, convert I/O station to slots Via Setup->Utils->Config. Then you have to get the change understood by TSM - and perhaps the operating system. A TSM AUDit LIBRary may be enough; or you may have to incite an operating system re-learning of the SCSI change, which may involve rebooting the opsys. 3583 cleaning cartridge Volser must start with "CLNI" so that the library recognizes the cleaning tape as such (else it assumes it's a data cartridge). The cleaning cartridge is stored in any slot in the library. Recent (2002/12) updates to firmware force the library to handle cleaning itself and hide the cleaning cartridges from *SM. 3583 door locked, never openable See description of padlock icon in the 3583 manual. A basic cause is that the I/O station has been configured as all storage slots (rather than all I/O slots). In a Windows environment, this may be caused by RSM taking control of the library: disable RSM when it is not needed. This condition may be a fluke which power-cycling the library will undo. 3583 driver and installation The LTO/Ultrium tape technology was jointly developed by IBM, and so they provide a native device driver. In AIX, it is supported by Atape; in Solaris, by IBMtape; in Windows, by IBMUltrium; in HP-UX, by atdd. 1. Install the Ultrium device driver, available from the ftp://ftp.software.ibm.com/storage/devdrvr/ directory 2. In NT, under Tape Devices, press ESC on the first panel. 3. Select the Drivers tab and add your library. 4. Select the 3583 library and click on OK. 5. Press Yes to use the existing files. 3583 "missing slots" If not all storage cells in the library are usable (the count of usable slots is short), it can be caused by a corrupt volume whose label cannot be read during an AUDit LIBRary. You may have to perform a Restore Volume once the volume is identified. 3584 The high end of IBM's mid-range tape library offerings. Formal name: LTO UltraScalable Tape Library Initially housed LTO Ultrium drives and cartridges; but as of mid-2004 also supports the 3592 J1A. Twelve drives, 72 cartridges.
Can also support DLT. Interface: Fibre Channel or SCSI Its robotics are reported to be much faster than those in the 3494, making for faster mounting of tapes. In Unix, the library is defined as device /dev/smc0, and by default is LUN 1 on the lowest-number tape drive in the partition - normally drive 1 in the library, termed the Master Drive by CEs. (Remove that drive and you suffer ANR8840E trying to interact with the library.) In AIX, 'lsdev -Cc tape' should show all the devices. *SM library type: SCSI Library See also: LTO; Ultrium 3584 bar code reading The library can be set to read either just the 6-char cartridge serial ("normal" mode) or that plus the "L1" tape cartridge identifier as well ("extended" mode). 3584 cleaning cartridge Volser must start with "CLNI" or "CLNU" so that the library recognizes the cleaning tape as such (else it assumes it's a data cartridge). The cleaning cartridge is stored in any data-tape slot in the library (but certainly not the Diagnostic Tape slot). Follow the 3584 manual's procedure for inserting cleaning cartridges. Auto Clean should be activated. The cleaning tape is valid for 50 uses. When the cartridge expires, the library displays an Activity screen like the following: Remove CLNUxxL1 Cleaning Cartridge Expired 3590 IBM's fourth generation of this 1/2" tape cartridge technology, using a single-reel approach and servo tracking pre-recorded on the tape for precise positioning. Excellent start-stop performance. Uses magneto-resistive heads for high density recording. Introduced: 1995 Tape length: 300 meters (1100 feet) Tracks: 128, written 16 at a time, in serpentine fashion. The head contains 32 track writers: As the tape moves forward, 16 tracks are written until EOT is encountered, whereupon electronic switching causes the other 16 track writers in the heads to be used as the tape moved backwards towards BOT. Then, the head is physically moved (indexed) to repeat the process, until finally all 128 tracks are written as 8 interleaved sets of 16 tracks. Transfer rate: Between host and tape unit buffer: 20 MB/sec with fast, wide, differential SCSI; 17 MB/sec via ESCON channel interface. Between buffer and drive head: 9 MB/sec. Pigment: MP1 (Metal Particle 1) Note that "3590" is a special, reserved DEVType used in 'DEFine DEVclass'. Cartridge type letter: 'J' (does not participate in the volser). See publications references at the bottom of this document. See also: 3590E Previous generation: 3490E Next generation: 3590E See also: MP1 3590, AIX error messages If a defective 3590 is continually putting these out, rendering the drive Unavailable from the 3494 console will cause the errors to be discontinued. 3590, bad block, dealing with Sometimes there is just one bad area on a long, expensive tape. Wouldn't it be nice to be able to flag that area as bad and be able to use the remainder of the tape for viable storage? Unfortunately, there is no documented way to achieve this with 3590 tape technology: when just one area of a tape goes badk the tape becomes worthless. 3590, handling DO NOT unspool tape from a 3590 cartridge unless you are either performing a careful leader block replacement or a post-mortem. Unspooling the tape can destroy it! The situation is clearances: The spool inside the cartridge is spring-loaded so as to keep it from moving when not loaded. The tape drive will push the spool hub upward into the cartridge slightly, which disengages the locking. The positioning is exacting. 
If the spool is not at just the right elevation within the cartridge, the edge of the tape will abrade against the cartridge shell, resulting in substantial, irreversible damage to the tape. 3590, write-protected? With all modern media, a "void" in the sensing position indicates writing not allowed. IBM 3480/3490/3590 tape cartridges have a thumbwheel (File Protect Selector) which, when turned, reveals a flat spot on the thumbwheel cylinder, which is that void/depression indicating writing not allowed. So, when you see the dot, it means that the media is write-protected. Rotate the thumbwheel away from that to make the media writable. Some cartridges show a padlock instead of a dot, which is a great leap forward in human engineering. See also: Write-protection of media 3590 barcode Is formally "Automation Identification Manufacturers Uniform Symbol Description Version 3", otherwise known as Code 39. It runs across the full width of the label. The two recognized vendors: Engineered Data Products (EDP) Tri-Optic Wright Line Tri-Code Ref: Redbook "IBM Magstar Tape Products Family: A Practical Guide", topic Cartridge Labels and Bar Codes. See also: Code 39 3590 Blksize See: Block size used for removable media 3590 capacity See: 3590 'J'; 3590 'K' See also: ESTCAPacity 3590 cleaning See: 3590 tape drive cleaning 3590 cleaning interval The normal preventive maintenance interval for the 3590 is once every 150 GB (about once every 15 tapes). Adjust via the 3494 Operator Station Commands menu selection "Schedule Cleaning", in the "Usage clean" box. The Magstar Tape Guide redbook recommends setting the value to 999 to let the drive incite cleaning, rather than have the 3494 Library Manager initiate it (apparently to minimize drive wear). Ref: 3590 manual; "IBM Magstar Tape Products Family: A Practical Guide" redbook 3590 cleaning tape Color: Black shell, with gray end notches 3590 cleaning tape mounts, by drive, display Put the 3494 into Pause mode; Open the 3494 door to access the given 3590's control panel; Select "Show Statistics Menu"; See "Clean Mounts" value. 3590 compression of data The 3590 performs automatic compression of data written to the tape, increasing both the effective capacity of the 10 GB cartridge and the effective write speed of the drive. The 3590's data compression algorithm is a Ziv-Lempel technique called IBMLZ1, more effective than the BAC algorithm used in the 3480 and 3490. Ref: Redbook "Magstar and IBM 3590 High Performance Tape Subsystem Technical Guide" (SG24-2506) See also: Compression algorithm, client 3590 Devclass, define 'DEFine DEVclass DevclassName DEVType=3590 LIBRary=LibName [FORMAT=DRIVE|3590B|3590C|3590E-B|3590E-C] [MOUNTLimit=Ndrives] [MOUNTRetention=Nmins] [PREFIX=TapeVolserPrefix] [ESTCAPacity=X] [MOUNTWait=Nmins]' Note that "3590" is a special, reserved DEVType. 3590 drive* See: 3590 tape drive* 3590 EOV processing There is a volume status of "full" for 3590 volumes. 3590 volumes will do EOV processing when the drive signals end of tape, or when the maxcapacity is reached, if maxcapacity has been set. When the drive signals end of tape, EOV processing will occur even if maxcapacity has not been reached. Contrast with 3490 EOV processing. 3590 errors See: MIM; SARS; SIM; VCR 3590 exploded diagram (internals) http://www.thic.org/pdf/Oct00/imation.jgoins.001003.pdf page 20 3590 Fibre Channel interface There are two fibre channel interfaces on the 3590 drive, for attaching to up to 2 hosts.
Supported in TSM 3.7.3.6 Available for 3590E & 3590H drives but not for 3590B. 3590 'J' 3590 High Performance Cartridge Tape (HPCT), the original 3590 tape cartridge, containing 300 meters of half-inch tape. Predecessor: 3490 "E" Barcodette letter: 'J' Color of leader block and notch tabs: blue Compatible drives: 3590 B; 3590 E; 3590 H Capacity: 10 GB native on Model B drives (up to 30 GB with 3:1 compression); 20 GB native on Model E drives (up to 60 GB with 3:1 compression); 30 GB native on Model H drives (up to 90 GB with 3:1 compression); Notes: Has the thickest tape of the 3590 tape family, so should be the most robust. See also: 3590 cleaning tape; 3590 tape cartridge; 3590 'K'; EHPCT; HPCT 3590 'K' (3590 K; 3590K) 3590 Extended High Performance Cartridge Tape, aka "Extended length", "double length": 600 meters of thinner tape. Available: March 3, 2000 Predecessor: 3590 'J' Barcodette letter: 'K' Color of leader block and notch tabs: green Compatible drives: 3590 E; 3590 H Capacity: 40 GB native on 3590 E drives (up to 120 GB with 3:1 compression, depending upon the compressability of the data); 60 GB native on Model H drives (up to 120 GB with 3:1 compression); Hardware Announcement: ZG02-0301 Notes: The double length of the tape spool makes for longer average positioning times. Fragility: Because so much tape is packed into the cartridge, it tends to be rather close to the inside of the shell, and so is more readily damaged if the tape is dropped, as compared to the 3590 'J'. 3590 microcode level Unix: 'tapeutil -f /dev/rmt_ vpd' (drive must not be busy) see "Revision Level" value AIX: 'lscfg -vl rmt_' see "Device Specific.(FW)" Windows: 'ntutil -t tape_ vpd' Microcode level shows up as "Revision Level". 3590 Model B11 Single-drive unit with attached 10-cartridge Automatic Cartridge Facility, intended to be rack-mounted (IBM 7202 rack). Can be used as a mini library. Interface is via integral SCSI-3 controller with two ports. As of late 1996 it is not possible to perform reclamation between 2 3590 B11s, because they are considered separate "libraries". Ref: "IBM TotalStorage Tape Device Drivers: Installation and User's Guide", Tape and Medium Changer Device Driver section. 3590 Model B1A Single-drive unit intended to be installed in a 3494 library. Interface is via integral SCSI-3 controller with two ports. 3590 Model E11 Rack-mounted 3590E drive with attached 10-cartridge ACF. 3590 Model E1A 3590E drive to be incorporated into a 3494. 3590 modes of operation (Referring to a 3590 drive, not in a 3494 library, with a tape magazine feeder on it.) Manual: The operator selects Start to load the next cartridge. Accumulate: Take each next cartridge from the Priority Cell, return to the magazine. Automatic: Load next tape from magazine without a host Load request. System: Wait for Load request from host before loading next tape from magazine. Random: Host treats magazine as a mini library of 10 cartridges and uses Medium Mover SCSI cmds to select and move tapes between cells. Library: For incorporation of 3590 in a tape library server machine (robot). 3590 performance See: 3590 speed 3590 SCSI device address Selectable from the 3590's mini-panel, under the SET ADDRESS selection, device address range 0-F. 3590 Sense Codes Refer to the "3590 Hardware Reference" manual. 3590 servo tracks Each IBM 3590 High Performance Tape Cartridge has three prerecorded servo tracks, recorded at time of manufacture. 
The servo tracks enable the IBM 3590 tape subsystem drive to position the read/write head accurately during the write operation. If the servo tracks are damaged, the tape cannot be written to. 3590 sharing between two TSM servers Whether by fibre or SCSI cabling, when sharing a 3590 drive between two TSM servers, watch out for SCSI resets during reboots of the servers. If the server code and hardware don't mesh exactly right, its possible to get a "mount point reserved" state, which requires a TSM restart to clear. 3590 speed Note from 1995 3590 announcement, number 195-106: "The actual throughput a customer may achieve is a function of many components, such as system processor, disk data rate, data block size, data compressibility, I/O attachments, and the system or application software used. Although the drive is capable of a 9-20MB/sec instantaneous data rate, other components of the system may limit the actual effective data rate. For example, an AS/400 Model F80 may save data with a 3590 drive at up to 5.7MB/sec. In a current RISC System/6000 environment, without filesystem striping, the disk, filesystem, and utilities will typically limit data rates to under 4MB/sec. However, for memory-to-tape or tape-to-tape applications, a RISC System/6000 may achieve data rates of up to 13MB/sec (9MB/sec uncompacted). With the 3590, the tape drive should no longer be the limiting component to achieving higher performance. See also IBM site Technote "D/T3590 Tape Drive Performance" 3590 statistics The 3590 tape drive tracks various usage statistics, which you can ask it to return to you, such as Drive Lifetime Mounts, Drive Lifetime Megabytes Written or Read, from the Log Page X'3D' (Subsystem Statistics), via discrete programming or with the 'tapeutil' command Log Sense Page operation, specifying page code 3d and a selected parameter number, like 40 for Drive Lifetime Mounts. Refer to the 3590 Hardware Reference manual for byte positions. See also: 3590 tape drive, hours powered on; 3590 tape mounts, by drive 3590 tape cartridge AKA "High Performance Cartridge Tape". See: 3590 'J' 3590 tape drive The IBM tape drive used in the 3494 tape robot, supporting 10Gbytes per cartridge uncompressed, or typically 30Gbytes compressed via IDRC. Uses High Performance Cartridge Tape. 3590 tape drive, hours powered on Put the 3494 into Pause mode; Open the 3494 door to access the given 3590's control panel; Select "Show Statistics Menu"; See "Pwr On Hrs" value. 3590 tape drive, release from host Unix: 'tapeutil -f dev/rmt? release' after having done a "reserve" Windows: 'ntutil -t tape_ release' 3590 tape drive, reserve from host Unix: 'tapeutil -f dev/rmt? reserve' Windows: 'ntutil -t tape_ reserve' When done, release the drive: Unix: 'tapeutil -f dev/rmt? release' Windows: 'ntutil -t tape_ release' 3590 tape drive Available? (AIX) 'lsdev -C -l rmt1' 3590 tape drive cleaning The drive may detect when it needs cleaning, at which point it will display its need on its front panel, and notify the library (if so attached via RS-422 interface) and the host system (AIX gets Error Log entry ERRID_TAPE_ERR6, "tape drive needs cleaning", or TAPE_DRIVE_CLEANING entry - there will be no corresponding Activity Log entry). The 3494 Library Manager would respond by adding a cleaning task to its Clean Queue, for when the drive is free. 
The 3494 may also be configured to perform cleaning on a scheduled basis, but be aware that this entails additional wear on the drive and makes the drive unavailable for some time, so choose this only if you find tapes going read-only due to I/O errors. Msgs: ANR8914I 3590 tape drive model number Do 'mtlib -l /dev/lmcp0 -D' The model number is in the third returned token. For example, in returned line: " 0, 00116050 003590B1A00" the model is 3590 B1A. 3590 tape drive serial number Do 'mtlib -l /dev/lmcp0 -D' The serial number is the second returned token, all but the last digit. For example, in returned line: " 0, 00116050 003590B1A00" the serial number is 11605. 3590 tape drive sharing As of TSM 3.7, two TSM servers can be connected, one to each port on a twin-tailed 3590 SCSI drive in the 3494, in a feature called "auto-sharing". Prior to this, individual drives in a 3494 library could only be attached to a particular server (library partitioning): each drive was owned by one server. 3590 tape drive status, from host 'mtlib -l /dev/lmcp0 -qD -f /dev/rmt1' 3590 tape drive use, define "ENABLE3590LIBRary" definition in the server options file. 3590 tape drives, list From AIX: 'mtlib -l /dev/lmcp0 -D' 3590 tape drives, list in AIX 'lsdev -C -c tape -H -t 3590' 3590 tape drives, not being used in a library See: Drives, not all in library being used 3590 tape mounts, by drive Put the 3494 into Pause mode; Open the 3494 door to access the given 3590's control panel; Select "Show Statistics Menu"; See "Mounts to Drv" value. See also: 3590 tape drive, hours powered on; 3590 statistics 3590 volume, verify Devclass See: SHow FORMAT3590 _VolName_ 3590B The original 3590 tape drives. Cartridges supported: 3590 'J' (10-30 GB), 'K' (20-60 GB) (Early B drives can use only 'J'.) Tracks: 128 total tracks, 16 at a time, in serpentine fashion. Number of servo tracks: 3 Interfaces: Two, SCSI (FWD) Previous generation: none in 3590 series; but 3490E conceptually. See also: 3590C 3590B vs. 3590E drives A tape labelled by a 3590E drive cannot be read by a 3590B drive. A tape labelled by a 3590B drive can be read by a 3590E drive, but cannot be written by a 3590E drive. The E model can read the B formatted cartridge. The E model writes in 256 track format only and cannot write or append to a B formatted tape. The E model can reformat a B format tape and then can write in the E format. The B model cannot read E formatted data. The B model can reformat an E format tape and then can write in the B format: the B model device must be a minimum device code level (A_39F or B_731) to do so. 3590C FORMAT value in DEFine DEVclass for the original 3590 tape drives, when data compression is to be performed by the tape drive. See also: 3590B; DRIVE 3590E IBM's fifth generation of this 1/2" tape cartridge technology, using a single-reel approach and servo tracking pre-recorded on the tape for precise positioning. Excellent start-stop performance. Cartridges supported: 3590 'J' (20-60 GB), 'K' (40-120 GB) Tracks: 256 (2x the 3590B), written 16 at a time, in serpentine fashion. The head contains 32 track writers: As the tape moves forward, 16 tracks are written until EOT is encountered, whereupon electronic switching causes the other 16 track writers in the heads to be used as the tape moves backwards towards BOT. Then, the head is physically moved (indexed) to repeat the process, until finally all 256 tracks are written as 16 interleaved sets of 16 tracks.
Number of servo tracks: 3 Interfaces: Two, SCSI (FWD) or FC As of March, 2000 comes with support for 3590 Extended High Performance Cartridge Tape, to again double capacity. Devclass: FORMAT=3590E-C (not DRIVE) Previous generation: 3590B Next generation: 3590H 3590E? (Is a drive 3590E?) Expect to be able to tell if a 3590 drive is an E model by visual inspection: - Rear of drive (power cord end) having stickers saying "Magstar Model E" and "2x" (meaning that the EHPC feature is installed in the drive). - Drive display showing like "E1A-X" (drive type, where X indicates extended) in the lower left corner. (See Table 5 in 3590 Operator Guide manual.) 3590EE Extra long 3590E tapes (double length), available only from Imation starting early 2000. The cartridge accent color is green instead of blue, and the label is K instead of J. Must be used with 3590E drives. 3590H IBM's sixth generation of this 1/2" cartridge technology, using a single-reel approach and servo tracking pre-recorded on the tape for precise positioning. Excellent start-stop performance. Cartridges supported: 3590 'J' (30-90 GB), 'K' (60-180 GB) Capacity: 30 GB native, ~90 GB compressed Tracks: 384 (1.5 times the 3590E) Compatibility: Can read, but not write, 128-track (3590) and 256-track (3590E) tapes. Supported in: TSM 5.1.6 Interfaces: Two, SCSI (FWD) or FC Devclass: FORMAT=3590E-C (not DRIVE) Previous generation: 3590E Next generation: 3592 (which is a complete departure, wholly incompatible) 3590K See: 3590 'K' 3590L AIX ODM type for 3590 Library models. 3592 The IBM TotalStorage Enterprise Tape Drive and Cartridge model numbers, introduced toward the end of 2003. The drive is only a drive: it slides into a cradle which externally provides power to the drive. The small form factor more severely limits the size of panel messages, to 8 chars. This model is a technology leap, akin to 3490->3590, meaning that though cartridge form remains the same, there is no compatibility whatever between this and what came before. Cleaning cartridges for the 3592 drive are likewise different. Rather than having a leader block, as in 3590 cartridges, the 3592 has a leader pin, located behind a retractable door. The 3592 cartridge is IBM's first one in the 359x series with an embedded memory chip (Cartridge Memory): Records are written to the chip every time the cartridge is unloaded from a 3592 J1A tape drive. These records are then used by the IBM Statistical Analysis and Reporting System (SARS) to analyze and report on tape drive and cartridge usage and help diagnose and isolate tape errors. SARS can also be used to proactively determine if the tape media or tape drive is degrading over time. Cleaning tapes also have CM, emphatically limiting their usage to 50 cycles. The 3592 cartridges come in four types: - The 3592 "JA" long rewritable cartridge: the high capacity tape which most customers would probably buy. Native capacity: 300 GB (Customers report getting up to 1.2 TB.) Can be initialized to 60 GB to serve in a fast-access manner. Works with 3592 J1A tape drive. - The 3592 "JJ" short rewritable cartridge: the economical choice where smaller amounts of data are written to separate tapes. Native capacity: 60 GB. Works with 3592 J1A tape drive. - The 3592 "JW" long write-once (WORM) cartridge. Native capacity: 300 GB. - The 3592 "JR" short write-once (WORM) cartridge. Native capacity: 60 GB. Compression type: Byte Level Compression Scheme Swapping. With this type, it is not possible for the data to expand.
(IBM docs also say that the drive uses LZ1 compression, and Streaming Lossless Data Compression (SLDC) data compression algorithm, and ELDC.) The TSM SCALECAPACITY operand of DEFine DEVClass can scale native capacity back from 100% down to a low of 60 GB. The 3592 cartridges may live in either a 3494 library (in a new frame type - L22, D22, and D24 - separate from any other 3590 tape drives in the library); or a special frame of a 3584 library. Host connectivity: Dual ported switched fabric 2-Gbps Fibre Channel attachment (but online to only one host at a time). Physical connection is FC, but the drive employs the SCSI-3 command set for operation, in a manner greatly compatible with the 3590, simplifying host application support of the drive. As with the 3590 tape generation, the 3592 has servo information factory-written on the tape. (Do not degauss such cartridges. If you need to obliterate the data on a cartridge, perform a Data Security Erase.) Drive data transfer rate: up to 40MB/s Data life: 30 years Barcode label: Consists of 8 chars, the first 6 being the tape volser, and the last 2 being media type ("JA"). Tape vendors: Fuji, Imation (IBM will not be manufacturing tape) The J1A version of the drive is supported in the 3584 library, as of mid 2004. IBM brochure, specs: G225-6987-01 http://www.fuji-magnetics.com/en/company /news/index2_html Next generation: None, as of 2004/09 3599 An IBM "machine type / model" spec for ordering any Magstar cartridges: 3599-001, -002, -003 are 3590 J cartridges; 3599-004, -005, -006 are 3590 K cartridges; 3599-007 is 3590 cleaning cartridge; 3599-011, -012, -013 are 3592 cartridges; 3599-017 is 3592 cleaning cartridge. 3599 A product from Bow Industries for cleaning and retensioning 3590 tape cartridges. www.bowindustries.com/3599.htm 3600 IBM LTO tape library, announced 2001/03/22, withdrawn 2002/10/29. Models: 3600-109 1.8 TB autoloader; 3600-220 2/4 TB tower, 1 or 2 drives; 3600-R20 2/4 TB rack, 1 or 2 drives. The 220 and R20 come with two removable magazines that can each hold up to 10 LTO data or cleaning cartridges. 3995 IBM optical media library, utilizing double-sided, CD-sized optical platters contained in protective plastic cartridges. The media can be rewritable (Magneto-Optical), CCW (Continuous Composite Write-once), or permanent WORM (Write-Once, Read-Many). Each side of a cartridge is an Optical Volume. The optical drive has a fixed, single head: the autochanger can flip the cartridge to make the other side (volume) face the head. See also: WORM 3995 C60 Make sure Device Type ends up as WORM, not OPTICAL. 3995 drives Define as /dev/rop_ (not /dev/op_). See APAR IX79416, which describes element numbers vs. SCSI IDs. 3995 manuals http://www.storage.ibm.com/hardsoft/ opticalstor/pubs/pubs3995.html 3995 web page http://www.storage.ibm.com/hardsoft/ opticalstor/3995/maine.html http://www.s390.ibm.com/os390/bkserv/hw/ 50_srch.html 56Kb modem uploads With 56Kb modem technology, 53Kb is the fastest download speed you can usually expect, and 33Kb is the highest upload speed possible. And remember that phone line quality can reduce that further. Ref: www.56k.com 64-bit filesize support Was added in PTF 6 of the version 2 client. 64-bit ready? (Is ADSM?) Per Dave Cannon, ADSM Development, 1998/04/17, the ADSM server has always used 64-bit values for handling sizes and capacities. 7206 IBM model number for 4mm tape drive. Media capacity: 4 GB Transfer rate: 400 KB/S 7207 IBM model number for QIC tape drive.
Media capacity: 1.2 GB Transfer rate: 300 KB/S 7208 IBM model number for 8mm tape drive. Media capacity: 5 GB Transfer rate: 500 KB/S 7331 IBM model number for a tape library containing 8mm tapes. It comes with a driver (Atape on AIX, IBMtape on Solaris) for the robot to go with the generic OST driver for the drive. That's to support non-ADSM applications, but ADSM has its own driver for these devices. Media capacity: 7 GB Transfer rate: 500 KB/S 7332 IBM model number for 4mm tape drive. Media capacity: 4 GB Transfer rate: 400 KB/S 7337 A DLT library. Define in ADSM like: DEFine LIBRary autoDLTlib LIBType=SCSI DEVice=/dev/lb0 DEFine DRive autodltlib drive01 DEVice=/dev/mt0 ELEMent=116 DEFine DRive autodltlib drive02 DEVice=/dev/mt1 ELEMent=117 DEFine DEVclass autodlt_class DEVType=dlt LIBRary=autodltlib DEFine STGpool autodlt_pool autodlt_class MAXSCRatch=15 8200 Refers to recording format for 8mm tapes, for a capacity of about 2.3 GB. 8200C Refers to recording format for 8mm tapes, for a capacity of about 3.5 GB. 8500 Refers to recording format for 8mm tapes, for a capacity of about 5.0 GB. 8500C Refers to recording format for 8mm tapes, for a capacity of about 7.0 GB. 8900 Refers to recording format for 8mm tapes, for a capacity of about 20.0 GB. 8mm drives All are made by Exabyte. 8mm tape technology Yecch! Horribly unreliable. Tends to be "write only" - write okay, but tapes unreadable thereafter. 9710/9714 See: StorageTek 9840 See: STK 9840 9940b drive Devclass: - If employing the Gresham Advantape driver: generictape - If employing the Tivoli driver: ecartridge ABC Archive Backup Client for *SM, as on OpenVMS. The software is written by SSSI. It uses the TSM API to save and restore files. See also: OpenVMS ABSolute A Copy Group mode value (MODE=ABSolute) that indicates that an object is considered for backup even if it has not changed since the last time it was backed up; that is, force all files to be backed up. See also: MODE Contrast with: MODified. See also: SERialization (another Copy Group parameter) Accelis (LTO) Designer name for the next generation (sometimes misspelled "Accellis") 3570 tape, LTO. Cartridge is same as 3570, including dual-hub, half-wound for rapid initial access to data residing at either end of the tape (intended to be 10 seconds or less). Physically sturdier than Ultrium, Accelis was intended for large-scale automated libraries. But Accelis never made it to reality: increasing disk capacity made the higher-capacity Ultrium more realistic; and two-hub tape cartridges are wasteful in containing "50% air" instead of tape. Accelis would have had: Cartridge Memory (LTO CM, LTO-CM) chip is embedded in the cartridge: a non-contacting RF module, with non-volatile memory capacity of 4096 bytes, provides for storage and retrieval of cartridge, data positioning, and user specified info. Recording method: Multi-channel linear serpentine Capacity: 25 GB native, uncompressed Transfer rate: 10-20 MB/second. http://www.Accelis.com/ "What Happened to Accelis?": http://www.enterprisestorageforum.com/ technology/features/article.php/1461291 See also: 3583; LTO; MAM; Ultrium (LTO) ACCept Date TSM server command to cause the server to accept the current date and time as valid when an invalid date and time are detected. 
Syntax: 'ACCept Date' Note that one should not normally have to do this, even across Daylight Savings Time changes, as the conventions under which application programs are run on the server system should let the server automatically have the correct date and time. In Unix systems, for example, the TZ (Time Zone) environment variable specifies the time zone offsets for Daylight and Standard times. In AIX you can do 'ps eww ' to inspect the env vars of the running server. In a z/OS environment, see IBM site article swg21153685. See also: Daylight Savings Time Access Line-item title from the 'Query Volume Format=Detailed' report, which says how the volume may be accessed: Read-Only, Read/Write, Unavailable, Destroyed, OFfsite. Use 'UPDate Volume' to change the access value. If Access is Read-Only for a storage pool within a hierarchy of storage pools, ADSM will skip that level and attempt to write the data to the next level. Access TSM db: Column in Volumes table. Possible values: DESTROYED, OFFSITE, READONLY, READWRITE, UNAVAILABLE Access Control Lists (AIX) Extended permissions which are preserved in Backup/Restore. "Access denied" A message which may be seen in some environments; usually means that some other program has the file open in a manner that prevents other applications from opening it (including ADSM). Access mode A storage pool and storage volume attribute recorded in the ADSM database specifying whether data can be written to or read from storage pools or storage volumes. It can be one of: Read/write Can read or write volume in the storage pool. Set with UPDate STGpool or UPDate Volume. Read-only Volume can only be read. Set with UPDate STGpool or UPDate Volume. Unavailable Volume is not available for any kind of access. Set with UPDate STGpool or UPDate Volume. DEStroyed Possible for a primary storage pool (only), says that the volume has been permanently damaged. Do RESTORE STGpool or RESTORE Volume. Set with UPDate Volume. OFfsite Possible for a copy storage pool, says that volume is away and can't be mounted. Set with UPDate Volume. Ref: Admin Guide See also: DEStroyed Access time When a file was last read: its "atime" value (stat struct st_atime). The Backup operation results in the file's access timestamp being changed as each file is backed up, because as a generalized application it is performing conventional I/O to read the contents of the file, and the operating system records this access. (That is, it is not Backup itself which modifies the timestamp: it's merely that its actions incidentally cause it to change.) Beginning with the Version 2 Release 1 Level 0.1 PTF, UNIX backup and archive processes changed the ctime instead of user access time (atime). This was done because the HSM feature on AIX uses atime in assessing a file's eligibility and priority for migration. However, since the change of ctime conflicts with other existing software, with this Level 0.2 PTF, UNIX backup and archive functions now perform as they did with Version 1: atime is updated, but not ctime. AIX customers might consider geting around that by the rather painful step of using the 'cplv' command to make a copy of the file system logical volumes, then 'fsck' and 'mount' the copy and run backup; but that isn't very reliable. One thinks of maybe getting around the problem by remounting a mounted file system read-only; but in AIX that doesn't work, as lower level mechanisms know that the singular file has been touched. 
(See topic "MOUNTING FILE SYSTEMS READ-ONLY FOR BACKUP" near the bottom of this documentation.) Network Appliance devices can make an instant snapshot image of a file system for convenient backup, a la AFS design. Veritas Netbackup can restore the atime but at the expense of the ctime (http://seer.support.veritas.com/docs/ 240723.htm) See also: FlashCopy Accessor On a tape robot (e.g., 3494) is the part which moves within the library and carries the arm/hand assembly. See also: Gripper Accounting Records client session activities, with an accounting record written at the end of each client node session (in which a server interaction is required). The information recorded chiefly reflects volumetrics, and thus would be more useful for cross-charging purposes than for more illuminating uses. Note that a client session which does not require interaction with the server, such as 'q option', does not result in an accounting record being written. A busy system will create VOLUMINOUS accounting files, so use judiciously. See also: dsmaccnt.log; SUMMARY Accounting, query 'Query STatus', seek "Accounting:". Unfortunately, its output is meager, revealing only On or Off. See also: dsmaccnt.log Accounting, turn off 'Set ACCounting OFf' Accounting, turn on 'Set ACCounting ON' See also: dsmaccnt.log Accounting log Unix: Is file dsmaccnt.log, located in either the server directory or the directory specified on the DSMSERV_ACCOUNTING_DIR environment variable. MVS (OS/390): the recording occurs in SMF records, subtype 14. Accounting recording begins when 'Set ACCounting ON' is done and client activity occurs. The server keeps the file open, and the file will grow endlessly: there is no expiration pruning done by TSM; so you should cut the file off periodically, either when the server starts/ends, or by turning accounting off for the curation of the cut-off. Accounting log directory Specified via environment variable DSMSERV_ACCOUNTING_DIR (q.v.) in Unix environments, or NT Registry key. Introduced late in *SMv3. Accounting record layout/fields See the Admin Guide for a description of record contents. Field 24, "Amount of media wait time during the session", refers to time waiting for tape mounts. Note that maintenance levels may add accounting fields. See layout description in "ACCOUNTING RECORD FORMAT" near the bottom of this functional directory. Accounting records processing There are no formal tools for doing this. The IBM FTP site's adsm/nosuppt directory contains an adsmacct.exec REXX script, but that's it. See http://people.bu.edu/rbs/TSM_Aids.html for a Perl program to do this. ACF 3590 tape drive: Automatic Cartridge Facility: a magazine which can hold 10 cartridges. Note that this does not exist as such on the 3494: it has a 10-cartridge Convenience I/O Station, which is little more than a pass-through area. ACL handling (Access Control Lists) ACL info will be stored in the *SM database by Archive and Backup, unless it is too big, in which case the ACL info will be stored in a storage pool, which can be controlled by DIRMc. See also: Archive; Backup; DIRMc; INCRBYDate Ref: Using the Unix Backup-Archive Clients (indexed under Access Permissions, describing ACLs as "extended permissions"). ACLs (Access Control Lists) and Changes to Unix ACLs do not change the mtime affecting backup file mtime, so such a change will not cause the file to be backed up by date. ACLS Typically a misspelling of "ACSLS", but could be Auto Cartridge Loader System. 
ACS Automated Cartridge System ACSACCESSID Server option to specify the id for the ACS access control. Syntax: ACSACCESSID name Code a name 1-64 characters long. The default id is hostname. ACSDRVID Device Driver ID for ACSLS. ACSLOCKDRIVE Server option to specify whether the drives within the ACSLS libraries are to be locked. Drive locking ensures the exclusive use of the drive within the ACSLS library in a shared environment. However, there are some performance improvements if locking is not performed. If the ADSM drives are not shared with other applications in the configuration then drive locking is not required. Syntax: ACSLOCKDRIVE [YES | NO] Default: NO ACSLS Refers to the STK Automated Cartridge System Library Software. Based upon an RPC client (SSI) - server (CSI) model, it manages the physical aspects of tape cartridge storage and retrieval, while data retrieval is separate, over SCSI or other method. Whenever TSM has a command to send to the robot arm, it changes the command into something that works rather like an RPC call that goes over to the ACSLS software, then ACSLS issues the SCSI commands to the robot arm. ACSLS is typically needed only when sharing a library, wherein ACSLS arbitrates requests; otherwise TSM may control the library directly. Performance: As of 2000/06, severely impaired by being single-threaded, resulting in long tape mount times as *SM queries the drive several times before being sure that a mount is safe. http://www.stortek.com/StorageTek/ software/acsls/ Debugging: Use 'rpcinfo -p' on the server to look for the following ACSLS programs being registered in Portmap: program vers proto port 536871166 2 tcp 4354 300031 2 tcp 4355 then use 'rpcinfo -t ...' to reflect off the program instances. ACSQUICKINIT Server option to specify whether initialization of the ACSLS library should be quick or full during server startup. The full initialization matches the ACSLS inventory with the ADSM inventory and validates the locking for each ADSM-owned volume. It also validates the drive locking and dismounts all volumes currently in the ADSM drives. The full initialization takes about 1-2 seconds per volume and can take a long time during the server startup if the library inventory is large. ACSQUICKINIT bypasses all the inventory matching, lock validation and volume dismounting from the drive. The user must ensure the integrity of the ADSM inventory and drive availability: all ADSM volumes and drives are assumed to be locked by the same lock_id and available. This option is useful for server restart, and should only be used if all ADSM inventory and resources remain the same while the server is down. Syntax: ACSQUICKINIT [YES | NO] Default: NO ACSTIMEOUTX Server option to specify the multiple for the built-in timeout value for the ACSLS API. The built-in timeout value for the ACS audit API is 1800 seconds; for all other APIs it is 600 seconds. If the multiple value specified is 5, the timeout value for the audit API becomes 9000 seconds and for all other APIs it becomes 3000 seconds. Syntax: ACSTIMEOUTX value Code a number from 1 - 100. Default: 1 Activate Policy Set See: ACTivate POlicyset; Policy set, activate ACTivate POlicyset *SM server command to specify an existing policy set as the Active policy set for a policy domain. Syntax: 'ACTivate POlicyset ' (Be sure to do 'VALidate POlicyset' beforehand.) You need to do an Activate after making management class changes. ACTIVE Column name in the ADMIN_SCHEDULES SQL database table. Possible values: YES, NO.
SELECT * FROM ADMIN_SCHEDULES Active Directory See: Windows Active Directory Active file system A file system for which space management is activated. HSM can perform all space management tasks for an active file system, including automatic migration, recall, and reconciliation and selective migration and recall. Contrast with inactive file system. Active files, identify in Select STATE='ACTIVE_VERSION' See also: Inactive files, identify in Select; STATE Active files, number and bytes Do 'EXPort Node NodeName \ FILESpace=FileSpaceName \ FILEData=BACKUPActive \ Preview=Yes' Message ANR0986I will report the number of files and bytes. An alternate method, reporting MB only, follows the definition of Active files, meaning files remaining in the file system - as reflected in a Unix 'df' command and: SELECT SUM(CAPACITY*PCT_UTIL/100) FROM FILESPACES WHERE NODE_NAME='____' This Select is very fast and obviously depends upon whole file system backups. (Selective backups and limited backups can throw it off.) See also: Inactive files, number and bytes Active files, report in terms of MB By definition, Active files are those which are currently present in the client file system, which a current backup causes to be reflected in filespace numbers, so the following yields reasonable results: SELECT NODE_NAME, FILESPACE_NAME, FILESPACE_TYPE, CAPACITY AS "File System Size in MB", PCT_UTIL, DECIMAL((CAPACITY * (PCT_UTIL / 100.0)), 10, 2) AS "MB of Active Files" FROM FILESPACES ORDER BY NODE_NAME, FILESPACE_NAME Caveats: The amount of data in a TSM server filespace will differ somewhat from the client file system where some files are excluded from backups, and more so where client compression is employed. But in most cases the numbers will be good. Active files for a user, identify via Select SELECT COUNT(*) AS "Active files count" - FROM BACKUPS WHERE - NODE_NAME='UPPER_CASE_NAME' AND - FILESPACE_NAME='___' AND OWNER='___' - AND STATE='ACTIVE_VERSION' Active policy set The policy set within a policy domain most recently subjected to an 'activate' to effectively establish its specifications as those to be in effect. This policy set is used by all client nodes assigned to the current policy domain. See policy set. Active Version (Active File) The most recent backup copy of an object stored in ADSM storage for an object that currently exists on a file server or workstation. An active version remains active and exempt from deletion until it is replaced by a new backup version, or ADSM detects during a backup that the user has deleted the original object from a file server or workstation. Note that active and inactive files may exist on the same volumes. See also: ACTIVE_VERSION; Inactive Version; INACTIVE_VERSION Active versions, keep in stgpool For faster restoral, you may want to retain Active files in a higher storage pool of your storage pool hierarchy. There has been no operand in the product to allow you to specify this explicitly; but you can roughly achieve that end via the Stgpool MIGDelay value, to keep recent (Active) files in the higher storage pool. Of course, if there is little turnover in the file system feeding the storage pool, Active files will get old and will migrate. ACTIVE_VERSION SQL DB: State value in Backups table for a current, Active file. See also: DEACTIVATE_DATE Activity log Contains all messages normally sent to the server console during server operation. This is information stored in the TSM server database, not in a separate file. Do 'Query ACtlog' to get info.
Each time the server starts it begins logging with message: ANR2100I Activity log process has started. See also: Activity log pruning Activity log, create an entry As of TSM 3.7.3 you can, from the client side, cause messages to be added to the server Activity Log (ANE4771I) by using the API's dsmLogEvent. Another means, crude but effective: use an unrecognized command name, like: "COMMENT At this time we will be powering off our tape robot." It will show up on an ANR2017I message, followed by "ANR2000E Unknown command - COMMENT.", which can be ignored. See also: ISSUE MESSAGE Activity log, number of entries There is no server command to readily determine the amount of database space consumed by the Activity Log. The only close way is to count the number of log entries, as via batch command: 'dsmadmc -id=___ -pa=___ q act BEGINDate=-9999 | grep ANR | wc -l' or do: SELECT COUNT(*) FROM ACTLOG See also: Activity log pruning Activity log, search 'Query ACtlog ... Search='Search string' Activity log, Select entries more than an hour old SELECT SERVERNAME,NODENAME,DATE_TIME - FROM ACTLOG WHERE - (CAST((CURRENT_TIMESTAMP-DATE_TIME) - HOURS AS INTEGER)>1) Activity log, seek a message number 'Query ACtlog ... MSGno=____' or SELECT MESSAGE FROM ACTLOG WHERE - MSGNO=0988 Seek one less than an hour old: SELECT MESSAGE FROM ACTLOG WHERE - MSGNO=0986 AND - DATE_TIME<(CURRENT_TIMESTAMP-(1 HOUR)) Activity log, seek message text SELECT * FROM ACTLOG WHERE MESSAGE LIKE '%%' Activity log, seek severity messages in last 2 days 'SELECT * FROM ACTLOG WHERE \ (SEVERITY='W' OR SEVERITY='E' OR \ SEVERITY='D') AND \ DAYS(CURRENT_TIMESTAMP)- \ DAYS(DATE_TIME) <2 Activity log content, query 'Query ACtlog' Activity log pruning (prune) Occurs just after midnite, driven by 'Set ACTlogretention N_Days' value. The first messages which always remain in the Activity Log, related to the pruning, are ANR2102I and ANR2103I. Activity log retention period, query 'Query STatus', look for "Activity Log Retention Period" Activity log retention period, set 'Set ACTlogretention N_Days' Activity Summary Table See: SUMMARY table ACTLOG The *SM database Activity Log table. Columns: DATE_TIME, MSGNO, SEVERITY, MESSAGE, ORIGINATOR, NODENAME, OWNERNAME, SCHEDNAME, DOMAINNAME, SESSID ACTlogretention See: Set ACTlogretention AD See: Windows Active Directory Adaptive Differencing A.k.a "adaptive sub-file backup" and "mobile backup", to back up only the changed portions of a file rather than the whole file. Is employed for files > 1 KB and < 2 GB. (The low-end limit (1024 bytes) was due to some strange behavior with really small files, e.g., if a file started out at 5 k and then was truncated to 8 bytes. The solution was to just send the entire file if the file fell below the 1 KB threshold - no problem since these are tiny files.) Initially introduced for TSM4 Windows clients, intended for roaming users needing to back up data on laptop computers, over a telephone line. Note that the transfer speed thus varies greatly according to the phone line. See "56Kb modem uploads" for insight. (All 4.1+ servers can store the subfile data sent by the Windows client - providing that it is turned on in the server, via 'Set SUBFILE'.) Limitations: the differencing subsystem in use is limited to 32 bits, meaning 2 GB files.
The developers chose 2 GB (instead of 4 GB) as the limit to avoid any possible boundary problems near the 32-bit addressing limit and also because this technology was aimed at the mobile market (read: Who is going to have files on their laptops > 2 GB?). As of 2003 there are no plans to go to 64 bits. Ref: TSM 3.7.3 and 4.1 Technical Guide redbook; Windows client manual; Whitepaper on TSM Adaptive Sub-file Differencing at http://www.ibm.com/ software/tivoli/library/whitepapers/ See also: Set SUBFILE; SUBFILE* ADIC Vendor: Advanced Digital Information Corporation - a leading device-independent storage solutions provider to the open systems marketplace. A reseller. www.adic.com ADMIN Name of the default administrator ID, from the TSM installation. Admin GUI There is none for ADSMv3: there is a command line admin client, and a web admin client instead. Administrative client A program that runs on a file server, workstation, or mainframe. This program allows an ADSM administrator to control and monitor an ADSM server using ADSM administrative commands. Contrast with backup-archive client. Administrative command line interface Beginning with the 3.7 client, the Administrative command line interface is no longer part of the Typical install, in order to bring it in line with the needs of the "typical" TSM user, who is an end user who does not require this capability. If you run a Custom install, you can select the Admin component to be installed. Administrative schedule A schedule to control operations affecting the TSM server. Note that you can't redirect output from an administrative schedule. That is, if you define an administrative schedule, you cannot code ">" or ">>" in the CMD. This seems to be related to the restriction that you can't redirect output from an Admin command issued from the ADSM console. Experience shows that an admin schedule will not be kicked off if a Server Script is running (at least in ADSMv3). The only restricted commands are MACRO and Query ACtlog, because... MACRO: Macros are valid only from administrative clients. Scheduling of admin commands is contained solely within the server and the server has no knowledge of macros. Query ACtlog: Since all output from scheduled admin commands is forced to the actlog then scheduling a Query ACtlog would force the resulting output right back to the actlog, thereby doubling the size of the actlog. See: DEFine SCHedule, administrative (a sample definition appears below, following the Administrator entry) Administrative schedule, run one time Define the administrative schedule with PERUnits=Onetime. Administrative schedules, disable See: DISABLESCheds Administrative schedules, prevent See: DISABLESCheds Administrator A user who is registered with an ADSM server as an administrator. Administrators are assigned one or more privilege classes that determine which administrative tasks they can perform. Administrators can use the administrative client to enter ADSM server commands and queries according to their privileges. Be aware that ADSM associates schedules and other definitions with the administrator who created or last changed them, and that removal or locking of the admin can cause the object to stop operating. In light of this affiliation, it is best for a shop to define a general administrator ID (much like root on a Unix system) which should be used to manage resources having sensitivity to the administrator ID.
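To illustrate the "Administrative schedule" entry above, a representative definition - a sketch only, with the schedule name, command, and times being placeholders; check 'DEFine SCHedule' in the Admin Ref for the exact parameters at your level, and note that the CMD string must not contain ">" or ">>" redirection:
  DEFine SCHedule DAILY_DBBACKUP Type=Administrative -
    CMD="BAckup DB DEVclass=TAPECLASS Type=Full" ACTIVE=Yes -
    STARTTime=21:00 PERiod=1 PERUnits=Days -
    DESCription="Nightly full database backup"
The trailing "-" is the usual administrative command-line continuation character; enter the command on one line if you prefer.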
Administrator, add See: Administrator, register Administrator, lock out 'LOCK Admin Admin_Name' See also: Administrators, web, lock out Administrator, password, change 'UPDate Admin Admin_Name PassWord' Administrator, register 'REGister Admin ...' (q.v.) The administrator starts out with Default privilege class. To get more, the 'GRant AUTHority' command must be issued. Administrator, remove 'REMove Admin Adm_Name' Administrator, rename 'REName Admin Old_Adm_Name New_Name' Administrator, revoke authority 'REVoke AUTHority Adm_Name [CLasses=SYstem|Policy|STorage| Operator|Analyst] [DOmains=domain1[,domain2...]] [STGpools=pool1[,pool2...]]' Administrator, unlock 'UNLOCK Admin Adm_Name' Administrator, update info or password 'UPDate Admin ...' (q.v.) Administrator files Located in /usr/lpp/adsm/bin/ Administrator passwords, reset Shamefully, some sites lose track of all their administrator passwords, and need to restore administrator access. The one way is to bring the server down and then start it interactively, which is to say implicitly under the SERVER_CONSOLE administrator id. See: HALT; UPDate Admin Administrator privilege classes From highest level to lowest: System - Total authority Policy - Policy domains, sets, management classes, copy groups, schedules. Storage - Manage storage resources. Operator - Server operation, availability of storage media. Analyst - Reset counters, track server statistics. Default - Can do queries. Right out of a 'REGister Admin' cmd, the individual gets Default privilege. To get more, the 'GRant AUTHority' command must be issued. Administrators, query 'Query admin * Format=Detailed' Administrators, web, lock out You can update the server options file COMMMethod option to eliminate the HTTP and HTTPS specifications. See also: "Administrator, lock out" for locking out a single administrator. adsm The command used to invoke the standard ADSM interface (GUI), for access to Utilities, Server, Administrative Client, Backup-Archive Client, and HSM Client management. /usr/bin/adsm -> /usr/lpp/adsmserv/ezadsm/adsm. Contrast with the 'dsmadm' command, which is the GUI for pure server administration. ADSM ADSTAR Distributed Storage Manager. Consisted of Versions 1, 2, and 3 through Release 1. See also: IBM Tivoli Storage Manager; Tivoli Storage Manager; TSM; WDSF ADSM components installed AIX: 'lslpp -l "adsm*"' See also: TSM components installed ADSM monitoring products ADSM Manager (see http://www.mainstar.com/adsm.htm). Tivoli Decision Support for Storage Management Analysis. This agent program now ships free with TSM V4.1; however you do need a Tivoli Decision Support server. See redbook Tivoli Storage Management Reporting SG24-6109. See also: TSM monitoring products. ADSM origins See: WDSF ADSM server version/release level Revealed in server command Query STatus. Is not available in any SQL table via Select. ADSM usage, restrict by groups Use the "Groups" option in the Client System Options file (dsm.sys) to name the Unix groups which may use ADSM services. See also "Users" option. ADSM.DISKLOG (MVS) Is created as a result of the ANRINST job. You can find a sample of the JCL in the ADSM.SAMPLIB. ADSM.SYS The C:\adsm.sys directory is the "Registry Staging Directory", backed up as part of the system object backup (systemstate and systemservices objects), as the Backup client is traversing the C: DRIVE. 
ADSM.SYS is excluded from "traditional" incremental and selective backups ("exclude c:\adsm.sys\...\*" is implicit - but should really be "exclude.dir c:\adsm.sys", to avoid timing problems.) Note that backups may report adsm.sys\WMI, adsm.sys\IIS and adsm.sys\EVENTLOG as "skipped": these are not files, but subdirectories. You may employ "exclude.dir c:\adsm.sys" in your include-exclude list to eliminate the messages. (A future enhancement may implicitly do exclude.dir.) For Windows 2003, ADSM.SYS includes VSS metadata, which also needs to be backed up. See: BACKUPRegistry; NT Registry, back up; REGREST ADSM_DD_* These are AIX device errors (circa 1997), as appear in the AIX Error Log. ADSM logs certain device errors in the AIX system error log. Accompanying Sense Data details the error condition. ADSM_DD_LOG1 (0XAC3AB953) DEVICE DRIVER SOFTWARE ERROR Logged by the ADSM device driver when a problem is suspected in the ADSM device driver software. For example, if the ADSM device driver issues a SCSI I/O command with an illegal operation code the command fails and the error is logged with this identifier. ADSM_DD_LOG2 (0X5680E405) HARDWARE/COMMAND-ABORTED ERROR Logged by the ADSM device driver when the device reports a hardware error or command-aborted error in response to a SCSI I/O command. ADSM_DD_LOG3 (0X461B41DE) MEDIA ERROR Logged by the ADSM device driver when a SCSI I/O command fails because of corrupted or incompatible media, or because a drive requires cleaning. ADSM_DD_LOG4 (0X4225DB66) TARGET DEVICE GOT UNIT ATTENTION Logged by the ADSM device driver after receiving a UNIT ATTENTION notification from a device. UNIT ATTENTIONs are informational and usually indicate that some state of the device has changed. For example, this error would be logged if the door of a library device was opened and then closed again. Logging this event indicates that the activity occurred and that the library inventory may have been changed. ADSM_DD_LOG5 (0XDAC55CE5) PERMANENT UNKNOWN ERROR Logged by the ADSM device driver after receiving an unknown error from a device in response to a SCSI I/O cmd. There is no single cause for this: the cause is to be determined by examining the Command, Status Code, and Sense Data. For example, it could be that a SCSI command such as Reserve (X'16') or Release (X'17') was issued with no args (rest of Command is all zeroes). adsmfsm /etc/filesystems attribute, set "true", which is added when 'dsmmigfs' or its GUI equivalent is run to add ADSM HSM control to an AIX file system. Adsmpipe An unsupported Unix utility which uses the *SM API to provide archive, backup, retrieve, and restore facilities for any data that can be piped into it, including raw logical volumes. (In that TSM 3.7 can back up Unix raw logical volumes, there no need for Adsmpipe to serve that purpose. However, it is still useful for situations where it is inconvenient or impossible to back up a regular file, such as capturing the output of an Oracle Export operation where there isn't sufficient Unix disk space to hold it for 'dsmc i'.) By default, files are stored on the server under filespace name "/pipe" (which can be overridden via -s). Do 'adsmpipe' to see usage. -f Mandatory option to specify the name used for the file in the filespace. -c To backup file to the *SM server. -f here specifies the arbitrary name to be assigned to the file as it is to be stored in the *SM server. Input comes from Stdin. Messages go to Stderr. -x To restore file from the *SM server. 
Do not include the filespace name in the -f spec. Output goes to Stdout. Messages go to Stderr. -t To list previous backup files. Messages go to Stderr. -m To choose a management class. The session will show up as an ordinary backup, including in accounting data. There is a surprising amount of crossover between this API-based facility and the standard B/A client: 'dsmc q f' will show the backup as type "API:ADSMPIPE". 'dsmc q ba -su=y /pipe/\*' will show the files. 'dsmc restore -su=y /pipe/' will restore the file. To get the software: go to http://www.redbooks.ibm.com/, search on the redbook title (or "adsmpipe"), and then on its page click Additional Material, whereunder lies the utility. That leads to: ftp://www.redbooks.ibm.com/redbooks/ SG244335/ (The file may be labeled "adsmpipe.tar" but may in fact be a compressed file; so should actually have been named "adsmpipe.tar.Z".) Ref: Redbook "Using ADSM to Back Up Databases" (SG24-4335) .adsmrc (Unix client) The ADSMv3 Backup/Archive GUI introduced an Estimate function. It collects statistics from the ADSM server, which the client stores, by *SM server address, in the .adsmrc file in the user's Unix home directory, or Windows dsm.ini file. Client installation also creates this file in the client directory. Ref: Client manual chapter 3 "Estimating Backup processing Time"; ADSMv3 Technical Guide redbook See also: dsm.ini; Estimate; TSM GUI Preferences adsmrsmd.dll Windows library provided with the TSM 4.1 server for Windows. (Not installed with 3.7, though.) For Removable Storage Management (RSM). Should be in directory: c:\program files\tivoli\tsm\server\ as both: adsmrsm.dll and adsmrsmd.dll Messages: ANR9955W See also: RSM adsmscsi Older device driver for Windows (2000 and lower), for each disk drive. With Windows 2003 you instead use tsmscsi, installing it on each drive now, rather than having one device driver for all the drives. See manuals. adsmserv.licenses ADSMv2 file in /usr/lpp/adsmserv/bin/, installed with the base server code and updated by the 'REGister LICense' command to contain encoded character data (which is not the same as the hex strings you typed into the command). For later ADSM/TSM releases, see "nodelock". If the server processor board is upgraded such that its serial number changes, the REGister LICense procedure must be repeated - but you should first clear out the /usr/lpp/adsmserv/bin/adsmserv.licenses file, else repeating "ANR9616I Invalid license record" messages will occur. See: License...; REGister LICense adsmserv.lock The ADSM server lock file. It both carries information about the currently running server, and serves as a lock point to prevent a second instance from running. Sample contents: "dsmserv process ID 19046 started Tue Sep 1 06:46:25 1998". See also: dsmserv.lock ADSTAR An acronym: ADvanced STorage And Retrieval. In the 1992 time period, IBM under John Akers tried spinning off subsidiary companies to handle the various facets of IBM business. ADSTAR was the advanced storage company, whose principal product was hardware, but also created some software to help utilize the hardware they made. Thus, ADSM was originally a software product produced by a hardware company. Lou Gerstner subsequently became CEO, thought little of the disparate sub-companies approach, and re-reorganized things such that ADSTAR was reduced to mostly a name, with its ADSM product now being developed under the software division. 
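Referring back to the Adsmpipe entry above, a usage sketch for the common case of capturing a data stream that never lands in a file system; all names here (the raw device, the stored-file name, and the management class) are placeholders:
  # Send a raw logical volume to the *SM server via the API:
  dd if=/dev/rSomeLV bs=1024k | adsmpipe -c -f SomeLV -m RAWLVMGMT
  # List what has been stored:
  adsmpipe -t
  # Bring it back, writing to the raw device:
  adsmpipe -x -f SomeLV | dd of=/dev/rSomeLV bs=1024k
As noted above, such objects land in the "/pipe" filespace by default and can be listed from the regular client via 'dsmc q ba -su=y "/pipe/*"'.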
ADSTAR Distributed Storage Manager (ADSM) A client/server program product that provides storage management services to customers in a multivendor computer environment. Advanced Device Support license For devices such as a 3494 robotic tape library. Advanced Program-to-Program Communications (APPC) An implementation of the SNA LU6.2 protocol that allows interconnected systems to communicate and share the processing of programs. See Systems Network Architecture Logical Unit 6.2 and Common Programming Interface Communications. Discontinued as of TSM 4.2. afmigr.c Archival migration agent. See also: dfmigr.c AFS You can use the standard dsm and dsmc client commands on AFS file systems, but they cannot back up AFS Access Control Lists for directories or mount points: use dsm.afs or dsmafs, and dsmc.afs or dsmcafs to accomplish complete AFS backups by file. The file backup client is installable from the adsm.afs.client installation file, and the DFS fileset backup agent is installable from adsm.butaafs.client. You may need to purchase the Open Systems Environment Support license for AFS/DFS clients. AFS and TSM 5.x There is no AFS support in TSM 5.x, as there is none specifically in AIX 5.x (AIX 4.3.3 being the latest). This seems to derive from the change in the climate of AFS, where it has gone open-source, thus no longer a viable IBM/Transarc product. AFS backups, delete You can use 'delbuta' to delete from AFS and TSM. Or: Use 'deletedump' from the backup interface to delete the buta dumps from the AFS backup database. The only extra step you need to do is run 'delbuta -s' to synchronize the TSM server. Do this after each deletedump run, and you should be all set. AFS backups, reality Backing up AFS is painful no matter how you do it... Backup by volume (using the *SM replacement for butc) is fast, but can easily consume a LOT of *SM storage space because it is a full image backup every time. To do backup by file properly, you need to keep a list of mount points and have a backup server (or set of clients) that has a lot of memory so that you can use an AFS memory cache - and using a disk cache takes "forever". AFSBackupmntpnt Client System Options file option, valid only when you use dsmafs and dsmcafs. (dsmc will emit error message ANS4900S and ignore the option.) Specifies whether you want ADSM to see an AFS mount point as a mount point (Yes) or as a directory (No): Yes ADSM considers an AFS mount point to be just that: ADSM will back up only the mount point info, and not enter the directory. This is the safer of the two options, but limits what will be done. No ADSM regards an AFS mount point as a directory: ADSM will enter it and (blindly) back up all that it finds there. Note that this can be dangerous, in that use of the 'fts crmount' command is open to all users, who through intent or ignorance can mount parts or all of the local file system or a remote one, or even create "loops". All of this is to say that file-oriented backups of AFS file systems are problematic. See also: DFSBackupmntpt Age factor HSM: A value that determines the weight given to the age of a file when HSM prioritizes eligible files for migration. The age of the file in this case is the number of days since the file was last accessed. The age factor is used with the size factor to determine migration priority for a file. It is a weighting factor, not an absolute number of days since last access. Defined when adding space management to a file system, via dsmhsm GUI or dsmmigfs command.
See also: Size factor agent.lic file As in /usr/tivoli/tsm/client/oracle/bin/ Is the TDPO client license file. Lower level servers don't have server side licensing. TSM uses that file to verify on the client side. TDPO will not run without a valid agent.lic file. Aggregate See: Aggregates; Reclamation; Stored Size. Aggregate data transfer rate Statistic at end of Backup/Archive job, reflecting transmission over the full job time, which thus includes all client "think time", file system traversal, and even time the process was out of the operating system dispatch queue. Is calculated by dividing the total number of bytes transferred by the elapsed processing time. Both Tivoli Storage Manager processing and network time are included in the aggregate transfer rate. Therefore, the aggregate transfer rate is lower than the network transfer rate. Contrast with Network data transfer rate, which can be expected to be a much higher number because of the way it is calculated. Ref: B/A Client manual glossary. Aggregate function SQL: A function, such as Sum(), Count(), Avg(), and Var(), that you can use to calculate totals. In writing expressions and in programming, you can use SQL aggregate functions to determine various statistics on sets of values. Aggregated? In ADSMv3 'Query CONtent ... Format=Detailed': Reveals whether or not the file is stored in the server in an Aggregate and, if so, the position within the aggregate, as in "11/23". If not aggregated, it will report "No". See also: Segment Number; Stored Size Aggregates Refers to the Small Files Aggregation (aka Small File Aggregation) feature in ADSMv3. During Backup and Archive operations, small files are automatically packaged into larger objects called Aggregates, to be transferred and managed as a whole, thus reducing overhead (database and tape space) and improving performance. An Aggregate is a single file stored at the server. Space-managed (HSM) files are not aggregated, which lessens HSM performance. The TSM API certainly supports Aggregation; but Aggregation depends upon the files in a transaction all being in the same file space. TDPs use the API, but often work with very large files, which may each be a separate file space of their own. Hence, you may not see Aggregation with TDPs. But the size of the files means that Aggregation is not an issue for performance. The size of the aggregate varies with the size of the client files and the number of bytes allowed for a single transaction, per the TXNGroupmax server option (transaction size as number of files) and the TXNBytelimit client option (transaction size as number of bytes). Too-small values can conspire to prevent aggregation - so beware using TCPNodelay in AIX. As is the case with files in general, an Aggregate will seek the storage pool in the hierarchy which has sufficient free space to accommodate the Aggregate. An aggregate that cannot fit entirely within a volume will span volumes, and if the break point is in the midst of a file, the file will span volumes. Note that in Reclamation the aggregate will be simply copied with its original size: no effort will be made to construct output aggregates of some nicer size, ostensibly because the data is being kept in a size known to be a happy one for the client, to facilitate restorals. Files which were stored on the server unaggregated (as for example, long-retention files stored under ADSMv2) will remain that way indefinitely and so consume more server space than may be realized. (You can verify with Query CONtent F=D.) 
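To illustrate the two Aggregate tuning knobs mentioned above, a sketch of where each is set; the values shown are arbitrary examples rather than recommendations, so consult the Admin Ref and client manual for the valid ranges at your level:
  # Server options file (dsmserv.opt): maximum files per transaction
  TXNGroupmax    256
  # Client system options file (dsm.sys, in the server stanza): maximum KB per transaction
  TXNBytelimit   25600
Larger values permit larger Aggregates, at the cost of more Recovery Log space held per transaction and more data to resend if a transaction fails.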
Version 2 clients accessing a v3 server should use the QUIET option during Backup and Archive so that files will be aggregated even if a media mount is required. Your Stgpool MAXSize value limits the size of an Aggregate, not the size of any one file in the Aggregate. See also: Aggregated?; NOAGGREGATES; Segment Number Ref: Front of Quick Start manual; Technical Guide redbook; Admin Guide "How the Server Groups Files before Storing" Aggregates and reclamation As expiration deletes files from the server, vacant space can develop within aggregates. For data stored on sequential media, this vacant space is removed during reclamation processing, in a method called "reconstruction" (because it entails rebuilding an aggregate without the empty space). Aggregation, see in database SELECT * FROM CONTENTS WHERE NODE_NAME='UPPER_CASE_NAME' ... In the report: FILE_SIZE is the Physical, or Aggregate, size. The size reflects the TXNBytelimit in effect on the client at the time of the Backup or Archive. AGGREGATED is either "No" (as in the case of HSM, or files Archived or Backup'ed before ADSMv3), or the relative number of the reported file within the aggregate, like "2/16". The value reflects the TXNGroupmax server limit on the number of files in an Aggregate, plus the client TXNBytelimit limiting the size of the Aggregate. Remember that the Aggregate will shrink as reclamation recovers space from old files within the Aggregate. AIT Advanced Intelligent Tape technology, developed by Sony and introduced in 1996 to handle the capacity requirements of large, data-intensive applications. This is video-style, helical-scan technology, wherein data is written in diagonal slashes across the width of the tape. Like 8mm tape, is less reliable than linear tape technologies because AIT tightly wraps the tape around various heads and guides at much sharper angles than linear tape, and its heads are mechanically active, making for higher wear on the tape, lowering reliability. Memory-in-Cassette (MIC) feature puts a flash memory chip in with the tape, for remembering file positions or storing a imited amount of data: the MIC chip contains key parameters such as a tape log, search map, number of times loaded, and application info that allow flexible management of the media and its contents. The memory size was 16 MB in AIT-1; is 64 MB in AIT-3. See: //www.aittape.com/mic.html Cleaning: The technology monitors itself and invokes a built-in Active Head Cleaner as needed; a cleaning cartridge is recommended periodically to remove dust and build-up. Tape type: Advanced Metal Evaporated (AME) Cassette size: tiny, 3.5 inch, 8mm tape. Capacity: 36 GB native; 70 GB compressed (2:1). Sony claims their AIT drives of *all* generations achieve 2.6:1 average compression ratio using Adaptive Lossless Data Compression (ALDC), which would yield 90 GB. Transfer rate: 4 MB/s without compression, 10 MB/s with compression (in the QF 3 MB/s is written). Head life: 50,000 hours Media rating: 30,000 passes. Lifetime estimated at over 30 years. Ref: www.sony.com/ait www.aittape.com/ait1.html http://www.mediabysony.com/ctsc/ pdf/spec_ait3.pdf http://www.tapelibrary.com/aitmic.html http://www.aittape.com/ ait-tape-backup-comparison.html http://www.tape-drives-media.co.uk/sony /about_sony_ait.htm Technology is similar to Mammoth-2. See also: MAM; SAIT AIT-2 (AIT2) Next step in AIT. Capacity: 50 GB native; 100 GB compressed (2:1). 
Sony claims their AIT drives of *all* generations achieve 2.6:1 average compression ratio using Adaptive Lossless Data Compression (ALDC), which would yield 130 GB. Transfer rate: 6 MB/sec max without compression; 15 MB/s with. Technology is similar to Mammoth-2. AIT-3 (AIT3) Next Sony AIT generation - still using 8mm tape and helical-scan technology. Capacity: 100 GB without compression, 260GB with 2.6:1 compression. MIC: 64 MB flash memory AIX 4.2.0 Per IBMer Andy Raibeck, 1998/10/12, responding to a question as to whether the ADSMv3 clients are supported under AIX 4.2.0: "AIX 4.2.0 is not a supported ADSM platform. We would have liked to support it, but the number of problems we had trying to get ADSM to run on 4.2.0 made it impractical." AIX 5L, 32-bit client The 32-bit B/A client for both AIX 4.3.3 & AIX 5L is in the package tivoli.tsm.client.ba.aix43.32bit (API client in tivoli.tsm.client.api.aix43.32bit, image client in tivoli.tsm.client.image.aix43.32bit, etc.). Many people seem to be confused by the "aix43" part of the names, looking for non-existent *.aix51.32bit packages. AIXASYNCIO and AIXDIRECTIO notes Direct I/O only works for storage pool volumes. Further, it "works best" with storage pool files created on a JFS filesystem that is NOT large file enabled. Apparently, AIX usually implicitly disables direct I/O on I/O transactions on large file enabled JFS due to TSM's I/O patterns. To ensure use of direct I/O, you have to use non-large file enabled JFS, which limits your volumes to 2 GB each, which is very restrictive. IBM recommends: AIXDIRECTIO YES AIXASYNCIO NO Asynchronous I/O supposedly has no JFS or file size limitations, but is only used for TSM database volumes. Recovery log and storage pool volumes do not use async I/O. AIX 5.1 documentation mentions changes to the async I/O interfaces to support offsets greater than 2 GB, however, which implies that at least some versions (32-bit TSM server?) do in fact have a 2 GB file size limitation for async I/O. I was unable to get clarity on this point in the PMR I opened. ALDC Adaptive Lossless Data Compression compression algorithm, as used in Sony AIT-2. IBM's ALDC employs their proprietary version of the Lempel-Ziv compression algorithm called IBM LZ1. Ref: IBM site paper "Design considerations for the ALDC cores". See also: ELDC; LZ1; SLDC ALL-AUTO-LOFS Specification for client DOMain option to say that all loopback file systems (lofs) handled by automounter are to be backed up. See also: ALL-LOFS ALL-AUTO-NFS Specification for client DOMain option to say that all network file systems (nfs) handled by automounter are to be backed up. See also: ALL-NFS ALL-LOCAL The Client User Options file (dsm.opt) DOMain statement default, which may be coded explicitly, to include all local hard drives, excluding /tmp in Unix, and excluding any removable media drives, such as CD-ROM. Local drives do not include NFS-mounted file systems. In 4.1.2, its default is to include the System Object (includes Registry, event logs, comp+db, system files, Cert Serv Db, AD, frs, cluster db - which of these the System Object contains depends upon whether the system is Professional, a domain controller, etc.). If you specify a DOMAIN that is not ALL-LOCAL, and want the System Object backed up, then you need to include SYSTEMOBJECT, as in: DOMAIN C: E: SYSTEMOBJECT See also: File systems, local; /tmp ALL-LOFS Specification for client DOMain option to say that all loopback file systems (lofs), except those handled by the automounter, are to be backed up.
See also: ALL-AUTO-LOFS ALL-NFS Specification for client DOMain option to say that all network file systems (NFS), except those handled by the automounter, are to be backed up. See also: ALL-AUTO-NFS Allow access to files See: dsmc SET Access Always backup ADSMv3 client GUI backup choice to back up files regardless of whether they have changed. Equivalent to command line 'dsmc Selective ...'. You should normally use "Incremental (complete)" instead, because "Always" redundantly sends to the *SM server data that it already has, thus inflating tape utilization and *SM server database space requirements. Amanda The Advanced Maryland Automatic Network Disk Archiver. A free backup system that allows the administrator of a LAN to set up a single master backup server to back up multiple hosts to a single large capacity tape drive. AMANDA uses native dump and/or GNU tar facilities and can back up a large number of workstations running multiple versions of Unix. Recent versions can also use SAMBA to back up Microsoft Windows 95/NT hosts. http://www.amanda.org/ (Don't expect to find a system overview of Amanda. Documentation on Amanda is very limited.) http://sourceforge.net/projects/amanda/ http://www.backupcentral.com/amanda.html AMENG See also: LANGuage; USEUNICODEFilenames Amount Migrated As from 'Query STGpool Format=Detailed'. Specifies the amount of data, in MB, that has been migrated, if migration is in progress. If migration is not in progress, this value indicates the amount of data migrated during the last migration. When multiple, parallel migration processes are used for the storage pool, this value indicates the total amount of data migrated by all processes. Note that the value can be higher than reflected in the Pct Migr value if data was pouring into the storage pool as migration was occurring. See also: Pct Migr; Pct Util ANE Messages prefix for event logging. See messages manual. aobpswd Password-setting utility for the TDP for Oracle. Connects to the server specified in the dsm.opt file, to establish an encrypted password in a public file on your client system. This creates a file called TDPO. in the directory specified via the DSMO_PSWDPATH environment variable (or the current directory, if that variable is not set). Thereafter, this file must be readable by anyone running TDPO. Use aobpswd to later update the password. Note that you need to rerun aobpswd before the password expires on the server. Ref: TDP Oracle manual APA AutoPort Aggregation APARs applied to ADSM on AIX system See: PTFs applied to ADSM on AIX system API Application Programming Interface. Available for TSM Backup, Archive, and HSM facilities plus associated queries, providing a library such that programs may directly perform common operations. As of 4.1, available for: AS/400, Netware, OS/2, Unix, Windows ADSM location: /usr/lpp/adsm/api The API cannot be used to access files backed up or archived with the regular Backup-Archive clients. Attempting to do so will yield "ANS4245E (RC122) Format unknown" (same as ANS1245E). Nor can files stored via the API be seen by the conventional clients. Nor can different APIs see each others' files. The only general information that you can query is file spaces and management classes. In the API manual, Chapter 4 (Interoperability) briefly indicates that the regular command line client can do some things with data sent to the server via the API - but not vice versa.
This is highly frustrating, as one would want to use the API to gain finely controlled access to data backed up by regular clients. The "Format unknown" problem is rather similar to the issue of trying to use a regular client of a given level to gain access to data backed up by another regular client at a higher level: the lower level client cannot decipher the advanced format which the higher level client used in storing the data. Thus, interoperability in general is limited in the product. LAN-free support: The TSM API supports LAN-free, as of TSM 4.2. Note that there is no administrative API. Performance: The APIs typically do not aggregate files as do standard TSM clients. Lack of aggregation is usually not detrimental to performance with APIs, though, in that they typically deal with a small number of large files. Encryption: As of late 2003, the API does not support encryption. Ref: Using the API. API, Windows Note that the TSM API for Windows handles objects as case insensitive but case preserving. This is an anomaly resulting from the fact that SQL Server allows case-sensitive databases names. API config file See the info in the "Using the API" manual about configuration file options appropriate to the API. Note that the API config file is specified on the dsmInit call. API header files See: dsmapi*.h API installed? AIX: There will be a /usr/lpp/adsm/api directory. APPC Advanced Program-to-Program Communications. Discontinued as of TSM 4.2. Application client A software application that runs on a workstation or personal computer and uses the ADSM application programming interface (API) function calls to back up, archive, restore, and retrieve objects. Contrast with backup-archive client. Application Programming Interface A set of functions that application (API) clients can call to store, query, and retrieve data from ADSM storage. Arch Archive file type, in Query CONtent report. Other types: Bkup, SpMg ARCHDELete A Yes/No parameter on the 'REGister Node' and 'UPDate Node' commands to specify whether the client node can delete its own archived files from the server. Default: Yes. See also: BACKDELete Archive The process of copying files to a long-term storage device. V2 Archive only archives files: it does *not* archive directories, or symbolic links, or special files!!! Just files. (Thus, Archive is not strictly suitable for making file system images. See the V2archive option in modern clients to achieve the same operation.) File permissions are retained, including Access Control Lists (ACLs). Symbolic links are followed, to archive the file pointed to by the symlink. Directories are not archived in ADSMv2, but files in subdirectories are recorded by their full path name, and so during retrieval any needed subdirectories will be recreated, with new timestamps. In contrast, ADSMv3 *does* archive directories. Archived data belongs to the user who performed the archive. Include/Exclude is not applicable to archiving: just to backups. When you archive a file, you can specify whether to delete the file from your local file system after it is copied to ADSM storage or leave the original file intact. Archive copies may be accompanied by descriptive information, may imply data compression software usage, and may be retrieved by archive date, object name, or description. Windows: "System Object" data (including the Registry) is not archived. Instead, you could use MS Backup to Backup System State to local disk, then use TSM to archive this. Contrast with Retrieve. 
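For illustration only (the file specification and description here are invented), a Unix archive which deletes the local files after they are stored might look like: 'dsmc archive "/home/proj2004/*" -subdir=yes -deletefiles -description="Project 2004 close-out"'. Omit -deletefiles to leave the originals in place.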
See also: dsmc Archive; dsmc Delete ARchive; FILESOnly; V2archive For a technique on archiving a large number of individual files, see entry "Archived files, delete from client". Archive, delete the archived files Use the DELetefiles option. Archive, exclude files In TSM 4.1: EXCLUDE.Archive Archive, from Windows, automatic date in Description You can effect this from the DOS command line, like: dsmc archive c:\test1\ -su=y -desc="%date% Test Archive" Archive, latest Unfortunately, there is no command line option to return the latest version of an archived file. However, for a simple filename (no wildcard characters) you can do: 'dsmc q archive ' which will return a list of all the archived files, where the latest is at the bottom, and can readily be extracted (in Unix, via the 'tail -1' command). Archive, long term, issues A classic situation that site technicians have to contend with is site management mandating the keeping of data for very long term periods, as in five to ten years or more. This may be prompted by requirements such as Sarbanes-Oxley. In approaching this, however, site management typically neglects to consider issues which are essential to the data's long-term viability: - Will you be able to find the media in ten years? Years are a long time in a corporate environment, where mergers and relocations and demand for space cause a lot of things to be moved around - and forgotten. Will the site be able to exercise inventory control over long-term data? - Will anyone know what those tapes are for in the future? The purpose of the tapes has to be clearly documented and somehow remain with the tapes - but not on the tapes. Will that doc even survive? - Will you be able to use the media then? Tapes may survive long periods (if properly stored), but the drives which created them and could read them are transient technology, with readability over multiple generations being rare. Likewise, operating systems and applications greatly evolve over time. And don't overlook the need for human knowledge to be able to make use of the data in the future. To fully assure that frozen data and media kept for years would be usable in the future, the whole environment in which they were created would essentially have to be frozen in time: computer, OS, appls, peripherals, support, user procedures. That's hardly realistic, and so the long-term viability of frozen data is just as problematic. To keep long-term data viable, it has to move with technology. This means not only copying it across evolving media technologies, but also keeping its format viable. For example: XML today, but tomorrow...what? That said, if long-term archiving (in the generic sense) is needed, it is best to proceed in as "vanilla" a manner as possible. For example, rather than create a backup of your commercial database, instead perform an unload: this will make the data reloadable into any contemporary database. Keep in mind that it is not the TSM administrator's responsibility to assure anything other than the safekeeping of stored data. It is the responsibility of the data's owners to assure that it is logically usable in the future. Archive, prevent client from doing See: Archiving, prohibit Archive, space used by clients (nodes) on all volumes 'Query AUDITOccupancy [NodeName(s)] [DOmain=DomainName(s)] [POoltype=ANY|PRimary|COpy]' Note: It is best to run 'AUDit LICenses' before doing 'Query AUDITOccupancy' to assure that the reported information will be current.
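For example (the node names are placeholders): 'Query AUDITOccupancy NODE1,NODE2 POoltype=PRimary'; or, via SQL against the AUDITOCC table: SELECT NODE_NAME, ARCHIVE_MB FROM AUDITOCC ORDER BY ARCHIVE_MB DESC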
Archive and Migration If a disk Archive storage pool fills, ADSM will start a Migration to tape to drain it; but because the pool filled and there is no more space there, the active Archive session wants to write directly to tape; but that tape is in use for Migration, so the client session has to wait. Archive archives nothing A situation wherein you invoke Archive like 'dsmc arch "/my/directory/*"' and nothing gets archived. Possible reasons: - /my/directory/ contains only subdirectories, no files; and the subdirectories had been archived in previous Archive operations. - You have EXCLUDE.ARCHIVE statements which specifies the files in this directory. Archive Attribute In Windows, an advanced attribute of a file, as seen under file Properties, Advanced. It is used by lots of other backup software to define if a file was already backed up, and if it has to be backed up the next time. As of TSM 5.2, the Windows client provides a RESETARCHIVEATTRibute option for resetting the Windows archive attribute for files during a backup operation. See also: RESETARCHIVEATTRibute Archive bit See: Archive Attribute Archive copy An object or group of objects residing in an archive storage pool in ADSM storage. Archive Copy Group A policy object that contains attributes that control the generation, destination, and expiration of archived copies of files. An archive copy group is stored in a management class. Archive Copy Group, define 'DEFine COpygroup DomainName PolicySet MGmtclass Type=Archive DESTination=PoolName [RETVer=N_Days|NOLimit] [SERialization=SHRSTatic|STatic| SHRDYnamic|DYnamic]' Archive descriptions Descriptions are supplementary identifiers which assist in uniquely identifying archive files. Descriptions are stored in secondary tables, in contrast to the primary archive table entries which store archive directory and file data information. Archive directory An archive directory is defined to be unique by: node, filespace, directory/level, owner and description. See also: CLEAN ARCHDIRectories Archive drive contents Windows: dsmc archive d:\* -subdir=yes Archive fails on single file Andy Raibeck wrote in March 1999: "In the case of a SELECTIVE backup or an ARCHIVE, if one or more files can not be backed up (or archived) then the event will be failed. The rationale for this is that if you ask to selectively back up or archive one or more files, the assumption is that you want each and every one of those files to be processed. If even one file fails, then the event will have a status of failed. So the basic difference is that with incremental we expect that one or more files might not be able to be processed, so we do not flag such a case as failed. In other cases, like SELECTIVE or ARCHIVE, we expect that each file specified *must* be processed successfully, or else we flag the operation as failed." Archive files, how to See: dsmc Archive Archive operation, retry when file in Have the CHAngingretries (q.v.) Client use System Options file (dsm.sys) option specify how many retries you want. Default: 4. Archive retention grace period The number of days ADSM retains an archive copy when the server is unable to rebind the object to an appropriate management class. Defined via the ARCHRETention parameter of 'DEFine DOmain'. Archive retention grace period, query 'Query DOmain Format=Detailed', see "Archive Retention (Grace Period)". Archive storage pool, keep separate It is best to keep your Archive storage pool separate from others (Backup, HSM) so that restorals can be done more quickly. 
If Archive data was in the same storage pool as Backups, there would be a lot of unrelated data for the restoral to have to skip over. Archive users SELECT DISTINCT OWNER FROM ARCHIVES [WHERE node_name='UpperCase'] SELECT NODE_NAME,OWNER,TYPE,COUNT(*) AS "Number of objects" FROM ARCHIVES WHERE NODE_NAME='____' OR NODE_NAME='____' GROUP BY NODE_NAME,OWNER,TYPE Archive users, files count SELECT OWNER,count(*) AS "Number of files" FROM ARCHIVES WHERE NODE_NAME='UPPER_CASE_NAME' GROUP BY OWNER Archive vs. Backup Archive is intended for the long-term storage of individual files on tape, while Backup is for safeguarding the contents of a file system to facilitate the later recovery of any part of it. Returning files to the file system en masse is thus the forte of Restore, whereas Retrieve brings back individual files as needed. Retention policies for Archive files are rudimentary, whereas for Backups they are much more comprehensive. See also: http://www.storsol.com/cfusion/template.cfm?page1=wp_whyaisa&page2=blank_men Archive vs. Selective Backup, differences The two are rather similar; but... The owner of a backup file is the user whose name is attached to the file, whereas the owner of an archive file is the person who performed the Archive operation. Frequency of archive is unrestricted, whereas backup can be restricted. Retention rules are simple for archive, but more involved for backup. Archive files are deletable by the end user; Backup files cannot be selectively deleted. ADSMv2 Backup would handle directories, but Archive would not: in ADSMv3+, both Backup and Archive handle directories. Retrieval is rather different for the two: backup allows selection of old versions by date; archive distinction is by date and/or the Description associated with the files. ARCHIVE_DATE Column in *SM server database ARCHIVES table. Format: YYYY-MM-DD HH:MM:SS.xxxxxx Example: SELECT * FROM ARCHIVES WHERE ARCHIVE_DATE> '1997-01-01 00:00:00.000000' AND ARCHIVE_DATE< '1998-12-31 00:00:00.000000' Archived copy A copy of a file that resides in an ADSM archive storage pool. Archived file, change retention? The retention of individual Archive files cannot be changed: you can only Retrieve and then re-Archive the file. *SM is an enterprise software package, meaning that it operates according to site policies. It prohibits users from circumventing site policies, and thus will not allow users to extend archive retentions beyond their site-defined values. The product is also architected for security and privacy, providing the server administrator no means of retrieving, inspecting, deleting, or altering the contents or attributes of individual files. In terms of retention, all that the server administrator can do is change the retention policy for the management class, which affects all files in that class. See also: Archived files, retention period, update Archived files, count SELECT COUNT(*) AS "Count" FROM ARCHIVES WHERE NODE_NAME='' Archived files: deletable by client node? Whether the client can delete archived files now stored on the server. Controlled by the ARCHDELete parameter on the 'REGister Node' and 'UPDate Node' commands. Default: Yes. Query via 'Query Node Format=Detailed'. Archived files, delete from client Via client command: 'dsmc Delete ARchive FileName(s)' (q.v.) You could first try it on a 'Query ARchive' to get comfortable.
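A minimal sketch (the filespec is hypothetical): first review with 'dsmc Query ARchive "/home/joe/olddata/*"', then remove with 'dsmc Delete ARchive "/home/joe/olddata/*"'; a -DEscription="..." qualifier can be added to confine the deletion to one particular archive set.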
Archived files, list from client See: dsmc Query ARchive Archived files, list from server 'SHow Archives NodeName FileSpace' Archived files, list from server, by volume 'Query CONtent VolName ...' Archived files, rebinding does not occur From the TSM Admin. manual, chapter on Implementing Policies for Client Data, topic How Files and Directories Are Associated with a Management Class: "Archive copies are never rebound because each archive operation creates a different archive copy. Archive copies remain bound to the management class name specified when the user archived them." (Reiterated in the client B/A manual, under "Binding and Rebinding Management Classes to Files".) Beware, however, that changing the retention setting of a management class's archive copy group will cause all archive versions bound to that management class to conform to the new retention. Note that you can use an ARCHmc to specify an alternate management class for the archive operation. Archived files, report by owner As of ADSMv3 there is still no way to do this from the client. But it can be done within the server via SQL, like: SELECT OWNER,FILESPACE_NAME,TYPE, ARCHIVE_DATE FROM ARCHIVES WHERE NODE_NAME='UPPER_CASE_NAME' - AND OWNER='joe' Archived files, report by year Example: SELECT * FROM ARCHIVES WHERE YEAR(ARCHIVE_DATE)=1998 Archived files, retention period Is part of the Copy Group definition. Is defined in DEFine DOmain to provide a just-in-case default value. Note that there is one Copy Group in a Management Class for backup files, and one for archived files, so the retention period is essentially part of the Management Class. Archived files, retention period, set The retention period for archive files is set via the "RETVer" parameter of the 'DEFine COpygroup' ADSM command. Can be set for 0-9999 days, or "NOLimit". Default: 365 days. Archived files, retention period, update While you cannot change the retention for an individual file, you can change it for all files bound to a given Management Class: 'UPDate COpygroup DomainName SetName ClassName Type=Archive RETVer=N_Days|NOLimit' where RETVer specifies the retention period, and can be 0-9999 days, or "NOLimit". Default: 365 days. Effect: Changing RETVer causes any newly-archived files to pick up the new retention value, and previously-archived files also get the new retention value, because of their binding to the changed management class. (The TSM database Archives table contains an Archive_Date column: there is no "Expiration_Date" column, and so the archived files conform to whatever the prevailing management class retention rules are at the time. So if you extend your retention policy, it pertains to all archive files, old and new.) Archived files, retention period, query See: 'Query COpygroup ... Type=Archive' Archived files, retrieve from client Via client dsmc command: 'RETrieve [-DEscription="..."] [-FROMDate=date] [-TODate=date] [-FROMOwner=owner] [-FROMNode=node] [-PIck] [-Quiet] [-REPlace=value] [-SErvername=StanzaName] [-SUbdir=No|Yes] [-TAPEPrompt=value] OrigFileName(s) [NewFileName(s)]' Archived files don't show up Some users have encountered the unusual problem of having archived files, and know they should not yet have expired, but the archived files do not show up in a client query, despite being performed from the owning user, etc. Analysis with a Select on the Archives table revealed the cause to be directories missing from the server storage pools, which prevented hierarchically finding the files in a client -subdir query.
The fix was to re-archive the missing directories. Use ARCHmc (q.v.) to help avoid problems. ARCHIVES SQL: *SM server database table containing basic information about each archived object (but not its size). Along with BACKUPS and CONTENTS, constitutes the bulk of the *SM database contents. Columns: NODE_NAME, FILESPACE_NAME, TYPE, HL_NAME, LL_NAME, OBJECT_ID, ARCHIVE_DATE, OWNER, DESCRIPTION, CLASS_NAME. Archiving, prohibit Prohibit archiving by employing one of the following: In the *SM server: - LOCK Node, which prevents all access from the client - and which may be too extreme. - ADSMv2: Do not define an archive Copy Group in the Management Class used by that user. This causes the following message when trying to do an archive: ANS5007W The policy set does not contain any archive copy groups. Unable to continue with archive. - ADSMv3: Code NOARCHIVE in the include-exclude file, as in: "include ?:\...\* NOARCHIVE" which prevents all archiving. - 'UPDate Node ... MAXNUMMP=0', to be in effect during the day, to prevent Backup and Archive, but allow Restore and Retrieve. In the *SM client: - Employ EXCLUDE.ARCHIVE for the subject area. For example, you want to prevent your client system users from archiving files that are in file system /fs1: EXCLUDE.ARCHIVE /fs1/.../* Attempts to archive will then get: ANS1115W File '/fs1/abc/xyz' excluded by Include/Exclude list Retrieve and Delete Archive continue to function as usual. ARCHmc (-ARCHmc) Archive option, to be specified on the 'dsmc archive' command line (only), to select a Management Class and thus override the default Management Class for the client Policy Domain. (ADSM v3.1 allowed it in dsm.opt; but that's not the intention of the option.) Default: the Management Class in the active Policy Set. See "Archive files, how to" for example. As of ADSMv3.1 mid-1999 APAR IX89638 (PTF 3.1.0.7), archived directories are not bound to the management class with the longest retention. See also: CLASS_NAME; dsmBindMC ARCHRETention Parameter of 'DEFine DOmain' to specify the retention grace period for the policy domain, to protect old versions from deletion when the respective copy group is not available. Specified as the number of days (from date of archive) to retain archive copies. ARCserve Competing product from Computer Associates, to back up Microsoft Exchange Server mailboxes. Advertises the ability to restore individual mailboxes, but what they don't tell you is that they do it in a non-Microsoft supported way: they totally circumvent the MS Exchange APIs. The performance is terrible and the product as a whole has given customers lots of problems. See also: Tivoli Storage Manager for Mail ARCHSYMLinkasfile Archive option as of ADSMv3 PTF 7. If you specify ARCHSYMLinkasfile=No then symbolic links will not be followed: the symlink itself will be archived. If you specify ARCHSYMLinkasfile=Yes (the default), then symbolic links will be followed in order to archive the target files. Unrelated: See also FOLlowsymbolic Ref: Installing the Clients manual ARTIC 3494: A Real-Time Interface Coprocessor. This card in the industrial computer within the 3494 manages RS-232 and RS-422 communication, as serial connections to a host and command/feedback info with the tape drives. A patch panel with eight DB-25 slots mounted vertically in the left hand side of the interior of the first frame connects to the card. 
AS SQL clause for assigning an alias to a report column header title, rather than letting the default column title be the data name or the expression used on the column's contents. The alias then becomes the column name in the output, and can be referred to in GROUP BY, ORDER BY, and HAVING clauses - but not in a WHERE clause. The title string should be in double quotes. Note that if the column header widths in combination exceed the width of the display window, the output will be forced into "Title: Value" format. Sample: SELECT VOLUME_NAME AS - "Scratch Vols" FROM LIBVOLUMES WHERE STATUS='Scratch' results in output like: Scratch Vols ------------------ 000049 000084 AS/400 Visit: www.as400.ibm.com ASC SQL: Ascending order, in conjunction with ORDER BY, as like: GROUP BY NODE_NAME ORDER BY NODE_NAME ASC ASC/ASCQ codes Additional Sense Codes and Additional Sense Code Qualifiers involved in I/O errors. The ASC is byte 12 of the sense bytes, and the ASCQ is byte 13 (as numbered from 0). They are reported in hex, in message ANR8302E. ASC=29 ASCQ=00 indicates a SCSI bus reset. Could be a bad adapter, cable, terminator, drive, etc. The drives could be causing an adapter problem which in turn causes a bus reset, or a problematic adapter could be causing the bus reset that causes the drive errors. ASC=3B ASCQ=0D is "Medium dest element full", which can mean that the tape storage slot or drive is already occupied, as when a library's inventory is awry. Perform a re-inventory. ASC=3B ASCQ=0E is "Medium source element empty", saying that there is no tape in the storage slot as there should be, meaning that the library's inventory is awry. Perform a re-inventory. See Appendix B of the Messages manual. See also: ANR8302E ASR Automated System Recovery - a restore feature of Windows XP Professional and Windows Server 2003 that provides a framework for saving and recovering the Windows XP or Windows Server 2003 operating state, in the event of a catastrophic system or hardware failure. TSM creates the files required for ASR recovery and stores them on the TSM server. In the backup, TSM will generate the ASR files in the :\adsm.sys\ASR staging directory on your local machine and store these files in the ASR file space on the TSM server. Ref: Windows B/A Client manual, Appendix F "ASR supplemental information"; Redbook "TSM BMR for Windows 2003 and XP" Msgs: ANS1468E ASSISTVCRRECovery Server option to specify whether the ADSM server will assist the 3570/3590 drive in recovering from a lost or corrupted Vital Cartridge Records (VCR) condition. If you specify Yes (the default) and if TSM detects an error during the mount processing, it locates to the end-of-data during the dismount processing to allow the drive to restore the VCR. During the tape operation, there may be some small effect on performance because the drive cannot perform a fast locate with a lost or corrupted VCR. However, there is no loss of data. See also: VCR ASSISTVCRRECovery, query 'Query OPTions', see "AssistVCRRecovery" Association Server-defined schedules are associated with client nodes so that the client will be contacted to run them in a client-server arrangement. See 'DEFine ASSOCiation', 'DELete ASSOCiation'. Atape Moniker for the Magstar tape driver, which supports 3590, 3570, and 3575. Download from ftp.storsys.ibm.com, in the /devdrvr/ directory. In AIX, is installed in /usr/lpp/Atape/.
Sometimes, Atape will force you to re-create the TSM tape devices; and a reboot may be necessary (as in the Atape driver rewriting AIX's bosboot area): so perform such upgrades off hours. See also: IBMtape Atape header file, for programming AIX: /usr/include/sys/Atape.h Solaris: /usr/include/sys/st.h HP-UX: /usr/include/sys/atdd.h Windows: , Atape level 'lslpp -ql Atape.driver' atime See: Access time; Backup ATL Automated Tape Library: a frame containing tape storage cells and a robotic mechanism which can respond to host commands to retrieve tapes from storage cells and mount them for reading and writing. atldd Moniker for the 3494 library device driver, "AIX LAN/TTY: Automated Tape Library Device Driver", software which comes with the 3494 on floppy diskettes. Is installed in /usr/lpp/atldd/. Download from: ftp://service.boulder.ibm.com/storage/devdrvr/ See also: LMCP atldd Available? 'lsdev -C -l lmcp0' atldd level 'lslpp -ql atldd.driver' ATS IBM Advanced Technical Support. They host "Lunch and Learn" conference call seminars. ATTN messages (3590) Attention (ATTN) messages indicate error conditions that customer personnel may be able to resolve. For example, the operator can correct the ATTN ACF message with a supplemental message of Magazine not locked. Ref: 3590 Operator Guide (GA32-0330-06) Appendix B especially. Attribute See: Volume attributes Attributes of tape drive, list AIX: 'lsattr -EHl rmt1' or 'mt -f /dev/rmt1 status' AUDit DB Undocumented (and therefore unsupported) server command in ADSMv3+, ostensibly a developer service aid, to perform an audit on-line (without taking the server down). Syntax (known): 'AUDIT DB [PARTITION=partition-name] [FIX=Yes]' e.g. 'AUDIT DB PARTITION=DISKSTORAGE' as when a volume cannot be deleted. See also: dsmserv AUDITDB AUDit LIBRary Creates a background process which (as in verifying 3494's volumes) checks that *SM's knowledge of the library's contents is consistent with the library's inventory. This is a bidirectional synchronization task, where the TSM server acquires library inventory information and may subsequently instruct the library to adjust some volume attributes to correspond with TSM volume status info. Syntax: 'AUDit LIBRary LibName [CHECKLabel=Yes|Barcode]' where the barcode check was added in the 2.1.x.10 level of the server to make barcode checking an option rather than the implicit default, due to so many customers having odd barcodes (as in those with more than 6-char serials). Also, using CHECKLabel=Barcode greatly reduces time by eliminating mounts to read the header on the tapes - which is acceptable if you run a tight ship and are confident of barcodes corresponding with internal tape labeling. Sample: 'AUDit LIBRary OURLIB'. The audit needs to be run when the library is not in use (no volumes mounted): if the library is busy, the Audit will likely hang. Runtime: Probably not long. One user with 400 tapes quotes 2-3 minutes. Tip: With a 3494 or comparable library, you may employ the 'mtlib' command to check the category codes of the tapes in the library for reasonableness, and possibly use the 'mtlib' command to adjust errant values without resorting to the disruption of an AUDit LIBRary. This audit is performed when the server is restarted (no known means of suppressing this). In a 349X library, AUDit LIBRary will instruct the library to restore Scratch and Private category codes to match TSM's libvolumes information.
This is a particularly valuable capability for when library category codes have been wiped out by an inadvertent Teach or Reinventory operation at the library (which resets category codes to Insert). AUDit LICenses *SM server command to start a background process which audits both the data storage used by each client node and the licensing features in use on the server. This process then compares the storage utilization and other licensing factors to the license terms that have been defined to the server to determine if the current server configuration is in compliance with the license terms. There is no "Wait" capability, so use with server scripts is awkward. Syntax: 'AUDit LICenses'. Will hopefully complete with messages ANR2825I License audit process 3 completed successfully - N nodes audited ANR2811I Audit License completed - Server is in compliance with license terms. Must be done before running 'Query AUDITOccupancy' for its output to show current values. Note that the time of the audit shows up in Query AUDITOccupancy output. Msgs: ANR2812W, ANR2834W, ANR2841W See also: Auditoccupancy; AUDITSTorage; License...; Query LICense; Set LICenseauditperiod; SHow LMVARS AUDIT RECLAIM Command introduced in v3.1.1.5 to fix a bug introduced by the 3.1.0.0 code. See also: RECLAIM_ANALYSIS AUDit Volume TSM server command to audit a volume, and optionally fix inconsistencies. If a disk volume, it must be online; if a tape volume, it will be mounted (unless TSM realizes that it contains no data, as when you are trying to fix an anomaly). What this does is validate file information stored in the database with that stored on the tape. It does this by reading every byte of every file on the volume and checking control information which the server imbeds in the file when it is stored. The same code is used for reading and checking the file as would be used if the file were to be restored to a client. (In contrast, MOVe Data simply copies files from one volume to another. There are, however, some conditions which MOVe Data will detect which AUDit Volume will not.) If a file on the volume had previously been marked as Damaged, and Audit Volume does not detect any errors in it this time, that file's state is reset. AUDit Volume is a good way to fix niggly problems which prevent a volume from finally reaching a state of Empty when some residual data won't otherwise disappear. Syntax: 'AUDit Volume VolName [Fix=No|Yes] [SKIPPartial=No|Yes] [Quiet=No|Yes]'. "Fix=Yes" will delete unrecoverable files from a damaged volume (you will have to re-backup the files). Caution: Do not use AUDit Volume on a problem disk volume without first determining, from the operating system level, what the problem with the disk actually is. Realize that a disk electronics problem can make intact files look bad, or inconsistently make them look bad. What goes on: The database governs all, and so location of the files on the tape is necessarily controlled by the current db state. That is to say, Audit Volume positions to each next file according to db records. At that position, it expects to find the start of a file it previously recorded on the medium. If not (as when the tape had been written over), then that's a definite inconsistency, and eligible for db deletion, depending upon Fix. The Audit reads each file to verify medium readability. (The Admin Guide suggests using it for checking out volumes which have been out of circulation for some time.)
Medium surface/recording problems will result in some tape drives (e.g., 3590) doggedly trying to re-read that area of the tape, which will entail considerable time. A hopeless file will be marked Damaged or otherwise handled according to the Fix rules. The Audit cannot repair the medium problem: you can thereafter do a Restore Volume to logically fix it. Whether the medium itself is bad is uncertain: there may indeed be a bad surface problem or creasing in the tape; but it might also be that the drive which wrote it did so without sufficient magnetic coercivity, or the coercivity of the medium was "tough", or tracking was screwy back then - in which case the tape may well be reusable. Exercise via tapeutil or the like is in order. Audit Volume has additional help these days: the CRCData Stgpool option now in TSM 5.1, which writes Cyclic Redundancy Check data as part of storing the file. This complements the tape technology's byte error correction encoding to check file integrity. Ref: TSM 5.1 Technical Guide redbook DR note: Audit Volume cannot rebuild *SM database entries from storage pool tape contents: there is no capability in the product to do that kind of thing. Msgs: ANR2333W, ANR2334W See also: dsmserv AUDITDB AUDITDB See: 'DSMSERV AUDITDB' AUDITOCC SQL: TSM database table housing the data that Query AUDITOccupancy reports. Columns: NODE_NAME, BACKUP_MB, BACKUP_COPY_MB, ARCHIVE_MB, ARCHIVE_COPY_MB, SPACEMG_MB, SPACEMG_COPY_MB, TOTAL_MB (This separately reports primary and copy storage pool numbers, in contrast to 'Query AUDITOccupancy', which reports them combined.) Be sure to run 'AUDit LICenses' before reporting from it (as is also required for 'Query AUDITOccupancy'). See also: AUDITSTorage; Query AUDITOccupancy AUDit Volume performance Will be impacted if CRC recording is in effect. AUDITSTorage TSM server option. As part of a license audit operation, the server calculates, by node, the amount of server storage used for backup, archive, and space-managed files. For servers managing large amounts of data, this calculation can take a great deal of CPU time and can stall other server activity. You can use the AUDITSTorage option to specify that storage is not to be calculated as part of a license audit. Note: This option was previously called NOAUDITStorage. Syntax: "AUDITSTorage Yes|No" Yes Specifies that storage is to be calculated as part of a license audit. This is the default. No Specifies that storage is not to be calculated as part of a license audit. (Expect this to impair the results from Query AUDITOccupancy.) Authentication The process of checking and authorizing a user's password before allowing that user access to the ADSM server. (Password prompting does not occur if PASSWORDAccess is set to Generate.) Authentication can be turned on or off by an administrator with system privilege. See also: Password security Authentication, query 'Query STatus' Authentication, turn off 'Set AUthentication OFf' Authentication, turn on 'Set AUthentication ON' The password expiration period is established via 'Set PASSExp NDays' (Defaults to 90 days). Authorization Rule A specification that allows another user to either restore or retrieve a user's objects from ADSM storage. Authorized User In the TSM Client for Unix: any user running with a real user ID of 0 (root) or who owns the TSM executable with the owner execution permission bit set to s.
Auto Fill 3494 device state for its tape drives: pre-loading is enabled, which will keep the ACL index stack filled with volumes from a specified category. See /usr/include/sys/mtlibio.h Auto Migration, manually perform for file system (HSM) 'dsmautomig [FSname]' Auto Migrate on Non-Usage (HSM) In output of 'dsmmigquery -M -D', an attribute of the management class which specifies the number of days since a file was last accessed before it is eligible for automatic migration. Defined via AUTOMIGNOnuse in management class. See: AUTOMIGNOnuse Auto-sharing See: 3590 tape drive sharing AUTOFsrename Macintosh and Windows clients option controlling the automatic renaming of pre-Unicode filespaces on the *SM server when a Unicode-enabled client is first used. The filespace is renamed by adding "_OLD" to the end of its name. Syntax: AUTOFsrename Prompt | Yes | No AUTOLabel Parameter of DEFine LIBRary, as of TSM 5.2, to specify whether the server attempts to automatically label tape volumes for SCSI libraries. See: DEFine LIBRary Autoloader A strictly sequential tape magazine for 3480/3490 tape drives. Contrast with Library, which is random. Automatic Cartridge Facility 3590 tape drive: a magazine which can hold 10 cartridges. Automatic migration (HSM) The process HSM uses to automatically move files from a local file system to ADSM storage based on options and settings chosen by a root user on your workstation. This process is controlled by the space monitor daemon (dsmmonitord). Is governed by the "SPACEMGTECH=AUTOmatic|SELective|NONE" operand of MGmtclass. See also: threshold migration; demand migration; dsmautomig Automatic reconciliation The process HSM uses to reconcile your file systems at regular intervals set by a root user on your workstation. This process is controlled by the space monitor daemon (dsmmonitord). See: Reconciliation; RECOncileinterval AUTOMIGNOnuse Mgmtclass parameter specifying the number of days which must elapse since the file was last accessed before it is eligible for automatic migration. Default: 0 meaning that the file is immediately available for migration. Query: 'Query MGmtclass' and look for "Auto-Migrate on Non-Use". Beware setting this value higher than one or two days: if all the files are accessed, the migration threshold may be exceeded and yet no migration can occur; hence, a thrashing situation. See also: Auto Migrate on Non-Usage AUTOMOUNT (ADSMv2 only) Client System Options file (dsm.sys) option for Sun systems only. Specifies a symbolic link to an NFS mount point monitored by an automount daemon. There is no support for automounted file systems under AIX. Availability Element of 'Query STatus', specifying whether the server is enabled or disabled; that is, it will be "Disabled" if 'DISAble SESSions' had been done, else will show "Enabled". In the command output, look for "Availability". Average file size: ADSMv2: In the summary statistics from an Archive or Backup operation, is the average size of the files processed. Note that this value is the true average, and is not the "Total number of bytes transferred" divided by "Total number of objects backed up" because the "transferred" number is often inflated by retries and the like. See also: Total number of bytes transferred AVG SQL statement to yield the average of all the rows of a given numeric column. See also: COUNT; MAX; MIN; SUM B Unit declarator signifying Bytes. Example: "Page size = 4 KB" b Unit declarator signifying bits.
Example: "Transmit at 56 Kb/sec" B/A Abbreviation for Backup/Archive, as when referring to the B/A Client manual. BAC Informal acronym for the Backup/Archive Client. BAC Binary Arithmetic Compression: algorithm used in the IBM 3480 and 3490 tape system's IDRC for hardware compression the data written to tape. See also: 3590 compression of data Back up some files once a week See IBM doc "How to backup only some files once a week": http://www.ibm.com/support/docview.wss? uid=swg21049445 Back up storage pool See: BAckup STGpool BACKDELete A Yes/No parameter on the 'REGister Node' and 'UPDate Node' commands to specify whether the client node can delete its own backup files from the server, as part of a dsmc Delete Filespace. Default: No. See also: ARCHDELete Backed-up files, list from client 'dsmc Query backup "*" -FROMDate=xxx -NODename=xxx -PASsword=xxx' Backed-up files, list from server You can do a Select on the Backups or Contents table for the filespace; but there's a lot of overhead in the query. A lower overhead method, assuming that the client data is Collocated, is to do a Query CONTent on the volume it was more recently using (Activity Log, SHow VOLUMEUSAGE). A negative COUnt value will report the most recent files first, from the end of the volume. Backed-up files count (HSM) In dsmreconcile log. Backhitch Relatively obscurant term used to describe the start/stop repositioning that some tape drives have to perform after writing stops, in order to recommence writing the next burst of data adjoining the last burst. This is time-consuming and prolongs the backup of small files. Lesser tape technologies such as DLT are notorious for this. This effect is sometimes called "shoe-shining", referring to the reciprocating motion. Redbook "IBM TotalStorage Tape Selection and Differentiation Guide" notes that LTO is 5x slower than 3590H in its backhitch; and "In a non-data streaming environment, the excellent tape start/stop and backhitch properties of the 3590 class provides much better performance than LTO." See Tivoli whitepaper "IBM LTO Ultrium Performance Considerations" Ref: IBM site Technote 1111444 See also: DLT and start/stop operations; "shoe-shining"; Start-stop; Streaming Backint SAP client; uses the TSM API and performs TSM Archiving rather than Backup. Msgs prefix: BKI See also: TDP for R/3 BACKRETention Parameter of 'DEFine DOmain' to specify the retention grace period for the policy domain, to protect old versions from deletion when the respective Copy Group is not available. You should, however, have a Copy Group to formally establish your retention periods: do 'Query COpygroup' to check. Specify as the number of days (from date of deactivation) to retain backup versions that are no longer on the client's system. Backup The process of copying one or more files, directories, and ACLs to a server backup type storage pool to protect against data loss. During a Backup, the server is responsible for evaluating versions-based retention rules, to mark the oldest Inactive file as expired if the new incoming version causes the oldest Inactive version to be "pushed out" of the set. (See: "Versions-based file expiration") ADSMv2 did not back up special files: character, block, FIFO (named pipes), or sockets). ADSMv3 *will* back up some special files: character, block, FIFO (named pipes); but ADSMv3 will *not* back up or restore sockets (see "Sockets and Backup/Restore"). More trivially, the "." 
file in the highest level directory is not backed up, which is why "objects backed up" is one less than "objects inspected".) Backup types: - Incremental: new or changed files; Can be one of: - full: all new and changed files are backed up, and takes care of deleted files; - partial: simply looks for files new or changed since last backup date, so omits old-dated files new to client, and deleted files are not expired. An example of a partial incremental is -INCRBYDate. Via 'dsmc Incremental'. (Note that the file will be physically backed up again only if TSM deems the content of the file to have been changed: if only the attributes (e.g., Unix permissions) have been changed, then TSM will simply update the attributes of the object on the server.) - Selective: you select the files. Via 'dsmc Selective'. Priority: Lower than BAckup DB, higher than Restore. Full incrementals are the norm, as started by 'dsmc incremental /FSName'. Use an Include-Exclude Options File if you need to limit inclusion. Use a Virtual Mount Point to start at other than the top of a file system. Use the DOMain Client User Options File option to define default filesystems to be backed up. (Incremental backup will back up empty directories. Do 'dsmc Query Backup * -dirs -sub=yes' on the client to find the empties, or choose Directory Tree under 'dsm'.) To effect backup, TSM examines the file's attributes such as size, modification date and time (Unix mtime), ownership (Unix UID), group (Unix GID), (Unix) file permissions, ACL, special opsys markers such as NTFS file security descriptors, and compares them to those attributes of the most recent backup version of that file. (Unix atime - access time - is ignored.) Ref: B/A Client manual, "Backing Up and Restoring Files" chapter, "Backup: Related Topics", "What Does TSM Consider a Changed File"; and under the description of Copy Mode. This means that for normal incremental backups, TSM has to query the database for each file being backed up in order to determine whether that file is a candidate for incremental backup. This adds some overhead to the backup process. TSM tries to be generic where it can, and in Unix does not record the inode number. Thus, if a 'cp -p' or 'mv' is done such that the file is replaced (its inode number changes) but only the ctime attribute is different, then the file data will not be backed up in the next incremental backup: the TSM client will just send the new ctime value for updating in the TSM database. Backup changes the file's access timestamp (Unix stat struct st_atime): the time of last "access" or "reference", as seen via the Unix 'ls -alu ...' command. The NT client uses the FILE_FLAG_BACKUP_SEMANTICS option when a file is opened, to prevent updating the Access time. See also: Directories and Backup; -INCRBYDate; SLOWINCREMENTAL; Updating--> Contrast with Restore. For a technique on backing up a large number of individual files, see entry "Archived files, delete from client". Backup, batched transaction buffering See: TXNBytelimit Backup, delete all copies Currently the only way to purge all copies of a single file on the server is to set up a new Management Class which keeps 0 versions of the file. Run an incremental while the file is still on the local FS and specify this new MC on an Include statement for that file. Next change the Include/Exclude so the file is now excluded. The next incremental will expire the file under the new policy which will keep 0 inactive versions of the file.
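A rough server-side sketch of the above (the domain, policy set, class, and pool names are placeholders, and a policy change affects every file bound to the class): 'DEFine MGmtclass MYDOMAIN MYSET PURGECLASS', 'DEFine COpygroup MYDOMAIN MYSET PURGECLASS Type=Backup DESTination=BACKUPPOOL VERExists=1 VERDeleted=0 RETExtra=0 RETOnly=0', 'VALidate POlicyset MYDOMAIN MYSET', 'ACTivate POlicyset MYDOMAIN MYSET'; then on the client, an include statement such as 'include /fs1/dir/unwanted.file PURGECLASS' before the first incremental, changed to an exclude before the next.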
Backup, delete part of it ADSM doesn't provide a means for server commands to delete part of a backup; but you can effect it by emplacing an Exclude for the object to be deleted: the next backup will render it obsolete in the backups. Backup, exclude files Specify "EXclude" in the Include-exclude options file entry to exclude a file or group of files from ADSM backup services. (Directories are never excluded from backups.) Backup, full (force) You can get a full backup of a file system via one of the following methods (being careful to weigh the ramifications of each approach): - In the server, do 'UPDate COpygroup ... MODE=ABSolute' in the associated Management Class, which causes files to be backed up regardless of having been modified. (You will have to do a 'VALidate POlicyset' and 'ACTivate POlicyset' to put the change into effect.) Don't forget to change back when the backup is done. - Consider GENerate BACKUPSET (q.v.), which creates a package of the file system's current Active backup files. See: Backup Set; dsmc REStore BACKUPSET; Query BACKUPSETContents - At PC client: relabel the drive and do a backup. At Unix client: mount the file system read-only at a different mount point and do a backup. - As server admin, do 'REName FIlespace' to cause the filespace to be fully repopulated in the next backup (hence a full backup): you could then rename this just-in filespace to some special name and rename the original back into place. - Do a Selective Backup; like 'dsmc s -su=y FSname' in Unix. (In the NT GUI, next to the Help button there is a pull down menu: choose option "always backup".) - Define a variant node name which would be associated with a management class with the desired retention policy, code an alternate server stanza in the Client System Options file, and select it via the -SErvername command line option. Backup, full, periodic (weekly, etc.) Some sites have backup requirements which do not mesh with TSM's "incremental forever" philosophy. For example, they want to perform incrementals daily, and fulls weekly and monthly. For guidance, see article "Performing Full Client Backups with TSM" on the IBM website. Backup, last (most recent) Determine the date of last backup via: Client command: 'dsmc Query Filespace' Server commands: 'Query FIlespace [NodeName] [FilespaceName] Format=Detailed' SELECT * FROM FILESPACES WHERE - NODE_NAME='UPPER_CASE_NAME' and look at BACKUP_START, BACKUP_END Select: Backup, management class used Shows up in 'query backup', whether via command line or GUI. Backup, more data than expected going If you perform a backup and expect like 5 GB of data to go and instead find much more, it's usually a symptom of retries, as in files being open and changing during the backup. Backup, OS/2 OS/2 files have an archive byte (-a or +a). Some say that if this changes, ADSM will back up such files; but others say that ADSM uses the filesize-filedate-filetime combination. Backup, prohibit See: Backups, prevent Backup, selective A function that allows users to back up objects from a client domain that are not excluded in the include-exclude list and that meet the requirement for serialization in the backup copy group of the management class assigned to each object. Performed via the 'dsmc Selective' cmd. See: Selective Backup. 
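A minimal example (the file system name is illustrative): 'dsmc Selective "/home/dbexports/*" -SUbdir=Yes'; or, on Windows, 'dsmc sel c:\data\* -subdir=yes'.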
Backup, space used by clients (nodes) on all volumes 'Query AUDITOccupancy [NodeName(s)] [DOmain=DomainName(s)] [POoltype=ANY|PRimary|COpy]' Note: You need to run 'AUDit LICenses' before doing 'Query AUDITOccupancy' for the reported information to be current. Backup, subfile See: Adaptive Differencing; Set SUBFILE; SUBFILE* Backup, successful? Consider something like the following to report on errors, to be run via schedule: /* FILESERVER BACKUP EXCEPTIONS */ Query EVent DomainName SchedName BEGINDate=TODAY-1 ENDDate=TODAY-1 EXceptionsonly=YES Format=Detailed >> /var/log/backup-problems File will end up with message: "ANR2034E QUERY EVENT: No match found for this query." if no problems (no exceptions found). Backup, undo There is no way to undo standard client Incremental or Selective backups. Backup, which file systems to back up Specify a file system name via the "DOMain option" (q.v.) or specify a file system subdirectory via the "VIRTUALMountpoint" option (q.v.) and then code it like a file system in the "DOMain option" (q.v.). Backup, which files are backed up See the client manual; search the PDF (Backup criteria) for the word "modified". In the Windows client manual, see: - "Understanding which files are backed up" - "Copy mode" - "Resetarchiveattribute" (TSM does not use the Windows archive attribute to determine if a file is a candidate for incremental backup.) - And, Windows Journal-based backup. It is also the case that TSM respects the entries in Windows Registry subkey HKLM\System\CurrentControlSet\Control\BackupRestore\FilesNotToBackup (No, this is not mentioned in the client manual; is in the 4.2 Technical Guide redbook. File \Pagefile.sys should be in this list.) Always do 'dsmc q inclexcl' in Windows to see the realities of inclusion. Note that there is also a list of Registry keys not to be restored, in KeysNotToRestore. Unix: See the criteria listed under the description of "Copy mode" (p.128 of the 5.2 manual). Backup copies, number of Defined in Backup Copy Group. Backup Copy Group A policy object that contains attributes which control the generation, destination, and expiration of backup versions of files. A backup copy group belongs to a management class. Backup Copy Group, define 'DEFine COpygroup DomainName PolicySet MGmtclass [Type=Backup] DESTination=Pool_Name [FREQuency=Ndays] [VERExists=N_Versions|NOLimit] [VERDeleted=N_Versions|NOLimit] [RETExtra=N_Days|NOLimit] [RETOnly=N_Days|NOLimit] [MODE=MODified|ABSolute] [SERialization=SHRSTatic|STatic|SHRDYnamic|DYnamic]' Backup Copy Group, update 'UPDate COpygroup DomainName PolicySet MGmtclass [Type=Backup] [DESTination=Pool_Name] [FREQuency=Ndays] [VERExists=N_Versions|NOLimit] [VERDeleted=N_Versions|NOLimit] [RETExtra=N_Days|NOLimit] [RETOnly=N_Days|NOLimit] [MODE=MODified|ABSolute] [SERialization=SHRSTatic|STatic|SHRDYnamic|DYnamic]' BAckup DB TSM server command to back up the TSM database to tape (backs up only used pages, not the whole physical space). This operation is essential when LOGMode Rollforward is in effect, as this is the only way that the Recovery Log is cleared. 'BAckup DB DEVclass=DevclassName [Type=Incremental|Full|DBSnapshot] [VOLumenames=VolNames|FILE:File_Name] [Scratch=Yes|No] [Wait=No|Yes]' The VOLumenames list will be used if there is at least one volume in it which is not already occupied; else TSM will use a scratch tape per the default Scratch=Yes.
Note that the DevClass can be of DEVType FILE...which could allow you to have a large-capacity hard drive inside a fire-proof enclosure so as to produce a secure backup for disaster with no extra effort. DBSnapshot Specifies that you want to run a full snapshot database backup, to make a "point in time" image for possible later db restoral (in which the Recovery Log will *not* participate). The entire contents of a database are copied and a new snapshot database backup is created without interrupting the existing full and incremental backup series for the database. If roll-forward db mode is in effect, and a snapshot is performed, the recovery log keeps growing. Before doing one of these, be aware that the latest snapshot db backup cannot be deleted! Priority: Higher than filespace Backup, so it will preempt it in case of conflict. The Recovery Log space represented in the backup will not be reclaimed until the backup finishes: the Pct Util does not decrease as the backup proceeds. The tape used *does* show up in a 'Query MOunts'. Note that unlike in other ADSM tape operations, the tape is immediately unloaded when the backup is complete. If using scratch volumes, beware that this function will gradually consume all your scratch volumes unless you do periodic pruning ('DELete VOLHistory'). If specifying volsers to use, they must *not* already be assigned to a DBBackup or storage pool: if they are, ADSM will instead try to use a scratch volume, unless Scratch=No. Example: 'BAckup DB DEVclass=LIBR.DEVC_3590 VOL=000050 Type=full Scratch=No' You should free old dbbackup volumes: 'DELete VOLHistory TOD=-N T=DBB' where "-N" should specify a value like -7, saying to delete any older than 7 days, meaning you keep the latest 7 days' worth for safety. It is best to schedule this deletion to occur immediately prior to doing BAckup DB: in this way you can assure that a tape will be available, even if the scratch pool was exhausted. Messages: ANR1360I when output volume opened; ANR1361I when the volume is closed; ANR4554I tracks progress; ANR4550I at completion (reports number of pages backed up). Incremental DB Backup does *not* automatically write to the last tape used in a full backup: it will write to a scratch tape instead. (And each incremental writes to a new tape.) Queries: Do either: 'Query VOLHistory Type=DBBackup' or 'Query LIBVolume' to reveal the database backup volume. (A 'Query Volume' is no help because it only reports storage pool volumes, and by their nature, database backup media are outside ADSM storage.) See: Database backup volume, pruning. By using the ADSMv3 Virtual Volumes capability, the output may be stored on another ADSM server (electronic vaulting). See also: DELete VOLHistory BAckup DB performance As of mid-2001, BAckup DB is still a plodding task. Data rates, even with the best disk, tape, and CPU hardware, are only 3 - 4 MB/sec, which is well below hardware speeds. Thus, the TSM database system itself is the drag on performance. BAckup DB to a scratch 3590 tape in the 3494 Perform like the following example: 'BAckup DB DEVclass=LIBR.DEVC_3590 Type=Full' BAckup DB to a specific 3590 tape in the 3494 Perform like the following example: 'BAckup DB DEVclass=LIBR.DEVC_3590 Type=Full VOLumenames=000050 Scratch=No' BAckup DEVCONFig ADSM server command to back up the device configuration information which ADSM uses in standalone recoveries. Syntax: 'BAckup DEVCONFig [Filenames=___]' (No entry is written to the Activity Log to indicate that this was performed.)
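For example (the output path is illustrative): 'BAckup DEVCONFig Filenames=/opt/tivoli/tsm/server/bin/devconfig.out'. If Filenames is omitted, the file(s) named on the DEVCONFig option in the server options file are rewritten.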
See also DEVCONFig server option. Backup failure message "ANS4638E Incremental backup of 'FileSystemName' finished with 2 failure" Backup files An elemental concept in *SM relates to its database orientation: each file is unique by nodename, filespace, and filename. Together, the nodename, filespace name, and filename constitute the database key for managing the file. Backup files: deletable by client node? Controlled by the BACKDELete parameter on the 'REGister Node' and 'UPDate Node' commands. Default: No (which thus prohibits a "DELete FIlespace" operation from the client). Query via 'Query Node Format=Detailed'. Backup files, management class binding By design, you cannot have different backup versions of the same file bound to different management classes. All backup versions of a given file are bound to the same management class. Backup files, delete *SM provides no inherent method to do this, but you can achieve it by the following paradigm: 1. Update Copygroup Verexists to 1, ACTivate POlicyset, do a fresh incremental backup. This gets rid of all but the last (active) version of a file. 2. Update Copygroup Retainonly and Retainextra to 0; ACTivate POlicyset; EXPIre Inventory. This gets ADSM to forget about inactive files. 3. If the files are "uniquely identified by the sub-directory structure above the files" add those dirs to the exclude list. Do an Incremental Backup. The files in the excluded dirs get marked inactive. The next EXPIre Inventory should then remove them from the tapes. See also: Database, delete table entry Backup files, list from server 'Query CONtent VolName ...' Backup files, retention period Is part of the Copy Group definition. Is defined in DEFine DOmain to provide a just-in-case default value. Note that there is one Copy Group in a Management Class for backup files, and one for archived files, so the retention period is essentially part of the Management Class. Backup files, versions 'SHOW Versions NodeName FileSpace' Backup files for a node, list from SERVER SELECT NODE_NAME, FILESPACE_NAME, - HL_NAME, LL_NAME, OWNER, STATE, - BACKUP_DATE, DEACTIVATE_DATE FROM - BACKUPS WHERE - NODE_NAME='UPPER_CASE_NAME' (Be sure that node name is upper case.) Backup generations See "Backup version" Backup Image See: dsmc Backup Image Backup laptop computers Look into CoreData's Remoteworx for ADSM software, which detects and transmits only the byte-level data changes for each backup file, to an ADSM client PC running Windows. See www.coredata.com. Backup objects for day, query at server SELECT * FROM BACKUPS WHERE - NODE_NAME='UPPER_CASE_NAME' AND - FILESPACE_NAME='___' AND - DATE(BACKUP_DATE)='2000-01-14' Backup of HSM-managed files Use one server for HSM plus the Backup of that HSM area: this allows ADSM to effect the backup (of large files) by copying from one storage pool tape to another, without recalling the file to the host file system. In the typical backup of an HSM-managed file system, ADSM will back up all the files too small to be HSM-migrated (4095 bytes or less); and then any files which were in the disk level of the HSM storage pool hierarchy, in that they had not yet migrated down to the tape level; and then copy across tapes in the storage pool. If Backup gets hung up on a code defect while doing cross-tape backup, you can circumvent by doing a dsmrecall of the problem file(s). The backup will then occur from the file system copy. Be advised that cross-pool backup can sometimes require three drives, as files can span tapes.
With only two drives, you can run into an "Insufficient mount points available" condition (ANR0535W, ANR0567). Backup Operation Element of report from 'Query VOLHistory' or 'DSMSERV DISPlay DBBackupvolumes' to identify the operation number for this volume within the backup series. Will be 0 for a full backup, 1 for first incremental backup, etc. See also: Backup Series Backup operation, retry when file in use Have the CHAngingretries (q.v.) Client System Options file (dsm.sys) option specify how many retries you want. Default: 4. Backup performance Many factors can affect backup performance. Here are some things to look at: - Client system capability and load at the time of backup. - If Expiration is running on the server, performance is guaranteed to be impaired, due to the CPU and database load involved. - Use client compression judiciously. Be aware that COMPRESSAlways=No can cause the whole transaction and all the files involved within it to be processed again, without compression. This will show up in the "Objects compressed by:" backup statistics number being negative (like "-29%"). (To see how much compression is costing, compress a copy of a typical, large file that is involved in your backups, outside of TSM, performing the compression with a utility like gzip.) Beware that using client compression and sending that data to tape drives which also compress data can result in prolonged time at the tape drive as its algorithms struggle to find patterns in the patternless compressed data. - Using the MEMORYEFficientbackup option considerably reduces performance. - The client manual advises: "A very large include-exclude list may decrease backup performance." - A file system that does compression (e.g., NTFS) will prolong the job. - Backing up a file system which is networked to this client system rather than native to it (e.g., NFS, AFS) will naturally be relatively slow. - Make sure that if you activated client tracing in the past that you did not leave it active, as its overhead will dramatically slow client performance. - File system topology: conventional directories with more than about 1000 files slow down all access, including ADSM. (You can gauge this by doing a Unix 'find' command in large file systems and appreciate just how painful it is to have too many files in one directory.) - Consider using MAXNUMMP to increase the number of drives you may simultaneously use. - Your Copy Group SERialization choice could be causing the backup of active files to be attempted multiple times. - May be waiting for mount points on the server. Do 'Query SEssion F=D'. - Examine the Backup log for things like a lot of retries on active files, and inspect the timestamp sequence for indications of problem areas in the file system. - If an Incremental backup is slow while a Selective or Incrbydate is fast, it can indicate a client with insufficient real memory or other processes consuming memory that the client needs to process an Active files list expeditiously. - If the client under-estimates the size of an object it is sending to the server, there may be performance degradation and/or the backup may fail. See IBM site TechNote 1156827. - Defragment your hard drive! You can regain a lot of performance. (This can also be achieved by performing a file-oriented copy of the file system to a fresh disk, which will also eliminate empty space in directories.) - If a Windows system, consider running DISKCLEAN on the filesystem.
- In a PC, routine periodic executions of a disk analyzer (e.g., CHKDSK, or a more thorough commercial product) are vital to find drive problems which can impair performance. - Do your schedule log, dsmerror log, or server Activity Log show errors or contention affecting progress? - Avoid using the unqualified Exclude option to exclude a file system or directory, as Exclude is for *files*: subdirectories will still be traversed and examined for candidates. Instead, use Exclude.FS or Exclude.Dir, as appropriate. - TSM Journaling may help a lot. - The number of versions of files that you keep, per your Backup Copy Group, entails overhead: during a Backup, the server has the additional work of checking retention policies for this next version of a file, possibly causing the oldest one in the storage pool to be marked for expiration. See also: DEACTIVATE_DATE - If AIX, consider using the TCPNodelay client option to send small transactions right away, before filling the TCP/IP buffer. - If running on a PC, disable anti-virus and other software which adds overhead to file access. - Backups of very large data masses, such as databases, benefit from going directly to tape, where streaming can often be faster than first going to disk, with its rotational positioning issues. And speed will be further increased by hardware data compression in the drive. - If backups first go to a disk storage pool, consider making it RAID type, to benefit from parallel striping across multiple, separate channels & disk drives. But avoid RAID 5, which is poor at sequential writing. - Make sure your server BUFPoolsize is sufficient to cache some 99% of requests (do 'q db f=d'), else server performance plummets. - Maximize your TXNBytelimit and TXNGroupmax definitions to make the most efficient use of network bandwidth. - Balance access of multiple clients to one server and carefully schedule server admin tasks to avoid waiting for tape mounts, migration, expirations, and the like. Migration in particular should be avoided during backups: see IBM site TechNote 1110026. - Make sure that LARGECOMmbuffers Yes is in effect in your client (the default is No, except for AIX). - The client RESOURceutilization option can be used to boost the number of sessions. - If server and client are in the same system, use Shared Memory in Unix and Named Pipes in Windows. - If client accesses server across network, examine TCP/IP tuning values and see if other unusual activity is congesting the network. - See if your client TCPWindowsize is too small - but don't increase it beyond a recommended size. (63 is good for Windows.) - Is your ethernet card in Autonegotiate mode? Shame on you! - Beware the invisible: networking administrators may have changed the "quality of service" rating - perhaps per your predecessor - so that *SM traffic has reduced priority on that network link. - If it is a large file system and the directories are reasonably balanced, consider using VIRTUALMountpoint definitions to allow backing up the file system in parallel. - A normal incremental backup on a very large file system will cause the *SM client to allocate large amounts of memory for file tables, which can cause the client system to page heavily. Make sure the system has enough real memory, and that other work running on that system at the same time is not causing contention for memory. Consider doing Incrbydate backups, which don't use file tables, or perhaps "Fast Incrementals".
- Consider it time to split that file system into two or more file systems which are more manageable. - Look for misconfigured network equipment (adapters, switches, etc.). - Are you using ethernet to transfer large volumes of data? Consider that ethernet's standard MTU size is tiny, fine for messaging but not well suited to large volumes of data, making for a lot of processor and transmission overhead in transferring the data in numerous tiny packets. Consider the Jumbo Frame capability in some incarnations of gigabit ethernet, or a transmission technology like fibre channel, which is designed for volume data transfers. That is, ethernet's capacity does not scale in proportion to its speed increase. - If warranted, put your *SM traffic onto a private network (like a SAN does) to avoid competing with other traffic in getting your data through. - If you have multiple tape drives on one SCSI chain, consider dedicating one host adapter card to each drive in order to maximize performance. - If your computer system has only one bus, it could be constrained. (RS/6000 systems can have multiple, independent buses, which distribute I/O.) - Tape drive technologies which don't handle start-stop well (e.g., DLT) will prolong backups. See: Backhitch - Automatic tape drive cleaning and retries on a dirty drive will slow down the action. - Tapes whose media is marginal may be tough for the tape drive to write, and the drive may linger on a tape block for some time, laboring until it successfully writes it - and may not give any indication to the operating system that it had to undertake this extra effort and time. (As an example, with a watchable task: Via 'Query Process' I once observed a Backup Stgpool taking about four times as long as it should in writing a 3590 tape, the Files count repeatedly remaining constant over 20 seconds as it struggled to write modest-sized files.) - If you mix SCSI device types on a single SCSI chain, you may be limiting your fastest device to the speed of the slowest device. For example, putting a single-ended device on a SCSI chain with a differential device will cause the chain speed to drop to that of the single-ended device. - In Unix, use the public domain 'lsof' command to see what the client process is currently working on. - In Solaris, use the 'truss' command to see where the client is processing. - Is cyclic redundancy checking enabled for the server/client (*SM 5.1)? This entails considerable overhead. - Exchange 2000: Consider un-checking the option "Zero Out Deleted Database Pages" (requires a restart of the Exchange Services). See IBM article ID# 1144592 titled "Data Protection for Exchange On-line Backup Performance is Slow", and Microsoft KB 815068. - A Windows TSM server may be I/O impaired due to its SCSI or Fibre Channel block size. See IBM site Technote 1167281. If none of the above pan out, consider rerunning the problem backup with client tracing active. See CLIENT TRACING near the bottom of this document. See also: Backup taking too long; Client performance factors Backup performance with 3590 tapes Writing directly to 3590 tapes, rather than through an intermediate disk, is 3X-4X faster: 3590's stream the data where disks can't. Ref: ADSM Version 2 Release 1.5 Performance Evaluation Report. BACKup REgistry During Incremental backup of a Windows system, the Registry area is backed up. However, in cases where you want to back up the Registry alone, you can do so with the BACKup REgistry command.
The command backs up Registry hives listed in Registry key HKEY_LOCAL_MACHINE\System\ CurrentControlSet\Control\Hivelist Syntax: BACKup REgistry Note that in current clients there are no operands, to guarantee system consistency. Earlier clients had modifying parameters: BACKup REgistry ENTIRE Backs up both the Machine and User hives. BACKup REgistry MACHINE Backs up the Machine root key hives (registry subkeys). BACKup REgistry USER Backs up User root key hives (registry subkeys). See also: BACKUPRegistry Backup Required Before Migration (HSM) In output of 'dsmmigquery -M -D', an attribute of the management class which determines whether it is necessary for a backup copy (Backup/Restore) of the file to exist before it can be migrated by HSM. Defined via MIGREQUIRESBkup in management class. See: MIGREQUIRESBkup Backup retention grace period The number of days ADSM retains a backup version when the server is unable to rebind the object to an appropriate management class. Defined via the BACKRETention parameter of 'DEFine DOmain'. Backup retention grace period, query 'Query DOmain Format=Detailed', see "Backup Retention (Grace Period)". Backup Series Element of report from 'Query VOLHistory' or 'DSMSERV DISPlay DBBackupvolumes' to identify the TSM database backup series of which the volume is a part. Each backup series consists of a full backup and all incremental backups that apply to that full backup, up to the next full backup of the TSM database. Note: After a DSMSERV LOADDB, the Backup Series number will revert to 1. When doing DELete VOLHistory, be sure to delete the whole series at once, to avoid the ANR8448E problem. See also: BAckup VOLHistory Backup sessions, multiple See: RESOURceutilization Backup Set TSM 3.7+ facility to create a collection of a client node's current Active backup files as a single point-in-time amalgam (snapshot) on sequential media, to be stored and managed as a single object in a format tailored to and restorable on the client system whose data is therein represented. The GENerate BACKUPSET server command is used to create the set, intended to be written to sequential media, typically of a type which can be read either on the server or client such that the client can perform a 'dsmc REStore BACKUPSET' either through the TSM server or by directly reading the media from the client node. The media is often something like a CD-ROM, JAZ, or ZIP. Note that you cannot write more than one Backup Set to a given volume. If this is a concern, look into server-to-server virtual volumes. (See: Virtual Volumes) Also known by the misleading name "Instant Archive". Note that the retention period can be specified when the backup set is created: it is not governed by a management class. Also termed "LAN-free Restore". The consolidated, contiguous nature of the set speeds restoral. ("Speeds" may be an exaggeration: while Backup Sets are generated via TSM db lookups, they are restored via lookups in the sequential media in which the Backup Set is contained, which can be slow.) Backup Sets are frozen, point-in-time snapshots: they are in no way incremental, and nothing can be added to one. But there are several downsides to this approach: The first is that it is expensive to create the Backup Set, in terms of time, media, and mounts.
Second, the set is really "outside" of the normal TSM paradigm, further evidenced by the awkwardness of later trying to determine the contents of the set, given that its inventory is not tracked in the TSM database (which would represent too much overhead). You will not see a directory structure for a backupset. Note that you can create the Backup Set on the server as devtype File and then FTP the result to the client, as perhaps to burn a CD - but be sure to perform the FTP in binary mode! Backup Sets are not a DR substitute for copy storage pools in that Backup Sets hold only Active files, whereas copy storage pools hold all files, Active and Inactive. There is no support in the TSM API for the backup set format. Further, Backup Sets are unsuitable for API-stored objects (TDP backups, etc.) in that the client APIs are not programmed to later deal with Backup Sets, and so cannot perform client-based restores with them. Likewise, the standard Backup/Archive clients do not handle API-generated data. See: Backup Set; GENerate BACKUPSET; dsmc Query BACKUPSET; dsmc REStore BACKUPSET; Query BACKUPSET; Query BACKUPSETContents Ref: TSM 3.7 Technical Guide redbook Backup Set, amount of data Normal Backup Set queries report the number of files, but not the amount of data. You can determine the latter by realizing that a Backup Set consists of all the Active files in a file system, and that is equivalent to the file system size and percent utilized as recorded at last backup, reportable via Query FIlespace. Backup Set, list contents Client: 'Query BACKUPSET' Server: 'Query BACKUPSETContents' See also: dsmc Query BACKUPSET Backup set, on CD In writing Backup Sets to CDs you need to account for the amount of data exceeding the capacity of a CD... Define a devclass of type FILE and set the MAXCAPacity to under the size of the CD capacity. This will cause the data to span TSM volumes (FILEs), resulting in each volume being on a separate CD. Be mindful of the requirement: The label on the media must meet the following restrictions: - No more than 11 characters - Same name for file name and volume label. This might not be a problem for local backupset restores but is mandatory for server backupsets over devclass with type REMOVABLEFILE. The creation utility DirectCD creates a random CD volume label beginning with the creation date, which will not match the TSM volume label. Ref: Admin Ref; Admin Guide "Generating Client Backup Sets on the Server" & "Configuring Removable File Devices" Backup set, remove from Volhistory A backup set which expires through normal retention processing may leave the volume in the volhistory. There is an undocumented form of DELete VOLHistory to get it out of there: 'DELete VOLHistory TODate=TODAY [TOTime=hh:mm:ss] TYPE=BACKUPSET VOLume=______ [FORCE=YES]' Note that VOLume may be case-sensitive. Backup Set and CLI vs. GUI In the beginning (early 2001), only the CLI could deal with Backup Sets. The GUI was later given that capability. However: The GUI can be used only to restore an entire backup set. The CLI is more flexible, and can be used to restore an entire backup set or individual files within a backup set. Backup Set and TDP The TDPs do not support backup sets - because they use the TSM client API, which does not support Backup Sets. Backup Set and the client API The TSM client API does not support Backup Sets.
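Backup Set, generate (example) A rough sketch of the flow, with an illustrative node name, set-name prefix, and device class (see GENerate BACKUPSET and the client manual for the full syntax at your level): 'GENerate BACKUPSET MYNODE WEEKLYSET /home DEVclass=FILECLASS RETention=365' (the server appends a unique numeric suffix to the set name, e.g. WEEKLYSET.12345678); 'Query BACKUPSET MYNODE *' lists the sets generated; 'Query BACKUPSETContents MYNODE WEEKLYSET.12345678' lists the files therein (slow: it reads the set media); the client then uses 'dsmc REStore BACKUPSET' (q.v.) to restore, either through the server or from locally readable media.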
Backup Set restoral performance Some specific considerations: - A Backup Set may contain multiple filespaces, and so getting to the data you want within the composite may take time. (Watch out: If you specify a destination other than the original location, data from all file spaces is restored to the location you specify.) - There is no table of contents for backup sets: The entire tape or set has to be read for each restore or query - which explains why a Query BACKUPSETContents is about as time-consuming as an actual restoral. See also "Restoral performance", as general considerations apply. Backup Set volumes not checked in SELECT COUNT(VOLUME_NAME) FROM VOLHISTORY WHERE TYPE='BACKUPSET' AND VOLUME_NAME NOT IN (SELECT VOLUME_NAME FROM LIBVOLUMES) Backup Sets, report SELECT VOLUME_NAME FROM VOLHISTORY WHERE TYPE='BACKUPSET' Backup Sets, report number SELECT COUNT(VOLUME_NAME) FROM VOLHISTORY WHERE TYPE='BACKUPSET' Backup skips some PC disks (skipping) Possible causes: - Options file updated to add disk, but scheduler process not restarted. - Drive improperly labeled. - Drive was relabeled since PC reboot or since ADSM client was started. - The permissions on the drive are wrong. - Drive attributes differ from those of drives which *will* back up. - Give ADSM full control to the root on each drive (may have been run by SYSTEM account, lacking root access). - Msgmode is QUIET instead of VERBOSE, so you see no messages if nothing goes wrong. - ADSM client code may be defective such that it fails if the disk label is in mixed case, rather than all upper or lower. Backup skips some Unix files An obvious cause for this occurring is that the file matches an Exclude. Another cause: The Unix client manual advises that skipping can occur when the LANG environment variable is set to C, POSIX (limiting the valid characters to those with ASCII codes less than 128), or other values with limitations for valid characters, and the file name contains characters with ASCII codes higher than 127. Backup "stalled" Many ADSM customers complain that their client backup is "stalled". In fact, it is almost always the case that it is processing, simply taking longer than the person thinks. In traditional incremental backups, the client must get from the server a list of all files that it has for the filespace, and then run through its file system, comparing each file against that list to see if it warrants backup. That entails considerable server database work, network traffic, client CPU time, and client I/O...which is aggravated by overpopulated directories. Summary advice: give it time. BAckup STGpool *SM server operation to create a backup copy of a storage pool in a Copy Storage Pool (by definition on serial medium, i.e., tape). Syntax: 'BAckup STGpool PrimaryPoolName CopyPoolName [MAXPRocess=N] [Preview=No|Yes|VOLumesonly] [Wait=No|Yes]' Note that storage pool backups are incremental in nature so you only produce copies of files that have not already been copied. (It is incremental in the sense of adding new objects to the backup storage pool. It is not exactly like a client incremental backup operation: BAckup STGpool itself does not cause objects to be identified as deletable from the *SM database. It is Expire Inventory that rids the backup storage pool of obsolete objects.) Order of backup: most recent data first, then work back in time. BAckup STGpool copies data: it does not examine the data for issues...you need to use AUDit Volume for that, optionally using CRC data.
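For instance (pool names illustrative): 'BAckup STGpool BACKUPPOOL COPYPOOL MAXPRocess=2 Wait=Yes' copies everything in primary pool BACKUPPOOL which is not already in copy pool COPYPOOL, using two processes; specifying Preview=Yes instead would just report what would be copied and which volumes would be mounted.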
Only one backup may be started per storage pool: attempting to start a second results in error message "Backup already active for pool ___". MAXPRocess: Specify only as many as you will have available mount points or drives to service them (DEVclass MOUNTLimit, less any drives already in use or unavailable (Query DRive)). Each process will select a node and copy all the files for that node. Processes that finish early will quit. The last surviving process should be expected to go on to other nodes' data in the storage pool. If you don't actually get that many processes, it could be due to the number of mount points or there being too few nodes represented in the stgpool data. Elapsed time cannot be less than the time to process the largest client data set. Beware using all the tape drives: migration is a lower priority process and thus can be stuck for hours waiting for BAckup STGpool to end, which can result in irate Archive users. MAXPRocess and preemption: If you invoked BAckup STGpool to use all drives and a scheduled Backup DB started, the Backup DB process would pre-empt one of the BAckup STGpool processes to gain access to a drive (msg ANR1440I): the other BAckup STGpool processes continue unaffected. (TSM will not reinitiate the terminated process after the preempting process has completed.) Preview: Reveals the number of files and bytes to be backed up and a list of the primary storage pool volumes that would be mounted. You cannot backup a storage pool on one computer architecture and restore it on another: use Export/Import. If a client is introducing files to a primary storage pool while that pool is being backed up to a copy storage pool, the new files may get copied to the copy storage pool, depending upon the progress that the BAckup STGpool has made. Preemption: BAckup STGpool will wait until needed tape drives are available: it does not preempt Backups or HSM Recalls or even Reclamation. By using the ADSMv3 Virtual Volumes capability, the output may be stored on another ADSM server (electronic vaulting - as archive type files). Msgs: ANR1212I, ANR0986I (reports process, number of files, and bytes), ANR1214I (reports storage pool name, number of files, and bytes), ANR1221E (if insufficient space in copy storage pool) See also: Aggregates BAckup STGpool, estimate requirements Use the Preview option. BAckup STGpool, how to stop If you need to stop the backup prematurely, you can do one of: - CANcel PRocess on each of its processes. But: you need to know the process numbers, and so can't, for example, make the stop an administrative schedule. - UPDate STGpool ... ACCess=READOnly This will conveniently cause all the backup processes to stop after they have finished with the file they are currently working on. In the Activity Log you will find message ANR1221E, saying that the process terminated because of insufficient space. (Updating the storage pool back to READWrite before a process stops will prevent the process from stopping: it has to transition to the next file for it to see the READOnly status.) BAckup STGpool, minimize time To minimize the time for the operation: - Perform the operation when nothing else is going on in ADSM; - Maximize your TSM database Cache Hit Pct. 
(standard tuning); - Maximize the 'BAckup STGpool' MAXPRocess number to: The lesser of the number of tape drives or nodes available when backing up disk pools (which needs tape drives only for the outputs); The lesser of either half the number of tape drives or the number of nodes when backing up tape pools (which needs tape drives for both input and output). - If you have an odd number of tape drives during a tape pool backup, one drive will likely end up with a tape lingering in it after stgpool backup is done with that tape, and ADSM's rotational re-use of the drive will have to wait for a dismount. So for the duration of the storage pool backup, consider setting your DEVclass MOUNTRetention value to 1 to assure that the drive is ready for the next mount. - If you have plenty of tapes, consider marking previous stgpool backup tapes read-only such that ADSM will always perform the backup to an empty tape and so not have to take time to change tapes when it fills last night's. BAckup STGpool, order within hierarchy When performing a Backup Stgpool on a storage pool hierarchy, it should be done from the top of the hierarchy to the bottom: you should not skip around (as for example doing the third level, then the first level, then the second). Remember that files migrate downward in the hierarchy, not upward. If you do the Backup Stgpool in the same downward order, you will guarantee not missing files which may have migrated in between storage pool backups. BAckup STGpool taking too long Can be due to tapes whose media is marginal, tough for the input tape drive to read or the output tape drive to write, causing lingering on a tape block for some time, laboring until it successfully completes the I/O - and may not give any indication to the operating system that it had to undertake this extra effort and time. To analyze: Observe via 'Query Process', ostensibly seeing the Files count repeatedly remaining constant as a file of just modest file size is copied. But is it the input or output volume? To determine, do 'UPDate Volume ______ ACCess=READOnly' on the output volume: this will cause the BAckup STGpool to switch to a new output volume. If subsequent copying suffers no delay, then the output tape was the problem; else it was probably the input volume that was troublesome. While the operation proceeds, return the prior output volume to READWrite state, which will tend to cause it to be used for output when the current output volume fills, at which time a different input volume is likely. If copying becomes sluggish again, then certainly that volume is the problem. BAckup STGPOOLHierarchy There is no such command - but there should be: The point of a storage pool hierarchy is that if a file object is in any storage pool within the hierarchy, that is "there". In concert with this concept, there should be a command which generally backs up the hierarchy to backup storage. The existing command, BAckup STGpool, is antithetical, in that it addresses a physical subset of the whole, logical hierarchy: it is both a nuisance to have to invoke against each primary storage pool in turn, and problematic in that a file which moves in the hierarchy might be missed by the piecemeal backup. Backup storage pool See also: Copy Storage Pool Backup storage pool, disk? (disk buffer for Backup) Beware using a disk as the 1st level of a backup storage pool hierarchy.
TSM storage hierarchy rules specify that if a given file is too big to fit into the (remaining) space of a storage pool, it should instead go directly down to the next level (presumably, tape). What can happen is that the disk storage pool can get full because migration cannot occur fast enough, and the backup will instead try to go directly to tape, which can result in the client session getting hung up on a Media Wait (MediaW status). Mitigation: Use MAXSize on the disk storage pool, to keep large files from using it up quickly. However, many clients back up large files routinely, so you end up with the old situation of clients waiting for tape drives. Another problem with using this kind of disk buffering for Backups is that the migration generates locks which interfere with Backup, worse on a multiprocessor system. If TSM is able to migrate at all, it will be thrashing trying to keep up, continually re-examining the storage pool contents to fulfill its migration rules of largest file sizes and nodes. Lastly, you have to be concerned that your backup data may not all be on tape: being on disk, it represents an incomplete tape data set, and jeopardizes recoverability of that filespace, should the disk go bad. See also: Backup through disk storage pool Backup success message "Successful incremental backup of 'FileSystemName'", which has no message number. Backup successful? You can check the 11th field of the dsmaccnt.log. BACKup SYSTEMObject See: dsmc BACKup SYSTEMObject Backup table See: BACKUPS Backup taking too long (seems like it "hangs") (hung, freezes, sluggish, slow) Sometimes it may seem that the backup client is hung, but almost always it is active. To determine why it's taking as long as it is, you need to take a close look at the system and see if it or TSM is really hung, or simply slow or blocked. Examination of the evolutionary context of the client might show that the number of files on it has been steadily increasing, and so the number in TSM storage, and thus an increasingly burdensome inventory obtained from the server during a dsmc Incremental. The amount of available CPU power and memory at the time are principal factors: it may be that the system's load has evolved whereas its real memory has not, and it needs more. Use your opsys monitoring tools to determine if the TSM client is actually busy in terms of CPU time and I/O in examination of the file system: the backup may simply still be looking for new files to send to server storage. The monitor should show I/O and CPU activity proceeding. In the client log, look for the backup lingering in a particular area of the file system, which can indicate a bad file or disk area, where a chkdsk or the like may uncover a problem. You could also try a comparative INCRBYDate type backup and see if that does better, which would indicate difficulty dealing with the size of the inventory. TSM Journaling may also be an option. Consider doing client tracing to identify where the time is concentrated. (See "CLIENT TRACING" section at bottom of this document.) If not hung, then one or more of the many performance factors may be at play. See: Backup performance Backup through disk storage pool (disk buffer) It is traditional to back up directly to tape, but you can do it through a storage pool hierarchy with a disk storage pool ahead of tape. Advantages: - Immediacy: no waiting for tape mount. - No queueing for limited tape drives when collocation is in effect.
- 'BAckup STGpool' can be faster, to the extent that the backup data is still on disk, as opposed to a tape-to-tape operation. Disadvantages: - ADSM server is busier, having to move the data first to disk, then to tape (with corresponding database updates). - There can still be some delays for tape mounts, as migration works to drain the disk storage pool. - Backup data tends to be on disk and tape, rather than all on tape. (This can be mitigated by setting migration levels to 0% low and 0% high to force all the data to tape.) - A considerable amount of disk space is dedicated to a transient operation. - With some tape drive technology you may get better throughput by going directly to tape because the streaming speed of some tape technology is by nature faster than disk. With better tape technology, the tape is always positioned, ready for writing whereas the rotating disk has to wait for its spot to come around again. And, the compression in tape drive hardware can result in the effective write speed exceeding even the streaming rate spec. - If the disk pool fills, incoming clients will go into media wait and will remain tape-destined even if the disk pool empties. - In *SM database restoral, part of that procedure is to audit any disk storage pool volumes; so a good-sized backup storage pool on disk will add to that time. See also: Backup storage pool, disk? Backup version An object, directory, or file space that a user has backed up that resides in a backup storage pool in ADSM storage. The most recent is the "active" version; older ones are "inactive" versions. Versions are controlled in the Backup Copy Group definition (see 'DEFine COpygroup'). "VERExists" limits the number of versions, with the excess being deleted - regardless of the RETExtra which would otherwise keep them around. "VERDeleted" limits versions kept of deleted files. "RETExtra" is the retention period, in days, for all but the latest backup version. "RETOnly" is the retention period, in days, for the sole remaining backup version of a file deleted from the client file system. Note that individual backups cannot be deleted from either the client or server. See Active Version and Inactive Version. Backup version, make unrecoverable First, optionally, move the file on the client system to another directory. 2nd, in the original directory replace the file with a small stub of junk. 3rd, do a selective backup of the stub as many times as you have 'versions' set in the management class. This will make any backups of the real file unrestorable. 4th, change the options to stop backing up the real file. There is a way to "trick" ADSM into deleting the backups: Code an EXCLUDE statement for the file, then perform an incremental backup. This will cause existing backup versions to be flagged for deletion. Next, run EXPIre Inventory, and voila! The versions will be deleted. Backup via Schedule, on NT Running backups on NT systems through "NT services" can be problematic: If you choose Logon As and assign it an ADMIN ID with all the necessary privileges you can think of, it still may not work. Instead, double-click on the ADSM scheduler and click on the button to run the service as the local System Account. BAckup VOLHistory ADSM server command to back up the volume history data to an opsys file. Syntax: 'BAckup VOLHistory [Filenames=___]' (No entry is written to the Activity Log to indicate that this was performed.) 
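A minimal sketch (the path is illustrative): 'BAckup VOLHistory Filenames=/adsm/volhist.out'. With no Filenames operand, the history is written to the file(s) named on VOLUMEHistory entries in the server options file.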
Note that you need not explicitly execute this command if the VOLumeHistory option is coded in the server options file, in that the option causes ADSM to automatically back up the volume history whenever it does something like a database backup. However, ADSM does not automatically back up the volume history if a 'DELete VOLHistory' is performed, so you may want to manually invoke the backup then. See also: Backup Series; VOLUMEHistory Backup MB, over last 24 hours SELECT SUM(BYTES)/1000/1000 AS "MB_per_day" FROM SUMMARY WHERE ACTIVITY='BACKUP' AND (CURRENT_TIMESTAMP-END_TIME)HOURS <= 24 HOURS Backup vs. Archive, differences See "Archive vs. Selective Backup". Backup vs. Migration, priorities Backups have priority over migration. Backup without expiration Use INCRBYDate (q.v.). Backup without rebinding In AIX, accomplish by remounting the file system on a special mount point name; or, on a PC, change the volume name/label of the hard drive. Then back up with a different, special management class. This will cause a full backup and create a new filespace name. Another approach would be to do the rename on the other end: rename the ADSM filespace and then back up with the usual management class, which will cause a full backup to occur and regenerate the former filespace afresh. Backup won't happen See: Backup skips some PC disks BACKUP_DIR Part of Tivoli Data Protection for Oracle. Should be listed in your tdpo.opt file. It specifies the client directory which will be used for storing the files on your server. If you list the filespaces created for that node on the server after a successful backup, you will see one filespace with the same name as your BACKUP_DIR. Backup-archive client A program that runs on a file server, PC, or workstation that provides a means for ADSM users to back up, archive, restore, and retrieve objects. Contrast with application client and administrative client. BackupDomainList The title under which DOMain-named file systems appear in the output of the client command 'Query Options'. BackupExec Veritas Backup Exec product. A dubious aspect is the handling of open files, per a selectable option: it copies a 'stub' to tape, allowing for it to skip the file. Apparently, most of the time when you restore the file, it's either a null file or a partial copy of the original, either way being useless. http://www.BackupExec.com/ BACKUPFULL In 'Query VOLHistory' or 'DSMSERV DISPlay DBBackupvolumes' or VOLHISTORY database TYPE output, this is the Volume Type to say that the volume was used for a full backup of the database. BACKUPINCR In 'Query VOLHistory' or VOLHISTORY database TYPE output, this is the Volume Type to say that the volume was used for an incremental backup of the database. BACKUPRegistry Option for NT systems only, to specify whether ADSM should back up the NT Registry during incremental backups. Specify: Yes or No Default: Yes The Registry backup works by using an NT API function to write the contents of the Registry into the adsm.sys directory. (The documentation has erroneously been suggesting that the system32\config Registry area should be Excluded from the backup: it should not). The files written have the same layout as the native registry files in \winnt\system32\config. You can back up just the Registry with the BACKup Registry command. In Windows 2000 and beyond, you can use the DOMain option to control the backup of system objects.
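A one-line dsm.opt sketch, should you need to suppress it on such a client: BACKUPRegistry No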
Ref: redbook "Windows NT Backup and Recovery with ADSM" (SG24-2231): topic 4.1.2.1 Registry Backup BACKUPS SQL: TSM database table containing info about all active and inactive files backed up. Along with ARCHIVES and CONTENTS, constitutes the bulk of the *SM database contents. Columns: NODE_NAME, FILESPACE_NAME, STATE (active, inactive), TYPE, HL_NAME, LL_NAME, OBJECT_ID, BACKUP_DATE, DEACTIVATE_DATE, OWNER, CLASS_NAME. Notes: Does not contain information about file sizes or the volumes which the objects are on (see the Contents table). In a Select, you can do CONCAT(HL_NAME, LL_NAME) to stick those two components together, to make the output more familiar; or concatenate the whole path by doing: SELECT FILESPACE_NAME || HL_NAME || LL_NAME FROM BACKUPS. See: DEACTIVATE_DATE; OWNER; STATE; TYPE Backups, count of bytes received Use the Summary table, available in TSM 3.7+, like: SELECT SUM(BYTES) AS Sum_Bytes - FROM ADSM.SUMMARY - WHERE (DATE(END_TIME) = CURRENT DATE \ - 1 DAYS AND TIME(END_TIME) >= \ '20.00.00') OR (DATE(END_TIME) = \ CURRENT DATE) AND ACTIVITY = 'BACKUP' See also: Summary table Backups, parallelize Going to a disk pool first is one way; then the data migrates to tape. To go directly to tape: You may need to define your STGpool with COLlocation=FILespace to achieve such results; else *SM will try to fill one tape at a time, making all other processes wait for access to the tape. Further subdivision is afforded via VIRTUALMountpoint. (Subdivide and conquer.) That may not be a good solution where what you are backing up is not a file system, but a commercial database backup via agent, or a buta backup, where each backup creates a separate filespace. In such situations you can use the approach of separate management classes, so as to have separate storage pools, but still using the same library and tape pool. If you have COLlocation=Yes (node) and need to force parallelization during a backup session, you can momentarily toggle the single, current output tape from READWrite to READOnly to incite *SM to have multiple output tapes. Backups, prevent There are times when you want to prevent backups from occurring, as when a restoral is running and fresh backups of the same file system would create version confusion in the restoral process, or where client nodes tend to inappropriately use the TSM client during the day, as in kicking off Backups at times when drives are needed for other scheduled tasks. You can prevent backups in several ways: In the *SM server: - LOCK Node, which prevents all access from the client - and which may be too extreme. - 'UPDate Node ... MAXNUMMP=0', to be in effect during the day, to prevent Backup and Archive, but allow Restore and Retrieve. In the *SM client: - In the Include-Exclude list, code EXCLUDE.FS for each file system. In general: - If the backups are performed via client schedule: Unfortunately, client schedules lack the ACTIVE= keyword such that we can render them inactive. Instead, you can do a temporary DELete ASSOCiation to divorce the node from the backup schedule. - If the backups are being performed independently by the client: Do DISAble SESSions after the restoral starts, to allow it to proceed but prevent further client sessions. Or you might do UPDate STGpool ... ACCess=READOnly, which would certainly prevent backups from proceeding. See also: "Restorals, prevent" for another approach Backups go directly to tape, not disk Some shops have their backups first go as intended to a disk storage pool, with migration to tape. 
But they may find backups going directly to tape. Possible causes: - The file exceeds the STGpool MAXSize. - The file exceeds the physical storage pool size. - The backup occurred choosing a management class which goes to tape. - Maybe only some of the data is going directly to tape: the directories. Remember that *SM by default stores directories under the Management Class with the longest retention, modifiable via DIRMc. - Your storage pool hierarchy was changed by someone. - See also "ANS1329S" discussion about COMPRESSAlways effects. - Your client (perhaps DB2 backup) may be overestimating the size of the object being backed up. - Um, the stgpool Access mode is Read/Write, yes? A good thing to check: Do a short Select * From Backups... to examine some of those files, and see what they are actually using for a Management Class. Backups without expiration Use INCRBYDate (q.v.). Backupset See: Backup Set baclient Shorthand for Backup-Archive Client. bak DFS command to start the backup and restore operations that direct them to buta. See also: buta; butc; DFS bakserver BackUp Server: DFS program to manage info in its database, serving recording and query operations. See also "buserver" of AFS. Barcode See CHECKLabel Barcode, examine tape (to assure that it is physically in library) 'mtlib -l /dev/lmcp0 -a -V VolName' Causes the robot to move to the tape and scan its barcode. 'mtlib -l /dev/lmcp0 -a -L FileName' can be used to examine tapes en masse, by taking the first volser on each line of the file. Bare Metal Restore (BMR) Grudgingly performed by TSM, if at all: it is basically left to 3rd party providers such as The Kernel Group (see www.tkg.com/products.html). Redbook: "ADSM Client Disaster Recovery: Bare Metal Restore" (SG24-4880) See also: BMR Users group: TSM AIX Bare Metal Restore Special interest group. Subscribe by sending email to TSMAIXBMR-subscribe@yahoogroups.com or via the yahoogroups web interface at http://www.yahoogroups.com Bare Metal Restore, Windows? BMR of Windows is highly problematic, due to the Registry orientation of the operating system and hardware dependencies. I.e., don't expect it to work. As one customer put it: "Windows is the least transportable and least modular OS ever." Batch mode Start an "administrative client session" to issue a single server command or macro, via the command: 'dsmadmc -id=YOURID -pa=YOURPW CMDNAME', as described in the ADSM Administrator's Reference. BCV EMC disk: Business Continuance Volumes. BEGin EVentlogging Server command to begin logging events to one or more receivers. A receiver for which event logging has begun is an active receiver. When the server is started, event logging automatically begins for the console and activity log and for any receivers that are started automatically based on entries in the server options file. You can use this command to begin logging events to receivers for which event logging is not automatically started at server startup. You can also use this command after you have disabled event logging to one or more receivers. Syntax: 'BEGin EVentlogging [ALL|CONSOLE|ACTLOG |EVENTSERVER|FILE|FILETEXT|SNMP |TIVOLI|USEREXIT]' See: User exit Benchmark Surprisingly, many sites simply buy hardware and start using it, and then maybe wonder if it is providing its full performance potential.
What should happen is that the selection of hardware should be based upon performance specifications published by the vendor; then, once it is made operational at the customer site, the customer should conduct tests to measure and record its actual performance, under ideal conditions. That is a benchmark. Going through this process gives you a basis for accepting or rejecting the new facilities and, if you accept them, you have a basis for later comparing daily performance to know when problems or capacity issues are occurring. .BFS File name extension created by the server for FILE type scratch volumes which contain client data. Ref: Admin Guide, Defining and Updating FILE Device Classes See also: FILE Billing products Chargeback/TSM, an optional plugin to Servergraph/TSM (www.servergraph.com). Bindery A database that consists of three system files for a NetWare 3.11 server. The files contain user IDs and user restrictions. The Bindery is the first thing that ADSM backs up during an Incremental Backup. ADSM issues a Close to the Bindery, followed by anOpen (about 2 seconds later). This causes the Bindery to be written to disk, so that it can be backed up. Binding The process of associating an object with a management class name, and hence a set of rules. See "Files, binding to management class" Bit Vector Database concept for efficiently storing sparse data. Database records usually consist of multiple fields. In some db applications, only a few of the fields may have data: if you simply allocate space for all possible fields in database records, you will end up with a lot of empty space inflating your db. To save space you can instead use a prefacing sequence of bits in each database record which, left to right, correspond to the data fields in the db record, and in the db record you allocate space only for the data fields which contain data for this record. If the bit's value is zero, it means that the field had no data and does not participate in this record. If the bit's value is one, it means that the field does participate in the record and its value can be found in the db record, in the position relative to the other "one" values. Example: A university database is defined with records consisting of four fields: Person name, College, Campus address, Campus phone number. But not all students or staff members reside on campus, so allocating space for the last two fields would be wasteful. In the case of staff member John Doe, the last three fields are unnecessary, and so his database record would have a bit vector value of 1000, meaning that only his name appears in the database record. Bitfile Internal terminology denoting an Aggregate. Sometimes seen like "0.29131728", which is notation specifying an OBJECT_ID HIGH portion (0) and an OBJECT_ID LOW portion (29131728). (OBJECT_ID appears in the Archives and Backups database tables.) Note that in the BACKUPS table, the OBJECT_ID is just the low portion. See also: OBJECT_ID Bkup Backup file type, in Query CONtent report. Other types: Arch, SpMg Blksize See: Block size used for removable media Block size used for removable media *SM sets the block size of all its (tape, optical disc) blksize tape/optical devices internally. Setting it in smit has no effect, except for tar, dd, and any other applications that do not set it themselves. ADSM uses variable blocking on all tapes, ie. blocksize is 0. Generally however, for 3590 it will attempt to write out a full 256K block, which is the largest allowed blocksize with variable blocking. 
Some blocks, eg. the last block in a series, will be shorter. AIX: use 'lsattr -E -l rmt1' to verify. DLT: ADSMv3 sets blksize to 256KB. Ref: IBM site Technote 1167281 BMR Bare Metal Restore. The Kernel Group has a product of that name. However, as of 2001/02 TKG has not been committing the resources required to develop the product, given the lack of SSA disk, raw volume support, and Windows 2000. URL: http://www.tkg.com/products.html See also: Bare Metal Restore BOOKS Client User Options file (dsm.opt) option for making the ADSM online publications available through the ADSM GUI's Help menu, View Books item. The option specifies the command to invoke, which in Unix would be 'dtext'. Books, online, installing Follow the instructions contained in the booklet which accompanies the Online Product Library CD-ROM. Books, online, storage location Located in /usr/ebt/adsm/ More specifically: /usr/ebt/adsm/books Books, online, using If under the ADSM GUI: Click on the Help menu, View Books item. From the Unix prompt: 'dtext', which invokes the DynaText hypertext browser: /usr/bin/dtext -> /usr/ebt/bin/dtext. Books component product name "adsmbook.obj" As in 'lslpp -l adsmbook.obj'. BOT A Beginning Of Tape tape mark. See also: EOT BPX-Tcp/Ip The OpenEdition sockets API is used by the Tivoli Storage Manager for MVS 3.7 when the server is running under OS/390 R5 or greater. Therefore, "BPX-Tcp/Ip" is displayed when the server is using the OpenEdition sockets API (callable service). "BPX" are the first three characters of the names of the API functions that are being used by the server. Braces See: {}; File space, explicit specification BRMS AS/400 (iSeries) Backup Recovery and Media Services, a fully automated backup, recovery, and media management strategy used with OS/400 on the iSeries server. The iSeries TSM client is referred to as the BRMS Application Client to TSM. The BRMS Application Client function is based on a unique implementation of the TSM Application Programming Interface (API) and does not provide functions typically available with TSM Backup/Archive clients. The solution is integrated into BRMS and has a native iSeries look and feel. There are no TSM command line or GUI interfaces. The BRMS Application client is not a Tivoli Backup/Archive client nor a Tivoli Data Protection Client. You can use BRMS to save low-volume user data on distributed iSeries systems to any Tivoli Storage Manager (TSM) server. You can do this by using a BRMS component called the BRMS Application Client, which is provided with the base BRMS product. The BRMS Application Client has the look and feel of BRMS and iSeries. It is not a TSM Backup or Archive client. There is little difference in the way BRMS saves objects to TSM servers and the way it saves objects to media. A TSM server is just another device that BRMS uses for your save and restore operations. BRMS backups can span volumes. There is reportedly a well-known throughput bottleneck with BRMS. (600Kb/s is actually quite a respectable figure for BRMS.) Ref: In IBM webspace you can search for "TSM frequently asked questions" and "TSM tips and techniques" which talk of BRMS in relation to TSM. BU Seldom used abbreviation for backup. Buffer pool statistics, reset 'RESet BUFPool' BUFFPoolsize You mean: BUFPoolsize BUFPoolsize Definition in the server options file. Specifies the size of the database buffer pool in memory, in KBytes (e.g. 8192 = 8192 KB = 8 MB). A larger buffer pool can keep more database pages in the memory cache and lessen I/O to the database.
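As a sketch of the tuning cycle (the 32768 KB figure is purely illustrative): do 'Query DB Format=Detailed' to inspect the Cache Hit Pct.; enlarge the pool via 'SETOPT BUFPoolsize 32768' (or edit dsmserv.opt and restart the server); then 'RESet BUFPool' and re-check the hit percentage after a representative workload has run.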
As the ADSM (3.1) Performance Tuning Guide advised: While increasing BUFPoolsize, care must be taken not to cause paging in the virtual memory system. Monitor system memory usage to check for any increased paging after the BUFPoolsize change. (Use the 'RESet BUFPool' command to reset the statistics.) Note that a TSM server, like servers of all kinds, benefits from the host system having abundant real memory. Skimping is counter-productive. The minimum value is 256 KB; the maximum value is limited only by available virtual memory. Evaluate performance by looking at 'Query DB F=D' output Cache values. A "Cache Hit Pct." of 98% is a reasonable target. Default: 512 (KB) To change the value, either directly edit the server options file and restart the server, or use SETOPT BUFPoolsize and perform a RESet BUFPool. You can have the server tune the value itself via the SELFTUNEBUFpoolsize option. Ref: Installing the Server See also: SETOPT BUFPoolsize; LOGPoolsize; RESet BUFPool; SELFTUNEBUFpoolsize BUFPoolsize server option, query 'Query OPTion' Bulk Eject category 3494 Library Manager category code FF11 for a tape volume to be deposited in the High-Capacity Output Facility. After the volume has been so deposited its volser is deleted from the inventory. bus_domination Attribute for tape drives on a SCSI bus. Should be set "Yes" only if the drive is the only device on the bus. buserver BackUp Server: AFS program to manage info in its database, serving recording and query operations. See also "bakserver" of DFS. Busy file See: Changed buta (AFS) (Back Up To ADSM) is an ADSM API application which replaces the AFS butc. The "buta" programs are the ADSM agent programs that work with the native AFS volume backup system and send the data to ADSM. (The AFS buta and DFS buta are two similar but independent programs.) The buta tools only backup/restore at the volume level, so to get a single file you have to restore the volume to another location and then grovel for the file. This is why ADSM's AFS facilities are preferred. The "buta" backup style provides AFS disaster recovery. All of the necessary data is stored to restore AFS partitions to an AFS server, in the event of loss of a disk or server. It does not allow AFS users to backup and restore AFS data, per the ADSM backup model. All backup and restore operations require operator intervention. ADSM management classes do not control file retention and expiration for the AFS files data. Locking: The AFS volume is locked in the buta backup, but you should be backing up clone volumes, not the actuals. There is a paper published in the Decorum 97 Proceedings (from Transarc) describing the buta approach. As of AFS 3.6, butc itself supports backups to TSM, via XBSA (q.v.), meaning that buta will no longer be necessary. License: Its name is "Open Systems Environment", as per /usr/lpp/adsm/bin/README. The file backup client is installable from the adsm.afs.client installation file, and the DFS fileset backup agent is installable from adsm.butaafs.client. Executables: /usr/afs/buta/. See publication "AFS/DFS Backup Clients", SH26-4048 and http://www.storage.ibm.com/software/ adsm/adafsdfs.htm . There's a white paper available at: http://www.storage.ibm.com/software/ adsm/adwhdfs.htm Compare buta with "dsm.afs". See also: bak; XBSA buta (DFS) (Back Up To ADSM) is an ADSM API application which replaces the AFS butc. The "buta" programs are the ADSM agent programs that work with the native AFS fileset backup system and send the data to ADSM. 
(The AFS buta and DFS buta are two similar but independent programs.) The buta tools only backup/restore at the fileset level, so to get a single file you have to restore the fileset to another location and then grovel for the file. This is why ADSM's AFS facilities are preferred. Each dumped fileset (incremental or full) is sent to the ADSM server as a file whose name is the same as that of the fileset. The fileset dump files associated with a dump are stored within a single file space on the ADSM server, and the name of the file space is the dump-id string. The "buta" backup style provides DFS disaster recovery. All of the necessary data is stored to restore DFS aggregates to an DFS server, in the event of loss of a disk or server. It does not allow DFS users to backup and restore DFS data, per the ADSM backup model. All backup and restore operations require operator intervention. ADSM management classes do not control file retention and expiration for the DFS files data. Locking: The DFS fileset is locked in the buta backup, but you should be backing up clone filesets, not the actuals. License: Its name is "Open Systems Environment", as per /usr/lpp/adsm/bin/README. The file backup client is installable from the adsm.dfs.client installation file, and the DFS fileset backup agent is installable from adsm.butadfs.client. Executables: in /var/dce/dfs/buta/ . See publication "AFS/DFS Backup Clients", SH26-4048 and http://www.storage.ibm.com/software/ adsm/adafsdfs.htm . There's a white paper available at: http://www.storage.ibm.com/software/ adsm/adwhdfs.htm Compare buta with "dsm.dfs". See also: bak butc (AFS) Back Up Tape Coordinator: AFS volume dumps and restores are performed through this program, which reads and writes an attached tape device and then interacts with the buserver to record them. Butc is replaced by buta to instead perform the backups to ADSM. As of AFS 3.6, butc itself supports backups to TSM through XBSA (q.v.), meaning that buta will no longer be necessary. See also: bak butc (DFS) Back Up Tape Coordinator: DFS fileset dumps and restores are performed through this program, which reads and writes an attached tape device and then interacts with the buserver to record them. Butc is replaced by buta to instead perform the backups to ADSM. See also: bak bydate You mean -INCRBYDate (q.v.). C: vs C:\* specification C: refers to the entire drive, while C:\* refers to all files in the root of C: (and subdirectories as well if -SUBDIR=YES is specified). A C:\* backup will not cause the Registry System Objects to be backed up, whereas a C: backup will. Cache (storage pool) When files are migrated from disk storage pools, duplicate copies of the files may remain in disk storage ("cached") as long as TSM can afford the space, thus making for faster retrieval. As such, this is *not* a write-through cache: the caching only begins once the storage pool HIghmig value is exceeded. ADSM will delete the cached disk files only when space is needed. This is why the Pct Util value in a 'Query Volume' or 'Query STGpool' report can look much higher than its defined "High Mig%" threshold value (Pct Util will always hover around 99% with Cache activated). Define HIghmig lower to assure the disk-stored files also being on tape, but at the expense of more tape action. When caching is in effect, the best way to get a sense of "real" storage pool utilization is via 'Query OCCupancy'. 
Note that the storage pool LOwmig value is effectively overridden to 0 when CAChe is in effect, because once migration starts, TSM wants to assure that everything is cached. You might as well define LOwmig as 0 to avoid confusion in this situation. Performance: Requires additional database space and updating thereof. Can also result in disk fragmentation due to lingering files. Is best used for the disks which may be part of Archive and HSM storage pools, because of the likelihood of retrievals; but avoid use with disks leading a backup storage pool hierarchy, because such disks serve as buffers and so caching would be a waste of overhead. With caching, the storage pool Pct Migr value does not include cached data. See also the description of message ANR0534W. CAChe Disk stgpool parameter to say whether or not caching is in effect. Note that if you had operated with CAChe=Yes and then turn it off, turning it off doesn't clear the cached files from the diskpool - you need to also do one of the following: - Fill the diskpool to 100%, which will cause the cached versions to be released to make room for the new files; or - Migrate down to 0, then do MOVe Data commands on all the disk volumes, which will free the cached images. Cache Hit Pct. Element of 'Query DB F=D' report, reflecting server database performance. (Also revealed by 'SHow BUFStats'.) The value should be up around 98%. (You should periodically do 'RESet BUFPool' to reset the statistics counts to assure valid values, particularly if the "Total Buffer Requests" from Query DB is negative (counter overflow).) If the Cache Hit Pct. value is significantly less, then the server is being substantially slowed in having to perform database disk I/O to service lookup requests, which will be most noticeable in degrading backups being performed by multiple clients simultaneously. Your ability to realize a high value in this cache is affected by the same factors as any other cache: the more new entries in the cache - as from lots of client backups - the less likely it may be that any of those resident in the cache may serve a future reference, and so the lookup has to go all the way back to the disk-based database, meaning a "cache miss". It's all probability, and the inability to predict the future. Increase BUFPoolsize in dsmserv.opt. Note: You can have a high Cache Hit Pct. and yet have performance still suffer if you skimp on real memory in your server system, because all modern operating systems use virtual memory, and in a shortage of real memory, much of what had been in real memory will instead be out on the backing store, necessitating I/O to get it back in, which entails substantial delay. See topic "TSM Tuning Considerations" at the bottom of this document. See also: RESet BUFPool Cache Wait Pct. Element of 'Query DB F=D' report. Specifies, as a percentage, the number of requests for a database buffer pool page that was unavailable (because all database buffer pool pages are occupied). You want the number to be 0.0. If greater, increase the size of the buffer pool with the BUFPoolsize option. You can reset this value with the 'RESet BUFPool' command. Caching, turn off 'UPDate STGpool PoolName CAChe=No' If you turn caching off, there's no reason for ADSM to suddenly remove the cache images and lose the investment already made: that stuff is residual, and will go away as space is needed.
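For illustration, a sketch of actually flushing the residual cached images after turning caching off, combining the steps noted under CAChe above; the pool name, volume path, and threshold values are purely hypothetical:
     UPDate STGpool DISKPOOL CAChe=No
     UPDate STGpool DISKPOOL HIghmig=0 LOwmig=0     (force migration, to empty the pool)
     MOVe Data /adsm/stgvol1                        (repeat for each disk volume, to free the cached images)
     UPDate STGpool DISKPOOL HIghmig=90 LOwmig=70   (restore your normal thresholds)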
CAD See: Client Acceptor Daemon Calibration Sensor 3494 robotic tape library sensor: In addition to the bar code reader, the 3494 accessor contains another, more primitive vision system, based upon infrared rather than laser: it is the Calibration Sensor, located in the top right side of the picker. This sensor is used during Teach, bouncing its light off the white, rectangular reflective pads (called Fiducials) which are stuck onto various surfaces inside the 3494. This gives the robot its first actual sensing of where things are inside. CANcel EXPIration TSM server command to cancel an expiration process if there is one currently running. This does NOT require the process ID to be specified, and so this command can be scheduled using the server administrative command scheduling utility to help manage expiration processing and the time it consumes. TSM will record the point where it stopped, in the TSM database, which will be the point from which it resumes when the next EXPIre Inventory is run. As such, this may be preferable to CANcel PRocess. Msgs: ANR0813I when stopped by CANcel PRocess See also: Expiration, stop CANcel PRocess TSM server command to cancel a background process. Syntax: 'CANcel PRocess Process_Number' Notes: Processes waiting on resources won't cancel until they can get that resource - at which point they will go away. For example, a Backup Stgpool process which is having trouble reading or writing a tape, and is consumed with retrying the I/O, cannot be immediately cancelled. When a process is canceled, it often has to wait for lock requests to clear prior to going away: SHOW LOCKS may be used to inspect. CANcel REQuest *SM server command to cancel pending mount requests. Syntax: 'CANcel REQuest [requestnum|ALl] [PERManent]' where PERManent causes the volume status to be marked Unavailable, which prevents further mounts of that tape. CANcel RESTore ADSMv3 server command to cancel a Restartable Restore operation. Syntax: 'CANcel RESTore Session_Number|ALl' See also: dsmc CANcel Restore; Query RESTore CANcel SEssion To cancel an administrative or client session. Syntax: 'CANcel SEssion [SessionNum|ALl]' A client conducting a dsm session will get an alert box saying "Stopped by user", though it was actually the server which stopped it. An administrative session which is canceled gets regenerated... adsm> cancel se 4706 ANS5658E TCP/IP failure. ANS5102I Return code -50. ANS5787E Communication timeout. Reissue the command. ANS5100I Session established... ANS5102I Return code -50. SELECT command sessions are a problem: depending on the complexity of the query it is quite possible for the server to hang, and Tivoli has stated that the Cancel may not be able to cancel the Select, such that halting and restarting the server is the only way out of that situation. Ref: Admin Guide, Monitoring the TSM Server, Using SQL to Query the TSM Database, Issuing SELECT Commands. Msgs: ANS4017E Candidates A file in the .SpaceMan directory of an HSM-managed file system, listing migration candidates (q.v.). The fields on each line: 1. Migration Priority number, which dsmreconcile computes based upon file size and last access. 2. Size of file, in bytes. 3. Timestamp of last file access (atime), in seconds since 1970. 4. Rest of pathname in file system. Capacity Column in 'Query FIlespace' server command output, which reflects the size of the object as it exists on the client. Note that this does *not* reflect the space occupied in ADSM.
See also: Pct Util Cartridge devtype, considerations When using a devclass with DEVType=Cartridge, 3590 devices can only read. This is to allow customers who used 3591's (3590 devices with the A01 controller) to read those tapes with a 3590 (3590 devices with the A00 controller). The 3591 device emulates a 3490, and uses the Cartridge devtype. 3590's use the 3590 devtype. You can do a Help Define Devclass, or check the readme for information on defining a 3590 devclass, but it is basically the same as Cartridge, with a DEVType=3590. The 3591 devices exist on MVS and VM only, so the compatibility mode is only valid on these platforms. On all other platforms, you can only use a 3590 with the 3590 devtype. Cartridge System Tape (CST) A designation for the base 3490 cartridge technology, which reads and writes 18 tracks on half-inch tape. Sometimes referred to as MEDIA1. Contrast with ECCST and HPCT. See also: ECCST; HPCT; Media Type CAST SQL: To alter the data representation in a query operation: CAST(Column_Name AS ___) See: TIMESTAMP Categories See: Volume Categories Category code, search for volumes 'mtlib -l /dev/lmcp0 -qC -s ____' will report only volumes having the specified category code. Category code control point Category codes are controlled at the ADSM LIBRary level. Category code of one tape in library, list Via Unix command: 'mtlib -l /dev/lmcp0 -vqV -V VolName' In TSM: 'Query LIBVolume LibName VolName' indirectly shows the Category Code in the Status value, which you can then see in numerical terms by doing 'Query LIBRary [LibName]'. Category code of one tape in library, set Via Unix command: 'mtlib -l /dev/lmcp0 -vC -V VolName -t Hexadecimal_New_Category' (Does not involve a tape mount.) No ADSM command will perform this function, nor does the 3494 control panel provide a means for doing it. By virtue of doing this outside of ADSM, you should do 'AUDit LIBRary LibName' afterward for each ADSM-defined library name affected, so that ADSM sees and registers the change. In TSM: 'UPDate LIBVolume LibName VolName STATus=[PRIvate|SCRatch]' indirectly changes the Category Code to the Status value reflected in 'Query LIBRary [LibName]'. Category Codes Ref: Redbook "IBM Magstar Tape Products Family: A Practical Guide" (SG24-4632), Appendix A Category codes of all tapes in library, list Use AIX command: 'mtlib -l /dev/lmcp0 -vqI' for fully-labeled information, or just 'mtlib -l /dev/lmcp0 -qI' for unlabeled data fields: volser, category code, volume attribute, volume class (type of tape drive; equates to device class), volume type. (or use options -vqI for verbosity, for more descriptive output) The tapes reported do not include CE tape or cleaning tapes. In TSM: 'Query LIBVolume [LibName] [VolName]' indirectly shows the Category Code in the Status value, which you can then see in numerical terms by doing 'Query LIBRary [LibName]'. Category Table (TSM) /usr/tivoli/tsm/etc/category_table Contains a list of tape library category codes, like: FF00=inserted. (unassigned, in ATL) CC= Completion Code value in I/O operations, as appears in error messages. See the back of the Messages manuals for a list of Completion Codes and suggested handling. CCW Continuous Composite WORM, as in a type of optical WORM drive that can be in the 3995 library. CD See also: DVD... CD for Backup Set See: Backup set, on CD CDRW (CD-RW) support?
Tivoli Storage Manager V5.1, V4.2 and V4.1 for Windows and Windows 2000 supports removable media devices such as Iomega JAZ, Iomega ZIP, CD-R, CD-RW, and optical devices provided a file system is supplied on the media. The devices are defined using a device class of device type REMOVEABLEFILE. (Ref: Tivoli Storage Manager web pages for device support, under "Platform Specific Notes") With CD-ROM support for Windows, administrators can also use CD-ROM media as an output device class. Using CD-ROM media as output requires other software which uses a file system on top of the CD-ROM media. ADAPTEC Direct CD software is the most common package for this application. This media allows other software to write to a CD by using a drive letter and file names. The media can be either CD-R (read) or CD-RW (read/write). (Ref: Tivoli Storage Manager for Windows Administrator's Guide) CE (C.E.) IBM Customer Engineer. CE volumes, count of in 3494 Via Unix command: 'mtlib -l /dev/lmcp0 -vqK -s fff6' Cell (tape library storage slot) For libraries containing their own supervisor (e.g., 3494), TSM does not know or care about where volumes are stored in the library, in that it merely has to ask the library to mount them as needed, so does not need to know. See: Element; HOME_ELEMENT; Library... SHow LIBINV Cell 1 See: 3494 Cell 1 Central Scheduling A function that allows an *SM administrator to schedule backup, archive, and space management operations from a central location. The operations can be scheduled on a periodic basis or on an explicit date. Shows up in server command Query STATus output as "Central Scheduler: Active". (It is not documented in the manuals what controls its Active/Inactive state) Changed Keyword at end of a line in client backup log indicating that the file changed as it was being backed up, as: Normal File--> 1,544,241,152 /SomeFile Changed Backup may be reattempted according to the CHAngingretries value. In the dsmerror.log you may see an auxiliary message for the retry: " truncated while reading in Shared Static mode." See also: CHAngingretries; Retry; SERialization CHAngingretries (-CHAngingretries=) Client System Options file (dsm.sys) option to specify how many additional times you want *SM to attempt to back up or archive a file that is "in use", as discovering during the first attempt to back it up, when the Copy Group SERialization is SHRSTatic or SHRDYnamic (but not STatic or DYnamic). Note that the option controls retries: if you specify "CHAngingretries 3", then the backup or archive operation will try a total of 4 times - the initial attempt plus the three retries. Be aware that the retry will be right after the failed attempt: *SM does not go on to all other files and then come back and retry this one. Option placement: within server stanza. Spec: CHAngingretries { 0|1|2|3|4 } Default: 4 retries. Note: It may be futile to attempt to retry, in that if the file is large it will likely be undergoing writing for a long time. Note: Does not control number of retries in presence of read errors. This option's final effect depends upon the COpygroup's SERialization "shared" setting: Static prohibits retries if the file is busy; Dynamic causes the operation to proceed on the first try; Shared Static will cause the attempt to be abandoned if the file remains busy, but Shared Dynamic will cause backup or archiving to occur on the final attempt. 
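For illustration, a minimal dsm.sys fragment limiting *SM to two additional attempts on busy files; the server stanza name is hypothetical:
     SErvername  PRODSERVER
        CHAngingretries  2
With a copy group SERialization of SHRSTatic, a file that is still busy after the initial attempt plus the two retries is skipped for that run.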
See also: Changed; Fuzzy Backup; Retry; SERialization CHAngingretries, query The 'dsmc q o' command will *not* reveal the value of this option: you have to examine the dsm.sys options file. CHAR SQL function to return a string (aligned left). Syntax: CHAR(expression[,n]) See also: LEFT() CHECKIn LIBVolume TSM server command to check a *labeled* tape into an automated tape library. (For 3494 and like libraries, the volume must be in Insert mode.) 'CHECKIn LIBVolume LibName VolName STATus=PRIvate|SCRatch|CLEaner [CHECKLabel=Yes|No|Barcode] [SWAP=No|Yes] [MOUNTWait=Nmins] [SEARCH=No|Yes|Bulk] [CLEANINGS=1..1000] [VOLList=vol1,vol2,vol3 ...] [DEVType=3590]' (Omit VolName if SEARCH=Yes. You can do CHECKLabel=Barcode only if SEARCH=Yes.) Note that this command is not relevant for LIBtype=MANUAL. Note that SEARCH=Bulk will result in message ANR8373I, which requires doing 'REPLY ' and ' >> ' (redirection). Command output, suppress Use the Client System Options file (dsm.sys) option "Quiet". See also: VERBOSE Command routing ADSMv3: Command routing allows the server that originated the command to route the command to multiple servers and then to collect the output from these servers. Format: Server1[,ServerN]: server cmd Commands, uncommitted, roll back 'rollback' COMMIT TSM server command used in a macro to commit command-induced changes to the TSM database. Syntax: COMMIT See also: Itemcommit Committing database updates The Recovery Log holds uncommitted database updates. See: CKPT; LOGPoolsize COMMMethod Server Options File operand specifying one or more communications methods which clients may use to reach the server. Should specify at least one of: HTTP (for Web admin client) IPXSPX (discontinued in TSM4) NETBIOS (discontinued in TSM4) NONE (to block external access to the server) SHAREDMEM (shared memory, within a single computer system) SNALU6.2 (APPC - discontinued in TSM4) SNMP TCPIP (the default, being TCP, not UDP) (Ref: Installing the Server, Chap. 5) COMMMethod Client System Options file (dsm.sys) option to specify the one communication method to use to reach each server. Should specify one of: 3270 (discontinued in TSM4) 400comm HTTP (for Web Admin) IPXspx NAMEdpipe NETBios PWScs SHAREdmem (shared memory, within a single computer system) SHMPORT SNAlu6.2 TCPip (is TCP, not UDP) Be sure to code it, once, on each server stanza. See also: Shared memory COMMmethod server option, query 'Query OPTion'. You will see as many "CommMethod" entries as were defined in the server options file. Common Programming Interface Communications (CPIC) A programming interface that allows program-to-program communication using SNA LU6.2. See Systems Network Architecture Logical Unit 6.2. Discontinued as of TSM 4.2. COMMOpentimeout Definition in the Server Options File. Specifies the maximum number of seconds that the ADSM server waits for a response from a client when trying to initiate a conversation. Default: 20 seconds. Ref: Installing the Server... COMMTimeout Definition in the Server Options File. Specifies the communication timeout value in seconds: how long the server waits during a database update for an expected message from a client. Default: 60 seconds. Too small a value can result in ANR0481W session termination and ANS1005E. A value of 3600 is much more realistic.
A large value is necessary to give the client time to rummage around in its file system, fill a buffer with files' data, and finally send it - especially for Incremental backups of large file systems having few updates, where the client is out of communication with the server for large amounts of time. If client compression is active, be sure to allow enough time for the client to decompress large files. Ref: Installing the Server... See also: IDLETimeout; SETOPT; Sparse files, handling of, Windows COMMTimeout server option, query 'Query OPTion' Communication method "COMMmethod" definition in the server options file. The method by which a client and server exchange information. The UNIX application client can use the TCP/IP or SNA LU6.2 method. The Windows application client can use the 3270, TCP/IP, NETBIOS, or IPX/SPX method. The OS/2 application client can use the 3270, TCP/IP, PWSCS, SNA LU6.2, NETBIOS, IPX/SPX, or Named Pipe method. The Novell NetWare application client can use the IPX/SPX, PWSCS, SNA LU6.2, or TCP/IP methods. See IPX/SPX, Named Pipe, NETBIOS, Programmable Workstation Communication Service, Systems Network Architecture Logical Unit 6.2, and Transmission Control Protocol/Internet Protocol. Communication protocol A set of defined interfaces that allows computers to communicate with each other. Communications timeout value, define "COMMTimeout" definition in the server options file. Communications Wait (CommW, commwait) "Sess State" value in 'Query SEssion' for when the server was waiting to receive expected data from the client or waiting for the communication layer to accept data to be sent to the client. An excessive value indicates a problem in the communication layer or in the client. Recorded in the 23rd field of the accounting record, and the "Pct. Comm. Wait Last Session" field of the 'Query Node Format=Detailed' server command. See also: Idle Wait; Media Wait; RecvW; Run; SendW; Start CommW See: Communications Wait commwait See: Communications Wait Competing products ARCserve; Veritas; www.redisafe.com; www.graphiumsoftware.com Compile Time (Compile Time API) Refers to a compiled application, which may employ a Run Time API (q.v.). The term "Compile Time API" may be employed with a TDP, which is a middleware application which employs both the TDP subject API (database, mail, etc.) plus the TSM API. Compress files sent from client to server? Can be defined via the COMPRESSIon option in the dsm.sys Client System Options file. Specifying "Yes" causes *SM to compress files before sending them to the *SM server. Worth doing if you have a fast client processor. COMPRESSAlways Client User Options file (dsm.opt) option to specify handling of a file which *grows* during compression. (COMPRESSIon option must be set for this option to come into play.) Default: v2: No, do not send the object if it grows during compression. v3: Yes, do send if it grows during compression. Notes: Specifying No can result in wasted processing... The TXNGroupmax and TXNBytelimit options govern transaction size, and if a file grows in compression when COMPRESSAlways=No, the whole transaction and all the files involved within it must be processed again, without compression. This will show up in the "Objects compressed by:" backup statistics number being negative (like "-29%"). Messages: ANS1310E; ANS1329S See also IBM site TechNote 1156827. Compression Refers to data compression, the primary objective being to save storage pool space, and secondarily data transfer time.
TSM compression is governed according to REGister Node settings, client option settings (COMPRESSIon), and Devclass Format. Object attributes may also specify that the data has already been compressed such that TSM will not attempt to compress it further. Drives: Either client compression or drive compression should be used, but not both, as the compression operation at the drive may actually cause the data to expand. EXCLUDE.COMPRESSION can be used to defeat compression for certain files during Archive and Backup processing. Ref: TSM Admin Guide, "Using Data Compression" See also: File size COMPression= Operand of REGister Node to control client data compression: No The client may not compress data sent to the server - regardless of client options. Each client session will show: "Data compression forced off by the server" in the headings, just under the Server Version line of the client log. Yes The client must always compress data sent to the server - regardless of client options. Each client session will show: "Data compression forced on by the server" in the headings, just under the Server Version line of the client log. Client The client may choose whether or not to compress data sent to the server, via client options. Default: COMPression=Client COMPRESSIon (client compression) Client System Options file (dsm.sys) option. Code in a server stanza. Specifying "Yes" causes *SM to compress files before sending them to the TSM server, during Backup and Archive operations, for storage as given - if the server allows the client to make a choice about compression, via "COMPRESSIon=Client" in 'REGister Node'. Conversely, the client has to uncompress the files in a restoral or retrieval. (The need for the client to decompress the data coming back from the server is implicit in the data, and thus is independent of any client option.) Worth considering if you have a fast client processor and the storage device does not do hardware compression (most tape drives do). Compression increases data communication throughput and takes less space if the destination storage pool is Disk - but less desirable if the storage pool is tape, in that the tape drive is better for doing compression, in hardware. Beware: if the file expands during compression then TSM will restart the entire transaction - which could involve resending other files, per the TXNGroupmax / TXNBytelimit values. The slower your client, the longer it takes to compress the file, and thus the longer the exposure to this possibility. Check at client by doing: 'dsmc Query Option' for ADSM or 'dsmc show options' for TSM. The dsmc summary will contain the extra line: "Compression percent reduction:", which is not present without compression. Note that during the operation the progress dots will be fewer and slower than if not using compression. With "COMPRESSIon Yes", the server COMMTimeout option becomes more important - particularly with large files - as the client takes considerable time doing decompression. How long does compression take? One way to get a sense of it is to, outside of TSM, compress a copy of a typical, large file that is involved in your backups, performing the compression with a utility like gzip. Where the client options call for both compression and encryption, compression is reportedly performed before encryption - which makes sense, as encrypted data is effectively binary data, which would either see little compression, or even exapansion. 
And, encryption means data secured by a key, so it further makes sense to prohibit any access to the data file if you do not first have the key. See also: Sparse files, handling of, Windows Compression, by tape drive Once the writing of a tape has begun with or without compression, that method will persist until the tape is full. Changing Devclass FORMAT will affect only newly used tapes. Compression, client, control methods Client compression may be controlled by several means: - Client option file spec. - Client Option Set in the server. (Do 'dsmc query options' to see what's in effect, per options file and server side Option Set.) - Mandated in the server definition of that client node. If compression is in effect by any of the above methods, it will be reflected in the statistics at the end of a Backup session ("Objects compressed by:"). Compression algorithm, client Is Ziv Lempel (LZI), the same as that used in pkzip, MVS HAC, and most likely unix as well, and yes the data will normally grow when trying to compress it for a second time, as in a client being defined with COMPRESSAlways=Yes and a compressed file being backed up. Per the 3590 Intro and Planning Guide: "Data Compression is not recommended for encrypted data. Compressing encrypted data may reduce the effective tape capacity." This would seem to say that any tough binary data, like pre-compressed data from a *SM client, would expand rather than compress, due to the expectations and limitations of the algorithm. Compression being done by client node (before it sends files to server for backup and archive)? Controlled by the COMPression parameter on the 'REGister Node' and 'UPDate Node' commands. Default: Client (it determines whether to compress files). Query from ADSM server: 'Query Node Format=Detailed'. "Yes" means that it will always compress files sent to server; "No" means that it won't. Query from client: 'dsmc Query Option' for ADSM, or 'dsmc show options' for TSM; look for "Compression". Is also seen in result from client backup and archive, in "Objects compressed by:" line at end of job. Compression being done by *SM server on 3590 tape drives? Controlled via the DEVclass "FORMAT" operand. Compression being done by tape drive? Most tape drives can perform hardware compression of data. (The 3590 can.) Find out via the AIX command: '/usr/sbin/lsattr -E -l rmt1' where "rmt1" is a sample tape drive name. TSM will set compression according to your DEVclass FORMAT=____ value. You can use SMIT to permanently change this, or do explicit: 'chdev -l rmt1 compress=yes|no'. You can also use the "compress" and "nocompress" keywords in the 'tapeutil' or 'ntutil' command to turn compression on and off for subsequent *util operations (only). Configuration file An optional file pointed to by your application that can contain the same options that are found in the client options file (for non-UNIX platforms) or in the client user options file and client system options file (for UNIX platforms). If your application points to a configuration file and values are defined for options, then the values specified in the configuration file override any value set in the client options files. Configuration Manager See: Enterprise Configuration and Policy Management Connect Agents Commercial implementations of the ADSM API to provide high-performance, integrated, online backups and restores of industry-leading databases. TSM renamed them to "Data Protection" (agents) (q.v.).
See http://www.storage.ibm.com/ software/adsm/addbase.htm Console mode See: -CONsolemode; Remote console -CONsolemode Command-line option for ADSM administrative client commands ('dsmadmc', etc.) to see all unsolicited server console output. Sometimes referred to as "remote console". Results in a display-only session (no input prompt - you cannot enter commands). And unlike the Activity Log, no date-timestamps lead each line. Start an "administrative client session" via the command: 'dsmadmc -CONsolemode'. To have Operations monitor ADSM, consider setting up a "monitor" admin ID and a shell script which would invoke something to the effect of: 'dsmadmc -ID=monitor -CONsolemode -OUTfile=/var/log/ADSMmonitor.YYYYMMDD' and thus see and log events. Note that ADSM administrator commands cannot be issued in Console Mode. See also: dsmadmc; -MOUNTmode Ref: Administrator's Reference Consumer session The session which actually performs the data backup. (To use an FTP analogy, this is the "data channel".) Sometimes called the "data thread". Contrast with: Producer session See also: RESOURceutilization Contemporary Cybernetics 8mm drives 8510 is dual density (2.2gig and 5gig). (That brand was subsumed by Exabyte: see http://www.exabyte.com/home/ products.html for models.) Content Manager CommonStore CommonStore seamlessly integrates SAP R/3 and Lotus Domino with leading IBM archive systems such as IBM Content Manager, IBM Content Manager OnDemand, or TSM. The solution supports the archiving of virtually any kind of business information, including old, inactive data, e-mail documents, scanned images, faxes, computer printed output and business files. You can offload, archive, and e-mail documents from your existing Lotus Notes databases onto long-term archive systems. You can also accomplish a fully auditable document management system with your Lotus Notes client. http://www.ibm.com/software/data/ commonstore/ CONTENTS (SQL) The *SM database table which is the entirety of all filespaces data. (As such, Select queries against this table are quite expensive.) Along with Archives and Backups tables, constitutes the bulk of the *SM database contents. Columns: VOLUME_NAME, NODE_NAME (upper case), TYPE (Bkup, Arch, SpMg), FILESPACE_NAME (/fs), FILE_NAME (/subdir/ name), AGGREGATED (n/N), FILE_SIZE, SEGMENT (n/N), CACHED (Yes/No) Whereas the Backups table records a single instance of the backed up file, the Contents table records the primary storage pool instance plus all copy storage pool instances. Note that no timestamp is available for the file objects: that info can be obtained from the Backups table. But a major problem with the Contents is the absence of anything to uniquely identify the instance of its FILE_NAME, to be able to correlate with the corresponding entry in the Backups table, as would be possible if the Contents table carried the OBJECT_ID. The best you can do is try to bracket the files by creation timestamp as compares with the volume DATE_TIME column from the Volhistory table and the LAST_WRITE_DATE from the Volumes table. See also: Query CONtent Continuation and quoting Specifying things in quotes can always get confusing... When you need to convey an object name which contains blanks, you must enclose it in quotes. Further, you must nest quotes in cases where you need to use quotes not just to convey the object to *SM, but to have an enclosing set of quotes stored along with the name. 
This is particularly true with the OBJECTS parameter of the DEFine SCHedule command for client schedules. In its case, quoted names need to have enclosing double-quotes stored with them; and you convey that composite to *SM with single quotes. Doing this correctly is simple if you just consider how the composite has to end up... Wrong: OBJECTS='"Object 1"'- '"Object 2"' Right: OBJECTS='"Object 1" '- '"Object 2"' That is, the composite must end up being stored as: "Object 1" "Object 2" for feeding to and proper processing by the client command. The Wrong form would result in: "Object 1""Object 2" mooshing, which when illustrated this way is obviously wrong. The Wrong form can result in an ANS1102E error. Ref: "Using Continuation Characters" in the Admin Ref. Continuing server command lines (continuation) Code either a hyphen (-) or backslash (\) at the end of the line and continue coding anywhere on the next line. Continuing client options (continuation) Lines in the Client System Options File and Client User Options File are not continued per se: instead, you re-code the option on successive lines. For example, the DOMain option usually entails a lot of file system names; so code a comfortable number of file system names on each line, as in: DOMain /FileSystemName1, ... DOMain /FileSystemName7, ... Count() SQL function to calculate the number of records returned by a query. Note that this differs from Sum(), which computes a sum from the contents of a column. Convenience Eject category 3494 Library Manager category code FF10 for a tape volume to be ejected via the Convenience I/O Station. After the volume has been so ejected its volser is deleted from the inventory. Convenience Input-Output Station (Convenience I/O) 3494 hardware feature which provides 10 access slots in the door for inputting cartridges to the 3494 or receiving cartridges from it. May also be used for the transient mounting of tapes for immediate processing, not to become part of the repository. The Convenience I/O Station is just a basic pass-through area, and should not be confused with the more sophisticated Automatic Cartridge Facility magazine available for the 3590 tape drive. We find that it takes some 2 minutes, 40 seconds for the robot to take 10 tapes from the I/O station and store them into cells. When cartridges have been inserted from the outside by an operator, the Operator Panel light "Input Mode" is lit. It changes to unlit as soon as the robot takes the last cartridge from the station. When cartridges have been inserted from the inside by the robot, the Operator Panel light "Output Mode" is lit. The Operator Station System Summary display shows "Convenience I/O: Volumes present" for as long as there are cartridges in the station. See also the related High Capacity Output Facility. Convenience I/O Station, count of cartridges in See: 3494, count of cartridges in Convenience I/O Station CONVert Archive TSM4.2 server command to be run once on each node to improve the efficiency of a command line or API client query of archive files and directories using the Description option, where many files may have the same description. Previously, an API client could not perform an efficient query at all and a Version 3.1 or later command line client could perform such a query only if the node had signed onto the server from a GUI at least once.
Syntax: CONVert Archive NodeName Wait=No|Yes Msgs: ANR0911I COPied COPied=ANY|Yes|No Operand of 'Query CONtent' command, to specify whether to restrict query output either to files that are backed up to a copy storage pool (Yes) or to files that are not backed up to a copy storage pool (No). Copy Group A policy object assigned to a Management Class specifying attributes which control the generation, destination, and expiration of backup versions of files and archived copies of files. It is the Copy Group which defines the destination Storage Pools to use for Backup and Archive. ADSM Copygroup names are always "STANDARD": you cannot assign names, which is conceptually pointless anyway in that there can only be one copygroup of a given type assigned to a management class. 'Query Mgm' does not reveal the Copygroups within the management class, unfortunately: you have to do 'Query COpygroup'. Note that Copy Groups are used only with Backup and Archive. HSM does not use them: instead, its Storage Pool is defined via the MGmtclass attribute "MIGDESTination". See "Archive Copy Group" and "Backup Copy Group". Copy group, Archive type, define See: DEFine COpygroup, archive type Copy group, Backup type, define See: DEFine COpygroup, backup type Copy group, Archive, query 'Query COpygroup [CopyGroupName] (defaults to Backup type copy group) Type=Archive' Copy group, Backup, query 'Query COpygroup [CopyGroupName] (defaults to Backup type copy group) [Type=Backup]' Copy group, delete 'DELete COpygroup DomainName PolicySet MgmtClass [Type=Backup|Archive]' Copy group, query 'Query COpygroup [CopyGroupName]' (defaults to Backup type copy group) COPy MGmtclass Server command to copy a management class within a policy set. (But a management class cannot be copied across policy domains or policy sets.) Syntax: 'COPy MGmtclass DomainName SetName FromClass ToClass' Then use 'UPDate MGmtclass' and other UPDate commands to tailor the copy. Note that the new name does not make it into the Active policy set until you do an ACTivate POlicyset. Copy Storage Pool A special storage pool, consisting of serial volumes (tapes) whose purpose is to provide space to have a surity backup of one or more levels in a standard Storage Pool hierarchy. The Copy Storage Pool is employed via the 'BAckup STGpool' command (q.v.). There cannot be a hierarchy of Copy Storage Pools, as can be the case with Primary Storage Pools. Be aware that making such a Copy results in that much more file information being tracked in the database...about 200 bytes for each file copy in a Copy Storage Pool, which is added to the file's existing database entry rather than create a separate entry. Copy Storage Pools are typically not collocated because it would mean a mount for every collocated node or file system, which could be a lot. Note that there is no way to readily migrate copy storage pool data, as for example when you want to move to a new tape technology and want to transparently move (rather than copy) the current data. Ref: Admin Guide topic Estimating and Monitoring Database and Recovery Log Space Requirements Copy Storage Pool, define See: DEFine STGpool (copy) Copy Storage Pool, delete node data You cannot directly delete a node's data from a copy storage pool; but you can circuitously effect it by using MOVe NODEdata to shift the node's data to separate tapes in the copy stgpool (temporarily changing the stgpool to COLlocate=Yes), and then doing DELete Volume on the newly written volumes. 
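A sketch of the circuitous node-data deletion just described, under the assumption of hypothetical node, storage pool, and volume names (the SELECT against the VOLUMEUSAGE table is one way to identify which copy pool volumes now hold only that node's data; verify the list before deleting anything):
     UPDate STGpool COPYPOOL COLlocate=Yes
     MOVe NODEdata NODEA FROMstgpool=COPYPOOL
     SELECT DISTINCT volume_name FROM volumeusage WHERE node_name='NODEA' AND stgpool_name='COPYPOOL'
     DELete Volume VOLSER1 DISCARDdata=Yes          (repeat for each such volume)
     UPDate STGpool COPYPOOL COLlocate=No           (restore the original setting)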
Copy storage pool, files not in Invoke 'Query CONtent' command with COPied=No to detect files which are not yet in a copy storage pool. Copy Storage Pool, moving data You don't: if you move the primary storage pool data to another location you should have done a 'BAckup STGpool' which will create a content-equivalent area, whereafter you can delete the volumes in the old Copy Storage Pool and then delete the old Copy Storage Pool. Note that neither the 'MOVe Data' command nor the 'MOVe NODEdata' command will move data from one Copy Storage Pool to another. Copy Storage Pool, restore files directly from Yes: if the primary storage pool is unavailable or one of its volumes is destroyed, data can be obtained directly from the copy storage pool. Ref: TSM Admin Guide chapter 8, introducing the Copy Storage Pool: ...when a client attempts to retrieve a file and the server detects an error in the file copy in the primary storage pool, the server marks the file as damaged. At the next attempt to access the file, the server obtains the file from a copy storage pool. Ref: TSM Admin Guide, chapter Protecting and Recovering Your Server, Storage Pool Protection: An Overview... "If data is lost or damaged, you can restore individual volumes or entire storage pools from the copy storage pools. TSM tries to access the file from a copy storage pool if the primary copy of the file cannot be obtained for one of the following reasons: - The primary file copy has been previously marked damaged. - The primary file is stored on a volume that is UNAVailable or DEStroyed. - The primary file is stored on an offline volume. - The primary file is located in a storage pool that is UNAVailable, and the operation is for restore, retrieve, or recall of files to a user, or export of file data." Copy Storage Pool, restore volume from 'RESTORE Volume ...' Copy Storage Pool & disaster recovery The Copy Storage Pool is a secondary recovery vehicle after the Primary Storage Pool, and so the Copy Storage Pool is rarely collocated for optimal recovery as the Primary pool often is. This makes for a big contention problem in disaster recovery, as each volume may be in demand by multiple restoral processes due to client data intermingling. A somewhat devious approach to this problem is to define the Devclass for the Copy Storage Pool with a FORMAT which disables data compression by the tape drive, thus using more tapes, and hence reducing the possibility of collision. Consider employing multiple management classes and primary storage pools with their own backup storage pools to distribute data and prevent contention at restoral time. If you have both high and low density drives in your library, use the lows for the Copy Storage Pool. Or maybe you could use a Virtual Tape Server, which implicitly stages tape data to disk. Copy Storage Pool volume damaged If a volume in a Copy Storage Pool has been damaged - but is not fully destroyed - try doing a MOVe Data first to recover what data it can, rather than just deleting the volume and doing a fresh BAckup STGpool. Why? If you did the above and then found the primary storage pool volume also bad, you would have unwittingly deleted your only copies of the data, which could have been retrieved from that partially readable copy storage pool volume. So it is most prudent to preserve as much as possible first, before proceeding to try to recreate the remainder.
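A sketch of that salvage-first handling of a damaged (but not destroyed) copy storage pool volume; the volume and pool names are hypothetical, and results will depend on how much of the volume remains readable:
     UPDate Volume VOLSER2 ACCess=READOnly     (prevent further use while salvaging)
     MOVe Data VOLSER2                         (move whatever can still be read to other volumes in the same copy pool)
     BAckup STGpool PRIMARYPOOL COPYPOOL       (recreate, in the copy pool, whatever could not be salvaged)
     DELete Volume VOLSER2 DISCARDdata=Yes     (only after the preceding steps complete successfully)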
Copy Storage Pool volume destroyed If a volume in a Copy Storage Pool has been destroyed, the only reasonable action is to make this known to ADSM by doing 'DELete Volume' and then do a fresh 'BAckup STGpool' to effectively recreate its contents on another volume. (Note that Copy Storage Pool volumes cannot be marked DEStroyed.) Copy Storage Pools current? The Auditocc SQL table allows you to quickly determine if your Copy Storage Pools have all the data in the Primary Storage Pools, by comparing: BACKUP_MB to BACKUP_COPY_MB ARCHIVE_MB to ARCHIVE_COPY_MB SPACEMG_MB to SPACEMG_COPY_MB If the COPY value is higher, it indicates that you have the same data in multiple Copy Storage Pools, as in an offsite pool. COPY_TYPE Column in VOLUMEUSAGE SQL table denoting the types of files: BACKUP, ARCHIVE, etc. Copygroup See: Copy Group COPYSTGpools TSM 5.1+ feature providing the possibility to simultaneously store a client's files into each copy storage pool specified for the primary storage pool where the clients files are written. The simultaneous write to the copy pools only takes place during backup or archive from the client. In other words, when the data enters the storage pool hierarchy. It does not take place during data migration from an HSM client nor on a LAN free backup from a Storage Agent. Naturally, if your storage pools are on tape, you will need a tape drive for the primary storage pool action and the copy storage pool action: 2 drives. Your mount point usage values must accommodate this. Maximum length of the copy pool name: 30 chars Maximum number of copy pool names: 10, separated by commas (no intervening spaces) This option is restricted to only primary storage pools using NATIVE or NONBLOCK data format. The COPYContinue parameter may also be specified to further govern operation. Note: The function provided by COPYSTGpools is not intended to replace the BACKUP STGPOOL command. If you use the COPYSTGpools parameter, continue to use BACKUP STGPOOL to ensure that the copy storage pools are complete copies of the primary storage pool. There are cases when a copy may not be created. COUNT(*) SQL statement to yield the number of rows satisfying a given condition: the number of occurrences. There should be as many elements to the left of the count specification as there are specified after the GROUP BY, else you will encounter a logical specification error. Example: SELECT OWNER,COUNT(*) AS "Number of files" FROM ARCHIVES GROUP BY OWNER SELECT NODE_NAME,OWNER,COUNT(*) AS "Number of files" FROM ARCHIVES GROUP BY NODE_NAME,OWNER See also: AVG; MAX; MIN; SUM COUrier DRM media state for volumes containing valid data and which are in the hands of a courier, going offsite. Their next state should be VAULT. See also: COURIERRetrieve; MOuntable; NOTMOuntable; VAult; VAULTRetrieve COURIERRetrieve DRM media state for volumes empty of data, which are being retrieved by a courier. Their next state should be ONSITERetrieve. See also: COUrier; MOuntable; NOTMOuntable; VAult; VAULTRetrieve CPIC Common Programming Interface Communications. .cpp Name suffix seen in some messages. Refers to a C++ programming language source module. CRC Cyclic Redundancy Checking. Available as of TSM 5.1: provides the option of specifying whether a cyclic redundancy check (CRC) is performed during a client session with the server, or for storage pools. The server validates the data by using a cyclic redundancy check which can help identify data corruption. 
The CRC values are validated when AUDit Volume is performed and during restore/retrieve processing, but not during other types of data movement (e.g., migration, reclamation, BAckup STGpool, MOVe Data). It is important to realize that the CRC values are stored when the data first enters TSM, via Backup or Archive, to be stored in a storage pool which has CRCdata specified. The CRC info is thereby stored with the data and is associated with it for the life of that data in the TSM server, and will move with the data even if the data is moved to a storage pool where CRC recording is not in effect. Likewise, if data was not originally stored with CRC, it will not attain CRC if moved into a CRCed storage pool. Activated: VALIdateprotocol of DEFine SERver; CRCData operand of DEFine STGpool; REGister Node VALIdateprotocol operand; Verified: "Validate Protocol" value in Query SERver; "Validate Data?" value in Query STGpool Ref: IBM site TechNote 1143615 Cristie Bare Machine Recovery IBM-sponsored complementary product for TSM: A complete system recovery solution that allows a machine complete recovery from normal TSM backups. http://www.ibm.com/software/tivoli/products/storage-mgr/cristie-bmr.html Cross-client restoral See: Restore across clients Cross-node restoral See: Restore across clients CSQryPending Verb type as seen in ANR0444W message. Reflects client-server query for pending scheduled tasks. CST See: Cartridge System Tape See also: ECCST; HPCT; Media Type CST-2 Designation for 3490E (q.v.). Ctime and backups The "inode change time" value (ctime) reflects when some administrative action was performed on a file, as in chown, chgrp, and like operations. When ADSM Backup sees that the ctime value has changed, it will back up the file again. This can be problematic for HSM-managed files, in that such backup requires copying from tape to tape, and there may be too few drives available during the height of nightly backups, which could cause the backup to fail then. So try to avoid mass chgrp and like operations on HSM-managed files. CURRENT_DATE SQL: Should be the current date, like "2001-09-01". But in ADSM 3.1.2.50, the month number was one more than it should be. Examples: SELECT CURRENT_DATE FROM LOG SELECT * FROM ACTLOG WHERE DATE(DATE_TIME)=CURRENT_DATE See also: Set SQLDATETIMEformat CURRENT_TIME SQL: The current time, in HH:MM:SS format. See also: Set SQLDATETIMEformat CURRENT_TIMESTAMP SQL: The current date and time, like YYYY-MM-DD HH:MM:SS or YYYYMMDDHHMMSS. See also: Set SQLDATETIMEformat CURRENT_USER SQL: Your administrator userid, in upper case. D2D Colloquialism for Disk-to-Disk, as in a disk backup scheme where the back store is disk rather than tape. D2D backup Really an ordinary backup, where the TSM server primary storage pool is of random access devtype DISK rather than serial access FILE or one of the various tape drive types. See also: DISK D2T Colloquialism for Disk-to-Tape, as in a disk backup scheme where the back store is tape - the traditional backup medium. Damaged files These are files in which the server found errors when a user attempted to restore, retrieve, or recall the file; or when an 'AUDit Volume' is run, with resulting Activity Log message like: "ANR2314I Audit volume process ended for volume 000185; 1 files inspected, 0 damaged files deleted, 1 damaged files marked as damaged." TSM knows when there is a copy of the file in the Backup Storage Pool, from which you may recover the file via 'RESTORE Volume', if not 'RESTORE STGpool'.
If the client attempts to retrieve a damaged file, the TSM server knows that the file may instead be obtained from the copy stgpool and so goes there. The marking of a file as Damaged will not cause the next client backup to again back up the file, given that the supposed damage may simply be a dirty tape drive. Doing an AUDit Volume Fix=Yes on a primary storage pool volume may cause the file to be deleted therefrom, and the next backup to store a fresh copy of the file into that storage pool. Msgs: ANR0548W See also: Destroyed Damaged files, list from server 'Query CONtent VolName ... DAmaged=Yes' (Interestingly, there is no "Damaged" column available to customers in the Contents table in the TSM SQL database.) DAT Digital Audio Tape, a 4mm format which, like 8mm, has been exploited for data backup use. It is a relatively fragile medium, intended more for convenience than continuous use. Note that *SM Devclass refers to this device type as "4MM" rather than "DAT". A DDS cartridge should be retired after 2000 passes, or 100 full backups. A DDS drive should be cleaned every 24 hours of use, with a DDS cleaning cartridge. Head clogging is relatively common. Recording formats: DDS2 and DDS3 (Digital Data Storage). DDS2 - for DDS2 format without compression DDS2C - for DDS2 with hardware compression DDS3 - for DDS3 format without compression DDS3C - for DDS3 format with hardware compression Data access control mode One of four execution modes provided by the 'dsmmode' command. Execution modes allow you to change the space management related behavior of commands that run under dsmmode. The data access control mode controls whether a command can access a migrated file, sees a migrated file as zero-length, or receives an input/output error if it attempts to access a migrated file. See also execution mode. Data channel In a client Backup session, the part of the session which actually performs the data backup. Contrast with: Producer session See: Consumer session Data mover A named device that accepts a request from TSM to transfer data and can be used to perform outboard copy operations. As used with a Network Attached Storage (NAS) file server. Related: REGISTER NODE TYPE=NAS Data ONTAP Microkernel operating system in NetApp systems. Data Protection Agents Tivoli name for the Connect Agents that were part of ADSM. More common name: TDP (Tivoli Data Protection). The TDPs are specialized programs based upon the TSM API to back up a specialized object, such as a commercial database, like Oracle. As such, the TDPs typically also employ an application API so as to mingle within an active database, for example. You can download the TDP software from the TSM web site, but you additionally need a license and license file for the software to work. See also: TDP Data thread In a client Backup session, the part of the session which actually performs the data backup. Contrast with: Producer session See: Consumer session Data transfer time Statistic in a Backup report: the total time TSM requires to transfer data across the network. Transfer statistics may not match the file statistics if the operation was retried due to a communications failure or session loss. The transfer statistics display the bytes attempted to be transferred across all command attempts. Beware that if this value is too small (as when sending a small amount of data) then the resulting Network Data Transfer Rate will be skewed, reporting a higher number than the theoretical maximum.
Look instead to the Elapsed time, to compute sustained throughput. Ref: Backup/Archive Client manual, "Displaying Backup Processing Status". Database The TSM Database is a proprietary database, governing all server operations and containing a catalog of all stored file system objects. All data storage operations effectively go through the database. The TSM Database contains: - All the administrative definitions and client passwords; - The Activity Log; - The catalog of all the file system objects stored in storage pools on behalf of the clients; - The names of storage pool volumes; - In a No Query Restore, the list of files to participate in the restoral; - Digital signatures as used in subfile backups. Named in dsmserv.dsk, as used when the server starts. (See "dsmserv.dsk".) Customers may perform database queries via the SELECT command (q.v.) and via the ODBC interface. The TSM database is dedicated to the purposes of TSM operation. It is not a general purpose database for arbitrary use, and there is no provided means for adding or thereafter updating arbitrary data. Why a proprietary db, and not something like DB2? Well, in the early days of ADSM, DB2's platform support was limited, so this product-specific, universal database was developed. It is also the case that this db is optimized for storage management operations in terms of schema and locking. But the problem with the old ADSM db is that is is very limited in features, and so a DB2 approach is being re-examined. See also: Database, space taken for files; DEFine SPACETrigger; ODBC Database, back up Perform via ADSM server command 'BAckup DB' (q.v.). To back up to a 3590 tape in the 3494, choose a tape which is not already defined to a storage pool. Note that there is no query command to later directly reveal which tape a database backup was written to: you have to do 'Query VOLHistory Type=DBBackup'. Database, back up unconventionally An unorthodox approach for supporting point-in-time restorals of the ADSM database that came to mind would be to employ standard *SM database mirroring and at an appointed time do a Vary Off of the database volume(s), which can then be image-copied to tape, or even be left as-is, with a replacement disk area put into place (Vary On) rotationally. In this way you would never have to do a Backup DB again. Database, back up to a scratch 3590 Perform like the following example: tape in the 3494 'BAckup DB DEVclass=OURLIBR.DEVC_3590 Type=Full' Database, back up to a specific 3590 Perform like the following example: tape in the 3494 'BAckup DB DEVclass=OURLIBR.DEVC_3590 Type=Full VOLumenames=000049 Scratch=No' Database, "compress" See: dsmserv UNLOADDB (TSM 3.7) Database, content and compression The TSM Server database has a b-tree organization with internal references to index nodes and siblings. The database grows sequentially from the beginning to end, and pages that are deleted internally are re-used later when new information is added. The only utility that can compress the database so that "gaps" of deleted pages are not present is the database dump/load utility. After extensive database deletions, due to expiration processing or filespace/volume delete processing, pages in the midst of the database space may become free, but pages closer to the beginning or end of the database still allocated. To reduce the size of your database, sufficient free pages must exist at the end of the linear database space that is allocated over your database volumes. 
A database dump followed by a load will remove free pages from the beginning of the database space to minimize free space fragmentation and may allow the database size to be reduced. Database, convert second primary 'REDuce DB Nmegabytes' volume to volume copy (mirror) 'DELete DBVolume 2ndVolName' 'DEFine DBCopy 1stVolName 2ndVolName' Database, create 'dsmfmt -db /adsm/DB_Name Num_MB' where the final number is the desired size for the database, in megabytes, and is best defined in 4MB units, in that 1 MB more (the LVM Fixed Area, as seen with SHow LVMFA) will be added for overhead if a multiple of 4MB, else more overhead will be added. For example: to allocate a database of 1GB, code "1024": ADSM will make it 1025. Database, defragment See: dsmserv UNLOADDB (TSM 3.7) Database, defragment? You can gauge how much your TSM database is fragmented by doing Query DB and compare the Pct Util against the Maximum Reduction: a "compacted" database with a modest utilization will allow a large reduction, but a "fragmented" one will be much less reducible. Database, delete table entry See: Backup files, delete; DELRECORD; File, selectively delete from *SM storage Database, designed for integrity The design of the database updating for ADSM uses 2-phase commit, allowing recovery from hardware and power failures with a consistent database. The ADSM Database is composed of 2 types of files, the DB and the LOG, which should be located on separate volumes. Updates to the DB are grouped into transactions (a set of updates). A 2-phase commit scheme works the following way, for the discussion assume we modify DB pages 22, 23: 1) start transaction 2) read 22 from DB and write to LOG 3) update 22' in DB and write 22' to log 4) same as 2), 3) for page 23 5) commit 6) free LOG space Database, empty If you just formatted the database and want to start fresh with ADSM, you need to access ADSM from its console, via SERVER_CONSOLE mode (q.v.). From there you can register administrators, etc., and get started. Database, enlarge You can extend the space which may be used within database "volumes" (actually, files) by using the 'EXTend DB' command. If your existing files are full, you *cannot* extend the files themselves: they are fixed in size. Instead, you have to add a volume (file), as follows: - Create and format the physical file by doing this from AIX: 'dsmfmt -db /adsm/dbext1 100' which will create a 101 MB file, with 1 MB added for overhead. - Define the volume (file) to ADSM: 'DEFine DBVolume /adsm/dbext1 The space will now show up in 'Query DBVolume' and 'Query DB', but will not yet be available for use. - Make the space available: 'EXTend DB 100' Note that doing this may automatically trigger a database backup, with message ANR4552I, depending. Database, extend usable space 'EXTend DB N_Megabytes' The extension is a physical operation, so shell "filesize" limit could disrupt the operation. Note that doing this may automatically trigger a database backup, with message ANR4552I, depending. Database, maximum size Per APAR IC15376, the ADSM database should not exceed 500 GB. Per the TSM 5.1 Admin Guide: 530 GB. Ref: Server Admin Guide, topic Increasing the Size of the Database or Recovery Log topic, in Notes. See: SHow LVMFA, which reveals that the max is actually 531.2 GB. (See the reported "Maximum possible DB LP Table size".) 
See also: Volume, maximum size Database, mirror See: MIRRORRead DB Database, mirror, create Define a volume copy via: 'DEFine DBCopy Db_VolName Copy_VolName' Then you can do an 'EXTend DB N_Megabytes' (which will automatically kick off a full database backup). Database, mirror, delete 'DELete DBVolume Db_VolName' (It will be almost instantaneous) Message: ANR2243I Database, number of filespace objects See: Objects in database Database, query 'Query DB [Format=Detailed]' Database, rebuild from storage pool tapes? No: in a disaster situation, the ADSM server database *cannot* be rebuilt from the data on the storage pool tapes, because the tape files have meaning only per the database contents. Database, reduce by duress Sometimes you have to minimize the size of your database in order to relocate it or the like, but can't Reduce DB sufficiently as it sits. If so, try: - Prune all but the most recent Activity Log entries. - Delete any abandoned or useless filespaces to make room. (Q FI F=D will help you find those which have not seen a backup in many a day, but watch out for those that are just Archive type.) - Delete antique Libvol entries. - If still not enough space, an approach you could possibly use would be to Export and delete any dormant node data, to Import after you have moved the db, to bring that data back. Database, reduce space utilized You can end up with a lot of empty space in your database volumes. If you need to reclaim, you can employ the technique of successively adding a volume to the database and then deleting the oldest volume, until all the original volumes have been treated. This will consolidate the data, and can be done while *SM is up. Note that free space within the database is a good thing, for record expansion. Database, remove volume 'DELete DBVolume Db_VolName' That starts a process to migrate data from the volume being deleted to the remaining volumes. You can monitor the progress of that migration by doing 'q dbv f=d'. Database, reorganize See: dsmserv UNLOADDB (TSM 3.7) Database, space taken per client node This is difficult to determine (and no one really cares, anyway), but here's an approach: The Occupancy info provides the number of filespace objects, by type, in primary and copy storage pools. The Admin Guide topic "Estimating and Monitoring Database and Recovery Log Space Requirements" provides numbers for space utilized. The product of the two would yield an approximate number. (A sample SELECT appears below, after the next entry.) Database, space taken for files From Admin Guide chapter Managing the Database and Recovery Log, topic Estimating and Monitoring Database and Recovery Log Space Requirements: - Each version of a file that ADSM stores requires about 400 to 600 bytes of database space. (This is an approximation which anticipates average usage. Consider that for Archive files, the Description itself can consume up to 255 chars, or contribute less if not used.) - Each cached or copy storage pool copy of a file requires about 100 to 200 bytes of database space. - Overhead could increase the required space up to an additional 25%. These are worst-case estimations: the aggregation of small files will substantially reduce database requirements. Note that space in the database is used from the bottom, up. Ref: Admin Guide: Estimating and Monitoring Database and Recovery Log Space Requirements.
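To illustrate the two entries above, here is a rough, hedged sketch of per-node database consumption, assuming the standard OCCUPANCY table columns (NODE_NAME, STGPOOL_NAME, NUM_FILES) available at contemporary server levels. The SELECT totals the cataloged objects per node and storage pool; applying roughly 400-600 bytes to each primary pool object and 100-200 bytes to each copy storage pool copy, per the figures above, yields only a ballpark estimate:
  SELECT NODE_NAME, STGPOOL_NAME, -
    SUM(NUM_FILES) AS "Stored objects" -
    FROM OCCUPANCY -
    GROUP BY NODE_NAME, STGPOOL_NAME
For example, a node showing 1,000,000 objects in a primary pool would account for very roughly 500 MB of database space, before the up-to-25% overhead noted above.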
Database, verify and fix errors See: 'DSMSERV AUDITDB' Database allocation on a disk For optimal performance and minimal seek times: - Use the center of a disk for TSM space. This means that the disk arm is never more than half a disk away from the spot it needs to reach to service TSM. - You could then allocate one biggish space straddling the center of the disk; but if you instead make it two spaces which touch at the center of the disk, you gain benefit from TSM's practice of creating one thread per TSM volume, so this way you can have two and thus some parallelism. Database Backup To capture a backup copy of the ADSM database on serial media, via the 'BAckup DB' command. Database backups are not portable across platforms - they were not designed to be so - and include a lot of information that is platform specific: use Export/Import to migrate across platforms. By using the ADSMv3 Virtual Volumes capability, the output may be stored on another ADSM server (electronic vaulting). See also: dsmserv RESTORE DB Database backup, latest SELECT DATE_TIME AS - "DATE TIME ",TYPE, - MAX(BACKUP_SERIES),VOLUME_NAME FROM - VOLHISTORY WHERE TYPE='BACKUPFULL' OR - TYPE='BACKUPINCR' Database backup, query volumes 'Query VOLHistory Type=DBBackup'. The timestamp displayed is when the database backup started, rather than finished. Another method: 'Query DRMedia DBBackup=Yes COPYstgpool=NONE' Note that using Query DRMedia affords you the ability to very selectively retrieve info, and send it to a file, even from a server script. Database backup, delete all 'DELete VOLHistory TODate=TODAY TOTime=NOW Type=DBBackup' (Note that TSM will not allow you to delete your last database backup, for safety reasons. You can circumvent this, and free a "trapped" tape, by doing a placebo db backup to devclass type File.) Database backup in progress? Do 'Query DB Format=Detailed' and look at "Backup in Progress?". Database backup trigger, define See: DEFine DBBackuptrigger Database backup trigger, query 'Query DBBackuptrigger [Format=Detailed]' Database backup volume Do 'Query VOLHistory Type=DBBackup', if the ADSM server is up, or 'Query OPTions' and look for "VolumeHistory". If ADSM is down, you can find that information in the file specified on the "VOLUMEHistory" definition in the server options file (dsmserv.opt). See "DSMSERV DISPlay DBBackupvolumes" for displaying information about specific volumes when the volume history file is unavailable. See "DSMSERV RESTORE DB Preview=Yes" for displaying a list of the volumes needed to restore the database to its most current state. Database backup volume, pruning If you do not have DRM: Use 'DELete VOLHistory TODate=SomeDate TOTime=SomeTime Type=DBBackup' to manage the number of database backups to keep. If you have DRM: 'Set DRMDBBackupexpiredays __' Database backup volumes, identifying Seek "BACKUPFULL" or "BACKUPINCR" in the current volume history backup file - a handy way to find them, without having to go into ADSM. Or perform server query: select volume_name from volhistory - where (upper(type)='BACKUPFULL' or - upper(type)='BACKUPINCR') Database backup volumes, identifying Unfortunately, when a 'DELete historical VOLHistory' is performed the volsers of the deleted volumes are not noted. But you can get them two other ways: 1. Have an operating system job capture the volsers of the BACKUPFULL, BACKUPINCR volumes contained in the volume history backup file (named in the server VOLUMEHistory option) before and after the db backup, then compare. 2. 
Do 'Query ACtlog BEGINDate=-N MSGno=1361' to pick up the historical volsers of the db backup volumes at backup completion to check against those no longer in the volume history. Database backups (Oracle, etc.) Done with TSM via the Tivoli Data Protection (TDP) products. See: TDP See also: Adsmpipe Database buffer pool size, define "BUFPoolsize" definition in the server options file. Database buffer pool statistics, reset 'RESet BUFPool' Database change statistics since last 'Query DB Format=Detailed' backup Database consumption factors - All the administrative definitions are here; elminate what is no longer needed. - The Activity Log is contained in the database: control amount retained via 'Set ACTlogretention N_Days'. The Activity Log also logs administrator commands, Events, client session summary statistics, etc., which you may want to limit. - The database is at the mercy of client nodes or their filespaces being abandoned, and client file systems and disks being renamed such that obsolete filespaces consume space. - Volume history entries consume some space: eliminate what's obsolete via 'DELete VOLHistory'. - More than anything, the number of files cataloged in the database consume the most space, and your Copy Group retention policies govern the amount kept. Nodes which have a sudden growth in file system files will inflate the db via Backup. See: "Many small files" problem - Restartable Restores consume space in that the server is maintaining state information in the database (the SQL RESTORE table). Generally control via server option RESTOREINTERVAL, and reclaim space from specific restartable restores via the server command CANCEL RESTORE. Also, during such a restore the server will need extra database space to sort filenames in its goal to minimize tape mounts during the restoral, and so there will be that surge in usage. - Complex SELECT operations will require extra database space to work the operation. - When you Archive a file, the directory containing it is also archived. When the -DEscription="..." option is used, to render the archived file unique, it also causes the archived directory to be rendered unique, and so you end up with an unexpectedly large number of directories in the *SM database, even though they are all effectively duplicates in terms of path. - The size of the Aggregate in Small Files Aggregation is also a factor: the more small files in an aggregate, the lower the overhead in database cataloging. As the 3.1 Technical Guide puts it, "The database entries for a logical file within an aggregate are less than entries for a single physical file." See: Aggregate - Make sure that clients are not running Selective backups or Archives on their file systems (i.e., full backups) routinely instead of Incremental backups, as that will rapidly inflate the database. Likewise, be very careful of coding MODE=ABSolute in your Copy Group definitions. - Talk to client administrators about excluding useless files from backup, like temp directories and web browser cache files. - Make sure that 'EXPIre Inventory' is being run regularly - and that it gets to run to completion. Note that API-based clients, such as the TDP series and HSM, require their own, separate expiration handling: failing to do that will result in data endlessly piling up in the storage pools and database. 
- Not using the DIRMc option can result in directories being needlessly retained after their files have expired, in that the default is for directories to bind to the management class with the longest retention period (RETOnly). - Realize that long-lived data that was stored in the server without aggregation will be output from reclamation likewise unaggregated, thus using more database space than if it were aggregated. (See: Reclamation) - With the Lotus Notes Agent, *SM is cataloging every document in the Notes database (.NSF file). - Beware the debris left around from the use of DEFine CLIENTAction (q.v.). - Windows System Objects are large and consist of thousands of files. - Wholesale changes of ACLs (Access Control Lists) in a file system may cause all the files to be backed up afresh. - Daylight Savings Time transitions can cause defective TSM software to back up every file. - Use of DISK devclass volumes can use more db space. (See Admin Guide table "Comparing Random Access and Sequential Access Disk Devices".) In that the common cause of db growth is file deluge from a client node, simple ways to inspect are: produce a summary of recent *SM accounting records; harvest session-end ANE* records from the Activity Log; and to do a Query Content with a negative count value on recently written storage pool tapes. (Ideally, you should be running accounting record summaries on a regular basis as a part of system management.) Database file It is named within file: /usr/lpp/adsmserv/bin/dsmserv.dsk (See "dsmserv.dsk".) Database file name (location) Is defined within file: /usr/lpp/adsmserv/bin/dsmserv.dsk (See "dsmserv.dsk".) The name gets into that file via 'DEFine DBVolume' (not by dsmfmt). ADSM seems to store the database file name in the ODM, in that if you restart the server with the name strings within dsmserv.dsk changed, it will still look for the old file names. Database file name, determine 'Query DBVolume [Format=Detailed]' Database filling indication Activity log will contain message ANR0362W when utilization exceeds 80%. Database fragmentation, gauge Try the following to report: SELECT CAST((100 - ( CAST(MAX_REDUCTION_MB AS FLOAT) * 256 ) / (CAST(USABLE_PAGES AS FLOAT) - CAST(USED_PAGES AS FLOAT) ) * 100) AS DECIMAL(4,2)) AS PERCENT_FRAG FROM DB Database full indication ANR0131E diagnosticid: Server DB space exhausted. Database growth See: Database consumption factors Database location See "Database file name" Database log pages, mode for reading, "MIRRORRead DB" definition in the define server options file. Database log pages, mode for writing, "MIRRORWrite DB" definition in the define server options file. Database max utilization stats, reset 'RESet DBMaxutilization' Resets the Max. Pct Util number, which is seen in a 'Query DB', to be the same as the current Pct Util value. Database page size 'Query DB Format=Detailed', "Page Size (bytes):" Currently: 4096 Database performance - Locate the database on disks which are separate from other operating system services, and choose fast disks and connection methods (like Ultra SCSI). - Spread over multiple physical volumes (disks) rather than consolidating on a single large volume: TSM gives a process thread to each volume, so performance can improve through parallelism. And, of course, you always benefit by having more disk arms to access data. - Avoid RAID striping, as this will slow performance. (Striping is for distributing I/O across multiple disks. 
This slows down db operations because striping involves a relatively costly set-up overhead to get multiple disk working together to handle the streaming type writing of a lot of data. DB operations constitute many operations involving small amounts of data, and thus the overhead of striping is detrimental.) - Do 'Query DB F=D' and look at the Cache Hit Pct. The value should be up around 98%. If less, consider boosting the server BUFPoolsize option. - Assure that the server system has plenty of real memory so as to avoid paging in serving database needs. See also: Server performance Database robustness The *SM database is private to the product. Unfortunately, it is not a robust database, and as long as it remains proprietary it will likely be the product's Achilles heel. Running multiple, simultaneous, intense database-updating operations (Delete Filespace, Delete Volume) has historically caused problems, including database deadlocks, server crashes, and even database damage. AVOID DOING SO!! Database size issues See: Database consumption factors Database space utilization issues So your database seems bloated. Is there something you can do? The ADSM database will inevitably grow with the number of files being backed up and the number of backup versions retained and their retention periods. Beyond the usual, the following are pertinent to database space utilization: - Make sure you are running expiration regularly. - The Activity Log is in the database. Examine your 'Set ACTlogretention' value and look for runaway errors that may have consumed much space. - Look for abandoned File Spaces, the result of PC users renaming their disks or file systems and then doing backups under the new name. - Volume History information tends to be kept forever: you need to periodically run 'DELete VOLHistory'. And with that command you should also be deleting old DBBackup volumes to reclaim tapes. - Using verbose descriptions for Archive files will eat space. (Each can be up to 255 chars.) - Consider coercing client systems to exclude rather useless files from backups, such as temp files and web browser cache files. Database space required for HSM files Figure 143 bytes + filename length. Database Space Trigger ADSM V3.1.2 feature which allows setting a trigger (%) and when reached, will dynamically create a new volume, define it to the database and extend the db. Database volume (file) Each database volume (file) contains info about all the other db and log files. See also: dsmserv.dsk Database volume, add 'DEFine DBVolume Vol_Ser' Database volume, delete 'DELete DBVolume Vol_Ser' Database volume, query 'Query DBVolume [VolName] [Format=Detailed]' Database volume, vary back on 'VARy ONline VolName' after message ANR0202W, ANR0203W, ANR0204W, ANR0205W. Always look into the cause before attempting to bring the possibly defective volume back. Database volume usage, verify If your *SM db volumes are implemented as OS files (rather than rlv's) you can readily inspect *SM's usage of them by looking at the file timestamps, as the time of last read and write will be thereby recorded. Databases, backing up Is performed via ADSM Connect Agents and TSM Data Protection (agents). For supported list, see the Clients software list (URL available at the bottom of this document). For others you'll have to seek another source. 
General note: Backing up active databases using simple incremental backup, from outside the database, is problematic because part of the database is on disk and part is in memory, and perhaps elsewhere (e.g., recovery log). Unlike a sequential file, which is updated either by appending to it or by replacing it, a database gets updated in random locations inside of it - often "behind" the backup utility, which is reading the database as a sequential file. Furthermore, many databases consist of multiple, interrelated files, and so it is impossible for an external backup utility to capture a consistent image of the data. Thus, it's advisable to back up databases using an API-based utility which participates in the database environment to back it up from the inside, and thus get a consistent and restorable image. Alternately, some database applications can themselves make a backup copy of the database, which can then be backed up via TSM incremental backup. Ref: redbook Using ADSM to Back Up Databases (SG24-4335) DATE SQL: The month-day-year portion of the TIMESTAMP value, of form MM/DD/YYYY. Sample usage: SELECT NODE_NAME, PLATFORM_NAME, - DATE(LASTACC_TIME) FROM NODES SELECT DATE(DATE_TIME) FROM VOLHISTORY - WHERE TYPE='BACKUPFULL' See also: TIMESTAMP Date, per server ADSM server command 'SHow TIME' (q.v.). See also: ACCept Date DATE_TIME SQL database column, as in VOLHISTORY, being a timestamp (date and time), like: 2001-07-30 09:30:07.000000 See also: CURRENT_DATE; DATE DATEformat, client option, query Do ADSM 'dsmc Query Option' or TSM 'dsmc show options' and look at the "Date Format" value. A value of 0 indicates that your opsys dictates the format. See also: TIMEformat DATEformat, client option, set Definition in the client user options file. Specifies the format by which dates are displayed by the *SM client. NOTE: Not usable with AIX or Solaris, in that they use NLS locale settings (see /usr/lib/nls/loc in AIX, and /usr/lib/localedef/src in Solaris). Do 'locale' in AIX to see its settings. "1" - format is MM/DD/YYYY (default) "2" - format is DD-MM-YYYY "3" - format is YYYY-MM-DD "4" - format is DD.MM.YYYY "5" - format is YYYY.MM.DD Default: 1 Query: ADSM 'dsmc Query Options' or TSM 'dsmc show options' and look at the "Date Format" value. A value of 0 indicates that your opsys dictates the format. Advisory: Use 4-digit year values. Various problems have been encountered when using 2-digit year values, such as Retrieve not finding files which were Archived using a RETV=NOLIMIT (so date past 12/31/99). DATEformat, server option, query 'Query OPTion' and look at the "DateFormat" value. DATEformat, server option, set Definition in the server options file. Specifies the format by which dates are displayed by the ADSM server (except for 'Query ACtlog' output, which is always in MM/DD/YY format). "1" - format is MM/DD/YYYY (default) "2" - format is DD-MM-YYYY "3" - format is YYYY-MM-DD "4" - format is DD.MM.YYYY "5" - format is YYYY.MM.DD Default: 1 Ref: Installing the Server... DAY(timestamp) SQL function to return the day of the month from a timestamp. See also: HOUR(); MINUTE(); SECOND() Day of week in Select See: DAYNAME Daylight Savings Time You should not have to do anything in TSM during a Daylight Savings Time transition: that should be handled by your computer operating system, and all applications running in the system will pick up the adjusted time. In a z/OS environment, see IBM site article swg21153685.
See also: ACCept Date; NTFS and Daylight Savings Time DAYNAME(timestamp) SQL function to return the day of the week from a timestamp. Example: SELECT ... FROM ... WHERE DAYNAME(current_date)='Sunday' See also: HOUR(); MINUTE(); SECOND() DAYS SQL "labeled duration": a specific unit of time as expressed by a number (which can be the result of an expression) followed by one of the seven duration keywords: YEARS, MONTHS, DAYS, HOURS, MINUTES, SECONDS, or MICROSECONDS (q.v.). The number specified is converted as if it were assigned to a DECIMAL(15,0) number. A labeled duration can only be used as an operand of an arithmetic operator in which the other operand is a value of data type DATE, TIME, or TIMESTAMP. Thus, the expression HIREDATE + 2 MONTHS + 14 DAYS is valid, whereas the expression HIREDATE + (2 MONTHS + 14 DAYS) is not. In both of these expressions, the labeled durations are 2 MONTHS and 14 DAYS. DAYS(timestamp) SQL function to get the number of days from a timestamp (since January 1, Year 1). DB2 backups Is not a TDP, but like them it utilizes the TSM client API to store the data on the TSM server. It is best to invoke the client while sitting within the client directory. Instead of, or in addition to that, you may want to set the following environment variables: Basic client: DSM_CONFIG=: DSM_DIR=: DSM_LOG=: API client: DSMI_CONFIG=: DSMI_DIR=: DSMI_LOG=: Each backup is its own filespace, whose name is that of the DB2 database plus a timestamp. See redbook: "Using ADSM to Back Up Databases", SG24-4335-03 and "Managing VLDB Using DB2 UDB EEE", SG24-5105-00. DB2 backups, delete You have to manually inactivate the backups using the db2adutl delete command. Sample tasks: 'db2adutl query full' will list your db2 backups; 'db2adutl delete full older than N days' will delete. DB2 backups, query Like: db2adutl query full (You cannot use 'dsmc query backup' because the backups were stored via the TSM client API.) DB2 log handling The DB2 database backup does not pick up the DB2 logs: use the user exit program provided by DB2 to archive (not backup) the inactive log files. DB2 restore command Like: db2 restore db db0107 use tsm .DBB File name extension created by the server for FILE type scratch volumes which contain TSM database backup data. Ref: Admin Guide, Defining and Updating FILE Device Classes DBBACKUP In 'Query VOLHistory', volume type for sequential access storage volumes used for database backups. Also under 'Volume Type' in /var/adsmserv/volumehistory.backup . DBBackup tapes vanishing with DRM Watch out that you don't delete database volume history with the same number of days as the DRM "Set DRMDBBackupexpiredays" command: just when ADSM DRM is changing the status of the db tapes to "vault retrieve" you are also deleting them from the volume history and therefore never see them as "vault retrieve". DBBackuptrigger The Database Backup Trigger: to define when TSM is to automatically run a full or incremental backup of the TSM database, based upon the Recovery Log filling, when running in Rollforward mode. (As opposed to getting message ANR0314W in Normal mode.) At triggering time, TSM also automatically deletes any unnecessary recovery log records - which may take valuable time. Msgs: ANR4553I See: DEFine DBBackuptrigger; Set LOGMode DBDUMP In 'Query VOLHistory', Volume Type to say that volume was used for an online dump of the database (pre ADSM V2R1). Also under 'Volume Type' in /var/adsmserv/volumehistory.backup . .dbf See: Oracle database factoids DBPAGESHADOW TSM 4.1 server option.
Provides a means of mirroring the last batch of information written to the server database. If enabled, the server will mirror the pages to the file specified by the DBPAGESHADOWFILE option. On restart, the server will use the contents of this file to validate the information in the server database and if needed take corrective action if the information in the actual server database volumes is not correct as verified by the information in the page shadow file. In this way, if an outage occurs that affects both mirrored volumes, the server can recover pages that have been partially written. See the dsmserv.opt.smp file for an explanation of the DBPAGESHADOW and DBPAGESHADOWFILE options. Note that the DBPAGESHADOWFILE description differs from what is documented in the TSM publications. This option does NOT prepend the server name to the file name: the file name used is simply the name specified on the option. DBPAGESHADOWFILE TSM 4.1 server option. Specifies the name of the database page shadowing file. See: DBPAGESHADOW DBSnapshot See: BAckup DB; DELete VOLHistory; "Out of band"; Query VOLHistory DBSnapshot, delete This is performed with the command 'DELete VOLHistory ... Type=DBSnapshot'. However, TSM insists that the latest snapshot database backup cannot be deleted! A way to get around this would be to perform another DBSnapshot, this time directed at a File type of output devclass. This would allow you to delete the tape volume from TSM and re-use it, and you could then delete the file at the operating system level. This presumes that you have enough disk space for the file. You might be able to get away with making the file /dev/null if you are on Unix. D/CAS Circa 1990 Data CASsette tape technology using a specially notched Philips audio cassette cartridge and 1/8" tape, full width. Variations: D/CAS-43 50 MB Tape vendors: Maxell 184720 D/CAS-86 100 MB 600 feet length, 16,000 ftpi Tape vendors: Maxell CS-600XD DCR Design Change Request DDS* Digital Data Storage: the data recording format for 4mm (DAT) tapes, as in DDS1, DDS2, DDS3. See: DAT DDS2 tapes Can be read by DDS2 and DDS3 drives. DEACTIVATE_DATE *SM SQL: Column in the BACKUPS table, being the date and time that the object was deactivated; that is, when it went from being an Active file to Inactive. Example: 2000-08-16 02:53:27.000000 The value is naturally null for Active files (those whose STATE is ACTIVE_VERSION). It may also be null for Inactive files (INACTIVE_VERSION): this is the case for old files marked for expiration based on number of versions (rather than retention periods), so marked during client Backup processing (Incremental or Selective). Note that such marked files can be seen in a server Select, but cannot be seen from client queries. During expiration, if the TSM server encounters an inactive version without a deactivation date, then TSM expires this object. Looked at another way, if client backup processing does not occur, version-oriented expiration cannot occur. (A sample SELECT using this column appears a few entries below.) See also: dsmc Query Backup Deadlocks in server? 'SHow DEADLocks' (q.v.) Msgs: ANR0390W Debugging See "CLIENT TRACING" and "SERVER TRACING" at bottom of this document. DEC SQL function to convert a string to a decimal number. Syntax: DEC(String,Precision,Scale) String Is the string to be converted Precision Is the length for the portion before the decimal point. Scale Is the length for the portion after the decimal point. DEC Alpha client Storage Solutions Specialists provides an ADSM API called ABC. See HTTP://WWW.STORSOL.COM.
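To illustrate the DEACTIVATE_DATE column described a few entries above, a hedged sample follows (NODENAME and the filespace name are placeholders to substitute; node names are stored in upper case). Always qualify a SELECT against the BACKUPS table by node, and preferably filespace, since an unqualified query can run a very long time against a large database:
  SELECT HL_NAME, LL_NAME, STATE, -
    DATE(DEACTIVATE_DATE) AS "Deactivated" -
    FROM BACKUPS WHERE NODE_NAME='NODENAME' -
    AND FILESPACE_NAME='/home' -
    AND STATE='INACTIVE_VERSION'
Inactive rows showing a null deactivation date are the version-based expiration candidates described in the entry above.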
DEFAULT The generic identifier for the default management class, as it shows up in the CLASS_NAME column in the Archives and Backups SQL tables. Note that "DEFAULT" is a reserved word: you cannot define a management class with that name. See also: CLASS_NAME; Default management class Default management class The management class *SM assigns to a storage pool file if there is no INCLUDE option in effect which explicitly assigns a management class to specified file system object names. Hard links are bound to the default management class in that they are not directories or files. Note that automatic migration occurs *only* for the default management class; for the incl-excl named management class you have to manually incite migration. Default management class, establish 'ASsign DEFMGmtclass DomainName SetName ClassName' Default management class, query 'Query POlicyset' and look in the Default Mgmt Class Name column or 'Query MGmtclass' and look in the Default Mgmt Class column DEFAULTServer Client System Options file (dsm.sys) option to specify the default server. This is a reference to the SErvername stanza which is coded later in the file: it is *not* the actual server name, which is set via SET SERVERNAME. The stanza name is restricted to 8 characters (not 64, as the manual says). HSM migration will use this value unless MIgrateserver is specified. DEFine Administrator You mean: REGister Admin DEFine ASSOCiation Server command to associate one or more client nodes with a client schedule which was established via 'DEFine SCHedule'. Syntax: 'DEFine ASSOCiation Domain_Name Schedule_Name Node_name [,...]' Note that defining a new schedule to a client does not result in it promptly "seeing" the new schedule, when SCHEDMODe PRompted is in effect: you need to restart the scheduler so that it talks to the server and gets scheduled for the new task. Related: 'DELete ASSOCiation' DEFine BACKUPSET Server command to define a client backup set that was previously generated on one server and make it available to the server running this command. The client node has the option of restoring the backup set from the server running this command rather than the one on which the backup set was generated. Any backup set generated on one server can be defined to another server as long as the servers share a common device type. The level of the server to which the backup set is being defined must be equal to or greater than the level of the server that generated the backup set. You can also use the DEFINE BACKUPSET command to redefine a backup set that was deleted on a server. Syntax: 'DEFine BACKUPSET Client_NodeName BackupSetName DEVclass=DevclassName VOLumes=VolName[,VolName...] [RETention=Ndays|NOLimit] [DESCription=____]' See also: GENerate BACKUPSET DEFine CLIENTAction TSM server command to schedule one or more clients to perform a command, once. This results in the definition of a client schedule with a name like "@1", PRIority=1, PERUnits=Onetime, and DURunits set to the number of days set by the duration period of the client action. It also does DEFine ASSOCiation to have the operation handled by the specified nodenames.
'DEFine CLIENTAction [NodeName[,Nodename]] [DOmain=DomainName] ACTion=ActionToPerform [OPTions=AssociatedOptions] [OBJects=ActionObjects] [Wait=No|Yes]' where ACTion is one of: Incremental Selective Archive REStore RETrieve IMAGEBACkup IMAGEREStore Command Macro For OBJects: Normally code within double quotes; but if you need to code quotes within quotes, enclose the whole in single quotes and the internals as double quotes. Example: DEFine CLIENTAction NODEA - ACTion=Command - OBJects='mail -s "Subject line, body empty" joe /dev/null' Where ACTion=Command, you can code OBJects with multiple operating system commands, separated by the conventional command separator for that environment. For example, in Unix, you can cause a delayed execution by coding a 'sleep' ahead of the command, as in: OBJects='sleep 20; date'. If there is any question about the invoked commands being in the Path which the scheduler process may have been started with, by all means code the commands with full path specs, which will avoid 127 return code issues. The Wait option became available in TSM 4.1. Note that a Command is run under the account under which the TSM server was started (in Unix, usually root). Timing: How soon the action is performed is at the mercy of your client SCHEDMODe spec: POlling is at the client's whim, and will result in major delay compared to PRompted, where the server initiates contact with the client (when it gets around to it - *not* necessarily immediately). When using PRompted, watch out for PRESchedulecmd and POSTSchedulecmd, which would thus get invoked every time. Housekeeping: Because of the schedule clutter left behind, you should periodically run 'DELete SCHedule Domain_Name @*', which gets rid of the temporary schedule and association. Msgs: ANR2510I, ANR2561I See also: DEFine SCHedule, client; SET CLIENTACTDuration DEFine CLIENTOpt Server command to add a client option to an option set. Syntax: DEFine CLIENTOpt OptionSetName OptionName 'OptionValue' [Force=No|Yes] [SEQnumber=number] Force will cause the server-defined option to override that in the client option file - for singular options only...not additive options like Include-Exclude and DOMain. Additive options will always be seen by the client (as long it is at least V3), and will be logically processed ahead of the client options. Code the OptionValue in single quotes to handle multi-word values, and use double-quotes within the single quotes to further contain sub-values. Example: DEFine CLIENTOpt SETNAME INCLEXCL 'Exclude "*:\...\Temporary Internet Files\...\"' SEQ=0 DEFine CLOptset Examples: DEFine cloptset ts1 desc='Test option sets' COMMIT DEFine CLIENTOpt ts1 CHAngingretries 1 seq=10 DEFine CLIENTOpt ts1 COMPRESSAlways=Yes Force=Yes SEQnumber=20 DEFine CLIENTOpt ts1 INCLEXCL "exclude /tmp/.../*" DEFine CLIENTOpt ts1 INCLEXCL "include ""*:\My Docs\...\*""" COMMIT DEFine COpygroup Server command to define a Backup or Archive copy group within a policy domain, policy set, and management class. Does not take effect until you have performed 'VALidate POlicyset' and 'ACTivate POlicyset'. 
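By way of a worked example of the above (a sketch only - OURDOMAIN, OURPOLICY, OURMC, and DISKPOOL are placeholder names to substitute with your own), a backup copy group does not take effect until the policy set containing it is validated and activated:
  'DEFine COpygroup OURDOMAIN OURPOLICY OURMC Type=Backup DESTination=DISKPOOL VERExists=2 RETExtra=30 RETOnly=60'
  'VALidate POlicyset OURDOMAIN OURPOLICY'
  'ACTivate POlicyset OURDOMAIN OURPOLICY'
Until the ACTivate, clients continue to be governed by the previously activated policy set.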
DEFine COpygroup, archive type 'DEFine COpygroup DomainName PolicySet MgmtClass Type=Archive DESTination=PoolName [RETVer=N_Days|NOLimit] [SERialization=SHRSTatic|STatic| SHRDYnamic|DYnamic]' DEFine COpygroup, backup type 'DEFine COpygroup DomainName PolicySet MgmtClass [Type=Backup] DESTination=Pool_Name [FREQuency=Ndays] [VERExists=N_Versions|NOLimit] [VERDeleted=N_Versions|NOLimit] [RETExtra=N_Days|NOLimit] [RETOnly=N_Days|NOLimit] [MODE=MODified|ABSolute] [SERialization=SHRSTatic|STatic| SHRDYnamic|DYnamic]' DEFine DBBackuptrigger Server command to define settings for the database backup trigger. Syntax: 'DEFine DBBackuptrigger DEVclass=DevclassName [LOGFullpct=N] [INCRDEVclass=DevclassName] [NUMINCremental=???]' where: LOGFullpct Specifies the Recovery Log percent fullness threshold at which an automatic backup is triggered, 1 - 99. Default: 50 (%). Choose a value which gives the backup a chance to complete before the Log fills. NUMINCremental Specifies the maximum number of Incrementals that will be performed before a Full is done. Code 0 - 32, where 0 says to only do Fulls. Default = 6. See also: DBBackuptrigger DEFine DBCopy Server command to define a volume copy (mirror) of a database volume. Syntax: 'DEFine DBCopy Db_VolName Copy_VolName' DEFine DBVolume Server command to define an additional volume for the database. Syntax: 'DEFine DBVolume Vol_Ser Formatsize=#MB Wait=No|Yes' Messages: ANR2429E DEFINE DBVolume: Maximum database capacity exceeded. Note that you benefit from having more DB volumes. See: Database performance DEFine DEVclass Server command to define a device class for storage pools, and associate it with a previously defined library, if applicable. Note that the device class DISK is pre-defined in TSM, as used in DEFine STGpool for random access devices. See also: Devclass DEFine DEVclass (3590) 'DEFine DEVclass DevclassName DEVType=3590 LIBRary=LibName [FORMAT=DRIVE|3590B|3590C| 3590E-B|3590E-C] [MOUNTRetention=Nmins] [PREFIX=ADSM|TapeVolserPrefix] [ESTCAPacity=X] [MOUNTWait=Nmins] [MOUNTLimit=DRIVES|Ndrives|0]' DEFine DEVclass (File) 'DEFine DEVclass DevclassName DEVType=FILE [MOUNTLimit=1|Ndrives|DRIVES] [MAXCAPacity=4M|maxcapacity] [DIRectory=currentdir|dirname]' Note that "3590" is a special, reserved DEVType. Specifying MOUNTLimit=DRIVES allows *SM to adapt to the number of drives actually available. (Do not use for External LIbraries (q.v.).) DEFine DOmain Server command to define a policy domain. Syntax: 'DEFine DOmain DomainName [description="___"] [backretention=NN] [archretention=NN]' Since a client node is assigned to one domain name, it makes sense for the domain name to be the same as the client node name (i.e., the host name). DEFine DRive Server command to define a drive to be used in a previously-defined library. Syntax: 'DEFine DRive LibName DriveName DEVIce=/dev/??? [ONLine=Yes|No] [CLEANFREQuency=None|Asneeded|N] [ELEMent=SCSI_Lib_Element_Addr]' where ONLine says whether a drive should be considered available to *SM. The TSM Admin Ref manual specifically advises: "Each drive is assigned to a single library." DO NOT attempt to define a physical drive to more than one library! Doing so will result in conflicts which will render drives offline. Thus, with a single library, you cannot use the same drives for multiple scratch pools, for example. To get around this: say you have both 3590J tapes and 3590Ks, but want the lesser tapes used for offsite volumes.
What you can do is use DEFine Volume to assign the 3590s to the offsite pool - which will go on to use the general scratch pool only when its assigned volumes are used up. Example: 'DEFine DRive OURLIBR OURLIBR.3590_300 DEVIce=/dev/rmt1' TSM will get the device type from the library's Devclass, which will subsequently turn up in 'Query DRive'. It is not necessary to perform an ACTivate POlicyset after the Define. In a 3494, how does TSM communicate with the Library Manager to perform a mount on a specific drive if the LM knows nothing about the opsys device spec? In a preliminary operation, TSM issues an ioctl() MTDEVICE request, after having performed an open() on the /dev/rmt_ name to obtain a file descriptor, to first obtain that Device Number from the Library Manager, and thereafter uses that physical address for subsequent mount requests. For an example, see /usr/lpp/Atape/samples/tapeutil.c . DEFine LIBRary Server command to define a Library. Syntax for 3494: 'DEFine LIBRary LibName LIBType=349x - DEVIce=/dev/lmcp0 PRIVATECATegory=Np_decimal SCRATCHCATegory=Ns_decimal' The default Private category code: 300 (= X'12C'). The default Scratch category code: 301 (= X'12D'). With 3494 libraries and 3590 tapes, the defined Scratch category code is for 3490 type tapes, and that value + 1 will be used for your 3590 tapes. Server option ENABLE3590LIBRARY must also be defined for 3590 use. In choosing category code numbers, be aware that the 'mtlib' command associated with 3494s reports category code numbers in hexadecimal: you may want to choose values which come out to nice, round numbers in hex, and code their decimal equivalents in the DEFine LIBRary. Realize also that choosing category codes is a major commitment: you can't change them in UPDate LIBRary. AUTOLabel is new in TSM 5.2, for SCSI libraries, to specify whether the server attempts to automatically label tape volumes. Requires checking in the tapes with CHECKLabel=Barcode on the CHECKIn LIBVolume command. "No" Specifies that the server does not attempt to label any volumes. "Yes" says to label only unlabeled volumes. OVERWRITE is to attempt to overwrite an existing label - only if both the existing label and the bar code label are not already defined in any server storage pool or volume history list. DO NOT attempt to define multiple libraries to simultaneously use the same drives. See comments under DEFine DRive. See also: ENABLE3590LIBRARY; Query LIBRary; SCRATCHCATegory; UPDate LIBRary DEFine LOGCopy Server command to define a volume copy (mirror) of a recovery log volume. Syntax: 'DEFine LOGCopy RecLog_VolName Mirror_Vol' DEFine LOGVolume Server command to define an additional recovery log volume. Syntax: 'DEFine LOGVolume RecLog_VolName' Messages: ANR2452E DEFine MGmtclass Server command to define a management class within a policy set. Syntax: 'DEFine MGmtclass DomainName SetName ClassName [SPACEMGTECH=AUTOmatic| SELective|NONE] [AUTOMIGNOnuse=Ndays] [MIGREQUIRESBkup=Yes|No] [MIGDESTination=poolname] [DESCription="___"]' Note that except for DESCription, all of the optional parameters are Space Management Attributes for HSM. DEFine PATH TSM server command to define a path, and thus access, from a source to a destination - a new requirement as of TSM 5.1, to support server-free backups. The source and destination must be defined before the path. Additional info: http://www.ibm.com/support/ docview.wss?uid=swg21083662 See also: DEFine DRive; Paths DEFine POlicyset Server command to define a policy set within a policy Domain. 
Syntax: 'DEFine POlicyset Domain_Name SetName [DESCription="___"]' DEFine SCHedule, administrative Server command to define an administrative schedule. Syntax: 'DEFine SCHedule SchedName Type=Administrative CMD=CommandString [ACTIVE=No|Yes] [DESCription="___"] [PRIority=5|N] [STARTDate=MM/DD/YYYY|TODAY] [STARTTime=NNN] [DURation=N] [DURunits=Minutes|Hours|Days| INDefinite] [PERiod=N] [PERUnits=Hours|Days|Weeks| Months|Years|Onetime] [DAYofweek=ANY|WEEKDay|WEEKEnd| SUnday|Monday|TUesday| Wednesday|THursday| Friday|SAturday] [EXPiration=Never|some_date]' The schedule name can be up to 30 chars. In CMD=CommandString: string length is limited to 512 chars; you cannot specify redirection (> or >>). Macros cannot be scheduled (as they reside on the client, not the server), but you can schedule (server) Scripts. DEFine SCHedule, client Server command to define a schedule which a client may use via server command 'DEFine ASSOCiation'. Syntax: 'DEFine SCHedule DomainName SchedName [DESCription="___"] [ACTion=Incremental|Selective| Archive|REStore| RETrieve|Command|Macro] [OPTions="___"] [OBJects="___"] [PRIority=N] [STARTDate=NNN] [STARTTime=HH:MM:SS|NOW] [DURation=N] [DURunits=Hours|Minutes|Days| INDefinite] [PERiod=N] [PERUnits=Days|Hours|Weeks| Months|Years|Onetime] [DAYofweek=ANY|WEEKDay|WEEKEnd| SUnday|Monday|TUesday| Wednesday|THursday| Friday|SAturday] [EXPiration=Never|some_date]' The schedule name can be up to 30 chars. Use PERUnits=Onetime to perform the schedule once. ACTion=Command allows specifying that the schedule processes a client operating system command or script whose name is specified via the OBJECTS parameter. Be careful not to specify too many objects, or use wildcards, else msg ANS1102E can result. See also "Continuation and quoting". Note that because TSM has no knowledge of the workings of the invoked command, it can only interpret rc 0 from the invoked command as success and any other value as failure, so plan accordingly. OBJects specifies the objects (file spaces or directories) for which the specified action is performed. OPTions specify options to the dsmc command, just as you would when manually invoking dsmc on that client platform, including leading hyphen as appropriate (e.g., -subdir=yes). Once the schedule is defined, you need to bind it to the client node name: see 'DEFine ASSOCiation'. Then you can start the scheduler process on the client node. See also: DEFine CLIENTAction; DURation; SET CLIENTACTDuration; SHow PENDing DEFine SCRipt ADSMv3 server command to define a Server Script. Syntax: 'DEFine SCRipt Script_Name ["Command_Line..." [Line=NNN] | File=File_Name] [DESCription=_____]' Command lines are best given in quotes, and can be up to 1200 characters long. The description length can be up to 255. The DEFine will fail if there is a syntax error in the script, such as a goto target lacking a trailing colon or target label longer than 30 chars, with msg ANR1469E. It is probably best to create and maintain scripts in files in the server system file system, as the line-oriented revision method is quite awkward. See also: Server Scripts; UPDate SCRipt DEFine SERver To define a Server for Server-to-Server Communications, or to define a Tivoli Storage Manager storage agent as if it were a server. 
Syntax: For Enterprise Configuration, Enterprise Event Logging, Command Routing, and Storage Agent: 'DEFine SERver ServerName SERVERPAssword=____ HLAddress=ip_address LLAddress=tcp_port [COMMmethod=TCPIP] [URL=url] [DESCription=____] [CROSSDEFine=No|Yes]' For Virtual Volumes: 'DEFine SERver ServerName PAssword=____ HLAddress=ip_address LLAddress=tcp_port [COMMmethod=TCPIP] [URL=____] [DELgraceperiod=NDays] [NODEName=NodeName] [DESCription=____]' See also: Query SERver; Set SERVERHladdress; Set SERVERLladdress DEFine SPACETrigger ADSMv3 server command to define settings for triggers that determine when and how the server resolves space shortages in the database and recovery log. It can then allocate more space for the database and recovery log when space utilization reaches a specified value. After allocating more space, it automatically extends the database or recovery log to make use of the new space. Note: Setting a space trigger does not mean that the percentage used in the database and recovery log will always be less than the value specified with the FULLPCT parameter. TSM checks usage when database and recovery log activity results in a commit. Deleting database volumes and reducing the database does not cause the trigger to activate. Therefore, the utilization percentage can exceed the set value before new volumes are online. Mirroring: If the server is defined with mirrored copies for the database or recovery log volumes, TSM tries to create new mirrored copies when the utilization percentage is reached. The number of mirrored copies will be the same as the maximum number of mirrors defined for any existing volumes. If sufficient disk space is not available, TSM creates a database or recovery log volume without a mirrored copy. Syntax: DEFine SPACETrigger DB|LOG Fullpct=__ [SPACEexpansion=N_Pct] [EXPansionprefix=______] [MAXimumsize=N_MB] Msgs: ANR4410I; ANR4411I; ANR4412I; ANR4414I; ANR4415I; ANR4430W; ANR7860W See also: Query SPACETrigger DEFine STGpool (copy) DEFine STGpool PoolName DevclassName POoltype=COpy [DESCription="___"] [ACCess=READWrite|READOnly| UNAVailable] [COLlocate=No|Yes|FIlespace] [REClaim=PctOfReclaimableSpace] [MAXSCRatch=N] [REUsedelay=N] PoolName can be up to 30 characters. See also: MAXSCRatch DEFine STGpool (disk) Server command to define a storage pool. Syntax for a random access storage pool: 'DEFine STGpool PoolName DISK [DESCription="___"] [ACCess=READWrite|READOnly| UNAVailable] [MAXSize=MaxFileSize] [NEXTstgpool=PoolName] [MIGDelay=Ndays] [MIGContinue=Yes|No] [HIghmig=PctVal] [LOwmig=PctVal] [CAChe=Yes|No] [MIGPRocess=N]' PoolName can be up to 30 characters. Note that MIGPRocess pertains only to disk storage pools. See also: DISK; MIGContinue DEFine STGpool (tape) Server command to define a storage pool. Syntax for a tape storage pool: 'DEFine STGpool PoolName DevclassName [DESCription="___"] [ACCess=READWrite|READOnly| UNAVailable] [MAXSize=NOLimit|MaxFileSize] [NEXTstgpool=PoolName] [MIGDelay=Ndays] [MIGContinue=Yes|No] [HIghmig=PctVal] [LOwmig=PctVal] [COLlocate=No|Yes|FIlespace] [REClaim=N] [MAXSCRatch=N] [REUsedelay=N] [OVFLOcation=______]' PoolName can be up to 30 characters. Note that once a storage pool is defined, it is thereafter stuck with the specified devclass: you cannot change it with UPDate STGpool. (You are left with doing REName STGpool, and then redefine the original name to be as you want it, whereafter you can do Move Data to transfer contents from old to new.) 
The OVFLOcation value will appear in message ANR8766I telling of the place for the ejected volume, so use capitalization and wording which makes it stand out in that context. See also: MAXSCRatch; MIGContinue DEFine Volume Server command to define a volume in a storage pool (define to a storage pool). Syntax: 'DEFine Volume PoolName VolName [ACCess=READWrite|READOnly| UNAVailable|OFfsite] [LOcation="___"]' Resulting msg: ANR2206I Note that a volume can belong to only one storage pool. A storage pool which normally uses scratch volumes may also have specific volumes defined to it: the server will use the defined volume first. (Ref: Admin Guide, "How the Server Selects Volumes with Collocation Enabled") If a 3590 tape, do 'CHECKIn' after. Defined Volume A volume which is permanently assigned to a storage pool via DEFine Volume. Contrast with Scratch Volumes, which are dynamically taken for use in storage pools, whereafter they leave the storage pool to return to Scratch state. Ref: Admin Guide, "Scratch Volumes Versus Defined Volumes". See also: Scratch Volume Degraded Operation 3494 state wherein the library is basically operational, but an auxiliary aspect of it is inoperative, such as the Convenience I/O Station. delbuta DFS: ADSM-provided command (Ksh script) to delete a fileset backup (dump) from both ADSM storage (via 'dsmadmc ... DELete FIlespace') and the DFS backup database (via 'bak deletedump'). 'delbuta {-a Age|-d Date|-i DumpID|-s} [-t Type] [-f FileName] [-n] [-p] [-h]' where you can specify removal by age, creation date, or individual Dump ID. You can further qualify by type ('f' for full backups, 'i' for incrementals, 'a' for incrementals based upon a parent full or incremental); or by a list contained within a file. Use -n to see a preview of what would be done, -p to prompt before each deletion, -h to show command usage. Where: /var/dce/dfs/buta/delbuta Ref: AFS/DFS Backup Clients manual, chapter 7. Delete ACcess See: dsmc Delete ACcess DELETE ARCHCONVERSION Process seen in the server the first time a node goes into the Archive GUI when the archive data needs to be converted, as when upgrading clients between certain (unknown) levels. The conversion operation can be very time-consuming, depending upon the amount of archive data in server storage which needs to be converted. Msgs: ANS5148W Delete ARchive See: dsmc Delete ARchive DELete ASSOCiation ADSM Server command to remove the association between one or more clients with a schedule. Syntax: 'DELete ASSOCiation Domain_Name Schedule_Name Node_name [,...]' Related: 'DEFine ASSOCiation', 'Query ASSOCiation'. DELete BACKUPSET Server command to delete a backup set prior to its natural expiration. A Backup Set's retention period is established when the set is created, and it will automatically be deleted thereafter. Syntax: 'DELete BACKUPSET Node_Name Backup_Set_Name [BEGINDate=____] [BEGINTime=____] [ENDDate=____] [ENDTime=____] [WHERERETention=N_Days|NOLimit] [WHEREDESCription=____] [Preview=No|Yes]' Note that the node name and backup set name are required parameters: you may use wildcard characters such as "* *" in those positions. And in using wildcards in these positions you may be able to get around the restriction of not being able to delete the last backupset. See also: DELete VOLHistory DELete DBVolume TSM server command to delete a database volume, which is performed asynchronously, by a process. ADSM will automatically move any data on the volume to remaining database space, thus consolidating it. 
Deletion is only logical: the physical database volume/file remains intact. The best approach is to delete volumes in the reverse order that you added them so as to minimize the possibility of data being moved more than once in the case of multiple volume deletions. The best approach to removing a DB volume is to first Reduce the database and then delete a volume. Syntax: "DELete DBVolume VolName". DELete DEVclass ADSM server command to delete a device class. Syntax: 'DELete DEVclass DevclassName' DELete DRive TSM server command to delete a drive from a library. Syntax: 'DELete DRive LibName Drive_Name' Example: 'DELete DRive OURLIBR OURLIBR.3590_300' Notes: A drive that is in use - busy - cannot be deleted (you will get error ANR8413E or the like). All paths related to a drive must be deleted before the drive itself can be deleted. Use SHOW LIBrary to verify status. Msgs: ANR8412I DELete FIlespace (from server) TSM server command to delete a client file space. The deletion of objects is immediate: no later Expire Inventory is required. The deletion of the filespace takes place file by file, and can run for days for large filespaces. Syntax: 'DELete FIlespace NodeName FilespaceName [Type=ANY|Backup| Archive|SPacemanaged] [Wait=No|Yes] [OWNer=OwnerName] [NAMETYPE=SERVER|UNIcode|FSID] [CODEType=BOTH|UNIcode| NONUNIcode]' By default, results in an asynchronous process being run in the server to effect the database deletions, which you can monitor via Query PRocess. You need to wait for this to finish before, say, doing a fresh incremental backup on this filespace name. Use Wait to make the deletion synchronous. For Windows filespaces, you may have to add NAMETYPE=UNICODE to get it to work. WARNING; DO NOT RUN MORE THAN ONE DELETE FILESPACE AT A TIME!!! Doing so could jeopardize your *SM database. See entry on "Database robustness". Also, do not run a DELete FIlespace when clients are active, as the entirety of the Delete could end up in your Recovery Log as client updates prevent the administrative updates from being committed. Note that "Type=ANY" removes only Backup and Archive copies, not HSM file copies: you have to specify "SPacemanaged" to effect the more extreme measure of deleting HSM filespaces. Note also that the deletion will be an intense database operation, which can result in commands stalling. Moreover, competing processes - especially for the same node - will likely need access to the same database blocks, and collide with the message "ANR0390W A server database deadlock situation...". For this reason is it best to run only one DELete FIlespace at one time. If interrupted: Files up to that point are gone. If a pending Restore is in effect, this operation should not work. Speed: rather time-consuming - we've seen about 50 files/second. See also: Delete Filespace (from client) Delete Filespace (from client) ADSM client command: 'dsmc Delete Filespace', which will present a selection menu of file spaces (though this requires "BACKDELete=Yes" on 'REGister Node', which is contrary to the default, so that you may need to do it from the server). Results in an *asynchronous* process being run in the server to effect the database deletions and inventory expiration: you must wait for this to finish before, say, doing a fresh incremental backup on this filespace name. Speed: rather time-consuming - we've seen about 50 files/second. If a pending Restore is in effect, this operation should not work. 
See also: DELete FIlespace (from server) Delete Filespace fails to delete it You may be intending to delete a node, and are pursuing the preliminary steps of deleting its filespaces. The Delete Filespace may seem happy, but doing a Query Filespace thereafter shows that the filespace has not gone away. This is likely a server software defect: a server level upgrade may correct it. Beyond that, you might try doing Delete Filespace from the client, selecting the filespace by relative number, and see if that makes it go away. (From the server side, 'DELete FIlespace *' may work - but you may not want all that node's filespaces deleted!) If not, do SELECT * FROM VOLUMEUSAGE WHERE NODE_NAME="__" and see if any volumes show up, where the volumes may be in a wacky state you may be able to correct; or you may be able to delete the volumes, assuming collocation by node such that no other nodes' data are on the volume, or where you can first perform a Move to separate out the node's data on that volume. Your only other choice would be an appropriate audit operation - which is dicey stuff: you should contact TSM Support. DELete LIBRary ADSM server command to delete a library. Prior to doing this, all the library's assigned drives must be deleted. WARNING!! Deleting a library causes all of its volumes to be checked out! If you unfortunately do this, you will need to use the 'mtlib' AIX command to fix the Category codes, and then use 'AUDit LIBRary' to reconcile ADSM with the library reality. DELete LOGVolume ADSM server command to delete a Recovery Log volume. ADSM will automatically start a process to move any data on the volume to remaining Recovery Log space, thus consolidating it. To delete a log volume, Query LOG needs to show a Maximum Extension value at least as large as the volume being deleted. Deletion is only logical: the physical recovery volume/file remains intact. The best approach is to delete volumes in the reverse order that you added them so as to minimize the possibility of data being moved more than once in the case of multiple volume deletions. Syntax: 'DELete LOGVolume VolName'. Delete Node You mean 'REMove Node'. DELETE OBJECT See: File, selectively delete from *SM storage; File Space, delete selected files DELete SCHedule, administrative Server command to delete an administrative schedule. Syntax: 'DELete SCHedule SchedName Type=Administrative' See also: DEFine SCHedule DELete SCHedule, client Server command to delete a client schedule. Syntax: 'DELete SCHedule DomainName SchedName [Type=Client]' See also: DEFine SCHedule DELete SCRipt Server command to delete a server script or one line from it. Syntax: 'DELete SCRipt Script_Name [Line=Line_Number]' Deleting a whole script causes the following prompt to appear: Do you wish to proceed? (Yes/No) (There is no prompt when simply deleting a line.) Deleting a line does not cause lines below it to "slide up" to take the old line number: all lines retain their prior numbers. Msgs: ANR1457I Delete selected files from ADSM storage See: Filespace, delete selected files DELete VOLHistory TSM server command to delete non-storage pool volumes, such as those used for database backups and Exports. Syntax: 'DELete VOLHistory TODate=MM/DD/YYYY|TODAY |TODAY-Ndays TOTime=HH:MM:SS|NOW |NOW+hrs:mins|NOW-hrs:mins Type=All|DBBackup [DEVclass=___] |DBSnapshot [DEV=___] |DBDump|DBRpf|EXPort |RPFile [DELETELatest=[No|Yes] |RPFSnapshot [DELETELatest=[No|Yes] |STGNew |STGReuse|STGDelete' There is no provision for deleting a single volume, sadly. 
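For example, to prune database backup volume records older than two weeks (a sketch - the retention period is illustrative):
 'DELete VOLHistory Type=DBBackup TODate=TODAY-14'
then 'Query VOLHistory Type=DBBackup' to see what remains.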
As of ADSMv3, you will get an error if you try to delete all DBBackup copies: you must keep at least 1, per APARs IX86694 and IX86661. This is also the case for DBSnapshot volumes: the latest cannot be deleted. Do not use this command to delete DBB volumes that are under the control of DRM: DRM itself handles that per Set DRMDBBackupexpiredays. (If you are paying for and using DRM, let it do what it is supposed to: meddling jeopardizes site recoverability.) Do not expect *SM to delete old DBBackup entries reflecting Incremental type 'BAckup DB' operations until the next full backup is performed. That is, the full and incrementals constitute a set, and you should not expect to be able to delete critical data within the set: the whole set must be of sufficient age that it can entirely go (msg ANR8448E). "Type=BACKUPSET" is not documented but may work, being a holdover frome version 4.1 days. Also, there was a bug in the 4.2 days that prevented some backupsets from being deleted with the DELete BACKUPSET command; you could delete them with 'DELete VOLHistory Type=BACKUPSET Volume= TODate=' Msgs: ANR2467I (reports number of volumes deleted, but not volnames) See also: Backup Series; Backup set, remove from Volhistory DELete Volume TSM server command to delete a volume from a storage pool and, optionally, the files within the volume, if the volume is not empty. Syntax: 'DELete Volume VolName [DISCARDdata=No|Yes]' Specifying DISCARDdata=Yes will cause the removal of all database information about the files that were backed up to that tape, and so the next incremental backup will take all such files afresh. (This is logical deletion: The volume is not mounted. The physical data remains on the tape, though logically inaccessible. If you have security and/or privacy concerns for such tapes that had been used by TSM and are being decommissioned from the library, consider using a utility like the tapeutil command's "erase" function to physically eradicate the data.) Note that the volume may not immediately return to the scratch pool if REUsedelay is in effect. Also, if the volume is offsite, you should recall to onsite. Multiple simultaneous: V3 experience reveals no problems running more than one data-discarding Delete Volume at a time. I've run 5 at a time without incident. Deleting a primary storage pool copy of a file also causes any copy storage pool copies to be deleted (a form of instant expiration of data, in that the primary copy constitutes the stem of the database entry). Ref: Admin Guide, "Deleting Storage Pool Volumes". Notes: No Activity Log or dsmerror.log entry will be written as a result of this action. Volumes whose Access is Unavailable cannot be deleted. If a pending Restore is in effect, this operation should not work. "ANS8001I Return code 13" indicates that the command was invoked without "DISCARDdata=Yes" and the volume still contains data. Messages: ANR1341I See also: DELete VOLHistory "deleted" In backup summary statistics, as in "Total number of objects deleted:". Refers to the number of files expired because not found (or excluded) in the backup operation. Those files will be flagged in the body of the report with "Expiring-->". Deleted files, rebind See: Inactive files, rebind Deleted from storage pool, messages ANR1341I, ANR2208I, ANR2223I DELetefiles (-DELetefiles) Client option to delete files from the client file system after Archive has stored them on the server. 
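For instance (the path is illustrative): 'dsmc Archive "/data/logs/*" -DELetefiles' archives the files and then removes them from the client file system.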
Can also be used with the restore image command and the incremental option to delete files from the restored image if they were deleted from the file space after the image was created. Note particularly the statement that the operation will not delete the file until it is stored on the server. This affects when in the sequence that the file will actually be deleted. Remember that *SM batches Archive data into Aggregates, as defined by transaction sizings (TXN* options) and so the file(s) will not be deleted until the transaction is completed. DANGER!!: If your server runs with Logmode Normal, you may lose files if the server has to be restored, because all transactions since the last server database backup will be lost! Before using DELetefiles in a site, carefully consider all factors. What about directories? The Archive operation has no capability for deleting directories, for several reasons... First, directories may be the home of objects other than the files being deleted (e.g., symbolic links, special files, unrelated files), and because in the time it takes to archive files from any given directory, new files may have been introduced into it. If you want directories deleted, you need to do so thereafter, with an operating system function. See also: Total number of objects deleted Dell firmware advisory Customers report serious quality problems with Dell firmware, as for the Dell Powervault 136T. Beware. DELRECORD Undocumented, unsupported command noted in some APARs for deleting TSM db table entries. Usage undefined. See also: Database, delete table entry Delta file As used in subfile backups. Msgs: ANS1328E Demand Migration The process HSM uses to respond to an out-of-space condition on a file system. HSM migrates files to ADSM storage until space usage drops to the low threshold set for the file system. If the high threshold and low threshold are the same, HSM attempts to migrate one file. Density See: Tape density DES See: ENCryptkey; PASSWORDDIR -DEScription="..." Used on 'dsmc Archive' or 'dsmc Query ARchive' or 'dsmc Retrieve' to specify a text string describing the archived file, which can be used to render it unique among archived files of the same name. Wildcard characters may be used. Be aware that rendering the file unique in this way also implicitly renders the path directory unique such that it will also be archived again if there isn't one of the same description already stored in the server. That is, the given description is also applied to the path directory. If you do not specify a description with the archive command, the default is to provide a tagged date, in the form "Archive Date: __________", where the date value inserted is the system date, always 10 characters long. (If your date format uses a two digit year, there will be two blank spaces at the end of the date.) Note that only the date is provides - not the time of day. Description, on an Archive file Is set via -DEscription="..." in the 'dsmc archive' operation. Note that you cannot change the archive file Description after archiving. DESTination A Copy Group attribute that specifies the storage pool to which a file is backed up, archived, or migrated. At installation, ADSM provides three storage destinations named BACKUPPOOL, ARCHIVEPOOL, and SPACEMGTPOOL. Destination for Migrated Files In output of 'dsmmigquery -M -D', an (HSM) attribute of the management class which specifies the name of the ADSM storage pool in which the file is stored when it is migrated. 
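(A hedged sketch of where this is set - the domain, policy set, class, and pool names are all made up: 'DEFine MGmtclass HSMDOMAIN STANDARD HSMMC MIGDESTination=SPACEMGTPOOL'.)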
Defined via MIGDESTination in management class. See: MIGDESTination DEStroyed Access Mode for a primary storage pool volume saying that it has been permanently damaged, and needs a 'RESTORE STGpool' or 'RESTORE Volume' (which itself will mark the volume DEStroyed, msg ANR2114I). Set: 'UPDate Volume ... ACCess=DEStroyed'. (Note that Copy Storage Pool volumes cannot be marked DEStroyed.) If there is a storage pool backup for the volume, access to files that were on the volume causes *SM to automatically obtain them instead from the copy storage pool. Note that marking volumes as "Destroyed" does not affect the status of the files on the volumes: the next Incremental Backup job will not back up those files afresh. All that the Destroyed mode does is render the volume unmountable. See: Copy Storage Pool, restore files directly from But the volume or storage pool RESTORE operation should still be performed, to repopulate the primary storage pool with the files. See also: RESTORE Volume /dev/fsm The HSM File Space Manager character special file, apparently created when HSM comes up. Should look like: crw-rw-rwT- 1 root sys 255, 0 Dec 5 12:28 /dev/fsm If need to re-create, do: 'mknod /dev/fsm c 255 0' 'chmod 1666 /dev/fsm' /dev/lb_ SCSI library supported by *SM device driver, such as the 9710. /dev/lmcp0 3494 Library Manager Control Point special device, established by configuring and making this "tape" device Available via SMIT, as part of installing the atldd (automated tape library device driver). (Specifically, 'mkdev -l lmcp0" creates the dev in AIX.) /dev/mt_ In Unix systems, tape drives that are used by *SM, but not supported by *SM device drivers. AIX usage note: When alternating use of the drive between AIX and *SM, make one available and the other unavailable, else you will have usage problems. For example, if the drive was most recently used with *SM, do: rmdev -l mt0; mkdev -l rmt0; and then the inverse when done. /dev/rmt_ Magnetic tape drive supported as a GENERICTAPE device. /dev/rmt_.smc For controlling the SCSI Medium Changer (SMC), as on 3570, 3575, 3590-B11 Automatic Cartridge Facility. /dev/rmt_.smc, creation When running 'cfgmgr -v' to define a 3590 library, the 3590's mode has to be in "RANDOM" for the rmt_.smc file to be created. /dev/rop_ Optical drives supported by ADSM. /dev/vscsiN See "vscsi". Devclass The device class for storage pools: a storage pool is assigned to a device class. The device class also allows you to specify a device type and the maximum number of tape drives that it can ask for. For random access (disk), the Devclass must be the reserved name "DISK". For tape, the Devclass is whatever you choose, via 'DEFine DEVclass'. Used in: 'DEFine DBBackuptrigger', 'DEFine STGpool', 'Query Volume' See also: Query DEVclass; SHow DEVCLass Devclass, 3590, define See "DEFine DEVclass (3590)". Devclass, rename There is no command to do this: you have to define a new devclass, reassign to it, then delete the old name. Devclass, verify all volumes in See: SHow FORMATDEVCLASS _DevClass_ DEVCLASSES SQL table for devclass definitions. Columns: DEVCLASS_NAME, ACCESS_STRATEGY (Random, Sequential), STGPOOL_COUNT, DEVTYPE, FORMAT, CAPACITY, MOUNTLIMIT, MOUNTWAIT, MOUNTRETENTION, PREFIX, LIBRARY_NAME, DIRECTORY, SERVERNAME, RETRYPERIOD, RETRYINTERVAL, LAST_UPDATE_BY, LAST_UPDATE (YYYY-MM-DD HH:MM:SS.000000) DEVCONFig Definition in the server options file, dsmserv.opt (/usr/lpp/adsmserv/bin/dsmserv.opt). 
Specifies the name of the file(s) that should receive device configuration information and thus become backups when such information is changed by the server. Use 'BAckup DEVCONFig' to force updating of the file(s). Default: none Ref: Installing the Server... See also: Device config... DEVCONFig server option, query 'Query OPTion' devconfig.out In TSM v5 and higher the first line of file must be: SET SERVERNAME ADSM Device Specified via "DEVIce=DeviceName" in 'DEFine DRive ...' device category As seen in 'mtlib -l /dev/lmcp0 -f /dev/rmt2 -qD' on a 3494. See: Category Codes Device class See: Devclass Device config file considerations During a *SM DB restore, if your libtype is set to manual in your devconfig file, check that SHARED=NO is not part of the DEFINE LIBR statement. See also: DEVCONFig Device config file, determine name 'Query OPTions', look for "Devconfig" Device config info, file(s) to "DEVCONFig" definition in the receive as backup, define server options file, dsmserv.opt (/usr/lpp/adsmserv/bin/dsmserv.opt). The files will end up containing all device configuration info that administrators set up, in ADSM command format, such as "DEFine DEVclass..." and "DEFINE LIBRARY" command lines. Device configuration, backup manually 'BAckup devconfig' causes the info to be captured in command line format in files defined on DEVCONFIG statements in the server options file, dsmserv.opt (/usr/lpp/adsmserv/bin/dsmserv.opt). Device configuration, restore Occurs as part of the process involved in the following commands (run from the AIX command line): 'dsmserv restore db' 'dsmserv loaddb' 'DSMSERV DISPlay DBBackupvolumes' Device drivers, tape drives Under Unix: Drives which are used with a name of the form "/dev/rmtX" employ tape device drivers supplied with the operating system, which in AIX are stored in /usr/lib/drivers. These are defined in SMIT under DEVICES then TAPE DRIVES. For example, IBM "high tape device" drives such as 3590 have their driver software shipped with the tape hardware. Drives used with a name of the form "/dev/mtX" employ tape device drivers supplied by ADSM itself. These are defined in SMIT under ADSM DEVICES. And their library will be /dev/lb0. DEVNOREADCHECK Undocumented VM opsys option: allows the server to ignore the RING IN/NO RING status of the input tape. DEVType Operand of 'DEFine DEVclass', for specifying device class. Recognized: FILE, 4MM, 8MM, QIC, 3590, CARTridge, OPTical. Note: Devtypes can change from one TSM version to another such that they cannot be caried across in an upgrade. The upgrade may nullify such DEVTypes. Thus, in performing an upgrade it is wise to check your DEVclasses. df of HSM file system (AIX) Performing a 'df' command on the HSM server system with the basic HSM-managed file system name will cause the return of a hdr line plus two data lines, the first being the JFS file system and the second being the FSM mounted over the JFS. However, if you enter the file system name with a slash at the end of it, you will get one data line, being just the FSM mounted over the JFS. dfmigr.c Disk file migration agent. See also: afmigr.c DFS The file backup client is installable from the adsm.dfs.client installation file, and the DFS fileset backup agent is installable from adsm.butadfs.client. You need to purchase the Open Systems Environment Support license for AFS/DFS clients. The DCE backup utilities are located in /opt/dcelocal/bin. See 'buta', 'delbuta'. DFS backup to Solaris IBM reportedly has no plans to support this type of client. 
DFSBackupmntpnt Client System Options file option, valid only when you use dsmdfs and dsmcdfs. (dsmc will emit error message ANS4900S and ignore the option.) Specifies whether you want ADSM to see a DFS mount point as a mount point (Yes, which is the default) or as a directory (No): Yes ADSM considers a DFS mount point to be just that: ADSM will back up only the mount point info, and not enter the directory. This is the safer of the two options, but limits what will be done. No ADSM regards a DFS mount point as a directory: ADSM will enter it and (blindly) back up all that it finds there. Note that this can be dangerous, in that use of the 'fts crmount' command is open to all users, who through intent or ignorance can mount parts or all of the local file system or a remote one, or even create "loops". Default: Yes By default, when doing an incremental backup on any DFS mount point or DFS virtual mount point, TSM does not traverse the mount points: it will only backup the mount point metadata. To backup mount a point as a regular directory and traverse the mount point, set DFSBackupmntpnt No before doing the backup. If you want to backup a mount point as mount point and backup the data below the mount point, first backup the parent directory of the mount point and then backup mount point separately as a virtual mount point. See also: AFSBackupmntpnt DFSInclexcl Client System Options file option, valid only when you use dsmdfs and dsmcdfs. (dsmc will emit error message ANS4900S and ignore the option.) Specifies the path and file name of your DFS include-exclude options file. DHCP database, back up Do not attempt to back this up directly: it can be made to produce a backup copy of its database periodically (system32/dhcp/backup), and then that copy can be backed up with TSM incremental backup. You also can make a copy of the DHCP registry setup info in a REG file for backup. The key is located in HKEY_LOCAL_MACHINE\System\ CurrentControlSet\Services\DHCPServer\ Configuration. Ref: http://support.microsoft.com/ support/kb/articles/Q130/6/42.asp Diamond icon in v3 GUI Restore A four-sided diamond icon to the left of a file in the v3 GUI shown in a Restore selection tree display indicates that the file is Inactive. Shown to the left of a directory, indicates that the directory contains inactive files. DIFFESTIMATE Option in the TDPSQL.CFG file. Prior to performing a database backup, the TDP for SQL client must 'reserve' the required space in the storage pool. It *should* get the estimate right for full backups and transaction log backups because the space used in the database and transaction logs is available from SQL Server. But: For differential backups, there is no way of knowing how much data is to be backed up until the backup is complete. The TDP for SQL client therefore uses the percentage specified in the the DIFFESTIMATE option to calculate a figure based on the total space used. E.g., for a database of 50GB with a DIFFESTIMATE value of 20, TDP will reserve 10Gb (20% of 50GB). A "Server out of data storage space" error will arise if the actual backup exceeds the calculated estimate. If the storage pool is not big enough to accomodate the larger backup, of if other backup data prevents further space being reserved, this error will occur. 
Setting DIFFESTIMATE to 100 will ensure that there is always sufficient space available, but will prevent space in your primary storage pool being utilised by other clients and may force the backup to occur to the next storage pool in the hierarchy unnecessarily. It is worth setting DIFFESTIMATE to the maximum proportion of the data you can envisage ever being backed up during a differential backup. Directories, empty, and Selective Backup Selective Backup does not back up empty directories. Directories, empty, restoring See: Restore empty directories Directories and Archive ADSM Archive does not save directory structure: the only ADSM facility which does is Incremental Backup (Selective Backup does not, either). See also: DIRMc Directories and Backup A normal Incremental Backup will *not* back up directories whose timestamp has changed since the last backup. This is because it would be pointless to do so: *SM already has the information it needs about the directory itself in order to recreate it, and restoral of a directory reconstructs it, with contemporary datestamps. An -INCRBYDate Backup, in contrast, *will* back up pre-existing directories whose timestamps it sees as newer, because it knows nothing about them having been previously backed up, by virtue of simple date comparison. See also: Directory performance; DIRMc Directories and binding to management class The reason that directories are bound to the management class with the longest retention is that there is no guarantee that the files within the directory will all be bound to the same management class. A simple example: suppose I have a directory called C:\ANDY with two files in it, like this: C:\ ANDY\ PRODFILE.TXT TESTFILE.TXT and that the include/exclude list specifies two different management classes: INCLUDE C:\ANDY\PRODFILE.TXT MC90DAYS INCLUDE C:\ANDY\TESTFILE.TXT MC15DAYS So which management class should C:\ANDY be bound to? The question becomes even more interesting if a new file is introduced to the C:\ANDY directory and an include statement binds it to, say, the MC180DAYS management class. Binding directories to the management class with the longest retention (RETOnly) is how TSM can assure that the directory is restorable no matter which management class the files under that directory are bound to. If all management classes have the same retention, TSM will choose the one first in alphabetical order. (APAR IY11805 talked about first choosing by most recently updated mgmtclass definition, but that appears false.) Ordinary directory entries - those with only basic info - will be stored in the database, but entries with more info may end up in a storage pool. The way around this is to use DIRMc to bind the directories to a management class that resides on disk. Alternatively one could create the disk management class such that it has the longest retention, and thus negate the need to code DIRMc. One "gotcha": be careful when creating new management classes or updating existing management classes. You will always want to ensure that the *disk* management class has the longest retention. Directories and Restore Whereas ordinary restore operations reinstate the original file permissions, directory permissions are only restored when using the SUbdir=Y option of 'dsmc' or the Restore Subdirectory Branch function of dsm GUI. Directories may be in the *SM db When a file system is restored, you may see *SM rebuild the directory structure long before any tapes are mounted. 
It can do this when the directory structure is basic such that it can be stored as a database object (much like many empty files can be). In such cases, there is no storage pool space associated with directories, and no tape use. With more complex directory structures (Unix directories with Access Control Lists, Windows directories, and the like), the extended information associated with directories exceeds the basic database attributes data structure, and so the directory information needs to be stored in a storage pool. That is where the DIRMc option comes in: it allows you to control the management class that will get associated with the directory information that needs to get stored in a storage pool. See also: DIRMc Directories missing in restore Perhaps you backed them up with a DIRMc which resolved to a shorter retention than the files in the directories. (Later ADSM software should prevent this.) This is why in the absence of DIRMc, directories are bound to the copygroup with the longest retention period - to prevent such loss. Directories visible in restore, but files not shown Simplest cause: In a GUI display, you need to click on the folder/directory to open it, to see what's inside. This could otherwise be a permissions thing: you are attempting to access files that were backed up by someone other than you, and which do not belong to you. Directory--> Leading identifier on a line out of incremental Backup, reflecting the backup of a directory entry. Note that with basic directory structures, as on Unix systems, *SM is able to store directory info in the server database itself because the info involves only name and basic attributes: the contents of a directory are the files themselves, which are handled separately. Thus, directory backups usually do not have to be in a storage pool. Note that the number of bytes reflected in this report line is the size of the directory as it is in the file system. Because *SM is storing just name and attributes, it is the actual amount that *SM stores rather than the file system number that will contribute to the "Total number of bytes transferred:" value in the summary statistics from an Archive or Backup operation. Note that the number will probably be less than the sum reflected by including the numbers shown on "Directory-->" lines of the report, in that *SM stores only the name and attributes of directories. See also: Rebinding--> Directory performance Conventional directories are simply flat, sequential files which contain a list of file names which cross-reference to the physical data on the disk. As primitive data structures, directories impede performance, as lookups are serial, take time, and involve lockouts as the directory may be updated. As everyone finds, on multiple operating systems, the more files you have in a directory, the worse the performance for anything in your operating system going after files in that directory. The gross rule of thumb is that about 1000 files is about all that is realistic in a directory. Use subdirectories to create a topology which is akin to an equilateral triangle for best performance. Also, from a 2.1 README: "Tens of thousands of files in a single random-ordered directory can cause performance slowdowns and server session timeouts for the Backup/Archive client, because the list of files must be sorted before *SM can operate on them. Try to limit the number of files in a single random-ordered directory, or increase the server timeout period." 
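Tying the preceding directory entries together: a quick way to see which management class your directories actually got bound to (the path is illustrative) is, from the client, 'dsmc Query Backup "/home/*" -DIrsonly', which lists the directories along with the management class each is bound to.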
Directory permissions restored incorrectly Occurred in some V2 levels. Per ADSM, "it is working as designed and was documented in IC07282". Circumvent by using dsmc restore with -SUbdir=Yes on the command line or dsm Restore by Subdirectory Branch in the GUI to restore the directory with the correct permissions. Directory separator character '/' for Unix, DOS, OS/2, and Novell. See also ":" volume/folder separator for Macintosh. Directory timestamp preservation, Windows *SM easily preserves the timestamp of restored directories through use of the Windows API function SetFileTime(). DIRMc Client System Options file (dsm.sys) backup option to specify the Management Class to use for directories. (For Backup only; not for Archive. See ARCHMc for Archive.) Syntax: DIRMc ManagementClassName Placement: Must be within server stanza With some client types (e.g., Unix), the directory structure is simple enough that directory information can be stored in the ADSM database such that storage pool space is not required for it: the use of DIRMc does not change this. However, where a client uses richer directories or when an ACL (Access Control List) is associated with the directory, there is too much information and so it *does* need to be stored in a storage pool. (Note that this same principle pertains to all simple objects, and thus empty files as well.) The DIRMc option was originated because, without it, the directories would be bound to the management class that has a backup copygroup with the longest retention period (see below). In many sites that was causing directories to go directly to tape resulting in excessive tape mounts and prolonged retrievals. (Additional note: Beyond being bound to the management class with the longest backup retention, if multiple management classes have the same creation date, directories will be bound to the management class earliest in alphabetical order, per APAR IY11805.) Performance: You could use DIRMc to put directory data into a separate management class such that it could be on a volume separate from the file data and thus speed restorals, particularly if the volume is disk. (In a file system restoral, the directory structure is restored first.) Systems known to have data-rich directory information which must go to a storage pool: DFS (with its ACLs), Novell, Windows NTFS. Default: the Management Class in the active Policy Set which has the longest retention period (RETOnly); and in the case of there being multiple management classes with the same RETOnly, the management class whose name is highest in collating sequence gets picked. (The number of versions kept is not a factor.) Thus, in the absence of DIRMc, database and storage pool consumption can be aggravated by retaining directories after their files have expired. If used, be sure to choose a management class which retains directories as long as the files in them. NOTE: As of ADSMv3, DIRMc is not as relevant as it once was, because of Restore Order processing (q.v.), which creates an interim, surrogate directory structure and restore/retrieves the actual directory information whenever it is encountered within the restore order (the order in which data appears on the backup media). However, the restoral ultimately has to retouch those surrogate directories, and you don't want that to happen by wading through a set of data tapes unrelated to the restored data (where the dirs ended up by virtue of longest retention). 
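As an illustration (the management class name is made up), a dsm.sys server stanza might carry:
 DIRMc DIRCLASS
where DIRCLASS is a management class whose backup copy group points at a disk storage pool and whose retention is at least as long as that of any other class.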
So use of DIRMc is still desirable for file systems whose directories end up in storage pools. See also: Directories may be in the *SM db; Restore Order DIRMc, query In ADSM do 'dsmc Query Options': under GENERAL OPTIONS see "dirmc". In TSM do 'dsmc show options' and inspect the "Directory MC:" line. If your client options do not specify an override, the value will say 'DEFAULT'. -DIrsonly Client option, as used with Retrieve, to process directories only - not files. DISAble Through ADSMv2, the command to disable client sessions. Now DISAble SESSions. DISAble EVents ADSMv3+ server command to disable the processing of one or more events to one or more receivers (destinations). Syntax: 'DISAble EVents ALL[,CONSOLE][,ACTLOG] [,ACTLOG][,EVENTSERVER][,FILE] [,SNMP][,TIVOLI][,USEREXIT] EventName[,ALL][,INFO] [,WARNING][,ERROR][,SEVERE] NODEname=NodeName[,NodeName...] SERVername=ServerName [,ServerName]' where: TIVOLI Is the Tivoli Management Environment (TME) as a receiver. Example: 'DISAble EV ACTLOG ANE4991 *' DISAble SESSions Server command to prevent client nodes from starting any new Backup/Archive sessions. Current client node sessions are allowed to complete. Administrators can continue to access the server. Duration: Does not survive across a TSM server restart: the status is reset to Enable. Determine status via 'Query STatus' and look for "Availability". Msgs: ANR2097I See also: DISAble; DISABLESCheds; ENable SESSions DISABLESCheds Server option to specify whether administrative and client schedules are disabled during an TSM server recovery scenario. Syntax: DISABLESCheds Yes | No Default: No Query: Query OPTion, "DisableScheds" Disaster recovery See: Copy Storage Pool and disaster recovery Disaster Recovery Manager See: DRM Disaster recovery, short scenario, - Restore the server node from a AIX system mksysb image; - Restore the other volume groups (including the ones used for the adsm database, log, storage pool, etc.) from a savevg; - Follow the instructions & run the scripts so wonderfully prepared by DRM. (The DRM script knows everything about the database size, volhist, which volumes were considered offsite, etc.) DISK Predefined Devclass name for random access storage pools, as used in 'DEFine STGpool DISK ...'. Beware their use, as a frequently changing population of many files can result in fragmentation as time passes, and a high penalty in disk access overhead. With DISK TSM keeps track of each (4 KB) block in the DISK volumes, which means maintaining a map of all the blocks, searching and updating that map in each storage pool reference. Realize that Reclamation occurs on serial media, and thus not for DISK, meaning that the space formerly occupied by small files in a multi-file Aggregate cannot be reclaimed. REUsedelay is not applicable to DISK volumes: your data will probably not be recoverable because the space vacated by expired files, where whole Aggregates expired, is reused on disk, whereas such space remains untouched on tape. Restoral performance may be impaired if using random-access DISK rather than sequential-access FILE or tape: you may see only one restore session instead of multiple. That is, with DISK there is no Multi-session Restore. See: http://www-1.ibm.com/support/ docview.wss?uid=swg21144301 DISK storage pools are best used for only first point of arrival on a TSM system: the data must migrate to sequential access storage (FILE, tape) to be safe. 
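A minimal sketch of that arrangement (pool names and thresholds are illustrative):
 'DEFine STGpool DISKPOOL DISK NEXTstgpool=TAPEPOOL HIghmig=80 LOwmig=20'
so that data landing on the disk pool migrates onward to tape rather than lingering.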
Ref: Admin Guide table "Comparing Random Access and Sequential Access Disk Devices" See also: D2D; FILE; Multi-session restore Disk Pacing Term to describe AIX's control of Unix's traditional inclination to buffer any amount of file data, no matter how large. AIX limitation thus prevents memory overloading. Disk stgpool not being used See: Backups go directly to tape, not disk Disk storage pool See: Storage pool, disk See also: Backup storage pool, disk?; Backup through disk storage pool Disk Table The TSM database and recovery log volumes, as can be reported via 'SHow LVMDISKTABLE' (q.v.). DiskXtender A hierarchical storage product by Legato. For it to work with TSM, you need to have file dsm.opt in the DX home directory. DISKMAP ADSM server option for Sun Solaris. Specifies how ADSM performs I/O to a disk storage pool. Either: Yes To map client data to memory (default); No Write client data directly to disk. The more effective method for your current system needs to be determined by experimentation. Disks supported ADSM supports any disk storage device which is supported by the operating system. Dismount tape, whether mounted by ADSM or other Via Unix command: 'mtlib -l /dev/lmcp0 -d -f /dev/rmt?' 'mtlib -l /dev/lmcp0 -d -x Rel_Drive#' (but note that the relative drive method is unreliable). Msgs: "Demount operation Cancelled - Order sequence." probably means that the drive is actively in use by TSM, despite your impression. See also: Mount tape Dismount tape which was mounted by *SM 'DISMount Volume VolName' (The volume must be idle, as revealed in 'Query MOunt'.) DISMount Volume *SM server command to dismount an idle, mounted volume. Syntax: 'DISMount Volume VolName'. If volume is in use, ADSM gives message ANR8348E DISMOUNT VOLUME: Volume ______ is not "Idle". See also: Query MOunt DISPLAYLFINFO See: Storage Agent and logging/accounting -DISPLaymode ADSMv3 dsmadmc option for report formatting, with output being in either "list" or "table" form. Prior to this, the output from Administrative Query commands was displayed in a tabular format or a list format, depending on the column width of the operating system's command line window, which made it difficult to write scripts that parsed the output from the Query commands as the output format was not predictable. Choices: LISt The output is in list format, with each line consisting of a row title and one data item, like... Description: Blah-blah TABle The output is in tabular format, with column headings. See also: -COMMAdelimited; SELECT output, columnar instead of keyword list; -TABdelimited DISTINCT SQL keyword, as used with SELECT, to yield only distinct, unique, entries, to eliminate multiple column entries of the same content. Form: SELECT DISTINCT FROM Note that DISTINCT has the effect of taking the first occurrence of each row, so is no good for use with SUM(). DLT Digital Linear Tape. Single-hub cartridge with 1/2" tape where the external end is equipped with a plastic leader loop, (which has been the single largest source of DLT failures). Data is recorded on DLTtape in a serpentine linear format. DLT technology has lacked servo tracks on the tape as Magstar and LTO have, making for poor DLT start-stop performance as it has to fumble around in repositioning, which can greatly prolong backups, etc. DLT is thus intended to be a streaming medium, not start-stop. Super DLTtape finally provides servo tracking, in the form of Laser Guided Magnetic Recording (LGMR), which puts optical targets on the backside of the tape. 
http://www.dlttape.com/ http://www.overlanddata.com/PDFs/ 104278-102_A.pdf http://www.cartagena.com/naspa/LTO1.pdf See also: SuperDLT DLT and repositioning DLT (prior to SuperDLT) lacks absolute positioning capability, and so when you need to perform an operation (Audit Volume) which is to skip a bad block or file, it must rewind the tape and then do a Locate/Seek. DLT and start/stop operations *SM does a lot of start/stop operations on a tape, and DLT has not been designed for this (until SuperDLT). Whenever the DLT stops, it has to back up the tape a bit ("backhitch") before moving forward to get the tracking right. Sometimes, it seems, it doesn't get it right anyway, resulting in I/O errors. A lot of repositioning "beats up" the drive, and can result in premature failure. See: Backhitch DLT barcode label specs Can be found in various vendor manuals, such as the Qualstar TLS-6000 Technical Services Manual, section 2.3.1, at www.qualstar.com/146035.htm#pubpdf DLT cartridge inspection/leader repair See Product Information Note at www.qualstar.com/146035.htm#pubpdf DLT cleaner tape When a DLT clean tape is used, it writes a tape mark 1/20th down the tape. The next clean uses up 1/20 more tape. When you have used it 20 times, putting it back in the drive doesn't clean anything. You can degauss it to erase the tape marks and then reuse it up to 3 times, though that can result in the tape head being dirtied rather than cleaned. DLT drives All are made by Quantum. Quantum bought the technology from DEC, which at the time called them TKxx tape drives. DLT Forum Is on the Quantum Web Site: http://www.dlttape.com/index_wrapper.asp DLT IV media specs 1/2 inch data cartridge Metal particle formulation for high durability. 1,828 feet length 30 year archival storage life 1,000,000 passes MTBF 35 GB native capacity on DLT 7000, 20GB on DLT 4000 40 GB native capacity on DLT 8000 DLT Library sources http://www.adic.com DLT media life DLT tapes are spec'd at 500,000 passes. In general, the problem that usually occurs with DLT is not tape wear, but contamination. The cleaner the environment, the better chance the tapes will have of achieving their full wear life...some 38 years. Streaming will prematurely wear the tapes. DLT tapes density DLT 4000 are 20GB native, 40GB "typical compression". Manually load a tape and look very carefully at the density lights on the DLT drive. DLT tapes can do 35GB, but for backwards compatibility they can do lower densities. The drive decides on the density when the tape is first written to and that density is used forever more. It is possible to "reformat" the media to a higher density: 0. Make sure there is no ADSM data on the tape and the volume has been deleted from the library and ADSM volume list. Mark the drive as "offline" in ADSM. 1. Mount the tape manually in the drive 2. Use the "density select" button to choose 35GB. 3. At the UNIX system: 'dd if=/dev/zero of=/dev/rmt/X count=100' (/dev/rmt/X is the real OS device driver for the drive) 4. Dismount the tape. 5. Mark the drive as online. 6. Get ADSM to relabel the tape. This works because the DLT drive will change the media density IF it is writing at the beginning of the tape. This should result in getting > 35GB on DLT tapes. DLT vs. Magstar (3590, 3570) drives DLT tapes are clumsy and fragile; With a DLT the queue-up time is much longer than any of the magstars, and the search time is even worse; DLT drive heads wear faster. DLT also writes data to the very edges of a tape causing the tape edges to wear. 
Both have cartridges consisting of a single spool, with the tape pulled out via a leader. DLTs are prone to load problems, especially as the drive and tape wear: there is a little hook in the drive that must engage a plastic loop in the tape leader, and when the hook comes loose from its catch, a service call is required to get it repaired. And, of course, the plastic leader loop breaks. Customers report Magstar throughput much faster than DLT, helped by the servo tracks on tape that DLT lacks. Magstar-MP's are optimized for start-stop actions, and that is much of what ADSM will do to a drive. DLT is optimized for data streaming. If a MP tape head gets off alignment during a write operation, the servo track reader on the drive stops writing and adjusts. DLT aligns itself during the load of the tape. If it gets off track during a write it has no way to correct and could overwrite data. New technology DLT drives can read older DLT tapes, whereas Magstar typically does not support backward compatibility. DLT4000 Capacity: 20GB native, 40GB "typical compression". Transfer rate: 1.5 MB/sec DLT7000 Digital Linear Tape drives, often found in the STK 9370. Can read DLT4000 tapes. Tape capacity: 35 GB. Transfer rate: 5 MB/sec Beware that they have had power supply problems (there are 2 inside each drive): Low voltage on those power supplies will cause drives to fail to unload. And always make sure to be at the latest stable microcode level. See also: SuperDLT DLT7000 cleaning There is a cleaning light, and it comes on for two different things: "clean requested", and "clean required". There is a tiny cable that goes from the drives back to the robot. With hardware cleaning on, that is how the "clean required" gets back to the robot and causes it to mount the cleaning tape. A "clean request" doesn't. That is, the light coming on does not always result in cleaning being done. DLT7000 compression DLT7000 reportedly come configured to maximize data thruput, and will automatically fall out of compression to do this. If you want to maximize data storage, then you need to modify the drive behavior. See the hardware manual. DLT7000 tape labels Reportedly must be a 1703 style label and have the letter 'd' in the lower left corner. DLT8000 Digital Linear Tape drives. DLT type IV or better cartridges must be used. Can read DLT4000 tapes. Tape capacity: 40 GB. Transfer rate: 6 MB/sec DM services Unexplained Tivoli internal name for HSM under TSM, as seen in numerous references in the Messages manual series 9000 messages, apparently because it would be too confusing for its Tivoli Space Manager to have the acronym "TSM". "DM" probably stands for Data Migrator. .DMP File name extension created by the server for FILE type scratch volumes which contain Database dump and unload data. Ref: Admin Guide, Defining and Updating FILE Device Classes DNSLOOKUP TSM 5.2+ compensatory server option for improving the performance of Web Admin and possibly other client access by specifying: DNSLOOKUP NO Background: DNS lookup control is provided in web (HTTPD) servers in general. (In IBM software, the control name is DNSLOOKUP; in the popular Apache web server, the control is HostnameLookups.) Web servers by default perform a reverse-DNS query on the requesting IP address before servicing the web request. This reverse-DNS query (C gethostbyaddr call) is used to retrieve the host and domain name of the client, which is logged in the access log and may be used in various ways. The problem comes when DNS service is impaired. 
It may be the case that your OS specifies multiple DNS servers, and one or more of them may not actually be DNS servers, or may be down, or unresponsive. This can result in a delay of up to four seconds before rotating to the next DNS server. Other causes of delay involve use of a firewall or DHCP with no DNS server (list) specified. You can gauge if you have such a DNS problem through the use of the 'nslookup' or 'host' commands. Note that DNS lookup problems affect the performance of all applications in your system, and should be investigated, as the use of gethostbyaddr is common. With DNSLOOKUP OFF specified, only the IP address is had. See also: Web Admin performance issues Documentation, feed back to IBM Send comments on manuals, printed and online, to: starpubs@sjsvm28.vnet.ibm.com Domain See: Policy Domain DOMain Client User Options file (dsm.opt) option to specify the default file systems in your client domain which are to be eligible for incremental backup, as when you do 'dsmc Incremental' and do not specify a file system. DOMain is ignored in Archive and Selective Backup. The DOMain statement can be coded repeatedly: the effect is additive. That is, coding "DOMain a:" followed by "DOMain b:" on the next line is the same as coding "DOMain a: b:". Note that Domains may also be specified in the client options set defined on the server, which are also additive, preceding what is coded in the client's options file. When a file system is named via DOMain, all of its directories are always backed up, regardless of Inclue/Exclude definitions: the Include/Exclude specs affect only eligibility of *files* within directories. AIX: You cannot code a name which is not one coded in /etc/filesystems (as you might try to do in alternately mounting a file system R/O): you will get an ANS4071E error message. Default: all local filesystems, except /tmp. (Default is same as coding "ALL-LOCAL", which includes all local hard drives, excluding /tmp, and excludes any removeable media drives, such as CD-ROM, and excludes loopback file systems and those mounted by Automounter. Local drives do not include NFS-mounted file systems.) Verify: 'dsmc q fi' or 'dsmc q op'. Override by specifying file systems on the 'incremental' command, as in: 'dsmc Incremental /fs3' Note that instead of a file system you can code a file system subdirectory, defined previously via the VIRTUALMountpoint option. Do not confuse DOMain with Policy Domain: they are entirely different! See also: File systems, local; SYSTEMObject Domain list, in GUI From the GUI menu, choose "edit" -> preferences; there you'll find a "backup" tab which will give you access to your domain options, and a self-explicit "include-exclude" tab. -DOMain=____ Client command line option to specify file system name(s) which augment those specified on the Client User Options file DOMain statement(s). For example: If your options file contains "DOMain /fs1 /fs2" and you invoke a backup with -DOMain="/fs3 /fs4" then the backup will operate on /fs1, /fs2, /fs3, and /fs4. Note that both DOMain and -DOMain are ignored if you explicitly list file systems to be backed up, as with 'dsmc i /fs7 /fs8'. DOMAIN.Image Client Options File (dsm.opt) option for those clients supporting Image Backups. Specifies the mounted file systems and raw logical volumes to be included by default when Backup Image is performed without file system or raw logical volume arguments. Syntax: DOMAIN.Image Name1 [Name2 ...] 
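For example (the file system and raw logical volume names are made up), a dsm.opt might contain:
 DOMain /home /data
 DOMAIN.Image /home /dev/rlv01
so that 'dsmc Incremental' defaults to /home and /data, and 'dsmc Backup Image' with no arguments operates on /home and the raw logical volume.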
See also: dsmc Backup Image; MODE domdsm.cfg The default name for the TDP For Domino configuration file. Values in that file are established via the 'domdsmc set' command. Note that if the file contains invalid values, TDP will use default values. "Preference" info, by default, comes from this cfg - not domdsm.opt. Remember that dsm.opt is the TSM API config file. You can point to an alternate configuration file using the DOMI_CONFIG environment variable. domdsmc query dbbackup TDP Domino command to report on previously backed up Domino database instances. If it fails to find any, it may be that the domdsmc executable does not have the set-user-id bit on: perform Unix command 'chmod 6771' to turn it on. See IBM KB article 1109089. Domino See: domdsm.cfg; Lotus Domino; Tivoli Storage Manager for Mail Domino backup There are two *guaranteed* ways to get a consistent Domino database backup: 1) Shut down the Domino server and back up the files, as via the B/A client. 2) Use Data Protection for Domino, which uses the Domino backup and restore APIs. This can be done while the Domino Server is up even if the database is changing during backup. Some customers point to the TSM 5.1 Open File support and believe they can use that instead; but if a database is "open", you cannot absolutely guarantee that the database will be in a consistent state during the point in time the "freeze" happens, because not all of the database may be on the disk - some may still be in memory. The Domino transaction logging introduced in Domino 5 makes sure that the database can be made consistent even after a crash. Domino restoral considerations When performing a restoral with TDP Notes, the restored physical files are seen to have contemporary timestamps, rather than reflecting the timestamps of the backups. This is because the external, physical file timestamps don't matter, and receive no special attention: what matters are the timestamps internal to the Domino database, which is what the TDP is concerned with. DOS/Win31 client Available in ADSM v.2, but not v.3. dpid2 daemon Serves as a translator between SMUX and DPI (SNMP Multiplexor Protocol and Distributed Protocol Interface) traffic. Make sure that it is known to the snmp agent, as by adding a 'smux' line to /etc/snmpd.conf for the dpid2 daemon; else /var/log could fill with msgs: dpid2 lost connection to agent dpid2 smux_wait: youLoseBig [ps2pe: Error 0] Dr. Watson errors (Windows) May be caused by having old options in your options file, which are no longer supported by the newer client. DRIVE FORMAT value in DEFine DEVclass to indicate that the maximum capabilities of the tape drive should be used. Note that this is not as reliable or as definitive as more specific values. See also: 3590B; 3590C; FORMAT Drive A drive is defined to belong to a previously-defined Library. Drive, define to Library See: 'DEFine DRive' Drive, update 'UPDate DRive ...' (q.v.) Drive, vary online/offline 'UPDate DRive ...' (q.v.) Drive cleaning, excessive Can be caused by bad drive microcode, as seen with DLT7000. The microcode does not record the calibration track onto tapes correctly. So the drives detect a weak signal and think that cleaning is needed. Drive mounts count See: 3590 tape mounts, by drive Drive status, from host 'mtlib -l /dev/lmcp0 -f /dev/rmt1 -qD' DRIVEACQUIRERETRY TSM4.1 server option for 3494 sharing. Allows an administrator to set the number of times the server will retry to acquire a drive. Possible values: 0 To retry forever. This is the default. 
-1 To never retry. 1 to 9999 The number of times the server will retry. See also: 3494SHARED; MPTIMEOUT Driver not working - can't see tape drives Has occurred in the case of an operating system like Solaris 2.7 booted in 64-bit mode, but the driver being 32-bit. DRIVES SQL table. Elements, as of ADSMv3: LIBRARY_NAME: FSERV.LIB DRIVE_NAME: FSERV.3590_500 DEVICE_TYPE: 3590 DEVICE: /dev/rmt5 ONLINE: YES ELEMENT: ACS_DRIVE_ID: LAST_UPDATE_BY: LAST_UPDATE: CLEAN_FREQ: Later, TSM added the columns... DRIVE_STATE ALLOCATED_TO Note: Does not reveal the media mounted on a drive. Drives, maximum to use at once See: MOUNTLimit Drives, not all in library being used (Insufficient mount points, ANR0535W, ANR0567W) As in you find processes waiting for drives (do 'Query SEssion F=D' and find some sessions waiting for mount points), though you believe you have enough drives in the library to handle the requests... - Most obviously, do 'Query DRive' and make sure all are online. - In the server, do 'SHow LIBrary' and see if it thinks all the drives are available. Inspect the "mod=" value: if you have a mixture of model numbers, some of your drives might not get used. A further consideration is that using new drives with old server software (as with inappropriate definitions such that TSM thinks they are older drives) could result in erratic behavior, as in perhaps balky dismounting, etc. Review TSM documentation on how to best define such devices for use in your library, and appropriate levels of software and device drivers. - If all your drives get rotationally used, but all cannot be used simultaneously, then it's a DEVclass MOUNTLimit problem (and be aware that MOUNTLimit=DRIVES is not always reliable, so may be better to explicitly specify the number). - If not all drives get rotationally used, some have a problem: Attempt to use 'mtlib' and 'tapeutil'/'ntutil' commands on those. - Check your client MAXNUMMP value. - Watch out for the devclass for your drives somehow having changed and thus being incompatible with your storage pools. - If just certain drives never get used, then there is a problem specific to those drives... - If a 3494 or like library, look for an Intervention Required condition, caused by a load/unload failure or similar, which takes the drive out of service. - At the library manager station, check the availability status of the drives. (They can be logically made unavailable there.) - Check the front panel of the drives, looking for "ONLINE=0" or like anomaly. - In AIX, do 'lsdev -C -c tape -H -t 3590' and see if all drives have status of Available. - Are you trying to use a new tape technology with a server level which doesn't support it such that the drive devclass is GENERICTAPE rather than the actual type, needed to mount and use the tapes that go with that drive technology? - In a more obscure case, a 3494/3590 customer reports this being caused by the cleaning brush on the drive not functioning correctly: replaced, cleaned, no more problem. - 5.1 changed things so that we now have to define a Path for libraries and drives, which may be at the root of your difficulty. Do Query PATH in addition to Query DRive, and possibly SHow LIBRary, to seek out any missing defs or bad states. - Assure that your MAXscratch value is appropriate. Keep in mind that various TSM tasks simply cannot be done in parallel. Drives, number of in 3494 Via Unix command: 'mtlib -l /dev/lmcp0 -qS' Drives, query 'Query DRive [LibName] [DriveName] [Format=Detailed]' DRM TSM Disaster Recovery Manager. 
In AIX environment, does 2 major things: 1. Automates (mostly) the vaulting process for moving/tracking copy storage pool tapes and DB backup tapes offsite and onsite. If you have a tape robot and do a lot of tape vaulting you can either: a) Have a very expensive ADSM administrator do all the checking and status updates daily for vaulting tapes; b) Have a very expensive UNIX dude write scripts to automate the process (and of course maintain them); or c) Pay for DRM and get the function ready to go out of the box. 2. Generates the "recovery plan" file that is a concatenated series of scripts and instructions that tell you how to rebuild your *SM server in an offsite, DR environment (which is the first thing you have to do in a disaster situation - you have to get your *SM server back up at your recovery recovery site, before you can start using *SM to recover your appls.) Ref: Admin Guide manual; Tivoli Storage Management Concepts redbook Competing product: AutoVault, at CodeRelief.com - a very inexpensive alternative, no TSM hooks. See also: ORMSTate DRM, add primary, copy stgpools SET DRMPRIMSTGPOOL SET DRMCOPYSTGPOOL DRM, prevent from checking tape label To keep DRM from checking the tape label before ejecting a tape: Set DRMCHECKLabel No DRM and ACS libraries DRM won't do checkouts from ACS libraries. (You can write scripts to work around it.) DRM considerations Numerous customers report encountering inconsistencies with DRM, as in doing Query DRMedia and finding 18 of 50 offsite volumes not listed. This may have to do with changing status of vault retrieve volumes which somehow are not checked-in in time. When the volume history is truncated to the point where this state change was made the volume is 'lost'. - Make sure that you use DRM to expire *SM database backup volumes. - Watch out for human error: In using MOVe DRMedia to return tapes, a mistyped a volser for a volume that is still physically offsite but has just gone to vault retrieve state, the volume will be deleted and left at the vault: it's not in a DRM state anymore and you have to do manual inventory to find it. - The offsite vendor can mistakenly omit a tape to be returned and ops runs MOVe DRMedia anyway and the tape is "lost". - A volume inadvertently left in the tape library and not sent offsite cannot be returned. - A MOVe DRMedia done by mistake, or an automated script which is not in tune with retention policies can result in inconsistencies. As always, keeping good records will help uncover and rectify problems. If an automated library, after you explode the DRM files, you may have to edit DEVICE.CONFIGURATION.FILE to put actual location and volser of your DB backup tape. That's so the DR script (and the server) can find it. DRMDBBackupexpiredays See: Set DRMDBBackupexpiredays DRMEDIA SQL: TSM database table recording disaster recovery media, which is to say database backup volumes and copy storage pool volumes. Columns, with samples: VOLUME_NAME: 000004 STATE: MOUNTABLE (always this unless MOVe DRMedia is done) UPD_DATE: 2000-11-12 15:11:29.000000 LOCATION: STGPOOL_NAME: OUR.STGP_COPY LIB_NAME: OUR.LIB VOLTYPE: CopyStgPool DBBackup dscameng.txt American English message text file. The DSM_DIR client environment variable should point to the directory where the file should reside. dsierror.log *SM API error log (like dsmerror.log) where information about processing errors is written. Because buta is built upon the API, use of buta also causes this log to be created. 
The DSMI_LOG client environment variable should point to the directory where you want the dsierror.log to reside. If unspecified, the error log will be written to the current directory. The error log for client root activity (HSM migration, etc.) will be /dsierror.log. See also: DSMI_LOG; "ERRORLOGRetention"; tdpoerror.log ____.dsk VMware virtual disk files, such as win98.dsk, linux.dsk, etc. Backing up such files per se is not the best idea, and is worse if the .dsk area is active. The best course is to run the backup from within the guest operating system. dsm The GUI client for backup/archive, restore/retrieve. Contrast with 'dsmc' command, for command line interface. AIX: /usr/lpp/adsm/bin/dsm IRIX: /usr/adsm/dsm Solaris: /opt/IBMDSMba5/solaris/dsm and symlink from /usr/sbin/dsmc Beware: ADSM install renders this cmd setGID bin, which thwarts superuser uses. Assure setGID chmod'ed off. Ref: Using the UNIX Backup-Archive Client, chapter 1. DSM_CONFIG Client environment variable to point to the Client User Options file (dsm.opt) for users who create their own rather than depend upon the default file /usr/lpp/adsm/bin/dsm.opt. Ref: "Installing the Clients" manual. See also: -optfile DSM_DIR Officially, the client environment variable to point to the directory containing dscameng.txt, dsm.sys, dsmtca, and dsmstat. But is also observed by /etc/rc.adsmhsm as the directory from which HSM should run installfsm, dsmrecalld, and dsmmonitord. Ref: "Installing the Clients" manual. DSM_LOG Client environment variable to point to the *directory* where you want the dsmerror.log to reside. (Remember to code the directory name, not the file name.) If undefined, the error log will be written to the current directory. Beware symbolic links in the path, else suffer ANS1192E. Advice: Avoid using this if possible, because it forces use of a single error log file, which can make for permissions usage problems across multiple users, and muddy later debugging in having the errors from all manner of sessions intermixed in the file. Ref: "Installing the Clients" manual. See also: ERRORLOGName option dsm.afs The dsm.afs backup style provides the standard ADSM user interface and backup/restore model to AFS users, which unlike plain dsm will back up AFS Access Control Lists for directories. Users can have control over the backup of their data, and can restore individual files without requiring operator intervention. Individual AFS files are maintained by the ADSM system, and the ADSM management classes control file retention and expiration. Additional information is needed in order to restore an AFS server disk. Contrast with buta, which operates on entire AFS volumes. dsm.ini (Windows client) The ADSMv3 Backup/Archive GUI introduced an Estimate function. It collects statistics from the ADSM server, which the client stores, by server, in the dsm.ini file in the backup-archive client directory. (Comparable file in the Unix environment is .adsmrc.) Client installation also creates this file in the client directory. Ref: Client manual chapter 3 "Estimating Backup processing Time"; ADSMv3 Technical Guide redbook This file is also being used, in at least a provisional manner, to make the GUI configurable, as in limiting what an end user can do. See IBM site Solution swg21109086. See also: .adsmrc; Estimate; TSM GUI Preferences dsm.opt file See Client User Options file. AIX: /usr/lpp/adsm/bin/dsm.opt. IRIX: /usr/adsm/dsm.opt. 
Solaris: /usr/bin (so located due to the Solaris packaging mechanism wherein an install will delete old files, and /usr/bin was deemed "safe" - but not really the best choice) The DSM_CONFIG client environment variable may point to the options file to use, instead of using the options file in the the default location. dsm.opt.smp file Sample Client User Options file. Use this to create your first dsm.opt file. dsm.sys file See: Client System Options File. AIX: /usr/lpp/adsm/bin/dsm.sys IRIX: /usr/adsm/dsm.sys Solaris: /usr/bin (so located due to the Solaris packaging mechanism wherein an install will delete old files, and /usr/bin was deemed "safe" - but not really the best choice) The DSM_DIR client environment variable may be used to point to the directory where the file to be used resides. Beware there being multiple dsm.sys files, as in AIX maybe having: /usr/tivoli/tsm/client/api/bin/dsm.sys /usr/tivoli/tsm/client/api/bin64/dsm.sys /usr/tivoli/tsm/client/ba/bin/dsm.sys dsm.sys.smp file Sample Client System Options file. Use this to create your first dsm.sys file. In /usr/lpp/adsm/bin dsmaccnt.log This is the ADSM server accounting file on an AIX system, which is written to after 'Set ACCounting ON' is done. The file is located in the directory from which the server is started, which is typically /usr/lpp/adsmserv/bin/. See also: Accounting... dsmadm The GUI command for server administration of Administrators, Central Scheduler, Database, Recovery Log, File Spaces, Nodes, Policy Domains, Server, and Storage Pools. Contrast with the 'adsm' command, which is principally for client management. dsmadmc *SM administrative client command line mode for server cmds, available as a client on all *SM systems where the *SM client software has been installed. (On Windows clients, dsmadmc is not installed by default: you have to perform a Custom install, marking the admin command line client for installation. After a basic install, you can go back and install dsmadmc by reinvoking the install, choosing Modify type, there marking just the admin command line client for installation. See IBM doc item 1083434.) The dsmadmc command starts an "administrative client session" to interact with the server from a remote workstation, as described in the *SM Administrator's Reference. In Unix, the version level preface and command output all go to Stdout. Note that the dsmadmc command is neutral: you can use it on any platform type to communicate to a TSM server on any platform type. The dsmadmc invoker does not have to be a superuser. To enter console mode (display only): 'dsmadmc -CONsolemode' To enter mount mode (monitor mounts): 'dsmadmc -MOUNTmode' To enter batch mode (single command): 'dsmadmc -id=____ -pa=____ Command...' 'dsmadmc -id=____ -pa=____ macro Name' To enter interactive mode: 'dsmadmc -id=YourID -pa=YourPW' Options: -CONsolemode Run in Console mode, to display TSM server msgs but allow no input. -DATAOnly=[No|Yes] (TSM 5.2+) To suppress the display of headers (product version, copyright, ANS8000I command echo, column headers) and ANS8002I trailer. Error messages are not suppressed. -DISPLaymode=[LISt|TABle] The interface is normally adaptive, displaying output in tabular form if the window is wide enough, otherwise reverting to Identifier:Value form. This option allows you to force query output to one or the other, regardless of the window width. Note that, regardless of window width, query commands may be programmed with a fixed column width. -ID=____ Specify administrator ID. 
-Itemcommit Say that you want to commit commands inside a macro as each command is executed. This prevents the macro from failing if any command in it encounters "No match found" (RC 11) or the like. See also: COMMIT -MOUNTmode Run in Mount mode, to display all mount messages, such as ANR8319I, ANR8337I, ANR8765I. No input allowed. -NOConfirm Say you don't want TSM to request confirmation before executing vital commands. Example: Select, "This SQL query might generate a big table, or take a long time. Do you wish to continue ? Y/N" -OUTfile=____ All terminal commands and responses are to be captured in the named file, as well as be displayed on the screen. The file will not reflect command input prompting but will record the cmd. Use this rather than Unix 'dsmadmc | tee ', which doesn't work. -PASsword=____ Specify admin password. -Quiet Don't display Stdout msgs to the screen; Stderr msgs will still appear. -SERVER=____ Select a server other than the one in this system's client options file. (Not avail. in Windows: use -TCPServeraddress instead.) -COMMAdelimited Specifies that any tabular output from a server query is to be formatted as comma-separated strings rather than in readable format. This option is intended to be used primarily when redirecting the output of an SQL query (SELECT command). The comma-separated value format is a standard data format which can be processed by many common programs, including spreadsheets, data bases, and report generators. Note that where values themselves contain commas, TSM will enclose the value in quotes, e.g. "123,456". -TABdelimited Specifies that any tabular output from a server query is to be formatted as tab-separated strings rather than in readable format. This option is intended to be used primarily when redirecting the output of an SQL query (SELECT command). The tab-separated value format is a standard data format which can be processed by many common programs, including spreadsheets, databases, and report generators. Tabs make parsing easier compared to commas, in that it is not uncommon for values to contain commas. You can also specify any option allowed in the client options file. Alas, there is no option to specify a file containing a list of commands to be invoked. The dsmadmc client command is obviously useless if the server is not up. See my description of the ANS8023E message. Notes: Prior to TSM 5.2 and the -DATAOnly option, there is no way to suppress headers or ANS800x messages that appear in the output - you are left to remove them after the fact. You might use ODBC, but that accesses just the TSM db, not any TSM commands. You can suppress the "more..." scrolling prompt only by running a command in batch mode (adding the command to the end of the line) and piping the output to cat... dsmadmc SomCmd | cat. Install note: dsmadmc may not install by default (see the Windows install note above). Ref: Admin Ref chapter 3: "Using Administrative Client Options". See also: -Itemcommit dsmapi*.h *SM API header files, for compiling your own API-based application: dsmapifp.h dsmapips.h dsmapitd.h In TSM 3.7, lives in /usr/tivoli/tsm/client/api/bin/sample/ They are best included in C source modules in the following order: #include "dsmapitd.h" #include "dsmapifp.h" #include "dapitype.h" #include "dapiutil.h" #include "dsmrc.h" See also: libApiDS.a dsmapitca The ADSM API Trusted Communication Agent. For non-root users, the ADSM client uses a trusted client (dsmtca) process to communicate with the ADSM server via a TCP session.
This dsmtca process runs setuid root, and communicates with the user process (API) via shared memory, which requires the use of semaphores. The DSM_DIR client environment variable should point to the directory where the file should reside. dsmattr HSM: Command to set or display the recall mode for a migrated file. Syntax: 'dsmattr [-RECAllmode=Normal|Migonclose| Readwithoutrecall] [-RECUrsive] FileName(s)|Dir(s)' See "Readwithoutrecall". dsmautomig (HSM) Command to start threshold migration for a file system. dsmmonitord checks the need for migration every 5 minutes (or as specified on the CHEckthresholds Client System Options file (dsm.sys) option) and if needed will automatically invoke dsmautomig to do threshold migrations. Query: ADSM 'dsmc Query Options' or TSM 'dsmc show options', look for "checkThresholds". Note that persistent dsmautomig invocations are an indication that HSM thinks the file system is running out of space, despite what a 'df' may show. Deleting files or extending the file system has been shown to stop these "dry heaves" dsmautomig invocations. See "dsmmonitord", "automatic migration", "demand migration". dsmBeginQuery API function. dsmBindMC API call to bind the file object to a management class. It does so by scanning the Include/Exclude list for a spec matching the object, wherein you may have previously coded a management class for a filespec. What the call returns reflects what it has found - which is to say that the dsmBindMC call does not itself specify the Management Class. You'll end up with the default management class if the dsmBindMC processing did not find a spec for the object in the Include/Exclude list. It would be nice if there were a call which were as definitive as the -ARCHMc spec for the command line client, but such is not the case. dsmc Command-line version of the client for backup-restore, archive-retrieve. Invoking simply 'dsmc' puts you into the command line client, in interactive mode (aka "loop mode"). Contrast with 'dsm' command, for graphical interface (GUI). To direct to another server, invoke like this: 'dsmc q fi -server=Srvr', or 'dsmc i -server=Srvr /home'. (Note that the options *must* be coded AFTER the operation.) AIX: /usr/lpp/adsm/bin/dsmc IRIX: /usr/adsm/dsmc NT: Reference the B/A Client manual for Windows, section "Starting a Command Line Session", where you can Start->Programs->TSM folder->Command Line icon; or use the Windows command line to shuffle over to the TSM directory and issue the 'dsmc' command. Solaris: /opt/IBMDSMba5/solaris/dsmc, and symlink from /usr/sbin/dsmc Note that you can run a macro file with dsmc: put various commands like Incremental into a file, then run as 'dsmc macro MacroFilename'. Beware: ADSM install renders this cmd setGID bin, which thwarts superuser uses. Assure setGID chmod'ed off. Ref: Using the UNIX Backup-Archive Client, chapter 7. See also: dsmc LOOP dsmc and wildcards (asterisk) New TSM users in at least a Unix environment may not realize that how you utilize a wildcard may cause results to be wholly different than they expect. For example: A novice user goes into a directory and wants to see all the files that are in the backup storage pool for that directory, so they enter: dsmc query backup * But what does that really do? The asterisk is exposed to the Unix shell that is controlling the user session, and it expands the asterisk into a list of all the files in the directory.
So the query will end up trying to ask the TSM server for information on the files currently in the directory - which may have no correlation with what is in the backup storage pool. (This theoretical example sidesteps the TSM complication that it may disallow such wildcarding, with error message ANS1102E; but we're trying to explore a point here.) So how do you then pose the request to the TSM server that it show all backed up files from the directory? By one of the following constructs (where this is a Unix example): dsmc query backup '*' dsmc query backup \* dsmc query backup "*" By quoting or escaping the asterisk, the shell passes it, intact, to the dsmc command, which responds by formulating an API request to the TSM server for all files contained within the stored filespace for this directory. And this yields the expected results. The rule here may be expressed as: * refers to the file system '*' refers to the filespace Note that the above does *not* apply to the Windows environment: the Windows command processor does not expand wildcards, but rather just passes them on to the invoked program as-is. dsmc Archive To archive named files. Syntax: 'Archive [-ARCHMc=managementclass] [-DELetefiles] [-DEscription="..."] [-SErvername=StanzaName] [-SUbdir=No|Yes] [-TAPEPrompt=value] FileSpec(s)' The number of FileSpecs is limited to 20; see "dsmc command line limits". Wildcard characters in the FileSpec(s) can be passed to the Archive command for it to expand them: this avoids the shell implicitly expanding the names, which can result in the command line arguments limit being exceeded. For example: instead of coding: dsmc Archive myfiles.* code: dsmc Archive 'myfiles.*' or... dsmc Archive myfiles.\* Note that the archive operation will succeed even if you don't have Unix permissions to delete the file after archiving. It is important to understand that an Archive operation is deemed "explicit": that you definitely want all the specified files sent...WITHOUT EXCEPTION. Because of this, message ANS1115W and a return code 4 will be produced if you have an Exclude in play for an included object. (Due to the preservational nature of Archive, you very much want to know if some file was not preserved.) It is advisable to make use of the DEscription, as it renders the archived object unique - but be aware that doing so also forces the path directories to be archived once more, if the description is unique. Archiving a file automatically archives the directories in the path to it. As of ADSMv3.1 mid-1999 APAR IX89638 (PTF 3.1.0.7), archived directories are not bound to the management class with the longest retention. Note that you cannot change the archive file Description after archiving. See also: DELetefiles; dsmc Archive dsmc Backup Image TSM3.7+ client command to create an image backup of one or more file spaces that you specify. Available for major Unix systems (AIX, Sun, HP). This is a raw logical volume backup, which backs up a physical image of a volume rather than individually backing up the files contained within it. This is achieved with the TSM API (which must be installed). This backup is totally independent of ordinary Backup/Restore, and the two cannot mingle. Image backups need to be run as "root". 
Syntax: 'dsmc Backup Image File_Spec' where File_Spec identifies either the name of the file system that occupies the logical volume (more specifically, the mount point directory name), when that file system is mounted; or the name of the logical volume itself, when it has no mounted file system. If the volume contains a file system, you must specify by file system name: that allows you to supplement the image backup with Incremental or Selective backups via the MODE option. It also assures that the mounted file system, if any, is dismounted before the image backup is performed. The client and server both must be at least 3.7. Advisory: When a file system is specified, the operation will try to unmount the file system volume, remount it read-only, perform the backup, and then remount it as it was. This can be disruptive, and is problematic if the backup is interrupted. Use the Include.Image option to include an image for backup, or to assign a specific management class to an image object. Syntax: 'dsmc Backup Image [Opts] Filespec(s)' Ref: Redbook "Tivoli Storage Manager Version 3.7 Technical Guide"; IBM online info item swg21153898 Msgs: ANS1063E; ANS1068E See also: MODE dsmc Backup NAS Contacts the TSM EE server for it to initiate an image backup of one or more file systems belonging to a Network Attached Storage (NAS) file server. The NAS file server performs the outboard data movement. A server process starts in order to perform the backup. See also: NDMP; NetApp dsmc BACKup SYSTEMObject Windows client command to back up all valid system objects, allowing you to perform a backup of System Objects separate from ordinary files. Note that an Incremental Backup will ordinarily also back up System Objects. Verification: The backup log will show messages like "Backup System Object: Event log", "Backup System Object: Registry". Note that this command cannot be scheduled. dsmc CANcel Restore ADSMv3 client command to cancel a Restore operation. See also: CANcel RESTore dsmc command line limits By default, the number of FileNames which can be specified on the dsmc command line is limited to 20 (message ANS1102E); and the TSM backup-archive client's command-line parsing is limited to 2048 total bytes (message ANS1209E The input argument list exceeds the maximum length of 2048 characters.). The intent is to protect hapless customers from themselves - but that of course penalizes everyone, deprives the product of the flexibility that its Enterprise status warrants, and prevents it from scaling to the capabilities of the operating system environment which the customer chose for large-scale processing. (In AIX, at least, the command line length limit is defined by the ARG_MAX value in /usr/include/sys/limits.h: exceeding that results in the typical shell error "arg list too long".) As of the TSM 5.2.2 Unix client, this limitation is relieved in the form of the -REMOVEOPerandlimit command line option. In other environments, there are some circumventions you can employ: - Use the -FILEList option. - In the Unix environment, use the 'xargs' command to efficiently invoke the command with up to 20 filespecs per invocation, via the -n20 option. Within an interactive session (which you invoked by entering 'dsmc' with no operands): A physical line may not contain more than 256 characters, and may be continued to a maximum of 1500 characters.
Ref: B/A Clients manual, "Entering client commands" See also: -FILEList; -REMOVEOPerandlimit dsmc Delete ACcess TSM client command to revoke access to files that you previously allowed others to access via 'dsmc SET Access'. Syntax: 'dsmc Delete ACcess [options]' You will be presented with a list from which to choose. (As such, this is a quick, convenient way to display all access permissions.) dsmc Delete ARchive TSM client command to delete Archived files from TSM server storage. Syntax: 'dsmc Delete ARchive [options] FileSpec' In more detail: 'dsmc Delete ARchive [-NOPRompt] [-DEscription="..."] [-PIck] [-SErvername=StanzaName] [-SUbdir=No|Yes] FileSpec(s)' If you do not qualify the deletion with a unique Archive file description, all archived files of that name will be deleted. The number of FileSpecs is limited to 20; see "dsmc command line limits". The delete actually only marks the entries for deletion: it is Expire Inventory which actually removes the entries and reclaims space. But the marking is irreversible: there is no customer-provided means for un-marking the files; and the marking does not show up in the Archives table. Thus, a Select on the Archives table continues to show the files exactly as before the Delete Archive. dsmc Delete Filespace ADSM client command to delete filespaces from *SM server storage. Syntax: 'dsmc Delete Filespace [options]' You will be presented with a list of filespaces to choose from. dsmc EXPire TSM client command to inactivate the backup objects you specify in the file specification or with the filelist option. The command does not remove workstation files: if you expire a file or directory that still exists on your workstation, the file or directory is backed up again during the next incremental backup unless you exclude the object from backup processing. If you expire a directory that contains active files, those files will not appear in a subsequent query from the GUI. However, these files will display on the command line if you specify the proper query with a wildcard character for the directory. dsmc Help Client command line interface command to see help topics on the use of dsmc commands and option, plus message numbers. (Note that you have to scroll down to see everything.) When you invoke 'dsmc Help', there is no interaction with the TSM server. dsmc Incremental The basic command line client command to perform an incremental backup. Syntax: 'Incremental [] FileSpec(s)' FileSpec(s): Most commonly will be file system name(s). If you want to back up just a directory, how you specify the directory will make a difference... In specifying a file system name, you enter just the name, like "/home", and TSM will pursue backing up the full file system. But if you specify a directory name like /home/user1, only that single directory entry will be backed up: you need to specify /home/user1/ to explicitly tell TSM that rather than just back up that object, that you are telling it to back up a directory *and* what is contained in it. The number of FileSpecs is limited to 20; see "dsmc command line limits". Note that whereas scheduled backups result in each line being timestamped, this does not happen with command line incremental backups. (Neither running the command as a background process, nor redirecting the output will result in timestamping the lines.) The number of filespec operands may be limited: see "dsmc command line limits". See also: dsmc Selective dsmc LOOP To start a loop-mode (interactive) client session. Same as entering just 'dsmc'. 
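For example (a sketch only - the file system and directory names here are hypothetical; verify options against your client level), the following show the one-shot command line versus loop mode forms of an incremental backup:
dsmc incremental /home                      (one-shot: back up the whole /home file system)
dsmc incremental /home/user1/ -subdir=yes   (one-shot: a directory and its contents - note the trailing slash, and that options follow the operation)
dsmc                                        (enter loop mode, then at the interactive prompt enter: incremental /home)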
dsmc Query ACcess TSM client command to display a list of users whom you have given access rights to your Backup and/or Archive files, via dsmc SET ACcess, so that they can subsequently perform Restore or Retrieve using -FROMNode, -FROMOwner, etc. 'dsmc Query ACcess [-scrolllines] [-scrollprompt]' See also: dsmc SET Access dsmc Query ACTIVEDIRECTORY Windows TSM 4.1 client command to provide information about backed up Active Directory. Ref: Redpiece "Deploying the Tivoli Storage Manager Client in a Windows 2000 Environment" dsmc Query ARchive *SM client command to list specified Archive files. Syntax: 'dsmc Query ARchive [-DEscription="___"] [-FROMDate=date] [-TODate=date] [-FROMNode=nodename] [-FROMOwner=ownername] [-SCROLLPrompt=value] [-SCROLLLines=number] [-SErvername=StanzaName] [-SUbdir=No|Yes] FileSpec(s)' The number of FileSpecs is limited to 20; see "dsmc command line limits". Wildcard characters in the filename(s) can be passed to the Archive command for it to expand them: this avoids the shell implicitly expanding the names, which can result in the command line arguments limit being exceeded. For example: instead of coding: dsmc Query ARchive myfiles.* code: dsmc Query Archive 'myfiles.*' or... dsmc Query Archive myfiles.\* Displays: File size, archive date and time, file name, expiration date, and file description (but not file owner). Performing a wide search for your archive files is a challenge. You'd like to say "look for all my archive files, beginning at the root of the mounted file systems". But it doesn't want to comply. What you have to do is restrict the search to a file system. For example, if your file activity is in /home, you can do: dsmc q archive /home/ -subdir=yes -desc="whatever" Note the foolishness of these client commands: unless you code a slash (/) or slash-asterisk (/*) at the end of the directory name, the commands assume that you are looking for an individual *file* of that name, and turns up nothing! Note: Root can see the archive files owned by others, but the query does not reveal file owners. Note that you can query across nodes, but only if the file system architectures are compatible. See also: dsmc Query Backup across architectural platforms dsmc Query Backup *SM client command to list specified backup files, issued as: 'dsmc Query Backup [options] ' Options: -DIrsonly: Display only directory names for backup versions of your files, as in: 'dsmc Query Backup -dirs -sub=yes '. -FROMDate=date -FROMTime=time -INActive To include Inactive files in the operation. All Active files will be displayed first, and then the Inactive ones. Note that files marked for expiration cannot be seen from the client, but can be seen in a server Select on the BACKUPS table. -SCROLLPrompt=Yes -SCROLLLines=number -SErvername=StanzaName -SUbdir=Yes -TODate=date -TOTime=time -DATEFORMAT, -FROMNode, -FROMOWNER, -NODename, -NUMBERFORMAT, -PASsword, -QUIET, -TIMEFORMAT, -VERBOSE The number of FileSpecs is limited to 20; see "dsmc command line limits". Note that it is not possible to use a filespec which is the top of your file system (e.g., "/" in Unix) and have dsmc report all files, regardless of filespace. It can't do that: you have to base the query on filespaces. Wildcards: Use only opsys (shell) wildcard characters, which can only be used in the file name or extension. They cannot be used to specify destination files, file systems, or directories. 
In light of this, you would best do 'Query Filespace' first to see what file systems were being backed up, rather than frustrate yourself trying to use wildcards which get you nowhere. This query command will display file size, backup timestamp, management class, active/inactive, and file; but there is no way to get file details such as username, group info, file timestamps, or even the type of file system object (to be able to distinguish between directories and files, for example): neither the -verbose nor -description CLI options help get more info. In contrast to the CLI, the GUI will provide such further info, via its View menu, "File details" selection - but this operates on one file at a time. Note that the speed of this query command in returning results bears no relationship to the speed of a restoral of the same files, both because of further *SM database lookup requirements and media handling. See also: dsmc and wildcards; DEACTIVATE_DATE dsmc Query Backup across architectural platforms Cross-platform querying of files works only on those platforms that understand the other's file systems, such as among Windows, DOS, NT, and OS/2; or among AIX, IRIX, and Solaris - and even there incompatibilities may exist. Macs can't be either the source or the target in moves from another platform. A succinct way to express the schism is to say that there are the "slash" and "backslash" camps, and that their files cannot mingle. See also: Restore across architectural platforms dsmc Query BACKUPSET *SM client command to query a backup set from a local file or the server, to see metadata about the Backup Set: its name, generation date, retention, and description. You must be superuser to query a backupset from the server. Syntax: 'Query BACKUPSET [Options] BackupsetName|LocalFileName' Note that there is no way from the client to query the contents of a backup set. See also: Backup Set; Query BACKUPSETContents dsmc Query CERTSERVDB Windows TSM 4.1 client command. Ref: Redpiece "Deploying the Tivoli Storage Manager Client in a Windows 2000 Environment" dsmc Query CLUSTERDB Windows TSM 4.1 client command. Ref: Redpiece "Deploying the Tivoli Storage Manager Client in a Windows 2000 Environment" dsmc Query COMPLUSDB Windows TSM 4.1 client command. Ref: Redpiece "Deploying the Tivoli Storage Manager Client in a Windows 2000 Environment" dsmc Query Filespace TSM client command to report filespaces known to the server for this client. The "Last Incr Date" column reflects the date of the last successful, full Incremental backup. If its value is null, it could be the result of: - The filespace having been created by Archive activity only. - Doing backups other than complete Incremental type (e.g., Selective, or Incremental on a subdirectory in the file system). - The Incremental backup having been interrupted. - The Incremental backup suffering from files changing during backup and you don't have Shared Dynamic copy serialization active, or files selected for backup disappear from the client before the backup can be done. - It's a filespace for odd backup types such as buta. Syntax: 'dsmc Query Filespace [-FROMNode=____]' See also: Query FIlespace dsmc Query INCLEXCL TSM 4.1: Formalized client command to display the list of Include-Exclude statements that are in effect for the client, in the order in which they are processed during Backup and Archive operations.
This is the best way to interpret your include-exclude statements, as it reports your client-based and server-based (Cloptset) specifications together. Report columns: Mode Incl or Excl Function Archive or All Pattern '#' appears at the front where '*' was coded for "all drives". Source File Where the include or exclude is: dsm.opt = Your client. Server = Cloptset. Operating System = Windows Registry value. This command is valid for all UNIX, all Windows, and NetWare clients. Historical notes: Was introduced in ADSMv3.PTF6 as an undocumented client command, like 'dsmc Query OPTION'. In TSM 3.7, Tivoli management decided that, because it was unsupported, it should not be a Query, but rather a Show command, being consistent with undocumented and unsupported SHow commands in the server. That command persisted into TSM 4.1.2, where the capability was formalized as the 'dsmc Query INCLEXCL' command. Customers still using it in older client levels need to realize that because it was "unsupported", it would not necessarily be capable of recognizing newer Exclude options, like EXCLUDE.FS (as was discovered). For example, if you have no EXCLUDE.FS statements coded and don't get the message "No exclude filespace statements defined.", then the Query code is behind the times. See also: dsmc SHow INCLEXCL dsmc Query Mgmtclass ADSM client command to display info about the management classes available in the active policy set available to the client. 'dsmc Query Mgmtclass [-detail] [-FROMNode=____]' where -detail reveals Copy Group info, which includes retention periods. dsmc Query Options Undocumented ADSM client command, contributed by developers, to report combined settings from the Client System Options file and Client User Options file. In ADSMv3, also shows the merged options in effect (those from dsm.opt and the cloptset). TSM: Replaced by 'show options'. dsmc Query RESTore ADSM client command to display a list of your restartable restore sessions, as maintained in the server database. Reports: owner, replace, subdir, preservepath, source, destination. Restartable sessions are indicated by negative numbers, and their Restore State is reported as "restartable". See also: RESTOREINTERVAL dsmc Query SChedule ADSM client command to display the events scehduled for your node. dsmc Query SEssion ADSM client command to display info about your ADSM session: current node name, when the session was established, server info, and server connection. dsmc Query SYSTEMInfo TSM 5.x Windows client meta command to provide a comprehensive report on the TSM Windows environment - options files, environment variables, files implicitly and explicitly excluded, etc. Creates a dsminfo.txt file. dsmc Query SYSTEMObject TSM 4.1 Windows client command to provide information about backed up System Objects. Ref: Redpiece "Deploying the Tivoli Storage Manager Client in a Windows 2000 Environment" dsmc Query Tracestatus ADSM client command to display a list of available client trace flags and their current settings. Ref: Trace Facility Guide dsmc REStore Client command to restore file system objects. 'dsmc REStore [FILE] [] []' Allowable options: -DIrsonly, -FILESOnly, -FROMDate, -FROMNode, -FROMOwner, -FROMTime, -IFNewer, -INActive, -LAtest, -PIck, -PITDate, -PITTime, -PRESERvepath, -REPlace, -RESToremigstate, -SUbdir, -TAPEPrompt, -TODate, -TOTime. The number of SourceFilespecs is limited to 20; see "dsmc command line limits". 
If you are restoring a directory, it is important that you specify the SourceFilespec with a directory indicator (slash (/) in Unix, backslash (\) in Windows, else the restore will conduct a prolonged search for what it presumes to be a file rather than a directory. This is particularly important for point-in-time restorals, where the client does a lot of filtering. See also: dsmc and wildcards; Restore... dsmc REStore BACKUPSET Client command to restore a Backup Set from the server, a local file, or a local tape device. The location of the Backup Set may be specified via -LOCation. The default location is server. Use client cmd 'dsmc Query BACKUPSET' to get metadata about the backup set. Use server cmd 'Query BACKUPSETContents' to either check the contents of the Backup Set or gauge access performance (which excludes the destination disk performance factors involved in a client dsmc REStore BACKUPSET). dsmc REStore REgistry TSM command to restore a Windows Registry. But it will restore only the most recent one, rather than an inactive version. You can manually restore an older version by using the GUI to restore the files to their original location, the adsm.sys directory. Start the Registry restore within the GUI with the command Restore Registry in the menu Utilities or within the ADSM CLI with REGBACK ENTIRE. Be sure that you check the Activate Key after Restore box in the dialog window. The ADSM client tries to restore the latest version of the files into the adsm.sys directory, but this time, you do not allow to replace the files on your disk. This will guarantee that the 'older' files will remain on the disk. The last dialog window which appears is a confirmation that the registry restore is completed and activated as the current registry. The machine must be rebooted for the changes to take effect. See also: REGREST dsmc RETrieve *SM client command to retrieve a previously Archived file. Syntax: 'dsmc RETrieve [options] SourceFilespec [DestFilespec]' where you may specify files or directories. Allowable options: -DEScription, -DIrsonly, -FILESOnly, -FROMDate, -FROMNode, -FROMOwner, -FROMTime, -IFNewer, -PIck, -PRESERvepath, -REPlace, -RESToremigstate, -SUbdir, -TAPEPrompt, -TODate, -TOTime. The number of SourceFilespecs is limited to 20; see "dsmc command line limits". dsmc SCHedule See: Scheduler, client, start manually dsmc Selective TSM client command to selectively back up files and/or directories that you specify. Syntax: 'dsmc Selective [-Options...] FileSpec(s)' Allowable options: -DIrsonly, -FILESOnly, -VOLinformation, -CHAngingretries, -Quiet, -SUbdir, -TAPEPrompt When files are named, the directories that contain them are also backed up, unless the -FILESOnly option is present. The number of FileSpecs is limited to 20; see "dsmc command line limits". To specify a whole Unix file system, enter its name with a trailing slash. You must be the owner of a file in order to back it up: having read access is not enough. (You get "ANS1136E Not file owner" if you try.) Your include-exclude specs apply to Selective backups. It is important to understand that a Selective backup is deemed "explicit": that you definitely want all the specified files backed up...WITHOUT EXCEPTION. Because of this, message ANS1115W and a return code 4 will be produced if you have an Exclude in play for an included object. 
Relative to Incremental backups, Selective backups are "out of band": they do not participate in the Incremental continuum, in several ways: - In a selective backup, copies of the files are sent to the server even if they have not changed since the last backup. This might result in having more than one copy of the same file on the server, and can result in old Inactive versions of the file being pushed out of existence, per retention versions policies. - The backup date will not be reflected in 'Query Filespace F=D', or in 'dsmc Query Filespace'. If you change the management class on an Include, Selective backup will cause rebinding of only the current, Active file being backed up: it will not rebind previously backed up files, as an unqualified Incremental will. See also: Selective Backup dsmc SET Access *SM client command to grant another user, at the same or different node, access to Backup or Archive copies of your files, which they would do using -FROMNode and -FROMOwner. Syntax: 'dsmc SET Access {Archive|Backup} {filespec...} NodeName [User_at_NodeName] [Options...]' The filespec should identify files, and not just name a directory. The access permissions are stored in the TSM database. Thus, the original granting client system may vanish and the grantee can still access the files. There is no check for either the node or user being known to the *SM server - though the node needs to be registered with the *SM server for that node and its user to subsequently access the data that you are authorizing access to, else error ANS1353E will be encountered. Note that this applies only to *your* specific files, even if you are root. That is, if you are root and attempt to grant file system access to root at another node, you will *not* be able to see files created by other users as you would as root on the native system. Inverse: 'dsmc Delete ACcess'. See also: dsmc Query ACcess; -FROMNode; -FROMOwner; -NODename dsmc SET Password *SM client command to change the ADSM password for your workstation. If you do not specify the old and new password parameters, you are prompted once for your old password and twice for your new password. Syntax: 'dsmc SET Password OldOne NewOne' dsmc SHow INCLEXCL TSM: Undocumented client command, contributed by developers, to evaluate your Include-Exclude options as TSM thinks of them. This command is invaluable in revealing the mingling of server-defined Include/Exclude statements and those from the client options file. Beware: In that this operation is unsupported, it may not be capable of recognizing newer Exclude options. For example, if you have no EXCLUDE.FS statements coded and don't get the message "No exclude filespace statements defined.", then the SHow code is behind the times. Shortcoming: Does not reveal the managment class which may be coded on Include lines...you have to browse your options file. Read the report from the top down. Remember that Include/Exclude's defined in the server Client Option Set in effect for this node will precede those defined on the client (additive). Report elements: No exclude filespace statements defined Means that there are no "EXCLUDE.FS" options defined in the client options file. No exclude directory statements defined Means that there are no "EXCLUDE.DIR" options defined in the client options file. No include/exclude statements defined Means that there are no "INCLExcl" options defined in the client options file. (Message shows up even in client platforms where INCLExcl is not a defined client option.) 
ADSM: 'dsmc Query INCLEXCL'. dsmc SHOW Options TSM client command to reveal all options in effect for this client. Note that output is more comprehensive than what is returned from the dsm GUI's Display Options selection. For example, this command will report InclExcl status whereas the GUI won't. ADSM: 'dsmc query options' (The ADSM query option command was an undocumented command developed for internal use. In support of this the command was changed in TSM to a show option command so that it fell in line with the standard ADSM/TSM conventions for non-supported commands.) dsmc status values (AIX) Do not depend upon 'dsmc' to yield meaningful return codes (see advisory under "Return codes"). However, observation shows that the dsmc command typically returns the following shell status values. 0 The command worked. In the case of a server query (Query Filespace) there were objects to be reported. 2 The command failed. In the case of a server query (Query Filespace) there were no objects to be reported. 168 The command failed for lack of server access due to no password established for "password=generate" type access and invoked by non-root user such that no password prompt was issued. Accompanied by message ANS4503E. (Don't confuse these Unix status values with TSM return codes.) dsmc.afs Command-line dsm.afs dsmc.nlm won't unload (Novell Netware) Have option "VERBOSE" in the options file, not "QUIET". Then, rather than unload the nlm at the Netware console, go into the dsmc.nlm session and press 'Q' to quit. dsmcad See: Client Acceptor Daemon (CAD) DSMCDEFAULTCOMMAND Undocumented ADSM/TSM client option for the default subcommand to be executed when 'dsmc' is invoked with no operands. Normally, the value defaults to "LOOP", which is what you are accustomed to in invoking 'dsmc', that being the same as invoking 'dsmc LOOP'. Conceivably, you might change it to something like HELP rather than LOOP; but probably nothing else. Placement: in dsm.opt file (not dsm.sys) dsmcdfs Command-line interface for backing up and restoring DFS fileset data, which this command understands as such, and so will properly back up and restore DFS ACLs and mount points, as well as directories and files. See also: dsmdfs dsmccnm.h ADSM 3.1.0.7 introduced a new performance monitoring function which includes this file. See APAR IC24370 See also: dsmcperf.dll; perfctr.ini dsmcperf.dll ADSM 3.1.0.7 introduced a new performance monitoring function which includes this file. See APAR IC24370 See also: dsmccnm.h.dll; perfctr.ini dsmcrash.log, dsmcrash.dmp TSM 5.2+ failure analysis data capture files. The object is to provide for "first failure data capture" of crashes by capturing the info by IBM facilities the first time the crash occurs. Dr. Watson itself does a nice job of this, but TSM should not depend upon Dr. Watson being installed or configured to capture the needed info. dsmcsvc.exe This is the NT scheduler service. It has nothing to do with the Web client or the old Web shell client. Use 'DSMCUTIL LIST' to get a list of installed services. dsmcutil.exe Scheduler Service Configuration Utility in Windows. Allows *SM Scheduler Services installation and configuration on local and remote Windows machines. The Scheduler Service Configuration Utility runs on Windows only and must be run from an account that belongs to the Administrator/Domain Administrator group. 
Syntax: 'dsmcutil Command Options' Example: update the node name and password to new node: 'dsmcutil update /name:"your service name" /node:newnodename /password:password' ADSMv2 name (dsmcsvci.exe in ADSMv3). Use 'DSMCUTIL LIST' to get a list of installed NT services. The /COMMSERVER and /COMMPORT options are used to override values in the client options file used by the service. They correspond to different client options depending on the communications method being used (and yes, there is a /CommMethod dsmcutil option). For TCP/IP, they correspond to -TCPServername and -tcpPort, respectively. Written by Pete Tanenhaus. Ref: Installing the Clients; dsmcutil.hlp file in the BAclient dir. dsmcsvci.exe ADSMv3 name (dsmcutil.exe in ADSMv2). dsmdf HSM command to display all file systems which are under the control of HSM. Does not display any which are not. Note that running the AIX 'df' command will show the file system twice - first as a device-and-filesystem and then as filesystem-and-filesystem, where the latter reflects the FSM overlay. Much the same comes out of an AIX 'mount' command. Invoke 'dsmmighelp' for assistance with all the HSM commands. dsmdfs GUI interface for backing up and restoring DFS fileset data, which this command understands as such, and so will properly back up and restore DFS ACLs and mount points, as well as directories and files. Its look and usage are exactly the same as 'dsm'. Notes: Do not try to select the type "AGFS" for backup - that is the aggregate. Instead, go into the type "DFS" file system. You should also define some VIRTUALMountpoints to be able to directly select within the "/..." file system. See also: dsmcdfs dsmdu HSM command to display *SM space usage for files and directories under the control of HSM, in terms of 1 KB blocks; that is, the true size of all files in a directory, whether resident or migrated. Syntax: 'dsmdu [-a] [-s] [Dir_Name(s)]' where -a shows each file -s reports just a sum total Dir_Name(s) One or more directories to report on. If omitted, defaults to the current dir. Contrast with the Unix 'du -sk' command, which can only report on files currently present in the directory, such that migrated files throw it off. Invoke 'dsmmighelp' for assistance with all the HSM commands. dsmerror.log Where information about processing errors is written. The DSM_LOG client environment variable may be used to specify a directory where you want the dsmerror.log to reside. If unspecified, the error log for a dsm or dsmc client session will be written to the current directory. ADSM doesn't want you to have dsmerror.log be a symlink to /dev/null: if it finds that case, it will actually remove the symbolic link and replace it with a real dsmerror.log file! (See messages ANS1192E and ANS1190E.) The error log for client root activity (HSM migration, etc.) will be /dsmerror.log. In Macintosh OS X, the default error log name is instead "TSM Error Log". Don't try to use a single dsmerror.log for all sessions in the system: It's unusual and unhealthy, from both logical and physical standpoints, to mingle the error logging from all sessions - which may involve simultaneous sessions. In such an error log, you want a clear-cut sequence of operations and consequences reflected. If you want all error logs to go to a single directory, consider creating a wrapper script for dsmc, named the same or differently, which will put all error logs into a single, all-writable directory, with an error log path spec which appends the username, for uniqueness and singularity.
The wrapper script would invoke dsmc with the -ERRORLOGname= option spec. Advisory: Exclude dsmerror.log from backups, to prevent wasted time and possible problems. See: DSM_LOG; ERRORLOGName; ERRORLOGRetention; dsierror.log dsmerror.log ownership The error log file will be owned by the user that initiated the client session. However, if another user subsequently invokes the client, it can try and fail to gain access to that file because of permissions problems. You could make the file "public writable", but that is problematic in mixing error logging, making for later confusion in inspection of that log. Each user should end up with a separate error log, per invocation from separate "current directory" locations. Try to avoid using the DSM_LOG client environment variable, which would force use of a single error log file for the environment. dsmfmt TSM server-provided command for AIX, to format file system "volumes", which can be spaces to contain the TSM database, recovery log, storage pool, or a file which serves as a random access storage pool. Not for AIX raw logical volumes or Solaris raw partitions: they do not need to be formatted by TSM, and the dsmfmt command has no provision for them (it only accepts file names). But note that Solaris raw partitions need to be formatted in OS terms. Note that dsmfmt does *not* update the dsmserv.dsk file to add the new server component: that happens under a dsmserv invocation. Located in /usr/lpp/adsmserv/bin/. The command *creates* the designated file, so the file must not already exist. Unix note: There is no man page! Ref: Administrator's Reference manual, Appendix A. The size to be specified is the desired size, in MB, not counting the 1 MB overhead that dsmfmt will add (so if you say 4MB, you will get a 5MB resultant file). So the size should always be an odd number. To format a database volume: 'dsmfmt -db DBNAME SizeInMB-1MB' To format a recovery log volume: 'dsmfmt -log DBNAME SizeInMB-1MB' To format a file as a storage pool: 'dsmfmt -data NAME SizeInMB-1MB' The name given the file is the name to be used for the storage volume when it is later defined to the server. What the utility does is not exciting: it writes the chars "Eric" repeatedly to fill the space. Beware the shell "filesize" limit preventing formatting of a large file. dsmfmt errno 27 (EFBIG - File too large) (errno = 27) It may be that your Unix "filesize" limit prohibits writing a file that large. Do 'limit filesize' to check. If that value is too small, try 'unlimit filesize'. If that doesn't boost the value, you need to change the limit value that the operating system imposes upon you (in AIX, change /etc/security/limits). Another cause: the JFS file system not configured to allow "large files" (greater than 2 GB), per Large File Enabled. Do 'lsfs -q' and look for the "bf" value: if "false", not in effect. dsmfmt errno 28 (ENOSPC - No space left on device) (errno = 28) No more disk blocks are left in the file system. Most commonly, this occurs because you simply did not plan ahead for sufficient space. In an AIX JFS enabled for Large Files, free space fragmentation may be the problem: there are not 32 contiguous 4 KB blocks available. dsmfmt "File size..." error With a very large format (e.g., 80 GB), the following error message appears: "File size for /directory/filename must be less than 68,589,453,312 bytes." You may be exceeding file size limits for your operating system, or in Unix may be exceeding the filesize resource limit for your process.
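As a worked example of the dsmfmt sizing arithmetic above (a sketch only - the volume path and size are hypothetical), to end up with a 2048 MB database volume you would specify 2047 to dsmfmt and then define the resulting file to the server:
dsmfmt -db /tsm/db/dbvol02.dsm 2047     (creates a 2048 MB file: 2047 MB requested + 1 MB dsmfmt overhead)
DEFine DBVolume /tsm/db/dbvol02.dsm     (issued in a dsmadmc administrative session, to define the formatted file as a database volume)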
dsmfmt performance Dsmfmt is I/O intensive. Beware doing it on a volume or RAID or path which is also being used for other I/O intensive tasks such as OS paging. dsmfmt.42 Version of dsmfmt for AIX 4.2, so as to support volumes > 2GB in size. In such a system, dsmfmt should be a symlink to dsmfmt.42 . Be sure to define the filesystem as "large file enabled". dsmhsm ADSM HSM client command to invoke the Xwindows interface. Note that there is no 'dsmhsmc' command for line-mode HSM commands. There are instead individual commands such as 'dsmdf', 'dsmdu', 'dsmrm', etc. Invoke 'dsmmighelp' for assistance with all the HSM commands. DSMI_CONFIG ADSM API: Environment variable pointing to the Client User Options file (dsm.opt). Note that it should point at the options file itself, not the directory that it resides in. Ref: "AFS/DFS Backup Clients" manual. DSMI_DIR ADSM API: The client environment variable to point to the directory containing dscameng.txt, dsm.sys, and dsmtca. Ref: "AFS/DFS Backup Clients" manual. DSMI_LOG ADSM API: Client environment variable to point to the *directory* where you want the dsierror.log to reside. (Remember to code the directory name, not the file name.) If undefined, the error log will be written to the current directory. Ref: "Installing the Clients" manual. DSMI_ORC_CONFIG TDP for Oracle environment variable, to point to the client user options file (dsm.opt). dsmInit() TSM API function to start a session from the TSM client to the TSM server. There can only be one active session open at a time within one client process. dsmlabel To label a tape, or optical disk, for use in a storage pool. (Tapes must be labeled to prevent overwriting tapes which don't belong to ADSM and to control tapes once ADSM has used them (and re-use when they become empty). Syntax: 'dsmlabel -drive=/dev/XXXX [-drive...] -library=/dev/lmcp0 [-search] [-keep] [-overwrite] [-format] [-help] [-barcode] [-trace]'. where the drive must be one which was specifically ADSM-defined, via SMIT. You can specify up to 8 drives, to more quickly perform the labeling. It will iteratively prompt for a label volsers so you can do lots of tapes. Type just 'dsmlabel' for full help. "-format" is effective only on optical cartridges. -barcode Use the barcode reader to select volumes: will cause the first six characters of the barcode to be used as the volume label. Dsmlabel does not change Category Codes. If you Ctrl-C the job, it will end after the current tape is done. Tapes new to a 3494 tape library will have a category code of Insert both before and after the dsmlabel operation. Ref: Administrator's Reference manual See also: 'LABEl LIBVolume'; "Tape, initialize for use with a storage pool". Newly purchased tapes should have been internally labeled by the vendor, so there should be no need to run the 'dsmlabel' utility. dsmls HSM command to list files in a directory and show file states. Syntax: 'dsmls [-n] [-R] [Filespec...]' where: -n Omits column headings from report. -R Traverses subdirectories. Note that it does not expand wildcard specifications itself, so you CANNOT code something like: dsmls /filesys/files.199803\* In report: Resident Size: Shows up as '?' if the path used is a symlink, because HSM is uncertain as to the actual filespace name. File State: m = migrated m (r) = migrated, with recallmode set to Readwithoutrecall '?' if the path used is a symlink. Note that the premigrated files are reported from the premigrdb database located in the .SpaceMan directory. 
Note that the command does not report when the file was migrated. dsmmigfs Add, dsmmigfs Update HSM: Command to add or remove space management, or to query it. 'dsmmigfs Add [-OPTIONS] FSname' causes: 1. Creates .SpaceMan dir in the filesys 2. Updates /etc/adsm/SpaceMan/config/dsmmigfstab to add the filesys definition to HSM, with selected options 3. Updates the /etc/filesystems stanza for the filesys to add a "nodename" entry is added, "mount" is changed to "false", and "adsmfsm=true" is added. 4. Mounts FSM over the AIX filesys. 5. Activates HSM management of it. But it does not result in that Filespace becoming known in the ADSM server: the first migration or backup will do that. Add/Update options: -HThreshold=N Specifies high threshold for migration from the HSM-managed file system to the HSM storage pool. -Lthreshold=N Specifies low threshold for migration from the HSM-managed file system to the HSM storage pool. (A low value is good for loading a file system, but not for keeping many files recalled.) -Pmpercentage=N The percentage of space in the file system that you want to contain premigrated files that are listed next in the migration candidates list for the file system. -Agefactor=N The age factor to assign to all files in the file system. -Sizefactor The size factor to assign to all files in the file system. -Quota=N The max number of megabytes (MB) of data that can be migrated and premigrated from the file system to ADSM storage pools. Default: the same number of MB as allocated for the file system itself. -STubsize=N The size of stub files left on the file system when HSM migrates files to ADM storage. Hints: Specifying a low Lthreshold value helps in file system loading by keeping migration active, to prevent message ANS4103E condition. dsmmigfs Deactivate/REActivate/REMove HSM: Command to deactivate, reactivate, or remove space management for a file system. Syntax: 'dsmmigfs Deactivate ' 'dsmmigfs REActivate ' 'dsmmigfs REMove ' dsmmigfs GLOBALDeactivate HSM: Command to deactivate or reactivate /GLOBALREActivate space management for all file systems on the client system. Syntax: dsmmigfs GLOBALDeactivate dsmmigfs GLOBALREActivate dsmmigfs Query HSM: Command to query space management settings for named or all HSM-controlled file systems. Syntax: 'dsmmigfs Query [ ]' dsmmigfs REMove HSM: Command to remove space management from a file system. Syntax: 'dsmmigfs REMove [FileSysName(s)>]' or use the GUI cmd 'dsmhsm'. This will perform a Reconcile, Expire, and then unmount of the FSM, also involving an update of /etc/filesystems in AIX. Make sure you are not sitting in that directory at the time, or the unmount will fail with messages ANS9230E and ANS9078W. It is best to do this *before* doing a Delete Filespace: if you do it after, you will have to do the Del Filespace twice to finally get rid of the file space. dsmmigfstab HSM: file system table naming the AIX file systems which are to be managed by HSM. Located in /etc/adsm/SpaceMan/config. Add file systems to the list via the dsmhsm GUI, or the 'dsmmigfs add FileSystemName' command. Query via: 'dsmmigfs query [FileSystemName...]' dsmmighelp HSM: Command to display usage information on its command repertoire. dsmmigquery HSM: Command to display space management information, such as migration candidates, recall list. 'dsmmigquery [-Candidatelist] [-SORTEDMigrated] [-SORTEDAll] [-Help] [file systems]' 'dsmmigquery [-Mgmtclass] [-Detail] [-Options]' Caution: defaults to current directory, so be sure to specify file system name. 
dsmmigrate HSM: Command to migrate selected files from a local file system to an ADSM storage pool. Syntax: 'dsmmigrate [-R] [-v] FileSpec(s)' where... -R Specifies recursive pursuit of subdirectories. -v Displays the name and size of each file migrated. If using a wildcard, it is faster to allow dsmmigrate to expand it per its own processing order, as in invoking like: 'dsmmigrate \*.gz' with the asterisk quoted so that ADSM expands it rather than the shell. To migrate all files in a file system: 'dsmmigrate /file/system/\*' To perform a dsmmigrate on a file, you must be the file's owner, else suffer ANS9096E. Note: For a large file system this may take some time, and depending upon the ADSM server configuration you might get message ANS4017E on the client, which would mean that the server waited up to its COMMTimeout value for the client to come back with something for the server to do, but got nothing, so the server dismissed the session. (Issue the server command 'Query OPTion' to see the prevailing CommTimeOut value, in seconds.) Dsmmigrate will typically generate dsmerror.log data in the current directory when given a wildcard and some of the files need not be migrated. dsmmigundelete HSM: Command to recreate deleted stub files, to reinstate file instances which were inadvertently deleted from the HSM-managed file system. (This command operates on whole file systems: you cannot specify single files.) This operation depends upon the original directory structure being intact: it will not recreate a stub file where the file's directory is missing. Thus, this command cannot be used as a generalized restoral method. The stub contains information ADSM needs to recall the file, plus some amount of user data. ADSM needs 511 bytes, so the amount of data which can also reside in the stub is the defined stub size minus the 511 bytes. When you do a dsmmigundelete, ADSM simply puts back enough data to recreate the stubs, with 0 bytes of user data (since you don't want us going out to tapes to recover the rest of the stub). When the file gets recalled, then migrated again, we once again have user data that we can leave in the stub, so the stub size goes back to its original value. This goes to show that the leading file data in the stub file is a copy of what's in the full, migrated file. See also: Leader data dsmmode HSM: Command to set one or more execution modes which affect the HSM-related behavior of commands: -dataaccess controls whether a migrated file can be retrieved. -timestamp controls whether the file's atime value is set to the current time when accessed. -outofspace controls whether HSM returns an error code rather than try to recover from out-of-space conditions. -recall controls how a migrated file is recalled: Normal or Migonclose. Note, however, that the outofspace parameter will *not* prevent commands like 'cp' from encountering "No space left on device" conditions. dsmmonitord HSM monitoring daemon, started by /etc/inittab's "adsmsmext" entry invoking /etc/rc.adsmhsm . It is busy: every 2 seconds it looks for file-system-full conditions so as to start migration; and every 5 minutes it does threshold migrations (or at the interval specified via the CHEckthresholds option in the Client System Options file (dsm.sys)).
This daemon also runs dsmreconcile (from either the directory specified via DSM_DIR or the directory whence dsmmonitord was invoked) according to the interval defined via the RECOncileinterval Client System Options file (dsm.sys) option, and automatically before performing threshold migration if the migration candidates list for a file system is empty. Be aware that this daemon does not help if the user attempts to recall a file of a size which causes the local file system to be exhausted: what happens is that the user gets an "ANS9285K Cannot complete remote file access" error message - which says nothing about this. Full usage (as found in the binary): 'dsmmonitord [-s seconds] [-t directory] [-v]' dsmmonitord PID Is remembered in file: /etc/adsm/SpaceMan/dsmmonitord.pid dsmnotes The backup client command for the Lotus ConnectAgent. Sample usage: 'dsmnotes incr d:\notes\data\mail\johndoe.nsf' DSMO_PSWDPATH See: aobpswd dsmperf.dll You mean: dsmcperf.dll (q.v.) dsmq HSM: Command to display all information for all files currently queued for recall. Columns: ID Recall ID DPID The PID of the dsmrecall daemon. Start Time When it started INODE Inode number of the file being recalled. Filesystem File system involved. Original Name Name of file that was migrated. dsmrecall HSM: Command to explicitly demigrate (recall) files which were previously migrated. Syntax: 'dsmrecall [-recursive] [-detail] Name(s)' The -detail option alas shows details only upon completion of the full operation: it does not reveal progress. If using a wildcard, it is *much* faster to allow dsmrecall to expand it per its own processing order: having the shell expand it forces dsmrecall to get the files off tape in collating order, rather than the order it knows them to be on the tape(s) - so invoke like: 'dsmrecall somefiles.199807\*' with the asterisk quoted so *SM expands it rather than the shell. Note that during a recall, as the recalled file is being written back to disk, its timestamp will be "now"; thereafter it will be set to the file's original timestamp. Dsmrecall will typically not generate dsmerror.log data in the current directory when given a wildcard and some of the files need not be recalled. In the presence of msg "ANR8776W Media in drive DRIVE1 (/dev/rmt1) contains lost VCR data; performance may be degraded.", it may be faster to do a Restore of the files to a temp area, if you simply want to reference the data. dsmrecalld HSM daemon to perform the recall of migrated files. It is started by /etc/inittab's "adsmsmext" entry invoking /etc/rc.adsmhsm . Control via the MINRecalldaemons and MAXRecalldaemons options in the Client System Options file (dsm.sys). Default: 20 Full usage (as found in the binary): dsmrecalld [-t timeout] [-r retries] [{-s | -h}] [{-i | -n}] [-v] -t timeout in seconds; only valid with -s -r number of times to retry recall; only valid with -s -s soft recall, will time out; default -h hard recall, will not time out -i interruptable, can be cancelled; default -n non-interruptable, cannot be cancelled dsmrecalld PID Is remembered in file: /etc/adsm/SpaceMan/dsmrecalld.pid dsmreconcile HSM: Client root user command to synchronize client and server and build a new migration candidates list for a file system. Is usually run automatically by dsmmonitord, invoking dsmreconcile once for each controlled file system, at a frequency (mostly) controlled by the RECOncileinterval Client System Options file (dsm.sys) option. Can also be run manually as needed.
Syntax: 'dsmreconcile [-Candidatelist] [-Fileinfo] [FileSystemName(s)]' Note that HSM will also run reconciliation automatically before performing threshold migration if the migration candidates list for a file system is empty. Msgs: "Note: unable to find any candidates in the file system." can indicate that all files have been migrated. See also: Expiration (HSM); MIGFILEEXPiration; Migration candidates list (HSM). dsmreg.lic ADSMv2 /usr/lpp/adsmserv/bin executable module for converting given license codes into encoded hex strings which are then written to the adsmserv.licenses file. See: adsmserv.licenses; License...; REGister LICense dsmrm HSM: Command to remove a recall process from the recall queue. dsmsched.log The schedule log's default name, as it resides in the standard ADSM directory. Can be changed via the SCHEDLOGname Client System Options file (dsm.sys) option. To verify the name: in ADSM, do 'dsmc q o' and look for SchedLogName; in TSM, do 'dsmc show opt'. Obviously, you need write access to the directory in which the log is to be produced in order to have a log. See: SCHEDLOGname dsmscoutd HSM 5+ Scout Daemon, which seeks migration candidates. Its operation is governed by the Maxcandidates value. dsmserv Command in /usr/lpp/adsmserv/bin/ to start the ADSM server. This is something which would be done by the /usr/lpp/adsmserv/bin/rc.adsmserv shell script being executed by the "autosrvr" line which ADSM installation added to the /etc/inittab file. Command-line options: -F To overwrite shared memory when restarting the server after a server crash. Code before other options. noexpire Suppress inventory expiration, otherwise specified via EXPINterval. -o FileName Specifies the server options file to be used, as when running more than one server. quiet Start the server as a daemon program. The server runs as a background process, and does not read commands from the server console. Output messages are directed to the SERVER_CONSOLE. Note that there is no option for preventing client sessions from starting, which can be inconvenient in some circumstances, like restarting after a hinky problem. Performance: dsmserv performs regular fsync() calls. When used for stand-alone operations like database restorals, the run time can be 6 hours with the syncing and 15 minutes without. Since dsmserv is an unstripped module, there is the opportunity to CSECT-replace the fsync by statically linking in a dummy fsync function which simply returns (keeping dsmserv from getting fsync from the shared library). See also: Processes, server; dsmserv.42 Ref: ADSM Installing the Server... TSM Admin Guide chapter on Managing Server Operations; Starting, Halting, and Restarting the Server dsmserv AUDITDB A salvage command for when *SM is down with a bad database or disk storage pool volume, to look for structural problems and logical inconsistencies. Run this command *before* starting the server, typically after having reloaded the database. Syntax: 'DSMSERV AUDITDB [ADMIN|ARCHSTORAGE|DISKSTORAGE| INVENTORY|STORAGE] [FIX=No|Yes] [Detail=No|Yes] [LOGMODE=NORMAL|ROLLFORWARD] [FILE=ReportOutputFile]' The various qualifiers represent partial database treatments. Reportedly, running with no qualifiers does everything represented in the partial qualifiers. ARCHDESCRIPTIONS [FIX=Yes] To fix a corrupted database as evidenced in message 'Error 1246208 deleting row from table "Archive.Descriptions"'. DISKSTORAGE: Causes disk storage pool volumes to be audited. FIX=No: Report, but do not fix, any logical inconsistencies found.
If the audit finds inconsistencies, re-issue the command specifying FIX=Yes before making the server available for production work. Because AUDITDB must be run with FIX=Yes to recover the database, the recommended usage in a recovery situation is FIX=Yes the first time. FIX=Yes: Fix any inconsistencies and issues messages indicating the actions taken. Detail=No: Test only the referential integrity of the database, to just reveal any problems. This is the default. Detail=Yes: Test the referential integrity of the database and the integrity of each database entry. LOGMODE=NORMAL: Allows you to override your server's Rollforward logmode, to avoid running out of recovery log space. (Note that Logmode is controlled via the Set command, which you obviously cannot perform when you cannot bring your server up because it has the problem you are addressing.) Tivoli recommends opening a problem report with them before running this audit - under their guidance. Per their advisory: "If errors are encountered during normal production use of the server that suggest that the database is damaged, the root cause of the errors must be determined with the assistance of IBM Support. Performing DSMSERV AUDITDB on a server database that has structural damage to the database tables may result in the loss of more data or additional damage to the database." Be aware that such an audit cannot correct all problems: it will fail on an inconsistency in the database, as one example. If your database is TSM-mirrored, you should first set the MIRRORREAD DB server option to VERIFY: this will force the server to compare database pages across the mirrored volumes, and if an inconsistency is found on a given mirror volume, that volume will be marked as stale and it will be forced to resynchronize with a remaining valid volume. Runtime: Beware that this command is not optimized, and can take a very long time to run, proportional to the amount of data to be audited. Some customers report it running over 4 days for an 8 GB database! (Processing time has been observed to be non-linear, as in one customer finding it taking over 3 days to get halfway through the database, then finishing less than a day later.) If coming from a TSM v4 system, you may see dramatically lesser runtimes if you first run CLEANUP BACKUPGROUP. Consult the Readme and Support if unsure. Msgs: ANR0104E; ANR4142I; ANR4206I; ANR4306I Ref: Admin Ref, Appendix See also: AUDit DB (online cmd) See also separate TSM DATABASE AUDITING samples towards the bottom of this doc. dsmserv AUDitdb, interrupt? There's no vendor documentation saying whether an AUDitdb can be stopped (as in killing its process), safely. The process reportedly disregards Ctrl-C (SIGINT) and simple 'kill' command (SIGTERM): only a 'kill -9' (SIGKILL) terminates the process. Customer reports of having stopped the process tell of no (known) ill effects; but that is non-deterministic: hold onto that backup tape! dsmserv AUDitdb archd fix=yes Undocumented ADSM initial command to correct a corrupted database as evidenced in message 'Error 1246208 deleting row from table "Archive.Descriptions"'. dsmserv DISPlay DBBackupvolumes Stand-alone command to display database backup volume information when the volume history file (e.g., /var/adsmserv/volumehistory.backup) is not available. 
Full syntax: 'DSMSERV DISPlay DBBackupvolumes DEVclass=DevclassName VOLumenames=VolName[,VolName...]' Example: 'DSMSERV DISPlay DBBackupvolumes DEVclass=OURLIBR.DEVC_3590 VOLumenames=VolName[,VolName...]' Note that this command will want to use a tape drive - one specified in the file named by the DEVCONFig dsmserv.opt parameter - to mount the tape R/O. (Drive must be free, else get ANR8420E I/O error.) You can use this command form to try to identify the database backup tapes when the volume history file is absent, not up to date, or lacking DBBACKUP entries. The command requires the devconfig file - which may also have been lost - and entails going hunting through a possibly large number of tapes until you finally find the latest dbbackup tape. See also: dsmserv RESTORE DB, volser unknown dsmserv DUMPDB ADSM database salvage function, to be used in conjunction with DSMSERV LOADDB (q.v.). See also: STAtusmsgcnt dsmserv DUMPDB and LOADDB These are part of a salvage utility that was a stop-gap solution for ADSM version 1 until the database backup and recovery functions could be added in ADSM version 2. Unless you are on ADSM version 1 (which is unsupported except for the VSE server), you should be using the BAckup DB and DSMSERV RESTORE DB functions to back up/recover your database (and also for migrating the ADSM server to a different hardware server of the same operating system type). The circumstances under which you might use DUMPDB and LOADDB today are very rare and probably would involve the absence of regular ADSM database backups (regular database backups using BAckup DB are obviously recommended) and are probably recommended only under the direction of IBM ADSM service support. See also: dsmserv LOADDB; LOADDB dsmserv EXTEND LOG FileName N_MB Stand-alone command to extend the Recovery Log to a new volume when its size is insufficient for ADSM start-up. (Note that you are to add a new volume, *not* extend the existing one.) The new volume should have been separately prepared by running 'dsmfmt -log ...'. The extend operation will run dsmserv for the short time that it takes to extend the log and format the new volume, plus add the new volume name to the dsmserv.dsk file, whereafter the stand-alone server process shuts down. Thereafter you may bring up the server normally. dsmserv FORMAT Ref: Administrator's Reference, TSM Utilities appendix. dsmserv INSTALL Changed to DSMSERV FORMAT in ADSMv3. Ref: Administrator's Reference, Appendix D. dsmserv LOADDB Stand-alone command to reload the ADSM database after having done 'DSMSERV DUMPDB' and 'DSMSERV INSTALL'. After a DUMPDB, it is best to perform the LOADDB to a database having twice the capacity of the amount that was dumped... As the Admin Guide says: "The DSMSERV LOADDB utility may increase the size of the database. The server packs data in pages in the order in which they are inserted. The DSMSERV DUMPDB utility does not preserve that order. Therefore, page packing is not optimized, and the database may require additional space." See topic "ADSM DATABASE STRUCTURE AND DUMPDB/LOADDB" at the bottom of this file for further information. This operation takes a looooooong time: it slows as it gets further along, with tremendous disk activity. Example: 'DSMSERV LOADDB DEVclass=OURLIBR.DEVC_3590 VOLumenames=VolName[,VolName...]' Note: After the reload, the next BAckup DB will restart your Backup Series number as 1. See also: Backup Series; STAtusmsgcnt dsmserv RESTORE DB A set of commands for restoring the *SM server database, under varying conditions.
If the database and/or recovery log volumes are destroyed, use dsmfmt to prepare replacements AT LEAST EQUAL IN CAPACITY to the originals. (Failure to make them equal in capacity can result in server failure.) DO NOT reformat the recovery log volume if doing a rollforward recovery: you need its data for the recovery. Next, you have to initialize them by running DSMSERV INSTALL. Then you can run the RESTORE DB command. You would be wise to set server config file option DISABLESCheds before proceeding. With most forms of Restore DB, you will also need a copy of the volume history file and your server options file with its pointer to the vol history. This makes the RESTORE DB process simpler as you can just specify a date rather than having to work out which backup is on what volser. The -todate=xx/xx/xxxx -totime=xx:xx options allow you to select which database backup(s) to restore from; NOT a point at which the recovery log should be rolled forward to. ==> Do NOT restart the server between the install and the restore db command: doing this would delete all the entries in the volume history file! Do's and Don'ts: Realize that Restore DB was designed to restore back onto the same machine where the image was taken: that is, Restore DB is not intended to serve as a cross-platform migration mechanism. You can do 'DSMSERV RESTORE DB' across systems of the same architecture: see the Admin Guide, Managing Server Operations, Moving the Tivoli Storage Manager Server, for the rules. It is illegal, risky, and in some cases logically impossible to employ Restore DB to migrate the *SM database across platforms, which is to say different operating systems and hardware architectures. (See IBM site TechNote 1137678.) The same considerations apply here as in moving any other kind of data files across systems and platforms: - Character set encodings may differ: ASCII vs. EBCDIC; single-byte vs. double-byte. - Binary byte order may differ: "big-endian" vs. "little-endian", as in the classic Intel architecture conventions v. the rest of the world. - Binary unit lengths may differ: as in 32-bit word lengths vs 64-bit. - The data may contain other environmental dependencies. Simply put, the architectures and software levels of the giving and taking systems must be equivalent. In general, use Export/Import to migrate across systems. (One customer reported successfully migrating from AIX to Solaris via Restore DB; but the totality of success is unknown, and it might succeed only with very specific levels of the two operating systems and *SM servers.) See also IBM site TechNote 1111554 ("Post Database Restore Steps"). See also: Export dsmserv RESTORE DB, volser unknown TSM provides a command to assist with the situation where you need to perform a TSM database restoral and the volume history information has been lost, as in a disk failure. See: dsmserv DISPlay DBBackupvolumes The command requires the devconfig file - which may also have been lost - and entails going hunting through a possibly large number of tapes until you finally find the latest dbbackup tape. What you really need in such circumstances is something to dramatically reduce the number of volumes to search through... One 3494 user reported combined loss of the *SM database and volume history backup file, leaving no evidence of what volume to use in restoring the database. That's a desperate situation, calling for desperate measures...
If you know the approximate time period of when your dbbackup was taken, you can narrow it down to a few tape volumes and then try each in a db restore: only one tape in a given time period can be a dbbackup, and the others ordinary data, which db restore should spit out... Go to your 3494 operator panel. Activate Service Mode. In the Utilities menu, choose View Logs. Go into the candidate TRN (transactions) log. Look for MOUNT_COMPLETE, DEMOUNT_COMPLETE entries in your time period. The volser is in angle brackets, like <001646001646>, wherein the volser is 001646. (Watch out for the 3494 PC clock being mis-set.) dsmserv RESTORE DB Preview=Yes Stand-alone command to display a list of the volumes needed to restore the database to its most current state, without performing the restoral operation. You must be in the directory with the dsmserv.opt file, else will get ANR0000E message; so do: 'cd /usr/lpp/adsmserv/bin' 'DSMSERV RESTORE DB Preview=Yes' dsmserv runfile Command for the *SM server to run a single procedure encoded into a file, and halt upon completing that task. Syntax: 'dsmserv runfile FileName' where the file contains one or more TSM server commands, one per line (akin to a TSM macro). This command is most commonly run to load the provided sample scripts: dsmserv runfile scripts.smp and to initialize web admin definitions: dsmserv runfile dsmserv.idl Ref: Admin Ref manual; Quick Start manual See also: Web Admin dsmserv UNLOADDB TSM 3.7 Stand-alone command to facilitate defragmentation (reorganization) of the TSM database, via unload-reload, unloading the database in key order for a later reload which preserves that order. (The operation does not "compress" the db, as an early edition of the TSM Admin Guide stated, but rather reclaims empty space by compacting database records - putting them closer together.) Syntax: DSMSERV UNLOADDB DEVclass=DevclassName [VOLumenames=Volnameslist] [Scratch=Yes|No] [CONSISTENT=Yes|No] where: CONSISTENT Specifies whether server transaction processing should be suspended so that the unloaded database is a transactionally-consistent image. Default: Yes The procedure: - Shut down the server. - dsmserv unloaddb devclass=tapeclass scratch=yes - Halt that server instance. - Reinitialize the db and recovery log as needed, as in: dsmserv format 1 log1 2 db1 db2 - Reload the database: dsmserv loaddb devclass=tapeclass volumenames=db001,db002,db003 - Consider doing a DSMSERV AUDITDB to fix any inconsistencies before putting the database back into production. Ref: Admin Guide topic "Optimizing the Performance of the Database and Recovery Log"; Admin Ref appendix A The Tivoli documentation is superficial, failing to provide information as to how long you can expect your database to be out of commission, the risks involved, the actual benefits, or how long you can expect them to last. For execution, there is no documentation saying what constitutes success or failure, what messages may appear, or what to do if the operation fails. Is it worth it? Customers who have tried the operation report improvements of about 10% immediately after the reload, and very long runtimes (maybe days). It is probably not worth it. dsmserv UPGRADEDB Update some of the database meta-data (dsmserv UPGRADEDB), which would be invoked - only if it needed to be invoked - when the server is down. Conventionally, a product upgrade from one release to the next will require an UPGRADEDB; but when going between PTFs and patches of the same release an UPGRADEDB should not be required.
It does not have to convert any database data - and thus the operation is insensitive to the size of the actual database and should take seconds to execute regardless of the database size. All your policies, devices, etc. will be preserved. Note that upgrades which do not involve any change in data formats will not utilize an Upgradedb. Upgrades that do involve data format changes will usually perform the Upgradedb automatically - or in some cases tell the customer that it needs to be done. So, usually you do not have to manually invoke an Upgradedb. Naturally, server upgrades are performed when the server is down. DSMSERV_ACCOUNTING_DIR Server environment variable to specify the directory in which the dsmaccnt.log accounting file will be written. If the directory doesn't exist, or the environment variable is not set, the current directory is used for the accounting file. NT note: a Registry key instead specifies this location. DSMSERV_CONFIG Server environment variable to point to the Server Options file. DSMSERV_DIR Server environment variable to point to the directory containing the server executables. DSMSERV_OPT Server environment variable to point to the server options file. dsmserv.42 Version of dsmserv for AIX 4.2, so as to support ADSM file system volumes > 2GB in size. In such a system, dsmserv should be a symlink to dsmserv.42 . Be sure to define the filesystem as "large file enabled". dsmserv.cat ADSM V.3 message catalog installed in /usr/lib/nls/msg/en_US. dsmserv.dsk File which names the database and recovery log files/volumes, each on its own line, as referenced by the server when it starts. Created: Via 'dsmserv format', as specified in the Quick Start manual. Updated: Each time you define or delete server volumes. (Humans should never have to touch this file.) Where: AIX: /usr/lpp/adsmserv/bin/ Sun: /opt/IBMadsm-s/bin/ At start-up, dsmserv.dsk is used to find ONE data base or recovery log volume: the rest of the volumes are located through a structure in the first 1 MB that is added to each of the data base and recovery log volumes. That is, each db and log file contains info about all the other db and log files, so in a pinch you could start the server by creating a minimal dsmserv.dsk file containing just one db and log file name: the server will thereafter update dsmserv.dsk with all the log and db file names. dsmserv.err Server error log, in the server directory, written when the server crashes, ostensibly when the server is being run in the foreground. Seen to contain messages: ANR7833S, ANR7834S, ANR7837S, ANR7838S See also: dsmsvc.err DSMSERV.IDL See: Web Admin (webadmin) dsmserv.lock The TSM server lock file. It both carries information about the currently running server, and serves as a lock point to prevent a second instance from running. Sample contents: "dsmserv process ID 19046 started Tue Sep 1 06:46:25 1998". Msgs: ANR7804I See also: adsmserv.lock dsmserv.opt Server Options File, normally residing in the server directory. Specifies a variety of server options, one of the most important being the TCP port number through which clients reach the server, as coded in their Client System Options File. Note that the server reads the file from top to bottom during restart. Some options, like COMMmethod, are additive, while others are unique specifications. For unique options, the last one specified in the file is the one that takes effect.
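To illustrate the additive-versus-unique distinction, a minimal dsmserv.opt sketch (file names and values are placeholders, not recommendations):
    * Comment lines begin with an asterisk.
    * COMMmethod is additive: both methods below end up enabled.
    COMMmethod    TCPIP
    COMMmethod    SHAREDMEM
    * TCPPort is a unique option: were it coded twice, the last instance would take effect.
    TCPPort       1500
    DEVCONFig     devconfig.out
    VOLUMEHistory volhist.out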
Updating: Whereas the server reads its options file only at start time, changes made to the file via a text editor will not go into effect until the next server restart. Use the SETOPT command (q.v.) to both update the file and put some options into effect. (Beware, however, that the command appends to the file, which can result in there being multiple, redundant options in the file which you will want to clean up.) The DSMSERV_CONFIG environment variable, or the -o option of the 'dsmserv' command, can be used to specify an alternate location for the file. Ref: Admin Ref manual, appendix "Server Options Reference" See also: Query OPTion dsmserv's, number of See: Processes, server dsmsetpw HSM: Command to change the ADSM password for your client node. dsmsm HSM: Space monitor daemon process which runs when there are space-managed file systems defined in /etc/adsm/SpaceMan/config/dsmmigfstab dsmsm PID HSM: Is remembered in file: /etc/adsm/SpaceMan/config/dsmmigfstab.pid dsmsnmp ADSMv3: SNMP component. Must be started before the ADSM server. dsmsta Storage Agent. dsmstat Monitors NFS mounted filesystems to be potentially backed up. DSM_DIR also points to this. See: NFSTIMEout dsmsvc.err Server error log, in the server directory, written when the server crashes, ostensibly when the server is being run in the background. See also: dsmserv.err DSMSVC.EXE Service name of the web server bound to TCP port 1580. dsmtca Trusted Communication Agent, aka Trusted Client Agent program. Employing the client option PASSWORDAccess Generate causes dsmtca to run as root. For non-root users, the ADSM client uses a trusted client (dsmtca) process to communicate with the ADSM server via a TCP session. This dsmtca process runs setuid root, and communicates with the user process (dsmc) via shared memory, which requires the use of semaphores. So for non-root users, when you start a dsmc session, it hands data to dsmtca as an intermediary to send to the server. The DSM_DIR client environment variable should point to the directory where the file should reside. dsmulog You can capture *SM server console messages to a user log file with the *SM dsmulog utility. You can invoke the utility with the ADSMSTART shell script which is provided as part of the ADSM AIX server package. You can have the server messages written to one or more user log files. When the dsmulog utility detects that the server it is capturing messages from is stopped or halted, it closes the current log file and ends its processing. (/usr/lpp/adsmserv/bin/) Ref: Admin Guide; Admin Ref; /usr/lpp/adsmserv/bin/adsmstart.smp dsmwebcl.log The Web Client log, where all Web Client messages are written. (Error messages are written to the error log file.) Location: Either the current working directory or the directory you specify with the DSM_LOG environment variable. See also: Web client Dual Gripper 3494 feature to add a second gripper to the cartridge picker ("hand") so that it can hold one cartridge to be stored and grab one for retrieval. This feature makes possible "Floating-home Cell" so that cartridges need not be assigned fixed cells. "Reach" factors result in the loss of the top and bottom two rows of your storage cells, so consider carefully if you really need a dual gripper. (Except in a very active environment with frequent tape transitions, storage cells are preferred over having a dual gripper.) The gripper is not controlled by host software: it is a 3494 Library Manager optimizer function (i.e., microcode).
The dual gripper is only used during periods of high activity (as determined by the LM). Dual Gripper usage statistics Gripper usage info is available from the 3494's Service Mode... Go to the Service menu thereunder, and select View Usage Info. DUMPDB See: DSMSERV DUMPDB dumpel.exe Windows: Dump Event Log, a Windows command-line utility that dumps an event log for a local or remote system into a tab-separated text file. This utility can also be used as a filter. DURation In schedules: The DURation setting specifies the size of the window within which the scheduled event can begin - or resume. For example, if the scheduled event starts at 6 PM and has a DURation of 5 hours, then the event can start anywhere from 6 PM to 11 PM. Perhaps more importantly, if the scheduled event is preempted (msg ANR0487W), ADSM will know enough to restart the event if resources (i.e., tape drives) become available within the window. DVD as server serial media Backups can be performed to DVD, in place of tape. The Admin Guide manual provides some guidance in configuring for this. One Windows customer reports success in a somewhat different way: Use the Windows program called DLA (Drive Letter Assignment) from Veritas, often included in the burner software; or use a package like IN-CD from Nero. You can then format the DVD (or CD) like a diskette. Then define a device-class of removable file and a manual library. Now you can write directly on the CD or DVD. See also: CD... DYnamic An ADSM Copy Group serialization mode, as specified by the 'DEFine COpygroup' command SERialization=DYnamic operand spec. This mode specifies that ADSM accepts the first attempt to back up or archive an object, regardless of any changes made during backup or archive processing. See: Serialization. Contrast with Shared Dynamic, Shared Static, and Static. See also CHAngingretries option. DynaText The hypertext utility in ADSMv2 to read the online Books on most platforms supporting ADSM: all Unixes, Macintosh, Microsoft Windows. Obsolete, with the advent of HTML and PDF. 'E' See: 3490 tape cartridge; Media Type E-fix IBM term for an emergency software patch created for a single customer's situation. As such, e-fixes should not be adopted by other customers. See also: Patch levels E-Lic Electronic Licensing - A key file that is on the CD, but not located on any download sites. Thus you must have the CD loaded in most cases before being able to use the downloaded filesets. EBU Enterprise Backup Utility, used with Oracle 7 databases. Involves a Backup Catalog. See "RMAN" for Oracle 8 databases. ECCST Enhanced Capacity Cartridge System Tape; a designation for the 3490E cartridge technology, which reads and writes 36 tracks on half-inch tape. Sometimes referred to as MEDIA2. Contrast with CST and HPCT. See also: CST; HPCT; Media Type .edb Filename suffix for MS Exchange Database. Related: .pst Editor ADSMv3 client (dsm.opt or dsm.sys) option controlling the command line interface editor, which allows you to recall a limited number of previously-issued commands (up to 20) via the keyboard (up-arrow, down-arrow), and edit them (up-arrow, Delete, Insert keys). Specify: Yes or No Default: Yes Ref: B/A Client manual, Using Commands, Using Previous Commands EHPCT 3590 Extended High Performance Cartridge Tape, as typically used in 3590E drives.
See: 3590 'K' See also: CST; HPCT Eject tape from 3494 library Via TSM server command: 'CHECKOut LIBVolume LibName VolName [CHECKLabel=no] [FORCE=yes] [REMove=Yes]' where the default REMove=Yes causes the ejection. Via Unix command you can effect this by changing the category code to EJECT (X'FF10'): 'mtlib -l /dev/lmcp0 -vC -V VolName -t ff10' Ejections, "phantom" Tapes get ejected from the tape library without TSM having done it. Customers report the following causes: - Drive incorrectly configured by installation personnel. Reads fail, and the drive (erroneously) signals the library manager that the tape is so bad that it should be spit out. - Excessive SCSI chain length. Caused severe errors such that the tape was rejected. Ejects, pending Via Unix command: 'mtlib -l /dev/lmcp0 -qS' Elapsed processing time Statistic at end of Backup/Archive job, recording how long the job took, in hours, minutes, and seconds, in HH:MM:SS format, like: 00:01:36. This is calculated by subtracting the starting time of a command process from the ending time of the completed command process. Shows up in server Activity Log on message ANE4964I. ELDC Embedded Lossless Data Compression: the compression algorithm used in the 3592. See also: ALDC; LZ1; SLDC Element Term used to describe some part of a SCSI Library, such as the 3575. The element number allows addressing of the hardware item as a subset of the SCSI address. An element number may be used to address a tape drive, a tape storage slot, or the robotics of the library. In such libraries, the host program (TSM) is physically controlling actions and hence specific addressing is necessary. In libraries where there is a supervisor program (e.g., 3494), actions are controlled by logical host requests to the library, rather than physical directives, and so element addressing is not in effect. In TSM, an element is described in the 'DEFine DRive' command ELEMent parameter. Note that element numbers do not necessarily start with 1. See also: HOME_ELEMENT Element address SCSI designation of the internal elements of a SCSI device, such as a small SCSI library, where each slot, drive, and door has its own element address as a subset of the library's SCSI address. Element addresses have fixed assignments, per the device manufacturer: your definitions must conform to them. If a SCSI library drive cannot be used within TSM but can be used successfully via external means (e.g., the Unix 'tar' command), that could indicate incorrect Element addresses. Another symptom of an element mismatch is if TSM will mount a tape but be unable to use it and/or dismount it. Element addresses, existing You can probably use the 'tapeutil' or 'ntutil' command: open the first device and then do Element Inventory (14). Or use 'lbtest' (q.v.): Select 6 to open the library, 8 to get the element count and 9 to get the inventory. Scroll back to the top of the 9 listing to find the drives and element addresses associated with SCSI IDs. In AIX, note that the 'lsdev' command is typically of no help in identifying the element address from the SCSI ID and drive - there is no direct correlation. Example of using lbtest: Library with three drives mt1, mt2 and mt3 (drives can be either rmtX or mtX devices). The slot addresses are 5, 6, and 7. It is believed that mt1 goes with element 5. To test this theory a tape needs to be loaded in the drive located at slot 5 either manually or using lbtest.
To use lbtest do the following: - Invoke lbtest - Select 1: Manual test - Select 1: Set device special file (e.g., /dev/lb0) - Prompt: "Return to continue:" Press Enter - Select 6: open - Select 8: ioctl return element count (shows the number of drives, slots, ee ports and transports) - Select 9: ioctl return all library inventory (Will show the element address of all components. Next to element address you will see indications of FULL or EMPTY.) - Select 11: move medium transport element address: Source address moving from: (select any slot with tape) Destination address move to: (in this case it would be 5) Invert option: Select 0 for not invert - Select 40: execute command (which does AIX command `tctl -f /dev/mt1 rewoffl`) If the command is successful, the drive and element match. If you get the message "Driver not ready" try /dev/mt2 and so on until it is successful: the process of elimination. - Select 11: move medium Source address will be 5 and destination will be 6 for the next drive. - Select 40: execute command - Repeat selections 11 and 40 for each remaining drive. - After the last drive has been verified, select 11 to return the tape to its slot, select 99 to return to the opening menu, and select 9 to quit. Element number See: Element address Empty Typical status of a tape in a 'Query Volume' report, reflecting a sequential access volume that either had just been acquired for use from the Scratch pool, or had been assigned to the storage pool via DEFine Volume, and data has not yet been written to the volume. Can also be caused when the empty tapes are not in the library by virtue of MOVe MEDia: another MOVe MEDia would have to be done to get them to go to scratch, because if the tapes are out of the library and go to scratch you will lose track of them. See also: Pending Empty directories, backup Empty directories are only backed up during an Incremental backup, not in a Selective backup. (Some portions of the ADSM documentation suggest that empty directories are not backed up: this is incorrect - they are backed up.) Empty directories, restoring See "Restore and empty directories". Empty file and Backup The backup of an empty file does not require storage pool space or a tape mount: it is the trivial case where all the info about the empty file can be stored entirely in the database entry. However, if supplementary data such as an Access Control List (ACL) is attached to the file, it means that the entry is too data-rich to be entirely stored in the database and so ends up in a storage pool. EMTEC European Multimedia Technologies Former name: BASF Magnetics, which changed its name to EMTEC Magnetics after it was sold by BASF AG in 1996. Starting in 2002, all famous BASF-brand audio, video and data media products will bear the name "EMTEC". Emulex LP8000 Fibre Channel Adapter Needs to be configured as an "fcs0" device for it to work with the TSM smit menus. If inadvertently defined as an lpfc0 device, it suggests that you have loaded the "emulex" device driver instead, which corresponds to the filesets devices.pci.lpfc.diag and devices.pci.lpfc.rte, which are filesets provided by Emulex. In order to have the device recognized as an fcs0 device instead of an lpfc0 device, you need to remove those two filesets and rerun cfgmgr. You of course will need to have the proper IBM AIX fibre channel filesets installed. Those filesets are discussed in the TSM server readme. http://www.emulex.com/ts/fc/docs/frame8k.htm ENable Through ADSMv2, the command to enable client sessions. Now ENable SESSions.
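A sketch of the disable/enable pairing described in the next entry, run from an administrative command line (administrator ID and password are hypothetical):
    dsmadmc -id=admin -password=secret "DISAble SESSions"   # block new client sessions before maintenance
    dsmadmc -id=admin -password=secret "Query STatus"       # the "Availability" field shows the current state
    dsmadmc -id=admin -password=secret "ENable SESSions"    # reopen the server to client sessions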
ENable SESSions TSM server command to permit client node Backup and Archive sessions, undoing the prohibition of a prior DISAble SESSions command. Note that the Disable status does not survive across an AIX reboot: the status is reset to Enable. Determine status via 'Query STatus' and look for "Availability". Msgs: ANR2096I See also: DISAble SESSions; ENable ENABLE3590LIBRary Definition in the server options file (dsmserv.opt). Specifies the use of 3590 tape drives within 349x tape libraries. Default: No? Msgs: ANR8745E Ref: Installing the Server... ENABLE3590LIBRary server option, query 'Query OPTion' ENABLELanfree TSM client option to specify whether to enable an available LAN-free path to a storage area network (SAN) attached storage device. A LAN-free path allows backup, restore, archive, and retrieve processing between the Tivoli Storage Manager client and the SAN-attached storage device. See also: LanFree bytes transferred ENABLEServerfree TSM client option to specify whether to enable SAN-based server-free image backup which off-loads data movement processing from the client and server processor and from the LAN during image backup and restore operations. Client data is moved directly from the client disks to SAN-attached storage devices by a third-party copy function initiated by the Tivoli Storage Manager server. The client disks must be SAN-attached and accessible from the data mover, such as a SAN router. If SAN errors occur, the client fails over to a direct connection to the server and moves the data via LAN-free or LAN-based data movement. See also: Server-free; Serverfree data bytes transferred Encryption of client-sent data New in TSM 4.1. Uses a standard 56-bit DES routine to provide the encryption. The encryption support uses a very simple key management method, where the key is a textual password. The key is only used at the client; it is not transferred or stored at the server. Multiple keys can be used, but only the key entered when the ENCryptkey client option was set to SAVE is stored. Information stored in the file stream on the server indicates that encryption was used and which type. Unlike the TSM user password, the encryption key password is case-sensitive. If the password is lost or forgotten, the encrypted data cannot be decrypted, which means that the data is lost. Where the client options call for both compression and encryption, compression is reportedly performed before encryption - which makes sense, as encrypted data is effectively random data, which would see little compression, or even expansion. And, encryption means data secured by a key, so it further makes sense to prohibit any access to the data file if you do not first have the key. Performance hit: Be well aware that encrypting network traffic comes at a substantial price, in lowering throughput. See: ENCryptkey ENCryptkey TSM 4.1 Windows option, later extended to other clients, specifying whether to save the encryption key password to the Registry in encrypted format. (Saving it avoids being prompted for the password when invoking the client, much like "PASSWORDAccess generate" saves the plain password.) Syntax: ENCryptkey Save|Prompt where Save says to save the encryption key password while Prompt says not to save it, such that you are prompted in each invocation of the client. Where stored: Unix: The encryption key and password are encrypted and stored in the TSM.PWD file, in a directory determined by the PASSWORDDIR option.
Windows: Registry Default: Save See also: /etc/security/adsm/; INCLUDE.ENCRYPT; EXCLUDE.ENCRYPT End of volume (EOV) The condition when a tape drive reaches the physical end of the tape. Unlike disks, which have fixed, known geometries, tape lengths are inexact. In writing a tape, its end location is known only by running into it. End-of-volume message ANR8341I End-of-volume reached... Enhanced Virtual Tape Server 1998 IBM product: To optimize tape storage resources, improve performance, and lower the total cost of ownership. See also: Virtual Tape Server Enrollment Certificate Files Files provided by Tivoli, with your server shipment, containing server license data. Filenames are of the form _______.lic . See: REGister LICense Enterprise Configuration and Policy Management TSM feature which makes possible providing Storage Manager configuration and policy information to any number of managed servers after having been defined on a configuration server. The managed servers "subscribe" to profiles owned by the configuration manager, and thereafter receive updates made on the managing server. The managed server cannot effect changes to such served information: it is only a recipient. Ref: Admin Guide, chapter on "Working with a Network of IBM Tivoli Storage Manager Servers" Enterprise Management Agent The TSM 3.7 name for the Web Client. Environment variables See: DSM_CONFIG, DSM_DIR, DSM_LOG, DSMSERV_ACCOUNTING_DIR, VIRTUALMountpoint In AIX, you can inspect the env vars for a running process via: ps eww Ref: Admin Guide, "Defining Environment Variables"; Quick Start, "Defining Environment Variables" EOS End of Service. IBM term for discontinuance of support for an old product. Their words: "Defect support for Tivoli products will generally be provided only for the current release and the most recent prior release. A prior release will be eligible for service for 12 months following general availability of the current release. These releases will be supported at the latest maintenance ("point release") level. Usually, there will be 12 months' notice of EOS for a specific release. At the time of product withdrawal, notice of the EOS date for the final release will be given. At the time a release reaches EOS, it will no longer be supported, updated, patched, or maintained. After the effective EOS date, Tivoli may elect, at its sole discretion, to provide custom support beyond the EOS date for a fee." See also: WDfM EOT An End Of Tape tape mark. See also: BOT EOV See: End of volume EOV message ANR8341I End-of-volume reached... ERA codes (from 3494) See MTIOCLEW (Library Event Wait) Unsolicited Attention Interrupts table in the rear of the SCSI Device Drivers manual. Erase tape See: Tape, erase errno The name of the Unix system standard error number, as enumerated in header file /usr/include/sys/errno.h . Some *SM messages explicitly refer to it by its name, some by a generic return code. errno 2 Common error indicating "no such file or directory", often caused by specifying a file name without using its full path, such that the operation seeks the file in the current directory rather than a specific place. Error handler See: ERRORPROG Error log A text file (dsmerror.log) written on disk that contains ADSM processing error messages. Beware symbolic links in the path, else suffer ANS1192E. See also: DSM_LOG; ERRORLOGname; ERRORLOGRetention Error log, operating system AIX has a real hardware error log, reported by the 'errpt' command.
Solaris records various hardware problems in the general /var/log/messages log file. Error log, query ADSM 'dsmc Query Options' or TSM 'dsmc show options', look for "Error log". Error log, specify location The DSM_LOG Client environment variable may be used to specify the directory in which the log will live. ADSMv3: add this to dsm.sys: * Error log errorlogname /var/adm/log/dsmerror.log errorlogretention 14 D Error log size management Use the client option ERRORLOGRetention to prune old entries from the log, and to potentially save old entries. Error messages language "LANGuage" definition in the server options file. Error number In messages, usually refers to the error number returned by the operating system. In Unix, this is the "errno" (q.v.). Error Recovery Cell See "Gripper Error Recovery Cell" ERRORLOGname Macintosh, Novell, and Windows options file and command line option for specifying the name of the TSM error log file (dsmerror.log), where error messages are written. (Note that it is the name of a file, not a directory.) Beware symbolic links in the path, else suffer ANS1192E. See also: DSM_LOG; dsmerror.log; ERRORLOGRetention ERRORLOGRetention Client System Options file (dsm.sys) option (not Client User Options file, as the manual may erroneously say) to specify the number of days to keep error log entries, and whether to save the pruned entries (in file dsmerlog.pru). Syntax: ERRORLOGRetention [N | days] [D | S] where: N Do not prune the log (default). days Number of days of log to keep. D Discard the error log entries (the default) S Save the error log entries to same-directory file dsmerlog.pru Placement: Code within the server stanza. Default: Keep logged entries indefinitely. See also: SCHEDLOGRetention ERRORPROG Client System Options file (dsm.sys) option to specify a program which ADSM should execute, with the message as an operand, if a severe error occurs during HSM processing. Can be as simple as "/bin/cat". Code within the server stanza. ERT Estimated Restore Time See also: Estimate ESM Enterprise Storage Manager, as in ADSM or TSM. ESTCAPacity The estimated capacity of volumes in a Device Class, as specified in the 'DEFine DEVclass' command. This is almost always just a human reference value, having no impact on how much data TSM actually puts onto a tape - which is as much as it can. Note that the value "latches" for a given volume when use of the volume first begins. Changing the ESTCAPacity value will apply to future volumes, but will not change the estimated capacity of prevailing volumes (as revealed in a 'Query Volumes' report). After a reclamation, the ESTCAPacity value for the volume returns to the base number for the medium type. Estimate The ADSMv3 Backup/Archive GUI introduced an Estimate function. At the conclusion of backups, this implicit function collects statistics from the *SM server, which the client stores, by *SM server address, in the .adsmrc file in the user's Unix home directory, or Windows dsm.ini file. In a later operation, the GUI user may invoke the Estimate function to get a sense of what will be involved in a subsequent Backup, Archive, Restore, or Retrieve: The client can then estimate the elapsed time for the operation on the basis of the saved historical information. A user can then choose to cancel the operation before it starts if the amount of data selected or the estimated elapsed time for the operation is excessive.
The information provided: Number of Objects Selected: The number of objects (files and directories) selected for an operation such as backup or restore. Calculated Size: The Estimate function calculates the number of bytes the currently selected objects occupy by scanning the selected directories or requesting file information from the *SM server. Estimated Transfer Time: The client estimates the elapsed time for the operation on the basis of historical info, calculating it by using the average transfer rate and average compression rate from previous operations. See also: .adsmrc; dsm.ini Estimated Capacity A column in a 'Query STGpool' report telling of the estimated capacity of the storage pool. The value is dependent upon the stgpool MAXSCRatch value having been set: If the stgpool has stored data on at least one scratch volume, the estimated capacity includes the maximum number of scratch volumes allowed for the pool. (For tape stgpools, the EstCap number is a rather abstract value, amortized over all the tapes in a library - which typically have to be available for use in other storage pools as well, and so is usually meaningless for any single stgpool. See "Pct Util, from Query STGpool" for observations on deriving the amount of data contained in the stgpool.) TSM uses estimated capacity to determine when to begin reclamation of stgpool volumes. Estimated Capacity A column in a 'Query Volumes' report telling of the estimated capacity of a volume, which is as specified via the ESTCAPacity operand of the 'DEFine DEVclass' command. The value reported is the "logical capacity": the content after 3590 hardware compression. If the files were well compressed on the client, then little or no compression can be done by the drives, and thus the value will be closer to the physical capacity. Experience shows that the capacity value is not assigned to a volume until the first data is actually written to it. Ref: TSM Admin Guide, "How TSM Fills Volumes" See also: ESTCAPacity; Pct Util /etc/.3494sock Unix domain socket file created by the Library Manager Control Point daemon (lmcpd). /etc/adsm/ Unix directory created for storing control information. All Unix systems have the HSM SpaceMan subdirectory in there. Non-AIX Unix systems have their encrypted client password file in there for option PASSWORDAccess GENERATE. The 3.7 Solaris client (at least, GUI) is reported to experience a Segmentation Fault failure due to a problem in the encrypted password file. Removing the problem file from the /etc/adsm/ directory (or, the whole directory) will eliminate the SegFault. (Naturally, you have to perform a root client-server operation like 'dsmc q sch' to cause the password file to be re-established.) See also: /etc/security/adsm; Password, client, where stored on client; PASSWORDDIR /etc/adsm/SpaceMan/status HSM status info, which is the symlink target of the .SpaceMan/status entry in the space-managed file system. /etc/ibmatl.conf Library Manager Control Point Daemon (lmcpd) configuration file in Unix. Defines the 3494 libraries that this host system will communicate with. Each active line in the file consists of three parts: 1. Library name: Is best chosen to be the network name of your library, such as "LIB1" in a "LIB1.UNIVERSITY.EDU" name. In AIX, the name must be the one that was tied to the /dev/lmcp_ device driver during SMIT configuration.
In Solaris, this is the arbitrary symbolic name you will specify on the DEVIce operand of the DEFine LIBRary TSM server command, and use with the 'mtlib' command -l option to work with the library. 2. Connection type: If RS-232, the name of the serial device, such as /dev/tty1. If TCP/IP, the IP address of the library. (Do not code ":portnumber" as a suffix unless you have configured the 3494 to use a port number other than "3494", as reflected in /etc/services.) 3. Identifier: The 1-8 character name you told the 3494 in Add LAN Host to call this host system (Host Alias). The file may be updated at any time; but the lmcpd does not look at the file except when it starts, so it needs to be restarted to see the changes. Ref: "IBM SCSI Tape Drive, Medium Changer, and Library Device Drivers: Installation and User's Guide" manual (GC35-0154) See also: Library Manager Control Point Daemon /etc/ibmatl.pid Library Manager Control Point (LMCP) Daemon PID number file. The lmcpd apparently keeps it open and locked, so it is not possible for even root to open and read it. /etc/mnttab in Solaris Prior to Solaris 8, /etc/mnttab was a mounts table file. As of Solaris 8, it is a mount point for the mnttab file system! The name should be excluded from backups (in dsm.opt code "Domain -/etc/mnttab"), as it does not have to be restored: the OS will re-create it. /etc/security/adsm/ AIX default directory where ADSM stores the client password. Overridable via the PASSWORDDIR option. ADSMv3: Should contain one or more files whose upper case names are the servers used by this client, and whose contents consist of an explanatory string followed by an encrypted password for reaching that server. TSMv4: File name is TSM.PWD . This password file is established by the client superuser performing a client-server command which requires password access, such as 'dsmc q sched'. See also: Client password, where stored on client; ENCryptkey; /etc/adsm; PASSWORDDIR Ethernet card, force use of specific You may have multiple ethernet cards in a computer and want client sessions to use a particular card. (In networking terms, the client is "multi-homed".) This can be effected via the client TCPCLIENTAddress option, in most cases; but watch out for the server-side node definition having a hard-coded HLAddress specification. Event ID NN (e.g. Event ID 11) An NT Event number, as can be seen in the NT Event Viewer. A handy place to search for their meaning: http://www.eventid.net/search.asp Event ID: 17055 As when backing up an MS SQL db. Apparently the backup process was interrupted and this caused the BAK file to become corrupt. This also makes it impossible to restore from the BAK file, another reported symptom. The BAK files were deleted and recreated and things worked thereafter. Event Logging An ADSM feature. You can define event receivers using FILEEXIT or USEREXIT support and collect real-time event data. You can then create your own parsing utilities (or borrow someone's) to sort the data and arrange the results to suit your needs. This avoids the Query Event command, which is compute intensive and requires a generous amount of server resources. Event Logging is one way to alleviate expensive queries against your server.
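A hedged sketch of routing events to a file exit receiver (the log file name is hypothetical; verify the exact FILEEXIT option syntax for your server level in the Admin Ref):
    In dsmserv.opt:
        FILEEXIT YES /var/log/tsmevents.out APPEND
    Then, from an administrative session:
        'ENable EVents FILE ALL'
        'BEGin EVentlogging FILE'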
See: BEGin EVentlogging; Disable Events; ENable EVents; Query ENabled Event records, delete old 'DELete event Date [Time]' Event records retention period, query 'Query STatus', look for "Event Record Retention Period" Event records retention period, set 'Set EVentretention N_Days' Default: Installation causes it to be set to 10 days. Event return codes Return codes in the Event Log can be other than what you might expect... If a client schedule command is executed asynchronously, then it is not possible for TSM to track the outcome, in which case the event will be reported as "Complete" with return code 0. To get a true return code, run the command synchronously, where possible, as in using Wait=Yes. If the command is a Server Script that includes several commands which are simply stacked to run in sequence, each of those commands may or may not end with return code 0, but ultimately the script exits with a return code of 0, then the event will be reported as "Complete" with return code 0. The obvious treatment here is to write the Script to examine the return code from each invoked comamnd and exit early when a result is non-zero. Again, such commands must be synchronous. See also: Return codes Event server See: TEC EVENTS table SQL table. Columns: SCHEDULED_START, ACTUAL_START, DOMAIN_NAME, SCHEDULE_NAME, NODE_NAME, STATUS, RESULT, REASON. More reliable than the SUMMARY table, but getting at data can be a challenge. You need to specify values for the SCHEDULED_START and/or ACTUAL_START columns in order to get older data from the EVENTS table: SELECT * FROM EVENTS WHERE SCHEDULED_START>'06/13/2003'. Restriction: Dates must be explicit, not computed or relative; so the construct "scheduled_start>current_timestamp - 1 day" won't work (see APAR IC34609). For a developer, the EVENTS table is a little tricky. Unlike BACKUPS, NODES, ACTLOG, etc., which have a finite number of records, the EVENTS table is unbounded. If you do a Query EVent with date criteria beyond your event record retention setting, you'll get a status of Uncertain. If you do a Query EVent for future dates, you get a status of Future. When the Query EVent function was "translated" to the SELECTable EVENTS table, the question as to what constitutes a complete table (i.e. SELECT * FROM EVENTS) needed to be addressed. Since EVENTS is unbounded, the table is theoretically infinite in size. So the developers decided to mirror Query EVent behavior and thus get only the records for today, by default. Note that SELECT does not support the reporting of Future events from the EVENTS table, but it will show you Uncertain records that go past your event record retention. See also: APAR IC34609 re timestamps Events, administrative command 'Query EVent ScheduleName schedules, query Type=Administrative' Events, client schedules, query 'Query EVent DomainName ScheduleName' to see all of them. Or use: 'Query EVent * * EXceptionsonly=Yes' to see just problems, and if none, get message "ANR2034E QUERY EVENT: No match found for this query." EVENTSERVer Server option to specify whether, at startup, the server should try to contact the event server. Code "Yes" or "No". Default: Yes Exabyte 480 8mm library with 4 drives and 80 tape slots. A rotating cylindrical silo sits above the four tape drives. *EXC_MOUNTWait It is an Exchange Agent only option that tells the Exchange Agent to wait for media (tape) mounts when necessary. Values: Yes, No. 
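    (Illustrative query for the EVENTS table entry above - a select
    restricted by SCHEDULED_START so that records older than today are
    returned. The date is a placeholder, and the 'Failed' status value
    shown is an assumption mirroring Query EVent status wording; adjust
    both to your environment:
        SELECT SCHEDULED_START, NODE_NAME, SCHEDULE_NAME, STATUS, RESULT -
          FROM EVENTS -
          WHERE SCHEDULED_START>'06/13/2003' AND STATUS='Failed'
    The trailing '-' is the usual administrative command-line
    continuation character; the statement may equally be entered on one
    line.)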
excdsm.log The TDP for Exchange log file, normally located in the installation directory for TDP for Exchange (unless you changed it). Exchange Microsoft Exchange, a mail agent. Exchange stores all mailboxes in one file (information store) ... therefore you can't restore individual mailboxes. (More specifically, there is no "brick level" backup/restore due to the absence of a native "backup and restore" API from Microsoft (as of Exchange 5.5 and 2000; a subsequent version may provide the API capability). In Exchange 2000, you can somewhat mitigate having to do mailbox restores if you use the deleted mailbox retention option. (Or called something very similar) This will allow you to recover a mailbox after it has been deleted X number of days ago, based on this setting. Exchange 2003 should have "Recovery Storage Group" that will allow you to restore an individual mailbox "database" (not a single mailbox, just the mailbox database) into a special storage group without impacting the live server. You can then connect to it and use ExMerge to export the individual mailbox. Still lacking, but something. Ref: In www.adsm.org, search on "brick", and in particular see Del Hoobler's postings.) Backed up by Tivoli Storage Manager for Mail (q.v.). If you have version 1.1.0.0 of the ADSMConnect Exchange Agent, then you MUST be running the backup as Exchange Site Service Account. This account, by default, has the correct permissions to back up the Exchange Server. Performance: Tivoli's original testing showed that "/buffers:3,1024" seemed to produce the best results. Redbook: Connect Agent for Exchange. See also: ARCserve; TDP for Exchange; TXNBytelimit; TXNGroupmax Exchange, delete old backups With TDP for Exchange version 1, look at the "EXCDSMC /ADSMAUTODELETE" command. With TDP for Exchange version 2, you do not have to worry about deletions because it has the added function of TSM policy management that will handle expiration of old backups automatically. Exchange, restore a single mailbox? *SM can only do this if Microsoft provides an API that makes it possible, and Microsoft DOES NOT have mailbox/item level backup and restore APIs for any version of Exchange including the new Exchange 2000. There are vendors who have coded solutions using APIs (like MAPI) that are not intended for backup and restore. These solutions tend to take large amounts of time for backups and full restores... (Try restoring a 50Gig IS or storage group from an item level backup and restore.) Microsoft themselves claims that they have tried to come up with a way to provide some type of item level restore support via the backup and restore APIs but have not succeeded because of the architecture of the JET database (the database that is the heart of Exchange.) Microsoft contends that customers should take advantage of deleted item level recovery and the new deleted mailbox level recovery of Exchange 2000 to solve these problems. Ref: "TDP for Microsoft Exchange Server Installation and User's Guide" manual, appendix B topic "Individual Mailbox Restore". A third party vendor, Ontrack Software (www.ontrack.com) has a software product called PowerControls which claims to read a .edb full backup to extract a single mailbox. Exchange, restore across servers? It can be done. One customer says: The trick is to specify the TSM-nodename of the FROM-server when you restore on the TO-server. 
For instance: tdpexcc restore "Storage Group C" FULL /Mountwait=Yes /MountDatabases=Yes /excserver= /fromexcserver= /TSMPassword= /tsmnode= Another says: Go to the restore server and do a restore of the mail (make sure erase existing logs is CHECKED!), but DO NOT restore the DIRECTORY, only the information store, private and public. Then after the restore restart the services for exchange and go into the Administrator program (see tech net article ID Q146920 for full details). Go into Server Objects, and then select Consistency Adjuster. Under the Private Information Store section make sure Synchronize with the directory is checked, click All Inconsistencies and away you go. This will rebuild the user directory whole list and all the mail. Naturally, be sure that your operating system, Exchange, and TDP levels are all the same across the server systems, and do the deed only after having a full backup. Here are some Microsoft docs explaining some issues to keep in mind: http://www.microsoft.com/exchange/ techinfo/deployment/2000/ MailboxRecover.asp http://www.microsoft.com/exchange/ techinfo/deployment/2000/ E2Krecovery.asp Exchange, restoring You can restore the Exchange Db to a different computer, provided it is within the same Exchange Org.; but only the info store - not the directory. Performance: An Exchange restore will almost always be slower than backup because it is writing to disk and, more importantly, it is replaying transaction logs. Use Collocation by filespace, to keep the data for your separate storage groups on different tapes to facilitate running parallel restores. Exchange 2000 SRS, back up via CLI To backup the Exchange 2000 Site Replication Service via the command line, do like: tdpexcc backup "SRS Storage" full /tsmoptfile=dsm.opt /logfile=exsch.log /excapp=SRS >> excfull.log Exchange 2003 (Exchange Server 2003) Requires Data Protection for Exchange version 5.2.1 at a minimum. See: http://www.ibm.com/support/ entdocview.wss?uid=swg21157215 Exchange Agent Only deals with Information Store (IS) and Directory (DIR) data. The Message Transfer Agent (MTA) is not dealt with at all. The Exchange Agent has 4 backup types: Full, Copy, Incremental, Differential: "Full" and "Copy" backup contain the database file, all transaction logs, and a patch file. "Incremental" and "Differential" backup contain the database file, all transaction logs, and a patch file. Each backup will show which type it is in the backup history list on the Restore Tab. See also: TDP for Exchange Exchange databases There are 2/3 databases in Exchange... - The Directory, dir.edb, which stores the users/groups/etc. - The Public Database, pub.edb, which store public folders and such. - The Private Database, priv.edb, which stores the private mailboxes and such. Exchange product files Seagate had a product for backing up open Exchange files. It uses ADSM as a backup device (through the API). Then Seagate sold the backup software division to Veritas, so see: http://www.veritas.com/products/stormint Exclude The process of specifying a file or group of files in your include-exclude options file with an exclude option to prevent them from being backed up or migrated. You can exclude a file from backup and space management, backup only, or space management only. Note that exclusion operates ONLY ON FILES! Any directories which ADSM finds as it traverses the file system will be backed up. 
The other implication of this is that ADSM will always traverse directories, even if you don't want it to, so it can waste a lot of time. To avoid directory traversal, use EXCLUDE.DIR, or consider using virtual mount points instead to specify major subdirectories to be processed, and omit subdirectories to be ignored. Note that excluding a file for which there are prior backups has essentially the same effect as if the file had been deleted from the client: all the backup versions suddenly become expired. EXclude Client option to specify files that should be excluded from TSM Archive, Backup, or HSM services. Placement: Unix: Either in the client system options file or, more commonly, in the file named on the INCLExcl option. Other: In the client options file. You cannot exclude in Restorals. Remember that upper/lower case matters! For backup exclusion, code as: 'EXclude.backup pattern...' For HSM exclusion, code as: 'EXclude.spacemgmt pattern...' To exclude from *both* backup and HSM: 'EXclude pattern...' As to "pattern"... /dir/* covers all files in dir and /dir/.../* covers all files in all subdirs of dir, so both cover all files below dir. Further, /dir/.../* includes /dir/*, so only one exclude is necessary to exclude a whole branch. Effects: The file(s) are expired in the next backup. Note that with DFS you need to use four dots (as in /dir/..../*). Messages: ANS4119W See also: EXCLUDE.DIR; EXCLUDE.File; etc EXCLUDE.FS Exclude a drive You can code your client Domain statement to omit the drive you don't want backed up. Note that specification like 'EXCLUDE.Dir "C:\"' should not be used to try to exclude the root of a drive. Exclude and retention (expiration) When you exclude files or directories, it has the same effect as if the objects were no longer on the client system: the the backup versions will be eligible for expiration. Exclude archive files In TSM 4.1: EXCLUDE.Archive In earlier levels, a circumvention is to include them to a special management class that does not exist. You will then get an error message and the files will not be archived. Exclude from Restore There is no Exclude option to exclude file system objects during a Restore. To try to circumvent, you might create a dummy object of that name in the file system and then tell the Restore not to replace files. Exclude ignored? See: Include-Exclude "not working" EXCLUDE.Archive TSM 4.1+: Exclude a file or a group of files that match the pattern from Archiving (only). This does not preclude the archiving of directories in the path of the file - but in any case, this should not be an issue, in that TSM does not archive directories that it knows to already be in server storage. There is no Exclude that excludes from both Archive and Backup. EXCLUDE.Backup Excludes a file or a group of files from backup services only. There is no Exclude that excludes from both Backup and Archive. Effects: The file(s) are expired in the next backup. EXCLUDE.COMPRESSION Can be used to defeat compression for certain files during Archive and Backup processing. Where used: To alleviate the problem of server storage pool space being mis-estimated and backups thus failing because already-compressed files expand during TSM client compression. So you would thus code like: EXCLUDE.COMPRESSION *.gz EXCLUDE.Dir (ADSM v.3+) Specifies a directory (and files and subdirectories) that you want to exclude from Backup services only, thus keeping *SM from scanning the directory for files and subdirectories to possibly back up. 
(The simpler EXCLUDE does *not* prevent the directory from being traversed to possibly back up subdirectories.) The pattern is a directory name, not a file specification. Wildcards *are* allowed. In Unix, specify like: EXCLUDE.Dir /dirname or EXCLUDE.Dir /dirnames* In Windows, note that you cannot do like "EXCLUDE.Dir G:" to exclude a drive: you need to have "EXCLUDE.Dir G:\*". Use this option when you have both the backup-archive client and the HSM client installed. Do not attempt to specify like 'EXCLUDE.Dir "C:\"' to try to exclude the root of a drive. Effects: The directory and all files below it are expired in the next backup. Note that EXCLUDE.Dir takes precedence over all other Include/Exclude statements, regardless of relative positions. Note that EXCLUDE.Dir cannot be overridden with an Include. EXCLUDE.Dir *does not* apply if you perform a Selective backup of a single file under that directory; but it does apply if the Selective employs wildcard characters to identify files under that directory. EXCLUDE.ENCRYPT TSM 4.1 Windows option to exclude files from encryption processing. See also: ENCryptkey; INCLUDE.ENCRYPT EXCLUDE.File Excludes files, but not directories, that match the pattern from normal backup services, but not from HSM services. Effects: The file(s) are expired in the next backup. EXCLUDE.File.Backup Excludes a file from normal backup services. EXCLUDE.FS (ADSM v.3+) Specifies a filespace/filesystem that you want to exclude from Backup services. (This option applies only to Backup operations - not Archive or HSM.) This option is available in the Unix client, but not the Windows client (as of TSM 5.2.2). In TSM (not ADSM) the filespace may be coded using a pattern. Effects: The specified file system(s) are skipped, as though they were not specified on the command line of the Domain option. (Note that the file systems are *not* expired, as lesser EXCLUDEs do.) Note that EXCLUDE.FS takes precedence over all other Include statements and non-EXCLUDE.FS Exclude statements, regardless of relative positions. But: Does it make sense to exclude a file system? Or should you instead not include it in the first place, as in not coding it in a DOMain statement or as a dsmc command object? (Make sure that you *do* have a DOMain statement coded in your options file!) With client schedules, an alternative is to use the OBJects parameter to control the file systems to back up. See also: dsmc Query INCLEXCL; dsmc SHow INCLEXCL EXCLUDE.HSM No, there is no such thing. What you want to do is simply EXCLUDE, which excludes the object from both Backup and HSM. Exclude.Restore An ad hoc, undocumented addition you may stumble upon in the TSM 5.2 client. It is there only for use under the direction of IBM Service: there is no assurance that it will work as you expect, or in all cases. AVOID IT. Executing Operating System command or Message in client schedule log, script: referring to a command being run per either the PRESchedulecmd, PRENschedulecmd, POSTSchedulecmd, or POSTNschedulecmd option; or by the DEFine SCHedule ACTion=Command spec where OBJects="___" specifies the command name. Execution Mode (HSM) A mode that controls the space management related behavior of commands that run under the dsmmode command. 
The dsmmode command provides four execution modes - a data access control mode that controls whether a migrated file can be accessed, a time stamp control mode that controls whether the access time for a file is set to the current time when the file is accessed, an out-of-space protection mode that controls whether HSM intercepts an out-of-space condition on a file system, and a recall mode that controls whether a file is stored on your local file system when accessed, or stored on your local file system only while it is being accessed, and then migrated back to ADSM storage when it is closed. .EXP File name extension created by the server for FILE type scratch volumes which contain Export data. Ref: Admin Guide, Defining and Updating FILE Device Classes See also: FILE EXPINterval Definition in the Server Options file. Specifies the number of hours between automatic inventory expiration runs, after first running it when the server comes up. Setting the interval to 0 sets the process to manual, and then you must enter the 'EXPIre Inventory' command to start the process. Default: 24 hours Automatic expiration can be suppressed by starting 'dsmserv' with the "noexpire" command line option. You can also code "EXPINterval 0". Ref: Installing the Server... See also: SETOPT EXPInterval server option, change 'SETOPT EXPINterval ___' while up, or change dsmserv.opt file EXPINterval for next start-up. EXPInterval server option, query 'Query OPTion', look for "ExpInterval". Expiration The process by which objects are deleted from storage pools because their expiration date or retention period has passed. Backed up or archived objects are marked for deletion based on the criteria defined in the backup or archive copy group ('Query COpygroup'). File objects are evaluated for removal at Expiration time either by having been marked as expired at Backup time (per your retention policy Versions rules) or per the retention periods specified in the Backup Copy Group. The expiration process has two phases: 1. Data expiration on ITSM database. 2. Data expiration on tapes. (Freeing tapes to Scratch can seem to be delayed as this is under way.) The order in which expiration occurs has been observed to be the same as types are listed in the ANR0812I message: backup objects, archive objects, DB backup volumes (DRMDBBackupexpiredays), recovery plan files (DRM). Avoid doing expirations during incremental backups - the backups will be degraded. Beware that as a database operation, the expiration will require Recovery Log space. If the expiration is massive, the Recovery Log will fill, and so you should have DBBackuptrigger configured. If SELFTUNEBUFpoolsize is in effect, the Bufpool statistics are reset before the expiration. Messages: ANR4391I, ANR0811I, ANR0812I, ANR0813I See also: DEACTIVATE_DATE; dsmc EXPire; EXPInterval; SELFTUNEBUFpoolsize Expiration (HSM) The retention period for HSM-migrated files is controlled via the MIGFILEEXPiration option in the Client System Options file (governing their removal from the migration area after having been modified or deleted in the client file system) such that the storage pool image is obsolete. The client system file is, of course, permanent and does not expire. Possible values: 0-9999 (days). Default: 7 (days). The value can be queried via: 'dsmc Query Option' in ADSM or 'dsmc show options' in TSM; look for "migFileExpiration". Expiration, invocation Invoked automatically per Server Options file option EXPInterval; Invoke manually: 'EXPIre Inventory'. 
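    (Illustrative sketch for the "Expiration, invocation" entry above,
    assuming you prefer a scheduled expiration over the automatic,
    EXPINterval-driven kind. The schedule name, start time, and
    duration value are arbitrary examples:
        In dsmserv.opt:    EXPINterval 0
        At the server:     DEFine SCHedule EXPIRE_INV Type=Administrative -
                             CMD="EXPIre Inventory DUration=120" -
                             ACTIVE=Yes STARTTime=06:00 -
                             PERiod=1 PERUnits=Days
    Run the expiration off-hours, away from backups and other
    database-intensive work, per the Expiration entries above and
    below.)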
Expiration, stop (cancel) 'CANcel PRocess Process_Number' will cause the next Expire Inventory to start over. 'CANcel EXPIration' is simpler, and will cause the expiration to checkpoint so that the next Expire Inventory will resume. You may also want to change the EXPINterval server option to "EXPINterval 0" to prevent further expirations, at their assigned intervals - though this means having to take down the server. See also: CANcel EXPIration Expiration date for a Backup file Perform a SELECT on the Backups table to get the DEACTIVATE_DATE, and then add your prevailing backup retention period. Expiration date for an Archive file Perform a SELECT on the Archives table to get the ARCHIVE_DATE, and then add your prevailing archive retention period. Expiration happening? 'Query ACtlog BEGINDate=-999 s=expira' should reveal ANR0812I messages reflecting deletions. Expiration happening outside schedule When you have an administrative schedule performing 'EXPIre Inventory', you want to defeat automatic expirations which otherwise occur via the ExpInterval server option. Expiration messages, control "EXPQUiet" server option (q.v.). Expiration not happening - Is your EXPINterval server option set to a good value, or do you have an administrative schedule doing Expire Inventory regularly? - Retention periods defined in the Copy Group define how long storage pool files will be retained: if you have long retentions then you won't see data expiring any time soon. - Did the management class to which the files were bound disappear? (You can query a few files to check.) If so, the default management class copy group values pertain; or, if no such default copy group, then the DEFine DOMain grace period prevails. See also: Grace period Expiration performance Some things to consider: - Boosting BUFPoolsize to a high value will cut run time substantially. - Avoid running when other database- intensive operations are scheduled. (The "What else is running?" question.) - Standard operating system configuration issues: CPU speed, memory size, disk and paging space performance, contention with other system processes, etc. - Look for TSM db disk problems in the operating system error log. - Performing the expiration with SKipdirs=No with less than TSM server level 5.1.5.1 will result in not just directories being skipped in Expiration, but also the files within those directories! This causes file to build up in the TSM server. Reverting to SKipdirs=Yes will gradually fix the performance problem. - The more versions you have of a file in server storage, and the longer your Backup Copy Group retention policies, the longer Expiration will take, because time-based policy processing occurs during Expiration (in contrast with versions-based processing, which occurs at client Backup time). Ref: IBM site Solution 1141810: "How to determine when disk tuning is needed for your ITSM server". See also: Database performance Expiration period, HSM See: Expiration (HSM); MIGFILEEXPiration Expiration process As reported in Query Process, like: Examined 14784 objects, deleting 14592 backup objects, 16 archive objects, 0 DB backup volumes, 0 recovery plan files; 0 errors encountered. Notes: - Backup and Archive objects may be deleted in concert: it is not the case that expiration will go through all Backup object first, then move on to Archive object deletions. 
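    (Illustrative select for the "Expiration date for a Backup file"
    entry above - node, filespace, and file names are placeholders, and
    remember to give the node name in upper case:
        SELECT NODE_NAME, LL_NAME, STATE, DEACTIVATE_DATE -
          FROM BACKUPS -
          WHERE NODE_NAME='MYNODE' AND FILESPACE_NAME='/home' -
          AND LL_NAME='myfile'
    Add your prevailing backup retention period (RETExtra, or RETOnly
    for the sole remaining Inactive version) to DEACTIVATE_DATE to
    estimate when the object becomes eligible for expiration.)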
Expiration processes, list 'SELECT STATUS FROM PROCESSES WHERE PROCESS ='Expiration' ' Expiration slow (ADSMv3) APAR PQ26279 describes a major ADSM software defect in which expiration was overly slow in initial and later runs. Expire files by name See: dsmc EXPire EXPIre Inventory *SM server command to manually start inventory expiration processing, via a background process, to remove outdated client Archive, Backup, and Backupset objects from server storage pools according to the terms specified by the Copypool retention and versions specifications for the management classes to which the objects are bound. EXPIre Inventory processes Backup files according to having been marked as expired at Backup time, per retention versions rules; or by examining Inactive files according to retention time values. Expiration naturally removes the storage pool object instance, as well as the appropriate database reference. Expiration is also employed by the server to remove expired server state settings such as Restartable Restore. (The name "Expire Inventory" is misleading, as the function performed by the command is actually database deletion, by virtue of deleting files previously marked expired during Backup, and those computed at Expire Inventory time as having outlived the time-based retention policy.) EXPIre Inventory can be cancelled. Syntax: 'EXPIre Inventory [Quiet=No|Yes] [Wait=No|Yes] [DUration=1-2147483648_Mins] [SKipdirs=No|Yes]' DUration can be defined to limit how long the task runs. (Note: At the end of the duration, the expiration will stop and the point where it stopped is recorded in the TSM database, which will be the point from which it resumes when the next EXPIre Inventory is run.) SKipdirs is per APAR IY06778, due to the revised expiration algorithm experiencing performance degradation while expiring archive objects. (The problem with deleting archive directories, is that TSM must not delete the directory object if there are still files dependent upon it. So, to delete an archive directory, TSM needs to see if ANY files referenced that directory using another set of database calls. This other set of database calls is where the extra time was being spent.) SKipdirs is thus a formalized circumvention for a design change which wasn't properly thought through or tested. The intent of SKipdirs=Yes initially was to allow EXPIre Inventory to bypass all the directories created by Archive. This was a circumvention until the CLEANUP ARCHDIR utilities could be run to clear out these objects. However, until the fix in TSM server level 5.1.5.1, SKipdirs=Yes can also prevent Backup directories and the files under them from being deleted, resulting in ever longer EXPIre Inventory executions and database bloat. SKipdirs=Yes should *not* be used perpetually. Note that API-based clients, such as the TDPs, require their own, separate expiration handling (actually, deletion). Likewise, HSM handles expiration of its own files separately: see MIGFILEEXPiration. How long it takes: The time is proportional to the amount of data ready to be expired. (It is not the case that it plows through the entire *SM database at each invocation, seeking things ready to be expired.) Expire inventory works through the nodes in the order they were registered. This is a disruptive operation which can cause *SM processing to slow to a crawl, so run off-hours so that it will not conflict with things. 
Reclamation should be disabled during the Expiration ('UPDate STGpool PoolName REClaim=100') so that it doesn't get kicked off prematurely and waste resources in copying data that will be expired as expiration proceeds. WARNING: Expiration quickly consumes space in the Recovery Log, and can exhaust it if the amount of data expiration is great. The DUration operand is there to help keep this from happening. Msgs: ANR0812I; ANR0813I; ANR4391I to record each filespace processed when started in non-quiet mode. See also: CANcel EXPIration; dsmc EXPire; Expiration, stop; Expiring.Objects; Restartable Restore; Server Options file option EXPInterval EXPIre Inventory, placement EXPIre Inventory is best kicked off at the end of a daily (e.g., morning) administration job so that it will reduce tape occupancy levels so that following Reclamation work can run efficiently thereafter. EXPIre Inventory, results Message ANR0812I reports the number of objects removed upon normal conclusion, and ANR0813I for abnormal conclusion. An historic shortcoming is lack of reporting of the number of bytes involved. You can compensate for this by doing 'AUDit LICenses' and 'Select * From Auditocc' before and after the 'EXPIre Inventory'. Expire processing order It looks like Expire processing occurs in the order that you add your client nodes to the *SM server. Expiring--> Leads the line of output from a Backup operation, as when Backup finds that a file has been removed from the file system since the last Backup. The file will be rendered Inactive in server storage. The previously Active copy in server storage is "deactivated". Note that no server storage space is freed until Expire Inventory processing occurs. See also: Updating-->; Normal File-->; Rebinding--> Expiring file HSM: A migrated or premigrated file that has been marked for expiration and removal from *SM storage. If a stub file or an original copy of a premigrated file is deleted from a local file system, or if the original copy of a premigrated file is updated, the corresponding migrated or premigrated file is marked for expiration the next time reconciliation is run. It expires and is removed from *SM storage after the number of days specified with the MIGFILEEXPiration option have elapsed. See: MIGFILEEXPiration Expiring.Objects An internal server table to record what is available for expiration at any given point in time. It's maintained "on-the-fly" as new objects come into the system and the existing objects get moved to Inactive or available for expiration. The records contain the pertinent information for the server to complete the deletion. So, instead of walking the inventory tables at EXPIre Inventory time and performing lengthy calculations then as to what objects can go, that workload is distributed over time. On larger systems, it greatly speeds up the process of figuring out what can be deleted and what can't. Fluctuations in expire time are due to external events, such as a filesystem that had purged a lot of files, retention policies changed, etc. Export *SM server meta command encompassing a family of object exports which allow parts of the server to be written to removable media (tape) so that the data can be transferred to another server - even one of a different architecture (supposedly). The produced tape will end up in the LIBVolumes list with a Last Use type of "Export". 
Note that Export will write out Backup files first, before other types, and exports first from things directly resident in its database (directories, empty files, etc). Export apparently uses *SM database space for scratch pad use, as database usage will increase when only Export is running. One cute thing you can do for an abandoned filespace is to Export it to a file, archive the file, and delete the filespace such that the data is preserved but all the database space reflecting the individual files is reclaimed. Export is sometimes advocated for getting long-term storage data out of the TSM server, to reduce overhead. This is effective, but lost are all the advantages of TSM database inventory tracking of the data, where it is then up to you to somehow keep track of what you wrote to what export tape and how to get it back. Message ANR0617I will summarize how well the export went: SUCCESS or INCOMPLETE. Watch for message ANR0627I saying that files were skipped, as can happen when input tapes suffer I/O errors. (Export will nicely go on to completion, getting as much data as it can.) To export from one *SM server's storage pools to another, use the ADSMv3+ Virtual volumes facility (see chapter 13 of the Admin Guide). Note: Your success in exporting from one server to another is probabalistic, as the vendor would do little testing in this area. Exporting across platforms is dicey at best. (Be particularly cautious with EBCDIC vs. ASCII platforms.) You will probably have the best chance when the receiving server is at the same level or higher compared to the exporting server. Ref: Admin Guide, Managing Server Operations, Moving the Tivoli Storage Manager Server See also: dsmserv RESTORE DB; IMport EXPORT In 'Query VOLHistory', Volume Type to say that volume was used to record data for export. Also under 'Volume Type' in /var/adsmserv/volumehistory.backup . EXPort Node TSM server command to export client node definitions to serial media (tape). Syntax: 'EXPort Node [NodeName(s)] [FILESpace=FileSpaceName(s)] [DOMains=DomainName(s)] [FILEData=None|All|ARchive| Backup|BACKUPActive| ALLActive|SPacemanaged] [Preview=No|Yes] [DEVclass=DevclassName] [Scratch=Yes|No] [VOLumenames=VolName(s)] [USEDVolumelist=file_name]' Note that exporting to a device type of SERVER allows exporting the data to another ADSM server, via virtual volumes (electronic valuting). Hint: Using Preview=Yes is a handy way of determining the amount of data owned by a node. Consider doing a LOCK Node first! Export via FTP rather than tape Keep in mind that you can export to a devclass of type FILE, and then FTP the resultant file to the other system for Importation. Export-Import across libraries In some cases, customers want to perform an Export-Import from one library to another of the same type, usually at different sites, to rebuild the TSM server at the other site. The TSM manuals have been without information on how to approach this... - Do 'LOCK Node' on all involved client nodes to prevent inadvertent changes to the data you intend to export, and nullify all administrative schedules which could interfere with the long-running Export. - Perform an Export of all data. Carefully check the results of the operation to assure that all the data successfully made it to tape. (The volumes will show up in VOLHistory as Volume Type "EXPORT".) - Perform a CHECKOut LIBVolume to eject the volumes. - Transport the tapes to the new site. 
- Flick the read/write tab on the tapes to read-only before inserting into the new library, as you'll want to assure that this vital data is not obliterated until you're sure that the new TSM system is complete and stable. - Insert the tapes into the new library. - Perform a CHECKIn LIBVolume with a STATus=PRIvate. - Perform Import. Check that the amount of data imported matches that in the Export. - At some later time, perform a CHECKOut LIBVolume of the read-only volumes and change their tab to read-write to enable their re-use, then perform a CHECKIn LIBVolume as STATus=SCRatch. Leave the old TSM system and library intact until the new TSM system is complete: it is not unknown for there to be problems with Export-Import. Export-Import across servers You may get stuck with a situation where you have an old server and a new server and no common tape hardware nor means of disconnecting tape drives from one system to attach to the other, in performing a traditional Export-Import. In that case, if you're running Unix, a "trick" you might try is to do the export over the network, by doing the export-import using File devices which are in reality FIFO special files, which on the sending system is being read by an 'r**' command to send the data over to the network to be caught by a program there which feeds the FIFO that Import is reading over there. On the sending and receiving systems do: mkfifo fifo On the sending system do: cat fifo | rsh othersys 'cat > fifo' And then have the sending *SM system do an Export Node to a File type device and a VOlumename being the file name of fifo, and have the receiving TSM system do an Import from a File type device where VOlumename is fifo on that system. (Note: This is an unproven concept, but should work.) Export-Import Node A method of copying a node from one ADSM server to another, retaining the same Domain and Node names. (If the node imports with Domain name which is odd to your ADSM server, you can thereafter do an 'UPDate Node' to reassign the node to a more suitable Domain in your server.) Note that this migrates the Filespace data, but the file system stays where it is; and so Export-Import is inappropriate for when you want to transfer an HSM file system from one ADSM server host to another (use cross-node restore instead). EXPQUiet Server option to control the verbosity of expiration messages: No (default) allows verbosity; Yes minimizes output. ext3 file system support The TSM 5.1.5 client for Linux provides (Linux client) support for ext3 file systems. Prior to that, one could effect backups via dsmc by defining the file systems of interest as VIRTUALMountpoint's: subsequent restoral can be performed via either dsmc or dsm. The filespace will be recorded as type EXT2 on the server. EXTend DB ADSM server command to extend the database "assigned space" to use more of the "available space". Causes a process to be created which physically formats the additional space (because it takes so long). 'Query DB' will immediately show the space being available, though the formatting has not completed. Syntax: 'EXTend DB N_Megabytes' Note that doing this may automatically trigger a database backup, with message ANR4552I, depending upon your DBBackuptrigger values. EXTend LOG TSM server command to extend the Recovery Log "assigned space" to use more of the "available space". Causes a process to be created which physically formats the additional space (because it takes so long). 
'Query LOG' will immediately show the space being available, though the formatting has not completed. Syntax: 'EXTend LOG N_Megabytes' Results in ANR0307I formatting progress messages to appear in the Activity Log. Caution: In some cases, customers have found that with Logmode Rollforward, the next db backup after the extension fails to clear the Recovery Log. Restarting the server is the only known way to clear that situation. See also: dsmserv EXTEND LOG EXTernal Operand of 'DEFine LIBRary' server command, to specify that a mountable media repository is managed by an external media management system. External Library A collection of drives managed by a media management system that is not part of ADSM, as for example some mainframe tape management system. (A 3494 that is used directly by *SM is *not* an External Library.) EZADSM Early name for the ADSM Utilities. Name obsoleted in ADSM 2.1.0. Failed Status in Query EVent output indicating that the scheduled event did occur but the client reports a failure in executing the operation, and successive retries have not succeeded. See also: Missed; Total number of objects failed FAS Fabric-Attached Storage, as employed in the NetApp brand network attached storage product. FC Fibre Channel. Current 3590 drives can be attached to hosts via Fibre Channel or SCSI. FCA Fibre Channel Adapter card. fcs0 See: Emulex LP8000 Fibre Channel Adapter FDR/UPSTREAM Backup/restore product from Innovation Data Processing, which they say is a comprehensive, powerful, high performance storage management solution for backup of most of the open systems LAN/UNIX platforms and S/390 Linux data to OS/390 or z/OS mainframe backup server. UPSTREAM will provide automated operations with fast, reliable and verifiable backups/restores/archival and file transfers that can be automatically initiated and controlled from either client or the mainframe backup server. UPSTREAM provides unique data reduction techniques including online database agents offering maximum safety with superior disaster recovery protection. Supports Windows and AIX. (The vendor's website is poor.) FFFA volume category code, 3494 Reflects a tape which was manually removed from the 3494, by opening the door and removing the tape from a cell, instead of otherwise ejecting it. To remove the Library Manager entry for the volume, to allow the cell to be reused, change the Category Code to FFFB. See: Volume Categories Fibre Channel adapter, mixing disk and tape on same one FC HBA IBM's official statement concerning the sharing of tape and disk on a single adapter, as of 2003/05: "...Using a single Fibre Channel host bus adapter (HBA) on a host server for concurrent tape and disk operations is generally not recommended. In high performance, high stress situations with dissimilar I/O devices, stability problems can arise. IBM is focused on assuring configuration interoperability. In so doing, IBM tests single HBA configurations to determine interoperability. Certain customer environments using AIX with the IBM FC Switch (2109) connecting both ESS (2105) and Magstar 3590 Tape have demonstrated acceptable interoperability. For customers that are considering sharing a single HBA with concurrent disk and tape operations, it is strongly recommended that the sales team conduct a Pre-Sales Solutions Assurance Review with members of the Techline or ATS team to review the issues and concerns.
IBM and IBM's partners will continue evaluating other configurations and make specific statements regarding interoperability as available." Ref: IBM Ultrium Device Drivers Installation and User's Guide, as one place. Synposis: You risk a hang or data corruption, not that it certainly won't work. See also: HBA FibreChannel and number of tape drives A rule of thumb is that there should not be more than three tape drives per FibreChannel path. FICON IBM term, used with S/390, for Fiber Connection of devices. A follow-on to ESCON. Ref: redbook "Introduction to IBM S/390 FICON" (SG24-5176) FID messages (3590) Failure ID message numbers, which appear on the 3590 drive panel. FID 1 These messages indicate device errors that require operator and service representative, or service representative only action. The problem is acute. The device cannot perform any tasks. FID 2 These messages report a degraded device condition. The problem is serious. The customer can schedule a service call. FID 3 These messages report a degraded device condition. The problem is moderate. The customer can schedule a service call. FID 4 These messages report a service circuitry failure. The device requires service, but normal drive function is not affected. The customer can schedule a service call. Ref: 3590 Operator Guide (GA32-0330-06) Appendix B especially. Fiducials White, light-reflective rectangles attached to the corners of tape drives and cell racks in a 3494 tape robot for the infrared sensor on the robot head to determine exactly where such elements exactly are, when in Teach mode. Ref: "IBM 3590 High Performance Tape Subsystem User's Guide" (GA32-0330-0) FILE In DEFine DEVclass, is a DEVType which refers to a disk file in a file system of the *SM server computer, which is regarded as a form of sequential access media - which implicitly means singular access, which is to say that a FILE is dedicated to a single active Session, where no other Sessions can use the FILE volume - including multi-session processes. (This is in contrast to the DISK device class, which is random access, and can be simultaneously used by multiple Sessions.) Naturally, there is no library or drive defined for FILE. FILE type volumes may be either Scratch or Defined type. For Scratch type, when the server needs to allocate a scratch "volume" (file), it creates a new file in the directory specified in the DEFine. For scratch volumes used to store client data, the file created by the server has a file name extension of .BFS. For scratch volumes used to store export data, a file name extension of .EXP is used. For example, suppose you define a device class with a DIRECTORY of /ADSMSTOR and the server needs a scratch volume in this device class to store export data, the file which the server creates might then be named /ADSMSTOR/00566497.EXP . When empty, Scratch type FILE volume size is controlled by the Devclass MAXCAPacity value: when a volume is filled, another is created and used. The number of such volumes is limited by the Stgpool MAXSCRatch value: if inadequate, you will ultimately encounter "out of space" stgpool error messages. Scratch type FILE volumes are deleted from the file system, giving back the space they occupied. Instead of Scratch, you may do DEFine Volume to pre-assign volumes in the FILE pool, in conjunction with setting MAXSCRatch=0. This allows you to attain predictable results, as in spreading I/O load over multiple OS disks. 
Properties: - FILE type devices are sequential media, and are treated in many respects like tape. - No prep (labeling, formatting) is required. - They require mountpoints, are mounted and dismounted, etc. - Volume name must be unique, as it is a file system file name. - MOUNTLimit may be used to limit the number of simultaneous volumes in use in the pool, and thus limit processes: when limit reached, new processes wait for FILEs. MOUNTLimit=DRIVES is not valid in that there are no "drives". - There should be no actual manual intervention required in their use. FILE devs may be used for a variety of purposes, including electronic vaulting. Ref: Admin Guide table "Comparing Random Access and Sequential Access Disk Devices" See also: DISK; SERVER; Sequential devices; Storage pool space and transactions See also IBM site Technote 1141492 FILE devclass performance As a sequential pseudo device, FILE benefits from several real and conceptual performance advantages, over DISK (random access) class: - There is only the need to keep track of where files start within the FILE area, rather than map blocks as in DISK class. - Access is linear, without TSM having to hop around seeking the next piece of the series. - Access is dedicated rather than shared, eliminating contention. However, there are inconvenient realities in this pretense: - The FILE area is built upon a file system's disk blocks - which can be expected to be scattered about on the disk. - The disk will often be shared, and so there is real contention involved. FILE is tape emulation: there are certain TSM functionality advantages, but don't fool yourself into believing that FILE is truly sequential. File, delete from filespace See: File Space, delete selected files File, expirable? See: SHow Versions File, find on a set of volumes SELECT VOLUME_NAME FROM CONTENTS WHERE - NODE_NAME='UPPER_CASE_NAME' AND - FILESPACE_NAME='{fsname}' AND - FILE_NAME='{path.without.fsname} {filename}' File, find when only filename known There may be times when you know the name of a file, but not what directory (or perhaps even filespace) it is in. In the TSM server you can do: SELECT * FROM BACKUPS WHERE [FILESPACE_NAME='FSname' AND] LL_NAME='TheFileName' (Remember that for client systems where filenames are case-insensitive, such as Windows, TSM stores them as UPPER CASE, so search for them the same way.) File, in storage pool When TSM stores files in storage pools, if the current storage pool sequential volume fills as the file is being written, the remainder of the file will be stored on another volume: the file will span volumes. (If the file is within an Aggregate, the Aggregate necessarily spans volumes as well.) A file cannot span Aggregates. If the file size meets or exceeds Aggregate size, the file is not Aggregated. See: Aggregated?; Segment Number File, management class bound to The management class to which any given file is bound can most readily be checked via 'dsmc q backup ...' or a GUI restore looksee on the client, or via a more consumptive Select performed on the server Backups table. File, selectively delete from *SM storage - standard method There is no supported way currently to dispose of an individual file from server storage via a server operation; but you may accomplish it from the client side, by one of the following methods: 1. The crude approach: Create an empty, dummy file of the same name, back up the empty surrogate as many times as your retention generations value, to assure that all copies of the original are gone.
(The backup of an empty file does not require storage pool space or a tape mount: it is the trivial case where all the info about the empty file can be stored entirely in the database entry.) 2. Use a special management class with null retention values... - On the server, define a special management class with VERDeleted=0 and RETOnly=0; - On the client, code an Include to tie the specific file to that special management class; - On the client, create a dummy file in the same place in the file system that the bogey file existed; - Perform a Selective Backup on that file name. *SM will then expire the "old" version of the file, and the low retention will cause Expiration to delete it the next day. File, selectively delete from *SM storage - unsupported method Unsupported and possibly dangerous: First up you need to find out the object id(s) for the object(s) that you want to delete. You can find this out from the backup or archive tables using SELECT. Then it is just a simple matter of using the DELETE OBJECT command. There is one trick though. The OBJECT_ID field from the backup and archive tables is a single number. However, the object ID required by DELETE OBJECT takes 2 numbers as parameters, an OBJECT_ID HIGH and an OBJECT_ID LOW. The HIGH value has been seen to always be zero. So, if you want to delete object 193521018 for example, just do DELETE OBJECT 0 193521018. (Note that this command is a *SM construct, as opposed to the pure SQL Delete statement.) Further warning: This command does exactly and only what it says: it deletes an object - regardless of context. It does not update all the necessary tables to fully remove an object from the TSM server. If you use this command, you risk creating a database inconsistency and thus future problems. See also: File Space, delete selected files File, split over two volumes? Do SELECT FILE_NAME FROM CONTENTS WHERE volume_name='______' AND SEGMENT<>'1/1' to find the name of the file spread over two volumes. Then do: SELECT VOLUME_NAME FROM CONTENTS WHERE FILE_NAME='see.above' AND SEGMENT='2/2' to find the other volume. File, what volume is it on? The painful way, depending upon your file population: SELECT VOLUME_NAME FROM CONTENTS - WHERE FILE_NAME='_______' Or: Restore or retrieve the file to a temp area, and see what tape was mounted. Or: Mark the storage pool Unavailable for a moment, attempt a restoral or retrieval, unmark, and look in the server Activity Log for what volume it could not get. See also: Restoral preview File(s), always back up during an incremental backup Accomplish this by creating a parallel Management Class definition pointing to a parallel Backup Copy Group definition which contains "MODE=ABSolute", and then have an Include statement for that file refer to the parallel Management Class. File age For migration prioritization purposes, the number of days since a file was last accessed. File aggregation See Aggregates File attributes, in TSM storage File attributes are not available at the server via SQL Select queries: the attribute information is only available via the same kind of client you used to back up the file, and then only in the GUI client. That is, if you used the Windows client to back up a file, only the Windows client GUI can get the file attributes. While the server certainly does store the attributes given to it by the client, the TSM server does not provide the server administrator with that view of the database. Nor is there any way to get them in their "raw" (uninterpreted) format.
This is partly because such data is something only the client admin need be concerned about, and partly because the way the attributes are stored is platform-specific such that extra server programming would be needed to properly interpret the attributes in the context of the client architecture. ODBC issues Select requests, so its view of the server DB is likewise limited (and slow). See also: dsmc Query Backup File in use during backup or archive Have the CHAngingretries (q.v.) Client System Options file (dsm.sys) option specify how many retries you want. Default: 4. File name (location) of database, recovery log Are defined within file: /usr/lpp/adsmserv/bin/dsmserv.dsk (See "dsmserv.dsk".) File names as stored in server Client operating system file names are stored in the server according to the conventions of the operating system and file system. Unix file names are case-sensitive, and so they are stored as-is. Windows, following the MS-DOS convention, has file names which are case-insensitive, and so TSM follows the convention of that environment by storing them in upper case. File server A dedicated computer and its peripheral storage devices that are connected to a local area network that stores both programs and files that are shared by users on the network. File size For migration prioritization purposes, the size of a file in 1-KB blocks. Revealed in server 'Query CONtent VolName F=D'. TSM records the size of a file as it goes to a storage pool. If the client compresses the file, TSM records the compressed size in its database. If the drive compresses the file, TSM is unaware of the compression. See also: FILE_SIZE; File attributes File size, maximum, for storage pool See "MAXSize" operand of DEFine STGpool. File size, maximum supported There was a historic limitation in the ADSM server and client that the maximum file size for backup and archive could not exceed 2 GB. That restriction was lifted in the server around 8/96; and in the client PTF 6, for platforms AIX 4.2, Novell NetWare, Digital UNIX, and Windows NT. See also: Volume, maximum size File Space (Filespace) A logical space on the *SM server that contains a group of files that were stored as a logical unit, as in backup files, archived files. A file space typically consists of the files backed up or archived for a given Unix file system, or a directory apportionment thereof defined via the Unix VIRTUALMountpoint option. In Windows, the file system defined by volume name or UNC name. File Spaces are the middle part of the unique *SM name associated with file system objects, where node name is the higher portion and the remainder of the path name is the lower portion. By default, clients can delete archive file spaces, but not backup file spaces, per server REGister Node definitions. CAUTION: The filespace name you see in character form in the server may not accurately reflect reality, in that the clients may well employ different code pages (Windows: Unicode) than the server. The hexadecimal representation of the name in Query FIlespace is your ultimate reference. File Space, backup versions 'SHow Versions NodeName FileSpace' File Space, delete in server 'DELete FIlespace NodeName FilespaceName [Type=ANY|Backup| Archive|SPacemanaged] OWNer=OwnerName' Note that "Type=ANY" removes only Backup and Archive copies, not HSM file copies.
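    (Illustrative sequence for the "File Space, delete in server"
    entry above - node and filespace names are placeholders:
        Query FIlespace MYNODE              (confirm the exact filespace name)
        Query OCCupancy MYNODE /home        (see how much data is involved)
        DELete FIlespace MYNODE /home Type=Backup
    Here Type=Backup removes only the backup data, leaving any archive
    or space-managed data in place. See the "File Space, remove" entry
    further on for the wisdom of renaming before deleting.)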
File Space, delete from client From client, dsmc Delete Filespace is a gross, overall operation which deletes all aspects of the filespace (providing that the node's ARCHDELete and BACKDELete specifications allow it). Doing DELete FIlespace from the server allows greater selectivity as to the type of data to be deleted. File Space, delete selected files TSM does not provide a means for customers to delete specific files from filespaces, as you might want to do if last night's backup sent virus-infected files to the server. TSM is a strict, policy-based data assurance facility for an enterprise, where the server administrator is provided no means for monkeying with individual files...which belong to the clients, who should be guaranteed that their data lives according to the agreed rules. One thing you can do is force individual filenames to be pushed out of the filespace via special policy specifications: Add an Include statement for these files in your client options, specifying a special management class with a COpygroup retention period of 0 (zero) days, and then run a special backup. See also: DELETE OBJECT; File, selectively delete from *SM storage File Space, explicit specification Use braces to enclose and thus isolate the file space portion of a path, as in: 'dsmc query archive -SUbdir=Yes "{/a/b}/c/*"' This will explicitly identify the file space name to TSM, keeping it from guessing wrong in cases where the file system portion of the path is not resident on the system where the command is invoked, you lack access to it, or the like. (TSM assumes that the filespace is the one with longest name which matches the beginning of the filespec. So if you have two filespaces "/a" and "/a/b", you need to specify "{/a/}somefile" to distinguish.) Ref: (Unix) Backup/Archive client manual: Understanding How TSM Stores Files in File Spaces File Space, move to another node within same server The 'REName FIlespace' cannot do this. (The product does not provide an easy means for reattributing file spaces to other nodes - largely, I think, because it would be too easy for naive customers to get into trouble in assigning a file space to an operating system which did not support the kind of file system represented in the file space.) You can perform it via the following (time-consuming) technique, which temporarily renames the sending node to the receiving node: Assume nodes A & B, and you want to move filespace F1 from A to B... 1. REName Node B B_temp 2. REName Node A B 3. EXPort Node B FILESpace=f1 FILEData=All DEVType=3590 VOL=123456 (wait for the export to complete) 4. REName Node B A 5. REName Node B_temp B 6. IMport Node B Replacedefs=No DEVType=3590 VOLumenames=123456 Alternately, you could do the converse: temporarily rename the receiving node to the exported file space node name for the purposes of receiving the import. File Space, number of files in The Query FIlespace server command does not reveal the number; and Query OCCupancy counts only the number of file space objects which are stored in storage pools. File Space, on what volumes? Unfortunately, there is no command such that you can specify a file space and ask ADSM to show you what volumes its files reside upon. You have to do 'Query CONtent VolName' on each volume in turn and look for files, which is tedious.
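    (For the "File Space, on what volumes?" entry above, a less painful
    approximation is available via the VOLUMEUSAGE table, which reports
    volumes at the filespace level rather than per file - node and
    filespace names here are placeholders, with the node name in upper
    case:
        SELECT DISTINCT VOLUME_NAME FROM VOLUMEUSAGE -
          WHERE NODE_NAME='MYNODE' AND FILESPACE_NAME='/home'
    This tells you which volumes hold some of the filespace's data, but
    not which volume holds a particular file; see also the "File Space
    restoral, preview tapes" entry which follows.)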
File Space, remove In performing filespace housekeeping, it's wise to do a Rename Filespace rather than an immediate Delete: hang on to the renamed oldie for at least a few days, and only after no panic calls, do DELete FIlespace on that renamee. Alternately, you could Export the filespace and reclaim that tape after a prudent period; but that takes time, and the panicked user would have to await an equally prolonged Import before their data could be had. If you don't exercise prudence in this fashion, recovering a filespace would involve a highly disruptive, prolonged TSM db restoral to a prior time, Export, then restoral back to current time followed by an import. No one wants to face a task like that. File Space, rename 'REName FIlespace NodeName FSname Newname' A step to be performed when an HSM-managed file system is renamed. File Space, timestamp when Backup file 'SHow Versions NodeName FileSpace' written to File Space locking TSM will lock a filespace as it performs some operations, which can result in conflicts. See IBM site TechNote 1110026. File Space name Remember that it is case-sensitive. For ADSM V3 Windows clients after 3.1.0.5, the filespace name is based on the Windows UNC name for each drive, rather than on the drive label. So if somebody changed the Windows NT networking ID, that would change the UNC name, and force a full backup again. Per the API manual Interoperability chapter: Intel platforms automatically place filespace names in uppercase letters when you register or refer them. However, this is not true for the remainder of the object name specification. File Space name, list 'Query CONtent VolName' File Space name *_OLD A filespace name like "\\acadnt1\c$_OLD" is an indication of having a Unicode enabled client where the node definition allows "Auto Filespace Rename = Yes": TSM can't change filespaces on the fly to Unicode so it renames the non-unicode filespaces to ..._old, creates new Unicode filespaces, and then does a "full" backup for the filespaces. When your retention policies permit, you can safely delete the old filespaces. See AUTOFsrename in the Macintosh and Windows B/A clients manuals. File Space number See: FSID File Space reporting From client: 'dsmc q b -SUbdir=Yes -INActive {filespacename}:/dir/* > filelist.output File Space restoral, preview tapes Old way: needed 'SHow VOLUMEUSAGE NodeName' to get the tapes used by a node, then run 'Query CONtent VolName NODE=NodeName FIlespace=FileSpaceName' on each volume in turn. ADSMv3: SELECT VOLUME_NAME FROM - VOLUMEUSAGE WHERE - NODE_NAME='UPPER_CASE_NAME' - AND FILESPACE_NAME='____' AND - COPY_TYPE='BACKUP' AND - STGPOOL_NAME='' File Spaces, abandoned Clients may rename file systems and disk volumes, thus giving the backed-up filespaces new identities and leaving behind the old filespaces for the TSM system administrator to deal with. To TSM, there is no difference between a file system which hasn't been backed up for five years and one which has not been backed up for five hours: the data belongs to the client, and the TSM server's role is to simply do the client's bidding. This is where system administration is needed... The standard treatment is to periodically look for abandoned filespaces (look at last client access time in Query Node, and Query FIlespace last backup date), notify the clients, and delete them if the client says to or no response within a reasonable time. Watch out for filespaces which are just used for archiving, such that backups are not reflected. 
See "Export" for a technique to preserve abandoned filespaces but eliminate their burden on the server db. File Spaces, report backups Not so easy: the information is in the database, though getting it is tedious. The Actlog table can be mined for ANE* messages reflecting backups (including transfer rates), and with that timestamp you can go at the Backups table to determine the filespace name, and from the filenames gotten there you could brave the Contents table to get sizes (which records aggregates or filesizes, whichever is larger). File Spaces, summarize usage 'SELECT n.node_name,n.platform_name, - COUNT(*) AS "# Filespaces", - SUM(f.capacity) AS "MB Capacity" - FROM nodes n,filespaces f - WHERE f.node_name=n.node_name - GROUP BY n.node_name,n.platform_name - ORDER BY 2,1' File spaces not backed up in 5 days SELECT FILESPACE_NAME AS "Filespace", \ NODE_NAME AS "Node Name", \ DAYS(CURRENT_DATE)-DAYS(BACKUP_END) \ AS "Days since last backup" FROM \ FILESPACES WHERE (DAYS(BACKUP_END) \ < (DAYS(CURRENT_DATE)-5)) Or: SELECT * FROM FILESPACES WHERE - CAST((CURRENT_TIMESTAMP-BACKUP_END)DAYS AS DECIMAL(3,0))>5 File State The state of a file that resides in a file system to which space management has been added. A file can be in one of three states - resident, premigrated, or migrated. See also: resident file; premigrated file; migrated file File system, add space management HSM: 'dsmmigfs add FSname' or use the GUI cmd 'dsmhsm' File system, deactivate space HSM: 'dsmmigfs deactivate FSname' management or use the GUI cmd 'dsmhsm' File system, display HSM: 'dsmdf [FSname]' or 'ddf [FSname]' File system, expanding An HSM-managed file system can be expanded via SMIT or discrete commands, while it is active - no problem. File system, force migration HSM: 'dsmautomig [FSname]' File system, Inactivate all files When a TSM client is retiring, it may be desirable to render all its files Inactive, and allow them to age out gracefully, rather than do a wholesale filespace deletion. Such an inactivation is best done by either emptying the client file system and then doing a last Incremental backup, or by creating an empty file system on the client and then temporarily renaming the TSM server filespace to match for the final Incremental. A tedious alternative is to use the client EXPire command on all the client's Active objects. In doing this, you want the retention policy to have date-based expiration, as files controled by versions-only expiration will remain in the retired filespace indefinitely. File system, query space management HSM: 'dsmmigfs query FSname' or use the GUI cmd 'dsmhsm' File system, reactivate space HSM: 'dsmmigfs reactivate FSname' management or use the GUI cmd 'dsmhsm' File system, remove space management HSM: 'dsmmigfs remove FSname' (q.v.) File system, restrict incremental Use "DOMain" option in the Client User backup to Options file to restrict incremental backup to certain drives or file systems. File system, update space management HSM: 'dsmmigfs update FSname' or use the GUI cmd 'dsmhsm' File system incompatibility The *SM client is programmed to know what kind of file systems your operating system can handle - and, by logical extension, what kinds it cannot. When you attempt to perform cross-node operations to for example inspect the files backed up by a node running a different operating system than yours, the client will not show you anything. The big problem here is the client's failure to say anything useful about its refusal, leaving the customer scratching his head. 
See also: message ANS4095E File System Migrator (FSM) A kernel extension that is mounted over an operating system file system when space management is added to the file system (over JFS, in AIX). The file system migrator intercepts all file system operations and provides any space management support that is required. If no space management support is required, the operation is passed through to the operating system (e.g., AIX) for it to perform the file system operations. (Note that this perpetual intercept adds overhead, which delays customary file system tasks like 'find' and 'ls -R'.) In the AIX implementation of FSM, HSM installation updates the /etc/vfs file to add its virtual file system entry like: fsm 15 /sbin/helpers/fsmvfsmnthelp none (HSM prefers VFS number 15.) File system restoral, preview tapes needed Unfortunately, there is no command to accomplish this. You could instead try 'SHow VOLUMEUSAGE NodeName' to get a list of the Primary Storage Pool tapes used by a node, then run 'Query CONtent VolName NODE=NodeName FIlespace=FileSpaceName' on each volume in turn to identify the volumes. In ADSMv3+ you can exploit the "No Query Restore" feature, which displays the volume name to be mounted, which you can then skip. See: No Query Restore File system size 'Query Filespace' shows its size in the "Capacity" column, and its current percent utilization under "Pct Util". File system state The state of a file system that resides on a workstation on which ADSM HSM is installed. A file system can be in one of these states: native, active, inactive, or global inactive. File system type used by a client 'Query FIlespace', "Filespace Type". Reveals types such as JFS (AIX), FSM:JFS (HSM under AIX), FAT (DOS, Windows 95), NFS3, NTFS (Windows NT), XFS (IRIX). File system types supported, Macintosh See the Macintosh Backup-Archive Clients Installation and User's Guide, topic "Supported file systems" (Table 10) File system types supported, Unix See the Unix Backup-Archive Clients Installation and User's Guide, topic "File system and ACL support". (Table 47) File system types supported, Windows See the Windows Backup-Archive Clients Installation and User's Guide, topic "Performing an incremental, selective, or incremental-by-date backup". File systems, local The "DOMain ALL-LOCAL" client option causes *SM to process all local file systems during Incremental Backup. For special, non-Backup processing, your client may need to definitively acquire the list of all local file systems. In Unix, you can use the 'df' or 'mount' commands and massage the output. A cuter/sneakier method is to have TSM tell you the file system names: have "DOMain ALL-LOCAL" (or omit DOMain) in your dsm.opt file, and then do 'dsmc query opt'/'dsmc show opt' and parse the returned DomainList. Rightly, /tmp is not included in the returned list. If you don't want to disturb your system dsm.opt file, you can simply define environment variable DSM_CONFIG to name an empty file, like: setenv DSM_CONFIG /dev/null or use the -OPTFILE command line arg (but this arg is not usable with all commands). And to avoid having that environment variable setting left in your session, you can execute the whole in a Csh sub-shell, by enclosing in parens: (setenv DSM_CONFIG /dev/null ; dsmc show opt ) You might use the PRESchedulecmd to weasel such an approach for you. File systems to back up Specify a file system name via the "DOMain option" (q.v.) or specify a file system subdirectory via the VIRTUALMountpoint option (q.v.)
and then code it like a file system in the "DOMain option" (q.v.). File systems supported See: File system types supported File systems under HSM control End up enumerated in file /etc/adsm/SpaceMan/config/dsmmigfstab by virtue of running 'dsmmigfs'. FILE_NAME ADSMv3 SQL: The full-path name of a file, being a composite of the HL_NAME and LL_NAME, like: /mydir/ .pinerc FILE_SIZE ADSMv3 SQL: A column in the CONTENTS table, supposedly reflecting the file size. Unfortunately the SQL access we as customers have to the TSM database is a virtual view, which deprives us of much information. Here, FILE_SIZE is the size of the Aggregate (of small files), not the individual file, except when the file is very large and thus not aggregated (greater than the client TXNBytelimit setting), and except in the case of HSM, which does not aggregate. So, in a typical Contents listing involving small files, you will see like "AGGREGATED: 3/9", and all 9 files having the same FILE_SIZE value, which is the size of the Aggregate in which they all reside. Only when you see "AGGREGATED: No" is the FILE_SIZE the actual size of the file. Note also that the CONTENTS table is a dog to query, so it is hopeless in a large system. See also: File attributes FILEEXit Server option to allow events to be saved to a file -- NOTE: Events generated are written to the file exit when generated, but AIX may not perform the actual physical write until sometime later - so events may not show up in the file right after they are generated by the server/client. Be sure to enable events to be saved (ENABLE EVENTLOGGING FILE ...) in addition to activating the file exit receiver. Syntax: FILEEXit [YES | NO] [APPEND | REPLACE | PRESERVE] -FILEList= TSM v4.2+ option for providing to the dsmc command a list of files and/or directories, both as a convenience and to overcome the long-imposed default restriction of 20 on the number of filespecs which may appear on the command line. The basic rules are: - one object name per line in the file; - no wildcards; - names containing spaces should be enclosed in double-quotes; - specifying a directory causes only the directory itself to be processed, not the files within it. Invalid entries are skipped, resulting in a dsmerror.log entry. Processing performance (per 4.2 Tech Guide redbook): The entries in the filelist are processed in the order they appear in the filelist. For optimal processing performance, you should pre-sort the filelist by filespace name and path. (A brief usage sketch appears a few entries below.) See also: dsmc command line limits; -REMOVEOPerandlimit Files, backup versions 'SHOW Versions NodeName FileSpace' Files, binding to management class Files are associated with a Management Class in a process called "binding" such that the policies of the Management Class then apply to the files. Binding is done by: Default management class in the Active policy set. Backup: DIRMc option Archive: ARCHMc option on the 'dsmc archive' command (only) INCLUDE option of an include-exclude list Using a different management class for files previously managed by another management class causes the files to be rebound to the rules of the new management class - which can cause the elimination of various inactive versions of files and the like, depending upon the change in rules; so be careful in order to avoid disruption. Ref: Admin Guide Files, maximum transferred as a group between client and server "TXNGroupmax" definition in the server options file.
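The -FILEList usage sketch promised above (the list location and file names are hypothetical): place one object per line in a plain file, quoting names which contain spaces, e.g. /tmp/mylist.txt containing:
          /home/alice/report.txt
          "/home/alice/status notes.txt"
and then invoke, for example, 'dsmc selective -FILEList=/tmp/mylist.txt'. Per the processing note above, pre-sorting the list by filespace name and path helps performance; and remember that naming a directory in the list processes only the directory object itself, not the files within it.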
Files, number of in storage pools, See: Query OCCupancy query Files sent in current or recent Sometimes, a current or recent session client session had some impact on the server, and the TSM administrator would like to identify the particulars of the files involved. It is usually well known what TSM storage pool volume they went to, and so a simple way to report them is: 'Query CONtent VolName COUnt=-N F=D' where -N is some likely number which will encompass the recently arrived files of interest - which is most likely to work when the files are large. This may be even simpler if you have a disk storage pool as the initial reception area for Archive, Backup, or HSM client operation. This technique is a handy way to spot-check a set of tapes and see what they were last used for. (The Query Content command is targeted at a volume and limited in scope, so no server overhead, and results are nearly instantaneous.) Files in a volume, list 'Query CONtent VolName ...' Files in database See: Objects in database Fileserver and user-executed restorals Shops may have a fileserver and dependent workstations, perhaps of differing architectures. Backups occur from the fileserver, but how to make it possible for users - who are not on the fileserver - to perform their own restorals? Possibilities: - For each user, have the fileserver do a 'dsmc SET Access' to allow the workstation users to employ -FROMNode and -FROMOwner to perform restorals to their workstations...whence the data would flow back to the server over NFS, which may be tolerable. - Allow rsh access to the fileserver so that via direct command or interface the users could invoke ADSM restore. - Fabricate a basic client-server mechanism with a root proxy daemon on the fileserver performing the restoral for the user, and feeding back the results. (A primitive mechanism could even be mail-based, with the agent on the fileserver using procmail or the like to receive and operate upon the request.) - Have the fileserver employ two different nodenames with ADSM: one for its own system work, and the other for the backup of those client user file systems. This would allow you to give the users a more innocent, separate password which they could use (or embed in a shell script you write for them) to perform ADSM restorals from their workstations using the -nodename option. The data in this case would flow to the ADSM client on the workstation, and then back to the fileserver via NFS, which may be tolerable. The nuisance here is setting up and maintaining ADSM client environments on the workstations...which could be made easier if you further exploited your NFS to have the executables and options files shared from the fileserver (where they would reside, but could not be executed because of the server being Sun and client code being AIX, say). -FILESOnly ADSMv3+ client option, as used with Restore and Retrieve, to cause the operation to bring back only files, not their accompanying directories. However, in Archive, directories in the path of the source file specification *will* be archived. During Restore and Retrieve, surrogate directories will be constructed to emplace the original structure of the file collection. Ref: TSM 4.2 Technical Guide See also: Restore Order; V2archive Filespace See: File Space Filespace number See: FSID Filespace Type Element of 'Query FIlespace' server command, reflecting the type of file system which ADSM found when it was *first* backed up. 
(Change from, for example, FAT to NTFS, and there will be no change in Filespace Type.) Sample types: Platform: JFS AIX FSM:JFS AIX HSM ext2 LINUX NFS3 IRIX XFS IRIX FAT32 Windows 95 NTFS WinNT AUTOFS IRIX See also: Platform FileSpaceList Entry in ADSM 'dsmc Query Options' or TSM 'dsmc show options' report which reveals the Virtual Mount Points defined in dsm.sys. Names are reported under this label if defined as a Virtual Mount Point *and* something is actually there. As such this is a good way of determining if an incremental backup will work on this name. FILESPACES *SM SQL table for the node filespace. Columns: NODE_NAME, FILESPACE_NAME, FILESPACE_TYPE, CAPACITY, PCT_UTIL, BACKUP_START, BACKUP_END See also: Query FIlespace for field meanings. FILETEXTEXIT TSM server option to specify a file to which enabled events are routed. Each logged event is a fixed-size, readable line. Syntax: FILETEXTEXIT [No|Yes] File_Name REPLACE|APPEND|PRESERVE Parameters: Yes Event logging to the file exit receiver begins automatically at server startup. No Event logging to the file exit receiver does not begin automatically at server startup. When this parameter has been specified, you must begin event logging manually by issuing the BEGIN EVENTLOGGING command. file_name The name of the file in which the events are stored. REPLACE If the file already exists, it will be overwritten. APPEND If the file already exists, data will be appended to it. PRESERVE If the file already exists, it will not be overwritten. Filling Typical status of a tape in a 'Query Volume' report, reflecting a sequential access volume is currently being filled with data. (In searching the manuals, note that the phrase "partially filled" is often used instead of "filling".) Note that this status can pertain though the volume shows 100% utilized: the utilization has reached the estimated capacity but not yet the end of the volume. Note that "Filling" will not immediately change to "Full" on a filled volume if the Segment at the end of the volume spans into the next volume: writing of the remainder of the segment must complete on the second volume before the previous volume can be declared "Full". This necessitates the mounting and writing of a continuation volume, which might be thwarted by volume availability (MAXSCRatch, etc.). Note also that it is not logical for a non-mounted Filling status tape to be used when the current tape fills with a spanned file: files which span volumes must always continue at the front of a fresh volume. It would not be logical for a file to span from the end of one volume into the midst of another volume. Thus, a Filling tape will most often be used when an operation begins, not as it continues. Historically, *SM has always keep as many volumes in filling status as you have mount points defined to the device class for that storage pool. So if your device class has a MOUNTLimit of 2, you'll always see 2 volumes in filling status (barring volumes that encounter an error). So when one Filling tape goes full, it would start another one. Advisory: Your scratch pool capacity can dwindle faster than you would expect, by tapes in Filling status having just a small amount of data on them, perhaps never again called upon for further filling. 
This can be caused by a worthy Filling tape dismounting when an operation like Move Data starts: it would otherwise use that Filling tape, but because it is dismounting, *SM instead uses a fresh tape, and that new tape will probably be used for further operations, leaving the old Filling tape essentially abandoned; so your usable tape complement shrinks. Reclamation: Filling volumes can be reclaimed as readily as Full volumes, per the reclaim threshold you set. Ref: Admin Guide, chapter 8, How the Server Selects Volumes with Collocation Enabled; ... Disabled See also: Full; Pct Util Firewall and idle session A firewall between the TSM client and server can result in the session being disconnected after, say, an hour of idle time (as in a long MediaWait). The real solution, of course, is to resolve the wait problems. You might also set the TCP keepalive interval to below the value of your firewall timeout before a session starts, or changing the SO_KEEPALIVE on the socket for a current session (if possible). Firewall support For web-based access, TSM 4.1 introduced the option WEBPorts. The client scheduler operating in Prompted mode does not work when the server is across a firewall; but it does work when operating in Polling mode. To enable the Backup-Archive client, Command Line Admin client, and the Scheduler (running in polling mode) to run outside a firewall, the port specified by the server option TCPPort (default 1500) must be opened within the firewall. The server cannot log events to a Tivoli Enterprise Console (T/EC) server across a firewall. Consider investigating VPN methods or SAN in general. Ref: Quick Start manual, "Connecting with IBM Tivoli Storage Manager across a Firewall". See: Port numbers, for ADSM client/server; SESSIONINITiation; WEBPorts Firmware IBM term for microcode. Firmware, for 3570, 3590 May be in a secure directory on the ADSM web site, index.storsys.ibm.com. (login:code3570 passwd: mag5tar). Fixed-home Cell 3494 concept wherein a cartridge is assigned to a fixed storage cell: its home will not change as it is used. This is necessitated if the Dual Gripper feature is not installed. fixfsm (HSM) /usr/lpp/adsm/bin/fixfsm, a ksh script for recreating .SpaceMan files when there is a corruption or loss problem in that HSM control area, including loss of the whole directory. Ref: Redbook "Using ADSM HSM", page 52 and appendix D. Fixtest Synonymous with "patch"; indicates that the code has not been fully tested. If your TSM version has a nonzero value in the 4th part of the version number (i.e. the '8' in '5.1.5.8') then it is a fixtest (or patch). See also: Version numbering FlashCopy Facility on the IBM ESS (Shark) which purports to facilitate backups by creating a backup image of a file system. It performs the operation by making a block-by-block copy of an entire volume. The IBM doc talks of having to unmount the file system before taking the copy - which is impossible in most sites - but that is actually an advisory to ensure the consistency of the involved data. Floating-home Cell 3494 Home Cell Mode wherein a cartridge need not be assigned to a fixed storage cell: its home will change as it is used. This is made possible via the Dual Gripper feature. See: Home Cell Mode FMR Field Microcode Replacement, as in updating the firmware on a drive. In the case of a tape drive, when the CE does this he/she arrives with a tape (FMR tape); but it can often be done via host command. .fmr Filename suffix for FMR (q.v.). IBM changed to a .ro suffix in 2003. 
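Relating to the Filling advisory above: to spot Filling volumes holding only a little data (candidates for the quiet abandonment described there), a SELECT along these lines can help - a sketch only; it assumes the PCT_UTILIZED column of the VOLUMES table, the same table used for the "Full volumes" report elsewhere in this document:
          SELECT VOLUME_NAME, STGPOOL_NAME, PCT_UTILIZED -
            FROM VOLUMES WHERE STATUS='FILLING' -
            ORDER BY PCT_UTILIZED
Volumes which show up with very low utilization and are never again mounted for writing are the ones eroding your scratch pool.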
Folder separator character ':'. (Macintosh) See also: "Directory separator" for Unix, DOS, OS/2, and Novell. FOLlowsymbolic Client User Options file (dsm.opt) (or 'dsmc -FOLlowsymbolic') option to specify whether ADSM is to restore files to symbolic directory links, and to allow a symbolic link to be used as a Virtual Mount Point (q.v.). Default: No Implications in restoring a symbolic link which pointed to a directory, and the symlink already exists: If FOLlowsymbolic=Yes, the symbolic link is restored and overlays the existing one; else ADSM displays an error msg. You may also be thinking of ARCHSYMLinkasfile. FOLlowsymbolic, query ADSM 'dsmc Query Options' or TSM 'show options" and look for "followsym". Font to use with the dsm GUI It ignores the -fn flag. Use the work-around of using X resources to set the font the GUI should use. Try invoking the GUI like this: dsm -xrm '*fontList: fixed' This lets the GUI come up with the font "fixed" being used for all panels. To use another font, simply replace "fixed" with that font's name (the command 'xlsfonts' gives a list of fonts available on your system). Alternatively, you can put a line like "dsm*fontList: fixed" into your .Xdefaults file ("dsm" is the GUI's X class name), and source this file using 'xrdb -merge ~/.Xdefaults"'. This sets the default font to be used for all dsm sessions. forcedirectio Solaris UFS mount option: For the duration of the mount, forced direct I/O will be used - data is transferred directly between user address space and the disk. If the filesystem is mounted using noforcedirectio (the default), data is buffered in kernel address space when the user address space application moves data. forcedirectio is a performance option that is of benefit only in large sequential data transfers. Reported value: One customer saw a throughput enhancement factor of 5 - 15. Ref: Solaris mount_ufs man page Format See: Dateformat; -DISPLaymode; MessageFormat; Numberformat; Timeformat Format= Operand of many TSM queries, to specify how much information to return: Standard The default, to return a basic amount of information. Detailed To return full information. FORMAT= Operand of DEFine DEVclass, to define the manner in which TSM is to tell the DEVType device to operate. For example, a 3590 drive can be specified to operate in either basic mode or compress mode. Advice: Avoid the temptation to employ the "FORMAT=DRIVE" specification, available for many device types, which says to operate at the highest format of which the device is capable. This is non-specific, and has historically been the subject of defect reports where it would not yield the highest operating format. Specify exactly what you want, to get what you want. Format command /usr/lpp/adsmserv/bin/dsmfmt Free backup products See: Amanda http://www.backupcentral.com/ free-backup-software2.html FREQuency A Copy Group attribute that specifies the minimum interval, in days, between successive backups. Note that this unit refers to day thresholds, not 24-hour intervals. -FROMDate (and -FROMTime) Client option, as used with Restore and Retrieve, to limit the operation to files Backed up or Archived on or after the indicated date. Used on RESTORE, RETRIEVE, QUERY ARCHIVE and QUERY BACKUP command line commands, usually in conjunction with -TODATE (and -TOTIME) to limit the files involved. The operation proceeds by the server sending the client the full list of files, for the client to filter out those meeting the date requirement. 
A non-query operation will then cause the client to request the server to send the data for each candidate file to the client, which will then write it to the designated location. In ADSMv3, uses "classic" restore protocol rather than No Query Restore protocol. Contrast with "FROMDate". See: No Query Restore /FROMEXCSERV=server-name TDP Exchange option for doing cross-Exchange server restores... where you are doing a restore from a different Exchange Server.. and need to specify the Exchange Server name that the backup was taken under. -FROMNode Used on ADSM client QUERY ARCHIVE, QUERY BACKUP, Query Filespace, QUERY MGMTCLASS, RESTORE, and RETRIEVE command line to display, retrieve, or restore files belonging to another user on another node. (Root can always access the files of other users, so doesn't need this option.) The owner of the files must have granted you access by doing 'DSMC SET Access'. Contrast with -NODename, which gives you the ability to gain access to your own files when you are at another node. The Mac 3.7 client README advises that using FROMNode with a large number of files incurs a huge performance penalty, and advises using NODename instead. dsm GUI equivalent: Utilities menu, "Access another node" Related: -FROMOwner. See also: VIRTUALNodename -FROMOwner Used on QUERY ARCHIVE, QUERY BACKUP, QUERY FILESPACE, RESTORE, and RETRIEVE, client commands, when invoked by an ordinary user, to operate upon files owned by another user. Wildcard characters may be used. Root can always access the files of other users, but would want to use this option to limit the operation to the files owned by this user, as in querying just that user's archive files in a file system. The owner of the files must have granted you access by doing 'DSMC SET Access'. As of ADSM3.1.7, non root users can specify -FROMOwner=root to access files owned by the root user if the root user has granted them access. Related: -FROMNode. -FROMTime (and -TOTime) Client option, used with Restore and Retrieve, to limit the operation to files backed up on or after the indicated time. Used on RESTORE, RETRIEVE, QUERY ARCHIVE and QUERY BACKUP command line commands, usually in conjunction with -FROMDate (and -TODate) to limit the files involved. The operation proceeds by the server sending the client the full list of files, for the client to filter out those meeting the time requirement. A non-query operation will then cause the client to request the server to send the data for each candidate file to the client, which will then write it to the designated location. FRU Field-Replaceable Unit. A term that hardware vendors use to describe a part that can be replaced "in the field": at the customer site. FSID (fsID) File Space ID: a unique numeric identifier which the server assigns to a filespace, under a node, when it is introduced to server storage. (FSIDs are not unique across nodes - only within nodes.) Is referenced in commands like DELete FIlespace, REName FIlespace. The fsID of a file space can be displayed via the GUI: on the main window, select the File details option from the View menu. May appear in messages ANR0800I, ANR0802I, ANR4391I. fslock.pid A file in the .SpaceMan directory of an HSM-managed file system, containing the ASCII PID of the current or last dsmreconcile process. FSM See: File System Migrator Fstypes Windows option file or command line option to specifiy which type of file system you want to see on the ADSM server when you view file spaces on another node. 
Use this option only when you query, restore, or retrieve files from another node. Choices: FAT File Allocation Table drives. RMT-FAT Remote FAT drives. HPFS High-Performance File System drives (OS/2 and Windows NT). RMT-HPFS Remote HPFS drives. NTFS Windows NT File System drives. RMT-NTFS Remote NTFS drives. FTP site index.storsys.ibm.com (Better to use direct FTP than WWW.) Go into directory "adsm". Full Typical status of a tape in a 'Query Volume' report, reflecting a sequential access volume which has been used to the point of having filled. Over time, you will see the Pct Util for the volume drop. This reflects the logical deletion of files on the volume per expiration rules. But the very nature of serial media is such that there is no such thing as either the physical deletion of files in the midst of the volume nor re-use of space in its midst. So the physical tape remains unchanged as the logical Pct Util value declines: in real, physical terms, the tape is still full as per having been written to the End Of Tape marker. Hence, the volume will retain the "Full" status until either all files on it expire, or you reclaim it at a reasonably low percentage. Remember that you do not want to quickly re-use volumes that became full, but rather want to age them, both to even out the utilization of tapes in your library, and to assure that physical data is still in place should you be forced to restore your *SM database to earlier than latest state. Msgs: When tape fills: ANR8341I End-of-volume reached... See also: Filling; Pct Util Full backup See: Backup, full Full volumes, report avg capacity by storage pool SELECT STGPOOL_NAME AS STGPOOL, CAST(MEAN(EST_CAPACITY_MB/1024) AS DECIMAL(5,2)) AS GB_PER_FULL_VOL FROM VOLUMES WHERE STATUS='FULL' GROUP BY STGPOOL_NAME Fuzzy backup A backup version of an object that might not accurately reflect what is currently in the object because ADSM backed up the object while the object was being modified. See: SERialization Fuzzy copy An archive copy of an object that might not accurately reflect what is currently in the object because ADSM archived the object while the object was being modified. GE Excessive abbreviation of GigE, which is Gigabit Ethernet. GEM Tivoli Global Enterprise Manager. GENerate BACKUPSET TSM3.7 server command to create a copy of a node's current Active data as a single point-in-time amalgam. The output is intended to be written to sequential media, typically of a type which can be read either on the server or client such that the client can perform a 'dsmc REStore BACKUPSET' either through the TSM server or by directly reading the media from the client node. Syntax: 'GENerate BACKUPSET Node_Name Backup_Set_Name_Prefix [*|FileSpaceName[,FileSpaceName]] DEVclass=DevclassName [SCRatch=Yes|No] [VOLumes=VolName[,Volname]] [RETention=365|Ndays|NOLimit] [DESCription=___________] [Wait=No|Yes]' It is wise to set a unique DESCription value to facilitate later identification and searching. See: Backup Set; dsmc REStore BACKUPSET; Query BACKUPSETContents GENERICTAPE DEVclass DEVType for when the server does not recognize either the type of device or the cartridge recording format - never the best situation. See also: ANS1312E Ghost (Norton product) and TSM You can use Ghost as a quick way to install the recovery system that is used to run TSM restores of the real system.
Sites that use Ghost this way generally put the recovery system and its TSM client software in a separate partition rather than non-standard folders in the production partition. GIGE Nickname for Gigabit Ethernet. global inactive state The state of all file systems to which space management has been added when space management is globally deactivated for a client node. When space management is globally deactivated, HSM cannot perform migration, recall, or reconciliation. However, a root user can update space management settings and add space management to additional file systems. Users can access resident and premigrated files. GPFS General Parallel File System (GPFS) is the product name for Almaden's Tiger Shark file system. It is a scalable cluster file system for the RS/6000 SP. Tiger Shark was originally developed for large-scale multimedia. Later, it was extended to support the additional requirements of parallel computing. GPFS supports file systems of several tens of terabytes, and has run at I/O rates of several gigabytes per second. http://www.almaden.ibm.com/cs/gpfs.html Grace period The default retention period for files where the management class to which they were bound disappears, and the default management class does not have a copy group for them. Per DEFine DOMain. See: ARCHRETention, BACKRETention Grant Access You mean SET Access. See: dsmc SET Access GRant AUTHority *SM server command to grant an administrator one or more administrative privilege classes. Syntax: 'GRant AUTHority Adm_Name [CLasses=SYstem|Policy|STorage| Operator|Analyst|Node] [DOmains=domain1[,domain2...]] [STGpools=pool1[,pool2...]] [AUTHority=Access|Owner] [DOmains=____|NOde=____]' When you specify CLASSES=POLICY, you specify a list of policy domains the admin id can control. That admin can do things ONLY for the nodes in the specified domain(s): lock/unlock, register, associate, change passwords. But the admin won't be allowed to do any things on the server end, like checkin/checkout, manage storage pools, or mess with admin schedules, or even create new domains; you need SYSTEM for that. A limitation with POLICY is the inability to Cancel sessions for the nodes in its domain. See also: Query ADmin; REGister Admin; REMove Admin; UPDate Admin Graphical User Interface (GUI) A type of user interface that takes advantage of a high-resolution monitor, includes a combination of graphics, the object-action paradigm, and the use of pointing devices, menu bars, overlapping windows, and icons. See: dsm, versus dsmc Gripper On a tape robot (e.g., 3494) is the "hand" part, carried on the Accessor, which grabs and holds tapes as they are moved between storage cells and tape drives. See also: Accessor Gripper Error Recovery Cell 3494: Cartridge location 1 A 3 if Dual Gripper installed; 1 A 1 if Dual Gripper *not* installed. Also known as the "Error Recovery Cell". Ref: 3494 Operator Guide. Group By SQL operator to specify groups of rows to be formed if aggregate functions (AVG, COUNT, MAX, SUM, etc.) are used. SQL clause that allows you to group records (rows) that have the same value in a specified field and then apply an aggregate function to each group. 
For example, here we report the number of files and megabytes, by node, in the Occupancy table, for primary storage pools: SELECT NODE_NAME, SUM(NUM_FILES) as - "# Files", SUM(PHYSICAL_MB) as - "Physical MB" FROM OCCUPANCY WHERE - STGPOOL_NAME IN (SELECT DISTINCT - STGPOOL_NAME FROM STGPOOLS WHERE - POOLTYPE='PRIMARY') GROUP BY - NODE_NAME' The Group By causes the Sums to occur for each stgpool in turn. Groups Client System Options file (dsm.sys) option to name the Unix groups which may use ADSM services. It is a means of restricting ADSM use to certain groups. Default: any group can use ADSM. GroupWise Novell Nterprise product for communication and collaboration, a principal component being mail. Its backup is perhaps best accomplished with St. Bernard's Open File Manager. One thing you want to be careful of with Groupwise is how your policies are set up... It has been reported that GroupWise stores its messages in uniquely named files - which it would periodically reorganize, deleting the old uniquely named files and creating new ones. See also GWTSA. GUI Graphical User Interface; as opposed to the CLI or WCI. GUI, control functionality The TSM client GUI, in Windows, may be configured to limit the services available to the end user. See IBM site Solution swg21109086. GUI client Refers to the window-oriented client interface, rather than the command-line interface. Note that the GUI is a convenience facility: as such its performance is inferior to that of the command line client, and so should not be used for time-sensitive purposes such as disaster recovery. (So says the B/A Client manual, under "Performing Large Restore Operations".) As of 2004, the GUI is currently designed to query the server for all jobs when the GUI starts up, and then depend on events from the server to keep in sync when jobs are printed and new jobs are submitted. It is possible for the GUI to get out of sync with reality: the GUI will remove a job instance from its repertoire if a query for the job fails to find it (which additionally keeps 5010-505 "cannot find" messages out of the server error.log). GUI vs. CLI By design, the GUI client is different in its manner of operation than the CLI client, because the nature of the GUI means that it needs to provide responses faster. Before v3, the GUI worked much like the CLI, obtaining all information about the area being queried before returning any. That was problematic, in the obvious delay, and client memory utilization (where a *SM client schedule process itself may be hanging on to a lot of memory). As of v3, the GUI asked the server for only as much data as it needed to fulfill its immediate display request (a top level set of directories, or the immediate contents of a selected directory). That discipline, however, makes PIT restorals problematic, in that the GUI's pursuit of just what exists within the PIT timeframe can mean that it will not obtain and display directories which you know to be involved, because they had been backed up outside the timeframe. (APAR IC24733 addresses this artifact, to say that it is working as designed.) Thus, for PIT restorals, you may be better off using the CLI. GUID (TSM 4.2+) The Globally Unique IDentifier (GUID) associates a client node with a physical system. The GUID is (currently) not used for functional purposes, but is only there for potential reporting purposes. 
When you install the Tivoli software: On Unix, the tivguid program is run to generate a GUID which is stored in the /etc/tivoli directory; On Windows, the tivguid.exe program is run to generate a GUID which is stored in the Registry. The GUID is a 16-byte code that identifies an interface to an object across all computers and networks. The identifier is unique because it contains a time stamp and a code based on the network address that is hard-wired on the host computer's LAN interface card. The GUID for a client node on the server can change if the host system machine is corrupted, if the file entry is lost, or if a user uses the same node name from different host systems. You can perform the following functions from the command line: - Create a new GUID 'tivguid -Create' - View the current GUID 'tivguid -Show' - Write a specific value - Create another GUID even if one exists. Do 'tivguid -Help' for usage. Ref: Unix client manual (body and glossary); IBM site entry swg21110521 GUIFilesysinfo Client option that determines whether information such as filesystem capacity is displayed on the initial GUI screen for all filesystems (GUIF=All, the default), or only for local filesystems (GUIF=Local). GUIF=Local is useful if the remote filesystems displayed are often unreachable, because ADSM must wait for the remote filesystem information or a timeout, which may delay the appearance of the initial GUI screen. This option can be specified in dsm.sys or dsm.opt, or on the command line when invoking the GUI. GUITREEViewafterbackup Specifies whether the client is returned to the Backup, Restore, Archive, or Retrieve window after a successful operation completes. Specify where: Client options file (dsm.opt) and the client system options file (dsm.sys). Possibilities: No - default; Yes. GWTSA GroupWise Target Service Agent - a NetWare TSA module used to make an online backup of GroupWise. See also: GroupWise HALT ADSM server command to shut down the server. This is an abrupt action. If possible, perform a Disable beforehand and give time for prevailing sessions to finish. Unix alternative for when you are locked out and want to halt the server cleanly is to send the dsmserv process a SIGTERM signal: 'kill -15 PID' ( = 'kill -TERM PID') ( = 'kill PID'), where PID is the process ID of the dsmserv process. See also: Server "hangs"; Server lockout Hard drives list See: File systems, local Hard links (hardlinks) Unix: When more than one directory entry in a file system points to the same file system inode, as achieved by the 'ln' command. The directory entries are just names which associate themselves with a certain inode number within the file system. They are equivalent, which is to say that one is not the "original, true" entry and that the later one is "just a link". The "hard links" condition is known only because the inode block contains a count of links to the inode. When one of its multiple names is deleted, the link count is reduced by one, and the inode goes away only if the link count reaches zero. When you back up a file that contains a hard link to another file, TSM stores both the link information and the data file on the server. If you back up two files that contain a hard link to each other, TSM stores the same data file under both names, along with the link information. When you restore a file that contains hard link info, TSM attempts to reestablish the links. If only one of the hard-linked files is still on your workstation, and you restore both files, TSM hard-links them together.
Of course, if the hard link was broken since the backup such that the multiple names became files unto themselves, then it will not be possible to restore the hardlink name. Ref: Using the Backup-Archive Clients manual, "Understanding How Hard Links Are Handled". HAVING SQL operand, as in: "... HAVING COUNT(*)>10" HBA Host Bus Adapter, a term commonly used with Fibre Channel to refer to the interface card. Performance/impact: FibreChannel is high speed traffic, where an HBA such as a 6228 can eat the entire available bandwidth of a PCI bus; so each card should be on a separate PCI bus, with very little else on the bus. IBM recommends: "It is highly recommended that Tape Drives and Tape Libraries be connected to the system on their own host bus adapter and not share with other devices types (DISK, CDROM, etc.)." The redpaper IBM TotalStorage: FAStT Best Practices Guide further says: "It is often debated whether one should share HBAs for disk storage and tape connectivity. A guideline is to separate the tape backup from the rest of your storage by zoning and move the tape traffic to a separate HBA and create an separate zone. This avoids LIPa resets from other loop devices to reset the tape device and potentially interrupt a running backup." HDD Hard Disk Drive Header files for 3590 programming /usr/include/sys/mtio.h /usr/include/sys/Atape.h Helical scan tape techology Magnetic tape is tightly wound around and passes over a drum, at an angle. Inside the drum and protruding from a slot cut into it is a rotating arm with read/write heads on both ends of the arm. The heads contact the tape in "slash" strokes, the effect being like a helix. This recording technique allows higher density than if the tape were linearly passed over a single head: it is most commonly found used in VCRs, where analog video frames are conveniently recorded in the slashes. The technique was extended to data recording in 8mm form - where it achieved notoriety because of high error rates and unreadable tapes. Helical scanning is rough on tapes, resulting in oxide shedding and head clogging: frequent cleaning is essential. In contrast, linear tape technology does not employ sharp angles or mechanically active heads, and so its tapes enjoy much longer, reliable lives. As found in Exabyte Mammoth and Sony AIT (both 8mm tape technologies). Help files for client May have to do: 'setenv HELP /usr/lpp/adsm/bin' Hidden directory See: .SpaceMan Hierarchical storage management client A program that runs on a workstation or file server to provide space management services. It automatically migrates eligible files to ADSM storage to maintain specific levels of free space on local file systems, and automatically recalls migrated files when they are accessed. It also allows users to migrate and recall specific files. Hierarchy See: Storage Pool Hierarchy High Capacity Output Facility 3494 hardware area, located on the inside of the control unit door, consisting of a designated column of slots within the 3494 from which the operator can take Bulk Ejects by opening the door. To change it, you need to perform a Teach Current Configuration, which involves going through a multi-step configuration review, followed by a 3494 reboot; then you need to force a partial reinventory, for the Library Manager to review the cells involved. See also the related Convenience I/O Station. High Performance Cartridge Tape The advanced cartridges used in the IBM 3590 tape drive. 
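To flesh out the HAVING fragment above into a complete statement - a sketch using the FILESPACES table described under its own entry - report only those nodes having more than 10 filespaces:
          SELECT NODE_NAME, COUNT(*) AS "# Filespaces" -
            FROM FILESPACES GROUP BY NODE_NAME -
            HAVING COUNT(*)>10
HAVING filters the groups formed by GROUP BY after the aggregate has been computed, whereas WHERE filters individual rows beforehand.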
High threshold HSM: The percentage of space usage on a local file system at which HSM automatically begins migrating eligible files to ADSM storage. A root user sets this percentage when adding space management to a file system or updating space management settings. Contrast with low threshold. See "dsmmigfs". High-level address Refers to the IP address of a server. See also: Low-level address; Set SERVERHladdress; Set SERVERLladdress HIghmig Operand of 'DEFine STGpool', to define when ADSM can start migration for the storage pool, as a percentage of the storage pool occupancy. Can specify 1-100. Default: 90. To force migration from a storage pool, use 'UPDate STGpool' to reduce the HIghmig value (with HI=0 being extreme). See also: Cache; LOwmig HIPER Seen in IBM APARs; refers to a situation which is High Impact, PERvasive. Hivelist See: BACKup REgistry Hives High level keys HL_NAME SQL: The high level name of an object, being the directory in which the object resides. Simply put, it is everything between the filespace name and the file name, which is to say all the intervening directories. In most cases, the FILESPACE_NAME will not have a trailing slash, the HL_NAME will have a leading and trailing slash, and the LL_NAME will have no slashes. Unix examples: For file system /users, directory name /users: FILESPACE_NAME="/users", HL_NAME="/", LL_NAME="". For file system /users, directory name /users/mgmt/: FILESPACE_NAME="/users", HL_NAME="/", LL_NAME="users". For file system /users, file name /users/mgmt/phb: FILESPACE_NAME="/users", HL_NAME="/mgmt/", LL_NAME="phb". For file system filename /usr/docs/Acrobat3.0/Introduction.pdf the FILESPACE_NAME="/usr/docs", HL_NAME="/Acrobat3.0/", LL_NAME="Introduction.pdf". Note: The Contents table has a FILE_NAME column which is a composite of the HL_NAME and LL_NAME, like: /mydir/ .pinerc which makes it awkward to use the output of that table to further select in the Backups table, for example. See also: FILE_NAME; LL_NAME HLAddress REGister Node specification for the client's IP address, being a hard-coded specification of the address to use, as opposed to the implied address discovered by the TSM server during client sessions (which may be specified on the client side via the TCPCLIENTAddress option). See also: LLAddress; IP addresses of clients; SCHEDMODe PRompted Hole in the tape test An ultimate test of tape technology error correction ability: a (1.25mm) hole is punched through the midst of data-laden tape, and then the tape is put through a read test. 3590 tape technology passes this extreme test. ("Magstar Data Integrity Tape Experiment") Ref: Redbook "IBM TotalStorage Tape Selection and Differentiation Guide"; http://www4.clearlake.ibm.com/hpss/Forum /2000/AdobePDF/Freelance-Graphics-IBM- Tape-Solutions-Hoyle.pdf Home Cell Mode 3494 concept determining whether cartridges are assigned to fixed storage slots (cells) or can be stored anywhere after use (Floating-home Cell). Query via 3494 Status menu selection "Operational Status". Home Element Column in 'Query LIBVolume' output. See: HOME_ELEMENT HOME_ELEMENT TSM DB: Column in LIBVOLUMES table containing the Element address of the SCSI library slot containing the tape. (Does not apply to libraries which contain their own supervisor, such as the 3494, where TSM does not physically control actions.) Type: Integer Length: 10 See also: Element Host name You mean "Server name" or "Node name"? (q.v.) 
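Putting the HL_NAME decomposition above to work - a sketch of locating one backed-up file in the BACKUPS table (hypothetical node, using the example path from the HL_NAME entry; node names are stored in upper case, and HL_NAME carries its leading and trailing slashes):
          SELECT BACKUP_DATE, STATE FROM BACKUPS -
            WHERE NODE_NAME='PAYROLL01' -
            AND FILESPACE_NAME='/users' -
            AND HL_NAME='/mgmt/' AND LL_NAME='phb'
Queries against the BACKUPS table are expensive on a large server, so constrain them as tightly as this whenever possible.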
Hot backup Colloquial term referring to performing a backup on an object, such as a database, which is undergoing continual updating as a conventional, external backup of that object proceeds. The restorability of the object backed up that way is questionable at best. The more reasonable approach involves performing the backup from inside the object, as for example a database API which can capture data for backup but do so in conjunction with ongoing processing. Another approach is an operating system API which performs continual, real-time backup. HOUR(timestamp) SQL function to return the hour value from a timestamp. See also: MINUTE(); SECOND() HOURS See: DAYS HP-UX file systems HP-UX uses the Veritas File System (VxFS), also referred to as the Journaled File System (JFS). VxFS provides Logical Volume Manager (LVM) tools to administer physical disks and allow administrators to manage storage assets. In general, one or more physical disks are initialized as physical volumes and are allocated to Volume Groups. Storage from the Volume Group is made available to a host by creating one or more Logical Volumes. Once allocated, Logical Volumes can be used for HP-UX file systems or used as raw (logical) devices for DBMS. Information about the Volume Group and Logical Volume are stored on each physical volume. HPCT High Performance Cartride Tape. See: 3590 'J' Contrast with CST and ECCST. See also: 3590 'J'; EHPCT HSM Hierarchical Storage Management. Currently called "TSM for Space Management". A TSM client option available in AIX and Solaris. Its nature calls for operating system modifications, typically in the form of kernel extensions. (Was once available for SGI as well, but that was withdrawn. IBM intended HSM for many platforms, but as they approached the task they found that various parties were being licensed to likewise modify the operating system to their needs: in that this uncoordinated approach would lead to inevitable conflicts, IBM reduced its ambitions.) Started by /etc/inittab's "adsmsmext" entry invoking /etc/rc.adsmhsm . See also: DM HSM, add file system to it Employ the GUI, or the command: 'dsmmigfs add FileSystemName' The file system name ends up being added to the list /etc/adsm/SpaceMan/config/dsmmigfstab HSM, command format Control via the OPTIONFormat option in the Client User Options file (dsm.opt): STANDARD for long-form, else SHORT. Default: STANDARD HSM, display Unix kernel messages? Control via the KERNelmessages option in the Client System Options file (dsm.sys). Default: Yes HSM, exclude files Specify "EXclude.spacemgmt pattern..." in the Include-exclude options file entry to exclude a file or group of files from HSM handling. HSM, for Windows It's Legato DiskXtender, an IBM-blessed TSM companion product. (Formerly from OTG Software, bought by Legato.) http://portal1.legato.com/products/ disxtender/ In past history: Eastman Software had an HSM for NT product called OPEN/stor, being replaced in 1998 by Advanced Storage for Windows NT (y2k compliant). As of mid-98, OPEN/stor became Storage Migrator 2.5 (version 2.5 includes the ADSM option as part of the base product) HSM, insufficient space in file system You can run into a situation where it looks like there should be room in the HSM-controlled file system to move in a given file, but attempting to do so results in an error indicating insufficient space to complete the operation. 
This may be due to fragmentation of the disk space: the query you performed to report the amount of free space is misleading because it includes partially free blocks of space, whereas the file copy operation wants whole, empty blocks. In AIX, for example, the default file system block size is 4 KB. A file containing 1 byte of data requires a minimum storage unit of one 4 KB block where 4095 bytes are empty; but those 4095 bytes can only be used for the expansion of that file, not the introduction of a new file. In AIX, a fragmentation problem at data movement time can be determined by examining the AIX Error Log, as via the 'errpt' command, for JFS_FS_FRAGMENTED entries. HSM, recall daemons, max number Control via the MAXRecalldaemons option in the Client System Options file (dsm.sys). Default: 20 HSM, recall daemons, min number Control via the MINRecalldaemons option in the Client System Options file (dsm.sys). Default: 3 HSM, reconciliation interval Control via the RECOncileinterval option in the Client System Options file (dsm.sys). Default: 24 hours HSM, reconciliation processes, max number Control via the MAXRCONcileproc option in the Client System Options file (dsm.sys). Default: 3 HSM, start manually In Unix: '/etc/rc.adsmhsm &' HSM, threshold migration, query Via the AIX command: 'dsmmigfs Query [FileSysName]' HSM, threshold migration, set Control via the AIX command: 'dsmmigfs Add|Update -hthreshold=N' for the high threshold migration percentage level. Use: 'dsmmigfs Add|Update -lthreshold=N' for the low threshold migration percentage level. HSM, retention period for migrated files (after modified or deleted in client file system) Control via the MIGFILEEXPiration option in the Client System Options file (dsm.sys). Default: 7 (days) HSM, space used by clients (nodes) on all volumes 'Query AUDITOccupancy [NodeName(s)] [DOmain=DomainName(s)] [POoltype=ANY|PRimary|COpy]' Note: It is best to run 'AUDit LICenses' before doing 'Query AUDITOccupancy' to assure that the reported information will be current. HSM, threshold migration, max number of processes Control via the MAXThresholdproc option in the Client System Options file (dsm.sys). Default: 3 HSM active on a file system? 'dsmdf FSname', look in "FS State" column for "a" for active, "i" for inactive, or "gi" for global inactive. HSM and Aggregation HSM did not begin utilizing Aggregation when that capability came into being in ADSMv3, and HSM still does not use it. The rationale for not using Aggregation is that the HSM design transfers each file in its own transaction, which is due to a number of reasons, such as that HSM in general will be migrating "large" files as these are favored during candidates search (unless the size factor is 0) and will thus be migrated before any of the smaller files. The effect is increased server overhead as well as greater tape utilization. HSM backup, offsite copypool only Some implementations seek to have only an offsite (copypool) image of the HSM data, seeking to avoid the use of tapes for an onsite backup image. An approach: Via dsmmigfs, define the stub size to be 512 to eliminate leading file data from the stub, to force all files to be eligible for migration. Employ a relatively low HThreshold value on the HSM file system, to cause most files to migrate naturally. Preparatory to daily TSM server administration tasks, schedule a 'dsmmigrate -R' on the file system, allowing enough time for it to finish.
As part of daily TSM server administration, do Backup Stgpool on the disk & tape stgpools to which that HSM data migrates, to an appropriate offsite stgpool. HSM candidates list 'dsmmigquery FSname' HSM commands, list help 'dsmmighelp' HSM configuration directory /etc/adsm/SpaceMan/config HSM daemons dsmmonitord and dsmrecalld. Their PIDs are remembered in files /etc/adsm/SpaceMan/dsmmonitord.pid and /etc/adsm/SpaceMan/dsmrecalld.pid HSM disaster recovery (offsite) issues For *SM offsite disaster recovery, what should go offsite? Should you send copies of HSM storage pool backups, or copies of backup storage pools reflecting HSM file system backups - or both? HSM storage pools contain only data which has migrated from the HSM file system to TSM server storage - which never includes small (<4 KB) files. Because HSM storage pool copy tapes are inherently incomplete, they cannot fully recover HSM in the event of a disaster. However, one would *like* to depend upon HSM copy storage pool tapes because restoring the server storage pool is so easy. Depending upon HSM file system backup storage pool data for disaster recovery is more appropriate in that it is a complete image of the data: files of all sizes, migrated or not. While complete, a backup image of HSM is problematic for disaster recovery in that there is little chance that it can all fit into the HSM file system upon restoral. To accomplish such a restoral, you will need an aggressive migration from the file system to the HSM storage pool, which has the opportunity to run as the restoral takes time to transition from one tape to another. (Note that a Backup storage pool tape set is far too awkward to depend upon as a resource for restoring a bad HSM primary tape storage pool: depend upon HSM backup storage pool tapes only for file recovery and disaster recovery.) HSM error handling Specify a program to execute via the ERRORPROG option in the Client System Options file (dsm.sys). Can be as simple as "/bin/cat". **WARNING** If ADSM loses its mind (as when it obliterates its own client password), this can result in tens of thousands of mail messages being sent. HSM file, recall Is implicit by use of the file, or you can use the dsmrecall command (q.v.). HSM file system, back up Performing a 'dsmc Incremental' on an HSM file system results in basic backup files. If a file is Migrated, a backup of it results in just the single instance of the file in the Backups table: there will be no backup image of the stub file. HSM file system, mount Make sure your current directory is not the mount point directory, then: 'mount FSname' # Mount the JFS 'mount -v fsm FSname' # Mount the FSM (The second command will result in msg "ANS9309I Mount FSM: ADSM space management mounted on FSname".) HSM file system, mounting from an NFS client You can have an HSM-managed file system available to remote systems via NFS; but there are procedural considerations: - Attempting to mount the file system too early in server start-up could result in having the (empty) server mount point directory being mounted. What's worse: a 'df' on the client misleads with historical information. - AIX's normal exports sequence will result in the JFS file system being exported from the server. You need to do another 'exportfs' command after HSM mounts its FSM VFS over the JFS file system, else on the client you get: mount ServerName:/FSname MtPoint mount: access denied for ServerName:/FSname mount: giving up on: ServerName:/FSname Permission denied So try '/usr/bin/exportfs -v FSname'.
Note that this can sometimes take up to 10 minutes to take effect (some problem with mountd). HSM file system, move to another ADSM server The simplest method is to set up a replacement HSM file system in the new environment and perform a cross-node restore (-VIRTUALNodename=FormerClient) to populate the new file system, specifying -SUbdir=Yes to recreate the full directory structure, and -RESToremigstate=No to move all the data across. This method depends upon the feasibility of using a datacomm line for so much data, being able to use a tape drive on the source TSM server for a prolonged period, and the receiving HSM file system parameters being set to perform migration and dsmreconcile in time to make space for the incoming data. Another approach is to: Perform a final backup of the HSM file system in its original location. EXPort Node of that backup filespace. Define the HSM file system and HSM storage pool in its new environment. IMport Node to plant the backup filespace. Perform a full file system restoral in the new environment (dsmc restore -SUbdir=Yes -RESToremigstate=Yes (the default anyway)) to recreate the directory structure, restore small files, and recreate stub files. This basically follows the HSM file system recovery procedures outlined in the HSM manual and HSM redbook (q.v.). The big consideration to this approach is that Export and Import are very slow. HSM file system, move to another client, same server The following method is anecdotally reported, but is undocumented: import volume group mount the HSM file system dsmmigfs import HSM file system, remove Make sure that the file system is all but empty, in that the following REMove will cause a full recall. 'dsmmigfs REMove FSname', which... - runs reconciliation for the filesys; - evaluates space for total recall; - recalls all files - has the server eliminate migrated file images from server storage - unmounts the FSM from the JFS filesys. You then do: 'umount FSname' # Unmount the JFS 'rmfs -r FSname' to remove the file system, LV, and mount point. Remove name from /etc/exports.HSM; Update /usr/lpp/adsm/bin/dsm.opt, and restart dsmc schedule process, if any; Update /usr/lpp/adsm/bin/rc.adsmhsm, if filesys named there. HSM file system, rename 'dsmmigfs deactivate FSname' 'umount FSname' # Unmount the FSM 'umount FSname' # Unmount the JFS Change name in /etc/filesystems; Change name in /etc/exports.HSM; Rename mount point; Change name in /etc/adsm/SpaceMan/config/dsmmigfstab; In ADSM server: 'REName FIlespace NodeName FSname NewFSname' 'mount NewFSname'; 'mount -v fsm NewFSname'; 'dsmmigfs reactivate NewFSname' '/usr/sbin/exportfs NewFSname' # To export the FSM Update /usr/lpp/adsm/bin/dsm.opt Update /usr/lpp/adsm/bin/rc.adsmhsm, if filesys named there. HSM file system, restore as stub files (restore in migrated state) Use -RESToremigstate=Yes (the default) to restore the files such that the data ends up in TSM server filespace and the client file system gets stub files. (Naturally, files too small to participate in HSM migration are fixed residents in the file system, and physical restoral must occur.) Can specify either on the dsmc command line, or in the Client User Options file (dsm.opt). Example: 'dsmc restore -RESToremigstate=Yes -SUbdir=Yes /FileSystem' To query, do 'dsmc Query Option' in ADSM or 'dsmc show options' in TSM and look for "restoreMigState". See also: dsmmigundelete; Leader data HSM file system, unmount Do this when the file system is dormant.
Make sure your current directory is not the mount point directory, then: 'umount FSname' # Unmount the FSM 'umount FSname' # Unmount the JFS HSM file systems, list 'dsmmigfs query [FileSystemName...]' The file systems end up enumerated in file /etc/adsm/SpaceMan/config/dsmmigfstab by virtue of running 'dsmmigfs add'. HSM files, database space required Figure 143 bytes + filename length. HSM files, restore as stubs (migrated files) or as whole files Control via the RESToremigstate Client User Options file (dsm.opt) option. Specify "RESToremigstate Yes" to restore as stubs (the default, usual method); or just say "No", to fully restore the files to the local file system in resident state. HSM files, actual sizes The Unix 'du -k ...' command can be used to display the sizes of files as they sit in the Unix file system; but it obviously knows not of HSM and cannot display actual data sizes for files migrated from an HSM-controlled file system. Use the ADSM HSM 'dsmdu' command to display the true sizes. See: dsmdu HSM files, seek in database SELECT * FROM SPACEMGFILES WHERE - NODE_NAME='UPPER_CASE_NAME' AND - FILESPACE_NAME='___' AND FILE_NAME='___' This will report state (Active, Inactive), migration date, deletion date, and management class name. It will not report owner, size, storage pool name or volumes that the file is stored on. HSM for Netware Product "FileWizard 4 TSM" from a company called Knozall Systems. http://www.knozall.com/hsm.htm HSM for Windows See: HSM, for Windows HSM installed? In AIX, do: lslpp -l "adsm*" or: lslpp -l "tsm*" and look for "HSM". HSM kernel extension loaded? '/usr/lpp/adsm/bin/installfsm -q /usr/lpp/adsm/bin/kext' See also: installfsm HSM kernel extension management See: installfsm HSM Management Class, select HSM uses the Default Management Class which is in force for the Policy Domain, which can be queried from the client via the dsmc command 'Query MGmtclass'. You may override the Default Management Class and select another by coding an Include-Exclude file, with the third operand on an Include line specifying the Management Class to be used for the file(s) named in the second operand. HSM migration behavior Observations via 'dsmls' show that files migrate as follows: 1. They sit in the file system for some time, as Resident (r). 2. When space is needed, migration candidates are migrated (m). In addition, the Premigration Percentage causes a certain additional amount to be premigrated (p). Note that the premigrated files are recorded in the premigrdb database located in the .SpaceMan directory. HSM migration candidates list empty See: HSM migration not happening HSM migration not happening Possible causes: - The file system is not actively under HSM control. - The management class operand SPACEMGTECHnique is NONE or SELective. Check via client 'dsmmigquery -M -D'. - The files are predominantly smaller than the stub size defined for the HSM file system (usually 4KB). - If your file system usage level is not over the defined migration threshold, there is no need for migration. - dsmmonitord not running (started by rc.adsmhsm) so as to run dsmreconcile and create a migration candidates list (verifiable via 'dsmmigquery -c FSnm') - By default, migration requires that a backup have been done first, per the MGmtclass MIGREQUIRESBkup choice. (Look for msg ANS9297I.) - Assure that your storage pool migration destinations are defined as you think they are. - Assure that the destination storage pool Access is Read/Write, and that its volumes are online.
- Another cause of this problem is a binary character (such as a newline) embedded in a space-managed file name. Look for such an oddity in the migration candidates list. - Try a manual dsmreconcile. That may say "Note: unable to find any candidates in the file system.": try doing 'dsmmigrate -R Fsname' and see what messages result. - If there is a migration candidates list, manually run dsmautomig and see if that works; else try a manual dsmmigrate on a selected file and see if that works. HSM migration processes, number The 4.1.2 HSM client introduces the new parameter MAXMIGRATORS (q.v.). HSM quota HSM: The total number of megabytes of data that can be migrated and premigrated from a file system to ADSM storage. The default is "no quota", but if activated, the default value is the same number of megabytes as allocated for the file system itself. HSM quota, define Defined when adding space management to a file system, via the dsmhsm GUI or the 'dsmmigfs add -quota=NNN Fsname' command. HSM quota, update Can be done via the dsmhsm GUI or the 'dsmmigfs update -quota=NNN Fsname' command. HSM rc file /etc/rc.adsmhsm, which is a symlink to /usr/lpp/adsm/rc.adsmhsm, a Ksh script. Invoked by /etc/inittab's "adsmsmext" entry. As provided by IBM, the script has no "#!" first line to cause it to be run under Ksh if invoked simply by name. HSM recall Priority: Will preempt a BAckup STGpool. HSM recall processes, cancel 'dsmrm ID [ID ...]' HSM recall processes, current 'dsmq' HSM server Specified on the MIgrateserver option in the Client System Options file (dsm.sys). Default: the server named on the DEFAULTServer option. HSM status info Stored in: /etc/adsm/SpaceMan/status which is the symlink target of the .SpaceMan/status entry in the space-managed file system. HSM threshold migration interval Defaults to once every 5 minutes. Specify a value on the CHEckthresholds option in the Client System Options file (dsm.sys). HTTP A COMMmethod defined in the Server Options File, for the Web-browser based administrative interface. You need to code both: COMMmethod HTTP HTTPPort 1580 HTTPport Client System Options File (dsm.sys) option specifying the TCP/IP port address for the Web Client. Code a value from 1000 - 32767. Default: 1581 Windows advisory: The HTTPport in the options file may not actually be what controls the port number: there may be an HttpPort value in the registry, which will take precedence for the port on which to listen. The registry entry is: HKEY_LOCAL_MACHINE\SYSTEM\ControlSetXX\Services\ADSM Client Acceptor\Parameters\HttpPort. The "dsm.opt" file will be looked at if this HttpPort Registry entry does not exist: if there is no HTTPport value specified in the dsm.opt, the default value of 1581 will be used. The HttpPort value in the Registry can be updated with the dsmcutil command: dsmcutil update cad /name:"NameOfCadService" /httpport:#### Surprise: The HTTPport value also controls the Client Acceptor (dsmcad) port number! Ref: www.ibm.com/support/entdocview.wss?uid=swg21079454 . See also: WEBPorts HTTPPort Server options file option specifying the port number for the HTTP communication method. Default: 1580 HTTPS ADSMv3 COMMmethod defined in the Server Options File, for a Web-browser based administrative interface using the Secure Sockets Layer (SSL) communications protocol. You need to code both: COMMmethod HTTPS HTTPSPort 1580 Note: Not required for the Web proxy and is not supported by TSM.
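As a minimal sketch of how the HTTP COMMmethod and HTTPport pieces above fit together (the port values and the Client Acceptor service name "TSM CAD" are illustrative assumptions, not defaults you must use): in the server options file code both 'COMMmethod HTTP' and 'HTTPPort 1580', then restart the server; on a Windows client, a non-default web client port could then be set with 'dsmcutil update cad /name:"TSM CAD" /httpport:1582', bearing in mind the advisory above that a registry HttpPort value takes precedence over any HTTPport line in dsm.opt.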
HTTPSPort Server options file option specifying the port number for the HTTPS communication method, which uses the Secure Socket Layer (SSL). Default: 1543 Hyperthreading See: Intel hyperthreading & licensing I/O error messages ANR1414W at TSM server start-up time, reporting a volume "read-only" due to previous write error. ANR8359E Media fault ... (q.v.) I/O errors reading a tape Errors are sometimes encountered when reading tapes. Sometimes, simply repeating the read will cause the error to disappear. With tapes which have been unused for a long time, or stored under unfavorable conditions, you may want to retension the tape before trying to read it. See: Retension IBM media problems Call (888) IBM-MEDIA about the problem you have with media purchased from IBM. IBM Tivoli Storage Manager Formal name of product, as of 2002/04, previously called Tivoli Storage Manager (and before that, ADSTAR Distributed Storage Manager, derived from WDSF). IBM TotalStorage New name, supplanting "Magstar" in 2002. IBMtape The 3590/LTO/Ultrium device driver for Solaris systems. ftp://ftp.software.ibm.com/storage/devdrvr/Solaris/ See also: Atape ICN IBM Customer Number. The 7-digit number under which you order IBM software, and through which you obtain IBM support under contract. Idle timeout value, define "IDLETimeout" definition in the server options file. Idle wait (IdleW, IdleWait) "Sess State" value in 'Query SEssion' output for when the server end of the session is idle, waiting for a request from the client. Recorded in the 22nd field of the accounting record, and the "Pct. Idle Wait Last Session" field of the 'Query Node Format=Detailed' server command, where slower clients typically have larger numbers. Can result when a client has asked for a mass of information from the server (as in an incremental backup), the server has sent it to the client, and the client is now very busy sorting it and scanning file systems for files which need to be backed up, comparing against the list of already-backed-up files provided by the server. In the midst of a Backup session, idle wait time accrues as the client is running through the file system seeking the next changed file to back up - and changed files may be few and far between in a given file system. Naturally, a client system busy doing other things will deprive the TSM backup of CPU time and result in file system contention (made worse by virus checking). Also keep in mind that the client doesn't send data to the server until it has a transaction's worth. Retries are another impediment to getting back to the server. If the server expects a response and the client is too busy for a long time, IDLETimeout can occur. See also: Communications Wait; Media Wait; SendW, Start IDLETimeout Definition in the server options file. Specifies the number of minutes that a client session can be idle before its session will be canceled. Allowed: 1 (minute) to infinity Default: 15 (minutes) Too small a value can result in server message ANR0482W. A value of 60 is much more realistic. See IBM site topic "Why are sessions being terminated due to timeouts?" (swg21161949). See also: COMMTimeout; SETOPT IDLETimeout server option, query 'Query OPTion' IDRC Improved Data Recording Capability. Technology built into the 3590 tape drive to compress and compact data, from two to five times that of uncompacted data (the typical compression factor being 3x). IE Usually, Internet Explorer; but sometimes an unfortunately short abbreviation of Include/Exclude.
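Referring back to the IDLETimeout entry above, a minimal sketch (the value 60 is simply the figure suggested there): code 'IDLETimeout 60' in the server options file and restart the server, then confirm the setting via 'Query OPTion'; the SETOPT cross-reference above suggests the value may also be adjustable on a running server, but verify that capability at your server level before relying on it.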
-IFNewer Client option, used with Restore and Retrieve, to cause replacement of an existing file with the file from the server storage pool if that server file is newer than the existing file. Note that this is part of a full replacement type restore ("-REPlace=All|Yes|Prompt") and won't work if using "-REPlace=No". That is, it is part of a "fill in voids and update old files" restoral. WARNING: -REP=All|Yes -IFNewer was horrendously inefficient: it essentially does a -REP=ALL, mounting every tape and moving every file, and at the last second, only replaces it if newer. Ref: APARs IX87650 (server), IC23158 (client), IX89496 (client). Use -FROMDate, -FROMTime, and -PITDate instead, which result in database selection being done in the server, minimizing the movement of data. See also: -LAtest IGNORESOCKETS Testflag, per APAR IX80646, to give the ability to skip socket files during Restore. Works for all platforms except AIX 4.2 and HP-UX, which always skip socket files. Do not attempt to use during Backup. See also: Sockets, Testflag Image Backup (aka Snapshot Backup) The 3.7 facility for backing up a logical volume (partition) as a physical image, on the AIX, HP, and Sun client platforms. In TSM 5.1, available on Windows 2000, where the Logical Volume Storage Agent (LVSA) is available, which can take a snapshot of the volume while it is online. This image backup is a block by block copy of the data. Optionally only occupied blocks can be copied. If the snapshot option is used (rather than static) then any blocks which change during the backup process are first kept unaltered in an Original Block File. In this way the client is able to send a consistent image of the volume as it was at the start of the snapshot process to the Tivoli Storage Manager server. Subsequently available on Windows XP (which is built upon Windows 2000). TSM 5.2 built upon this: its Open File Support uses this Snapshot mechanism. See also: Open File Support; Raw logical volume, back up; Snapshot Immediate Client Actions utility After using, stop and restart the scheduler service on the client, so it can query the server to find out its next schedule, which in this case would be the immediate action you created. Otherwise you will need to wait till the client checks for its next schedule on its own. Also affected by the server 'Set RANDomize' command. Imperfect collocation Occurs when collocation is enabled, but there are insufficient scratch tapes to maintain full separation of data, such that data which otherwise would be kept separate has to be mingled within remaining volume space. See also: Collocation Import To import into a TSM server the definitions and/or data from another server where an Export had been done. Notes: Code -volumenames in the order they were created. If the server encounters a policy set named ACTIVE on the tape volume during the import process, it uses a temporary policy set named $$ACTIVE$$ to import the active policy set. After each $$ACTIVE$$ policy set has been activated, the server deletes that $$ACTIVE$$ policy set from the target server. TSM uses the $$ACTIVE$$ name to show you that the policy set which is currently activated for this domain is the policy set that was active at the time the export was performed. After doing the Import, review the policy results and perform VALidate POlicyset and ACTivate POlicyset as needed. IMport Node *SM server command to import data previously EXPorted from a *SM server. The process will retain the exported domain and node name.
Syntax: 'IMPort Node DEVclass=DevclassName VOLumenames=VolName(s) [NodeName(s)] [FILESpace=________] [DOmains=____] [FILEData=None|ALl|ARchive| Backup|BACKUPActive| ALLActive| SPacemanaged] [Preview=No|Yes] [Dates=Absolute|Relative] [Replacedefs=No|Yes]' where NodeName, FILESpace, and DOmains are used to select from the input. Dates= Specifies whether the recorded backup or archive dates for client node file copies are set to the values specified when the files were exported (Absolute), or are adjusted relative to the date of import (Relative). Default: Absolute. Backup data will be put into the tape pool, and HSM data will be put into the HSM disk storage pool. Note that the exported domain name will typically not exist on the import system (nor would you want it to) and so the import operation will attempt to assign all to domain name STANDARD - after which you can perform an UPDate Node to reassign the node to an appropriate domain name in the importing system. Note that the volumes to be imported need to be checked in to the receiving server before use. If Import finds a filespace of the same name already on the receiving server, it will rename the incoming filespace to have a digit at the end of the name. A message reflecting this should appear in the Activity Log. (See "Importing File Data Information", "Understanding How Duplicate File Spaces Are Handled" in the Admin Guide.) Alas, there has been no merging capability in Import. There is Rename Filespace capability in the server, to adjust things to suit your environment, where you could make it match a file system name so that users could therein retrieve their imported data. Look for ANR0617I "success" message in the Activity Log to verify that the import has worked. DO NOT perform Query OCCupancy while Import is running: it has been seen to result in: ANR9999D imutil.c(2555): Lock acquisition (ixLock) failed for Inventory node 17. Messages: ANR0798E, ANR1366W, ANR1368W Improved Data Recording Capability See: IDRC IN SQL clause to include a particular set of data that matches one of a list of values. The set is specified in parentheses. Literals may appear in the set, enclosed in single quotes. WHERE COLUMN_NAME - IN (value1,value2,value3) See also: NOT IN IN USE Status of a tape drive in 'Query MOunt' output when a tape drive is committed to a session involving a client. -INActive 'dsmc REStore' option to cause ADSM to display both the active and inactive versions of files in the selection generated via -Pick. Inactive, when a file went Do a Select on the Backups table, where the DEACTIVATE_DATE tells the story. Inactive file, restore See example under "-PIck". Inactive file system HSM: A file system for which you have deactivated space management. When space management is deactivated for a file system, HSM cannot perform migration, recall, or reconciliation for the file system. However, a root user can update space management settings for the file system, and users can access resident and premigrated files. Contrast with active file system. 
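Referring back to the IN SQL clause entry above, a minimal sketch (the platform name literals are illustrative examples; your environment's PLATFORM_NAME values may differ): SELECT NODE_NAME, PLATFORM_NAME FROM NODES WHERE PLATFORM_NAME IN ('AIX','SunOS','WinNT') reports only those nodes whose platform matches one of the three listed literals.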
Inactive files, identify in Select STATE='INACTIVE_VERSION' See also: Active files, identify in Select; STATE Inactive files, list via SQL SELECT HL_NAME, LL_NAME, - DATE(BACKUP_DATE) as bkdate, - DATE(DEACTIVATE_DATE) AS DELDATE, CLASS_NAME FROM ADSM.BACKUPS WHERE - STATE = 'INACTIVE_VERSION' AND - TYPE = 'FILE' AND - NODE_NAME = 'UPPER_CASE_NAME' AND - FILESPACE_NAME = 'Case_Sensitive_Name' Inactive files, number and bytes Do 'Query OCCupancy NodeName FileSpaceName Type=Backup' Total the number of files and bytes, for all stored data, Active and Inactive. Do 'EXPort Node NodeName FILESpace=FileSpaceName FILEData=BACKUPActive Preview=Yes' Message ANR0986I will report the number of files and bytes for Active files. Subtract these numbers from those obtained in Query OCCupancy, yielding values for Inactive files. See also: Active files, number and bytes Inactive files, rebind There is no command to rebind Inactive files (those which have been deleted from the client but which are retained in TSM server storage). But there is a simple technique to effect rebinding of the Inactive files: 1. Temporarily restore the Inactive filenames, or create an empty file of the same name. 2. Perform an unqualified Incremental backup. (A Selective backup binds the backed up files to the new mgmtclass, but not the Inactive files.) 3. Remove the temp files. Consider instead changing retention policies within the existing management class, as long as the change is safe to pertain to all the file systems bound to that management class. Inactive files, restore In the command line client (dsmc), use the -INActive option. Inactive files, restore selectively Restoring one or more Inactive files is awkward in that they all have the same name, and name is the standard way to identify files to restore. You can use the GUI or -PIck option to point out specific instances of Inactive files to be restored. Example of CLI-only: 'dsmc restore -inactive -pick ' then select one file from the list. But this requires a human selection process. To accomplish the same thing via a purely command line (batch) operation: First perform a query of the backup files, including the inactive ones. Then invoke the restoral as 'dsmc restore -INActive -PITDate=____ FileName Dest', where -PITDate serves to uniquely identify the instance of the Inactive version of the file. Also use -PITTime, if there was more than one backup on a given day. See also: -PITDate; -PITTime Inactive files for a user, identify via Select SELECT COUNT(*) AS - "Inactive files count" FROM BACKUPS - WHERE NODE_NAME='UPPER_CASE_NAME' AND - FILESPACE_NAME='___' AND OWNER='___' - AND STATE='INACTIVE_VERSION' Inactive Version (Inactive File) A copy of a backup file in ADSM storage that either is not the most recent version, or whose corresponding original object has been deleted from the client file system. For example: you delete a file, then do a backup - the latest backup copy of the file is now an Inactive Version, and would have to be restored from there. Inactive backup versions are eligible for expiration according to the management class assigned to the object. Note that active and inactive files may exist on the same volumes. Query from client: 'dsmc Query Backup -SUbdir=Yes -INActive {filespacename}:/dir/*' (where "-INActive" causes *both* active and inactive versions to be reported). See also: Active Version INACTIVE_VERSION SQL DB: State value in Backups table for a host-deleted, Inactive file.
See also: ACTIVATE_DATE INCLEXCL TSM server-defined option for clients of all kinds (though the name may lead you to think it's just for Unix), via 'DEFine CLIENTOpt'. Each INCLEXCL contains an Include or Exclude statement in a set of such statements to be applied to the clients using the option set. The Include and Exclude specifications coded on the server logically precede and are additive to client-defined Include and Exclude options. Example: DEFine CLIENTOpt INCLEXCL EXCLUDE.FS /home See: DEFine CLIENTOpt INCLExcl Client System Options file (dsm.sys) option to name the file which contains Include-Exclude specifications. Must be coded within a server stanza. Current status can be obtained via the command 'dsmc Query Option' in ADSM or 'dsmc show options' in TSM. Note that if this file is changed, the client scheduler needs to be restarted to see the change. Historical: This option was for many years available for use only in Unix clients. INCLExcl ignored? See: Include-Exclude "not working" INclude Client option to specify files for inclusion in backup processing, archive processing (as of TSM 3.7), image processing, and HSM services; and to also specify the management class to use in storing the files on the server. Placement: Unix: Either in the client system options file or, more commonly, in the file named on the INCLExcl option. Other: In the client options file. Note that Include applies only to files: you cannot specify that certain directories be included. Code as: 'INclude pattern...' or 'INclude pattern... MgmtClass' (Note that the INclude option does not provide the .backup and .spacemgmt qualifiers which the EXclude option does.) Coding an Include does not imply that other file names are excluded: the rule is that an Include statement assures that files are not excluded, but that other files will be implicitly included. Technique suggestion: Rather than have a bunch of management classes and have client administrators set up somewhat intricate Include statements, it may be preferable to create multiple Domains on the TSM server with a tailored default management class in each, and then change the client Node definition to use that Domain. See also: INCLExcl; INCLUDE.FILE; INCLUDE.IMAGE INCLExcl not working See: Include-Exclude "not working" INCLUDE.ENCRYPT TSM 4.1 Windows option to include files for encryption processing. (The default is that no files are encrypted.) See also: ENCryptkey; EXCLUDE.ENCRYPT INCLUDE.FILE Variation on the INclude statement, to include a specified file in backup operations. INCLUDE.FS Windows (only) Include spec for Open File Support/Snapshot backups. Note that this spec is not in Unix. INCLUDE.IMAGE Variation on the INclude statement, for AIX, HP-UX, and Solaris systems, to include a specified filespace or logical volume in backup operations. Note that INCLUDE.IMAGE stands alone, being independent of all other Include specifications. Include-exclude list A list of INCLUDE and EXCLUDE options that include or exclude selected objects for backup. An EXCLUDE option identifies objects that should not be backed up. An INCLUDE option identifies objects that are exempt from the exclusion rules or assigns a management class to an object or a group of objects for backup or archive services. The include-exclude list is defined either in the file named on the INCLEXCL option of the Client System Options File (Unix systems) or in the client options file. Wildcards are allowed: * ...
[] The include/exclude list is processed from bottom to top, and exits satisfied as soon as a match is found. Ref: Installing the Clients Include-exclude list, validate ADSMv3: dsmc Query INCLEXCL TSM: dsmc SHow INCLEXCL Include-Exclude list, verify Via manual, command line action: ADSM: 'dsmc Query INCLEXCL' (v3 PTF6) TSM: 'dsmc SHOW INCLEXCL' There is no way to definitively have the scheduler show you if it is seeing and honoring the include-exclude list, as there is no Action=Query in the server DEFine SCHedule command. The best you can do is have the scheduler invoke the Query Inclexcl command to demonstrate that the include-exclude options set was in effect at the time the schedule was run. 1. Add to your options file: PRESchedulecmd "dsmc query inclexcl" 2. Invoke the scheduler to redirect output to a file (as in Unix example 'dsmc schedule >> logfile 2>&1'). 3. Inspect the logfile. Include-Exclude "not working" Possible causes: - Not coded within a server stanza. - Scheduler process not restarted after client options file change. - Exclude not coded *before* the file system containing it is named on an Include, remembering that the Include-Exclude list is processed bottom-up. - Not supported for your opsys. - Unix: The InclExcl option must be coded in your dsm.sys file, and it must be within the server stanza you are using; and, of course, the file that it specifies must exist and be properly coded and have appropriate permissions. - Perhaps 'DEFine CLIENTOpt' has been done on the server, specifying INCLEXCL options for all clients which, though they logically precede client-defined Include-Exclude options, may interfere with client expectations. See also: Include-Exclude list, verify Include-Exclude options file For Unix systems: a file, created by a root user on your system, that contains statements which ADSM uses to determine whether to include or exclude certain objects in Backup and Space Management (HSM) operations, and to override the associated management classes to use for backup or archive. Each line contains Include or Exclude as the first line token, and named files as the second line token(s). Include statements may also contain a third token specifying the management class to be used for backup, to use other than the Default Management Class. The file is processed from the bottom up, and stops processing, satisfied, as soon as it finds a match. The file is named in the Client System Options File (dsm.sys) for Unix systems, but on other systems the Include statements are located in the dsm.opt file itself. An Exclude option can be used to exclude a file from backup and space management, backup only, or space management only. An Include option can be used to include specific files for backup and space management, and optionally specify the management class to be used. Automatic migration occurs only for the Default Management Class; you have to manually incite migration if coded in the include-exclude options file. Caution: If you change your Include/Exclude list or file so that a previously included file is now excluded, any pre-existing backup versions of that file are expired the next time an incremental backup is run. Include-Exclude options file, query Use the client 'dsmc Query Option' in ADSM or 'dsmc show options' in TSM, and look for "InclExcl:". Include-exclude order of precedence As of ADSMv3, Include-Exclude specifications may come from the server as well as the client, and are taken in the following order: 1.
Specifications received from the server's client options set, starting with the highest sequence number. 2. Specifications obtained from the client options file, from bottom to top. Note that, whether from the server or client, Include-Exclude statements are "additive", and cannot be overridden by a Force=Yes specification in the DEFine CLIENTOpt. Do 'dsmc Query Inclexcl' to see the full collection of Include-Exclude statements in effect, in the order in which they are processed during backup and archive operations. Ref: Admin Guide "Managing Client Option Files" See: DEFine CLIENTOpt; DEFine CLOptset; Exclude; INCLEXCL; Include -INCRBYDate Option on the 'dsmc incremental' command to request an incremental backup by date: the client only asks the server for the date and time of the last incremental backup, for comparing against the client file's last-modified (mtime) timestamp. (A Unix inode administrative change (ctime, as via chmod, chown, chgrp) does not count.) In computer science terms, this is almost a "stateless" backup. This method eliminates the time, memory, and transmission path usage involved in capturing a files list from the server in an ordinary Incremental Backup. Because only the last backup date is considered in determining which files get backed up, any OS environment factors which affect the file but do not change its date and time stamps are not recognized. If a file's last changed date and time is after that of the last backup, the file is backed up. Otherwise it is not, even if the file's name is new to the file system. Because Incrbydate operates by relative date, there obviously must have been a previous complete Incremental backup to have established a filespace last backup date. Files that have been deleted from the file system since the last incremental backup will not expire on the server, because the backup did not involve a list comparison that would allow the client to tell the server that a previously existing file is now gone. Because this backup knows nothing about what was backed up before, it backs up a lot of directories afresh, since their timestamps have changed as their contents have changed - so that may be a time loss detracting from the other gains in this technique, unless changes to files within directories cause the timestamps on the directories to be updated such that a normal incremental would have backed them up anyway. Further things Incrbydate does not do: - Does not rebind backup versions to a new management class if you change the management class. - In Windows, does not back up files whose attributes have changed, unless the modification dates and times have also changed. - Ignores the copy group frequency attribute of management classes: the backup is unconditional. An Incrbydate backup of a whole file system will cause the filespace last backup timestamp to be updated. Prevailing retention rules are honored as usual in an -INCRBYDate backup. Because they do not change the last changed date and time, changes to access control lists (ACL) are not backed up during an incremental by date. Relative speed: In Windows, an Incrbydate backup will be slower than a full incremental backup with journaling active. Recommendation: Incrbydate backups are best suited to file systems with stable populations which are regularly updated, and which have few directories. Mail spool file systems are good candidates.
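As a minimal illustration of the above (the file system name /home is an arbitrary example): 'dsmc incremental -INCRBYDate /home' backs up only those files in /home whose modification timestamp is later than the filespace's last complete incremental backup; per the discussion above, a prior complete Incremental backup of /home must already exist for the date comparison to be meaningful.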
Incremental backup See: dsmc Incremental Incremental backup, file systems to back up See: DOMain option Incremental backup, force when missed by client Run backup from client, if you have access. Else create a backup schedule on the server (define schedule) of a small window including the current time, then associate the schedule with the client (DEFine ASSOCiation). "Incremental forever" Often cited as the mantra of the TSM product, it is a capability rather than a dictum. The basic scheme of the product is to back up any new or changed files. You don't necessarily have to ever perform a "full" backup - but of course the cost is having your backups spread over perhaps many tapes (mitigated by Reclamations), which can aggravate restoral times. But you are free to adopt any combination of full and incremental backups as dictated by economics and your restoral objectives. INCRTHreshold TSM 4.2+ option, for Windows. Specifies the threshold value for the number of directories in any journaled file space that might have active objects on the server, but no equivalent object on the workstation. GUI: "Threshold for non-journaled incremental backups" Ref: Windows client manual; TSM 4.2 Informix database backup Have the Informix DBA do a DB export, then ADSM backs up this export. Or use the SQL BackTrack product. See also: TDP for Informix Informix database backup, query 'dsmc query backup /InstanceName/InstanceName/*/*' Initialize tapes See: Label tapes initserv.log TSMv4 server log file which will log errors in initializing the server. inode A data structure that describes the individual files in an operating system. There is one inode for each file. The number of inodes in a file system, and therefore the maximum number of files a file system can contain, is set when the file system is created. Hardlinked files share the same inode. inode number A number that specifies a particular inode in a file system. Insert category 3494 Library Manager category code FF00 for a tape volume added to the 3494 inventory. The 3494 reads the external label on the volume, creates an inventory entry for the volume, and assigns the volume to this category as it stores the tape into a library cell. The "LIBVolume" command set is the only TSM means of detecting and handling Insert volumes. You can have TSM adopt INSERT category cartridges via a command like: 'CHECKIn LIBVolume 3494Name DEVType=3590 SEARCH=yes STATus=SCRatch' Insert category tapes, count Via Unix environment command: 'mtlib -l /dev/lmcp0 -vqK -s ff00' Insert category tapes, list Via Unix environment command: 'mtlib -l /dev/lmcp0 -qC -s ff00' (There is no way to list such tapes from TSM.) Install directory, Windows ADSM: \program files\ibm\adsm TSM: \program files\tivoli\tsm installfsm HSM kernel extension management program, /usr/lpp/adsm/bin/installfsm, as invoked in /etc/rc.adsmhsm by /etc/inittab. Syntax: 'installfsm [-l|-q|-u] Kernel_Extension' where: -l Loads the named kernel extension. -q Queries the named kernel extension. -u Unloads the named kernel extension. Examples: (be in client directory) installfsm -l kext installfsm -q kext installfsm -u kext Msgs: ANS9281E Instant Archive An unfortunate, misleading name for what is in reality a Backup Set - which has nothing to do with the TSM Archive facility. The Instant Archive name derives from the property of the Backup Set that it is a permanent, self-contained, immutable snapshot of the Active files set.
See: Backup Set; Rapid Recovery Intel hyperthreading & licensing In some modern Intel processors, fuller use of the computing components is made by multi-threading in hardware, which can currently make a single physical processor function like two. Does this affect IBM's licensing charges, which are based upon processor count? What we are hearing is No. Interfaces to ADSM Typically the 'adsm' command, used to invoke the standard ADSM interface (GUI), for access to Utilities, Server, Administrative Client, Backup-Archive Client, and HSM Client management. /usr/bin/adsm -> /usr/lpp/adsmserv/ezadsm/adsm. 'dsmadm': to invoke GUI for pure server administration. 'dsmadmc': to invoke command line interface for pure server administration. 'dsm': client backup-archive graphical interface. 'dsmc': client backup-archive command line interface. 'dsmhsm': client HSM Xwindows interface. Interposer An electrical connector adapter which connects between the cable and the SCSI device. Most commonly seen on Fast-Wide-Differential chains, as with a chain off the IBM 2412 SCSI adapter card. The interposer is part FC 9702. Inventory expiration runs interval, define "EXPInterval" definition in the server options file. Inventory Update A 3494 function invoked from the Commands menu of the operator station, to re-examine the tapes in the library and add any previously unknown ones to the library database. The 3494 will accept commands while it is doing this, so you could request a mount during the inventory. Contrast with "Reinventory complete system". IP address of client changes On occasion, your site may need to reassign the IP address of your computer, which is a TSM client. Per discussion in topic "IP addresses of clients", under some circumstances the TSM server has the client's IP address stored in its database, for client schedule purposes. The server would thus be stuck on the old client address, and keep trying and failing (i.e., timeout) to reach the client at its old address. (Or, worse, it might *succeed* in entering into a session with whatever computer has taken the old IP address!) How to get the server to recognize the new IP address? Given that the IP address is remembered only for nodes associated with a schedule, performing a 'DELete ASSOCiation' should cause the server to forget the IP address of the client and cause it to capture its actual, new IP address after a fresh 'DEFine ASSOCiation' and next scheduler communication with the client. (Note that neither stopping and starting the scheduler on the client, nor performing other interactive functions will cause the server to adopt the new IP address. The TCPCLIENTAddress option might be used to accomplish the change, but the option is actually for multi-homed (multiple ethernet carded) clients, to force use of one of its other IP addresses.) IP address of server See: 'DEFine SERver', HLAddress parameter; TCPServeraddress IP addresses of clients The TSM server stores the IP address of nodes in its database, but ONLY when the address is specified on the HLAddress parameter for the node definition, or for nodes associated with a schedule when running in Server Prompted (SCHEDMODe PRompted) mode. That is, for ordinary client contacts, the IP address used is not important: it is only when the server has to initiate contact with the client that it is important enough to be stored in the server.
The IP addresses are readily available in the TSM 3.7 server table "Summary" (up to the number of days specified via Set SUMmaryretention), and are recorded in the Activity Log on message ANR0406I when clients contact the server to start sessions. TSM 5.x now provides the IP addresses in the Nodes table (if the above considerations apply), so you can perform 'Query Node ... F=D' to see them. Otherwise they can be found (not in a very readable format), by the following procedure (using undocumented debugging commands): 1. 'SHOW OBJDir': This will generate a list of objects in the database. Search for "Schedule.Node.Addresses". Note the value for "homeAddr". 2. 'SHOW NODE ': This will give you a list of the IP-addresses which have registered for running scheduled processes (by running the DSMC SCHEDULE program on the client node). See also: SCHEDMODe; TCPPort IPX/SPX Internetwork Packet Exchange/Sequenced Packet Exchange. IPX/SPX is Novell NetWare's proprietary communication protocol. IPXBuffersize *SM server option. Specifies the size (in kilobytes) of the IPX/SPX communications buffer. Allowed range: 1 - 32 (KB) Default: 32 (KB) IPXSErveraddress Old TSM 4.2 option for Novell clients for using IPX communication methods to interact with the TSM server. IPXSocket *SM server option. Specifies the IPX socket number for an ADSM server. Allowed range: 0 - 32767 Default: 8522 IPXBufferSize server option, query 'Query OPTion' IPXSocket server option, query 'Query OPTion' -Itemcommit Command-line option for ADSM administrative client commands ('dsmadmc', etc.) to say that you want to commit commands inside a macro as each command is executed. This prevents the macro from failing if any command in it encounters "No match found" (RC 11) or the like. See also: COMMIT; dsmadmc iSeries backups There is no TSM client per se for the iSeries. However, there is an interface to TSM based upon the TSM API called the BRMS Application Client. See also: BRMS ISSUE MESSAGE TSM 3.7+ server command to use with return code processing in a script to issue a message from a server script to determine where the problem is with a command in the script. Syntax: 'ISSUE MESSAGE Message_Severity Message_Text' Message_Severity Specifies the severity of the message. The message severity indicators are: E = Error. ANR1498E is displayed in the message text. I = Information. ANR1496I is displayed in the message text. S = Severe. ANR1499S is displayed in the message text. W = Warning. ANR1497W is displayed in the message text. Message_Text Specifies the description of the message. See also: Activity log, create an entry ITSM IBM Tivoli Storage Manager - the name game evolves in 2002. See also: TSM ITSM for Databases Is the third generation name and new licensing scheme for the database backup agents in 2003: - TDP for Informix - TDP for MS SQL - TDP for Oracle ITSM For Hardware See: Tivoli Storage Manager For Hardware "JA" The 7th and 8th chars on a 3592 tape cartridge, identifying the media type, being the first generation of the 3592. Japanese filenames See: Non-English filenames Jaz drives (Iomega) Can be used for ADSMv3 server storage pools, via 'DEFine DEVclass ... DEVType=REMOVABLEfile'. Be advised that Jaz cartridges have a distinctly limited lifetime. See articles about it on the web: search on "Click of Death". JBB Journal-based backups (q.v.). JDB See: Journal-based backups (JBB) JFS buffering? No! The ADSM server bypasses JFS buffering on writes by requesting synchronous writes, using O_SYNC on the open().
There is no problem using JFS for the ADSM server database recovery log and storage pool volumes: this is the recommended method. JNLINBNPTIMEOUT Journal Based Backups Testflag, implemented in the 5.1.6.2 level fixtest, to allow a client to specify how long it will wait for a connection to the journal daemon to become free (that is, for the currently running jbb session to finish). Use by adding to your Windows dsm.opt file like: testflag jnlinbnptimeout:600 where the numeric value is in seconds. (TSM 5.2 will better address timeouts.) Join (noun) An SQL operation where you specify retrieving data from more than one table at a time by specifying FROM a comma-separated set of table names, using table-qualified column names to report the results. Example: SELECT MEDIA.VOLUME_NAME, MEDIA.STGPOOL_NAME, VOLUMES.PCT_UTILIZED FROM MEDIA, VOLUMES Note that processing tends to occur by repeatedly looking through the multiple tables, which is to say that you will experience a multiplicative effect: if the columns being reported occur in multiple tables, you need to use matching to avoid repetitive output, as in: WHERE MEDIA.VOLUME_NAME=VOLUMES.VOLUME_NAME So, if you had 100 volumes, this would prevent the query from reporting 100x100 times for the same set of volumes. See also: Subquery Journal-based backups (JBB) TSM 4.2+: Client journaling improves overall incremental backup performance for Windows NT and Windows 2000 clients (including MS Clustered systems) by using a client-resident journal to track the files to be backed up. The journal engine keeps track of changed files as they are changed, as a journal daemon monitors file systems specified in the jbb config file. When the incremental backup starts, it just backs up the files that the journal has flagged as changed. (Thus, the journal grows in size only as a result of host file update activity: backups only act upon the contents of the journal - they do not add to it.) When objects are processed (backed up or expired) during a journal based backup, the b/a client notifies the journal daemon to remove the journal entries which have been processed - which releases space internal to the journal: the journal size itself is not reduced. In such backups, the server inventory does not need to be queried, and therein lies the performance advantage. Journal-based backups eliminate the need for the client to scan the local file system or query the server to determine which files to process. It also reduces network traffic between the client and server. Because archive and selective backup are not based on whether a file has changed, there is no server inventory query to begin with, and therefore the journal engine offers no advantage. The journal engine is not used for these operations. Default installation directory: C:\Program Files\Tivoli\TSM\baclient The number of journal entries corresponds with the amount of file system change activity, and the size of journal entries depends primarily on the fully qualified path length of objects which change (so file systems with very deeply nested dir structures will use more space). Every journal entry is unique, meaning that there can only be one entry per file/directory of the file system being journaled (each entry represents the last change activity of the object).
When a journal based backup is performed and journal entries are processed by the B/A client (backed up or expired), the space the processed journal db entries occupy is marked as free and will be reused, but the actual disk size of the journal db file never shrinks. Note that this design is intentionally independent of the Windows 2000 NTFS 5 journalled file system so as to be usable in NT as well, with the possibility of expansion to other platforms in the future. The first time you run a backup after enabling the journal service, you will still see a regular full incremental backup performed, done to synchronize the journal database with the TSM server database. Thereafter the backups should use the journaled backup method, unless the journal db and server db become out of sync (for more info, see the PreserveDbOnExit option in the client manual appendix on configuring the journal service). Relative speed: A JBB is typically faster than an Incrbydate backup. Ref: TSM 4.2 Technical Guide redbook; search IBM db for "TSM Journal Based Backup FAQ" (swg21155524). KB Knowledge Base. Vendors often name their customer-searchable databases this. Go to www.ibm.com and use the Search box to find articles in IBM's KB. KEEPMP= TSM 3.7+ server REGister Node parameter to specify whether the client node keeps the mount point for the entire session. Code: Yes or No. Default: No Ref: TSM 3.7 Technical Guide, 6.1.2.3 See also: MAXNUMMP; REGister Node Kernel extension (server) /usr/lpp/adsmserv/bin/pkmonx, as loaded by: '/usr/lpp/adsmserv/bin/loadpkx -f /usr/lpp/adsmserv/bin/pkmonx', usually by being an entry in /etc/inittab, as put there by /usr/lpp/adsmserv/bin/dsm_update_itab. (See the Installing manual.) NOTE: The need for the kernel extension is eliminated in ADSM 2.1.5, which implements "pthreads", as supported by AIX 4.1.4. Kernel extension (server), load Can be done manually as root via: '/usr/lpp/adsmserv/bin/loadpkx -f /usr/lpp/adsmserv/bin/pkmonx' or: 'cd /usr/lpp/adsmserv/bin' './loadpkx -f pkmonx' but more usually via an entry in /etc/inittab, as put there by /usr/lpp/adsmserv/bin/dsm_update_itab. Alternately you can: '/usr/lpp/adsmserv/bin/rc.adsmserv kernel' Messages: Kernel extension now loaded with kmid = 21837452. Kernel extension successfully initialized. Then you can start the server. Ref: Installing the Server... Kernel extension (server), loaded? As root: '/usr/lpp/adsmserv/bin/loadpkx -q /usr/lpp/adsmserv/bin/pkmonx' May say: "Kernel extension is not loaded" or "Kernel extension is loaded with kmid = 21834876." (See the Installing manual.) Kernel extension (server), unload Make sure all dsm* processes are down on the server, and then do: As root: '/usr/lpp/adsmserv/bin/loadpkx -u /usr/lpp/adsmserv/bin/pkmonx' KERNelmessages Client System Options file (dsm.sys) option to specify whether HSM-related messages issued by the Unix kernel during processing (such as ANS9283K) should be displayed. Specify Yes or No. Because of kernel nature, a change in this option doesn't take effect until the ADSM server is restarted. Default: Yes KEY= In ANR830_E messages, is Byte 2 of the sense bytes from the error, as summarized in the I/O Error Code Descriptions for Server Messages appendix in the Messages manual. To further explain some values: 7 Data protect: as when the tape cartridge's write-protect thumbwheel or slider has been thrown to the position which the drive will sense to disallow writing on the tape. Should be accompanied in message by ASC=27, ASCQ=00, and msg ANR8463E.
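Referring back to the KEEPMP= entry above, a minimal sketch (the node name and password are hypothetical): 'REGister Node NODEA secretpw KEEPMP=Yes' registers a node which will then hold its mount point for the entire session; see the REGister Node and MAXNUMMP entries for the related parameters.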
Kilobyte 1,024 bytes. It is typically only disk drive manufacturers that express a kilobyte as 1,000 bytes. Software and tape drive makers typically use a 1,024 value. The TSM Admin Ref manual glossary, and the 3590 Hardware Reference manual, for example, both define a kilobyte as 1,024. L_ (e.g., L1) LTO Ultrium tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. Ref: IBM LTO Ultrium Cartridge Label Specification L1 Ultrium Generation 1 Type A, 100 GB tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. Ref: IBM LTO Ultrium Cartridge Label Specification L2 Ultrium Generation 2 Type A, 200 GB tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. Ref: IBM LTO Ultrium Cartridge Label Specification L3 Ultrium Generation 3 Type A, 400 GB tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. Ref: IBM LTO Ultrium Cartridge Label Specification L4 Ultrium Generation 4 Type A, 800 GB tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. Ref: IBM LTO Ultrium Cartridge Label Specification LA Ultrium Generation 1 Type B, 50 GB tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. Ref: IBM LTO Ultrium Cartridge Label Specification Label all tapes in 3494 library having category code of Insert The modern way is to use the LABEl LIBVolume command, to both label and checkin the volumes. To just label, issue the following operating system command: 'dsmlabel -drive=/dev/XXXX -library=/dev/lmcp0 -search -keep [-overwrite]' LABEl LIBVolume TSM server command (new with ADSMv3). Allows you to label and checkin a single tape, a range of tapes, or any new tapes in an automated library, all in one easy step. Note that there is no "checkin" phase for LIBtype=MANUAL. (The command task is serial: one volume is labeled at a time.) Syntax: 'LABEl LIBVolume libraryname volname|SEARCH=Yes|SEARCH=BULK [VOLRange=volname1,volname2] [LABELSource=Barcode|Prompt] [CHECKIN=SCRatch|PRIvate] [DEVTYPE=CARTRIDGE|3590] [OVERWRITE=No|Yes] [VOLList=vol1,vol2,vol3 ... -or- FILE:file_name]' The SEARCH option will cause TSM to issue an initial query to compile a list of Insert tapes, which it will then process. (If you thereafter add more tapes to the library as the command is in its labeling phase, those Inserts will not be processed: you will have to reissue the command later.) The operation tends to use available drives rotationally, to even wear. Failing to specify OVERWRITE=Yes for a previously labeled volume results in error ANR8807W. This command will not wait for a drive to become available, even if one or more drives have Idle tapes or are in a Dismounting state. TSM is smart enough to not relabel a volume that is in a storage pool or the volume history file, and had been taken out of the library and put back in (thus getting an Insert category code): msg ANR8816E will result. Did the command succeed? It will end with message ANR0985I; but that message will always indicate success, even if there were problems and no tapes were labeled. Look for adjoining problem messages like ANR8806E.
Advisory: Query for a reply number for the Checkin command (make sure you have the tape you want to checkin in the I/O slot): key in 'q request' and it will show the request number; then enter a reply (e.g., 'reply 001'). Your tape should then checkin. Warning: The foolish command will proceed to do its internal CHECKIn LIBVolume even if the labeling fails (msg ANR8806E) - in ADSMv3, at least! Note that a MOVe MEDia will hang if a LABEl LIBVolume is running. Note that if any tape being processed suffers an I/O error (Write), it will be skipped and, in the case of a 3494, its Category Code will remain FF00 (Insert). Msgs: ANR8799I to reflect start; ANR8801I & ANR8427I for each volume processed; ANR0985I; ANR8810I; ANR8806E. Note that there is no logged indication as to the drive on which the volume was mounted. Label prefix, define Via "PREFIX=LabelPrefix" in 'DEFine DEVclass ...' and 'UPDate DEVclass ...'. Label prefix, query 'Query DEVclass Format=Detailed' Label tapes Use the 'dsmlabel' utility. Newly purchased tapes should have been barcoded and internally labeled by the vendor, so there should be no need to run the 'dsmlabel' utility. But you still need to do an ADSM 'CHECKIn' (q.v.). Label tapes in a 3570 Do something like: 'dsmlabel -drive=/dev/rmt1,16 -library=/dev/rmt1.smc' Labelling a tape... Will destroy ALL data remaining on it, because a new EOD will be written immediately after the labels. (It is the standard for writing on tapes in general that an EOD is written at the conclusion of writing.) Disk/disc media are typically different, as in the case of R/W Optical drives. If you inadvertently relabel a data tape, try to restore data on the volume: Run a Q CONTENT volumename to get a list of file names, then try to restore each file individually (make sure to try several files, especially those located at the end of the tape): this may allow you to read past the tape mark. LABELSource Operand in 'LABEl LIBVolume' and other ADSM server commands, used *only* for SCSI libraries, as in "LABELSource=BARCODE". Note that 3494s do not need this operand since the label is ALWAYS the barcode. LAN configuration of 3494 Perform under the operator "Commands" menu of the 3494 operator station. Lan-Free Backup Introduced in TSM V3.7. Relieves the load on the LAN by introducing the Storage Agent. This is a small TSM server (without a Database or Recovery Log), termed a Storage Agent, which is installed and run on the TSM client machine. It handles the communication with the TSM server over the LAN but sends the data directly to SAN attached tape devices, relieving the TSM server from the actual I/O transfer. See also: Lan-Free Restore; Server-free Ref: TSM 3.7.3+4.1 Technical Guide redbook; TSM 5.1 Technical Guide LAN-Free Data Transfer The optional Managed System for SAN feature for the LAN-free data transfer function effectively exploits SAN environments by moving back-end office and IT data transfers from the communications network to a data network or SAN. LAN communications bandwidth then can be used to enhance and improve service levels for end users and customers. http://www.tivoli.com/products/index/ storage_mgr/storage_mgr_concepts.html See also: Network-Free... Lan-Free license file mgsyssan.lic Lan-Free Restore TSM 3.7 feature designed to get around network limitations when clients need to be quickly restored, and they are physically near the server. Client backups occur as usual, over the network each day (optimally, over a Storage Area Network).
Once on the server, a "Backup Set" can be produced from the current Active files, constituting a point-in-time bundle on media which can be read at the client site. Then, when a mass restoral is necessary at the client, the compatible media can be transported from the server location to the client location (or could have been sent there as a matter of course each day) and the client can be restored on-site from that bundled image. See: Backup Set LanFree bytes transferred Client Summary Statistics element: The total number of data bytes transferred during a lan-free operation. If the ENABLELanfree client option is set to No, this line will not appear. LANGuage Definition in the server options file and Windows Client User Options File. Specifies the language to use for help and error messages. Note that whereas the Windows client sports a LANGuage client option, the Unix client has no such option, instead relying upon the LANG environment variable, in that OS's environmental language support. Default: en_US (AMENG) for USA. If the client is running on an unsupported language/locale combination, such as French/Canada or Spanish/Mexico, the language will default to US English. Note that the language option does not affect the Web client, which employs the language associated with the locale of the browser. If the browser is running in a locale that TSM does not support, the Web client displays in US English. Ref: Just about every TSM manual discusses language. LANGuage server option, query 'Query OPTion' Laptop computers, back up See "Backup laptop computers". LARGECOMmbuffers ADSMv3 client system options file (dsm.sys) option (in ADSMv2 was "USELARGebuffers"). Specifies whether the client will use increased buffers to transfer large amounts of data between the client and the server. You can disable this option when your machine is running low on memory. Specify Yes or No. Msgs: ANS1030E See also: MEMORYEFficientbackup Default: Yes for AIX; No for all others Last 8 hours, SQL time ref You can form a "within last 8 hours" spec in a SELECT by using the form: [Whatever_Timestamp] >(CURRENT_TIMESTAMP-8 hours) Last Backup Completion Date/Time Column in 'Query FIlespace Format=Detailed'. This field will be empty if the backup was not a full incremental, or it was but did not complete, or if the filespace involves Archive activity rather than Backup. As of TSM 5.1: If the command specified by the PRESchedulecmd or POSTSchedulecmd option ends with a nonzero return code, TSM will consider the command to have failed. Last Backup Start Date/Time Column in 'Query FIlespace Format=Detailed'. This field will be empty if the backup was not a full incremental, or it was but did not complete, or if the filespace involves Archive activity rather than Backup. As of TSM 5.1: If the command specified by the PRESchedulecmd or POSTSchedulecmd option ends with a nonzero return code, TSM will consider the command to have failed. Last Incr Date See: dsmc Query Filespace Last night's volumes See: Volumes used last night LASTSESS_SENT SQL: Field in NODES table is for data sent for *any* TSM client operation, whether it be Archive, Backup, or even just a Query. -LAtest 'dsmc REStore' option to restore the most recent backup verson of a file, be it active or inactive. Without this option, ADSM searches only for active files. See also -IFNewer. LB Ultrium Generation 1 Type C, 30 GB tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. 
The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. Ref: IBM LTO Ultrium Cartridge Label Specification lbtest AIX, NT library test program for use with SCSI libraries using the special device /dev/lb0 or /dev/rmtX.smc. Beware using when TSM is also going after the library, as TSM will fail when it cannot open it. Where it is: Windows: /utils directory Unix: server/bin directory Syntax: Windows: lbtest -dev lbx.0.0.y UNIX: lbtest <-f batch-input-file> <-o batch-output-file> <-d special-file> <-p passthru-device> Unix example: lbtest -dev /dev/lbxx Windows example: c:>lbtest -dev lbx.0.0.y where x is the SCSI address and y is the port number - values available from the server utilities diagnostic screen. Once in lbtest, select manual test, select open, select return element count and then do what you want. Make sure you have your command window scrolling as the stuff goes by awful fast. Ref: There is no documentation provided by Tivoli for this TSM utility. LC Ultrium Generation 1 Type D, 10 GB tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. Ref: IBM LTO Ultrium Cartridge Label Specification LD Ultrium Generation 2 Type B, 100 GB tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. Ref: IBM LTO Ultrium Cartridge Label Specification LE Language Environment. LE Ultrium Generation 2 Type C, 60 GB tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. Ref: IBM LTO Ultrium Cartridge Label Specification Leader data HSM: Leading bytes of data from a migrated file that are replicated in the stub file in the local file system. (The migrated file contains all the file's data; but the leading data of the file is also stored in the stub file for the convenience of limited-access commands such as the Unix 'head' command. The amount of leader data stored in a stub file depends on the stub size specified. The required data for a stub file consumes 511 bytes of space. Any remaining space in a stub file is used to store leader data. If a process accesses only the leader data and does not modify that data, HSM does not need to recall the migrated file back to the local file system. See also: dsmmigundelete; RESToremigstate LEFT(String,N_chars) SQL function to take the left N characters of a given string. Sample usage: SELECT * FROM ADMIN_SCHEDULES WHERE LEFT(SCHEDULE_NAME,4)='BKUP' See also: CHAR() Legato Is bundled with DEC Unix. LF Ultrium Generation 2 Type D, 20 GB tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. Ref: IBM LTO Ultrium Cartridge Label Specification LG Ultrium Generation 3 Type B, future tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. 
Ref: IBM LTO Ultrium Cartridge Label Specification LH Ultrium Generation 3 Type C, future tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. Ref: IBM LTO Ultrium Cartridge Label Specification LI Ultrium Generation 3 Type D, future tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. Ref: IBM LTO Ultrium Cartridge Label Specification libApiDS.a The *SM API library. In TSM 3.7, lives in /usr/tivoli/tsm/client/api/bin See also: dsmapi* Libraries, multiple of same type, avoiding operator confusion Sites may end up with multiple libraries of the same type. How to keep operators from returning offsite tapes to the wrong library? One approach is color-coding: apply solid-color gummed labels to the cartridges and frame the library I/O portal with the same color, making it all but impossible for the operator to goof. Choose yellow and purple, and put Big Bird and Barney pictures onto each library to enhance operator comprehension. Library A composite device consisting of serial media (typically, tapes), storage cells to house them, and drives to read them. A library has its own, dedicated scratch tape pool (dedicated per category code assignment during Checkin, or the like). In TSM, a Library is a logical definition: there may be multiple logical Library definitions for a physical library (as needed when a library contains multiple drive types), with each instance having its own, dedicated scratch tape pool. LIBRary TSM keyword for defining and updating libraries. Note that in TSM a library definition cannot span multiple physical libraries. Library (LibName) A collection of Drives for which volume mounts are accomplished via a single method, typically either manually or by robotic actions. LibName comes into play in Define Library such that Checkin will assign desired category codes to new tapes. LibName is used in: AUDit LIBRary, CHECKIn, CHECKOut, DEFine DEVclass, DEFine DRive, DEFine LIBRary. Is target of: DEFine DEVclass and: DEFine DRive Ref: Admin Guide See also: SCSI Library Library, 3494, define Make sure that the 3494 is online. For a basic definition: 'DEFine LIBRary LibName LIBType=349x - DEVIce=/dev/lmcp0' which takes default category codes of decimal 300 (X'12C') for Private and decimal 301 (X'12D') for 3490 Scratch, with 302 (X'12E') implied for 3590 Scratch. For a secondary definition, for another system to access the 3494, you need to define categories to segregate tape volumes so as to prevent conflicting use. That definition would entail: 'DEFine LIBRary LibName LIBType=349x - DEVIce=/dev/lmcp0 PRIVATECATegory=Np_decimal SCRATCHCATegory=Ns_decimal' where the Np and Ns values are unique, non-conflicting Private and Scratch category codes for this Library. (Note that defined category codes are implicitly assigned to library tapes when a Checkin is done.) See also: SCRATCHCATegory Ref: Admin Guide Library, add tape to 'CHECKIn LIBVolume ...'
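A hedged sketch of the definition chain that typically follows the basic 3494 library definition above - library, drive, device class, storage pool (the names, device special file, and MAXSCRatch value are hypothetical):
    'DEFine LIBRary OIT3494 LIBType=349x DEVIce=/dev/lmcp0'
    'DEFine DRive OIT3494 DRIVE1 DEVIce=/dev/rmt1'
    'DEFine DEVclass 3590CLASS DEVType=3590 FORMAT=DRIVE LIBRary=OIT3494'
    'DEFine STGpool TAPEPOOL 3590CLASS MAXSCRatch=100'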
Library, audit See: AUDit LIBRary Library, count of all volumes Via Unix command: 'mtlib -l /dev/lmcp0 -vqK' Library, count of cartridges in Convenience I/O Station See: 3494, count of cartridges in Convenience I/O Station Library, count of CE volumes Via Unix command: 'mtlib -l /dev/lmcp0 -vqK -s fff6' Library, count of cleaning cartridges Via Unix command: 'mtlib -l /dev/lmcp0 -vqK -s fffd' Library, count of SCRATCH volumes Via Unix command: 'mtlib -l /dev/lmcp0 -vqK -s 12E' (3590 tapes, default ADSM SCRATCH category code) Library, define drive within 'DEFine DRive LibName Drive_Name DEVIce=/dev/??? [ELEMent=SCSI_Lib_Element_Addr]' Note that ADSM will automatically figure out the device type, which will subsequently turn up in 'Query DRive'. Library, multiple drive types Drives with different device types are supported in a single physical library if you perform a DEFine LIBRary for each type of drive. If distinctively different drive device types are involved (such as 3590E and 3590H), you define two libraries. Then you define drives and device classes for each library. In each device class definition, you can use the FORMAT parameter with a value of DRIVE, if you choose. Living with this arrangement involves the awkwardness of having to apportion your scratch tapes complement between the two TSM library definitions. Ref: Admin Guide "Configuring an IBM 3494 Library for Use by One Server" Library, query 'Query LIBRary [LibName] [Format=Detailed]' Note that the Device which is reported is *not* one of the Drives: it is instead the *library device* by which the host controls the library, rather than the conduit for getting data to and from the library volumes. Does not reveal drives: for the drives assigned to a library you have to do 'Query DRive', which amounts to a bottom-up search for the associated library. Note that there is also an unsupported command to show the status of the library and its drives: 'SHow LIBrary'. Library, remove tape from 'CHECKOut LIBVolume LibName VolName [CHECKLabel=no] [FORCE=yes] [REMove=no]' Library, SCSI See: SCSI Library Library, use as both automatic and manual Define the library as two libraries: one automatic, the other manual: def library manual libtype=manual def drive manual mtape device=_____ Then when you want to use the drive as a manual library you do: UPDate DEVclass ____ LIBRary=MANUAL And to change back: UPDate DEVclass ____ LIBRary=Automat Library Client A TSM server which accesses a library managed by a separate TSM server, with data transfer over a server-to-server communication path. Specified via DEFine LIBRary ... SHAREd See also: Library Manager Library debugging If a library is not properly responding to *SM, here are some analysis ideas: - Do 'q act' in the *SM server to see if it is reporting an error. - If the opsys has an error log, see if any errors recorded there. If the lib has its own error log, inspect. Maybe the library gripper or barcode reader is having a problem. - Try to identify what changed in the environment to cause the difference since the problem appeared. - Is the library in a proper mode to service requests (i.e., did some operator leave a switch in a wrong position or change configuration?). For example, a 9710 must have the FAST LOAD option enabled. - Examine response outside of *SM, via the mtlib, lbtest or other command appropriate to your library, emulating the operation as closely as possible. Be next to the lib to actually see what's happening.
- Check networking between *SM and the library: If a direct connection, check cabling and connectors; If networked and on different subnets, maybe an intermediary router problem, or that the library resides in a subnet which is Not Routed (cannot be reached from outside). - Is there a shortage of tape drives, as perhaps tapes left in drives after *SM was not shut down cleanly? - Perform *SM server queries (e.g., 'q pr') as a sluggish request is pending. Do 'Query REQuest' for manual libs to see if a mount is pending. Maybe the server is in polling mode waiting on a tape mount: do 'SHow LIBrary' to see what it thinks. - If CHECKIn is hanging, try it with CHECKLabel=No and see if faster, which skips tape loading and barcode review. Library full situation You can have *SM track volumes that are removed from a full library, if you employ the Overflow Storage Pool method. Ref: Admin Guide, "Managing a Full Library" See: MOVe MEDia, Overflow Storage Pool Library Manager TSM concept for a TSM server which controls device operations when multiple IBM TSM servers share a storage device, per 'DEFINE LIBRary ... SHAREd'. Device operations include mount, dismount, volume ownership, and library inventory. See also: Library Client. Library Manager The PC and application software residing in a 3494 or like robotic tape library, for controlling the robotic mechanism and otherwise managing the library, including the database of library volumes with their category codes. Library Manager, microcode level Obtain at the 3494 control panel: First: In the Mode menu, activate the Service Menu (will result in a second row of selections appearing in menu bar at top of screen). Then: under Service, select View Code Levels, then scroll down to "LM Patch Level", which will show a number like "512.09". Library Manager Control Point (LMCP) The host device name through which a host program (e.g., TSM or the 'mtlib' command) accesses the unique 3494 library that has been associated with that device name, as via AIX SMIT configuration. The LMCP is used to perform the library functions (such as mount and demount volumes). In AIX, the library is accessed via a special device, like /dev/lmcp0. In Solaris, it is more simply the arbitrary symbolic name that you code in the /etc/ibmatl.conf file's first column. That is, in Solaris you simply reference the name you chose to stuff into the file: it is not some peculiar name that is generated via the install programs. The "SCSI...Device Drivers: Programming Reference" manual goes into details and helps make this clearer. Library Manager Control Point Daemon (lmcpd) A process which is always running on the AIX system through which programs on that system interact with the one or more 3494 Tape Libraries which that host is allowed to access (per definitions in the 3494 Library Manager). The executable is /etc/lmcpd. In AIX, the lmcpd software is a device driver. In Solaris, it is instead Unix-domain sockets. The /etc/ibmatl.conf defines arbitrary name "handles" for each library, and each name is tied to a unique lmcp_ device in the /dev/ directory, via SMIT definitions. The daemon listens on port 3494, that number having been added to /etc/services in the atldd install process. There is one daemon and one control file in the host, through which communication occurs with all 3494s. This software is provided on floppy disk with the 3494 hardware. Installs into /usr/lpp/atldd. Updates are available via FTP to the storsys site's .devdrvr dir.
It used to be started in /etc/inittab: lmcpd:234:once:/etc/methods/startatl But later versions caused it to be folded into the /etc/objrepos and /etc/methods/ database system such that it is started by the 'cfgmgr' that is done at boot time. Restart by doing 'cfgmgr' (or, less disruptively, 'cfgmgr -l lmcp0'); or simply invoke '/etc/lmcpd'. Configuration file: /etc/ibmatl.conf If the 3494 is connected to the host via TCP/IP (rather than RS-232), then a port number must be defined in /etc/services for the 3494 to communicate with the host (via socket programming). By default, the Library Driver software installation creates a port '3494/tcp' entry in /etc/services, which matches the default port at the 3494 itself. If to be changed, be sure to keep both in sync. Ref: "IBM SCSI Tape Drive, Medium Changer, and Library Device Drivers: Installation and User's Guide" manual (GC35-0154) See also: /etc/.3494sock; /etc/ibmatl.conf Library Manager Control Point Daemon (lmcpd) PID In /etc/ibmatl.pid; but may not be able to read because "Text file busy". Library not using all drives Examine the following: - Mount limit on device class. - 'SHow LIBrary'; make sure all Online and Available. - If AIX, 'lsdev -C -c tape -H -t 3590' and make sure all Available (do chdev if not). - At library console, assure drives are Available. - If AIX, use errpt to look for hardware problems. - Examine drive for being powered on and not in problem state. Library offline? Run something innocuous like: mtlib -l /dev/lmcp0 -qL If offline, will return: Query operation Error - Library is Offline to Host. and a status code of 255. Library sharing In a LAN+SAN environment, the ability for multiple TSM servers to share the resources of a SAN-connected library. Control communication occurs over the LAN, and data flow over the SAN. One server controls the library and is called the Library Manager Server; requesting servers are called Library Client Servers. (Note that this arrangement does not fully conform to the SAN philosophy, in that peer-level access is absent.) Library sharing contrasts with library partitioning, where the latter subdivides and dedicates portions of the library to each server. Ref: Admin Guide, "Multiple Tivoli Storage Manager Servers Sharing Libraries" Library space shortage An often cited issue is the tape library being "full", hindering everything. This typically results from site management not being realistic and skimping on resources, though that jeopardizes the mission of data backup and leaves the administrators in a lurch. Potential remediations: - Expand the library to give it the capacity it needs for reasonable operation. - Go for higher density tape drives and tapes, to increase library capacity without physical expansion. - Buy tape racks and employ a discipline which keeps dormant tapes outside the library, available for mounting via request. Library storage slot element address See: SHow LIBINV Library volumes, list Use opsys command: 'mtlib -l /dev/lmcp0 -vqI' for fully-labeled information, or just 'mtlib -l /dev/lmcp0 -qI' for unlabeled data fields: volser, category code, volume attribute, volume class (type of tape drive; equates to device class), volume type. (or use options -vqI for verbosity, for more descriptive output) The tapes reported do not include CE tape or cleaning tapes. LIBType Library type, as operand of 'DEFine LIBRary' server command. Legal types: MANUAL - tapes mounted by people SCSI - generic robotic autochanger 349X - IBM 3494 or 3495 Tape Lib.
EXTERNAL - external media management LIBVolume commands The only TSM commands which recognize and handle tapes whose (3494) Category Code is Insert. See: 'CHECKIn LIBVolume', 'CHECKOut LIBVolume', 'LABEl LIBVolume', 'Query LIBVolume', 'UPDate LIBVolume'. Libvolume, remove Use CHECKOut. See also: DELete VOLHistory LIBVOLUMES *SM database table to track volumes which belong to it and which are contained in the named library. Columns: LIBRARY_NAME, VOLUME_NAME, STATUS, LAST_USE, HOME_ELEMENT, CLEANINGS_LEFT Libvolumes, count by Status 'SELECT STATUS,COUNT(*) AS \ "Library Counts" FROM LIBVOLUMES \ GROUP BY STATUS' Libvolumes which are Scratch, count 'SELECT COUNT(*) FROM LIBVOLUMES WHERE STATUS='Scratch' License See also: adsmserv.licenses; dsmreg.lic; Enrollment Certificate Files License, register 'REGister LICense' command. See: REGister LICense License, TSM 4 TSMv4 introduced the Tivoli 'Value-Based Pricing' model, which changed the license options and files: You no longer buy the network enabling license. Instead, the cost of the base server is tiered based on the hardware you are running on. The client license cost is also tiered based on the hardware type and size. Client licenses were also split into two flavors: a managed LAN system - which is basically what we had prior to v4.1 - and a managed SAN system. The end result is basically the same, but the accounting is different. License, unregister See: Unregister licenses See also notes under REGister LICense. License audit period, query 'Query STatus', see License Audit Period 'SHow LMVARS' also reveals it. License audit period, set 'Set LICenseauditperiod N_Days' License file ADSMv2: It is /usr/lpp/adsmserv/bin/adsmserv.licenses which is a plain file containing hexadecimal strings generated by invoking the 'REGister LICense' command per the sheet of codes received with your order. (The adsmserv module invokes the outboard /usr/lpp/adsmserv/bin/dsmreg.lic to perform the encoding.) ADSMv3 and TSM: The runtime file is the "nodelock" file in the server directory. CPU dependency: The generated numbers incorporate your CPU ID, and so if you change processors (or motherboard) you must regenerate this file. If to be located in a directory other than the ADSM server code directory, this must be specified to the server via the DSMSERV_DIR environment variable. Ref: Admin Guide; README.LIC file included in your installation License filesets (AIX), list 'lslpp -L' and look for tivoli.tsm.license.cert tivoli.tsm.license.rte License info, get See: LICENSE_DETAILS; 'Query LICense' LICENSE_DETAILS table SQL table added to TSM 4.1. Columns: LICENSE_NAME One of the usual TSM license feature names, as in: SPACEMGMT, ORACLE, MSSQL, MSEXCH, LNOTES, DOMINO, INFORMIX, SAPR3, ESS, ESSR3, EMCSYMM, EMCSYMR3, MGSYSLAN, MGSYSSAN, LIBRARY NODE_NAME Either the name of a Backup/Archive client or the name of a library. LAST_USED The time the library was last initialized or the last time that client session ended using that feature. License Wizard One of the Windows "wizards" (see the Windows server Quick Start manual) See: Unregister licenses LICENSE_DETAILS TSM 4.1 SQL table. Columns: LICENSE_NAME Varchar L=10 NODE_NAME Varchar L=64 LAST_USED Last access Timestamp LICENSE_NAME is the name of a license feature, being one of: SPACEMGMT, ORACLE, MSSQL, MSEXCH, LNOTES, DOMINO, INFORMIX, SAPR3, ESS, ESSR3, EMCSYMM, EMCSYMR3, MGSYSLAN, MGSYSSAN, LIBRARY where "MGSYS" is Managed Systems. 
NODE_NAME will be either the name of a Backup/Archive client or the name of a library. LAST_USED will be set to the time the library was last initialized or the last time that client session ended using that feature. (The datestamp may be more than 30 days ago; an 'AUDit LICense' will not remove the entry.) See also: 'Query LICense' LICenseauditperiod See: License audit period... Licenses ADSMv3: Held in the server directory as file "nodelock". See: nodelock Licenses, audit See: 'AUDit licenses' Licenses, insufficient Archives are denied with msg ANR0438W Backups are denied with msg ANR0439W HSM is denied with msg ANR0447W DRM is denied with msg ANR6750E Licenses, unregister See: Unregister licenses See also notes under REGister LICense. Licenses and dormant clients There is sometimes concern that having old, dormant filespaces hanging around for a dormant client may take up a client license. If your server level is at least 4.1, doing Query LICense will reveal: Managed systems for Lan in use: x Managed systems for Lan licensed: y where the "in use" value is the thing. From the 4.1 Readme: With this service level the following changes to in use license counting are introduced. - License Expiration. A license feature that has not been used for more than 30 days will be expired from the in use license count. This will not change the registered licenses, only the count of the in use licenses. Libraries in use will not be expired, only client license features. - License actuals update. The number of licenses in use will now be updated when the client session ends. An audit license is no longer required for the number of in use licenses to get updated. (Sadly, this information was not carried over into the manuals.) The above information was further confused by APAR IC32946. See also: AUDit LICenses; Query LICense Licensing problems Can be caused by having the wrong date in your operating system such that TSM thinks the license is not valid. Lightning bolt icon In web admin interface, in a list of nodes: That is a link to the backup/archive GUI interface for the clients. It means you specified its URL for the Client acceptor piece. Clicked, it should bring up that node's web client. You can use that to perform client functions. For it to work: - The client acceptor and remote client agent must be installed on the node. - The client acceptor must be started; leave the remote client agent set to manual startup. - The node must be findable on the network, by name or numeric address. You may need to go into the node and update it with the correct URL for it to work correctly. This gives you a common management point to perform backup/restore procedures. Linux client support for >2 GB files As of TSM 4.2.1, the TSM Linux client can back up Large Files, as possible as of Linux kernel 2.4. LINUX support, ADSM (client only) As of 1998/08, a NON-Supported version of the ADSM Linux client was available pre-compiled (no source code) on the ftp.storsys.ibm.com FTP server in the /adsm/nosuppt directory: file adsmv3.linux.tar.Z (now gone). IBM says: "The TSM source code is not in the public domain." Reportedly worked well with RedHat 5.0. Back then, there was also: http://bau2.uibk.ac.at/linux/mdw/ HOWTO/mini/ADSM-Backup LINUX support, TSM client As of 2000/04/27, a formally supported Linux client is available through the TSM clients site. Installs into /opt/tivoli/tsm/client/.
File system support, per the README: "The functionality of the Tivoli Storage Manager Linux client is designed and tested to work on file systems of the common types EXT2, NFS (see under known problems and limitations for supported environment), and ISO9660 (cdrom). Backup and archive for other file system types is not excluded. They will be tolerated and performed in compatibility mode. This means that features of other file systems types may not be supported by the Linux client. These file system type information of such file systems will be forced to unknown." The RedHat TSM Client reportedly needs at least the 4.2.2.1 client level or higher: the 4.1 client does not support the Reiser file system. LINUX support, TSM server Into mid 2003, implementing a TSM Linux server remains problematic: - Requires very specific (often older) kernel levels. - Device support is spotty. LINUX support, TSM web client You may experience a Java error when trying to use the web client interface (via IE 6.0 SP1 with JRE 1.4.2_03). The Unix Client manual, under firewall support, notes that the two TCP/IP ports for the remote workstation will be assigned to two random ports - which may be blocked by Linux's iptables. You'll want to choose two ports and explicitly open them in iptables. For example: In dsm.sys: webports 1582 1583 In /etc/sysconfig/iptables: -A RH-Lokkit-0-50-INPUT -p tcp -m tcp --dport 1582 --syn -j ACCEPT -A RH-Lokkit-0-50-INPUT -p tcp -m tcp --dport 1583 --syn -j ACCEPT and then restart dsmcad and iptables (/etc/rc.d/init.d/iptables restart). LJ Ultrium Generation 4 Type B, future tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. Ref: IBM LTO Ultrium Cartridge Label Specification LK Ultrium Generation 4 Type C, future tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. Ref: IBM LTO Ultrium Cartridge Label Specification LL Ultrium Generation 4 Type D, future tape cartridge identifier letters, as appears on the barcode label, after the 6-char volser. The first character, L, designates LTO cartridge type, and the second character designates the generation & capacity. Ref: IBM LTO Ultrium Cartridge Label Specification LL_NAME SQL: Low level name of a filespace object, meaning the "filename" portion of the path...the basename. Unix example: For path /tmp/xyz, the FILESPACE_NAME="/tmp", HL_NAME="/", and LL_NAME="xyz". (Remember that for client systems where filenames are case-insensitive, such as Windows, TSM stores them as UPPER CASE.) See also: HL_NAME LLAddress REGister Node specification for the client's port number, being a hard-coded specification of the port to use, as opposed to the implied port number discovered by the TSM server during client sessions (which may be specified on the client side via the TCPCLIENTPort option). See also: HLAddress LM Library Manager. LMCP See Library Manager Control Point LMCP Available? 'lsdev -C -l lmcp0' lmcpd See: Library Manager Control Point Daemon. lmcpd, restart '/etc/kill_lmcpd' '/etc/lmcpd' lmcpd, shut down '/etc/kill_lmcpd' lmcpd level 'lslpp -ql atldd.driver' lmcp0 Library Manager Control Point, only for 3494 libraries. 
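A hedged illustration of the FILESPACE_NAME/HL_NAME/LL_NAME decomposition described in the LL_NAME entry above, locating the backup entries for Unix file /tmp/xyz in the server's BACKUPS table (the node name is hypothetical; such SELECTs can be expensive on a large database):
    SELECT NODE_NAME, FILESPACE_NAME, HL_NAME, LL_NAME, STATE FROM BACKUPS WHERE NODE_NAME='ACCT01' AND FILESPACE_NAME='/tmp' AND HL_NAME='/' AND LL_NAME='xyz'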
lmcp0, define Library Manager Control Point to AIX: /etc/methods/defatl -ctape -slibrary -tatl -a library_name='OIT3494' LOADDB See "DSMSERV LOADDB". Local Area Network (LAN) A variable-sized communications network placed in one location. It connects servers, PCs, workstations, a network operating system, access methods, and communications software and links. Local file systems See: File systems, local LOCK Admin ADSM server command to prevent an administrator from accessing the server, without altering privileges. Syntax: 'LOCK Admin Adm_Name' Note: Cannot be used on the SERVER_CONSOLE administrator id. Inverse: UNLOCK Admin LOCK Node TSM server command to prevent a client node from accessing the server. Syntax: 'LOCK Node NodeName'. A good thing to do before Exporting a node. Inverse: UNLOCK Node lofs (LOFS) "Loopback file system", or "Loopback Virtual File System": a file system created by mounting a directory over another local directory, also known as mount-over-mount. A LOFS can also be generated using an automounter. Under SGI IRIX, an AUTOFS (automount) file system. Loopback file systems provide access to existing files using alternate pathnames. Once such a virtual file system is created, other file systems can be mounted within it without affecting the original file system. An example: mount -t lo -o ro /real/files /anon/ftp/files To check your mount: mount -p Then put the new info from mount -p into your /etc/fstab. See also: all-lofs; all-auto-lofs Log See: Recovery log Log buffer pool See: LOGPoolsize Log command output To log command output, invoke the ADSM server command as in: 'dsmadmc -OUTfile=SomeFilename ...'. See also: Redirection of command output Log file name, determine 'Query LOGVolume [Format=Detailed]' Log pinning See: Recovery Log pinning %Logical ADSM v.3 Query STGpool output field, later renamed to "Pct Logical" (q.v.). Logical file A client file stored in one or more server storage pools, either by itself or as part of an aggregate file (small files aggregation). See also: Aggregate file; Physical file Logical occupancy The space required for the storage of logical files in a storage pool. Because logical occupancy does not include the unused space created when logical files are deleted from aggregates (small files aggregation), it may be less than physical occupancy. See also: physical file; logical file Logical volume See: Raw Logical Volume Logical volume backups Available in ADSM 3.7. A way to obtain a physical image of the overall volume, rather than traversing the file system contained in the volume. Advantages: - Fast backup and restoral, in not having to diddle with thousands of files. - Minimal TSM db activity: just one entry to account for the single image, not thousands to account for all the files in it. - Simple way to snapshot your system for straightforward point-in-time restorals. Disadvantages: - Image integrity: no way to know or deal with contained files or vendor databases being open or active. Logmode See: Set LOGMode Logmode, query 'Query STatus', look for "Log Mode" near bottom. Logmode, set Set LOGMode Loop mode Term used for invocation of the command line client command in interactive mode. See: dsmc LOOP Loopback file system See: lofs LOwmig Operand of 'DEFine STGpool', to define when *SM can stop migration for the storage pool, as a percentage of the storage pool occupancy. Can specify 0-99. Default: 70. To force migration from a storage pool, use 'UPDate STGpool' to reduce the LOwmig value.
You could reduce it all the way to 0; but if a backup or like task is writing to the storage pool, the migration task will not end until the backup ends; so a value of 1 may be better as a dynamic minimum. When migration kicks off, it will drain to below this level if CAChe=Yes in your storage pool because caching occurs only with migration, and at that point ADSM wants to cache everything in there. It is also the case that Migration fully operates on the entirety of a node's data, before re-inspecting the LOwmig value; thus, the level of the storage pool may fall below the LOwmig value. See: Migration LOGPoolsize Definition in the server options file. Specifies the size of the Recovery Log buffer pool, in Kbytes. A large buffer pool may increase the rate by which Recovery Log transactions are committed to the database. To see if you need to increase the size of this value, do 'Query LOG Format=Detailed' and look at "Log Pool Pct. Wait": if it is more than zero, boost LOGPoolsize. Default: 512 (KB); minimum: 128 (KB) See also: COMMIT Ref: Installing the Server... LOGPoolsize server option, query 'Query OPTion', see LogPoolSize LOGWARNFULLPercent Server option: Specifies the log utilization threshold at which warning messages will be issued. Syntax: LOGWARNFULLPercent where the percentage is that of log utilization at which warning messages will begin. After messages begin, they will be issued for every 2% increase in log utilization until utilization drops below this percentage. Code as: 0 - 98. Default: 90 See also: SETOPT Long filenames in Netware restorals From the TSM Netware client manual: "If files have been backed up from a volume with long name space loaded, and you attempt to restore them to a volume without long name space, the restore will fail." Long-term data archiving See: Archive, long term, issues Long-term data retention See: Archive, long term, issues Lotus Domino Mail server package, backed up by Tivoli Storage Manager for Mail (q.v.). Domino release 5 introduced new backup APIs, exploited by TDP for Lotus Domino. In Domino, every user has her own mail box database, so it can be individually restored. However, you cannot restore just a single document: you have to restore the DB and copy the document over. See also: TDP... Lotus Domino and compression The bytes read/written/transfered messages from TDP for Domino will be the same whether compression is on or off. Those messages are all based on the number of bytes read and does not take into account any compression being done by the TSM API. You would need to query the occupancy on the server to see any difference. Lotus Notes Agent Note that *SM catalogs every document in the Notes database (.NSF file). Low threshold A percentage of space usage on a local file system at which HSM automatically stops migrating files to ADSM storage during a threshold or demand migration process. A root user sets this percentage when adding space management to a file system or updating space management settings. Contrast with high threshold. See: dsmmigfs Low-level address Refers to the port number of a server. See also: High-level address; Set SERVERHladdress; Set SERVERLladdress LOwmig Operand of 'DEFine STGpool', to define when *SM can stop migration for the storage pool, as a percentage of the storage pool estimated capacity. When the storage pool reaches the low migration threshold, the server does not start migration of another node's files. 
Because all file spaces that belong to a node are migrated together, the occupancy of the storage pool can fall below the value you specified for this parameter. You can set LOwmig=0 to permit migration to empty the storage pool. Can specify 0-99. Default: 70. To force migration from a storage pool, use 'UPDate STGpool' to reduce the HIghmig value (with HI=0 being extreme). See also: Cache; HIghmig lpfc0 See: Emulex LP8000 Fibre Channel Adapter LRD In Media table, the Last Reference Date (YYYY-MM-DD HH:MM:SS.000000). LTO Linear Tape - Open. In 1997 IBM formed a partnership with HP and Seagate on an open tape standard called LTO or Linear Tape Open. LTO will be based on Magstar MP. (Conspicuously missing from the partnership is Quantum, the sole maker of DLT drives: LTO was devised as a mid-range tape technology in avoiding paying royalties to Quantum. Quantum subsequently advanced to SuperDLT to compete with LTO.) Employs servo tracking for precise positioning. Comes in two flavors, with different cartridges: Accelis (based upon IBM 3570) and Ultrium (based upon IBM 3590). The Accelis and Ultrium formats use the same head / media track layout / channel / servo technology, and share many common electronic building blocks and code blocks. Accelis is optimized for quick access to data while Ultrium is optimized for capacity. Note that Accelis was abandoned in favor of Ultrium, expecting that customers would want higher capacity rather than high performance. Cartridge Memory (LTO CM, LTO-CM) chip is embedded in both Accelis and Ultrium cartridges. A non-contacting RF module, with non-volatile memory capacity of 4096 bytes, provides for storage and retrieval of cartridge, data positioning, and user specified info. Capacity and speed are intended to double in each succeeding generation of the technology. Performance: LTO is streaming technology. If you cannot keep the data flowing at tape speed, it has to stop, back up, and restart to get the tape up to speed again, which makes for a substantial performance penalty. LTO seems, as a product, to be positioned between the compating DLT and the complementary, higher-priced 3590 and STK 9x40. SAN usage: Initially supported via SDG (SAN Data Gateway). Visit: http://lto-technology.com/ http://www.lto-technology.com/newsite/ index.html http://www.ultrium.com http://www.storage.ibm.com/hardsoft/ tape/lto/index.html http://www.cartagena.com/naspa/LTO1.pdf http://www.overlanddata.com/PDFs/ 104278-102_A.pdf http://www.ibm.com/storage/europe/ pdfs/lto_mag.pdf See also: 3583; Accelis; MAM; TXNBytelimit and tape drive buffers; Ultrium LTO bar code format - Quiet zones (at each end of the bar code). - A start character (indicating the beginning of the label). - A six-character volume label. - A two-character cartridge media-type identifier (L1), which identifies the cartridge as an LTO cartridge ('L') and indicates that the cartridge is the first generation of its type ('1'). - A stop character (indicating the end of the label) When read by the library's bar code reader, the bar code identifies the cartridge's volume label to the tape library. The bar code volume label also indicates to the library whether the cartridge is a data, cleaning, or diagnostic cartridge. LTO cleaning cartridge See: Ultrium cleaning cartridge LTO drive cleaning Seldom required. At each tape unload the LTO drives have a small mechanical brush that runs over the heads. This seems to reduce the need for cleaning. 
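A hedged sketch of forcing a disk storage pool to drain via the migration thresholds discussed in the LOwmig and HIghmig entries above (the pool name and the restored threshold values are hypothetical):
    'UPDate STGpool BACKUPPOOL HIghmig=0 LOwmig=0'      (migration starts and empties the pool)
    'UPDate STGpool BACKUPPOOL HIghmig=90 LOwmig=70'    (restore normal thresholds afterward)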
LTO performance See Tivoli whitepaper "IBM LTO Ultrium Performance Considerations" Note that performance can be impaired if the LTO-CM memory chip (aka Medium Auxiliary Memory: MAM) has failed. A worse problem is one which was divulged 2004/09/13, where bad LTO1,2 microcode will cause the CM index to be corrupted. Without the index, the drive has to grope its way through the data to find what it needs to access, and performance is severely impaired. The LTO architecture is designed to automatically re-build this index if it should become corrupted. However, when this corrupted index condition is detected, slow performance is the result as the index is re-built, as the tape must be re-read from the beginning to the end of the tape. A corrupted index may be fixed the next time it is used, only to be corrupted again at a future time: installing corrected drive microcode is the only solution. LTO customers should use TapeAlert, which spells out drive problems. LTO tape errors Can be caused by the cartridge having been dropped. (The LTO cartridges are not as rugged as 3480/3490/3590 tape cartridges.) LTO tape serial number The barcode may have "SU3689L1", wherein the serial number is "SU3689" - does not include the "L1". LTO vs. 3590 An LTO drive is 5 inches tall and roughly twice as long as the data cartridge; the motor is lightweight, and there is no tape 'buffer' between the cart and the internal reel. The motor on a 3580/3590 is much larger and heavier, and there is a vacuum column buffer between the cart and the internal reel. The net result is that the 3590 needs to get one reel or the other up to speed and has several inches of tape to accelerate AND has a much more powerful motor to do it. The LTO drive, with a lighter motor, has no tape buffer and needs to get both reels and all the tape moving. It is also the case that LTO is designed for streaming: the start-stop operation associated with small files is greatly detrimental to LTO performance (see: Backhitch). See also: LTO vs. 3590 LTO1 drives, IBM Those are 3580 Ultrium 1 drives. See: 3580 LTO-2 (lto2) See: Ultrium 2 LuName server option, query 'Query OPTion' LVM Fixed Area The 1 MB reserved control area on a *SM database volume, as accounted for in the creating 'dsmfmt -db' operation. See also: SHow LVMFA LVSA Logical Volume Snapshot Agent. For making an image backup of a Windows 2000 volume while the volume continues to be available for other processing. TSM will create the OBF (Old Blocks File) there, and perform the backup from there. Default location: C:\TSMLVSA See also: Image Backup; OBF; Open File Support; SNAPSHOTCACHELocation LZ1 IBM's proprietary version of Lempel-Ziv encoding called IBM LZ1. Macintosh, shut down after backups Put into the ADSM prefs file: "SCHEDCOMpleteaction Shutdown" Macintosh backup file names Macintosh has traditionally used the colon character (:) rather than slash (/) or backslash (\) as its directory designation character. Interestingly, this persists into OS X, where the user interface makes the directory character seem to be the usual Unix slash (/); but OS X invisibly translates that to and from its usual colon (:). So, if you do Query CONtent or the like at the TSM server, you will see the actual colons separating file path components. Macintosh client components The following components are in the Macintosh client package: Backup: The interactive GUI for backup, restore, archive, retrieve. 
~2.8MB Scheduler daemon: A background appl that operates in sleep mode until it is time to run a schedule, then starts the Scheduler program. ~120KB Scheduler program: Communicates with the server for the next schedule to run, and performs the scheduled action, such as a backup or restore, at the scheduled time. ~1.5MB Macintosh disaster recovery Simply take some kind of removable disk (Syquest, ZIP, ...) with enough capacity and put a minimal version of MacOS (with TCP/IP support) and ADSM on it. Macintosh files, back up from NT Yes, ADSM can do this, via NT "Services for Macintosh". NT can access Macintosh file systems, and from NT you can then back them up. BUT: ADSM version 2 cannot handle the resource fork portion of the files (ADSM v3 can). V.2 restorals thus bring the files back as "flat files". See: Services for Macintosh; USEUNICODEFilenames Macintosh files, restore to NT The Mac files must be restored to a directory managed by "Services for Macintosh". Also make sure that Services for Macintosh is up and running. Macintosh icons, effects of moving In the Mac client V3 manual, Chapter 3, page 13, it says: "Simply moving an icon makes the file appear changed. ADSM records the change in icon position to minimize the problem of multiple icons occupying the same space after the files are restored. If only the attributes of a file or folder have changed, and not the data, only the attributes are backed up. You may have multiple versions of the same file with the only difference between them being the icon position or color." Macintosh OS X scheduler Via dsmcad. It's started from the script /Library/StartupItems/dsmcad/dsmcad when Mac OS X boots. You should see a /usr/bin/dsmcad running. If checking with the GUI client, you'll need to use 'TSM Backup for Administrators' rather than the plain 'TSM Backup': the latter will only show other users' backed up directories, not their files. MACRO TSM server command used to invoke a user-programmed set of TSM commands, as a package, with variable substitution. Syntax: 'MACRO MacroName [Substitutionvalues]' where the macro file name is case-sensitive and Substitutionvalues fill in percent-signed numbers, in numerical order by invocation order. Example of variables: %1, %2, %3. Note that you cannot run a macro via an Administrative Schedule - but you can via a Client Schedule, via ACTion=Macro with OBJects naming the macro...which means that the schedule must be associated with a node and that its dsmc sched process causes the macro to run. (Consider instead using Server Scripts.) Redirection: Works The TSM manuals are obscure as to where macro files are supposed to be located. In actuality, they can be: - In the directory where the dsmadmc command was invoked, whereby you can invoke the macro simply by its base name, as in: MACRO mymacro - In any system directory, whereby you need to invoke the macro by full path name, as in: MACRO /usr/local/adsm/mymacro One convenient practice would be to create a standard macros directory, and then 'cd' there before invoking 'dsmadmc', thus allowing you to invoke the macros with short names. Note that you do not need eXecute permission to be set on macro files, in that ADSM will load and interpret them. An unusual factor is that TSM keeps going back to the macro as it performs it, even if the macro is simple and certainly involves no looping: changing the content of the macro during a "more..." screen transition, for example, will result in an "ANR2000E Unknown command" error message.
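A hedged sketch of a macro using substitution variables, per the MACRO entry above (the file name and the commands it packages are hypothetical):
    Contents of file regnode.mac:
      /* Register a node, then lock it until its setup is complete */
      REGister Node %1 %2 DOmain=STANDARD
      LOCK Node %1
    Invocation from the administrative command line:
      'MACRO regnode.mac newnode newpassword'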
Ref: Admin Guide chapter "Automating Server Operations", Using Macros See also: /* */; Server scripts Magic Number You will run into occasional TSM server messages referring to "magic number". This amounts to a checksum number which TSM generated and stored in the database at the time it put the file object into its storage pool (wrote it to media), to assure data integrity. When, at some time in the future, TSM is called upon to retrieve the object from that media, it generates a checksum from the retrieved file data and checks that it matches what it originally had for the object. An error indicates that the data could be read from the media without hardware/OS detection of an error, but nevertheless there is a discrepancy. The data is thus deemed corrupted and hopeless: you need to perform a Restore Volume or the like to get a usable copy of the object. How did the data go bad? The most likely cause is between TSM and the tape head: Faulty hardware, erroneous firmware, bad SCSI cables, network infrastructure problems, and the like can all result in bad data ending up on the media. Magstar Product line acronym: Magnetic storage and retrieval. Name supplanted in 2002 by IBM TotalStorage. See also: IBM TotalStorage; TotalStorage Magstar MP IBM's name for its 3570 and 3575 technology. MAILprog Client System Options file (dsm.sys) option to specify who gets mail, and via what mailer program, when a password expires and a new password is generated. Can be used when PASSWORDAccess Generate is in effect. Code within the SErvername section of definitions. Format: "Mailprog /mail/pgmname User_Id" See also: PASSWORDAccess; PASSWORDDIR MAKESPARSEFILE See: Sparse files, handling of MAM Medium Auxiliary Memory: An Auxiliary Memory residing on a medium, for example, a tape cartridge. Some tape technologies - e.g., AIT and LTO (Ultrium) - use cartridges equipped with Medium Auxiliary Memory (MAM), a non-volatile memory used to record medium identification and usage info. This is typically accessed via an RF interface and does not require reading the tape itself. In a library not equipped with a mobile MAM reader, it is necessary to load the cartridge into the drive to read the MAM via the drive's MAM reader. Ref: http://www.t10.org/ftp/t10/ document.99/99-347r0.pdf Mammoth tape drive Exabyte 8mm (helical scan) tape drive with SCSI-2 fast interface, wide or narrow, with SE or differential as an option. Capacity: 20 GB, native/uncompressed; 40 GB compressed. Transfer Rate: 10.5 GB per hour, native/uncompressed; 360 MB/min compressed rate. Technology is similar to AIT-1. Mammoth-2 tape drive Exabyte 8mm tape drive (helical scan). Form factor: half-height, 5.25" Capacity: 60 GB Transfer rate: 12 MBps Cartridge tape contains a section of cleaning fabric which the drive uses as needed. Technology is similar to AIT-2. Managed Server See: Enterprise Configuration and Policy Management MANAGEDServices Windows client option for having CAD cause the client scheduler, and web client, to run rather than have them hang around as memory-holding processes. Syntax: MANAGEDServices {[schedule] [webclient]} See also: CAD Management class (HSM) A policy object that contains a collection of space management attributes and backup and archive Copy Groups. The space management attributes contained in a Management Class determine whether HSM-managed files are eligible for automatic or selective migration.
The attributes in the backup and archive Copy Groups determine whether a file is eligible for incremental backup and specify how ADSM manages backup versions of files and archived copies of files. The management class is typically chosen for users by the node root administrator (via 'ASsign DEFMGmtclass') but can alternately be selected as the third token on the INCLUDE line in the include-exclude options file, or via the DIRMc Client Systems Option File option, or the ARCHMc 'dsmc archive' command line option. However, automatic migration occurs *only* for the default management class; for the incl-excl named management class you have to manually incite migration. Management class, choose Is accomplished by specifying the mangement class as the third token on a client Include option. Format: Include FileSpec MgmtClassName To have all backups use the management class, code: Include * MgmtClassName To have specific file systems use the management class, do like: Include /fsname/.../* MgmtClassName Ref: Client B/A manual Management class, copy See: COPy MGmtclass Management class, default As the name implies, this is the management class which will be used by default. Can be overridden via the third token on the INCLUDE line in the include-exclude options file. However, automatic migration occurs *only* for the default management class; for the incl-excl named management class you have to manually incite migration. Management class, default, establish 'ASsign DEFMGmtclass DomainName SetName ClassName' To make this change effective you then need to do: 'ACTivate POlicyset DomainName SetName' Management class, define 'DEFine MGmtclass DomainName SetName ClassName [SPACEMGTECH=AUTOmatic| SELective|NONE] [AUTOMIGNOnuse=Ndays] [MIGREQUIRESBkup=Yes|No] [MIGDESTination=poolname] [DESCription="___"]' Note that except for DESCription, all of the optional parameters are Space Management Attributes for HSM. Management class, delete 'DELete MGmtclass DomainName SetName ClassName' Management class, query 'Query MGmtclass [[[DomainName] [SetName] [ClassName]]] [f=d]' See also: Management classes, query Management class, SQL queries It is: CLASS_NAME Management class, update See: UPDate MGmtclass Management class for HSM, select HSM uses the Default Management Class which is in force for the Policy Domain, which can be queried from the client via the dsmc command 'Query MGmtclass'. You may override the Default Management Class and select another by coding an Include-Exclude file, with the third operand on an Include line specifying the Management Class to be used for the file(s) named in the second operand. Management class used by a client 'dsmc query mgmtclass' or 'dsmc query options' in ADSM ('dsmc show options' in TSM). Management class used in backup Shows up in 'dsmc query backup', whether via command line or GUI. Management classes, display in detail 'dsmmigquery -M -D' Management classes, query from client 'dsmc Query Mgmtclass [-DETail]' Reports the default management class and any management classes specified on INCLude statements in the Include/Exclude file. Management classes, unused, identify You can perform queries like the following, for Archives and Backups: SELECT DOMAIN_NAME, CLASS_NAME FROM MGMTCLASSES WHERE CLASS_NAME NOT IN (SELECT DISTINCT(CLASS_NAME) FROM ARCHIVES) MANUAL (libtype) See: Manual library Manually Ejected category 3494 Library Manager category code FFFA for a tape volume which was in the inventory but in a re-inventory was not found in the 3494. 
Thus, the 3494 thinks that someone reached in and removed it. This category is typically induced by having to extricate a damaged tape from the robot. See "Purge Volume" category to eliminate such an entry. Manual library No, it's not a library full of manuals; it's a library whose volumes are to be mounted manually, by people responding to mount messages. It is distinguished by LIBType=MANUAL in DEFine LIBRary; and the tape device will be of "mt" type, rather than "rmt" (*SM driver). A shop running this type of operation will usually have an operations terminal running the *SM administrative client in Mount Mode (dsmadmc -mountmode), simply for the operators to see and respond to mount requests. Outstanding mount requests can be checked via Query REQuest. Such requests are answered with the REPLY command acknowledging a specific request number, to signify that the action requested has been performed by the operator such that *SM can proceed. Manuals See: TSM manuals "Many small files" problem The name of the challenge where backups involve a large number of small files, which stresses the TSM database due to the heavy updating and number of database entries, and the client's memory and processing power in performing an Incremental backup. See "Database performance" for ways to mitigate the impact on the TSM database and optimize performance. Other possible approaches: - To somewhat reduce Backup time, consider using -INCRBYDate backup, which eliminates getting a long list of files from the server, massaging it in client memory, and then comparing as the file system is traversed. (But see the INCRBYDate entry for side effects.) - Another Backup time reduction scheme: With some client file systems it may be known in what area updating occurs, as in the case of a company doing product testing which creates thousands of results files in subdirectories named by product and date. Here you can tailor your backup to go directly at those directories and skip the rest of the file system, where you know that little or nothing has changed. - Journal-Based Backups may be a good alternative on Windows. - Consider 'dsmc Backup Image' (q.v.), to back up the physical image of a volume (raw logical volume) rather than individually backing up the files within it. - Some customers pre-combine many small files on the client system, as with the Unix 'tar' command or personal computer file bundling packages, thus reducing the quantity to a single bundle file. - If regulations require you to keep files for a certain period, consider using Backup Sets rather than doing full backups. - Consider a "divide and conquer" approach, using parallel backup processes to operate on separate areas of a file system housing many small files, to reduce the overall time to perform the backup. You may employ a 'dsmc i' for each major top-level directory, to back up into the same TSM server filespace, or use the VIRTUALMountpoint option to cause the file system to be treated as multiple filespaces. Naturally, this can be effective only if your disk and I/O path can meet the demands.) Your retention policies need to be reasonable: don't arbitrarily retain a year's worth of versions, but rather keep as much as is really needed to recover files. Make sure you are running regular, unlimited expirations, else your TSM database will balloon. The backup of small files is also problematic with tape drives with poor start-stop characteristics (see Backhitch). 
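As an illustration of the "divide and conquer" approach above - a minimal sketch only, with hypothetical file system and directory names, since per-site paths will differ - parallel Unix client invocations might look like:
   dsmc i "/bigfs/proj1/*" -SUbdir=Yes &
   dsmc i "/bigfs/proj2/*" -SUbdir=Yes &
   wait
Alternatively, dsm.sys lines like "VIRTUALMountpoint /bigfs/proj1" and "VIRTUALMountpoint /bigfs/proj2" would cause those directories to be treated as separate filespaces.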
The condition of the directory in which the small files exist can also slow things down: see "Backup performance". Consider turning on client tracing to identify the specific problem area. Master Drive An informal name for the first, SMC drive in a SCSI library, such as the 3584. (Remove that drive and you suffer ANR8840E trying to interact with the library.) MATCHAllchar Client option to specify a character to be used as a match-all wildcard character. The default is an asterisk (*). MATCHOnechar Client option to specify a character to be used as a match-one wildcard character. The default is a question mark (?). MAX SQL statement to yield the largest number from all the rows of a given numeric column. See also: AVG; COUNT; MIN; SUM MAXCAPacity Devclass keyword for some devices (principally, File) to specify the maximum size of any data storage files defined to a storage pool categorized by this device class. MAXCAPACITY, if set to other than 0, determines the maximum amount of data ADSM will put on a tape. ESTCAPACITY, if MAXCAPACITY is not set, is an estimate used for some calculations for reclamation and display, but does not determine when a tape is full. On VM and MVS servers MAXCAPACITY is the maximum amount of data that ADSM will put on a tape, but if the tape becomes physically full, or has certain errors, it will be marked full before it reaches that capacity. The capacity reported by ADSM does not consider compression. If client compression is used, or if the data is not very compressible (backups of zip files, for example) then ADSM will report a full tape with a smaller capacity. Most tape manufacturers give their tape capacity assuming compression (I think normally around 3/1), so if you are sending already compressed data, you will not be able to reach the stated capacities. MAXCMDRetries Client System Options file (dsm.sys) option to specify the maximum number of times you want the client scheduler to attempt to process a scheduled command which fails. Default: 2 Do not confuse with the Copy Group SERialization parameter, which governs attempts on a busy file, not session reattempts. Maximum command retries 'Query STatus' Maximum mounts See: MOUNTLimit Maximum Scheduled Sessions 'Query STatus' output reflecting the number of schedule sessions possible, as controlled by the 'Set MAXSCHedsessions' command: a percentage of the Maximum Sessions value seen in 'Query STatus'. Default: 50% of Maximum Sessions. MAXMIGRATORS HSM: New in 4.1.2 HSM client, per the IP22148.README.HSM.JFS.AIX43 file: Starting with this release, dsmautomig starts parallel sessions to the TSM server, which allows more than one file to be migrated at a time. The number of parallel migration sessions is governed by the dsmautomig process-specific option that can be configured in the dsm.sys file: MAXMIGRATORS (default = 1, min = 1, max = 20) Make sure that sufficient resources are available on the TSM server for parallel migration. Avoid setting the MAXMIGRATORS option higher than the number of sessions the TSM server can devote to storing data. maxmountpoint You mean MAXNUMMP (q.v.) MAXNUMMP TSM 3.7+ server REGister Node, UPDate Node parameter to limit the number of concurrent mount points, per node, for Archive and Backup operations. Prevents a client from taking too many tape drives at one time. Affects parallelization. Code 0 - 999. Default: 1 Warning: A value of 0 will result in ANS1312E message and immediate termination of a backup/archive session; but restore/retrieve will not be impeded.
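For example - node name hypothetical - to allow a client to use two mount points (tape drives) concurrently for its backups: 'UPDate Node NODEA MAXNUMMP=2' The value in effect can then be confirmed via 'Query Node NODEA Format=Detailed'.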
Warning: Upgrading to 3.7, with its attendant database conversion, results in the MAXNUMMP value being 0! Ref: TSM 3.7 Technical Guide, 6.1.2.3 See also: KEEPMP; MOUNTLimit; Multi-session client; REGister Node MAXPRocess Operand in 'BAckup STGpool', 'MOVe NODEdata', 'RESTORE STGpool', and 'RESTORE Volume' to parallelize the operation - tempered by the number of tape drives. Note that the "process" implication in the name harks back to the days when server tasks were performed by individual processes: in these modern times, MAXPRocess is figurative and actually governs the number of threads. MAXRecalldaemons Client System Options file (dsm.sys) option to specify the maximum number of dsmrecalld daemons which may run at one time to service HSM recall requests. Default: 20 MAXRECOncileproc Client System Options file (dsm.sys) option to specify the maximum number of reconciliation processes which HSM can start automatically at one time. Default: 3 MAXSCRatch Operand in 'DEFine STGpool' to govern the use of scratch tapes in the storage pool. Specifies the maximum number of scratch volumes that may be taken for the storage pool, cumulatively. That is, each volume taken from the scratch pool is still known as a scratch volume, as reflected in the Query Volume "Scratch Volume?" value, and will return to the scratch pool when emptied. The MAXSCRatch value is thus the storage pool's quota limit. Setting MAXSCRatch=0 prevents use of scratch volumes, an intentional special case when you want to have the storage pool use only volumes specifically assigned to it, via 'DEFine Volume'. If MAXSCRatch is greater than 0 and you have also DEFine'd volumes into the storage pool, the DEFine'd volumes will be used first, then scratches. Msgs: ANR1221E MAXSCRatch, query 'Query STGpool ... Format=Detailed'; look for the value associated with "Maximum Scratch Volumes Allowed". MAXSCRatch and collocation ADSM will never allocate more than 'MAXSCRatch' volumes for the storage pool: collocation becomes defeated when the scratch pool is exhausted as ADSM will then mingle clients. When a new client's data is to be moved to the storage pool, ADSM will first try to select a scratch tape, but if the storage pool already has 'MAXSCRatch' volumes then it will select the tape with the lowest utilization in the storage pool. MAXSessions Server options definition (dsmserv.opt). Specifies the number of simultaneous client sessions. The MAXSessions value is incremented by prompted sessions, polling sessions, and admin sessions. When an attempt is made to prompt a client there is a 1 minute delay for response from that client. The next client to be prompted is not prompted until either the first client responds or the 1 minute delay elapses. So if you have many prompted clients, be sure your schedule starttime duration is large enough to accommodate 1 minute delays. Typically the client will start as soon as prompted, so you may have prompted clients that are not "loaded" and consequently the entire delay is used waiting for a client that is not going to respond. Even if you are maxed out on the MAXSessions value, you can always start more administrative clients. Default: 25 client sessions Ref: Installing the Server... See also: Multi-session Client; "Set MAXSCHedsessions %sched", whereby part of this total MAXSessions value is devoted to Schedule sessions; SETOPT MAXSessions server option, query 'Query OPTion', see "Maximum Scheduled Sessions".
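As a quick cross-check against the MAXSessions setting - a sketch only, assuming the standard SESSIONS table - the number of sessions currently active can be had via:
   SELECT COUNT(*) AS "Current sessions" FROM SESSIONS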
MAXSize STGpool operand to define the maximum size of a Physical file which may be stored in this pool. (Remember that Physical size refers to the size of an Aggregate, not the size of a Logical file from the client file system. See "Aggregates".) Limiting the size of a file eligible for a given pool in a hierarchy causes larger files to skip that storage pool and try the next one down in the hierarchy. If the file is too big for any pool in the hierarchy, it will not be stored. The file's size, as reported by the operating system, is compared to the storage pool's MAXSize value PRIOR TO compression. Value can be specified as "NOLIMIT" (which is the default), or a number followed by a unit type: K for kilobytes, M for megabytes, G for gigabytes, T for terabytes. Examine current values via server command 'Query STGpool Format=Detailed'. Msgs: ANS1310E See also: Storage pool space and transactions MAXThresholdproc Client System Options file (dsm.sys) option to specify the maximum number of HSM threshold migration processes which can start automatically at one time. Default: 3 Maximum sessions, define "MAXSessions" definition in the server options file. Maximum sessions, get 'Query STatus' MB Megabyte: To be considered equal to 1024x1024 = 1,048,576 in TSM. (Note that disk makers base their sizings on 1000, not 1024.) MBps Megabytes per second, a data rate typically used with tape drives. Mbps Megabits per second, a data rate typically associated with data communication lines. Media Access Status Element of Query SEssion F=D report. "Waiting for access to output volume ______ (___ seconds)" may reflect the volume name that the session was waiting for when it started - but that may no longer be the actual volume needed. For example: an Archive session fills the disk storage pool in a hierarchy where tape is the next level, and so a migration process is incited...and so the client is waiting on the tape which the migration process is migrating to. Then that tape fills. Migration goes on to a fresh tape, but the archive session still shows waiting for access to the original tape. When neither Query Process nor Query Session F=D show the volume identified in "Waiting for access...", it can be due to a backup of HSM-managed space where that volume is feeding the backup directly from the storage pool rather than the client, as HSM backups operate where the HSM space is on the *SM server. Query Session F=D shows only the output volume, not the implicit input. "Current output volume(s): ______,(470 Seconds)" is an undocumented form, which seems to reflect how long the tape has been idle, as for example when the client is looking for the next candidate file to back up. This impression is reinforced by the Seconds value dropping back to zero periodically. If that HSM backup cannot mount either the input or output volumes for lack of drives, the field will report two "Waiting for mount point..." instances, which looks odd but makes perfect sense. Media fault message ANR8359E Media fault ... (q.v.) Media Type IBM 34xx tape cartridges have an external one-character ID, as follows: '1' Cartridge System Tape (CST): 3490 'E' Enhanced Capacity Cartridge System Tape (ECCST): 3490E 'J' Magstar 3590 tape cartridge (HPCT) 'K' Magstar 3590 tape cartridge (EHPCT) See also: CST; ECCST; HPCT Media TSM db table to intended to report volumes managed via the MOVe MEDia cmd. 
Columns: VOLUME_NAME, STATE (MOUNTABLEINLIB, MOUNTABLENOTINLIB), UPD_DATE (YYYY-MM-DD HH:MM:SS.000000), LOCATION, STGPOOL_NAME, LIB_NAME, STATUS (EMPTY, FILLING, FULL), ACCESS (READONLY, etc.), LRD (YYYY-MM-DD HH:MM:SS.000000). (LRD is Last Reference Date.) MEDIA1 A less-used designation for 3490 base cartridge technology. See CST. MEDIA2 A less-used designation for 3490E cartridge technology. See ECCST. MEDIA3 A less-used designation for 3590 cartridge technology. mediaStorehouse 199901 product from Manage Data Inc. which functions as an ADSM proxy client to service backup and restore of network-client data via CORBA wherever the user currently happens to be (based upon userid). www.managedata.com Media Wait (MediaW) "Sess State" value in 'Query SEssion' for when a sequential volume (tape) is to be mounted to serve the needs of that session with a client and the session awaits completion of that mount. This could mean waiting either for a mount point or a volume in use by another session or process. Another cause is the tape library being unavailable, as in a 3494 in Pause mode. Recorded in the 24th field of the accounting record, and the "Pct. Media Wait Last Session" field of the 'Query Node Format=Detailed' server command. See also: Communications Wait; Idle Wait; SendW; Run; Start Medium changer, list contents Unix: 'tapeutil -f /dev/____ inventory' Windows: 'ntutil -t tape_ inventory' See: ntutil; tapeutil Medium Mover (SCSI commands) 3590 tape drive: Allows the host to control the movement of tape cartridges from cell to cell within the ACF magazine, treating it like a mini library of volumes. Megabyte See: MB Memory limits See: Unix Limits Memory-mapped I/O You mean Shared Memory (q.v.) MEMORYEFficientbackup ADSMv3+ Client User Options file (dsm.opt) option which specifies a more memory-conserving algorithm for processing incremental backups, backing up one directory at a time, and using less memory. This obviously occurs at (great) expense of backup performance. Choices: No Your client node uses the faster, more memory-intensive method when it processes incremental backups. Yes Your client node uses the method that uses less memory when processing incremental backups - BUT WITH A BIG PERFORMANCE PENALTY. Note: This option can also be defined on the server. Msgs: ANS1030E See also: LARGECOMmbuffers Message explanation You can do 'help MsgNumber' to get info about a message. For example: with message ANR8776W, you can simply do 'help 8776'. Message filesets (TSM AIX server) tivoli.tsm.msg.en_US.devices tivoli.tsm.msg.en_US.server tivoli.tsm.msg.en_US.webhelp Message interval "MSGINTerval" definition in the server options file. MessageFormat Definition in the server options file. Specifies whether message headers appear on all lines of a multi-line message. Possible option numbers: 1 - Only the first line of a multi-line message contains the header. 2 - All lines of a multi-line message contain headers. Default: 1 Ref: Installing the Server... MessageFormat server option, query 'Query OPTion' Messages, suppress Use the Client System Options file (dsm.sys) option "Quiet". See also: VERBOSE MGMTCLASSES SQL Table for Management Classes. Columns: DOMAIN_NAME, SET_NAME, CLASS_NAME, DEFAULT, DESCRIPTION, SPACEMGTECHNIQUE, AUTOMIGNONUSE, MIGREQUIRESBKUP, MIGDESTINATION, CHG_TIME, CHG_ADMIN, PROFILE MGSYSLAN Managed System for LAN license. MIC Memory-in-Cassette: Sony's non-volatile memory chip in their AIT cartridge.
See: AIT; MAM Microcode, acquire Call 1-800-IBM-SERV and request the latest microcode for your device. Microcode, install Can use tapeutil or ntutil (Tape Drive Service Aids): select "Microcode Load"... - position to equivalent /dev/rmtx and hit Enter; - at "Enter Filename" enter the filename of your new firmware; - press F7 - download of firmware to the drive begins; successful download will be displayed (message "Operation completed successfully!") - press F10 and enter q to exit tapeutil/ntutil. Microcode in tape drive Run /usr/lpp/adsmserv/bin/mttest... select 1: manual test select 1: set device special file e.g.: /dev/rmt0 select 20: open select 46: device information or select 37: inquiry MICROSECONDS See: DAYS Microsoft Cluster Server Environment See IBM article swg21109932 scheduled backups, verify Microsoft Exchange See: Exchange; TDP for Exchange MIGContinue ADSMv3 Stgpool keyword to specify whether ADSM is allowed to migrate files that have not exceeded the MIGDelay value. Default: Yes. Because of the MIGDelay parameter, it is now possible for ADSM to complete a migration process and not meet the low migration threshold. This can occur if the MIGDelay parameter value prevents *SM from migrating enough files to satisfy the low migration threshold. The MIGContinue parameter allows system administrators to specify whether ADSM is allowed to migrate additional files. Exploitation note: This setting allows a very nice archival scheme to be implemented. Say you run a time sharing system, and when users leave you archive their home directories as a tar file in a storage pool. But you only want to keep the most recent year's worth of data there, and want anything older to be written to separate tapes that can be ejected from the tape library when they fill. You can set MIGDelay=365 and MIGContinue=No. This will keep recent files in the "current" storage pool and, when you drop the HIghmig value to cause migration to the "oldies" storage pool below it, files more than a year old will go there. Neat. See also: MIGDelay; Migration MIGDelay ADSMv3+ Stgpool keyword to specify the minimum number of days that a file must remain in a storage pool before the file becomes eligible for migration from the storage pool. The number of days is counted from the day that the file was stored in the storage pool or retrieved by a client, whichever is more recent. (The NORETRIEVEDATE server option prevents retrieval date recording.) This parameter is optional. Allowable values: 0 to 9999. Default: 0, which means migration is not delayed, which causes migration to be determined purely in terms of occupancy level. See also: MIGContinue; NORETRIEVEDATE MIGFILEEXPiration Client System Options file (dsm.sys) HSM option to specify the number of days that copies of migrated/premigrated files are kept on the server after they are modified on or deleted from the client file system. That is, the no-longer-viable migrated copy of the file in the HSM server is removed while the original remains intact on the client and a new, migrated copy of a modified file may now be present on the ADSM server. Note that the expiration clock starts ticking after reconciliation is run on the file system; and that HSM takes care of its own expiration, rather than it being done in EXPIre Inventory. Default: 7 (days) MIGPRocess Operand of 'DEFine STGpool' and 'UPDate STGpool' to specify the number of processes to be used for migrating files from the (disk) storage pool to a lower storage pool in the hierarchy of storage pools. 
(You cannot specify this operand on sequential (tape) storage pools, in that tape is traditionally a final destination.) Default: 1 process. Note that it pertains to migrating from a disk storage pool down to tape: you cannot specify migration *from* tape. Migration occurs with one process per node, moving *all* of the data for one node before going on to the data for another node. The order of nodes processed is per largest amount of data in the disk storage pool. See APAR IX77884. This means that if only one node session is active, you will get just one migration process, regardless of the MIGPRocess value. %Migr (ADSMv2 server) See: Pct Migr Migrate files (HSM) 'dsmmigrate Filename(s)' migrate-on-close recall mode A mode that causes HSM to recall a migrated file back to its originating file system only temporarily. If the file is not modified, HSM returns the file to a migrated state when it is closed. However, if the file is modified, it becomes a resident file. You can set the recall mode for a migrated file to migrate-on-close by using the dsmattr command, or set the recall mode for a specific execution of a command or series of commands to migrate-on-close by using the dsmmode command. Contrast with normal recall mode and read-without-recall recall mode. Migrated file A file that has been copied from a local file system to ADSM storage and replaced with a stub file on the local file system. Contrast with resident file and premigrated file. See also: Leader data; Stub file Migrated file, accessibility 'dsmmode -dataACCess=n' (normal) makes migrated files appear resident, and allow them to be retrieved. 'dsmmode -dataACCess=z' makes migrated files appear to be zero-length, and prevents them from being retrieved. Migrated file, display its recall 'dsmattr Filename' mode Migrated file, set its recall mode 'dsmattr -recallmode=n|m|r Filename' (HSM) where recall mode is one of: - n, for Normal - m, for migrate-on-close - r, for read-without-recall Migrated files, HSM, list from client 'dsmls' 'dsmmigquery -SORTEDMigrated' (this takes some time) Migrated files, HSM, list from server 'Query CONtent VolName ... Type=SPacemanaged' Migrated files, HSM, count In dsmreconcile log. MIgrateserver HSM: Client System Options file (dsm.sys) option to specify the name of the ADSM server to be used for HSM services (file migration - space management). Code at the head of the dsm.sys file, not in the server stanzas. Cannot be overridden in dsm.opt or via command line. Using -SErvername on the command line does not cause MIgrateserver to use that server. Default: server named on DEFAULTServer option. Migration A concept which occurs in several places in ADSM: Storage pools: Refers to migrating files from one level to a lower level in a storage pool hierarchy when the Pct Migr value (Query STGpool report) reaches the specified threshhold percentage (HIghmig), mitigated by other control values such as MIGDelay and NORETRIEVEDATE. Occurs with one process per node (regardless of the MIGPRocess value), moving *all* of the data for one node before going on to the data for another node - or before again checking the LOwmig value. The order of nodes processed is per largest amount of data in the disk storage pool. Priority: Will wait for a Move Data process to complete, and then take a tape drive before any additional waiting Move Data processes start. By using the ADSMv3 Virtual Volumes capability, the output may be stored on another ADSM server (electronic vaulting). 
HSM: The process of copying a file from a local file system to ADSM storage and replacing the file with a stub file on the local file system. See also: threshold migration; demand migration; selective migration See: DEFine STGpool; HIghmig; LOwmig; MIGDelay, NORETRIEVEDATE Migration, Auto, manually perform for HSM: 'dsmautomig [FSname]' file system Migration, prevent at start-up To prevent migration from occurring during a problematic TSM server restart, add the following (undocumented) option to the server options file: NOMIGRRECL Migration, storage pool files General ADSM concept of migrating a storage pool's files down to the next storage pool in a hierarchy when a given pool exceeds its high threshold value. Migration, storage pool files, query 'Query STGpool [STGpoolName]' Migration, storage pool files, set The high migration threshold is specified via the "HIghmig=N" operand of 'DEFine STGpool' and 'UPDate STGpool'. The low migration threshold is specified via the "LOwmig=N" operand. Note that LOwmig is effectively overridden to 0 when CAChe=Yes is in effect for the storage pool, because ADSM wants to cache everything once migration is triggered. Migration and reclamation As a TSM server pool receives data, the server checks to see if migration is needed. This migration causes cascading checks as the next stgpool in the hierarchy receives data. When the bottom of the storage pool hierarchy is reached, the migration checking thread will initiate reclamation checking against this lowest level stgpool if it is a sequential stgpool. If there are multiple sequential storage pools within the storage pool hierarchy, reclamation processing will start on the lowest hierarchy position and proceed to the next level storage pool in the hierarchy. Migration candidate considerations Too small? A file will not be a (HSM) candidate for migration if its size is smaller than the stub file size (as revealed in 'dsmmigfs query'). Management class proper? As installed, HDM will not migrate files unless they have been backed up. 'dsmmigquery FSname' Migration candidates, list (HSM) 'dsmmigquery FSname' Migration candidates list (HSM) A prioritized list of files that are eligible for automatic migration at the time the list is built. Files are prioritized for migration based on the number of days since they were last accessed (atime), their size, and the age and size factors specified for a file system. Note that time of last access is a measure of demand for the file, so is used as a basis rather than modification time. Can be rebuilt by the client root user command: 'dsmreconcile [-Candidatelist] [-Fileinfo]' See: candidates Migration in progress? 'Query STGpool ____ Format=Detailed' "Migration in Progress?" value. Migration not happening That is, migration from a higher level storage pool to a lower one in a storage pool hierarchy is not happening. - The presence of server option NOMIGRRECL will prevent it. Migration not happening (HSM problem) See: HSM migration not happening Migration performance The migration of data from one storage pool to a lower one - particularly to tape - is limited by: - Your collocation specification, which can cause many tapes to be mounted as files are "delivered" to their appropriate places in the next storage pool. - The *SM database is in the middle of the action, so its cache hit ratio performance is important with many small files. 
- Long mount retention periods can prolong processing in having to wait for an idle tape to be dismounted before the next one can be mounted. - The MOVEBatchsize and MOVESizethresh server option values will govern how much data moves in each server transaction. - The performance of your tape technology is also a factor. - In moving from disk to tape, realize that the conflicting characteristics of the two media can hamper performance... Disk is a bit-serial medium which has to perform seeks to get to data. Tape is a byte-parallel medium which is always ready to write when in streaming mode, where its transfer rate is typically much faster than disk. If the tape has to wait for the disk to provide data, the tape drive is forced into start/stop mode, which particularly worsens throughput in some tape technologies. - With caching in effect, there will be more disk seek time to step over older cached files in migrating new files, while the receiving tape drive waits. See: MOVEBatchsize, MOVESizethresh Migration Priority A number assigned to a file in the Migration Candidates list (candidates file), computed by: - multiplying the number of days since the file was last accessed by the age factor; - multiplying the size of the file in 1-KB blocks by the size factor; - adding those two products to produce the priority score (Migration Priority). This ends up in the first field of the candidates file line. See: candidates Migration processes, number of Code on "MIGPRocess=N" keyword of 'DEFine STGpool' and 'UPDate STGpool'. Default: 1. See: MIGPRocess Migration storage pool (HSM) Specified via 'DEFine MGmtclass MIGDESTination=StgPl' or 'UPDate MGmtclass MIGDESTination=StgPl'. Default destination: SPACEMGPOOL. Migration vs. Backup, priorities Backups have priority over migration. MIGREQUIRESBkup (HSM) Mgmtclass parameter specifying that a backup version of a file must exist before the file can be migrated. Default: Yes Query: 'Query MGmtclass' and look for "Backup Required Before Migration". See also: Backup Required Before Migration; RESToremigstate MIM (3590) Media Information Message. Sent to the host system. AIX: appears in Error Log. Severity 1 indicates high temporary read/write errors were detected (moderate severity). Severity 2 indicates permanent read/write errors were detected (serious severity). Severity 3 indicates tape directory errors were detected (acute severity). Ref: "3590 Operator Guide" manual (GA32-0330-06) esp. Appendix B "Statistical Analysis and Reporting System User Guide" See also: SARS; SIM MIN SQL statement to yield the smallest number from all the rows of a given numeric column. See also: AVG; COUNT; MAX; SUM MINRecalldaemons Client System Options file (dsm.sys) option to specify the minimum number of dsmrecalld daemons which may run at one time to service HSM recall requests. Default: 3 See also: MAXRecalldaemons MINUTE(timestamp) SQL function to return the minutes value from a timestamp. See also: HOUR(), SECOND() MINUTES See: DAYS Mirror database Define a volume copy via: 'DEFine DBCopy Db_VolName Copy_VolName' MIRRORRead DB server option, query 'Query OPTion' MIRRORRead LOG|DB Normal|Verify Definition in the server options file. Specifies the mode used for reading recovery log pages or database pages.
Possibilities: Normal: read one mirrored volume to obtain the desired page; Verify: read all mirror volumes for a page every time a recovery log or database page is read, and if an invalid page is encountered, to resync with valid page from other volume (decreases performance but assures readability). This should be in effect when a (standalone) dsmserv auditdb is run. Default: Normal Ref: Installing the Server... MIRRORRead LOG server option, query 'Query OPTion' MIRRORWrite DB server option, query 'Query OPTion' MIRRORWrite LOG|DB Sequential|Parallel Definition in the server options file. Specifies how mirrored volumes are accessed when the server writes pages to the recovery log or data base log during normal processing. "Sequential" is "conditional mirroring" such that data won't be written to a mirror copy until successfully written to the primary. Default: Sequential for DB; Parallel for LOG Comments: *SM Sequential mirroring *is* better than RAID because of the danger of partial page writes - which *do* occur in the real world as hardware and human defects evidence themselves. RAID will perform the partial writing in parallel, thus resulting in a corrupted database if the writing is interrupted, whereas *SM Sequential mirroring will leave you with a recoverable database - by simple resync, not "recovery". That is, RAID is just as problematic as *SM Parallel mirroring. Mirroring of the *SM database is much debated. You could let the hardware or operating system perform mirroring instead, but you lose the advantaged of the *SM application mirroring - which also include being able to put the mirrors on any arbitrary volume, not in a single Volume Group as AIX insists. Ref: Installing the Server... MIRRORWrite LOG server option, query 'Query OPTion' Missed Status in Query EVent output indicating that the scheduled startup window for the event has passed and the schedule did not begin. When you have SCHEDMODe PRompted and have a client schedule set up for the node, then it is missed if the server couldn't contact the client within the time window. The dsmsched.log will typically show "Scheduler has been stopped." One mundane cause of Missed is that the client scheduler process already has a (long-running) session underway, as in the case of a backup which runs much longer than expected because of a lot of new data in the file system, which runs well past the start time for the next session. See also: Failed; Schedule, missed Mobile Backups See: Adaptive differencing; SUBFILE* MODE A TSM server Copy Group attribute that specifies whether a backup should be performed for an object that was not modified since the last time it was backed up. (MODE=MODified|ABSolute) Specifying a Management Class with MODE=ABSolute is a technique for performing a full backup of a file system. See also: ABSolute; MODified MODE (-MODE) Client option used in conjunction with Backup Image to specify the type of file system style backup that should be used to supplement the last image backup. Choices: Selective The default. Causes the usual image backup to be performed, to distinguish from the Incremental choice. (The name of this choice is unfortunate in that it invites confusion with the standard TSM Selective backup, which this choice has nothing to do with. The name of this choice should have been "Image". Incremental Only back up files whose modification timestamp is later than that of the last image backup. 
This is accomplished via an -INCRBYDate backup, whose nature means that deleted files cannot be detected and head toward expiration on the server, and nor can files whose attributes have changed be detected for backup. If there was no prior image backup, this Incremental choice will be ignored as an erroneous specification, and a full image backup will be performed, as if Selective had instead been the choice. See also: dsmc Backup Image MODified A backup Copy Group attribute that indicates that an object is considered for backup only if it has been changed since the last backup. An object is considered changed if the date, size, owner, or permissions have changed. (Note that the file will be physically backed up again only if TSM deems the content of the file to have been changed: if only the attributes (e.g., Unix permissions) have been changed, then TSM will simply update the attributes of the object on the server.) See also: MODE Contrast with: ABSolute See also: SERialization (another Copy Group parameter) Monitoring products See: TSM monitoring products MONTHS See: DAYS Mount in progress Server command: 'SHow ASM' Mount limit See: MOUNTLimit Mount message See: TAPEPrompt Mount point, keep over whole session? The 'REGister Node' operand KEEPMP controls this. Mount point queue Server command: 'SHow ASQ' Mount point wait queue IBM internal term for how ADSM prioritizes server tasks needing tapes. MOVe Datas have a higher priority than some other tasks. Mount points Defined globally in DEVclass MOUNTLimit Restricted thereunder via REGister Node parameters KEEPMP and MAXNUMMP, governing the number of mount points available for other sessions. See: KEEPMP; MAXNUMMP; MOUNTLimit Mount points, maximum See: MOUNTLimit Mount points, report active 'SHow MP' Mount request timeout message ANR8426E on a CHECKIn LIBVolume. Mount requests, pending 'Query REQuest' (q.v.). Via Unix command: 'mtlib -l /dev/lmcp0 -qS' Mount requests, service console See: -MOUNTmode Mount Retention Output field in report from 'Query DEVclass Format=Detailed'. Value is defined via MOUNTRetention operand of 'DEFine DEVclass' command. See also: KEEPMP; MAXNUMMP; MOUNTLimit; MOUNTRetention Mount retention period, change See: MOUNTRetention Mount tape Via Unix command: 'mtlib -l /dev/lmcp0 -m -f /dev/rmt? -V VolName' # Absolute drivenm 'mtlib -l /dev/lmcp0 -m -x Rel_Drive# -V VolName' # Relative drive# (but note that the relative drive method is unreliable). Note that there is no ADSM command to explicitly mount a tape: mounts are implicit by need. Once mounted, it takes 20 seconds for the tape to settle and become ready for processing. See also: Dismount tape Mount tape, time required For a 3590 tape drive: If a drive is free, it takes a nominal 32 seconds for the 3494 robot to move to the storage cell containing the tape, carry the tape to the drive, load the tape, and have it wind within the drive. Wind-on time itself is about 20 seconds. Note that if you have two tape drives and your mount request is behind another which is just starting to be processed, you should expect your mount to take twice as long, or about 64 seconds. To rewind, dismount, mount a new tape in that drive, and position it can take 120 seconds. If a mount is taking an usually long time, it could mean that the library has a cleaning tape mounted, cleaning the drive. Or the tape could be defective, giving the drive a hard time as it tries to mount the tape. MOuntable DRM media state for volumes containing valid data and available for onsite processing. 
See also: COUrier; COURIERRetrieve; NOTMOuntable; VAult; VAULTRetrieve MOUNTABLEInlib State for a volume that had been processed by the MOVe MEDia command: the volume contains valid data, is mountable, and is in the library. See also: MOVe DRMedia MOUNTABLENotinlib State for a volume that had been processed by the MOVe MEDia command: the volume may contain valid data, is mountable, but is not in the library (is in its external, overflow location). See msg ANR1425W. See also: MOVe DRMedia Mounted, is a tape mounted in a drive? The 3494 Database "Device" column will show a drive number if the tape is mounted, and a Cell number of "_ K 6", where '_' is the wall number. If the Cell number says "Gripper", the tape is in the process of being mounted. Mounted volumes Server command: 'SHow ASM' MOUNTLimit (mount limit) Operand in 'DEFine DEVclass', to specify the maximum number of concurrent mounts. Affects BAckup STGpool, etc. It should be set no higher than the number of physical drives you have available. In ADSMv3+, you can specify "MOUNTLimit=DRIVES", and ADSM will then dynamically adjust the MOUNTLimit. Default: 1. -MOUNTmode Command-line option for *SM administrative client commands ('dsmadmc', etc.) to have all mount messages displayed at that terminal. No administrative commands are accepted. See also: -CONsolemode; dsmadmc Ref: Administrator's Reference MOUNTRetention Devclass operand, to specify how long, in minutes (0-9999), to retain an idle sequential access volume before dismounting it. Default: 60 (minutes). The value should be long enough to allow for re-use of same mounted tape within a reasonable time, but not so long that the tape could end up trapped in the drive upon an operating system shutdown which does not give *SM the opportunity to dismount it. (Always shut *SM down cleanly if possible.) Another reason to keep mount retention fairly short is that having a tape left in a drive only delays a mount for a new request, in that the stale tape must be dismounted first: this is a big consideration in restorals, particularly of a large quantity of data as for a whole file system, in which case it would be worth minimizing the MOUNTRetention when such a job runs. Also, the drive mechanism stays on while tape is mounted, so adds wear. Keep mount retention short when collocation is employed, to prevent waiting for dismounts, given the elevated number of mounts involved. But keep the retention value sufficient to cover client think time during file system backups. Msgs: ANR8325I for dismount when MOUNTRetention expires. See also: KEEPMP; MAXNUMMP; MOUNTLimit MOUNTRetention, query 'Query DEVclass Format=Detailed' and look for "Mount Retention" value. Mounts, current 'SHow MP'. Or Via Unix command: 'mtlib -l /dev/lmcp0 -qS' for the number of mounted drives; 'mtlib -l /dev/lmcp0 -vqM' for details on mounted drives. Mounts, maximum See: MOUNTLimit Mounts, monitor Start an "administrative client session" to control and monitor the server from a remote workstation, via the command: 'dsmadmc -MOUNTmode'. If having a human operator perform mounts, consider setting up a "mounts operator" admin ID and a shell script which would invoke something to the effect of: 'dsmadmc -ID=mountop -MOUNTmode -OUTfile=/var/log/ADSM-mounts.YYYYMMDD' and thus log all mounts. Ref: Administrator's Reference Mounts, pending Via ADSM: 'Query REQuest' (q.v.). 
Via Unix command: 'mtlib -l /dev/lmcp0 -qS' Mounts, historical SELECT * FROM SUMMARY WHERE ACTIVITY='TAPE MOUNT' Mounts count, by drive See: 3590 tape mounts, by drive MOUNTWait DEVclass and CHECKIn LIBVolume command operand specifying the number of minutes to wait for a tape to mount, on an allocated drive. Note that this pertains only to the time taken for a tape to be mounted by tape robot or operator once a tape mount request has been issued, and has been honored by the library. Example: a task requires a tape volume which is not in the library. It does not pertain to a wait for a tape *drive* when for example one incremental backup is taking up all tape drives and another incremental backup comes along needing a tape drive. Default: 60 min. Advice: The MOUNTWait value should be larger than the MOUNTRetention to assure that idle volumes have a chance to dismount and free drives before the MOUNTWait time expires. MOVe Data Server command to move a volume's viable data to volume(s) within the same sequential access volume storage pool (default) or a specified sequential access volume storage pool. (MOVe Data cannot be used on DISK devtype (Random Access) storage pools.) The source storage pool may be a disk pool, with the target being the defined NEXTstgpool, whereby MOVe Data essentially will accomplish what migration does, but physically rather than logically. Copy storage pool volume contents can only be moved to other volumes in the same copy storage pool: you cannot move copy storage pool data across copy storage pools. MOVe Data can effectively reclaim a tape by compacting the data onto another volume. Syntax: 'MOVe Data VolName [STGpool=PoolName] [RECONStruct=No|Yes] [Wait=No|Yes]' RECONStruct is new with TSM 5.1, and allows the vacated space within aggregates to be reclaimed, thus allowing Move Data to be the equivalent of Reclamation. The reconstruction does incur more time. And, again, this can be done only on sequential access storage pools. The "from" volume gets mounted R/O. By default, data is moved by copying Aggregates as-is: unlike Reclamation, MOVe Data does not reclaim space where logical files expired and were logically deleted from *within* an Aggregate. (Per 1998 APAR IX82232: RECONSTRUCTION DOES NOT OCCUR DURING MOVE DATA: "MOVe Data by design does not perform reconstruction of aggregates with empty space. Although this was discussed during design, it was decided to only perform reconstruction during reclamation. A major reason for this decision was performance as reconstruction of aggregates requires additional overhead that MOVe Data does not; hence requires additional time to complete.") Like Reclamation, MOVe Data brings together all the pieces of each filespace, which means it has to skip down the tape to get to each piece. (The portion of a filespace that is on a volume is called a Cluster.) In addition, if the target storage pool is collocated, each cluster may ask for a new output tape, and TSM isn't smart enough to find all the clusters that are bound for a particular output tape and reclaim them together. Instead it is driven by the order of filespaces on the input tape, so the same output tape may be mounted many times. In doing a MOVe Data, *SM attempts to fill volumes, so it will select the most full available volume in the storage pool. Note that the data on the volume will be inaccessible to users until the operation completes. During the move, the 'Query PRocess' "Moved Bytes" reflects the data in uncompressed form.
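A typical invocation - volume name hypothetical - to consolidate a sparsely-filled tape within its own storage pool and reconstruct its aggregates (TSM 5.1+): 'MOVe Data D00123 RECONStruct=Yes Wait=Yes'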
Ends with message ANR1141I (which fails to report byte count). May be preempted by higher priority operation - see message ANR1143W - but may not preempt the lower priority reclamation process (msg ANR2420E). (Move Data has a higher priority on what IBM internally refers to as the Mount point wait queue.) See also: AUDit Volume; NOPREEMPT; Pct Util; Reclamation Move Data, find required volumes Move Data would obviously involve the subject volume itself, and any volumes containing files that spanned into (the front of) or out of (the back of) the volume. This would be identifiable by the Segment number in Query CONtent _volname_, or the corresponding Select, being other than 1/1. For spanning files, you would then have to perform a Content table search on the related segment. (A tape in Filling status would obviously have no span-out-of segment on another volume.) Move Data, offsite volumes When (copy storage pool) volumes are marked "ACCess=OFfsite", TSM knows not to use those volumes, to instead use onsite copy storage pool volumes containing the same data (from the same primary storage pool). Naturally, the files on one offsite volume may be found on any number of onsite volumes, so multiple mounts may be expected, accompanied by a bunch of TSM "think time" between volumes. See also: ANR1173E MOVe Data and caching disk volumes Doing a Move Data on a cached disk pool volume has the effect of clearing the cache. This is obvious, when you think about it, as the cache represents data that is already in the lower storage pool in the hierarchy...that data has been "pre-moved". MOVe Data performance Move Data operations can be expected to involve considerable repositioning as the source tape is processed, to skip over full-expired Aggregates. Whether your tape technology is good at start-stop operations will affect your throughput. See also: BUFPoolsize; MOVEBatchsize; MOVESizethresh MOVe DRMedia DRM server command to move disaster recovery media offsite and back onsite. Will eject the volumes out of the library before transitioning the volumes to the destination state. Syntax: 'MOVe DRMedia VolName [WHERESTate=MOuntable| NOTMOuntable|COUrier| VAULTRetrieve|COURIERRetrieve] [BEGINDate=date] [ENDDate=date] [BEGINTime=time] [ENDTime=time] [COPYstgpool=StgpoolName] [DBBackup=Yes|No] [REMove=Yes|No|Bulk] [TOSTate=NOTMOuntable| COUrier|VAult|COURIERRetrieve| ONSITERetrieve] [WHERELOcation=location] [TOLOcation=location] [CMd=________] [CMDFilename=file_name] [APPend=No|Yes] [Wait=No|Yes]' Do not do a MOVe DRMedia where a MOVe MEDia is called for. REMove=BUlk is not supposed to result in a Reply required on SCSI libraries, but may: the workaround is Wait=Yes. MOVe MEDia ADSMv3 command to deal with a full library by moving storage pool volumes to an external "overflow" location, typically named on the OVFLOcation operand of Primary and Copy Storage Pools. (Think "poor man's DRM".) Unlike with Checkout, the volume remains requestable and ultimately mountable, via an outstanding mount request. (Note that, internally, MOVe MEDia actually performs a Checkout Libvolume, as indicated in its ANR6696I message.) 
Syntax: 'MOVe MEDia VolName STGpool=PoolName [Days=NdaysSinceLastUsage] [WHERESTate=MOUNTABLEInlib| MOUNTABLENotinlib] [WHERESTATUs=FULl,FILling,EMPty] [ACCess=READWrite|READOnly] [OVFLOcation=________] [REMove=Yes|No|Bulk] [CMd="command"] [CMDFilename=file_name] [APPend=No|Yes] [CHECKLabel=Yes|No]' By default, moving a volume out of the library causes it to be made ReadOnly, and moving it back into the library causes it to be made ReadWrite. If you are moving a volume back into a library (MOUNTABLENotinlib) and it is not empty, you must specify WHERESTATUs=FULl for the command to work, else get ANR6691E error. OVFLOcation can be used to override that specification had by the storage pool. Do not do a MOVe MEDia where a MOVe DRMedia is called for. This command moves whole volumes, not the data within them. Note that a MOVe MEDia will hang if a LABEl LIBVolume is running. After doing MOVe MEDia to move the volume back into the library: - The volume will be READWrite, rather than the READOnly that is conventional for a moved-out volume; - Query MEDia no longer shows the volume (Query Volume does), until CHECKIn is done; - You must do a CHECKIn LIBVolume to get the volume back into play. What happens when there are more than 10 tapes to go to the 3494 Convenience I/O Station? TSM moves one at a time, then an Intervention Required shows up ("The convenience I/O station is full"): when you empty the I/O station, the Int Req goes away, and TSM resumes ejecting tapes. No indication of the condition shows up in the Activity Log. Watch out for ANR8824E message condition where the request to the library is lost: the volume will probably have actually been ejected from the library, but the MOVe MEDia updating of its status to MOUNTABLENotinlib would not have occurred, leaving it in an in-between state. Msgs: ANR8762I; ANR2017I; ANR0984I; ANR0609I; ANR0610I; ANR6696I; ANR8766I; ANR6683I; ANR6682I; ANR0611I; ANR0987I (completion) See also: Overflow Storage Pool; OVFLOcation; Query REQuest Ref: Admin Guide, "Managing a Full Library" MOVe NODEdata TSM 5.1+ server command to move data for all filespaces for one or more nodes. As with the 'MOVe Data' command, when the source storage pool is a primary pool, you can move data to other volumes within the same pool or to another primary pool; but when the source storage pool is a copy pool, data can only be moved to other volumes within that copy pool (so the TOstgpool parameter is not usable). This command can operate upon data in a storage pool whose data format is NATIVE or NONBLOCK. As of 2003/11 the Reference Manual fails to advise what the Tech Guide does: that the Access mode of the volumes must be READWRITE or READONLY, which precludes OFFSITE and any possibility of onsite volumes standing in for the offsite vols. Cautions: As of 2003/05, the command may report success though that was not the case, as in specifying a non-existant filespace. Ref: TSM 5.1 Technical Guide MOVEBatchsize Definition in the server options file. Specifies the maximum number of client files that can be grouped together in a batch within the same server transaction for storage pool backup/restore, migration, reclamation, or MOVe Data operations. Specify 1-1000 (files). Default: 40 (files). TSM: If the SELFTUNETXNsize server option is set to Yes, the server sets the MOVEBatchsize option to its maximum values to optimize server throughput. Beware: A high value can cause severe performance problems in some server architectures when doing 'BAckup DB'. 
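For illustration only - the values shown are arbitrary, not recommendations - the two options are coded in dsmserv.opt like:
   MOVEBatchsize   256
   MOVESizethresh  500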
MOVEBatchsize, query 'Query OPTion'; look for "MoveBatchSize". MOVESizethresh Definition in the server options file. Specifies a threshold, in megabytes, for the amount of data moved as a batch within the same server transaction for storage pool backup/restore, migration, reclamation, or MOVe Data operations. Specify 1-500 (MB) Default: 500 (megabytes). TSM: If the SELFTUNETXNsize server option is set to Yes, the server sets the MOVESizethresh option to its maximum values to optimize server throughput. MOVESizethresh and MOVEBatchsize Server data is moved in transaction units whose capacity is controlled by the MOVEBatchsize and MOVESizethresh server options. MOVEBatchsize specifies the number of files that are to be moved within the same server transaction, and MOVESizethresh specifies, in megabytes, the amount of data to be moved within the same server transaction. When either threshold is reached, a new transaction is started. MOVESizethresh, query 'Query OPTion'; seek "MoveSizeThresh". MP1 Metal Particle 1 tape oxide formulation type, as used in the 3590. Lifetime: According to Imation studies (http://www.thic.org/pdf/Oct00/ imation.jgoins.001003.pdf) "All Studies Conclude that Advanced Metal Particle (MP1) Magnetic Coatings Will Achieve a Projected Magnetic Life of 15-30 Years. Media will lose 5% - 10% of its magnetic moment after 15 years. Media resists chemical degradation even after direct exposure to extreme environments." MPTIMEOUT TSM4.1 server option for 3494 sharing. Specifies the maximum time in seconds the server will retry before failing the request. The minimum and maximum values allowed are 30 seconds and 9999 seconds. Default: 30 seconds See also: 3494SHARED; DRIVEACQUIRERETRY MSCS Microsoft Cluster Server. MSGINTerval Definition in the server options file. Specifies the number of minutes that the ADSM server waits before sending subsequent message to a tape operator requesting a tape mount, as identified by the MOUNTOP option. Default: 1 (minute) Ref: Installing the Server... MSGINTerval server option, query 'Query OPTion' MSI (.msi file suffix) Designates the Microsoft Software Installer. Note that such files are on the CD-ROM, not in the online download area (which has .exe, .TXT, and .FTP files). If you copy the files from the CD for alternate processing, be aware that Microsoft does not support running an MSI from a mapped network drive when you are connect to a server via remote desktop to terminal server. MSI (Microsoft Installer) return codes See item 21050782 on the IBM web site ("Microsoft Installer (MSI) Return Codes for Tivoli Storage Manager Client & Server"). msiexec command Invokes the Microsoft Software Installer as for example msiexec /i "Z:\tsm_images\TSM_BA_Client \IBM Tivoli Storage Manager Client.msi" to install from the CD-ROM or network drive containing the installation image. See: Windows client manual mt See: /dev/mt MT0, MT1 Tape drive identifiers on Windows 2000. Example: MT0.0.0.2 for a 3590E drive in a 3494 library. mt_._._._ Designation for a tape drive in a Windows configuration, using Fibre Channel, as in mt0.0.0.5, where the encoding means "magnetic tape device, Target ID 0, Lun 0, Bus 0, with the final digit being auto assigned by Windows based on the time of first detection. mtadata Exchange server: Message Transfer Agent data, as in \exchsrvr\mtadata mtevent Command provided with 3494 Tape Library Device Driver, being an interface to the MTIOCLEW function, to wait for library events and display them. Usage: mtevent -[ltv?] 
-l[filename] Library special filename, i.e. "/dev/lmcp0". -t[timeout] Wait for asynchronous library event, for the specified # of seconds. If omitted, the program will wait indefinitely. -? this help text. NOTE: The -l argument is required. mtlib Command provided with 3494 Tape Library Device Driver to manually interact with the Library Manager. For environments: AIX, SGI, Sun, HP-UX, Windows NT/2000. Do 'mtlib -\?' to get usage info - but beware that its output fails to show the legal combinations of options as the Device Drivers manual does. -L is used to specify the name of a file containing the volsers to be processed - and only with the -a and -C operands. This is handy for resetting Category Code values in a 3494 library, like: 'mtlib -l /dev/lmcp0 -C -L filename -t"012C"' -v (verbose) will identify each element of the output, which makes things clearer than the "quick" output which is produced in the absence of the -v option. Specify category codes as hex numbers. (Remember that this is a library physical command: it knows nothing about TSM or what is defined in your TSM system.) If the command fails because "the library is offline to the host", it indicates either that the host is not defined in the 3494's LAN Hosts allowance list, or that the host is not on the same subnet as the 3494 in the unusual case that the subnet is defined as Not Routed. A mount (-m) may take a considerable time and then yield: "Mount operation Error - Internal error" due to the tape being problematic, but the mount will probably work. Ref: "IBM SCSI Tape Drive, Medium Changer, and Library Device Drivers: Installation and User's Guide" (GC35-0154) mttest Undocumented command for performing ioctl operations and sets on a tape drive. /usr/lpp/adsmserv/bin/mttest. Syntax: 'mttest <-f batch-input-file> <-o batch-output-file> <-d special-file>' MTU Maximum Transmission Unit: the hardware buffer size of an Ethernet card, as revealed by 'netstat -i'. This is the maximum size of the frame/packet that can be transmitted by the adapter. (Larger packets need to be subdivided to be transmitted.) The standard Ethernet MTU size is 1500. Note that this maximum packet size is a constraining factor for processes which use ethernet. For example, a single process can max out a 10Mb ethernet card, but it can only drive a 100Mb card about 2.5x faster because the measly packet size is so constraining. To make full use of higher-speed ethernets, then, one must have multiple processes feeding them. (10Mb, 100Mb, and gigabit ethernet all use the same format and frame size.) See: TCPNodelay Multi-homed client See: TCPCLIENTAddress Multi-session Client (Multi session client) TSM 3.7 facility which multi-threads to start multiple sessions, in order to transfer data more quickly. This will work for the following program components: Backup-archive client (including Enterprise Management Agent, formerly Web client) Backup and Archive functions. This new functionality is completely transparent: there is no need to switch it on or off. The TSM client will decide if a performance improvement can be gained by starting an additional session to the server. This can result in as many as five sessions running at one time to read files and send them to the server. (So says the B/A client manual, under "Performing Backups Using a GUI", "Displaying Backup Processing Status".) Types of threads: - Compare: For generating the list of backup or archive candidate files, which is handed over to the Data Transfer thread.
There can be one or more simultaneous Compare threads. - Data Transfer: Interacts with the client file system to read or write files in the TSM operation, performs compression/decompression, handles data transfer with the server, and awaits commitment of data sent to the server. There can be one or more simultaneous Data Transfer threads. - Monitor: The multi-session governor. Decides if multiple sessions would be beneficial and initiates them. The number of sessions possible is governed by the RESOURceutilization client option setting and server option MAXSessions. Mitigating factors: Using collocation, only one data transfer session per file space will write to tape at one time: all other data transfer sessions for the file space will be in Media Wait state. Under TSM 3.7 Unix, with "PASSWORDAccess Generate" in effect, a non-root session is single-threaded because the TCA does not support multiple sessions. Multi-session Client is supported with any server version; but if the server is below 3.7, the limit is 2 sessions. Considerations: Multiple accounting records for multiple simultaneous sessions from one command invocation. Ref: TSM 3.7 Technical Guide, 6.1 See also: MAXNUMMP; MAXSessions; RESOURceutilization; TCA; Threads, client Multi-Session Restore TSM 5.1 facility which allows the backup-archive clients to perform multiple restore sessions for No Query Restore operations, increasing the speed of restores. (Both server and client must be at least 5.1.) This is similar to the multiple backup session feature. Elements: - RESOURceutilization parameter in dsm.sys - MAXNUMMP setting for the node definition in the server - MAXSessions parameter in dsmserv.opt The efficacy of MSR is obviously limited by the number of volumes which can be used in parallel. From an IBM System Journal article: "During a large-scale restore operation (e.g., entire file space or host), the TSM server notifies the client whether additional sessions may be started to restore data through parallel transfer. The notification is subject to configuration settings that can limit the number of mount points (e.g., tape drives) that are consumed by a client node, the number of mount points available in a particular storage pool, the number of volumes on which the client data are stored, and a parameter on the client that can be used to control the resource utilization for TSM operations. The server prepares for a large-scale restore operation by scanning database tables to retrieve information on the volumes that contain the client's data. Every distinct volume found represents an opportunity for a separate session to restore the data. The client automatically starts new sessions, subject to the aforementioned constraints, in an attempt to maximize throughput." Additional info: http://www.ibm.com/support/docview.wss?uid=swg21109935 See also: DISK; Storage pool, disk, performance Multi-threaded session See: Multi-session Client Multiple servers See: Servers, multiple Multiple sessions See: MAXNUMMP; Multi-session Client; RESOURceutilization Multiprocessor usage TSM uses all the processors available to it, in a multi-processor environment. One customer cited having a 12-processor system, and TSM used all of them. MVS Multiple Virtual Storage: IBM's mainframe operating system, descended from OS/MFT and OS/MVT (multiple fixed or variable number of tasks). Because the operating system was so tailored to a specific hardware platform, MVS was a software product produced by the IBM hardware division.
MVS evolved into OS/390, for the 390 hardware series. MVS server performance Turn accounting off and you will likely see a dramatic improvement in performance. In particular, boost the TAPEIOBUFS server option. See also: Server performance Named Pipe In general: A type of interprocess communication which allows message data streams to be passed between peer processes, such as between a client and a server. Windows: The name of the facility by which the TSM client and server processes can directly intercommunicate when they are co-resident in the same computer, to enhance performance by not going through data communications methods to transfer the data. The governing option is NAMedpipename. See also: Restore to tape, not disk NAMedpipename (-NAMedpipename=) Windows client option for direct communication between the TSM client and server processes when they are running on the same computer or across connected domains, thus avoiding the overhead of going through data communication methods (e.g., TCP/IP). This depends upon a file system object which the server and client will both reference in order to communicate - which can be a point of vulnerability, in contrast to traditional networking (ANS1865E). Syntax: NAMedpipename \\.\pipe\SomeName -NAMedpipename=\\.\pipe\SomeName Default: Originally: \pipe\dsmserv Later: \\.\pipe\Server1 See also: COMMMethod; NAMEDpipename; Shared Memory NAMEDpipename Windows server option for direct communication between the TSM server and client processes when they are running on the same computer or across connected domains, thus avoiding the overhead of going through data communication methods (e.g., TCP/IP). This depends upon a file system object which the server and client will both reference in order to communicate - which can be a point of vulnerability, in contrast to traditional networking (ANS1865E). And note that the involvement of Windows Domain itself can mean networking, which can obviate the advantage. Syntax: NAMEDpipename name Default: Originally: \pipe\dsmserv Later: \\.\pipe\Server1 See also: COMMMethod; NAMedpipename; Shared Memory Names for objects, coding rules Content: the following characters are legal in object names: A-Z 0-9 _ . - + & (It is best not to use the hyphen because ADSM uses it when continuing a name over multiple lines in a query, which would be visually confusing.) Length: varies per type of object. Ref: Admin Ref NAS See: Network Appliance See also IBM site Solution 1105834 NATIVE Refers to storage pool DATAFormat definition, where NATIVE is the default. TSM operations use storage pools defined with a NATIVE or NONBLOCK data format (which differs from NDMP). DATAFormat=NATive specifies that the data format is the native TSM server format and includes block headers. NATIVE is required: - To back up a primary storage pool; - To audit volumes; - To use CRCData. See also: NONBLOCK native file system A file system to which you have not added space management. NDMP Network Data Management Protocol: a cross-vendor standard for enterprise data backups, to tape devices. Its creation was led by Network Appliance and Legato Systems. The backup software orchestrates a network connection between an NDMP-equipped NAS appliance and an NDMP tape library or backup server. The appliance uses NDMP to stream its data to the backup device. The NDMP support in TSM works only with tape drives as the backup target, and there are no plans to extend NDMP support to disk. As of 2004/01, NDMP backs up at volume level only.
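As a rough setup sketch only (the names below are made up, and parameter details vary by TSM server level - see the Admin Guide's NDMP/NAS chapter for the authoritative syntax): register the filer as a node of Type=NAS, define a data mover of the same name plus a path from it to the drive(s), then back up by volume:
  'REGister Node mynas naspw Type=NAS DOmain=nasdomain'
  'DEFine DATAMover mynas TYpe=NAS HLAddress=mynas.example.com LLAddress=10000 USERid=root PASsword=naspw DATAFormat=NETAPPDUMP'
  'DEFine PATH mynas nasdrive1 SRCType=DATAMover DESTType=DRive LIBRary=naslib DEVIce=rst0l'
  'BAckup Node mynas /vol/vol1'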
Originally, only SCSI libraries were supported for NDMP operations. Support for ACSLS libraries was introduced in 5.1.1 and support for 349x libraries came in 5.1.5. To perform NDMP operations with TSM, tape drives must be accessible to the NAS device. This means that there must be a SCSI or FC connection between the filer and drive(s) and a path must be defined in TSM from the NAS data mover to the drive(s). Some or all of the drives can also be accessed by the TSM server, provided that there is physical connectivity and a path definition from the TSM server to those drives. This does not mean that data is funneled through the TSM server for NDMP operations. It simply allows sharing drives for NDMP and conventional TSM operations. In fact, if the library robotics is controlled directly by the TSM server (rather than through a NAS device), it is possible to share drives among NAS devices, library server, storage agents and library clients. Data flow for NDMP operations is always directly between the filer and the drive and never through the TSM server. The TSM server handles control and metadata, but not bulk data flow. The TSM server does not need to be on a SAN, but if you want to share drives between the TSM server and the NAS device, a SAN allows the necessary interconnectivity. See: dsmc Backup NAS; Network Appliance (NAS) backups Nearline storage A somewhat odd, ad hoc term to describe on-site, nearby storage pool data; as opposed to offsite versions of the data. NetApp Network Appliance, Inc. Long-time provider of network attached storage. Company was founded by guys who helped develop AFS. www.netapp.com NetTAPE NetTAPE provides facilities such as remote tape access, centralized operator interfaces, and tape drive and library sharing among applications and systems. As of late 1997, reportedly a shaky product. Ref: redbook 'AIX Tape Management' (SG24-4705-00) NETBIOS Network Basic Input/Output System. An operating system interface for application programs used on IBM personal computers that are attached to the IBM Token-Ring Network. NETBIOSBuffersize *SM server option. Specifies the size of the NetBIOS send and receive buffers. Allowed range: 1 - 32 (KB). Default: 32 (KB) NetbiosBufferSize server option, query 'Query OPTion' NetbiosSessions server option, query 'Query OPTion' NETTAPE IBM GA product that allows dynamic sharing of tape drives among many applications. NetWare Novell product. Has historically not had virtual memory, and so tends to be memory-constrained, which hinders *SM backups and restorals. See also: nwignorecomp NetWare backup recommendation Code "EXCLUDE sys:/.../*.qdr/.../*.*" to omit the queues on the SYS volume. NetWare Loadable Module (NLM) Novell NetWare software that provides extended server functionality. The ADSM client modules for the various NetWare platforms are examples of NLMs. Netware restore, won't restore, saying incoming files are "write protected" Reason unknown, but specifying option "-overwrite" has been seen to resolve it. Netware restore fails on long file name See: Long filenames in Netware restorals Netware restore performance - Make sure your ADSM client software is recent! (To take advantage of "No Query Restore" et al. But beware that No Query Restore is not used for NetWare Directory Services (NDS).) - Avoid client or Netware compression of incoming data (and no virus scanning of each incoming file).
- If you have a routed network environment, have this line in SYS:ETC\TCPIP.CFG : TcpMSSinternetlimit OFF - Use TXNBytelimit 25600 in the DSM.OPT file, and TXNGroupmax 256 in the ADSM server options file. - Set up a separate disk pool that does not migrate to tape, and use DIRMc to send directory backups to it. - Consider using separate management classes for directories, to facilitate parallel restorals. - Disable scheduled backups of that filespace during its restoral. - Try to minimize other work that the server has to do during the restoral (expirations, reclamations, etc.). - And the usual server data storage considerations (collocation, etc.). Data spread out over many tapes means many tape mounts and lots of time. - Consider tracing the client to see where the time is going: traceflags INSTR_CLIENT_DETAIL tracefile somefile.txt (See "CLIENT TRACING" section at bottom of this document.) - During the session, use ADSM server command 'Q SE' to gauge where time is going; or afterwards, review the ADSM accounting record idle wait, comm wait, and media wait times. Network Appliance (NAS) backups Lineage: Tivoli originally announced that TSM version 4.2 would provide backup and restore of NAS filers - 3Q 2001. The product was "TDP for NDMP" (5698-DPA), a specialized client that interfaces with the Network Data Management Protocol (NDMP). Full volume image backup/restore will be supported. File level support is announced for TSM version 5.1 - 1Q 2002. TDP for NDMP was then folded into TSM Enterprise Edition, which was withdrawn from marketing 2002/11/12, supplanted by TSM Extended Edition (5698-ISX). Note that options COMPRESSION and VALIDATEPROTOCOL are not valid for a node of Type=NAS. The name of the NAS node must be the same as the data mover. Netware timestamp peculiarities The Modified timestamp on a Netware file is attached to the file, and remains constant as it may move, for example, from a vendor development site to a customer site. The Created timestamp is when the file was planted in the customer file system. Thus, the Created timestamp may be later than the Modified timestamp. Network card selection on client See: TCPCLIENTAddress Network data transfer rate Statistic at end of Backup/Archive job, reflecting the raw speed of the network layer: just the time it took to transfer the data to the network protocol handler (expressed that way to emphasize that *SM does not know if the data has actually gone over the network). The data transfer rate is calculated by dividing the total number of bytes transferred by the data transfer time. The time it takes to process objects is not included in the network transfer rate. Therefore, the network transfer rate is higher than the aggregate transfer rate. Corresponds to the Data Verb time in an INSTR_CLIENT_DETAIL client trace. Contrast with Aggregate data transfer rate. Beware that if the Data transfer time is too small (as when sending a small amount of data) then the resulting Network Data Transfer Rate will be skewed, reporting a higher number than the theoretical maximum. This reflects the communications medium rapidly absorbing the initial data in its buffers, which it has yet to actually send. That is, ADSM handed off the data and considers it logically sent, having no idea as to whether it has been physically sent.
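A made-up illustration of the skew: if the client reports 10 MB sent with a Data transfer time of 0.8 seconds, the Network data transfer rate works out to 12.5 MB/sec - impossible on a 10 Mbps (roughly 1.25 MB/sec) link - simply because those first megabytes were absorbed into buffers rather than actually transmitted yet.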
This also explains why, at the beginning of a backup session, you see some number of files seemingly sent to the server before an ANS4118I message appears saying that a mount is necessary (for backup directly to tape), rather than appearing after the first file. Thus, to see meaningful transfer rate statistics you need to send a lot of data so as to counter the effect of the initial buffering. Ref: B/A Client manual glossary See also: Data transfer time; TCPNodelay Network performance Many network factors can affect performance: - Technology generation: Are you still limited to 10 Mbps or 100, when Gigabit Ethernet is available, with its faster basic speed and optional larger frame sizes? - Are you using an ethernet switch rather than a router to improve subnet performance (and security)? - Are your network buffer sizes adequate? In AIX, particularly do 'netstat -v' and see if the "No Receive Pool Buffer Errors" count is greater than zero: if so, boost the Receive Pool Buffer Size. (A value of 384 is no good: needs to be 2048.) Network Storage Manager (NSM) The IBM 3466 storage system which combines a tape robot and AIX system in one package, wholly maintained by IBM. The IBM Network Storage Manager (NSM) is an integrated data storage facility that provides backup, archive, space management, and disaster recovery of data stored in a network computing environment. NSM integrates ADSM server functions and AIX with an RS/6000 RISC rack mounted processor, Serial Storage Architecture (SSA) disk subsystems, tape library (choose a type) and drives, and network communications, into a single server system. Network transfer rate See: Network data transfer rate Network-Free Rapid Recovery Provides the ability to create a backup set which consolidates a client's files onto a set of media that is portable and may be directly readable by the client's system for fast, "LAN-free" (no network) restore operations. The portable backup set, synthesized from existing backups, is tracked and policy-managed by the TSM server, can be written to media such as ZIP, Jaz drives, and CD-ROM volumes, for use by Windows 2000, Windows NT, AIX, Sun Solaris, HP-UX, NetWare backup-archive client platforms. In addition, for the Windows 2000, Windows NT, AIX, Sun Solaris (32-bit) and HP-UX backup-archive clients, the backup sets can be copied to tape devices. TSM backup-archive clients can, independent of the TSM server, directly restore data from the backup set media using standard operating system device drivers. Ref: Redbook "Tivoli Storage Manager Version 3.7: Technical Guide" (SG24-5477), see CREATE BACKUPSET. http://www.tivoli.com/products/index/storage_mgr/storage_mgr_concepts.html Newbie Someone who is new to all this stuff. NEXTstgpool Parameter on 'DEFine STGpool' to define the next primary storage pool to use in a hierarchy of storage pools. (Copy storage pools are not eligible for hierarchical arrangement.) This can be used creatively to cause lower storage pools to be used as overflow areas rather than migration areas, by defining the HIghmig value to be 100 percent. This would be used in cases where storage pool filling has to keep up with incoming data, and could not if migration were used. NFS client backup prohibition You can establish a site policy that file systems should not be backed up from NFS clients (they will be done from the NFS server).
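As a sketch of client-side compliance (the mount point is made up; option placement varies by platform): the Unix client's default domain of 'DOMain ALL-LOCAL' already leaves NFS mounts out of incremental backups, and a mount can be explicitly excluded in the include-exclude list, as in: EXCLUDE.FS /data/nfsmount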
Violators can be detected in an ADSM server 'Query Filespace' command (Filespace Type), whereupon you could delete the filespace outright or rename it for X days before deleting it, with warning mail to the perpetrator, and a final 'Lock Node' if no compliance. NFSTIMEout Client system options file (dsm.sys) or command line option to deal with error "ANS4010E Error processing '': stale NFS handle". Specifies the amount of time in seconds the server waits for an NFS system call response before it times out. If you do not have any NFS-mounted filesystems, or you do not want this time-out option, remove or rename the dsmstat file in the ADSM program directory. Syntax: "NFSTIMEout TimeoutSeconds". Note: This option can also be defined on the server. NIC selection on client See: TCPCLIENTAddress NLB Microsoft Network Load Balancing. NLS National Language Support, standard in ADSMv3. The message repository is now called dsmserv.cat, which on AIX is found in /usr/lib/nls/msg/en_US (for the English version, other languages are found in their respective directories). The dsmameng.txt file still exists in the ADSM server working directory and is used if the dsmserv.cat file is not found. See also: Language No Query Restore ADSMv3+: Facility to speed restorals by eliminating the preliminary step of the server having to send the client a voluminous list of files matching its restoral specs, for the client to traverse the list and then sort it for server retrieval efficiency ("restore order"). That is, in a No Query Restore the client knows specifically what it needs and can simply ask the server for it, so there is no need for the server to first send the client a list of everything available. Both client and server have to be at Version 3+ in order to use No Query Restore. It is used automatically for all restores unless one or more of the following options are used: INActive, Pick, FROMDate, FROMTime, LAtest, TODate, TOTime. Also, No Query Restore is not used for NetWare Directory Services (NDS). Note that NQR has nothing to do with minimizing tape mounts for restore: for a given restore, TSM mounts each needed tape once and only once, retrieving files as needed in a single pass from the beginning of the tape to the end. A big consideration in NQR is that the client specification may be so general that the server ends up sending the client far more files than it needs. IBM used the term "No Query Restore" in their v3 announcements, but did not use it in their v3.1 manuals: usage was implied. Later manuals reinstated No Query Restore as a specific action, and documented it. IBM now refers to the v2 method of restoral as "Classic Restore". The most visible benefit of no query restore is that data starts coming back from the server sooner than it does with "classic" restore. With classic restore, the client queries the server for all objects that match the restore file specification. The server sends this info to the client, then the client sorts it so that tape mounts will be optimized. However, the time involved in getting the info from the server, then sorting it (before any data is actually restored), can be quite lengthy - and may incite client timeout at the server. NQR has the *SM server do the work: the client sends the restore file specs to the server, the server figures out the optimal tape mount order, and then starts sending the restoral data to the client. The server can do this faster, and thus the time it takes to start actually restoring data is reduced.
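A quick illustration (paths made up): both of the following ask for the same directory, but only the first is eligible for No Query Restore, because -pick - like -inactive, -fromdate, and the other options listed above - forces the classic query-and-sort method: 'dsmc restore "/home/user1/*" -subdir=yes' versus 'dsmc restore "/home/user1/*" -subdir=yes -pick'.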
(A consideration is that while the server is busy figuring this out, no activity is visible from the client, which may concern the user.) Ref: Backup/Archive Client manual, chapter 3 (Backing Up and Restoring), "Restore: Advanced Considerations"; Redbook "ADSM Version 3 Technical Guide" (SG24-2236). See also: No Query Restore, disable; Restart Restore; Restore Order No Query Restore, disable Whereas this v3 feature was supposed to improve performance, it has had performance impacts of its own. To disable, perform the restoral with -traceflags=DISABLENQR, or by specifying option "TESTFLAG DISABLENQR" in dsm.opt. See "DISABLENQR" in "CLIENT TRACING". No-Query Restore See: No Query Restore NOAGGREGATES Temporary server options file option, to compensate for early v.3 defect. Is intended for customers who have a serious shortage of tapes. If you use this option, any new files backed up or archived to your server will not be aggregated. When the volumes on which these files reside are reclaimed, you will not be left with empty space within aggregates. The downside is that these files will never become aggregated, so you will miss the performance benefits of aggregation for these files. If you do not use the NOAGGREGATES option, files will continue to be aggregated and empty space may accumulate within these aggregates; this empty space will be eliminated during reclamation after you have run the data movement/reclamation utilities. NOARCHIVE ADSMv3 option for the include-exclude file, to prohibit Archive operations for the specified files, as in: "include ?:\...\* NOARCHIVE" to prohibit all archiving. NOAUDITStorage Server options file option, introduced by APAR PN77064 (PTF UN87800), to suppress the megabyte counting for each of the clients during an "AUDit LICenses" event, and thus reduce the time required for AUDit LICenses. Obsolete: now AUDITSTorage Yes|No. See: AUDITSTorage NOBUFPREFETCH Undocumented server option to disable the buffer prefetcher - at the expense of performance. (Useful where the 'SHow THReads' command reveals sessions hung on a condition in TbKillPrefetch, where the prefetcher is looping because of a design defect.) Node See: Client Node Node, add administrator Do 'REGister Admin', then 'GRant AUTHority' Node, define See: 'REGister Node' Node, delete See: 'REMove Node' Node, disable access 'LOCK Node NodeName' Node, lock 'LOCK Node NodeName' Node, move across storage pools Use 'MOVe Data', specifying a different storage pool; then reassign the node to the new stgpool's domain. But if a node shares tapes with other nodes: reassign it to the new stgpool, then let the data expire off of the old stgpool. Node, move to another Policy Domain 'UPDate Node NodeName DOmain=_____' In doing this, note: - If the receiving domain does not have the same management classes as were used in the old domain, the node's files will be bound to the receiving domain's default management class, which could have an adverse effect upon retention periods you expect. But in all cases, check the receiving domain Copypool retention policies before doing the move. - If the node was associated with a schedule, it will lose it, so be sure to examine all scheduling values. Node, number used See: Tapes, number used by a node Node, prevent data from expiring A request comes in from the owner of a client that because of subpoena or the like, its data must not expire; but that client has been using the same management class as is used for the backup of all clients. How to satisfy this request? 1.
Use 'COPy DOmain' to create a copy of the policy domain the node is in. 2. Update the retention parameters in the copy group in the new domain. 3. Activate the appropriate policy set. 4. Use 'UPDate Node' to move the node to the new policy domain. Node, prohibit access 'LOCK Node NodeName' Node, prohibit storing data on server See: Client, prevent storing data on server Node, remove See: 'REMove Node' Node, space used for Active files 'Query OCCupancy' does not reveal this, as it reports all space. A simple way to get the information is to 'EXPort Node NODENAME FILEData=BACKUPActive Preview=Yes'. Node, space used on all volumes 'Query AUDITOccupancy NodeName(s) [DOmain=DomainName(s)] [POoltype=ANY|PRimary|COpy]' Note: It is best to run 'AUDit LICenses' before doing 'Query AUDITOccupancy' to assure that the reported information will be current. Also try the unsupported command 'SHow VOLUMEUSAGE NodeName' Node, volumes in use by 'SHow VOLUMEUSAGE NodeName' or: 'SELECT DISTINCT VOLUME_NAME,NODE_NAME FROM VOLUMEUSAGE' or: 'SELECT NODE_NAME,VOLUME_NAME FROM VOLUMEUSAGE WHERE - NODE_NAME='UPPER_CASE_NAME' Node, volumes needed to restore ADSMv3: SELECT FILESPACE_NAME,VOLUME_NAME - FROM VOLUMEUSAGE WHERE - NODE_NAME='UPPER_CASE_NAME' AND - COPY_TYPE='BACKUP' AND - STGPOOL_NAME='' Node conversion state An *SM internal designation. Node state 5 is Unicode, for Unicode enabled clients, which is to say platforms in which Unicode is supported. (Within Unicode-enabled clients, it is the filespace which specifically employs Unicode.) May be seen on ANR4054I and ANR9999D messages. Node name A unique name used to identify a workstation, file server, or PC to the server. Should be the same as returned by the AIX 'hostname' command. Is specified in the Client System Options file and the Client User Options file. Node name, register 'REGister Node ...' (q.v.) (register a client with the server) Be sure to specify the DOmain name you want, because the default is the STANDARD domain, which is what IBM supplied rather than what you set up. There must be a defined and active Policy Set. Node name, remove 'REMove Node NodeName' Node name, rename (Windows) See: dsmcutil.exe Node name, update registration 'UPDate Node ...' (q.v.) (update a client's registration with the server) Node must not be currently conducting a session with the server, else command fails with error ANR2150E. Node names in a volume, list 'Query CONtent VolName ...' Node names known to server, list 'Query Node' Node password, update from server See: Password, client, update from server Node sessions, byte count SELECT NODE_NAME, SUM(LASTSESS_RECVD) - AS "Total Bytes" FROM NODES - GROUP BY NODE_NAME nodelock File in the server directory, housing the license information generated by the ADSMv3 and TSM REGister LICense operation. The *SM server must have access to this file in order to run. If the server processor board is upgraded such that its serial number changes, then this file must be removed and regenerated. See also: adsmserv.licenses; REGister LICense nodename /etc/filesystems attribute, set to "-", which is added when 'dsmmigfs' or its GUI equivalent is run to add ADSM HSM control to an AIX file system. The dash tells the mount command to call the HSM mount helper. NODename Client System Options file operand to specify the node name by which the client is registered to the server.
Placement: within a server stanza The intention of this option is to firmly specify the identity of the client where the client may have multiple identities, as in a multi-homed ethernet config. If your client system has only a single identity, it is best if this option is not used, letting the node name default to the natural system name. If you *do* code NODename, it is best that it be in upper case. If "PASSWORDAccess Generate" is in effect, you *cannot* use NODename because the password directory entry (e.g., as in /etc/security/adsm/) must be there for that node, and thus you must not have the choice of saying that you are some arbitrary node name. PASSWORDAccess Generate does not work if you code NODename. If in Unix you put it in dsm.opt, then ADSM assumes you want to be the "virtual root user", which gives you access to all of that node's data, requiring you to enter a password. Instead, put NODename in the dsm.sys file. If you are attempting to use NODename for cross-node restorals, DO NOT change your client options file to code the name of the originating node: remember that the options file is for all invocations of client functions, not just the one task you are performing, so your modification could yield incorrect results in incidental client invocations other than your own. Also, it is too easy to forget that this options file change was made. You should instead use the -NODename=____ invocation override form of the option. Note that as long as the Nodename remains the same, changes in the client's IP address (as in switching network providers) will not incite a password prompt. See also: PASSWORDAccess; TCPCLIENTAddress; VIRTUALNodename -NODename=____ (Employed on some clients (Netware and Windows), which otherwise would use -VIRTUALNodename if available there.) Command line equivalent, but override of the same options file definition, used when you want to restore or retrieve your own files when you are on other than your home nodename. Beware that specifying this causes ADSM to ask you for the password of that node, and thereafter regards you as a virtual root user. Worse, it will cause the password to be encrypted and stored on the machine where invoked. Thus anyone else can subsequently access your node's data, presenting a potential security issue. Unless that is your intent, use VIRTUALNodename instead of NODename. Note that when overriding the node name this way, with the ADSM server, a 'Query SEssion' will show the session as coming from the node whose name you have specified. Contrast with -FROMNode, which is used to gain access to another user's files. Note that a 'Query SEssion' in the server will say that the session is coming from the client named via -NODename, rather than the actual identity of the client. See also: -PASsword; VIRTUALNodename NODES SQL table containing all the information about each registered node. Columns: NODE_NAME, PLATFORM_NAME, DOMAIN_NAME, PWSET_TIME, INVALID_PW_COUNT, CONTACT, COMPRESSION, ARCHDELETE, BACKDELETE, LOCKED, LASTACC_TIME, REG_TIME, REG_ADMIN, LASTSESS_COMMMETH, LASTSESS_RECVD, LASTSESS_SENT, LASTSESS_DURATION, LASTSESS_IDLEWAIT, LASTSESS_COMMWAIT, LASTSESS_MEDIAWAIT, CLIENT_VERSION, CLIENT_RELEASE, CLIENT_LEVEL, CLIENT_SUBLEVEL, CLIENT_OS_LEVEL, OPTION_SET, AGGREGATION, URL, NODETYPE, PASSEXP. Note that the table is indexed by NODE_NAME, so seeking on an exact match is faster than on a "LIKE". 
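As an example of querying that table (a sketch - substitute whichever of the above columns you need), to list each registered node with its platform and client code level: SELECT NODE_NAME, PLATFORM_NAME, - CLIENT_VERSION, CLIENT_RELEASE, - CLIENT_LEVEL, CLIENT_SUBLEVEL - FROM NODES ORDER BY NODE_NAME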
Nodes, registered 'Query DOmain Format=Detailed' Nodes, registered, number SELECT COUNT(NODE_NAME) - AS "Number of registered nodes" - FROM NODES Nodes, report MB and files count SELECT NODE_NAME, SUM(LOGICAL_MB) AS - Data_In_MB, SUM(NUM_FILES) AS - Num_of_files FROM OCCUPANCY GROUP BY - NODE_NAME ORDER BY NODE_NAME ASC Nodes not doing backups in 90 days 'SELECT NODE_NAME, CONTACT, \ LASTACC_TIME, REG_TIME, DOMAIN_NAME \ FROM NODES WHERE DOMAIN_NAME='FL_INTL'\ AND DAYS(CURRENT_TIMESTAMP)-\ DAYS(LASTACC_TIME)>90 ORDER BY \ LASTACC_TIME DESC > SomeFilename' Nodes without filespaces There will always be nodes which have registered with the server but which have yet to send data to the server. The following will report them: SELECT NODE_NAME AS - "Nodes with no filespaces:", - DATE(REG_TIME) AS "Registered:", - DATE(LASTACC_TIME) AS "Last Access:" - FROM NODES WHERE NODE_NAME NOT IN - (SELECT NODE_NAME FROM FILESPACES) NOMIGRRECL Undocumented server option to prevent migration and reclamation at server start-up time. Note that there is no server Query that will evidence the use of this option: the server options file has to be inspected. Non-English filenames (NLS support) The TSM product is a product of the USA, written in an English language environment, originally and predominantly for English language customers using an alphabet comprised of the characters found in the basic ASCII character set. Trying to use TSM in a non-English environment is a stretch, as customers who have tried it have found and reported in ADSM-L. The product has experienced many, protracted problems with non-English alphabets, as seen in numerous APARs - and some debacles ("the umlaut problem" - see message ANS1304W). As of mid-2001, there is no support for mixed, multi-national languages, as for example a predominantly English language client which stores some files whose names contain multi-byte character sets (e.g., Japanese). Customers find, for example, that to back up Japanese filenames you must run the Windows client on a Japanese language Windows server. Some customers circumvent the whole problem on their English language systems by copying the non-English files into a tar archive or zip file having an English name, which then backs up without problems. Another approach is to use NT Shares across English and non-English client systems, to back up as appropriate. NONBLOCK Refers to storage pool DATAFormat definition, where NATIVE is the default. TSM operations use storage pools defined with a NATIVE or NONBLOCK data format (which differs from NDMP). DATAFormat=NONblock specifies that the data format is the native TSM server format, but does not include block headers. See also: NATIVE NOPREEMPT ADSMv3 Server Options file (dsmserv.opt) entry to prevent preemption. TSM allows certain operations to preempt other operations for access to volumes and devices. For example, a client data restore operation preempts a client data backup for use of a specific device or access to a specific volume. When preemption is disabled, no operation can preempt another for access to a volume, and only a database backup operation can preempt another operation for access to a device. The effect is to cause high-priority tasks like Restores to wait for resources, rather than preempt a lower-priority task so as to execute asap. See also: Preemption; DEFine SCHedule NORETRIEVEDATE Server option to specify that the retrieve date of a file in a disk storage pool is not to be updated when the file is restored or retrieved by a client.
This option can be used in combination with the MIGDelay storage pool parameter to control when files are migrated. If this option is not specified, files are migrated only if they have been in the storage pool the minimum number of days specified by the MIGDelay parameter. The number of days is counted from the day that the file was stored in the storage pool or retrieved by a client, whichever is more recent. By specifying this option, the retrieve date of a file is not updated and the number of days is counted only from the day the file entered the disk storage pool. If this option is specified and caching is enabled for a disk storage pool, reclamation of cached space is affected. When space is needed in a disk storage pool containing cached files, space is obtained by selectively erasing cached copies. Files that have the oldest retrieve dates and occupy the largest amount of space are selected for removal. When the NORETRIEVEDATE option is specified, the retrieve date is not updated when a file is retrieved. This may cause cached copies to be removed even though they have recently been retrieved by a client. See also: MIGDelay Normal File--> Leads the line of output from a Backup operation, as when backup is incited by the file's mtime (file modification time) having changed, or if a chown or chgrp effected a change. See also: Updating-->; Expiring-->; Rebinding--> Normal recall mode A mode that causes HSM to copy a migrated file back to its originating file system when it is accessed. If the file is not modified, it becomes a premigrated file. If the file is modified, it becomes a resident file. Contrast with migrate-on-close recall mode and read-without-recall recall mode. NOT IN SQL clause to exclude a particular set of data that matches one of a list of values: WHERE COLUMN_NAME - NOT IN (value1,value2,value3) See also: IN "Not supported" Vendor parlance indicating that a certain level or mix of hardware/software is not supported by the vendor. It may mean that the vendor knows that the level is not viable by virtue of design; but more usually indicates that an older level of software was not deemed worth the expenditure to test compatibility, rather than having tested and having found incompatibilities. It is common for customers to inadvertently or intentionally use unsupported software and encounter no problems. Usually, usage of such software which "stays near the center of the path" can do okay; it's when the usage gets near the edges of complexity that functional problems are more likely to arise. NOTMOuntable DRM media state for volumes containing valid data, located onsite, but which TSM is not to use. This value can also be the default Location if Set DRMNOTMOuntablename has not been run. See also: COUrier; COURIERRetrieve; MOuntable; MOVe DRMedia; Set DRMNOTMOuntablename; VAult; VAULTRetrieve Novell See also: Netware Novell and TSM problems Novell customers report that problems using TSM (or, for that matter, many other applications) under Novell Netware are almost universally due to Novell irregularities and failing to communicate OS changes to developers. Novell (Netware) performance The standard Backup considerations apply, including too many files in one directory. See also: PROCESSORutilization Novell trustee rights With Novell your trustee rights are normally set on a directory level. If this is the case with your Novell systems, then just use the -dirsonly option when doing a restore. TSM backs up rights and IRFs only at a directory level, not a file level.
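A hypothetical example (volume and path made up) of recovering just the directory structure, and with it the trustee assignments, without pulling back every file: 'dsmc restore SYS:/USERS/ -dirsonly -subdir=yes'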
Trustee Rights are not seen by the client workstation that maps the drive for its use. Client workstations should not be doing the backups: they should be done from the Novell system. .NSF file Lotus Notes database file. NSM See: Network Storage Manager NT Microsoft Windows New Technology operating system, situated between Windows 98 and Windows 2000. See: Windows NT .NTF files (Lotus Notes) and backup By default, the Lotus Notes Connect Agent will not back up .NTF files: you have to specifically request them to get them backed up. NTFS NT File System. Is understood by OS/2. Unlike FAT, NTFS directories are complex, and cannot be stored in the *SM database, instead having to go into a storage pool. NTFS and Daylight Savings Time Incredibly, NTFS file timestamps are offsets from GMT rather than absolute values - and hence the perceived timestamps on all files in the NTFS will change in DST transitions. (Another reason that NT systems cannot be regarded as serious contenders for server implementations.) http://support.microsoft.com/support/kb/articles/q129/5/74.asp NTFS and permissions changes If someone happens to make a global change to the permissions (security information) of files in an NTFS, the next Backup will cause the files to be backed up afresh...which is warranted, as the attributes are vital to the files. The fresh backup will occur if any of the NTFS file security descriptors are changed: Owner Security Identifier (SID), Group SID, Discretionary Access Control List (ACL), and System ACL. Possible mitigations (all of which have encumbrances and side effects): - Perform -INCRBYDate backups. - In Windows Journal-Based Backups, you may employ the NotifyFilter. - Subfile backups, if you happen to use them, should avoid wholesale backups. - Another approach to mitigation is to follow MS's AGLP (AGDLP for AD) rules: assign users to Global Groups, add Global Groups to Local (DOMAIN Local in AD) and only assign permissions to the local groups. You create the appropriate local groups (eg read access, write etc) and only assign permissions once to these groups. Any user changes are done through removal of users from the Global groups or GG from local groups, which doesn't trigger any ACL changes on the files so no extra backups are done. As far as initial security lockdown, this should be done at server setup. NTFS and security info in restorals NTFS object security information is stored with the object on the server and will be restored when the individual NTFS object is restored. "Security" in Windows NTFS and what gets restored: Inherited: The only security info is "provide same access as the parent directory is providing". TSM will restore the "checkmarked" inheritance. It *will not* restore the parent's ACL, or the ACL of the parent's parent, ... up to the origin of the inherited ACL. As a result you have restored the ability to inherit, but not *what* to inherit. Explicitly specified: There is a list of users along with a set of allowed operations. TSM will restore "no inheritance" mode and the list of defined privileges. This is probably what you want in a restoral. Mixed permissions: Both access inherited from the parent plus some explicitly specified additions/deletions/changes to the ACL. TSM restores both "inheritance" mode and the explicit access. As a result, the explicitly defined entities will have their access intact but the others are left to the mercy of the ACL inherited from the parent directory.
If the whole drive is restored, file/directory specific ACL elements are restored together with their parents'. All this should explain why sometimes you see the ACL "restored", sometimes "not restored" and sometimes "partially restored". NTFS security info as stored in TSM Because of the amount of information involved, NTFS security data is too much to be stored in the TSM database (as simple file attribute data otherwise can be), and so NTFS security info has to go into a TSM storage pool. The NTFS security info is stored as part of the file data - an implication being that if just the security info is changed, the file itself has to be backed up afresh as well. NTuser.dat The NT current profile of each user registered to use the NT system. When you log on to NT, the contents of NTUSER.DAT are loaded into the HKEY_CURRENT_USER Registry key, where that copy persists only for the duration of the user session. So a TSM backup captures that as part of Registry backup; and you can do 'dsmc REStore REgistry USER CURUSER' to get your profile back. If the user is not logged in at the time of the backup, the file will be backed up from where it sits. If the user is logged in at the time, the file will be in use by the system, and will be backed up as part of the Registry, which is to say that the API used by the client for Registry backup will make a copy in the adsm.sys directory, and back that up. (The above assumes that the backup is run by Administrator: if run by an ordinary user, there is no access to either source of NTUSER.DAT data: it has to be skipped as busy.) C:\adsm.sys\Registry\\Users contains a directory for each id, and each id that was logged on at the time of the backup will have a file with a name like: S-1-5-21-1417001333-436374069-854245398-1000 This is the logical equivalent of NTUSER.DAT. To restore it requires an extra step, though: When doing a bare metal restore, you restore the files, then the Registry; then you reboot; then you log on under that user's account. Since you don't have a restored copy of NTUSER.DAT, you will see the default profile. Run: dsmc REStore REgistry USER CURUSER which reloads the profile stuff from adsm.sys into the registry. Then you reboot again, and on the way down it will write the profile out to NTUSER.DAT again, and you are back in business. When you come back up, you have your restored/customized profile. If using the 4.1.2 client, the names in adsm.sys have changed, and the backed up user profile for each user is actually called NTUSER.DAT. And you can't restore individual Registry keys. So after you do the bare-metal restore of files & Registry as ADMINISTRATOR, you drag that person's NTUSER.DAT from the adsm.sys directory back to where it is supposed to be, before that account logs on again. In running standard TSM backups, be sure to run the TSM Scheduler Service under the Local System account, not a user account, to avoid the inevitable problem of finding the user profile (NTuser.dat) locked.
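In that vein, a sketch of installing the scheduler service so that it runs under Local System (dsmcutil parameters vary by client level - do 'dsmcutil help' to confirm; omitting /ntaccount: should let the service default to the Local System account, and the node name and password here are made up): 'dsmcutil install /name:"TSM Central Scheduler Service" /node:MYNODE /password:secret /autostart:yes'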