
System Management Guide:
Operating System and Devices

LVM Maintenance Tasks

The simplest tasks you might need to perform while maintaining the entities that LVM controls (physical and logical volumes, volume groups, and file systems) are grouped in the following table. Instructions for additional maintenance tasks are located later in this section or in How-To's for System Management Tasks. Instructions that are specific to file systems are located in File Systems Management Tasks.

You must have root authority to perform most of the following tasks. For your convenience, links to all the logical volume, physical volume, and volume group maintenance tasks are listed below:

Table 2. Managing Logical Volumes and Storage Tasks
Task SMIT Fast Path Command or File
Activate a volume group smit varyonvg  
Add a fixed disk without data to existing volume group smit extendvg  
Add a fixed disk without data to new volume group smit mkvg  
Add a logical volume (Note 1) smit mklv  
Add a volume group smit mkvg  
Add and activate a new volume group smit mkvg  
Change a logical volume to use data allocation smit chlv1  
Change the name of a volume group (Note 2)
  1. smit varyoffvg
  2. smit exportvg
  3. smit importvg
  4. smit mountfs
  1. varyoffvg OldVGName
  2. exportvg OldVGName
  3. importvg NewVGName
  4. mount all
Change a volume group to use automatic activation smit chvg  
Change or set logical volume policies smit chlv1  
Copy a logical volume to a new logical volume (Note 3) smit cplv  
Copy a logical volume to an existing logical volume of the same size (Attention 1) smit cplv  
Copy a logical volume to an existing logical volume of smaller size (Attention 1, Note 3) Do not use SMIT (Attention 2)
  1. Create logical volume. For example:
    mklv -y hdiskN vg00 4
  2. Create new file system on new logical volume.
    For example:
    crfs -v jfs -d hdiskN -m /doc -A yes
  3. Mount file system. For example:
    mount /doc
  4. Create directory at new mount point. For example:
    mkdir /doc/options
  5. Transfer the files from the source to the destination
    logical volume. For example:
    cp -R /usr/adam/oldoptions/* /doc/options
Copy a logical volume to an existing logical volume of larger size (Attention 1) smit cplv  
Deactivate a volume group smit varyoffvg  
Enable write-verify and change scheduling policy smit chlv1  
Increase the maximum size of a logical volume smit chlv1  
Increase the size of a logical volume smit extendlv  
List all logical volumes by volume group smit lslv2  
List all physical volumes in system smit lspv2  
List all volume groups smit lsvg2  
List the status, logical volumes, or partitions of a physical volume smit lspv  
List the contents of a volume group smit lsvg1  
List a logical volume's status or mapping smit lslv  
Mirror a logical volume with or without data allocation smit mklvcopy  
Power off a removable disk smit offdsk Available with the hot-removability feature only
Power on a removable disk smit ondsk Available with the hot-removability feature only
Remove a disk with data from the operating system smit exportvgrds  
Remove a disk without data from the operating system smit reducevgrds  
Remove mirroring from a volume group smit unmirrorvg  
Remove a volume group smit reducevg2  
Reorganize a volume group smit reorgvg  
Unconfigure and power off a disk smit rmvdsk1 or
smit rmvdsk then
smit opendoor
 
Attention:
  1. Using this procedure to copy to an existing logical volume will overwrite any data on that volume without requesting user confirmation.
  2. Do not use the SMIT procedure or the cplv command to copy a larger logical volume to a smaller one. Doing so results in a corrupted file system because some of the data (including the superblock) is not copied to the smaller logical volume.
Notes:
  1. After you create a logical volume, the state will be closed because no LVM structure is using that logical volume. It will remain closed until a file system has been mounted over the logical volume or the logical volume is opened for raw I/O. See also Define a Raw Logical Volume for an Application.
  2. You cannot change the name of, import, or export rootvg.
  3. You must have enough direct access storage to duplicate a specific logical volume.
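The volume group rename sequence shown in the table above can be consolidated on the command line. This is a hedged sketch, not part of the original procedure; the names oldvg, newvg, and hdisk3 are hypothetical placeholders, and any file systems on the group must be unmounted first.

```shell
# Hypothetical sketch: rename volume group "oldvg" to "newvg".
# Assumes oldvg resides on hdisk3 and its file systems are unmounted.
varyoffvg oldvg            # deactivate the volume group
exportvg oldvg             # remove the volume group definition from the system
importvg -y newvg hdisk3   # reimport the disk under the new name
mount all                  # remount the file systems listed in /etc/filesystems
```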

Adding Disks while the System Remains Available

The following procedure describes how to turn on and configure a disk using the hot-removability feature, which lets you add disks without powering off the system. You can add a disk for additional storage or to correct a disk failure. To remove a disk using the hot-removability feature, see Removing a Disk while the System Remains Available. This feature is only available on certain systems.

  1. Install the disk in a free slot of the cabinet. For detailed information about the installation procedure, see the service guide for your machine.
  2. Power on the new disk by typing the following fast path on the command line:
    smit ondsk

At this point, the disk is added to the system but it is not yet usable. What you do next depends on whether the new disk contains data.

Changing a Volume Group to Nonquorum Status

You can change a volume group to nonquorum status to have data continuously available even when there is no quorum. This procedure is often used for systems with the following configurations:

When a volume group in these circumstances operates in nonquorum status, the volume group remains active even after a disk failure, as long as at least one logical volume copy remains intact on a disk. For conceptual information about quorums, refer to AIX 5L Version 5.2 System Management Concepts: Operating System and Devices.

To make recovery of nonquorum groups possible, ensure the following:

Both user-defined and rootvg volume groups can operate in nonquorum status, but their configuration and recovery methods are different.

Changing a User-Defined Volume Group to Nonquorum Status

Use the following procedure to change a user-defined volume group to nonquorum status:

  1. Check whether the user-defined volume group is currently active (varied on) by typing the following command:
    lsvg -o

    If the group you want is not listed, continue with step 3. If the group you want is listed, continue with step 2.

  2. If the group is active (varied on), type the following command:
    varyoffvg VGName
    Where VGName is the name of your user-defined volume group.
  3. To change an inactive user-defined volume group to nonquorum status, type the following command:
    chvg -Qn VGName
    If the volume group is active, the change does not take effect until the next varyoff/varyon cycle completes.
  4. To activate the volume group and cause the change to take effect, type the following command:

    varyonvg VGName
    Note
    To activate a nonquorum user-defined volume group, all of the volume group's physical volumes must be accessible or the activation fails. Because nonquorum volume groups stay online until the last disk becomes inaccessible, it is necessary to have each disk accessible at activation time.

At this point, your user-defined volume group should be available even if a quorum of physical volumes is not available.
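Steps 1 through 4 above can be sketched as a single command sequence; the name datavg is a hypothetical user-defined volume group.

```shell
# Sketch of the nonquorum procedure; "datavg" is a hypothetical volume group.
lsvg -o            # list active (varied-on) volume groups
varyoffvg datavg   # deactivate the group if it was listed
chvg -Qn datavg    # disable quorum checking for the group
varyonvg datavg    # reactivate; every disk in the group must be accessible
```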

Changing the rootvg Volume Group to Nonquorum Status

The procedure to change a rootvg to nonquorum status requires shutting down your system and rebooting.

Attention: When a disk associated with the rootvg volume group is missing, avoid powering on the system unless the missing disk cannot possibly be repaired. The Logical Volume Manager (LVM) always uses the -f flag to forcibly activate (vary on) a nonquorum rootvg; this operation involves risk. LVM must force the activation because the operating system cannot be brought up unless rootvg is activated. In other words, LVM makes a final attempt to activate (vary on) a nonquorum rootvg even if only a single disk is accessible.
  1. To change the rootvg volume group to nonquorum status, type the following command:

    chvg -Qn rootvg
  2. To shut down and reboot the system, which causes the change to nonquorum status to take effect, type:

    shutdown -Fr

At this point, the rootvg should remain available even if a quorum of physical volumes is not available.

Changing the Name of a Logical Volume

The following procedure describes how to rename a logical volume without losing data on the logical volume.

In the following examples, the logical volume name is changed from lv00 to lv33.

  1. Unmount all file systems associated with the logical volume, by typing:
    unmount /FSname

    Where FSname is the full name of a file system.

    Notes:
    1. The unmount command fails if the file system you are trying to unmount is currently being used. The unmount command executes only if none of the file system's files are open and no user's current directory is on that device.
    2. Another name for the unmount command is umount. The names are interchangeable.
  2. Rename the logical volume, by typing:
    chlv -n NewLVname OldLVname 

    Where the -n flag specifies the new logical volume name (NewLVname) and OldLVname is the name you want to change. For example:

    chlv -n lv33 lv00 
    Note
    If you rename a JFS or JFS2 log, the system prompts you to run the chfs command on all file systems that use the renamed log device.
  3. Remount the file systems you unmounted in step 1 by typing:

    mount /test1

At this point, the logical volume is renamed and available for use.
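The rename procedure amounts to the following short sequence, repeated here as a consolidated sketch using the names from the example; /test1 is the mount point used in step 3 above.

```shell
# Sketch of the rename procedure, using the example names lv00, lv33, /test1.
umount /test1       # unmount every file system on the logical volume
chlv -n lv33 lv00   # rename the logical volume
mount /test1        # remount the file system
```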

Copying a Logical Volume to Another Physical Volume

Depending on your needs, there are several ways to copy a logical volume to another physical volume while retaining file system integrity. The following sections describe your options.

Note
For the following scenarios to be successful in a concurrent volume group environment, AIX 4.3.2 or later must be installed on all concurrent nodes.

This scenario offers multiple methods to copy a logical volume or JFS to another physical volume. Choose the method that best serves your purposes:

Copy a Logical Volume

The simplest method is to use the cplv command to copy the original logical volume and create a new logical volume on the destination physical volume.

  1. Stop using the logical volume. Unmount the file system, if applicable, and stop any application that accesses the logical volume.
  2. Select a physical volume that has the capacity to contain all of the data in the original logical volume.
    Attention: If you copy from a larger logical volume containing data to a smaller one, you can corrupt your file system because some data (including the superblock) might be lost.
  3. Copy the original logical volume (in this example, it is named lv00) and create the new one, using the following command:
    Note
    The following cplv command fails if it creates a new logical volume and the volume group is varied on in concurrent mode.
    cplv lv00
  4. Mount the file systems, if applicable, and restart applications to begin using the logical volume.

At this point, the logical volume copy is usable.
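Assuming the file system on lv00 is mounted at a hypothetical mount point /data, the steps above can be sketched as:

```shell
# Hedged sketch of the cplv procedure; /data is an assumed mount point.
umount /data   # step 1: stop using the original logical volume
cplv lv00      # step 3: copy lv00 into a new, system-named logical volume
mount /data    # step 4: resume use of the file system
```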

Copy a Logical Volume While Original Logical Volume Remains Usable

If your environment requires continued use of the original logical volume, you can use the splitlvcopy command to copy the contents, as shown in the following example:

  1. Mirror the logical volume, using the following SMIT fast path:
    smit mklvcopy 
  2. Stop using the logical volume. Unmount the file system, if applicable, and stop or put into quiescent mode any application that accesses the logical volume.
    Attention: The next step uses the splitlvcopy command. Always close logical volumes before splitting them and unmount any contained file systems before using this command. Splitting an open logical volume can corrupt your file systems and cause you to lose consistency between the original logical volume and the copy if the logical volume is accessed simultaneously by multiple processes.
  3. With root authority, copy the original logical volume (oldlv) to the new logical volume (newlv) using the following command:
    splitlvcopy -y newlv oldlv

    The -y flag designates the new logical volume name. If the oldlv volume does not have a logical volume control block, the splitlvcopy command completes successfully but generates a message that the newlv volume has been created without a logical volume control block.

  4. Mount the file systems, if applicable, and restart applications to begin using the logical volume.

At this point, the logical volume copy is usable.
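The splitlvcopy procedure can be sketched as the following sequence. The names newlv and oldlv come from the example; the mount point /data is a hypothetical assumption, and mklvcopy/syncvg are the command-line equivalents of the SMIT mirroring step.

```shell
# Hedged sketch of the splitlvcopy procedure; /data is a hypothetical mount point.
mklvcopy oldlv 2             # step 1: mirror the logical volume (two copies)
syncvg -l oldlv              # ensure the mirror copy is fully synchronized
umount /data                 # step 2: close the logical volume before splitting
splitlvcopy -y newlv oldlv   # step 3: split off one copy as the new logical volume
mount /data                  # step 4: resume use of the original
```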

Copy a Raw Logical Volume to Another Physical Volume

To copy a raw logical volume to another physical volume, do the following:

  1. Create a mirrored copy of the logical volume on a new physical volume in the volume group using the following command:
    mklvcopy LogVol_name 2 new_PhysVol_name
    
  2. Synchronize the partitions in the new mirror copy using the following command:
    syncvg -l LogVol_name
    
  3. Remove the copy of the logical volume from the physical volume using the following command:
    rmlvcopy LogVol_name 1 old_PhysVol_name
    

At this point, the raw logical volume copy is usable.

Creating a File System Log on a Dedicated Disk for a User-Defined Volume Group

A JFS or JFS2 file system log is a formatted list of file system transaction records. The log ensures file system integrity (but not necessarily data integrity) in case the system goes down before transactions have been completed. A dedicated log logical volume, hd8, is created for rootvg when the system is installed. The following procedure helps you create a JFS log on a separate disk for other volume groups. When you create a JFS2 log, the procedure requires the following changes:

Creating a file system log file for user-defined volume groups can improve performance under certain conditions, for example, if you have an NFS server and you want the transactions for this server to be processed without competition from other processes.

To create a log file for user-defined volume groups, the easiest way is to use the Web-based System Manager wizard, as follows:

  1. If Web-based System Manager is not already running, with root authority, type wsm on the command line.
  2. Select a host name.
  3. Select the Volumes container.
  4. Select the Logical Volumes container.
  5. In the Volumes menu, select New Logical Volume (Wizard). The wizard will guide you through the procedure. Online help is available if you need it.

Alternatively, you can use the following procedure, which creates a volume group (fsvg1) with two physical volumes (hdisk1 and hdisk2). The file system is on hdisk2 (a 256-MB file system mounted at /u/myfs) and the log is on hdisk1. By default, a JFS log size is 4 MB. You can place little-used programs, for example, /blv, on the same physical volume as the log without impacting performance.

The following instructions explain how to create a JFS log for a user-defined volume group using SMIT and the command line interface:

  1. Add the new volume group (in this example, fsvg1) using the SMIT fast path:
    smit mkvg
  2. Add a new logical volume to this volume group using the SMIT fast path:
    smit mklv
  3. On the Add a Logical Volume screen, add your data to the following fields. For example:
    Logical Volumes NAME                     fsvg1log
    
    Number of LOGICAL PARTITIONS             1
    
    PHYSICAL VOLUME names                    hdisk1
    
    Logical volume TYPE                      jfslog
    
    POSITION on Physical Volume              center
  4. After you set the fields, press Enter to accept your changes and exit SMIT.
  5. Type the following on a command line:
    /usr/sbin/logform /dev/fsvg1log
  6. When you receive the following prompt, type y and press Enter:
    Destroy /dev/fsvg1log

    Despite the wording in this prompt, nothing is destroyed. When you respond y to this prompt, the system formats the logical volume for the JFS log so that it can record file-system transactions.

  7. Add another logical volume using the following SMIT fast path:
    smit mklv
  8. Type the name of the same volume group as you used in step 2 (fsvg1 in this example). In the Logical Volumes screen, add your data to the following fields. Remember to designate a different physical volume for this logical volume than you did in step 3. For example:
    Logical Volumes NAME                     fslv1
    
    Number of LOGICAL PARTITIONS             64
    
    PHYSICAL VOLUME names                    hdisk2
    
    Logical volume TYPE                      jfs

    After you set the fields, press Enter to accept your changes and exit SMIT.

  9. Add a file system to the new logical volume, designate the log, and mount the new file system, using the following sequence of commands:
    crfs -v jfs -d LogVolName -m FileSysName -a logname=FSLogPath
    
    mount FileSysName
    Where LogVolName is the name of the logical volume you created in step 7; FileSysName is the name of the file system you want to mount on this logical volume; and FSLogPath is the device path of the log logical volume you created in step 2 (in this example, /dev/fsvg1log). For example:
    crfs -v jfs -d fslv1 -m /u/myfs -a logname=/dev/fsvg1log
    mount /u/myfs
  10. To verify that you have set up the file system and log correctly, type the following command (substituting your volume group name):

    lsvg -l fsvg1

    The output shows both logical volumes you created, with their file system types, as in the following example:

    LV NAME             TYPE    ...
    /dev/fsvg1log       jfslog  ...
    fslv1               jfs     ...

At this point, you have created a volume group containing at least two logical volumes on separate physical volumes, and one of those logical volumes contains the file system log.
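Steps 1 through 10 can also be performed entirely on the command line. This is a hedged sketch using the names from the example above; it substitutes the mkvg and mklv commands for the corresponding SMIT panels.

```shell
# Command-line sketch of the JFS log procedure, using the example names.
mkvg -y fsvg1 hdisk1 hdisk2                 # step 1: create the volume group
mklv -y fsvg1log -t jfslog fsvg1 1 hdisk1   # steps 2-4: one-partition jfslog volume on hdisk1
logform /dev/fsvg1log                       # steps 5-6: format the log (answer y at the prompt)
mklv -y fslv1 -t jfs fsvg1 64 hdisk2        # steps 7-8: data logical volume on hdisk2
crfs -v jfs -d fslv1 -m /u/myfs -a logname=/dev/fsvg1log   # step 9: file system using the log
mount /u/myfs
lsvg -l fsvg1                               # step 10: verify both logical volumes
```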

Designating Hot Spare Disks

Beginning with AIX 5.1, you can designate hot spare disks for a volume group to ensure the availability of your system if a disk or disks start to fail. Hot spare disk concepts and policies are described in AIX 5L Version 5.2 System Management Concepts: Operating System and Devices. The following procedures to enable hot spare disk support depend on whether you are designating hot spare disks to use with an existing volume group or enabling support while creating a new volume group.

Enable Hot Spare Disk Support for an Existing Volume Group

The following steps use Web-based System Manager to enable hot spare disk support for an existing volume group.

  1. Start Web-based System Manager (if not already running) by typing wsm on the command line.
  2. Select the Volumes container.
  3. Select the Volume Groups container.
  4. Select the name of your target volume group, and choose Properties from the Selected menu.
  5. Select the Hot Spare Disk Support tab and select the check box beside Enable hot spare disk support.
  6. Select the Physical Volumes tab to add available physical volumes to the Volume Group as hot spare disks.

At this point, your mirrored volume group has one or more disks designated as spares. If your system detects a failing disk, depending on the options you selected, the data on the failing disk can be migrated to a spare disk without interruption to use or availability.

Enable Hot Spare Disk Support while Creating a New Volume Group

The following steps use Web-based System Manager to enable hot spare disk support while you are creating a new volume group.

  1. Start Web-based System Manager (if not already running) by typing wsm on the command line.
  2. Select the Volumes container.
  3. Select the Volume Groups container.
  4. From the Volumes menu, select New->Volume Group (Advanced Method). The subsequent panels let you choose physical volumes and their sizes, enable hot spare disk support, select unused physical volumes to assign as hot spares, then set the migration characteristics for your hot spare disk or your hot spare disk pool.

At this point, your system recognizes a new mirrored volume group with one or more disks designated as spares. If your system detects a failing disk, depending on the options you selected, the data on the failing disk can be migrated to a spare disk without interruption to use or availability.

Enabling and Configuring Hot Spot Reporting

Beginning with AIX 5.1, you can identify hot spot problems with your logical volumes and remedy those problems without interrupting the use of your system. A hot-spot problem occurs when some of the logical partitions on your disk have so much disk I/O that your system performance noticeably suffers.

The following procedures use Web-based System Manager to enable hot spot reporting and manage the results.

Enabling Hot Spot Reporting at the Volume Group Level

The following steps use Web-based System Manager to enable hot spot reporting at the volume group level.

  1. Start Web-based System Manager (if not already running) by typing wsm on the command line.
  2. Select the Volumes container.
  3. Select the Volume Groups container.
  4. Select the name of your target volume group, and choose Hot Spot Reporting... from the Selected menu.
  5. Select the check boxes beside Enable hot spot reporting and Restart the Statistics Counters.

At this point, the hot spot feature is enabled. Use the pull-down or pop-up menu in Web-based System Manager to access the Manage Hot Spots... Sequential dialog. In the subsequent panels, you can define your reporting and statistics, display your statistics, select logical partitions to migrate, specify the destination physical partition, and verify the information before committing your changes.

Enabling Hot Spot Reporting at the Logical Volume Level

The following steps use Web-based System Manager to enable hot spot reporting at the logical volume level so you can avoid enabling it for an entire volume group.

  1. Start Web-based System Manager (if not already running) by typing wsm on the command line.
  2. Select the Volumes container.
  3. Select the Logical Volumes container.
  4. Select the name of your target logical volume and choose Hot Spot Reporting... from the Selected menu.
  5. Select the check boxes beside Enable hot spot reporting and Restart the Statistics Counters.

At this point, the hot spot feature is enabled. Use the pull-down or pop-up menu in Web-based System Manager to access the Manage Hot Spots... Sequential dialog. In the subsequent panels, you can define your reporting and statistics, display your statistics, select logical partitions to migrate, specify the destination physical partition, and verify the information before committing your changes.

Importing or Exporting a Volume Group

The following table explains how to use import and export to move a user-defined volume group from one system to another. (The rootvg volume group cannot be exported or imported.) The export procedure removes the definition of a volume group from a system. The import procedure serves to introduce the volume group to its new system.

You can also use the import procedure to reintroduce a volume group to the system when it once was associated with the system but had been exported. You can also use import and export to add a physical volume that contains data to a volume group by putting the disk to be added in its own volume group.

Attention: The importvg command changes the name of an imported logical volume if a logical volume of that name already exists on the new system. If the importvg command must rename a logical volume, it prints an error message to standard error. When there are no conflicts, the importvg command also creates file mount points and entries in the /etc/filesystems file.
Import and Export Volume Group Tasks
Task SMIT Fast Path Command or File
Import a volume group smit importvg  
Export a volume group
  1. Unmount file systems on logical volumes in the volume group:
    smit umntdsk
  2. Vary off the volume group:
    smit varyoffvg
  3. Export the volume group:
    smit exportvg
 
Attention: A volume group that has a paging space volume on it cannot be exported while the paging space is active. Before exporting a volume group with an active paging space, ensure that the paging space is not activated automatically at system initialization by typing the following command:
chps -a n PagingSpaceName

Then, reboot the system so that the paging space is inactive.
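The export sequence for a volume group with an active paging space can be sketched as follows; pg01 and expvg are hypothetical names for the paging space and the volume group.

```shell
# Hypothetical sketch: export a volume group that contains a paging space.
chps -a n pg01    # keep the paging space inactive at the next system start
shutdown -Fr      # reboot so the paging space is no longer active
# After the reboot:
varyoffvg expvg   # deactivate the volume group
exportvg expvg    # remove its definition from the system
```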

Migrating the Contents of a Physical Volume

To move the physical partitions belonging to one or more specified logical volumes from one physical volume to one or more other physical volumes in a volume group, use the following instructions. You can also use this procedure to move data from a failing disk before replacing or repairing the failing disk. This procedure can be used on physical volumes in either the root volume group or a user-defined volume group.

Attention: When the boot logical volume is migrated from a physical volume, the boot record on the source must be cleared or it could cause a system hang. When you execute the bosboot command, you must also execute the mkboot -c command described in step 4 of the following procedure.
  1. If you want to migrate the data to a new disk, do the following steps. Otherwise, continue with step 2.
    1. Check that the disk is recognizable by the system and available by typing:

      lsdev -Cc disk

      The output resembles the following:

      hdisk0 Available 10-60-00-8,0  16 Bit  LVD  SCSI Disk Drive
      hdisk1 Available 10-60-00-9,0  16 Bit  LVD  SCSI Disk Drive
      hdisk2 Available 10-60-00-11,0 16 Bit  LVD  SCSI Disk Drive
    2. If the disk is listed and in the available state, check that it does not belong to another volume group by typing:

      lspv

      The output looks similar to the following:

      hdisk0          0004234583aa7879       rootvg         active
      hdisk1          00042345e05603c1       none           active
      hdisk2          00083772caa7896e       imagesvg       active

      In the example, hdisk1 can be used as a destination disk because the third field shows that it is not being used by a volume group.

      If the new disk is not listed or unavailable, refer to Configuring a Disk or Adding Disks while the System Remains Available.

    3. Add the new disk to the volume group by typing:

      extendvg VGName diskname

      Where VGName is the name of your volume group and diskname is the name of the new disk. In the example shown in the previous step, diskname would be replaced by hdisk1.

  2. The source and destination physical volumes must be in the same volume group. To determine whether both physical volumes are in the volume group, type:

    lsvg -p VGname

    Where VGname is the name of your volume group. The output for a root volume group looks similar to the following:

    rootvg:                                                                       
    PV_NAME        PV STATE       TOTAL PPs   FREE PPs    FREE DISTRIBUTION 
    hdisk0         active         542         85          00..00..00..26..59
    hdisk1         active         542         306         00..00..00..00..06

    Note the number of FREE PPs.

  3. Check that you have enough room on the target disk for the source that you want to move:
    1. Determine the number of physical partitions on the source disk by typing:

      lspv SourceDiskName | grep "USED PPs"

      Where SourceDiskName is the name of the source disk, for example, hdisk0. The output looks similar to the following:

      USED PPs:      159 (636 megabytes)

      In this example, you need 159 FREE PPs on the destination disk to successfully complete the migration.

    2. Compare the number of USED PPs from the source disk with the number of FREE PPs on the destination disk or disks (step 2). If the number of FREE PPs is larger than the number of USED PPs, you have enough space for the migration.
  4. Follow this step only if you are migrating data from a disk in the rootvg volume group. If you are migrating data from a disk in a user-defined volume group, proceed to step 5.

    Check to see if the boot logical volume (hd5) is on the source disk by typing:

    lspv -l SourceDiskName | grep hd5

    If you get no output, the boot logical volume is not located on the source disk. Continue to step 5.

    If you get output similar to the following:

    hd5            2   2   02..00..00..00..00   /blv

    then run the following command:

    migratepv -l hd5 SourceDiskName DestinationDiskName

    You will receive a message warning you to perform the bosboot command on the destination disk. You must also perform a mkboot -c command to clear the boot record on the source. Type the following sequence of commands:

    bosboot -a -d /dev/DestinationDiskName
    bootlist -m normal DestinationDiskName
    mkboot -c -d /dev/SourceDiskName
  5. Migrate your data by typing the following SMIT fast path:
    smit migratepv 
  6. List the physical volumes, and select the source physical volume you examined previously.
  7. Go to the DESTINATION physical volume field. If you accept the default, all the physical volumes in the volume group are available for the transfer. Otherwise, select one or more disks with adequate space for the partitions you are moving (from step 3).
  8. If you wish, go to the Move only data belonging to this LOGICAL VOLUME field, and list and select a logical volume. You move only the physical partitions allocated to the logical volume specified that are located on the physical volume selected as the source physical volume.
  9. Press Enter to move the physical partitions.

At this point, the data resides on the new (destination) disk. The original (source) disk, however, remains in the volume group. If the disk is still reliable, you could continue to use it as a hot spare disk (see Designating Hot Spare Disks). When the disk is failing, however, it is advisable to do the following steps:

  1. To remove the source disk from the volume group, type:
    reducevg VGName SourceDiskName
  2. To physically remove the source disk from the system, type:
    rmdev -l SourceDiskName -d
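For a disk in a user-defined volume group, the whole migration can be sketched on the command line; datavg, hdisk2 (the failing source), and hdisk1 (the new destination) are hypothetical names, and migratepv is the command-line form of the SMIT steps above.

```shell
# Hypothetical sketch: migrate all data off a failing disk in a user-defined group.
extendvg datavg hdisk1          # step 1: add the new disk to the volume group
lspv hdisk2 | grep "USED PPs"   # step 3: hdisk1 needs at least this many free PPs
migratepv hdisk2 hdisk1         # steps 5-9: move all partitions to the new disk
reducevg datavg hdisk2          # remove the emptied source disk from the group
rmdev -l hdisk2 -d              # remove its device definition from the system
```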

Mirroring a Volume Group

The following scenario explains how to mirror a normal volume group. If you want to mirror the root volume group (rootvg), see Mirroring the Root Volume Group.

The following instructions show you how to mirror a volume group using the System Management Interface Tool (SMIT). You can also use Web-based System Manager (select a volume group in the Volumes container, then choose Mirror from the Selected menu). Experienced administrators can use the mirrorvg command.

Note
The following instructions assume you understand the mirroring and logical volume manager (LVM) concepts explained in AIX 5L Version 5.2 System Management Concepts: Operating System and Devices.
  1. With root authority, add a disk to the volume group using the following SMIT fast path:
    smit extendvg
  2. Mirror the volume group onto the new disk by typing the following SMIT fast path:
    smit mirrorvg
  3. In the first panel, select a volume group for mirroring.
  4. In the second panel, you can define mirroring options or accept defaults. Online help is available if you need it.
Note
When you complete the SMIT panels and click OK or exit, the underlying command can take a significant amount of time to complete. The length of time is affected by error checking, the size and number of logical volumes in the volume group, and the time it takes to synchronize the newly mirrored logical volumes.

At this point, all changes to the logical volumes will be mirrored as you specified in the SMIT panels.
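On the command line, the same operation reduces to two commands; vg01 and hdisk4 are hypothetical names for the volume group and the new disk.

```shell
# Hypothetical sketch: mirror the user-defined volume group vg01 onto hdisk4.
extendvg vg01 hdisk4   # add the new disk to the volume group
mirrorvg vg01          # create and synchronize mirror copies of every logical volume
```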

Mirroring the Root Volume Group

The following scenario explains how to mirror the root volume group (rootvg).

Notes:
  1. Mirroring the root volume group requires advanced system administration experience. If not done correctly, you can cause your system to be unbootable.
  2. Mirrored dump devices are supported in AIX 4.3.3 or later.

In the following scenario, the rootvg is contained on hdisk01, and the mirror is being made to a disk called hdisk11:

  1. Check that hdisk11 is supported by AIX as a boot device:
    bootinfo -B hdisk11
    If this command returns a value of 1, the selected disk is bootable by AIX. Any other value indicates that hdisk11 is not a candidate for rootvg mirroring.
  2. Extend rootvg to include hdisk11, using the following command:
    extendvg rootvg hdisk11

    If you receive the following error messages:

    0516-050 Not enough descriptor space left in this volume group. Either try
    adding a smaller PV or use another volume group.

    or a message similar to:

    0516-1162 extendvg: Warning, The Physical Partition size of 16 requires the 
    creation of 1084 partitions for hdisk11. The limitation for volume group 
    rootvg is 1016 physical partitions per physical volume. Use chvg command with 
    the -t option to attempt to change the maximum physical partitions per Physical 
    Volume for this volume group.

    You have the following options: add a smaller physical volume to the volume group, or use the chvg command with the -t option to raise the maximum number of physical partitions per physical volume, as the messages suggest.

  3. Mirror the rootvg, using the exact mapping option, as shown in the following command:
    mirrorvg -m rootvg hdisk11
    This command will turn off quorum when the volume group is rootvg. If you do not use the exact mapping option, you must verify that the new copy of the boot logical volume, hd5, is made of contiguous partitions.
  4. Initialize all boot records and devices, using the following command:
    bosboot -a
  5. Initialize the boot list with the following command:
    bootlist -m normal hdisk01 hdisk11
    Notes:
    1. Even though the bootlist command identifies hdisk11 as an alternate boot disk, it cannot guarantee that the system will use hdisk11 as the boot device if hdisk01 fails. In that case, you might have to boot from the product media, select maintenance, and reissue the bootlist command without naming the failed disk.
    2. If your hardware model does not support the bootlist command, you can still mirror the rootvg, but you must actively select the alternate boot disk when the original disk is unavailable.
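Steps 1 through 5 above can be sketched as a single script. The stub functions imitate the AIX commands so the control flow can be read (and run) outside AIX; on a real system, delete the stubs and run the genuine commands with root authority:

```shell
# Sketch of mirroring rootvg onto hdisk11 (disk names from the scenario above).
# The stubs below are for illustration only; remove them on a real AIX system.
bootinfo() { echo 1; }                 # stub: pretend hdisk11 is bootable
extendvg() { echo "extendvg $*"; }
mirrorvg() { echo "mirrorvg $*"; }
bosboot()  { echo "bosboot $*"; }
bootlist() { echo "bootlist $*"; }

if [ "$(bootinfo -B hdisk11)" -eq 1 ]; then   # step 1: disk must be bootable
    extendvg rootvg hdisk11                   # step 2: add the disk to rootvg
    mirrorvg -m rootvg hdisk11                # step 3: exact-mapping mirror
    bosboot -a                                # step 4: rebuild boot records
    bootlist -m normal hdisk01 hdisk11        # step 5: update the boot list
else
    echo "hdisk11 is not bootable; choose another disk" >&2
fi
```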

Removing a Disk while the System Remains Available

The following procedure describes how to remove a disk using the hot-removability feature, which lets you remove the disk without turning the system off. This feature is only available on certain systems.

Hot removability is useful when you want to remove a disk that contains data you intend to keep (described in Removing a Disk with Data) or a disk whose data you do not need (described in Removing a Disk without Data).

Removing a Disk with Data

The following procedure describes how to remove a disk that contains data without turning the system off. The disk you are removing must be in a separate non-rootvg volume group. Use this procedure when you want to move a disk to another system.

  1. To list the volume group associated with the disk you want to remove, type:
    smit lspv
    Your output looks similar to the following:
    PHYSICAL VOLUME:    hdisk2                   VOLUME GROUP:     imagesvg         
    PV IDENTIFIER:      00083772caa7896e VG IDENTIFIER:    0004234500004c00000000e9b5cac262
    PV STATE:           active                                                      
    STALE PARTITIONS:   0                        ALLOCATABLE:      yes              
    PP SIZE:            16 megabyte(s)           LOGICAL VOLUMES:  5                
    TOTAL PPs:          542 (8672 megabytes)     VG DESCRIPTORS:   2                
    FREE PPs:           19 (304 megabytes)       HOT SPARE:        no               
    USED PPs:           523 (8368 megabytes)                                        
    FREE DISTRIBUTION:  00..00..00..00..19                                          
    USED DISTRIBUTION:  109..108..108..108..90                                      
    The name of the volume group is listed in the VOLUME GROUP field. In this example, the volume group is imagesvg.
  2. To verify that the disk is in a separate non-rootvg volume group, type:
    smit lsvg
    Then select the volume group associated with your disk (in this example, imagesvg). Your output looks similar to the following:
    VOLUME GROUP:   imagesvg                 VG IDENTIFIER:  0004234500004c00000000e9b5cac262                                                                       
    VG STATE:       active                   PP SIZE:        16 megabyte(s)         
    VG PERMISSION:  read/write               TOTAL PPs:      542 (8672 megabytes)   
    MAX LVs:        256                      FREE PPs:       19 (304 megabytes)     
    LVs:            5                        USED PPs:       523 (8368 megabytes)   
    OPEN LVs:       4                        QUORUM:         2                      
    TOTAL PVs:      1                        VG DESCRIPTORS: 2                      
    STALE PVs:      0                        STALE PPs:      0                      
    ACTIVE PVs:     1                        AUTO ON:        yes                    
    MAX PPs per PV: 1016                     MAX PVs:        32                     
    LTG size:       128 kilobyte(s)          AUTO SYNC:      no                     
    HOT SPARE:      no                                                              
    In this example, the TOTAL PVs field indicates there is only one physical volume associated with imagesvg. Because all data in this volume group is contained on hdisk2, hdisk2 can be removed using this procedure.
  3. To unmount any file systems on the logical volumes on the disk, type:
    smit umountfs
  4. To deactivate and export the volume group in which the disk resides, and then to unconfigure the disk and turn it off, type:
    smit exportvgrds

    When the procedure completes, the system displays a message indicating the cabinet number and disk number of the disk to be removed. If the disk is placed at the front side of the cabinet, the disk shutter automatically opens.

  5. Look at the LED display for the disk you want to remove. Ensure the yellow LED is off (not lit).
  6. Physically remove the disk. For more information about the removal procedure, see the service guide for your machine.

At this point, the disk is physically and logically removed from your system. If you are permanently removing this disk, this procedure is completed. Otherwise, you can install the disk in another system (as described at the beginning of this procedure) or replace it with a new disk.
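The step-2 check that the volume group spans only one physical volume can be scripted. The sketch below parses lsvg-style output supplied as sample text (the same TOTAL PVs field shown above) rather than calling lsvg itself, so it runs anywhere:

```shell
# count_pvs extracts the TOTAL PVs value from lsvg-style output on stdin.
count_pvs() {
    awk '/TOTAL PVs:/ { print $3 }'
}

# Sample line taken from the lsvg output shown above for imagesvg.
sample='TOTAL PVs:      1                        VG DESCRIPTORS: 2'
pvs=$(printf '%s\n' "$sample" | count_pvs)
if [ "$pvs" -eq 1 ]; then
    echo "volume group is contained on a single disk; safe to remove it"
fi
```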

Removing a Disk without Data

The following procedure describes how to remove a disk that contains either no data or no data that you want to keep.

Attention: The following procedure erases any data that resides on the disk.
  1. To unmount any file systems on the logical volumes on the disk, type:
    smit umountfs
  2. To deactivate and export any volume group in which the disk resides, and then to unconfigure the disk and turn it off, type:
    smit exportvgrds

    When the procedure completes, the system displays a message indicating the cabinet number and disk number of the disk to be removed. If the disk is placed at the front side of the cabinet, the disk shutter automatically opens.

  3. Look at the LED display for the disk you want to remove. Ensure the yellow LED is off (not lit).
  4. Physically remove the disk. For more information about the removal procedure, see the service guide for your machine.

At this point, the disk is physically and logically removed from your system. If you are permanently removing this disk, this procedure is completed. If you want to replace the removed disk with a new one, see Adding Disks while the System Remains Available.
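On systems without the exportvgrds fast path, the same logical removal can be sketched with standard LVM commands. The names (scrapvg, hdisk4) are examples, and the stubs stand in for the AIX commands so the sequence runs anywhere for illustration:

```shell
# Hypothetical volume group scrapvg on disk hdisk4.  Delete these stubs
# on a real AIX system; varyoffvg, exportvg, and rmdev are the genuine
# commands.
varyoffvg() { echo "varyoffvg $*"; }
exportvg()  { echo "exportvg $*"; }
rmdev()     { echo "rmdev $*"; }

varyoffvg scrapvg        # deactivate the volume group
exportvg scrapvg         # remove its definition from the system
rmdev -dl hdisk4         # unconfigure and delete the disk device
```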

Removing a Logical Volume

To remove a logical volume, use one of the following procedures. The primary difference between them is what happens to the /etc/filesystems file: removing a logical volume by removing its file system deletes the file system, its associated logical volume, and its stanza in /etc/filesystems, whereas removing the logical volume directly deletes the logical volume and its file system but leaves the stanza in /etc/filesystems.

Removing a Logical Volume by Removing the File System
Attention: When you remove a file system, you destroy all data in the specified file systems and logical volume.

The following procedure explains how to remove a JFS or JFS2 file system, its associated logical volume, its associated stanza in the /etc/filesystems file, and, optionally, the mount point (directory) where the file system is mounted. If you want to remove a logical volume with a different type of file system mounted on it or a logical volume that does not contain a file system, refer to Removing a Logical Volume Only.

To remove a journaled file system through Web-based System Manager, use the following procedure:

  1. If Web-based System Manager is not already running, with root authority type wsm on the command line.
  2. Select a host name.
  3. Select the File Systems container.
  4. Select the Journaled File Systems container.
  5. Select the file system you want to remove.
  6. From the Selected menu, select Unmount.
  7. From the Selected menu, select Delete.

To remove a journaled file system through SMIT, use the following procedure:

  1. Unmount the file system that resides on the logical volume with a command similar to the following example:

    umount /adam/usr/local

    Note: You cannot use the umount command on a device in use. A device is in use if any file is open for any reason or if a user's current directory is on that device.
  2. To remove the file system, type the following fast path:
    smit rmfs
  3. Select the name of the file system you want to remove.
  4. Go to the Remove Mount Point field and toggle to your preference. If you select yes, the underlying command will also remove the mount point (directory) where the file system is mounted (if the directory is empty).
  5. Press Enter to remove the file system. SMIT prompts you to confirm whether you want to remove the file system.
  6. Confirm you want to remove the file system. SMIT displays a message when the file system has been removed successfully.

At this point, the file system, its data, and its associated logical volume are completely removed from your system.
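The SMIT steps above can also be sketched on the command line, using the example file system /adam/usr/local. The stubs stand in for the AIX commands so the sketch runs outside AIX; the -r flag corresponds to answering yes in the Remove Mount Point field:

```shell
# Delete these stubs on a real AIX system; umount and rmfs are the
# genuine commands.
umount() { echo "umount $*"; }
rmfs()   { echo "rmfs $*"; }

umount /adam/usr/local   # step 1: unmount the file system
rmfs -r /adam/usr/local  # remove the file system, its logical volume, its
                         # /etc/filesystems stanza, and (with -r) the empty
                         # mount point directory
```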

Removing a Logical Volume Only
Attention: Removing a logical volume destroys all data in the specified file systems and logical volume.

The following procedures explain how to remove a logical volume and any associated file system. You can use this procedure to remove a non-JFS file system or a logical volume that does not contain a file system. After the following procedures describe how to remove a logical volume, they describe how to remove any non-JFS file system's stanza in the /etc/filesystems file.

To remove a logical volume through Web-based System Manager, use the following procedure:

  1. If Web-based System Manager is not already running, with root authority, type wsm on the command line.
  2. Select a host name.
  3. If the logical volume does not contain a file system, skip to step 10.
  4. Select the File Systems container.
  5. Select the container for the appropriate file system type.
  6. Select the file system you want to unmount.
  7. From the Selected menu, select Unmount.
  8. Select the appropriate file system container in the navigation area to list its file systems.
  9. Note the logical volume name of the system you want to remove.
  10. Select the Volumes container.
  11. Select the Logical Volumes container.
  12. Select the logical volume you want to remove.
  13. From the Selected menu, select Delete.

To remove a logical volume through SMIT, use the following procedure:

  1. If the logical volume does not contain a file system, skip to step 4.
  2. Unmount all file systems associated with the logical volume by typing:
    unmount /FSname

    Where /FSname is the full path name of a file system.

    Notes:
    1. The unmount command fails if the file system you are trying to unmount is currently being used. The unmount command executes only if none of the file system's files are open and no user's current directory is on that device.
    2. Another name for the unmount command is umount. The names are interchangeable.
  3. To list information you need to know about your file systems, type the following fast path:
    smit lsfs
    The following is a partial listing:
    Name            Nodename   Mount Pt         ...
       
    /dev/hd3        --         /tmp             ...
                                             
    /dev/locallv    --         /adam/usr/local  ... 

    Assuming standard naming conventions for the second listed item, the file system is named /adam/usr/local and the logical volume is locallv. To verify this, type the following fast path:

    smit lslv2
    The following is a partial listing:
    imagesvg:                                                                 
    LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT 
    hd3                 jfs        4     4     1    open/syncd    /tmp
    locallv             mine       4     4     1    closed/syncd  /adam/usr/local
  4. To remove the logical volume, type the following fast path on the command line:
    smit rmlv
  5. Select the name of the logical volume you want to remove.
  6. Go to the Remove Mount Point field and toggle to your preference. If you select yes, the underlying command will also remove the mount point (directory) where the file system is mounted (if any, and if that directory is empty).
  7. Press Enter to remove the logical volume. SMIT prompts you to confirm whether you want to remove the logical volume.
  8. Confirm you want to remove the logical volume. SMIT displays a message when the logical volume has been removed successfully.
  9. If the logical volume had a non-JFS file system mounted on it, remove the file system and its associated stanza in the /etc/filesystems file, as shown in the following example:
    rmfs /adam/usr/local

    Or, you can use the device name of the logical volume as follows:

    rmfs /dev/locallv

At this point, the logical volume is removed. If the logical volume contained a non-JFS file system, that system's stanza has also been removed from the /etc/filesystems file.
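Steps 2 through 9 of the SMIT procedure correspond roughly to the following commands, using the locallv and /adam/usr/local names from the example above. The stubs replace the AIX commands so the sketch runs anywhere:

```shell
# Delete these stubs on a real AIX system; unmount, rmlv, and rmfs are
# the genuine commands.
unmount() { echo "unmount $*"; }
rmlv()    { echo "rmlv $*"; }
rmfs()    { echo "rmfs $*"; }

unmount /adam/usr/local  # unmount the non-JFS file system
rmlv -f locallv          # remove the logical volume (-f skips the prompt)
rmfs /adam/usr/local     # remove the stanza left in /etc/filesystems
```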

Resizing a RAID Volume Group

In AIX 5.2 and later versions, on systems that use a redundant array of independent disks (RAID), chvg and chpv command options provide the ability to add a disk to the RAID group and grow the size of the physical volume that LVM uses without interruptions to the use or availability of the system.

Notes:
  1. This feature is not available while the volume group is activated in classic or in enhanced concurrent mode.
  2. The rootvg volume group cannot be resized using the following procedure.
  3. A volume group with an active paging space cannot be resized using the following procedure.

The size of all disks in a volume group is automatically examined when the volume group is activated (varyon). If growth is detected, the system generates an informational message.

The following procedure describes how to grow disks in a RAID environment:

  1. To check for disk growth and resize if needed, type the following command:
    chvg -g VGname
    Where VGname is the name of your volume group. This command examines all disks in the volume group. If any have grown in size, it attempts to add physical partitions to the physical volume. If necessary, it will determine the appropriate 1016 limit multiplier and convert the volume group to a big volume group.
  2. To turn off LVM bad block relocation on a RAID disk, type the following command:
    chpv -r n PVname
    Where PVname is the name of your physical volume.
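The two steps above can be sketched together as commands, with datavg and hdisk5 as example names. The stubs replace the AIX commands so the sketch runs outside AIX for illustration:

```shell
# Delete these stubs on a real AIX system; chvg and chpv are the
# genuine commands.
chvg() { echo "chvg $*"; }
chpv() { echo "chpv $*"; }

chvg -g datavg    # step 1: detect disk growth and add physical partitions
chpv -r n hdisk5  # step 2: turn off LVM bad block relocation on the disk
```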
