If a disk larger than 4 GB is added to a volume group that uses the default 4 MB physical partition size, the disk addition fails: at 4 MB per partition, the limit of 1016 physical partitions per disk can map only about 4 GB (1016 x 4 MB) of capacity. The warning message provided is:
The Physical Partition Size of <number A> requires the creation of <number B> partitions for hdiskX. The system limitation is <number C> physical partitions per disk at a factor value of <number D>. Specify a larger Physical Partition Size or a larger factor value in order to create a volume group on this disk.
There are two instances where this limitation is enforced:
1. The user tries to create a volume group (mkvg command) in which a disk would require more than 1016 physical partitions.
2. A disk that violates the 1016 limitation is added to a preexisting volume group (extendvg command).
In either case, the workaround is one of the following:
* Select a larger physical partition size from the supported range of 1, 2, (4), 8, 16, 32, 64, 128, 256, 512, or 1024 megabytes (4 MB is the default) and create the volume group with the mkvg -s option.
* Use a suitable factor (mkvg -t option) that allows multiples of 1016 partitions per disk.
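As a minimal sketch of both workarounds, assuming a new 7 GB disk named hdisk4 and a volume group named datavg (both names hypothetical):

    # Workaround 1: use a larger physical partition size; 8 MB allows
    # up to 1016 x 8 MB = 8128 MB (about 7.9 GB) per disk:
    mkvg -y datavg -s 8 hdisk4

    # Workaround 2: keep the default 4 MB partition size but apply a
    # factor of 2, which allows 2 x 1016 = 2032 partitions per disk
    # (and reduces the maximum number of disks in the volume group
    # from 32 to 16):
    mkvg -y datavg -s 4 -t 2 hdisk4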
If the install code detects that the rootvg drive is larger than 4 GB, it changes the mkvg -s value until the entire disk capacity can be mapped by the available 1016 partitions. This install change also implies that all other disks added to rootvg, regardless of size, are also defined at that physical partition size.
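For example (hypothetical sizes): a 9.1 GB (9100 MB) rootvg drive needs 9100 / 1016, or about 9 MB per partition; 8 MB partitions cover only 1016 x 8 MB = 8128 MB, so the install code would select the next valid size, 16 MB, and every disk later added to rootvg would also be defined with 16 MB partitions.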
For systems using a redundant array of independent disks (RAID), the /dev/hdiskX name used by LVM may consist of many non-4 GB disks. In this case, the 1016 requirement still exists. LVM is unaware of the size of the individual disks that really make up /dev/hdiskX. LVM bases the 1016 limitation on the recognized size of /dev/hdiskX, not on the real physical disks that make up /dev/hdiskX.
Not enough descriptor area space left in this volume group. Either try adding a smaller PV or use another volume group.
On every disk in a volume group, there exists an area called the volume group descriptor area (VGDA). This space allows the user to take a volume group to another system using the importvg command. The VGDA contains the names of disks that make up the volume group, their physical sizes, partition mapping, logical volumes that exist in the volume group, and other pertinent LVM management information.
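As a minimal sketch of moving a volume group between systems, assuming a volume group named datavg on hdisk3 (both names hypothetical):

    # On the source system, deactivate and export the volume group:
    varyoffvg datavg
    exportvg datavg

    # After attaching the disks to the target system, import the
    # volume group; the VGDA on the disk supplies the logical volume
    # and partition-mapping information:
    importvg -y datavg hdisk3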
When the user creates a volume group, the mkvg command defaults to allowing the new volume group to have a maximum of 32 disks. However, as bigger disks have become more prevalent, this 32-disk limit is rarely reached, because the space in the VGDA is consumed faster as it accounts for the capacity of the bigger disks. The maximum VGDA space for 32 disks is a fixed size that is part of the LVM design. Large disks require more management-mapping space in the VGDA, which reduces the number and the size of disks that can be added to an existing volume group. When a disk is added to a volume group, not only does the new disk get a copy of the updated VGDA, but all existing drives in the volume group must also be able to accept the new, updated VGDA.
The exception to this description of the maximum VGDA is rootvg. To provide users more free disk space, when rootvg is created, the mkvg command does not use the maximum limit of 32 disks that is allowed in a volume group. Instead, the number of disks picked in the install menus is used as the reference number by the mkvg -d option during the creation of rootvg. This -d number is 7 for one disk, plus one more for each additional disk picked. For example, if two disks are picked the number is 8, if three disks are picked the number is 9, and so on. This limit does not prohibit the user from adding more disks to rootvg after installation. The amount of free space left in a VGDA, and thus the number and size of the disks that can be added to a volume group, depends on the size and number of disks already defined for that volume group.
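A quick way to check the limit that was set for an existing volume group is the MAX PVs field of the lsvg output (a sketch assuming standard lsvg output; the value varies with the number of disks picked at install):

    # Display rootvg characteristics; the MAX PVs field reflects the
    # -d value that mkvg used when rootvg was created:
    lsvg rootvg | grep "MAX PVs"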
If the customer requires more VGDA space in rootvg, they should use the mksysb and migratepv commands to reconstruct and reorganize rootvg (the only way to change the -d limitation is to recreate the volume group).
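A heavily abbreviated sketch of that reconstruction, assuming a tape device at /dev/rmt0 and rootvg disks hdisk0 and hdisk1 (all names hypothetical; the full procedure is a backup, reinstall, and restore):

    # Create a bootable rootvg backup, regenerating /image.data first:
    mksysb -i /dev/rmt0

    # After reinstalling from the mksysb image, migratepv can move
    # physical partitions from one rootvg disk to another, for
    # example to empty hdisk0 before reorganizing:
    migratepv hdisk0 hdisk1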
Note: Do not place user data onto rootvg disks. This separation provides an extra degree of system integrity.
Warning, cannot write lv control block data.
Most of the time, this warning is the result of database programs accessing raw logical volumes (bypassing the JFS) as storage media. When this occurs, the database information is literally written over the LVCB. Although this might seem fatal, it is not. After the LVCB is overwritten, the user can still:
* Expand the logical volume.
* Create mirrored copies of the logical volume.
* Remove the logical volume.
* Create a journaled file system to mount on the logical volume.
There are limitations to deleting LVCBs. Logical volumes with deleted LVCBs might not be fully imported into other systems. During an import, the LVM importvg command scans the LVCBs of all defined logical volumes in a volume group for information concerning the logical volumes. If an LVCB is deleted, the imported volume group still defines the logical volume to the new system that is accessing this volume group, and the user can still access the raw logical volume. However, any journaled file system information is lost, and the associated mount point is not imported into the new system. The user must create new mount points, and the availability of previous data stored in the file system is not assured.
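As a hedged sketch of manually restoring a mount point after such an import, assuming the logical volume testlv still holds an intact journaled file system and that loglv00 is the volume group's JFS log (all names hypothetical), add a stanza to /etc/filesystems:

    /testfs:
            dev   = /dev/testlv
            vfs   = jfs
            log   = /dev/loglv00
            mount = false
            check = true

Then create the mount point and verify the file system before mounting:

    # Create the directory, check the file system, and mount it:
    mkdir /testfs
    fsck /dev/testlv
    mount /testfs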
Also, during an import of a logical volume with an erased LVCB, some non-JFS information concerning the logical volume (displayed by the lslv command) cannot be found. When this occurs, the system uses default logical volume information to populate the ODM. Thus, some output from the lslv command might be inconsistent with the real logical volume. If any logical volume copies still exist on the original disks, the information is not correctly reflected in the ODM database. Use the rmlvcopy and mklvcopy commands to rebuild any logical volume copies and synchronize the ODM.
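As a minimal sketch, assuming a logical volume named testlv that originally had two copies and a target disk hdisk7 (names hypothetical):

    # Reduce the logical volume to a single copy, removing the
    # stale mirror information:
    rmlvcopy testlv 1

    # Re-create the second copy on hdisk7, then synchronize it:
    mklvcopy testlv 2 hdisk7
    syncvg -l testlv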