ITEM: H0262L

Cannot initialize DATABASE.


Question:

I have a RISC System/6000 with AIX 3.2.3E.  I have a 2GB logical volume
with a JFS file system.  However, I am unable to programmatically create
a file that is near the 2GB limit.  What I see is the file growing 1MB at
a time until the (single) file is about 1,981MB.

I would like to know what the overhead for the file system is and why I
cannot create a file closer to 2GB in size.

Response:

The following is an explanation of the overhead required by the JFS file
system:

The number of disk i-nodes available to a file system is dependent on the 
size of the file system.  When a journaled file system is created, it is 
automatically allocated one i-node for every 4KB block in the file system.  
Each disk i-node in the journaled file system is a 128-byte structure (see
/usr/include/jfs/inode.h).  This means that 1/32 (0.03125) of the file system
is consumed by disk i-nodes.  When enough files have been created to use
all the available i-nodes, no more files can be created, even if the file
system has free space.

Additional overhead comes from the journaled file system's use of the indirect
method of allocating data blocks.  

There are 8 addresses listed in the i-node that directly point to 4K byte data
blocks.  The maximum size of a file using the direct block allocation method is
32K bytes or 8x4K.  When the file requires more than 32K bytes, an indirect
block is used.

The i_rindirect field of a disk i-node contains the real disk address of
either an indirect block or a double indirect block.  An indirect block is a
specially formatted data block that contains 1024 pointers to data blocks;
each pointer is 4 bytes.  Using the indirect block allocation method, the
file can be up to 4M bytes or 1024x4K.

The double indirect method uses the i_rindirect field to point to a double
indirect block.  The double indirect block contains 512 pointers to indirect
blocks.  Therefore, the largest file size allowed is 2G bytes or 512(1024x4K).

Results of testing in our lab (AIX 3.2.5):

In my testing I created a 2GB file system and created one large file in the
file system.  Here are the results I found:

# mkvg lizvg
# mklv -y lizlv lizvg 512
# crfs -v jfs -m /liz -d /dev/lizlv
# df
Filesystem    Total KB    free %used   iused %iused Mounted on
/dev/hd4         12288    5832   52%     637    15% /
/dev/hd9var       4096    3156   22%      65     6% /var
/dev/hd2         98304   12564   87%    4302    17% /usr
/dev/hd3         12288   11052   10%      86     2% /tmp
/dev/hd1          4096    3912    4%      25     2% /home
/dev/lizlv     2097152 2031288    3%      16     0% /liz

# ls -al /liz
total 16
drwxr-sr-x   2 sys      sys          512 Mar  2 14:55 ./
drwxr-xr-x  17 bin      bin          512 Mar  2 14:54 ../

==> Ran C program that writes a file until the device runs out of space <==

# df
Filesystem    Total KB    free %used   iused %iused Mounted on
/dev/hd4         12288    5832   52%     637    15% /
/dev/hd9var       4096    3156   22%      65     6% /var
/dev/hd2         98304   12564   87%    4302    17% /usr
/dev/hd3         12288   11048   10%      86     2% /tmp
/dev/hd1          4096    3912    4%      25     2% /home
/dev/lizlv     2097152       0  100%      17     0% /liz

# ls -al /liz
total 4058616
drwxr-sr-x   2 sys      sys          512 Mar  2 15:17 ./
drwxr-xr-x  17 bin      bin          512 Mar  2 14:54 ../
-rwSr-Sr--   1 root     sys      2078003200 Mar  2 15:27 afile

According to the initial df, we should have been able to use 2031288K or
2080038912 bytes for our data.  According to "ls -al" we were only able 
to use 2078003200 bytes.  This is a difference of 2035712 bytes or 497 data
blocks (2035712/4K).  For a file of this size, double indirection is used,
which requires data blocks for storing addresses.  Here is how to figure how
many data blocks will be needed to store the addresses for a file of this
size:

2078003200/4096 = 507325 data blocks needed to store the actual data

507325/1024 = 495.4, rounded up to 496 single indirect blocks required to store the addresses

1 double indirect block required

496 + 1 = 497 blocks required to store the addresses for the file's data

If you add the space the file system used before any files were created
(2097152 - 2031288 = 65864KB for i-nodes and other metadata, per the first
df) to the 497 blocks used for indirection, you get approximately 69480448
bytes, or 16963 4K blocks, of overhead for the JFS filesystem.

Any additional blocks that you are unable to access could be due either to
bad blocks on the disk (use the certify utility in Diagnostics to check this)
or to free blocks that for some reason are not on the free list (use fsck to
check this).  Otherwise, you should get numbers very similar to the above if
you have a 2GB filesystem with only one large file in it.


Support Line: Cannot initialize DATABASE. ITEM: H0262L
Dated: March 1994 Category: N/A