VSAM buffering

The physical process of going out to disk (or tape) for data takes time. Modern DASD devices have large cache capacity, and read-ahead algorithms will preload up to a cylinder's worth of data into cache. This eliminates I-O delay due to seek time and rotational positioning, but there is still a lot of benefit to be gained from I-O buffer tuning.
The point of buffering is to avoid going to disk where possible, by holding data in memory.

JCL buffers

The VSAM unit of data transfer is the CI (control interval), and the CI size is often defaulted to 4096 bytes. By default, VSAM provides two data buffers and one index buffer, but these generally do not give adequate performance. The following formulae have worked well over the years.

For sequential processing, the job reads through the DATA component of the file from start to end. Define enough data buffers to ensure that the process does a minimum of I-Os to disk; sequential processing also gets a lot of assistance from the disk cache. Use:

   BUFND = (2 * number of CIs per track) + 3
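As a worked example (assuming a 3390 DASD and a 4096-byte data CI, which gives 12 CIs per track; check a LISTCAT for your actual values):

   BUFND = (2 * 12) + 3 = 27

   //SEQDD  DD DSN=aa.bb.cc,DISP=SHR,
   //       AMP=('BUFND=27')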

For random processing, if the file is small enough, define enough index buffers to contain the entire index; otherwise aim to get the sequence set and the top index level into buffers. The sequence set is the lowest level of the index. Use:

   BUFNI = (TI - HURBA/CASZ) + 1

where

   TI    = total number of index records in the Index component
   HURBA = high-used RBA of the Data component
   CASZ  = data CI size multiplied by the number of CIs per CA
           (that is, the CA size in bytes)
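For illustration, take some hypothetical LISTCAT values: 45 index records, a data HURBA of 7,372,800, a 4096-byte data CI and 180 CIs per CA. The calculation then works out as:

   CASZ         = 4096 * 180           = 737,280
   HURBA / CASZ = 7,372,800 / 737,280  = 10   (CAs = sequence-set records)
   BUFNI        = (45 - 10) + 1        = 36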

All these values can be found by examining a LISTCAT of the VSAM cluster. Just issue the TSO command LISTCAT ENT(cluster.name) ALL.
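The same listing can also be produced in batch via IDCAMS; a minimal sketch, with aa.bb.cc standing in for the cluster name:

   //LISTC    EXEC PGM=IDCAMS
   //SYSPRINT DD SYSOUT=*
   //SYSIN    DD *
     LISTCAT ENTRIES(aa.bb.cc) ALL
   /*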

The following JCL can be used to place index data in buffers.

//DD1    DD DSN=aa.bb.cc,DISP=SHR,
//       AMP=('BUFNI=50')

In this case the index was fairly small, and it could be contained in 50 buffers. With good buffering, the I-O rate on a heavily accessed index can drop by a factor of ten or more.
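Where a job does both sequential and random access, data and index buffer counts can be coded together in a single AMP parameter; a sketch with illustrative values:

   //DD1    DD DSN=aa.bb.cc,DISP=SHR,
   //       AMP=('BUFND=27,BUFNI=50')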
