LSR buffers

One way to retain data in storage is to use LSR buffers. LSR (Local Shared Resource) buffers are designed for, and mainly used by, OLTP (On-Line Transaction Processing) systems such as CICS and IMS, which use direct access processing. An OLTP system will typically have hundreds of open files, and it is inefficient to buffer each file independently. LSR buffers are shared by the open files within the same address space.
LSR buffering is designed for direct access and gives better performance than NSR buffering for direct access applications. Buffered CIs are replaced using a least recently used (LRU) algorithm, which suits random processing: it is ideal for applications that frequently access the same set of data, as that data will be kept in the buffers. LSR has no look-ahead ability, so it is not suitable for sequential I/O. If your application mainly uses sequential I/O, use NSR buffers.
LSR can use deferred writes, where updated data is held in the buffer, so if that CI needs to be updated again, it is already in the buffer. Applications using LSR can also use hiperspace as a second level of buffering.
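The two ideas above, LRU replacement and deferred writes, can be sketched in a few lines of Python. This is a conceptual model only, not VSAM internals; the class and method names are invented for illustration:

```python
from collections import OrderedDict

class LSRPoolModel:
    """Toy model of an LSR-style buffer pool: LRU replacement plus
    deferred writes. Illustrative only - not how VSAM is implemented."""

    def __init__(self, nbufs):
        self.nbufs = nbufs
        self.buffers = OrderedDict()   # CI number -> (data, dirty flag)
        self.disk_reads = 0
        self.disk_writes = 0

    def get_ci(self, ci):
        if ci in self.buffers:
            self.buffers.move_to_end(ci)        # touch: now most recently used
            return self.buffers[ci][0]
        self.disk_reads += 1                    # miss: go out to disk
        if len(self.buffers) >= self.nbufs:
            _, (_, dirty) = self.buffers.popitem(last=False)  # evict LRU CI
            if dirty:
                self.disk_writes += 1           # deferred write happens at eviction
        data = f"CI-{ci}"
        self.buffers[ci] = (data, False)
        return data

    def put_ci(self, ci, data):
        self.get_ci(ci)                         # make sure the CI is buffered
        self.buffers[ci] = (data, True)         # update in buffer, mark dirty
        self.buffers.move_to_end(ci)            # no immediate I/O: write deferred

pool = LSRPoolModel(nbufs=2)
pool.put_ci(1, "a")       # read CI 1, update it in the buffer
pool.put_ci(1, "b")       # second update finds the CI already buffered
pool.get_ci(2)
pool.get_ci(3)            # pool full: CI 1 is evicted and written out once
print(pool.disk_reads, pool.disk_writes)
```

Note that CI 1 was updated twice but written to disk only once, at eviction time: that is the saving deferred writes give you.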

STROBE is an excellent product for checking buffer usage. The following is a tailored example of Strobe output, which illustrates LSR buffers in action. The output has been simplified to suit the Web; the full output stretches to about 15 columns. NON I-O means the request was satisfied from the buffers; I-O means the operation had to go out to disk.

BUF SIZE  BUFNO     RETRIEVES          WRITES
                  I-O / NON I-O     I-O / NON I-O
    1024      5       3 / 39928         0 / 0
    2048    200     228 / 599066      343 / 0
    4096   1000   34146 / 321715     2639 / 435

You can see that the 2K buffers are performing exceptionally well, with almost all the data requests coming from buffers.
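One way to see this is to work out the buffer hit ratio for each pool, i.e. the proportion of retrieves satisfied without disk I/O. The figures below are taken from the Strobe output above:

```python
# Retrieve hit ratio = NON I-O retrieves / total retrieves,
# using the RETRIEVES columns from the Strobe output above.
pools = {1024: (3, 39928), 2048: (228, 599066), 4096: (34146, 321715)}

for size, (io, non_io) in pools.items():
    ratio = non_io / (io + non_io)
    print(f"{size:5d}-byte buffers: {ratio:.2%} of retrieves came from buffer")
```

The 2K pool satisfies over 99.9% of its retrieves from buffer, while the 4K pool manages roughly 90%, so the 4K pool is where extra buffers would do the most good.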

LSR cannot be used on data sets which have alternate indexes. If a program does both sequential processing and direct updates, the updates can sometimes hang, because a sequential read request is holding the buffer.

Batch LSR

Batch LSR (BLSR) is easier to set up than LSR, and it appears to use buffers more intelligently. BLSR works with both SMS-managed and non-SMS-managed data sets. However, if your data set is SMS-managed and in extended format, you will get better performance by using SMB.
If records are written out using multiple PUT statements, VSAM normally does an I/O for every PUT, but with BLSR these become deferred writes, so we get one write per CI. BLSR also uses a different algorithm for swapping out buffers: least recently used (LRU) rather than oldest, so there is more chance that the next record you need is already in a buffer.
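The difference between oldest-first and LRU replacement can be demonstrated with a short sketch. The access pattern here is invented purely for illustration; it mimics localised direct processing where one hot CI is revisited repeatedly:

```python
def misses(accesses, nbufs, lru):
    """Count buffer misses for an access stream under LRU or
    oldest-first (FIFO) replacement. Illustration only."""
    bufs = []
    miss = 0
    for ci in accesses:
        if ci in bufs:
            if lru:
                bufs.remove(ci)
                bufs.append(ci)      # LRU: a hit moves the CI to the 'young' end
            continue                 # oldest-first: a hit does not change the order
        miss += 1
        if len(bufs) >= nbufs:
            bufs.pop(0)              # evict the front: oldest, or least recently used
        bufs.append(ci)
    return miss

# A localised pattern: CI 1 is hot, the others come and go.
pattern = [1, 2, 3, 1, 4, 1, 5, 1, 6, 1, 7, 1]
print("oldest-first misses:", misses(pattern, 3, lru=False))
print("LRU misses:         ", misses(pattern, 3, lru=True))
```

With three buffers, oldest-first keeps throwing the hot CI out simply because it has been resident longest, while LRU keeps it in the pool, so LRU takes fewer misses on the same stream.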

BLSR works best with direct processing and localised data access, and is not suitable for sequential processing. However, if your data is mainly processed directly, with a little sequential access, then BLSR should improve performance. BLSR supports all the VSAM types except linear. The VSAM buffers and control blocks can be forced above the 16 MB line without having to use hiperspace. BLSR also allows the buffer pool to be shared among several VSAM data sets.

BLSR runs as a z/OS subsystem, and needs a change to the IEFSSNxx member in SYS1.PARMLIB to define it. Typically, this would be a line looking like

SUBSYS SUBNAME(BLSR)

Once the subsystem is activated, all that is needed is an extra JCL DD statement which allocates the file to be buffered:

//ddold DD SUBSYS=(BLSR,'DDNAME=ddnew','HBUFND=100','HBUFNI=20')
//ddnew DD DSN=aa.bb.cc,DISP=SHR etc.

Here 'ddold' is the ddname the program is expecting to find, and it points to 'ddnew', the ddname which allocates the actual data set.

GSR Buffers

Global Shared Resource (GSR) buffers are similar to LSR, and are used to share buffer space among VSAM data sets in multiple address spaces. The resource pools are held in CSA, not hiperspace. The IBM recommendation is to use LSR rather than GSR, as GSR uses the common area for its buffers, which is usually a limited resource.

Buffering products

Software products exist which automate all this buffering seamlessly.

One such product is Performance Essential (currently supplied by Rocket Software under exclusive license from EMC), which has been known to cut batch run times by 25%; in exceptional cases, individual batch job run times have been reduced by 95%.

Performance Essential runs as a started task, and has a central control repository containing generic or specific lists of jobs which are eligible for management. Once the product 'learns' about a job, it can then adjust buffers automatically, to cope with regular changes such as month-end runs.
