DFSMS, z/OS System Managed Storage - Storage Group

Storage groups are the fundamental concept of DFSMS. Mainframe disks were limited in size to about 8GB each, though much larger disks are now available. In the old days, you had to decide which disk you were going to place your file on, and hope it had enough space available. Space errors were common, especially in batch work. DFSMS groups disks together into storage pools, so you allocate by storage pool instead. It's much easier to maintain free space in a pool of 50 disks. Storage pools can also consist of tape volumes (not tape files) and optical volumes, which allows SMS to direct tape allocations to a VTS or automated library.



The different types of storage group that you can define are POOL, VIO, DUMMY, COPY POOL BACKUP, OBJECT, OBJECT BACKUP and TAPE.

You need to define your volumes as being DFSMS capable, then you add them to your storage pools. There are some simple rules regarding storage pools, volumes and datasets: a volume can only belong to one storage group, and a dataset must be allocated within a single storage group, although it can span several volumes inside that group.

When you are initialising new volumes, the easiest way to prepare them for SMS is to use the STORAGEGROUP keyword in your ICKDSF job like this:

INIT UNIT(D615) VOLID(D3D615) VTOC(1,0,270) -
   INDEX(0,1,14) VFY(OLDVOL) -
   STORAGEGROUP

If you want to know what volumes are defined to an existing storage group, the easiest way is to use the operator command, which you can issue from SDSF

D SMS,SG(poolname),LISTVOL

However access to these commands is often restricted. If you do not have access, the other way is to use the ISMF panels:

  1. Select the Storage Group option, 6, from the ISMF Primary Option Menu
  2. Select option 1, List, and this will list out all the storage groups
  3. Type LISTVOL in the Line Operator column next to a storage group that you are interested in and this will list out all the volumes

There is a special type of storage pool called a 'Reserve storage pool' that you can use as a place to hold volumes that have been initialised and formatted for SMS, but that you do not want to be used at this time.
You initialise reserved volumes with ICKDSF, but add a RESERVED parameter and an OWNERID parameter. The OWNERID should be IBMRSPrespoolname, where respoolname is the name of the reserve storage pool. The reserved volumes will be offline and cannot be brought online until they are re-initialised without the RESERVED parameter.
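Following the pattern of the INIT example above, a reserved volume might be initialised something like this, where the unit address, volser and the SPARE pool name are purely illustrative:

INIT UNIT(D616) VOLID(D3D616) VTOC(1,0,270) -
   INDEX(0,1,14) VFY(OLDVOL) -
   RESERVED OWNERID(IBMRSPSPARE)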

How many pools?

In general, the bigger the pool the better. Try to avoid lots of small pools. Why? Because you can run a large pool at a higher occupancy level without space problems, and that saves money. If you have a 5 volume pool which holds 40GB, then 20% free space is 8GB. That is not a lot, especially when the space gets fragmented. If you run a 2500GB pool at 90% occupancy, then you have 250GB free. What's right for the occupancy level depends very much on the type of data in the pool. Databases tend to be well behaved and predictable, so large database pools can be run at 90% to 95%. General purpose pools can be volatile, and need more free space. You need to analyse your own pool usage and see what's best for your site.
So how many pools? One suggestion is to split your data into three sections: Production, Development and Test. Within each section you would have three pools: one for large allocations, one for small allocations, and an overflow pool. It is also a good idea to have a separate set of pools for databases, which doubles that to 18 pools. You will also need a few special pools for system data, VIO pools, SMS managed tape and the like.

What are large/small/overflow pools? A large pool basically holds files with large allocations, and a small one files with small allocations. Then when datasets are deleted, they leave small holes in the small pool, and large holes in the large pool. If you mix them, small datasets will clog the disks up and leave the free space fragmented, so large datasets cannot be allocated. Separating them avoids space abends and reduces the need for defragmentation jobs.

You can also avoid space abends by keeping a small number of volumes empty(ish), to be used when the other volumes fill up. DFSMS implements this by letting you define volumes in QUINEW status, which means 'Quiesce New'. These disks will only be used when the ENABLEd volumes are full. However, it's much more efficient to keep one small pool of quiesced volumes, rather than having quiesced volumes in every pool. This is the overflow pool. DFSMS has no concept of an overflow pool per se. To make it work, you must have ALL the volumes ENABLED in your primary pools, and all the volumes QUIESCED in your overflow pool. You then concatenate the overflow pool with your primary pools, using ACS code like this
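As a sketch, a storage group ACS routine that concatenates a hypothetical overflow pool SGOVFL behind each primary pool might look like this; the storage class and storage group names are illustrative, not standard:

PROC STORGRP
  SELECT
    WHEN (&STORCLAS = 'SCLARGE')
      SET &STORGRP = 'SGLARGE', 'SGOVFL'
    WHEN (&STORCLAS = 'SCDB')
      SET &STORGRP = 'SGDB', 'SGOVFL'
    OTHERWISE
      SET &STORGRP = 'SGSMALL', 'SGOVFL'
  END
END

Because every volume in SGOVFL is QUIESCED, SMS will only spill allocations into it when the primary pool named first in the list cannot satisfy the request.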


Storgrp ACS variables

Some of the read only variables that are used in the other ACS routines are not allowed in the storage group routine. The problem is that if you use one of these variables in your ACS code, you get a rather ambiguous error message when you compile the code, which does not make it obvious that the variable itself is the problem.

Volume Selection Order

When it gets an allocation request, SMS checks out all the available volumes and draws up four lists

  1. The Reject list contains all volumes that do not match the required criteria for the allocation. These volumes will not be used by SMS.
  2. The Primary list contains online volumes with used space below the threshold that meet all the allocation criteria.
  3. The Secondary list contains volumes that are online, but do not meet all the criteria.
  4. The Tertiary list is set up if there are not enough volumes available in the pool to meet the requested number.

SMS will first try to select volumes from the Primary list, using SRM to select the volume with the lowest device delay first. If there are not enough volumes in the primary list, then SMS selects at random from the Secondary list, then the Tertiary list. If the request is for a striped dataset, then SMS will initially try to pick volumes that are under different controllers.

If you use DFHSM to backup and migrate data, then you need to set AUTO MIGRATE, AUTO BACKUP and AUTO DUMP parameters for each pool.
AUTO MIGRATE determines if the DASD volumes in this Storage Group are eligible for automatic space management processing, which includes things like deletion of temporary datasets, release of unused, over-allocated space, and migration of files off primary disk if they have not been used for a while. Possible values are YES, NO, INTERVAL, or PRIMARY.
AUTO BACKUP determines if the volumes in this Storage Group are eligible for DFSMShsm incremental backup. Possible values are YES or NO.
AUTO DUMP determines if volumes in this Storage Group can be automatically dumped using DFSMShsm. Possible values are YES or NO.
See the DFHSM section for more details.
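Putting that together, the ISMF pool storage group definition for an HSM-managed database pool might carry settings along these lines; the group name, thresholds and values shown are illustrative for one possible site standard:

  POOL Storage Group : SGDB
    Auto Migrate . . . . . . Y
    Auto Backup  . . . . . . Y
    Auto Dump  . . . . . . . N
    Migration High Threshold  95
    Migration Low Threshold   80

The high and low thresholds work with AUTO MIGRATE: space management runs when a volume goes over the high value, and migrates data off until it gets down to the low value.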

