DFSMS, z/OS System Managed Storage - Storage Group

Storage groups are the fundamental concept of DFSMS. Mainframe disks were limited in size to about 8GB each, though much larger disks are now available. In the old days, you had to decide which disk you were going to place your file on, and hope it had enough space available. Space errors were common, especially in batch work. DFSMS groups disks together into storage pools, so you allocate by storage pool instead. It's much easier to maintain free space in a pool of 50 disks. Storage pools can also consist of tape volumes (not tape files) and optical volumes. This allows SMS to direct tape allocations to a VTS or automated library.


The different types of storage group that you can define are: Pool, VIO, Dummy, Copy Pool Backup, Object, Object Backup and Tape.

You need to define your volumes as being DFSMS capable, then you add them to your storage pools. There are some simple rules regarding storage pools, volumes and datasets.

When you are initialising new volumes, the easiest way to prepare them for SMS is to use the STORAGEGROUP keyword in your ICKDSF job, like this:

INIT UNIT(D615) VOLID(D3D615) VTOC(1,0,270) -
   INDEX(0,1,14) VFY(OLDVOL) -
   STORAGEGROUP

If you want to know what volumes are defined to an existing storage group, the easiest way is to use the system command (which you can issue from SDSF)

D SMS,SG(poolname),LISTVOL

However access to these commands is often restricted. If you do not have access, the other way is to use the ISMF panels.

  1. Select the Storage Group option 6 from the ISMF Primary Option Menu.
  2. Select option 1, List, and this will list out all the storage groups.
  3. Type LISTVOL in the Line Operator column next to a storage group that you are interested in, and this will list out all its volumes.

There is a special type of storage pool called a 'Reserve storage pool' that you can use as a place to hold volumes that have been initialised and formatted for SMS, but that you do not want to be used at this time.
You initialise reserved volumes with ICKDSF, but add a RESERVED parameter and an OWNERID parameter. The OWNERID must be IBMRSPrespoolname, where respoolname is the name of the reserve storage pool. The reserved volumes will be offline and cannot be brought online until they are re-initialised without the RESERVED parameter.
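Extending the INIT example above, an ICKDSF job for a reserve pool volume might look like this. This is a sketch, not a definitive job: the unit address, volser and the reserve pool name SPARES (which makes the OWNERID IBMRSPSPARES) are all hypothetical, so substitute your own values.

INIT UNIT(D616) VOLID(D3D616) VTOC(1,0,270) -
   INDEX(0,1,14) VFY(OLDVOL) -
   STORAGEGROUP RESERVED OWNERID(IBMRSPSPARES)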

When you define a Storage Group for Tape, all you define is a list of between one and eight libraries that belong to that Storage Group, plus the SG status. You do not define drives or tape volumes. You can define up to 15 SGs with 8 libraries each.
When the Storage Group ACS routine gets a new tape allocation, it assigns it to a tape storage group, then selects a tape library and a tape device pool, then a specific tape drive is picked out from the tape device pool. A tape device pool is a string of tape drives attached to a single control unit.
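The tape assignment step could be coded in the Storage Group ACS routine along these lines. This is a minimal sketch under stated assumptions: the storage group names TAPESG and PRIMARY are hypothetical, and it assumes tape requests are recognised by a data class called TAPEDC assigned earlier in the data class routine.

PROC STORGRP
  SELECT
    WHEN (&DATACLAS = 'TAPEDC')
      SET &STORGRP = 'TAPESG'     /* send tape work to the tape SG   */
    OTHERWISE
      SET &STORGRP = 'PRIMARY'    /* everything else stays on DASD   */
  END
END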

How many pools?

In general, the bigger the pool the better. Try to avoid lots of small pools. Why? Because you can run a large pool at a higher occupancy level without space problems, and that saves money. If you have a 5 volume pool which holds 40GB, then 20% free space is 8GB. That is not a lot, especially when the space gets fragmented. If you run a 2500GB pool at 90% occupancy, then you have 250GB free. What's right for the occupancy level depends very much on the type of data in the pool. Databases tend to be well behaved and predictable, so large database pools can be run at 90% to 95%. General purpose pools can be volatile, and need more free space. You need to analyse your own pool usage and see what's best for your site.
So how many pools? One suggestion is to split your data into three sections: Production, Development and Test. Within each section you would have three pools: one for large allocations, one for small allocations, and an overflow pool. It is also a good idea to have a separate set of database pools in each section, which brings the count to 18 pools. You will also need a few special pools for system data, VIO pools, SMS managed tape and the like.

What are large/small/overflow pools? A large pool basically holds files with large allocations, and a small one files with small allocations. Then when datasets are deleted, they leave small holes in the small pool, and large holes in the large pool. If you mix them, small datasets will clog the disks up and leave the free space fragmented, so large datasets cannot be allocated. Separating them avoids space abends and reduces the need for defragmentation jobs.

You can also avoid space abends by keeping a small number of volumes empty(ish), to be used when the other volumes fill up. DFSMS implements this by letting you define volumes in QUINEW status, which means 'Quiesce New'. These disks will only be used when the ENABLE'd volumes are full. However, it's much more efficient to keep one small pool of quiesced volumes, rather than having quiesced volumes in every pool. This is the overflow pool. DFSMS has no concept of an overflow pool per se. To make it work, you must have ALL the volumes ENABLED in your primary pools, and all the volumes QUIESCED in your overflow pool. You then concatenate the overflow pool with your primary pools, using ACS code like this:

SELECT
  WHEN (&SIZE GE 250MB) DO
    SET &STORGRP = 'LARGE','OVERFLOW'
    EXIT
  END
  OTHERWISE DO
    SET &STORGRP = 'SMALL','OVERFLOW'
    EXIT
  END
END

This process has been made a bit more complicated by the introduction of three new features for storage pools. An SMS storage pool has two new definition parameters:
Extend Storage Group: The name of a storage pool that this pool can extend to.
Overflow Storage Group: Is this pool an Overflow pool, 'Y' or 'N'.
There is also a new parameter in the Storage Class definition: will this Storage Class use a Multi-Tiered SG, 'Y' or 'N'.

An Extend pool is just one other pool that allocations can go to if the primary pool is full. Two primary pools can be defined so that they extend onto each other. Generally speaking, extend or overflow storage groups should not be used in a copy pool environment, unless the primary, extend and overflow pools are all contained within the same copy pools. An Extend pool does not need any ACS routine processing.

To use an 'Overflow' pool, it must be concatenated with a primary pool in the ACS routine like this:
SET &STORGRP = 'Primary', 'Overflow'
Those names are examples, and you should use better, more meaningful ones. SMS will prefer to use volumes in the primary pool for an allocation (primary list, see below), but if that pool is over its threshold, then it will use the overflow pool (secondary list, see below; this assumes that all the volumes have identical performance characteristics). An overflow storage group may also be specified as an extend storage group. There is no need to quiesce the overflow pool volumes; in fact, volumes residing in overflow storage groups are preferred over quiesced volumes and storage groups. However, if you do quiesce your overflow pool, or some of your overflow pool volumes, then Primary quiesced volumes will be used before Overflow quiesced volumes.

You can take this concept further by using Multi-Tiered SG (Y) in a storage class. For this to work, you need concatenated storage pools in your ACS routines, but you can concatenate several pools and they will be used in order. I think the possible number of pools is something ridiculous like 256. Let's assume you have 4 pools concatenated, your storage class has Multi-Tiered SG 'Y', and that storage class picks up the concatenated pool code like this:
SET &STORGRP = 'SG1', 'SG2', 'SG3', 'SG4'
Now SMS will use SG1 for allocations, unless all its enabled volumes exceed the high threshold, in which case it will go to SG2. If that pool is full it will try SG3, then if SG3 is full it will try SG4.
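Putting the pieces together, the Storage Group routine for this four-tier setup could be sketched as below. The names are illustrative assumptions: TIERED is a hypothetical storage class defined with Multi-Tiered SG 'Y', and SG1 to SG4 and STANDARD are hypothetical pool names.

PROC STORGRP
  SELECT
    WHEN (&STORCLAS = 'TIERED')
      /* tiers are tried in order: SG1 first, SG4 last */
      SET &STORGRP = 'SG1','SG2','SG3','SG4'
    OTHERWISE
      SET &STORGRP = 'STANDARD'
  END
END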

Alerting

You can now define alert thresholds for entire SMS storage groups. Two thresholds are available:
Total Space Alert Threshold %
Track-Managed Space Alert Threshold %
When a volume is varied online or offline, or when a file is added, extended or deleted so that the pool space changes, SMS recalculates the overall space usage of the storage pool and will issue an alert message if either of the thresholds is exceeded. For example, with a 2500GB pool and a Total Space Alert Threshold of 85%, the alert is issued once more than 2125GB is in use.

Storgrp ACS variables

Some of the read-only variables that are available in the other ACS routines are not allowed in the storage group routine, &JOB being one example.

The problem is that if you use one of these variables in your ACS code, you get the error message
IGD03111I INVALID REFERENCE TO READ/ONLY VARIABLE &JOB
when you compile the code. The message is a little ambiguous, something like
VARIABLE &JOB NOT ALLOWED IN STORGRP ROUTINE
would be a bit easier to interpret.

Volume Selection Order

When it gets an allocation request, SMS checks out all available volumes and draws up four lists:

  1. The Reject list contains all volumes that do not match the required criteria for the allocation. These volumes will not be used by SMS.
  2. The Primary list contains online volumes with used space below the threshold that meet all the allocation criteria.
  3. The Secondary list contains volumes that are online, but do not meet all the criteria.
  4. The Tertiary list is set up if there are not enough volumes available in the pool to meet the requested number.

SMS will first try to select volumes from the Primary list, using SRM to select the volume with the lowest device delay first. If there are not enough volumes in the primary list, then SMS selects at random from the Secondary list, then the Tertiary list. If the request is for a striped dataset, then SMS will initially try to pick volumes that are under different controllers.

If you use DFHSM to backup and migrate data, then you need to set AUTO MIGRATE, AUTO BACKUP and AUTO DUMP parameters for each pool.
AUTO MIGRATE determines if the DASD volumes in this Storage Group are eligible for automatic space management processing, which includes things like deletion of temporary datasets, release of unused, over-allocated space, and migration of files off primary disk if they have not been used for a while. Possible values are YES, NO, INTERVAL, or PRIMARY.
AUTO BACKUP determines if the volumes in this Storage Group are eligible for DFSMShsm incremental backup. Possible values are YES or NO.
AUTO DUMP determines if volumes in this Storage Group can be automatically dumped using DFSMShsm. Possible values are YES or NO.
See the DFHSM section for more details.

