z/OS Dataset Utilities

These links lead to sections in the text below.

IEBGENER - copying flat files
IEBCOPY - copying PDS files
IEFBR14 - a dummy program
IDCAMS - working with non-VSAM files
PDSMAN - PDS file management

FDREPORT is a utility program that extracts data from VTOCs and catalogs. See the FDR Report section for details.


IEBGENER is used for copying Physical Sequential files, and for copying members of Partitioned datasets or PDSEs. IEBGENER can only cope with record lengths up to 32760 bytes; longer records are truncated. It can also be used to convert sequential files to partitioned, and partitioned to sequential. The example below shows IEBGENER at its simplest, just copying a file. It uses a dataclass to supply the attributes for the output file.

//STEP1    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DISP=SHR,DSN=input.dataset
//SYSUT2   DD DISP=(,CATLG,DELETE),DSN=output.dataset,
//      DATACLAS=SEQ80
//SYSIN    DD DUMMY

Incidentally, if you are creating a new dataset, you must specify DISP=(,CATLG) at the very least. The first parameter will default to NEW, but the second one defaults to DELETE! If you don't specify a disposition at all, the default is DISP=(NEW,DELETE,DELETE). So the job creates the file, writes data out to it, then deletes it! I once spent several hours in the middle of the night trying to work out why a job was not creating a file, when the problem was simply that I'd forgotten to add a DISP parameter.
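
For example, a minimal sketch of a new dataset allocation with the disposition spelled out in full might look like this (the dataset name, unit and space figures are purely illustrative, and the SEQ80 dataclass is reused from the example above):

//NEWFILE  DD DSN=my.new.dataset,DISP=(NEW,CATLG,DELETE),
//       UNIT=3390,SPACE=(CYL,(5,5)),
//       DATACLAS=SEQ80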

Stacking datasets on tape using JCL

Some tape management systems, TLMS for example, will not let you add a file to an existing tape, as they consider the tape to be non-scratch, so you have to stack the files all up in one job. How do you stack datasets on a tape using JCL? If you want to copy several datasets to a single tape, you need to use a combination of label parameters and referbacks, and this can be quite complicated. Here is a working sample using IEBGENER.

//S1       EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DISP=SHR,DSN=input.file1
//SYSUT2   DD DSN=tape.file1,DISP=(,CATLG),
//       VOL=(,RETAIN,,20),
//       LABEL=01,
//       UNIT=(CART,,DEFER)
//S2       EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DISP=SHR,DSN=input.file2
//SYSUT2   DD DSN=tape.file2,DISP=(,CATLG),
//     VOL=(,RETAIN,,20,
//     REF=*.S1.SYSUT2),
//     LABEL=02,
//     UNIT=(CART,,DEFER)
//S3       EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DISP=SHR,DSN=input.file3
//SYSUT2   DD DSN=tape.file3,DISP=(,CATLG),
//       VOL=(,RETAIN,,20,
//       REF=*.S2.SYSUT2),
//       LABEL=03,
//       UNIT=(CART,,DEFER)

Within the VOL parameter, the ,RETAIN keeps the volume on the drive, and the ,,20 means the output can span up to 20 volumes.
When you allocate a tape to the first step, you mount a non-specific scratch tape. You want the other steps to use that tape, but you do not know in advance what the tape is. The REF=*.S1.SYSUT2 means use the same volume as was used in STEP S1, DDNAME SYSUT2.
LABEL=01 in the first step is not required as it will default, but it makes the job look consistent. The other LABEL statements must be specified and must be in incremental order.
If you are stacking DFDSS dumps using this technique, make sure that every step has some data to dump. If a step has no data, its dump is skipped, the label numbers go out of sequence and the job fails.

If you are processing several tape files in one job, each with its own DD statement, then z/OS will try to allocate all the tape drives that it needs up front. This is a waste of resources if you will only ever access one tape dataset at a time. To stop z/OS from allocating all the tape units up front, use the UNIT=AFF parameter like this

//DD1      DD DISP=OLD,DSN=tape.file1
//DD2      DD DISP=OLD,DSN=tape.file2,
//       UNIT=AFF=DD1
//DD3      DD DISP=OLD,DSN=tape.file3,
//       UNIT=AFF=DD2

It is more efficient to write data to tape with large blocksizes, as that allows the tape drives to stream rather than backhitch every time they write a new block. The largest blocksize allowed for disk datasets is 32760, but IEBGENER can write bigger blocksizes to tape using the SDB (system determined blocksize) PARM option. The parameter is

//STEP1   EXEC PGM=IEBGENER,PARM='SDB=LARGE'

This allows IEBGENER to write blocksizes bigger than 32760; the actual optimum blocksize is picked by the system. The other valid options are SDB=SMALL, SDB=INPUT, SDB=YES and SDB=NO.

The default if the SDB parm is not specified is usually to copy the input blocksize. This is defined in the COPYSDB= parameter in the DEVSUPxx PARMLIB member.
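
As an illustration, that system-wide default could be set with a line like the one below in the DEVSUPxx member (COPYSDB=INPUT means use the blocksize of the input dataset; the xx suffix is site-specific):

COPYSDB=INPUT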


IEBGENER can also read z/OS UNIX files. In the example below, the file is edited as it is copied, and the logical record length of the output data set is less than that of the input data set.

//        LRECL=100,BLKSIZE=1000,RECFM=FB
//        VOLUME=SER=111113,DCB=(RECFM=FB,LRECL=80,
//        BLKSIZE=640),SPACE=(TRK,(20,10))
  GRP1 RECORD IDENT=(8,'FIRSTGRP',1),FIELD=(21,80,,60),FIELD=(59,1,,1)
  GRP2 RECORD FIELD=(11,90,,70),FIELD=(69,1,,1)

The control statements are as follows:
SYSUT1 DD defines the input file. Its name is /dist3/stor44/sales.mon. It contains text in 100-byte records. The record delimiter is not stated here. The file might be on a non-System/390 system that is available via Network File System (NFS).
GENERATE indicates that a maximum of four FIELD parameters are included in subsequent RECORD statements and that one IDENT parameter appears in a subsequent RECORD statement.
EXITS identifies the user routine that handles input/output errors.
The first RECORD statement (GRP1) controls the editing of the first record group. FIRSTGRP, which appears in the first eight positions of an input record, is defined as being the last record in the first group of records. The data in positions 80 through 100 of each input record are moved into positions 60 through 80 of each corresponding output record. (This example implies that the data in positions 60 through 79 of the input records in the first record group are no longer required; thus, the logical record length is shortened by 20 bytes.) The data in the remaining positions within each input record are transferred directly to the output records, as specified in the second FIELD parameter.
The second RECORD statement (GRP2) indicates that the remainder of the input records are to be processed as the second record group. The data in positions 90 through 100 of each input record are moved into positions 70 through 80 of the output records. (This example implies that the data in positions 70 through 89 of the input records from group 2 are no longer required; thus, the logical record length is shortened by 20 bytes.) The data in the remaining positions within each input record are transferred directly to the output records, as specified in the second FIELD parameter.


IEBCOPY is used to copy a PDS, to copy a PDS into a PDSE, or to merge two PDS files together. It can also be used to compress a PDS in batch.

In the example below, the input file and output file are the same, so this is a batch compress

//STEP1    EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//A    DD  DSNAME=input.dataset,
//       DISP=OLD
//B    DD  DSNAME=input.dataset,
//       DISP=OLD
//SYSIN    DD *
  COPY OUTDD=B,INDD=A

The next example shows an entire PDS being copied to another. If you try to do this with IEBGENER, the job will 'work', but all the members will be joined into one big file. IEBCOPY will use the DCB attributes from the input file by default, unless you override them. In the example, all the attributes except the space come from the input file.

//STEP1    EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DISP=SHR,DSN=input.pds
//SYSUT2   DD DISP=(,CATLG),DSN=output.pds,
//         SPACE=(TRK,(50,50,50)),
//         UNIT=3390
//SYSIN    DD *
  COPY OUTDD=SYSUT2,INDD=SYSUT1
/*

If the following input statements are used instead, the job will just copy the four selected members.

  COPY OUTDD=SYSUT2,INDD=SYSUT1
  SELECT MEMBER=(MEMB01,MEMB02,MEMB03,MEMB04)

This example will convert a partitioned data set to a PDSE.

//STEP1    EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=MY.JCL.PDS,DISP=SHR
//SYSUT2   DD DSN=MY.JCL.PDSE,LIKE=MY.JCL.PDS,
//         DSNTYPE=LIBRARY,
//         DISP=(NEW,CATLG)
//SYSIN    DD DUMMY

The control statements are as follows:
SYSUT1 DD has a partitioned data set as input, called 'MY.JCL.PDS'.
SYSUT2 DD has a PDSE as an output data set, as specified by the DSNTYPE=LIBRARY parameter, called 'MY.JCL.PDSE'. It is picking up most of its attributes like DCB and space from the input file, as specified by the LIKE parameter. We will let DFSMS decide where to allocate the file, as determined by the ACS routines.

Finally, here's a job I run if I need to make a PDS larger. You need to change the file name from 'changeme' to your own file name, and change the space allocation in STEP2 to suit your needs.

//STEP01   EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
   ALTER  changeme -
          NEWNAME(changeme.O)
/*
//    IF (STEP01.RC = 0) THEN
//STEP2    EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//SYSUT1    DD DISP=SHR,DSN=changeme.O
//SYSUT2    DD DSN=changeme,
//          UNIT=3390,DISP=(,CATLG,DELETE),
//          SPACE=(CYL,(400,25,700)),
//          LIKE=changeme.O
//SYSIN     DD *
  COPY OUTDD=SYSUT2,INDD=SYSUT1
/*
//   ENDIF

If you are really brave, you can add a final step which deletes the changeme.O file, provided all the previous steps worked. Me, I delete it manually once I'm sure the bigger file is working OK.
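
If you do want to automate it, a sketch of that final cleanup step might look like this (the step name STEP3 is illustrative; the unqualified RC=0 test is true only if no previous step ended with a non-zero return code):

//    IF (RC = 0) THEN
//STEP3    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELETE changeme.O
/*
//    ENDIF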



IEFBR14 is a dummy program that does nothing except return a completion code of 0. However, when you run IEFBR14, all the attached JCL statements are checked and executed. This means it is a very useful program for working with files in batch. For instance, if you want to create a new dataset and delete an old one as part of a batch run, the following JCL would do the job

//STEP1    EXEC PGM=IEFBR14
//NEWFILE  DD DSN=new.dataset,DISP=(NEW,CATLG),
//       UNIT=3390,SPACE=(CYL,(3,1,25))
//OLDFILE  DD DSN=old.dataset,DISP=(OLD,DELETE)

Yes, you can do this easily using ISPF, but if you need to create and delete files as part of a batch run, IEFBR14 is an easy way to do it.
IEFBR14 is also often used to pre-allocate a new GDG file like this.

//STEP1   EXEC PGM=IEFBR14,
//        COND=(0,NE)
//NEWGDG  DD DSN=my.gdg.base(+1),
//        DISP=(,CATLG),
//        SPACE=(TRK,(15,150)),
//        LIKE=my.gdg.model



IDCAMS is mainly used for VSAM datasets, and this is discussed in the VSAM section and the ICF catalog section. However, IDCAMS also has a few uses for ordinary files, as described below. Standard IDCAMS jobs use the following DD cards

//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *

After this, the SYSIN cards are all different. The first example will add 5 new candidate volumes to a file; that is, it allows the file to extend over 5 more volumes. You need to close the file before the extra volumes are picked up. These volumes are non-specific, and are allocated by SMS

  ALTER dataset.name ADDVOLUMES(* * * * *)

The next example will change the management class of a file

   ALTER dataset.name MANAGEMENTCLASS(newclass)

In both these examples, if you are just changing a few files, it's easier to enter these commands as line commands from ISPF option 3.4.

How do you fix an SMS dataset which has become uncataloged? You can't simply enter a 'C' as a line command against it. The answer is to use IDCAMS as shown below

   DEFINE NONVSAM -
     (NAME(uncataloged.file) -
      DEVICETYPES(3390) -
      VOLUMES(volser))

While z/OS datasets have to be accessed through their catalog entry, it is possible to define 'aliases', which are alternative catalog entries that point to a physical dataset. They are often used for different versions of test programs, so that different versions can be identified by name, but a consistent 'production' name is used for the alias name. This means that JCL does not need to be changed when new program versions are introduced. For example, the production name for a program library might be PASP.ONLINE.CMDLIB and the current version might be PASP.ONLV217.CMDLIB. You then point an alias from PASP.ONLINE.CMDLIB to PASP.ONLV217.CMDLIB and your online systems will pick up the correct data. You set this up with an IDCAMS define alias command.

   DEFINE ALIAS (NAME(PASP.ONLINE.CMDLIB) RELATE(PASP.ONLV217.CMDLIB))

There was a restriction, that the Alias had to be in the same catalog as the entryname, but this restriction was removed in z/OS 2.1. That release also records the alias creation date, which is useful if you are cleaning up old and obsolete data.
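
As a sketch, you could display an alias, including its creation date, with an IDCAMS LISTCAT command like this (using the alias name from the example above):

   LISTCAT ENTRIES(PASP.ONLINE.CMDLIB) ALL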

IDCAMS can be used to delete DFSMS managed files that have become faulty. Because SMS insists that every managed file must be catalogued, you cannot delete an uncatalogued file with 'D' or 'DEL' as a 3.4 line command. The following IDCAMS statement will delete a non-VSAM file, and is the equivalent of the TSO DEL command.

   DELETE (dataset.name) NONVSAM

The statement below will delete a non-VSAM file from a specific catalog, where the file exists on the volume specified in DD1.

//DD1     DD  VOL=SER=SYSV00,
//        UNIT=3390,DISP=OLD
//SYSIN   DD *
   DELETE (dataset.name) -
      FILE(DD1) -
      NONVSAM -
      CATALOG(catalog.name)

This statement will delete the catalog entry for a dataset, but will not delete the actual data from the disk. This one is useful for getting rid of catalog entries that have no data behind them.

   DELETE (dataset.name) NONVSAM NOSCRATCH

This is the opposite of the example above. It will delete an uncatalogued dataset from the disk identified in the DD2 ddname. The NVR parameter identifies that this is not a VSAM dataset.

//DD2     DD  VOL=SER=VOL001,
//        UNIT=3390,DISP=OLD
//SYSIN   DD  *
   DELETE (dataset.name) -
      FILE(DD2) -
      NVR

The DCOLLECT utility is accessed through IDCAMS. It provides very comprehensive statistics about disks and datasets, but in a raw and difficult to interpret format. If you use DCOLLECT then you really need to invest in Merrill's MXG SAS programs to interpret the output. This example will return lots of SMS data for every online volume

//          SPACE=(CYL,(250,50),RLSE)
//SYSIN     DD *
    VOLUMES(*) -

This example will extract MCDS data from HSM

//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//MCDS     DD DSN=your.hsm.mcds,DISP=SHR
//OUTDS    DD DSN=dcollect.hsm.output,DISP=(,CATLG),
//         UNIT=3390,SPACE=(CYL,(50,50),RLSE)
//SYSIN    DD *
  DCOLLECT -
    OFILE(OUTDS) -
    MIGRATEDATA
/*



Sites which run PDSMAN usually alias it out, so it looks like you are executing IBM's IEBCOPY, but you are actually executing CA's PDSMAN. You can usually work out which one you are running by checking the first line of the SYSPRINT output file, which will identify PDSMAN if that is what you are running.

This example shows how to extend a PDS directory dynamically using PDSMAN. The program name is IEBCOPY, but it is actually executing PDSMAN

//STEP1    EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//OUT      DD DISP=SHR,DSN=dataset.with.full.directory
//SYSIN    DD *

The next example empties all the members out of a PDS

//PDSMPDS  DD DISP=OLD,DSN=test.library

The next example will check on the structure of a PDS, and report on any errors it finds

//PDSMPDS  DD DISP=SHR,DSN=faulty.dataset

You cannot use wildcards to select members to copy between PDS files using IEBCOPY, but you can use wildcards with PDSMAN

   SELECT MEMBER=(DM*)
   SELECT M=((+S++D*))
   SELECT M=((A*,B*),(+B*,+C*,R))

SELECT MEMBER=(DM*) will copy all members starting DM

SELECT M=((+S++D*)) will copy all members with S in position 2 and D in position 5

SELECT M=((A*,B*),(+B*,+C*,R)) will copy all members starting with A, and rename them so they start with B, and also copy all members with B in position 2, rename them so they have a C there instead, and replace any members that already exist.

This example will scan two libraries, and report on all members which contain the string '3590'

//S1        EXEC PGM=PDSM18,PARM='.ALL'
//PDSMPDS   DD DISP=SHR,DSN=master.schedule.joblib
//          DD DISP=SHR,DSN=master.schedule.proclib
//SYSIN     DD *
  SCAN TARGET='3590'


Lascon Updates

I retired 2 years ago, and so I'm out of touch with the latest in the data storage world. The Lascon site has not been updated since July 2021, and probably will not get updated very much again. The site hosting is paid up until early 2023 when it will almost certainly disappear.
Lascon Storage was conceived in 2000, and technology has changed massively over those 22 years. It's been fun, but I guess it's time to call it a day. Thanks to all my readers in that time. I hope you managed to find something useful in there.
All the best
