z/OS Storage performance

I've heard of one site which upgraded from z14 to z15 and was not happy with the performance, as MSU usage increased. It is possible that this increase came from the on-chip compression that the z15 does now, whereas on the z14 it was the zEDC card that did the compression. Has anyone else had any experience of this, or indeed has anyone upgraded from z14 to z15 without any problems? I'd like to hear your experiences. I can be contacted here, and your details are always kept confidential.

Why is z/OS storage performance important

Some might say that since the advent of flash disks, there is no need to worry about storage performance anymore. When all we had was spinning disks, we had to worry about things like seek times and rotational delay, that is, the amount of time it took for read/write heads to move over a disk, and the time it took for a disk to spin round to the data. Flash disks have pretty much made these issues redundant. Also, the original spinning disk heads could only process one I/O at a time, so transactions waited in an IOS queue for the heads to become available. Virtualisation, disk subsystem cache and PAVs have resolved the IOSQ problem. So can we forget about performance monitoring?
The thing is, a single mainframe CPU can now process close to a billion transactions per hour, and most sites have several mainframes and LPARs, clustered together into sysplexes that share the same storage subsystems, accessing thousands of disks. So another question could be: is it even possible to monitor performance in these environments in any meaningful way?
The answer to these questions is that we can't forget about performance, and we have to find better ways to monitor it. These days, data is the lifeblood of nearly every organisation, and fast, reliable access to enterprise data is essential to the productivity of thousands of employees and to the success of revenue-generating applications. It is essential to properly monitor your storage performance to make sure that your applications are getting the correct level of service. However, this monitoring is challenging in an ever more complex z/OS infrastructure, because many z/OS professionals are reaching retirement age and are not being replaced. This means that enterprises simply lack the resources to monitor performance on a regular basis.

How can you check and measure Storage performance

If you run a complex infrastructure with many LPARs sharing different storage subsystems with a combination of flash and spinning disks, then monitoring will not be easy. IBM do supply two products, SMF and RMF, both discussed elsewhere on this site. However it can be difficult to make sense of the data and find the bottlenecks when you are looking at the impact of several different systems. What you need is some Artificial Intelligence to help you. Several products exist to help you with this, and one of them is EADM from Technical Storage.


EADM can monitor DASD performance in z/OS environments, for IBM, Hitachi Vantara or Dell EMC subsystems. EADM extracts data from your RMF or CMF files to create EADM files that are then analysed by CONTROL CENTER, the expert system behind EADM technology. The analysed results help you understand where the current bottlenecks are. Before we delve into some EADM reports, we should quickly summarise what some of the performance indicators mean.

CONN TIME is the good guy. It is the average number of milliseconds the device was connected to a channel path and actually transferring data between the device and the processor. Simplistically speaking, CONN time is the time an I/O spends doing useful work. However, excessive CONN time might mean your channels are running too slowly.
PEND TIME refers to the number of milliseconds an I/O request is queued in the channel between processor and storage subsystem. PEND time can be incurred waiting for an available channel path and control unit, or by delays caused by shared DASD contention.
DISC TIME is the time when a device was in use, but wasn't transferring data. It used to be the time spent waiting for an ESCON channel, but with the advent of FICON it is more likely to be waiting for synchronous remote copy, or caching issues such as a read cache miss, or writes arriving faster than the controller can stage the data through cache. It could also be Control Unit busy, or maybe multiple allegiance or PAV write extent conflicts.
IOSQ TIME is the average number of milliseconds an I/O request must wait for the device to become free. With the advent of virtualisation and PAV, this should be close to zero.
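Taken together, these four components add up to the device response time reported by RMF. A minimal sketch in Python, with purely illustrative figures rather than numbers from any real RMF report:

```python
# Device response time is the sum of the four RMF components,
# all measured in milliseconds. The figures used here are
# illustrative assumptions, not taken from a real report.
def response_time(iosq, pend, disc, conn):
    return iosq + pend + disc + conn

# A healthy flash volume: no IOSQ, small PEND and DISC, mostly CONN
rt = response_time(iosq=0.0, pend=0.1, disc=0.2, conn=0.4)
print(f"Response time: {rt:.1f} ms")
```

If any one component dominates the total, that is the place to start looking, as the sections below discuss.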

Before you can use the AI feature of EADM you need to fill in a couple of tables, one to detail which performance features are already installed in your datacenter and the other to define the datacenter’s performance control needs. An example of the first form is shown below.
EADM performance features panel

Next, you fill in a form to decide what kind of reports you want to see. An example of this form is shown below.
EADM report selection panel

EADM provides lots of different reports and pictures of how performance has changed in an environment. The example below shows an EADM customer with two datacenters. When EADM analyzed the 24h RMF report, the customer reached the following conclusions:
Checking out the response time and its components over 24 hours shows that the curves of Pending Time, Connect Time and Disconnect Time are not stable.
The weighted averages show a generally good performance, but analysing 15-minute RMF reports shows performance drifts during Transaction Processing (TP).
To calculate the weighted average, multiply the I/O rate for each disk by its average response time, sum up these results, then divide the total by the total I/O count. It should be obvious that it is much easier to have a product do the calculation for you. You need to choose the RMF sample input carefully. EADM recommends a 15-minute sample during TP processing.
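The calculation itself is simple enough to sketch in a few lines; the disk figures below are invented for illustration:

```python
# Weighted average response time across a set of disks: weight each
# disk's average response time by its I/O rate. Sample figures are
# invented for illustration, not taken from a real RMF report.
disks = [
    # (I/O per second, average response time in ms)
    (1200, 0.5),
    (300, 1.4),
    (50, 4.0),
]

total_io = sum(rate for rate, _ in disks)
weighted_avg = sum(rate * rt for rate, rt in disks) / total_io

print(f"Weighted average: {weighted_avg:.2f} ms")
```

Note how the busy, fast disk dominates the result: the average comes out well under 1 ms even though one disk is averaging 4 ms. That is exactly why the caveats about weighted averages later in this section matter.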
EADM shows both the weighted average for the 40 RMF samples (if TP lasts 10 hours), and also the maximum achieved over the 40 periods. Some applications, such as sorting, sometimes run for only 10 minutes in the morning, but significant Disconnect Time can delay the end of the sorting jobs. It is recommended to have the IBM Cache Fast Write (CFW) feature enabled and matched between datacenter processes and hardware. This can be easily verified in RMF or CMF. Below are the weighted averages of the previous table (TP = 8:00 to 18:00 and Batch = 18:00 to 8:00).
EADM performance features panel

This next screen focuses on a list of 3390 volumes (mostly DB2 volumes) that need to be monitored closely.
EADM performance features panel

Beware of weighted average values, and do not take weighted averages of several LPARs together unless you have tools that can detect anomalies on individual volumes. At 0.67 ms, the average I/O in column 3 is very good over 12 hours of Batch, but column 5 shows that during certain intervals the average rises to 1.15 ms.
EADM performance features panel

The volume in line 4 below is handling 3,160 I/O per second for 30 minutes and shows a response time of 1.48 ms. So 3,160 I/O/sec x 60 sec x 30 min = 5,688,000 I/Os at 1.48 ms, which is not good.
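Converting a sustained I/O rate into a total I/O count for the interval is simple arithmetic, sketched here for the volume above:

```python
# Total I/O count for a rate sustained over an interval:
# rate (I/O per second) x 60 seconds x minutes.
# Figures match the volume discussed above.
io_per_sec = 3160
minutes = 30
total_io = io_per_sec * 60 * minutes
print(f"{total_io:,} I/Os in {minutes} minutes")  # 5,688,000 I/Os in 30 minutes
```

At 1.48 ms each, that is a very large amount of time spent waiting on one volume.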
EADM performance features panel

This indicator is important because it shows the total number of I/Os per day on disk, and not a weighted average. It is important to reduce the number of I/Os on disk, especially as I/O counts are expected to rise sharply with faster technology and new pricing models that replace the R4HA (rolling four-hour average). z/OS disk experts agree that a good I/O is an I/O satisfied from z/OS buffers, not from disk, hence the important role of DB2 buffer pools. DB2 buffer pools keep frequently used DB2 data in fast memory to avoid or minimise slow disk I/Os.
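The effect of a buffer hit ratio on average I/O time can be sketched with the same kind of weighted arithmetic; the hit ratios and timings below are illustrative assumptions, not DB2 defaults:

```python
# Effective average I/O time given a buffer pool hit ratio.
# A buffer hit costs microseconds; a miss goes to disk and costs
# milliseconds. All figures are illustrative assumptions.
def effective_io_ms(hit_ratio, buffer_ms=0.005, disk_ms=1.0):
    return hit_ratio * buffer_ms + (1 - hit_ratio) * disk_ms

for hits in (0.80, 0.95, 0.99):
    print(f"{hits:.0%} hit ratio -> {effective_io_ms(hits):.4f} ms")
```

Even modest improvements in the hit ratio cut the effective I/O time dramatically, because the disk misses dominate the average.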
EADM performance features panel

This final image shows a problem with excessive connect time, with recommendations from EADM on how it could be reduced. The graph below shows the total I/O count recorded during BATCH (e.g. 18:00 to 08:00 the day after) and the response time for a pool of 10 LPARs over 12 hours
EADM performance features panel

How can you improve z/OS Storage performance

I guess the first thing to note is that z/OS I/O performance is measured in microseconds now, not milliseconds. So while a 5 ms response time might have been considered good a few years ago, it is too long now. So let us assume you have done your analysis and worked out where your bottlenecks are. Here are some suggestions on how to fix them.


If your connect time is too high, then this is almost certainly down to your paths between the storage subsystem and the z/OS server. It is likely that we will soon see IBM making some important announcements regarding Front-End and FICON to reduce disk Connect Time and to simplify managing multiple LPARs.
There are things you can do now if you need to, like installing FICON Express, or High Performance FICON (zHPF) features like Multi-track, DB2 list prefetch, QSAM and BSAM exploitation, or Enhanced Write.
However, remember that the best I/Os are those that come from z/OS buffers, not disk. The VSAM buffering page on this site has suggestions on how to improve VSAM buffering. One of the images above shows problems with DB2 disks. DB2 buffers are usually managed by your DBAs, so if you see a problem there, speak to them. DB2 issues can also be caused by a badly written query that is selecting all, or most, of the database. Again, this would be one for your DBA to investigate and check out. Finally, DB2 has an essential utility called RUNSTATS, which gathers the statistics the DB2 optimiser uses to choose efficient access paths through the database. Your DBA should run it frequently, and certainly after every major upgrade. I have seen it forgotten once, and it had a disastrous effect on I/O rates.


A high PEND time usually indicates that you are getting control unit contention, probably because you have multiple LPARs connected to the control unit, and I/O is delayed waiting for another LPAR. The queue could be at the device, channel or control unit level. You could use the RMF I/O Queuing Activity report (built from SMF type 78 records) to investigate further.
A solution might require faster channels, a faster control unit, or maybe just separating out really busy datasets onto different control units.


A high disconnect time usually means that your disk subsystem cache is not big enough. A disconnect on read means the data was not in cache. A disconnect on write usually means that the I/O is waiting for synchronous remote copy. If this is a problem, consider faster PPRC links.


High IOSQ time is typically solved by implementing dynamic PAVs, HyperPAV or SuperPAV. If PAV is already installed, maybe it is not configured correctly. See the PAV section for details.



Lascon Updates

I retired 2 years ago, and so I'm out of touch with the latest in the data storage world. The Lascon site has not been updated since July 2021, and probably will not get updated very much again. The site hosting is paid up until early 2023 when it will almost certainly disappear.
Lascon Storage was conceived in 2000, and technology has changed massively over those 22 years. It's been fun, but I guess it's time to call it a day. Thanks to all my readers in that time. I hope you managed to find something useful in there.
All the best
