Parallel Access Volumes and Multiple Allegiance

Parallel Access Volumes (PAV)

One of the problems with mechanical storage disks is that they are single threaded: only one process can access the data on a disk at a time. z/OS managed this by allocating a Unit Control Block (UCB) to each storage device. If several applications wanted to access a device at the same time, the IO operations were queued up by the IO supervisor. In performance terms, this is called IOSQ delay.
IOSQ is a particular problem for disks holding several very active datasets, so storage administrators allocated busy databases, RACF databases, HSM control files, page and spool datasets and the like on their own dedicated volumes. This required careful monitoring, and manual effort to change.
The principle behind IOSQ is illustrated in the GIF below. Four applications are trying to access a JBOD disk (JBOD, Just a Bunch Of Disks, is the opposite of RAID and simply means a single stand-alone disk assembly). When an application turns green, it is successfully getting to the disk. The other applications are queued, waiting for their turn.
IOSQ illustration
Modern disk subsystems do not have this physical restriction. They are usually RAID, so data is spread over several physical disks. Also, all write IOs go to solid state cache and over 90% of read IOs come from cache. It is possible to schedule concurrent IOs to a cache as it has several access paths, but z/OS was unaware of the physical implementation behind its virtual disks, and the UCB architecture still said that only one IO was allowed to a disk.
PAV was introduced to fix this. The concept behind PAV is that every disk has its normal 'base' UCB and also a number of 'alias' UCBs, all of which connect to the same logical disk. This means that it is possible to schedule concurrent IOs to a disk, one through the base UCB and the rest through alias UCBs. If there are enough alias UCBs available, IOSQ should not happen.

There are three flavours of PAV: STATIC, DYNAMIC and HyperPAV.
STATIC means you specify how many PAV aliases each base (the real UCB) can have, and that number is then fixed.
illustration of static PAV

DYNAMIC means you just define a pool of aliases, and Workload Manager (WLM) decides how many aliases each base needs, depending on how busy the virtual disk is and how important the application is. However, Workload Manager can take a while to work out that a disk is busy, so it will not eliminate IOSQ completely.
DYNAMIC PAV needs fewer aliases than Static PAV and performs better, as more aliases are available for busy disks.
Illustration of Dynamic PAV

Dynamic PAV has two problems: it uses up a lot of UCB addresses, and it takes Workload Manager a while to notice that a disk is busy and needs more aliases. HyperPAV is designed to fix these problems.
HyperPAV keeps all its aliases in a pool and just assigns one to a volume when it is needed to service an IO. It does not use WLM to decide when to allocate an alias. Each HyperPAV host can also use the same alias to service a different base, which means fewer aliases are needed.
illustration of HyperPAV
HyperPAV therefore requires fewer aliases per base; I've seen a ratio of one alias to four bases work well, but your requirement will depend on your workload.
HyperPAV is especially useful if you are planning to use EAV (Extended Address Volume) volumes.

Invoking PAV

If you set MIH (Missing Interrupt Handler) times in SYS1.PARMLIB(IECIOSxx), IBM recommends that you do not set them for PAV alias devices, only for the base devices.
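As a minimal sketch of what that looks like in IECIOSxx (the 30 second interval and the 9000-90BF base device range are hypothetical values, not a recommendation):

MIH TIME=00:30,DEV=(9000-90BF)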

The following steps are needed to invoke Static PAV
  • Define PAV aliases in HCD, associated with each DASD subsystem. Base devices are defined as type 3390B, and aliases as type 3390A
  • Define a number of PAV aliases for every volume on the disk subsystem itself. The total number must match the HCD definitions
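
The HCD definitions end up as IODEVICE statements in the IOCP deck, along these lines. This is only a sketch: the device numbers, device counts, CUNUMBR and UNITADD values are all hypothetical and will depend on your own configuration.

IODEVICE ADDRESS=(9000,192),CUNUMBR=(1000),UNIT=3390B,UNITADD=00
IODEVICE ADDRESS=(90C0,064),CUNUMBR=(1000),UNIT=3390A,UNITADD=C0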

The following steps are needed to invoke Dynamic PAV

  • Define PAV aliases in HCD, as above
  • Define a number of PAV aliases to every disk in your disk subsystem. You should need fewer aliases than with static PAV
  • Set WLMPAV=YES on the device definitions in HCD, enable dynamic alias management in the Workload Manager service definition, and run Workload Manager in goal mode, so WLM moves the aliases around as required
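
Once dynamic PAV is running, you can watch WLM move aliases between bases with the DEVSERV command covered below; for example, with a hypothetical base device number of 9000:

DS QPAV,9000,VOLUME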

To set up HyperPAV, you need to

  • Define the aliases in HCD
  • Authorise the HyperPAV feature on your disk subsystem
  • Add the aliases to your disk subsystem
  • Add HYPERPAV=YES to SYS1.PARMLIB(IECIOSxx)
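
The IECIOSxx statement itself is just the single line below; as far as I know the HYPERPAV keyword also accepts NO and BASEONLY if you need to fall back.

HYPERPAV=YES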

You can enable HyperPAV dynamically, but IBM recommends that you do this at a quiet time, with no other configuration work running on a DS8K. Use the command:

SETIOS HYPERPAV=YES

You can check if HyperPAV is active with the command

D IOS,HYPERPAV

RESPONSE=SP00
 IOS098I 15.37.01 HYPERPAV DATA 109
 HYPERPAV MODE IS SET TO YES

HyperPAV requires z/OS 1.8 or higher, although fixes are available for earlier z/OS releases. It is supported by the latest IBM, EMC and HDS (including Sun and HP) devices, but usually needs a chargeable code upgrade. It only works with FICON channels; ESCON is not supported.

Be sure to define the same type of PAV on the same range of volumes for each LPAR.

Querying PAV status

From SDSF, use the DEVSERV command /DS QPAV,uuuu,nn where uuuu is the starting unit address and nn is the number of units you want to display. If you display a base address, you'll see something like

If you display an alias address, you'll see

To find out what aliases are active to a volume, use the command DS QPAV,uuuu,VOLUME. An example of dynamic PAV in action is -

The output from the same commands with hyperpav active looks like

An RMF Snapshot report with both Dynamic and HyperPAV active looks like this

There are three volumes using dynamic PAV and five on a DS8K using HyperPAV, as indicated by the 1.0H in the PAV column. One dynamic PAV volume has two aliases and a bit of IOSQ wait, but all the HyperPAV volumes have one alias and no wait.

Multiple Allegiance (MA)

PAV addresses queuing issues for IOs coming from the same CPU or LPAR. If a disk is shared between several CPUs or LPARs, the disk subsystem would traditionally accept an IO from only one system at a time, holding the others off with a device busy condition. Multiple Allegiance removes that restriction and lets the storage controller manage concurrent IOs to the same volume from different systems.
You do not set up MA, or switch it on. If your disk subsystems are MA capable, it happens.
