GDPS/Global Mirror

GDPS/GM, or Global Mirror, is an asynchronous mirroring solution intended for sites that are more than 200 km apart. It supports both CKD and FBA disks in a single consistency group, which means you can combine Open Systems and Mainframe data and have both recovered to a consistent point in time after a disaster.

GDPS/GM is managed from a mainframe. It requires a mainframe LPAR in both the local and remote sites, each dedicated to managing Global Mirroring. In IBM terminology, the local LPAR is usually called the K-sys and the remote LPAR the R-sys. It needs a designated communications CKD disk in each subsystem, and every subsystem must contain some CKD disks. GDPS/GM does not manage systems and applications, just data, so there is no automation provided for the management and failover of production systems. This means that GDPS/GM has a longer RTO than GDPS/MM. It is often used in a 3-site configuration, where two 'local' sites use synchronous GDPS/MM while a third, remote site uses GDPS/GM. For those people who really like acronyms, this configuration is called GDPS/MGM.

Global Mirror requires the following disks:

  • GM Source volumes at the local site
  • GM Target volumes at the remote site, which are also FlashCopy source volumes
  • FlashCopy target volumes at the remote site

The FlashCopy volumes are required to provide data consistency; the mirrored data on its own is not consistent, and so cannot provide a disaster recovery position. To get a good position, the GM volumes must be brought into a consistent state, then a FlashCopy is taken, and that FlashCopy will therefore be consistent. Host I/O is delayed while this consistent point is being created. If you have more than one primary disk subsystem, then one subsystem must be designated as the master and the others as subordinates, to ensure consistency between subsystems. The FlashCopies happen at a regular interval, usually every 5 to 10 seconds, so while the disaster recovery data lags behind the source and there is some data loss, the RPO is still quite short. There is a balancing act to be struck here: you want your RPO to be as short as possible, without having a noticeable effect on application performance. Three parameters are set to control this:

  • Consistency Group Interval Time - called cginterval in commands: The time in seconds between the formation of each consistency position; the default is 0 seconds.
  • Maximum Coordination Time - called coordinate in commands: The maximum time in milliseconds that a master storage subsystem can spend with its subordinates forming a consistency group; the default is 50 milliseconds.
  • Maximum time writes are inhibited at the remote site - called drain in commands: The maximum time that a remote site will try to form a consistency group, also known as the drain time; the default is 4 minutes or twice the cginterval value.

You can modify these parameters from the 'Define Properties' page of the DS Storage Manager web interface. You might need IBM consultancy, and it may take a few iterations to get them right.

Global Mirror commands

While there is a GUI interface for managing GDPS/GM, it is best managed by using command line scripts. If you write a script, you are forced to sit down and think about what you are trying to achieve, step by step, and when you are finished you can get your script peer reviewed. This means that the script has a better chance of being successful. Once you have a tested, working script, you can execute it over and over again and know it will work the same way every time. The series of commands below can be used to set Global Mirroring up.
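To give an idea of what that looks like in practice, here is a hedged sketch of running such a script with the DS CLI in batch mode; the HMC address, user ID, password file and script name are all made-up values that you would replace with your own:

dscli -hmc1 10.1.1.10 -user gdpsadmin -pwfile security.dat -script gm_setup.script

The script file is just a plain text file containing DS CLI commands like the ones below, one per line, which dscli then runs in sequence.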

Most of the commands below require -dev and -remotedev parameters. These are used to identify disk subsystems, and the name is made up of manufacturer.machine_type-serial_number. For the sake of illustration I'm using IBM.2107-75AK275 for the primary subsystem and IBM.2107-75MG701 for the secondary subsystem. You don't have to use the -dev and -remotedev parameters, but if you miss them out then you have to fully qualify your subsystems, and I think the commands are more readable with the devices defined.
Mirroring is done by logical subsystem (LSS), so '5A' is the primary LSS and '5B' is the secondary LSS.
You will need to substitute your own values for these parameters.

The first thing you need to do is run a couple of queries to get some information about your storage subsystems. You need to know the worldwide node name (WWNN) of your secondary disk subsystem, and you need to know which Fibre Channel I/O ports are free. If you want to get information on all your disk subsystems, run the first command below. If you know the ID of your remote storage, you can home in on that one with the second command.

lssi -l
showsi -fullid IBM.2107-75MG701

Either of these commands will display a 16-digit hex WWNN for the remote storage subsystem. Make a note of this, then run the command

lsavailpprcport -l -dev IBM.2107-75AK275 -remotedev IBM.2107-75MG701 -remotewwnn 50050A6307FEC268 5A:5B

Use your own subsystem identifiers, WWNN and LSS numbers. This should list out all the Fibre Channel I/O ports that are available for remote mirroring. Identify the ones you want to use, then run the mkpprcpath command.

mkpprcpath -dev IBM.2107-75AK275 -remotedev IBM.2107-75MG701 -remotewwnn 50050A6307FEC268 -type gcp -srclss 5A -tgtlss 5B I0103:I0103 I0233:I0233 I0503:I0503 I0633:I0633

The command above shows 4 paths being defined. This command replaces any existing paths, so if you have existing paths and are adding extra ones, make sure that you include all the paths in this command. Once you have all your paths defined, you can start to mirror volumes with the mkpprc command; note the -type gcp for Global Copy.

mkpprc -dev IBM.2107-75AK275 -remotedev IBM.2107-75MG701 -type gcp -mode full 5A01-5AB3:5A01-5AB3

All these Global Copies must be complete before you can set up the FlashCopy pairs, so use the lspprc command to check mirroring progress.

lspprc -dev IBM.2107-75AK275 -remotedev IBM.2107-75MG701 5A01-5AB3:5A01-5AB3

Once all the copies are complete, you can establish the FlashCopy pairs using the mkflash command, which you run from the remote site, substituting your own parameters. You can only run the mkflash command if you have an IP connection to the remote site. If there is no IP connection, then run the mkremoteflash command instead, with the same parameters.

mkflash -dev IBM.2107-75MG701 -tgtinhibit -persist -record -nocp 5A01-5AB3:5AC1-5AD3

The parameters needed are:

  • tgtinhibit: do not let other tasks update the target volumes
  • persist: retain all FlashCopy relationships after the copy completes
  • record: record changes to volume pairs
  • nocp: don't run a background copy
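If you have no IP connection and need mkremoteflash, my understanding is that the command is issued from the local site and also needs a -conduit parameter, naming a local LSS whose PPRC paths are used to carry the command to the remote box. This is a hedged sketch only, as the -conduit value and the exact parameter mix are assumptions you should check against the DS CLI reference:

mkremoteflash -dev IBM.2107-75MG701 -conduit IBM.2107-75AK275/5A -tgtinhibit -persist -record -nocp 5A01-5AB3:5AC1-5AD3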

Use the lsflash command to check on the FlashCopy progress. Up to this point we have just used PPRC and FlashCopy commands, maybe with slightly different parameters. Now that we have mirroring and FlashCopy established, we can start getting into Global Mirror proper. First we need to set up a Global Mirror session with the mksession command.

mksession -dev IBM.2107-75AK275 -lss 5A -volume 5A01-5AB3 01
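If your mirrored volumes span more than one LSS, you repeat mksession for each of them. As a hedged illustration, adding a second, purely hypothetical LSS 5C with volumes 5C01-5C50 to the same session would look something like this:

mksession -dev IBM.2107-75AK275 -lss 5C -volume 5C01-5C50 01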

The -dev parameter is the source machine, and the '01' on the end is a session ID. You need to run this command for every LSS that you are mirroring, but you can associate them all with the same session ID. We now have a set of mirrored and flashed disks, associated together into a Global Mirror session. Next we need to start the processing off with the mkgmir command. Note the 'g' in mkgmir; this is a Global Mirror command.

mkgmir -dev IBM.2107-75AK275 -lss 5A -cginterval 10 -coordinate 60 -drain 30 -session 01 IBM.2107-75AK275/5A:IBM.2107-75MG701/5A

Note that here we have to enter the tuning parameters for this session.

Once you have your Global Mirror session going, you can pause it with the 'pausegmir' command, then start it again with the 'resumegmir' command, check it out with the 'showgmir' and 'lsgmir' commands, check the consistency groups with the 'showgmircg' command, and finally remove it with the 'rmgmir' command.
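A few hedged examples of what some of these might look like with the subsystems used on this page; the exact parameter combinations are my assumptions, so check the DS CLI reference for your microcode level before relying on them:

pausegmir -dev IBM.2107-75AK275 -lss 5A -session 01
resumegmir -dev IBM.2107-75AK275 -lss 5A -session 01
showgmir -dev IBM.2107-75AK275 5A
rmgmir -dev IBM.2107-75AK275 -lss 5A -session 01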
