Configuring a DS8880

OK, so much for the theory, but how do you configure a DS8880 in practice?
You can either use GUI screens, or commands from a command line interface (DSCLI). I'll just discuss the CLI commands, as the GUI is supposed to be intuitive. The commands are more powerful: you can pre-define them in a file and run them as a script, and you can save the script output to a file, which is useful for auditing purposes. If you choose to use the GUI, you need to take the same steps as below, using a suitable GUI screen.
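As a sketch of the scripting approach (the file and profile names here are invented examples), you put one dscli command per line in a plain text file, then run it with the -script option and redirect the output to a log for your audit trail:

```
dscli -cfg myds8880.profile -script newranks.txt > newranks.log
```

where newranks.txt contains commands such as mkarray and mkrank, one per line.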

CLI Profiles

When using CLI commands, you should set up a profile file for every DS8880, and specify the DS8880 machine data in that profile.

The default profile is in c:\Program Files\IBM\dscli\profile. Among other things, it should contain the IP address of the Hardware Maintenance Console and the DS8880 serial number, like this

devid: IBM.2107-75BS072

You can invoke different profiles by using the -cfg parameter.
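As a sketch, a profile for this box might contain entries like the following (the HMC address is invented; check your own profile file for the full set of keywords):

```
# DS8880 profile - hypothetical values
hmc1:  10.10.10.50
devid: IBM.2107-75BS072
```

You would then pick it up with something like dscli -cfg myds8880.profile.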

Without a profile, you need to specify the hardware device in every command, like this

mkarray -dev IBM.2107-75BS072 -raidtype 5 -arsite S21

With a profile, you can just use

mkarray -raidtype 5 -arsite S21

CLI command types

CLI commands can be grouped into 6 types according to their prefix: mk (make a new object), ch (change an object), rm (remove an object), ls (list objects), show (show the details of one object), and set (set attributes of hardware objects such as I/O ports).

Defining 4 new RAID ranks

I'm assuming that your DS8880 is installed and licensed, all physical cabling is complete, and your engineer has inserted the disk groups.

To configure a DS8880 with two arrays of FC disk and two arrays of FATA disk, you go through the following steps. (This install has a definite mainframe bias, but where different actions are needed for Open Systems, they are mentioned.)

First, define the I/O ports that will be used to communicate with the DS8880. The setioport command takes two parameters: the topology, which can be one of SCSI-FCP, FC-AL or FICON, and the port number. The number of ports you define should match the number of channels that were cabled up.

setioport -topology ficon I001
setioport -topology ficon I002

Next you create the RAID arrays. To do this, you need to know which Array Sites are free and available for formatting, so you run lsarraysite -l first and note the free Array Sites.

Note the different types of diskclass: Flash, SSD (2.5 inch SSD), HighCapFlash, ENT (Enterprise), and NL (Nearline). The mkarray command needs two parameters: one to describe what type of RAID you want, and one to specify which Array Site you want to use. RAID types can be 6 or 10, or 5 if you sign off the risk.

mkarray -raidtype 6 -arsite S37
mkarray -raidtype 6 -arsite S38
mkarray -raidtype 6 -arsite S39
mkarray -raidtype 6 -arsite S40

It is always good practice to look at what you just created, with lsarray in this case, and make sure the result is what you expected.

Note that the DDMcap capacity figure is quoted in units of 10^9 Bytes, that is, a decimal Gigabyte of 1,000,000,000 Bytes. The more usual figure, and the one quoted for most Open Systems storage, is the binary unit of 2^30 or 1,073,741,824 Bytes. This can cause confusion when calculating capacities.
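To make the difference concrete, here is a short Python sketch (the 600 GB figure is just an illustrative DDM size):

```python
# Convert a capacity quoted in decimal gigabytes (10**9 bytes)
# into binary units of 2**30 bytes, as most Open Systems tools report it.
def decimal_gb_to_binary_gib(gb: float) -> float:
    return gb * 10**9 / 2**30

# A disk sold as "600 GB" shows up as roughly 558.8 binary units.
print(round(decimal_gb_to_binary_gib(600), 1))
```

So a capacity plan done in decimal units will overstate usable space by about 7% against what the operating system reports.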

Next you create the Ranks with the mkrank command, which formats the array. In this example the ranks are formatted as CKD; use -stgtype fb for Open Systems.

mkrank -array A35 -stgtype ckd
mkrank -array A36 -stgtype ckd
mkrank -array A37 -stgtype ckd
mkrank -array A38 -stgtype ckd

You can use the lsrank command to see the ranks you just created. Now you want to create four extent pools, one each of FATA and FC for each server. Each Extent Pool is associated with either Rank Group 0 or Rank Group 1, which in turn are associated with Server0 and Server1. All four Extent Pools are CKD, and all four are given names that show that they are CKD, which server they are associated with, and which tier of disks they contain.

mkextpool -rankgrp 0 -stgtype ckd ckd-S0-T1
mkextpool -rankgrp 1 -stgtype ckd ckd-S1-T1
mkextpool -rankgrp 0 -stgtype ckd ckd-S0-T3
mkextpool -rankgrp 1 -stgtype ckd ckd-S1-T3

lsextpool -l

Name       ID   stgtype  rankgrp
ckd-S0-T1  P0   ckd      0
ckd-S1-T1  P1   ckd      1
ckd-S0-T3  P2   ckd      0
ckd-S1-T3  P3   ckd      1

Strictly speaking, I should have created the extent pools before I created the Ranks, but I can now associate the Ranks with the extent pools with the chrank command.

chrank -extpool P0 R0
chrank -extpool P1 R1
chrank -extpool P2 R2
chrank -extpool P3 R3

I want to use some of the Tier 3 disks for Space Efficient pools for backups. repcap is the amount of physical space reserved for space efficient volumes, while vircap is the amount of virtual space that can be presented as disks.

mksestg -repcap 100 -extpool P2 -vircap 300
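The two numbers together set the overcommit ratio; this trivial Python sketch just restates the arithmetic of the command above:

```python
# repcap is the real backing storage, vircap the virtual capacity presented.
repcap_gb, vircap_gb = 100, 300
print(f"overcommit ratio {vircap_gb / repcap_gb:.0f}:1")
```

So hosts can see three times more space-efficient capacity than is physically reserved, and you must monitor the real usage.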

As these are CKD pools, I need to create Logical Control Units. Open Systems users can skip this step. This command will create 16 LCUs, with IDs 00-0F and subsystem IDs (SSIDs) B100-B10F. As each LCU can have 256 volumes, that allows me to address 4096 volumes.
(For information, the B1 is the storage unit identifier and the 00-0F is the logical subsystem identifier; the four hex digits combined form the SSID. The volume addresses will be A000-A0FF, A100-A1FF, etc. You will also need an IOGEN to define the 16 LCUs and the 4096 volumes to z/OS; the link between LCU numbers and volume addresses is made in the IO gen.)

mklcu -qty 16 -id 00 -ss B100
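As a sanity check on the addressing (a Python sketch, not dscli output), the LCU IDs, SSIDs and total volume count line up like this:

```python
# Enumerate the 16 LCU IDs (00-0F) and SSIDs (B100-B10F)
# created by: mklcu -qty 16 -id 00 -ss B100
lcus = [f"{i:02X}" for i in range(16)]
ssids = [f"{0xB100 + i:04X}" for i in range(16)]

print(lcus[0], lcus[-1])    # first and last LCU ID
print(ssids[0], ssids[-1])  # first and last SSID
print(len(lcus) * 256)      # addressable volumes at 256 per LCU
```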

An Extent Pool that is associated with Server1 must also be associated with odd-numbered LCUs, and a Server0 pool with even-numbered LCUs.

Now finally I get to create some volumes. The mkckdvol command needs to know which extent pool to take its extents from, how big each volume is in cylinders, and the address range of volumes to add.

mkckdvol -extpool P0 -cap 1113 -name 33901-P0-#h A000
mkckdvol -extpool P0 -cap 10017 -name 33909-P0-#h A001-A09F
mkckdvol -extpool P0 -cap 30051 -name 339027-P0-#h A0A0-A0CF
mkaliasvol -base A000 -order decrement -qty 32 A0FF

The commands above show one 3390-1 being defined (the smallest size available, for use as a GDPS utility volume), 159 mod 9s, 48 mod 27s and 32 PAV aliases. Each CKD volume gets a nickname assigned. The names chosen show the volume size and the assigned extent pool - you can make up your own, of course. The #h on the end of the name means use the hexadecimal volume number as part of the volume name. The final numbers represent volume addresses: A000 is one single volume, A001-A09F is a range of volumes.
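You can verify the counts by treating the addresses as hex numbers; this Python sketch reproduces the arithmetic for the ranges above:

```python
def count(vol_range: str) -> int:
    """Count volumes in a dscli-style address range like 'A001-A09F'."""
    if "-" not in vol_range:
        return 1
    lo, hi = vol_range.split("-")
    return int(hi, 16) - int(lo, 16) + 1

print(count("A000"))       # the single 3390-1
print(count("A001-A09F"))  # the mod 9s
print(count("A0A0-A0CF"))  # the mod 27s
```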

Open Systems specific commands

If you are creating FB volumes, then you need some extra steps:

Create Volume Groups

Server hosts use one of two ways to discover disks: SCSI MAP256 or SCSI MASK. You need to know which type your server uses, and you can find this out with the commands

lshosttype -type scsimask
lshosttype -type scsimap256

These commands will list all the server types that use each discovery method. If your server is Windows, then it uses SCSI MAP256, so the command to create a volume group called VG-W01 is

mkvolgrp -type scsimap256 VG-W01

Next you would create the volumes for that volume group. This command will create eight 200 GB volumes in the VG-W01 volume group. Note that the command for FB volumes is mkfbvol.

mkfbvol -extpool P0 -name W2K#h -cap 200 -volgrp VG-W01 1200-1207

Set the IO port topology

For each port you are going to utilise, use the setioport command to define which topology you want to use: SCSI-FCP, FC-AL or FICON. Use the lsioport command to see what is already configured.

setioport -topology scsifcp I0002

Create the Open Systems Clusters

Use the mkcluster command to create a cluster that will contain a group of hosts, or lscluster to see what is there already.

mkcluster Cluster_1

Create the Open Systems Hosts

You need to create two default server types, for Windows and Linux, using the mkhost command

mkhost -type "Linux Server" -cluster Cluster_1
mkhost -type "Windows Server" -cluster Cluster_1

Assign Open Systems hosts to Ports

Finally, you need to create Host Connections using the mkhostconnect command. Don't use the -ioport option, as this effectively creates LUN masking, which is best done with SAN zoning. Note that a volume can belong to more than one volume group.

mkhostconnect -hosttype "Windows Server" -wwname 200000E012345678 -host host_name

These examples give a flavour of how a Windows setup could work, but the commands and options will be different for the various host types.


Lascon Updates

I retired 2 years ago, and so I'm out of touch with the latest in the data storage world. The Lascon site has not been updated since July 2021, and probably will not get updated very much again. The site hosting is paid up until early 2023 when it will almost certainly disappear.
Lascon Storage was conceived in 2000, and technology has changed massively over those 22 years. It's been fun, but I guess it's time to call it a day. Thanks to all my readers in that time. I hope you managed to find something useful in there.
All the best