Saturday, December 27, 2008

Dataset ENQ and CONtention

If you are trying to edit or view a dataset and you get a message "data set in use", how will you find who is using it? ISRDDN is the answer.

Type TSO ISRDDN (in some shops it's just ISRDDN - no TSO prefix required) and hit enter. Then type ENQ and hit enter.

You will get the ENQ display screen.


Type your dataset name (or prefix) in "Minor name prefix" and hit enter.

You'll get a list of all the users/jobs that are viewing or editing the dataset(s) under the Job Name field. These are color coded to indicate the following ENQ status:

GREEN - Shared
REVERSE GREEN - Waiting for shared
RED - Exclusive
REVERSE RED - Waiting for exclusive

For the System ENQ List, you may specify Major and Minor name prefixes, as well as specific job or user prefixes, to reduce the size of the list.

You can use the RESET command to reset these values to defaults.

On the ENQ display screen, you can type ALL to view all system ENQs or RESET to view the SYSDSN ENQs for your user id.

The System ENQ Contention display (command Con) shows all ENQ contention on the system.

Thursday, December 11, 2008

Fast Load abend - IEC146I 513-04

You are running a Platinum Fast Load job and it fails with an "IEC146I 513-04" error.

IEC146I 513-04 means
"An OPEN macro instruction was issued for a magnetic tape data set allocated to a device that already has an open data set on it. Make sure that the first data set is closed before the second is opened, or allocate the second data set to a different device. This error may be due to a previous abnormal end associated with the same unit in the same step. If so, correct the error causing the previous abnormal end."

Problem:
As per the above message, you are trying to open the dataset for two different purposes in the same job step.

In your LOAD card, see what you've specified for INDDN. In my case, I had specified INDDN SYSREC.

The Fast Load user guide says "If you will be performing an OUTPUT-CONTROL ALL type load, and you want to specify INDDN SYSREC, you must also specify UNLDDN and change its default value. Otherwise, Unicenter Fast Load will try to use SYSREC as an input and a work file."

OUTPUT-CONTROL ALL is the default. I noticed that I didn't have a UNLDDN DD name in my JCL.

Fix:
Either specify a UNLDDN DD in your JCL step or change INDDN to something other than SYSREC. I changed it to SYSULD and it worked.

Monday, December 1, 2008

CA Intertest (Batch) Tips

If you use CA's Intertest for Batch to debug your batch COBOL programs or stored procedures, read on.

There are a few commands that you will use frequently. Assign these commands to PF keys for ease of use:

STEP 1 - Set the number of steps to execute in each run to 1 (I usually set it to PF4)
GO - Run as many steps as described in the STEP command (I usually set it to PF5)
RUN - Run until the next breakpoint (I usually set it to PF6)
KUP - Scroll Up in the Data Display Window (I usually set it to PF17)
KDOWN - Scroll Down in the Data Display Window (I usually set it to PF18)

Thursday, November 20, 2008

Sample Job: BMC Unload Plus

Here is a sample job to run BMC's Unload Plus utility:

Sample Job: CA/Platinum Fast Unload

Here is a sample job to run CA/Platinum Fast Unload utility:

Wednesday, November 12, 2008

PRINT job output and SYSLOG from SDSF

Say you have run a job and you want to save the job output. How do you do it? There are two ways: either save a softcopy in a dataset or take a printout.

Note: the following methods work only if you have SDSF.

To save job output to a dataset:
  • Go to SDSF
  • Type XDC beside the DD name or the job name
  • It will pop up a "SDSF Open Print Data Set" menu.
  • Fill in the required fields such as Data set name, Disposition, Space units, Primary quantity, etc., and hit enter. When it is done copying the data to the dataset, you'll get a "Print Closed" message in the top right corner of the screen.


To print job output on a mainframe printer:
  • Go to SDSF
  • Type XSC beside the DD name or the job name
  • It will pop up a printer menu. Fill in the required fields such as Class, Copies, and other fields and hit enter. The output goes to the held queue (usually)
  • Go to the Held queue and release it by typing 'O'

Sometimes it is required to print the SYSLOG, or part of it, to a data set. To print part of the SYSLOG to a preallocated data set:
  • Go to SDSF
  • Type the PRINT Command like this: PT ODSN SDSF.PRINT * MOD
  • Then type the PRINT command with the date and time for which you need to print the log: PT 06.00.00 10/15/2008 10.00.00 10/15/2008
  • Then type this command: PT CLOSE

Thursday, November 6, 2008

Lotus Notes - Save Window State

Well, this is not a mainframe tip. But I use Lotus Notes everyday and today I found something useful that I want to share.

If there are databases or group mail files that you open every day in Lotus Notes, you can setup Lotus Notes to open these on startup.
  • Open all of the windows you want opened every time you open Lotus Notes
  • Click on File and select Save Window State
Now every time you log into Lotus Notes your selected items will automatically open.

If Save Window State option is greyed out, click on File, Preferences, User Preferences and check that "Save Window State on Exit" is not checked under "Startup Options".

Tuesday, November 4, 2008

APF Authorization

When someone says, "you need to APF authorize this dataset", what do they mean?

Well, APF means "Authorized Program Facility". In a z/OS environment, APF is a facility that permits the identification of programs that are authorized to use restricted functions.

To find the datasets that are APF authorized:

1. Type TSO ISRDDN in your ISPF session (some shops need just ISRDDN with no TSO prefix) and hit enter.

2. Type APF and hit enter. It'll bring up a list of all datasets that are APF authorized.

Remember that if you want to use an APF-authorized dataset in a job's STEPLIB, all the datasets in the STEPLIB concatenation must be APF authorized - if even one is not, the entire concatenation loses its authorization.

Monday, November 3, 2008

DB2 SQLCODE=-950

We use BCV split (Business Continuance Volume) technology to clone our Peoplesoft DB2 production subsystem to DB2 test subsystem. After we cloned one of our subsystems, we noticed that our programmers were not able to access QMF. They started getting the error message:

"DSQ16950 The location name is not known to the local DB2 subsystem."

This was the error message from QMF. The actual error message issued by DB2 was "SQLCODE -950" which means "THE LOCATION NAME SPECIFIED IN THE CONNECT STATEMENT IS INVALID OR NOT LISTED IN THE COMMUNICATIONS DATABASE".

When you get this message check SYSIBM.LOCATIONS to see if a row is there for the DB2 subsystem it is complaining about. If there is one and the values are correct, check the application PLAN. In our case we checked QMF PLAN.

Check if you have the right value for CURRENTSERVER in your BIND PLAN. If you don't have the correct value, rebind the plan with correct value for this parameter. If you are on a local server (on a local LPAR), just remove this parameter. Removing this parameter and rebinding the plan fixed our issue with QMF.

Sunday, November 2, 2008

Platinum Fast Load Error "PFL8013E"

If a Fast Load job fails with PFL8013E (Error Description: A table within Table Space that has multiple versioned rows is not supported for this release of Unicenter Fastload. Reason: This DB2 V8 feature is not yet supported by Unicenter Fast Load. Processing terminates), it could be because the tablespace is in AREO (Advisory Reorg pending) status. This happens in a DB2 V8 NFM subsystem, with Fast Load (version r11.5) Resume NO REPLACE option specified.

How will you fix this? There are a few options that you can try:

1. Run REORG utility on the tablespace and retry loading.

2. If that doesn't work, drop and recreate the table and then load the table using Fast Load

3. If that doesn't work, try IBM LOAD. That should definitely work.

Thursday, October 30, 2008

CICS Scheduled task

I have a CICS program which is invoked when CICS is started (either a cold start or a warm start) through a PLT entry "DFHPLT TYPE=ENTRY,PROGRAM=program_name".

Our shop cycles CICS every Sunday. But I wanted to run this program every day at 10 a.m. and again at 10 p.m. So, what I did was, I coded this piece at the end of the program (just before "EXEC CICS RETURN END-EXEC"):

IF WS-TIME > '21.59.59'
    EXEC CICS START TRANSID(WS-THIS-TRAN)
         REQID(WS-REQID)
         AT HOURS(10)
    END-EXEC
ELSE
    EXEC CICS START TRANSID(WS-THIS-TRAN)
         REQID(WS-REQID)
         AT HOURS(22)
    END-EXEC
END-IF
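The scheduling decision in the COBOL above boils down to one comparison on a 'HH.MM.SS' time string. Here is a minimal Python sketch of that logic (the function name and the string format are just my illustration of what the program does):

```python
def next_start_hour(ws_time: str) -> int:
    """Mirror of the COBOL IF block: ws_time is 'HH.MM.SS'.

    After 21.59.59 the evening run has already happened, so the
    next START is scheduled AT HOURS(10); otherwise it is the
    22:00 run that is scheduled next.
    """
    return 10 if ws_time > '21.59.59' else 22

print(next_start_hour('22.15.00'))  # → 10
print(next_start_hour('10.05.00'))  # → 22
```

Because 'HH.MM.SS' strings sort the same way the times do, a plain string comparison is enough here.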


Thanks to my colleague who taught me this technique.

Saturday, October 25, 2008

ISRDDN

When you log on to your mainframe TSO (Time Sharing Option) session, you can use the TSO supplied commands. Apart from those commands, if you know how to code in REXX or CLIST, you can come up with your own commands.

There are also occasions when you use commands written by third-party vendors. For example, Quick Reference - you invoke the product by typing "QW keyword". Sometimes you wonder how TSO knows these commands exist, or where it gets the source/load from - the source could be in a lot of different datasets.

When you log on to your TSO session, the system preallocates a bunch of datasets. This is determined by your logon PROC. Usually the system programmer determines the appropriate logon PROC for each user. For example, a developer might need one set of datasets preallocated, while a DBA might need a different set. So, when you type QW on the command line, TSO searches these datasets and executes it.

To find all the datasets allocated to your TSO session, type ISRDDN (in some shops, you may have to type TSO ISRDDN) on the command line and hit enter. You will get all the allocated datasets and their corresponding DD names. If you want to find the source of a particular CLIST, REXX exec, panel, etc., just type "M membername" (without quotes), e.g. M qw will search for the source code or load module for QW.

Wednesday, October 22, 2008

QMF Tips

If you want to delete/drop a saved data table in QMF, read on.

After doing a "LI TABLES" to list all the saved data tables, if you want to drop one of the tables, use command "ERASE" (DROP, DELETE etc are not for this purpose).

BMC UNLOAD PLUS Tips

BMC Unload Plus is a high-performance DB2 UNLOAD utility (just like CA/Platinum FAST UNLOAD utility). The performance of this utility can be controlled by proper use of the keyword "DIRECT".

DIRECT YES: UNLOAD PLUS reads data directly from the table space data set or image copy data sets to unload the data, using a SELECT-like syntax for data selection. This SELECT functionality is a limited subset of the functionality that is normally available in DB2 SQL. The benefit of the DIRECT YES mode is maximum performance when unloading large volumes of data.

DIRECT NO: UNLOAD PLUS processes the SELECT statement and reads the data using DB2 dynamic SQL. This mode is not a high-performance solution for unloading large volumes of data. The benefit of the DIRECT NO mode is a full range of DB2 SQL SELECT functionality, including joined tables, subqueries, and so on, together with UNLOAD PLUS features such as DB2 parallelism, data type conversions, and output formatting.

Thursday, October 16, 2008

DB2 z/OS DBRM Layout

If you look at a DBRM member, especially in DB2 for z/OS V8 or higher, you cannot make much out of it. It looks like garbage at worst, or like a loadlib member at best. But it's not very difficult to find what's in there if you know a little bit of assembler.

The layout of DBRM can be found in "high-level-qualifier.SDSNMACS(DSNXNBRM)". This is in macro format.

DB2 ROUND_TIMESTAMP function

ROUND_TIMESTAMP(CURRENT TIMESTAMP,'MI'):

The ROUND_TIMESTAMP function returns a timestamp that is the expression rounded to the unit specified by the format-string. In this case, MI means minutes.

For example, the output of

SELECT ROUND_TIMESTAMP('2008-10-16-14.46.22.121222','MI')

FROM SYSIBM.SYSDUMMY1

looks like this:
2008-10-16-14.46.00.000000
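If you want to check the rounding behavior off-host, here is a minimal Python sketch of what ROUND_TIMESTAMP with 'MI' does to the example above - my assumption, matching the result shown, is that 30 or more seconds round the minute up; this is only an illustration, not the full format-string support:

```python
from datetime import datetime, timedelta

def round_timestamp_mi(ts: datetime) -> datetime:
    """Round a timestamp to the nearest minute, like ROUND_TIMESTAMP(..., 'MI')."""
    # Add 30 seconds, then truncate the seconds and microseconds.
    bumped = ts + timedelta(seconds=30)
    return bumped.replace(second=0, microsecond=0)

# The example from the post: 14.46.22 rounds down to 14.46.00
print(round_timestamp_mi(datetime(2008, 10, 16, 14, 46, 22, 121222)))
# → 2008-10-16 14:46:00
```

A timestamp such as 14.46.45 would round up to 14.47.00 under the same rule.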

Tuesday, October 14, 2008

Who updated that DB2 table? (how to read DB2 log?)

There are situations when people drop and re-create a DB2 object, such as a Stored Procedure (usually in the development or test environment) using a common shared user-id. It may be necessary to find out who made the change, but may not be possible, because the person used a CURRENT SQLID = 'shared id' or something like that.

Or, sometimes, someone might have updated (or deleted or inserted into) a table without logging it anywhere (or so they thought). There is no straightforward way to find who updated this table. But I didn't say it is not possible.

To find this out, you may need to read the DB2 log. Here are the steps to do just that:
  1. Run a batch job that executes DSNJU004 against the correct bootstrap dataset (BSDS). This program reads the BSDS and prints its contents, including the active and archive log inventory.
  2. Get the archive log dataset name that was created around the time the DB2 object was changed. If the log covering that time is still active, archive it manually (using the DB2 ARCHIVE LOG command)
  3. Run a batch job that executes DSN1LOGP with the SUMMARY(ONLY) option. This program reads the log, formats it, and prints it to the output dataset.
  4. In this job output, search for the database in which the change was made. Convert the STARTLRSN value to a readable timestamp (I have a REXX script to do it; if you need it, please let me know). If this matches the time the DB2 object was changed, get the other details such as CONNID, AUTHID, etc.
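The STARTLRSN conversion in step 4 above can also be done without REXX. A hedged sketch: in a non-data-sharing DB2 V8 subsystem the 6-byte LRSN is (as I understand it) the high-order portion of an 8-byte STCK/TOD value, where bit 51 ticks once per microsecond and the epoch is 1900-01-01. Assuming exactly that layout:

```python
from datetime import datetime, timedelta

TOD_EPOCH = datetime(1900, 1, 1)

def lrsn_to_timestamp(lrsn_hex: str) -> datetime:
    """Convert a 12-hex-digit LRSN to a readable timestamp.

    The LRSN is padded on the right to a full 8-byte TOD value;
    bit 51 of the TOD is the microsecond bit, so shifting right
    by 12 bits yields microseconds since 1900-01-01.
    """
    tod = int(lrsn_hex, 16) << 16          # pad 6 bytes out to 8 bytes
    microseconds = tod >> 12
    return TOD_EPOCH + timedelta(microseconds=microseconds)
```

Because the low two bytes of the TOD are discarded in the LRSN, the result is accurate only to within a few microseconds, which is plenty for matching a change to a job.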
Note: If you need sample JCLs for the steps mentioned in this blog entry, you can contact me @ ajjuba@yahoo.com.

Sunday, October 5, 2008

LUWID to IP Address conversion

Often it is required to find the server from which a distributed query comes from. In those cases, find the LUWID from DB2 MSTR: e.g.,

STC13507 DSNL032I +DB2P DSNLZDJN DRDA EXCEPTION CONDITION IN 324
REQUEST FROM REQUESTOR LOCATION=10.130.11.23 FOR THREAD WITH
LUWID=GA820B17.O412.0F9017030439=152303
REASON=00D35601

In the above example, the LUWID is GA820B17. To convert this to an IP address, start from the 2nd character and convert as follows:
Decimal (A) = 10
Decimal (82) = 130
Decimal (0B) = 11
Decimal (17) = 23
So the IP address is 10.130.11.23.
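The hand conversion above is easy to script. My assumption, based on this example, is that the first character is a letter substituted for the leading hex digit so the name satisfies LU naming rules (G through P standing for 0 through 9), and the remaining characters are the IP address in hex:

```python
def luwid_to_ip(netid: str) -> str:
    """Convert the 8-char network-id of a LUWID (e.g. 'GA820B17') to a dotted IP.

    The first character is a letter standing in for a hex digit
    (G..P represent 0..9); the remaining characters are plain hex.
    """
    first = netid[0]
    if first in 'GHIJKLMNOP':
        first = str('GHIJKLMNOP'.index(first))
    hex_ip = first + netid[1:]
    # Split the 8 hex digits into four octets.
    return '.'.join(str(int(hex_ip[i:i + 2], 16)) for i in range(0, 8, 2))

print(luwid_to_ip('GA820B17'))  # → 10.130.11.23
```

This reproduces the worked example: GA820B17 becomes 0A.82.0B.17, i.e. 10.130.11.23.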

Visual Explain Tips

This blog entry explains how to get the information necessary to setup your connection from Visual Explain (VE) to DB2 for z/OS.

When you bring up VE V8, the first screen you are presented with is the "List Databases" panel. This screen lists all of the databases VE is aware of that you can connect to. However, if you are using VE for the first time, you will have to set up your own connections.

On this screen click the "Catalog" button. The "Catalog a Database" screen is displayed. On the top of this screen, there are four mandatory input fields (boxes): "Host Name", "Database Name", "Database Alias", and "Port Number". Now here's the cool part.

Display your DB2 xxxxMSTR address space. Once in the JESLOG portion of xxxxMSTR, search for the DB2 message DSNL004I. It should look similar to the following, only with your values filling in the fields:
DSNL004I = DDF START COMPLETE 837
LOCATION //your location name here//
LU USIBMSY.SYEC1DB2
GENERICLU -NONE
DOMAIN //your hostname here//
TCPPORT //your TCP port number here//
RESPORT 5001

This message has everything you will need to complete the "Catalog a Database" screen. The LOCATION has the value that you need to specify in the "Database Name" box. DOMAIN has the value you need to complete the "Host Name" box. TCPPORT has the value for "Port Number".

That leaves "Database Alias". This can be anything you want to specify. I usually use the same value I specified for "Database Name". I would also strongly suggest filling in the "Comment" box so you have some idea later on what this entry was for.

Tuesday, September 30, 2008

CA/Platinum Fast Unload tips

Today I'm going to discuss a vendor product - CA (previously Platinum)'s DB2 "Fast Unload". It is a high-performance DB2 data unload utility. If you use Fast Unload at your shop, go ahead and read these tips:

Fast Unload is much faster than DB2's DSNTIAUL. One of the reasons is that it does not use DB2 SQL to fetch the data. Instead, it fetches the data directly from the underlying VSAM dataset. But this is not always possible; sometimes it has to fetch the data using SQL. This is controlled by the keyword "SQL-ACCESS".

SQL-ACCESS NONE:
When you specify "NONE" for SQL-ACCESS, it does not use SQL access. If the select statement cannot be processed using VSAM or EXCP, an error message is issued & processing ends. This is the DEFAULT.

Note: You must specify NONE to unload data from Image Copies, Concurrent Copies & DSN1COPYs.

SQL-ACCESS ONLY:
When you specify this option, it uses only SQL access to unload the tablespaces. This option lets you use any valid SELECT statement. Only one SELECT statement is processed at a time. The statements using VSAM or EXCP are processed first, followed by the SQL ACCESS SELECT statements, which are processed in the order specified in the control statements. You cannot include an ORDER-CLUSTERED clause, or the PART or OBID parameters with this option. You must specify SQL-ACCESS ONLY to use TRIM with ASCII objects. With SQL-ACCESS ONLY, DB2 determines locking. To preserve data integrity, you must specify LOCKSIZE TABLE on the tablespace.

SQL-ACCESS EXTENSION:
When you specify "EXTENSION" for SQL-ACCESS, it uses EXCP or VSAM processing if possible; otherwise uses SQL access. If you specify SQL-ACCESS EXTENSION, only those SELECT statements that require SQL access use it; all other SELECT statements are multitasked & unloaded using VSAM or EXCP. If you specify SQL-ACCESS EXTENSION with the PART or OBID parameter & SQL-ACCESS is required, an error message is issued and processing terminates.

Tuesday, September 23, 2008

How to find DBID, PSID, OBID of a tablespace

If you dropped a table and you took an image copy BEFORE dropping it, you can recover the data into a newly recreated table if you know the DBID, PSID, and OBID of the dropped table. To find these values, run DSN1COPY against the image copy dataset with the FULLCOPY parm, specifying 0001 for the source DBID, PSID & OBID. The first run will report the correct DBID. Replace 0001 with this DBID and run again to get the PSID; replace 0001 with this PSID and run again to get the OBID.

For an existing table, to obtain the DBID and PSID, run the following query:

SELECT DBNAME, DBID, NAME, PSID
FROM SYSIBM.SYSTABLESPACE
WHERE DBNAME = 'database-name'
AND NAME = 'table_space-name'

To obtain the table OBID, run this query:

SELECT NAME, OBID
FROM SYSIBM.SYSTABLES
WHERE TSNAME = 'table_space-name'
AND CREATOR = 'creator-name'

Thursday, September 18, 2008

ISPF/TSO tips

  • PANELID ON will show the panel name in the top left corner
  • PFSHOW ON will show the PF Keys assignment at the bottom of the screen
  • You are editing a PDS (partitioned dataset) member; you've made a significant number of changes; now you are thinking about creating a new member with these changes rather than saving them in the member you are editing. Can you do this? Yes, you can. Just issue this command: CRE .ZF .ZL memname. And don't forget to issue the CANCEL command on the member you are editing; otherwise, you might save the changes accidentally (if AUTOSAVE is turned ON in your ISPF profile)
  • Talking of the ISPF profile, how do you know which options are available and which are turned on or off? Issue PROF on a dataset while in VIEW or EDIT mode.
  • C ALL 'abc' 'xyz' will replace all occurrences of the string abc with xyz
  • C ALL 'abc' 'xyz' n will replace all occurrences of the string abc with xyz in column n
  • C ALL 'abc' 'xyz' n1 n2 will replace all occurrences of the string abc with xyz between columns n1 and n2
  • C ALL P'¬' 'abc' n1 n2 will replace ANY character found between columns n1 and n2 with abc. Use this very carefully.
  • X ALL 'abc' will exclude all the lines that have the string 'abc'
  • COMP X 'dataset(optional_member_name)' will display the differences between the current PDS/PS and the compared dataset, excluding all the lines that match. This is very useful if you are making changes to source code and want to compare it with the baseline. There is another powerful feature available when comparing two datasets: you can bring in changes found in the compared dataset that are not in the current dataset. If you would like to know how to do this, please send me an email or post a comment

"HIDE EXCLUDE"

When you are editing a dataset in ISPF, you can exclude lines by typing 'x' in the line number area. When you do that, you see the excluded lines as a row of dots. You now have the option of entering 'HIDE EXCLUDE' on the command line. The line of dots disappears and is replaced with a solid line in the number area.

RESET will behave as before; RESET HIDE will return the row of dots to your display.

Wednesday, September 17, 2008

How to query tables from different DBMSs?

There are times when you need to query tables from different Database Management Systems (DBMSs). For example, one type of information or data may reside in a DB2 UDB table and another in a DB2 for z/OS or even an Oracle table. How will you write queries that join these tables and fetch the desired results?

The answer is 'federation'. If you have DB2 UDB, and want to join a table from Oracle or DB2 for z/OS, create a federated table image of the table that resides in the other DBMS. Once you create a federated table, you can write queries treating it like a table that resides on the local DB2 UDB server.

For example:
SELECT L.col1, L.col2, F.cola, F.colb
FROM Local_Table L, Federated_Table F
WHERE L.col3 = F.colc

Remember that for "federation" to work, you need DB2 Connect or a similar product.

You can find more details here:
IBM Federated Database Technology

Monday, September 15, 2008

Migrate dataset to tape

The HMIGRATE command with no option specified migrates a dataset to migration level 1 (shown as MIGRAT in dataset lists). To migrate a dataset directly to TAPE (ML2), issue the following command:

HMIG 'dataset name' ML2

HRECALL with DFDSS option

When you need a dataset that has been migrated to tape, you can issue the HRECALL command. However, if the recalled dataset doesn't fit on one volume, the command fails. To get around this, you can have it span multiple volumes using the following command:

HSEND RECALL 'migrated-dataset-name' DFDSSOPTION(VOLCOUNT(ANY))

'TTR' in PDS and PDSE

Partitioned Data Set uses an addressing scheme called TTR (Track-Track-Record) which is based on the DASD geometry. TTR addresses are stored in hexadecimal format. So an address of X'002E26' would mean track number X'002E' and record X'26'. The name TTR comes from the fact that the first two bytes of the address denote the track number and the third byte denotes the record number. This dependence on the DASD geometry makes it very difficult to migrate a PDS from one type of DASD to another, e.g. from 3380 to 3390.

The PDSE addressing scheme is not dependent on the physical device geometry. It uses a 'simulated' 3-byte TTR address to locate the members & the records, which makes migration easier. Incidentally, this simulation of addresses places some limitations on the number of members and the number of records per member in a PDSE. A TTR address of X'000001' in a PDSE points to the directory. The addresses from X'000002' to X'07FFFF' point to the first record of each member, which is why there is a limit of 524,286 members. The addresses from X'100001' to X'FFFFFF' point to records within each member, which is why there is a limit of 15,728,639 records in each member.
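The member and record limits quoted above fall straight out of those address ranges; a quick Python check (nothing z/OS-specific here, just the hex arithmetic):

```python
# PDSE simulated-TTR address ranges described above.
member_addresses = 0x07FFFF - 0x000002 + 1   # TTRs pointing at each member's first record
record_addresses = 0xFFFFFF - 0x100001 + 1   # TTRs pointing at records within a member

print(member_addresses)  # → 524286 members
print(record_addresses)  # → 15728639 records per member
```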