Friday, October 1, 2010

SQLCODE -331

Recently I altered a tablespace's CCSID from EBCDIC to UNICODE. This tablespace had a PLAN_TABLE in it. After the alter, I couldn't retrieve certain rows from this PLAN_TABLE; I started getting SQLCODE -331. The message looked like this:

"CHARACTER CONVERSION CANNOT BE PERFORMED BECAUSE A STRING, POSITION43 CANNNOT BE CONVERTED FROM 367 TO 37, REASON 12"

Position 43 refers to the column IBM_REQUIRED_DATA in the PLAN_TABLE. As a short-term fix, I blanked out the value in that column, which made the error go away for that row. However, the error kept appearing for different programs/queries, so I altered the column attribute for IBM_REQUIRED_DATA from "FOR SBCS DATA" to "FOR BIT DATA". That fixed the problem for good.
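
The alter would be along the lines of the sketch below. The creator qualifier and the CHAR(254) length are assumptions based on a typical PLAN_TABLE layout, so verify them (and your DB2 version's rules for altering a column's subtype) before running anything like this.

-- Keep the column length, change only the subtype to FOR BIT DATA
ALTER TABLE creator.PLAN_TABLE
  ALTER COLUMN IBM_REQUIRED_DATA
  SET DATA TYPE CHAR(254) FOR BIT DATA;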

Tuesday, September 7, 2010

SMP/E error "NO APPLICABLE ++VER"

I was trying to apply a PTF, BPEnnnn, to a BMC product called "Snapshot Upgrade Feature". The SMP/E RECEIVE job failed with the message "NO APPLICABLE ++VER". It means I did not have the correct version of the base product that the PTF applies to. I checked the technical bulletin that BMC sent me; it said I should be on version 5.6 of the product to apply this PTF. I browsed my SMPPTFIN dataset, searched for the PTF BPEnnnn, and found the FMID. It was one version prior to the one mentioned in the technical bulletin. It was clearly my mistake that I didn't read the technical bulletin carefully before doing the SMP/E RECEIVE, but I learned something new.

Saturday, July 3, 2010

SQLCODE -518

The reason mentioned in the DB2 manual for SQLCODE -518 is "THE EXECUTE STATEMENT DOES NOT IDENTIFY A VALID PREPARED STATEMENT"

You may get this SQLCODE for many reasons, and I don't know them all. Here is what happened in my case. When I connected to a remote DB2 database (on LUW) from a COBOL program on the mainframe, the CONNECT was successful. However, when I tried to run an SQL statement against that database, I got this SQLCODE.

After researching for a while, I found the cause in an existing SQL statement between EXEC SQL and END-EXEC. The statement worked fine as written. But when I commented out one of the WHERE clause lines (using an asterisk in column 7), it gave me this error. I moved the commented-out line to after the END-EXEC and it started working again.

Bottom line: DB2 doesn't like COBOL-style comment lines inside EXEC SQL and END-EXEC when the SQL is run against a remote DB2 database.

Tuesday, April 20, 2010

SQLCODE -804

SQLCODE -804 means something has gone wrong with the SQLDA. Since the SQLDA is created internally by DB2 for static COBOL programs, with the generated fields sitting at the end of the WORKING-STORAGE SECTION, it is possible for an out-of-range COBOL subscript to overwrite them.

If your programmer complains about this SQLCODE, have him/her check for a subscript that runs past the end of a WORKING-STORAGE table, increase the table size (the OCCURS ... 'TIMES' value) if needed, and recompile the program.

Friday, April 16, 2010

DB2_ALL to run a DB2 command on all partitions

DB2 LUW is all Greek and French to me. Today I was trying to rename a column, which on this system meant dropping and re-creating the table. When I issued the DROP command, I got the message "SQL0290N Table space access is not allowed." I checked the tablespace status; it was in 'backup pending' state. So I issued the BACKUP DATABASE command for that tablespace, and the tablespace state then showed normal. However, when I reissued the DROP command, I got the same SQL0290N.

After some research, I found that on a partitioned database, issuing a DB2 command from the DB2 command line does NOT run it on ALL partitions. I had to use the DB2_ALL utility. However, DB2_ALL doesn't work from the DB2 command line.
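
In hindsight, a per-partition look at the tablespace state would have shown which partitions were still in backup pending. On DB2 9.1 or later, a query along the lines of the sketch below against the SYSIBMADM administrative views might do it; treat it as an assumption to verify against your release (the DBPARTITIONNUM column in particular), not as a tested recipe.

-- Sketch: show each tablespace's state on every database partition
SELECT DBPARTITIONNUM, TBSP_NAME, TBSP_STATE
  FROM SYSIBMADM.TBSP_UTILIZATION
 ORDER BY DBPARTITIONNUM, TBSP_NAME;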

So I invoked DB2 Task Center, created a new task, and put the following in the Command Script:

DB2_ALL "DB2 BACKUP DATABASE database TABLESPACE tablespace ONLINE TOD:\DB2\BACKUP\database COMPRESS"

Then I saved the task, right-clicked on it, and clicked Run Now. After it ran, I checked "Show Results" and confirmed that the backup was successful. After that, I reran the DROP and CREATE TABLE commands successfully.

Phew !!

Friday, March 5, 2010

JCL to find if a dataset is empty

In your batch job, if there is a need to check for an empty dataset, you can do it using IDCAMS.


//*-------------------------------------------------------------------*/
//* This JCL is to find if a file is empty or not. If there are no */
//* records, the return-code will be 4. */
//*-------------------------------------------------------------------*/
//STEP01 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//IN1 DD DISP=SHR,DSN=your.dataset.name
//SYSIN DD *
 PRINT INFILE(IN1) COUNT(001)
/*


If you have ICETOOL at your site, you can use that as well.

//*-------------------------------------------------------------------*/
//* This JCL is to find if a file is empty or not using ICETOOL. */
//* If there are no records, the return-code will be 12 */
//*-------------------------------------------------------------------*/
//STEP01 EXEC PGM=ICETOOL
//TOOLMSG DD SYSOUT=*
//DFSMSG DD SYSOUT=*
//IN DD DISP=SHR,DSN=your.dataset.name
//TOOLIN DD *
COUNT FROM(IN) EMPTY
/*

Tuesday, February 9, 2010

Gmail Tips

I know, this post has nothing to do with mainframes, but I just thought I would pass this around in case anyone finds it useful.

I've started using my Gmail ID very frequently because of all the good features it offers. However, there are a few things missing from Gmail. One of them is that you cannot sort emails by name, subject, etc. But I found a way to achieve similar results: being a search giant, Google seems to treat email like just another thing to search.

To see only the emails under the label (folder) "DB2", type label:DB2 in the search field and click Search Mail.

To narrow that down to emails under label "DB2" with subject "DDL Lookup", type label:DB2 subject:DDL Lookup in the search field and click Search Mail.

To get all emails from "Aji", type from:Aji in the search field and click Search Mail.

Saturday, February 6, 2010

UNIQUE WHERE NOT NULL - Explained

I always had confusion over the meaning of indexes created with the UNIQUE WHERE NOT NULL clause on DB2 for z/OS. The manuals don't seem to do a good job of explaining it. I understand it now, and here is how it works.

It means it's OK to have multiple rows with a null value in one or more of the columns included in the index, but it's not OK to have two rows where the combination of indexed column values is the same and none of the columns is null. I tested this on a three-column index. I was able to insert the rows shown below just fine (a dash represents a null); the only duplicate exception I got was when COL1, COL2 and COL3 held the same values in two different rows and none of the three columns contained a null.



COL1 COL2 COL3
---- ---- ----
A B C
A B -
A B -
A - -
A - -
- - -
- - -

CREATE TABLE PRE.TEST
(COL1 CHARACTER(1) FOR SBCS DATA WITH DEFAULT NULL
,COL2 CHARACTER(1) FOR SBCS DATA WITH DEFAULT NULL
,COL3 CHARACTER(1) FOR SBCS DATA WITH DEFAULT NULL
)
IN DB.TS;

CREATE UNIQUE WHERE NOT NULL INDEX PRE.IDX1
ON PRE.TEST
(COL1 ASC
,COL2 ASC
,COL3 ASC)
USING STOGROUP ASTERISK
PRIQTY 12
SECQTY 10800
ERASE NO
FREEPAGE 0
PCTFREE 0
BUFFERPOOL BP1
CLOSE YES
PIECESIZE 2097152 K;
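
To make the behavior concrete, here is a minimal sketch of inserts against PRE.TEST matching the results above; only the very last statement should fail.

-- Duplicate keys are allowed as long as at least one indexed column is null
INSERT INTO PRE.TEST VALUES ('A', 'B', NULL);
INSERT INTO PRE.TEST VALUES ('A', 'B', NULL);
INSERT INTO PRE.TEST VALUES (NULL, NULL, NULL);
INSERT INTO PRE.TEST VALUES (NULL, NULL, NULL);

-- A fully non-null key must be unique
INSERT INTO PRE.TEST VALUES ('A', 'B', 'C');
INSERT INTO PRE.TEST VALUES ('A', 'B', 'C');   -- this one gets the duplicate error, SQLCODE -803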

Friday, January 22, 2010

SMP/E Errors

Well, I've been asked to install Omegamon XE for DB2 PM in our shop. I downloaded the required components from the IBM Shopz website and did an SMP/E RECEIVE. When I started the SMP/E APPLY (with the CHECK option), I got all kinds of errors. I'm new to SMP/E, so I'll try to document what I find from my experience. It might be useful for novices who are starting to do SMP/E installs.

These are the steps I follow when I install a product or apply maintenance (PTF) to an existing product:

1. If you are applying maintenance to an existing product, use ADRDSSU to back up ALL the datasets related to the product.

2. Download the product CBPDO/PTF from the IBM Shopz website to your PC.

3. FTP the product/PTF to the mainframe.

4. If the PTF is in tersed (IBM's way of compressing) format, unterse it using the TRSMAIN program with the UNPACK option - if you need a sample job to run TRSMAIN, please put your request in the comment section.

5. For new product installs, allocate the SMP/E global zone and target/distribution datasets and all other related SMP/E datasets like SMPLOG, SMPPTS, SMPLTS, SMPMTS, etc. Usually when you download the product, you'll get a sample job that allocates these datasets and defines them to the appropriate DD names. Go through the Program Directory to find info on the sample job. You may have to talk to your storage administrator about which volume to use for the new datasets.

6. Run the SMP/E APPLY job with the CHECK option. (The CHECK option ensures that you do not update any datasets. In our shop we usually install new products into the "base" libraries and all PTFs into the "maintenance" libraries; after installing the PTFs in the maintenance libraries, if everything goes well, we copy them to the base libraries. With the CHECK option you can see which datasets will be updated, and if you see an issue with updating a particular dataset, you can change the DDDEF - the DD definition - in SMP/E and resubmit the job with the CHECK option.)

7. Run the SMP/E APPLY job again with the CHECK option commented out (the actual APPLY).

8. After satisfactory testing, run SMP/E ACCEPT with the CHECK option (in most cases I have not needed this step for backing out changes - in my limited experience, if I have to back out a PTF, I just RESTORE all the datasets from the ADRDSSU backup taken before applying the PTF).

Thanks for stopping by.