DB2 locking and concurrency – Tips & Tricks

DB2 locking and concurrency is a critical part of DB2 performance as well as batch performance. Batch update jobs that process large amounts of data need restart logic, implemented with the COMMIT statement and a program data save area that identifies the restart point. Commit frequency should be an input parameter where possible, not hard-coded in the program. The following points describe how to make a program restartable:

a) Declare two cursors to select the rows to be updated in the table, and code an ORDER BY clause on the columns of the unique index. The first cursor selects the rows we want.

For example, the restart table is:

CREATE TABLE CHKPT_RSTRT
( PROGRAM_NAME CHAR(8) NOT NULL,
ITERATION CHAR(4) NOT NULL,
COMMIT_FREQUENCY SMALLINT NOT NULL WITH DEFAULT,
NO_OF_COMMITS SMALLINT NOT NULL WITH DEFAULT,
CHECKPOINT_TIME TIMESTAMP NOT NULL WITH DEFAULT,
CHECKPOINT_AREA CHAR(254) NOT NULL WITH DEFAULT,
PRIMARY KEY (PROGRAM_NAME,ITERATION)
)
IN DATABASE.TABLESPACE
;
CREATE UNIQUE INDEX XCHKPRST
ON CHKPT_RSTRT
(PROGRAM_NAME,ITERATION)
CLUSTER;
The first cursor is declared as:

EXEC SQL
   DECLARE PBNPRLV_CSR1 CURSOR FOR
   SELECT
     CNTRL_NO,
     BPRO_NO,
     BNPRLV_NO,
     PBNH_EFF_DT
   FROM PBNPRLV
   ORDER BY CNTRL_NO, BPRO_NO, PBNH_EFF_DT
END-EXEC.

The second cursor is used after a COMMIT and when processing is restarted. It repositions the cursor with a WHERE clause that reflects the ORDER BY on the primary key columns.

EXEC SQL
   DECLARE PBNPRLV_CSR2 CURSOR FOR
   SELECT
     CNTRL_NO,
     BPRO_NO,
     BNPRLV_NO,
     PBNH_EFF_DT
   FROM PBNPRLV
   WHERE
   ((CNTRL_NO = :CHKPT-CNTRL-NO
     AND BPRO_NO = :CHKPT-BPRO-NO
     AND PBNH_EFF_DT > :CHKPT-PBNH-EFF-DT)
    OR (CNTRL_NO = :CHKPT-CNTRL-NO
     AND BPRO_NO > :CHKPT-BPRO-NO)
    OR (CNTRL_NO > :CHKPT-CNTRL-NO))
   AND CNTRL_NO >= :CHKPT-CNTRL-NO
   ORDER BY CNTRL_NO, BPRO_NO, PBNH_EFF_DT
END-EXEC.

b) Select the row from the CHKPT_RSTRT table for the program and iteration being processed.
c) If CHECKPOINT_AREA already contains data, the program is being restarted: move the appropriate values from CHECKPOINT_AREA to the host variables used in the second cursor and open it. Otherwise open the first cursor.
d) Fetch the row from the opened cursor.
e) If the fetch is successful, increment the working-storage counter of successful fetches.
f) Perform the UPDATE for the PBNPRLV row that was fetched.
g) If the fetch counter equals COMMIT_FREQUENCY, perform a commit paragraph. This paragraph increments NO_OF_COMMITS, updates the CHECKPOINT_AREA column with the key of the last row processed, sets CHECKPOINT_TIME to the current timestamp, issues a COMMIT, and resets the fetch counter to zero (see the sketch after this list).
h) After the COMMIT, the cursor is closed unless the WITH HOLD option was specified.
i) When update processing is complete, reset the columns of the CHKPT_RSTRT row to their original default values.
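
A minimal sketch of the commit paragraph described in step g. The data names (WS-FETCH-COUNT, WS-NO-OF-COMMITS, WS-LAST-KEY, WS-CHECKPOINT-AREA, WS-PROGRAM-NAME, WS-ITERATION) are assumptions for illustration only; WS-LAST-KEY is assumed to be a group holding the CNTRL_NO, BPRO_NO, and PBNH_EFF_DT of the last fetched row.

2000-COMMIT-PARA.
*   Save the key of the last row processed as the restart point,
*   then commit the unit of work and reset the fetch counter.
    ADD 1 TO WS-NO-OF-COMMITS
    MOVE WS-LAST-KEY TO WS-CHECKPOINT-AREA
    EXEC SQL
       UPDATE CHKPT_RSTRT
          SET NO_OF_COMMITS   = :WS-NO-OF-COMMITS,
              CHECKPOINT_AREA = :WS-CHECKPOINT-AREA,
              CHECKPOINT_TIME = CURRENT TIMESTAMP
        WHERE PROGRAM_NAME = :WS-PROGRAM-NAME
          AND ITERATION    = :WS-ITERATION
    END-EXEC
    EXEC SQL COMMIT END-EXEC
    MOVE ZERO TO WS-FETCH-COUNT.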

  • Bind most applications with the ISOLATION(CS) and CURRENTDATA(NO) options. These options enable Db2 to release locks early and avoid taking locks in many cases.
  • Use the REORG utility to keep your data organized. Doing so can prevent the additional lock and unlock requests for situations such as updates to compressed and varying-length rows, and auto-release locks for pseudo-deleted index entries and keys. You can enable automated cleanup of pseudo-deleted index entries by setting the INDEX_CLEANUP_THREADS subsystem parameter to a non-zero value.
  • Use LOCKSIZE ANY or PAGE as a design default. Consider LOCKSIZE ROW only when applications encounter significant lock contention, including deadlock and timeout.
  • LOCKSIZE ANY is the default for CREATE TABLESPACE. It allows Db2 to choose the lock size, and Db2 usually chooses LOCKSIZE PAGE and LOCKMAX SYSTEM for non-LOB/non-XML tablespaces. For LOB table spaces, Db2 chooses LOCKSIZE LOB and LOCKMAX SYSTEM. Similarly, for XML table spaces, Db2 chooses LOCKSIZE XML and LOCKMAX SYSTEM.
  • Page-level locking generally results in fewer lock and unlock requests for sequential access and manipulation, which reduces CPU cost. It is also more likely to place sequentially inserted rows in the same data page. In data sharing environments, row-level locking can suffer from data page P-locks; page-level locking with MAXROWS 1 provides similar concurrency while avoiding those data page P-locks.
  • Row-level locking provides better concurrency because the locks are more granular. However, the cost of each lock and unlock request is roughly the same for both page and row-level locking. Therefore, row-level locking is likely to incur additional CPU cost. Row-level locking might also result in more data page latch contention. Sequentially inserted rows, by concurrent threads, are less likely to be in the same data page under row-level locking.
  • Reduce locking contention on the catalog and directory for data definition, bind, and utility operations. You can use the following approaches to reduce this type of contention:
  • Reduce the number of objects per database.
  • Group data definition statements from the same database within the same commit scope, apart from data manipulation statements, and commit frequently.
  • Assign a unique authorization ID and private database to each user.
  • Avoid using LOCK TABLE statements and statements that use RR isolation to query the catalog.
  • Specify the TRACKMOD NO and MEMBER CLUSTER options when you create tablespaces. These options can reduce p-lock and page latch contention on space map pages during heavy inserts into GBP-dependent tablespaces. TRACKMOD NO cannot be used when incremental image copies are used for the tablespaces.
  • Use the RELEASE(DEALLOCATE) bind option to avoid the cost of repeatedly releasing and reacquiring locks for applications that use frequent commit points for repeated access to the same tablespaces.
  • Use the RELEASE(COMMIT) bind option for plans or packages that are used less frequently to avoid excessive increases to the EDM pool storage.
  • For mixed INSERT, UPDATE, and DELETE workloads, consider the LOCKSIZE PAGE and MAXROWS 1 options to reduce page latch contention on data pages. Do not use LOCKSIZE ROW for such mixed workloads, regardless of whether MEMBER CLUSTER is used. MAXROWS 1 is recommended only when high levels of lock or latch contention are encountered. The trade-off is a potential increase in getpage and read/write I/O operations: the number of pages required to contain the data can increase by a factor of as many rows as previously fit on a page. For example, if 20 rows fit in a single page, the result is a 20-times increase in the number of pages used, and a significantly reduced buffer pool hit ratio. (A DDL sketch with several of these options follows this list.)
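
A minimal sketch (the table space and database names are hypothetical) showing how several of the options above could be specified on CREATE TABLESPACE:

CREATE TABLESPACE TSEXAMP
  IN DBEXAMP
  LOCKSIZE PAGE        -- page-level locking; LOCKMAX SYSTEM is the usual companion
  LOCKMAX SYSTEM
  MAXROWS 1            -- one row per page; use only when contention justifies it
  MEMBER CLUSTER       -- reduces space map page contention during heavy inserts
  TRACKMOD NO          -- do not track modified pages (incompatible with incremental copies)
;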

DSNTIAR

We can use the DSNTIAR subroutine to convert an SQLCODE into a text message. The syntax is

CALL 'DSNTIAR' USING SQLCA ERROR-MESSAGE ERROR-TEXT-LEN.

We did a search in MPROD2 for C37BAT31 (batch COBOL), and the modules that use DSNTIAR are in the attached list. After calling DSNTIAR, the system text message for the SQLCODE in the SQLCA is returned in the ERROR-MESSAGE array.
Let’s say the WORKING-STORAGE variables defined for DSNTIAR are:

01 WS010-MISC-FIELDS.
   05 WS010-SQLCODE           PIC +9(05).
   05 WS010-MSG-LINE          PIC X(120) VALUE SPACES.
   05 WS010-SUBSCRIPT         PIC 9(02)  VALUE ZERO.
   05 WS010-TEXT-LENGTH       PIC S9(9)  COMP VALUE +120.
   05 WS010-DB2-ERROR-MESSAGE.
      10 WS010-MSG-LENGTH     PIC S9(4)  COMP VALUE +960.
      10 WS010-MSG-TEXT       PIC X(120) OCCURS 8 TIMES.

The code for calling DSNTIAR is

CALL 'DSNTIAR' USING SQLCA,
                     WS010-DB2-ERROR-MESSAGE,
                     WS010-TEXT-LENGTH.

The code for displaying the error message returned by DSNTIAR is

PERFORM VARYING WS010-SUBSCRIPT
        FROM 1 BY 1
        UNTIL WS010-SUBSCRIPT > 8
    IF WS010-MSG-TEXT (WS010-SUBSCRIPT) > SPACES
        DISPLAY WS010-MSG-TEXT (WS010-SUBSCRIPT)
    END-IF
END-PERFORM.

After calling DSNTIAR, call 'CEE3ABD' or an ABEND routine to stop processing inside the DB2 handler or Application Programming Interface (API).
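
A minimal sketch (the abend code and data names are assumptions) of forcing an abend through the Language Environment service CEE3ABD after the message has been displayed:

01  WS-ABEND-CODE    PIC S9(9) COMP VALUE +3333.
01  WS-ABEND-TIMING  PIC S9(9) COMP VALUE +1.
...
*   Terminate the run unit abnormally with the given abend code;
*   the timing value controls whether LE clean-up is performed first.
    CALL 'CEE3ABD' USING WS-ABEND-CODE, WS-ABEND-TIMING.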

Utilization of dynamic SQL – DB2 locking and concurrency
While dynamic SQL has its advantages, it cannot easily be administered by a DBA, and with some technologies excessive catalog locking conflicts with DBA activity. Use it only in reporting-only environments where absolutely necessary, and provide an easy method to capture the SQL and, ideally, EXPLAIN the access paths. Outside of reporting, use stored procedures or APIs.
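
A minimal sketch (the QUERYNO and the statement itself are made up for illustration) of explaining a captured dynamic statement into the owner's PLAN_TABLE:

EXPLAIN PLAN SET QUERYNO = 101 FOR
  SELECT CNTRL_NO, BPRO_NO, PBNH_EFF_DT
    FROM PBNPRLV
   WHERE CNTRL_NO = ?
;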

Host variable – DB2 locking and concurrency
Many times, a host variable that is not correctly defined leads to errors that are difficult to detect. Where appropriate, use DCLGEN host variables generated from the system catalog.
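
A minimal sketch (the DCLGEN member name is hypothetical) of bringing the generated table declaration and matching host structure into the program:

*   DCLGEN output for the PBNPRLV table, containing the
*   DECLARE TABLE statement and the COBOL host structure.
    EXEC SQL
       INCLUDE DCLPBNPRLV
    END-EXEC.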

LOCK TABLE or DDL in the program – DB2 locking and concurrency
Both cause concurrency issues. Avoid them unless the DDL is for declared temporary tables.
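
A minimal sketch (table and column names are made up) of the declared temporary table exception, which has no catalog definition and therefore does not contend for catalog locks:

EXEC SQL
   DECLARE GLOBAL TEMPORARY TABLE SESSION.TEMP_TRANS
     ( CNTRL_NO   CHAR(10)      NOT NULL,
       TRANS_AMT  DECIMAL(11,2) )
   ON COMMIT PRESERVE ROWS
END-EXEC.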

Call the DBA with vague DB2 error – DB2 locking and concurrency
INCLUDE the SQLCA and use appropriate error-handling logic after each SQL call. Provide all the error information available by calling the DSNTIAR program, by using GET DIAGNOSTICS (V8), or by using Java methods that provide detailed information.
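
A minimal sketch (host variable name and length are assumptions) of retrieving the error text with GET DIAGNOSTICS instead of DSNTIAR:

01  WS-ERR-MSG   PIC X(1024).
...
*   Retrieve the message text for the first condition after a failing SQL call.
    EXEC SQL
       GET DIAGNOSTICS CONDITION 1
           :WS-ERR-MSG = MESSAGE_TEXT
    END-EXEC
    DISPLAY WS-ERR-MSG.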

Moving a numeric constant to the length field of a VARCHAR data type – DB2 locking and concurrency
This causes the row to be excessively large, wasting DASD and causing extraneous I/Os. Instead, derive the length from the actual data:

EXEC SQL
   SET :calc-length =
       LENGTH(RTRIM(:host-var))
END-EXEC

MOVE calc-length TO LL-field (the COBOL 49-level length field)
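
A minimal sketch (field names are hypothetical) of the VARCHAR host structure with its 49-level length and data fields, and of setting the length before an INSERT or UPDATE:

01  WS-NOTES.
    49  WS-NOTES-LEN    PIC S9(4) COMP.
    49  WS-NOTES-TEXT   PIC X(200).
01  WS-CALC-LENGTH      PIC S9(9) COMP.
...
*   After computing the trimmed length (as above), set the LL field
*   so that only the real data is stored in the VARCHAR column.
    MOVE WS-CALC-LENGTH TO WS-NOTES-LEN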

When coding cursors, remember the following:

∙ Use FOR FETCH ONLY (or FOR READ ONLY) or WITH UR for retrieval-only cursors.
∙ Use OPTIMIZE FOR n ROWS when you know the number of rows that will actually be fetched.
∙ Use ORDER BY only when the sequence is important.
∙ Use WITH HOLD to prevent COMMIT from destroying the cursor position in batch programs.
∙ Select only the columns that you truly need.
∙ Use only DCLGEN host variables in predicates.
When declaring a cursor in a DB2 handler, use OPTIMIZE FOR n ROWS if only n rows are to be passed from the DB2 handler back to the calling program. In that case the handler fetches only n rows into the intermediate result table. The syntax is

DECLARE C1 CURSOR FOR
   SELECT * FROM PACS_TRANS_TRACK
   OPTIMIZE FOR 5000 ROWS
   FOR FETCH ONLY

OPTIMIZE FOR n ROWS tells DB2 to proceed under the assumption that at most n rows are to be retrieved from the result table. Without this clause, DB2 assumes that all rows of the result table are to be retrieved and optimizes accordingly. Optimizing for n rows can improve performance when at most that number of rows is actually fetched.

For example, we want to fetch PRCTR_CD, CTLHX_NM, and CTLHX_EFF_DT from the CTLHX table in descending order of CTLHX_EFF_DT.

EXEC SQL
 DECLARE CTLHX_CSR CURSOR FOR
   SELECT
      PRCTR_CD,
      CTLHX_NM,
      CTLHX_EFF_DT
   FROM CTLHX
   WHERE
       CNTRL_NO = :CNTRL-NO
   AND CTLHX_SUPRCD_DTS = '0001-01-01-00.00.00.000000'
   AND CTLHX_EFF_DT <= :CTLHX-EFF-DT
   ORDER BY CTLHX_EFF_DT DESC
   FETCH FIRST 1 ROW ONLY
 WITH UR
END-EXEC

Because this cursor is retrieval only, we use the WITH UR option. The order is important here, so we use the ORDER BY clause, and the host variables come from DCLGEN. Because only the first row is needed, we use the FETCH FIRST 1 ROW ONLY clause. The literal '0001-01-01-00.00.00.000000' is coded so that the optimizer knows the value at BIND time.
