Administrator function
DB2 12 introduces several features that are useful for database administrators. This chapter describes the following functions:
9.1 Dynamic plan stability
Dynamic SQL statements allow applications to build SQL statements dynamically at run time. Unlike static SQL statements, the full text of a dynamic SQL statement is not known at the application's bind time. In an ad hoc query environment, the performance of dynamic SQL statements is an important priority because enterprise applications use repeating dynamic SQL, which often suffers from instability compared to static SQL. Although the risk of any individual query regressing on any one day is small, exposing thousands of queries to access path changes every day, as statistics, the environment, the maintenance level, and even the release of DB2 change, results in significant exposure to query performance regression.
To optimize a dynamic SQL statement's performance, the prepared or bound form of the SQL statement is often saved in the in-memory global dynamic statement cache (DSC). Then, for the next request to prepare the dynamic SQL statement for execution, a copy of the bound form of the SQL statement from the cache can be used. However, the storage of the bound form of the dynamic SQL statement in the in-memory cache is not persistent, and it provides stabilization for dynamic queries only over an instance of a database system. Further, stabilization is not realized across different versions of the database system. After the bound form of the SQL statement is deleted from the cache, the dynamic SQL statement must be prepared again. Even when the access plan for the dynamic SQL statement is persistently stored and reused on the next PREPARE, there is no guarantee that the same bound form will result when the PREPARE process is repeated using the stored access plan.
DB2 12 addresses that concern by supporting the ability to stabilize and reuse the runtime structures (the prepared form) of dynamic SQL, extending to stabilized dynamic SQL the stability that, through DB2 11, was available only to statically bound SQL.
9.1.1 Stabilization into and loading from catalog tables
Similar to static SQL statements, which are bound in packages and saved in the persistent DB2 catalog and directory, DB2 12 saves the prepared runtime structures of dynamic SQL statements, together with their corresponding EXPLAIN and dependency (object and authorization) information, in two new persistent catalog tables, SYSIBM.SYSDYNQRY and SYSIBM.SYSDYNQRYDEP. The source statements in the DSC can be saved in those catalog tables through the new START DYNQUERYCAPTURE command.
During a subsequent PREPARE request for the same dynamic statement, DB2 looks in the DSC. If the statement is not found in the DSC (a cache miss), DB2 looks up SYSDYNQRY for a stabilized statement, using the same criteria as the DSC lookup. If a match is found in the catalog table, a running copy of the statement, along with its dependency information, is made, inserted into the DSC, and executed by the running thread. This process is called loading a stabilized dynamic statement into the DSC, and it bypasses the full PREPARE process altogether.
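The lookup order described above can be sketched as a small Python model. This is illustrative only: the dictionaries and names are stand-ins for the DSC and the SYSDYNQRY catalog table, not DB2 internals.

```python
# Simplified model of DB2 12's PREPARE lookup order for dynamic SQL.
# dsc and sysdynqry are stand-ins for the in-memory cache and catalog table.

def prepare(stmt_key, dsc, sysdynqry):
    """Return (source, runtime_structure) for a dynamic statement."""
    if stmt_key in dsc:                       # DSC hit: cheapest path
        return ("cache", dsc[stmt_key])
    if stmt_key in sysdynqry:                 # cache miss: try stabilized copy
        runtime = sysdynqry[stmt_key].copy()  # make a running copy
        dsc[stmt_key] = runtime               # load it into the DSC,
        return ("catalog", runtime)           # bypassing the full PREPARE
    runtime = {"access_path": "optimized"}    # full PREPARE: optimize and
    dsc[stmt_key] = runtime                   # build new runtime structures
    return ("full_prepare", runtime)
```

The point of the sketch is the ordering: the catalog is consulted only on a DSC miss, and a successful catalog load repopulates the DSC so that subsequent prepares are cache hits.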
9.1.2 Stabilization method
When function level 500 or above is activated, the START DYNQUERYCAPTURE command (STA DYNQRY for short) can be used to capture qualified statements in the DSC into the catalog tables. The DSNT758 message is issued if the command is attempted before new function activation. This command can be executed with different scopes:
SCOPE(LOCAL): Qualified statements in the DSC of the member where the command is executed are handled.
SCOPE(GROUP): Qualified statements in the DSC of all active members are handled.
To execute this command, the privilege set of the process must include one of the following authorities:
SQLADM authority
System DBADM authority
SYSOPR authority
SYSCTRL authority
SYSADM authority
Figure 9-1 shows the STA DYNQRY command syntax.
Figure 9-1 The START DYNQUERYCAPTURE command
You provide a stabilization group name to logically associate a set of queries that are captured or stabilized. The stabilization group name can be used as input to the FREE command to free all the queries for a stabilization group.
Various criteria can be specified to identify the statements that are stable and perform well, so that they can be captured:
Queries that were cached in the DSC under a specific CURRENT SQLID (or any SQLID) and that have been executed at least a threshold number of times. Note that DB2 increments each statement's execution count only if IFCID 318 is enabled. Therefore, first turn on this trace record (or TYPE(MONITOR) CLASS(29)), then use EXPLAIN STATEMENT CACHE or other methods to determine which statements are to be stabilized. The default threshold value is 2.
A query with a specific statement ID or statement token, as shown in the EXPLAIN STATEMENT CACHE output or the IFCID 316 trace record. These IDs and tokens identify entries in the DSC; hence, only the DSC of the member where the command is executed is examined.
The DSNX221I message is issued if the command has no syntax or authorization errors and is started successfully. An asynchronous service task is scheduled to begin the stabilization process. The DSNX222I message is issued when the stabilization process finishes, with summary information about the number of statements that DB2 attempted to capture and the number actually captured. A command number is displayed in both messages to help correlate them. The DSNX223I message is written if another dynamic query capture process is already active for the specified SQLID.
More than one STA DYNQRY command can be executed at the same time, and DB2 manages the process so that a query is stabilized under only one stabilization group. For a data sharing group, the preference is to use the SCOPE(GROUP) option to reduce contention among members. DB2 captures queries in the member where the command starts, then notifies the other members to capture queries in their DSCs concurrently. Thus, if a query exists on multiple members, the number of queries actually stabilized might be less than the number of queries scheduled for stabilization.
Example 9-1 directs DB2 to stabilize queries in the dynamic statement cache that have a CURRENT SQLID of ADMF001 and that have been executed at least 50 times.
Example 9-1 The STA DYNQRY command and output
-STA DYNQUERYCAPTURE STBLGRP(ABC) THRESHOLD(50) CURSQLID(ADMF001)
DSNX221I -DB2A DSNXESTC DYNAMIC QUERY CAPTURE FOR
COMMAND NUMBER 3 STARTED SUCCESSFULLY.
DSNX222I -DB2A DSNXESC1 DYNAMIC QUERY CAPTURE
COMPLETED FOR COMMAND NUMBER 3 WITH 20 STATEMENTS SCHEDULED,
20 STATEMENTS STABILIZED, AND 0 STATEMENTS ALREADY STABILIZED.
Example 9-2 stabilizes all queries in the dynamic cache of each member in the data sharing group that have been executed at least 200 times.
Example 9-2 The STA DYNQRY command and output
-STA DYNQUERYCAPTURE STBLGRP(DEF) THRESHOLD(200) SCOPE(GROUP)
DSNX221I -DB2A DSNXESTC DYNAMIC QUERY CAPTURE FOR
COMMAND NUMBER 2 STARTED SUCCESSFULLY.
DSNX222I -DB2A DSNXESC1 DYNAMIC QUERY CAPTURE
COMPLETED FOR COMMAND NUMBER 2 WITH 50 STATEMENTS SCHEDULED,
47 STATEMENTS STABILIZED, AND 3 STATEMENTS ALREADY STABILIZED.
Limitations
Queries that qualify for capture are those inserted into the DB2 dynamic statement cache, excluding the following queries:
Queries prepared with REOPT(AUTO)
Queries prepared with CONCENTRATE STATEMENT WITH LITERALS
Queries transformed because they reference a system-period temporal table, an application-period temporal table, or an archive-enabled table with a non-default setting of the CURRENT TEMPORAL SYSTEM_TIME special register, the CURRENT TEMPORAL BUSINESS_TIME special register, or the GET_ARCHIVE global variable.
9.1.3 Catalog tables
Dynamic statements are stabilized in the SYSIBM.SYSDYNQRY table with a unique stabilized statement ID and copy ID. The table has the columns shown in Table 9-1.
Table 9-1 Columns in the SYSDYNQRY table
Column name
Data type
Description
SDQ_STMT_ID
BIGINT NOT NULL
Stabilized dynamic query statement ID.
STBLGRP
VARCHAR(128) NOT NULL
Stabilization group name
COPYID
SMALLINT NOT NULL
The copy of the stabilized runtime structures for the query in this row:
0 – CURRENT
4 – INVALID
CURSQLID
VARCHAR(128) NOT NULL
Current SQLID for the stabilized dynamic query.
CURSCHEMA
VARCHAR(128) NOT NULL
Current schema for the stabilized dynamic query.
CURAPPLCOMPAT
VARCHAR(10) NOT NULL
Current application compatibility level for the stabilized dynamic query.
QUERY_HASH
CHAR(16) NOT NULL FOR BIT DATA
The hash key generated by statement text.
QUERY_HASH_VERSION
INTEGER NOT NULL
The version of the query hash.
VALID
CHAR(1) NOT NULL
Whether the stabilized dynamic query is valid:
N for No
Y for Yes
LASTUSED
DATE NOT NULL
Date query using these runtime structures was last run.
RELBOUND
CHAR(1) NOT NULL
The release when the query was stabilized. See Release dependency indicators.
GROUP_MEMBER
VARCHAR(24) NOT NULL
The data sharing member name that updated the row.
STBLTIME
TIMESTAMP NOT NULL
Timestamp when statement was stabilized.
ROWID
ROWID NOT NULL
GENERATED ALWAYS
Internal use only.
STMTTEXT
CLOB (2M) NOT NULL
The text of the SQL statement.
DATA1
BLOB(2G) INLINE LENGTH (32329) NOT NULL
Internal use only.
DATA2
BLOB(2G) NOT NULL
Internal use only.
DATA3
BLOB(2G) NOT NULL
Internal use only.
DATA4
BLOB(2G) NOT NULL
Internal use only.
DATA5
VARCHAR(128) NOT NULL
Internal use only.
DATA6
CHAR(8) NOT NULL FOR BIT DATA
Internal use only.
The object and authorization dependency information is stored in the SYSDYNQRYDEP table, whose primary key consists of the SDQ_STMT_ID and COPYID column values, which correlate back to rows in the SYSDYNQRY table.
Table 9-2 shows the columns that exist in the SYSDYNQRYDEP table.
Table 9-2 Column names in the SYSDYNQRYDEP table
Column name
Data type
Description
SDQ_STMT_ID
BIGINT NOT NULL
Stabilized dynamic query statement ID.
COPYID
SMALLINT NOT NULL
The copy of the dynamic query in this row; 0 indicates the current version of the dynamic query.
BQUALIFIER
VARCHAR(128) NOT NULL
The value of the column depends on the type of object:
If BNAME identifies a table space (BTYPE is R), the value is the name of its database.
If BNAME identifies a table on which a period is defined (BTYPE is B or C), the value is the qualifier of that table.
If BNAME identifies a user-defined function, a cast function, a stored procedure, or a sequence (BTYPE is F, O, or Q), the value is the schema name.
If BNAME identifies a role, the value is blank.
Otherwise, the value is the schema of BNAME.
BNAME
VARCHAR(128) NOT NULL
The name of the object that the query depends on.
BTYPE
CHAR(1) NOT NULL
Type of object identified by BNAME and BQUALIFIER:
'A' = Alias
'E' = INSTEAD OF trigger
'F' = User-defined function or cast function
'G' = Created global temporary table
'H' = Global Variable
'I' =  Index
'M' = Materialized query table
'O' = Stored procedure
'P' = Partitioned table space that is defined as LARGE or with the DSSIZE parameter
'Q' = Sequence object
'R' = Table space
'S' = Synonym
'T' = Table
'V' = View
'W' = SYSTEM_TIME period
'Z' = BUSINESS_TIME period
'0' (zero) = Sequence alias
CLASS
CHAR(1) NOT NULL
'A' = Authorization dependency
'D' = Data Definition Language dependency
BAUTH
SMALLINT NOT NULL
The privilege that is held on the object on which the query is dependent. The privilege only applies when CLASS is 'A':
50 = SELECTAUTH
51 = INSERTAUTH
52 = DELETEAUTH
53 = UPDATEAUTH
64 = EXECUTEAUTH
263 = USAGEAUTH
291 = READAUTH
292 = WRITEAUTH
0 = Column is not used
AUTHID_TYPE
CHAR(1) NOT NULL
The type of authorization ID indicated by AUTHID. The authorization type only applies when CLASS is 'A':
' ' = Authorization ID
'L' = Role
AUTHID
VARCHAR(128) NOT NULL
The authorization ID or role of the user who holds the privilege on the object on which the query is dependent. The authorization ID only applies when CLASS is 'A'.
DBNAME
VARCHAR(24) NOT NULL
If the value of BADMINAUTH is 'D' (DBADMAUTH), DBNAME contains the name of the database on which the user or role indicated by AUTHID holds DBADM authority. Otherwise, the value is blank.
BADMINAUTH
CHAR(1) NOT NULL
The authority that allowed access to the object on which the query is dependent. The admin authority only applies when CLASS is 'A':
'B' = SDBADMAUTH
'D' = DBADMAUTH
'G' = ACCESSCTRLAUTH
'K' = SQLADMAUTH
'L' = SYSCTRLAUTH
'S' = SYSADMAUTH
'T' = DATAACCESSAUTH
' ' = Authority not held
PUBLICAUTH
CHAR(1) NOT NULL
Whether the privilege or authority is held by PUBLIC. The PUBLIC privilege only applies when CLASS is 'A':
'Y' = Privilege is held
' ' = Privilege is not held
ALLOBJAUT
CHAR(1) NOT NULL
Whether the privilege is held on all objects within the schema. The all objects privilege only applies when CLASS is 'A':
'Y' = Privilege is held
' ' = Privilege is not held
QUERY_HASH
CHAR(16) NOT NULL FOR BIT DATA
The hash key of the statement text if the value of CLASS is 'D', otherwise hexadecimal zeros.
 
VARCHAR(128) NOT NULL
Internal use only.
 
CHAR(8) NOT NULL FOR BIT DATA
Internal use only.
9.1.4 Calculating the EDM statement cache hit ratio
The EDM storage statistics provide information that can help to determine how successful applications are at finding statements in the dynamic cache and in the SYSDYNQRY catalog table.
PREPARE REQUESTS (A) records the number of PREPARE requests.
FULL PREPARES (B) records the number of times that a statement was inserted into the cache after a full PREPARE to derive the access path and build the runtime structures.
LOAD FROM CATALOG (C) records the number of times that a statement was inserted into the cache by using a SYSDYNQRY record to achieve access path stability, therefore bypassing the optimization and the building of runtime structures that occur in a full PREPARE process.
To determine how often the dynamic statement was used from the cache, check the value in CACHE HIT RATIO (D).
Procedure to calculate the EDM statement cache hit ratio
The cache hit ratio, which represents the best performance path, can be calculated with the following formula:
(PREPARE REQUESTS - FULL PREPARES - LOAD FROM CATALOG) / PREPARE REQUESTS
The combined cache and load-from-catalog hit ratio can be calculated with the following formula:
(PREPARE REQUESTS - FULL PREPARES) / PREPARE REQUESTS
Example 9-3 demonstrates that formula.
Example 9-3 The EDM storage statistics
DYNAMIC SQL STMT QUANTITY FORMULA
---------------- -------- -------
PREPARE REQUESTS A 210225
FULL PREPARES B 42681
SHORT PREPARES 167544 (A - B)
Short PREPARES based on cache 154592 (A - B - C)
Short PREPARES based on catalog C 12952
CACHE HIT RATIO (%) 73.54 (A - B - C) / A
CATALOG+CACHE HIT RATIO (%) 79.70 (A - B) / A
In the example, A, B, and C are the EDM statistic counters QISEDSG, QISEDSI, and QISEDPSF respectively.
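Using the counter values from Example 9-3, the two ratios can be checked with a few lines of Python:

```python
# EDM statement cache ratios from Example 9-3
# (A = QISEDSG, B = QISEDSI, C = QISEDPSF).
A = 210225   # PREPARE REQUESTS
B = 42681    # FULL PREPARES
C = 12952    # short prepares satisfied from the SYSDYNQRY catalog

cache_hit_ratio = (A - B - C) / A * 100          # best performance path
catalog_cache_hit_ratio = (A - B) / A * 100      # cache + load-from-catalog

print(round(cache_hit_ratio, 2))          # 73.54
print(round(catalog_cache_hit_ratio, 2))  # 79.7
```

The results match the CACHE HIT RATIO and CATALOG+CACHE HIT RATIO values reported in the example.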
9.1.5 Invalidation of stabilized dynamic statements
When a statement is captured into the SYSDYNQRY table, its COPY ID value is '0' and VALID = 'Y'.
Various actions and events can invalidate stabilized dynamic SQL statements, for example, changes to objects that are referenced by a statement, such as those made by ALTER, REVOKE, and DROP statements. The VALID column of the SYSDYNQRY catalog table indicates the validity of stabilized dynamic statements. When these invalidation situations occur, DB2 marks the saved runtime structures for the stabilized access paths as invalid (VALID = 'N', with COPYID remaining '0').
The next time an invalidated statement is prepared, DB2 uses the full prepare process to generate new access paths. Such invalidated statements are not stabilized until another START DYNQUERYCAPTURE command is issued to stabilize them again. At that time, the invalidated query will have copy ID of '4', and the newly stabilized, valid copy is added with copy ID of '0'. The load stabilized dynamic statement process on the PREPARE request uses only the valid copy.
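The copy management described above can be modeled with a short Python sketch. The dictionaries and field names are hypothetical stand-ins for SYSDYNQRY rows, not the actual catalog layout.

```python
# Illustrative model of SYSDYNQRY copy management for one stabilized statement.

def invalidate(rows):
    """DDL or REVOKE against a dependency marks the current copy invalid."""
    for r in rows:
        if r["copyid"] == 0:
            r["valid"] = "N"              # COPYID stays 0 until recapture

def capture(rows):
    """Recapture after invalidation: old copy becomes COPYID 4, new copy is 0."""
    for r in rows:
        if r["copyid"] == 0 and r["valid"] == "N":
            r["copyid"] = 4               # keep invalid copy for comparison
    rows.append({"copyid": 0, "valid": "Y"})  # new current, valid copy

rows = [{"copyid": 0, "valid": "Y"}]  # initial capture
invalidate(rows)                      # e.g. DROP INDEX on a dependent object
capture(rows)                         # statement re-prepared and recaptured
```

After the recapture, the table holds both the invalid copy (COPYID 4) and the new valid copy (COPYID 0); only the valid copy is used by the load process on a PREPARE request.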
9.1.6 EXPLAIN changes
In DB2 12, cached queries in the DSC keep track of their stabilized statement IDs if a query was captured into or loaded from the SYSDYNQRY table. This ID is externalized in the EXPLAIN STATEMENT CACHE output and IFCID 316 trace record.
DB2 12 also adds a new option on the EXPLAIN statement, STABILIZED DYNAMIC QUERY STMTID, for the ability to retrieve the access path information saved with a specific captured statement.
Figure 9-2 shows the EXPLAIN statement’s syntax diagram.
Figure 9-2 The EXPLAIN statement syntax diagram
The COPY 'CURRENT' option retrieves the row in the SYSDYNQRY table with a matching SDQ_STMT_ID and a COPYID of 0, which is the copy usable by a load process. The COPY 'INVALID' option retrieves the row with a matching SDQ_STMT_ID and a COPYID of 4, which is the copy not usable by a load process. This EXPLAIN statement is useful when an access path comparison is needed, such as after certain DDL activities (for example, dropping an index).
Because no QUERYNO exists for this EXPLAIN request, the HINT_USED column in the PLAN_TABLE is populated with the string 'EXPLAIN SDQ: cid', where cid is one of the following values:
'0' is the current copy ID.
'4' is the invalid copy ID.
The QUERYNO column of each EXPLAIN table record that is returned is set to the default value 0, and the COLLID column is set to DSNSTBLQRYEXPLAIN.
Various EXPLAIN tables in DB2 12 also have the new PER_STMT_ID column to correlate back to the SYSDYNQRY.SDQ_STMT_ID column.
9.1.7 The FREE STABILIZED DYNAMIC QUERY subcommand
As queries are captured into the persistent catalog, the user can monitor usage and remove statements that have not been loaded recently because they are not frequently executed. The SYSDYNQRY.LASTUSED column can be referenced for this information. A new TSO FREE STABILIZED DYNAMIC QUERY subcommand allows a user to delete the captured statement (or statements) by either statement ID or stabilization group name.
 
Note: When freeing multiple statements in a stabilization group, DB2 issues a COMMIT after each statement free. If any of the specified queries are in the dynamic statement cache, FREE STABILIZED DYNAMIC QUERY purges them from the dynamic statement cache.
The PLANMGMTSCOPE option can be used to specify whether all copies or only invalid copies (COPYID = '4') are to be freed. The INVALIDONLY option is provided to target only the invalid copies (VALID = 'N').
 
Note: A row with a VALID column value of 'N' might or might not have a COPYID of '4'. The COPYID is updated to '4' only when the statement is re-prepared, cached in the DSC, and recaptured into the SYSDYNQRY table.
Figure 9-3 shows the FREE subcommand syntax.
Figure 9-3 The FREE subcommand syntax diagram
Example 9-4 shows how to free the invalid copy (COPYID '4') of the stabilized dynamic query whose ID is 1234.
Example 9-4 Free the invalid stabilized dynamic query whose ID is 1234 (COPYID = '4')
FREE STABILIZED DYNAMIC QUERY SDQSTMTID (1234) PLANMGMTSCOPE(INVALID)
This new subcommand can also be invoked with the ADMIN_COMMAND_DSN stored procedure. Example 9-5 shows the stored procedure invocation to free all stabilized dynamic queries for stabilization group APP01.
Example 9-5 The ADMIN_COMMAND_DSN invocation with Free stabilized dynamic queries
CALL SYSPROC.ADMIN_COMMAND_DSN('FREE STABILIZED DYNAMIC QUERY STBLGRP(APP01)', ?)
MSG:
ROWNUM Text
1 DSNT340I -DB2A FREE STABILIZED DYNAMIC QUERY
2 COMPLETED SUCCESSFULLY FOR 5 STATEMENTS.
"ADMIN_COMMAND_DSN" RETURN_STATUS: 0
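The LASTUSED-based cleanup mentioned at the start of this section can be sketched in Python. The row layout and the 90-day cutoff are illustrative assumptions, not DB2 behavior; the sketch only shows how candidates for the FREE subcommand might be selected.

```python
# Sketch: pick stale stabilized statements (not loaded recently) as candidates
# for FREE STABILIZED DYNAMIC QUERY, based on the SYSDYNQRY.LASTUSED column.
from datetime import date, timedelta

def free_candidates(rows, days=90, today=None):
    """Return SDQ_STMT_IDs whose LASTUSED is older than the cutoff."""
    today = today or date.today()
    cutoff = today - timedelta(days=days)
    return [r["sdq_stmt_id"] for r in rows if r["lastused"] < cutoff]
```

Each returned ID could then be freed with FREE STABILIZED DYNAMIC QUERY SDQSTMTID(n).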
9.1.8 Monitor for stabilization
Because the STA DYNQRY MONITOR(NO) command performs only a snapshot capture of stabilized dynamic queries, it does not capture all queries for a workload. You must periodically reissue the command to capture more queries as they are executed and qualify. The MONITOR(YES) option avoids this repetitive work and allows DB2 to perform the capture continuously, without performance overhead on mainline SQL execution or overall system resources.
Start the monitor
When a STA DYNQRY command is issued with the MONITOR(YES) option, partially qualified statements can be monitored to be stabilized later. An example of a partially qualified statement is when the statement matches the CURRENT SQLID but the number of executions has not reached the specified threshold value. Several concurrent monitoring capture commands can be started, and DB2 assigns the stabilization group and command number to each partially qualified statement in the DSC. Additionally, as new statements are inserted into the DSC through the full PREPARE process, the stabilized dynamic statement monitor process can link them to an active monitor request (if the CURRENT SQLID matches).
At one-minute intervals, a DB2 service task checks the DSC for monitored statements that meet all criteria (such as the execution threshold) and stabilizes them into the SYSDYNQRY table.
 
Note: The STA DYNQRY MONITOR(YES) command also performs the MONITOR(NO) work (taking a snapshot of the DSC and stabilizing currently qualified statements) before beginning the monitoring work.
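The periodic check that the service task performs can be sketched as follows. This is a simplified model with hypothetical field names, not the actual DB2 implementation.

```python
# Sketch of the once-a-minute monitor check: stabilize DSC statements that are
# linked to an active monitor request and have reached its execution threshold.

def monitor_tick(dsc_statements, monitors, catalog):
    for stmt in dsc_statements:
        cno = stmt.get("linked_cno")          # link set at full PREPARE time
        if cno is None or cno not in monitors:
            continue                          # not linked to an active monitor
        mon = monitors[cno]
        if (stmt["sqlid"] == mon["cursqlid"]
                and stmt["executions"] >= mon["threshold"]):
            catalog.add(stmt["text"])         # stabilize into SYSDYNQRY
            mon["stabilized"] += 1            # progress shown by DIS DYNQRY
            stmt["linked_cno"] = None         # no longer pending
```

The model mirrors the use case later in this section: a statement stays linked (pending) until its execution count reaches the monitor's threshold, at which point it is stabilized and the monitor's STABILIZED count increases.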
Display the monitors
The new DISPLAY DYNQUERYCAPTURE command (DIS DYNQRY for short) can be used to display all currently active dynamic query capture monitors. The DSNX250I message shows that the DIS DYNQRY command started and is followed by the display output. The DSNX260I message indicates that a long display output is continued from the previous output. Figure 9-4 shows the command syntax.
Figure 9-4 The DISPLAY DYNQUERYCAPTURE command syntax diagram
Stop the monitor
The new STOP DYNQUERYCAPTURE command stops the specified active dynamic query capture monitoring request (or requests). Figure 9-5 shows the command syntax.
Figure 9-5 The STOP DYNQUERYCAPTURE command syntax
The CNO value can be obtained from the command number shown in the DSNX221I message that is issued for the STA DYNQRY command, or from the DIS DYNQRY command output.
Use case
The following use case shows how the monitoring process works for multiple commands and how the DIS DYNQRY command can be used to track progress. Assume that two applications exist, application A and application B, with the dynamic SQL statements that are listed in Example 9-6.
Example 9-6 Dynamic SQL statements for application A and application B
Application A: SET CURRENT SQLID = SCHEMA1
SELECT * FROM SCHEMA1.T1
INSERT INTO SCHEMA1.T1 SELECT * FROM SCHEMA2.T2
Application B: SET CURRENT SQLID = SCHEMA2
UPDATE SCHEMA2.T2 SET C1 = -1 WHERE C2 = 'ABC'
DELETE FROM SCHEMA2.T2 WHERE C2 = 'ABC'
Assume also that applications A and B are executed twice, and the runtime structures of the statements in Example 9-6 are inserted into the DSC with a number of executions of 2.
Assume further that the following first stabilization command is issued to start the stabilization of dynamic query statements with monitoring capability:
-START DYNQUERYCAPTURE STBLGRP(GRPA) CURSQLID(SCHEMA1) THRESHOLD(2) MONITOR(YES)
With this command, the SELECT and INSERT statements from application A are stabilized into the DB2 catalog because they match both the CURRENT SQLID and execution threshold criteria. Example 9-7 shows the output of a DIS DYNQRY command that is issued afterward.
Example 9-7 Output of issuing a DIS DYNQRY command
-DISPLAY DYNQUERYCAPTURE CNO(*)
*** BEGIN DISPLAY DYNAMIC QUERY CAPTURE CNO(*)
================================================================
CNO : 1
STBLGRP : GRPA
SQLID : SCHEMA1
THRESHOLD : 2
STABILIZED : 2
================================================================
*** END DISPLAY DYNAMIC QUERY CAPTURE
Next, the following second stabilization command is issued to start stabilization with monitoring capability:
-START DYNQUERYCAPTURE STBLGRP(GRPB) CURSQLID(SCHEMA2) THRESHOLD(10) MONITOR(YES)
With this command, DB2 links the UPDATE and DELETE statements in the DSC to the second monitor request. Example 9-8 shows the output of a DIS DYNQRY command that is issued afterward.
Example 9-8 Output of issuing a DIS DYNQRY command
-DISPLAY DYNQUERYCAPTURE CNO(*)
*** BEGIN DISPLAY DYNAMIC QUERY CAPTURE CNO(*)
================================================================
CNO : 1
STBLGRP : GRPA
SQLID : SCHEMA1
THRESHOLD : 2
STABILIZED : 2
---------------------------------------------------------------
CNO : 2
STBLGRP : GRPB
SQLID : SCHEMA2
THRESHOLD : 10
STABILIZED : 0
================================================================
*** END DISPLAY DYNAMIC QUERY CAPTURE
Assume that application B runs again and its UPDATE statement is executed 8 more times. The monitor service task finds that the UPDATE statement now fully satisfies the filtering criteria of the second monitoring request, because the number of executions has reached 10, and stores the bound form of the UPDATE statement in the persistent catalog. The command in Example 9-9 can then be issued to display the pending monitoring requests.
Example 9-9 Display pending monitoring requests
- DISPLAY DYNQUERYCAPTURE CNO(*)
*** BEGIN DISPLAY DYNAMIC QUERY CAPTURE CNO(*)
================================================================
CNO : 1
STBLGRP : GRPA
SQLID : SCHEMA1
THRESHOLD : 2
STABILIZED : 2
---------------------------------------------------------------
CNO : 2
STBLGRP : GRPB
SQLID : SCHEMA2
THRESHOLD : 10
STABILIZED : 1
================================================================
*** END DISPLAY DYNAMIC QUERY CAPTURE
 
Note: For the command number (CNO) 2, the output now shows the progress of 1 (one) stabilized statement.
To stop a pending monitoring request, a STOP DYNQUERYCAPTURE command can be issued. For example, assume that at this time a STOP DYNQUERYCAPTURE CNO(2) command is issued for the second monitoring command. In response, the monitoring request for SCHEMA2 is deleted, and the link from application B's DELETE statement is broken. The DELETE statement can still be executed subsequently, but it will not be stabilized.
Next, assume further that a new application C executes the following dynamic SQL statement twice:
INSERT INTO SCHEMA1.T1 SELECT * FROM SCHEMA1.T2
As this new statement is prepared and its runtime structures are inserted into the DSC, a check is made against the active monitoring requests, and a link is established with monitoring request 1 because the new statement satisfies the CURRENT SQLID criterion. When the execution threshold of 2 is reached, DB2 automatically stabilizes this statement. Example 9-10 shows the output that displays the updated progress of the monitoring requests.
Example 9-10 Display the updated progress in monitoring requests
-DISPLAY DYNQUERYCAPTURE CNO(*)
*** BEGIN DISPLAY DYNAMIC QUERY CAPTURE CNO(*)
================================================================
CNO : 1
STBLGRP : GRPA
SQLID : SCHEMA1
THRESHOLD : 2
STABILIZED : 3
================================================================
*** END DISPLAY DYNAMIC QUERY CAPTURE
9.1.9 DSNZPARM and installation panel
A new subsystem parameter, CACHEDYN_STABILIZATION, is introduced to specify how DB2 stabilizes cached dynamic SQL statements. The CACHEDYN_STABILIZATION DSNZPARM is externalized as CACHE DYN STABILITY on the DSNTIP8 installation panel (Figure 9-6 on page 145).
Figure 9-6 The DSNTIP8 installation panel
Consider the following information about the parameter:
CACHE DYN STABILITY: Specifies how DB2 is to stabilize cached dynamic SQL statements. When a statement is stabilized, the CURRENT SQLID, statement text, and runtime structures are written to catalog tables. When a dynamic SQL statement is not present in the dynamic statement cache, DB2 loads the runtime structures, if available, from the SYSIBM.SYSDYNQRY catalog table rather than performing a full prepare. This capability extends the stability and reliability of dynamic SQL performance.
Range: CAPTURE, LOAD, BOTH, NONE
 – CAPTURE: Statements can be stabilized through the -START DYNQUERYCAPTURE command with both MONITOR(NO) and MONITOR(YES). DB2 will not load stabilized statements from SYSDYNQRY.
 – LOAD: Statements cannot be stabilized through any means. The -START DYNQUERYCAPTURE command will fail, and any MONITOR(YES) commands in progress will not stabilize statements, even if the stabilization criteria are matched. During a full prepare, DB2 will attempt to load stabilized statements from SYSDYNQRY with which to run.
 – BOTH: This is the default setting. Statements can be stabilized through the -START DYNQUERYCAPTURE command with both MONITOR(NO) and MONITOR(YES). During a full prepare, DB2 will attempt to load stabilized statements from SYSDYNQRY with which to run.
 – NONE: Statements cannot be stabilized through any means. The -START DYNQUERYCAPTURE command will fail, and any MONITOR(YES) commands in progress will not stabilize statements, even if the stabilization criteria are matched. DB2 will not load stabilized statements from SYSDYNQRY.
Default: BOTH
Data sharing scope: All members should use the same setting
Online changeable: Yes
DSNZPxxx: DSN6SPRM.CACHEDYN_STABILIZATION
9.2 Resource limit facility for static SQL
Traditionally, the DB2 resource limit facility (RLF) is used to govern dynamic SQL statements so that they do not unexpectedly consume too many resources (such as locks, CPU, storage, and I/O) because of an unanticipated access path change, DASD hardware degradation, or poorly coded SQL. Abrupt poor performance, when a static SQL statement in an application runs away in DB2 and accumulates many resources, is also an issue that can negatively impact online transactions in a production system.
Transaction managers such as IBM CICS® and IMS have functions that control the resources that transactions consume in an IMS or CICS database. However, when a CICS or IMS transaction invokes a static SQL statement that runs in DB2 for z/OS, that resource limit control function does not apply. In a high-volume online transaction processing environment, the ability to proactively cancel poorly running DB2 threads can help avoid severe degradation of an LPAR.
DB2 12 extends the RLF functionality to support static SQL statements, both to avoid unanticipated elongation of application elapsed time and to control system resource consumption.
9.2.1 Reactive governing static SQL
You can define the RLF tables by using the naming conventions DSNRLSTxx and DSNRLMTxx, where xx is an ID number. These RLF tables allow you to specify limits on the amount of processor resources, in service units, that dynamic SQL statements can use. This type of control is called reactive governing because DB2 can interrupt the execution of a qualified SQL statement and return a negative SQLCODE when the limit is reached.
DB2 12 enhances the reactive governing function of the RLF tables so that you can also limit the resources that static SQL statements use. To use RLF, you insert rows into a resource limit table with values that identify the context of the governed statements, the type of governing, and the threshold limits. The limits specified in the RLF tables apply to individual dynamic or static SQL statements that qualify for a defined scope. You can specify different function codes in the RLF tables to indicate whether static or dynamic SQL statements are to be governed with the limit when they are executed. To insert a row for reactive governing of static SQL statements, specify 'A' in the RLFFUNC column and the processor limit in the ASUTIME column. The PLANNAME column must be blank.
 
Note: Other functions are provided with RLF too, such as the predictive governing function, query parallelism control function, and bind limit function. These function codes can be specified in the RLF tables and are still applicable to dynamic SQL statements only.
Similar to dynamic SQL, only the following static statements can be reactively governed:
SELECT (cursor and singleton)
INSERT
UPDATE (searched and positioned)
MERGE
TRUNCATE
DELETE (searched and positioned)
The change shown in Table 9-3 and Table 9-4 applies to the RLFFUNC column of the DSNRLSTxx and DSNRLMTxx resource limit tables.
Table 9-3 RLFFUNC column in RLF DSNRLSTxx table
Column name
Column type
Description
RLFFUNC
CHAR(1)
Specifies how the row is used. These values have an effect:
'1' = The row reactively governs bind operations.
'2' = The row reactively governs dynamic SELECT, INSERT, UPDATE, MERGE, TRUNCATE, or DELETE statements by package or collection name.
'4' = The row disables query CP parallelism.
'7' = The row predictively governs dynamic SELECT, INSERT, UPDATE, MERGE, TRUNCATE, or DELETE statements by package or collection name.
'A' = The row reactively governs static SELECT (cursor and singleton), INSERT, UPDATE, MERGE, TRUNCATE, or DELETE statements by package or collection name.
All other values are ignored.
Table 9-4 RLFFUNC column in RLF DSNRLMTxx table
Column name
Column type
Description
RLFFUNC
CHAR(1)
Specifies how the row is used. These values have an effect:
'8' = The row reactively governs dynamic SELECT, INSERT, UPDATE, MERGE, TRUNCATE, or DELETE statements by client information (RLFEUID, RLFEUAN, RLFEUWN, and RLFIP).
'9' = The row predictively governs dynamic SELECT, INSERT, UPDATE, MERGE, TRUNCATE, or DELETE statements by client information (RLFEUID, RLFEUAN, RLFEUWN, and RLFIP).
'B' = The row reactively governs static SELECT (cursor and singleton), INSERT, UPDATE, MERGE, TRUNCATE, or DELETE statements by client information (RLFEUID, RLFEUAN, RLFEUWN, and RLFIP).
All other values are ignored.
After a START RLIMIT command is issued referencing these tables, the changes to the resource limit table become effective immediately for all new threads.
While a thread is executing and using a limit, DB2 can detect that a filtering criterion has changed and apply the corresponding limit:
When the primary user is changed
When the client information is changed
When a package is loaded for execution
Also, while a thread is executing and using a limit, the limit may be updated in the RLF tables. For the changed limit to take effect for such a thread, RLF must be restarted.
When a row in the RLF table matches the currently executing static statement, the row’s ASUTIME value is used to limit the statement’s execution. The static statement’s execution can be stopped with SQLCODE -905 when the consumed resource exceeds the specified ASUTIME limit. Note that SQL requests related to a cursor, such as OPEN, FETCH, and CLOSE, accumulate CP resources against the same SELECT statement. When no row in the RLF table matches the currently executing static statement, DB2 uses the default limit value for static SQL that is set by DSNZPARM on installation panel DSNTIPO4, which appears after panel DSNTIPO3.
9.2.2 Use cases
Two use cases are presented here.
Use case 1
These are the steps for using the DB2 resource limit facility to govern static SQL statements:
1. Insert rows into a DSNRLSTxx resource limit table with values that identify the context of the governed statements (which application the statements are in, which client connection the statements are from, which primary authorization ID executes the statements, and more), the type of governing, and the CP limits. Assume that the rows shown in Table 9-5 are inserted.
Table 9-5 Rows in the DSNRLSTxx table
RLFFUNC  AUTHID   PLANNAME  RLFCOLLN  RLFPKG   LUNAME   ASUTIME
A        JOE      (blank)   COL1      (blank)  (blank)  (null)
2        JOE      (blank)   COL1      (blank)  (blank)  15000
A        (blank)  (blank)   (blank)   PKG2     PUBLIC   10000
2        (blank)  (blank)   (blank)   PKG2     PUBLIC   20000
The first row indicates that when user JOE runs any package in the collection COL1, at the local location, no limit restricts any static statement in the package because the ASUTIME column value is null.
The second row shows that when user JOE runs any package in the collection COL1, at the local location, each dynamic statement in the package is restricted to 15,000 SUs.
The third row reflects that when any user runs package PKG2, in any collection from any location in the network, including the local location, a processor limit of 10,000 SUs is applied for each static statement in the package.
The last row shows that when any user runs package PKG2, in any collection from any location in the network, including the local location, a processor limit of 20,000 SUs is applied for each dynamic statement in the package.
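The four rows of Table 9-5 could be created with INSERT statements like the following sketch (an unqualified table name with suffix 01 is an assumption; blank character columns are shown as single blanks):

```sql
-- Sketch: the four governing rows from Table 9-5 (table name assumed)
INSERT INTO DSNRLST01 (RLFFUNC, AUTHID, PLANNAME, RLFCOLLN, RLFPKG, LUNAME, ASUTIME)
  VALUES ('A', 'JOE', ' ', 'COL1', ' ', ' ', NULL);      -- no static limit
INSERT INTO DSNRLST01 (RLFFUNC, AUTHID, PLANNAME, RLFCOLLN, RLFPKG, LUNAME, ASUTIME)
  VALUES ('2', 'JOE', ' ', 'COL1', ' ', ' ', 15000);     -- dynamic: 15,000 SUs
INSERT INTO DSNRLST01 (RLFFUNC, AUTHID, PLANNAME, RLFCOLLN, RLFPKG, LUNAME, ASUTIME)
  VALUES ('A', ' ', ' ', ' ', 'PKG2', 'PUBLIC', 10000);  -- static: 10,000 SUs
INSERT INTO DSNRLST01 (RLFFUNC, AUTHID, PLANNAME, RLFCOLLN, RLFPKG, LUNAME, ASUTIME)
  VALUES ('2', ' ', ' ', ' ', 'PKG2', 'PUBLIC', 20000);  -- dynamic: 20,000 SUs
```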
2. Issue the START RLIMIT ID=xx command, where xx is the two-character identifier that was specified when the RLF table was created. You can start and stop different resource limit tables at different times. However, only one resource limit table of each type (DSNRLMTxx or DSNRLSTxx) can be used at any given time.
3. When SQL statements are run in a package, DB2 uses the following search order:
a. Exact match
b. Authorization ID
c. Plan name, or collection name and package name
d. LU name
e. No row match
 
Note: A DB2 11 behavior is that the CP time for a dynamic cursor can include CP time incurred by a dynamic positioned UPDATE or DELETE (WHERE CURRENT OF) statement against the same cursor. A static cursor can also be used to update or delete a particular row with a static or dynamic positioned UPDATE or DELETE statement. When the ASUTIME limits for static and dynamic SQL specified in the RLF table differ, a dynamic positioned UPDATE or DELETE statement uses the dynamic limit, and a static positioned UPDATE or DELETE statement uses the static limit.
The following examples show SQL statements in different packages that qualify for the previous DSNRLSTxx table example and their corresponding limits governed by DB2 (Table 9-6).
Table 9-6 Dynamic and static SQL in packages and RLF limits used
Package name  SQL request                              Applied limit in service units
COL1.X        DCL CURSOR C1 FOR SELECT...
              OPEN/FETCH C1                            Infinite is used to govern (accumulate into C1)
              Static UPDATE WHERE CURRENT OF C1        Infinite is used to govern (accumulate into C1)
              Dynamic UPDATE WHERE CURRENT OF C1       15000 is used to govern (accumulate into C1)
X.PKG2        DCL CURSOR C2 FOR STMID
              PREPARE STMID FOR SELECT…
              OPEN/FETCH C2                            10000 is used to govern (accumulate into C2)
              Dynamic UPDATE WHERE CURRENT OF C2       10000 is used to govern (accumulate into C2)
X.PKG3        DCL CURSOR C3 FOR SELECT...
              OPEN/FETCH C3                            10000 is used to govern (accumulate)
              Static UPDATE WHERE CURRENT OF C3        10000 is used to govern (accumulate into C3)
              Dynamic UPDATE WHERE CURRENT OF C3       20000 is used to govern (accumulate into C3)
Use case 2
Assume that the following subsystem parameters for RLF are set as shown here:
RLFENABLE = ALL, to govern both static and dynamic SQL
RLFERRSTC = 5000, as the default limit for locally originated static SQL
RLFERRDSTC = 8000, as the default limit for remotely originated static SQL
RLFERR = 10000, as the default limit for locally originated dynamic SQL
RLFERRD = NOLIMIT, as the default limit for remotely originated dynamic SQL
The DSNRLST01 table (Table 9-7) and DSNRLMT01 table (Table 9-8) are activated for RLF.
Table 9-7 DSNRLST01 table
RLFFUNC  AUTHID   PLANNAME  RLFCOLLN  RLFPKG   LUNAME   ASUTIME
2        JOE      (blank)   COL1      PKG1     (blank)  2000
A        JOE      (blank)   COL1      PKG1     (blank)  800
         (blank)  (blank)   (blank)   PKG2     PUBLIC   500
         DAVID    (blank)   (blank)   (blank)  PUBLIC   600
Note that in Table 9-7, the blank value of the LUNAME column is for locally originated SQL, and the PUBLIC value is for both locally and remotely originated SQL.
Table 9-8 DSNRLMT01 table
RLFFUNC  RLFEUAN  RLFEUID  RLFEUWN  RLFIP    ASUTIME
         APP1     (blank)  (blank)  (blank)  1500
         APP1     (blank)  (blank)  (blank)  700
The steps are as follows:
1. The following static query is issued from the locally executed package COL3.PKG3, by the authorization ID FRANK:
DCL CURSOR C1 FOR SELECT SALARY FROM DSN8C10.EMP;
OPEN C1;
FETCH C1;
The default limit of 5,000 service units for locally originated static SQL is applied to cursor C1 because no matching row is found for the thread in either the DSNRLMT01 table or the DSNRLST01 table.
2. The following static query is issued from the package COL4.PKG4, which is remotely executed in the middleware application, APP4, by the authorization ID FRANK:
DCL CURSOR C2 FOR SELECT SALARY FROM DSN8C10.EMP;
OPEN C2;
FETCH C2;
The default limit of 8,000 service units for remotely originated static SQL is applied to cursor C2 because no matching row is found for the thread in either the DSNRLMT01 table or the DSNRLST01 table.
3. The following static SQL, which updates a static positioned row, is issued from the locally executed package, COL1.PKG1, by the authorization ID JOE:
DCL CURSOR C3 FOR SELECT SALARY FROM DSN8C10.EMP;
OPEN C3;
FETCH C3;
UPDATE DSN8C10.EMP SET SALARY = SALARY + (SALARY * 0.1) WHERE CURRENT OF C3;
The 800 service unit limit is applied to the static UPDATE statement that references the static positioned cursor C3 because the second row in the DSNRLST01 table matches the thread for static SQL.
4. The following static SQL, which updates a dynamic positioned row, is issued from the locally executed package COL1.PKG1, by the authorization ID JOE:
DCL CURSOR C4 FOR S4;
STMT4 = 'SELECT SALARY FROM DSN8C10.EMP';
PREPARE S4 FROM :STMT4;
OPEN C4;
FETCH C4;
UPDATE DSN8C10.EMP SET SALARY = SALARY + (SALARY * 0.1) WHERE CURRENT OF C4;
The 2000 service unit limit is applied to the static UPDATE statement that references the dynamic positioned cursor C4 because the first row in the DSNRLST01 table matches the thread for dynamic SQL.
 
Note: When a dynamic or static statement contains an external user-defined function, the execution time for the user-defined function is not included in the ASUTIME of the RLF-governed dynamic or static statement’s execution. The ASUTIME for a user-defined function's execution is controlled based on the ASUTIME option specified for the user-defined function in the CREATE FUNCTION statement.
9.2.3 RLF DSNZPARMs and installation panels
Because DB2 12 supports different limits for governing dynamic and static SQL statements, you might want to activate RLF for dynamic SQL only (as in DB2 11), for static SQL only, or for both. The new RLFENABLE DSNZPARM is introduced for this selection. Two new DSNZPARMs are also added so that you can specify the action taken in case DB2 cannot access the RLF tables.
Two subsystem parameters for RLF are added to DSN6SYSP:
RLFENABLE: Specifies the level of RLF governing when RLF is started.
RLFERRSTC: Specifies what action DB2 is to take for static SQL statements when the Resource Limit Facility governor encounters a condition that prevents it from accessing the resource limit specification table. This setting also applies if DB2 cannot find an applicable row in the resource limit specification table. It is equivalent to the existing RLFERR parameter, which pertains to dynamic SQL statements.
Also, a subsystem parameter for RLF is added to DSN6FAC:
RLFERRDSTC: Specifies what action DB2 is to take for static SQL statements from a remote location when the Resource Limit Facility governor encounters a condition that prevents it from accessing the resource limit table. This setting also applies if DB2 cannot find an applicable row in the resource limit table. It is equivalent to the existing RLFERRD parameter, which pertains to dynamic SQL statements from a remote location.
A new installation panel, DSNTIPO4, is introduced specifically for RLF-related subsystem parameters. It appears after panel DSNTIPO3 (Figure 9-7).
Figure 9-7 New DSNTIPO4 installation panel for Resource Limit Facility
Summary of fields is as follows:
The RLF AUTO START and RLST NAME SUFFIX fields were relocated directly from panel DSNTIPO. They correspond to the DSN6SYSP.RLF and DSN6SYSP.RLFTBL parameters.
The DYNAMIC SQL field was also relocated from DSNTIPO where it was named RLST ACCESS ERROR. It corresponds to the DSN6SYSP.RLFERR parameter.
The REMOTE DYNAMIC SQL field was relocated here from panel DSNTIPR (Distributed Data Facility panel 1) where it was named RLST ACCESS ERROR. It corresponds to the DSN6FAC.RLFERRD parameter.
The RLF SCOPE field is new and corresponds to the DSN6SYSP.RLFENABLE parameter.
The STATIC SQL field is also new and corresponds to the DSN6SYSP.RLFERRSTC parameter.
The REMOTE STATIC SQL field is also new and corresponds to the DSN6FAC.RLFERRDSTC parameter.
Only the three following new fields are discussed next:
RLF SCOPE
Specify the level of RLF governing when RLF is started.
Range: DYNAMIC, STATIC, ALL
 – DYNAMIC, the default, means that RLF will govern only dynamic SQL statements.
 – STATIC means that it will govern only static SQL statements.
 – ALL means that it will govern both types.
Default: DYNAMIC
Data sharing scope: All members should use the same setting
Online changeable: Yes
DSNZPxxx: DSN6SYSP.RLFENABLE
STATIC SQL
Specify what action DB2 is to take for static SQL statements when the Resource Limit Specification governor encounters a condition that prevents it from accessing the Resource Limit Specification table (RLST). This setting also applies if DB2 cannot find an applicable row in the RLST.
Range: NOLIMIT, NORUN, or an integer from 1 to 5000000
 – NOLIMIT, the default, means that RLF will allow all static SQL statements to run without limit.
 – NORUN means that RLF will terminate all static SQL statements immediately with an SQL error code.
 – An integer setting of 1 - 5000000 specifies the number of service units that RLF will use as the default resource limit for all static SQL statements. If the limit is exceeded, the SQL statement is terminated.
Default: NOLIMIT
Data sharing scope: It is recommended that all members use the same setting
Online changeable: Yes
DSNZPxxx: DSN6SYSP.RLFERRSTC
REMOTE STATIC SQL
Specify what action DB2 is to take for static SQL statements from a remote location when the Resource Limit Specification governor encounters a condition that prevents it from accessing the Resource Limit Specification table (RLST). This setting also applies if DB2 cannot find an applicable row in the RLST.
Range: NOLIMIT, NORUN, or an integer from 1 to 5000000
 – NOLIMIT, the default, means that RLF will allow all static SQL statements from a remote location to run without limit.
 – NORUN means that RLF will terminate all static SQL statements from a remote location immediately with an SQL error code.
 – An integer setting of 1 - 5000000 specifies the number of service units that RLF will use as the default resource limit for all static SQL statements from a remote location. If the limit is exceeded, the SQL statement is terminated.
Default: NOLIMIT
Data sharing scope: It is recommended that all members use the same setting
Online changeable: Yes
DSNZPxxx: DSN6FAC.RLFERRDSTC
9.3 Column level deferred alter (pending alter column)
In DB2 11, the capability exists to alter a table column’s attribute, such as its data type, precision, scale, or length through the ALTER TABLE ALTER COLUMN statement as an immediate alteration. However, this function impacts the system in several ways:
Some column alterations result in indexes being placed in a restrictive status. If a unique index is placed in restrictive status, it results in an outage to the table.
Column alterations invalidate dependent packages so that the SQL referencing the columns can be rebound or autobound. When dependent packages are rebound or go through autobind, indexes in restrictive status are not candidates during access path selection. Hence, a suboptimal access path might be used instead.
After the column is altered, the column definition changes are immediately reflected in the catalog, but not in the data. Subsequent access to the column data results in data on the retrieved rows being converted from the old definition format to the new definition format, which incurs a performance overhead until the table is reorganized to convert all data to the new definition format.
If pending alterations for the containing table space or table are not yet materialized, executing a subsequent immediate alteration (such as column alteration) fails. To resolve this situation, one of the following actions is needed:
 – A REORG utility must be run to materialize the pending changes first, followed by an immediate column alteration, followed by another REORG to convert to the new definition format.
 – The order of alterations must be changed so that the immediate alterations are performed prior to the pending alterations.
To address these impacts, DB2 12 enhances the alter column process to limit data unavailability, reduce the impact to access paths, and ease the scheduling of schema alterations and REORG activities. You can choose to execute the ALTER TABLE ALTER COLUMN statement as an immediate or a pending change. Other pending DDL changes have been supported since DB2 10, such as ALTER TABLE ALTER PARTITION, ALTER TABLE DROP COLUMN, ALTER TABLESPACE DSSIZE, ALTER INDEX BUFFERPOOL, and others. When the ALTER TABLE ALTER COLUMN statement is a pending change in DB2 12, the actual alteration is done during the REORG utility execution; therefore, the following benefits can be observed:
The new column definition is reflected in both the catalog and data at the same time. There is no conversion needed when accessing data and hence no performance impact incurred as a result of such conversion.
Indexes are no longer placed in any pending states. They will be rebuilt during the materializing REORG utility.
Dependent packages are invalidated during the REORG SWITCH phase, after indexes have already been rebuilt with the new definition.
All pending alterations can be grouped together to be materialized by a single REORG activity of the table space.
Example 9-11 shows the creation of a partitioned table, followed by a pending alteration of a column data type, and the resulting row in the SYSPENDINGDDL table (Figure 9-8).
Example 9-11 Pending alteration with the SYSPENDINGDDL row
CREATE TABLE SC.TB1
(COLUMN1 INTEGER,
COLUMN2 CHAR(100),
COLUMN3 VARCHAR(100))
IN DB1.TS1;
CREATE INDEX SC.IX1 ON SC.TB1(COLUMN1);
ALTER TABLE SC.TB1 ALTER COLUMN COLUMN1
SET DATA TYPE BIGINT; -> SQLCODE +610
Figure 9-8 SYSPENDINGDDL table
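The recorded pending change can also be inspected by querying the catalog; a sketch follows (the predicate values come from Example 9-11, and the selected column names are assumptions about the SYSPENDINGDDL layout):

```sql
-- List pending definition changes recorded for table space DB1.TS1
SELECT OBJSCHEMA, OBJNAME, OBJTYPE, STATEMENT_TEXT
  FROM SYSIBM.SYSPENDINGDDL
 WHERE DBNAME = 'DB1'
   AND TSNAME = 'TS1';
```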
DDL_MATERIALIZATION, a new ZPARM, is introduced to indicate whether eligible column alterations are executed as immediate or as pending alterations.
Valid ZPARM values:
ALWAYS_IMMEDIATE (default)
ALWAYS_PENDING
This ZPARM only applies when all of the following conditions are met:
The APPLCOMPAT bind option (for static ALTER) or the CURRENT APPLICATION COMPATIBILITY special register (for dynamic ALTER) is set to V12R1M500 or higher.
The ALTER COLUMN SET DATA TYPE statement is altering the column’s data type, length, precision, or scale. The following alterations are not eligible to be pending:
 – Altering the inline length of a LOB column
 – Altering the subtype of a column
 – Altering the XML type modifier of an XML column
The underlying table space is a universal table space (UTS), or a pending alteration exists to convert it to a UTS.
The underlying table space data sets are defined.
Figure 9-9 shows the different behaviors between an immediate alteration and a pending alteration based on the zparm DDL_MATERIALIZATION.
Figure 9-9 Differences between an immediate alteration and a pending alteration
Figure 9-10 describes the status for the table space and its index when an allowed alteration (of data type or data length) is done as immediate or pending.
Figure 9-10 Object status difference between immediate and pending alterations
Figure 9-11 shows how the order of pending alteration requests can make a difference (successful or not).
Figure 9-11 Fail pending column alteration
The immediate column alteration request fails because the earlier ALTER TABLESPACE DSSIZE statement is a pending request (regardless of the zparm DDL_MATERIALIZATION value) and has not been materialized yet. Switching to pending request by setting the zparm DDL_MATERIALIZATION = ALWAYS_PENDING can fix this failure because both pending alterations are stackable.
However, when the order of the two alter requests is reversed, both statements succeed whether the zparm DDL_MATERIALIZATION is set to ALWAYS_PENDING or ALWAYS_IMMEDIATE. The ALTER TABLE ALTER COLUMN statement is done first and, under DDL_MATERIALIZATION = ALWAYS_IMMEDIATE, is an immediate alteration that does not impede the next pending ALTER TABLESPACE statement (Figure 9-12).
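Reusing the objects from Example 9-11, the two orderings under DDL_MATERIALIZATION = ALWAYS_IMMEDIATE can be sketched as follows (the DSSIZE value and the CHAR length change are assumptions for illustration):

```sql
-- Failing order: pending change first, then an immediate alteration
ALTER TABLESPACE DB1.TS1 DSSIZE 256 G;     -- pending change
ALTER TABLE SC.TB1 ALTER COLUMN COLUMN2
  SET DATA TYPE CHAR(200);                 -- immediate alter fails: unmaterialized pending change

-- Successful order: immediate alteration first, then the pending change
ALTER TABLE SC.TB1 ALTER COLUMN COLUMN2
  SET DATA TYPE CHAR(200);                 -- immediate alteration
ALTER TABLESPACE DB1.TS1 DSSIZE 256 G;     -- pending change
```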
Figure 9-12 Successful ALTER requests due to order of execution
The pending column alteration enhancement does not introduce any new kinds of alteration; only the alterations that are already possible can be deferred. Some current restrictions still exist on column-level alterations, such as length reduction (conversion from BINARY to CHAR FOR BIT DATA data type) and changing a LOB column’s inline length. Restrictions related to pending alterations in general also apply to pending alter column.
New restrictions apply if alter column is executed as a pending change:
An ALTER INDEX statement with the NOT PADDED clause where the index references a column with pending definition changes
A CREATE TABLE or ALTER TABLE statement that specifies a FOREIGN KEY referencing a parent column with pending definition changes
9.3.1 Utility
The REORG materialization of alteration in the data can be done at a partition level for immediate alter (flagged by the AREO* status). However, the REORG SHRLEVEL REFERENCE or CHANGE utility to materialize alteration in the data must be done at a table space level for pending alter (flagged by the AREOR status).
The LOAD REPLACE utility does materialize changes for immediate alter (flagged by the AREO* status). This utility does not materialize changes for pending alter (flagged by the AREOR status). Online REORG on the complete table space must be run as indicated above.
The RECOVER utility, and recovery to a point in time prior to an immediate column alteration, are supported.
In addition to deferred ALTER processing (for example, cleanup AREOR), the online REORG utility also performs the following actions:
Regenerates views, triggers, masks and permissions.
Invalidates dependent packages and the dynamic statement cache.
Updates the versioning counter; only one new version number is generated.
The DISCARD option discards rows in the materialized format with new schema definition.
If the table space is in Basic Row Format (BRF), it will be converted to Reordered Row Format (RRF) regardless of ROWFORMAT keyword or the RRF ZPARM.
If the STATISTICS keyword is not specified, the default options used are STATISTICS TABLE ALL, INDEX ALL, UPDATE ALL, HISTORY ALL.
9.3.2 ALTER INDEX
An alteration of an index to or from COMPRESS is a pending alteration, independent of the zparm DDL_MATERIALIZATION. Before DB2 12, the index was set to the PSRBD status. Such a pending index change is materialized by the REORG utility, not by the REBUILD utility.
An alteration of an index to NOT PADDED is not allowed if pending column changes exist. The reason is that the materializing REORG might generate a key larger than 2000 bytes because of the 2-byte length field of NOT PADDED key entries. Without pending changes, the alteration to NOT PADDED is allowed but sets the PSRBD status on the index.
An alteration of an index to PADDED is always allowed but sets the PSRBD status on the index. This alteration to PADDED is allowed because the keys are fully expanded and can only shrink by the 2-byte length field of NOT PADDED key entries.
9.4 Insert partition
In DB2 10, partitions in a partitioned table can be rearranged by rotating them by using the ALTER TABLE ROTATE PARTITION n TO LAST statement. This support provides some capability for users to modify a table’s partition configuration. In addition, a user can add a new partition as the last partition at the end of the table through the ALTER TABLE ADD PARTITION statement with ENDING AT x, where x must be the highest limit key value.
DB2 12 extends the support to dynamically add a partition in the middle of the table for greater flexibility, so that the logical partition numbers are arranged according to how the data with the limit key values is supposed to be stored. This functionality provides usability while maximizing availability of the altered object.
9.4.1 ALTER ADD PARTITION
Two SQL statements are available that can insert a new partition into the middle of the table. Both give the same behavior:
The user specifies the high limit key (x) of the newly inserted partition:
ALTER TABLE ADD PARTITION ENDING AT x
This is the same syntax that existed in DB2 10 for adding a partition at the end. With DB2 12, the limit key value x can be any value, so the new partition can be added in the middle of the table.
The user specifies the high limit key (x) of the newly inserted partition, along with the high limit key (y) of the existing partition (n as the physical partition number for the subsequent logical partition) that will be affected by the insert operation:
ALTER TABLE
ADD PARTITION ENDING AT x
ALTER PARTITION n ENDING AT y
This syntax is new in DB2 12 for family compatibility and ensures the user specifies the intended place where the new partition is to be added.
Inserting a partition in the middle of the table is supported when the APPLCOMPAT bind option (for static ALTER) or the CURRENT APPLICATION COMPATIBILITY special register (for dynamic ALTER) is set to V12R1M500 or greater.
The keyword INCLUSIVE is implied by default even if it is not stated explicitly. The partition specified in the ALTER PARTITION clause needs to be the very next logical partition to the partition being added in the ADD PARTITION clause. In addition, the high limit key value specified in the ALTER PARTITION clause must be the existing high limit key value for the very next logical partition. The high limit key value cannot be altered in the same statement while inserting a new partition.
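For example, for a table whose partitions end at 250, 500, 750, and 1000 (as in Figure 9-13), a new partition ending at 400 could be inserted with either form; the table name here is an assumption:

```sql
-- Form 1: specify only the new partition's high limit key
ALTER TABLE SC.TB1 ADD PARTITION ENDING AT 400;

-- Form 2: also identify the very next logical partition (partition 2,
-- ending at 500) that is affected by the insert operation
ALTER TABLE SC.TB1
  ADD PARTITION ENDING AT 400
  ALTER PARTITION 2 ENDING AT 500;
```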
A new first partition will be added if the specified high limit key for the inserted partition is lower than the existing lowest limit key in the table.
The newly inserted partition is physically added at the end and obtains a new physical partition number. It is assigned a new logical partition number based on the location where it is inserted, and all subsequent logical partition numbers are renumbered by incrementing them by 1.
The adding partition request is considered a pending change when there is an existing pending change or when both of the following items are true:
A partition is added in the middle of the table.
The data sets of the table space are already created.
Otherwise, the change is considered an immediate change: for example, when the partition is added at the end, or when the table space has the DEFINE(NO) attribute. Adding the last partition does not affect existing data in existing partitions. When the data sets for a table space have not yet been defined, the table’s redefinition does not need to wait either. If the change is considered an immediate change, the change to the description of the table takes effect immediately.
If the change is a pending change to the definition of the table, the changes are not reflected in the definition or data at the time the ALTER TABLE statement is issued. Instead, the affected partitions are placed in an advisory REORG-pending state (AREOR).
Figure 9-13 shows a table with four partitions ending at keys 250, 500, 750, and 1000. It shows how the physical and logical partitions are arranged if any of the following statements are executed:
ALTER TABLE ADD PARTITION ENDING AT 1250 statement
ALTER TABLE ROTATE PARTITION 2 TO LAST ENDING AT 1250 statement
ALTER TABLE ADD PARTITION ENDING AT 400 statement
Figure 9-13 Adding a partition at the end or middle of the table, or rotating a partition
The following restrictions are implemented:
The table must be a range-partitioned table (PBR and PBR relative page number are both supported).
The newly inserted partition’s limit key value in the ADD PARTITION clause must not be the same as an existing partition’s limit key.
The partition specified in the ALTER PARTITION clause must be the very next logical partition to the partition being added in the ADD PARTITION clause.
The high limit key value specified in the ALTER PARTITION clause must be the existing high limit key value for the very next logical partition, which can be obtained from the catalog table. The high limit key value cannot be altered in the same statement while inserting a new partition.
If any outstanding unmaterialized alter limit key pending definition changes exist on the last partition of the table, insert partition will not be allowed in the same table until the pending alter limit key changes are materialized by a REORG execution.
Adding a partition to the end of the table is not allowed if any outstanding pending definition changes exist for the table space or objects within the table space.
The table cannot contain a LOB column, a distinct type column that is based on a LOB data type, or an XML column.
Note that only one new partition to be inserted can be specified in a single ALTER TABLE statement. However, multiple insert partition requests can be submitted with multiple ALTER statements, one partition per statement, and all those pending requests will be materialized by a single materializing REORG utility execution (Example 9-12).
Example 9-12 Multiple pending insert partitions in the same table are materialized
ALTER TABLE T1 ADD PARTITION ENDING AT 15
ALTER TABLE T1 ADD PARTITION ENDING AT 25
REORG TABLESPACE SHRLEVEL REFERENCE
9.4.2 Utilities affected
REORG and RECOVER are affected utilities.
REORG
In order to materialize the pending definition change for an inserted partition, a partition-level REORG utility for the affected partition (or partitions) must be executed. The REORG utility must be online (SHRLEVEL REFERENCE or SHRLEVEL CHANGE). The new SYSPENDINGDDL.REORG_SCOPE_LOWPART and SYSPENDINGDDL.REORG_SCOPE_HIGHPART columns represent the logical partition range of the object’s definition prior to any pending definition changes being materialized. If there are multiple SYSPENDINGDDL entries for multiple insert partition pending definition changes and some, but not all, of them are materialized first, these two columns are updated during the earlier materializing REORG processing to reflect the most current logical partition number range.
After this logical partition range is converted to a physical partition range, it is the partition range that the user must include in the partition-level REORG to materialize this particular pending definition change. If this partition range is not included in the partition-level REORG execution, the REORG processing proceeds, and a new warning message, DSNU2918I, with return code 4 is issued to indicate that the utility was run on a subset of the affected partitions and that not all partition-level pending definition changes were applied.
If any additional table space level pending definition changes were issued on the table space while some partitions are in advisory-REORG (AREOR) pending status due to inserting a new partition, a table space level REORG is required to materialize all the pending definition changes together at once.
If the partitions in the table space are in pending status only because of insert partition pending definition changes, a table space level REORG with the SCOPE PENDING option can also be used to reorganize the affected partitions and materialize the insert partition pending definition changes.
If adjacent partitions are affected by either alter limit key or insert partition partition-level pending definition changes, these adjacent partitions are required to be reorganized together in order to materialize the partition-level pending definition changes. If the affected adjacent partitions are not materialized together, none of the pending definition changes would be materialized. In addition, a table space level REORG with SCOPE PENDING option can also be used to reorganize affected partitions and materialize the pending definition changes.
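These materialization options can be sketched as follows, assuming the affected physical partitions are 2 and 3 of table space DB1.TS1 (object names are assumptions). The first statement reorganizes only the named partition range; the second reorganizes every partition with pending changes:

```text
REORG TABLESPACE DB1.TS1 PART 2:3 SHRLEVEL REFERENCE

REORG TABLESPACE DB1.TS1 SHRLEVEL REFERENCE SCOPE PENDING
```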
While a partition-level REORG on the affected partitions can materialize insert partition pending definition changes, by either the PART specification or SCOPE PENDING, the underlying REORG needs to quiesce the entire partitioned table space. The complete table space is drained in the LOG phase because partition numbers and ordering change. Also, packages and dynamic statements dependent on the table space are invalidated.
The REORG utility may collect new statistics for the newly inserted and affected partitions. For the newly inserted partition (or partitions), statistics will be collected with STATISTICS TABLE ALL INDEX ALL UPDATE ALL HISTORY ALL options, unless you explicitly specify the STATISTICS keyword with different options.
For the affected partition (or partitions), the recommendation is to collect all statistics inline: define a statistics profile for the table and specify REORG ... STATISTICS TABLE USE PROFILE in the materializing REORG execution to recollect the complete set of statistics. If the statistics are not recalculated, a warning message DSNU1166I with return code 4 can be issued to indicate that some partition statistics might no longer be accurate because they have become obsolete. The partition statistics that might be obsolete are COLGROUP statistics, KEYCARD statistics, HISTOGRAM statistics, frequency statistics with NUMCOLS > 1, and statistics for extended indexes where applicable. In that case, execute the RUNSTATS utility after the REORG utility to collect the partition statistics again.
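The recommended sequence can be sketched as follows, using hypothetical object, table, and column names; the profile contents depend on which statistics your queries need:

```sql
-- Define a statistics profile for the table (hypothetical names).
RUNSTATS TABLESPACE DB1.TS1 TABLE(SCHEMA1.TB1)
  COLGROUP(C1, C2) FREQVAL COUNT 10
  SET PROFILE

-- Materializing REORG that recollects statistics inline by using
-- the stored profile, so COLGROUP, HISTOGRAM, and similar partition
-- statistics are not left obsolete after materialization.
REORG TABLESPACE DB1.TS1 SCOPE PENDING SHRLEVEL CHANGE
  STATISTICS TABLE USE PROFILE
```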
RECOVER
A point-in-time RECOVER cannot be run to a point before the materializing REORG. The error message DSNU556I with return code 8 is issued if such a time is specified. The REORG utility can be executed to materialize the pending changes before recovery to a point in time. DB2 12 does insert the SYSOBDS records to enable point-in-time recovery in the future. The MODIFY RECOVERY utility cleans up these records.
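The sequence can be sketched as follows, with hypothetical object names and a placeholder log point:

```sql
-- Materialize the pending definition changes first.
REORG TABLESPACE DB1.TS1 SCOPE PENDING SHRLEVEL CHANGE

-- A point-in-time recovery to a log point before that REORG fails
-- with DSNU556I; recover only to points after the materialization.
-- The RBA value below is a placeholder, not a real log point.
RECOVER TABLESPACE DB1.TS1 TORBA X'00000000000000000000'

-- MODIFY RECOVERY also cleans up the SYSOBDS records along with
-- other recovery information (here, entries older than 30 days).
MODIFY RECOVERY TABLESPACE DB1.TS1 DELETE AGE(30)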
9.4.3 Catalog changes
Catalog changes are described in this section.
SYSIBM.SYSTABLESPACE
When a new partition is added at the end of the table, the PRIQTY and SECQTY attributes of the previous (last) logical partition are used for the space attributes of the new partition. When a new partition is inserted in the middle of the table, the PRIQTY and SECQTY attributes stored at the table space level in SYSTABLESPACE are used for the space attributes of the new partition. For this purpose, DB2 12 adds new columns to SYSIBM.SYSTABLESPACE.
In addition, for newly created objects, the values of other table space attributes specified on the CREATE TABLESPACE statement, or the default values for unspecified options, are saved in the new SYSTABLESPACE columns. In DB2 11, these values are saved only at the partition level in SYSTABLEPART. In DB2 12, they are also saved in SYSTABLESPACE so that an inserted partition can inherit these table space level values. The column values can also be populated when an ALTER TABLESPACE statement changes these attributes at the global level.
For migrated objects, values for these new columns, which are initially NULL, are populated when any DDL change (insert partition, alter partition, rotate partition, conversion from index-controlled to table-controlled partitioning, table space type conversion, and others) occurs on the table space; the values are inherited from the last logical partition (the same as the existing add partition behavior). The effect takes place immediately for immediate alters, and during materialization for pending definition changes.
The following new columns are added:
PQTY
STORTYPE
STORNAME
VCATNAME
FREEPAGE
PCTFREE
COMPRESS
GBPCACHE
TRACKMOD
SECQTYI
PCTFREE_UPD
PCTFREE_UPD_CALC
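To inspect the table space level defaults that an inserted partition inherits, a query along these lines can be used (the WHERE predicates are placeholders; for migrated objects these columns can still be NULL until a DDL change populates them):

```sql
SELECT NAME, PQTY, SECQTYI, FREEPAGE, PCTFREE,
       COMPRESS, GBPCACHE, TRACKMOD
  FROM SYSIBM.SYSTABLESPACE
 WHERE DBNAME = 'DB1'
   AND NAME   = 'TS1';
```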
SYSIBM.SYSINDEXES
The following new nullable columns are added to the SYSINDEXES catalog table to store default values for partition attributes at the index space level. These columns also exist in the SYSINDEXPART catalog table in DB2 11. In DB2 12, the new columns added to SYSINDEXES have data types and descriptions identical to those in SYSINDEXPART, except that the new SYSINDEXES columns are nullable.
The following new columns are added:
PQTY
STORTYPE
STORNAME
VCATNAME
FREEPAGE
PCTFREE
GBPCACHE
SECQTYI
SYSIBM.SYSPENDINGDDL
Two new nullable SMALLINT columns, REORG_SCOPE_LOWPART and REORG_SCOPE_HIGHPART, are added to the SYSPENDINGDDL catalog table to store the low and the high logical partition numbers of the range that must be included in a partition-level REORG execution to materialize insert partition pending definition changes. In DB2 12, values for these two columns are populated for all pending definition changes.
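A query such as the following (with hypothetical WHERE predicates) shows the logical partition range that a partition-level REORG must include for each pending change:

```sql
SELECT DBNAME, TSNAME, OPTION_KEYWORD, OPTION_VALUE,
       REORG_SCOPE_LOWPART, REORG_SCOPE_HIGHPART
  FROM SYSIBM.SYSPENDINGDDL
 WHERE DBNAME = 'DB1'
   AND TSNAME = 'TS1';
```

Remember that these column values are logical partition numbers; convert them to physical partition numbers before building the REORG PART specification.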
Figure 9-14 shows a table that has three partitions (L for logical partition and P for physical partition) ending at key values 20, 40, and 60.
Figure 9-14 Table with 3 partitions
When an insert partition request with a limit key of 30 is issued, the partitions and the SYSPENDINGDDL row look like this example (see Figure 9-15 on page 165 and Figure 9-16 on page 165):
ALTER TABLE ADD PARTITION ENDING AT 30
Figure 9-15 Adding a logical partition with key 30
Figure 9-16 SYSPENDINGDDL row
This row has the columns SYSPENDINGDDL.REORG_SCOPE_LOWPART = 2 and SYSPENDINGDDL.REORG_SCOPE_HIGHPART = 2 because only the existing logical partition number 2 is affected (it becomes logical partition 3, and rows are moved to the new partition).
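Assuming that logical partition 2 corresponds to physical partition 2 in this example (the object names are hypothetical), the materializing partition-level REORG could be:

```sql
REORG TABLESPACE DB1.TS1 PART 2 SHRLEVEL CHANGE
```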
Then a REORG utility is executed with affected partition number 2 to materialize the pending change. Now the table has the new logical partition 3 with key values from 31 to 40. Next, another insert partition request with a limit key of 35 is issued. The partitions and the SYSPENDINGDDL row look like this example (see Figure 9-17 and Figure 9-18):
ALTER TABLE ADD PARTITION ENDING AT 35
-> SYSPENDINGDDL.OPTION_VALUE = 35,
Figure 9-17 Adding logical partition with key 35
Figure 9-18 SYSPENDINGDDL row
This row has the columns SYSPENDINGDDL.REORG_SCOPE_LOWPART = 3 and SYSPENDINGDDL.REORG_SCOPE_HIGHPART = 3 because only the existing logical partition number 3 is affected (it becomes logical partition 4, and rows are moved to the new partition).
If no REORG utility was run to materialize the first insert partition request (where the limit key was 30), the two rows shown in Figure 9-19 would be inserted in the SYSPENDINGDDL table.
Figure 9-19 SYSPENDINGDDL table
Only the original logical partition 2 is affected for both pending changes.
 