Fast Path performance considerations
This chapter explores Fast Path processing. IMS 11 introduced Fast Path 64-bit buffer pools. IMS 12 introduces Fast Path secondary index (FPSI) and Fast Path log reduction.
First, we describe general performance considerations and then proceed to the new functions:
Fast Path 64-bit buffer manager: This function was introduced in IMS 11 and enhanced in IMS 12. The 64-bit buffer manager creates database buffers above the 2 GB bar, in 64-bit virtual storage. Those buffer pools are pre-expanded in anticipation of future needs and automatically compressed when the use of a subpool drops.
Fast Path secondary index (FPSI): IMS 12 provides support for secondary indexes for DEDBs. A secondary index database provides an alternate path to access its corresponding primary DEDB and can be processed as a separate database. IMS supports two database structures for FPSI: hierarchical indexed sequential access method (HISAM) and simple hierarchical indexed sequential access method (SHISAM).
Fast Path log reduction: IMS 12 allows users to specify that the before-image record of the type ‘99’ log is not to be written. This can benefit asynchronous data capture users who want only after-image log records, reducing logging volumes and improving performance.
8.1 IMS Fast Path databases
IMS Fast Path includes two database organizations:
Data entry database (DEDB)
Main storage database (MSDB)
DEDBs are hierarchical databases similar to HDAM but have significant differences that provide even higher performance, capacity, and availability. MSDBs reside in storage, enabling application programs to avoid the I/O activity to access them, and also provide high performance. MSDBs cannot be used in a data sharing environment and cannot be updated from OTMA transactions.
8.2 DEDB general performance considerations
When performance problems occur with DEDBs, the major analysis and tuning points are as follows:
I/O IWAIT/CALL
Elapsed time for I/O IWAIT
Usage of Fast Path resources
Figure 8-1 shows two patterns of DL/I call access to a DEDB. If the database is well organized, only one I/O occurs for one segment search request (Case 1). But if the segments in a record are spread across multiple CIs, multiple I/Os can occur for one segment request (Case 2).
Figure 8-1 DL/I call access patterns to DEDB
8.2.1 I/O IWAIT/CALL
I/O IWAIT/CALL is the number of I/O IWAITs that occur per DL/I call. A DASD I/O occurs when the CI that contains the searched segment is not in the buffer pool. If the segment is in the CI that is read first, the program can get the segment with one DASD I/O. But the program needs to read multiple CIs per DL/I call if there are synonym chains or if segments have overflowed into the Independent Overflow (IOVF) or the Dependent Overflow (DOVF) parts. When there are many dependent segments for one root segment, the database record can be divided among multiple CIs, and multiple I/Os are needed to reach a dependent segment that is placed in the latter part of the record. These conditions make the I/O IWAIT rate higher.
The IWAITs/Call value in the IMS Performance Analyzer Database IWAIT Summary report gives you the number of I/O IWAITs per call. It is reported for each database data set. A rate under 1.0 is ideal. See Example 8-1.
Example 8-1 IMS Performance Analyzer for z/OS: Database IWAIT Summary
Report from 16Aug2012 16:12:16:96 IMS 12.1.0 IMS Performance Analyzer 4.3 Report to 16Aug2012 16:14:36:76
Database IWAIT Summary (Sorted by Total IWAIT Elapsed time)
___________________________________________________________
Region Totals From 16Aug2012 16:12:17:26 To 16Aug2012 16:14:36:75 Elapsed= 0 Hrs 2 Mins 19.800.185 Secs
Elap/IWAIT StdDev Max IWAIT Calls IWAITs Pct Tot Pct Tot Pct Tot Pct Tot
DDname Type IWAITs Sc.Mil.Mic X Avg Sc.Mil.Mic Waiting /Call Calls IWAITs IWTElp DLAElp
DDOPER01 DEDB 1,866 0.454 .367 6.930 1,492 1.25 6.27% 15.80% 13.535% 10.171%
DDTRMC01 DEDB 1,436 0.453 .620 9.839 1,336 1.07 5.61% 12.16% 10.378% 7.799%
DDBOMN01 DEDB 1,077 0.460 .119 1.114 980 1.10 4.12% 9.12% 7.901% 5.938%
DDSSEQ01 DEDB 903 0.487 .102 1.009 903 1.00 3.79% 7.64% 7.015% 5.272%
DDCODE01 DEDB 731 0.414 .071 0.783 679 1.08 2.85% 6.19% 4.831% 3.631%
DDINDX01 DEDB 418 0.430 .150 1.078 418 1.00 1.76% 3.54% 2.867% 2.155%
DDDEPO02 DEDB 180 0.734 1.245 12.923 128 1.41 .54% 1.52% 2.108% 1.584%
DDDEPO06 DEDB 192 0.671 .079 1.090 132 1.45 .55% 1.63% 2.058% 1.546%
DDDEPO15 DEDB 169 0.674 .092 1.158 113 1.50 .47% 1.43% 1.818% 1.366%
DDBOCN01 DEDB 198 0.564 .077 0.982 181 1.09 .76% 1.68% 1.783% 1.340%
DDDEPO05 VSAM 163 0.682 .095 0.986 114 1.43 .48% 1.38% 1.776% 1.334%
DDZENN01 VSAM 213 0.520 .738 6.026 213 1.00 .89% 1.80% 1.768% 1.328%
DDDEPO10 VSAM 163 0.677 .096 1.118 122 1.34 .51% 1.38% 1.762% 1.324%
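As a worked example taken from Example 8-1, the first line shows 1,866 IWAITs against 1,492 calls that waited for DDOPER01, which gives the reported ratio:

IWAITs/Call = 1,866 / 1,492 = 1.25

A value this far above 1.0 suggests that more than one CI must often be read to satisfy a single call, so DDOPER01 would be a good first candidate for the checks described next.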
If the I/O IWAIT/CALL rate is high, check for synonym chains or for segments that have overflowed into IOVF/DOVF by using, for example, the High Performance Fast Path Utilities DEDB Pointer Checker. Then, if necessary, you can reorganize the database or change its structure (the CI, RAA, and UOW sizes).
If the cause is many dependent segments for one root segment, the only remedy is to change the logical structure of the database.
8.2.2 Elapsed time for I/O IWAIT
The elapsed time is the time of DASD I/O processing. The Elap/IWAIT column in the IMS Performance Analyzer for z/OS: Database IWAIT Summary report gives information about the length of an I/O IWAIT. The proper value varies with the DASD model; generally, under 10 ms is preferable.
It might be possible to identify the parts of the database that are heavily accessed (hot spots) and move these records either to another existing area or to a new area. We suggest that you spread the hot spots around, if possible, so that the activity is evenly dispersed among the area data sets on fast DASD devices. Or, if reducing the elapsed time is essential, a solution might be to use the Virtual Storage Option (VSO) or Shared Virtual Storage Option (SVSO). See 8.4, “VSO and SVSO” on page 322.
8.2.3 Usage of Fast Path resources
Fast Path databases are accessed by using dedicated buffers, so checking these resources is important. The following parameters are related to Fast Path resources. Some of them can be defined in IMS control region EXEC parameters and others in the dependent region:
Control region EXEC parameters (DFSPBxx member of IMS.PROCLIB)
DBBF Maximum number of Fast Path buffers
DBFX Number of buffers out of the DBBF to be set aside and page-fixed at control region initialization to use for DEDB writes.
BSIZ Fast Path database buffer size
OTHR Number of concurrent output threads that Fast Path is to support for the entire system
Dependent region EXEC parameters (IFP, MPP, BMP)
NBA Normal buffer allocation
OBA Overflow buffer allocation
 
Fast Path 64-bit manager: If you enable the Fast Path 64-bit manager, you can specify the buffer pool usage in DFSDFxxx member of IMS.PROCLIB, and some of those parameters are ignored. See 8.5, “Fast Path 64-bit buffer manager” on page 331.
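As an illustration, the following sketch shows how these parameters might be coded. The values are hypothetical and must be sized for your own workload; remember that the sizing parameters are ignored if the Fast Path 64-bit buffer manager is enabled, as noted above.

DFSPBxxx member (control region):
  DBBF=10000,
  DBFX=800,
  BSIZ=4096,
  OTHR=255

Dependent region EXEC parameters:
  NBA=30,OBA=5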
You can display the entire current Fast Path buffer usage with the /DIS POOL FPDB command. See Example 8-2.
Example 8-2 /DIS POOL FPDB command response
/DIS POOL FPDB
DFS4445I CMD FROM MCS/E-MCS CONSOLE USERID=IMSR4: DIS POOL FPDB IMS12A
DFS4444I DISPLAY FROM ID=IMS12A 781
FPDB BUFFER POOL:
+ AVAIL = 20 WRITING = 0 PGMUSE = 0 UNFIXED = 280
** NO DEDB PRIVATE POOLS DEFINED **
*12203/084520*
When the UNFIXED value is less than the NBA required by a starting dependent region, the region initialization fails. If this happens, increase the DBBF value.
DBBF
Use the following formula to calculate the number of Fast Path database buffers required:
DBBF = Number of open areas that have SDEP segments
+ Sum of NBA for all concurrently active FP programs
+ Largest OBA allocation for any of the concurrently active FP programs, including any specified by CICS for DBCTL
+ DBFX
+ Sum of all Fast Path buffers used by CICS(CNBA)
(+ Some margin for IMS internal tasks)
If the number of database buffers requested by DBBF is not large enough, then an area open or a region initialization fails.
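As an illustration with hypothetical numbers, assume 20 open areas that contain SDEP segments, 10 concurrently active Fast Path regions with NBA=30 each, a largest OBA of 10, DBFX=800, and no CICS connection:

DBBF = 20 + (10 x 30) + 10 + 800 = 1,130

Adding some margin for IMS internal tasks, a DBBF value of about 1,200 might be specified in this case.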
DBFX
When a new NBA request occurs after all DBFX buffers are in use, new buffers are page-fixed; this operation can affect transaction or batch performance. The number of buffer waits is reported in either of the following locations; the ideal number is 0 (zero):
The Common Buffer Usage Wts section in IMS Performance Analyzer for z/OS: Fast Path Resource Usage and Contention (Example 8-3)
The COMMON BUFFER USAGE WTS section in Fast Path Log Analysis utility (DBFULTA0): OVERALL SUMMARY OF RESOURCE USAGE AND CONTENTIONS report (Example 8-4)
Example 8-3 IMS Performance Analyzer for z/OS: Fast Path Resource Usage and Contention
IMS Performance Analyzer Page 1
Fast Path Resource Usage and Contention - IMSP
_______________________________________________
From 17Aug2012 14:21:03:04 To 17Aug2012 14:23:18:53 Elapsed= 0 Hrs 2 Mins 15.482.032 Secs
---DEDB Calls-- --- ADS I/O --- --VSO Activity- -Common Buffer- Contentions LGNR Stat Totl Tran
Transact Routing Reads Updates Reads Updates Reads Updates Usage Tot Tot CI/ Total #CI Sync Rate
Code Code Count Avg Max Avg Max Avg Max Avg Max Avg Max Avg Max Avg Max Wts Stl UOW OBA Sec Comb Logd Fail /Sec
________ ________ _______ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ____ ____ ____ ____
DFSIVP4 *SF=L 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
DFSIVP5 *SF=L 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
IVTFD IVTFD 10 1 1 0 1 1 3 0 1 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0
IVTFM IVTFM 13 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0
**System Totals** 25 0 1 0 1 1 3 0 1 0 0 0 0 1 1 0 0 0 0 0 0 0 2 0
Example 8-4 Fast Path Log Analysis utility (DBFULTA0): OVERALL SUMMARY OF RESOURCE USAGE AND CONTENTIONS
OVERALL SUMMARY OF RESOURCE USAGE AND CONTENTIONS FOR ALL TRANSACTION CODES AND PSBS: PAGE 7
TRANCODE --NO.-- ------DEDB CALLS------- -MSDB-- ----ADS I/O---- ----VSO ACT---- -COMMON BUFFER- TOTL CONTENTIONS TRAN LGNR STATS
--OR---- ---OF-- -TOTAL- --GET-- --UPD-- -CALLS- --RDS-- --UPD-- --RDS - --UPD-- -----USAGE----- SYNC TOT TOT CI/ RATE -NO. OF CI
--PSB--- -TRANS- AVG MAX AVG MAX AVG MAX AVG MAX AVG MAX AVG MAX AVG MAX AVG MAX AVG MAX WTS STL FAIL UOW OBA SEC /SEC COMB LOG'D
________ _______ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ____ ___ ___ ___ ____ ____ _____
IVTFM 13 0 0 0 0 0 0 1 2 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0
IVTFD 10 1 2 0 1 0 1 0 0 1 3 0 1 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0
DFSIVP5 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
DFSIVP4 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
If buffer waits occur, increase DBFX and DBBF. A long response time for database update I/O can also delay buffer release and cause buffer waits. When that happens, also investigate the DASD response time.
NBA and OBA
When a dependent region is started with NBA specified in its execution parameters, it causes the NBA number of buffers to be made available for the region in the Fast Path buffer pool. This number of buffers must be sufficient to handle the processing of the vast majority of programs running in that region. These buffers are page-fixed when the region starts.
All CIs that are locked at the exclusive level remain locked until the buffer is released. Buffers that were not updated are released when either of the following events occurs:
The NBA limit is reached (and buffer stealing occurs).
The program reaches sync point.
Updated buffers are released only when the OTHREADs are completed.
If your program requires more than its NBA, IMS can provide additional buffers. The number allowed is specified by the OBA parameter on the region procedure. However, IMS permits only a single program to access its OBA buffers at any point in time and uses the OBA latch to enforce this requirement (generally, OBA is required only by update programs).
The OBA latch is released when the holding program reaches sync point. If the latch is unavailable because another program is using its OBA buffers, the region waits until the latch becomes available. At any time, only the largest OBA requested by a region is page-fixed in the Fast Path buffer pool. Be sure to allocate sufficient NBA for the majority of work units, so that OBA is rarely used.
The IMS Performance Analyzer for z/OS: Fast Path Resource Usage and Contention (Example 8-5) and Fast Path Log Analysis utility (DBFULTA0): OVERALL SUMMARY OF RESOURCE USAGE AND CONTENTIONS report (Example 8-6) show the number of NBAs actually used and the number of OBA latch conflicts by each Fast Path transaction.
Example 8-5 IMS Performance Analyzer for z/OS: Fast Path Resource Usage and Contention
IMS Performance Analyzer Page 1
Fast Path Resource Usage and Contention - IMSP
_______________________________________________
From 17Aug2012 14:21:03:04 To 17Aug2012 14:23:18:53 Elapsed= 0 Hrs 2 Mins 15.482.032 Secs
---DEDB Calls-- --- ADS I/O --- --VSO Activity- -Common Buffer- Contentions LGNR Stat Totl Tran
Transact Routing Reads Updates Reads Updates Reads Updates Usage Tot Tot CI/ Total #CI Sync Rate
Code Code Count Avg Max Avg Max Avg Max Avg Max Avg Max Avg Max Avg Max Wts Stl UOW OBA Sec Comb Logd Fail /Sec
________ ________ _______ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ____ ____ ____ ____
DFSIVP4 *SF=L 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
DFSIVP5 *SF=L 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
IVTFD IVTFD 10 1 1 0 1 1 3 0 1 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0
IVTFM IVTFM 13 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0
**System Totals** 25 0 1 0 1 1 3 0 1 0 0 0 0 1 1 0 0 0 0 0 0 0 2 0
Example 8-6 Fast Path Log Analysis utility (DBFULTA0): OVERALL SUMMARY OF RESOURCE USAGE AND CONTENTIONS
OVERALL SUMMARY OF RESOURCE USAGE AND CONTENTIONS FOR ALL TRANSACTION CODES AND PSBS: PAGE 7
TRANCODE --NO.-- ------DEDB CALLS------- -MSDB-- ----ADS I/O---- ----VSO ACT---- -COMMON BUFFER- TOTL CONTENTIONS TRAN LGNR STATS
--OR---- ---OF-- -TOTAL- --GET-- --UPD-- -CALLS- --RDS-- --UPD-- --RDS - --UPD-- -----USAGE----- SYNC TOT TOT CI/ RATE -NO. OF CI
--PSB--- -TRANS- AVG MAX AVG MAX AVG MAX AVG MAX AVG MAX AVG MAX AVG MAX AVG MAX AVG MAX WTS STL FAIL UOW OBA SEC /SEC COMB LOG'D
________ _______ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ____ ___ ___ ___ ____ ____ _____
IVTFM 13 0 0 0 0 0 0 1 2 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0
IVTFD 10 1 2 0 1 0 1 0 0 1 3 0 1 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0
DFSIVP5 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
DFSIVP4 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
You can also use the IMS Performance Analyzer for z/OS: Fast Path Transaction Exception Log report (Example 8-7). This report shows information similar to the DBFULTA0 report, but with enhanced data filtering, better time precision, and additional fields, such as USERID.
Example 8-7 IMS Performance Analyzer for z/OS: Fast Path Transaction Exception log
IMS Performance Analyzer Page 1
Fast Path Transaction Exception Log
___________________________________
Log 17Aug2012 14:21:03:04
Sync Point S Transact Routing P User PST Queue --Transit Times (T/Us)-- Output -DB Call- --ADS-- --VSO-- Buf --DB Wait--
Time F Code Code T ID ID Count In-Q Proc Out-Q Total (sec) DEDB MSDB Get Put Get Put Use CI UW OB CB
___________ _ ________ ________ _ ________ ___ _____ _____ _____ _____ ______ ______ ____ ____ ___ ___ ___ ___ ___ __ __ __ __
14:21:03.04 L DFSIVP4 *IFP 1 0 0 0 0 0 0.00 0 0 0 0 0 0 0 0 0 0 0
14:21:07.21 L DFSIVP5 *IFP 2 0 0 0 0 0 0.00 0 0 0 0 0 0 0 0 0 0 0
14:21:50.66 IVTFD IVTFD SASAKI1 1 0 0 120 192 312 0.01 1 0 3 0 0 0 1 0 0 0 0
14:21:53.39 IVTFD IVTFD SASAKI1 1 0 0 0 1 1 0.01 1 0 1 0 0 0 1 0 0 0 0
The number that is reported under the Buf Use column, on the first line for every transaction, is the total number of buffers used, irrespective of whether they were only NBA, or NBA plus OBA. The Buffers line in the report gives the details of the number used within NBA and OBA.
The report can provide details of buffer usage by type. It details the number of NBA, OBA, NRDB (non-related buffers for SDEP and MSDB use), number of times buffer stealing was invoked, number of times the program waited for a buffer to become available, number of buffers written with OTHREADs, and also the number of buffer sets that are used by HSSP and the high-speed reorganization utility.
You can monitor OBA latch contention by means of the Latch Statistics section of the IMS Performance Analyzer for z/OS: Internal Resource Usage report. Investigate any nonzero total IWAITs value on the OBA latch line, and increase the NBA enough to bring it to 0 (zero). Fragmentation can also increase buffer usage, so you can investigate the DEDBs that are accessed by the transaction, by using the High Performance Fast Path Utilities DEDB Pointer Checker, and reorganize them if needed.
OTHR
When buffers that are waiting to be written start queuing for OTHREADs, buffer contention increases because the locks on the data in the buffers are not released until the buffers are written to DASD. If the problem is on the I/O side, we suggest that you split the area across several DASD volumes.
However, the problem might be an output thread shortage. We also suggest that you set the OTHR system execution parameter to a value large enough so that write buffers are not queued because of an insufficient number of SRBs.
The IMS Performance Analyzer for z/OS: Fast Path OTHREAD Analysis report gives you some useful information about the value to specify for the OTHR parameter (Example 8-8 on page 320). The Max Value column of the Active OTHREADs section shows the maximum number of SRBs scheduled for OTHREAD requests; if the value is close to or equal to the OTHR parameter, increase this parameter value.
The output thread processing for VSO and SVSO DEDBs is somewhat different. See “VSO output threads” on page 323 for details.
Example 8-8 IMS Performance Analyzer for z/OS: Fast Path OTHREAD Analysis
Report from 14Jun2012 18.22.05.26 IMS 12.1.0 IMS Performance Analyzer 4.3 Report to 14Jun2012 18.52.30.76
Fast Path OTHREAD Analysis
From 14Jun2012 18.30.29.14 To 14Jun2012 18.44.19.38 Elapsed= 0 Hrs 13 Mins 50.236.850 Secs
--------- Active OTHREADs ---------|---------- Waiting Areas ----------|-------- Buffers on Queue ---------
Enq Counts Counts Counts
Counts OTHREAD/Enq StDev Max value Area/Enq StDev Max value Buff/Enq StDev Max value
731 0.03 6.444 2 0.01 12.025 1 3.20 1.066 16
 
Active OTHREAD / Enq | Waiting Area / Enq | Buffer Count / Enq
| |
Average Std-Dev/Avg Max Value | Average Std-Dev/Avg Max Value | Average Std-Dev/Avg Max Value
0.03 6.444 2 | 0.01 12.025 1 | 3.20 1.066 16
| |
Range Count in | Range Count in | Range Count in
Counts Range | Counts Range | Counts Range
To Maximum 0| |To Maximum 0| | To Maximum 0|
50 0| | 50 0| | 50 0|
20 0| | 20 0| | 20 1|
15 0| | 15 0| | 15 64|****
10 0| | 10 0| | 10 64|****
5 0| | 5 0| | 5 4|
4 0| | 4 0| | 4 42|**
3 0| | 3 0| | 3 17|*
2 1| | 2 0| | 2 262|**************
1 730|******************** | 1 731|******************** | 1 277|***************
____________________ | ____________________ | ____________________
--------| | | | | | | --------| | | | | | | --------| | | | | |
Total= 731 10 20 30 40 50% | Total= 731 10 20 30 40 50% | Total= 731 10 20 30 40 50%
-----------------------------------------------------------------------------------------------------------------------------------
 
Report from 19Jun2012 10.57.53.58 IMS 12.1.0 IMS Performance Analyzer 4.3 Report to 19Jun2012 11.01.41.40
Fast Path OTHREAD Analysis
From 19Jun2012 10.49.38.15 To 19Jun2012 10.58.57.52 Elapsed= 0 Hrs 9 Mins 19.363.112 Secs
**** DEDB Write IWAIT ****
Share Elap/IWAIT Max value Pct Tot Pct Tot ---- CI Write Count ----
ADSname Level VSO IWAITs Sc.Mil.Mic Sc.Mil.Mic IWAITs IWT Elp CI/IWAIT Max value
DB23AR0 1 NO 5 5.387 17.631 0.63% 1.89% 1 1
DB23AR1 1 NO 5 4.196 5.392 0.63% 0.38% 2 4
DB23AR2 1 NO 7 4.860 6.245 1.28% 1.07% 5 6
. . .
DD01AR0 1 NO 17 2.949 5.488 2.53% 1.58% 2 5
BANKC00 3 YES 426 4.404 140.204 47.53% 21.28% 2 8
BANKC01 3 YES 434 14.246 511.090 58.47% 74.72% 2 8
** Total 903 7.941 511.090 100.00% 100.00% 3 8
8.3 Lock wait time
Be sure to monitor lock contention, because a contention can last several seconds or minutes and can greatly affect performance even if it happens only a few times.
The IMS Performance Analyzer for z/OS: Fast Path DEDB Resource Contention Summary report shows detailed information about lock contention (Example 8-9 on page 321). The Area Name column shows the name of the area where the lock contention occurs, the Counts column shows the number of lock waits, the Elap/Count column shows the average elapsed wait time per request, and the Max IWAIT column shows the maximum elapsed wait time per request.
Example 8-9 IMS Performance Analyzer for z/OS: Fast Path DEDB Res. Contention Summary
Report from 09Jun2012 14.25.56.36 IMS 12.1.0 IMS Performance Analyzer 4.3 Report to 09Jun2012 14.30.06.71
Fast Path DEDB Resource Contention Summary
From 09Jun2012 14.26.11.74 To 09Jun2012 14.29.21.57 Elapsed= 0 Hrs 3 Mins 09.836.240 Secs
**** CI Lock IWAIT **** Sharing Types:
Area Sharing Elap/Count Max IWAIT Pct Tot Pct Tot A : Area / Non Level Share
Name Type Counts Sc.Mil.Mic StDev Sc.Mil.Mic Counts IW Elp B : 1 IRLM Block Level Share
C : 2 IRLM Block Level Share
DB23AR0 A 3 3.313 0.466 5.498 9.09% 0.05%
DB23AR1 A 4 2.222 0.551 3.386 12.12% 0.04%
DB23AR3 A 1 4.871.974 0.000 4.871.974 3.03% 24.50%
DB23AR4 A 1 0.257 0.000 0.257 3.03% 0.00%
DB23AR5 A 11 1.358.286 1.620 4.981.761 33.33% 75.15%
DD01AR0 A 13 3.880 0.499 6.863 39.39% 0.25%
** Total 33 602.504 2.668 4.981.761 100.00% 100.00%
 
**** Area Lock IWAIT **** Sharing Types:
Area Sharing Elap/Count Max IWAIT Pct Tot Pct Tot A : Area / Non Level Share
Name Type Counts Sc.Mil.Mic StDev Sc.Mil.Mic Counts IW Elp B : 1 IRLM Block Level Share
C : 2 IRLM Block Level Share
BANKC00 C 11 18.813 0.129 22.795 39.29% 15.18%
BANKC01 C 17 68.036 2.828 837.022 60.71% 84.82%
** Total 28 48.699 3.118 837.022 100.00% 100.00%
 
**** CI Lock IWAIT ****
| Average SD/Avg Max-Value| Average SD/Avg Max-Value| Average SD/Avg Max-Value| Average SD/Avg Max-Value
| 3.313 .471 5.498| 2.222 .556 3.386| 4.871.974 .005 4.871.974| 0.257 .005 0.257
| | | |
Range|Count in Areaname=DB23AR0 |Count in Areaname=DB23AR1 |Count in Areaname=DB23AR3 |Count in Areaname=DB23AR4
Sc Mil Mic| Range Share Type=A | Range Share Type=A | Range Share Type=A | Range Share Type=A
To Maximum| 0| | 0| | 1|******************** | 0|
256.000| 0| | 0| | 0| | 0|
128.000| 0| | 0| | 0| | 0|
64.000| 0| | 0| | 0| | 0|
32.000| 0| | 0| | 0| | 0|
16.000| 0| | 0| | 0| | 0|
8.000| 1|************* | 0| | 0| | 0|
4.000| 2|******************** | 2|******************** | 0| | 0|
2.000| 0| | 1|********** | 0| | 0|
1.000| 0| | 1|********** | 0| | 1|********************
| ____________________ | ____________________ | ____________________ | ____________________
|-------| | | | | | |-------| | | | | | |-------| | | | | | |-------| | | | | |
Total=| 3 10 20 30 40 50%| 4 10 20 30 40 50%| 1 10 20 30 40 50%| 1 10 20 30 40 50%
-----------------------------------------------------------------------------------------------------------------------------------
| Average SD/Avg Max-Value| Average SD/Avg Max-Value| Average SD/Avg Max-Value|
| 1.358.286 1.624 4.981.761| 3.880 .503 6.863| 602.504 2.673 4.981.761|
| | | |
Range|Count in Areaname=DB23AR5 |Count in Areaname=DD01AR0 |Count in Areaname=** Total |
Sc Mil Mic| Range Share Type=A | Range Share Type=A | Range Share Type= |
To Maximum| 3|*********** | 0| | 4|***** |
256.000| 0| | 0| | 0| |
128.000| 0| | 0| | 0| |
64.000| 1|**** | 0| | 1|* |
32.000| 0| | 0| | 0| |
16.000| 2|******* | 3|********* | 5|****** |
8.000| 1|**** | 2|****** | 4|***** |
4.000| 2|******* | 3|********* | 9|*********** |
2.000| 1|**** | 5|*************** | 7|******** |
1.000| 1|**** | 0| | 3|**** |
| ____________________ | ____________________ | ____________________ |
|-------| | | | | | |-------| | | | | | |-------| | | | | | |
Total=| 11 10 20 30 40 50%| 13 10 20 30 40 50%| 33 10 20 30 40 50%|
-----------------------------------------------------------------------------------------------------------------------------------
8.4 VSO and SVSO
Virtual Storage Option (VSO) and Shared VSO (SVSO) are high-performance options for DEDBs and the preferred alternative to MSDBs for holding data in memory. They offer the user the best features of DEDBs combined with the response time and parallelism of MSDBs.
8.4.1 Virtual Storage Option (VSO)
VSO data resides in a z/OS data space in virtual storage, so response times for DL/I calls are close to memory access times. The definition, access, and operations on a DEDB using the VSO function are exactly the same as they are for an ordinary DEDB.
In this section, we provide performance tips related to the following items:
Segment level locking
Locking at the segment level is implemented for any VSO DEDB whose characteristics exactly match those of an MSDB:
Root-only hierarchy
Fixed length segment
PCB PROCOPT=G or R
VSO option in DBRC
No compression
If you implement the VSO option for an existing DEDB, consider modifying the DEDB, when possible, so that it matches the segment level locking requirements. However, modifying an existing DEDB so that it meets these requirements might not always be possible.
Lock contention
One of the benefits obtained from a VSO DEDB when compared to a non-VSO DEDB is that a CI lock held for updating can be released earlier because of the differences in the output thread processing:
With a non-VSO DEDB, a CI lock has to be held until the CI is written to DASD. Because Fast Path writes out the CIs asynchronously for performance reasons, the CI lock can be held for a relatively long time.
With VSO areas, the lock is released as soon as the updated CI is copied to the data space, which occurs early in sync point phase 2. This way considerably reduces lock contention.
Table 8-1 summarizes the lock level in effect for every case.
Table 8-1 Locking level for GU and GN DL/I calls on VSO DEDB
PROCOPT   Segment level locking   VIEW=MSDB   Lock level
A         N                       N           EXCLUSIVE
A         N                       Y           READ
A         Y                       N           EXCLUSIVE
A         Y                       Y           EXCLUSIVE
G         N                       N           READ
G         N                       Y           READ
G         Y                       N           READ
G         Y                       Y           READ
GR        N                       N           EXCLUSIVE
GR        N                       Y           READ
GR        Y                       N           EXCLUSIVE
GR        Y                       Y           READ
If you migrate an MSDB to VSO DEDB, be sure to include VIEW=MSDB in the corresponding PCBs, to avoid possible deadlock situations and performance impact.
If you implement the VSO option for an existing non-VSO DEDB, remember that if you code VIEW=MSDB in the PCB, the READ locks are released soon after the GU calls complete. This behavior is quite different from the non-VSO DEDB method, and your application programs might not be prepared for this event or they might experience an integrity exposure.
When VIEW=MSDB PCBs hold locks on VSO DEDBs the way MSDBs do, and locking is at the segment level, the difference between VSO DEDB locking rules and MSDB locking rules is not significant. Therefore, from a lock contention and parallelism point of view, VSO DEDBs are comparable to MSDBs and considerably better than non-VSO DEDBs.
The FLD call works for a VSO DEDB in the same way as for an MSDB. That is, FLD calls are processed at sync point and are sorted by resource.
I/O reduction
Some IMS systems have DEDB areas that experience high I/O rates. Implementing such areas with VSO provides major benefits, because the read I/Os are eliminated and the write I/Os are optimized; updates to a CI from multiple transactions are applied with a single I/O. We strongly suggest that you implement VSO for small and highly volatile DEDBs. If your system has small and highly volatile databases implemented as MSDBs, we suggest that you migrate them to VSO DEDBs also.
VSO output threads
Periodically, all updated CIs are written out from the data spaces to the area data sets during a process that is also referred to as output thread (OTHREAD).
This OTHREAD process for VSO DEDB is more efficient than the non-VSO DEDB process. Locks on CIs are released earlier during sync point processing, and the I/Os are performed in chains of 200 KB. Also, multiple updates to the same CI are written back to DASD only once. See Table 8-2 for a comparison of OTHREAD processing.
At IMS system checkpoint, the OTHREAD process is started too, but it is also asynchronous, so it does not affect the IMS system checkpoint duration. If you migrate your MSDBs to VSO DEDBs and specify that your system no longer uses MSDBs (MSDB parameter set to null in the DFSPBxx member of IMS.PROCLIB), then your IMS system checkpoint elapsed time should be reduced.
Table 8-2 OTHREAD processing for VSO and non-VSO DEDBs
Non-VSO DEDB                                  VSO DEDB
Updated CI in FP buffer                       Updated CI in FP buffer
SYNC                                          SYNC
                                              CI Write to Data Space
                                              Lock Release
Asynchronous Physical Logging (OTHREAD)       Asynchronous Physical Logging
Database Data set Write I/Os                  Asynchronous OTHREAD (Wait for Logging)
Lock Release                                  Database Data set Write I/Os
VSO PRELOAD options
Any DEDB can be moved to virtual storage by specifying the VSO keyword on the DBRC commands: INIT.DBDS or CHANGE.DBDS.
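For example, the following DBRC command (the database and area names are hypothetical) gives an existing area the VSO attribute with preopen and preload; NOPREL, LKASID, and the other attributes discussed in this section are specified in the same way:

CHANGE.DBDS DBD(DEDBDB01) AREA(AREA01) VSO PREOPEN PRELOAD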
VSO areas are mapped linearly to z/OS data spaces. A data space can contain the data of multiple areas, but the data of an area cannot be divided into multiple data spaces. During IMS startup, two 2 GB data spaces are acquired:
One has the disabled reference (DREF data space) option specified, which means that its pages can reside in memory but are never paged out to DASD.
The other data space does not have the DREF option (non-DREF data space) and might experience paging.
An area can be defined to DBRC with either the PRELOAD or the NOPREL attributes:
Any area defined with the PRELOAD option is read into the DREF data space following the IMS initialization system checkpoint. Only the direct part of the area is loaded.
If an area has the NOPREL option, then each direct part CI is allocated a position in the non-DREF data space, but no automatic load takes place. Instead, the first time a CI is requested after the area is opened, IMS reads it from DASD and copies it to the data space.
SDEP CIs are never loaded into the data space.
CIs from PRELOAD areas always remain in central storage. For medium or small areas with a high access rate, where most of the CIs in the area are accessed with similar frequency (there are zero or negligible hot spots), we suggest implementing the PRELOAD option.
When choosing the PRELOAD option, know that PREOPEN is implicitly assumed for this option. This means that VSAM area data sets are allocated and opened immediately when IMS starts or a /START AREA command is issued.
CIs from areas that are not preloaded (NOPREL) can be paged out to page data sets if there is a storage constraint. If you have large VSO areas where only a small part of the area is actually accessed, implement the NOPREL option so that storage is allocated only for the CIs that are needed.
VSO DEDB performance is more likely to degrade with paging than the performance of non-VSO DEDBs doing I/Os to the actual data sets. Be sure to make appropriate use of the VSO option for DEDBs, that is, use it for those databases where the area is not too big and that need high performance.
IMS system checkpoint
When VSO DEDBs databases are used, the content of the data space is written to DASD at IMS system checkpoint time only for those VSO DEDBs that had update processing. To do so, an output thread is started at every system checkpoint. However, this output thread is an asynchronous process, and its elapsed time does not affect the IMS system checkpoint duration.
A considerable reduction in the IMS system checkpoint elapsed time (tens of seconds, depending on how large the MSDBs are) can be achieved by migrating MSDBs to VSO DEDBs and specifying the MSDB parameter, in the DFSPBxx member of IMS.PROCLIB, as null.
During IMS system checkpoint, transaction processing almost stops, which has several negative consequences. If this is the case for your system, we suggest that you migrate from MSDBs to VSO DEDBs.
8.4.2 Shared Virtual Storage Option (SVSO)
Block-level data sharing of VSO DEDB areas allows multiple IMS subsystems to concurrently read and update VSO DEDB data. VSO DEDBs that support block-level data sharing are commonly known as Shared VSO (SVSO) DEDBs.
The following main elements participate in the SVSO implementation:
Cache structures in the coupling facility
Store-in cache structures are used to contain copies of the control intervals accessible to each IMS in the data sharing group.
Local cache buffers
Each IMS has a set of buffers that contains a local copy of the control intervals.
Permanent storage
DEDB area data sets are needed to contain a non-volatile copy of the data in DASD.
When the cache structures contain the required data, IMS reads the data from the CF. When they do not, IMS reads the data from DASD and puts it into the CF. When updating a shared VSO DEDB, IMS writes the data to the CF, and the changed CIs are written to DASD periodically.
Local buffer pool definitions
The private local buffer pools can be defined by means of the DEDB statement of member DFSVSMxx of IMS.PROCLIB. Multiple DEDB statements can be used to define various buffer pools, each one with different attributes. The following attributes can be specified in the DEDB statement (a sample statement follows the list):
Name of the buffer pool
Number of buffers for the primary and secondary allocations
Maximum number of buffers that can be allocated
Buffer pool buffer size
Whether to assign the buffer pool to a certain area
LKASID option
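The following sketch shows what such a statement might look like; the pool name is hypothetical, and the exact positional layout of the operands should be verified against the DFSVSMxx documentation for your IMS level:

DEDB=(POOL4K,4096,256,64,1024,LKASID)

This example requests a pool of 4 KB buffers with a primary allocation of 256 buffers, a secondary allocation of 64 buffers, a maximum of 1,024 buffers, and the LKASID option.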
An area can use a certain buffer pool if the CI size for the area matches the buffer pool buffer size, it has the same LKASID option specified in the RECON, and the buffer pool is not assigned to another specific area.
Two or more areas with the same characteristics can also share a buffer pool.
When starting an area, if no buffer pool is defined or no available buffer pool matches the CI size for the area, IMS creates a default buffer pool for it. Default buffer pools can also be shared among different areas.
Consider the following suggestions when you define local buffer pools:
Do not allow the default buffer pools to be created. Include at least one DEDB statement for each different CI size in the DFSVSMxx member of IMS.PROCLIB.
Specify a number of buffers large enough to hold those needed for the maximum number of concurrent requests. Specify them as primary allocation buffers, so that they are page-fixed. Then, specify a secondary allocation to cover unexpected high load situations, and a maximum number of buffers large enough so that the maximum is never reached (if a buffer is not available, IMS waits for a buffer to become free). Paging of primary or secondary buffers does not occur. When secondary buffers are allocated, they are page-fixed.
If your applications use PCB with PROCOPT=GO to access SVSO areas, Fast Path might steal local buffers. That means that extra buffers must be defined for the local buffer pool.
Also, if the buffer pool is not large enough, the LKASID option can be ineffective. So, it is important for the primary allocation to be large enough.
A buffer containing committed changes that was not yet written to the coupling facility (CF) cannot be reused until an output thread writes the buffer to the CF and completes. Therefore, a lack of OTHREADs can result in more buffers being needed for the local buffer pools. Set the OTHR parameter to the maximum value (32,767) if there is any doubt about how to fine-tune the number of output threads, because they are cheap to define.
If the application’s access rate pattern for different SVSO areas is similar, you might allow two or more areas to share the same buffer pool if they have the same CI size and LKASID options, which is better from an administration point of view. However, if the two areas both have high access rates, you should give the high-access-rate areas dedicated buffer pools and allow only the low-access-rate areas to share pools.
If one of the areas sharing a pool is accessed much more than the others, then it consumes most of the buffers, and the performance of the accesses to the other areas can be degraded. If the access profiles are not similar for the different areas in your system, we suggest that you assign dedicated buffer pools for each area.
Normally, we prefer the LKASID option unless the area is highly updated from different IMS instances in the data sharing group.
For those areas where the read/write ratio is high, we prefer assigning to them a dedicated buffer pool and enabling the LKASID option, so that the major part of the reads are resolved within the local IMS (without accessing the CF).
We also suggest that you monitor the usage of these buffer pools and adjust the allocation values and the LKASID option appropriately.
LKASID option
IMS maintains the buffers in the local buffer pool that are chained off three or four queues:
Available queue
These buffers are available for use. It does not matter what they contain, because they are considered as empty buffers.
Requestor queue
These buffers are in use by an application program. These buffers count toward the NBA/OBA limit.
Output Thread queue
These buffers have committed updates that are waiting for an OTHREAD to be written to the CF.
LKASID queue (optional)
This queue applies only for pools with the LKASID option, which we discuss in this section.
When the LKASID option is the choice, the local buffers that are used by the application are not returned to the available queue after application sync point. Instead, they are chained off the lookaside queue.
We recommend that you select the LKASID option. It provides better performance, because CF accesses are saved, especially if the read/write ratio is high and most of the requests for CI are resolved locally without accessing the CF.
To take advantage of the LKASID option, the buffer pool must be large enough (primary plus secondary) to hold all of the current requests for buffers from executing transactions (requestor queue), for scheduled but incomplete output threads (output thread queue), and with enough additional buffers to provide a sufficient number of buffers that remain on the lookaside queue.
How efficiently the LKASID option performs depends on the ratio of valid hits to the number of searches. If this ratio is low, searching the lookaside queue might not be worth doing, and you might consider choosing NOLKASID. This ratio can be obtained by means of the IMS Performance Analyzer VSO Statistics report, the /DIS POOL FPDB command, or the DBFULTA0 utility.
The LKASID option can be inefficient for three reasons:
A shortage in the buffer pool
The application database access profile
A NOPREL area combined with a CF structure size shortage
Be aware that the Valid ratio shown in the output for the /DIS POOL FPDB command has a slightly different meaning from the Hits Valid ratio shown in the IMS Performance Analyzer VSO Statistics report. For the command, Valid means the percentage of times a buffer found in the pool had valid data. Therefore, you must use the displayed values for Hits and Valid together to obtain the ratio of efficiency for the LKASID option. For example, if Hits is 40%, and Valid is 75%, a buffer was found in the pool 40% of the time. Of that 40%, 75% of the buffers found had valid data, that is, 30% of the requests found buffers on the LKASID queue with valid data. So, IMS had to read data from CF approximately 70% of the time. The Hits Valid ratio shown in IMS Performance Analyzer VSO Statistics report is 30%.
DBFULTA0 gives the number of cross-invalidated hit buffers instead, which is the percentage of invalid buffers from those that were found in the lookaside queue.
Monitoring the local buffer pools is necessary to set the local buffer pool allocations and the LKASID option appropriately.
Coupling facility structure definitions
Each VSO DEDB cache structure in the shared storage of a coupling facility represents one or more VSO DEDB areas. A VSO DEDB cache structure can be a single-area structure or a multi-area structure. A single-area structure can contain data from only one DEDB area. A multi-area cache structure can hold data from multiple areas. Both single-area and multi-area cache structures conform to the characteristics of the areas for which they are created. Both types of cache structures are also non-persistent: they are deleted after you close the last area connected to them.
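Defining the cache structure itself is a z/OS CFRM policy task. The following fragment is a minimal sketch with hypothetical structure, size, and coupling facility names; the actual sizes should come from CFSizer, which is described next:

STRUCTURE NAME(DEDBVSO01)
          INITSIZE(51200)
          SIZE(102400)
          PREFLIST(CF01,CF02)

The structure name specified here must match the structure name that is registered to DBRC for the area (for example, with the CFSTR1 keyword of INIT.DBDS or CHANGE.DBDS).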
To determine the structure size, you can use the z/OS coupling facility structure sizer tool (CFSizer). CFSizer is a web-based application that calculates the structure size based on the input data that you provide. To use the CFSizer tool, go to the following web page:
PRELOAD or NOPREL option
The PRELOAD option causes the whole area to be read from DASD and written to the CF as soon as IMS restarts or a /STA AREA command is issued. When this is the choice, the CF structure size must be large enough to contain all the CIs of the direct portion of the area; otherwise, the area cannot be started.
As a general rule, we prefer to use NOPREL and specify a CF structure size large enough to contain the CIs of the area that are actually active. If there are hot spots in the database, then as soon as the applications start accessing those CIs, they are placed in the CF and remain there for the remainder of the IMS session (or until the area is closed or a /VUNLOAD command is issued). No unnecessary space is wasted in the CF with NOPREL and an appropriate CF structure size. The first reference to each CI might have a worse response time because DASD accesses are also needed (CF structures are just a cache).
PRELOAD is used only if almost all the CIs in the area contain data, they are all referenced with similar frequency, and it is required to have good performance from the start.
Also, you might consider using PRELOAD for those areas with medium load but where extremely high performance is a requirement and all the CIs are randomly accessed. In this case, PRELOAD maintains all the CIs in the CF.
The preloading process differs slightly between a multi-area structure (MAS) and a single-area structure (SAS). MAS is an option introduced in IMS V9 that enables multiple DEDB areas that have the same CI size to share a single coupling facility structure. With MAS, IMS does not check the cache structure size when preloading SVSO areas. With SAS, IMS checks the cache structure size when the area is opened for preloading. Although this check occurs, IMS sometimes is unable to load all of the CI images because of an inaccurate calculation by the checking macro.
Block level locking and root-only DEDBs
When the following conditions are fulfilled, local VSO DEDB areas use segment level locking, which makes them equivalent to MSDBs:
ROOT segment only
Fixed length segment
PROCOPT=G or R
This behavior means that two segments can be accessed simultaneously from separate dependent regions without incurring lock contention even if they were in the same CI and the access intent was exclusive.
With SVSO areas, the locking unit is the CI. If, after moving from local VSO to SVSO, you experience an increase in lock contention, try to reduce contention problems as much as possible in the following ways:
Reduce the CI size
Different CI sizes can be specified for each area and local buffer pool. Try to adjust the CI size to the segment size where possible, so that only one segment fits in a CI.
Expand the area
You can also increase the size of the area to have more available RAP CIs. This way causes segments to be more sparsely distributed in the database and reduces the chances of having two or more segments in the same CI. If you do this step, be aware that, if PRELOAD option is the choice, the CF structure size must be appropriately increased. Increasing the area size can result in many empty CIs, therefore, consider whether PRELOAD is the best option in this case.
We also suggest that you adjust the PROCOPT values of the database PCBs to the actual type of access that the programs perform. If you specify stronger PROCOPT values than needed, you can experience lock-contention problems. The only time locking differs between VSO and SVSO is when VSO is eligible for segment level locking. If you have lock contention with VSO, you will have the same contention with SVSO. The difference is that with SVSO, the IRLM is used to resolve the contention, and using the IRLM to resolve contention is not appreciably less efficient than using program isolation (PI) to resolve it when data sharing is not used at all.
If you implement the VSO option for an already existing SHARELEVEL(3) DEDB, no particular increase in lock contention is expected to occur.
8.4.3 Monitoring VSO and SVSO performance
The VSO Activity Summary section of IMS Performance Analyzer for z/OS: VSO Statistics report provides information that can be used to monitor the performance of the VSO areas. Example 8-10 shows this report. You can determine how well VSO is performing by comparing the number of requests that are performed against the data space to the number of requests that need DASD access.
Example 8-10 IMS Performance Analyzer for z/OS: VSO Statistics (VSO Activity Summary)
IMS Performance Analyzer Page 1
VSO Activity Summary: SHARELVL 0/1 - IMSA
From 14Apr2012 8.00.00.01 To 14Apr2012 9.01.02.01 Elapsed= 1 Hrs 1 Mins 0.425.492 Secs
Database Area --IMS from/to VSO DS-- -------VSO DS from/to DASD------ I/O I/O Elapsed
Name Name Gets Puts Gets Puts Castouts Scheduled HH:MM:SS:TH
________ ________ __________ __________ __________ __________ ________ __________ ___________
STOCK STOCKA1 4901 8498 547 387 1 12 1.43.11
STOCKA2 6462 6743 491 256 1 15 2.13.14
STOCK *Total* 11363 15241 1038 643 2 27 3.56.25
. . .
**System Totals** 344352 492812 67574 38564 78 387 1.12.35.65
 
VSO Activity Summary: SHARELVL 2/3 - IMSA
From 14Apr2012 8.00.00.01 To 14Apr2012 9.01.02.01 Elapsed= 1 Hrs 1 Mins 0.425.492 Secs
Database Area --- IMS from/to CF --- -------VSO CF from/to DASD------ ----------- Lookaside-Pool Buffer ------------
Name Name Gets Puts Gets Puts Castouts Searches Hits Pct Hit Valid Pct
________ ________ __________ __________ __________ __________ ________ __________ __________ _____ __________ _____
ORDERS ORDERA1 4901 8498 547 387 1 134875 75621 53.4 72071 49.9
ORDERA2 6462 6743 491 256 1 144470 79621 52.7 73957 49.9
ORDERS *Total* 11363 15241 1038 643 2 279345 155242 53.1 146028
. . .
**System Totals** 344352 492812 67574 38564 78 744470 365432 54.9 345428 49.9
You can also monitor the areas that are actually loaded into data spaces and the amount of storage they use. The AREASIZE column shows the number of 4K pages reserved for a certain area in the data space. This amount is the maximum space needed to allocate the whole area (all its CIs) and depends on the DBD. If the area is not preloaded, the actual amount of storage allocated for the area can be considerably smaller.
Example 8-11 shows a /DIS FPV command.
Example 8-11 /DIS FPV command
IM1BDIS FPV
DFS4444I DISPLAY FROM ID=IM1B
DATASPACE MAXSIZE(4K) AREANAME AREASIZE(4K) OPTION
001 524238 DREF
NO AREAS LOADED INTO DREF DATASPACE 001.
+ AREANAME STRUCTURE ENTRIES CHANGED AREA CI? POOLNAME OPTIONS
+ AREAWH01 IM0B_AREAWH01A 0000075 0000020 00000075 WAREPOOL PREO, PREL
+ AREAWH01 IM0B_AREAWH01B 0000075 0000020 00000075 WAREPOOL PREO, PREL
+ AREAIT01 IM0B_AREAIT01A 0002448 0000000 00002520 ITEMPOOL PREO, PREL
+ AREAIT01 IM0B_AREAIT01B 0002448 0000000 00002520 ITEMPOOL PREO, PREL
+ AREADI01 IM0B_AREADI01A 0000440 0000185 00000440 DISTPOOL PREO, PREL
+ AREADI01 IM0B_AREADI01B 0000440 0000185 00000440 DISTPOOL PREO, PREL
*2006271/171202*
8.5 Fast Path 64-bit buffer manager
The Fast Path 64-bit buffer manager autonomically controls the number and size of Fast Path buffer pools, including buffer pools for DEDBs, MSDBs, and system services. In IMS 12, it also places the DEDB buffer pools above the bar in 64-bit common storage; in previous versions, these pools were placed in ECSA, so this change provides ECSA constraint relief.
8.5.1 Fast Path 64-bit buffer manager overview
If you are using the Fast Path 64-bit buffer manager, IMS creates and manages the Fast Path buffer pools for you and places DEDB buffers in 64-bit storage, that is, in virtual storage above the 2 GB bar. The 64-bit buffer manager creates multiple subpools with different buffer sizes. It creates an initial allocation of subpools based on the number of areas of each CI size, and automatically creates more buffers in a subpool as they are needed. This approach is possible only when z/Architecture is enabled.
When the Fast Path 64-bit buffer manager is enabled, you do not need to design DEDB buffer pools. The parameters that define the sizes of the buffer pools, DBBF, DBFX, and BSIZ, are ignored when you use the 64-bit buffer manager.
With IMS 12, the user can specify the initial amount of 64-bit storage used for the buffer pool. Buffer pools are pre-expanded, that is, expanded in anticipation of future needs, and compressed when the use of a subpool drops. IMS 12 also moved some buffers that were still in ECSA to 64-bit storage: the buffers that are used for SDEP inserts and FLD calls.
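The 64-bit buffer manager is enabled in the FASTPATH section of the DFSDFxxx member of IMS.PROCLIB. The following fragment is a minimal sketch that shows only the enablement keyword; the optional keywords that control the initial and maximum amounts of 64-bit storage are omitted here because their names and units should be checked in the IMS documentation for your release:

<SECTION=FASTPATH>
FPBP64=Y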
When the 64-bit buffer manager is used, the use of OBA buffers is not serialized: multiple dependent regions or threads can use their OBA allocations at the same time, which eliminates a potential bottleneck in buffer use. In practice, each region or thread can use a number of buffers equal to its NBA plus OBA specification.
8.5.2 Monitoring of Fast Path 64-bit buffer usage
You can capture usage statistics for the Fast Path 64-bit buffers by issuing the following command:
UPDATE IMS SET(LCLPARM(FPBP64STAT(Y)))
IMS captures the usage statistics for each unit of work in a dependent region, and writes the statistics to the online log data set as X'5945' log records, which are mapped by the DBFL5945 and DBFBPND6 macros. These log records are read and processed by the Fast Path Log Analysis utility (DBFULTA0).
The type-2 QUERY POOL command is enhanced to show information about the 64-bit buffer manager. Example 8-12 and Example 8-13 show sample responses. The SHOW(STATISTICS) option is available in IMS 12 or later.
Example 8-12 Response of QUERY POOL TYPE(FPBP64) SHOW(ALL) command
Response for: QUERY POOL TYPE(FPBP64) SHOW(ALL)
Subpool MbrName CC Size Type Status T_id Tot_Buf Buf_T Buf_Use Buf_U Buf_Avl Buf_A %Use %Ext Qui_Buf Buf_Q EPVT_Tot EPVT_T
-------- -------- ---- ------ ---- ------ ---- ------- ----- ------- ----- ------- ----- ---- ---- ------- ----- -------- ------
DBF_MAXB I12A
DBF_TOTB I12A G 88 0 88 0 1K
DBFC0001 I12A 512 Tot 10 32 0 32 0 156
DBFC0001 I12A 0 Base 15 32 0 32 0 100 0 156
DBFC0002 I12A 1024 Tot 10 16 0 16 0 156
DBFC0002 I12A 0 Base 15 16 0 16 0 100 0 156
DBFS0001 I12A 512 Tot 10 32 0 32 0 156
DBFS0001 I12A 0 Base 15 32 0 32 0 50 0 156
DBFS0002 I12A 1024 Tot 10 8 0 8 0 156
DBFS0002 I12A 0 Base 15 8 0 8 0 100 0 156
Subpool MbrName CC Size ECSA_Tot ECSA_Buf ECSA_B ECSA_Oth ECSA_O 64b_Tot 64b_Buf TimeCreate
-------- -------- ---- ------ -------- -------- ------ -------- ------ ------- ------- --------------------
DBF_MAXB I12A 100M
DBF_TOTB I12A 71K 24K 42K 32K
DBFC0001 I12A 512 15K 16K
DBFC0001 I12A 0 15K 16K 2012.249 06:24:47.89
DBFC0002 I12A 1024 8K 16K
DBFC0002 I12A 0 8K 16K 2012.249 06:24:47.89
DBFS0001 I12A 512 31K 15K
DBFS0001 I12A 0 31K 15K 2012.249 06:24:47.89
DBFS0002 I12A 1024 12K 4K
DBFS0002 I12A 0 12K 4K 2012.249 06:24:47.89
Example 8-13 Response of QUERY POOL TYPE(FPBP64) SHOW(STATISTICS) command
Response for: QUERY POOL TYPE(FPBP64) SHOW(STATISTICS)
Buf_Size MbrName CC SPT Tot_Buf Buf_Use Buf_Avl %Use HWM EPVT_Tot ECSA_Tot 64b_Tot
-------- -------- ---- --- ------- ------- ------- ---- ----- -------- -------- -------
Total 88 0 88 0 2 1K 71K 32K
512 I12A 0 C 32 0 32 0 1 156 15K 16K
1024 I12A 0 C 16 0 16 0 0 157 8K 16K
512 I12A 0 S 32 0 32 0 1 156 31K 0
1024 I12A 0 S 8 0 8 0 0 156 12K 0
Note that the 64-bit buffer usage is not reported by type-1 commands, such as the /DIS POOL FPDB command shown in Example 8-14.
Example 8-14 /DIS POOL FPDB command response
/DIS POOL FPDB
DFS4445I CMD FROM MCS/E-MCS CONSOLE USERID=IMS008: DIS POOL FPDB I12A
DFS4444I DISPLAY FROM ID=I12A 478
FPDB BUFFER POOL:
** NO DEDB PRIVATE POOLS DEFINED **
*12249/064657*
The Fast Path 64-bit Buffer Manager Statistics section of the IMS Performance Analyzer for z/OS: Internal Resource Usage report shows the detailed usage. See Example 8-15.
Example 8-15 IMS Performance Analyzer for z/OS: Internal Resource Usage report, Fast Path 64-bit Buffer Statistic
Start 05Sep2012 06:24:48:49 IMS Performance Analyzer End 05Sep2012 07:12:57:76 Page 53
Internal Resource Usage - I10A
______________________________
Fast Path 64-bit Buffer Manager Statistics Interval : 48.09 (HHHH.MM.SS)
General Information
Available Used Unknown Total
______________ ______________ ______________ ______________
Common subpool buffers 48 0 0 48
System subpool buffers 40 0 0 40
Total subpool buffers 88 0 0 88
Total ECSA used for buffers 24,576
Total ECSA used for DMHR 40,832
Total ECSA used for other control 2,368
Total ECSA used 72,916
Total 64-bit storage used 32,768
Total EPVT used for buffers 0
Total EPVT used for DMHR 0
Total EPVT used for other control 624
Total EPVT used 1,044
System Pools
Subpool name DBFS0001 DBFS0002 - - -
Buffer size 512 1,024 0 0 0
Number of times waited for buffer 0 0 0 0 0
Number of buffers 32 8 0 0 0
Number of buffers available 32 8 0 0 0
Number of buffers in use 0 0 0 0 0
Number of buffers in unknown status 0 0 0 0 0
Maximum number of buffers used 1 0 0 0 0
Buffer storage in base section 16,384 8,192 0 0 0
DMHR storage in base section 14,848 3,712 0 0 0
Buffer storage in extents 0 0 0 0 0
DMHR storage in extents 0 0 0 0 0
Total ECSA used 31,824 12,496 0 0 0
Total EPVT used 156 156 0 0 0
Total 64-bit used 0 0 0 0 0
DBFULTA0 was also enhanced to report the usage of the Fast Path 64-bit buffers. When you specify the keyword FPBP64 in the control statement, you obtain the information shown in Example 8-16.
Example 8-16 Fast Path Log Analysis utility (DBFULTA0): DETAIL LISTING OF EXCEPTION TRANSACTIONS
DETAIL LISTING OF EXCEPTION TRANSACTIONS: PAGE 4
SEQ TRANCODE SYNC POINT S ROUTING LOGICAL PST QUEUE TRANSIT TIMES(MSEC)- -OUT- DEDB ..ADS.. ..VSO.. MSDB BUF CONTENTIONS R P
NO. OR PSB TIME F CODE TERMINAL ID COUNT IN-Q PROC OUTQ TOTAL (SEC) CALL RD UPD RD UPD CALL USE CI UW OB BW T T
_______ ________ ___________ _ ________ ________ ___ _____ ____ ____ ____ _____ _____ ____ ___ ___ ___ ___ ____ ___ __ __ __ __ _ _
1 DFSIVP4 6:29:40.33 L 1 0 0 0 NO DEQ 0 0 0 0 0 0 0 0 0 0 0
FPBP64 GET COMMON BUFF: 512= 0 1K= 0 2K= 0 4K= 0 8K= 0 12K= 0 16K= 0 20K= 0 24K= 0 28K= 0
FPBP64 WRT COMMON BUFF: 512= 0 1K= 0 2K= 0 4K= 0 8K= 0 12K= 0 16K= 0 20K= 0 24K= 0 28K= 0
FPBP64 GET SYSTEM BUFF: 512= 0 1K= 0 2K= 0 4K= 0 8K= 0 12K= 0 16K= 0 20K= 0 24K= 0 28K= 0
FPBP64 WRT SYSTEM BUFF: 512= 0 1K= 0 2K= 0 4K= 0 8K= 0 12K= 0 16K= 0 20K= 0 24K= 0 28K= 0
FPBP64 WAIT FOR BUFFER: 512= 0 1K= 0 2K= 0 4K= 0 8K= 0 12K= 0 16K= 0 20K= 0 24K= 0 28K= 0
2 DFSIVP5 6:29:46.70 L 2 0 0 0 NO DEQ 0 0 0 0 0 0 0 0 0 0 0
FPBP64 GET COMMON BUFF: 512= 0 1K= 0 2K= 0 4K= 0 8K= 0 12K= 0 16K= 0 20K= 0 24K= 0 28K= 0
FPBP64 WRT COMMON BUFF: 512= 0 1K= 0 2K= 0 4K= 0 8K= 0 12K= 0 16K= 0 20K= 0 24K= 0 28K= 0
FPBP64 GET SYSTEM BUFF: 512= 0 1K= 0 2K= 0 4K= 0 8K= 0 12K= 0 16K= 0 20K= 0 24K= 0 28K= 0
FPBP64 WRT SYSTEM BUFF: 512= 0 1K= 0 2K= 0 4K= 0 8K= 0 12K= 0 16K= 0 20K= 0 24K= 0 28K= 0
FPBP64 WAIT FOR BUFFER: 512= 0 1K= 0 2K= 0 4K= 0 8K= 0 12K= 0 16K= 0 20K= 0 24K= 0 28K= 0
8.6 Fast Path secondary index (FPSI) and performance comparison with HDAM
Secondary indexing provides a way to meet the processing requirements of various applications. With secondary indexing, you can have an index based on any field in the database, not only the key field in the root segment.
IMS 12 supports secondary indexing on a primary DEDB to process a segment type in a sequence other than the one that is defined by the segment’s key.
The secondary index databases must be root-only VSAM-based HISAM or SHISAM databases. HISAM databases must be used if you are confronted with duplicate secondary index values. A SHISAM database is composed of one KSDS data set; a HISAM database can have overflow in an ESDS data set.
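Defining an FPSI is broadly similar to defining a secondary index for a full-function database: LCHILD and XDFLD statements in the primary DEDB DBD describe the index relationship, and a PCB can request processing in secondary index sequence with the PROCSEQD parameter. The following fragments are only a sketch with hypothetical names; they omit several required statements and operands:

Primary DEDB DBD (fragment):
  LCHILD NAME=(SINDXSEG,SINDXDB)
  XDFLD  NAME=XACCTNM,SRCH=ACCTNAME

Application PSB (fragment):
  PCB    TYPE=DB,DBDNAME=DEDBDB01,PROCOPT=G,PROCSEQD=SINDXDB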
At IBM Silicon Valley Laboratory, a comparison test was run between HDAM with secondary index and DEDB with secondary index. The environment, scenarios, and results are reported in this section.
Test environment
The test environment configuration used the following hardware and software:
Hardware
 – z196 processor, 2 CPs for 1-IMS, 3 CPs for TPNS
 – DASD DS8700
Software
 – z/OS V1R12
 – IRLM V2.3
 – TPNS V3R5
Test scenarios
We used three scenarios:
Scenario 1: Determine the performance of a DEDB with two secondary indexes compared to an HDAM/VSAM database with two secondary indexes.
We use one DEDB area with two secondary indexes on the root segment. The application updates DEDB roots randomly; all transactions issue GHU/GHN/REPL calls, and 10% also issue ISRT/DLET calls, which cause secondary index maintenance.
Scenario 2: Determine the performance improvement of accessing a DEDB in secondary index sequence.
We use a BMP to access the DEDB in secondary index sequence (PROCSEQD), issuing a GU call to identify one particular key, and then GHN/REPL calls for the next 10 records sequentially.
Scenario 3: Determine the performance of running a DEDB that has two and four secondary indexes defined but does not use them, compared to the base performance of the same DEDB without secondary indexes.
We use a DEDB defined with two and with four secondary indexes. The applications update DEDB roots randomly with GHU/GHN/REPL DL/I calls, which do not cause secondary index maintenance.
Results
Table 8-3 shows results of scenario 1.
Table 8-3 Scenario 1: HDAM/VSAM with two secondary indexes versus one area DEDB with two secondary indexes
Metric                         HDAM VSAM with two        DEDB with two             Difference
                               secondary indexes         secondary indexes
ETR                            658                       642                       -16 (-2%)
CPU utilization                17%                       14%                       -3% (-18%)
ITR                            3871                      4586                      +715 (+18%)
Processing time (in seconds)   0.008                     0.013                     +0.005 (+63%)
I/O activity (per second)
  Main DB                      3800                      2055                      -1,745 (-46%)
  Secondary index              1578                      1496                      -82 (-5%)
DLI statistics (per second)
  GHU                          29.74%                    29.86%
  GHN                          10.79%                    10.41%
  ISRT                         6.31%                     6.49%
  DLET                         6.32%                     6.48%
  REPL                         21.58%                    20.82%
Table 8-4 shows results of scenario 2.
Table 8-4 Scenario 2: Access DEDB with secondary index sequence versus no secondary index sequence
Metric                         DEDB without              DEDB with secondary index
                               secondary index           (using PROCSEQD)
Elapsed time                   0:00:41                   0:00:01
CPU                            5.44%                     2.22%
I/O activity (per second)
  DEDB                         2,106.53                  1.978
  Secondary index              0                         2.182
DLI statistics (per second)
  DB GU                        0                         1
  DB GHU                       1                         0
  DB GHN                       534,990                   10
  DB REPL                      10                        10
  Total DLI DB Calls           535,001                   21
Table 8-5 shows results of scenario 3.
Table 8-5 Scenario 3: Access DEDB with 0, 2, and 4 secondary indexes
Metric                         DEDB with                 DEDB with                 DEDB with
                               0 secondary indexes       2 secondary indexes       4 secondary indexes
ETR                            278.54                    278.93                    280.86
CPU                            12%                       13%                       13%
ITR                            2321.17                   2145.62                   2160.46
DLI statistics (per second)
  GHU                          2.33%                     2.33%                     2.33%
  GHN                          46.51%                    46.51%                    46.51%
  REPL                         48.84%                    48.84%                    48.84%
8.7 Fast Path log reduction
You might want to use asynchronous changed data capture but not want to write log records for the before-images. Prior to IMS 12, asynchronous changed data capture wrote both before-image and after-image log records (X’99’). In IMS 12, you can specify that the before-image log records not be written. To do so, use the following values on the EXIT= parameter of the DBD or SEGM macro of DBDGEN:
DLET: Writes the before-image log record for DLET calls. This value is the default and is also the action taken by previous versions of IMS.
NODLET: Does not write the before-image log record for DLET calls.
BEFORE: Writes the before-image log record for REPL calls. This value is the default and is also the action taken by previous versions of IMS.
NOBEFORE: Does not write the before-image log record for REPL calls.
The benefit of this enhancement is reduced logging volume for asynchronous data capture users who want only after-image log records.
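For example, the following SEGM fragment (with a hypothetical segment name and operands, and with a logging-only data capture specification assumed) suppresses both kinds of before-image records for the segment:

SEGM NAME=ROOTSEG,BYTES=200,PARENT=0,EXIT=(*,LOG,NODLET,NOBEFORE)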