Performance concepts and monitoring methodology
Performance problems are erratic in nature and generally occur when you least expect them. This situation applies to any kind of system; in this book, we discuss the performance of IMS systems. In an environment with both internal and external variables, how do we ensure that we apply good practices and remain proactive? A number of techniques are available, but the most important is the ability to monitor, profile, track, and trend transactions.
The purpose of this chapter is to highlight areas where performance problems can occur and give you a reference point in identifying and resolving performance problems.
We discuss the lifecycle of an IMS transaction by describing the flow of a message from its inception at a terminal through to its processing by an application, and finally its reply back to the terminal. We provide a brief overview of the performance challenges and then describe IMS full-function, IMS Fast Path, and DBCTL message flows and open transaction interactions with the IMS system. The message flow depicts the life of a transaction through IMS and highlights areas where performance problems can occur. We then describe monitoring methodology and procedures.
This chapter contains the following topics:
2.1, “Performance challenges”
2.2, “Events for full-function messages”
2.3, “Events for Fast Path messages”
2.4, “Events for DBCTL”
2.5, “Common log records produced for transaction flows”
2.6, “Logging in a single IMS system”
2.7, “Open transaction flow”
2.8, “Monitoring methodology”
2.9, “Transaction flow in DB/DC and DCCTL environments”
2.1 Performance challenges
The performance of an IMS system is directly related to a number of internal variables. These variables can be found in the z/OS operating system, in IMS Transaction Manager (TM), in IMS database manager (DM), in the application, or in the hardware. External variables include the network and the physical infrastructure of your network. These external variables are mostly out of our control, although they are integral in ensuring respectable response times. An understanding of your network architecture is paramount in diagnosing network-related response-time issues.
IMS is an event-driven system, and events are represented internally by event control blocks (ECBs). Any single event, or a combination of events, can become a performance bottleneck. We provide a generic method for monitoring and viewing the IMS system to identify problems as events occur.
Several typical IMS performance and availability challenges are as follows:
Poor IMS response time, transaction queuing, and processing bottlenecks
 – Queued IMS transactions
 – IMS scheduling delays
 – IMS application performance or system performance bottlenecks
IMS connection bottlenecks
 – CICS/DBCTL connection bottlenecks
 – Network delays
 – Delays related to IMS Connect, Open Transaction Manager Access (OTMA), Open Database Manager (ODBM), Advanced Program-to-Program Communication (APPC), and external driving systems
IMS database and subsystem delays
 – High I/O
 – Poor buffer pool performance
 – IMS lock and latch conflicts
Delays in external subsystems, such as DB2, which can elongate IMS application time
 – DB2 thread connection issues
 – DB2 SQL processing delays
 – DB2 I/O and buffer pool performance issues
 – DB2 lock conflicts
2.2 Events for full-function messages
In this section, we step through the events in an IMS full-function transaction flow and describe how to identify possible performance problems at each step. Table 2-1 on page 19 shows the list of events for full-function messages. These steps are described in more detail after the table.
Table 2-1 Event list for full-function messages

Step 1: Message arrives in IMS
  Activity: Message Format Service (MFS) routines, basic edit or intersystem communication (ISC) edit
  Problem identification: RECA, high I/O pool (HIOP), CIOP, and MFBP pool shortages evident

Step 2: Message queuing
  Activity: Message queued to scheduler message block (SMB)
  Problem identification: QBUF shortage evident

Step 3: Message scheduling
  Activity: Schedule message in dependent region
  Problem identification: Transaction queuing or pool intent failures

Step 4: Scheduling-end to first DL/I call
  Activity: Load programs, subroutines, and initialize working storage
  Problem identification: Appears as high CPU and elapsed time

Step 5: Program elapsed time
  Activity: Application program invoked and DL/I calls performed
  Problem identification: High elapsed times, as a result of application, database, WLM, I/O subsystem, or system issues

Step 6: Sync point
  Activity: Phase one and two commit
  Problem identification: High elapsed times as a result of WADS, OLDS, or I/O subsystem delays

Step 7: Message output
  Activity: Send output to destination
  Problem identification: HIOP shortage, or network delays before message is dequeued
The steps are as follows:
1. Message arrives in IMS
The message arrival event begins when the message is placed by IBM VTAM® into an IMS receive-any buffer or transported through IMS Connect to IMS Open Transaction Manager Access (OTMA). Either way, the message is moved to a buffer acquired in the high I/O pool (HIOP), where it is edited by Message Format Service (MFS) routines, IMS basic edit, or intersystem communication (ISC) edit.
2. Message queuing
Message queuing events require the message to be in the standard IMS format:
llzz|trancode|data
The message is allocated a position on one of the message queue data sets, moved to the message queue pool, and enqueued on the scheduler message block (SMB).
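The standard message format can be modeled in a few lines of code. The following Python sketch is illustrative only: real IMS messages are EBCDIC and carry additional prefix data, and the field and function names here are assumptions. It parses a segment with a two-byte ll length field (which includes the four-byte llzz prefix) and a two-byte zz flag field, then takes the transaction code as the first blank-delimited token of the data:

```python
import struct

def parse_segment(raw: bytes):
    """Parse an llzz|trancode|data segment (illustrative model only).

    ll: 2-byte total segment length, including the 4-byte llzz prefix.
    zz: 2-byte flag field.
    The transaction code is taken as the first blank-delimited token.
    """
    ll, zz = struct.unpack(">HH", raw[:4])   # two big-endian halfwords
    data = raw[4:ll]                         # ll covers the llzz prefix itself
    trancode, _, rest = data.partition(b" ")
    return {"ll": ll, "zz": zz, "trancode": trancode, "data": rest}

# Build a sample segment: 4-byte llzz prefix followed by the payload
payload = b"PART1 SHOW PN=02JAN2F"
raw = struct.pack(">HH", 4 + len(payload), 0) + payload
seg = parse_segment(raw)
```

A real edit routine would also translate code pages and honor the zz flag bits; the sketch only shows the prefix arithmetic.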
3. Message scheduling
Message scheduling events are dependent on a number of variables. The transaction definition is the starting point for any scheduling issues. The following variables affect transaction scheduling:
 – SCHDTYP on APPLCTN macro
 – Serial or parallel
 – PROCLIM
 – PARLIM
 – PRIORITY
 – MAXRGN
 – CLASS of transaction
 – SCHD
The next step is to identify any pool intent failures. IMS requires the program specification block (PSB) directory (PDIR) and data management block (DMB) directory (DDIR) for the scheduled PSB to be available. The PSB and DMB are loaded when the transaction first executes. If not resident, then ACBLIB I/Os are required to load the PSBs and DMBs. The PSB, PSB work pool (PSBW), and DMB pools must be large enough to accommodate all PSBs and DMBs, if possible, and must be monitored regularly.
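As a rough mental model of how PARLIM and MAXRGN interact, consider the following sketch. This is a deliberately simplified illustration, not the exact IMS scheduling algorithm: it assumes one additional region becomes eligible for each PARLIM messages on the queue, capped at MAXRGN.

```python
def regions_to_schedule(queued: int, parlim: int, maxrgn: int) -> int:
    """Simplified illustrative model of parallel scheduling.

    One region is made eligible per PARLIM queued messages, capped at
    MAXRGN. This is NOT the exact IMS algorithm; it only shows how the
    two parameters push against each other.
    """
    if queued == 0:
        return 0
    if parlim == 0:
        # PARLIM=0 modeled as: a region per queued message, up to the cap
        return min(queued, maxrgn)
    eligible = 1 + (queued - 1) // parlim
    return min(eligible, maxrgn)
```

For example, with 10 queued messages, PARLIM=3, and MAXRGN=4, the model makes four regions eligible; raising PARLIM reduces the parallelism for the same queue depth.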
4. Scheduling-end to first DL/I call
Scheduling-end to first DL/I events include program or subroutine load and any program and working storage initialization being performed by the application. When long elapsed times are evident in this event, review them from a program-load or application-design perspective.
5. Program elapsed time
Program elapsed time events are composed of many variables that can influence the elapsed time of a transaction. Elapsed time is measured from the time that the application is scheduled into the IMS regions until synchronization point (sync point) processing.
The following variables can affect elapsed time:
 – Program execution time
 – Time required to complete the DL/I calls, which consists of two components:
 • IWAIT time, which is the time for DL/I to acquire the database record
 • NOT-IWAIT time, which is the time DL/I spends in code execution
 – I/O subsystem delays as a result of DASD response times
 – System waits, over which IMS has no influence
6. Sync point processing
Sync point processing events for full-function (FF) transactions require that all I/Os actually take place. IMS follows the standard two-phase commit process. In an IMS-only workload, this two-phase commit is actually a call to two separate modules. More generally, a two-phase commit occurs between IMS and DB2 and follows the standard two-phase commit process. In IMS terms, a get unique (GU) call to the IOPCB to retrieve the next message implies sync point.
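The two-phase commit sequence itself can be sketched generically. The following is a conceptual coordinator model only, not IMS internals; the class and method names are assumptions chosen to mirror the prepare/commit phases:

```python
class Participant:
    """A resource manager (for example, IMS or DB2) in a conceptual model."""
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "active"

    def prepare(self) -> bool:              # phase one: vote to commit
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):                       # phase two: make changes permanent
        self.state = "committed"

    def rollback(self):                     # back out if any vote was no
        self.state = "aborted"

def two_phase_commit(participants):
    """Generic two-phase commit coordinator (conceptual sketch only)."""
    if all(p.prepare() for p in participants):   # phase one
        for p in participants:                   # phase two
            p.commit()
        return "committed"
    for p in participants:
        p.rollback()
    return "aborted"
```

The key property the sketch shows is that phase two begins only after every participant has voted yes in phase one.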
7. Message output
Message output events imply termination of the program so that output messages are ready to be sent to their final destination. The HIOP usage is important in identifying possible performance-related issues in this area.
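The step boundaries described above also define the transit components you typically measure when profiling a transaction. The following sketch splits a transaction lifecycle into input-queue, processing, and output components; the timestamp field names are assumptions, standing in for the times you would derive from log records:

```python
def transit_components(ts):
    """Split a transaction's life into transit components (illustrative).

    ts: dict of timestamps in seconds, keyed by event -
    arrival, schedule, syncpoint, dequeue (names are assumptions).
    """
    return {
        "input_queue": ts["schedule"] - ts["arrival"],    # steps 1-3
        "processing":  ts["syncpoint"] - ts["schedule"],  # steps 4-6
        "output":      ts["dequeue"] - ts["syncpoint"],   # step 7
        "response":    ts["dequeue"] - ts["arrival"],     # end to end
    }

sample = {"arrival": 0.000, "schedule": 0.012,
          "syncpoint": 0.095, "dequeue": 0.110}
parts = transit_components(sample)
```

Breaking response time into these components tells you which part of the flow (queuing, application processing, or output delivery) to investigate first.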
2.3 Events for Fast Path messages
Table 2-2 on page 21 is specific to messages that are being processed through Fast Path expedited message handling (EMH). Certain events overlap between Fast Path and full-function processing; those overlaps are also mentioned. These steps are described in more detail after the table.
Table 2-2 Event list for Fast Path messages

Step 1: Message arrives into IMS
  Activity: MFS, basic edit or ISC edit
  Problem identification: RECA, MFBP, HIOP, or CIOP shortages evident

Step 2: Fast Path expedited message handling (EMH) queuing
  Activity: Determine if Fast Path (FP) potential or exclusive
  Problem identification: EMHB pool shortages

Step 3: Fast Path EMH scheduling
  Activity: First-in-first-out scheduling by balancing group (BALG)
  Problem identification: Queuing on BALG

Step 4: Program elapsed time
  Activity: Application program invoked and DL/I calls performed
  Problem identification: High elapsed times as a result of application, database, WLM, I/O subsystem, or system issues

Step 5: Sync point
  Activity: Phase one and two commit
  Problem identification: Output thread shortage (OTHR)

Step 6: Message output
  Activity: Send output to destination
  Problem identification: EMHB pool shortage, or network delays before message is dequeued
The steps are as follows:
1. Message arrives in IMS
Message arrival event for Fast Path is essentially the same as for full-function.
2. Fast Path EMH queuing
The Fast Path EMH event requires IMS to decide whether the message is Fast Path potential (FPP) or Fast Path exclusive (FPE). If FPP, then DBFHAGU0 is called to decide whether the message needs to be processed by full-function IMS or whether a routing code needs to be assigned to make it Fast Path. If FPE, then IMS queues the message on the BALG. The BALG becomes the queue anchor point.
3. Fast Path EMH scheduling
The Fast Path EMH scheduling event has no complex priority-scheduling schema when compared to FF. Messages are processed through a BALG on a first-in-first-out (FIFO) basis and scheduled into an available IMS Fast Path (IFP) region. All IFP regions are always in wait-for-input (WFI) mode, so the overhead of program load is avoided.
4. Program elapsed time
The program elapsed time event involves the same principle as mentioned for FF, with the exception that any Fast Path DEDB or MSDB reads are performed under the control of the control region instead of the DL/I separate address space.
5. Sync point processing
Fast Path sync point processing differs significantly from full-function sync point processing. A key element of Fast Path sync point processing is that all Fast Path writes happen asynchronously through output threads (OTHR). The two-phase commit process is still used by Fast Path, but it operates differently: IMS Fast Path does not have to wait for the I/O to be physically written. If phase one processing is unsuccessful, the buffers are discarded, and the transaction is either rescheduled or discarded entirely, depending on the nature of the problem.
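The asynchronous output-thread idea can be sketched with a worker thread draining a work queue. This is a conceptual model only, not IMS code; the names (`othr_queue`, `syncpoint`, `write_log`) are assumptions. The point it illustrates is that sync point hands committed buffers to the output thread and returns without waiting for the physical write:

```python
import queue
import threading

write_log = []                      # stands in for DEDB writes reaching DASD
othr_queue = queue.Queue()          # models the output thread (OTHR) work queue

def output_thread():
    """Drain committed buffers asynchronously (conceptual model only)."""
    while True:
        buf = othr_queue.get()
        if buf is None:             # shutdown signal for the demo
            break
        write_log.append(buf)       # the physical I/O happens here, after sync point
        othr_queue.task_done()

worker = threading.Thread(target=output_thread)
worker.start()

def syncpoint(updated_buffers):
    """Phase two hands buffers to OTHR and returns immediately."""
    for buf in updated_buffers:
        othr_queue.put(buf)         # no wait for the physical write

syncpoint(["buf1", "buf2"])
othr_queue.join()                   # demo only: wait so we can inspect the result
othr_queue.put(None)
worker.join()
```

In the model, a shortage of output threads would show up as the work queue growing faster than the worker can drain it, which mirrors the OTHR shortage symptom in Table 2-2.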
2.4 Events for DBCTL
Table 2-3 is specific to messages being processed through DBCTL. These steps are described in more detail after the table.
Table 2-3 Event list for DBCTL

Step 1: Message arrives into CICS TOR/AOR
  Activity: Format message and call/link related application programs

Step 2: Application gets control and issues schedule request
  Activity: EXEC DL/I command

Step 3: PSB scheduling
  Activity: DBT thread TCB is given control; recovery token is established
  Problem identification: Insufficient PSB, DMB, DB work, PSB work, or EPCB pool space can cause scheduling delays or failures

Step 4: Program elapsed time
  Activity: Application program invoked and DL/I calls performed
  Problem identification: High elapsed times as a result of application, database, WLM, I/O subsystem, or system issues

Step 5: Sync point
  Activity: IMS hardens the log data and writes updated full-function database blocks; Fast Path database updates are asynchronous
  Problem identification: WADS and OLDS activity; database DASD I/O

Step 6: Application issues terminate PSB call
  Activity: IMS frees scheduled resources
The steps are as follows:
1. Message arrives into CICS TOR/AOR
The message arrives at the CICS TOR first; unless the TOR is also acting as an AOR, the message can be routed to a CICS AOR.
2. Application gets control and issues schedule request
The application issues a DL/I schedule request to reserve IMS resources.
3. PSB scheduling
In most cases, one PSB is scheduled for one CICS transaction. IMS checks whether the PSB and related databases are available. It then allocates space for the necessary control blocks in various scheduling pools and loads the required blocks.
4. Program elapsed time
The program elapsed time event is composed of many variables that can influence the elapsed time of a transaction. Elapsed time is measured from the time the PSB is scheduled until the PSB is terminated by the application. The following variables can affect elapsed time:
 – Program execution time
 – Time required to complete the DL/I calls, which consists of two components:
 • IWAIT time, which is the time for DL/I to acquire the database record
 • NOT-IWAIT time, which is the time DL/I spends in code execution
 – I/O subsystem delays as a result of DASD response times
 – System waits, over which IMS has no influence
5. Sync point processing
IMS hardens the log data and writes updated full-function database blocks. Fast Path writes are performed asynchronously to sync point processing.
6. Application issues terminate PSB call
PSB terminates and scheduling resources are freed.
2.5 Common log records produced for transaction flows
Table 2-4 lists the most commonly used log records that are produced during the lifecycle of an IMS transaction. Identifying these log records enables you to diagnose performance-related problems.
Table 2-4 Common log records produced by both full-function and Fast Path

Record    Description
X’01’     Message received from a terminal
X’03’     Message received from DL/I
X’07’     An application program was terminated
X’08’     An application program was scheduled
X’31’     Message queue GU
X’32’     Message queue reject
X’33’     Message queue free
X’34’     Message cancel
X’35’     Message queue enqueue
X’36’     Message queue dequeue
X’37’     Sync point record
X’38’     Message after abend
X’50’     Database undo/redo record
X’5901’   Fast Path input
X’5903’   Fast Path output
X’5936’   Fast Path dequeue
X’5937’   Fast Path sync point
X’5938’   Fast Path abend
X’5950’   Fast Path database update
X’5953’   Fast Path sequential dependent (SDEP) write
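These record pairs are exactly what log-based performance tools correlate. As a sketch of the idea (the record layout here is a simplified, hypothetical stand-in; real IMS log records are binary and keyed differently), the following pairs X’35’ enqueue and X’36’ dequeue records by message token to compute how long each message sat on the queue:

```python
def queue_residency(records):
    """Pair X'35' enqueue and X'36' dequeue records per message token.

    records: iterable of (code, token, timestamp) tuples - a simplified,
    hypothetical stand-in for parsed binary IMS log records.
    Returns {token: seconds spent on the message queue}.
    """
    enqueued = {}
    residency = {}
    for code, token, ts in records:
        if code == "35":
            enqueued[token] = ts
        elif code == "36" and token in enqueued:
            residency[token] = ts - enqueued.pop(token)
    return residency

log = [
    ("35", "MSG0001", 10.000),
    ("35", "MSG0002", 10.005),
    ("36", "MSG0001", 10.040),
    ("36", "MSG0002", 10.250),
]
times = queue_residency(log)
```

A long residency for one transaction code but not others points at a scheduling problem for that transaction rather than a system-wide queue buffer shortage.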
2.6 Logging in a single IMS system
Figure 2-1 shows the transaction processing events, and the associated log records that are related to each event in a single IMS system. This figure shows the associated event processing and the IMS logger (ILOG) functions and associated log records that you can use to review the performance characteristics and flow of an IMS transaction.
Figure 2-1 Events and associated log records for an IMS transaction
Figure 2-2 on page 25 shows similar transaction events and associated log records, as displayed by the IMS Problem Investigator tool. This figure highlights the power and simplicity that the PI tool can provide in viewing the overall flow of an IMS transaction. IMS Problem Investigator is further described in 3.2, “IMS Problem Investigator for z/OS, Version 2 Release 3” on page 81.
Figure 2-2 IMS Problem Investigator: Events and log records for an IMS transaction in PI
Figure 2-3 shows the transaction processing events, and the associated log records that are related to each event in an IMS shared-queues environment. It shows the associated event processing and the ILOG functions and associated log records that you can use to review the performance characteristics and flow of an IMS transaction.
Figure 2-3 Events and associated log records for transaction in shared queues environment
IBM Parallel Sysplex®, data sharing, and shared queues are not specifically discussed in this chapter; they are addressed in Chapter 10, “Parallel Sysplex considerations” on page 353. That chapter also has a brief overview of the transaction flow and log records in a shared-queues environment compared to a single system.
2.7 Open transaction flow
IMS offers a variety of solutions for accessing IMS applications and data. In an open environment, there are two main patterns of connectivity from the client application: transaction and database. See Figure 2-4.
Figure 2-4 Connectivity with open environment
The patterns of connectivity are as follows:
Transaction connectivity
With transaction connectivity, a client application in an open environment submits an IMS transaction; the IMS application processes it and sends a reply to the client application. Conversely, in an interaction referred to as callout, the IMS application can call out to the open application. See Figure 2-5 on page 27.
This connectivity uses Open Transaction Manager Access (OTMA) as an interface for sending and receiving transactions and data from IMS. The commonly used OTMA clients are IMS Connect (ICON) and MQ-IMS bridge.
Figure 2-5 IMS transaction connectivity solutions
Database connectivity
With database connectivity, a client application in an open environment issues IMS DB processing requests and receives replies. The business logic is processed in the open application. See Figure 2-6.
This connectivity uses Open Database Access (ODBA), database resource adapter (DRA), or both, as an interface to access databases managed by IMS DB manager. Examples of ODBA clients are IBM WebSphere® Application Server and IBM InfoSphere® Classic Federation Server. The Open Database Manager (ODBM) is also used as a client.
Figure 2-6 IMS DB connectivity solutions
When you use ICON, you can collect event records by using IMS Connect Extensions for z/OS (see 3.3, “IMS Connect Extensions for z/OS, Version 2 Release 3” on page 116).
There are two types of event records:
Connect status event
A Connect status event identifies a change in the status of your IMS Connect system, for example, a resource (data store, TMEMBER) becoming available or unavailable, or a socket becoming accepted for input by a port task. Connect status events are typically not related to the processing of input messages, but can affect their processing.
Connect status event records are identified by the EVNT constant event key.
Message related event
Message related event records identify an event in the processing of an incoming message request. Message related event records have a store clock (STCK) token event key. For non-persistent sockets, each incoming message is assigned a unique event key, and every event that is associated with the processing of the message has the same event key.
For persistent sockets, all incoming messages are assigned the same event key. All events that are associated with the processing of all messages for the duration of the socket have the same event key.
The following list introduces the records that are involved in an IMS Connect flow for sync level none and sync level confirm transactions:
Event flow: Commit mode 1, sync level none
Example 2-1 shows the event flow for a single commit mode 1, sync level none transaction. It shows the hexadecimal code and the description of the events records.
Example 2-1 Sample event flow: Commit mode 1, sync level none
3C Prepare Read Socket <= Incoming message from client
49 Read Socket
3D Message Exit called for READ
3E Message Exit return for READ
41 Message sent to OTMA <= Sent to OTMA for processing
42 Message received from OTMA
3D Message Exit called for XMIT
3E Message Exit return for XMIT
4A Write Socket <= Response sent back to client
0C Begin Close Socket <= Non-persistent Socket is closed
0D End Close Socket
48 Trigger event CLOS <= Connect has finished processing message
Event flow: Commit mode 1, sync level confirm
Example 2-2 shows the event flow for a single commit mode 1, sync level confirm transaction. The difference from the previous example is that the client acknowledgement is displayed in the flow.
Example 2-2 Sample event flow: Commit mode 1, sync level confirm
3C Prepare Read Socket <= Incoming message from client
49 Read Socket
3D Message Exit called for READ
3E Message Exit return for READ
41 Message sent to OTMA <= Sent to OTMA for processing
42 Message received from OTMA
3D Message Exit called for XMIT
3E Message Exit return for XMIT
4A Write Socket <= Response sent back to client
49 Read Socket <= ACK received from Client
3D Message Exit called for READ
3E Message Exit return for READ
41 Message sent to OTMA <= ACK sent to OTMA
42 Message received from OTMA
46 De-allocate Session
3D Message Exit called for XMIT
3E Message Exit return for XMIT
4A Write Socket <= De-alloc Response sent back to client
0C Begin Close Socket <= Non-persistent Socket is closed
0D End Close Socket
48 Trigger event CLOS <= Connect has finished processing message
Event flow: ODBM
Example 2-3 shows the event flow of processing through ODBM.
Example 2-3 Sample event flow: ODBM
3C Prepare READ Socket <= Incoming message from client
49 READ Socket
5B DRDA 1041 EXCSAT-Exchange Server Attributes <= Request for server attributes
49 READ Socket (including security mechanism)
49 READ Socket
5B DRDA 106D ACCSEC-Access Security
5C DRDA 1443 EXCSATRD-Server Attributes Reply Data
4A WRITE Socket
49 READ Socket
49 READ Socket
5B DRDA 106E SECCHK-Security Check <= Request Security check
63 ODBM Security Exit called <= Open Database Security Exit called
64 ODBM Security Exit returned
5C DRDA 1219 SECCHKRM-Security Check Reply Message<= Reply from Security Check
4A WRITE Socket
49 READ Socket
49 READ Socket
5B DRDA 2001 ACCRDB-Access RDB <= Request to allocate PSB
5D ODBM begin Allocate PSB (APSB) Program=AUTPSB11
61 ODBM Routing Exit called <= Open Database Routing Exit called
62 ODBM Routing Exit returned
AA ODBM Trace: Message sent to ODBM
69 Message sent to ODBM
AA ODBM Trace: Message received from ODBM
6A Message received from ODBM
5E ODBM end Allocate PSB (APSB) Program=AUTPSB11 <= PSB allocated
5C DRDA 2201 ACCRDBRM-Access RDB Reply Message
4A WRITE Socket
48 Trigger Event for ODBMMSG
3C Prepare READ Socket
49 READ Socket <= Request data
5B DRDA 200C OPNQRY-Open Query
49 READ Socket
49 READ Socket
5B DRDA CC05 DLIFUNC-DL/I function
49 READ Socket
49 READ Socket
5B DRDA CC01 INAIB-AIB data
49 READ Socket
49 READ Socket
5B DRDA CC04 RTRVFLD-Field client wants to retrieve data
5B DRDA CC04 RTRVFLD-Field client wants to retrieve data
...
3C Prepare READ Socket
49 READ Socket
5B DRDA C801 DEALLOCDB-Deallocate PSB <= Request to Deallocate PSB
5F ODBM begin Deallocate PSB (DPSB)
AA ODBM Trace: Message sent to ODBM
69 Message sent to ODBM
AA ODBM Trace: Message received from ODBM
6A Message received from ODBM
60 ODBM end Deallocate PSB (DPSB)
5C DRDA CA01 DEALLOCDBRM-Name of deallocated PSB <= Reply to Deallocate PSB
4A WRITE Socket
48 Trigger Event for ODBMMSG
3C Prepare READ Socket
47 Session Error
0C Begin CLOSE Socket
0D End CLOSE Socket <= Client disconnected
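Flows like the ones above are reconstructed by grouping journal records on their event key. The following sketch is a simplified stand-in (tuple layout and names are assumptions, not the IMS Connect Extensions journal format); because each message on a non-persistent socket gets a unique STCK-token event key, grouping by key recovers each message's flow in order:

```python
from collections import defaultdict

def group_by_event_key(events):
    """Group IMS Connect event records by event key (illustrative).

    events: iterable of (event_key, code, description) tuples - a
    simplified stand-in for journal records. For non-persistent sockets,
    each message has a unique STCK-token key, so grouping by key
    reconstructs per-message event flows.
    """
    flows = defaultdict(list)
    for key, code, _desc in events:
        flows[key].append(code)      # journal order is preserved per key
    return dict(flows)

journal = [
    ("STCK01", "3C", "Prepare Read Socket"),
    ("STCK01", "49", "Read Socket"),
    ("STCK02", "3C", "Prepare Read Socket"),
    ("STCK01", "4A", "Write Socket"),
    ("STCK02", "49", "Read Socket"),
]
flows = group_by_event_key(journal)
```

For persistent sockets, all messages share one key, so a tool must additionally split the flow at message boundaries (for example, at the CLOS trigger event).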
2.8 Monitoring methodology
Monitoring is the collection and interpretation of IMS data. Monitoring should be an ongoing task for the following reasons:
Monitoring helps you establish base profiles, workload statistics, and data for capacity planning and prediction.
Monitoring gives early warning and comparative data to help you prevent performance problems.
Monitoring validates tuning you have done in response to a performance problem and ascertains the effectiveness of that tuning.
An historical base and conclusions from continuous monitoring provide a good start to answering user complaints and an initial direction for tuning projects.
2.8.1 Establishing monitoring strategies
Several types of monitoring strategies are available:
Summarize actual workload for the entire online execution. This strategy can include both continuous and periodic tracking. You can track total workload or selected representative transactions.
Take sample snapshots at peak loads and under normal conditions. Monitoring the peak periods is always useful for two reasons:
 – Bottlenecks and response time problems are more pronounced at peak volumes.
 – The current peak load is a good indicator of what the future average will be.
Monitor critical transactions or programs that have documented performance criteria.
Use the z/OS Workload Manager to help manage workload distribution, balance workloads, and distribute resources.
Plan your monitoring procedures in advance. A strategy should explain the tools to be used, the analysis techniques to be used, the operational extent of those activities, and how often they are to be performed.
Regardless of which strategy you use, you need to develop the following items:
Performance criteria
A master plan for monitoring, data gathering, and analysis
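A baseline profile turns continuous monitoring into an early-warning mechanism. The following sketch compares current response-time samples against a baseline and flags transactions that have degraded; the 1.5x threshold and the data shapes are assumptions for illustration, not a standard rule:

```python
import statistics

def exceptions(baseline, current, threshold=1.5):
    """Flag transactions whose current mean response time exceeds the
    baseline mean by more than `threshold` times (illustrative rule;
    the threshold is an assumption, chosen per installation).

    baseline, current: {trancode: [response-time samples in seconds]}
    """
    flagged = {}
    for tran, samples in current.items():
        base = statistics.mean(baseline.get(tran, samples))
        now = statistics.mean(samples)
        if now > base * threshold:
            flagged[tran] = (base, now)
    return flagged

baseline = {"PART": [0.10, 0.12, 0.11], "ORDER": [0.30, 0.28]}
current  = {"PART": [0.10, 0.11],       "ORDER": [0.55, 0.60]}
alerts = exceptions(baseline, current)
```

In practice you would also compare distributions rather than means (a stable average can hide a growing tail), but the baseline-versus-current structure is the same.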
2.8.2 Monitoring multiple systems in DB/DC and DCCTL environments
Plan to obtain both statistical and performance data for IMS online systems that are part of a multi-system network. You can use the same monitoring utilities and tools that are used for generating performance data for single IMS systems.
An online monitoring facility, such as IBM Tivoli® OMEGAMON® XE for IMS on z/OS (see 3.5, “Tivoli OMEGAMON XE for IMS on z/OS, Version 4 Release 2” on page 129) offers several features that can be used to assist in monitoring multiple IMS systems. It can assist in identifying and resolving immediate issues when monitoring multiple subsystems. IMSplex workspaces can help you monitor shared queues and data sharing groups.
For more information about monitoring multiple systems with OMEGAMON XE for IMS on z/OS, see Chapter 5, “Monitoring IMSplex systems,” in IBM Tivoli OMEGAMON XE for IMS on z/OS User’s Guide V4R2, SC23-9706.
IMS Monitor can be run concurrently in several systems. You obtain IMS Monitor reports for each IMS system and coordinate your processing analysis.
IMS Statistical Analysis utility produces summaries of transaction traffic for each system. Again, you combine the statistics for a composite picture.
IMS Transaction Analysis utility helps you trace transactions across multiple systems and examine the traffic.
With IMS Performance Analyzer for z/OS (see 3.1, “IMS Performance Analyzer for z/OS, Version 4 Release 3” on page 44), you can define groups of IMS systems for reporting purposes.
IMS Monitor reporting only allows for processing a single IMS system at a time, but with IMS Performance Analyzer for z/OS Groups you can do the following tasks:
Connect IMS systems participating in a sysplex. Reporting on the group can produce end-to-end response-time statistics for shared queue transactions.
Connect IMS systems that use data sharing. Reporting on the group can produce consolidated database reporting.
Connect IMS systems for periodical or ad hoc reporting. Reporting on the group can produce reports for each IMS subsystem in a single run.
Connect systems (IMS and IMS Connect) to produce combined IMS and Connect forms-based reports.
For more information about using IMS Performance Analyzer for z/OS groups to monitor multiple IMS systems in either a shared-queues or data-sharing environment, see Chapter 13, “Defining Groups,” in IBM IMS Performance Analyzer for z/OS User’s Guide V4R3, SC19-3633.
These are only some of the options available when monitoring multiple systems in either a shared-queues or data-sharing environment. The process is installation-specific and is determined by your system administrator.
2.8.3 Coordinating performance information in an MSC network
The IMS system log for each system participating in Multiple Systems Coupling (MSC) contains only the record of events that take place in that system. The logging of traffic received on links is included. You can augment the system log documentation that records the checkpoint intervals with the system identifications of all coupled systems. In this way, you can interpret reports, because you know of transactions that might be present in message queues but are not processed, and you can expect additional transaction loads from remote sources. In your analysis procedures, include ways of isolating the processing that is triggered by transactions that originate from another system.
To satisfy the need for monitoring with typical activity that includes cross-system processing, coordinate your scheduling of IMS Monitor and other traces between master terminal operators. The span of the monitoring does not have to be exactly the same, but if it is widely different, the averaging of report summaries can make it harder to interpret the effect of the processing that is triggered by cross-system messages.
For more information about interpreting MSC reports, see IMS Version 12: System Utilities, SC19-3023 and IMS Version 12: System Administration, SC19-3020.
Additionally, IMS Performance Analyzer for z/OS (see 3.1, “IMS Performance Analyzer for z/OS, Version 4 Release 3” on page 44) provides several reports to assist you in tracking MSC activity in your installation. These IMS Monitor MSC reports include the following information:
System analysis
Resource usage
Monitor data analysis
The reports format the records from the monitor input file into a chronological listing.
IMS Performance Analyzer for z/OS alleviates the need for averaging IMS Monitor report summaries by allowing you to place systems into groups and report against the group rather than individual systems.
IMS log reporting includes the MSC Link Statistics report (Figure 3-31 on page 61) and forms-based transit reporting (3.1.3, “Forms-based reports” on page 48). The MSC Link Statistics report contains information about the use of MSC Links and is generated from IMS log record X’4513’. It provides summary information for each MSC link with a more detailed breakdown of send and receive traffic. Forms-based reports provide summary and detailed transit information about your MSC transactions.
IMS Connect forms-based transit reports, using IMS Connect Extensions for z/OS (see 3.3, “IMS Connect Extensions for z/OS, Version 2 Release 3” on page 116) event journals, allow you to gain detailed information about your MSC over TCP/IP transactions that are initiated through IMS Connect. By using groups to combine IMS and IMS Connect systems, you can obtain end-to-end transit reporting for these types of MSC transactions.
For more information about using IMS Performance Analyzer for z/OS reporting for MSC, see the following sources:
IBM IMS Performance Analyzer for z/OS User’s Guide V4R3, SC19-3633
 – Part 4, Forms-based transit reporting
 – Part 5, Requesting Log reports
 – Part 6, Monitor reporting
IBM IMS Connect Extensions for z/OS User’s Guide V2R3, SC19-3632
2.8.4 Monitoring Fast Path systems in DB/DC and DCCTL environments
The major emphasis for monitoring IMS online systems that include message-driven Fast Path application programs is the balance between rapid response and high transaction rates. With Fast Path, performance data is part of the system log information. You can use IMS Fast Path Log Analysis utility (see 1.6, “Fast Path Log Analysis utility (DBFULTA0)” on page 14) to generate statistical reports from the system log records. This utility can provide summaries of the Fast Path transaction loads, reports that highlight exceptional response time, and details of the elapsed time between key events during the time in the system.
Another option, if available to your installation, is to use IMS Performance Analyzer for z/OS (see 3.1, “IMS Performance Analyzer for z/OS, Version 4 Release 3” on page 44) for your Fast Path monitoring needs. It includes IMS Monitor Fast Path Reporting (Figure 2-7), and Fast Path transit and resource usage reporting (Figure 2-8 on page 34). By using it, you can build customizable report sets that can be run daily, weekly, or when you might need them, with one pass of the IMS log or extract.
Figure 2-7 IMS Performance Analyzer: IMS Monitor Fast Path Report Set
Figure 2-8 IMS Performance Analyzer: IMS Fast Path Transit and Resource Report Set
The system administration tasks of setting up a monitoring strategy, performance profiles, and analysis procedures should be carried into the Fast Path environment.
For more information about using either IMS Monitor or IMS Fast Path Log Analysis utility, see the following sources:
IMS Version 12: System Utilities, SC19-3023
IMS Version 12: System Administration, SC19-3020
For more information about using IMS Performance Analyzer for z/OS, see IBM IMS Performance Analyzer for z/OS User’s Guide V4R3, SC19-3633.
2.9 Transaction flow in DB/DC and DCCTL environments
A distinct sequence of events occurs during the processing of a transaction. Message-related processing is asynchronous within IMS, that is, not associated with a dependent region’s processing. Examples of this kind of processing are message traffic, editing, formatting, and recovery-related message enqueuing, any of which can be done concurrently with application program processing for other transactions. Events from application program scheduling to termination are associated with a PST and can be regarded as synchronous.
Figure 2-9 shows the sequence of events when an online IMS system processes a mix of transactions concurrently. Each event is explained in the notes that follow.
The unit of work by which most IMS systems are measured is the transaction (or a single conversation iteration, from entering the input message to receipt of one or more output messages in response).
One way of representing the flow of units of work is to compare it to three funnels through which all transactions must pass, as illustrated in Figure 2-9.
Figure 2-9 Processing events during transaction flow through IMS
The events that account for the principal contributions to transaction response time are numbered in the center. The items entered on the left of the diagram are message-related, and those on the right are related to the application program. The arrows trace the flow for an individual transaction. (The diagram does not show the paging element or system checkpoint processing that is distributed through the elapsed times.)
Figure 2-9 on page 35 shows the following events:
1. Wait for poll
With BTAM, this wait was the time between pressing the Enter key and receiving a poll that resulted in the data being read by the channel program. This time is no longer attributable to IMS.
2. Data transfer
This time includes propagation delay and modem turnarounds for multi-block input messages. You can estimate the data transfer times if the volume of data transmitted is known.
3. Input message processing
IMS control of the transaction begins when the input message is available in the HIOP. The time that the message spends in this pool, in MFS processing, and in being moved to the message queue buffers affects response time, as does I/O to the format library for individual transactions. A major factor in determining response time is whether the respective pools are large enough for the current volume of transactions flowing into input queuing. In particular, if the message queue pool is too small, overflow to the message queue data sets occurs.
4. Message classification
This call is to the z/OS WLM to obtain a WLM service classification for the incoming message.
5. Input queuing
This time is spent on the input queue or in the message queue buffers waiting for a message region to become available. In a busy system, this time can become a major portion of the response time. The pattern of programs scheduled into available regions and the region occupancy percentage are important and should be closely monitored.
6. Scheduling
Because of class scheduling, regions can be idle while transactions are still in the queue. The effects of scheduling parameters can be as follows:
 – Termination of scheduling as a result of PSB conflict or message class priorities
 – Termination of scheduling as a result of intent conflict
 – Extension of scheduling by I/Os to IMS.ACBLIB for intent lists, PSBs, or DMBs
 – Pool space failures in either the PSB or DMB pools
7. Initialize PB call (activate delay monitoring environment)
Activate the WLM delay monitoring environment for the message when it is placed into the dependent region. The WLM PB is initialized with the service classification and transaction name, message arrival time, program execution start time (current time), user ID, and so forth.
8. Program load
See event 9.
9. Program initialization
After scheduling, several kinds of processing events occur before the application program can start:
 – Contents supervision for the dependent region
 – Locating the program libraries and their directory entries
 – Program fetch from the program library
 – Program initialization up to the time of the first DL/I call to the message queue
For monitoring, you can obtain the overall time for these activities. The number of I/Os should be checked periodically.
10. Message queue GU
This GU call is to the message queue. It is chosen as a measuring point because the event is recorded on the system log and is used as a starting point for iterations of processing when more than one message is serviced at a single scheduling of the program.
11. Program execution
The time for program execution, from the first message call to the output message insert, is a basic statistic for each transaction. It is important to account for that time in terms of the work performed:
 – Number of transactions processed per schedule
 – Number and type of DL/I calls per transaction
 – Number of I/Os per transaction
A useful breakdown of elapsed time into processor time and I/O helps determine which transactions use significant resources.
12. Output message insert
This time begins the asynchronous processing for the output response. The output message requests flow into the funnel to be serviced while the application program is either beginning to process another input message or is performing closeout processing and program termination.
13. Wait for sync point
When an output message is issued by a program, it is enqueued on a temporary destination until the program reaches a synchronization point. For programs specified as MODE=MULT, a long delay in output transmission can occur when the program goes on to process several transactions at one scheduling: none of the previous output messages can be released for transmission until the program ends, and if the program fails, all current transactions are backed out. When the messages are dequeued from the temporary destination to the permanent destination, LIFO sequence is used. With MODE=SNGL, the wait for sync point (at the next GU to the message queue) is normally negligible.
14. Program termination
In the case of MODE=MULT, described in event 13, the synchronization point occurs at program termination. Any database updates are purged from the database buffer pools, and the waiting output messages are released.
In the MODE=SNGL case, the synchronization point occurs at the previous message queue GU call (usually a GU with a QC status code), and no database-commit processing occurs at termination, unless the application program has updated a database after the last message queue GU.
15. WLM notify call
This call tells WLM that the application program ended execution. The PB and current time are passed to WLM.
16. Wait for selection
This time is similar to the wait for poll on input, except that an output message might not have to wait for the completion of a polling cycle if no polling is in progress on the line when the output message is enqueued. However, there might be a wait for the duration of data transmission to other terminals on the line. In a busy system, this wait can account for the majority of time spent on the output queue.
17. Output message processing
This activity is similar to event 3 on page 36.
18. WLM report call
This call tells WLM that the response is being sent. IMS passes the input message arrival time, the service classification, and the current time (output message send time).
19. Data transfer
This activity is similar to event 2 on page 36.
20. Output queue processing
Output messages that were sent are dequeued after their receipt is acknowledged by the terminal. In the case of paged output, the acknowledgment is a consequence of another input or an IMS MFS 3270 PA2 key entry from the terminal.
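The numbered events above can be grouped into the transit components that IMS monitoring reports typically show: input time, queue and scheduling time, processing time, and output time. The following sketch illustrates that arithmetic; the timestamps and field names are hypothetical values for illustration, not an IMS API or log record format.

```python
# Hypothetical event timestamps (in seconds) for one transaction. The keys map
# to the numbered events above; the values are illustrative only.
events = {
    "input_ready": 0.000,     # event 3: input message available in the HIOP
    "input_enqueued": 0.004,  # input message placed on the message queue
    "program_gu": 0.060,      # event 10: message queue GU in the dependent region
    "output_insert": 0.135,   # event 12: output message insert
    "output_send": 0.150,     # event 18: response sent toward the terminal
}

def transit_breakdown(ev):
    """Split total response time into the classic transit components."""
    return {
        "input_time": ev["input_enqueued"] - ev["input_ready"],
        "queue_and_schedule_time": ev["program_gu"] - ev["input_enqueued"],
        "processing_time": ev["output_insert"] - ev["program_gu"],
        "output_time": ev["output_send"] - ev["output_insert"],
        "total_response": ev["output_send"] - ev["input_ready"],
    }

breakdown = transit_breakdown(events)
for component, seconds in breakdown.items():
    print(f"{component:>24}: {seconds * 1000:7.1f} ms")
```

In this hypothetical profile, most of the response time is spent waiting for a region (queue and scheduling) and in program execution, which is where monitoring attention usually pays off first.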
2.9.1 Principal DB/DC and DCCTL monitoring facilities
IMS Monitor is the principal monitoring utility that is provided by IMS. It is an integral part of the control program in the DB/DC environment. The counterpart of IMS Monitor in the batch environment is the Database Batch Monitor.
IMS Monitor collects data while the DB/DC environment is operating. Information in the form of monitor records is gathered for all dispatch events and placed in a sequential data set. The IMS Monitor data set is specified on the IMSMON DD statement in the control region JCL; data is added to the data set when the /TRACE command activates the monitor. The MTO can start and stop the monitor, guided by awareness of the system’s status, to obtain several snapshots.
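As an illustrative fragment (the data set name is a placeholder, not taken from this book), the monitor data set is pointed to by the IMSMON DD statement in the control region JCL, and the MTO switches the monitor on and off with the /TRACE command:

```
//IMSMON   DD DSN=IMSP.MONITOR.DATA,DISP=OLD    placeholder data set name

/TRACE SET ON MONITOR ALL        activate the monitor for a snapshot
/TRACE SET OFF MONITOR           stop the monitor
```

Keeping the monitored intervals short (a few minutes at representative load) limits both the monitor overhead and the volume of monitor records to analyze.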
For more information about interpreting IMS Monitor reports, see the following sources:
IMS Version 12: System Utilities, SC19-3023
IMS Version 12: System Administration, SC19-3020
The principal monitoring tools for IMS, provided by IBM IMS Tools, are as follows:
Tivoli OMEGAMON XE for IMS on z/OS monitors and manages the availability, performance, and resource utilization of your IMS systems, either at a system level or within an IMSplex. OMEGAMON XE for IMS on z/OS monitors more than 80 groups of attributes, providing a wealth of IMSplex and system-level data. You can use these attributes to tailor the information presented in workspaces, or to define situations that target specific thresholds, events, or performance problems you want to monitor.
For more information about Tivoli OMEGAMON XE for IMS on z/OS monitoring facilities, see OMEGAMON XE for IMS on z/OS Users Guide V4R2, SC23-9706-04.
IMS Performance Analyzer for z/OS is a performance analysis tool to help you monitor, maintain, and tune your Information Management System Database (IMS DB) and Transaction Manager (IMS TM) systems. IMS Performance Analyzer for z/OS processes IMS Log data, Monitor data, IMS Connect event data (through IMS Connect Extensions for z/OS event journals), and OMEGAMON XE for IMS on z/OS Transaction Reporting Facility (TRF) and Application Trace Facility (ATF) data to provide comprehensive reports that IMS specialists can use to tune their IMS systems and that managers can use to verify service levels and predict trends.
For more information about IMS Performance Analyzer for z/OS and its reporting capabilities, see IBM IMS Performance Analyzer for z/OS User’s Guide V4R3, SC19-3633.
2.9.2 Monitoring procedures in a DBCTL environment
This topic explains how to establish monitoring procedures for your DBCTL environment. First, consider that monitoring in a DB/DC environment generally refers to the monitoring of transactions. The transaction is entered by a user on a terminal, is processed by the DB/DC environment, and returns a result to the user. Transaction characteristics that are measured include total response time and the number and duration of resource contentions.
A DBCTL environment has no transactions and no terminal users. It does, however, do work on behalf of coordinator controller (CCTL) transactions that are entered by CCTL terminal users. DBCTL monitoring provides data about the DBCTL processing that occurs when a CCTL transaction accesses databases in a DBCTL environment. This access is provided by the CCTL making the DRA request.
The most typical sequence of database resource adapter (DRA) requests that represents a CCTL transaction is as follows (see Figure 2-10):
A SCHED request to schedule a PSB in the DBCTL environment
A DL/I request to make database calls
A sync point request, COMMTERM, to commit the database updates and release the PSB
Figure 2-10 IMS Problem Investigator: DBCTL Transaction Flow in IMS
The DBCTL processing that encompasses this request sequence is called a unit of recovery (UOR).
DBCTL provides UOR monitoring data, such as the following examples (see Figure 2-11 on page 40):
Total time the UOR exists
Wait time to schedule a PSB
I/O activity during database calls
Figure 2-11 IMS Performance Analyzer: Transaction Resource Usage Summary-DBCTL
This information is similar to, and is often the same as, DB/DC monitoring data. However, in a DBCTL environment, the UOR data represents only a part of the total processing of a CCTL transaction. You must also include CCTL monitoring data, using separate tooling, to get an overall view of the CCTL transaction performance.
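The relationship between these UOR measurements and the overall CCTL transaction time can be sketched as follows; the timing values and field names are hypothetical, not an actual DBCTL log record format.

```python
# Hypothetical timing values (in seconds) for one UOR, following the DRA
# request sequence SCHED -> DL/I calls -> sync point (COMMTERM).
uor = {
    "sched_request": 0.000,  # CCTL issues the SCHED request
    "psb_scheduled": 0.020,  # PSB scheduled in the DBCTL environment
    "syncpoint_end": 0.180,  # commit complete, PSB released
}
# Time spent on the CCTL side, which must come from separate CCTL tooling.
cctl_side_time = 0.070

def uor_summary(u):
    """Derive the UOR monitoring values listed above."""
    return {
        "uor_total_time": u["syncpoint_end"] - u["sched_request"],
        "psb_schedule_wait": u["psb_scheduled"] - u["sched_request"],
    }

summary = uor_summary(uor)
# The UOR covers only part of the CCTL transaction's total elapsed time.
total_transaction_time = summary["uor_total_time"] + cctl_side_time
print(summary, total_transaction_time)
```

The last line makes the key point of this topic concrete: UOR data alone understates the CCTL transaction's response time, so the DBCTL and CCTL measurements must be combined.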
In this topic, the term transaction refers to a CCTL transaction. When it applies, UOR is specifically named.
The CCTL administrator must decide what strategy to use to monitor transaction performance. Several types of monitoring strategies are available:
Summarize actual workload for the entire online execution. This can be continuous or at an agreed-to frequency. Total workload or selected representative transactions can be tracked.
Take sample snapshots at peak loads and under normal conditions. Monitoring the peak periods is always useful for two reasons:
 – Bottlenecks and response time problems are more pronounced at peak volumes.
 – The current peak load is a good indicator of what the future average will be like.
Monitor critical transactions or programs that have documented performance criteria.
2.10 Monitoring procedures in an open transaction environment
This topic explains how to establish monitoring procedures for your open transaction environment.
In IMS 12, the Open Database Manager provides access to IMS databases that are managed by IMS DB systems in DBCTL and DB/TM environments within an IMSplex. The Open Database Manager supports TCP/IP clients through IMS Connect and application servers running application programs that use the IMS ODBA interface; it is one of the address spaces provided by the IMS Common Service Layer (CSL).
Both independently and with IMS Connect, ODBM supports various interfaces to ease the development of application programs that access IMS databases from many and varied distributed and local environments.
ODBM supports the following interfaces:
IMS Universal DB resource adapter
IMS Universal JDBC driver
IMS Universal DL/I driver
The ODBA interface
The ODBM CSLDMI interface for user-written ODBM client application programs
Figure 2-12 is an overview of an IMS configuration that includes ODBM.
Figure 2-12 Overview of an IMS configuration that includes ODBM
With the variety of origination points for these types of transactions, developing monitoring practices can sometimes be difficult. For transactions that come into the ODBM address space through either user-written ODBM clients (using SCI) or z/OS ODBA applications, enabling BPE tracing for your ODBM address space, in combination with the monitoring process described earlier in this chapter, can assist you in monitoring transaction performance in this environment. You might also consider using a combination of BPE tracing, IMS Performance Analyzer for z/OS reporting (3.1, “IMS Performance Analyzer for z/OS, Version 4 Release 3” on page 44), and IMS Problem Investigator for z/OS (3.2, “IMS Problem Investigator for z/OS, Version 2 Release 3” on page 81) for interactive analysis of the transactions executing in this environment.
Transactions that are initiated by using the IBM DRDA® protocol come into the ODBM address space through IMS Connect over TCP/IP to access IMS databases; for these, we need additional diagnostic information to better determine their performance. Because IMS provides little in the way of diagnostic and performance data collection capability for the IMS Connect address space, diagnosing problems with transactions in this environment can prove to be more difficult.
One option when trying to diagnose these problems is to enable BPE tracing for the IMS Connect and ODBM address spaces, and use this trace data along with system utilities, such as the Log Transaction Analysis utility (1.4, “Log Transaction Analysis utility (DFSILTA0)” on page 11) and the Statistical Analysis utility, to monitor transaction performance for this environment.
Another option is to use additional tooling provided by IBM IMS Tools to help you monitor transaction-related performance for the ODBM transactions initiated through IMS Connect. You might use IMS Connect Extensions for z/OS (3.3, “IMS Connect Extensions for z/OS, Version 2 Release 3” on page 116) event journals in combination with IMS Performance Analyzer for z/OS (3.1, “IMS Performance Analyzer for z/OS, Version 4 Release 3” on page 44) to run reports and obtain detailed transaction transit performance information and specific resource usage, for both IMS Connect and IMS, for your ODBM transactions. With this information, you might then use IMS Problem Investigator for z/OS (3.2, “IMS Problem Investigator for z/OS, Version 2 Release 3” on page 81) to focus on any of your ODBM transactions that might have poor response times.
The monitoring process in this environment is truly dependent on each individual installation and can be done in a variety of ways. It is highly dependent on individual skill level and knowledge of IMS utilities and tools.
For more information about open transactions, including transaction flows and monitoring tips, see Chapter 7, “Performance considerations for managing distributed workloads” on page 291.