Differences between JES2 and JES3 are becoming smaller
This chapter provides information about functions and features that are included in JES3 but not provided by JES2 (although the function might be available in other products). We also discuss the on-the-surface convergence between the two products in recent years.
This chapter can be used as part of the discovery phase of a migration project to identify the functions for which you must find a replacement. It also provides information to help you determine whether you are using a given function.
 
3.1 JES3 functions
This section lists functions or features provided in JES3 that do not have a direct equivalent in JES2. A brief explanation of “Where this is enabled” is included for most features to help you determine whether you are using that feature in your installation. Some of the functions presented are part of the fundamental design of JES3, so it might not be easy to determine whether you are using them; these features might take more investigation and time to convert.
For each function, you need to determine the extent of its usage and whether you still need it. For example, some features, such as Main Device Scheduling, are less relevant now (especially for installations that use tape virtualization) than they were 40 years ago. So, while you might still be using them, you might find that you do not actually need them.
 
Tip: As you read this chapter, we suggest that you start creating a checklist of the JES3 functions that you use. Later, when you get to the stage of deciding to migrate to JES2, that list will be one of the inputs. To help with that task, Table 3-1 on page 54 could be used as a template for such a checklist.
3.1.1 Dependent Job Control (DJC)
Dependent Job Control (DJC) was originally provided as a JES3 function for installations that required a basic batch job networking capability and found that the use of conditional JCL (using COND codes) was cumbersome. Over the years, most installations found that they required a more robust batch planning, control, and monitoring capability with less manual intervention so the use of batch scheduling products such as IBM Tivoli® Workload Scheduler is now prevalent.
 
Where this is enabled: //*NET control statement in JCL. A list of all DJC networks that are currently in use can be obtained by using the *I N command. Be aware that DJC networks are constantly being created and deleted as jobs start and end, so you should also check the log for messages that start with IAT73xx (DJC messages) to build a list of DJC networks.
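As a hedged illustration (network, job, and program names here are invented), a minimal two-job DJC network might look like the following, where JOBB waits for one predecessor and is released when JOBA completes normally:

```
//JOBA    JOB  ACCT,'FIRST JOB'
//*NET    NETID=DJCNET1,NHOLD=0,RELEASE=(JOBB)
//STEP1   EXEC PGM=IEFBR14
//JOBB    JOB  ACCT,'SECOND JOB'
//*NET    NETID=DJCNET1,NHOLD=1
//STEP1   EXEC PGM=IEFBR14
```

NHOLD specifies the number of predecessor jobs that must complete before the job is eligible to run; RELEASE names the jobs in the same network that this job releases.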
Any installation that still uses DJC should consider replacing it with a batch scheduler because:
It has more function, including tailored interfaces for production personnel.
Batch schedulers are far more powerful than DJC. They also provide job planning, tracking, and reporting functions that are not provided by DJC.
At present, JES2 can only provide DJC function if you use a third-party product.
Replacing DJC with a batch scheduler is one of the “Positioning Moves” that are referred to in 6.1.1, “Positioning moves” on page 90.
3.1.2 Deadline scheduling
Deadline scheduling is a function in JES3 that can be used to provide a user with the ability to have a job submitted at a certain priority level at a certain time or day. It was also intended for jobs that needed to run at a designated time or period of time, for example weekly. Although its functions worked without a scheduling package or manual operator intervention, it is becoming obsolete because the functions it provides are better handled and controlled by a batch scheduling product such as IBM Tivoli Workload Scheduler and using some of its features for critical path processing and Event Triggered Tracking.
 
Where this is enabled: DEADLINE subparameter on the //*MAIN card in JCL. To display whether deadline scheduling is currently being used for any jobs, enter the *I L command. If any jobs are queued, you can get a list of them using the *I A D=DEADLINE command. Be aware that jobs that use DEADLINE might start and end at any time, so just because the *I L command shows that no jobs are using deadline scheduling at the moment does not mean that no jobs use it. Therefore, you should also check the log for messages that start with IAT74xx (DEADLINE messages).
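A sketch of requesting deadline scheduling follows (job name and deadline type are illustrative; the deadline type, here A, must match a type defined on a DEADLINE statement in your inish deck):

```
//NIGHTLY  JOB  ACCT,'DEADLINE DEMO',CLASS=A
//*MAIN    DEADLINE=(0800,A)
//STEP1    EXEC PGM=IEFBR14
```

This asks JES3 to begin raising the job's priority according to the rules of deadline type A if it has not run by 8:00 AM.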
As processing capacity increased over the years, users have come to expect that their jobs will run as soon as they submit them, so this function is not as critical as it once was. If a job has specific resource dependencies, or needs to run at a certain time for chargeback reasons, that is generally controlled by using different job classes. Note that DEADLINE scheduling does not guarantee that the job will execute at the exact time you want. Some installations might find this function manually intensive to replace.
This is another one of those Positioning Moves referenced in 6.1.1, “Positioning moves” on page 90 that can be completed in advance of the cutover to JES2.
3.1.3 Priority aging
Jobs are selected to run based on job class, the available initiators in that class, and the priority within that job class queue. Priority aging helps jobs that were submitted on a system with an insufficient number of initiators. Periodically, as defined by the relevant parameter, if a job is still on the job queue, its priority is increased. This potentially gives it a better chance of being selected by an initiator, and is intended to ensure that low priority jobs do not languish in the job queue forever while higher priority jobs are continually selected ahead of them. It applies only to JES3-managed initiators.
 
Where this is enabled: SAGER/SAGEL and MAGER/MAGEL keywords on the SELECT INIT statement in the JES3 init deck.
In JES2, there is a similar function for changing the priority of delayed jobs. You can enable it by:
Specifying a priority on the /*PRIORITY JECL statement for JES2-managed initiators.
Using the PRTYHIGH=, PRTYLOW=, and PRTYRATE= keywords on the JOBDEF initialization statement.
For more information about this function, refer to the section titled “Job priority aging” in z/OS JES2 Initialization and Tuning Guide, SA22-7532.
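As a sketch (the values are illustrative, not recommendations), the JES2 initialization statement might look like this:

```
JOBDEF PRTYRATE=24,    /* Age eligible jobs 24 times per day */
       PRTYLOW=5,      /* Lowest priority subject to aging   */
       PRTYHIGH=10     /* Priority at which aging stops      */
```

With these settings, a waiting job's priority is incremented 24 times in a 24-hour period, but only within the PRTYLOW to PRTYHIGH range.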
However, many installations now use WLM-managed initiators. With WLM-managed initiators, WLM determines which job will be selected to run based on the job’s service class and the Performance Index of that service class. In that environment, the JES priority of the job is irrelevant once it is selected for processing. Before the job is selected, the JES priority can be changed, which might change the designated service class for that job.
Converting to WLM-managed initiators is a Positioning Move that can be completed before the move to JES2.
3.1.4 HSM early recall
As part of job setup processing, JES3 recalls non-SMS-managed data sets that have been migrated by DFSMShsm. One disadvantage is that, if a data set is recalled during setup, it might have been migrated again by the time the job executes.
JES2 only recalls migrated data sets at the time that they are referenced. This might have an impact on the elapsed time for the job if it has to wait for large data sets (particularly those resident on tape) to be recalled. But remember that this difference only applies to non-SMS-managed data sets. If most of your data is SMS-managed, there will be no change in recall behavior between JES2 and JES3.
z/OS 1.11 added the ability to control whether migrated data sets that are deleted in an IEFBR14 step will be recalled before they are deleted.
In z/OS V2R1, there is a parm in the ALLOCxx member of PARMLIB that allows HSM recalls in serial or parallel. The behavior is the same in JES2 and JES3:
BATCH_RCLMIGDS(SERIAL | PARALLEL)
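For example, to request parallel recalls you could code the following SYSTEM statement in your ALLOCxx member (a sketch; verify the placement against your installation's ALLOCxx conventions):

```
SYSTEM BATCH_RCLMIGDS(PARALLEL)
```

The setting can also be changed dynamically with the SETALLOC operator command, for example SETALLOC SYSTEM,BATCH_RCLMIGDS=PARALLEL, and the current allocation options can be displayed with D ALLOC,OPTIONS.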
3.1.5 Main Device Scheduling (MDS)
Main Device Scheduling (MDS) is a feature of JES3 that verifies that all the resources (devices, volumes, and data sets) needed by a job are available before that job goes into execution. It can be disabled at a system level by using SETUP=NONE, although jobs that specify //*MAIN SETUP= in their JCL can still override that setting.
Pre-execution setup (JOB setup)
This is the basic feature of JES3 for pre-allocation of all devices, including DASD and tapes. Job setup reserves all devices and mounts all volumes needed by a job before job execution. Job setup can be requested on a job-by-job basis by specifying SETUP=JOB on the //*MAIN statement or on the JES3 initialization statement STANDARDS. SETUP=JOB is the default setting for the STANDARDS statement. Also, the resources are only reserved from a JES3 setup perspective; no actual ENQs or RESERVEs are issued. More information about Main Device Scheduling is available in the chapter titled “Main Device Scheduling” in ABCs of z/OS System Programming Volume 13, SG24-7717.
 
Where this is enabled: If your STANDARDS statement includes SETUP=JOB or has no SETUP parm referenced (JOB is the default parm). It can also be specified at a job level by specifying SETUP=JOB on the //*MAIN JCL statement.
Job setup also provides data set awareness: it prevents an initiator from being assigned to a job if a required data set is currently allocated OLD elsewhere. See 3.2.1, “Data Set Name disposition conflict resolution” on page 47.
High water mark setup
This feature reduces the number of resources that are reserved for a job by determining the maximum, or high water mark, number of devices that will be used by any single step in the job. The data set awareness feature is of significant benefit in JES3, especially if you have limited the initiators in a group: it stops a job from consuming an initiator and then waiting for data sets.
 
Where this is enabled: The STANDARDS statement SETUP=xHWS (most likely THWS for tape) for a default system setting and a HWSNAME,TYPE= entry in the inish deck for each type of device that can be controlled. Also, overrides to this can be specified via //*MAIN card SETUP= parm in JCL.
For example: A job with three steps that requests two tape drives in the first step, followed by three tape drives in the second step, and one tape drive in the last step, would reserve, or allocate, a total of three tape drives for the job, because the maximum number of tape drives used by any step is three. Without this feature enabled, JES3 could attempt to allocate a total of six devices for the job. This is probably not what was intended, because JES3 would view those devices as not being available to other jobs. This could especially be an issue in the case of a long-running job, where it is only the last step that requires many drives.
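The job described above might be coded as follows (program and data set names are invented; TAPE is assumed to be an esoteric name covering your tape drives):

```
//TAPEJOB  JOB  ACCT,'HWS DEMO'
//*MAIN    SETUP=THWS
//STEP1    EXEC PGM=PGMA
//TAPE1    DD   DSN=A.B.C1,DISP=OLD,UNIT=(TAPE,2)
//STEP2    EXEC PGM=PGMB
//TAPE2    DD   DSN=A.B.C2,DISP=OLD,UNIT=(TAPE,3)
//STEP3    EXEC PGM=PGMC
//TAPE3    DD   DSN=A.B.C3,DISP=OLD,UNIT=TAPE
```

With SETUP=THWS in effect, MDS reserves three tape drives (the high water mark) for the job rather than six.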
Because JES2 does not provide an equivalent function to MDS, you should take action before any migration to JES2 to eliminate its use in JES3. If you do use MDS, it is likely only used for tape. If you use tape virtualization, it is reasonable to assume that you have more virtual tape drives than you ever use at one time, so disabling MDS will probably have no visible impact on job throughput. Nevertheless, it is prudent to make this change before the migration so that, if it does cause a problem, you can re-enable MDS while you investigate ways to address it.
Removing the use of MDS is one of the Positioning Moves described in 6.1.1, “Positioning moves” on page 90.
3.1.6 JES3 device control and device fencing
The original default was for JES3 to control device allocation, including tape and DASD. Device fencing, also known as device pooling, was a feature that could be used to isolate or reserve a certain set of devices for a certain set of jobs or groups, for example, if you wanted a certain group of jobs to use DASD at a remote location only.
For DASD device allocation, the current recommendation is to remove all devices from JES3 management by removing their definitions from the JES3 inish deck. Device fencing was most commonly used for tape drive allocations. However, with the combination of SMS-managed tape and tape virtualization, this is now less of a concern, and many customers no longer use JES3 to control their tape allocations.
 
Where this is enabled: The JES3 inish deck GROUP statement (part of generalized main scheduling or GMS), using the EXRESC subparameter or the DEVPOOL subparameter.
Removing tape and DASD from JES3 control would be a positioning move and is referenced in 6.1.1, “Positioning moves” on page 90.
3.1.7 Inish deck checker
The JES3 inish deck checker is used to validate the format of the JES3 initialization statements. This is more of an issue in JES3 than in JES2 because of the complexity of the JES3 initialization deck, especially if you use MDS. The IATUTIS program takes input from the IODF and uses that to validate the syntax of the inish deck. An example of this, including the JCL used to run it, can be found in 7.3.2, “Verifying the JES initialization deck” on page 104.
 
Where this is enabled: JCL using program IATUTIS.
In JES2, syntax checking of the initialization parameters can be performed by using a secondary JES2, a capability that does not exist in JES3: a second JES2 address space is started using the new parameters to verify the syntax. This is not necessarily a recommended solution.
Also, JES2 tends to be more forgiving of syntax errors in the JES2 initialization statements, providing the operator with an option to resolve any errors during initialization.
Moving from JES3 to JES2 requires a change in the process that you use to validate the JES initialization statements. However, this will only impact a few people.
3.1.8 JES3 Monitoring Facility
The JES3 Monitoring Facility (JMF) provides a number of reports that can be used by the system programmers or software support if there are any specific performance or tuning concerns in JES3. The reports provided are specific to JES3 and are not applicable to a JES2 environment, so an equivalent is not necessarily required.
 
Where this is enabled: Invoked by entering a form of the *X JMF command.
If you do find that you encounter performance issues within JES2 following a migration, JES2 has a similar function called the JES2 health monitor. While the concept is similar, the details will obviously be completely different for JES2 than they were for JES3. More information can be found in the appendix titled “The JES2 health monitor” in z/OS JES2 Initialization and Tuning Guide, SA22-7532.
3.1.9 Disk reader
The disk reader function provides the ability to submit JCL from a PDS to the internal reader using a JES3 command. There are many parameters available to control the set of jobs submitted.
 
Where this is enabled: By placing a //JES3DRDS DD card in the JES3 PROC or by specifying DYNALLOC,DDN=JES3DRDS,DSN=dsn in the JES3 inish deck. It is invoked by using the *X DR M= command, so a search in SYSLOG might be required to determine if this facility is being used.
In JES2, you could provide the equivalent capability by using IEBGENER to copy JCL from a data set or PDS member to the internal reader or have it implemented in a batch scheduling product.
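A sketch of the IEBGENER approach follows (the data set and member names are invented; INTRDR is the standard internal reader):

```
//SUBMIT   JOB  ACCT,'SUBMIT VIA INTRDR'
//COPY     EXEC PGM=IEBGENER
//SYSPRINT DD   SYSOUT=*
//SYSIN    DD   DUMMY
//SYSUT1   DD   DSN=MY.JOBS.PDS(DAILYJOB),DISP=SHR
//SYSUT2   DD   SYSOUT=(A,INTRDR)
```

IEBGENER copies the JCL in member DAILYJOB to the internal reader, which submits it as a new job.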
3.2 JES3 features
Following are fundamental features or characteristics of JES3 that cannot be enabled or disabled, but that are different from how JES2 handles the same situation.
3.2.1 Data Set Name disposition conflict resolution
JES3 performs Data Set Name (DSN) conflict resolution before execution of a job. If a job is submitted and will request access to a data set that is inconsistent with another job that is already using that data set, the newly submitted job will not be selected for execution until the data set is freed by the currently executing job. Instead, the job is placed in the JES3 allocation queue. For example, if the new job requests exclusive access to a data set (DISP=OLD or DISP=MOD), and that data set is already in use by another job, the new job will not start executing.
Operators can display the JES3 queues by entering an *I S command. If there are jobs in the allocation queue, you can determine why they are there by entering a form of the *I S A J=nnnn command.
During job execution, if a job requests dynamic allocation of a data set that is already in use, the behavior of the JESs is the same and you will get messages similar to those shown in Example 3-1.
Example 3-1 Message when there is a DSN conflict
IEF861I FOLLOWING RESERVED DATA SET NAMES UNAVAILABLE TO jobname
IEF863I DSN=test.dsname jobname RC = 04
IEF099I Job jobname waiting for datasets
In JES2, the data set needs of a job are not considered when JES2 decides whether a job is selected for execution. As a result, if you migrate to JES2, you should expect to see the “waiting for data set” message more often. Some tuning of your batch schedules might help reduce the incidence of jobs contending over data sets. After you migrate to JES2, the IBM RMF™ Enqueue Delay Report might help you identify data sets that are experiencing contention. You might also find that you need to increase the number of initiators in JES2 to allow for the fact that some initiators might be occupied by jobs that are waiting for another job to release some required resource. Additionally, any automation that is triggered on this message would need to be reviewed and possibly changed.
You also have the option to issue a SYSDSN ENQ downgrade, which reduces control from exclusive to shared, allowing access by other jobs.
3.2.2 Job class groups
A job class group is a named set of resource assignment rules to be applied to a group of job classes. System programmers define job class groups on JES3 initialization statements. They establish a link between a job class group and a job class by specifying a job class group name when they define the job classes. The job class group definitions in the initialization deck provide information about the resources that can be used by the set of jobs that are currently running.
Job class groups work differently in JES2, so conversion is needed.
3.2.3 Single point of control
The design of JES3 is that there is a primary, or single, point of control in a multi-system environment, which is known as the JES3 Global system. All other systems are designated as JES3 Locals. The behavior and processing of the JES3 Locals is controlled by the Global.
In a JES2 configuration, each system operates independently. Manipulation of spool files can be performed by any system in the sysplex. However, functions such as selecting a job for execution are performed independently by each system.
Both the operators and the system programmers require some time to get familiar with this different behavior. To ease the migration, you might decide to connect all JES2-managed peripheral devices (printers) to one system in the JESplex, so that all printer control can still be performed from just one system.
3.2.4 Printer naming conventions usage
In JES3, there are no restrictions on printer naming conventions. In JES2, however, there are conventions that must be followed related to the names that you can assign to your printers. As a result, it is possible that the printer names you use in JES3, while more meaningful to a human, will not be acceptable to JES2.
You might be able to circumvent this issue by using JES2 destination IDs that match your old printer names. This issue probably has more impact on the operators because they would need to become familiar with the new printer names.
3.3 On-the-surface convergence
As the years pass, one can notice a convergence between JES2 and JES3. However, the internal structures are still quite different, hence the writing of this book to help plan a migration.
Figure 3-1 on page 49 shows how things have been evolving in terms of functions available in JES3 and JES2.
Figure 3-1 JES2 - JES3 surface convergence
Here is some history behind the diagram:
At the very beginning, JES3 accomplished its functions based mainly on removable disk and tape volumes. Nowadays, removable media belong to the past century: tapes are mostly virtual, so there are no longer operators being asked to dig a tape volume out of the basement. For a discussion of the TS7700 Virtualization Engine support that is available today, refer to IBM Virtualization Engine TS7700 R3.0, SG24-8122. In general, support for the TS7700 Virtualization Engine is simpler with JES2 because the inish deck setup required with JES3 (the special library and device-related esoterics) is not used with JES2. Instead, allocation requests are driven entirely through the SMS ACS routines.
With MVS 5.2, new dispatchable units (enclaves) were introduced that are not handled by any job entry subsystem, but are still assigned priorities, and so on, by WLM.
Dynamic allocation (SVC 99) is the highly preferred way of allocating resources for subsystems such as IBM DB2®; there are no more DD cards for any job entry subsystem to handle.
In IBM OS/390® V2R4, WLM was given direct management of initiators in a way close to JES3 principles. JES2 benefited from this, which narrowed the gap between the two converging job entry subsystems.
The successful integration of UNIX System Services into MVS opened up a large avenue of applications for which the natural access to data is the UNIX hierarchical file system, in which no JES is involved.
The recent availability of hybrid platforms embodied in zBX boxes offers a new view of workloads, provided by the Unified Resource Manager. This is outside the domain of a function like JES.
As an example of this on-the-surface convergence, in the next section we describe JES2 support for the TS7700 virtual tape server. It was made available in z/OS V1R13, and in a slightly different manner for JES3 in z/OS V2R1.
3.3.1 JES3 role in today’s zWorld
We can recap the JES3 role as follows:
Originally in charge of keeping track of resource availability, JES3 knows how to assign resources to batch jobs, based on an image of the IOCDS contents that a tool can automatically provide.
Based mostly on the MVS catalog, JES3 knows the availability and shareability of MVS data sets.
The availability of SMS-managed data sets can be known to JES3 indirectly, through an interaction with DFSMS.
USS files are outside its scope, and their allocation cannot be tracked, among other reasons because USS files are not allocated in the way MVS data sets are.
Starting with z/OS V2R1, JES3 can now interact with a TS7700, providing more appropriate tape allocation.
Following its traditional load management, JES3 can use both JES and WLM initiators to exploit the hardware resources according to the SLA.
Batch parallel recall
To facilitate interaction between JES and SMS in z/OS V2R1, DFSMShsm can help determine whether data sets to be allocated have been migrated.
For DFSMShsm-migrated data sets, allocation can now be planned to:
Issue recall requests during step initiation
Or issue recalls for all data sets in the job and wait for the recalls to complete
A new ALLOCxx keyword enables this function, and SETALLOC command support is provided.
3.3.2 Job correlator
With JES2 in z/OS V2R1, a correlator can be specified on the JOB card. This provides a unique 64-byte token, the correlator, for each job in a sysplex. The job correlator:
Provides a larger name space for jobs (in addition to classical jobname).
Helps relate jobs to output and other records.
Provides a simple way for applications to determine the Job ID of a job that was just submitted.
Is available with the z/OSMF REST API.
In JES3 environments, this UJOBCORR parameter is accepted but ignored.
The job correlator (UJOBCORR parameter on the JOB card) is a 64-byte token that uniquely identifies a job to JES. The job correlator is composed of a 32-byte system portion, which ensures a unique value, and a 32-byte user portion, which helps identify the job to the system. The UJOBCORR parameter specifies this 32-byte user portion of the job correlator. The UJOBCORR value can be overridden when the job is submitted by using the appropriate JES2 exits.
The job correlator is used to identify the job in multiple interfaces, including:
JES operator commands
ENF messaging
Subsystem interfaces such as extended status and SAPI
SMF records
In the following example, the user portion of the job correlator is set to JMAN_COMPILE:
//TEST JOB 333,STEVE,UJOBCORR=’JMAN_COMPILE’
Subsequently, this value will be combined with the system portion of the correlator to form a job correlator similar to the following example:
J0000025NODE1...C910E4EC.......:JMAN_COMPILE
|<-system portion----------------------->||<-user portion--------------->
3.3.3 MVS resource serialization: JCL examples
With z/OS V2R1, MVS resource serialization has been enhanced: not only the ENQ interface, but also JCL support has been provided. New functions like these generally appear first in JES2 and only later in JES3.
As depicted in Figure 3-2, starting with z/OS V2R1, a multiple-step job can change an exclusive ENQ to a shared ENQ for a given data set after the last job step with DISP=OLD, MOD, or NEW has ended. A new JES2 job class parameter can be specified: DSENQSHR=AUTO | ALLOW | DISALLOW
There is also a new JOB statement parameter, DSENQSHR=ALLOW, for use with job classes that specify ALLOW.
Figure 3-2 Dynamic ENQ downgrade
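A sketch of how the pieces fit together follows (the class, program, and data set names are invented); the job class must allow the downgrade, and the job requests it:

```
JOBCLASS(A)  DSENQSHR=ALLOW          /* JES2 initialization */

//UPDJOB  JOB  ACCT,'ENQ DOWNGRADE',CLASS=A,DSENQSHR=ALLOW
//STEP1   EXEC PGM=UPDPGM
//MASTER  DD   DSN=PROD.MASTER.FILE,DISP=OLD
//STEP2   EXEC PGM=RPTPGM
//MASTER  DD   DSN=PROD.MASTER.FILE,DISP=SHR
```

After STEP1, the last step that references PROD.MASTER.FILE with DISP=OLD, the SYSDSN ENQ is downgraded to shared, so other jobs can access the data set while STEP2 runs.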
3.3.4 JES2 symbols for instream data
z/OS V2R1 introduces new support for JES2 symbols within instream data. As depicted in Figure 3-3 on page 52, this new capability is externally provided by a new step-level EXPORT statement that lists the system and JCL symbols available to be resolved, and a new SYMBOLS keyword for DD * and DD DATA to control substitution.
Figure 3-3 JES2 symbols for instream data
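A sketch of the externals follows (the symbol and data set names are invented):

```
//SYMJOB   JOB  ACCT,'SYMBOLS DEMO'
//         EXPORT SYMLIST=(DSN)
//         SET    DSN=PROD.INPUT.FILE
//STEP1    EXEC PGM=IKJEFT01
//SYSTSPRT DD   SYSOUT=*
//SYSTSIN  DD   *,SYMBOLS=JCLONLY
  LISTDS '&DSN'
/*
```

The EXPORT statement makes the DSN symbol eligible for substitution, and SYMBOLS=JCLONLY on the DD * statement causes &DSN in the instream data to be replaced with PROD.INPUT.FILE.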
3.3.5 New PARMDD EXEC keyword
z/OS V2R1 introduces a new PARMDD EXEC keyword to support longer parameter strings. As shown in Figure 3-4, this new capability has these characteristics:
Mutually exclusive with PARM keyword.
No other changes required for unauthorized programs.
Authorized programs must be link-edited with the LONGPARM attribute, or the system will terminate the job at step initiation.
Supports parameter lists 1 - 32760 bytes long.
Figure 3-4 New PARMDD EXEC keyword
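A sketch of the usage follows (MYPGM is a hypothetical unauthorized program; the parameter string is illustrative):

```
//LONGPARM JOB  ACCT,'PARMDD DEMO'
//STEP1    EXEC PGM=MYPGM,PARMDD=PARMS
//PARMS    DD   *
OPTION1=YES,OPTION2=NO,TRACE=FULL,LOG=DETAILED
/*
```

PARMDD names a DD statement whose contents become the parameter string, which can be up to 32760 bytes, far beyond the 100-character limit of the PARM keyword. An authorized program would additionally need the LONGPARM link-edit attribute.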
3.3.6 JES new functions in z/OS V2R1
Both JES2 and JES3 are enhanced in z/OS V2R1 in a similar manner (at least externally).
Expansion to 8-character job class name
With z/OS V2R1, both JES2 and JES3 add support for 8-character alphanumeric job class names on the JOB JCL statement.
JES3 supports 8-character job classes via JECL:
//*MAIN CLASS=xxxxxxxx
When CLASS is coded on the //*MAIN statement, JES3 continues to override the CLASS value from the JOB statement.
Note: A migration task would be to convert to having all CLASS= statements on the JOB card.
JES2 also supports creating up to 512 job classes. JES3 continues to support a maximum of 255 job classes.
In-stream data support
With z/OS V2R1, JES3 adds support for in-stream data sets in PROC and INCLUDE statements. This support is similar to what was introduced in z/OS V1R13 for JES2.
New keyword on the JOB JCL statement
With z/OS V2R1, JES2 and JES3 both add support for new SYSTEM and SYSAFF keywords on the JOB JCL statement for directing jobs to specific systems.
For the JES3 environment, the following parameters must be consistent with the SYSTEM or SYSAFF parameter, or JES3 will terminate the job:
For the CLASS parameter on the JOB or //*MAIN statement, the requested processor must be assigned to execute jobs in the specified class.
All devices specified on DD statement UNIT parameters must be available to the requested processor.
The TYPE parameter on the //*MAIN statement must specify the system running on the requested processor.
Dynamic support programs requested on //*PROCESS statements must be able to be executed on the requested processor.
If any DD statement UNIT parameter in the job specifies a device-number, either a SYSTEM or SYSAFF parameter must be coded or the JES3 //*MAIN statement must contain a SYSTEM parameter.
MVS SSI 80 64-bit storage requests
With z/OS V2R1, JES2 and JES3 both support requests for 64-bit storage. MVS JES-neutral SSI request 80 (IAZSSST) includes new 64-bit fields.
Access controls on job classes
With z/OS V2R1, JES2 and JES3 both add support for SAF control over job class usage, using new profiles in the JESJOBS class. Many users had an exit for this function, which is no longer needed.
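For example, with RACF you might control submission to job class A as follows (a sketch, assuming the JOBCLASS.nodename.class.jobname profile format; the node name N1 and group name PRODBAT are invented):

```
RDEFINE  JESJOBS JOBCLASS.N1.A.* UACC(NONE)
PERMIT   JOBCLASS.N1.A.* CLASS(JESJOBS) ID(PRODBAT) ACCESS(READ)
SETROPTS RACLIST(JESJOBS) REFRESH
```

With these profiles in place, only users connected to PRODBAT can submit jobs in class A on node N1.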
3.4 Checklist
This section contains a simple sample checklist that you can use to keep track of which JES3 functions you are using. You should probably extend this list with functions that are provided by any JES3 user exits that you use.
Table 3-1 JES3 unique functions checklist
Function                                          Being exploited?
------------------------------------------------  ----------------
Dependent Job Control
Deadline Scheduling
Priority aging
Early HSM recall
JES3 Device Control and Device Fencing
Main Device Scheduling
Inish Deck Checker
JES3 Monitoring Facility
Disk Reader
Data Set Name Disposition Conflict Resolution
Job class groups
Single Point of Control
Printer naming rules
