Planning
This chapter introduces the planning considerations for setting up an IBM Db2 Mirror environment.
It provides information about the requirements; important planning decisions to be made by the user for the configuration of the Db2 Mirror environment, including the scope of replication; and considerations pertaining to application design, performance, backup and recovery, and more.
2.1 Db2 Mirror setup overview
Db2 Mirror is initially set up on a single partition that is called the setup source node. During the setup process, the setup source node is cloned to create a second node of the Db2 Mirror pair, and it is called the setup copy node. The setup copy node is configured and initialized automatically by Db2 Mirror as part of its setup process.
 
Note: To set up Db2 Mirror, a third IBM i node is required, which is used as the management node for initiating the initial setup for a Db2 Mirror pair by using the Db2 Mirror GUI or the Db2 Mirror SQL services and setup tool.
Figure 2-1 shows a setup overview for a Db2 Mirror environment, including a third node for the setup. This third node can also be used after the setup as a management node and quorum for the Db2 Mirror cluster.
Figure 2-1 Setup process overview
For more information about this setup process, see 3.1, “Db2 Mirror setup process” on page 38.
2.2 Hardware and software requirements
This section provides an overview of the Db2 Mirror requirements. For a comprehensive list of the requirements for Db2 Mirror, see IBM Knowledge Center.
2.2.1 IBM Power System hardware requirements
Db2 Mirror has hardware requirements for both Power Systems servers and network adapters.
Power Systems servers
The two nodes of a Db2 Mirror pair must run on POWER8 processor-based servers or later.
From a resiliency perspective, host the two nodes of a Db2 Mirror pair on two separate Power Systems servers.
There is no requirement that both servers have the same hardware configuration. However, as a best practice, plan for adequate performance capabilities for each of the two servers to support the application workload from both nodes in case either Db2 Mirror node becomes unavailable.
Db2 Mirror can be configured and run on a single Power Systems server, but that diminishes the goal of continuous availability if the server has an outage. Table 2-1 shows the minimum server hardware and firmware level requirements for Db2 Mirror.
Table 2-1 Power Systems and firmware levels that are supported with Db2 Mirror
Power Systems server    Firmware level
POWER8                  FW860.60 or later
POWER9                  FW930 or later
Network adapters
Db2 Mirror requires Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) adapters that provide a fast, high-bandwidth network between the nodes. RoCE adapters, either physical or virtualized, must be installed in both partitions. The adapters can be directly interconnected by using a cable or optionally connected through a RoCE switch. The maximum physical distance between the two nodes is limited by the length of the cables.
Only POWER9 processor-based servers support RoCE in single root I/O virtualization (SR-IOV) mode, which allows the adapter to be virtualized and shared between different IBM i logical partitions (LPARs). Up to eight IBM i LPARs can share one RoCE adapter port.
The network switch must support the RoCE V1 communication protocol.
Table 2-2 lists all the supported network adapters. A 100 Gb adapter is a best practice.
Table 2-2 RoCE adapters that are supported by Db2 Mirror for i on Power Systems servers
RoCE adapter                                      POWER8 processor-   POWER9 processor-
                                                  based system        based system
PCIe3 2-port 100GbE NIC & RoCE QSFP28 Adapter     X                   X
(FC EC3L and EC3M; CCIN 2CEC)
PCIe3 2-port 10Gb NIC & RoCE SR/Cu adapter        NA                  X
(FC EC2R and EC2S; CCIN 58FA)
PCIe3 2-port 25/10 Gb NIC & RoCE SFP28 adapter    NA                  X
(FC EC2T and FC EC2U; CCIN 58FB)
PCIe4 2-port 100GbE RoCE x16 adapter              NA                  X
(FC EC66 and EC67; CCIN 2CF3)
2.2.2 Hardware Management Console requirements
The Db2 Mirror nodes must be managed by a Hardware Management Console (HMC).
The IBM i partition that is used as the setup copy node for the cloning process must be created before beginning the Db2 Mirror configuration process. The partition does not need to be installed with the IBM i operating system (OS), but it does need to be defined on the HMC and have all its hardware assigned to it (such as RoCE adapters and LAN adapters). This partition can be defined on the same HMC or on a different HMC from the setup source node.
Table 2-3 shows the required minimum HMC firmware levels.
Table 2-3 Minimum HMC firmware levels that are required for Power Systems servers
IBM POWER® processor-based systems    Minimum firmware level
POWER8                                V8R8.6.0
POWER9                                V9R1.930.0
2.2.3 Software requirements
This section describes the necessary software requirements for Db2 Mirror.
IBM i operating system
Db2 Mirror requires IBM i 7.4 or later.
Before beginning the Db2 Mirror setup process, install the latest levels of the following IBM i program temporary fix (PTF) groups on the setup source node:
IBM i cumulative PTF package: SF99740 (IBM i 7.4)
Db2 Mirror PTF group: SF99668 (IBM i 7.4)
Db2 for IBM i PTF group: SF99704 (IBM i 7.4)
On the managing node, install the latest level of the following PTF groups:
Db2 Mirror PTF group: SF99668 (IBM i 7.4)
IBM HTTP Server for i PTF group: SF99662 (IBM i 7.4)
Java PTF group: SF99665 (IBM i 7.4)
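Before starting the setup, you can verify the installed PTF group levels with the WRKPTFGRP CL command, for example:

```
WRKPTFGRP PTFGRP(SF99740)  /* IBM i cumulative PTF package */
WRKPTFGRP PTFGRP(SF99668)  /* Db2 Mirror PTF group         */
WRKPTFGRP PTFGRP(SF99704)  /* Db2 for IBM i PTF group      */
```

The command shows the level and status of each installed PTF group so that it can be compared against the latest available level.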
IBM i options and products
Product license keys are required for each of the two nodes of a Db2 Mirror pair.
Add all the license keys that are required for the setup source node and the setup copy node on the setup source node. The keys are automatically applied after the clone completes.
Required setup source and managing nodes products
The setup source node and the managing node must have the options and products that are shown in Table 2-4 installed.
Table 2-4 Required IBM i products and options for setup source and managing nodes
Product and option    Description                               Setup source node   Managing node
5770SS1 Option 3      Extended Base Directory Support           X                   X
                      (for jt400.jar)
5770SS1 Option 12     Host Servers                              X                   X
5770SS1 Option 26     IBM DB2® Symmetric Multiprocessing        X (optional)
                      (for the parallel degree that is used
                      by resynchronization)
5770SS1 Option 30     Qshell                                    X                   X
5770SS1 Option 33     Portable Application Solutions            X                   X (optional)
                      Environment (PASE)
                      (required on the setup source node for
                      time synchronization by using Network
                      Time Protocol (NTP); required on the
                      managing node for cloning IBM
                      Spectrum® Virtualize storage)
5770SS1 Option 34     Digital Certificate Manager (DCM)         X (optional)
                      (required on any node that is running
                      a cluster monitor)
5770SS1 Option 41     High Availability Switchable Resources    X
                      (required for the cluster device
                      domain)
5770SS1 Option 48     Db2 Data Mirroring                        X
5770JV1 *BASE         IBM Developer Kit for Java                X                   X
5770JV1 Option 16     Java SE 8 32-bit                          X                   X
5770JV1 Option 17     Java SE 8 64-bit                          X                   X
5733SC1 *BASE         IBM Portable Utilities for i              X                   X
5733SC1 Option 1      OpenSSH, OpenSSL, and zlib                X (optional)        X
5770DG1 *BASE         IBM HTTP Server for i                     X                   X
5770DBM *BASE         IBM Db2 Mirror for i                      X                   X
5770DBM Option 1      Db2 Mirror Enablement                     X
                      (not required for the managing node)
DS Command-Line       DS8000 Command-Line Interface                                 X (optional)
Interface (DS CLI)    (required on the managing node for
                      cloning IBM System Storage DS8000)
Open source packages
These required packages are delivered as Red Hat Package Manager (RPM) packages and are installed by using the YUM installer or IBM i Access Client Solutions:
python2-six-1.10.0-1.ibmi7.1.noarch.rpm
python2-ibm_db-2.0.5.8-1.ibmi7.1.ppc64.rpm
cloud-init-1.2-0.ibmi7.1.ppc64.rpm
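If the IBM i open source environment is already installed, the listed packages can also be installed from a PASE shell by using YUM. The following sketch assumes the default YUM installation path:

```
STRQSH CMD('/QOpenSys/pkgs/bin/yum install python2-six python2-ibm_db cloud-init')
```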
Figure 2-2 shows the IBM i Access Client Solutions product managing the open source package installation and update.
Figure 2-2 Open source package management option from IBM i ACS
Supported browsers for the Db2 Mirror GUI
Chrome
Edge
Firefox
Safari
2.3 Objects eligible for replication
The Db2 Mirror environment supports replication of the object types that are typically updated in user applications.
2.3.1 IBM i objects
Table 2-5 lists the IBM i object types that are eligible for replication by Db2 Mirror. For more information about the replication for specific object types, see IBM Knowledge Center.
Table 2-5 IBM i object types eligible for Db2 Mirror replication
Object type      Description
*AUTL (1)        Authorization list
*DTAARA          Data area
*DTAQ            Data queue
*ENVVAR (2)      Environment variable
*FCNUSG (1, 2)   Functional usage
*FILE (3)        File
*JOBD            Job description
*JOBQ            Job queue
*JRN             Journal
*LIB             Library
*OUTQ (4)        Output queue
*PGM             Program
*SECATR (1, 2)   Security attribute
*SQLPKG (5)      SQL package
*SQLUDT          SQL user-defined type
*SQLXSR          SQL XML schema repository
*SRVPGM          Service program
*SYSVAL (2)      System value
*USRPRF (1)      User profile

1. Objects of this type are always included in replication through Db2 Mirror.
2. This is a pseudo-object type that is defined for use by Db2 Mirror. The default inclusion state does not apply to this type.
3. Device files, except for SQL aliases, cannot be replicated.
4. Spooled files are not specified as replicated objects. An output queue can be specified as a replicated object, which causes all spooled files within that output queue to be replicated.
5. Extended dynamic packages cannot be replicated.
2.3.2 Definition only
You might need to replicate only the definition of an object and not replicate its content. This option is supported within Db2 Mirror for the *DTAQ, *FILE, and *LIB object types. For a data queue, only the definition can be replicated; the content of a data queue is never replicated. A file can be replicated as DEFINITION only, or its content can be replicated as well.
2.3.3 Integrated File System objects
Integrated File System (IFS) objects are made accessible on both Db2 Mirror nodes by using a different technology than the replicated IBM i object types. To be accessible, the IFS objects must be contained within an Independent Auxiliary Storage Pool (IASP), and the IASP must be part of a PowerHA cluster resource group (CRG).
2.4 Replication rules
The definition of the list of objects to be replicated between a Db2 Mirror node pair is determined by three criteria:
Default inclusion state, which is defined at initial setup.
System rules.
The replication criteria list (RCL), which is user-defined.
The combination of these criteria defines replication rules for IBM i objects in the Db2 Mirror environment.
2.4.1 Default inclusion state
The default inclusion state is defined during the initial Db2 Mirror setup and cannot be changed except by recloning the System Auxiliary Storage Pool (SYSBAS) or a database IASP (DB IASP) and removing all existing rules, as described in 4.4, “Recloning” on page 65.
This is the highest level rule, and it has two possible settings:
Exclude: No objects are replicated by default. If you want specific objects or libraries to be included in replication, you must create rules to identify the objects that are to be included.
Include: All eligible objects are replicated by default. You can create more rules to exclude objects from replication.
You can select the exclude option for the default inclusion state if there is no general need to replicate all eligible objects from SYSBAS or the IASP. Then, you can add rules for objects or groups of objects that must be replicated between both nodes.
Figure 2-3 shows the Db2 Mirror for i GUI view of the initial setup where the Default Inclusion State is defined. It lists all configured IASPs on the system, such as PRIMASP. The option to include or exclude IASPs in the Db2 Mirror replication must be selected.
Figure 2-3 Default inclusion state during the initial setup
2.4.2 System rules
When the default inclusion state is set for SYSBAS or for a DB IASP, Db2 Mirror adds system-defined object rules to the RCL. These rules are for objects that are always replicated by Db2 Mirror or for objects that will never be replicated by Db2 Mirror. These rules cannot be modified or overridden by a user rule. Examples of system rules that are added by Db2 Mirror are related to user profiles, authorization lists, and system values.
Figure 2-4 shows the Manage Replication List view for the system rules, which are filtered by user profile, system value, and include in replication.
Figure 2-4 Manage Replication List: system rules
2.4.3 Replication criteria list
The RCL is a rules engine that is used by Db2 Mirror to determine whether an object should be replicated.
The RCL consists of a set of rules identifying groups of objects that should or should not be replicated. The rules, which are combined with the default inclusion state and system rules, provide a concise process to determine the replication status of any existing or future object.
Because Db2 Mirror supports objects in SYSBAS and objects that are within a DB IASP, an independent RCL exists for each IASP that is registered as a DB IASP with Db2 Mirror.
Figure 2-5 shows the Db2 Mirror GUI dialog box Manage Replication List – Rules for managing the RCL and its rules. Because a default inclusion state of exclude was configured, the user must add rules to include the objects to be replicated between the Db2 Mirror node pair.
Figure 2-5 Manage Replication List with the default inclusion state of exclude for *SYSBAS
2.5 Object tracking list suggested priorities
The user can optionally define user-suggested priorities for objects that Db2 Mirror considers when resynchronization is required. By defining suggested priorities, the user can define a data resiliency precedence for selected data based on its importance from a business perspective.
Db2 Mirror determines the order in which objects are resynchronized based on their object type and object dependency on other objects. The suggested priority is defined as a positive integer number with 0 being the highest priority, but it is only a relative and not an absolute priority. The default priority for objects is the null value, which indicates that no user suggested priority is defined.
Figure 2-6 shows an example of a user-suggested priority where the user defined the highest resynchronization priority for all objects from the ITSOLIB library.
Figure 2-6 Db2 Mirror suggested priorities
2.6 Application considerations
Db2 Mirror provides an active-active clustering solution with synchronous data replication between both nodes. It is designed so that the recovery point objective (RPO) and recovery time objective (RTO) for outages can be reduced to near zero.
The design of your production applications in terms of active-active support affects the RTO that is possible with Db2 Mirror for i. Changes to your application architecture can provide added benefits in a Db2 Mirror environment.
Here is a list of items that must be considered:
Database replication jobs and activation groups that are used by Db2 Mirror.
JTOpen Java Database Connectivity (JDBC) driver support for application availability.
Job queues and job scheduled entries considerations with Db2 Mirror.
Querying data from a replicated database file and Db2 Mirror status.
Database trigger considerations.
Db2 Mirror exit points.
Application behavior when the replication state is BLOCKED.
XA distributed transaction environment.
IBM MQ and Db2 Mirror.
You can find more information about these considerations at IBM Knowledge Center.
2.7 Quorum
A quorum node for a Db2 Mirror cluster serves as an arbitrator for determining a node's correct replication state after both the primary and secondary Db2 Mirror nodes have been down.
Quorum data, including each node’s last replication state, is kept synchronized among all
Db2 Mirror cluster nodes so that if the partner node is down when a node performs an initial program load (IPL), the quorum node (a third node of the Db2 Mirror cluster) helps to determine a node’s correct replication state of TRACKING or BLOCKED.
 
Note: Configuring a quorum node for a Db2 Mirror cluster is optional but highly recommended because it provides extra resiliency to properly recover from dual-node outages. Without a quorum node, both nodes would be set to a BLOCKED state when they start, and the Db2 Mirror administrator would need to decide to force one of the nodes to a TRACKING state.
The quorum node does not require extra resources for its role as a quorum node, and it does not require a Db2 Mirror license. A Db2 Mirror management node or a node of a high availability (HA) and disaster recovery (DR) (HA/DR) configuration may be designated as a quorum node. However, because the quorum node must be part of the Db2 Mirror cluster, it is dedicated to that cluster and cannot be shared among multiple Db2 Mirror clusters.
2.8 Cluster monitors
A configuration of a cluster monitor on a Db2 Mirror cluster node allows the node to query its HMC through a REST API interface for node failure conditions to help distinguish between a possible communication failure or an actual failure of the other node.
The cluster monitor is also used for automatic file system mutations of registered IFS IASPs, that is, a mutation from a client file system instance to a server file system by an IASP failover or from a server file system instance to a stand-alone file system.
 
Note: Adding a cluster monitor to the quorum node is a best practice because it helps correctly set a node’s replication state after recovery from situations where both Db2 Mirror nodes failed concurrently.
A cluster monitor can be configured during the initial Db2 Mirror setup or added later, as shown in Figure 2-7. It requires manually configuring a system certificate store on the node before you use the IBM i DCM GUI to store the HMC’s certificate for secure REST API communications.
Figure 2-7 Db2 Mirror GUI Manage Cluster
2.9 Network Time Protocol server
Db2 Mirror requires NTP-based system time synchronization between the primary and secondary nodes. Both nodes also must be configured with the same time zone setting. The time synchronization between the Db2 Mirror nodes is especially important for logging, temporal tables, and time-based applications.
There are three options for configuring time synchronization at Db2 Mirror setup:
External time server: Both nodes reference the same external NTP server, which can be either a public one or one within the customer's network.
Chained time server: The setup source node is configured as a Simple Network Time Protocol (SNTP) server and references an external NTP server. The setup copy node references the setup source node, but also references the external time server as a non-preferred server.
Peer time server: The setup copy node references the setup source node, which is configured as an SNTP server.
 
Note: If you reference more than one external time server, configure at least four external time servers to help ensure that the times that the IBM i NTP client receives from the different time sources fall into a small enough range so that the client can figure out which one is correct.
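As a sketch of the external time server option, the SNTP client on each node can be configured with CL commands similar to the following ones (the time server host name and time zone value are examples only):

```
CHGNTPA RMTSYS('ntp.example.com')             /* Reference the external NTP server     */
CHGSYSVAL SYSVAL(QTIMZON) VALUE('QN0500EST5') /* Same time zone setting on both nodes  */
STRTCPSVR SERVER(*NTP) NTPSRV(*CLIENT)        /* Start the NTP client                  */
```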
2.10 Performance
The synchronous replication of Db2 Mirror for database type objects and the IFS mutable file system client/server model demand some considerations about performance:
With synchronous replication, the complete I/O path length for database type object change operations increases because the operation must drive I/O on both nodes to finish. A typical increase by a factor of around 2 - 3 is expected, with single-threaded or serial I/O workloads being the most impacted.
Database read workloads are not impacted because they are served locally and not replicated.
The ability to balance the workload by running transactions on both nodes mitigates the per transaction impact with a target of achieving equal to or greater transactional throughput compared to a single non-mirrored system.
For IFS IASPs, the performance might differ depending on which node a file system operation is being initiated from. Users on the server node where the IASP is varied on might experience faster response times than users on the client node.
2.11 Supported storage
Db2 Mirror supports external SAN storage only for both of its nodes.
This requirement exists because, during the initial setup of Db2 Mirror, the setup source node is cloned to the setup copy node, and this cloning is supported only for external SAN storage.
For planning purposes, the following IBM statement of general direction from the IBM i 7.4 TR1 announcement might be helpful:
“IBM plans to introduce the support for internal disk as a storage option to the Db2 Mirror for i product.”
Db2 Mirror supports automatic cloning for IBM System Storage DS8000 series or
IBM Spectrum Virtualize based storage like IBM Storwize® and IBM SAN Volume Controller by either using IBM FlashCopy® (if you use a single storage system for both the setup source and setup copy node) or remote copy replication (if you use two storage systems). For other
IBM i supported SAN storage systems, Db2 Mirror provides a manual copy option for the cloning where the user is required to manually perform the volume copy operations on the storage systems.
2.12 Migration considerations
Depending on the configuration of the existing IBM i environment, different considerations apply for the migration to Db2 Mirror.
IFS migration
As described in 1.3.6, “Database IASP versus IFS IASP” on page 7, if IFS data must be made simultaneously accessible from both Db2 Mirror nodes, it must be included in an IASP that must be registered as an IFS IASP with Db2 Mirror.
IASP migration
An existing IASP containing both IFS data files and database type objects that must be simultaneously accessible from both Db2 Mirror nodes must be split into two IASPs: one for the IFS data that must be registered as an IFS IASP, and the other one for the database objects that must be registered as a DB IASP with Db2 Mirror.
Clustering migration
Cluster information on a setup source node that was previously used in a PowerHA environment is preserved by the Db2 Mirror setup. On the setup copy node, this cluster information is deleted after the cloning so that the node can join the existing cluster.
2.13 Backup and recovery considerations
Some special considerations apply for save and restore operations in Db2 Mirror that the user should be aware of to implement a successful backup and recovery strategy.
Backup considerations
When using an IFS IASP, save and restore operations must be performed on the server node where the IASP is varied on. Save and restore for the IFS IASP cannot be performed from the client node.
A full-system save (by using GO SAVE menu option 21 or the BRMS backup control group *SYSTEM) should be performed for the primary node. Unless it is performed from a FlashCopy image, the save is considered a planned outage because it involves going into a restricted state. Therefore, perform a Db2 Mirror role swap before and after the system save so that production work can continue while the save runs on the primary node.
The reason that the primary is the preferred system for a full-system save is that it is the system that is restored and then cloned if a full recovery of the Db2 Mirror environment is required. The underlying presumption is that the primary, with its personality and non-mirrored data, is the favored system in a typical production environment.
Before running the backup from the primary node, the user should verify that the data area QTEMP/SRMIRCTL does not exist (see the following bullet).
For a backup of the secondary node, only non-replicated user data must be backed up (use the GO SAVE menu option 23 or BRMS backup control *BKUGRP).
To omit replicated data from the backup, create a data area that is named QTEMP/SRMIRCTL as follows:
CRTDTAARA DTAARA(QTEMP/SRMIRCTL) TYPE(*CHAR) VALUE('1')
The QUSRBRM library containing the BRMS save history information is excluded from Db2 Mirror replication by a system-defined rule. When using BRMS, it is a best practice to include both Db2 Mirror nodes in a traditional BRMS network for the synchronization of save history information.
Independent from Db2 Mirror, for IFS backups it is a best practice to use separate BRMS backup control groups for SYSBAS and each IASP. This approach enforces BRMS to create separately saved items for the IFS data to help prevent duplicate restores causing longer recovery times and undesired restores into SYSBAS if the IASP is not available.
Restore considerations
To perform a complete system restore for a Db2 Mirror environment from a system backup that was created by using the backup strategy that is described in “Backup considerations” on page 34, complete the following steps:
1. Recover the primary node.
2. Reconfigure Db2 Mirror for the secondary node (which includes recloning).
3. Recover the secondary node’s user data.
For a selected restore of objects that are always replicated by Db2 Mirror like user profiles, authorization lists, and information like function usage information and security attribute information, more considerations apply to keep object ownership and authorities synchronized for replicated objects:
A restore of all user profiles and authorities requires the IBM i system to be in a restricted state. The RSTUSRPRF and RSTAUT commands must be run on the primary node while it is in a TRACKING state because the restore of authorities is not allowed for most objects when the replication state is BLOCKED.
Similarly, the QUSRSYS/QUSEXRGOBJ object, which contains the function usage information, must be restored on the primary node while replication is suspended.
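The restricted-state restore of user profiles and authorities can be sketched in CL as follows (the tape device name TAP01 is an example):

```
/* On the primary node in a restricted state, with replication in a TRACKING state */
RSTUSRPRF DEV(TAP01) USRPRF(*ALL)   /* Restore all user profiles and security data */
/* ... restore the required libraries and objects here ...                         */
RSTAUT USRPRF(*ALL)                 /* Restore the private authorities             */
```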
2.14 IBM Lab Services offerings
Db2 Mirror is a new technology and a new approach to building resiliency in IBM i environments. To help customers who want to consider and implement IBM Db2 Mirror for i, IBM Systems Lab Services developed an offering that is called Db2 Mirror for i Readiness Assessment. It is a customized consulting engagement that provides customers, IBM Business Partners, and ISVs the skills that are required to implement a Db2 Mirror solution, and the ability to test their applications in a Db2 Mirror lab environment. At the end of the workshop, attendees have the skills that are necessary to start planning their Db2 Mirror environment.
For more information, see IBM Systems Lab Services.