Migration
This chapter covers the options for migrating from PowerHA V7.1.3 and V7.2.0 to PowerHA V7.2.1.
This chapter covers the following topics:
5.1 Migration planning
5.2 Migration scenarios from PowerHA V7.1.3
5.3 Migration scenarios from PowerHA V7.2.0
Proper planning of the procedure for migrating clusters to IBM PowerHA SystemMirror V7.2.1 is important to minimize both the risk and the duration of the process. Consider the following set of actions when planning the migration of existing PowerHA clusters.
Before beginning the migration procedure, always have a contingency plan in case any problems occur. Here are some general suggestions:
Create a backup of rootvg.
In some cases of upgrading PowerHA, depending on the starting point, updating or upgrading the AIX base operating system is also required. Therefore, a preferred practice is to save your existing rootvg. One method is to create a clone by using alt_disk_copy on other available disks on the system. That way, a simple change to the bootlist and a restart can easily return the system to the beginning state.
Other options are available, such as mksysb, alt_disk_install, and multibos.
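For example, here is a minimal clone-and-fallback sketch, assuming hdisk1 is a free disk on the system:

# clone rootvg to a spare disk (hdisk1 is illustrative)
alt_disk_copy -d hdisk1
# display the boot list; to back out later, restart from the clone
bootlist -m normal -o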
Save the existing cluster configuration.
Create a cluster snapshot before the migration. By default, it is stored in the following directory; make a copy of it, and also save a copy off the cluster nodes for additional safety:
/usr/es/sbin/cluster/snapshots
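The snapshot can be created through SMIT, as shown in the scenarios later in this chapter, or with the clmgr command. Here is a sketch; the snapshot name, description, and backup host are illustrative:

# create a snapshot of the current cluster configuration
clmgr add snapshot pre_migration description="Cluster before PowerHA 7.2.1 migration"
# copy the snapshot files off the cluster for safekeeping
scp /usr/es/sbin/cluster/snapshots/pre_migration.* user@backuphost:/backups/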
Save any user-provided scripts.
This most commonly refers to custom events, pre- and post-events, application controller, and application monitoring scripts.
Save common configuration files that are needed for proper functioning, such as:
/etc/hosts
/etc/cluster/rhosts
/usr/es/sbin/cluster/netmon.cf
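One way to capture these files is a simple archive per node; here is a sketch with an illustrative archive path:

# gather the key configuration files into a per-node archive
tar -cvf /tmp/cluster_config_$(hostname).tar /etc/hosts /etc/cluster/rhosts \
    /usr/es/sbin/cluster/netmon.cf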
Verify, by using the lslpp -h cluster.* command, that the current version of PowerHA is in the COMMIT state and not in the APPLY state. If not, run smit install_commit before you install the most recent software version.
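As an alternative to the SMIT panel, applied updates can be committed from the command line; here is a sketch:

# list the fileset history; the State column must show COMMIT
lslpp -h "cluster.*"
# commit all filesets that are still in the APPLIED state
installp -c all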
5.1.1 PowerHA SystemMirror V7.2.1 requirements
The following software and hardware requirements must be met before migrating to PowerHA SystemMirror V7.2.1.
Software requirements
The software requirements are as follows:
IBM AIX 7.1 with Technology Level 3 with Service Pack 5, or later
IBM AIX 7.1 with Technology Level 4 with Service Pack 2, or later
IBM AIX 7.2 with Service Pack 1, or later
IBM AIX 7.2 with Technology Level 1, or later
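You can check the current AIX level against this list by running the oslevel command, for example:

# report the AIX version, technology level, and service pack
oslevel -s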
Hardware
Support is available only for POWER5 technologies and later.
5.1.2 Deprecated features
Starting with PowerHA V7.2.0, the IBM Systems Director plug-in is no longer supported or available. However, PowerHA V7.2.1 provides a new GUI, often referred to as the SystemMirror User Interface (SMUI). More information about this feature can be found in Chapter 9, “IBM PowerHA SystemMirror User Interface” on page 299.
5.1.3 Migration options
There are four methods for performing a migration of a PowerHA cluster. Each of them is briefly described here, and in more detail in the corresponding migration scenarios that are included in this chapter.
Offline A migration method where PowerHA is brought offline on all nodes before performing the software upgrade. During this time, the cluster resource groups (RGs) are not available.
Rolling A migration method from one PowerHA version to another during which cluster services are stopped one node at a time. That node is upgraded and reintegrated into the cluster before the next node is upgraded. It requires little downtime, mostly for moving the RGs between nodes to allow each node to be upgraded.
Snapshot A migration method from one PowerHA version to another, during which you take a snapshot of the current cluster configuration, stop cluster services on all nodes, uninstall the current version of PowerHA and then install the preferred version of PowerHA SystemMirror, convert the snapshot by running the clconvert_snapshot utility, and restore the cluster configuration from the converted snapshot.
Nondisruptive This method is by far the most preferred method of migration whenever possible. As its name implies, the cluster RGs remain available and the applications remain functional during the cluster migration. All cluster nodes are sequentially (one node at a time) set to an unmanaged state, which allows all RGs on that node to remain operational while cluster services are stopped. However, this method can generally be used only when applying service packs to the cluster, not when performing major upgrades. This option does not apply when an upgrade of the base operating system is also required, such as when migrating PowerHA from an older version to a version newer than 7.1.x.
 
Important: A cluster whose nodes are running two different versions of PowerHA is considered to be in a mixed cluster state. A cluster in this state does not support any configuration changes or synchronization until all the nodes are migrated. Be sure to complete either the rolling or nondisruptive migration as soon as possible to ensure stable cluster functions.
 
5.1.4 Migration steps
The following sections give an overview of the steps that are required to perform each type of migration. Detailed examples of each migration type can be found in 5.2, “Migration scenarios from PowerHA V7.1.3” and 5.3, “Migration scenarios from PowerHA V7.2.0”.
Offline method
Some of these steps can be performed in parallel because the entire cluster is offline.
 
Important: Always start with the latest service packs that are available for PowerHA, AIX, and Virtual I/O Server (VIOS).
Complete the following steps:
1. Stop cluster services on all nodes and bring the RGs offline.
2. Upgrade AIX (as needed):
a. Ensure that the prerequisites are installed, such as bos.cluster.
b. Restart.
3. Upgrade PowerHA. This step can be performed on both nodes in parallel.
4. Review the /tmp/clconvert.log file.
5. Restart the cluster services.
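As a condensed sketch, the stop and restart around the upgrade can also be driven with clmgr; by default, clmgr stop cluster brings the RGs offline:

# stop cluster services on all nodes and bring the RGs offline
clmgr stop cluster
# ... upgrade AIX (if needed) and PowerHA on all nodes, then review /tmp/clconvert.log ...
# restart cluster services on all nodes
clmgr start cluster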
Rolling method
A rolling migration provides the least amount of downtime by upgrading one node at a time.
 
Important: Always start with the latest service packs that are available for PowerHA, AIX, and VIOS.
Complete the following steps:
1. Stop cluster services on one node (move the RGs as needed).
2. Upgrade AIX (as needed) and restart.
3. Upgrade PowerHA.
4. Review the /tmp/clconvert.log file.
5. Restart the cluster services.
6. Repeat these steps for each node.
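Here is a condensed per-node sketch using clmgr, where nodeA is a placeholder; manage=move corresponds to the Move Resource Groups option in SMIT:

# stop cluster services on one node and move its RGs to a surviving node
clmgr stop node=nodeA manage=move
# ... upgrade AIX (if needed) and PowerHA on nodeA, then review /tmp/clconvert.log ...
# reintegrate the node into the cluster
clmgr start node=nodeA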
Snapshot method
Some of these steps can often be performed in parallel because the entire cluster is offline.
Additional specifics for migrating from PowerHA 6.1, including crucial interim fixes, can be found at the PowerHA SystemMirror interim fix bundles information page.
 
Important: Always start with the latest service packs that are available for PowerHA, AIX, and VIOS.
Complete the following steps:
1. Stop cluster services on all nodes and bring the RGs offline.
2. Create a cluster snapshot. Save copies of it off the cluster.
3. Upgrade AIX (as needed) and restart.
4. Upgrade PowerHA. This step can be performed on both nodes in parallel.
5. Review the /tmp/clconvert.log file.
6. Restart the cluster services.
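The snapshot conversion and restore are performed with the clconvert_snapshot utility, as shown in the detailed scenario in 5.2.4; for example:

# convert a snapshot that was taken on PowerHA 7.1.3 to the newly installed format
/usr/es/sbin/cluster/conversion/clconvert_snapshot -v 7.1.3 -s pre721migration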
Nondisruptive upgrade
This method applies only when the AIX level is already at appropriate levels to support PowerHA V7.2.1 or later. Complete the following steps on one node:
1. Stop cluster services by unmanaging the RGs.
2. Upgrade PowerHA (update_all).
3. Start cluster services with an automatic manage of the RGs.
Important: When restarting cluster services with the Automatic option for managing RGs, the application start scripts are invoked. Make sure that the application scripts can detect that the application is already running, or copy them and put a dummy blank executable script in their place and then copy them back after start.
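These steps map to the following clmgr sequence, sketched for a placeholder node nodeA:

# stop cluster services but leave the RGs (and applications) running, unmanaged
clmgr stop node=nodeA manage=unmanage
# apply the PowerHA updates from the current directory
install_all_updates -vY -d .
# resume managing the RGs; the application start scripts are invoked
clmgr start node=nodeA manage=auto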
5.1.5 Migration matrix to PowerHA SystemMirror 7.2.1
Table 5-1 shows the migration options between versions of PowerHA.
 
Important: Migrating from PowerHA V6.1 to V7.2.1 is not supported. You must upgrade to either V7.1.x or V7.2.0 first.
Table 5-1 Migration matrix table

PowerHA¹     | To V7.1.1 | To V7.1.2   | To V7.1.3   | To V7.2.0                                                 | To V7.2.1
From V6.1    | R², S, O  | R², S, O    | R², S, O    | Update to SP17, then R², S, and O are all viable options | N/A
From V7.1.0  | R², S, O  | R², S, O    | R², S, O    | R², S, O                                                  | N/A
From V7.1.1  | -         | R, S, O, N² | R, S, O, N² | R, S, O, N²                                               | N/A
From V7.1.2  | -         | -           | R, S, O, N² | R, S, O, N²                                               | N/A
From V7.1.3  | -         | -           | -           | R, S, O, N²                                               | R, S, O, N²
From V7.2.0  | -         | -           | -           | -                                                         | R, S, O, N²

1 R: Rolling, S: Snapshot, O: Offline, and N: Nondisruptive.
2 This option is available only if the beginning AIX level is high enough to support the newer version.
5.2 Migration scenarios from PowerHA V7.1.3
This section further details the test scenarios that are used in each of these migration methods:
Rolling migration
Snapshot migration
Offline migration
Nondisruptive upgrade
5.2.1 PowerHA V7.1.3 test environment overview
For the following scenarios, we use a two-node cluster with nodes Jess and Cass. It consists of a single RG that is configured in a typical hot-standby configuration. Our test configuration consists of the following hardware and software (see Figure 5-1):
POWER8 S814 with firmware 850
HMC 850
AIX 7.2.0 SP2
PowerHA V7.1.3 SP5
Storwize V7000 V7.6.1.1
Figure 5-1 PowerHA V7.1.3 test migration cluster
5.2.2 Rolling migration from PowerHA V7.1.3
Here are the steps for a rolling migration from PowerHA V7.1.3.
Checking and documenting the initial stage
This step is described in “Checking and documenting the initial stage” on page 112 as being common to all migration scenarios and methods.
For the rolling migration, we begin with the standby node, Cass. Complete the following steps.
 
Tip: For a demonstration of performing a rolling migration from PowerHA V7.1.3 to PowerHA V7.2.1, see this YouTube video.
1. Stop cluster services on node Cass.
Run smitty clstop and choose the options that are shown in Figure 5-2. The OK response appears quickly. Make sure that the cluster node is in the ST_INIT state by reviewing the lssrc -ls clstrmgrES|grep state output.
Alternatively, you can accomplish this task by using the clmgr command:
clmgr stop node=Cass
                             Stop Cluster Services
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
[Entry Fields]
* Stop now, on system restart or both now +
Stop Cluster Services on these nodes [Cass] +
BROADCAST cluster shutdown? true +
* Select an Action on Resource Groups Bring Resource Groups> +
 
F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do
Figure 5-2 Stopping the cluster services
2. Upgrade AIX.
In our scenario, we have supported AIX levels for PowerHA V7.2.1 and do not need to perform this step. But if you do, a restart is required before continuing.
3. Verify that the clcomd daemon is active, as shown in Figure 5-3.
[root@Cass] /# lssrc -s clcomd
Subsystem Group PID Status
clcomd caa 3421236 active
Figure 5-3 Verify that clcomd is active
4. Upgrade PowerHA on node Cass. To upgrade PowerHA, run smitty update_all, as shown in Figure 5-4, or run the following command from within the directory in which the updates are:
install_all_updates -vY -d .
          Update Installed Software to Latest Level (Update All)
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
[TOP] [Entry Fields]
* INPUT device / directory for software .
* SOFTWARE to update _update_all
PREVIEW only? (update operation will NOT occur) no +
COMMIT software updates? yes +
SAVE replaced files? no +
AUTOMATICALLY install requisite software? yes +
EXTEND file systems if space needed? yes +
VERIFY install and check file sizes? no +
DETAILED output? no +
Process multiple volumes? yes +
ACCEPT new license agreements? yes +
Preview new LICENSE agreements? no +
 
[MORE...6]
 
F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do
Figure 5-4 Smitty update_all
 
Important: Set ACCEPT new license agreements? to yes.
5. Ensure that the file /usr/es/sbin/cluster/netmon.cf exists and that it contains at least one pingable IP address because the installation or upgrade of PowerHA filesets can overwrite this file with an empty one.
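The netmon.cf file is a plain list of addresses, one per line; here is a minimal sketch that re-creates it with a single illustrative gateway address:

# re-create netmon.cf with one pingable address (the address is illustrative)
echo "192.168.100.1" > /usr/es/sbin/cluster/netmon.cf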
6. Start cluster services on node Cass by running smitty clstart or clmgr start node=Cass.
A message displays about cluster verification being skipped because of mixed versions, as shown in Figure 5-5 on page 115.
 
Important: While the cluster is in a mixed cluster state, do not make any cluster changes or attempt to synchronize the cluster.
After starting, validate that the cluster is stable before continuing by running the following command:
lssrc -ls clstrmgrES |grep -i state
Cluster services are running at different levels across
the cluster. Verification will not be invoked in this environment.
 
Starting Cluster Services on node: Cass
This may take a few minutes. Please wait...
Cass: Nov 8 2016 10:18:43 Starting execution of /usr/es/sbin/cluster/etc/rc.cluster
Cass: with parameters: -boot -N -A -C interactive -P cl_rc_cluster
Figure 5-5 Verification skipped
7. Repeat the previous steps for node Jess. However, when stopping cluster services, choose the Move Resource Groups option, as shown in Figure 5-6.
                             Stop Cluster Services
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
[Entry Fields]
* Stop now, on system restart or both now +
Stop Cluster Services on these nodes [Jess] +
BROADCAST cluster shutdown? true +
* Select an Action on Resource Groups Move Resource Groups +
Figure 5-6 Run clstop and move the resource group
8. Upgrade AIX (if needed).
 
Important: If upgrading to AIX 7.2.0, see the AIX 7.2 Release Notes regarding RSCT filesets when upgrading.
In our scenario, we have supported AIX levels for PowerHA V7.2 and do not need to perform this step. But if you do, a restart is required before continuing.
9. Verify that the clcomd daemon is active, as shown in Figure 5-7.
[root@Jess] /# lssrc -s clcomd
Subsystem Group PID Status
clcomd caa 50467008     active
Figure 5-7 Verify that clcomd is active
10. Upgrade PowerHA on node Jess. To upgrade PowerHA, run smitty update_all, as shown in Figure 5-4 on page 114, or run the following command from within the directory in which the updates are:
install_all_updates -vY -d .
11. Ensure that the file /usr/es/sbin/cluster/netmon.cf exists and that it contains at least one pingable IP address because the installation or upgrade of PowerHA filesets can overwrite this file with an empty one.
12. Start cluster services on node Jess by running one of the following commands:
 – smitty clstart
 – clmgr start node=Jess
13. Verify that the cluster completed the migration on both nodes by checking that the version number is 17, as shown in Example 5-1.
Example 5-1 Verifying that the migration completed on both nodes
# clcmd odmget HACMPcluster |grep version
cluster_version = 17
cluster_version = 17
 
#clcmd odmget HACMPnode |grep version |sort -u
version = 17
 
Important: Both nodes must show version=17; otherwise, the migration did not complete successfully. Call IBM Support.
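Besides the ODM query, the cluster manager long status reports the active level on each node; here is a quick check (the vrmf line shows the PowerHA level, for example 7210 for V7.2.1.0):

# display the running PowerHA level on the local node
lssrc -ls clstrmgrES | grep -i vrmf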
Although the migration is complete, the resource group is running on node Cass. If you want, move the RG back to node Jess, as shown in Example 5-2.
Example 5-2 Move the resource group back to node Jess
# clmgr move rg demorg node=Jess
Attempting to move resource group demorg to node Jess.
 
Waiting for the cluster to process the resource group movement request....
 
Waiting for the cluster to stabilize.....
 
Resource group movement successful.
Resource group demorg is online on node Jess.
 
Cluster Name: Jess_cluster
 
Resource Group Name: demorg
Node Group State
---------------------------- ---------------
Jess ONLINE
Cass OFFLINE
 
Important: Always test the cluster thoroughly after migration.
5.2.3 Offline migration from PowerHA V7.1.3
For an offline migration, you can perform many of the steps in parallel on all (both) nodes in the cluster. However, to accomplish this task, you must plan a full cluster outage.
 
Tip: To see a demonstration of performing an offline migration from PowerHA V7.1.3 to PowerHA V7.2.1, see this YouTube video.
Complete the following steps:
1. Stop cluster services on both nodes Jess and Cass by running smitty clstop and choosing the options that are shown in Figure 5-8. The OK response appears quickly.
As an alternative, you can also stop the entire cluster by running the following command:
clmgr stop cluster
Make sure that the cluster node is in the ST_INIT state by reviewing the clcmd lssrc -ls clstrmgrES|grep state output.
                             Stop Cluster Services
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
[Entry Fields]
* Stop now, on system restart or both now +
Stop Cluster Services on these nodes [Jess,Cass]             +
BROADCAST cluster shutdown? true +
* Select an Action on Resource Groups Bring Resource Groups> +
 
F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do
Figure 5-8 Stopping the cluster services
2. Upgrade AIX on both nodes.
 
Important: If upgrading to AIX 7.2.0, see the AIX 7.2 Release Notes regarding RSCT filesets when upgrading.
In our scenario, we have supported AIX levels for PowerHA v7.2.1 and do not need to perform this step. But if you do, a restart is required before continuing.
3. Verify that the clcomd daemon is active on both nodes, as shown in Figure 5-9.
# clcmd lssrc -s clcomd
 
-------------------------------
NODE Jess
-------------------------------
Subsystem Group PID Status
clcomd caa 20775182 active
 
-------------------------------
NODE Cass
-------------------------------
Subsystem Group PID Status
clcomd caa 5177840 active
Figure 5-9 Verify that clcomd is active
 
4. Upgrade to PowerHA V7.2.1 by running smitty update_all on both nodes or by running the following command from within the directory in which the updates are:
install_all_updates -vY -d .
5. Verify that the version numbers show correctly, as shown in Example 5-1 on page 116.
6. Ensure that the file /usr/es/sbin/cluster/netmon.cf exists on all nodes and that it contains at least one pingable IP address because the installation or upgrade of PowerHA filesets can overwrite this file with an empty one.
7. Restart the cluster on both nodes by running clmgr start cluster.
 
Important: Always test the cluster thoroughly after migrating.
5.2.4 Snapshot migration from PowerHA V7.1.3
For a snapshot migration, you can perform many of the steps in parallel on all (both) nodes in the cluster. However, this requires a full cluster outage.
 
Tip: To see a demonstration of performing a snapshot migration from PowerHA V7.1.3 to PowerHA V7.2.1, see this YouTube video.
Complete the following steps:
1. Stop cluster services on both nodes Jess and Cass by running smitty clstop and choosing the options that are shown in Figure 5-10. The OK response appears quickly.
As an alternative, you can also stop the entire cluster by running clmgr stop cluster.
Make sure that the cluster node is in the ST_INIT state by reviewing the clcmd lssrc -ls clstrmgrES|grep state output.
                             Stop Cluster Services
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
[Entry Fields]
* Stop now, on system restart or both now +
Stop Cluster Services on these nodes [Jess,Cass]             +
BROADCAST cluster shutdown? true +
* Select an Action on Resource Groups Bring Resource Groups> +
 
F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do
Figure 5-10 Stopping the cluster services
2. Create a cluster snapshot by running smitty cm_add_snap.dialog and completing the options, as shown in Figure 5-11.
             Create a Snapshot of the Cluster Configuration
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
[Entry Fields]
* Cluster Snapshot Name [pre721migration] /
Custom-Defined Snapshot Methods [] +
* Cluster Snapshot Description [713 SP5 cluster]
Figure 5-11 Creating a cluster snapshot
3. Upgrade AIX on both nodes.
 
Important: If upgrading to AIX 7.2.0, see the AIX 7.2 Release Notes regarding RSCT filesets when upgrading.
In our scenario, we have supported AIX levels for PowerHA V7.2 and do not need to perform this step. But if you do, a restart is required before continuing.
4. Verify that the clcomd daemon is active on both nodes, as shown in Figure 5-12.
# clcmd lssrc -s clcomd
 
-------------------------------
NODE Jess
-------------------------------
Subsystem Group PID Status
clcomd caa 20775182 active
 
-------------------------------
NODE Cass
-------------------------------
Subsystem Group PID Status
clcomd caa 5177840 active
Figure 5-12 Verify that clcomd is active
5. Next, uninstall PowerHA V7.1.3 from both nodes Jess and Cass by running smitty remove and selecting the cluster.* filesets.
6. Install PowerHA V7.2.1 by running smitty install_all on both nodes.
7. Convert the previously created snapshot as follows:
/usr/es/sbin/cluster/conversion/clconvert_snapshot -v 7.1.3 -s pre721migration
Extracting ODM's from snapshot file... done.
Converting extracted ODM's... done.
Rebuilding snapshot file... done.
8. Restore the cluster configuration from the converted snapshot by running smitty cm_apply_snap.select and choosing the snapshot from the menu. The snapshot autofills the last menu, as shown in Figure 5-13.
                        Restore the Cluster Snapshot
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
[Entry Fields]
Cluster Snapshot Name pre721migration>
Cluster Snapshot Description 713 SP5 cluster>
Un/Configure Cluster Resources? [Yes] +
Force apply if verify fails? [No] +
Figure 5-13 Restoring a cluster configuration from a snapshot
The restore process automatically re-creates and synchronizes the cluster.
9. Verify that the version numbers show correctly, as shown in Example 5-1 on page 116.
10. Ensure that the file /usr/es/sbin/cluster/netmon.cf exists on all nodes and that it contains at least one pingable IP address because the installation or upgrade of PowerHA filesets might overwrite this file with an empty one.
11. Restart the cluster on both nodes by running clmgr start cluster.
 
Important: Always test the cluster thoroughly after migrating.
5.2.5 Nondisruptive upgrade from PowerHA V7.1.3
This method applies only when the AIX level is already at appropriate levels to support PowerHA V7.2.1 (or later).
 
Tip: To see a demonstration of performing a nondisruptive upgrade from PowerHA V7.1.3 to PowerHA V7.2.1, see this YouTube video.
Complete the following steps:
1. Stop cluster services by performing an unmanage of the RGs on node Cass, as shown in Example 5-3.
Example 5-3 Stop a cluster node with the unmanage option
# clmgr stop node=Cass manage=unmanage
 
Warning: "WHEN" must be specified. Since it was not, a default of "now" will be
used.
Broadcast message from root@Cass (tty) at 11:39:28 ...
 
PowerHA SystemMirror on Cass shutting down. Please exit any cluster applications...
Cass: 0513-044 The clevmgrdES Subsystem was requested to stop.
.
"Cass" is now unmanaged.
 
Cass: Nov 8 2016 11:39:28/usr/es/sbin/cluster/utilities/clstop: called with flags -N -f
2. Upgrade PowerHA (update_all) by running the following command from within the directory in which the updates are:
install_all_updates -vY -d .
3. Start cluster services by using an automatic manage of the RGs on Cass, as shown in Example 5-4.
Example 5-4 Start the cluster node with the automatic manage option
# clmgr start node=Cass
Warning: "WHEN" must be specified. Since it was not, a default of "now" will be
used.
 
Warning: "MANAGE" must be specified. Since it was not, a default of "auto" will
be used.
 
Verifying Cluster Configuration Prior to Starting Cluster Services.
Cass: start_cluster: Starting PowerHA SystemMirror
.
"Cass" is now online.
 
Starting Cluster Services on node: Cass
This may take a few minutes. Please wait...
Cass: Nov 8 2016 11:43:40Starting execution of /usr/es/sbin/cluster/etc/rc.cluster
Cass: with parameters: -boot -N -b -P cl_rc_cluster -A
Cass:
Cass: Nov 8 2016 11:43:40usage: cl_echo messageid (default) messageNov 8 2016 11:43:40usage: cl_echo messageid (default) messageRETURN_CODE=0
 
Important: Restarting cluster services with the Automatic option for managing RGs invokes the application start scripts. Make sure that the application scripts can detect that the application is already running, or copy and put a dummy blank executable script in their place and then copy them back after start.
Repeat the steps on node Jess.
4. Stop cluster services by performing an unmanage of the RGs on node Jess, as shown in Example 5-5.
Example 5-5 Stop the cluster node with the unmanage option
# clmgr stop node=Jess manage=unmanage
 
Warning: "WHEN" must be specified. Since it was not, a default of "now" will be
used.
Broadcast message from root@Jess (tty) at 11:52:48 ...
 
PowerHA SystemMirror on Jess shutting down. Please exit any cluster applications...
Jess: 0513-044 The clevmgrdES Subsystem was requested to stop.
.
"Jess" is now unmanaged.
 
Jess: Nov 8 2016 11:52:48/usr/es/sbin/cluster/utilities/clstop: called with flags -N -f
5. Upgrade PowerHA (update_all) by running the following command from within the directory in which the updates are:
install_all_updates -vY -d .
6. Start cluster services by performing an automatic manage of the RGs on Jess, as shown in Example 5-6.
Example 5-6 Start a cluster node with the automatic manage option
# clmgr start node=Jess
 
Warning: "WHEN" must be specified. Since it was not, a default of "now" will be
used.
 
 
Warning: "MANAGE" must be specified. Since it was not, a default of "auto" will
be used.
 
Verifying Cluster Configuration Prior to Starting Cluster Services.
Jess: start_cluster: Starting PowerHA SystemMirror
...
"Jess" is now online.
 
Starting Cluster Services on node: Jessica
This may take a few minutes. Please wait...
Jess: Nov 8 2016 11:54:40Starting execution of /usr/es/sbin/cluster/etc/rc.cluster
Jess: with parameters: -boot -N -b -P cl_rc_cluster -A
Jess:
Jess: Nov 8 2016 11:54:40usage: cl_echo messageid (default) messageNov 8 2016 11:54:40usage: cl_echo messageid (default) messageRETURN_CODE=0
 
Important: Restarting cluster services with the Automatic option for managing RGs invokes the application start scripts. Make sure that the application scripts can detect that the application is already running, or copy and put a dummy blank executable script in their place and then copy them back after start.
7. Verify that the version numbers show correctly, as shown in Example 5-1 on page 116.
8. Ensure that the file /usr/es/sbin/cluster/netmon.cf exists on all nodes and that it contains at least one pingable IP address because the installation or upgrade of PowerHA filesets can overwrite this file with an empty one.
5.3 Migration scenarios from PowerHA V7.2.0
This section further details test scenarios that are used in each of these migrations methods:
Rolling migration
Snapshot migration
Offline migration
Nondisruptive upgrade
5.3.1 PowerHA V7.2.0 test environment overview
For the following scenarios, we use a two-node cluster with nodes Jess and Cass. It consists of a single RG that is configured in a typical hot-standby configuration. Our test configuration consists of the following hardware and software (see Figure 5-14):
POWER8 S814 with firmware 850
HMC 850
AIX 7.2.0 SP1
PowerHA V7.2.0 SP2
Storwize V7000 V7.6.1.1
Figure 5-14 PowerHA V7.2.0 test migration cluster
5.3.2 Rolling migration from PowerHA V7.2.0
For the rolling migration, begin with the standby node Cass.
 
Tip: To see a demonstration of performing a rolling migration from PowerHA V7.1.3 to PowerHA V7.2.1, see this YouTube video.
Although the version level in the demonstration is different, the steps are identical when starting from Version 7.2.0.
Complete the following steps:
1. Stop the cluster services on node Cass by running smitty clstop and choosing the options that are shown in Figure 5-15. The OK response appears quickly. Make sure that the cluster node is in the ST_INIT state by reviewing the lssrc -ls clstrmgrES|grep state output, as shown in Example 5-7.
Example 5-7 Cluster node state
# lssrc -ls clstrmgrES|grep state
Current state: ST_INIT
 
                             Stop Cluster Services
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
[Entry Fields]
* Stop now, on system restart or both now +
Stop Cluster Services on these nodes [Cass] +
BROADCAST cluster shutdown? true +
* Select an Action on Resource Groups Bring Resource Groups> +
 
F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do
Figure 5-15 Stopping the cluster services
You can also stop cluster services by using the clmgr command:
clmgr stop node=Cass
2. Upgrade AIX.
In our scenario, we have supported AIX levels for PowerHA V7.2.1 and do not need to perform this step. But if you do, a restart is required before continuing.
3. Verify that the clcomd daemon is active, as shown in Figure 5-16.
[root@Cass] /# lssrc -s clcomd
Subsystem Group PID Status
clcomd caa 3421236 active
Figure 5-16 Verify that clcomd is active
4. Upgrade PowerHA on node Cass. To upgrade PowerHA, run smitty update_all, as shown in Figure 5-4 on page 114, or run the following command from within the directory in which the updates are, as shown in Example 5-8:
install_all_updates -vY -d .
Example 5-8 Install_all_updates
# install_all_updates -vY -d .
install_all_updates: Initializing system parameters.
install_all_updates: Log file is /var/adm/ras/install_all_updates.log
install_all_updates: Checking for updated install utilities on media.
install_all_updates: Processing media.
install_all_updates: Generating list of updatable installp filesets.
 
*** ATTENTION: the following list of filesets are installable base images
that are updates to currently installed filesets. Because these filesets are
base-level images, they will be committed automatically. After these filesets
are installed, they can be down-leveled by performing a force-overwrite with
the previous base-level. See the installp man page for more details. ***
 
cluster.es.client.clcomd 7.2.1.0
cluster.es.client.lib 7.2.1.0
cluster.es.client.rte 7.2.1.0
cluster.es.client.utils 7.2.1.0
cluster.es.cspoc.cmds 7.2.1.0
cluster.es.cspoc.rte 7.2.1.0
cluster.es.migcheck 7.2.1.0
cluster.es.server.diag 7.2.1.0
cluster.es.server.events 7.2.1.0
cluster.es.server.rte 7.2.1.0
cluster.es.server.testtool 7.2.1.0
cluster.es.server.utils 7.2.1.0
cluster.license 7.2.1.0
 
<< End of Fileset List >>
 
install_all_updates: The following filesets have been selected as updates
to currently installed software:
 
cluster.es.client.clcomd 7.2.1.0
cluster.es.client.lib 7.2.1.0
cluster.es.client.rte 7.2.1.0
cluster.es.client.utils 7.2.1.0
cluster.es.cspoc.cmds 7.2.1.0
cluster.es.cspoc.rte 7.2.1.0
cluster.es.migcheck 7.2.1.0
cluster.es.server.diag 7.2.1.0
cluster.es.server.events 7.2.1.0
cluster.es.server.rte 7.2.1.0
cluster.es.server.testtool 7.2.1.0
cluster.es.server.utils 7.2.1.0
cluster.license 7.2.1.0
 
<< End of Fileset List >>
 
install_all_updates: Performing installp update.
+-----------------------------------------------------------------------------+
Pre-installation Verification...
+-----------------------------------------------------------------------------+
Verifying selections...done
Verifying requisites...done
Results...
 
SUCCESSES
---------
Filesets listed in this section passed pre-installation verification
and will be installed.
 
Selected Filesets
-----------------
cluster.es.client.clcomd 7.2.1.0 # Cluster Communication Infras...
cluster.es.client.lib 7.2.1.0 # PowerHA SystemMirror Client ...
cluster.es.client.rte 7.2.1.0 # PowerHA SystemMirror Client ...
cluster.es.client.utils 7.2.1.0 # PowerHA SystemMirror Client ...
cluster.es.cspoc.cmds 7.2.1.0 # CSPOC Commands
cluster.es.cspoc.rte 7.2.1.0 # CSPOC Runtime Commands
cluster.es.migcheck 7.2.1.0 # PowerHA SystemMirror Migrati...
cluster.es.server.diag 7.2.1.0 # Server Diags
cluster.es.server.events 7.2.1.0 # Server Events
cluster.es.server.rte 7.2.1.0 # Base Server Runtime
cluster.es.server.testtool 7.2.1.0 # Cluster Test Tool
cluster.es.server.utils 7.2.1.0 # Server Utilities
cluster.license 7.2.1.0 # PowerHA SystemMirror Electro...
 
<< End of Success Section >>
 
+-----------------------------------------------------------------------------+
BUILDDATE Verification ...
+-----------------------------------------------------------------------------+
Verifying build dates...done
FILESET STATISTICS
------------------
13 Selected to be installed, of which:
13 Passed pre-installation verification
----
13 Total to be installed
 
+-----------------------------------------------------------------------------+
Installing Software...
+-----------------------------------------------------------------------------+
 
installp: APPLYING software for:
cluster.license 7.2.1.0
 
 
. . . . . << Copyright notice for cluster.license >> . . . . . . .
Licensed Materials - Property of IBM
 
5765H3900
Copyright International Business Machines Corp. 2001, 2016.
 
All rights reserved.
US Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corp.
. . . . . << End of copyright notice for cluster.license >>. . . .
 
Filesets processed: 1 of 13 (Total time: 2 secs).
 
installp: APPLYING software for:
cluster.es.migcheck 7.2.1.0
 
 
. . . . . << Copyright notice for cluster.es.migcheck >> . . . . . . .
Licensed Materials - Property of IBM
 
5765H3900
Copyright International Business Machines Corp. 2010, 2016.
 
All rights reserved.
US Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corp.
. . . . . << End of copyright notice for cluster.es.migcheck >>. . . .
 
Filesets processed: 2 of 13 (Total time: 6 secs).
 
installp: APPLYING software for:
cluster.es.cspoc.rte 7.2.1.0
cluster.es.cspoc.cmds 7.2.1.0
 
 
. . . . . << Copyright notice for cluster.es.cspoc >> . . . . . . .
Licensed Materials - Property of IBM
 
5765H3900
Copyright International Business Machines Corp. 1985, 2016.
 
All rights reserved.
US Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corp.
. . . . . << End of copyright notice for cluster.es.cspoc >>. . . .
 
Filesets processed: 4 of 13 (Total time: 10 secs).
 
installp: APPLYING software for:
cluster.es.client.rte 7.2.1.0
cluster.es.client.utils 7.2.1.0
cluster.es.client.lib 7.2.1.0
cluster.es.client.clcomd 7.2.1.0
 
 
. . . . . << Copyright notice for cluster.es.client >> . . . . . . .
Licensed Materials - Property of IBM
 
5765H3900
Copyright International Business Machines Corp. 1985, 2016.
 
All rights reserved.
US Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corp.
 
Licensed Materials - Property of IBM
 
5765H3900
Copyright International Business Machines Corp. 2008, 2016.
 
All rights reserved.
US Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corp.
. . . . . << End of copyright notice for cluster.es.client >>. . . .
 
Filesets processed: 8 of 13 (Total time: 24 secs).
 
installp: APPLYING software for:
cluster.es.server.testtool 7.2.1.0
cluster.es.server.rte 7.2.1.0
cluster.es.server.utils 7.2.1.0
cluster.es.server.events 7.2.1.0
cluster.es.server.diag 7.2.1.0
 
 
. . . . . << Copyright notice for cluster.es.server >> . . . . . . .
Licensed Materials - Property of IBM
 
5765H3900
Copyright International Business Machines Corp. 1985, 2016.
 
All rights reserved.
US Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corp.
. . . . . << End of copyright notice for cluster.es.server >>. . . .
 
0513-095 The request for subsystem refresh was completed successfully.
Finished processing all filesets. (Total time: 1 mins 0 secs).
 
Some configuration files could not be automatically merged into the system
during the installation. The previous versions of these files have been
saved in a configuration directory as listed below. Compare the saved files
and the newly installed files to determine whether you need to recover
configuration data. Consult product documentation to determine how to
merge the data.
 
Configuration files which were saved in /usr/lpp/save.config:
/usr/es/sbin/cluster/utilities/clexit.rc
 
Please wait...
 
/usr/sbin/rsct/install/bin/ctposti
0513-071 The ctrmc Subsystem has been added.
0513-059 The ctrmc Subsystem has been started. Subsystem PID is 12583318.
0513-059 The IBM.ConfigRM Subsystem has been started. Subsystem PID is 11665748.
cthagsctrl: 2520-208 The cthags subsystem must be stopped.
0513-029 The cthags Subsystem is already active.
Multiple instances are not supported.
0513-095 The request for subsystem refresh was completed successfully.
done
+-----------------------------------------------------------------------------+
Summaries:
+-----------------------------------------------------------------------------+
 
Installation Summary
--------------------
Name Level Part Event Result
-------------------------------------------------------------------------------
cluster.license 7.2.1.0 USR APPLY SUCCESS
cluster.es.migcheck 7.2.1.0 USR APPLY SUCCESS
cluster.es.migcheck 7.2.1.0 ROOT APPLY SUCCESS
cluster.es.cspoc.rte 7.2.1.0 USR APPLY SUCCESS
cluster.es.cspoc.cmds 7.2.1.0 USR APPLY SUCCESS
cluster.es.cspoc.rte 7.2.1.0 ROOT APPLY SUCCESS
cluster.es.client.rte 7.2.1.0 USR APPLY SUCCESS
cluster.es.client.utils 7.2.1.0 USR APPLY SUCCESS
cluster.es.client.lib 7.2.1.0 USR APPLY SUCCESS
cluster.es.client.clcomd 7.2.1.0 USR APPLY SUCCESS
cluster.es.client.rte 7.2.1.0 ROOT APPLY SUCCESS
cluster.es.client.lib 7.2.1.0 ROOT APPLY SUCCESS
cluster.es.client.clcomd 7.2.1.0 ROOT APPLY SUCCESS
cluster.es.server.testtool 7.2.1.0 USR APPLY SUCCESS
cluster.es.server.rte 7.2.1.0 USR APPLY SUCCESS
cluster.es.server.utils 7.2.1.0 USR APPLY SUCCESS
cluster.es.server.events 7.2.1.0 USR APPLY SUCCESS
cluster.es.server.diag 7.2.1.0 USR APPLY SUCCESS
cluster.es.server.rte 7.2.1.0 ROOT APPLY SUCCESS
cluster.es.server.utils 7.2.1.0 ROOT APPLY SUCCESS
cluster.es.server.events 7.2.1.0 ROOT APPLY SUCCESS
cluster.es.server.diag 7.2.1.0 ROOT APPLY SUCCESS
 
install_all_updates: Checking for recommended maintenance level 7200-00.
install_all_updates: Executing /usr/bin/oslevel -rf, Result = 7200-00
install_all_updates: Verification completed.
install_all_updates: Log file is /var/adm/ras/install_all_updates.log
install_all_updates: Result = SUCCESS
 
5. Ensure that the file /usr/es/sbin/cluster/netmon.cf exists and that it contains at least one pingable IP address because the installation or upgrade of PowerHA filesets can overwrite this file with an empty one.
6. Start cluster services on node Cass by running smitty clstart or clmgr start node=Cass.
During the start, a message displays about cluster verification being skipped because of mixed versions, as shown in Figure 5-17.
Cluster services are running at different levels across
the cluster. Verification will not be invoked in this environment.
 
Starting Cluster Services on node: Cass
This may take a few minutes. Please wait...
Cass: Nov 8 2016 11:13:48 Starting execution of /usr/es/sbin/cluster/etc/rc.cluster
Cass: with parameters: -boot -N -A -C interactive -P cl_rc_cluster
Figure 5-17 Verification skipped
 
Important: While the cluster is in this mixed cluster state, do not make any cluster changes or attempt to synchronize the cluster.
After starting, validate that the cluster is stable before continuing by running the following command:
lssrc -ls clstrmgrES |grep -i state
7. Repeat the previous steps for node Jess. However, when stopping cluster services, choose the Move Resource Groups option, as shown in Figure 5-18.
                             Stop Cluster Services
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
[Entry Fields]
* Stop now, on system restart or both now +
Stop Cluster Services on these nodes [Jess] +
BROADCAST cluster shutdown? true +
* Select an Action on Resource Groups Move Resource Groups +
Figure 5-18 Run clstop and move the resource group
8. Upgrade AIX (if needed).
 
Important: If upgrading to AIX 7.2.0, see the AIX 7.2 Release Notes regarding RSCT filesets when upgrading.
In our scenario, we have supported AIX levels for PowerHA V7.2.1 and do not need to perform this step. But if you do, a restart is required before continuing.
9. Verify that the clcomd daemon is active, as shown in Figure 5-19.
[root@Jess] /# lssrc -s clcomd
Subsystem Group PID Status
clcomd caa 50467008     active
Figure 5-19 Verify that clcomd is active
10. Upgrade PowerHA on node Jess. To upgrade PowerHA, run smitty update_all, as shown in Figure 5-4 on page 114, or run the following command from within the directory in which the updates are:
install_all_updates -vY -d .
11. Ensure that the file /usr/es/sbin/cluster/netmon.cf exists and that it contains at least one pingable IP address because the installation or upgrade of PowerHA filesets can overwrite this file with an empty one.
12. Start cluster services on node Jess by running smitty clstart or clmgr start node=Jess.
13. Verify that the cluster completed the migration on both nodes by checking that the version number is 17, as shown in Example 5-9.
Example 5-9 Verifying the cluster version on both nodes
# clcmd odmget HACMPcluster |grep version
cluster_version = 17
cluster_version = 17
 
#clcmd odmget HACMPnode |grep version |sort -u
version = 17
 
Important: Both nodes must show version=17; otherwise, the migration did not complete. Call IBM Support.
14. Although the migration is complete, the resource group is running on node Cass. If you want, move the RG back to node Jess, as shown in Example 5-10.
Example 5-10 Move the resource group back to node Jess
# clmgr move rg demorg node=Jess
Attempting to move resource group demorg to node Jess.
 
Waiting for the cluster to process the resource group movement request....
 
Waiting for the cluster to stabilize.....
 
Resource group movement successful.
Resource group demorg is online on node Jess.
 
Cluster Name: Jess_cluster
 
Resource Group Name: demorg
Node Group State
---------------------------- ---------------
Jess ONLINE
Cass OFFLINE
 
Important: Always test the cluster thoroughly after migrating.
5.3.3 Offline migration from PowerHA V7.2.0
For an offline migration, you can perform many of the steps in parallel on all (both) nodes in the cluster. However, this means that you must plan a full cluster outage.
 
Tip: To see a demonstration of performing an offline migration from PowerHA V7.1.3 to PowerHA V7.2.1, see this YouTube video.
Although the version level in the demonstration is different, the steps are identical when starting from Version 7.2.0.
Complete the following steps:
1. Stop cluster services on both nodes Jess and Cass by running smitty clstop and choosing the options that are shown in Figure 5-20. The OK response appears quickly.
As an alternative, you can also stop the entire cluster by running clmgr stop cluster.
Make sure the cluster node is in the ST_INIT state by reviewing the clcmd lssrc -ls clstrmgrES|grep state output.
                             Stop Cluster Services
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
[Entry Fields]
* Stop now, on system restart or both now +
Stop Cluster Services on these nodes [Jess,Cass]             +
BROADCAST cluster shutdown? true +
* Select an Action on Resource Groups Bring Resource Groups> +
 
F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do
Figure 5-20 Stopping the cluster services
2. Upgrade AIX on both nodes.
 
Important: If upgrading to AIX 7.2.0, see the AIX 7.2 Release Notes regarding RSCT filesets when upgrading.
In our scenario, we have supported AIX levels for PowerHA V7.2.1 and do not need to perform this step. But if you do, a restart is required before continuing.
3. Verify that the clcomd daemon is active on both nodes, as shown in Figure 5-21.
# clcmd lssrc -s clcomd
 
-------------------------------
NODE Jess
-------------------------------
Subsystem Group PID Status
clcomd caa 20775182 active
 
-------------------------------
NODE Cass
-------------------------------
Subsystem Group PID Status
clcomd caa 5177840 active
Figure 5-21 Verify that clcomd is active
 
4. Upgrade to PowerHA V7.2.1 by running smitty update_all on both nodes, as shown in Figure 5-4 on page 114, or by running the following command from within the directory in which the updates are (see Example 5-8 on page 124):
install_all_updates -vY -d .
5. Ensure that the file /usr/es/sbin/cluster/netmon.cf exists on all nodes and that it contains at least one pingable IP address because the installation or upgrade of PowerHA filesets can overwrite this file with an empty one. Restart the cluster on both nodes by running clmgr start cluster.
6. Verify that the version numbers show correctly, as shown in Example 5-9 on page 131.
 
Important: Always test the cluster thoroughly after migrating.
5.3.4 Snapshot migration from PowerHA V7.2.0
For a snapshot migration, you can perform many of the steps in parallel on all (both) nodes in the cluster. However, this migration requires a full cluster outage.
 
Tip: To see a demonstration of performing a snapshot migration from PowerHA V7.1.3 to PowerHA V7.2.1, see this YouTube video.
Complete the following steps:
1. Stop cluster services on both nodes Jess and Cass by running smitty clstop and choosing to bring the RG offline. In our case, we chose to stop the entire cluster by running clmgr stop cluster, as shown in Figure 5-22.
# clmgr stop cluster
 
Warning: "WHEN" must be specified. Since it was not, a default of "now" will be
used.
 
Warning: "MANAGE" must be specified. Since it was not, a default of "offline"
will be used.
 
Broadcast message from root@Jessica (tty) at 14:08:31 ...
 
PowerHA SystemMirror on Jessica shutting down. Please exit any cluster applications...
Cass: 0513-004 The Subsystem or Group, clinfoES, is currently inoperative.
Cass: 0513-044 The clevmgrdES Subsystem was requested to stop.
Jess: 0513-004 The Subsystem or Group, clinfoES, is currently inoperative.
Jess: 0513-044 The clevmgrdES Subsystem was requested to stop.
...
 
The cluster is now offline.
 
Cass: Nov 8 2016 14:08:31 /usr/es/sbin/cluster/utilities/clstop: called with flags -N -g
Jess: Nov 8 2016 14:08:37 /usr/es/sbin/cluster/utilities/clstop: called with flags -N -g
Figure 5-22 Stopping cluster services by way of the clmgr
Make sure that the cluster node is in the ST_INIT state by reviewing the clcmd lssrc -ls clstrmgrES|grep state output.
2. Create a cluster snapshot by running smitty cm_add_snap.dialog and completing the options, as shown in Figure 5-23.
             Create a Snapshot of the Cluster Configuration
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
[Entry Fields]
* Cluster Snapshot Name [720cluster] /
Custom-Defined Snapshot Methods [] +
* Cluster Snapshot Description [720 SP1 cluster]
Figure 5-23 Creating a 720 cluster snapshot
3. Upgrade AIX on both nodes.
 
Important: If upgrading to AIX 7.2.0, see the AIX 7.2 Release Notes regarding RSCT filesets when upgrading.
In our scenario, we have supported AIX levels for PowerHA V7.2.1 and do not need to perform this step. But if you do, a restart is required before continuing.
4. Verify that the clcomd daemon is active on both nodes, as shown in Figure 5-24.
# clcmd lssrc -s clcomd
 
-------------------------------
NODE Jess
-------------------------------
Subsystem Group PID Status
clcomd caa 2102992      active
 
-------------------------------
NODE Cass
-------------------------------
Subsystem Group PID Status
clcomd caa 5110698      active
Figure 5-24 Verifying that clcomd is active
5. Uninstall PowerHA V7.2.0 from both nodes Jess and Cass by running smitty remove and selecting the cluster.* filesets.
6. Install PowerHA V7.2.1 by running smitty install_all on both nodes.
7. Convert the previously created snapshot:
/usr/es/sbin/cluster/conversion/clconvert_snapshot -v 7.2 -s 720cluster
Extracting ODM's from snapshot file... done.
Converting extracted ODM's... done.
Rebuilding snapshot file... done.
8. Restore the cluster configuration from the converted snapshot by running smitty cm_apply_snap.select and choosing the snapshot from the menu. It completes the last menu, as shown in Figure 5-25.
                        Restore the Cluster Snapshot
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
[Entry Fields]
Cluster Snapshot Name 720cluster>
Cluster Snapshot Description 720 SP1 cluster>
Un/Configure Cluster Resources? [Yes] +
Force apply if verify fails? [No] +
Figure 5-25 Restoring a cluster configuration from a snapshot
The restore process automatically re-creates and synchronizes the cluster.
9. Verify that the version numbers show correctly, as shown in Example 5-1 on page 116.
10. Ensure that the file /usr/es/sbin/cluster/netmon.cf exists on all nodes and that it contains at least one pingable IP address because the installation or upgrade of PowerHA filesets can overwrite this file with an empty one.
11. Restart the cluster on both nodes by running clmgr start cluster.
 
Important: Always test the cluster thoroughly after migrating.
5.3.5 Nondisruptive upgrade from PowerHA V7.2.0
This method applies only when the AIX level is already at the appropriate levels to support PowerHA V7.2.1 or later.
 
Tip: To see a demonstration of performing a nondisruptive upgrade from PowerHA V7.1.3 to PowerHA V7.2.1, see this YouTube video.
Although the version level in the demonstration is different, the steps are identical when starting from Version 7.2.0. Complete the following steps:
1. Stop cluster services by performing an unmanage of the RGs on node Cass, as shown in Example 5-11.
Example 5-11 Stop the cluster node with the unmanage option
# clmgr stop node=Cass manage=unmanage
 
Warning: "WHEN" must be specified. Since it was not, a default of "now" will be
used.
Broadcast message from root@Cass (tty) at 14:27:38 ...
 
PowerHA SystemMirror on Cass shutting down. Please exit any cluster applications...
Cass: 0513-044 The clevmgrdES Subsystem was requested to stop.
.
"Cass" is now unmanaged.
 
Cass: Nov 8 2016 14:27:38/usr/es/sbin/cluster/utilities/clstop: called with flags -N -f
2. Upgrade PowerHA (update_all) by running the following command from within the directory in which the updates are (see Example 5-8 on page 124):
install_all_updates -vY -d .
3. Start the cluster services with an automatic manage of the RGs on Cass, as shown in Example 5-12.
Example 5-12 Start the cluster node with the automatic manage option
# clmgr start node=Cass
Warning: "WHEN" must be specified. Since it was not, a default of "now" will be
used.
 
Warning: "MANAGE" must be specified. Since it was not, a default of "auto" will
be used.
 
Verifying Cluster Configuration Prior to Starting Cluster Services.
Cass: start_cluster: Starting PowerHA SystemMirror
.
"Cass" is now online.
 
Starting Cluster Services on node: Cass
This may take a few minutes. Please wait...
Cass: Nov 8 2016 14:43:49Starting execution of /usr/es/sbin/cluster/etc/rc.cluster
Cass: with parameters: -boot -N -b -P cl_rc_cluster -A
Cass:
Cass: Nov 8 2016 14:43:49usage: cl_echo messageid (default) messageNov 8 2016 14:43:49usage: cl_echo messageid (default) messageRETURN_CODE=0
 
Important: Restarting cluster services with the Automatic option for managing RGs invokes the application start scripts. Make sure that the application scripts can detect that the application is already running, or copy and put a dummy blank executable script in their place and then copy them back after start.
Repeat the steps on node Jess.
4. Stop the cluster services by performing an unmanage of the RGs on node Jess, as shown Example 5-13.
Example 5-13 Stop the cluster node with the unmanage option
# clmgr stop node=Jess manage=unmanage
 
Warning: "WHEN" must be specified. Since it was not, a default of "now" will be
used.
Broadcast message from root@Jess (tty) at 14:52:58 ...
 
PowerHA SystemMirror on Jess shutting down. Please exit any cluster applications...
Jess: 0513-044 The clevmgrdES Subsystem was requested to stop.
.
"Jess" is now unmanaged.
 
Jess: Nov 8 2016 14:52:58/usr/es/sbin/cluster/utilities/clstop: called with flags -N -f
5. Upgrade PowerHA (update_all) by running the following command from within the directory in which the updates are:
install_all_updates -vY -d .
A summary of the PowerHA filesets update is shown in Example 5-14.
Example 5-14 Updating the PowerHA filesets
 
+-----------------------------------------------------------------------------+
Summaries:
+-----------------------------------------------------------------------------+
 
Installation Summary
--------------------
Name Level Part Event Result
-------------------------------------------------------------------------------
cluster.license 7.2.1.0 USR APPLY SUCCESS
cluster.es.migcheck 7.2.1.0 USR APPLY SUCCESS
cluster.es.migcheck 7.2.1.0 ROOT APPLY SUCCESS
cluster.es.cspoc.rte 7.2.1.0 USR APPLY SUCCESS
cluster.es.cspoc.cmds 7.2.1.0 USR APPLY SUCCESS
cluster.es.cspoc.rte 7.2.1.0 ROOT APPLY SUCCESS
cluster.es.client.rte 7.2.1.0 USR APPLY SUCCESS
cluster.es.client.utils 7.2.1.0 USR APPLY SUCCESS
cluster.es.client.lib 7.2.1.0 USR APPLY SUCCESS
cluster.es.client.clcomd 7.2.1.0 USR APPLY SUCCESS
cluster.es.client.rte 7.2.1.0 ROOT APPLY SUCCESS
cluster.es.client.lib 7.2.1.0 ROOT APPLY SUCCESS
cluster.es.client.clcomd 7.2.1.0 ROOT APPLY SUCCESS
cluster.es.server.testtool 7.2.1.0 USR APPLY SUCCESS
cluster.es.server.rte 7.2.1.0 USR APPLY SUCCESS
cluster.es.server.utils 7.2.1.0 USR APPLY SUCCESS
cluster.es.server.events 7.2.1.0 USR APPLY SUCCESS
cluster.es.server.diag 7.2.1.0 USR APPLY SUCCESS
cluster.es.server.rte 7.2.1.0 ROOT APPLY SUCCESS
cluster.es.server.utils 7.2.1.0 ROOT APPLY SUCCESS
cluster.es.server.events 7.2.1.0 ROOT APPLY SUCCESS
cluster.es.server.diag 7.2.1.0 ROOT APPLY SUCCESS
 
install_all_updates: Checking for recommended maintenance level 7200-00.
install_all_updates: Executing /usr/bin/oslevel -rf, Result = 7200-00
install_all_updates: Verification completed.
install_all_updates: Log file is /var/adm/ras/install_all_updates.log
install_all_updates: Result = SUCCESS
6. Start the cluster services by performing an automatic manage of the RGs on Jess, as shown in Example 5-15.
Example 5-15 Start the cluster node with the automatic manage option
# clmgr start node=Jess
 
Warning: "WHEN" must be specified. Since it was not, a default of "now" will be
used.
 
 
Warning: "MANAGE" must be specified. Since it was not, a default of "auto" will
be used.
 
Verifying Cluster Configuration Prior to Starting Cluster Services.
Jess: start_cluster: Starting PowerHA SystemMirror
...
"Jess" is now online.
 
Starting Cluster Services on node: Jessica
This may take a few minutes. Please wait...
Jess: Nov 8 2016 14:54:40Starting execution of /usr/es/sbin/cluster/etc/rc.cluster
Jess: with parameters: -boot -N -b -P cl_rc_cluster -A
Jess:
Jess: Nov 8 2016 14:54:40usage: cl_echo messageid (default) messageNov 8 2016 14:54:40usage: cl_echo messageid (default) messageRETURN_CODE=0
 
Important: Restarting cluster services with the Automatic option for managing RGs invokes the application start scripts. Make sure that the application scripts can detect that the application is already running, or copy and put a dummy blank executable script in their place and then copy them back after start.
7. Verify that the version numbers show correctly, as shown in Example 5-1 on page 116.
8. Ensure that the file /usr/es/sbin/cluster/netmon.cf exists on all nodes and that it contains at least one pingable IP address because the installation or upgrade of PowerHA filesets can overwrite this file with an empty one.