Migration
This chapter covers the most common migration scenarios from IBM PowerHA V6.1 and PowerHA V7.1.x to PowerHA V7.2.
This chapter contains the following topics:
 – Migration planning
 – Migration scenarios from PowerHA V6.1
 – Migration scenarios from PowerHA V7
5.1 Migration planning
Proper planning of the migration of existing clusters to IBM PowerHA SystemMirror V7.2.0 is important to minimize both the risk of unexpected turns during the migration and the duration of the process itself. The following section describes a set of actions to consider when planning the migration of existing PowerHA clusters.
Additional specifics when migrating from PowerHA V6.1, including crucial interim fixes, can be found on the following website:
Before beginning the actual migration procedure, always have a contingency plan in case any problems occur. Some general suggestions are as follows:
Create a backup of rootvg.
In some cases, depending on the starting point, upgrading PowerHA also requires updating or upgrading the IBM AIX base operating system. Therefore, a good practice is to save your existing rootvg. One method is to create a clone on another free disk on the system by using alt_disk_copy. That way, a simple change to the bootlist and a reboot can easily return the system to its original state.
Other options are available, such as mksysb, alt_disk_install, and multibos.
Save the existing cluster configuration.
Create a cluster snapshot before the migration. By default, it is stored in the following directory. Make a copy of it, and also save a copy off the cluster nodes for extra safety:
/usr/es/sbin/cluster/snapshots
Save any user-provided scripts.
This most commonly refers to custom events, pre-events and post-events, application controller, and application monitoring scripts.
Verify, by using the lslpp -h cluster.* command, that the current version of PowerHA is in the COMMIT state and not in the APPLY state. If not, run smit install_commit before you install the most recent software version.
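The COMMIT-state check above can be scripted by parsing lslpp -h output. The following is a minimal sketch: the two-line lslpp -h layout in the sample is an assumption to verify against your AIX level, and on a live node you would pipe the real command output into the function instead of the sample text.

```shell
# Print PowerHA file sets whose most recent action is APPLY (not COMMIT).
# On a live node, pipe real output instead:  lslpp -h "cluster.*" | find_applied
find_applied() {
    awk '$1 ~ /^[a-z].*\./ { fs = $1 }              # remember file set name line
         $1 ~ /^[0-9]/ && $2 == "APPLY" { print fs }'  # flag APPLY history lines
}

# Sample output mimicking the lslpp -h layout (an assumption for illustration)
sample='  Fileset         Level     Action  Status    Date      Time
  cluster.es.server.rte
                  6.1.0.15  APPLY   COMPLETE  10/12/15  10:01:02
  cluster.es.client.rte
                  6.1.0.15  COMMIT  COMPLETE  10/12/15  10:01:02'

printf '%s\n' "$sample" | find_applied
```

Any file set printed by the function still needs smit install_commit before the upgrade.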
5.1.1 PowerHA SystemMirror V7.2.0 requirements
The following sections list the software and hardware requirements that must be met for migrating to PowerHA SystemMirror V7.2.0.
Software requirements
Ensure that you meet the following software requirements:
IBM AIX V6.1 with Technology Level 9 with Service Pack 5, or later
IBM AIX V7.1 with Technology Level 3 with Service Pack 5, or later
IBM AIX V7.1 with Technology Level 4 with Service Pack 1, or later
IBM AIX V7.2 with Service Pack 1, or later
Migrating from PowerHA SystemMirror V6.1 or earlier requires installing these AIX file sets:
bos.cluster.rte
bos.ahafs
bos.clvm.enh
devices.common.IBM.storfwork.rte
clic.rte (for secured encryption communication options of clcomd)
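A quick way to confirm these prerequisites is to compare the required file set names against the installed list. The sketch below assumes a flattened list of installed file set names; on AIX, one possible source (verify on your system) is lslpp -Lc output with the file set field cut out.

```shell
# Required AIX file sets for migration from PowerHA V6.1 (from the list above)
required="bos.cluster.rte bos.ahafs bos.clvm.enh devices.common.IBM.storfwork.rte clic.rte"

# Report which required file sets are absent from a space-separated
# installed list. On AIX, something like:  lslpp -Lc | cut -d: -f2
check_filesets() {
    installed="$1"
    missing=""
    for fs in $required; do
        case " $installed " in
            *" $fs "*) ;;                    # present
            *) missing="$missing $fs" ;;     # not found
        esac
    done
    echo "missing:${missing:-none}"
}

# Example: a node that is missing the storage framework and clic file sets
check_filesets "bos.cluster.rte bos.ahafs bos.clvm.enh"
```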
Hardware
The hardware characteristics are as follows:
Support is available only for IBM POWER5 technologies and later.
Shared disks for the cluster repository.
Choose an appropriate size; usually 1 gigabyte (GB) is sufficient for a two-node cluster. Ensure that the storage subsystem that hosts the repository disk is supported, and that the adapters and the multipath driver used for the connection to the repository disk are supported. The repository disk must be accessible to all nodes within a site, but it does not need to be accessible across sites.
It is possible to repurpose an existing disk heartbeat device as the cluster repository disk. However, the disk must be clear of any contents other than a PVID.
If you decide to use multicast heartbeating, it must be enabled, and you must ensure that the multicast traffic generated by any of the cluster nodes is properly forwarded by the network infrastructure between all cluster nodes.
HBA/SAN level heartbeating
Though it is a good practice to use as many different heartbeating lines of communication as possible, this one is optional. It is used only within a site and not across sites. To use it, the Fibre Channel (FC) adapter must support the tme attribute, and that attribute must be enabled. This typically applies to most 4 gigabit (Gb) and newer FC adapters. However, most converged adapters do not offer this capability.
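When shortlisting a repository disk, it helps to filter a disk inventory by minimum size, similar to what the clmigcheck utility does later when it shows only candidates of 512 MB or more. The sketch below works over sample PVID/name/size rows (hypothetical values); on AIX, per-disk sizes can be obtained with getconf DISK_SIZE /dev/hdiskN.

```shell
# Keep only disks at or above the minimum repository size (in MB).
min_mb=512

# Sample inventory: PVID, disk name, size in MB (hypothetical values)
candidates='00f92db1df804285 hdisk5 2048
00f92db1df8044d8 hdisk2 1024
00f92db1df804999 hdisk9 256'

printf '%s\n' "$candidates" |
    awk -v min="$min_mb" '$3 >= min { printf "%s (%s), %d MB\n", $1, $2, $3 }'
```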
5.1.2 Deprecated features
Although this list applies to PowerHA V7.1 and later, it is mostly important when migrating from PowerHA V6.1. If your existing cluster contains any of the features in items 1 - 4, the cluster cannot be migrated until they are removed from the cluster configuration.
1. Internet Protocol address takeover (IPAT) with IP replacement
2. Locally administered address (LAA) for Media Access Control (MAC) hardware address takeover (HWAT)
3. Heartbeat over IP aliases
4. The following IP network types:
 – Asynchronous transfer mode (ATM)
 – Fiber Distributed Data Interface (FDDI)
 – Token Ring
5. The following point-to-point (non-IP) network types:
 – Recommended Standard 232 (RS232)
 – Target Mode Small Computer Serial Interface (TMSCSI)
 – Target Mode Serial Storage Architecture (TMSSA)
6. Disk heartbeat (diskhb)
7. Multinode disk heartbeat (mndhb)
8. Two-node configuration assistant
9. Web System Management Interface Tool (WebSMIT), which was replaced by the IBM Systems Director plug-in
 
Important: PowerHA V7.2 no longer provides the IBM Systems Director plug-in.
5.1.3 Migration options
There are four methods of migrating a PowerHA cluster. Each is briefly described in the following list, and in more detail in the corresponding migration scenarios included in this chapter:
Offline A migration method where PowerHA is brought offline on all nodes before performing the software upgrade. During this time, the cluster resource groups are not available.
Rolling A migration method from one PowerHA version to another during which cluster services are stopped one node at a time. That node is upgraded and reintegrated into the cluster before the next node is upgraded. It requires little downtime, mostly for moving the resource groups between nodes to allow each node to be upgraded.
Snapshot A migration method from one PowerHA version to another, during which you take a snapshot of the current cluster configuration, stop cluster services on all nodes, and uninstall the current version of PowerHA. After this, you install the preferred version of SystemMirror, convert the snapshot by running the clconvert_snapshot utility, and finally restore the cluster configuration from the converted snapshot.
Non-disruptive This method is by far the most advised method of migration, whenever possible. As its name implies, the cluster resource groups remain available, and the applications remain functional, during the cluster migration. All cluster nodes are sequentially (one node at a time) brought to an unmanaged state, allowing all resource groups (RGs) on that node to remain operational while cluster services are stopped.
However, this method can generally be used only when applying service packs to the cluster, not when performing major upgrades. This option does not apply when an upgrade of the base operating system is also required, such as when migrating to PowerHA V7.1.x or later from an earlier version.
 
Important: When there are nodes in a cluster running two separate versions of PowerHA, this configuration is considered to be a mixed cluster state. A cluster in this state does not support any configuration changes or synchronization until all of the nodes have been migrated. Be sure to complete either the rolling or non-disruptive migration as soon as possible to ensure stable cluster functionality.
Tip: After Cluster Aware AIX (CAA) is installed, the following line is added to the /etc/syslog.conf file:
*.info /var/adm/ras/syslog.caa rotate size 1m files 10
Be sure to enable verbose logging by adding the following line:
*.debug /tmp/syslog.out rotate size 10m files 10
Then, issue a refresh -s syslogd command. This command provides valuable information if troubleshooting is required.
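The Tip above can be applied idempotently so that repeated runs do not duplicate the *.debug line. This sketch edits a scratch copy of the file; on a real node you would target /etc/syslog.conf and then run refresh -s syslogd.

```shell
# Work on a scratch copy of syslog.conf seeded with the line CAA adds
conf=$(mktemp)
printf '*.info /var/adm/ras/syslog.caa rotate size 1m files 10\n' > "$conf"

# Append the verbose-logging line only if it is not already present
add_debug_line() {
    line='*.debug /tmp/syslog.out rotate size 10m files 10'
    grep -qF "$line" "$1" || printf '%s\n' "$line" >> "$1"
}

add_debug_line "$conf"
add_debug_line "$conf"   # second call is a no-op (idempotent)
cat "$conf"
```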
 
5.1.4 Migration steps
The following sections give an overview of the steps that are required to perform each type of migration. Detailed examples of each migration type can be found in 5.2, “Migration scenarios from PowerHA V6.1” on page 117 and 5.3, “Migration scenarios from PowerHA V7” on page 131.
Offline method
Some of these steps can often be performed in parallel, because the entire cluster will be offline. However, make note of the differences between migrating from PowerHA V6.1 versus PowerHA V7.
Additional specifics when migrating from PowerHA V6.1, including crucial interim fixes, can be found on the following website:
 
Important: You should always start with the current service packs available for PowerHA, AIX, and Virtual Input/Output Server (VIOS).
To migrate using the offline method, complete the following steps:
1. Stop cluster services on all nodes, and choose to bring resource groups offline.
2. Upgrade AIX (as needed):
a. Ensure that prerequisites, such as bos.cluster, are installed.
b. Restart.
3. If you are upgrading from PowerHA V6.1, continue to step 4. If not, skip to step 8.
4. Verify that clcomd is active.
lssrc -s clcomd
5. Update /etc/cluster/rhosts:
a. Enter either cluster node host names or IP addresses; only one per line.
b. Run the refresh -s clcomd command.
6. Run clmigcheck on one node. Alternatively, you can run clmigcheck -l 7.2.0 to specify the target version up front and skip option 1 in the next step:
a. Choose option 1, and then choose to which version of PowerHA you are migrating.
If you already specified the version by running clmigcheck -l, the following message is displayed:
You have already specified version 7.2.0.
Press <Enter> to continue, or "x" to exit...
b. Choose option 2 to verify that the cluster configuration is supported (assuming
no errors).
c. Then choose option 4:
 • Choose Multicast or Unicast.
 • Choose the repository disk.
d. Exit the clmigcheck menu.
7. Review the contents of /var/clmigcheck/clmigcheck.txt for accuracy.
8. Upgrade PowerHA:
a. If migrating from PowerHA V6.1, perform this action on only one node.
b. If migrating from PowerHA V7.1, you can perform this action on both nodes in parallel and skip to step 11.
9. Review the /tmp/clconvert.log file.
10. Run clmigcheck and upgrade PowerHA on the remaining node.
When running clmigcheck on each additional node, the menu does not appear, and no further actions are needed. On the last node, it creates the CAA cluster.
11. Restart cluster services.
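The /etc/cluster/rhosts format required in step 5 (exactly one host name or IP address per line) can be validated with a small script before refreshing clcomd. This sketch validates a scratch file; point the function at /etc/cluster/rhosts on a real node.

```shell
# Check that a rhosts-style file has only one entry per line
validate_rhosts() {
    # fail if any line contains more than one whitespace-separated token
    if awk 'NF > 1 { bad = 1 } END { exit bad }' "$1"; then
        echo "rhosts format OK"
    else
        echo "rhosts format INVALID: one entry per line required"
    fi
}

# Scratch file using the node names from this chapter's examples
rhosts=$(mktemp)
printf 'Jess\nCass\n' > "$rhosts"
validate_rhosts "$rhosts"
```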
Rolling method
A rolling migration provides the least amount of downtime by upgrading one node at a time.
Additional specifics when migrating from PowerHA V6.1, including crucial interim fixes, can be found on the following website:
 
Important: You should always start with the current service packs available for PowerHA, AIX, and VIOS.
To migrate using the rolling method, complete the following steps:
1. Stop cluster services on one node (move resource group as needed).
 
2. Upgrade AIX (as needed):
a. Ensure that prerequisites, such as bos.cluster, are installed.
b. Reboot.
3. If upgrading from PowerHA V6.1, continue to step 4. If not, skip to step 9.
4. Verify that clcomd is active on the downed node:
lssrc -s clcomd
5. Update /etc/cluster/rhosts.
Enter either cluster node host names or IP addresses; only one per line.
6. Run the refresh -s clcomd command.
7. Run clmigcheck on the downed node. Alternatively, you can run clmigcheck -l 7.2.0 to specify the target version up front and skip option 1 in the next step:
a. Choose option 1, and then choose to which version of PowerHA you are migrating.
If you already specified the version by running clmigcheck -l, the following message is displayed:
You have already specified version 7.2.0.
Press <Enter> to continue, or "x" to exit...
b. Choose option 2 to verify that the cluster configuration is supported (assuming
no errors).
c. Then choose option 4.
 • Choose the repository disk device to be used for each site.
 • Choose Multicast or Unicast.
d. Exit the clmigcheck menu.
8. Review contents of /var/clmigcheck/clmigcheck.txt for accuracy.
9. Upgrade PowerHA on the cluster node where clmigcheck was run.
10. Review the /tmp/clconvert.log file.
11. Restart cluster services.
12. Repeat these steps for each node.
When running clmigcheck on each additional node, a menu does not appear, and no further actions are needed. On the last node, it automatically creates the CAA cluster.
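During a rolling migration, it is useful to confirm the cluster manager state on each node (for example, ST_INIT after stopping, and a stable state after restart) before moving to the next node. This sketch extracts the state from lssrc -ls clstrmgrES output; the sample output format is an assumption, so verify the field label on your release.

```shell
# Extract the "Current state" value from clstrmgrES status output.
# On a live node:  lssrc -ls clstrmgrES | node_state
node_state() {
    awk -F': *' 'tolower($1) ~ /current state/ { print $2 }'
}

# Sample output lines (format assumed for illustration)
sample='Current state: ST_INIT
sccsid = "@(#)36 1.135 src/43haes/usr/sbin/cluster"'

printf '%s\n' "$sample" | node_state
```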
Snapshot method
Some of these steps can often be performed in parallel, because the entire cluster will be offline. However, make note of the differences between migrating from PowerHA V6.1 versus PowerHA V7.1.
Additional specifics when migrating from PowerHA V6.1, including crucial interim fixes, can be found on the following website:
 
Important: You should always start with the current service packs available for PowerHA, AIX, and VIOS.
To migrate using the snapshot method, complete the following steps:
1. Stop cluster services on all nodes, and choose to bring resource groups offline.
2. Create a cluster snapshot.
This step assumes that you have not previously created one. Save copies of it from
the cluster.
3. Upgrade AIX (as needed):
a. Ensure that prerequisites are installed, such as bos.cluster.
b. Reboot.
4. If upgrading from PowerHA V6.1, continue to step 5. If not, skip to step 8.
5. Verify that clcomd is active:
lssrc -s clcomd
6. Update /etc/cluster/rhosts:
a. Enter either cluster node host names or IP addresses; only one per line.
b. Run the refresh -s clcomd command.
7. Run clmigcheck on one node. Alternatively, you can run clmigcheck -l 7.2.0 to specify the target version up front and skip option 1 in the next step:
a. Choose option 1 and then choose to which version of PowerHA you are migrating.
If you already specified the version by running clmigcheck -l, the following message is displayed:
You have already specified version 7.2.0.
Press <Enter> to continue, or "x" to exit...
b. Choose option 3.
Select a specific snapshot (from /usr/es/sbin/cluster/snapshots) to verify that the cluster configuration in the snapshot is supported (assuming no errors).
c. Then, choose option 4:
 • Choose Multicast or Unicast.
 • Choose the repository disk.
d. Exit the clmigcheck menu.
e. Review the contents of /var/clmigcheck/clmigcheck.txt for accuracy.
8. Upgrade PowerHA:
a. If you are migrating from PowerHA V6.1, perform this action on only one node.
b. If you are migrating from PowerHA V7.1, you can perform the upgrade on both nodes in parallel and skip to step 11.
9. Review the /tmp/clconvert.log file.
10. Run clmigcheck and upgrade PowerHA on the remaining node.
When running clmigcheck on each additional node, the menu does not appear, and no further actions are needed. On the last node, it creates the CAA cluster.
11. Restart cluster services.
Non-disruptive upgrade
This method applies only when the AIX level is already at appropriate levels to support PowerHA V7.2 (or later). Perform the following steps fully on one node at a time:
1. Stop cluster services with unmanage of the resource groups.
2. Upgrade PowerHA (update_all).
3. Start cluster services with automatic manage of the resource groups.
 
Important: When you restart cluster services with the Automatic option for managing resource groups, this action invokes one or more application start scripts. Make sure that the application scripts can detect that the application is already running. If they cannot detect this, copy them somewhere for backup, put a dummy blank executable script in their place, and then copy them back after startup.
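The dummy-script swap described in the Important note can be sketched as follows. The paths here are scratch files for illustration; substitute your real application controller start script.

```shell
# Create a stand-in for an application controller start script (illustration)
app_start=$(mktemp)
printf '#!/bin/sh\necho "starting app"\n' > "$app_start"
chmod +x "$app_start"

# 1. Save the real script aside
cp "$app_start" "$app_start.orig"

# 2. Install a no-op placeholder that always succeeds, so a restart of
#    cluster services does not try to start the already-running application
printf '#!/bin/sh\nexit 0\n' > "$app_start"
chmod +x "$app_start"

# ... restart cluster services here (for example, smitty clstart) ...

# 3. Restore the original script after startup completes
mv "$app_start.orig" "$app_start"
sh "$app_start"
```

A start script that can detect its application is already running makes this swap unnecessary.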
5.1.5 Clmigcheck
Before migrating to PowerHA version 7, run the clmigcheck program to prepare the cluster for migration.
 
Important: Make sure that you have the current version of clmigcheck. Consult the technical bulletin, and contact support as needed to obtain an interim fix from the following website:
The program has two functions:
It validates the current cluster configuration (Object Data Manager (ODM) with option 2 or snapshot with option 3) for migration. If the configuration is not valid, the program notifies you of any unsupported elements. If an error is encountered, it must be corrected or you cannot migrate. If a warning is displayed, such as for disk heartbeat, you can continue.
It prepares for the new cluster by obtaining the disks to be used for the repository disks and multicast address (if chosen).
The clmigcheck program goes through the following stages:
1. Performing the initial run
When the clmigcheck program runs, it checks whether it has been run before by looking for a /var/clmigcheck/clmigcheck.txt file. If it does exist from a previous run, it displays the message shown in Figure 5-1.
This appears to be the first node in the cluster to begin the migration
process. However, a migration data file from a previous invocation of
"clmigcheck" already exists. This file will be overwritten if you
continue.
 
Do you want to continue (y/n)? (y)
Figure 5-1 The clmigcheck.txt file exists warning
It then checks whether the latest PowerHA V6.1 fix pack, SP15, is installed; if not, it displays the warning message shown in Figure 5-2. It also creates a cluster snapshot for recovery purposes.
Warning: PowerHA version 6.1 has been detected, but the current fix
level is 12. IBM strongly advises that service pack 15 or later
be installed prior to performing this migration.
 
Do you want to attempt the migration anyway (n/y)? (n)
Figure 5-2 The clmigcheck latest fixes warning
2. Verifying that the cluster configuration is suitable for migration
From the clmigcheck menu, you can select option 2 or 3 to check your existing ODM or snapshot configuration to verify whether the environment is valid for migration. This checks many things, including all of the options in 5.1.2, “Deprecated features” on page 109.
 
3. Creating the CAA required configuration
After performing option 2 or 3, choose option 4. Option 4 creates the /var/clmigcheck/clmigcheck.txt file with the information entered, and the file is copied to all nodes in the cluster.
When run on the last node of the cluster to be migrated, the clmigcheck program uses the mkcluster command and passes the cluster parameters from the existing PowerHA cluster, along with the repository disk and multicast address (if applicable).
5.1.6 Clmigcheck enhancements
The most recent version of clmigcheck has been enhanced to include additional checks to further maximize the likelihood for a successful migration. These include, but are not limited to, the following enhancements:
Adds a new verification-only flag, -v, to perform most checks without entering the menu and creating a clmigcheck.txt file. This is useful to run well in advance of performing a migration (for example, clmigcheck -v).
Adds a new flag, -l, to specify the target version for migration (for example, clmigcheck -l 7.2.0).
Adds a new flag, -g, to skip version checking (for example, clmigcheck -g). It is rare that you would ever run this option.
Verifies that the cluster is in sync.
Ensures that /etc/cluster/rhosts is properly created.
Checks if the last PowerHA V6.1 service pack, SP15, is installed.
Automatically creates a cluster snapshot on the first run on the first node.
Performs additional log and trace capturing by appending clmigcheck.log into clutils.log.
Changes the backup strategy of existing clmigcheck files to be time and date specific.
Ensures that CAA logging is enabled in syslog.conf by changing the previous default setting in syslog.conf from info to debug.
Ensures that the -v flag is not used in the mkcluster command.
Adds an option to specify which version is being migrated to. This is required because it affects which restrictions apply based on the target version. It also shows only the latest version of PowerHA supported based on the AIX level detected.
Ensures that inetd is active.
Verifies that both /etc/cluster/locks and cluster0 exist.
Checks that all critical CAA services exist in the following locations:
 – /etc/inetd.conf
 – /etc/services
 – /etc/syslog.conf
Ensures that a CAA cluster does not already exist.
Includes additional checks for the following IBM High Availability Cluster Multiprocessing (IBM HACMP) deprecated components:
 – HACMPsp2
 – HACMPx25
 – HACMPsna
 – HACMPcommadapter
 – HACMPcommlink
Checks for consistency in persistent host name and communication paths.
Checks that name resolution in /etc/netsvc.conf is configured to resolve locally first.
Ensures that no service or persistent address is set to the host name.
Makes the following additions specifically for the repository disk:
 – Shows only valid repository candidate disks that are 512 megabytes (MB) or larger, and displays their size.
 – Also verifies that repository disk candidates are not Oracle Real Application Clusters (RAC) or Oracle Automatic Storage Management (ASM) disks.
 – When a repository disk is chosen, it checks to make sure that the no_reserve attribute for reservation_policy is set.
 – Attempts to verify that the disk chosen for repository does not have any leftover repository contents on it from being used previously.
When performing a snapshot migration, the clmigcheck menu now generates a list of available snapshots to choose from, instead of making the user type in a snapshot.
When performing a snapshot migration, because clmigcheck is run only once, it automatically propagates /etc/cluster/rhosts on all other remote nodes.
Adds messaging and further clarifies existing messages.
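Some of these conditions can be pre-checked manually before running clmigcheck. For example, the following sketch counts "hosts" entries in a netsvc.conf-style file; note that it checks only the entry count, whereas clmigcheck may apply stricter rules to the file's contents. It scans a scratch copy here; point the function at /etc/netsvc.conf on a real node.

```shell
# Count lines of the form "hosts = ..." (leading whitespace allowed)
count_hosts_entries() {
    grep -c '^[[:space:]]*hosts[[:space:]]*=' "$1"
}

# Scratch netsvc.conf with one comment line and one hosts entry
netsvc=$(mktemp)
printf '# name resolution order\nhosts = local4,bind4\n' > "$netsvc"
count_hosts_entries "$netsvc"
```

A result other than 1 indicates the file needs cleanup before migration.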
5.1.7 Migration matrix to PowerHA SystemMirror V7.2.0
Table 5-1 shows the migration options between versions of PowerHA.
Table 5-1 Migration matrix table
PowerHA       To V6.1         To V7.1.1      To V7.1.2       To V7.1.3       To V7.2.0
From V5.5     R, S, O, N(1)   Upgrade to PowerHA V6.1 SP15 first
From V6.1                     Update to SP15 first; then R, S, O are all viable options to V7.x.x
From V7.1.0                   R(2), S, O     R(2), S, O      R(2), S, O      R(2), S, O
From V7.1.1                                  R, S, O, N(2)   R, S, O, N(2)   R, S, O, N(2)
From V7.1.2                                                  R, S, O, N(2)   R, S, O, N(2)
From V7.1.3                                                                  R, S, O, N(2)

(1) R = Rolling, S = Snapshot, O = Offline, N = Non-disruptive
(2) This option is available only if the beginning AIX level is high enough to support the newer version.
5.2 Migration scenarios from PowerHA V6.1
This section further details test scenarios used in each of these migrations methods:
Rolling migration
Snapshot migration
Offline migration
5.2.1 PowerHA V6.1 test environment overview
For the following scenarios, we use a two-node cluster with nodes Jess and Cass. It consists of a single resource group configured in a typical hot-standby configuration.
Our test configuration consisted of the following hardware and software (see Figure 5-3):
IBM POWER8 S814 with firmware 840
Hardware Management Console (HMC) 840
AIX V7.1.4
PowerHA V6.1 SP15
IBM Storwize V7000
Figure 5-3 PowerHA V6.1 test migration cluster
5.2.2 Rolling migration from PowerHA V6.1
For the rolling migration, we begin with the standby node, Cass.
 
Tip: A demonstration of performing a rolling migration from PowerHA V6.1 to PowerHA V7.2 is available at the following website:
We performed the following steps:
1. Stop cluster services on node Cass.
This was accomplished by running smitty clstop and choosing the options shown in Figure 5-4 on page 119. The OK response appears quickly. Make sure that the cluster node is in the ST_INIT state, which can be verified from the lssrc -ls clstrmgrES | grep state output.
2. Upgrade AIX.
In our scenario we already have supported AIX levels for PowerHA V7.2, and do not need to perform this step. But if you do, a restart will be required before continuing.
 
Important: If you are upgrading to AIX V7.2, ensure that you have PowerHA V7.2 SP1 (or later). Otherwise, obtain at least the interim fix for APAR IV79386 from support, because there is a known issue that prevents a rolling migration from succeeding.
Also, see the AIX V7.2 release notes regarding IBM Reliable Scalable Cluster Technology (RSCT) file sets when upgrading:
                             Stop Cluster Services
 
Type or select values in entry fields.
Press Enter AFTER making all wanted changes.
 
[Entry Fields]
* Stop now, on system restart or both now +
Stop Cluster Services on these nodes [Cass] +
BROADCAST cluster shutdown? true +
* Select an Action on Resource Groups Bring Resource Groups> +
 
F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do
Figure 5-4 Stopping cluster services
3. Verify that the clcomd daemon is active, as shown in Figure 5-5.
[root@Cass] /# lssrc -s clcomd
Subsystem Group PID Status
clcomd caa 3670016 active
Figure 5-5 Verify that clcomd is active
4. Next, edit the CAA-specific communication file, /etc/cluster/rhosts. You can enter either the host name for each node, or the IP address that resolves to the host name. However, there must be only one entry per line. We entered host names as shown in Figure 5-6.
[root@Cass] /# vi /etc/cluster/rhosts
Jess
Cass
Figure 5-6 The /etc/cluster/rhosts contents
5. Refresh clcomd by running refresh -s clcomd.
6. Run /usr/sbin/clmigcheck, and the menu shown in Figure 5-7 on page 120 displays.
Alternatively, you can run clmigcheck -l 7.2.0 to specify the target version up front and skip option 1 in the next step.
 
Attention: During our testing of clmigcheck, we encountered an error about our /etc/netsvc.conf file containing more than one line, as shown in the following output. We had to remove all of the comment lines from the file on each node for successful execution. This was reported to development as a defect:
## One or more possible problems have been detected
ERROR: exactly one “hosts” entry is required in /etc/netsvc.conf on node “Cass”. A value such as the following should be
the only line in /etc/netsvc.conf:
hosts = local4,bind4
Or for IPv6 environments:
hosts = local6,bind6
Figure 5-7 shows the main menu.
------------[ PowerHA SystemMirror Migration Check ]-------------
 
Please select one of the following options:
 
1 -> Enter the version you are migrating to.
 
2 -> Check ODM configuration.
 
3 -> Check snapshot configuration.
 
4 -> Enter repository disk and IP addresses.
 
 
Select one of the above, "x" to exit, or "h" for help: 1
Figure 5-7 Clmigcheck main menu
7. First, we choose Option 1 and then choose Option 5, as shown in Figure 5-8. After specifying the PowerHA version level to which we are migrating (7.2.0) and pressing Enter, we are returned back to the main menu.
------------[ PowerHA SystemMirror Migration Check ]-------------
 
Which version of IBM PowerHA SystemMirror for AIX are you migrating to?
 
1 -> 7.1.0
2 -> 7.1.1
3 -> 7.1.2
4 -> 7.1.3
5 -> 7.2.0
 
Select one of the above or "h" for help or "x" to exit: 5
Figure 5-8 Clmigcheck choosing PowerHA version
8. Because this was a rolling migration, we chose Option 2 and pressed Enter. In most environments, it is common to have a disk heartbeat network configured. If that is the case, a warning appears.
This warning is normal, because the disk heartbeat network is removed during the last phase of the migration. In our case, there were no other unsupported elements, so after pressing Enter a message displays to that effect, as shown in Figure 5-9.
------------[ PowerHA SystemMirror v7.2.0 Migration Check ]-------------
 
CONFIG-WARNING: The configuration contains unsupported hardware: Disk
Heartbeat network. The PowerHA network name is net_diskhb_01.
This will be removed from the configuration during the
migration to PowerHA SystemMirror 7.
 
Press <Enter> to continue...
 
------------[ PowerHA SystemMirror v7.2.0 Migration Check ]-------------
 
The ODM has no unsupported elements.
 
Press <Enter> to continue...
Figure 5-9 The clmigcheck disk heartbeat warning
9. After pressing Enter to continue, the panel returns to the main clmigcheck menu shown in Figure 5-7 on page 120. This time, we choose Option 4 and press Enter. We were presented with the options for either Multicast or Unicast. We chose Option 3 for Unicast, as shown in Figure 5-10.
------------[ PowerHA SystemMirror v7.2.0 Migration Check ]-------------
 
Your cluster can use multicast or unicast messaging for heartbeat.
Multicast addresses can be user specified or default (i.e. generated by AIX).
Select the message protocol for cluster communications:
 
1 -> DEFAULT_MULTICAST
2 -> USER_MULTICAST
3 -> UNICAST
 
Select one of the above or "h" for help or "x" to exit:3
Figure 5-10 The clmigcheck option for Multicast or Unicast
10. Afterward, the menu to select a repository disk displays. In our case, we chose the only 1 GB disk, hdisk2, by selecting Option 4, as shown in Figure 5-11.
------------[ PowerHA SystemMirror v7.2.0 Migration Check ]-------------
 
Select the disk to use for the repository:
 
1 -> 00f92db1df804285 (hdisk5 on Cass), 2 GB
2 -> 00f92db1df804342 (hdisk4 on Cass), 2 GB
3 -> 00f92db1df804414 (hdisk3 on Cass), 2 GB
4 -> 00f92db1df8044d8 (hdisk2 on Cass), 1 GB
 
Select one of the above or "h" for help or "x" to exit: 4
Figure 5-11 Clmigcheck choosing repository disk
After choosing the repository disk, the clmigcheck menu displays the final message that the new version of PowerHA can now be installed, as shown in Figure 5-12.
No further checking is required on this node.
You can install the new version of PowerHA SystemMirror.
Figure 5-12 Clmigcheck last message
11. Upgrade PowerHA on node Cass. To upgrade PowerHA, we simply run smitty update_all, as shown in Figure 5-13.
          Update Installed Software to Latest Level (Update All)
 
Type or select values in entry fields.
Press Enter AFTER making all wanted changes.
 
[TOP] [Entry Fields]
* INPUT device / directory for software .
* SOFTWARE to update _update_all
PREVIEW only? (update operation will NOT occur) no +
COMMIT software updates? yes +
SAVE replaced files? no +
AUTOMATICALLY install requisite software? yes +
EXTEND file systems if space needed? yes +
VERIFY install and check file sizes? no +
DETAILED output? no +
Process multiple volumes? yes +
ACCEPT new license agreements? yes +
Preview new LICENSE agreements? no +
 
[MORE...6]
 
F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do
Figure 5-13 Smitty update_all
 
Important: Always remember to set ACCEPT new license agreements? to yes.
12. Ensure that the file /usr/es/sbin/cluster/netmon.cf exists, and that it contains at least one pingable IP address, because installation or upgrade of PowerHA file sets can overwrite this file with an empty one. An illustration is shown in Example 5-21 on page 140.
13. Start cluster services on node Cass by running smitty clstart. During the startup, a message displays about cluster verification being skipped because of mixed versions, as shown in Figure 5-14 on page 123.
Important: While in this mixed cluster state, do not make any cluster changes or attempt to synchronize the cluster.
Cluster services are running at different levels across
the cluster. Verification will not be invoked in this environment.
 
Starting Cluster Services on node: PHA72b
This may take a few minutes. Please wait...
Cass: Nov 25 2015 15:08:33 Starting execution of /usr/es/sbin/cluster/etc/rc.cluster
Cass: with parameters: -boot -N -A -C interactive -P cl_rc_cluster
Figure 5-14 Verification skipped
14. After starting, validate that the cluster is stable before continuing by running the lssrc -ls clstrmgrES |grep -i state command.
15. Now we repeat the previous steps for node Jess. However, when stopping cluster services, we choose the Move Resource Groups option, as shown in Figure 5-15.
                             Stop Cluster Services
 
Type or select values in entry fields.
Press Enter AFTER making all wanted changes.
 
[Entry Fields]
* Stop now, on system restart or both now +
Stop Cluster Services on these nodes [Jess] +
BROADCAST cluster shutdown? true +
* Select an Action on Resource Groups Move Resource Groups +
Figure 5-15 Run clstop and move the resource group
16. Upgrade AIX.
In our scenario we already have supported AIX levels for PowerHA V7.2 and do not need to perform this step. But if you do, a restart will be required before continuing.
17. Verify that the clcomd daemon is active, as shown in Figure 5-16.
[root@Jess] /# lssrc -s clcomd
Subsystem Group PID Status
clcomd caa 50467008     active
Figure 5-16 Verify clcomd is active
18. Next, edit the CAA-specific communication file, /etc/cluster/rhosts. You can enter either the host name for each node, or the IP address that resolves to the host name. However, there must be only one entry per line. We entered host names, as shown in Figure 5-17.
[root@Jess] /# vi /etc/cluster/rhosts
Jess
Cass
Figure 5-17 /etc/cluster/rhosts contents
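Because CAA is strict about the file format, the one-entry-per-line rule can be checked with a short helper. This is a sketch; the `valid_rhosts` name is hypothetical:

```shell
# valid_rhosts: reads an rhosts-style file on stdin and succeeds only if
# every non-blank line holds exactly one host name or IP address
valid_rhosts() {
    awk 'NF == 0 { next }              # ignore blank lines
         NF != 1 { bad = 1 }           # flag lines with multiple entries
         { n++ }
         END { exit (n == 0 || bad) ? 1 : 0 }'
}

# Example: valid_rhosts < /etc/cluster/rhosts && echo "format OK"
```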
19. Refresh clcomd by running the refresh -s clcomd command.
20. Run the /usr/sbin/clmigcheck command.
Unlike the first execution, the menu is not displayed this time. Instead, the message shown in Figure 5-18 displays.
It appears that this is the last node in this cluster that still needs to
be migrated. All the other nodes have already completed their migration to
PowerHA SystemMirror version 7. Please confirm that this is correct, and
then the migration process will be completed by creating an appropriate
CAA cluster on the cluster nodes.
 
** After the successful creation of the CAA cluster, you _MUST_ install
SystemMirror version 7 on this node as soon as possible. Until version
7 is installed, communication with remote nodes will not be possible.
 
Press <Enter> to continue, or "x" to exit...
Figure 5-18 Clmigcheck to create CAA cluster
21. Upon pressing Enter, a confirmation about creating a CAA cluster displays, as shown in Figure 5-19.
------------[ PowerHA SystemMirror Migration Check ]-------------
 
About to configure a 2 node standard CAA cluster, which can take up to 2
minutes.
 
Press <Enter> to continue...
Figure 5-19 CAA cluster creation notification
22. Press Enter again and, after successful completion, a message displays to remind you to upgrade PowerHA now, as shown in Figure 5-20.
You _MUST_ install the new version of PowerHA SystemMirror now. This node
will not be able to communicate with the other nodes in the cluster until
SystemMirror v7 is installed on it.
Figure 5-20 Clmigcheck upgrade PowerHA notification
23. Check for a CAA cluster on both nodes, and that cluster communication mode is Unicast, as shown in Example 5-1.
Example 5-1 Checking the CAA cluster configuration
Cluster Name: Jess_cluster
Cluster UUID: 6563d404-9479-11e5-8002-96d75a7c7f02
Number of nodes in cluster = 2
Cluster ID for node Jess: 1
Primary IP address for node Jess: 10.2.30.91
Cluster ID for node Cass: 2
Primary IP address for node Cass: 10.2.30.92
Number of disks in cluster = 1
Disk = hdisk2 UUID = ef446503-eb46-8174-9bdd-15563273ad21 cluster_major = 0 cluster_minor = 1
Multicast for site LOCAL: IPv4 228.2.30.91 IPv6 ff05::e402:1e5b
Communication Mode: unicast
Local node maximum capabilities: CAA_NETMON, AUTO_REPOS_REPLACE, HNAME_CHG, UNICAST, IPV6, SITE
Effective cluster-wide capabilities: CAA_NETMON, AUTO_REPOS_REPLACE, HNAME_CHG, UNICAST, IPV6, SITE
24. Upgrade PowerHA on node Jess. To upgrade PowerHA, run smitty update_all, as shown in Figure 5-13 on page 122.
25. Ensure that the file /usr/es/sbin/cluster/netmon.cf exists, and that it contains at least one pingable IP address, because installation or upgrade of PowerHA file sets can overwrite this file with an empty one. An illustration is shown in Example 5-21 on page 140.
26. Start cluster services on node Jess by running the smitty clstart command.
27. Verify that the cluster has completed the migration on both nodes by checking that
cluster_version = 16, as shown in Example 5-2.
Example 5-2 Verifying the migration has completed on both nodes
# clcmd odmget HACMPcluster |grep version
cluster_version = 16
cluster_version = 16
 
#clcmd odmget HACMPnode |grep version |sort -u
version = 16
 
Important: Both nodes must show version=16, or the migration did not complete successfully. If the migration did not complete, call IBM support.
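The version check above can be scripted. The following sketch (the `all_at_version` name is ours) parses the combined `clcmd odmget` output and succeeds only when every reported version matches the expected level (16 for PowerHA V7.2.0):

```shell
# all_at_version: reads `clcmd odmget ... | grep version` output on stdin
# and succeeds only if every "version = N" line matches the expected value
all_at_version() {
    awk -v v="$1" '
        /version *= */ { n++; if ($NF != v) bad = 1 }
        END            { exit (n == 0 || bad) ? 1 : 0 }'
}

# Example: clcmd odmget HACMPcluster | grep version | all_at_version 16
```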
28. Though the migration is completed at this point, remember that the resource is currently running on node Cass. If wanted, move the resource group back to node Jess, as shown in Example 5-3.
Example 5-3 Move resource group back to node Jess
# clmgr move rg demorg node=Jess
Attempting to move resource group demorg to node Jess.
 
Waiting for the cluster to process the resource group movement request....
 
Waiting for the cluster to stabilize.....
 
Resource group movement successful.
Resource group demorg is online on node Jess.
 
Cluster Name: Jess_cluster
 
Resource Group Name: demorg
Node Group State
---------------------------- ---------------
Jess ONLINE
Cass OFFLINE
 
Important: Always test the cluster thoroughly after migrating.
5.2.3 Offline migration from PowerHA V6.1
For an offline migration, we can perform many of the steps in parallel on all (both) nodes in the cluster. However, this means that a full cluster outage must be planned.
 
Tip: A demo of performing an offline migration from PowerHA V6.1 to PowerHA V7.2 is available on the following website:
To perform an offline migration, complete the following steps:
1. Stop cluster services on both nodes Jess and Cass.
This was accomplished by running smitty clstop and choosing the options shown in Figure 5-21. After running, the OK response displays quickly. Make sure that the cluster node is in the ST_INIT state. This can be found from the lssrc -ls clstrmgrES|grep state output.
                             Stop Cluster Services
 
Type or select values in entry fields.
Press Enter AFTER making all wanted changes.
 
[Entry Fields]
* Stop now, on system restart or both now +
Stop Cluster Services on these nodes [Jess,Cass]             +
BROADCAST cluster shutdown? true +
* Select an Action on Resource Groups Bring Resource Groups> +
 
F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do
Figure 5-21 Stopping cluster services
2. Upgrade AIX on both nodes.
 
Important: If upgrading to AIX 7.2, see the AIX 7.2 release notes regarding RSCT file sets at the following website:
In our scenario, we already have the supported AIX levels for PowerHA V7.2, and do not need to perform this step. But if you do, a restart will be required before continuing.
3. Verify that the clcomd daemon is active on both nodes, as shown in Figure 5-22.
[root@Cass] /# lssrc -s clcomd
Subsystem Group PID Status
clcomd caa 3670016 active
 
[root@Jess] /# lssrc -s clcomd
Subsystem Group PID Status
clcomd caa              50467008     active
Figure 5-22 Verify clcomd is active
4. Next, edit the CAA-specific communication file, /etc/cluster/rhosts, on both nodes. You can enter either the host name for each node, or the IP address that resolves to the host name. But there must be only one entry per line. We entered host names, as shown in Figure 5-23.
[root@Jess] /# vi /etc/cluster/rhosts
Jess
Cass
 
[root@Cass] /# vi /etc/cluster/rhosts
Jess
Cass
Figure 5-23 The /etc/cluster/rhosts contents
5. Refresh clcomd by running refresh -s clcomd on both nodes.
6. Run clmigcheck on one node. In our case, we ran the command on node Jess.
 
Attention: During our testing of running clmigcheck, we encountered an error about our /etc/netsvc.conf file containing more than one line, as shown in the following output. We had to remove all of the comment lines from the file on each node for successful execution. This was reported to development as a defect:
## One or more possible problems have been detected
ERROR: exactly one “hosts” entry is required in /etc/netsvc.conf on node “Jess”. A value such as the following should be
the only line in /etc/netsvc.conf:
hosts = local4,bind4
Or for IPv6 environments:
hosts = local6,bind6
a. Choose Option 1, as shown in Figure 5-7 on page 120.
b. Choose Option 5 to specify version 7.2.0, as shown in Figure 5-8 on page 120.
c. Choose Option 2 back on the main menu to have the cluster configuration validated.
d. Assuming no errors, choose Option 4:
i. We then choose Unicast, as shown in Figure 5-10 on page 121.
ii. Next, we choose the repository disk, as shown in Figure 5-11 on page 121.
e. Then, we get the last message, as shown in Figure 5-12 on page 122.
7. Now we upgrade to PowerHA V7.2.0 by running smitty update_all on node Jess.
8. Perform clmigcheck on node Cass.
This will create the CAA cluster after pressing Enter at the message displayed in both Figure 5-18 on page 124 and Figure 5-19 on page 124.
9. Now we upgrade to PowerHA V7.2.0 by running smitty update_all on node Cass.
10. Verify that version numbers show correctly, as shown in Example 5-2 on page 125.
11. Ensure that the file /usr/es/sbin/cluster/netmon.cf exists on all nodes, and that it contains at least one pingable IP address, because installation or upgrade of PowerHA file sets can overwrite this file with an empty one. An illustration is shown in Example 5-21 on page 140.
12. Restart the cluster on both nodes by running the clmgr start cluster command.
 
Important: Always test the cluster thoroughly after migrating.
5.2.4 Snapshot migration from PowerHA V6.1
For a snapshot migration, we can perform many of the steps in parallel on all (both) nodes in the cluster. However, this means that a full cluster outage must be planned.
 
 
Tip: A demo of performing a snapshot migration from PowerHA V6.1 to PowerHA V7.2 is available at:
To perform a snapshot migration, complete the following steps:
1. Stop cluster services on both nodes Jess and Cass.
This was accomplished by running smitty clstop and choosing the options shown in Figure 5-24. After running, the OK response appears quickly. Make sure that the cluster node is in the ST_INIT state.
This can be found from the lssrc -ls clstrmgrES|grep state output.
                             Stop Cluster Services
 
Type or select values in entry fields.
Press Enter AFTER making all wanted changes.
 
[Entry Fields]
* Stop now, on system restart or both now +
Stop Cluster Services on these nodes [Jess,Cass]             +
BROADCAST cluster shutdown? true +
* Select an Action on Resource Groups Bring Resource Groups> +
 
F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do
Figure 5-24 Stopping cluster services
2. Create a cluster snapshot. Although clmigcheck creates a snapshot too, we created our own by running smitty cm_add_snap.dialog and completing the options, as shown in Figure 5-25.
             Create a Snapshot of the Cluster Configuration
 
Type or select values in entry fields.
Press Enter AFTER making all wanted changes.
 
[Entry Fields]
* Cluster Snapshot Name [pre72migration] /
Custom-Defined Snapshot Methods [] +
* Cluster Snapshot Description [61 SP15 cluster]
Figure 5-25 Creating cluster snapshot
3. Upgrade AIX on both nodes.
 
Important: If upgrading to AIX 7.2, see the AIX 7.2 release notes regarding RSCT file sets on the following website:
In our scenario, we already have the supported AIX levels for PowerHA V7.2, and do not need to perform this step. But if you do, a restart will be required before continuing.
4. Verify that the clcomd daemon is active on both nodes, as shown in Figure 5-26.
[root@Cass] /# lssrc -s clcomd
Subsystem Group PID Status
clcomd caa 3670016 active
 
[root@Jess] /# lssrc -s clcomd
Subsystem Group PID Status
clcomd caa              50467008     active
Figure 5-26 Verify that clcomd is active
5. Next, edit the CAA-specific communication file, /etc/cluster/rhosts, on both nodes. You can enter either the host name for each node, or the IP address that resolves to the host name. But there must be only one entry per line. We entered host names, as shown in Figure 5-27.
[root@Jess] /# vi /etc/cluster/rhosts
Jess
Cass
 
[root@Cass] /# vi /etc/cluster/rhosts
Jess
Cass
Figure 5-27 The /etc/cluster/rhosts contents
6. Refresh clcomd by running refresh -s clcomd on both nodes.
7. Run clmigcheck on one node. In this case, we ran the command on node Jess.
 
Attention: During our testing of running clmigcheck we encountered an error about our /etc/netsvc.conf file containing more than one line, as shown below. We had to remove all of the comment lines from the file on each node for successful execution. This was reported to development as a defect:
## One or more possible problems have been detected
ERROR: exactly one “hosts” entry is required in /etc/netsvc.conf on node “Jess”. A value such as the following should be
the only line in /etc/netsvc.conf:
hosts = local4,bind4
Or for IPv6 environments:
hosts = local6,bind6
a. Choose Option 1, as shown in Figure 5-7 on page 120.
b. Choose Option 5 to specify version 7.2.0, as shown in Figure 5-8 on page 120.
c. Choose Option 3 on the main menu to have the cluster snapshot configuration validated. In our case, we choose option 8 for the snapshot that we created, as shown in Figure 5-28.
------------[ PowerHA SystemMirror v7.2.0 Migration Check ]-------------
 
Select a snapshot:
 
1 -> 07_30_2014_ClearHA61democluster_autosnap
2 -> 61SP14pre71upgrade
3 -> 713SP2cluster
4 -> HAdb2.1.030800.snapshot
5 -> HAdb2.snapshot
6 -> active.0
7 -> active.1
8 -> pre72migration
 
Select one of the above or "h" for help or "x" to exit: 8
 
Figure 5-28 Clmigcheck snapshot selection
d. Assuming no errors, choose Option 4:
i. We then choose Unicast, as shown in Figure 5-10 on page 121.
ii. Next, we choose our repository disk, as shown in Figure 5-11 on page 121.
e. Then, we get the last message, as shown in Figure 5-12 on page 122.
8. Next, we uninstall PowerHA 6.1 on both nodes Jess and Cass by running smitty remove and selecting all of the cluster.* file sets.
9. Install PowerHA V7.2.0 by running smitty install_all on node Jess.
10. Perform clmigcheck on node Cass.
This creates the CAA cluster after pressing Enter at the message displayed in both Figure 5-18 on page 124 and Figure 5-19 on page 124.
11. Install PowerHA V7.2.0 by running smitty install_all on node Cass.
12. Convert the previously created snapshot:
/usr/es/sbin/cluster/conversion/clconvert_snapshot -v 6.1 -s pre72migration
13. Restore the cluster configuration from the converted snapshot by running smitty cm_apply_snap.select and choosing the snapshot from the pop-up menu. It auto completes the last menu, as shown in Figure 5-29.
                        Restore the Cluster Snapshot
 
Type or select values in entry fields.
Press Enter AFTER making all wanted changes.
 
[Entry Fields]
Cluster Snapshot Name pre72migration>
Cluster Snapshot Description 61 SP15 cluster>
Un/Configure Cluster Resources? [Yes] +
Force apply if verify fails? [No] +
 
Figure 5-29 Restoring the cluster configuration from a snapshot
The restore process automatically synchronizes the cluster.
14. Ensure that the file /usr/es/sbin/cluster/netmon.cf exists on all nodes, and that it contains at least one pingable IP address, because installation or upgrade of the PowerHA file sets can overwrite this file with an empty one. An illustration is shown in Example 5-21 on page 140.
15. Restart the cluster on both nodes by running clmgr start cluster.
 
Important: Always test the cluster thoroughly after migrating.
5.3 Migration scenarios from PowerHA V7
This section further details test scenarios used in each of these migration methods:
Rolling migration
Snapshot migration
Offline migration
Non-disruptive migration
5.3.1 PowerHA V7.1 test environment overview
The cluster environment used for our migration scenarios presented the following features:
Two cluster nodes both with AIX 7.1.3 SP5
PowerHA installed in the following versions as starting point for migrations:
 – PowerHA 7.1.1 SP1
 – PowerHA 7.1.2 SP1
 – PowerHA 7.1.3 GA
One network common to both nodes
One cluster disk, for the resource group
One repository disk
One resource group, built upon IBM HTTP Server
The diagram for the migration test environment is presented in Figure 5-30.
Figure 5-30 Migration test environment
5.3.2 Check and document initial stage
Before starting the actual migration, we need to make sure that the cluster nodes are synchronized in terms of PowerHA committed file sets and cluster configuration. The following actions are common to all migration scenarios, and therefore generally advised. Many of these actions are not mandatory, but they can help you repair or debug your cluster if things go wrong.
Complete the following initial steps:
1. Get the version of the operating system using the oslevel -s command (node-specific), as shown in Example 5-4.
Example 5-4 Get the AIX version
clnode_1:/# oslevel -s
7100-03-05-1524
clnode_1:/#
2. Get the version of PowerHA using the halevel -s command (node specific, Example 5-5).
Example 5-5 Get the PowerHA version
clnode_1:/# halevel -s
7.1.3 GA
clnode_1:/#
3. Get the network configuration using the netstat -in command (Example 5-6 limits the query to the en0 network interface).
Example 5-6 Get the network settings
clnode_1:/# netstat -inI en0
Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll
en0 1500 link#2 ee.af.e.90.ca.2 44292 0 30506 0 0
en0 1500 192.168.100 192.168.100.51 44292 0 30506 0 0
clnode_1:/#
4. Get a list of all PowerHA file sets and their current state using the lslpp -l cluster.* command (node specific, Example 5-7).
Example 5-7 Get a list of PowerHA installed fileset and their status
clnode_1:/# lslpp -l cluster.*
Fileset Level State Description
----------------------------------------------------------------------------
Path: /usr/lib/objrepos
cluster.adt.es.client.include
7.1.3.0 COMMITTED PowerHA SystemMirror Client
Include Files
  [...]
  cluster.man.en_US.es.data 7.1.3.0 COMMITTED Man Pages - U.S. English
clnode_1:/#
5. Get the general configuration of the CAA cluster using the lscluster -c command (CAA cluster specific, Example 5-8).
Example 5-8 Get the general cluster configuration
clnode_1:/# lscluster -c
Cluster Name: migration_cluster
Cluster UUID: 8478eec0-83f2-11e5-98f5-eeaf0e90ca02
Number of nodes in cluster = 2
Cluster ID for node clnode_1: 1
Primary IP address for node clnode_1: 192.168.100.51
Cluster ID for node clnode_2: 2
Primary IP address for node clnode_2: 192.168.100.52
Number of disks in cluster = 1
Disk = hdisk2 UUID = 89efbf4d-ef62-cc65-c0c3-fca88281da6f cluster_major = 0 cluster_minor = 1
Multicast for site LOCAL: IPv4 228.168.100.51 IPv6 ff05::e4a8:6433
Communication Mode: unicast
[...]
clnode_1:/#
6. Get the list of the CAA cluster storage interfaces using the lscluster -d command (CAA cluster specific, Example 5-9).
Example 5-9 Get the storage cluster configuration
clnode_1:/# lscluster -d
Storage Interface Query
 
Cluster Name: migration_cluster
Cluster UUID: 8478eec0-83f2-11e5-98f5-eeaf0e90ca02
Number of nodes reporting = 2
Number of nodes expected = 2
 
Node clnode_1
Node UUID = 8474b378-83f2-11e5-98f5-eeaf0e90ca02
Number of disks discovered = 1
hdisk2:
State : UP
[...]
Type : REPDISK
 
Node clnode_2
Node UUID = 8474b3be-83f2-11e5-98f5-eeaf0e90ca02
Number of disks discovered = 1
hdisk2:
State : UP
[...]
Type : REPDISK
clnode_1:/#
7. Get the list of the CAA cluster network interfaces using the lscluster -i command (CAA cluster specific, Example 5-10).
Example 5-10 Get the network cluster configuration
clnode_1:/# lscluster -i
Network/Storage Interface Query
 
Cluster Name: migration_cluster
Cluster UUID: 8478eec0-83f2-11e5-98f5-eeaf0e90ca02
Number of nodes reporting = 2
Number of nodes stale = 0
Number of nodes expected = 2
 
Node clnode_1
Node UUID = 8474b378-83f2-11e5-98f5-eeaf0e90ca02
Number of interfaces discovered = 2
Interface number 1, en0
[...]
Interface state = UP
Number of regular addresses configured on interface = 1
IPv4 ADDRESS: 192.168.100.51 broadcast 192.168.100.255 netmask 255.255.255.0
Number of cluster multicast addresses configured on interface = 1
IPv4 MULTICAST ADDRESS: 228.168.100.51
[...]
 
Node clnode_2
Node UUID = 8474b3be-83f2-11e5-98f5-eeaf0e90ca02
Number of interfaces discovered = 2
Interface number 1, en0
[...]
Interface state = UP
Number of regular addresses configured on interface = 2
IPv4 ADDRESS: 192.168.100.50 broadcast 192.168.103.255 netmask 255.255.252.0
IPv4 ADDRESS: 192.168.100.52 broadcast 192.168.100.255 netmask 255.255.255.0
Number of cluster multicast addresses configured on interface = 1
[...]
clnode_1:/#
8. Get the cluster node configuration for the CAA cluster using the lscluster -m command (CAA cluster specific, Example 5-11).
Example 5-11 Get the node cluster configuration
clnode_1:/# lscluster -m
Calling node query for all nodes...
Node query number of nodes examined: 2
 
Node name: clnode_1
Cluster shorthand id for node: 1
UUID for node: 8474b378-83f2-11e5-98f5-eeaf0e90ca02
State of node: UP NODE_LOCAL
[...]
Points of contact for node: 0
 
----------------------------------------------------------------------------
 
Node name: clnode_2
Cluster shorthand id for node: 2
UUID for node: 8474b3be-83f2-11e5-98f5-eeaf0e90ca02
State of node: UP
[...]
Points of contact for node: 1
-----------------------------------------------------------------------
Interface State Protocol Status SRC_IP->DST_IP
-----------------------------------------------------------------------
tcpsock->02 UP IPv4 none 192.168.100.51->192.168.100.52
clnode_1:/#
9. Get the current status and version of each node, as well as the version of the PowerHA cluster (which for PowerHA version 7.1.3 has the numeric code 15), using the lssrc -ls clstrmgrES command (node specific, Example 5-12).
Example 5-12 Get the current cluster status and version of each node
clnode_1:/# lssrc -ls clstrmgrES
Current state: ST_STABLE
[...]
CLversion: 15
local node vrmf is 7130
cluster fix level is "0"
[...]
clnode_1:/#
10. Get the cluster topology information using the cltopinfo command (Example 5-13 on page 136).
Example 5-13 Get the cluster topology
clnode_1:/# cltopinfo
Cluster Name: migration_cluster
Cluster Type: Standard
Heartbeat Type: Unicast
Repository Disk: hdisk2 (00f6f5d0d387b342)
 
There are 2 node(s) and 1 network(s) defined
NODE clnode_1:
Network net_ether_01
clst_svcIP 192.168.100.50
clnode_1 192.168.100.51
NODE clnode_2:
Network net_ether_01
clst_svcIP 192.168.100.50
clnode_2 192.168.100.52
 
Resource Group rg_IHS
Startup Policy Online On First Available Node
Fallover Policy Fallover To Next Priority Node In The List
Fallback Policy Never Fallback
Participating Nodes clnode_1 clnode_2
Service IP Label clst_svcIP
clnode_1:/#
11. Get resource groups status using the clRGinfo command (Example 5-14).
Example 5-14 Get resource groups status
clnode_1:/# clRGinfo
-----------------------------------------------------------------------------
Group Name Group State Node
-----------------------------------------------------------------------------
rg_IHS OFFLINE clnode_1
ONLINE clnode_2
clnode_1:/#
12. List (or make a copy of) some configuration files (Example 5-15) that PowerHA needs for proper functioning, such as the following files:
 – /etc/hosts
 – /etc/cluster/rhosts
 – /usr/es/sbin/cluster/netmon.cf
Example 5-15 List configuration files relevant to PowerHA
clnode_1:/# cat /etc/hosts
[...]
192.168.100.50 clst_svcIP
192.168.100.51 clnode_1
192.168.100.52 clnode_2
192.168.100.40 nimres1
 
clnode_1:/#
clnode_1:/# cat /etc/cluster/rhosts
192.168.100.50
192.168.100.51
192.168.100.52
 
clnode_1:/#
clnode_1:/# cat /usr/es/sbin/cluster/netmon.cf
!REQD en0 192.168.100.1
clnode_1:/#
13. Perhaps most important, take a snapshot of the current cluster configuration using the clmgr add snapshot command (Example 5-16).
Example 5-16 Take a snapshot of the current cluster configuration
clnode_1:/# clmgr add snapshot snapshot_before_migration
 
clsnapshot: Creating file /usr/es/sbin/cluster/snapshots/snapshot_before_migration.odm.
 
clsnapshot: Creating file /usr/es/sbin/cluster/snapshots/snapshot_before_migration.info.
 
clsnapshot: Running clsnapshotinfo command on node: clnode_2...
 
clsnapshot: Running clsnapshotinfo command on node: clnode_1...
 
clsnapshot: Succeeded creating Cluster Snapshot: snapshot_before_migration
clnode_1:/#
14. Copy the snapshot files to a safe location. The default location for creation of snapshot files is the /usr/es/sbin/cluster/snapshots directory.
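The documentation commands listed in this section can be gathered into a single capture, for example with a small wrapper such as the following sketch (the `capture` name and the log location are our own assumptions):

```shell
# capture: run each command given as an argument, labeling its output,
# so a node's pre-migration state can be saved to a single file
capture() {
    for cmd in "$@"; do
        echo "=== $cmd ==="
        $cmd 2>&1
    done
}

# Typical usage on a cluster node (AIX/PowerHA commands, for illustration):
# capture "oslevel -s" "halevel -s" "cltopinfo" "clRGinfo" \
#     > /tmp/premigration_$(hostname).log
```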
5.3.3 Offline migration of PowerHA from 7.1.3 to 7.2.0
Complete the following major steps to perform an offline migration of PowerHA from version 7.1.3 to version 7.2.0:
1. Check and document initial stage
2. Stop cluster on all nodes, bringing the resources offline
3. Upgrade PowerHA file sets
4. Start cluster on all nodes, bringing the resources online
5. Check for proper function of the cluster
Check and document initial stage
This step is described in 5.3.2, “Check and document initial stage” on page 132 and applies to all migration scenarios and methods.
Stop cluster on all nodes, bringing the resources offline
The next step is to stop the cluster and bring the resources offline:
1. Issue the clmgr stop cluster command with the corresponding resource managing option, as shown in Example 5-17.
Example 5-17 Stop the cluster and bring the resources offline
clnode_2:/# clmgr stop cluster manage=offline when=now
[...]
PowerHA SystemMirror on clnode_2 shutting down. Please exit any cluster applications...
[...]
The cluster is now offline.
clnode_2:/#
2. Now you can check that all cluster nodes are offline, using the clmgr query node command with appropriate command switches, as shown in Example 5-18.
Example 5-18 Check that all cluster nodes are offline
clnode_2:/# clmgr -cv -a name,state,raw_state query node
# NAME:STATE:RAW_STATE
clnode_1:OFFLINE:ST_INIT
clnode_2:OFFLINE:ST_INIT
clnode_2:/#
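Waiting until every node reports OFFLINE can likewise be automated. This sketch (the helper name is ours) parses the colon-separated `clmgr query node` output shown in Example 5-18:

```shell
# all_offline: reads `clmgr -cv -a name,state,raw_state query node` output
# on stdin; succeeds only when every node line reports the OFFLINE state
all_offline() {
    awk -F: '/^#/ { next }                    # skip the header line
             { n++; if ($2 != "OFFLINE") bad = 1 }
             END { exit (n == 0 || bad) ? 1 : 0 }'
}

# Example (commented out; runs only on a cluster node):
# until clmgr -cv -a name,state,raw_state query node | all_offline; do
#     sleep 10
# done
```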
Upgrade PowerHA file sets
Since we now have the cluster stopped we can proceed to upgrading the PowerHA file sets:
1. This can be done from the command line by using the installp -acgNYX command, as shown in Example 5-19. This can also be performed by using SMIT via the smitty update_all fast path.
Example 5-19 Upgrade PowerHA file sets via CLI
clnode_1:/mnt/powerha_720_1545A/inst.images# installp -acgNYX -d . all
+-----------------------------------------------------------------------------+
Pre-installation Verification...
+-----------------------------------------------------------------------------+
Verifying selections...done
Verifying requisites...done
Results...
[...]
+-----------------------------------------------------------------------------+
Summaries:
+-----------------------------------------------------------------------------+
 
Installation Summary
--------------------
Name Level Part Event Result
-------------------------------------------------------------------------------
glvm.rpv.man.en_US 7.2.0.0 USR APPLY SUCCESS
glvm.rpv.util 7.2.0.0 USR APPLY SUCCESS
[...]
cluster.es.spprc.rte 7.2.0.0 USR APPLY SUCCESS
cluster.es.spprc.rte 7.2.0.0 ROOT APPLY SUCCESS
clnode_1:/mnt/powerha_720_1545A/inst.images#
2. Alternatively, the upgrade of PowerHA can also be performed with SMIT. If using a NIM server, run smit nim and select Install and Update Software → Install and Update from ALL Available Software:
a. At this point you have to select the lpp source for the new PowerHA file sets.
b. Then, select all file sets to install.
c. Select to accept new licenses, as shown in Example 5-20.
Example 5-20 Upgrade PowerHA file sets with SMIT
                            Install and Update from ALL Available Software
 
Type or select values in entry fields.
Press Enter AFTER making all wanted changes.
 
[Entry Fields]
* LPP_SOURCE powerha_720_1545A
* Software to Install [all] +
 
Customization SCRIPT to run after installation [] +
 
installp Flags
PREVIEW only? [no] +
Preview new LICENSE agreements? [no] +
ACCEPT new license agreements? [yes] +
COMMIT software updates? [yes] +
SAVE replaced files? [no] +
AUTOMATICALLY install requisite software? [yes] +
EXTEND filesystems if space needed? [yes] +
OVERWRITE same or newer versions? [no] +
VERIFY install and check file sizes? [no] +
 
 
F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do
3. Before starting the cluster services on any recently installed or upgraded node, also check that the file /usr/es/sbin/cluster/netmon.cf exists and contains at least one pingable IP address, as shown in Example 5-21. This step is necessary because the installation or upgrade of the PowerHA file sets can (depending on the particular PowerHA version) overwrite this file with an empty one. This is particularly important in clusters with only a single network interface card per logical network configured.
Example 5-21 Check content of netmon.cf file
clnode_1:/# cat /usr/es/sbin/cluster/netmon.cf
!REQD en0 192.168.100.1
clnode_1:/#
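A quick existence-and-content check for netmon.cf can be scripted as follows. This is only a sketch: it greps for a dotted-decimal address but does not verify that the address is actually pingable, which still has to be confirmed separately.

```shell
# netmon_ok: succeeds if the given netmon.cf file exists, is non-empty,
# and contains at least one dotted-decimal IPv4 address; reachability of
# that address must still be verified separately (for example, with ping)
netmon_ok() {
    f=${1:-/usr/es/sbin/cluster/netmon.cf}
    [ -s "$f" ] && grep -Eq '([0-9]{1,3}\.){3}[0-9]{1,3}' "$f"
}
```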
Start cluster on all nodes and bring the resources online
Now we can start the cluster on both nodes:
1. Use either the clmgr start cluster command (as shown in Example 5-22), or smitty clstart in SMIT.
Example 5-22 Start cluster on all nodes and bring the resources online
clnode_1:/# clmgr start cluster
 
[...]
clnode_2: start_cluster: Starting PowerHA SystemMirror
[...]
clnode_1: start_cluster: Starting PowerHA SystemMirror
[...]
The cluster is now online.
 
Cluster services are running at different levels across
the cluster. Verification will not be invoked in this environment.
 
Starting Cluster Services on node: clnode_2
[...]
clnode_2: Exit status = 0
 
Starting Cluster Services on node: clnode_1
[...]
clnode_1: Exit status = 0
clnode_1:/#
2. The cluster nodes status can be checked, as shown in Example 5-23.
Example 5-23 Check cluster nodes status
clnode_1:/# clmgr -cv -a name,state,raw_state query node
# NAME:STATE:RAW_STATE
clnode_1:NORMAL:ST_STABLE
clnode_2:NORMAL:ST_STABLE
clnode_1:/#
3. We also check the version of PowerHA running on each node, as shown in Example 5-24.
Example 5-24 Check PowerHA version on each node
clnode_1:/# lssrc -ls clstrmgrES | egrep "state|CLversion|vrmf|fix"
Current state: ST_STABLE
CLversion: 16
local node vrmf is 7200
cluster fix level is "0"
clnode_1:/#
 
clnode_2:/# lssrc -ls clstrmgrES | egrep "state|CLversion|vrmf|fix"
Current state: ST_STABLE
CLversion: 16
local node vrmf is 7200
cluster fix level is "0"
clnode_2:/#
4. Finally, we check the status of the resource groups using the clRGinfo command, as shown in Example 5-25.
Example 5-25 Check the resource groups status
clnode_2:/# clRGinfo
-----------------------------------------------------------------------------
Group Name Group State Node
-----------------------------------------------------------------------------
rg_IHS OFFLINE clnode_1
ONLINE clnode_2
clnode_2:/#
Check for proper function of the cluster
Checking the functionality of the cluster can be done in several ways. The most basic actions include synchronizing the cluster and moving resource groups between cluster nodes. Verifying the cluster is shown in Example 5-26.
Example 5-26 Verify and synchronize cluster configuration
clnode_1:/# clmgr sync cluster
 
Verifying additional prerequisites for Dynamic Reconfiguration...
 
[...]
Verification to be performed on the following:
Cluster Topology
Cluster Resources
 
Retrieving data from available cluster nodes. This could take a few minutes.
 
Start data collection on node clnode_1
Start data collection on node clnode_2
Collector on node clnode_1 completed
Collector on node clnode_2 completed
Data collection complete
[...]
Completed 100 percent of the verification checks
 
Verification has completed normally.
clnode_1:/#
 
Move a resource group from one node to another (Example 5-27).
Example 5-27 Move a resource group from one node to another
clnode_1:/# clRGinfo
-----------------------------------------------------------------------------
Group Name Group State Node
-----------------------------------------------------------------------------
rg_IHS OFFLINE clnode_1
ONLINE clnode_2
clnode_1:/#
 
clnode_1:/# clmgr move resource_group rg_IHS node=clnode_1
Attempting to move resource group rg_IHS to node clnode_1.
[...]
Resource group movement successful.
Resource group rg_IHS is online on node clnode_1.
[...]
Resource Group Name: rg_IHS
Node Group State
---------------------------- ---------------
clnode_1 ONLINE
clnode_2 OFFLINE
clnode_1:/#
 
clnode_1:/# clRGinfo
-----------------------------------------------------------------------------
Group Name Group State Node
-----------------------------------------------------------------------------
rg_IHS ONLINE clnode_1
OFFLINE clnode_2
clnode_1:/#
5.3.4 Rolling migration of PowerHA from 7.1.3 to 7.2.0
The major steps for a rolling migration of PowerHA from version 7.1.3 to version 7.2.0 are as follows:
1. Check and document initial stage
2. Stop cluster services on one node, moving resource groups to other nodes
3. Upgrade PowerHA file sets on the offline node
4. Start cluster on the recently upgraded node
5. Repeat steps 2 - 4 for all other cluster nodes, one node at a time
6. Check for proper function of the cluster
Check and document initial stage
This step is described in 5.3.2, “Check and document initial stage” on page 132 and applies to all migration scenarios and methods.
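As a quick reference, the commands used throughout this chapter to record the initial state can be collected into a short sketch. The node names are from this chapter's test cluster; adjust them for your environment.

```shell
# Record the initial cluster state before migration (run on any node).
halevel -s                                               # installed PowerHA level
lssrc -ls clstrmgrES | egrep "state|CLversion|vrmf|fix"  # cluster manager state and version
clmgr -cv -a name,state,raw_state query node             # state of all cluster nodes
clRGinfo                                                 # resource group distribution
oslevel -s                                               # AIX level of the node
```

Saving this output to a file on each node makes it easy to compare the cluster state before and after every migration step.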
Stop cluster on one node, moving resources to other nodes
To stop cluster services and move resources, complete the following steps:
1. First, we make a note of the distribution of resource groups across the cluster, using the clRGinfo command, so that we can track the migration process (Example 5-28).
Example 5-28 Check the resource groups distribution across the cluster
clnode_2:/# clRGinfo
-----------------------------------------------------------------------------
Group Name Group State Node
-----------------------------------------------------------------------------
rg_IHS OFFLINE clnode_1
ONLINE clnode_2
clnode_2:/#
2. Next, we stop the cluster with the option of moving the resource groups to another node that is still online. This will cause a short interruption to the application by stopping it first on the current node, and then restarting it on the target node. For that, we use the clmgr stop node command on clnode_1, with the option to move the resource group, as shown in Example 5-29.
Example 5-29 Stop the cluster and move the resource group
clnode_1:/# clmgr stop node clnode_1 manage=move when=now
[...]
PowerHA SystemMirror on clnode_1 shutting down. Please exit any cluster applications...
[...]
"clnode_1" is now offline.
clnode_1:/#
3. We can now check the status of the cluster nodes and that of the resource groups across the cluster, using the clmgr query node and clRGinfo commands. In our case, because we took offline a node that had no online resource groups, the output of the second command is the same as before stopping the node (Example 5-30).
Example 5-30 Check the cluster nodes status
clnode_1:/# clmgr -cv -a name,state,raw_state query node
# NAME:STATE:RAW_STATE
clnode_1:OFFLINE:ST_INIT
clnode_2:NORMAL:ST_STABLE
clnode_1:/#
 
clnode_1:/# clRGinfo
-----------------------------------------------------------------------------
Group Name Group State Node
-----------------------------------------------------------------------------
rg_IHS OFFLINE clnode_1
ONLINE clnode_2
clnode_1:/#
Upgrade PowerHA file sets on the offline node
We can now proceed to upgrade the PowerHA file sets on the node that we just brought offline. This can be performed at the command line, using the installp -acgNYX command as shown in Example 5-19 on page 138. The same action can also be performed with SMIT, as shown in Example 5-20 on page 139.
Before starting the cluster services on the recently upgraded node, we also check that the file /usr/es/sbin/cluster/netmon.cf exists and has the right content (Example 5-21 on page 140; see “Upgrade PowerHA file sets” on page 138 for details).
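Assuming the 7.2.0 installation images were copied to a local directory (the path below is hypothetical), the command-line upgrade might look like this sketch:

```shell
# Upgrade all PowerHA file sets in place on the stopped node.
# /mnt/powerha720 is a hypothetical path to the PowerHA V7.2.0 images.
installp -acgNYX -d /mnt/powerha720 cluster

# Afterward, confirm that all cluster.* file sets are at 7.2.0 and committed:
lslpp -L "cluster.*"
```

The -a and -c flags apply and commit in one pass, -g pulls in requisites, and -X extends file systems if needed, so no smit install_commit step is required afterward.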
Start cluster on the recently upgraded node
After the PowerHA file sets are upgraded, we can proceed to start the cluster services on the current node:
1. This can be done at the command line using the clmgr start node command, as shown in Example 5-31.
Example 5-31 Start cluster on upgraded node through CLI
clnode_1:/# clmgr start node
 
[...]
clnode_1: start_cluster: Starting PowerHA SystemMirror
[...]
"clnode_1" is now online.
 
Cluster services are running at different levels across
the cluster. Verification will not be invoked in this environment.
 
Starting Cluster Services on node: clnode_1
[...]
clnode_1: Exit status = 0
clnode_1:/#
Alternatively, the node can also be started using SMIT by running the smitty clstart command, as shown in Example 5-32.
Example 5-32 Start cluster on upgraded node through SMIT
                             Start Cluster Services
 
Type or select values in entry fields.
Press Enter AFTER making all wanted changes.
[Entry Fields]
* Start now, on system restart or both now +
Start Cluster Services on these nodes [clnode_1] +
* Manage Resource Groups Automatically +
BROADCAST message at startup? true +
Startup Cluster Information Daemon? false +
Ignore verification errors? false +
Automatically correct errors found during Interactively +
cluster start?
 
F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do
2. Check the status of the newly upgraded node, as shown in Example 5-33.
Example 5-33 Check the status of upgraded node
clnode_1:/# clmgr -cv -a name,state,raw_state query node
# NAME:STATE:RAW_STATE
clnode_1:NORMAL:ST_STABLE
clnode_2:NORMAL:ST_STABLE
clnode_1:/#
3. When the node is reported as stable, check the version of the node by using the lssrc -ls clstrmgrES command, as shown in Example 5-34.
Example 5-34 Check the PowerHA version of the upgraded node
clnode_1:/# lssrc -ls clstrmgrES | egrep "state|CLversion|vrmf|fix"
Current state: ST_STABLE
CLversion: 15
local node vrmf is 7200
cluster fix level is "0"
clnode_1:/#
Note that although the node itself has been upgraded to PowerHA 7.2.0, the cluster version code is still 15, because not all cluster nodes have yet been upgraded.
Repeat these steps for each node, one node at a time
Now we can proceed in the same manner to upgrade all of the remaining nodes:
Stop cluster services, moving resources to other nodes
Upgrade PowerHA file sets
Restart cluster services
Complete the following steps:
1. Proceed in the same way for each node that has not yet been upgraded, one node at a time. When finished, all nodes should be upgraded and stable across the cluster.
 
Note: After the migration process is started, moving resource groups to another node by using C-SPOC or rg_move is not permitted. This is a precautionary measure to minimize user-originated activity, and therefore the chance of service unavailability, while the cluster runs mixed versions. Moving resources across the cluster is only possible by stopping cluster services on specific nodes with the option of moving resources.
2. After all nodes are stable, issue the lssrc -ls clstrmgrES command to verify that the cluster version is 16, as shown in Example 5-35.
Example 5-35 Check the PowerHA version on all nodes
clnode_1:/# clmgr -cv -a name,state,raw_state query node
# NAME:STATE:RAW_STATE
clnode_1:NORMAL:ST_STABLE
clnode_2:NORMAL:ST_STABLE
clnode_1:/#
 
clnode_1:/# lssrc -ls clstrmgrES | egrep "state|version|vrmf|fix"
Current state: ST_STABLE
CLversion: 16
local node vrmf is 7200
cluster fix level is "0"
clnode_1:/#
 
clnode_2:/# lssrc -ls clstrmgrES | egrep "state|version|vrmf|fix"
Current state: ST_STABLE
CLversion: 16
local node vrmf is 7200
cluster fix level is "0"
clnode_2:/#
3. Finally, we check the status of the resource groups using the clRGinfo command, as shown in Example 5-36.
Example 5-36 Check the resource groups’ status
clnode_1:/# clRGinfo
-----------------------------------------------------------------------------
Group Name Group State Node
-----------------------------------------------------------------------------
rg_IHS OFFLINE clnode_1
ONLINE clnode_2
clnode_1:/#
Check for proper function of the cluster
Checking the functionality of the cluster can be done in several ways. The most basic actions include synchronizing the cluster and moving resource groups across cluster nodes. Both of these actions are shown in Example 5-37.
Example 5-37 Verify and synchronize cluster configuration
clnode_1:/# clmgr sync cluster
 
 
Verifying additional prerequisites for Dynamic Reconfiguration...
 
[...]
 
Verification to be performed on the following:
Cluster Topology
Cluster Resources
 
Retrieving data from available cluster nodes. This could take a few minutes.
 
Start data collection on node clnode_1
Start data collection on node clnode_2
Collector on node clnode_1 completed
Collector on node clnode_2 completed
Data collection complete
[...]
Completed 100 percent of the verification checks
 
Verification has completed normally.
clnode_1:/#
Moving a resource group from one node to another is shown in Example 5-38.
Example 5-38 Move a resource group from one node to another
clnode_1:/# clRGinfo
-----------------------------------------------------------------------------
Group Name Group State Node
-----------------------------------------------------------------------------
rg_IHS ONLINE clnode_1
OFFLINE clnode_2
clnode_1:/#
 
clnode_1:/# clmgr move rg rg_IHS node=clnode_2
Attempting to move resource group rg_IHS to node clnode_2.
[...]
Resource group movement successful.
Resource group rg_IHS is online on node clnode_2.
[...]
Resource Group Name: rg_IHS
Node Group State
---------------------------- ---------------
clnode_1 OFFLINE
clnode_2 ONLINE
clnode_1:/#
5.3.5 Snapshot migration from PowerHA 7.1.3 to 7.2.0
The following steps are the major stages for a snapshot migration of PowerHA from version 7.1.3 to version 7.2.0:
1. Check and document the initial stage.
2. Create a cluster snapshot.
3. Stop cluster services on all nodes, bringing the resources offline.
4. Uninstall PowerHA file sets on all nodes.
5. Install the new version of PowerHA on all nodes.
6. Convert the snapshot file from the old version to the new one.
7. Restore the snapshot to re-create the cluster.
8. Start cluster services on all nodes, bring resources online.
9. Check for proper functionality of the cluster.
Check and document initial stage and create a cluster snapshot
This step is described in 5.3.2, “Check and document initial stage” on page 132, and applies to all migration scenarios and methods. Although for other migration types creating a cluster snapshot before starting is only a recommended step, for a snapshot migration it is, as the name implies, mandatory. This is shown in Example 5-16 on page 137.
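If the snapshot still needs to be created at this point, it can be done with clmgr. This is a sketch: the snapshot name matches the one used later in this section, and the exact option names may vary slightly by release.

```shell
# Create a cluster snapshot named to match the examples in this section.
clmgr add snapshot snapshot_before_migration \
    DESCRIPTION="Configuration saved before migration to 7.2.0"

# The snapshot files (.odm and .info) are written to the default directory:
ls /usr/es/sbin/cluster/snapshots
```

Remember to copy the resulting files off the cluster nodes as well, as recommended in 5.1, “Migration planning”.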
Stop cluster services on all nodes, bring the resource groups offline
The next step is to stop the cluster services and bring the resource groups offline. For that purpose, you can use the clmgr stop cluster command with the option of bringing the resource groups offline (Example 5-39).
Example 5-39 Stop cluster on all nodes and bring the resource groups offline
clnode_1:/# clmgr stop cluster manage=offline when=now
[...]
PowerHA SystemMirror on clnode_1 shutting down. Please exit any cluster applications...
[...]
The cluster is now offline.
[...]
clnode_1:/#
We now check the status of cluster nodes using the clmgr query node command, as shown in Example 5-40.
Example 5-40 Check cluster nodes status
clnode_1:/# clmgr -cv -a name,state,raw_state query node
# NAME:STATE:RAW_STATE
clnode_1:OFFLINE:ST_INIT
clnode_2:OFFLINE:ST_INIT
clnode_1:/#
Uninstall PowerHA file sets on all nodes
Because we now have the resource groups offline, we can proceed to uninstall the PowerHA file sets. This can be done from the command line using the installp -ug command, as shown in Example 5-41.
Example 5-41 Uninstall PowerHA file sets via CLI
clnode_1:/# installp -ug cluster.*
+-----------------------------------------------------------------------------+
Pre-deinstall Verification...
+-----------------------------------------------------------------------------+
Verifying selections...done
Verifying requisites...done
Results...
[...]
FILESET STATISTICS
------------------
79 Selected to be deinstalled, of which:
79 Passed pre-deinstall verification
----
79 Total to be deinstalled
[...]
cluster.es.migcheck 7.1.3.0 ROOT DEINSTALL SUCCESS
cluster.es.migcheck 7.1.3.0 USR DEINSTALL SUCCESS
clnode_1:/#
Alternatively, the same result can be obtained with SMIT, by running smitty remove and then specifying “cluster.*” as the filter for removing installed software. Next, clear the “PREVIEW only” option and select “REMOVE dependent software”, as shown in Example 5-42.
Example 5-42 Uninstall PowerHA file sets with SMIT
                              Remove Installed Software
 
Type or select values in entry fields.
Press Enter AFTER making all wanted changes.
 
[Entry Fields]
* SOFTWARE name [cluster.*] +
PREVIEW only? (remove operation will NOT occur) no +
REMOVE dependent software? yes +
EXTEND file systems if space needed? no +
DETAILED output? no +
 
WPAR Management
Perform Operation in Global Environment yes +
Perform Operation on Detached WPARs no +
Detached WPAR Names [_all_wpars] +
 
F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do
Install new version of PowerHA on all nodes
We then proceed to install PowerHA V7.2.0. This can be performed at the command line, using the installp -acgNYX command as shown in Example 5-19 on page 138, or with SMIT by running smit nim → Install and Update Software → Install and Update from ALL Available Software, as shown in Example 5-20 on page 139.
Before starting the cluster services on the recently upgraded node, we also check that the file /usr/es/sbin/cluster/netmon.cf exists and has the right content, as shown in Example 5-21 on page 140 (see “Upgrade PowerHA file sets” on page 138 for details).
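As a reminder of what that check involves, a netmon.cf file for a virtualized environment typically lists addresses to ping for adapter monitoring. The interface names and addresses below are placeholders from our test network, not a prescription for your cluster.

```shell
# Inspect the netmon.cf content (the !REQD entries shown are placeholders):
cat /usr/es/sbin/cluster/netmon.cf
# !REQD en0 192.168.100.1
# !REQD en1 192.168.100.1
```

Each !REQD line names a local adapter and an external address that must be reachable for the adapter to be considered up.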
Convert snapshot file from old version to new one
Next, we use the clconvert_snapshot command to convert the cluster snapshot file that was created at the beginning of the migration process to the new PowerHA V7.2.0 format, as shown in Example 5-43.
Example 5-43 Convert snapshot file
clnode_1:/# clmgr query snapshot
snapshot_before_migration
clnode_1:/#
 
clnode_1:/# /usr/es/sbin/cluster/conversion/clconvert_snapshot -v 7.1.3 -s snapshot_before_migration
Extracting ODM's from snapshot file... done.
Converting extracted ODM's... done.
Rebuilding snapshot file... done.
clnode_1:/#
The newly created file can be found in the same location as the old one, with the same name, while the old one is renamed, as shown in Example 5-44.
Example 5-44 Location of new snapshot file
clnode_1:/# ls -al /usr/es/sbin/cluster/snapshots/
total 680
drwxr-xr-x 2 root  system   4096 Nov 5 17:44 .
drwxr-xr-x 28 root  system   4096 Nov 5 17:34 ..
-rw------- 1 root  system      0 Nov 3 17:43 clsnapshot.log
-rw------- 1 root  system  74552 Nov 5 17:12 snapshot_before_migration.info
-rw-r--r-- 1 root  system  58045 Nov 5 17:44 snapshot_before_migration.odm
-rw------- 1 root  system  57722 Nov 5 17:12 snapshot_before_migration.odm.old
clnode_1:/#
Restore the snapshot to re-create the cluster
We now use the converted file to restore the cluster configuration:
1. Use the clmgr manage snapshot restore command (Example 5-45). This process also re-creates the CAA cluster.
Example 5-45 Restore the snapshot to re-create the cluster
clnode_1:/# clmgr manage snapshot restore snapshot_before_migration
 
clsnapshot: Removing any existing temporary PowerHA SystemMirror ODM entries...
 
clsnapshot: Creating temporary PowerHA SystemMirror ODM object classes...
clsnapshot: Adding PowerHA SystemMirror ODM entries to a temporary directory..
clsnapshot: Verifying configuration using temporary PowerHA SystemMirror ODM entries...
Verification to be performed on the following:
Cluster Topology
Cluster Resources
 
Retrieving data from available cluster nodes. This could take a few minutes.
 
Start data collection on node clnode_1
Start data collection on node clnode_2
Collector on node clnode_2 completed
Waiting on node clnode_1 data collection, 15 seconds elapsed
Collector on node clnode_1 completed
Data collection complete
[...]
Completed 100 percent of the verification checks
 
Verification has completed normally.
 
clsnapshot: Removing current PowerHA SystemMirror cluster information...
Ensuring that the following nodes are offline: clnode_2, clnode_1
[...]
Attempting to delete node "clnode_2" from the cluster...
Attempting to remove the CAA cluster from "clnode_1"...
Attempting to delete node "clnode_1" from the cluster...
 
clsnapshot: Adding new PowerHA SystemMirror ODM entries...
 
clsnapshot: Synchronizing cluster configuration to all cluster nodes...
[...]
Committing any changes, as required, to all available nodes...
[...]
cldare: Configuring a 2 node cluster in AIX may take up to 2 minutes. Please wait.
[...]
Verification has completed normally.
 
clsnapshot: Succeeded applying Cluster Snapshot: snapshot_before_migration
 
lscluster: Cluster services are not active.
[...]
clnode_1:/#
2. When the snapshot restoration is finished, we can check the topology of the newly restored cluster, as shown in Example 5-46.
Example 5-46 Check cluster topology
clnode_1:/# cltopinfo
Cluster Name: migration_cluster
Cluster Type: Standard
Heartbeat Type: Unicast
Repository Disk: hdisk2 (00f6f5d0d387b342)
 
There are 2 node(s) and 1 network(s) defined
NODE clnode_1:
Network net_ether_01
clst_svcIP 192.168.100.50
clnode_1 192.168.100.51
NODE clnode_2:
Network net_ether_01
clst_svcIP 192.168.100.50
clnode_2 192.168.100.52
 
Resource Group rg_IHS
Startup Policy Online On First Available Node
Fallover Policy Fallover To Next Priority Node In The List
Fallback Policy Never Fallback
Participating Nodes clnode_1 clnode_2
Service IP Label clst_svcIP
clnode_1:/#
Start cluster services on all nodes, bring resource groups online
Now we can start the cluster services on both nodes using the clmgr start cluster command, as shown in Example 5-47. We could also use SMIT by running smitty clstart, as shown in Example 5-32 on page 144.
Example 5-47 Start the cluster services on all nodes and bring resource groups online
clnode_1:/# clmgr start cluster
 
clnode_2: start_cluster: Starting PowerHA SystemMirror
[...]
clnode_1: start_cluster: Starting PowerHA SystemMirror
[...]
The cluster is now online.
 
Starting Cluster Services on node: clnode_2
[...]
clnode_2: Exit status = 0
clnode_2:
 
Starting Cluster Services on node: clnode_1
[...]
clnode_1: Exit status = 0
clnode_1:/#
Check for proper function of the cluster
Checking the functionality of the cluster can be done in several ways. The most basic actions include synchronizing the cluster and moving resource groups between cluster nodes. Both of these actions are shown in Example 5-48.
Example 5-48 Verify and synchronize cluster configuration
clnode_2:/# clmgr sync cluster
 
Verifying additional prerequisites for Dynamic Reconfiguration...
...completed.
 
Committing any changes, as required, to all available nodes...
[..]
Verification has completed normally.
 
cldare: No changes detected in Cluster Topology or Resources.
...completed.
 
Committing any changes, as required, to all available nodes...
[...]
Verification has completed normally.
 
clsnapshot: Creating file /usr/es/sbin/cluster/snapshots/active.0.odm.
 
clsnapshot: Succeeded creating Cluster Snapshot: active.0
 
PowerHA SystemMirror Cluster Manager current state is: ST_RP_RUNNING
PowerHA SystemMirror Cluster Manager current state is: ST_BARRIER
PowerHA SystemMirror Cluster Manager current state is: ST_UNSTABLE.
PowerHA SystemMirror Cluster Manager current state is: ST_RP_RUNNING.
PowerHA SystemMirror Cluster Manager current state is: ST_UNSTABLE.
PowerHA SystemMirror Cluster Manager current state is: ST_CBARRIER
PowerHA SystemMirror Cluster Manager current state is: ST_STABLE.....completed.
clnode_2:/#
Moving a resource group from one node to another is shown in Example 5-49.
Example 5-49 Move a resource group from one node to another
clnode_2:/# clRGinfo
-----------------------------------------------------------------------------
Group Name Group State Node
-----------------------------------------------------------------------------
rg_IHS OFFLINE clnode_1
ONLINE clnode_2
clnode_2:/#
clnode_2:/# clmgr move resource_group rg_IHS node=clnode_1
Attempting to move resource group rg_IHS to node clnode_1.
[...]
Resource group movement successful.
Resource group rg_IHS is online on node clnode_1.
[...]
Resource Group Name: rg_IHS
Node Group State
---------------------------- ---------------
clnode_1 ONLINE
clnode_2 OFFLINE
clnode_2:/#
5.3.6 Non-disruptive migration of PowerHA from 7.1.3 to 7.2.0
The major steps in a non-disruptive migration of PowerHA are described in the following list:
1. Check and document the initial stage.
2. Stop cluster services on one node, leaving the resource groups unmanaged.
3. Upgrade PowerHA file sets on the offline node.
4. Start the cluster on the recently upgraded node.
5. Repeat steps 2 through 4 for all of the other cluster nodes, one node at a time.
6. Check for proper function of the cluster.
 
Tip: A demo of performing a non-disruptive migration to PowerHA 7.2 is available on the following website:
Check and document initial stage
This step is described in 5.3.2, “Check and document initial stage” on page 132 as being common to all migration scenarios and methods.
Stop cluster services on one node, leaving resource groups unmanaged
To stop cluster services, complete the following steps:
1. First, we make a note of the distribution of resource groups across the cluster (using the clRGinfo command), so that we can track the migration process (Example 5-50).
Example 5-50 Check the resource groups distribution across the cluster
clnode_1 /> clRGinfo
-----------------------------------------------------------------------------
Group Name State Node
-----------------------------------------------------------------------------
rg_IHS OFFLINE clnode_1
ONLINE clnode_2
clnode_1 />
2. Next, we stop the cluster services on one node and leave the resource groups unmanaged, so that the applications remain functional. For that purpose, you can use the clmgr stop node command with the option of unmanaging resources (Example 5-51).
Example 5-51 Stop the cluster services on one node and unmanage the resource groups
clnode_1 /> clmgr stop node manage=unmanage
[...]
PowerHA SystemMirror on clnode_1 shutting down. Please exit any cluster applications...
[...]
"clnode_1" is now unmanaged.
[...]
clnode_1 />
3. We can now check the status of the cluster nodes and that of the resource groups across the cluster, using the clmgr query node and clRGinfo commands. In our case, because we stopped cluster services on a node that had no online resource groups, the output of the second command is the same as before stopping the node (Example 5-52).
Example 5-52 Check the cluster nodes and resource groups status
clnode_1 /> clmgr -cv -a name,state,raw_state query node
# NAME:STATE:RAW_STATE
clnode_1:UNMANAGED:UNMANAGED
clnode_2:NORMAL:ST_STABLE
clnode_1 />
 
clnode_1 /> clRGinfo
-----------------------------------------------------------------------------
Group Name State Node
-----------------------------------------------------------------------------
rg_IHS OFFLINE clnode_1
ONLINE clnode_2
clnode_1 />
Upgrade PowerHA file sets on the offline node
Because the cluster services are now stopped on one node, we can proceed to upgrade the PowerHA file sets on that node. This can be performed at the command line using the installp -acgNYX command, as shown in Example 5-19 on page 138, or with SMIT, as shown in Example 5-20 on page 139.
Before starting the cluster services on the recently upgraded node, we also verify that the file /usr/es/sbin/cluster/netmon.cf exists and has the right content, as shown in Example 5-21 on page 140 (see “Upgrade PowerHA file sets” on page 138 for details).
After the upgrade finishes, we can check the version of PowerHA installed on the current node using the halevel -s command, as shown in Example 5-53.
Example 5-53 Checking the version of PowerHA installed in the node
clnode_1 /> halevel -s
7.2.0 GA
clnode_1 />
Start cluster services on the recently upgraded node
Now we can start the cluster services on the recently upgraded node:
1. Use the clmgr start node command (or with SMIT, use the smit sysmirror command and then follow the System Management (C-SPOC) → PowerHA SystemMirror Services → Start Cluster Services path), as shown in Example 5-54.
 
Important: Restarting cluster services with the Automatic option for managing resource groups invokes one or more application start scripts. Make sure that the application start scripts can detect that the application is already running, or temporarily replace them with empty executable scripts and restore the originals after startup.
Example 5-54 Start cluster services on one node
clnode_1 /> clmgr start node
[...]
clnode_1: start_cluster: Starting PowerHA SystemMirror
...
"clnode_1" is now online.
 
Cluster services are running at different levels across
the cluster. Verification will not be invoked in this environment.
 
Starting Cluster Services on node: clnode_1
[...]
clnode_1: Dec 11 2015 19:28:07 complete.
clnode_1 />
2. The cluster nodes status can be checked using the clmgr query node command, as shown in Example 5-55.
Example 5-55 Check cluster nodes status
clnode_1:/# clmgr -cv -a name,state,raw_state query node
# NAME:STATE:RAW_STATE
clnode_1:NORMAL:ST_STABLE
clnode_2:NORMAL:ST_STABLE
clnode_1:/#
3. We also verify the version and status of PowerHA running on the recently upgraded node, as well as the version of the cluster, using the lssrc -ls clstrmgrES command, as shown in Example 5-56.
Example 5-56 Check PowerHA version on current node
clnode_1 /> lssrc -ls clstrmgrES | egrep "state|CLversion|vrmf|fix"
Current state: ST_STABLE
CLversion: 15
local node vrmf is 7200
cluster fix level is "0"
clnode_1 />
Note that although the node itself has been upgraded to PowerHA 7.2.0, the cluster version is still 15, because not all cluster nodes have yet been upgraded.
Repeat steps 2-4 for each node, one node at a time
Now we can proceed in the same manner to upgrade all the remaining nodes, only one node at a time, following the same steps:
1. Stop cluster services on a particular node, leaving the resource groups unmanaged
2. Upgrade PowerHA file sets on that node
3. Restart cluster services on that node
Complete the following steps:
1. Proceed in this way for each node that has not yet been upgraded, one node at a time. When finished, all nodes should be upgraded and stable within the cluster.
 
Note: After the migration process is started, moving resource groups to another node by using C-SPOC or rg_move is not permitted. This is a precautionary measure to minimize user-originated activity, and therefore the chance of service unavailability, while the cluster runs mixed versions. Moving resources across the cluster is only possible by stopping cluster services on specific nodes with the option of moving resources.
2. When all nodes are stable (use the clmgr query node command), issue the lssrc -ls clstrmgrES command to verify that the cluster version is 16, as shown in Example 5-57.
Example 5-57 Check the nodes version, nodes status and cluster version
clnode_1 /> clmgr -cv -a name,state,raw_state query node
# NAME:STATE:RAW_STATE
clnode_1:NORMAL:ST_STABLE
clnode_2:NORMAL:ST_STABLE
clnode_1 />
 
clnode_1 /> lssrc -ls clstrmgrES | egrep "state|CLversion|vrmf|fix"
Current state: ST_STABLE
CLversion: 16
local node vrmf is 7200
cluster fix level is "0"
clnode_1 />
 
clnode_2 /> lssrc -ls clstrmgrES | egrep "state|CLversion|vrmf|fix"
Current state: ST_STABLE
CLversion: 16
local node vrmf is 7200
cluster fix level is "0"
clnode_2 />
3. Finally, we check the status of the resource groups using the clRGinfo command, as shown in Example 5-58.
Example 5-58 Check the resource groups status
clnode_1:/# clRGinfo
-----------------------------------------------------------------------------
Group Name Group State Node
-----------------------------------------------------------------------------
rg_IHS OFFLINE clnode_1
ONLINE clnode_2
clnode_1:/#
Check for proper function of the cluster
Checking the functionality of the cluster can be done in several ways. The most basic actions include synchronizing the cluster and moving resource groups between cluster nodes. Both of these actions are shown in Example 5-59.
Example 5-59 Verify and synchronize cluster configuration
clnode_2:/# clmgr sync cluster
Verifying additional prerequisites for Dynamic Reconfiguration...
...completed.
 
Committing any changes, as required, to all available nodes...
[...]
Verification has completed normally.
 
clsnapshot: Creating file /usr/es/sbin/cluster/snapshots/active.0.odm.
 
clsnapshot: Succeeded creating Cluster Snapshot: active.0
[...]
PowerHA SystemMirror Cluster Manager current state is: ST_RP_RUNNING
PowerHA SystemMirror Cluster Manager current state is: ST_BARRIER...
PowerHA SystemMirror Cluster Manager current state is: ST_UNSTABLE.
PowerHA SystemMirror Cluster Manager current state is: ST_RP_RUNNING...
PowerHA SystemMirror Cluster Manager current state is: ST_BARRIER
PowerHA SystemMirror Cluster Manager current state is: ST_CBARRIER
PowerHA SystemMirror Cluster Manager current state is: ST_UNSTABLE.
PowerHA SystemMirror Cluster Manager current state is: ST_STABLE.....completed.
clnode_2:/#
Moving a resource group from one node to another is shown in Example 5-60.
Example 5-60 Move a resource group from one node to another
clnode_1:/# clRGinfo
-----------------------------------------------------------------------------
Group Name Group State Node
-----------------------------------------------------------------------------
rg_IHS OFFLINE clnode_1
ONLINE clnode_2
 
clnode_1:/# clmgr move resource_group rg_IHS node=clnode_1
Attempting to move resource group rg_IHS to node clnode_1.
[...]
Resource group movement successful.
Resource group rg_IHS is online on node clnode_1.
[...]
Resource Group Name: rg_IHS
Node Group State
---------------------------- ---------------
clnode_1 ONLINE
clnode_2 OFFLINE
clnode_1:/#
5.3.7 Migrations of PowerHA from 7.1.1 and 7.1.2 to 7.2.0
Migrations of PowerHA from versions 7.1.1 and 7.1.2 to version 7.2.0 are possible, as depicted in Table 5-1 on page 117. Because migrations from these versions proceed in a very similar manner to those described in the previous sections, we present them only briefly, emphasizing the few particularities encountered.
Offline migration
The offline migration of PowerHA from versions 7.1.1 or 7.1.2 to 7.2.0 goes in a similar manner to the one from 7.1.3 described earlier, following the same major steps:
1. Check and document initial stage
2. Stop cluster on all nodes, bringing the resources offline
3. Upgrade PowerHA file sets
4. Start cluster on all nodes, bring the resources online
5. Check for proper function of the cluster
The output from the individual commands in these migration cases is also similar to that received during a migration from version 7.1.3 (see 5.3.3, “Offline migration of PowerHA from 7.1.3 to 7.2.0” on page 138).
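Condensed, the offline sequence uses the same commands shown earlier in this chapter. This sketch assumes the installation images are under a hypothetical /mnt/powerha720 directory; note which steps run on one node and which on every node.

```shell
# 1. Stop cluster services on all nodes, bringing resources offline (any node):
clmgr stop cluster manage=offline when=now

# 2. Upgrade the PowerHA file sets (run on every node):
installp -acgNYX -d /mnt/powerha720 cluster   # /mnt/powerha720 is hypothetical

# 3. Start cluster services on all nodes and bring resources online:
clmgr start cluster

# 4. Verify the version and the resource group distribution:
lssrc -ls clstrmgrES | egrep "state|CLversion|vrmf|fix"
clRGinfo
```

After step 4, the CLversion reported should match the values for the starting release shown in Table 5-2.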
Rolling migration
The rolling migration of PowerHA from versions 7.1.1 or 7.1.2 to 7.2.0 goes in a similar manner to the one from 7.1.3 described earlier, following the same major steps:
1. Check and document initial stage
2. Stop cluster services on one node, moving resource groups to other nodes
3. Upgrade PowerHA file sets on the offline node
4. Start cluster on the recently upgraded node
5. Repeat steps 2 - 4 for all other cluster nodes, one node at a time
6. Check for proper function of the cluster
The output from the individual commands in these migration cases is also similar to that received during a migration from version 7.1.3 (see 5.3.4, “Rolling migration of PowerHA from 7.1.3 to 7.2.0” on page 142).
The only notable differences in the command output are related to the version of PowerHA used as the starting point and the corresponding numeric cluster version code, which is 13 for PowerHA 7.1.1 and 14 for PowerHA 7.1.2, as shown in Table 5-2.
Table 5-2 Cluster and node before, during, and after upgrade from PowerHA 7.1.1 or 7.1.2

PowerHA starting  Migration stage        halevel -s  lssrc -ls clstrmgrES |
point version                                        egrep "CLversion|vrmf|fix"
-------------------------------------------------------------------------------
PowerHA 7.1.1     Before upgrade         7.1.1 SP1   CLversion: 13
                                                     local node vrmf is 7112
                                                     cluster fix level is "2"
                  During upgrade         7.2.0 GA    CLversion: 13
                  (mixed cluster state)              local node vrmf is 7200
                                                     cluster fix level is "0"
                  After upgrade          7.2.0 GA    CLversion: 16
                                                     local node vrmf is 7200
                                                     cluster fix level is "0"
-------------------------------------------------------------------------------
PowerHA 7.1.2     Before upgrade         7.1.2 SP1   CLversion: 14
                                                     local node vrmf is 7121
                                                     cluster fix level is "1"
                  During upgrade         7.2.0 GA    CLversion: 13
                  (mixed cluster state)              local node vrmf is 7200
                                                     cluster fix level is "0"
                  After upgrade          7.2.0 GA    CLversion: 16
                                                     local node vrmf is 7200
                                                     cluster fix level is "0"
Snapshot migration
The snapshot migration of PowerHA from version 7.1.1 or 7.1.2 to 7.2.0 proceeds in a similar manner to the migration from 7.1.3 described earlier, following the same major steps:
1. Check and document the initial state, and create a cluster snapshot
2. Stop the cluster on all nodes, bringing the resources offline
3. Uninstall the existing PowerHA version and install PowerHA 7.2.0
4. Convert the saved snapshot by using the clconvert_snapshot command
5. Restore the converted snapshot and start the cluster
6. Check for proper function of the cluster
The output of the individual commands in these migration cases is also similar to the output received during a migration from version 7.1.3 (see 5.3.5, “Snapshot migration from PowerHA 7.1.3 to 7.2.0” on page 147).
The only notable difference concerns the parameters specified for the clconvert_snapshot command, which must reflect the version of PowerHA used as the starting point for the migration (in our case, 7.1.1 and 7.1.2, as shown in Example 5-61).
Example 5-61 Convert snapshot file from PowerHA 7.1.1 or 7.1.2
clnode_1 /> /usr/es/sbin/cluster/conversion/clconvert_snapshot -v 7.1.1 -s snapshot_7.1.1_before_migration.odm
Extracting ODMs from snapshot file... done.
Converting extracted ODMs... done.
Rebuilding snapshot file... done.
clnode_1 />
 
clnode_1 /> /usr/es/sbin/cluster/conversion/clconvert_snapshot -v 7.1.2 -s snapshot_7.1.2_before_migration.odm
Extracting ODMs from snapshot file... done.
Converting extracted ODMs... done.
Rebuilding snapshot file... done.
clnode_1 />
Non-disruptive migration
 
Important: Non-disruptive migrations from versions older than 7.1.3 are not supported. Therefore, such migrations should not be attempted in production environments.
For more detailed information about supported configurations, requirements, and limitations, see the following website:
We attempted non-disruptive migrations of PowerHA from versions 7.1.1 and 7.1.2 to 7.2.0 in our test environment, in a manner similar to the migration from 7.1.3 described earlier, following the same major steps:
1. Check and document initial stage
2. Stop cluster services on one node, leaving the resource groups unmanaged
3. Upgrade PowerHA file sets on the offline node
4. Start cluster on the recently upgraded node
5. Repeat steps 2 through 4 for all other cluster nodes, one node at a time
6. Check for proper function of the cluster
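The per-node actions above can be sketched in shell. As before, this is a minimal sketch under our own assumptions (placeholder fileset directory, example node name, and a DRYRUN default that prints commands instead of executing them); the key difference from a rolling migration is MANAGE=unmanage, which leaves the resource groups online but unmanaged while the node is upgraded.

```shell
#!/bin/sh
# Non-disruptive per-node sketch: resource groups stay online but
# unmanaged while this node's filesets are upgraded.
: "${DRYRUN:=1}"                       # 1 = print only; unset to execute
CLMGR=/usr/es/sbin/cluster/utilities/clmgr
UPDDIR=/tmp/ha720                      # placeholder fileset directory

run() { if [ -n "$DRYRUN" ]; then echo "+ $*"; else "$@"; fi; }

nondisruptive_upgrade_node() {
    node=$1
    # Stop cluster services, leaving the resource groups unmanaged
    run "$CLMGR" stop node "$node" WHEN=now MANAGE=unmanage
    # Upgrade the PowerHA filesets on this node
    run install_all_updates -d "$UPDDIR" -Y
    # Rejoin the cluster; the unmanaged resources are re-acquired
    run "$CLMGR" start node "$node"
}

nondisruptive_upgrade_node clnode_1    # repeat for each node in turn
```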
Non-disruptive migration from 7.1.1
A complete migration of PowerHA from 7.1.1 to 7.2.0 did not work. The process proceeded in a similar manner to the migration from PowerHA 7.1.3 until the last step, the cluster verification and synchronization test. The nodes themselves were upgraded successfully and the cluster services started on each of them. The resource groups remained online (available) throughout the process.
In the last step, however, the verification and synchronization action failed, leaving the communication between the two nodes non-functional. The situation returned to normal as soon as an offline/online cycle was performed on the resource groups (in place, or by moving them to other nodes). As a result, however, the migration was no longer non-disruptive.
Non-disruptive migration from 7.1.2
Although it is unsupported, our attempt to migrate PowerHA 7.1.2 to 7.2.0 in our test environment, with our cluster setup, was successful and proceeded the same way as the migration from PowerHA 7.1.3. However, this in no way implies that it will work under conditions different from those of our test environment and cluster setup.
That said, the output of the individual commands for this migration case was similar to the output received during a migration from version 7.1.3 (see 5.3.6, “Non-disruptive migration of PowerHA from 7.1.3 to 7.2.0” on page 153).
The same remarks about the cluster and cluster node versions apply as in the case of rolling migrations. The stages through which a node passes are shown in Table 5-2 on page 159.
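The version checks used throughout this chapter can be automated with a small helper that inspects the filtered clstrmgrES status and reports whether the cluster-wide version has stepped up to 16, that is, whether the migration is complete on all nodes. The migration_complete function name is our own, not a PowerHA utility, and the sample status texts are taken from Table 5-2.

```shell
#!/bin/sh
# migration_complete STATUS: succeed once CLversion reaches 16, which
# happens only after the last node has been upgraded to 7.2.0.
# On a live node, the status text would come from:
#   lssrc -ls clstrmgrES | egrep "CLversion|vrmf|fix"
migration_complete() {
    printf '%s\n' "$1" | grep -q '^CLversion: 16$'
}

# Sample outputs from Table 5-2
mixed='CLversion: 13
local node vrmf is 7200
cluster fix level is "0"'

done_state='CLversion: 16
local node vrmf is 7200
cluster fix level is "0"'

migration_complete "$done_state" && echo "migration complete"
migration_complete "$mixed" || echo "mixed cluster; keep going"
```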