Chapter 15. Upgrading to Oracle 11g Release 2

In this chapter, we will describe the process of upgrading to Oracle 11g Release 2. Unlike previous versions of RAC, where most deployments were to new systems, we believe there is now an established user base. Consequently, many Oracle 11.2 deployments will be upgrades from earlier versions of RAC.

At the time of writing, the vast majority of RAC installations are running Oracle 10g Release 2, so in this chapter we will discuss upgrading from that version to Oracle 11.2.

Two steps are involved in the upgrade process:

  1. Upgrading Oracle Clusterware and ASM to Grid Infrastructure

  2. Upgrading the RAC Software and the database

Oracle Clusterware and ASM should be upgraded to the Oracle 11.2 Grid Infrastructure at the same time. Although the installer gives you the option to defer the ASM migration, we strongly recommend performing it together with the Clusterware migration. The RAC software and database can be upgraded at a later date.

Note

The RAC software version cannot be newer than the version of Grid Infrastructure.

Upgrading Grid Infrastructure Components

As just discussed, the Grid Infrastructure components, which are Oracle Clusterware and ASM, should be upgraded from earlier versions at the same time.

The following versions of Oracle Clusterware can be upgraded to Oracle 11.2:

  • Oracle 10g Release 1: 10.1.0.3 or later

  • Oracle 10g Release 2: 10.2.0.3 or later

  • Oracle 11g Release 1: 11.1.0.6 or later

Upgrading earlier versions of Clusterware to Grid Infrastructure is always performed out of place; unlike previous upgrade paths, you must specify a new Oracle home for the grid software stack. Oracle Automatic Storage Management (ASM) can be upgraded in a rolling fashion if you are on ASM 11.1.0.6 or later and have applied a mandatory patch.

Note

There is a known problem with rolling ASM upgrades on the 11.1 base release. Apply the patch for bug 7436280, or the latest PSU on top of 11.1.0.7, to your configuration before attempting an ASM rolling upgrade.

If the Oracle Cluster Registry (OCR) and voting disk are currently installed on raw or block devices, the files for these components can be upgraded in place. You can optionally move the files to ASM or a cluster file system later. In Oracle 11.2, the OCR and voting disk are only supported on raw or block devices if they have been upgraded from a previous version; new installations must use ASM or a cluster file system (please refer to Chapter 8 for more information on how to move OCR and voting disks into ASM).

Installing the Prerequisites

Prior to upgrading a production system, we recommend that you rehearse the upgrade procedure on a test cluster. It is particularly important that you be confident in how the Grid Infrastructure upgrade procedure works because it is difficult to recover from a failure.

To test the upgrade process, we built a new cluster running Oracle 10.2 Clusterware, ASM, and RDBMS software and created an Oracle 10.2 database. We initially installed Oracle 10.2.0.1, and then patched all of the Oracle homes to 10.2.0.4 before creating the database. If you don't have physical hardware available, consider a virtualized environment. This isn't a like-for-like replacement of your production environment, but it does give you the opportunity to practice the upgrade process.

Upgrading Oracle 10g Clusterware is not seamless. We have encountered many minor OCR corruptions in Oracle 10g, most of them affecting node applications and database services. We strongly recommend that you resolve any corruptions in the OCR on your production cluster before attempting to upgrade to Oracle 11.2. Corruptions can often be resolved by deleting and re-adding services, instances, or databases. In the case of node applications, all four node applications must be removed and re-created together in Oracle 10g and Oracle 11.1.

Also, you should ensure that you have adequate backups of the OCR prior to attempting to fix any outstanding issues. We recommend that you shut down Oracle Clusterware on all nodes in the cluster, and then back up the OCR using the dd command, as in this example:

dd if=/dev/ocr1 of=/tmp/ocr1 bs=1M count=256

We typically make multiple copies of the file to different nodes or to shared storage. If you need to restore the OCR, then ensure that Oracle Clusterware is fully shut down again before copying the file back (again, you do this using the dd command). Oracle 11.1 introduced the -manualbackup option of ocrconfig, which you can use instead of the dd command when upgrading from 11.1 to 11.2.
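The following sketch shows a restore using the file copied above, followed by the Oracle 11.1 alternative; the device and file names match the earlier example, so adjust them for your configuration:

# Clusterware must be fully shut down on all nodes before restoring
[root@london1]# dd if=/tmp/ocr1 of=/dev/ocr1 bs=1M

# On Oracle 11.1, an on-demand OCR backup can be taken with ocrconfig instead
[root@london1]# ocrconfig -manualbackup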

Before starting an upgrade, make sure that you have backed up the following items:

  • Oracle Clusterware home

  • Oracle ASM home

  • Oracle RDBMS software home(s)

  • OCR

  • Voting disk

  • Database files

  • Archived redo logs

  • Control files

The Oracle homes should be backed up on each node in the cluster because the contents vary between nodes. We recommend that you make multiple copies of the OCR and voting disk, although mirrored copies of these should be identical. If the database is stored in ASM, then you should use RMAN to back up the database files, control files, and archived redo logs.

If you are upgrading ASM diskgroups from Oracle 11.1, then we also recommend that you take metadata backups for each diskgroup using the md_backup command in the ASMCMD utility.
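For example, the following sketch backs up the metadata of a diskgroup named DATA; the diskgroup name and backup location are assumptions, and the flags shown are those of the 11.1 ASMCMD syntax:

ASMCMD> md_backup -b /tmp/data_md.backup -g data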

For Oracle 10.1 and later databases, you may consider enabling flashback logging for the database to restore the database more quickly if the upgrade fails. However, we still strongly recommend that you also make a full backup of the database.
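A minimal sketch of this approach follows; the flash recovery area location and size are assumptions, and on Oracle 10g the database must be mounted, not open, when flashback logging is enabled:

SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 20G SCOPE=BOTH;
SQL> ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=BOTH;
SQL> ALTER DATABASE FLASHBACK ON;
SQL> CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;

If the upgrade fails, the database can be returned to its starting point with FLASHBACK DATABASE TO RESTORE POINT before_upgrade; drop the restore point once the upgrade has succeeded.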

Prior to the upgrade, the SCAN must be registered in DNS. We strongly recommend using three IP addresses for the SCAN (see Chapter 6 for more information about the SCAN).
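You can verify the DNS registration with nslookup before starting the installer; the SCAN name here matches the one used later in this chapter, and the addresses are purely illustrative:

[oracle@london1]$ nslookup cluster1-scan.example.com
Name:    cluster1-scan.example.com
Address: 172.17.1.205
Name:    cluster1-scan.example.com
Address: 172.17.1.206
Name:    cluster1-scan.example.com
Address: 172.17.1.207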

You should ensure that all software components are downloaded and ready on the server(s) you intend to upgrade to 11.2. This way, networking problems that might occur should not affect you. Prior to installation, we downloaded the Grid Infrastructure software and unzipped the archive file into /home/oracle/stage/11gR2/grid on the first node in the cluster.

Running the Installer

The Grid Infrastructure upgrade is performed using the Oracle Universal Installer (OUI). This installer has a GUI interface and must be executed within an X Windows session. Chapter 7 describes how to start the OUI in an X Windows session, either at the console or via a VNC client. Before starting the installer, take a moment to unset all Oracle-related environment variables. The most common variables to unset are shown in the following example:

[oracle@london1]$ unset ORACLE_HOME ORACLE_SID ORACLE_BASE ORA_CRS_HOME

Failing to unset ORA_CRS_HOME can cause the assistants that upgrade the network configuration and ASM to fail. You should also ensure that none of these variables are set in the .bashrc and .bash_profile configuration files of the software owner on any node.
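A quick way to verify this is to search the profile scripts for the variables, as in this sketch:

[oracle@london1]$ grep -E 'ORACLE_(HOME|SID|BASE)|ORA_CRS_HOME' \
      /home/oracle/.bashrc /home/oracle/.bash_profile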

On the first node, log in as the Oracle user, navigate to the staging area, and run the installer:

[oracle@london1]$ cd /home/oracle/stage/11gR2/grid
[oracle@london1]$ ./runInstaller

The installer will take a couple of minutes to load the Java libraries before displaying the Installation Options page shown in Figure 15-1.

Figure 15-1. The Grid Infrastructure installation options

At some sites, we have seen very small root partitions that leave little space available in the /tmp directory, where the OUI creates its temporary files by default. You can set the TMP and TEMP environment variables to a directory with sufficient space to allow the OUI to start. It is important to set both TMP and TEMP, not just one of them.
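For example, assuming a directory on /u01 with sufficient free space:

[oracle@london1]$ mkdir -p /u01/tmp
[oracle@london1]$ export TMP=/u01/tmp TEMP=/u01/tmp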

On the Installation Options page, select Upgrade Grid Infrastructure and press Next to continue.

Specifying Options

If the OUI detects an existing ASM instance, then a warning message will be displayed. Press Yes to continue if you are certain that you wish to upgrade the ASM instance at this time. If you choose not to upgrade ASM at this time, then you need to complete this task manually after the upgrade of Clusterware to version 11.2.

The Product Languages page will be displayed next. Select any additional languages that you want error messages to be displayed in, and then press Next to continue to the Grid Infrastructure Node Selection page (see Figure 15-2).

Figure 15-2. The Grid Infrastructure Node Selection page

On the Grid Infrastructure Node Selection page, the OUI will display the names of nodes already configured to be part of the cluster. If any nodes are missing from this list, then abort the installation and investigate the cause using the release you are upgrading from. If you find that all Clusterware daemons are running and that all nodes that should be part of the cluster are up and running and registered in Clusterware, then you most likely have a corrupt global inventory.

If you wish to remove a node prior to performing the upgrade, we recommend that you do so using the original Oracle Clusterware software.

SSH connectivity should already be configured for the original installation, but there is always the possibility that the configuration has been subsequently modified. The OUI will test the SSH configuration before proceeding. If SSH was not used for the original installation (some early Oracle 10g installations continued to use RSH), then press the Setup button to instruct the OUI to set up a new SSH configuration. Note that, depending on the existing SSH configuration, the OUI automatic setup may fail. If this occurs, then we recommend that you abort the OUI session, delete the existing SSH configuration, and then try again.

The SSH configuration is stored in the .ssh directory of the oracle user on each node, and it can be deleted with the following line of code:

[oracle@london1]$ rm -rf /home/oracle/.ssh

Repeat this command on each node in the cluster. If you wish to retain a backup of the .ssh directory rather than simply deleting its contents, then we recommend renaming the directory, as in this example:

[oracle@london1]$ mv /home/oracle/.ssh /home/oracle/ssh_backup

If the OUI detects any existing ASM instances in the cluster, it will ask you to confirm that you wish to upgrade ASM. We strongly recommend that you upgrade Oracle Clusterware and ASM at the same time.

Press Next to continue to the Grid Infrastructure SCAN Information page (see Figure 15-3).

Figure 15-3. The Grid Infrastructure SCAN Information page

The Grid Infrastructure SCAN Information page displays the existing cluster name and recommends a SCAN name based on the cluster name. By convention, we have been creating the SCAN name by appending the -scan suffix to the cluster name.

Note

The SCAN Information page also allows you to specify a SCAN port. The default port of 1521 is sufficient in most circumstances; in any case, this port can be changed post installation (see Chapter 8 for more information).

Press Next to continue to the ASM Monitor Password page. In Oracle 11.2 and later, a user with SYSASM privileges is required to perform administrative tasks on ASM diskgroups, such as creating and dropping disk groups or disks. In this release, an additional ASMSNMP user is created with SYSDBA privileges to monitor ASM. A password must be supplied for this user. The password must follow the new Oracle password requirements: it must be at least eight characters long, contain both upper- and lowercase letters, and include at least one numeric character.

Enter a password and press Next to continue to the Privileged Operating System Groups page. On this page, you can specify operating system groups that correspond to the following ASM groups:

  • ASM Database Administrator (OSDBA) Group

  • ASM Instance Administrator Operator (OSOPER) Group

  • ASM Instance Administrator (OSASM) Group

The default value for all three groups is oinstall. Modify the groups as required, and then press Next to continue. If you accepted the default values, then a warning will be displayed stating that the OSDBA, OSOPER, and OSASM groups all have the same operating system group. The message will ask you to confirm this decision. Press Yes to continue to the Installation Location page (see Figure 15-4).

Figure 15-4. The Grid Infrastructure Installation Location page

The Installation Location page will suggest default values for the Oracle Base directory and the Grid Infrastructure software location. In this example, the Oracle Base directory defaulted to /u01/app/oracle. While the Grid Infrastructure software will not be stored below this directory, it will still be used by Grid Infrastructure as the default base for the diagnostic directory. All Grid Infrastructure-related log files will be stored in $GRID_HOME/log/hostname.

The Grid Infrastructure software location shown in Figure 15-4 defaulted to /u01/app/11.2.0/grid. Note that the OUI performs an out-of-place upgrade from any supported previous release to Oracle 11.2. In other words, a completely new Oracle home will be created for the Grid Infrastructure software. This home includes both the Oracle Clusterware and ASM software.

Oracle recommends that the Grid Infrastructure software be installed in local Oracle homes on each node in the cluster. In other words, do not attempt to create a shared Oracle home for the Grid Infrastructure software because this will limit your options during upgrades and patching.

Press Next to continue to the Prerequisite Checks page. See Chapter 7 for a full description of this page. As the prerequisites have evolved between Oracle 10g and Oracle 11.2, you may find that several checks initially fail. We strongly recommend that you address all prerequisite check failures before attempting to continue with the upgrade. Do not ignore failures unless you are absolutely certain they will not cause the upgrade process to fail.

You can request that the Prerequisite Checks page generate fixup scripts for a limited number of check failures, including those related to kernel parameters and user limits. The fixup scripts must be run manually as the root user on each affected node.
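The installer reports where it generated the fixup scripts; the path shown below is typical for the 11.2.0.1 CVU, but it may differ on your system:

[root@london1]# /tmp/CVU_11.2.0.1.0_oracle/runfixup.sh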

When all prerequisite checks have been passed, the Summary page in Figure 15-5 will be displayed:

Figure 15-5. The Grid Infrastructure Installation summary

Running the Upgrade

Press Finish to terminate the interview process and start the Grid Infrastructure upgrade process. The Setup page will be displayed; this page shows the current progress through the setup. In this example, the following steps were performed:

  1. Installed the Grid Infrastructure for a Cluster:

    1. Prepared the existing database for the upgrade process.

    2. Copied files.

    3. Linked binaries.

    4. Set up files.

    5. Performed remote operations.

  2. Executed root scripts for Install Grid Infrastructure.

  3. Configured the Oracle Grid Infrastructure for a cluster, including the following items:

    1. The Oracle Net Configuration Assistant

    2. The Automatic Storage Management Configuration Assistant

    3. The Enterprise Manager Configuration Upgrade Utility

    4. The Oracle Cluster Verification Utility

After the Grid Infrastructure software is installed, you will see the Execute Configuration Scripts dialog box displayed (see Figure 15-6).

Figure 15-6. The Execute Configuration Scripts dialog

As the root user on each node in the cluster, run the rootupgrade.sh script. This script will upgrade the OCR and voting disk to the 11.2 format. It will also configure files such as /etc/inittab and the /etc/init.d scripts.

In our example, the output for the first node of the rootupgrade.sh script looked like this:

[root@london1]# /u01/app/11.2.0/grid/rootupgrade.sh
   Running Oracle 11g root.sh script...

   The following environment variables are set as:
       ORACLE_OWNER= oracle
       ORACLE_HOME=  /u01/app/11.2.0/grid

   Enter the full pathname of the local bin directory:   [/usr/local/bin]:
   The file "dbhome" already exists in /usr/local/bin.    Overwrite it? (y/n)
   [n]: y
      Copying dbhome to /usr/local/bin ...
   The file "oraenv" already exists in /usr/local/bin.    Overwrite it? (y/n)
   [n]: y
      Copying oraenv to /usr/local/bin ...
   The file "coraenv" already exists in /usr/local/bin.    Overwrite it? (y/n)
   [n]: y
      Copying coraenv to /usr/local/bin ...

   Entries will be added to the /etc/oratab file as needed by
   Database Configuration Assistant when a database is created
   Finished running generic part of root.sh script.
   Now product-specific root actions will be performed.
   2009-12-22 21:34:38: Parsing the host name
   2009-12-22 21:34:38: Checking for super user privileges
   2009-12-22 21:34:38: User has super user privileges
   Using configuration parameter file:   /u01/app/11.2.0/grid/crs/install/crsconfig_params
   Creating trace directory
   CSS appears healthy
   EVM appears healthy
   CRS appears healthy
   Shutting down Oracle Cluster Ready Services (CRS):
   Dec 22 21:35:22.012 | INF | daemon shutting down
   Stopping resources. This could take several minutes.
   Successfully stopped CRS resources.
   Stopping CSSD.
   Shutting down CSS daemon.
   Shutdown request successfully issued.
   Shutdown has begun. The daemons should exit soon.
   LOCAL ADD MODE
   Creating OCR keys for user 'root', privgrp 'root'..
   Operation successful.
     root wallet
     root wallet cert
     root cert export
     peer wallet
     profile reader wallet
     pa wallet
     peer wallet keys
     pa wallet keys
     peer cert request
     pa cert request
     peer cert
     pa cert
     peer root cert TP
     profile reader root cert TP
     pa root cert TP
     peer pa cert TP
     pa peer cert TP
     profile reader pa cert TP
     profile reader peer cert TP
     peer user cert
     pa user cert
   Adding daemon to inittab
   CRS-4123: Oracle High Availability Services has been started.
   ohasd is starting
   CRS-2672: Attempting to start 'ora.mdnsd' on 'london1'
   CRS-2676: Start of 'ora.mdnsd' on 'london1' succeeded
   CRS-2672: Attempting to start 'ora.gipcd' on 'london1'
   CRS-2676: Start of 'ora.gipcd' on 'london1' succeeded
   CRS-2672: Attempting to start 'ora.gpnpd' on 'london1'
   CRS-2676: Start of 'ora.gpnpd' on 'london1' succeeded
   CRS-2672: Attempting to start 'ora.cssdmonitor' on 'london1'
   CRS-2676: Start of 'ora.cssdmonitor' on 'london1' succeeded
   CRS-2672: Attempting to start 'ora.cssd' on 'london1'
   CRS-2672: Attempting to start 'ora.diskmon' on 'london1'
   CRS-2676: Start of 'ora.diskmon' on 'london1' succeeded
   CRS-2676: Start of 'ora.cssd' on 'london1' succeeded
   CRS-2672: Attempting to start 'ora.ctssd' on 'london1'
   CRS-2676: Start of 'ora.ctssd' on 'london1' succeeded
   CRS-2672: Attempting to start 'ora.crsd' on 'london1'
   CRS-2676: Start of 'ora.crsd' on 'london1' succeeded
   CRS-2672: Attempting to start 'ora.evmd' on 'london1'
   CRS-2676: Start of 'ora.evmd' on 'london1' succeeded
   clscfg: EXISTING configuration version 3 detected.
   clscfg: version 3 is 10G Release 2.
   Successfully accumulated necessary OCR keys.
   Creating OCR keys for user 'root', privgrp 'root'..
   Operation successful.

   london1     2009/12/22 21:40:32       /u01/app/11.2.0/grid/cdata/london1/backup_20091222_214032.olr
   Configure Oracle Grid Infrastructure for a Cluster ...   succeeded
   Updating inventory properties for clusterware
   Starting Oracle Universal Installer...

   Checking swap space: must be greater than 500 MB.   Actual 4095   MB    Passed
   The inventory pointer is located at /etc/oraInst.loc
   The inventory is located at /u01/app/oracle/oraInventory
   'UpdateNodeList' was successful.
   Starting Oracle Universal Installer...

   Checking swap space: must be greater than 500 MB.   Actual 4095   MB    Passed
   The inventory pointer is located at /etc/oraInst.loc
   The inventory is located at /u01/app/oracle/oraInventory
   'UpdateNodeList' was successful.

Output is similar on all nodes in the cluster. Although it is not specified in the Execute Configuration Scripts dialog box, we recommend that the rootupgrade.sh script be run sequentially, in node order, on each node in the cluster. When you have executed the root scripts on all nodes in the cluster, press OK in the Execute Configuration Scripts dialog box to return to the Setup page. A detailed log of each run is kept in $GRID_HOME/cfgtoollogs/crsconfig/.

The Setup page will execute the remaining configuration assistants, including the Net Configuration Assistant (NETCA), ASM Configuration Assistant (ASMCA), Enterprise Manager Upgrade Utility (if Enterprise Manager Data Control is configured), and finally, the Cluster Verification Utility (CVU).

Note

If the netca and asmca assistants fail at this stage, then you most likely had the environment variable ORA_CRS_HOME set, pointing to the 10.2 Clusterware home. In this case, skip the remaining assistants on the OUI page and execute configToolAllCommands in $GRID_HOME/cfgtoollogs/ after unsetting the ORA_CRS_HOME variable.

If any of the configuration assistants fails for a reason other than the one just noted, you are given the opportunity to resolve any issues and to retry the assistant. When all of the configuration assistants have completed successfully, the Finish page will be displayed. Press Close to terminate the installation session.

You should now have Oracle Clusterware and any ASM instances upgraded to Oracle 11.2. The databases will remain on their previous versions. In the next section, we will discuss the upgrade process for individual databases. If there are 10.2 database resources in the cluster, then Grid Infrastructure will operate in compatibility mode. Any node hosting a 10.2 resource will be pinned, in Oracle terminology. You can use the olsnodes -s -t command to check whether a node is pinned. You should change this setting only when all pre-11.2 resources have been upgraded. Note that operating with pinned nodes in the cluster places some restrictions on the options provided by Grid Infrastructure.
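The following sketch shows the kind of output to expect on a cluster that still hosts pre-11.2 resources (the output is illustrative):

[oracle@london1]$ olsnodes -s -t
london1 Active  Pinned
london2 Active  Pinned

Once all pre-11.2 resources have been upgraded, a node can be unpinned with crsctl unpin css -n <nodename>, run as the root user from the Grid Infrastructure home.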

Upgrading RAC Software

The next step in the upgrade process is to install RAC software. It is possible to combine this step and the database upgrade into a single operation, in which case the OUI will invoke the Database Upgrade Assistant (DBUA) immediately following installation of the RAC software. However, we recommend that each step be performed separately; this simplifies any error detection and resolution.

As when performing the Grid software upgrade, you should download the RAC software and unzip the archive file into the /home/oracle/stage/11gR2/database directory on the first node in the cluster before initiating the upgrade process itself.

Running the Installer

Like the Grid Infrastructure software installation, the RAC software installation is performed using the Oracle Universal Installer.

Before launching the installer, ensure that the ORACLE_HOME, ORACLE_SID, and ORA_CRS_HOME variables are unset. Next, log in as the oracle user on the first node, navigate to the staging area, and run the installer:

[oracle@london1]$ cd /home/oracle/stage/11gR2/database
[oracle@london1]$ ./runInstaller

The installer will take a couple of minutes to load the Java libraries before displaying the Security Updates page. As before, you can set the TMP and TEMP environment variables if you are short of space in the /tmp directory.

The Security Updates page allows you to provide an email address that will be used by Oracle Support to supply information about security issues. You can optionally elect to receive security updates through My Oracle Support, in which case you should also supply your My Oracle Support password. The security updates are optional. If this is a test installation, then it is highly likely that you will not want to configure these security updates. If you do not specify an email address and optional password, then you will receive a warning that you will not receive critical patch updates; however, the installation will be allowed to continue. Next, the RAC Installation Options page will be displayed (see Figure 15-7).

Figure 15-7. The RAC Installation Options page

On the Installation Options page, we recommend that you select the Install database software only option. This option installs the RAC software on all nodes in the cluster without upgrading any databases; the databases can then be upgraded separately. The remainder of the RAC software installation process for an upgrade is identical to the process described in Chapter 7 for a new installation. Even so, we will cover the remaining steps here as well, for the sake of completeness.

On the Node Selection page, ensure that all nodes in the cluster are included in the RAC software installation. If any nodes do not appear, then we recommend that you abort the installation, and then investigate and resolve the issue before restarting the installation.

The Node Selection page also allows you to configure SSH connectivity. This step should have been performed during the Grid Infrastructure software installation. However, the OUI will still test whether SSH connectivity is correctly configured before continuing to the next page.

The Product Languages page allows you to specify additional languages in which Oracle error messages can be generated, and the Database Edition page allows you to specify Enterprise Edition or Standard Edition. It is possible to change from Standard Edition to Enterprise Edition during the upgrade process; however, you should not attempt to change from Enterprise Edition to Standard Edition during the upgrade because this change can result in invalid or unusable objects in the data dictionary and/or database. If you wish to convert from Enterprise Edition to Standard Edition, you will need to export the database objects from the old Enterprise Edition database and import them into the new Standard Edition database.

The Installation Location page allows you to specify a location for the Oracle base directory and a location for the RAC Software Oracle home. In our example, the Oracle base directory defaulted to /u01/app/oracle, and the location of the RAC Software home defaulted to /u01/app/oracle/product/11.2.0/dbhome_1. Note that the RAC Software installation is an out-of-place upgrade; the software is installed into a new Oracle home and does not overwrite an existing Oracle home.

The Privileged Operating System Groups page allows you to specify operating system groups to be used by the Oracle Database Administrator (OSDBA) group and Database Operator (OSOPER) group. The default value for both groups is dba.

The next page is the Prerequisite Checks page. This page will perform all necessary checks prior to the RAC software installation. The checks are similar to those performed during the Grid Infrastructure installation; therefore, the checks should normally succeed for the RAC software installation.

When all prerequisite checks have been passed, the Summary page for the RAC installation will be displayed (see Figure 15-8).

Figure 15-8. The RAC Installation Summary page

Press Finish to terminate the interview process and start the RAC software installation process. The Setup page will be displayed, showing the current progress through the setup process.

Running Configuration Scripts

Towards the end of the installation, the installer will prompt you to run some shell scripts as the root user. At this point, the Execute Configuration Scripts dialog box will be displayed (see Figure 15-9).

Figure 15-9. The Execute Configuration Scripts dialog

As the root user, run the specified root.sh script on each node in the cluster. In this case, the order of execution is unimportant, and the scripts can be executed concurrently. The following shows a sample of the output generated when running this script on one node:

[root@london1]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
   Running Oracle 11g root.sh script...

   The following environment variables are set as:
       ORACLE_OWNER= oracle
       ORACLE_HOME=  /u01/app/oracle/product/11.2.0/dbhome_1

   Enter the full pathname of the local bin directory:   [/usr/local/bin]:
   The file "dbhome" already exists in /usr/local/bin.    Overwrite it? (y/n)
   [n]: y
      Copying dbhome to /usr/local/bin ...
   The file "oraenv" already exists in /usr/local/bin.    Overwrite it? (y/n)
   [n]: y
      Copying oraenv to /usr/local/bin ...
   The file "coraenv" already exists in /usr/local/bin.    Overwrite it? (y/n)
   [n]: y
      Copying coraenv to /usr/local/bin ...
   Entries will be added to the /etc/oratab file as needed by
   Database Configuration Assistant when a database is created
   Finished running generic part of root.sh script.
   Now product-specific root actions will be performed.
   Finished product-specific root actions.

When you have executed the root scripts on all nodes in the cluster, press OK in the Execute Configuration Scripts dialog box to return to the Setup page. When setup is complete, the Finish page will be displayed. Press Close to terminate the installation session.

Preparing for the Database Upgrade

The final step in the upgrade process is to upgrade the databases within the cluster. It is possible to run Oracle 11.2 Grid Infrastructure with earlier databases, so this upgrade does not need to be performed immediately. If you have more than one database in the cluster, then each database can be upgraded separately and at different times, if necessary.

Before you can upgrade the databases, you must work through a number of preparatory steps; these steps are described in the following sections. At this point, we assume that you have carried out sufficient testing to ensure that the application continues to work as expected, and within its SLAs, on the new release.

Identifying the Upgrade Path

The complexity of the upgrade process depends on the original database. Upgrading directly to Oracle 11.2 is supported for the following database versions:

  • 9.2.0.8

  • 10.1.0.5

  • 10.2.0.2 or later

  • 11.1.0.6 or later

In our experience, most RAC users are running Oracle 10.2 or later. At the time of writing, a few Oracle 9.2 RAC databases remained in production.

If the current database version is Oracle 7.3, Oracle 8.0, Oracle 8.1, or Oracle 9.0, then the database must be upgraded to an intermediate release, and then upgraded from that intermediate release to Oracle 11.2. Table 15-1 shows the upgrade paths for earlier Oracle versions.

Table 15-1. RAC upgrade paths

Initial Version      Terminal Version    Intermediate Version    Final Version
7.3.3 (or below)     7.3.4               9.2.0.8                 11.2
8.0.5 (or below)     8.0.6               9.2.0.8                 11.2
8.1.7 (or below)     8.1.7.4             10.2.0.4                11.2
9.0.1.3 (or below)   9.0.1.4             10.2.0.4                11.2

Determining the Upgrade Method

There are three commonly used upgrade methods:

  • The Database Upgrade Assistant (DBUA) method: This GUI tool automates the Oracle 11.2 upgrade process, completing all of the tasks that would normally be performed manually. In Oracle 11.2, DBUA fully supports RAC database upgrades. We recommend using DBUA for smaller databases.

  • The Manual Upgrade Method: This method uses SQL scripts and tools to perform the upgrade. The actual process has changed little since Oracle 10g, although, as you might expect, it becomes slightly more complex with each successive release. Most DBAs prefer to have more control over the upgrade process, so they generally favor manual upgrades. We recommend using the manual upgrade method for larger databases and business-critical databases because this allows you to test and tune the upgrade process more comprehensively.

  • The Export / Import Method: Upgrades can also be performed using the Data Pump export / import utility when the source database is Oracle 10.1 or later. Alternatively, you can use the original export / import utilities included with the earlier versions of your databases. In either case, a new target database must be created before the data is imported. The export / import method may be attractive for databases where the bulk of the large tables are partitioned using date ranges or where a large percentage of the data is read-only. Data Pump can also be parallelized, thereby achieving good transfer rates. Finally, Data Pump can transport data directly to another database using Oracle Net Services, thereby avoiding any intermediate disk I/O; this can be a useful technique if space is at a premium or I/O is a bottleneck for you (see the sketch following this list).
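As a sketch of the network mode just mentioned, a full Data Pump import can pull data directly from the source database over a database link; the connect string, credentials, link name, and parallel degree here are all assumptions:

[oracle@london1]$ impdp system/password@TEST FULL=y NETWORK_LINK=prod_link PARALLEL=4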

One of the main benefits of the export / import method is that it leaves the source database unchanged. Therefore, you can use this method to develop and test without affecting the original database. Both the DBUA and manual methods update the target database; to test those upgrade approaches, it is necessary to clone the database to a new location before attempting the upgrade.

Various other methods have been used at different sites to upgrade time-critical databases, including Oracle Streams, logical standby databases, Oracle GoldenGate, and third-party tools such as Quest SharePlex.

In the remainder of this chapter, we will cover how to upgrade two databases in our example cluster. The PROD database will be upgraded automatically using the DBUA, while the TEST database will be upgraded manually using SQL scripts.

Testing the Upgrade Process

Once the upgrade process has been developed, it is important to test both the process and the resulting database to ensure that the database has been correctly upgraded. You also need to make sure that the software continues to function as required and expected.

Also, we strongly recommend that you attempt to perform some stress testing on the upgraded database. We realize this is not always possible in the real world, but in our experience, failing to perform stress testing is the most likely cause of bad performance in newly upgraded databases.

There are a number of Oracle and third-party tools available for stress testing. Oracle introduced Real Application Testing (RAT) as a cost option in Oracle 11g Release 1. The RAT toolkit consists of the SQL Performance Analyzer (SPA) and Database Replay. SPA is aimed at measuring the performance of a SQL tuning set before and after a change to the database, while Database Replay has a broader focus. The remainder of this section will focus exclusively on Database Replay.

Database Replay consists of three parts:

  • Capturing the workload

  • Preprocessing the captured workload

  • Replaying the captured workload

When initially released, Database Replay could capture only 11.1 database workloads. Subsequent patch sets and one-off patches have addressed this, so you can now capture workloads on 10g and 9i Release 2 as well. This makes RAT an attractive option, but bear in mind that it does not capture the entire software stack; it captures only database workloads. Also, the capture process can add overhead to the production system. A system operating at its capacity limit should probably not be used to capture a representative workload; otherwise, you might encounter stability issues.

The capture process writes information into files in a directory object. Once the capture process has completed, the captured workload needs to be transferred to an 11g database for preprocessing. This process adds additional metadata to the captured files, making them suitable for replay.

The final step, which can be repeated as often as needed, is to replay the captured workload. Oracle clients now include a new utility called wrc (the workload replay client), which replays the captured workload. If your source system has dependencies on remote objects such as external tables, directories, or database links, then these need to be resolved first. You should also ensure that the test database you are replaying the workload against is restored or flashed back to the same SCN the production database was at when you started the capture. Oracle also recommends setting the test server's system time to match the production system's time at the start of the capture. Finally, if your application sends out information to customers in any way, shape, or form during the replay, you should ensure that none of this can happen!
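The following outline shows the main calls involved in the three parts; the directory path, capture and replay names, and credentials are assumptions, and only the minimal parameters are shown (see the Real Application Testing documentation for the full signatures):

-- On the production database: create a directory object and start the capture
SQL> CREATE DIRECTORY rat_capture AS '/u01/app/oracle/rat_capture';
SQL> EXEC DBMS_WORKLOAD_CAPTURE.START_CAPTURE(name => 'pre_upgrade_cap', dir => 'RAT_CAPTURE');
-- ... run the representative workload, then stop the capture ...
SQL> EXEC DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE;

-- On the upgraded test database: preprocess the transferred capture files
SQL> EXEC DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir => 'RAT_CAPTURE');

-- Initialize and prepare the replay
SQL> EXEC DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY(replay_name => 'pre_upgrade_rep', replay_dir => 'RAT_CAPTURE');
SQL> EXEC DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY;

-- From the operating system, start one or more replay clients, then begin the replay
[oracle@london1]$ wrc system/password mode=replay replaydir=/u01/app/oracle/rat_capture

SQL> EXEC DBMS_WORKLOAD_REPLAY.START_REPLAY;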

After the replay finishes, you are presented with a report that compares the captured and replayed workload. This report should serve as the cornerstone of your investigation to determine whether the upgrade is likely to encounter performance issues. It is also a good test for the SQL Plan Baselines you will create during the migration (see the "Recommended Tasks" section later in this chapter for more information).

Running the Pre-Upgrade Tool

The pre-upgrade tool must be run as part of the manual upgrade process. This tool is also run automatically by DBUA during the automatic upgrade process. However, Oracle recommends that you run the tool manually before initiating the automatic upgrade process, so you can preview its findings and address any issues before invoking DBUA.

The pre-upgrade tool is supplied as part of the new Oracle home. In our example, the pathname of the pre-upgrade tool looked like this:

/u01/app/oracle/product/11.2.0/dbhome_1/rdbms/admin/utlu112i.sql

We copied the utlu112i.sql script to the /tmp directory.

While the pre-upgrade tool is supplied as part of the Oracle 11.2 software, the tool must be run using the original Oracle environment, including the original ORACLE_HOME and PATH. In our example, we upgraded from Oracle 10.2.0.4 to Oracle 11.2; as part of this process, we ran the pre-upgrade tool as the oracle user:

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH

The ORACLE_SID should be set to an appropriate value, as in this example:

export ORACLE_SID=PROD1

Note that the pre-upgrade tool is a SQL*Plus script; it can be executed in this fashion:

$ sqlplus / as sysdba
SQL> SPOOL upgrade1.log
SQL> @utlu112i.sql
SQL> SPOOL OFF

In our example, the following output was written to the upgrade1.log file:

Oracle Database 11.2 Pre-Upgrade Information Tool 12-26-2009 13:16:11
   .
**********************************************************************
Database:
**********************************************************************
--> name:          PROD
--> version:       10.2.0.4.0
--> compatible:    10.2.0.3.0
--> blocksize:     8192
--> platform:      Linux IA (32-bit)
--> timezone file: V4
.
**********************************************************************
Tablespaces: [make adjustments in the current environment]
**********************************************************************
--> SYSTEM tablespace is adequate for the upgrade.
.... minimum required size: 731 MB
.... AUTOEXTEND additional space required: 251 MB
--> UNDOTBS1 tablespace is adequate for the upgrade.
.... minimum required size: 473 MB
.... AUTOEXTEND additional space required: 438 MB
--> SYSAUX tablespace is adequate for the upgrade.
.... minimum required size: 485 MB
.... AUTOEXTEND additional space required: 215 MB
--> TEMP tablespace is adequate for the upgrade.
.... minimum required size: 61 MB
.... AUTOEXTEND additional space required: 41 MB
.
**********************************************************************
Flashback: OFF
**********************************************************************
**********************************************************************
Update Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
WARNING: --> "java_pool_size" needs to be increased to at least 64 MB
.
**********************************************************************
Renamed Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
-- No renamed parameters found. No changes are required.
.
**********************************************************************
Obsolete/Deprecated Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
--> background_dump_dest         11.1       DEPRECATED     replaced by
"diagnostic_dest"
--> user_dump_dest               11.1       DEPRECATED     replaced by
"diagnostic_dest"
--> core_dump_dest               11.1       DEPRECATED     replaced by
"diagnostic_dest"
.
**********************************************************************
Components: [The following database components will be upgraded or installed]
**********************************************************************
--> Oracle Catalog Views         [upgrade]  VALID
--> Oracle Packages and Types    [upgrade]  VALID
--> JServer JAVA Virtual Machine [upgrade]  VALID
--> Oracle XDK for Java          [upgrade]  VALID
--> Real Application Clusters    [upgrade]  VALID
--> Oracle Workspace Manager     [upgrade]  VALID
--> OLAP Analytic Workspace      [upgrade]  VALID
--> OLAP Catalog                 [upgrade]  VALID
--> EM Repository                [upgrade]  VALID
--> Oracle Text                  [upgrade]  VALID
--> Oracle XML Database          [upgrade]  VALID
--> Oracle Java Packages         [upgrade]  VALID
--> Oracle interMedia            [upgrade]  VALID
--> Spatial                      [upgrade]  VALID
--> Data Mining                  [upgrade]  VALID
--> Expression Filter            [upgrade]  VALID
--> Rule Manager                 [upgrade]  VALID
--> Oracle OLAP API              [upgrade]  VALID
.
**********************************************************************
Miscellaneous Warnings
**********************************************************************
WARNING: --> The "cluster_database" parameter is   currently "TRUE"
.... and must be set to "FALSE" prior to running a   manual upgrade.
WARNING: --> Database is using a timezone file older than   version 11.
.... After the release migration, it is recommended that   DBMS_DST package
.... be used to upgrade the 10.2.0.4.0 database timezone   version
.... to the latest version which comes with the new release.
WARNING: --> Database contains schemas with stale optimizer   statistics.
.... Refer to the Upgrade Guide for instructions to update
.... schema statistics prior to upgrading the database.
.... Component Schemas with stale statistics:
....   SYS
....   SYSMAN
WARNING: --> Database contains INVALID objects prior to   upgrade.
.... The list of invalid SYS/SYSTEM objects was written to
.... registry$sys_inv_objs.
.... The list of non-SYS/SYSTEM objects was written to
.... registry$nonsys_inv_objs.
.... Use utluiobj.sql after the upgrade to identify any new   invalid
.... objects due to the upgrade.
.... USER PUBLIC has 1 INVALID objects.
.... USER SYS has 2 INVALID objects.
WARNING: --> Database contains schemas with objects   dependent on network
packages.
.... Refer to the Upgrade Guide for instructions to configure   Network ACLs.
WARNING: --> EM Database Control Repository exists in the   database.
.... Direct downgrade of EM Database Control is not supported.   Refer to the
.... Upgrade Guide for instructions to save the EM data prior   to upgrade.
WARNING:--> recycle bin in use.
.... Your recycle bin turned on.
.... It is REQUIRED
.... that the recycle bin is empty prior to upgrading
.... your database.
.... The command:  PURGE DBA_RECYCLEBIN
.... must be executed immediately prior to executing your   upgrade.
.
PL/SQL procedure successfully completed.

The pre-upgrade report includes the following sections:

  • Database: This section contains basic information about the database, including the database name, operating system, version, and block size. The report will include a warning if the COMPATIBLE parameter needs to be modified prior to the upgrade.

  • Logfiles: This optional section lists any redo log files that are smaller than 4MB. For the manual upgrade, any redo logs that are smaller than 4MB in size must be replaced by log files of at least 4MB prior to the upgrade. At the time of writing, most production sites typically use redo logs of between 100MB and 500MB.

  • Tablespaces: This section reports current and required sizes for the SYSTEM, SYSAUX, temporary, and undo tablespaces. For manual upgrades, tablespaces must be extended manually. DBUA will attempt to extend undersized tablespaces, but most DBAs will want to extend these tablespaces themselves to maintain control over the database and to reduce the possibility of errors. Remember: If you are upgrading an Oracle 9.2 database, a SYSAUX tablespace is mandatory in Oracle 10.1 and later.

  • Flashback: This section displays the current status of flashback logging. Even if you do not enable flashback logging during normal operations, you might consider using this feature during upgrades to allow the database to be quickly restored in the event of a failure.

  • Update Parameters: This section lists any parameter values that should be updated before the database is upgraded.

  • Deprecated Parameters: This section lists any parameters included in the current parameter file that are deprecated in Oracle 11.2. Parameters that have been deprecated between Oracle 10g and Oracle 11g include:

    • BACKGROUND_DUMP_DEST

    • CORE_DUMP_DEST

    • USER_DUMP_DEST

    All three deprecated parameters have been replaced by the new DIAGNOSTIC_DEST parameter, which was introduced in Oracle 11.1.

  • Components: This section lists all components that will be upgraded.

  • Miscellaneous Warnings: This section contains miscellaneous warnings. In the preceding example, the following warnings were included:

    • CLUSTER_DATABASE parameter: This parameter must be set to FALSE prior to the upgrade. Effectively, the upgrade can only be performed against a single-instance database, so RAC must be disabled for the duration of the upgrade and subsequently re-enabled. DBUA will automatically update this parameter.

    • Timezone file: If the database is using a timezone file older than version 11, then the DBMS_DST package should be used to upgrade the timezone file after the database upgrade has been completed.

    • Stale Optimizer Statistics: This check ensures that optimizer statistics have been collected for all objects in the SYS and SYSMAN schemas. In Oracle 10.1 and later, optimizer statistics can be collected for data dictionary objects using the GATHER_DICTIONARY_STATS procedure in the DBMS_STATS package (see the example following this list).

    • Invalid objects: The pre-upgrade utility will check for invalid objects in the database. A list of invalid objects in the SYS schema is stored in the REGISTRY$SYS_INV_OBJS table; a list of invalid objects owned by other users is stored in the REGISTRY$NONSYS_INV_OBJS table. In our tests, the only invalid objects in these tables were the DBA_REGISTRY_DATABASE view and its public synonym, and the DBMS_REGISTRY package. These objects can be ignored because they will be recompiled during the upgrade process.

    • Objects dependent on network packages: Oracle 11.2 supports fine-grained access control for the UTL_TCP, UTL_SMTP, UTL_MAIL, UTL_HTTP, and UTL_INADDR packages. To support fine-grained access control for these packages, Oracle XML DB must be installed. Before the upgrade, a list of current dependencies should be generated; following the upgrade, the DBMS_NETWORK_ACL_ADMIN package should be used to generate new access control lists for the affected objects.

    • Enterprise Manager Database Control: During the upgrade process, the Enterprise Manager Database Control repository will be automatically upgraded if it exists. However, if the database is subsequently downgraded, then the Database Control repository will not be automatically downgraded. Therefore, the repository should be backed up using the emdwgrd utility prior to the upgrade. You can find this utility in $ORACLE_HOME/bin.

    • Purge recycle bin: If the current database is Oracle 10.1 or later, then the recycle bin should be emptied immediately prior to the upgrade. Use this code to empty the recycle bin:

      SQL> PURGE DBA_RECYCLEBIN;
      
      DBA Recyclebin purged.
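For example, the stale dictionary statistics and invalid objects flagged above might be addressed before the upgrade as follows (a sketch; run as SYS from the original Oracle home):

SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
SQL> @?/rdbms/admin/utlrp.sql
SQL> SELECT owner, object_name, object_type
  2  FROM dba_objects WHERE status = 'INVALID';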

Performing Other Checks

The Oracle Database Upgrade Guide lists a number of other checks that should be performed prior to upgrading the database. Most checks are dependent on specific database features, as in these examples (the queries following this list can be used to verify several of them):

  • Complete any outstanding materialized view refreshes.

  • Check that no files require media recovery.

  • Take any files out of backup mode.

  • Clear out any outstanding distributed transactions.

  • Synchronize any standby databases with the primary database.
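The following queries, run as SYSDBA, can be used to check several of these conditions; empty result sets indicate that nothing is outstanding:

SQL> SELECT * FROM v$recover_file;                       -- files needing media recovery
SQL> SELECT * FROM v$backup WHERE status = 'ACTIVE';     -- files still in backup mode
SQL> SELECT local_tran_id, state FROM dba_2pc_pending;   -- distributed transactions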

Saving Current Database Parameters

Default values for both supported and unsupported parameters can vary between releases. Changes in parameter values can affect application performance. To reduce the time required to resolve performance issues caused by parameter changes, we recommend that you save the current database parameter values before upgrading the database. The saved values can subsequently be compared with the new parameter values to determine whether any have changed.

We recommend that you save both the supported and unsupported parameters that are externalized in the X$KSPPI and X$KSPPSV tables; because the X$ tables are visible only to SYS, run the script while connected as SYSDBA. The following script saves all parameter values to a text file called parameters.txt:

SET PAGESIZE 0
SET TRIMSPOOL ON
SET FEEDBACK OFF
SPOOL parameters.txt

SELECT i.ksppinm||';'||sv.ksppstvl
FROM x$ksppi i, x$ksppsv sv
WHERE i.indx = sv.indx
ORDER BY 1;

SPOOL OFF
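A typical workflow is to run the script once before and once after the upgrade, and then compare the two output files; the file names here are illustrative:

[oracle@london1]$ mv parameters.txt parameters_10204.txt
# ... upgrade the database, then rerun the script from the 11.2 home ...
[oracle@london1]$ mv parameters.txt parameters_1120.txt
[oracle@london1]$ diff parameters_10204.txt parameters_1120.txt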

Backing Up the Database

Prior to running DBUA or starting the manual upgrade process, you should ensure that you have a full or incremental backup of the database available. This will come in handy if the upgrade fails and it becomes necessary to restore the database to its original state. If you do not already back up the database regularly (for example, if it is a test database), then you can back it up using RMAN.

The Oracle Database Upgrade Guide recommends a simple RMAN command that will perform a full backup. Unfortunately, in the versions we have seen, the syntax for the BACKUP CURRENT CONTROLFILE command is incorrect. The example that follows shows how to use the command with a valid syntax.

First, create a target directory for the backup:

[root@server14 ~]# mkdir /backup
[root@server14 ~]# chown oracle:dba /backup

Second, set the environment variables to the old environment:

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH

Third, set the ORACLE_SID to an appropriate value:

export ORACLE_SID=PROD1

Fourth, start the RMAN client:

[oracle@london1]$ rman target=/ nocatalog

Fifth, execute an RMAN command similar to the following:

RUN
{
   ALLOCATE CHANNEL c1 TYPE DISK;
   BACKUP DATABASE FORMAT '/backup/%U' TAG pre_upgrade PLUS ARCHIVELOG;
   BACKUP CURRENT CONTROLFILE FORMAT '/backup/control.bak';
}

Note that the preceding example allocates only one channel; you will need to add more channels, depending on the capability of your hardware.

Configuring the Listener Process

If the listener process is currently Oracle 9i or Oracle 10g, then it should be upgraded at this stage. The simplest way to upgrade the listener is to delete and re-create it using the NETCA utility. NETCA is the preferred tool for configuring listener processes in a RAC environment because it updates the OCR.

In our example, we had already upgraded the Grid Infrastructure to Oracle 11.2. This procedure also upgraded the listener processes to the new version. Therefore, it was not necessary to modify the listener processes in our example. Remember that the listener configuration is more complex in Oracle 11.2, which includes SCAN listeners (referenced by the REMOTE_LISTENER database parameter) and local listeners (referenced by the LOCAL_LISTENER database parameter).

Upgrading Automatically with DBUA

We recommend using the Database Upgrade Assistant (DBUA), which is a GUI tool, to upgrade smaller, non-critical databases automatically. In this section, we'll review an upgrade of the PROD database in our example cluster.

If you are upgrading a RAC database, then you need to ensure that the CLUSTER_DATABASE parameter is initially set to TRUE. DBUA will automatically update this parameter during the upgrade process.

We also recommend that the database you want to upgrade be running before DBUA is invoked. This ensures that the database has been started using the correct parameter files. Remember that DBUA will restart the database during the upgrade process using the default parameter files, so these need to be correctly configured.

The DBUA is a GUI tool that should be started in an X session (see Chapter 7 for more details on the different options for running X sessions in a Linux environment). The DBUA can be run by the oracle user or a user in the DBA operating system group.

The Oracle environment must be set to include the new Oracle 11.2 home, as in this example:

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH

Also, the ORACLE_SID environment variable must be set to the local instance name:

export ORACLE_SID=PROD1

Use this command to start the DBUA:

$ dbua

Firing up the DBUA will display the Welcome page (see Figure 15-10).

Figure 15-10. The Upgrade Assistant's Welcome page

Press Next to continue to the Select Database page (see Figure 15-11).

Figure 15-11. The Select Database page

On the Select Database page, you select the database you want to upgrade. In our example, we selected the PROD database. Our selection led to the warnings shown in Figure 15-12.

Figure 15-12. The DBUA upgrade warnings

These warnings were generated by the pre-upgrade report. In our example, we were satisfied that all issues identified by the warnings had already been resolved or would be addressed after the upgrade, so we chose to continue with the upgrade.

Press Yes to continue to the first Upgrade Options page (see Figure 15-13).

Figure 15-13. The DBUA's first Upgrade Options page

On the first Upgrade Options page, you can specify that invalid objects should be recompiled at the end of the upgrade. To reduce overall downtime, objects can be recompiled in parallel. The DBUA will recommend a degree of parallelism based on the number of CPUs detected. This value may need to be reduced if, for example, other production databases are currently running in the cluster.

DBUA can also disable the archive log mode for the duration of the upgrade process. This reduces the amount of disk space required for archived redo logs during the upgrade, and it may reduce the duration of the upgrade. If you choose to disable archiving, however, you should ensure that you make a full backup of the database immediately after the upgrade. This is normally not an option when standby databases are in place. If you do not choose to disable archiving, then the DBUA will display a warning that the upgrade will take longer and require more disk space. You should ensure that the local log archive destination has sufficient space before continuing.

Press Next to continue to the second Upgrade Options page (see Figure 15-14).

Figure 15-14. The DBUA's second Upgrade Options page

On the second Upgrade Options page, you can specify a new size and location for the Flash Recovery Area. Remember that the upgrade will reinstall and recompile all PL/SQL and Java objects, which can generate a large amount of redo.

In this example, the database is being upgraded from Oracle 10g to Oracle 11g. Therefore, the administrative and trace files will be moved from the Oracle 10g locations below the $ORACLE_HOME/admin/PROD directory to the new diagnostics area. In this case, the new area is created below /u01/app/oracle.

Press Next to continue to the upgrade's Summary page (see Figure 15-15).

Figure 15-15. The upgrade's Summary page

Press Finish on the Summary page to initiate the upgrade. This brings up the Progress page, which displays the current progress of the upgrade. The upgrade is divided into a number of tasks. In our example, the following tasks were shown:

  • Performing the pre-Upgrade

  • Upgrading Oracle Server

  • Upgrading JServer JAVA Virtual Machine

  • Upgrading Oracle XDK for Java

  • Upgrading Real Application Clusters

  • Upgrading Oracle Workspace Manager

  • Upgrading OLAP Analytic Workspace

  • Upgrading OLAP Catalog

  • Upgrading EM Repository

  • Upgrading Oracle Text

  • Upgrading Oracle XML Database

  • Upgrading Java Packages

  • Upgrading Oracle interMedia

  • Upgrading Spatial

  • Upgrading Data Mining

  • Upgrading Expression Filter

  • Upgrading Rule Manager

  • Upgrading Oracle OLAP API

  • Performing Post Upgrade

  • Configuring Database with Enterprise Manager

  • Generating Summary

When the upgrade is complete, the Upgrade Results page will be displayed (see Figure 15-16). This page summarizes the actions performed by DBUA during the upgrade. It also allows you to unlock and modify any passwords that have been affected by the upgrade.

Figure 15-16. The Upgrade Results page

At this point, the DBUA upgrade is complete. However, some additional post-upgrade steps remain; we'll cover these later in this chapter. First, however, we'll explain how to upgrade a database manually using SQL scripts.

Upgrading a Database Manually

We recommend that you use SQL scripts to upgrade larger and business-critical databases manually. In this section, we will walk you through the process we used to upgrade the TEST database in our example cluster from Oracle 10.2 to Oracle 11.2.

In previous versions of Oracle, executing the pre-upgrade information script (utlu112i.sql when upgrading to 11.2) was optional. Beginning with version 11.2, however, executing it is mandatory; otherwise, the catupgrd.sql script will fail with an error. Refer to this chapter's earlier "Running the pre-Upgrade Tool" section for more information about executing this script against the non-upgraded database.

The Oracle environment must be set to include the new Oracle 11.2 home, as in this example:

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH

Also, the ORACLE_SID environment variable must be set to the local instance name:

export ORACLE_SID=TEST1

As mentioned previously, we did not need to upgrade the Oracle 10.2 listener process because new SCAN listeners and local listeners were created during the Oracle 11.2 Grid Infrastructure installation. However, the REMOTE_LISTENER and LOCAL_LISTENER database parameters reference net service names that must resolve in the new Oracle 11.2 RAC home, so we added the following entries to the tnsnames.ora file in that home ($ORACLE_HOME/network/admin/tnsnames.ora):

LISTENERS_TEST =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster1-scan.example.com)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = london1-vip.example.com)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = london2-vip.example.com)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = london3-vip.example.com)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = london4-vip.example.com)(PORT = 1521))
  )

LISTENER_TEST1 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = london1-vip.example.com)(PORT = 1521))

LISTENER_TEST2 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = london2-vip.example.com)(PORT = 1521))

LISTENER_TEST3 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = london3-vip.example.com)(PORT = 1521))

LISTENER_TEST4 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = london4-vip.example.com)(PORT = 1521))

TEST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster1-scan.example.com)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = london1-vip.example.com)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = london2-vip.example.com)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = london3-vip.example.com)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = london4-vip.example.com)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = TEST)
    )
  )

TEST1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = london1-vip.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = TEST)
      (INSTANCE_NAME = TEST1)
    )
  )

TEST2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = london2-vip.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = TEST)
      (INSTANCE_NAME = TEST2)
    )
  )

TEST3 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = london3-vip.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = TEST)
      (INSTANCE_NAME = TEST3)
    )
  )

TEST4 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = london4-vip.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = TEST)
      (INSTANCE_NAME = TEST4)
    )
  )

Preparing the Parameter Files

If you are currently using an SPFILE, it should be converted to a text file for the duration of the upgrade. You can verify whether your database is using an SPFILE by running the following SQL*Plus command, which returns the pathname of the SPFILE if there is one in use:

SQL> SHOW PARAMETER SPFILE

Our test database was created in Oracle 10.2 using DBCA with an SPFILE. The location of the SPFILE is stored in the $ORACLE_HOME/dbs/init<instance_name>.ora file on each node in the cluster. For example, our $ORACLE_HOME/dbs/initTEST1.ora file contains the following information:

SPFILE='+DATA/TEST/spfileTEST.ora'

In the preceding example, the SPFILE is stored in the +DATA ASM diskgroup. Next, we converted the SPFILE to a text file using the following command:

SQL> CREATE PFILE='/tmp/init.ora' FROM SPFILE;

File created.

The resulting file was stored in the /tmp directory to avoid overwriting any files in the $ORACLE_HOME/dbs directory. It is only necessary to perform this conversion on one node in the cluster.

If $ORACLE_HOME/dbs/init<instance_name>.ora files exist in the old Oracle home, then copy these as the oracle user to the same directory in the new Oracle home on each node in the cluster, as in this example:

[oracle@london1]$ cp /u01/app/oracle/product/10.2.0/db_1/dbs/initTEST1.ora \
> /u01/app/oracle/product/11.2.0/dbhome_1/dbs/

Preparing Password Files

Next, copy the password file as the oracle user on each node in the cluster from the dbs directory in the old Oracle home to the dbs directory in the new Oracle home. The password file name has the format of orapw<instance_name>, as you can see in this example:

[oracle@london1]$ cp /u01/app/oracle/product/10.2.0/db_1/dbs/orapwTEST1 \
> /u01/app/oracle/product/11.2.0/dbhome_1/dbs/

Modifying the Initialization Parameters

You need to ensure that the CLUSTER_DATABASE parameter is set to FALSE. In our example, we modified the temporary parameter file in /tmp/init.ora:

cluster_database = FALSE

Also, if you are upgrading an Oracle 10.2 or earlier database, you need to comment out the BACKGROUND_DUMP_DEST, CORE_DUMP_DEST, and USER_DUMP_DEST parameters in the temporary parameter file, because these parameters are deprecated in Oracle 11.1 and later in favor of DIAGNOSTIC_DEST.
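
For example, the relevant lines of our temporary parameter file looked similar to the following after editing; the commented-out paths are illustrative:

cluster_database = FALSE
# deprecated in Oracle 11.1 and later; superseded by DIAGNOSTIC_DEST
# background_dump_dest = /u01/app/oracle/admin/TEST/bdump
# core_dump_dest       = /u01/app/oracle/admin/TEST/cdump
# user_dump_dest       = /u01/app/oracle/admin/TEST/udump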

Restarting the Database in UPGRADE Mode

Next, shut down the database using this syntax:

SQL> SHUTDOWN IMMEDIATE

Do not use the SHUTDOWN ABORT command because this would require instance recovery when the database is restarted. Next, restart the database in UPGRADE mode and use the PFILE parameter to specify the location of the temporary parameter file, as in this example:

SQL> startup upgrade pfile='/tmp/init.ora';
ORACLE instance started.

Total System Global Area  602619904 bytes
Fixed Size                  1338168 bytes
Variable Size             163579080 bytes
Database Buffers          432013312 bytes
Redo Buffers                5689344 bytes
Database mounted.

Running the Catalog Upgrade Script

The catalog upgrade script will upgrade most objects in the data dictionary and associated Oracle schemas. We typically run this script from the /tmp directory and spool the output to a local file, as in this example:

[oracle@london1]$ sqlplus / as sysdba
SQL> SPOOL upgrade2.log
SQL> @?/rdbms/admin/catupgrd.sql
.....
SQL> SPOOL OFF

The $ORACLE_HOME/rdbms/admin/catupgrd.sql script can take several hours to run, depending on the CPU clock speed, amount of available memory, I/O bandwidth, and the number of components that must be upgraded.

In the preceding example, output is spooled to the /tmp/upgrade2.log file. This log file can become very large (it contained more than 67 million bytes in our example!). The Oracle documentation suggests that you inspect this file for errors. We recommend that you use the grep utility to search for errors. Use the -B and -A arguments to include a couple of lines of context around any errors that the utility discovers.
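
For example, the following searches the spool file for common Oracle error prefixes, printing two lines of context on either side; adjust the pattern and file name to suit your environment:

[oracle@london1]$ grep -B2 -A2 -E 'ORA-|PLS-' /tmp/upgrade2.log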

We also recommend that you monitor the alert log of the database being upgraded. In our case, the text alert log was written to this file:

/u01/app/oracle/diag/rdbms/test/TEST1/trace/alert_TEST1.log

During one of our tests, we did not disable archive redo logging and subsequently ran out of archive redo log space. The catalog upgrade script simply hung. It can be quite difficult to detect when this happens because the script's output is very repetitive. However, an error was written to the alert log, and we were able to release space for new archive log files by deleting some old files from the affected diskgroup using the rm command in the ASMCMD utility. If you have dependent standby databases, back up the archived log files before removing them. When sufficient space becomes available, the upgrade script continues to execute without further intervention.
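
For illustration only, an ASMCMD session to free archive log space might look like the following; the diskgroup, directory, and file names are placeholders, and the environment must be set to the Grid Infrastructure home with ORACLE_SID pointing to the local ASM instance (+ASM1 in our case):

[oracle@london1]$ asmcmd
ASMCMD> cd +RECO/TEST/ARCHIVELOG/2009_12_26
ASMCMD> rm thread_2_seq_87.268.705851265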

When the upgrade script completes successfully, restart the database:

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP PFILE=/tmp/init.ora

Configuring SPFILE

The next step is to recreate SPFILE with the new parameter settings. In our example, we used the following command:

SQL> CREATE SPFILE='+DATA/TEST/spfileTEST.ora'
FROM PFILE='/tmp/init.ora';

File created.

Next, you restart the database again to ensure that the new parameter file has been successfully created:

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP

Running the post-Upgrade Status Tool

Your next step is to run the post-upgrade status tool. This tool is implemented using the following SQL*Plus script:

$ORACLE_HOME/rdbms/admin/utlu112s.sql

Again, we executed this script from the /tmp directory:

[oracle@london1]$ sqlplus / as sysdba
SQL> SPOOL upgrade3.log
SQL> @$ORACLE_HOME/rdbms/admin/utlu112s.sql
.....
SQL> SPOOL OFF

For our example, this script generated the following output:

Oracle Database 11.2 Post-Upgrade Status Tool             12-26-2009 19:06:17
.
Component                                Status         Version    HH:MM:SS
.
Oracle Server
.                                         VALID        11.2.0.1.0  00:32:55
JServer JAVA Virtual Machine
.                                         VALID        11.2.0.1.0  00:11:04
Oracle Real Application Clusters
.                                         VALID        11.2.0.1.0  00:00:02
Oracle Workspace Manager
.                                         VALID        11.2.0.1.0  00:01:24
OLAP Analytic Workspace
.                                         VALID        11.2.0.1.0  00:01:12
OLAP Catalog
.                                         VALID        11.2.0.1.0  00:02:01
Oracle OLAP API
.                                         VALID        11.2.0.1.0  00:01:03
Oracle Enterprise Manager
.                                         VALID        11.2.0.1.0  00:19:10
Oracle XDK
.                                         VALID        11.2.0.1.0  00:03:43
Oracle Text
.                                         VALID        11.2.0.1.0  00:01:45
Oracle XML Database
.                                         VALID        11.2.0.1.0  00:07:21
Oracle Database Java Packages
.                                         VALID        11.2.0.1.0  00:00:54
Oracle Multimedia
.                                         VALID        11.2.0.1.0  02:30:31
Spatial
.                                         VALID        11.2.0.1.0  00:13:22
Oracle Expression Filter
.                                         VALID        11.2.0.1.0  00:00:25
Oracle Rules Manager
.                                         VALID        11.2.0.1.0  00:00:36
Gathering Statistics
.                                                                  00:11:22
Total Upgrade Time: 04:19:05

PL/SQL procedure successfully completed.

In the preceding example, the Oracle Multimedia upgrade took two and a half hours. Much of this time was due to the aforementioned issue where we ran out of archive log space.

Running post-Upgrade Scripts

At this point, you're ready to run the post-upgrade actions, which are contained in the catuppst.sql script located in the $ORACLE_HOME/rdbms/admin directory. Connect to the database as SYSDBA using SQL*Plus and execute the script:

[oracle@london1]$ sqlplus / as sysdba
SQL> SPOOL upgrade4.log
SQL> @?/rdbms/admin/catuppst.sql
.....
SQL> SPOOL OFF

In our example, the preceding script generated the following output (note that we have abridged and reformatted this output slightly to save space and make it easier to read):

TIMESTAMP
-----------------------------------------------------------------------
COMP_TIMESTAMP POSTUP_BGN 2009-12-26 19:07:49


PL/SQL procedure successfully completed.

This script will migrate the Baseline data on a pre-11g database
to the 11g database.

Move BL Data SYS.WRH$_FILESTATXS
Move BL Data SYS.WRH$_SQLSTAT
Move BL Data SYS.WRH$_SYSTEM_EVENT
Move BL Data SYS.WRH$_WAITSTAT
Move BL Data SYS.WRH$_LATCH
Move BL Data SYS.WRH$_LATCH_CHILDREN
Move BL Data SYS.WRH$_LATCH_PARENT
Move BL Data SYS.WRH$_LATCH_MISSES_SUMMARY
Move BL Data SYS.WRH$_DB_CACHE_ADVICE
Move BL Data SYS.WRH$_ROWCACHE_SUMMARY
Move BL Data SYS.WRH$_SGASTAT
Move BL Data SYS.WRH$_SYSSTAT
Move BL Data SYS.WRH$_PARAMETER
Move BL Data SYS.WRH$_SEG_STAT
Move BL Data SYS.WRH$_DLM_MISC
Move BL Data SYS.WRH$_SERVICE_STAT
Move BL Data SYS.WRH$_TABLESPACE_STAT
Move BL Data SYS.WRH$_OSSTAT
Move BL Data SYS.WRH$_SYS_TIME_MODEL
Move BL Data SYS.WRH$_SERVICE_WAIT_CLASS
Move BL Data SYS.WRH$_INST_CACHE_TRANSFER
Move BL Data SYS.WRH$_ACTIVE_SESSION_HISTORY
...                                       ...
... Completed Moving the Baseline Data    ...
...                                       ...
... If there are no Move BL Data messages ...
... above, then there are no renamed      ...
... baseline tables in the system.        ...
...                                       ...
Drop Renamed Baseline Table SYS.WRH$_FILESTATXS_BR
Drop Renamed Baseline Table SYS.WRH$_SQLSTAT_BR
Drop Renamed Baseline Table SYS.WRH$_SYSTEM_EVENT_BR
Drop Renamed Baseline Table SYS.WRH$_WAITSTAT_BR
Drop Renamed Baseline Table SYS.WRH$_LATCH_BR
Drop Renamed Baseline Table SYS.WRH$_LATCH_CHILDREN_BR
Drop Renamed Baseline Table SYS.WRH$_LATCH_PARENT_BR
Drop Renamed Baseline Table SYS.WRH$_LATCH_MISSES_SUMMARY_BR
Drop Renamed Baseline Table SYS.WRH$_DB_CACHE_ADVICE_BR
Drop Renamed Baseline Table SYS.WRH$_ROWCACHE_SUMMARY_BR
Drop Renamed Baseline Table SYS.WRH$_SGASTAT_BR
Drop Renamed Baseline Table SYS.WRH$_SYSSTAT_BR
Drop Renamed Baseline Table SYS.WRH$_PARAMETER_BR
Drop Renamed Baseline Table SYS.WRH$_SEG_STAT_BR
Drop Renamed Baseline Table SYS.WRH$_DLM_MISC_BR
Drop Renamed Baseline Table SYS.WRH$_SERVICE_STAT_BR
Drop Renamed Baseline Table SYS.WRH$_TABLESPACE_STAT_BR
Drop Renamed Baseline Table SYS.WRH$_OSSTAT_BR
Drop Renamed Baseline Table SYS.WRH$_SYS_TIME_MODEL_BR
Drop Renamed Baseline Table SYS.WRH$_SERVICE_WAIT_CLASS_BR
Drop Renamed Baseline Table SYS.WRH$_INST_CACHE_TRANSFER_BR
Drop Renamed Baseline Table SYS.WRH$_ACTIVE_SESSION_HISTORY_BR
...                                       ...
... Completed the Dropping of the         ...
... Renamed Baseline Tables               ...
...                                       ...
... If there are no Drop Table messages   ...
... above, then there are no renamed      ...
... baseline tables in the system.        ...
...                                       ...

PL/SQL procedure successfully completed.

<Output edited here>

TIMESTAMP
-----------------------------------------------------------------------
COMP_TIMESTAMP POSTUP_END 2009-12-26 19:08:00

Recompiling Invalid Packages

The next step is to use the utlrp.sql script in the $ORACLE_HOME/rdbms/admin directory to recompile any invalid packages:

[oracle@london1]$ sqlplus / as sysdba
SQL> SPOOL upgrade5.log
SQL> @?/rdbms/admin/utlrp.sql
SQL> SPOOL OFF

In our example, the preceding script generated the following output:

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2009-12-26 19:09:30

DOC>   The following PL/SQL block invokes UTL_RECOMP to   recompile invalid
DOC>   objects in the database. Recompilation time is proportional to the
DOC>   number of invalid objects in the database, so this command may take
DOC>   a long time to execute on a database with a large number of invalid
DOC>   objects.
DOC>
DOC>   Use the following queries to track recompilation progress:
DOC>
DOC>   1. Query returning the number of invalid objects remaining. This
DOC>      number should decrease with time.
DOC>         SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
DOC>
DOC>   2. Query returning the number of objects compiled so far. This number
DOC>      should increase with time.
DOC>         SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;
DOC>
DOC>   This script automatically chooses serial or parallel recompilation
DOC>   based on the number of CPUs available (parameter cpu_count) multiplied
DOC>   by the number of threads per CPU (parameter parallel_threads_per_cpu).
DOC>   On RAC, this number is added across all RAC nodes.
DOC>
DOC>   UTL_RECOMP uses DBMS_SCHEDULER to create jobs for parallel
DOC>   recompilation. Jobs are created without instance affinity so that they
DOC>   can migrate across RAC nodes. Use the following queries to verify
DOC>   whether UTL_RECOMP jobs are being created and run correctly:
DOC>
DOC>   1. Query showing jobs created by UTL_RECOMP
DOC>         SELECT job_name FROM dba_scheduler_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>
DOC>   2. Query showing UTL_RECOMP jobs that are running
DOC>         SELECT job_name FROM dba_scheduler_running_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>#
PL/SQL procedure successfully completed.


TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END  2009-12-26 19:12:54

DOC> The following query reports the number of objects that have compiled
DOC> with errors (objects that compile with errors have status set to 3 in
DOC> obj$). If the number is higher than expected, please examine the error
DOC> messages reported with each object (using SHOW ERRORS) to see if they
DOC> point to system misconfiguration or resource constraints that must be
DOC> fixed before attempting to recompile these objects.
DOC>#

OBJECTS WITH ERRORS
-------------------
                   0

DOC> The following query reports the number of errors caught during
DOC> recompilation. If this number is non-zero, please query the error
DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors
DOC> are due to misconfiguration or resource constraints that must be
DOC> fixed before objects can compile successfully.
DOC>#

ERRORS DURING RECOMPILATION
---------------------------
                           0


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.

SQL> spool off

You can use the following query to verify manually that all objects are valid:

SQL> SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';

  COUNT(*)
----------
         0
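
If the count is not zero, a breakdown by owner and object type helps to pinpoint where the problems lie:

SELECT owner, object_type, COUNT(*)
FROM dba_objects
WHERE status = 'INVALID'
GROUP BY owner, object_type;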

You can also use the post-upgrade invalid objects tool to identify objects that became invalid during the upgrade, as in this example:

SQL> @?/rdbms/admin/utluiobj.sql
.
Oracle Database 11.1 Post-Upgrade Invalid Objects Tool   12-26-2009 19:22:36
.
This tool lists post-upgrade invalid objects that were not   invalid
prior to upgrade (it ignores pre-existing pre-upgrade invalid   objects).
.
Owner                     Object Name                       Object Type
.

PL/SQL procedure successfully completed.

Note

The post-upgrade invalid objects tool ignores objects that were invalid before the upgrade.

Updating /etc/oratab

DBUA automatically updates /etc/oratab. However, when using the manual upgrade method, it is necessary to update /etc/oratab by hand.

In our example, the original entry for the TEST database looked like this:

TEST:/u01/app/oracle/product/10.2.0/db_1:N

We changed this entry so that it referenced the new Oracle home:

TEST:/u01/app/oracle/product/11.2.0/dbhome_1:N

Updating Environment Variables

If you do not use the .oraenv family of scripts to configure your environment, then you may also wish to update the $ORACLE_HOME environment variable in /home/oracle/.bash_profile. The original file looked like this in our example:

export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1

The revised file looks like this:

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1

Updating the Oracle Cluster Registry

After the upgrade, the Oracle Cluster Registry will still contain some incorrect entries for the database, such as the location of the Oracle home.

In theory, the OCR can be updated using the SRVCONFIG utility. In early versions of Oracle 11.2, however, we have found the SRVCONFIG command to be susceptible to bugs and errors in the OCR. Therefore, we recommend that you delete and re-create the entries for the upgraded database in the OCR.

The database should be removed from the OCR using the SRVCTL utility in the old Oracle home:

[oracle@london1]$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
[oracle@london1]$ export PATH=$ORACLE_HOME/bin:$PATH

Use the following command to delete any existing entries for the database; note that the -d option specifies the database name:

[oracle@london1]$ srvctl remove database -d TEST

You should use the SRVCTL utility to add the upgraded database and instances to the OCR in the new Oracle home:

[oracle@london1]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
[oracle@london1]$ export PATH=$ORACLE_HOME/bin:$PATH

Next, you need to add the new database:

[oracle@london1]$ srvctl add database -d TEST -o $ORACLE_HOME -p '+DATA/TEST/spfileTEST.ora' \
> -y AUTOMATIC -a "DATA,RECO"

In the preceding example, the -d option specifies the database name, while the -o option specifies the new Oracle home. The -p option specifies the path to the SPFILE, and -y indicates that the database should be started automatically when Clusterware starts. The final option, -a, lists the diskgroups the database depends on. If you specify the SPFILE location with the -p option, then you do not need to create the $ORACLE_HOME/dbs/init$ORACLE_SID.ora file; the agent will create it for you automatically.

Use this code to add the new instances:

[oracle@london1]$ srvctl add instance -d TEST -i TEST1 -n london1
[oracle@london1]$ srvctl add instance -d TEST -i TEST2 -n london2
[oracle@london1]$ srvctl add instance -d TEST -i TEST3 -n london3
[oracle@london1]$ srvctl add instance -d TEST -i TEST4 -n london4

In the preceding example, the -d option specifies the database name, the -i option specifies the instance name, and the -n option specifies the node name.
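
Before proceeding, it is worth confirming the new OCR entries. The following command, executed from the 11.2 home, prints the registered configuration, including the Oracle home, SPFILE location, and diskgroup dependencies:

[oracle@london1]$ srvctl config database -d TEST -a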

Setting the Initialization Parameters for the New Release

Before touching any other parameter, you should convert the database back to a RAC database. At this point, the database is still running as a single-instance database. To reconfigure the database to use all nodes, set the CLUSTER_DATABASE parameter back to TRUE:

SQL> ALTER SYSTEM SET cluster_database = TRUE SCOPE = SPFILE;

System altered.

Next, shut down the local instance:

SQL> SHUTDOWN IMMEDIATE

Now restart all instances in the database using the SRVCTL utility:

[oracle@london1]$ srvctl start database -d TEST

Finally, verify that all nodes have started:

[oracle@london1]$ srvctl status database -d TEST
Instance TEST1 is running on node london1
Instance TEST2 is running on node london2
Instance TEST3 is running on node london3
Instance TEST4 is running on node london4

If you have problems starting the remaining instances, you should double-check that the password files exist and that the init$ORACLE_SID.ora files point to the SPFILE in ASM.

The COMPATIBLE initialization parameter has a special role in Oracle. We recommend leaving it at its pre-11.2 value for a transition period; this keeps open the option of downgrading the database without having to restore a backup and losing data. We also strongly recommend taking a full level 0 backup of the database before raising this parameter to 11.2.0. Once the database starts with the new compatibility level, internal data structures are changed, which makes a downgrade impossible from that point forward. If you have physical standby databases in place, ensure that you use the same COMPATIBLE setting for all databases in the Data Guard configuration.
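
When you are eventually ready to raise the compatibility level, and only after taking the level 0 backup, the change looks like this sketch; the parameter can only be changed in the SPFILE, and all instances must be restarted afterwards:

SQL> ALTER SYSTEM SET compatible = '11.2.0' SCOPE = SPFILE SID = '*';

[oracle@london1]$ srvctl stop database -d TEST
[oracle@london1]$ srvctl start database -d TEST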

And with this, you have completed the manual upgrade process.

Performing the Necessary post-Upgrade Steps

A number of post-upgrade steps should be performed. The first step is to back up the database. Once you've obtained the security of a backup, you can move forward with the following mandatory and optional tasks.

Completing Mandatory post-Upgrade Tasks

In this section, we'll outline the mandatory post-upgrade steps. You do not need to execute them in the order listed; however, you should review each one and then execute those that apply to your environment:

  • Upgrade the timezone file version if the existing timezone file is earlier than version 11.

  • Statistics tables created by a call to DBMS_STATS.CREATE_STAT_TABLE, which are typically used to transfer or back up optimizer statistics, need to be upgraded (DBMS_STATS.UPGRADE_STAT_TABLE performs this task).

  • Upgrade the RMAN recovery catalog schema if this is used for RMAN backups.

  • If the source database was either Oracle 9.2 or 10.1, you need to upgrade SSL users that use external authentication.

  • Users of the Oracle Text option need to install the supplied knowledge base from the companion products.

  • If your database makes use of the Application Express (APEX) option and the APEX version is lower than 3.2, APEX will automatically be upgraded to 3.2. Subsequently, you will have to run a few more configuration steps.

  • Additional security measures were introduced in version 11.1 for network-related packages such as UTL_TCP, UTL_SMTP, and others. You need to define specific access criteria to allow developers and code to make use of them (see the sketch following this list).

  • If your system makes use of the Database Vault, you need to enable it.

The Oracle Database Upgrade Guide describes all these steps in detail.
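
As an illustration of the network access control step mentioned in the list, the following sketch grants a hypothetical user APP_USER permission to connect to a single SMTP host through UTL_SMTP; the ACL file name, principal, and host are placeholders, not Oracle defaults:

begin
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL (
    acl         => 'mail_access.xml',        -- placeholder ACL name
    description => 'UTL_SMTP access for APP_USER',
    principal   => 'APP_USER',               -- placeholder database user
    is_grant    => TRUE,
    privilege   => 'connect');
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL (
    acl        => 'mail_access.xml',
    host       => 'smtp.example.com',        -- placeholder mail server
    lower_port => 25,
    upper_port => 25);
  commit;
end;
/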

Performing the Recommended Tasks

The Oracle Database Upgrade Guide also recommends a number of tasks that can be performed after the database has been upgraded. For example, it recommends updating passwords to enforce case sensitivity and configuring thresholds for server alerts. These steps tend to be database-specific, so we recommend that you consult the Guide for the tasks relevant to your environment. One concept we would like to draw your attention to is plan stability. In the days of the rule-based optimizer, performance was predictable and the rules were simple. With the current cost-based optimizer, things are a little different, and ensuring stable execution plans in the new version of the database is important. One way to achieve this is to use stored outlines to capture the most important statements in the non-11.2 database, and then transform them into entities managed by SQL Plan Management.

The first step is to identify the SQL statements that are important to your business. You should take your time and discover these together with your key business users. Once you determine what these are, you can capture them as outlines. This can be a lengthy process, especially for systems not developed in-house, and some systems resist such tuning efforts entirely because they rely heavily on dynamic SQL, which cannot be captured in stored outlines. Once identified, the essential SQL statements can be found in the GV$SQL view. The DBMS_OUTLN package was extended in Oracle 10g to allow administrators to capture SQL statements by hash value and child number. For each essential SQL statement, retrieve its hash value and child number from GV$SQL, as in this example:

SELECT inst_id,
  sql_text,
  sql_id,
  hash_value,
  plan_hash_value,
  child_number
FROM gv$sql
WHERE ...

Next, you can create an outline for the SQL statement:

begin
 DBMS_OUTLN.CREATE_OUTLINE (
   hash_value    => hashValue,      -- replace with HASH_VALUE from GV$SQL
   child_number  => childNumber,    -- replace with CHILD_NUMBER from GV$SQL
   category      => 'MIGRATION');
end;
/

You supply the hash value and child number returned by the previous query. The MIGRATION category is only a suggestion; you can use any other outline category you like. However, to make migrating the captured outlines easier, you should group them all under the same category. All of the important statements should be captured in outlines before the migration. The difficulty with outlines is that, as soon as the hash value of a statement changes, the outline will no longer be used.
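
You can check which outlines have been captured so far by querying the DBA_OUTLINES view; for example:

SQL> SELECT name, category, used FROM dba_outlines WHERE category = 'MIGRATION';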

After the database has been migrated to 11.2, you should try converting the outlines to SQL Plan Baselines, ideally in a migration test database first:

SQL> set serveroutput on
SQL> set long 1000000 trimspool on linesize 120
SQL> declare
  2   migrationReport clob;
  3  begin
  4   migrationReport := DBMS_SPM.MIGRATE_STORED_OUTLINE(
  5     attribute_name => 'CATEGORY',
  6     attribute_value => 'MIGRATION');
  7  dbms_output.put_line(migrationReport);
  8  end;
  9  /

before migrate:
after migrate:

-------------------------------------------------------------------------
            Migrate Stored Outline to SQL Plan Baseline Report
-------------------------------------------------------------------------

Summary:
--------
Total outlines to be migrated: 175
Outlines migrated successfully: 175


PL/SQL procedure successfully completed.

In the preceding example, we instructed Oracle to migrate all outlines of the MIGRATION category to SQL Plan Baselines. The migrationReport variable returned by DBMS_SPM.MIGRATE_STORED_OUTLINE contains a detailed report about the migration status, including information about any failed migrations.

The DBA_SQL_PLAN_BASELINES view should now list all the new SQL Plan Baselines that were previously stored outlines. To use them, set the optimizer_use_sql_plan_baselines parameter to TRUE, either per session or through the ALTER SYSTEM command.
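
For example, the following enables the use of baselines for all instances (note that the parameter already defaults to TRUE in Oracle 11.2):

SQL> ALTER SYSTEM SET optimizer_use_sql_plan_baselines = TRUE SID = '*';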

When a baseline has been used, the LAST_EXECUTED field in DBA_SQL_PLAN_BASELINES will be updated. You should also see a value in GV$SQL.SQL_PLAN_BASELINE for statements using the baseline.
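
A query along these lines shows the migrated baselines and whether they have been executed:

SQL> SELECT sql_handle, plan_name, enabled, accepted, last_executed FROM dba_sql_plan_baselines;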

From a security point of view, it is good practice to make passwords case sensitive. When set to TRUE, the new initialization parameter sec_case_sensitive_logon enforces case-sensitive passwords. Only newly created accounts benefit from this straight away; existing accounts do not. For existing accounts, you need to prompt the user to change the password or, alternatively, execute an ALTER USER command. The new PASSWORD_VERSIONS column in DBA_USERS indicates whether a password is case sensitive.
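
For example, resetting a password makes the account case sensitive, which you can verify afterwards in DBA_USERS; the user name and password shown are placeholders:

SQL> ALTER USER app_user IDENTIFIED BY "Secret2010";
SQL> SELECT username, password_versions FROM dba_users WHERE username = 'APP_USER';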

The command to create the password file has been changed to take an additional parameter: ignorecase. This parameter can be set to either n or y.
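
A minimal invocation looks like the following; the password shown is a placeholder, and setting ignorecase=n keeps the password case sensitive:

[oracle@london1]$ orapwd file=$ORACLE_HOME/dbs/orapwTEST1 password=Secret2010 ignorecase=n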

You should review any operating system jobs with hard-coded environment variables, because these jobs would fail to connect now that the database runs from a different ORACLE_HOME than before. If you changed passwords, you should also ensure that any automatically started job can still connect to the database. Database links are another area that needs special attention; for example, the tnsnames.ora file in the new Oracle home sometimes does not contain the entries required to connect to the remote systems.

If you are using the database console, you should upgrade it to the new Oracle home. From the new Oracle home, execute the emca -upgrade db -cluster command and follow the on-screen instructions to provide the information necessary to connect to the database.

Note

If you chose not to upgrade the database to 11.2, then you do not need to upgrade the database console.

Resolving Problems in Mixed-Database Environments

As we noted in our introduction, Oracle 11.2 fully supports continuing to use non-11.2 databases on a cluster that has been upgraded to 11.2. However, you should be aware that the Oracle 11.2.0.1 base release contained a number of problems that could affect users with a 10.2 database.

If you did not choose to remove and re-add the database resource into the OCR after upgrading Clusterware, then you might run into bug 9257105 after removing the now unneeded 10.2 Clusterware home from your machine. Although the OCR has been upgraded successfully, the action script parameter still references a file in the 10.2 Clusterware home. You can change this easily with the crsctl command, as in the following example:

[oracle@london1 ~]$ crsctl modify resource ora.PROD.db \
> -attr "ACTION_SCRIPT=$GRID_HOME/bin/racgwrap"

In this case, you need to replace the database name (PROD) with your own database name. Note that this is one of the very rare occasions where you use the crsctl command instead of srvctl to modify an Oracle resource. The command should be executed with the environment set to the 11.2 Grid Infrastructure home.

Another common problem with brand-new installations of Grid Infrastructure 11.2 and pre-11.2 RDBMS instances is related to node pinning. During an upgrade, the nodes running 10.2 RDBMS software are pinned automatically, allowing pre-11.2 databases to run on them. Nodes in a freshly installed cluster, by contrast, are not pinned automatically, which causes problems with emca and dbca when they are executed from pre-11.2 Oracle homes.
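
If you run into this problem, you can pin nodes manually with crsctl. Run the command as root from the Grid Infrastructure home, substituting your own node names:

[root@london1]# crsctl pin css -n london1 london2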

The golden rule to remember here is to use the utilities in $GRID_HOME to manage resources provided by Grid Infrastructure; similarly, manage pre-11.2 resources with the commands from their respective homes.

You can find more information about problems with 10.2 databases and 11.2 Grid Infrastructure in My Oracle Support note 948456.1.

Using a Swing Kit

Where possible, we recommend using a swing kit to perform the upgrade. This allows you to install Grid Infrastructure rather than upgrade it, eliminating many potential points of failure. If Grid Infrastructure is installed and patched to match your standards from the outset, a time-consuming task becomes greatly simplified.

One option is to use a physical standby database created on the new cluster; Oracle 10.2 and 11.1 RDBMS homes are fully supported on Grid Infrastructure. To create the standby database, all you need to do is create an identical home (in terms of patches) on the new hardware, in addition to the 11.2 RDBMS home you've already created. The standby database initially runs out of the original RDBMS home; on the day of the migration, a switchover first makes it the primary database, and from there you follow the standard steps to migrate it to 11.2. Figure 15-17 shows the process graphically; note that the figure omits additional standby databases for the sake of simplicity and clarity.

Figure 15-17. An alternative upgrade strategy employing Data Guard switchover

This strategy can be very desirable at sites with many cluster nodes, because executing the rootupgrade.sh script on every node can take a while to finish. If you want to keep the old hardware, you should migrate the old cluster to Grid Infrastructure 11.2 at your convenience. At that point, you can install the new RDBMS home, mount the database under the new 11.2 home, and wait for it to synchronize with the primary database.

Summary

This chapter explained different strategies for migrating a pre-11.2 cluster to the latest release of the Oracle software. Most sites we have seen remain on Oracle 10.2.0.4, which is a very stable release. However, Premier Support for Oracle 10.2.0.4 comes to an end in mid-2011, so many sites should begin thinking more seriously about a migration strategy. Although extended (and lifetime) support is available to customers wishing to remain on their current release, it comes at an extra cost.

Depending on their budget and other organizational concerns, some sites will perform an upgrade of the clusterware stack on the existing hardware. This is a proven approach, but it also has its problems, especially if you encounter OCR corruption.

Where possible, we recommend using a new cluster when you install Grid Infrastructure. We also recommend creating a fresh OCR and voting disks. You should then create a physical standby database for the database to be migrated, using exactly the same patch level that you use in your production database. On the migration day, you perform a switchover, start the database in the new home, and upgrade it.

Regardless of which method you choose, you need to migrate the database to 11.2 at some point. Oracle offers a GUI tool, the DBUA, to take care of this task. We think it is suitable for smaller databases; for larger or business-critical databases, where fine control over the migration process is important, the manual upgrade is more suitable.

With the rule-based optimizer deprecated, DBAs now have to do their best to guarantee plan stability. Using stored outlines that you then transform into SQL Plan Management baselines can be a solution to this old problem.
