CHAPTER 12


Upgrading to Oracle 12c

The final chapter of this book deals with the upgrade to Oracle 12c. It is unlikely that your Database as a Service (DBaaS) platform or consolidation project starts on a green field, but if it does: congratulations! You probably waited just long enough to get started with the right technology for the task. This chapter, just like the rest of the book, is written with Pluggable Databases (PDBs) and consolidation in mind. However, since PDBs are a new feature, you cannot migrate a database from a previous release straight into a PDB using the traditional upgrade path described here. There are of course other methods available to transfer your schemas from a pre-12c database into a PDB.

To convert your complete pre-12c database into a PDB you need to migrate the existing database to a 12c non-Container Database (CDB) first. In the next step you can use the techniques described in this chapter to convert the database to a PDB inside an existing CDB. Note that you need to be on the same platform for this to work; converting databases from one platform to another is a different problem.

Although the migration from a previous version to the current release using the Database Upgrade Assistant or the manual approach seems the most applicable use case, you can also use the Data Pump utilities expdp and impdp to migrate databases or subsets of a database. Other means of transportation include Transportable Tablespaces and replication technologies. This chapter gives you an overview of migrating from Oracle 11g Release 2 to Oracle 12c Release 1. Oracle 11g Release 2 was the only non-12c release at the time of writing that enjoyed Premier Support from Oracle and hence was the obvious candidate to be covered in this chapter.

The upgrade path to Oracle database 12c

As with previous Oracle releases you need to pay close attention to the upgrade prerequisites. One of the most important prerequisites is the database version: only certain releases support a direct upgrade. The distinction is important since the upgrade options for Oracle Restart are slightly different and are covered in a later section of this chapter. The minimum database releases for a direct upgrade are listed in Figure 12-1 shown below.

9781430244288_Fig12-01.jpg

Figure 12-1. Supported releases for a direct upgrade

Any release lower than the ones listed requires an intermediate upgrade before going to 12.1, or you can pursue an alternative strategy of some sort, such as recreating the database and moving the data from old to new. If you need another argument for staying reasonably current with your production environment, here is one: if you stayed on a release prior to, say, 10.2.0.5 for too long, you have to perform one extra step before going to 12c, which extends the time window required for the migration. The requirements however are moderate, and the application of a database patch set is not too much work, only time consuming. The functional risk is really all the regression testing, which for most applications is still not automated. Too large a “gap,” however, might make it even more difficult to initiate a migration project, as the hurdles may be perceived to be too high.

Another important decision to be made is whether or not you want to perform a hardware refresh at the same time. Most companies use their hardware for a specific amount of time, and a migration project is often taken as an opportunity to get newer, faster hardware. Chapters 3 and 4 described hardware and software choices available for Oracle 12c in more detail.

If you go with a new server, you could install Oracle Restart and all other software homes in their respective versions on the target server, and perform any configuration without the time pressure you would certainly experience when performing the installation on the same host. Remember that you cannot install the 12c database binaries and continue to use the previous release’s Oracle Restart at the same time. It has always been the case that Oracle Restart (and Clusterware for that matter) had to be of a version equal to or higher than that of the Oracle database. But how would you get the database across to the new server? If you are staying within the restrictions set by Data Guard, then it is the easiest technology to use for the hardware refresh/migration. Consider the following scenario:

As part of the migration you could perform a Data Guard switchover to the standby database on the new hardware. Emphasis should be put on the fact that the switchover does not change the Oracle version yet, but it does get you on newer, better hardware! Although you are using Oracle Restart 12.1 on the target hardware, you have an 11.2 RDBMS home as well; this is the home to use for the Data Guard configuration. The initial setup is shown in Figure 12-2 below.

9781430244288_Fig12-02.jpg

Figure 12-2. Upgrade using a Data Guard switchover

After the successful switchover you can upgrade the database just as you would without the Data Guard setup in the next step. This strategy gives you lots of freedom, and your back-out plan can be as elaborate as you like. One possibility is to use an Oracle 12c RDBMS home on the legacy hardware as shown in Figure 12-3 for a physical standby.

9781430244288_Fig12-03.jpg

Figure 12-3. Backout strategy with Data Guard in place

Using that approach you could downgrade the database to the release you migrated off, and even switch back to the old hardware if that were required. Additionally you get the benefit of being able to implement separation of duties when installing the software on a different machine. An upgrade of Oracle Restart on the same server does not easily allow you to change the owner of the binaries: if you installed both Oracle Restart 11.2 and the Oracle database as the oracle account, you will tend to keep this configuration for version 12c.

In this chapter it is assumed that you are upgrading the database on the same host. Also note that the example assumes you are running a release which is in Premier Support; at the time of this writing only Oracle 11.2 was in that support state. And since it is always good to be on the latest patch set to avoid the inevitable “apply the latest patchset” reply in a service request, this is the case as well.

Before you upgrade production...

Performing a major release upgrade is anything but trivial, depending on the application. The complexity increases with a hardware refresh and a potentially associated platform change. The old rule that well-written, well-designed applications are easier to migrate still applies today. Complex applications written under a lot of time pressure are the most difficult to migrate. Hard-coded and undocumented connection strings buried deep within the application logic are only the tip of the iceberg! You really need to work with all stakeholders, technical and functional, of your current product to ensure a smooth transition to Oracle 12c. This type of project definitely requires sufficient time for testing the application against the next database release. In the author’s opinion there are two main areas to focus on during the testing period:

  • Functional. Does the application perform as expected from a technical point of view? Does your regression test produce the same output on screen and in reports that you had previously? You are in a lucky position if your regression testing is automated, including the comparison of the application’s output. If everything is the same: great! If not, you need to spend time investigating why. A prominent although somewhat older example includes the change from 9i to 10g when result sets were no longer implicitly ordered if the select statement did not have an “order by” clause. This is exactly the type of problem you want to catch before going live!
  • Technical. Does the latest Oracle release produce the same results as the previous one? Did execution plans change? Do your important queries finish within the allowed time and Service Level Agreement? If not, why? Do you get any contention in the critical Oracle memory areas?

Both of the above-mentioned areas are equally important, although the database administrator is more likely to be able to help with the second bullet point. For reasons outlined in Chapter 1 it can be very difficult for an operational DBA to be reasonably familiar with the database he or she is about to upgrade. Changes from a technical point of view are more easily visible to the technical staff: differences in execution plans, response times, and general behavior between versions can be detected using the Automatic Workload Repository and other monitoring solutions. What is more difficult is to work out whether the application shows regressions under the new database release.

Note  Ideally you are NOT changing the application code when migrating to a new release! Otherwise, how would you be able to attribute a problem post-migration to the new database release or the new application release?

To address bullet point number two, more forward-looking planning is necessary. Ideally an application support team has a set of end-to-end testing systems allowing (fully?) automated regression testing of the application. If the application features a graphical user interface, then macro-recording software could be used to replay key transactions. Every Key Performance Indicator (KPI) of your software, especially if the KPI is under a Service Level Agreement, must be tested repeatedly and in as automated a fashion as possible to prevent deviations due to human error. And let’s face it: repeatedly testing the same processes is not the most entertaining activity in the world.

This section’s nugget is that thorough testing using automated procedures is the best way to prevent unpleasant surprises when going live in production with the next database release. This has been true for any major release change, not only Oracle 12c. If automated testing is not available, then the project manager should at least identify key processes that the application team can test after the test migrations. In case of performance degradation these users should contact the database team for further investigation. This could involve tracing specific end-user processes if possible, or other “business as usual” troubleshooting processes. In an ideal world a migration to the next database release should not cause unexpected problems. In reality though, all testing is limited by the ingenuity of the people involved, and sometimes it simply is not possible to plan for all eventualities. The more time you can spend on the migration, the more likely it is that you capture problems beforehand. But please do not let the migration intimidate you too much: staying on a particular release for too long makes upgrades happen too late and appear even more daunting.

Upgrading Oracle Restart on the same server

If you are already using Oracle Restart in a version supporting a direct upgrade, the first step in the migration of your database is to upgrade Clusterware. Since Oracle 11.2.0.2, all patch sets to Grid Infrastructure, clustered and unclustered, require an out-of-place upgrade. This means that you will need an additional, separate Grid home for the base release and each subsequent major patch set. Oracle mandates the out-of-place upgrade to make it easier to revert to a previous version of the software. After sufficient testing has ensured that your environment is stable you can of course delete the old home, after a complete file system backup. As with every operation, the removal of the old, now unused, Grid home must be tested first on an identical environment before performing this task in production, in addition to the file system backup you have taken! If possible you should draft a plan to create storage snapshots of the LUNs containing the Oracle binaries (all of them, not only the Grid home!), and if the storage admin is in a good mood, try to get the same service for your database LUNs as well. If storage snapshots are not possible, then you must ensure alternative means of file system and database backups before starting any work. Snapshots based on the Logical Volume Manager (LVM) have proven to be useful in the world of Engineered Systems, and they might equally be useful for “regular” Oracle builds as well. You should perform a backup of the Oracle Local Registry manually, just in case, before initiating the file system backup:

[root@server1 ∼]# ocrconfig -local -manualbackup
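The LVM snapshot idea mentioned above could look like the following sketch. The volume group and logical volume names (vg_oracle, lv_u01) and the snapshot size are assumptions; substitute the names used at your site and make sure the volume group has enough free extents for the snapshot.

```shell
# Sketch only: vg_oracle/lv_u01 are assumed names for the volume group
# and logical volume hosting /u01 with the Oracle software homes.

# Create a 10 GiB copy-on-write snapshot before touching the binaries
lvcreate --snapshot --size 10G --name lv_u01_pre12c /dev/vg_oracle/lv_u01

# Check the snapshot and keep an eye on its allocated space
lvs vg_oracle

# Roll back by mounting the snapshot and copying files back, or simply
# drop it once the upgrade has been signed off
lvremove /dev/vg_oracle/lv_u01_pre12c
```

Remember that an LVM snapshot only protects the file systems it covers; it complements, but does not replace, your database backups.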

Although the database is only indirectly affected, you must have a backup of every single database using the ASM instance as well. Do not rely solely on the v$asm_client view for a list of databases making use of the ASM instance: if a database is stopped it will not be listed there!

Other places to look for hints of databases using the ASM instance are your company’s configuration management database, parameter or server parameter files, the output of “srvctl config database” and database log directories in $ORACLE_BASE. Please do not continue until you have made sure that every component that could break has been backed up in a way that makes it possible to restore it.
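Cross-checking these sources can be scripted. The following sketch assumes the Linux default location of the oratab file and an ADR base under /u01/app/oracle; adjust both to your environment.

```shell
# None of these sources alone is authoritative; compare all of them.

# 1. Databases registered with Oracle Restart
srvctl config database

# 2. Entries in the oratab file (stopped databases are listed here too)
grep -v '^#' /etc/oratab | cut -d: -f1

# 3. ASM clients currently connected (misses stopped databases!)
asmcmd lsct

# 4. Diagnostic directories often reveal databases long forgotten
ls /u01/app/oracle/diag/rdbms 2>/dev/null
```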

Remember that all databases in an Oracle Restart environment are dependent on the ASM instance. In other words: you cannot patch the environment without incurring downtime. There is no way other than to stop the databases during the ASM upgrade unless you are using Real Application Clusters!

The upgrade path for Oracle Restart

As with the database, there are strict rules about which releases can be upgraded directly. The releases eligible for a direct migration are listed in the Grid Infrastructure Installation Guide for your platform. In the case of Linux, the following versions are mentioned:

  • 10.1.0.5
  • 10.2.0.3
  • 11.1.0.6
  • 11.2

In reality you should not be concerned with any version other than 11.2 unless you are in Extended Support with Oracle. Bear in mind that there are additional restrictions for almost all of these releases, such as mandatory patches to be applied. In practice it makes a lot of sense to be on the latest available patch set, ideally the terminal release, for your version. For Oracle 11.2, which should be the main starting point for the migration, the following additional restriction has been identified: if you have applied the first patch set to 11g Release 2 and are on 11.2.0.2.0, you must apply at least PSU 3. It seems more sensible, however, to apply the latest PSU for 11.2.0.2, or better still to be on 11.2.0.3, which was the most current 11g Release 2 patch set at the time of writing. When this book goes to print, 11.2.0.4, the terminal release for 11g Release 2, will be out.

Note  The requirements for upgrades might change while this book is out. Refer to the Grid Infrastructure Installation Guide for your platform, and to Appendix B in that guide for a complete list of current requirements.

Unsetting environment variables

If you have made your life easier by setting certain environment variables such as ORACLE_HOME, then please ensure they are unset before starting Oracle Universal Installer. Candidates for unsetting are:

  • ORACLE_HOME
  • ORA_CRS_HOME
  • ORACLE_SID
  • LD_LIBRARY_PATH
  • LD_PRELOAD
  • ORACLE_BASE

Also ensure that none of these environment variables is defined in /etc/profile, /etc/profile.local, or your ∼/.profile, ∼/.bash_profile, or ∼/.bashrc files. Commands could be launched in shell sessions outside of yours, inheriting the settings from the aforementioned files. Any variable found in a configuration file should be commented out or deleted, as it could cause problems later.
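In a Bourne-style shell the cleanup could look like this; unset is harmless if a variable was never set in the first place.

```shell
# Unset Oracle-related variables in the current session before
# starting Oracle Universal Installer
unset ORACLE_HOME ORA_CRS_HOME ORACLE_SID ORACLE_BASE
unset LD_LIBRARY_PATH LD_PRELOAD

# Verify that nothing Oracle-related is left in the environment
env | grep -E '^(ORACLE_|ORA_CRS_)' || echo "environment is clean"
```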

Performing the Upgrade

The next step involves invoking Oracle Universal Installer from your staging location. You read in Chapter 1 that this could be a central build server with vetted binaries, or alternatively a temporary location where you downloaded the software from the web. In either case, unzip the software first before continuing. Do not place the zip files for the 11.2 and 12.1 software in the same directory: they both create a subdirectory “grid”, and you do not want to mix and match versions.

The requirements for the installation of Oracle Restart have already been detailed in previous chapters and will not be repeated here. If you would like a refresher, please refer to Chapter 6 for more information about the prerequisites.

Caution  Even though you are performing an out-of-place upgrade of Oracle Restart, it is still a good idea to have a file system backup of the mount point(s) where all your Oracle software is installed. Better safe than sorry.

With the prerequisites met it is time to start Oracle Universal Installer (OUI) from the command line. Please ensure that you select the radio button “Upgrade Grid Infrastructure or Oracle Automatic Storage Management” as shown in Figure 12-4.

9781430244288_Fig12-04.jpg

Figure 12-4. Selecting the upgrade installation path

All other screens of the installation are not very different from a “clean” installation of the software on a new server. When you initiate the upgrade, the following tasks are executed on your behalf:

  • Preparing the dependent objects for the upgrade.
  • Copying files to the new Oracle Restart home.
  • Linking the binaries.
  • Updating the repository.

The interesting events start to unfold when you are prompted to execute the upgrade script, which is shown in Figure 12-5:

9781430244288_Fig12-05.jpg

Figure 12-5. The prompt for the execution of the rootupgrade.sh script

As always with Oracle upgrades, the execution of the root script is where it gets exciting. Either execute the command yourself, or have the system administrator execute it for you should you not have root access. Whichever is the case, you should double-check that no environment variables point to any Oracle home, neither the old nor the new one. This could potentially confuse the upgrade scripts. A sample execution of the script is shown here. Note that existing files in the local bin directory (/usr/local/bin on Linux) have been overwritten with the 12c versions. Some sites customize these files (dbhome, oraenv, and coraenv), in which case you need to preserve the originals first or choose not to overwrite them.

[root@server1 ∼]# /u01/app/oracle/product/12.1.0.1/grid/rootupgrade.sh
Performing root user operation for Oracle 12c
 
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/12.1.0.1/grid
 
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin ...
 
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/oracle/product/12.1.0.1/grid/crs/install/crsconfig_params
 
ASM Configuration upgraded successfully.
 
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node server1 successfully pinned.
2013/08/04 19:25:36 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
 
2013/08/04 19:29:04 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
 
server1     2013/08/04 19:32:45     /u01/app/oracle/product/12.1.0.1/grid/cdata/server1/backup_20130804_193245.olr
 
server1     2013/08/04 17:42:24     /u01/app/oracle/product/11.2.0.3/grid/cdata/server1/backup_20130804_174224.olr
2013/08/04 19:33:34 CLSRSC-327: Successfully configured Oracle Grid Infrastructure
                                for a Standalone Server
[root@server1 ∼]#

The all-important information is in the last line: the result of the operation was successful. If you run into problems it is useful to raise a Service Request with Oracle Support straight away, in addition to your own troubleshooting efforts. It might take a little while for Oracle Support to develop notes similar to the troubleshooting notes for rootupgrade.sh with version 11.2. In the meantime you should begin your investigation in the 12.1 Grid home. You will find a directory named $GRID_HOME/cfgtoollogs/crsconfig/. Inside this directory you should have a file named roothas_timestamp.log. This log file is unusually detailed and useful; have a look inside to find out where things went wrong. Additionally you can consult the files in $GRID_HOME/log/hostname/. Another useful starting point is the alerthostname.log file, which points you to the correct log file in most situations.
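A quick way to triage a failed run is to scan those logs for error markers. The Grid home path below follows the examples in this chapter, and the exact log file names may differ on your system.

```shell
# Assumed path of the new Grid home from the examples in this chapter
GRID_HOME=/u01/app/oracle/product/12.1.0.1/grid

# List the configuration logs, newest last
ls -ltr $GRID_HOME/cfgtoollogs/crsconfig/

# Scan the rootupgrade transcript for obvious failure markers
grep -inE 'error|fail|clsrsc' \
    $GRID_HOME/cfgtoollogs/crsconfig/roothas_*.log | tail -50
```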

After you have assured yourself that the installation has been successful, close the “execute configuration scripts” window to proceed with the last remaining tasks. After those have completed and you see the final screen, click the “Close” button to exit Oracle Universal Installer. But how do you know that ASM has been upgraded? The easiest way is to connect to the ASM instance and query v$version:

SQL> select banner from v$version;
 
BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
PL/SQL Release 12.1.0.1.0 - Production
CORE    12.1.0.1.0      Production
TNS for Linux: Version 12.1.0.1.0 - Production
NLSRTL Version 12.1.0.1.0 - Production

But if you carefully checked the output of the rootupgrade.sh script you will have noticed the following line:

ASM Configuration upgraded successfully.

This should take away any doubts. If you want more information about the migration process you can check the roothas_timestamp.log file in $GRID_HOME/cfgtoollogs/crsconfig/. With the Oracle Restart migration completed successfully you can either leave it at that and complete your current change, or alternatively proceed with the database migration.

Performing additional post-migration validation

Your ASM installation has now been upgraded to version 12c. You should review any customizations you performed in the $GRID_HOME/network/admin directory, and copy any wallets and files modified by you into the new home. At the same time it might be advisable to check the expiry dates of your wallets to prevent them from becoming a show-stopper during the migration weekend. Additional things to check for are:

  • Are the entries in the tnsnames.ora file correct?
  • Is the (optional) sqlnet.ora file present, and with the correct contents? Are the paths referenced still valid?
  • Are all the static registrations for the listener still correct? Remember the listener will be started from the 12c Grid Home.
  • Have you used any wallets previously which need to be taken care of?
  • Are there custom resources that need adapting?
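Much of this checklist can be covered by a simple comparison of the old and new homes. The two home paths below are taken from the examples in this chapter; adjust them to your installation.

```shell
# Assumed locations of the old and new Grid homes
OLD_HOME=/u01/app/oracle/product/11.2.0.3/grid
NEW_HOME=/u01/app/oracle/product/12.1.0.1/grid

# Files present in only one home, or differing between the two
diff -rq $OLD_HOME/network/admin $NEW_HOME/network/admin

# Review static registrations; the listener now runs from the 12c home
grep -iA3 'SID_LIST' $NEW_HOME/network/admin/listener.ora
```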

You should also check that all the required components of the Oracle Restart stack are present. Is there a database missing? A minimal output for Oracle Restart could be similar to this one:

[oracle@server1 ∼]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       server1                  STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       server1                  STABLE
ora.RECO.dg
               ONLINE  ONLINE       server1                  STABLE
ora.asm
               ONLINE  ONLINE       server1                  Started,STABLE
ora.ons
               OFFLINE OFFLINE      server1                  STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       server1                  STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       server1                  STABLE
ora.ora11.db
      1        ONLINE  ONLINE       server1                  Open,STABLE
--------------------------------------------------------------------------------
[oracle@server1 admin]$

Unless you specifically configured the Oracle Notification Server (ons) for sending Fast Application Notification events, this resource will not be started by default. Until all databases on the server are migrated to 12.1 you need to use the pre-12c ORACLE_HOME associated with each database to administer it:

[oracle@server1 ∼]$ srvctl status database -d ora11
Database is running.

If unsure, you can get the ORACLE_HOME from the resource profile (srvctl config database -d) or alternatively from the /proc file system in /proc/pid/environ. Can you start and stop the database resource using srvctl? Can you get the status of the database resource, and so on; you get the idea. Please ensure that all your acceptance criteria are met before proceeding with the database migration.
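Such a smoke test could be scripted along the following lines, using oraenv to switch to the 11.2 home first. The SID ora11 is taken from the examples in this chapter.

```shell
# Switch the session to the 11.2 database home non-interactively
export ORAENV_ASK=NO
export ORACLE_SID=ora11
. oraenv

# Basic acceptance checks: status, stop, start, status again
srvctl status database -d ora11
srvctl stop database -d ora11
srvctl start database -d ora11
srvctl status database -d ora11
```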

Using Oracle Restart 12c and previous database releases in parallel

There is no strict dependency between the upgrade of Grid Infrastructure and the RDBMS instances on the same host. It is perfectly fine to keep pre-12c databases under 12c Grid Infrastructure for a period of time. As with 11.2, there are some known problems with such a configuration. These are documented in My Oracle Support note 1568834.1, “Pre 12.1 Database Issues in 12c Grid Infrastructure Environment.” Bear in mind that the note is written for Real Application Clusters as well as Oracle Restart.

Most of the issues documented in the note can be solved if Oracle Restart is operating in a pre-12c-friendly mode. In this mode, the server is said to be “pinned.” When upgrading from a previous version of Oracle Restart the compatibility mode is always on. If you performed a fresh 12c Grid Infrastructure installation, then you need to “pin” your node using crsctl. Please refer to the aforementioned MOS note for more information.
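The pin command itself is a one-liner to be run as root; the node name server1 is taken from the examples in this chapter.

```shell
# Pin the node so pre-12c databases can be managed (needed on fresh
# 12c installations; upgraded installations are pinned already)
crsctl pin css -n server1
```

You may recognize the effect from the CRS-4664 message in the rootupgrade.sh output shown earlier in this chapter.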

Another problem is that you cannot manage a pre-12c database when your ORACLE_HOME environment variable is set to the 12c Grid home. Trying to interrogate the status of the database results in the following error message:

[grid@server1 crsconfig]$ srvctl config database -db ora11
PRCD-1027 : Failed to retrieve database ora11
PRCD-1229 : An attempt to access configuration of database ora11 was rejected because
its version 11.2.0.3.0 differs from the program version 12.1.0.1.0. Instead run the
program from /u01/app/oracle/product/11.2.0.3/dbhome_1.

This message is not new with 12c; it has been around for a while, and the situation was exactly the same when using 11.2 Grid Infrastructure and pre-11.2 databases on the same host.

Upgrading the database

Compared to the migration of Oracle Restart, which is mostly automatic, the migration of the database to the new release is where the real work lies. Although ASM has become a lot more sophisticated over time, the databases acting as ASM clients did not necessarily take notice that a version change has occurred. Changes in the way the Oracle database operates internally, however, can make a version change a little more challenging. Luckily for you, researchers have always scrutinized the behavior changes, mostly optimizer related, from release to release. Jonathan Lewis has published a lot on optimizer changes, and will surely continue to do so for the current release. Maria Colgan from Oracle is also a great source of information; she contributes to the Oracle blog related to the optimizer. And there is a large number of other researchers publishing their own findings on how the current release differs from the previous one. Typically you go through multiple iterations until you understand the upgrade logic, then migrate the lower tiers until it is finally time to migrate production and the disaster recovery environment.

If there is a “magic” parameter that controls a lot within Oracle, it is the initialization parameter named “compatible.” It is rarely touched, and for good reason: increasing the value of “compatible” makes it impossible to downgrade to a lower release unless you perform a restore from your backup medium. The message therefore has to be: do NOT touch the compatible initialization parameter unless you are absolutely sure that your application performs within the Service Level Agreements and there are no show-stoppers found by thorough, professional regression testing.
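You can verify that the upgrade left “compatible” untouched with a quick query; raising it is a deliberate, separate step. The sketch below assumes sqlplus is in the path and the environment points at the upgraded database.

```shell
# Check the current compatible setting; it stays at the pre-upgrade
# value until you explicitly raise it
sqlplus -s / as sysdba <<'EOF'
show parameter compatible
-- Raising it is irreversible; only run this after final sign-off:
-- alter system set compatible = '12.1.0.1.0' scope = spfile;
EOF
```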

From a process point of view the migration project can have multiple phases, not all of which are mandatory:

  1. The engineering department reviews the new Oracle release and becomes familiar with it. In the absence of an engineering department dedicated members of the operational support team should test the new release.
  2. An otherwise unused test environment with a clone of a database is created on separate hardware to allow database administrators to perform the migration without time pressure. So far the migration process was technology driven.
  3. The development environments are migrated. Special emphasis should be on the development process: can the development tools which are in use still be used without limitations? Is the Continuous Integration process still working? Are you capable of performing unit tests? What about replication, Data Guard, or other solutions? At this stage some initial regression testing can be performed. Commercial packaged applications depending on the database for reporting or business intelligence might need upgrading too in order to support Oracle 12c. When migrating it is highly recommended to consider the full stack from top to bottom and carefully check product compatibility lists for all software involved.
  4. The back-out plan is tested a few times to hone the routine.
  5. Upgrade of the higher tiers, except production and disaster recovery (“DR”). This is the phase that requires the most attention. Regression testing of the application, parallel runs, and much more are performed at this stage to ensure that the application continues to run within the required parameters. Anything external to the database should not be forgotten: feeds, web services, externally scheduled jobs, reporting software, etc. fall into this category. Only if all stakeholders sign off can you proceed and migrate the live and DR environments.
  6. As a last step of the upgrade project you upgrade production and the Disaster Recovery environments.

Once the immediate pressure of migrating to 12c is removed you can think of implementing the consolidation strategies. Non-Container Databases can be consolidated into Container Databases. You should consider implementing Resource Manager at the CDB and PDB level depending on the class of service the application is using. Refer back to Chapter 7 for more information about implementing Resource Manager with Pluggable Databases. This chapter will show you how to migrate a non-CDB into a CDB as part of the migration.

Let’s have a look at the steps to be performed in more detail for a database migration.

High level steps

The individual database migration contains a number of steps that need to be performed sequentially. As one of the first tasks you need to become familiar with the new database release, which is why the author hopes you are reading this book! Then you have to think about an upgrade strategy, which must include the worst case: a downgrade, or a restore from backup if everything else fails. As with every major task involving the Oracle database, you must have a valid backup before commencing the work. Your upgrade plan should include phone numbers for escalating calls to the backup team, who will assist with the restore of the database if needed.

The test plan will be put into practice when you begin the migration of a fresh copy of production on a host clearly marked for development. Since lots of developers may depend on the machine you are testing on, you should, space permitting, migrate an independent copy of the database rather than an actual development database. Although labeled development environments, many of these systems are as important as the actual production database, especially for Independent Software Vendors (ISVs). Imagine you had 50 developers sitting idle for a couple of days due to a database outage! Whichever way you choose, please ensure you have a strategy for backing out. This strategy must be tested and signed off as part of the process. RMAN backups can be used, but any other reliable, supported, documented, tried and tested procedure is also appropriate, provided it has proven to revert everything back to the starting point without any issues.

Consider a cycle as shown in Figure 12-6.

9781430244288_Fig12-06.jpg

Figure 12-6. The initial migration process for a development server

Once the test migration has been completed, you need to point the application(s) to the newly created database and let the functional and regression testing begin. If you can you should adhere to the motto of changing only one thing at a time: planning on releasing new code to the application at the same time the database migration is scheduled for should be avoided. Otherwise it will be difficult to link cause and effect! Test the migrated database thoroughly with the application, just being able to get to the login screen is not good enough. The migration process needs to be tested as well, and ideally automated to a high degree. Once everyone is in agreement that all the tests were successful it is time to do it for real and upgrade the upper tiers up to production and disaster recovery. By now you need to have stakeholder agreements and valid test reports indicating that everyone is happy to go ahead with the production migration.

Finally the exciting week post go-live begins. Hopefully by then life is back to normal, unless the testing phase was insufficient and the details you missed are now striking back at you. The database is still running with an unchanged “compatible” setting. This phase of the migration could be referred to as the “stable phase”: the database runs out of the new RDBMS home, but on the old compatibility level. When appropriate testing has ascertained that the “compatible” parameter can safely be raised, you might opt to do so. There is no hard requirement to change the compatible parameter, but there will be a time when this should be done to prevent the software release and compatible value from drifting apart too much.

The steps in the following sections assume that you have taken note of all these warnings and good advice when upgrading. Refer to this section from time to time if you need words of caution. And finally: very few databases are the same. Just because something worked for one, does not necessarily imply it works for the other one too. It is ultimately the responsibility of the project management team to make the migration a success.

Performing a database software-only installation

The installation of the Oracle 12.1 binaries on the database host marks the true beginning of the database upgrade project. Depending on how strict your organization is, the binary installation must be performed out of hours, even for the development server where you test the migration initially. The installation of the database binaries has been described in great detail earlier in this book and will not be repeated here. Please refer back to Chapter 6 for more detail about performing a software-only installation of Oracle 12c. In this example the software has been installed under the default location of /u01/app/oracle/product/12.1.0.1/dbhome_1. If yours is different please adapt the path to your environment.

Running the pre-upgrade tool

Beginning with Oracle 12c the name of the pre-upgrade tool has changed to preupgrd.sql. It performs the necessary pre-upgrade checks. The Database Upgrade Assistant (DBUA), by the way, performs exactly the same checks and invokes the script as well. The database to be migrated in this chapter started its life as a “custom” database created with the Database Configuration Assistant (dbca) in version 11.2.0.3. It has a standard set of options registered in the server registry:

COMP_NAME                                VERSION
---------------------------------------- ------------------------------
Oracle XML Database                      11.2.0.3.0
Oracle Expression Filter                 11.2.0.3.0
Oracle Rules Manager                     11.2.0.3.0
Oracle Workspace Manager                 11.2.0.3.0
Oracle Database Catalog Views            11.2.0.3.0
Oracle Database Packages and Types       11.2.0.3.0
JServer JAVA Virtual Machine             11.2.0.3.0
Oracle XDK                               11.2.0.3.0
Oracle Database Java Packages            11.2.0.3.0

Translating these to dbca options, only the Java Virtual Machine (JVM) and XML Database (XMLDB) options have been selected. Selecting these implicitly triggers the deployment of all the other components automatically. Note that beginning with Oracle 12.1 the use of XML Database is mandatory: XMLDB and its repository have to be present.

Following the example, the file preupgrd.sql and the dependent script utluppkg.sql have been copied from the new Oracle 12.1 home to a temporary location and executed against the database ora11 running from the 11.2.0.3 Oracle home. The script creates a log file and pre-/post-upgrade “fixup” scripts. If an ORACLE_BASE was present, which should be the case with version 11.1 and later installations, then these files are in $ORACLE_BASE/cfgtoollogs/${ORACLE_SID}/. The resulting log file is broken down into various sections containing information about specific parts of the database to be upgraded.
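A minimal session for this step could look as follows. This is a sketch only; the temporary location /tmp is an assumption:

```sql
-- Run from the old 11.2.0.3 home against the database to be upgraded.
-- Assumption: preupgrd.sql and utluppkg.sql were copied to /tmp beforehand.
SQL> @/tmp/preupgrd.sql
-- The log file and the fixup scripts are then written underneath
-- $ORACLE_BASE/cfgtoollogs/${ORACLE_SID}/
```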

  • Database information: Includes vital information such as the database release, the setting of the compatible parameter, and the timezone file version.
  • Renamed Parameters: This section highlights parameters which have been renamed in the new Oracle version.
  • Obsolete/deprecated parameters: This section does not require an explanation. To avoid warnings when the Oracle database starts with such a parameter you should consider updating the initialization file.
  • Component List: This section lists the components installed in the database. The output is very similar to what you saw earlier in this section. You should investigate why a component is invalid before upgrading.
  • Tablespaces: Checks for sufficient space in the Oracle tablespaces SYSTEM, SYSAUX, UNDO, and TEMP. A recommendation is offered to increase the size of these if they are too small for the migration.

  • PreUpgrade Checks: Any potential problems which could arise with the migration are listed here. There is usually a description plus a solution offered. You should review the proposed change and act on them if needed.
  • PreUpgrade Recommendations: Oracle will most likely recommend gathering of dictionary statistics.
  • Post Upgrade Recommendations: Although not strictly speaking necessary, Oracle might recommend upgrading the time zone file. This step is needed if your application makes use of named timezones (for example timestamp ‘2013-08-16 19:00:00 Europe/London BST’) and the timezones you are storing changed recently. Remember to upgrade the server and all clients to the new timezone file!
  • Summary: Presents a summary of the gathered information and lists any error that might prevent your upgrade from succeeding. Do not proceed until you fixed any potential error unless you want to practice restores under pressure!

In the demo system used for this chapter Oracle complained about too low a setting for the processes initialization parameter. In previous versions it was common to have this set to 150, especially for databases with fewer concurrent users. Oracle now insists on a value of 300 for the processes parameter. Note that if the tool finds the Enterprise Manager DBConsole repository in the database, the preupgrade utility advises that the OEM repository will be removed as part of the upgrade. To save time, the utility therefore recommends removing the repository prior to the upgrade. The final recommendation for the database was to gather dictionary statistics before starting the actual upgrade.
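The processes finding can be addressed before the upgrade with a simple parameter change. A sketch, assuming an spfile is in use:

```sql
-- Raise processes to the value the pre-upgrade tool insists on.
-- The change only takes effect after an instance restart.
ALTER SYSTEM SET processes = 300 SCOPE = SPFILE;
```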

Before continuing you should get a list of objects currently invalid in the database. In an ideal world there are no invalid objects in a database after a global recompilation, using utlrp.sql for example, has been run during a maintenance window. However, the world is not perfect and there might be objects that do not compile. You should take note of them and compare the list with the list of invalid objects after the migration has completed. To do so, execute the script utluiobj.sql before and after the migration to get a list of invalid objects. The upgrade guide lists the following additional areas of interest before the upgrade:

  • Access Control Lists (ACLs) which have been introduced with Oracle 11g for network related PL/SQL packages have been revised. The new database release 12c uses Real Application Security removing the dependency on XML DB.
  • The way that database passwords for database links are stored has changed. If you plan on downgrading the database from 12c to your original release you should preserve the passwords before you upgrade.
  • As with many Oracle migrations you need to upgrade the timezone files as part of the process. Be careful with the procedure: columns using TIMESTAMP WITH TIME ZONE could become corrupted if not treated correctly. Even if your application does not make use of this data type, the Oracle Scheduler does make intensive use of it!
  • If your application or database makes use of materialized views, please ensure that all refresh operations have completed before migrating.

Additional information about pre-upgrade checks and prerequisites can be found in the official Database Upgrade Guide, section 2.5.
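Returning to the invalid-object snapshot mentioned above: in addition to utluiobj.sql, a simple query can record the state before the migration (a sketch):

```sql
-- Capture the list of invalid objects before the upgrade, to compare
-- against the same query run after the migration and utlrp.sql.
SELECT owner, object_name, object_type
  FROM dba_objects
 WHERE status = 'INVALID'
 ORDER BY owner, object_name;
```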

Making a database available for a first test migration

In the following step you need to make the database available to the new server for a first upgrade attempt. This can either be a clone of the database’s LUNs or an RMAN-created clone. In any case you should try to get a database which has not been in use, to ease the time pressure of having to succeed within a change window.

If you have been lucky and your storage administrator has given you a clone of the LUNs used for the test database, then you might have to rescan the SCSI bus and update the multipath configuration. The kpartx utility is a great way of making partitions on device-mapper multipathed devices known to the operating system without rebooting the server. If you are on an earlier version of Red Hat or Oracle Linux and use ASMLib, then you might need to perform a scandisks operation to make the new LUNs available to Oracle ASM.

After adjusting any necessary parameters in the initialization file you can start the database. If the database was in ASM the new pfile is created very quickly: all you need is a pointer to the SPFILE as shown here:

[oracle@server1 ∼]$ cat initora11.ora
SPFILE='+DATA/ORA11/spfileORA11.ora'          # line added by Agent

Before starting the database with the upgrade option for the first time as shown below, you should create a pfile from the spfile and modify any parameters the pre-upgrade script recommended changing. Should the database fail to start, then you most likely forgot to create the audit file destination directory or the diagnostic destination. You should not have the background/user/core dump destination parameters set in an 11g database, but if you are migrating from a 10.2 database then you might have to review these and change them accordingly. If you are using a password file, ensure that it is available in the new Oracle home’s dbs directory. If you are not using Oracle Restart and previously had a SID_LIST in your listener.ora file, ensure that this is available with the 12c listener, and that the listener is started.

In any case, create a new backup of the database using a proven and validated method before starting the actual migration of the test database. The backup should include the spfile and controlfile autobackup as well as all database files and necessary archivelogs.
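Such a backup could be taken along the following lines. This is a sketch only, not a substitute for your site’s backup standards:

```rman
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
```

With controlfile autobackup enabled, the spfile and controlfile are backed up automatically at the end of the backup.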

Performing the upgrade

If you have not already done so, update the /etc/oratab file (or /var/opt/oracle/oratab on Solaris) and create a new entry for the test database about to be migrated. In the next step you need to ensure your new environment has been sourced into your session. The safest way to do so is to log out and log back in again. The environment variables should now point to the new Oracle home.
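A hypothetical oratab entry for the migrated test database is shown below; the awk one-liner mimics how oraenv resolves the home from the SID. The file name and paths are illustrative only:

```shell
# Illustrative oratab entry pointing the SID "ora11" at the new 12.1 home.
# The real file is /etc/oratab (or /var/opt/oracle/oratab on Solaris).
cat > /tmp/oratab.example <<'EOF'
ora11:/u01/app/oracle/product/12.1.0.1/dbhome_1:N
EOF

# Show the Oracle home the SID now resolves to, the way oraenv derives it:
awk -F: '!/^#/ && $1 == "ora11" {print $2}' /tmp/oratab.example
```

The last line should print the 12.1 home; if it still prints the old home, the entry was not updated.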

At this stage you need to be sure you have a tried and tested backup and recovery strategy in place in the event that the upgrade fails.

Navigate to the $ORACLE_HOME/rdbms/admin directory on the database server. Then start the database with the upgrade command:

[oracle@server1 admin]$ sqlplus / as sysdba
[...]
 
Connected to an idle instance.
 
SQL> startup upgrade
ORACLE instance started.
 
Total System Global Area 1068937216 bytes
Fixed Size                  2268624 bytes
Variable Size             314573360 bytes
Database Buffers          746586112 bytes
Redo Buffers                5509120 bytes
Database mounted.
Database opened.
 
SQL> exit

Exit the SQL*Plus session and be amazed by one of the best new features from the author’s point of view: the parallel dictionary upgrade. Instead of initiating the upgrade from within SQL*Plus, you can launch a Perl script to perform the task in parallel. The script is named catctl.pl and resides in $ORACLE_HOME/rdbms/admin as described earlier. It normally takes two parameters: -n indicates the maximum degree of parallelism, ranging from 1 to 8 with 4 being the default, and -l specifies the log file directory. Without the latter the migration log files are created in the current working directory, but they should rather be directed to a better location, possibly $ORACLE_BASE/admin/dbUniqueName/logs/. A sample session for the 11.2.0.3 database “ora11” is shown here:

[oracle@server1 admin]$ $ORACLE_HOME/perl/bin/perl catctl.pl -n 8 \
> -l $ORACLE_BASE/admin/$ORACLE_SID/logs catupgrd.sql
 
Analyzing file catupgrd.sql
14 scripts found in file catupgrd.sql
Next path: catalog.sql
32 scripts found in file catalog.sql
Next path: catproc.sql
37 scripts found in file catproc.sql
Next path: catptabs.sql
61 scripts found in file catptabs.sql
Next path: catpdbms.sql
205 scripts found in file catpdbms.sql
Next path: catpdeps.sql
77 scripts found in file catpdeps.sql
[...]
[Phase 53] type is 1 with 1 Files
cmpupend.sql
 
[Phase 54] type is 1 with 1 Files
catupend.sql
 
[Phase 55] type is 1 with 1 Files
catuppst.sql
 
[Phase 56] type is 1 with 1 Files
catshutdown.sql
 
Using 8 processes.

As you can see the migration is broken down into phases, and each phase has a number of scripts associated with it. After the analysis has completed, the upgrade is started. The last few lines are shown here.

Analyzing file catupgrd.sql
Log files in /u01/app/oracle/admin/ora11/logs
Serial   Phase #: 0 Files: 1      Time: 77s
Serial   Phase #: 1 Files: 3      Time: 50s
[...]
Serial   Phase #:51 Files: 2      Time: 10s
Restart  Phase #:52 Files: 1      Time: 1s
Serial   Phase #:53 Files: 1      Time: 1s
Serial   Phase #:54 Files: 1      Time: 118s
Serial   Phase #:55 Files: 1      Time: 117s
Serial   Phase #:56 Files: 1      Time: 26s
Grand Total Time: 1169s

The parallel upgrade script was using a degree of parallelism of 8 as you can see by the line “using 8 processes.”

image Note  If you want you can still run the upgrade serially, by passing the -n 0 option to catctl.pl.

After the completion of the Perl script you will notice that the database has already been shut down. The next mandatory step is to start it normally and run the post-upgrade script utlu121s.sql. In the database which has just been migrated, the following output was produced by the script:

SQL> @?/rdbms/admin/utlu121s
.
Oracle Database 12.1 Post-Upgrade Status Tool           08-13-2013 00:59:59
.
Component                               Current         Version  Elapsed Time
Name                                    Status          Number   HH:MM:SS
.
Oracle Server
.                                      UPGRADED      12.1.0.1.0  00:08:13
JServer JAVA Virtual Machine
.                                         VALID      12.1.0.1.0  00:02:02
Oracle Workspace Manager
.                                         VALID      12.1.0.1.0  00:00:58
Oracle XDK
.                                         VALID      12.1.0.1.0  00:00:27
Oracle XML Database
.                                         VALID      12.1.0.1.0  00:03:02
Oracle Database Java Packages
.                                         VALID      12.1.0.1.0  00:00:10
Final Actions
.                                                                00:01:17
Total Upgrade Time: 00:16:23
 
PL/SQL procedure successfully completed.

Before handing the database over for testing you need to ensure that the catuppst.sql script in $ORACLE_HOME/rdbms/admin has been executed as part of the upgrade, which it normally is unless the upgrade process ran into an error condition.

After that script has completed it is recommended to follow it up with the execution of the utlrp.sql script to recompile objects that have become invalid in the process. At the end of its execution you should not have additional objects in the dba_objects view with a status of “INVALID”. Oracle provides a script named utluiobj.sql to check for new invalid objects as a result of the migration:

SQL> @?/rdbms/admin/utluiobj
.
Oracle Database 12.1 Post-Upgrade Invalid Objects Tool 08-13-2013 01:02:45
.
This tool lists post-upgrade invalid objects that were not invalid
prior to upgrade (it ignores pre-existing pre-upgrade invalid objects).
.
Owner                     Object Name                     Object Type
.
 
PL/SQL procedure successfully completed.

Ideally the utluiobj.sql report lists no objects, as shown above. The final step when using Oracle Restart is to update the metadata configuration using srvctl upgrade database as shown in the example:

[oracle@server1 ∼]$ srvctl upgrade database -d ora11 -o $ORACLE_HOME

This command does not seem to consider the oratab file, which needs to be changed manually: replace the path to the pre-12c Oracle home with the installation path you chose, if you have not done so already.

If the output of “srvctl status database -d <dbname>” complains that the database was not running but you clearly see that it is (for example, there are background processes) then you might need to stop the database using SQL*Plus and start it with srvctl. The environment for your Oracle processes should reflect the new Oracle home. Assuming that the UNIX process ID for the database’s smon process was 10059, then the following little command taken from the proc(5) manual can help you get clarity about the ORACLE_HOME:

[root@server1 ∼]# ( cat /proc/10059/environ; echo ) | tr '\0' '\n' | grep ORACLE_HOME
ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/dbhome_1

A word of caution: your database is now migrated to the current release and should under no circumstances be started from the old ORACLE_HOME!

Performing necessary post-upgrade steps

Each new database upgrade guide lists a number of steps to be completed. Whether or not these steps are required depends on your configuration. Following are the steps to consider:

  • Check your oratab file to ensure that the Oracle home for a given ORACLE_SID points to the 12.1 home, not the previous one. Do NOT start the 12.1 database instance from its old Oracle home!
  • Ensure that your .bashrc, .bash_profile, /etc/profile or .profile scripts do not set environment variables pointing to an old Oracle home for your database.
  • If you saved statistics using the DBMS_STATS.CREATE_STATS_TABLE procedure then these tables need to be updated using the UPGRADE_STAT_TABLE procedure in the DBMS_STATS package.
  • Upgrade the RMAN catalog.
  • If you want to make your XMLDB repository available for FTP and HTTP authentication then you may need to initialize the ports after the database migration.
  • If the pre-upgrade tool recommended that you change the timezone file, you should do so. The log file lists a note on My Oracle Support which can be used for the process.
  • Users of Application Express have to make changes to their configuration settings after the upgrade.
  • If your IT security department has not created a compliance document, then you should lock all Oracle-supplied accounts except for SYS and SYSTEM. Use the ORACLE_MAINTAINED column of the CDB_USERS view to identify Oracle-internal accounts and lock them if they are not already locked.
  • You might want to recreate your orapw$ORACLE_SID file in the new 12c format, or use the new INPUT_FILE parameter of orapwd to migrate it to the new format.
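The account-locking step above can be sketched with a query generating the required DDL. The view and column names are as introduced in 12c, but review the generated statements before running them:

```sql
-- Generate lock statements for Oracle-supplied accounts that are still open.
-- ORACLE_MAINTAINED = 'Y' marks accounts created by Oracle-supplied scripts.
SELECT 'ALTER USER "' || username || '" ACCOUNT LOCK;' AS lock_ddl
  FROM cdb_users
 WHERE oracle_maintained = 'Y'
   AND username NOT IN ('SYS', 'SYSTEM')
   AND account_status = 'OPEN';
```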

Each of these steps is explained in detail in Chapter 4 of the official Upgrade Guide. Consult that chapter for more information on any of the steps you wish to perform.

Upgrading the database with the Database Upgrade Assistant

After having experienced what happens during a database migration, many database administrators might prefer to use a graphical tool for the same job for smaller, not quite as important databases. After all, the DBUA utility has been developed by Oracle to provide a means of upgrading databases repeatedly, in the same way, without overlooking details. The code in DBUA will certainly be executed over and over, whether it is the first, second, or tenth database you are upgrading.

Nevertheless the same precautions as described earlier in the context of the manual database migration have to be taken. Do not become lazy: there is nothing worse than a corrupt database that is stuck halfway between 11.2.0.3 and 12.1 and cannot be recovered.

It is recommended that the database you want to upgrade be running before DBUA is invoked. This ensures that the database has been started using the correct parameter files. Remember that DBUA will restart the database during the upgrade process using the default parameter files, so these need to be correctly configured. The interactive Database Upgrade Assistant requires an X server to connect to. Please refer back to Chapter 5 for a refresher on how to provide an X environment for a graphical Oracle utility if you are unsure how to proceed. The standard way of presenting X applications to the database administrator in this book is vncserver. Using a VNC client to connect to the server in question you can start the DBUA in an X terminal. Start the Upgrade Assistant from the 12.1 Oracle home. You will be surprised how flexible and useful the upgrade assistant has become! When started it will greet you as shown in Figure 12-7:

9781430244288_Fig12-07.jpg

Figure 12-7. DBUA welcome screen

Select the “Upgrade Oracle Database” option to begin the upgrade process and click on the “Next” button to advance to step 2 shown in Figure 12-8.

9781430244288_Fig12-08.jpg

Figure 12-8. The database selection screen

In this screen you have the option to select the database you want to upgrade. In our case the only eligible database was named ORA11 and was created in an Oracle 11.2.0.3.x home. It is up, and it is a single instance database. The instance name ORA11 was chosen for the sake of demonstrating the upgrade process only. In most production deployments it makes very little sense to tie a database name to a version: version numbers inevitably increase, causing the instance name to fall out of sync with the actual release. The workaround to the situation, using services, may add confusion for administrators looking after such an environment.

The target 12c RDBMS home is shown at the top of the screen and is determined by the location of the DBUA executable on the file system. If you do not find the database you want to migrate you should check the source Oracle home drop-down menu and choose the appropriate one. As per the recommendation above the database is started. Select the radio button next to it and click on “Next” to advance to step 3, shown in Figure 12-9:

9781430244288_Fig12-09.jpg

Figure 12-9. Output of the prerequisite checks

The database instance ORA11 has been created using the standard engineered build. With every database being designed and deployed in the same way, bad surprises are less likely to happen. Since no severe exceptions were reported with the previously migrated database, the author was reasonably confident there were none with this database either. The checks performed by DBUA gave the green light for a migration. If there were any problems you could click on the checks to expand a list of potential issues.

If problems had been detected then DBUA offers the option to fix those that can be fixed by a script. For each exception reported in the list which is “fixable” as per the column in the wizard, you can apply the fix and check again whether the problem has been solved. This again is very similar to the output of the pre-upgrade script you ran in SQL*Plus earlier: it too produced fixup scripts you could have executed in case of problems. Some issues reported by DBUA cannot be fixed automatically and require you to take a look at the Upgrade Guide, My Oracle Support, or your favorite search engine to find a solution to the problem. You should not continue until the Upgrade Assistant stops reporting warnings and/or errors.

In the above example none of that was necessary; a click on the “Next” button leads to the next screen, shown in Figure 12-10.

9781430244288_Fig12-10.jpg

Figure 12-10. Presenting the upgrade options

In this part of the migration wizard you can fine-tune which actions DBUA should perform for you. Amongst the options you can choose are:

Upgrade Parallelism. Lets you select the degree of parallelism for the catctl.pl script. This script has been covered in more detail in the manual upgrade section earlier in the chapter.

Recompile Invalid Objects During Post Upgrade. The Upgrade Assistant can recompile invalid objects after the migration if you tick the relevant check box. The degree of parallelism can be defined independently of the degree of parallelism the catctl.pl script uses.

Upgrade Timezone Data. Choosing this option is a handy shortcut if you know that your database does not have timestamp with local time zone columns. If you are unsure you might want to review the relevant documentation in the Upgrade Guide and on My Oracle Support. Updating the timezone file is not a trivial task and is best performed by an experienced DBA. But please do not forget to work on the timezone file later!

Gather Statistics Before Upgrade. If your dictionary statistics are stale the migration can take longer than necessary to complete. If you tick the “gather statistics before upgrade” check box then DBUA will perform this step for you. If you have recent and accurate dictionary statistics this might not be needed.

Set User Tablespaces Read Only During Upgrade. This is a useful option during the migration to prevent any changes to the SCNs marked in the datafile headers, allowing for a much shorter RMAN restore should something go wrong during the migration.

File Locations. You can specify new file locations for the diagnostic destination and audit dumps. Remember that the “diag” directory will be created automatically; if you enter $ORACLE_BASE/diag as the diagnostic destination, you will end up with $ORACLE_BASE/diag/diag/ which is probably not what you intended.

You have another option to run your own scripts before and after the database upgrade in the “custom scripts” pane, which is not shown here. A click on the “Next” button leads to the “Management Options” screen, also not shown. It allows you to configure the database either for OEM Database Express, which replaces OEM Database Console, or alternatively with Enterprise Manager 12c Cloud Control. If you wish to register the database with Cloud Control you need to have an appropriately configured agent installed on the host. If you do not want to configure these options at this time, a click on “Next” will guide you to the screen shown in Figure 12-11:

9781430244288_Fig12-11.jpg

Figure 12-11. Move Database Files as part of the upgrade

The “move database files” screen allows you to move the database to a different location on the file system, or alternatively into ASM if you wish. The screen is very similar to the one shown in the Database Configuration Assistant during the database creation. The question you need to ask yourself here is: where do you want the migrated database to be located? In this scenario, which is an “in-place” migration of a database which has been moved prior to the invocation of the Upgrade Assistant, no action was necessary. Proceed to the next wizard screen, shown in Figure 12-12.

9781430244288_Fig12-12.jpg

Figure 12-12. Updating the network configuration.

If you had a listener operating out of the “old” Oracle home, such as from an 11.2 home, then you could opt to have DBUA move it to the new database home. Since Oracle Restart is in use on this server, there is no listener in the database home. This screen is a very convenient way for systems not using ASM to have the listener moved as part of the migration. One step less to perform after the migration! If you want to migrate the listener as well, select it before moving to the next screen, shown in Figure 12-13.

9781430244288_Fig12-13.jpg

Figure 12-13. Recovery options

The “Recovery Options” screen is a central piece of the migration in the author’s opinion. Here you determine your back-out-while-not-losing-face strategy. You can choose from any of the following three:

  • Backups managed with Recovery Manager.
  • Use of the Flashback Database feature for databases beginning with 11.1.0.7.
  • Your own.

If you choose to use an existing backup then please make sure it is current and includes all relevant archived redo logs! You should also ensure that all the backup pieces are actually available if needed. If unsure, you could let RMAN create a backup for you. To use Flashback Database your source database must be 11.1.0.7 or later, and the Flashback Database feature must be enabled; otherwise you will see the option greyed out in the screenshot. If you opt to use your own backup strategy, then please have one that is tested and valid. Choosing this option is like telling the Upgrade Assistant: “trust me, I know what I am doing”. The backup and possibly restore operation will be your responsibility.

Finally it is time to start the actual migration! Another click on the “Next” button will present you with a summary of operations shown in Figure 12-14 below.

9781430244288_Fig12-14.jpg

Figure 12-14. The summary of operations

The summary screen lists the options you have chosen and is your last chance to change your mind. Carefully review the options, and when you are happy click on “Finish” to begin the actual migration, which might take a little while. Please note that the progress window is not shown here. After the migration has completed, you are shown the results screen which again lists source and destination databases, any warnings, and details. The most likely warning you are going to see is that your database uses an old timezone file. The upgrade of the timezone file is mandatory, and needs to be completed using the instructions provided on My Oracle Support.

Database Upgrade Assistant also supports silent migrations, which can come in handy when you do not have an X server available for security reasons.

Consolidating migrated databases

You read in Chapter 7 that you can plug a non-Container Database (non-CDB) into a CDB. Transforming the freshly migrated database into a Pluggable Database (PDB) is a logical consequence of the idea described earlier and a further step towards consolidation.

Here again it is beneficial to have a standard DBCA template or comparable method of creating databases identically. After the pre-12c database has been migrated to the new release you can plug it into the CDB of your choice, if the following prerequisites are met:

  • The target CDB and the newly migrated database MUST use the same or a compatible character set and national character set.
  • The compatible parameter of the pre-12c database must be advanced to 12.1. Please remember the warning about doing so: advancing the compatible parameter rules out a downgrade by script. Advance the parameter only if you are sure you will not need to downgrade to the previous release.
  • Both target CDB and migrated database must have the same endianness. In other words you cannot plug a database created on a popular RISC platform straight into a CDB running on Linux.
  • Both CDB and migrated database must use the same options as shown in the DBA_SERVER_REGISTRY view.

The last bit can be a problem on sites where there is little control over the database creation process. The well-known General_Purpose.dbc template seems to add every Oracle option under the sun to the database without the option to change the selection during creation. “Custom” databases quite often, and rightly so, contain only the options needed. An engineered build ideally limits the options to the minimum required: catalog views and procedures as well as Java and XML DB. Spatial, InterMedia, and Text are not standard components and should be installed only when needed, not least to avoid license problems later!
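To compare the installed options, you can run the same query against the DBA_SERVER_REGISTRY view in both the migrated non-CDB and the root of the target CDB and compare the results, for example:

```sql
-- Execute in both databases; the component lists should match
-- before attempting to plug the non-CDB into the CDB.
SELECT comp_id, comp_name, version, status
  FROM dba_server_registry
 ORDER BY comp_id;
```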

Updating the compatible parameter

One of the steps after receiving sign-off from all stakeholders that the migration was a success is to bump the compatibility to 12.1. Again the word of warning: raising the compatible parameter blocks the route to a downgrade via script. Before changing the parameter you should make sure you can revert to the pre-12c compatible setting. Usually a full RMAN backup will be needed. Changing the parameter is rather simple but requires an instance restart. The command to update the compatible setting is the usual alter system command:

SQL> show parameter compatible
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
compatible                           string      11.2.0.0.0
 
SQL> alter system set compatible='12.1.0' scope=spfile;
 
System altered.
 
SQL> shutdown immediate
Database closed.
[...]
Database mounted.
Database opened.
SQL> show parameter compatible
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
compatible                           string      12.1.0

The corresponding information in the alert.log is shown here as well.

ALTER DATABASE   MOUNT
[...]
2013-08-14 18:56:15.100000 +01:00
ALERT: Compatibility of the database is changed from 11.2.0.0.0 to 12.1.0.0.0.
Increased the record size of controlfile section 12 to 96 bytes
Control file expanded from 614 blocks to 618 blocks
Increased the record size of controlfile section 13 to 780 bytes
Control file expanded from 618 blocks to 620 blocks
[...]
ALTER DATABASE OPEN
Switching redo format version from 11.2.0.0.0 to 12.1.0.0.0 at change 1106948
Thread 1 advanced to log sequence 87 (COMPATIBLE advance)

With that setting in place you can move on to the next task: checking whether the non-CDB is compatible with the destination CDB and can be plugged into it.

Checking compatibility

The compatibility check is performed in two steps. In the first step you need to create an XML file describing the future PDB to be plugged in. It goes almost without saying that the aspiring PDB’s data files must be accessible from the CDB’s host, but this too is a requirement.

You create the XML file with the DESCRIBE procedure in the new DBMS_PDB package, as shown here:

SQL> exec dbms_pdb.describe(-
>  pdb_descr_file => '/u01/app/oracle/admin/ora11/pdb/ora11.xml');
 
PL/SQL procedure successfully completed.

The database needs to be opened read-only or the execution of the procedure will fail with an error message. The file created, ora11.xml, contains a logical description of the database to be plugged in. It specifically sets the PDB name to the database’s unique name, which will be important later. It also records further metadata such as the byte order (“endianness”), the database ID, and others before it lists all the data files. Towards the end of the file you find the options in use in the database, including their respective versions. Finally, important initialization parameters are listed alongside their respective values. All the information in the XML file will next be used to assess the compatibility of the database with its future container.
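Bringing the source database into the required read-only state first might, as a minimal sketch, look like this:

```sql
-- The non-CDB must be open read-only for DBMS_PDB.DESCRIBE to succeed
SQL> shutdown immediate
SQL> startup mount
SQL> alter database open read only;
```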

Now switch to the CDB into which you want to plug the database and execute the compatibility check. This too is found in the DBMS_PDB package. Continuing the example here is the check:

SQL> declare
  2   is_compatible boolean;
  3  begin
  4   is_compatible := dbms_pdb.check_plug_compatibility(
  5    pdb_descr_file => '/u01/app/oracle/admin/ora11/pdb/ora11.xml'
  6   );
  7  end;
  8  /
 
PL/SQL procedure successfully completed.

Now you need to query the new PDB_PLUG_IN_VIOLATIONS view since there will almost certainly be problems reported. In this case they were not severe though:

SQL> select message from pdb_plug_in_violations;
 
MESSAGE
--------------------------------------------------------------------------------
PDB plugged in is a non-CDB, requires noncdb_to_pdb.sql be run.
CDB parameter sga_target mismatch: Previous 4294967296 Current 8589934592
CDB parameter compatible mismatch: Previous 12.1.0 Current 12.1.0.0.0
CDB parameter pga_aggregate_target mismatch: Previous 1073741824 Current 2147483648

Because the violations found are only parameters, they can easily be changed. After matching the parameters on ORA11 and creating a new XML file the execution of the compatibility check looked good:

SQL> select message from pdb_plug_in_violations;
 
MESSAGE
--------------------------------------------------------------------------------
PDB plugged in is a non-CDB, requires noncdb_to_pdb.sql be run.

This is the standard message indicating that a dictionary conversion script needs to be run after plugging a non-CDB into a CDB as a PDB. A common problem found during testing was that assigning a different PDB name caused issues when trying to plug the PDB into the CDB. The safest way was to keep the PDB name the same as the database name.

Different options in the non-CDB and CDB will result in messages similar to these:

MESSAGE
-----------------------------------------------------------------------------------------
PDB plugged in is a non-CDB, requires noncdb_to_pdb.sql be run.
Database option APS mismatch: PDB installed version NULL. CDB installed version 12.1.0.1.0.
Database option CONTEXT mismatch: PDB installed version NULL. CDB installed version 12.1.0.1.0.
Database option DV mismatch: PDB installed version NULL. CDB installed version 12.1.0.1.0.

The action column in the view indicates that the options should be installed where missing. Alternatively you could plug the non-CDB into a different CDB with matching options.

Plugging the non-CDB into the CDB

The remaining tasks are straightforward and involve the invocation of the “create pluggable database” command discussed in Chapter 7. In the example the data files are already in the correct location and Oracle Managed Files are in use. These facts greatly simplify plugging in the database. Please refer back to Chapter 7 for more information on how to plug a database into a CDB, including file name conversion and the decision whether or not to copy the data files.

Before you can try to plug in the database you need to shut it down cleanly. Following along, the command to add the migrated database to the CDB is shown here:

SQL> create pluggable database ora11 as clone
  2  using '/u01/app/oracle/admin/ora11/pdb/ora11.xml'
  3  nocopy tempfile reuse;
 
Pluggable database created.

The operation is logged quite verbosely in the alert.log as well:

2013-08-14 19:44:34.887000 +01:00
create pluggable database ora11
using '/u01/app/oracle/admin/ora11/pdb/ora11.xml'
nocopy tempfile reuse
****************************************************************
Pluggable Database ORA11 with pdb id - 4 is created as UNUSABLE.
If any errors are encountered before the pdb is marked as NEW,
then the pdb must be dropped
****************************************************************
Deleting old file#1 from file$
Deleting old file#2 from file$
Deleting old file#3 from file$
Deleting old file#4 from file$
Adding new file#13 to file$(old file#1)
Adding new file#14 to file$(old file#2)
Adding new file#15 to file$(old file#4)
Marking tablespace #2 invalid since it is not present in the describe file
2013-08-14 19:44:35.958000 +01:00
Successfully created internal service ora11 at open
ALTER SYSTEM: Flushing buffer cache inst=0 container=4 local
****************************************************************
Post plug operations are now complete.
Pluggable database ORA11 with pdb id - 4 is now marked as NEW.
****************************************************************
Completed: create pluggable database ora11
using '/u01/app/oracle/admin/ora11/pdb/ora11.xml'
nocopy tempfile reuse

Please resist the temptation to open the PDB straight away! Remember from the plug-in violations view that you must run the noncdb_to_pdb script first! This is done by switching to the new PDB and executing the script:

SQL> alter session set container = 'ORA11';
 
Session altered.
 
SQL> @?/rdbms/admin/noncdb_to_pdb

After the script has completed it is safe to use the PDB as any other PDB in the Container Database.
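Afterwards you can open the PDB and verify its state, for example:

```sql
SQL> alter pluggable database ora11 open;
SQL> select name, open_mode from v$pdbs where name = 'ORA11';
```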

Moving the database into ASM

If the decision has been made to consolidate databases in ASM, then it is a good idea to consider such a move at the time of the migration. Migrating a database from a file system into ASM has been quite straightforward since Oracle 11g. In summary you create an image copy of the database in ASM, and then switch to the database copy. This is very similar to the recovery strategy where a full copy of the database is stored in the Fast Recovery Area which can be the target of an RMAN switch operation.

Before you are going to move the database into ASM you should first keep a log of where the files are located on the file system. Files eligible for a move into ASM are:

  • Data files
  • Temp files
  • Control files
  • Server parameter file

Some other file types such as certain Data Pump files can also be created in ASM, but many users keep those on a file system. If you are also moving an existing Fast Recovery Area into ASM then you need to drop all restore points and disable Flashback Database before doing so. This may have implications on existing back-out strategies for other changes, so better ensure there are no conflicts! Begin by checking your data and temp files. The below database has very few data and temp files, but please be aware that a higher number of physical files only extends the duration of the copy operation; otherwise a small database is moved into ASM using the same techniques as a large one.

SQL> select name from v$datafile union all
  2  select name from v$tempfile union all
  3  select value from v$parameter2
  4  where name in ('control_files','spfile') union all
  5  select member from v$logfile
  6  /
 
NAME
--------------------------------------------------------------------------------
/u01/oradata/CDB2/system01.dbf
/u01/oradata/CDB2/sysaux01.dbf
/u01/oradata/CDB2/undotbs01.dbf
/u01/oradata/CDB2/pdbseed/system01.dbf
/u01/oradata/CDB2/users01.dbf
/u01/oradata/CDB2/pdbseed/sysaux01.dbf
/u01/oradata/CDB2/MASTER/system01.dbf
/u01/oradata/CDB2/MASTER/sysaux01.dbf
/u01/oradata/CDB2/MASTER/MASTER_users01.dbf
/u01/oradata/CDB2/temp01.dbf
/u01/oradata/CDB2/pdbseed/pdbseed_temp01.dbf
/u01/oradata/CDB2/MASTER/temp01.dbf
/u01/app/oracle/product/12.1.0.1/dbhome_1/dbs/spfileCDB2.ora
/u01/oradata/CDB2/control01.ctl
/u01/fra/CDB2/control02.ctl
/u01/oradata/CDB2/redo03.log
/u01/oradata/CDB2/redo02.log
/u01/oradata/CDB2/redo01.log
 
18 rows selected.

The database CDB2 which is about to be migrated has Block Change Tracking enabled as well as Flashback Database using a Fast Recovery Area in /u01/fra. Both of these must be disabled prior to the migration into ASM, and any restore points must be dropped. Before dropping these, please ensure with the change approval board that they are not the life insurance for another change! During the next steps backups will be taken, creating image copies of the database’s components in ASM. The disk groups to be used are RECO for the Fast Recovery Area and DATA for everything else. Begin by taking a backup of the database using RMAN.
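Sketched for this example, disabling the features mentioned above could look as follows. The restore point name is hypothetical; list your own first:

```sql
-- List existing restore points before dropping anything
SQL> select name from v$restore_point;

-- Restore point name below is a placeholder for your own
SQL> drop restore point pre_migration;
SQL> alter database flashback off;
SQL> alter database disable block change tracking;
```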

RMAN> backup as copy incremental level 0 database
2> format '+DATA' tag 'CDB2_pre_migration';
 
Starting backup at 15.08.2013 17:53
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=10 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=/u01/oradata/CDB2/system01.dbf
output file name=+DATA/CDB2/DATAFILE/system.288.823542809 tag=CDB2_PRE_MIGRATION RECID=3 STAMP=823542819
[...]
input datafile file number=00010 name=/u01/oradata/CDB2/MASTER/MASTER_users01.dbf
output file name=+DATA/CDB2/E4002A8CC0102B3AE0436638A8C0D53A/DATAFILE/users.282.823542887 tag=CDB2_PRE_MIGRATION RECID=11 STAMP=823542888
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 15.08.2013 17:54
 
Starting Control File and SPFILE Autobackup at 15.08.2013 17:54
piece handle=/u01/fra/CDB2/autobackup/2013_08_15/o1_mf_s_823542889_90t1yso5_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 15.08.2013 17:54
 
RMAN> alter system switch logfile;
 
Statement processed

The process copies all the files as image copies into the specified disk group and switches the online redo log. So far so good! There are still components outside ASM which need to be moved as well, including the server parameter file (a text parameter file cannot reside in ASM, but a server parameter file can), the control files, and the online redo logs. The server parameter file can be backed up manually if you have not requested an automatic backup.

At this point the Oracle documentation recommends turning Flashback Database off and dropping all restore points if you are moving the Fast Recovery Area into ASM as well, which you probably should do. Furthermore, the Block Change Tracking feature must now be turned off if it was enabled, again as per the documentation. With the initial preparations completed you need to perform a clean shutdown of the database. In most cases “shutdown transactional” or “shutdown immediate” will be sufficient. What remains to be done is the move of the server parameter file into ASM, followed by the control file, and the online redo logs last. The previously taken backup of the server parameter file is the first to be migrated into ASM:

RMAN> startup mount;
[...]
RMAN> restore spfile to '+DATA/CDB2/spfileCDB2.ora';
 
Starting restore at 15.08.2013 18:01
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=237 device type=DISK
 
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: restoring SPFILE
output file name=+DATA/CDB2/spfileCDB2.ora
channel ORA_DISK_1: reading from backup piece /u01/fra/CDB2/autobackup/2013_08_15/o1_mf_s_823542889_90t1yso5_.bkp
channel ORA_DISK_1: piece handle=/u01/fra/CDB2/autobackup/2013_08_15/o1_mf_s_823542889_90t1yso5_.bkp tag=TAG20130815T175449
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 15.08.2013 18:01

To ensure that Oracle can make use of the spfile you should update the database definition in Oracle Restart to point to the spfile:

[oracle@server1 dbs]$ srvctl config database -db CDB2 | grep -i spfile
Spfile:
[oracle@server1 dbs]$ srvctl modify database -db CDB2 -spfile '+DATA/CDB2/spfileCDB2.ora'
[oracle@server1 dbs]$ srvctl config database -db CDB2 | grep -i spfile
Spfile: +DATA/CDB2/spfileCDB2.ora

The old server parameter file in the $ORACLE_HOME/dbs directory should be renamed: a server parameter file in the default location will most likely be chosen first when starting the database, and you want the instance to use the spfile in ASM.
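One way to do this, assuming the default file names from the example, is sketched below:

```sh
# Rename the file-system spfile so it cannot shadow the copy in ASM;
# file names follow the example and may differ on your system.
cd $ORACLE_HOME/dbs
mv spfileCDB2.ora spfileCDB2.ora.pre_asm

# Optional: a pointer pfile that redirects startup to the spfile in ASM
echo "SPFILE='+DATA/CDB2/spfileCDB2.ora'" > initCDB2.ora
```

The pointer pfile is a common convention for single-instance databases; with Oracle Restart in place the srvctl registration shown above already achieves the same effect.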

Restart the instance to force the use of the new spfile. To instruct the database to use the ASM disk groups from now on rather than the file system, you need to update the following initialization parameters related to Oracle Managed Files:

  • db_create_file_dest
  • db_recovery_file_dest_size (if a FRA was not in use previously)
  • db_recovery_file_dest
  • db_create_online_log_dest_n (optional)

In addition the control file needs to be restored into ASM, so while you are setting the OMF parameters you might as well change “control_files.” This can all be done in RMAN or SQL*Plus using the following examples:

RMAN> alter system set db_create_file_dest='+DATA';
 
Statement processed
 
RMAN> alter system set db_recovery_file_dest_size = 20G;
 
Statement processed
 
RMAN> alter system set db_recovery_file_dest = '+RECO';
 
Statement processed
 
RMAN> ALTER SYSTEM SET CONTROL_FILES='+DATA','+RECO' scope=spfile;
 
Statement processed

Now restart the instance once more in preparation for the restore of the control file; ensure you perform a “startup nomount.” In the example the control file is restored from the autobackup taken earlier. Alternatively you can restore from one of the original control files on the file system; if unsure, refer to the file list generated before the migration:

RMAN> restore controlfile from autobackup;
 
Starting restore at 15.08.2013 18:14
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=13 device type=DISK
 
recovery area destination: /u01/fra
database name (or database unique name) used for search: CDB2
channel ORA_DISK_1: AUTOBACKUP /u01/fra/CDB2/autobackup/2013_08_15/o1_mf_s_823542889_90t1yso5_.bkp found in the recovery area
channel ORA_DISK_1: looking for AUTOBACKUP on day: 20130815
channel ORA_DISK_1: restoring control file from AUTOBACKUP /u01/fra/CDB2/autobackup/2013_08_15/
o1_mf_s_823542889_90t1yso5_.bkp

channel ORA_DISK_1: control file restore from AUTOBACKUP complete
output file name=+DATA/CDB2/CONTROLFILE/current.278.823544041
Finished restore at 15.08.2013 18:14

Mount the database in preparation for the switchover operation. In the next step you perform the switchover and recovery:

RMAN> alter database mount;
[...]
RMAN> switch database to copy;
 
Starting implicit crosscheck backup at 15.08.2013 18:15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=13 device type=DISK
Finished implicit crosscheck backup at 15.08.2013 18:15
 
Starting implicit crosscheck copy at 15.08.2013 18:15
using channel ORA_DISK_1
Crosschecked 9 objects
Finished implicit crosscheck copy at 15.08.2013 18:15
 
searching for all files in the recovery area
cataloging files...
cataloging done
 
List of Cataloged Files
=======================
File Name: /u01/fra/CDB2/autobackup/2013_08_15/o1_mf_s_823542889_90t1yso5_.bkp
 
datafile 1 switched to datafile copy "+DATA/CDB2/DATAFILE/system.288.823542809"
datafile 3 switched to datafile copy "+DATA/CDB2/DATAFILE/sysaux.279.823542829"
datafile 4 switched to datafile copy "+DATA/CDB2/DATAFILE/undotbs1.284.823542885"
datafile 5 switched to datafile copy "+DATA/CDB2/E4002011F3732897E0436638A8C06C51/DATAFILE/system.285.823542883"
datafile 6 switched to datafile copy "+DATA/CDB2/DATAFILE/users.283.823542887"
datafile 7 switched to datafile copy "+DATA/CDB2/E4002011F3732897E0436638A8C06C51/DATAFILE/sysaux.280.823542845"
datafile 8 switched to datafile copy "+DATA/CDB2/E4002A8CC0102B3AE0436638A8C0D53A/DATAFILE/system.286.823542875"
datafile 9 switched to datafile copy "+DATA/CDB2/E4002A8CC0102B3AE0436638A8C0D53A/DATAFILE/sysaux.287.823542861"
datafile 10 switched to datafile copy "+DATA/CDB2/E4002A8CC0102B3AE0436638A8C0D53A/DATAFILE/users.282.823542887"
 
RMAN> recover database;
 
Starting recover at 15.08.2013 18:16
using channel ORA_DISK_1
 
starting media recovery
 
archived log for thread 1 with sequence 6 is already on disk as file /u01/oradata/CDB2/redo03.log
archived log file name=/u01/oradata/CDB2/redo03.log thread=1 sequence=6
media recovery complete, elapsed time: 00:00:01
Finished recover at 15.08.2013 18:16
 
RMAN> alter database open;

If you used Flashback Database and/or Block Change Tracking, you should enable these now. You can use the “report schema” command to verify the status of the database.

RMAN> report schema;
 
Report of database schema for database with db_unique_name CDB2
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    770      SYSTEM               ***     +DATA/CDB2/DATAFILE/system.288.823542809
3    660      SYSAUX               ***     +DATA/CDB2/DATAFILE/sysaux.279.823542829
4    85       UNDOTBS1             ***     +DATA/CDB2/DATAFILE/undotbs1.284.823542885
5    250      PDB$SEED:SYSTEM      ***     +DATA/CDB2/E40..C51/DATAFILE/system.285.823542883
6    5        USERS                ***     +DATA/CDB2/DATAFILE/users.283.823542887
7    590      PDB$SEED:SYSAUX      ***     +DATA/CDB2/E40...C51/DATAFILE/sysaux.280.823542845
8    260      MASTER:SYSTEM        ***     +DATA/CDB2/E40...53A/DATAFILE/system.286.823542875
9    590      MASTER:SYSAUX        ***     +DATA/CDB2/E40...53A/DATAFILE/sysaux.287.823542861
10   5        MASTER:USERS         ***     +DATA/CDB2/E40...53A/DATAFILE/users.282.823542887
 
List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    60       TEMP                 32767       /u01/oradata/CDB2/temp01.dbf
2    20       PDB$SEED:TEMP        32767       /u01/oradata/CDB2/pdbseed/pdbseed_temp01.dbf
3    20       MASTER:TEMP          32767       /u01/oradata/CDB2/MASTER/temp01.dbf
 
RMAN>

You can see that the permanent tablespaces have been switched, but not the temp files. This can be corrected by adding a new temp file to each temporary tablespace and dropping the existing one. The last components to be migrated are the online redo logs. The strategy to move those into ASM requires the creation of a new set of groups within ASM, and dropping the old ones. First you need to check the status of your current online redo logs, still on the file system.

SQL> select group#,thread#,sequence#,bytes,status
  2  from v$log;
 
    GROUP#    THREAD#  SEQUENCE#      BYTES STATUS
---------- ---------- ---------- ---------- ----------------
         1          1         13   52428800 INACTIVE
         2          1         14   52428800 CURRENT
         3          1         12   52428800 INACTIVE

A total of three new groups is required, which can be added using the alter database add logfile command. The difficult bit is dropping the existing groups: if a group is still active or current, Oracle will not let you drop it. Instead, you have to switch the logfile and try again.
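Sketched for this example, with file names taken from the listings above, the temp file swap and redo log migration might look as follows. Repeat the steps for each temporary tablespace and each old log group as required:

```sql
-- Temp files: with db_create_file_dest set, the new file is created in ASM
SQL> alter tablespace temp add tempfile size 60m;
SQL> alter tablespace temp drop tempfile '/u01/oradata/CDB2/temp01.dbf';

-- Redo logs: new groups follow the OMF destinations set earlier
SQL> alter database add logfile group 4 size 50m;
SQL> alter database add logfile group 5 size 50m;
SQL> alter database add logfile group 6 size 50m;

-- Old groups can only be dropped once they are INACTIVE
SQL> alter system switch logfile;
SQL> alter system checkpoint;
SQL> alter database drop logfile group 1;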

When you are done with the migration, you should take a full backup of the database. In this context a full backup is either a level 0 or a “full” backup as per the RMAN terminology.
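As a sketch, the new baseline backup might be taken like this (the tag name is purely illustrative):

```sql
RMAN> backup incremental level 0 database
2> tag 'CDB2_post_asm_migration';
```

A level 0 copy taken after the move ensures that subsequent incremental backups no longer reference the old file system locations.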

Being able to downgrade

One of the aspects rarely taken into consideration is that there might be a point in time after the migration when your senior management decides that the upgrade, although technically a success, was a failure, and you need to get back to square one. The most likely reason for this almost worst-case scenario is that a critical process has not been sufficiently tested.

Whatever the reason, for you to be able to downgrade to the previous release you must not have advanced the “compatible” initialization parameter. As soon as you set it to the higher release you are in the world of point-in-time restores or “forward fixing” (and little sleep). Data that has been entered into the system since the migration is then lost, which is most likely not what you want. Downgrades to the previous release are possible, and most users will consider them for the database stack. Downgrades of Clusterware, including Oracle Restart, are less common and often not needed. In most cases the rollback has to be performed as part of the migration weekend, not a number of business days later.

If a downgrade is a real possibility in your organization then it has to be tested with the same rigorous quality assurance as the upgrade. And since every upgrade and downgrade is different, you should test the process thoroughly, documenting anomalies and the expected outcome so that the DBA in charge during the day is able to see it through.

Summary

This chapter demonstrated possible methods to upgrade a database. Upgrading a database to version 12c is not very different from upgrading in previous releases. The chapter walked you through two different upgrade paths. The first one offered full control over the process using the manual method of upgrading. This method can potentially complete in less time compared to previous versions of the database thanks to the parallel execution of scripts during the upgrade process. Some database options such as Application Express extend this process significantly though, and not every step in the migration process can be executed in parallel.

The next upgrade method uses the wizard driven Database Upgrade Assistant. It can make the upgrade a much smoother process as it runs a lot of the checks prior to the upgrade entirely automatically and repeatedly. Once the database has been upgraded it is possible to plug the non-CDB into a CDB as part of the consolidation process after advancing the compatible parameter to 12c.
