CHAPTER 11


Backup and Recovery

No database solution—hosted or in-house—would be complete without proper backup and recovery procedures. Being able to get a system back at any time, within the limits of the service level agreement, is of paramount importance. Backup and recovery complement the disaster recovery solution you read about in Chapters 9 and 10. In many ways you cannot rely on just one of the two—backups or a DR site—since large databases might take too long to be restored and rolled forward. It is important to keep in mind that the shared hosting platform is indeed shared, including the backup infrastructure. If, for example, a tape library is shared across the platform, you can almost certainly expect delays accessing it. Depending on your service level agreement it can therefore be safer to activate the standby database and rebuild the failed primary afterwards. In all other cases a restore and recovery is of course sufficient.

Even with standby databases that can be activated in virtually no time, backups remain essential. Assume for a moment that a regulator wants a copy of a specific database at a point in time, maybe weeks or months back. Without backups, how would you supply the regulator with that data in a straightforward way?

This chapter focuses on the most important aspects of the backup infrastructure for a consolidated database environment, and specifically on Pluggable Databases as explained in Chapter 7. Once again, the solution to be put into place has to be simple and mostly automatic. Depending on your first-line support, the restore and recovery commands might need to be hidden behind a shell script or similar abstraction layer. The importance of having skilled developers for these scripts cannot be overstated, since the tool will be rolled out globally: any bug in the abstraction layer can potentially mean data loss. The chapter is aimed at these developers, mainly to be found in engineering, explaining the tools and methods for creating a suitable abstraction layer for the operations team to use. The resulting script is left as an exercise for the reader.

The primary tool for taking Oracle backups is Recovery Manager, or RMAN. While its appeal was limited in the initial 8.x releases, RMAN has become very mature and can handle practically any situation, hiding much of the work from the user. So-called user-managed backups, or manual backups, on the other hand are more difficult to take and require more knowledge of Oracle. The use of user-managed backups is strongly discouraged, both because of their complexity and the number of steps required: the more steps to be executed, the higher the probability of making a mistake.

An introduction to backups

Backups are fundamental building blocks and the bread and butter of every database administrator. Without a working backup, or a database recovery based on one, even the most prolific tuning expert would struggle to justify why he or she was there in the first place: if there is no database, there is no need for tuning. More seriously, there is a real need to be able to get your database back, and Oracle’s own Recovery Manager is the foremost tool to address the needs of backup and recovery. After a complete, consistent backup of the database has been taken you are protected from many types of failure affecting your applications. The most common problems that require a restore of parts of the database, or all of it, are media failures and user errors. Oracle’s inability to read or write a physical file such as a control file, data file, or online redo log is called a media failure. Media failures can be caused by malfunctioning host bus adapters or problems with a file system. Failing disks causing SAN storage to become unavailable are less common with enterprise storage arrays, which use elaborate protection mechanisms internally. A media failure most often requires the database administrator to move the affected files off that particular mount point to another one.

Bugs in applications or errors caused by application changes implemented during maintenance windows can cause logical corruption. If that corruption cannot be corrected by rolling the changes back, a restore operation is often unavoidable. Not every change comes with a rollback script, and many rollback scripts have received little attention and testing.

Finally, the Oracle database might sometimes discover corrupted blocks requiring block media recovery. In this case, instead of restoring and recovering the whole data file, only the affected blocks need to be restored and recovered. Again, a working backup strategy is required if you want to perform Block Media Recovery, or BMR for short. Block Media Recovery has changed very little since the last major release and therefore will not be covered here.

RMAN backups

Recovery Manager stores backups in two formats: image copies of data files or so-called backup sets. An image copy, as the name suggests, is a one-to-one copy of the source file. A backup set on the other hand is an RMAN-specific format containing one or more backup pieces. A backup set can contain all files needed to restore a database, such as the server parameter file, control files, archived redo logs, and data files. Database administrators eyed backup sets suspiciously at first since RMAN is the only tool that can access them, but over time this suspicion has subsided. Backup sets benefit from unused block compression and null block compression, potentially reducing the size of the backup set. Additionally, you have the option to apply binary compression to backup sets.

Incremental vs. full backups

Two different kinds of backups exist in RMAN: incremental and full backups, the latter being the default. A full backup, as the name suggests, backs up every component of the database needed for restoring it. Since full backups always include everything needed for a complete restore, they can become real space hogs. Incremental backups are different: rather than backing up everything each time, they only back up what has changed since the last backup. Since there has to be a starting point, the initial incremental backup will, like a full backup, include all the physical structures required to restore the database. This backup is not called a full backup; it is referred to as an incremental backup of level 0, or base backup. Unfortunately a full backup as described earlier cannot be used as a level 0 backup. Subsequent backups will be of level 1, building on the foundation laid by the level 0 backup and including only what has changed since the level 0 backup was taken. Incremental backups can further be divided into cumulative and differential backups, again with the latter as the default. A cumulative backup will back up all changes since the last level 0 backup. A differential incremental backup will only back up changes since the last backup, regardless of whether that was the level 0 or a previous level 1 backup. Cumulative backups are normally larger in size than differential backups.

Cumulative backups can offer an advantage over differential backups. If your backup strategy consists of a level 0 backup on the weekend and cumulative incremental backups on the weekdays, then your restore only requires the level 0 backup plus the latest cumulative backup. With differential backups you have to restore the level 0 backup and then apply each differential backup taken since. In summary you are trading recovery time against disk space. On the other hand, if a differential backup is small in size there is merit in using it instead.
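
As a quick sketch, the corresponding RMAN commands are straightforward. The first command takes the level 0 base backup, the second a differential level 1 backup (the default), and the third a cumulative level 1 backup:

RMAN> backup incremental level 0 database;
RMAN> backup incremental level 1 database;
RMAN> backup incremental level 1 cumulative database;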

The duration of an incremental backup directly depends on the number of blocks that have changed. Instead of reading every data file to ascertain which blocks have changed since the last backup, Oracle can be instructed to use a change tracking file instead. This so-called block change tracking (BCT) file contains bitmaps that identify which blocks have changed since the last backup. A maximum of eight bitmaps is maintained, covering the last eight backups. You should keep this in mind when designing your backup strategy; the eight bitmaps are usually sufficient for a week’s worth of backups. After the eighth bitmap has been used Oracle will wrap around and start at the beginning of the change tracking file again, potentially degrading the performance of the backup.
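
Enabling block change tracking is a single statement. In the following sketch the tracking file is created as an Oracle Managed File, which assumes that db_create_file_dest is set; alternatively you can name the file explicitly with the "using file" clause:

SQL> alter database enable block change tracking;
 
Database altered.
 
SQL> select status, filename from v$block_change_tracking;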

Online and offline backups

An online backup, which should really be the default, backs the database up while it is up and running. In order to take an online backup the database must be in archivelog mode. In other words, used online redo logs must be safely stored away by one of the archiver processes. An online backup is by definition inconsistent. This means that Oracle has to use the (backed up) archived redo logs to bring the database to a consistent state, a process called media recovery.

Offline backups, or RMAN backups taken while the database is in mount mode, are consistent if the database has been shut down cleanly. However, the shutdown and the subsequent time needed to take the backup constitute a service outage; no user can access the database. For many scenarios this is hugely undesirable. The only upside to consistent backups is that no media recovery is required. Offline backups are usually taken only for databases which do not run in archivelog mode, and there should not be many of those anyway.

image Caution  Not using archivelog mode for a database is definitely a Big Problem for anything other than play databases! You are almost guaranteed to lose data when operating in noarchivelog mode; the question is not whether you will lose data, but when.
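
Switching a database to archivelog mode is quick, although it requires a short outage. A minimal sketch of the procedure looks like this:

SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
 
Database altered.
 
SQL> alter database open;
 
Database altered.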

Backup destinations

Another important aspect is the destination of the backup. Oracle offers two different destinations: either disk, which is the default, or a media manager. The media manager is accessed via a dedicated API Oracle makes available to third-party vendors. Using this API any third party can write software to interact with RMAN and storage libraries. Tape libraries and their virtual cousins are the most common destination for longer-term backups. The beauty of this approach is that, except for a short initialization section at the beginning of an RMAN session, all commands are portable across environments.

The configuration of your media manager can be quite time consuming. The easiest way to deploy it is via your platform’s package manager. If you are lucky and your vendor understands the platform well you get more than a simple installation script; instead you get a package you can add as a dependency to your standard database server operating system build. There are so many configuration options for tape backups that they have not been included in this chapter. Very often, however, you can simply substitute the initialization of disk channels with tape channels (SBT, for System Backup to Tape) to back up to tape instead.
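
As a minimal sketch, and keeping in mind that the PARMS string is entirely vendor specific and shown here only as a placeholder, the switch from disk to tape often boils down to configuring SBT channels and then running the same backup commands as before:

RMAN> configure channel device type sbt parms 'ENV=(...)';
RMAN> backup device type sbt database plus archivelog;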

Using a dedicated media manager usually involves extra cost for the software, which is why many environments employ a two-staged backup strategy: the RMAN database backup goes to a dedicated area on disk, from where it is backed up to tape using a “normal” file system backup. Disk in this respect can, and maybe should, be on a different host. In the UNIX world NFS seems to be a popular choice, and Oracle offers hardware solutions for this purpose. Most backups to disk are likely to complete as quickly as, if not quicker than, tape backups. If a low-enough storage tier is selected for them, then the cost advantage of tape drives can also partially be offset. The downside to the two-staged approach is that for a backup which is no longer on disk you first have to restore from tape to disk. Hopefully your backup area can hold more than just one backup, otherwise you might have to remove the existing backup first. After the tape restore you have to initiate the RMAN restore and recover commands. All of these tasks take time, and as with any critical operation they should be documented well and practiced regularly. It is very important to stress that there is little worse during a restore than finding out that a few crucial archived logs have not been restored from tape to disk.

Which approach you take will most likely depend on budget. If your organization has a favorable licensing deal with your backup vendor, then a direct integration of RMAN and the media manager is desirable. In most other cases you might end up with the more important databases being backed up to tape directly, whereas the lower-tier databases will probably be backed up to disk first, then to tape.

Complete versus incomplete recovery

The scope of a recovery can be either full or partial. The latter is also known as incomplete recovery, or point-in-time recovery. A full recovery, as the name suggests, is applied when a media failure has rendered a mount point unusable. The reasons for this are manifold: the failure of a SAN switch or logical corruption at the file system level can cause a file system to be downgraded to “read-only” mode at best, or automatically dismounted at worst. In either case the database will not be able to write to its files, causing it to crash.

If all the information required for a full recovery is available you can restore the files from the lost mount point to another file system with sufficient free space and fully recover the database. A full recovery ends with the command to open the database without specifying the resetlogs clause.
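
Assuming the database is in mount mode and data file 6 resided on the failed mount point, a minimal sketch of such a recovery (the file number and the new location are assumptions) could look like this:

RMAN> run {
2> set newname for datafile 6 to '/u02/oradata/CDB1/users01.dbf';
3> restore datafile 6;
4> switch datafile all;
5> recover database;
6> alter database open;
7> }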

An incomplete recovery on the other hand is required in situations where, for example, a software release has gone badly wrong and the Flashback Database feature has not been enabled. Another situation requiring an incomplete recovery is a user error, or a bug in the application using the database to store and retrieve data. In some circumstances it is impossible to create a fix for the logical corruption, and the database must be restored to a point in time prior to the failure.

The database incarnation

Whenever you open the database with the resetlogs option Oracle creates a new incarnation. An incarnation, as per the Oracle documentation, is a separate version of the database. The original incarnation of each new database is incarnation 1, and others are added as and when needed. Using Recovery Manager it is possible to list the incarnations of a database:

RMAN> list incarnation;
 
List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- --- ---------- ----------
1       1       CDB2     573673282        PARENT  1720082    20.09.2013 09:39
2       2       CDB2     573673282        CURRENT 1764438    20.09.2013 10:24

The resetlogs system change number is tracked with every incarnation, as is the date. It is important to understand that the counter for the sequence numbers of the online redo logs is reset with each incarnation. This posed a serious problem up to Oracle 10g, when newly created archived redo logs could potentially overwrite archived logs with the same sequence number from an earlier incarnation. Unless you decide otherwise, Oracle now includes the resetlogs change number in the format for the archived redo logs, avoiding the problem. Also beginning with Oracle 10g it became possible to recover across a resetlogs operation, a feature which relies on the inclusion of the resetlogs change number in the archived log file name.
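
You can see this in the default value of the log_archive_format initialization parameter, in which %r denotes the resetlogs identifier. Note that the parameter only applies to non-OMF archive log destinations; the output below shows the default:

SQL> show parameter log_archive_format
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_format                   string      %t_%s_%r.dbf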

The Fast Recovery Area

The Fast Recovery Area, or FRA, is one of the greatest additions to the Oracle database in recent years. Although it started its existence as the Flash Recovery Area, Oracle renamed it in 11g to avoid further confusion with other technology using “flash” in its name. The recovery area is a central location for many backup-related files, such as:

  • Archived redo logs
  • Backup pieces
  • Image copies
  • Flashback logs

Additionally you often find a multiplexed online redo log per thread and group as well as a multiplexed controlfile in it. The use of the FRA is so simple, and the benefit of having a standard location for important backup files so big, that there is actually no reason not to use it. All you need to do is define the size and the location of the FRA, shown here for ASM:

SQL> alter system set db_recovery_file_dest_size = 100G sid='*' scope='both';
 
System altered.
 
SQL> alter system set db_recovery_file_dest = '+RECO' sid='*' scope='both';
 
System altered.

For a FRA on a file system you simply point to the directory you set aside for it as shown here:

SQL> alter system set db_recovery_file_dest = '/u01/fast_recovery_area' scope='both';
 
System altered.

It is important that you do not include an identifier for the database in the path. All files in the FRA are Oracle Managed Files, and a database identifier is appended automatically for you. Complaints from the system administrator that a common location or mount point for more than one database is a risk can be countered by pointing out that every database has a set quota on that mount point. Unlike the diagnostic destination, which—if shared—can become a problem when a single database suddenly fills the file system to 100% with core dumps or other unforeseen problems, with the FRA you specifically tell Oracle how much space is available. Querying the v$recovery_file_dest view shows the administrator explicitly how much space is used, and how much of it is reclaimable, which leads to the next great feature of the FRA.
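
A quick query against v$recovery_file_dest, sketched here, shows the quota, the space currently in use, and how much of that is reclaimable:

SQL> select name, space_limit/1024/1024/1024 as limit_gb,
  2  space_used/1024/1024/1024 as used_gb,
  3  space_reclaimable/1024/1024/1024 as reclaimable_gb
  4  from v$recovery_file_dest;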

If your system experiences space pressure in the archive log destination and you are not using the FRA, then there might come a point where there is no space left, leaving the archiver stuck, which is a common yet very undesirable situation. When using the FRA this problem can partly be mitigated by the automatic space management features. To understand them it is important to understand the types of files in the FRA. Files which are part of the active database, such as online redo log members and a multiplexed controlfile, are termed “permanent” files in the FRA. The other files in the FRA are called transient. Only transient files are eligible for automatic removal, and only under certain conditions. You need not worry that permanent files are deleted; they are not touched by the clean-up mechanism. When considering which files can be deleted, the retention policy plays a very important role. The RMAN retention policy defines the number of backups you need to keep, or alternatively a time period for which backups are required, before they are purged.

image Note  You can learn more about the retention policy later in this chapter.

Backups that are no longer needed may be removed automatically. Information about reclaimable space is available in the v$flash_recovery_area_usage view, as shown here:

SYS@CDB$ROOT> select FILE_TYPE,PERCENT_SPACE_USED,PERCENT_SPACE_RECLAIMABLE
  2  from v$flash_recovery_area_usage
  3  where PERCENT_SPACE_RECLAIMABLE <> 0;
 
FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE
----------------------- ------------------ -------------------------
ARCHIVED LOG                          4.86                      4.86
BACKUP PIECE                         10.37                       .42

In this example all of the archived redo logs have just been backed up, and there is a redundant backup piece as well leading to some reclaimable space. If needed, that space is freed automatically.

Flashback logs, which cannot be managed outside the Fast Recovery Area, are always managed by Oracle. If space pressure occurs these logs are among the first to be reused or deleted, even if that means you cannot meet your flashback retention target. If you rely on flashback logs to rewind the database to a point in time, you need to set a guaranteed restore point. In this case you should ensure that there is actually enough space in the FRA to accommodate all the required archived logs and flashback logs, otherwise you will run out of space and your database will pause until more space is provided.

When provisioning new databases you should strongly consider the use of the Fast Recovery Area for the reasons just outlined. The extra work to enable it is marginal—setting two initialization parameters—and the maintenance benefit is enormous. As an additional plus you achieve standardization of your on-disk backup location.

Logical backups

In addition to using RMAN it is possible to take logical backups. These backups are taken using the expdp (Export Data Pump) utility. The biggest downside to logical backups is that you cannot perform a complete restore with them. In other words, restoring a Data Pump export file requires a database which is up and running. The effort to create a shell database and then load the data into it is often too time consuming for real-life recovery situations. Data Pump exports are, however, ideally suited for developers to back up changes to their own schemas in a development environment, keeping a reference of code before it was changed. Export Data Pump—unlike its predecessor—creates the dump file on the database server, not on the client. Once a Data Pump export has been taken you should ensure that it is subsequently included in the regular backups as well. It is advisable to choose unique file names, possibly with a time stamp in the name.

Taking a logical backup is quite simple, but needs a little bit of preparation during the database build. First, you need to have access to a directory object. The easiest way to create one is to do so during the build. If a directory has not been created, you can do so at any time as shown in the code example below.

SYSTEM@PDB1> create directory EXP_DIR as '/u01/oradata/CDB1/pdb1';
 
Directory created.

Note how the directory is created in the PDB. In the next step you define which elements of the PDB you want to export. A convenient way to do so is to use a parameter file. This file contains name=value pairs and can be used to store the configuration for specific jobs. An example parameter file exporting the complete metadata of a PDB could have the following contents:

content=metadata_only
directory=EXP_DIR
full=y
job_name=daily_metadata_export

The log file name and dump file name have deliberately not been added to the parameter file. They can be passed dynamically at run time to avoid overwriting existing files. A purge job needs to be included in the daily export task to prevent the disk from filling up. The find command is a great tool to identify files with a creation timestamp of several days ago and can at the same time be instructed to remove the files it found; a sketch follows the expdp example below.

[oracle@server1]$ expdp /@pdb1 parfile=exp_pdb1.par 
> dumpfile=exp_pdb1_$(date +%F).dmp logfile=exp_pdb1_$(date +%F).log

image Note  The funny looking connect string uses an Oracle Secure External Password Store. It allows you to store the connection credentials for a TNS alias in a securely encrypted file. Chapter 10 explains the password store in more detail.
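
The purge of older export files mentioned above could be implemented with a find call along these lines (a sketch; the directory and the seven-day retention are assumptions):

[oracle@server1 ∼]$ find /u01/oradata/CDB1/pdb1 -type f -name 'exp_pdb1_*' -mtime +7 -delete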

What you will certainly learn to appreciate is that the dump file you get can be used for all sorts of transformations. Storage options can be skipped, and data, data files, tablespaces, schemas, and table names can be changed with the REMAP_DATA, REMAP_DATAFILE, REMAP_TABLESPACE, REMAP_SCHEMA, and REMAP_TABLE options. More importantly, you can extract all DDL statements into a flat file if you specify the sqlfile option. Using the INCLUDE or EXCLUDE options you can limit yourself to individual objects. If you are interested in the table DDL for a specific schema, the following will work for you:

[oracle@server1 ∼]$ impdp /@pdb1 dumpfile=exp_2013-10-04.dmp 
> logfile=imp_job001.log directory=exp_dir sqlfile=tables_user1.sql
> include=table schemas=user1

The result is the same as you would get from the dbms_metadata package. As with the package you can further transform the output using the TRANSFORM clause in addition to the options shown in the example. This way you could save the day for a developer who needs to revert to last week’s code he forgot to check into the version control system.

Additional technology available

Backup and recovery are all well and good. Sometimes, however, there is only a small problem, and a full restore plus media recovery would simply take too long. Beginning with Oracle 10g further options are available to get you out of a precarious situation. Most of these technologies are prefixed “flashback”, which makes them quite confusing at first. In a backup and recovery scenario the following options can become useful, especially if the fault is discovered quickly.

  • Flashback table
  • Flashback database
  • Storage snapshot

Flashback table is a very nice feature relying on UNDO information in the undo tablespace to return a table that has accidentally been changed to its previous state. Note that the undo information actually has to still be available in the undo tablespace for this to work. Furthermore, your table must have row movement enabled. Flashback table is demonstrated in the following code snippet. First, the table is created:

USER1@PDB1> create table fbtab1 enable row movement
  2  as select * from all_objects;
 
Table created.
 
USER1@PDB1> select count(1) from fbtab1;
 
  COUNT(1)
----------
     89646

The time was 16:12:35 when something bad (exemplified next) happened:

16:12:35 USER1@PDB1> delete from fbtab1 where object_id < 10000;
 
9645 rows deleted.
 
USER1@PDB1> commit;
 
Commit complete.

Almost 10,000 rows have been deleted by accident, causing logical corruption. Luckily the user picked up the phone to the DBA on duty and reported the problem. Using the reported time and the useful timestamp_to_scn function the DBA could use the flashback table command to return the table to what it was before the accident, and no damage was done.
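
Converting the reported time into an SCN could look like this (a sketch; the date portion of the timestamp is an assumption):

DBA1@PDB1> select timestamp_to_scn(
  2  to_timestamp('20.09.2013 16:12:35','dd.mm.yyyy hh24:mi:ss')) as scn
  3  from dual;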

DBA1@PDB1> flashback table user1.fbtab1 to scn 983544;
 
Flashback complete.
 
DBA1@PDB1> select count(1) from user1.fbtab1 where object_id < 10000;
 
  COUNT(1)
----------
      9645
 
DBA1@PDB1> select count(1) from fbtab1;
 
  COUNT(1)
----------
     89646

Flashback table can be extended to cater to dropped objects as well. Assume for a moment that instead of a delete the user issued a drop table command:

USER1@PDB1> drop table fbtab1;
 
Table dropped.

The user might be lucky if the 10g “recycle bin” feature was active at the time. If so, just like on your desktop operating system, you can move things out of the recycle bin and back into active service:

USER1@PDB1> select object_name,original_name,type from user_recyclebin;
 
OBJECT_NAME                    ORIGINAL_NAME                  TYPE
------------------------------ ------------------------------ -------------------------
BIN$yZfeIpNmMcHgQwqYqMAW7Q==$0 FBTAB1                         TABLE

You or your DBA on duty can now restore the table to before the drop using a variation of the flashback table command:

DBA1@PDB1> flashback table user1.FBTAB1 to before drop;
 
Flashback complete.

Sometimes, however, it is not possible to correct a problem, and a point-in-time recovery is needed. This can become a time-consuming process as it involves a restore of the last level 0 backup followed by potentially further incremental backup restore operations until finally you can apply media recovery. Flashback database, another feature introduced in 10g, can help in such situations since it merely “rewinds” the database instead. It requires the database to be in flashback mode. When flashback is enabled the database writes information into an additional set of log files, called flashback logs. Writing flashback logs might have negative performance implications and should be tested first! Flashback database does not rely on the flashback logs alone; the archived redo logs are equally required to perform the operation.

Flashback database is great prior to performing software releases as it allows you to get the database back to where it was before the release, saving you from having to apply rollback scripts, which are usually poorly tested at best. Instead of having to memorize when the release started it is best to set a guaranteed restore point beforehand to simplify the rollback operation. The guaranteed restore point prevents a situation whereby the information required to perform the flashback database call has been overwritten, in a way similar to the infamous ORA-1555 (snapshot too old) error. Create the guaranteed restore point before the release, as shown here. Remember that the restore point has to be created in the CDB root.

SYS@CDB$ROOT> create restore point pre_release guarantee flashback database;

Then proceed and deploy the release. If required you can perform a flashback database operation, which is very straightforward, but be aware that it affects all PDBs in the CDB. If this is not possible or desirable refer to the PDB Point-In-Time Recovery section later in this chapter.

SQL> shutdown abort
[...]
SQL> startup mount
[...]
SQL> flashback database to restore point pre_release;
 
Flashback complete.
 
SQL> alter database open resetlogs;
 
Database opened.

image Note  Not every operation can be undone using flashback database; some tasks like dropping a data file or certain nologging operations cannot be undone and require a full point-in-time recovery. In other words: test the flashback operation during a release in a non-production environment and read up on the feature’s limitations in the documentation. In Data Guard configurations flashback database has additional consequences.

The final topic worth mentioning is similar to the flashback database feature just discussed. Rather than having Oracle perform the rewind operation you could rely on the storage subsystem to perform this task for you. Your storage administrator can create snapshots at the array level before important operations, allowing the array to reset the database to the state it was in before the release or batch run. Which technique works best for you depends on the relationship between storage administrators and DBAs. If the DBAs like to be in control of the process to the greatest extent, then it makes a lot of sense to rely on Oracle’s restore points and the flashback database feature.

Noteworthy new RMAN features in Oracle 12.1

New backup and recovery related features in Oracle 12.1 were deliberately not covered in Chapter 2 in order to keep the information in context. After the introduction to Recovery Manager it is now time to provide more detail about some interesting new RMAN 12c features. The next sections present a selection made by the author with regard to the consolidation and automation theme found throughout the book. There may be more interesting features for you to explore in the “What’s new in Backup and Recovery” section of the Backup and Recovery User’s Guide.

The SYSBACKUP role

Similar to the other roles that have been mentioned throughout the book, the SYSBACKUP role addresses a security concern. The power of the SYSDBA role is great indeed, and the possibility of using it to see any data in the database can be seen as too much of a risk. Besides, why should the backup operator be able to see the data he or she is backing up? The solution is to use the SYSBACKUP role. You can find out whether the corresponding account has been created by querying v$pwfile_users:

SYS@CDB$ROOT> select * from v$pwfile_users;
 
USERNAME                       SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM     CON_ID
------------------------------ ----- ----- ----- ----- ----- ----- ----------
SYS                            TRUE  TRUE  FALSE FALSE FALSE FALSE          0
SYSDG                          FALSE FALSE FALSE FALSE TRUE  FALSE          1
SYSBACKUP                      FALSE FALSE FALSE TRUE  FALSE FALSE          1
SYSKM                          FALSE FALSE FALSE FALSE FALSE TRUE           1

As you can see, all the different roles exist in the password file. You can use the existing SYSBACKUP user if you like, but nothing prevents you from choosing a different (common) user account; just grant it the SYSBACKUP role and you are ready to go.

First I would like to demonstrate that the privileges of the SYSBACKUP user have indeed been tightened up. Connecting to the database as SYSBACKUP shows far fewer privileges compared to SYSDBA:

SQL> connect sysbackup as sysbackup
Enter password:
Connected.
SYSBACKUP@CDB$ROOT> select count(1) from session_privs;
 
  COUNT(1)
----------
        14
 
SYSBACKUP@CDB$ROOT> select count(1) from session_roles;
 
  COUNT(1)
----------
         2
 
SYSBACKUP@CDB$ROOT> select role from session_roles;
 
ROLE
-------------------------------------------------------------
SELECT_CATALOG_ROLE
HS_ADMIN_SELECT_ROLE

In comparison SYSDBA has 233 different session privileges and no granted roles. As SYSBACKUP you cannot peek into user tables you do not have permissions for:

SQL> alter session set container=pdb1;
 
Session altered.
 
SQL> select * from user1.FBTAB1;
select * from user1.FBTAB1
                    *
ERROR at line 1:
ORA-01031: insufficient privileges

In case you need to know whether you are connected as SYSDBA or SYSBACKUP you can use the sys_context function:

SQL> select sys_context('userenv','authenticated_identity') from dual
  2  /
 
SYS_CONTEXT('USERENV','AUTHENTICATED_IDENTITY')
--------------------------------------------------------------------------------
SYSBACKUP

A connection as SYSBACKUP must explicitly include the “AS SYSBACKUP” clause. This is a slight deviation from the long-standing rule that the role did not have to be supplied in the target or auxiliary arguments to RMAN. To connect as SYSBACKUP the following works:

[oracle@server1 dbs]$ rman target '"sysbackup@cdb1 as sysbackup"'
 
Recovery Manager: Release 12.1.0.1.0 - Production on Fri Sep 20 20:57:23 2013
 
Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
 
target database Password:
connected to target database: CDB1 (DBID=800340993)
 
RMAN>

The placement of single and double quotes is important here. Nothing changes if you would like to connect to the database as SYSDBA, which is still the default: simply do not specify “as sysdba”, just as before.

Ability to execute SQL directly in RMAN

RMAN has been improved significantly: you can now use it to enter SQL commands directly without having to switch back and forth to SQL*Plus. There were so many situations where a screen session was needed and CTRL-A-space was pressed far too often just to enter a command here or there. Worse, sometimes a restart of the database required you to reconnect. Those times are over! One of the operations most familiar to the author is checking whether the database is in flashback mode. This is no longer a problem at all:

RMAN> startup mount
 
connected to target database (not started)
Oracle instance started
database mounted
 
Total System Global Area    1603411968 bytes
 
Fixed Size                     2260920 bytes
Variable Size                452984904 bytes
Database Buffers            1140850688 bytes
Redo Buffers                   7315456 bytes
starting full resync of recovery catalog
full resync complete
 
RMAN> select open_mode,log_mode,flashback_on from v$database;
 
OPEN_MODE            LOG_MODE       FLASHBACK_ON
-------------------- -------------- ------------------
MOUNTED              ARCHIVELOG     NO

This is one of the enhancements that put a large smile on the database administrator’s face: nothing essential, since we could check in SQL*Plus before, but such a convenience that you do not want to miss it.

Enhanced over-the-network features

Over-the-network capabilities existed before Oracle 12c in the form of the “from active database” clause in RMAN, but only for a limited number of use cases. Duplication from the active database includes the copy of the source database over the network.

This functionality has been enhanced in 12c in that the auxiliary instance can use existing backup sets on the target database (host) to reduce the overhead of the network copy. If you would like to try this you should use the “using (compressed) backupset” or “section size” clause and allocate at least as many auxiliary channels as there are target channels.
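
A sketch of such a duplication, with the new database name, the service connections, and the section size all being assumptions, could look like this:

RMAN> duplicate target database to cdb3
2> from active database
3> using compressed backupset
4> section size 2g;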

Beginning with Oracle 12c you can also make use of backup sets created on the fly by a remote database, such as a physical standby, to restore or recover another database. The backup sets are transparently transferred over the network to the destination host, but instead of the “from active database” clause you use “from service”. The use of a TNS service name obviously requires a corresponding service name definition in the TNS_ADMIN directory. The main use case provided by the Oracle documentation is the recovery of a standby database using an incremental backup. The ease of use is demonstrated by the following example, issued while connected to the standby database. For this example to work the standby database CDB2 had media recovery stopped while the primary database—CDB1—was instructed not to transport redo. While the standby was effectively on hold, the primary performed log switches and a new tablespace “apress” was created. The incremental backup is then initiated from the primary, which in this case is the auxiliary database. Although the same end result could be achieved with Oracle 11.2 as well, it was a more complex process with room for error.

The first step is to connect to the databases in the correct way, that is, the standby database is your target while the primary is the auxiliary instance:

[oracle@server1 ∼]$ rman target sys@cdb2 auxiliary sys@cdb1
 
Recovery Manager: Release 12.1.0.1.0 - Production on Fri Sep 20 22:01:34 2013
 
Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
 
target database Password:
connected to target database: CDB1 (DBID=800340993, not open)
auxiliary database Password:
connected to auxiliary database: CDB1 (DBID=800340993)
 
RMAN>

Once connected (remember to watch out for “reversed” primary and auxiliary) you can initiate the backup and recover operation:

RMAN> recover database from service cdb1 using compressed backupset;
 
Starting recover at 20.09.2013 22:01
using channel ORA_DISK_1
skipping datafile 5; already restored to SCN 1735864
skipping datafile 7; already restored to SCN 1735864
skipping datafile 8; already restored to SCN 1864611
skipping datafile 9; already restored to SCN 1864611
skipping datafile 10; already restored to SCN 1864611
skipping datafile 11; already restored to SCN 1864611
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service cdb1
destination for restore of datafile 00001: +DATA/CDB2/DATAFILE/system.290.826664973
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service cdb1
destination for restore of datafile 00003: +DATA/CDB2/DATAFILE/sysaux.269.826664973
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service cdb1
destination for restore of datafile 00004: +DATA/CDB2/DATAFILE/undotbs1.291.826664973
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service cdb1
destination for restore of datafile 00006: +DATA/CDB2/DATAFILE/users.288.826664993
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
 
starting media recovery
 
media recovery complete, elapsed time: 00:00:10
Finished recover at 20.09.2013 22:12

The output lines reading “using compressed network backup set from service cdb1” show that the backup sets are fetched over the network. This new method is in stark contrast to the previous approach, where you needed to create an incremental backup from a specific SCN, transfer it to the standby database host, and apply it manually. The option to compress the backup sets helps conserve bandwidth. Using multiple channels and the section size clause should make the restore a lot quicker.

Instead of recovering the whole database you can limit the scope to data files, tablespaces, and control files; both the “restore” and “recover” command have been enhanced to use the “from service” clause.
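
For larger databases you can combine these options. A sketch, in which the channel count and section size are assumptions, might look like this:

RMAN> run {
2> allocate channel c1 device type disk;
3> allocate channel c2 device type disk;
4> recover database from service cdb1 using compressed backupset section size 512m;
5> }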

Another use case is the restore of a lost data file on the primary. Using the standby database as the source you can restore the data file over the network. For the next example assume that the primary database crashed because of the loss of data file number 1. When the DBA tried to restart the instance after the crash he saw this error:

SYS@CDB$ROOT> startup
ORACLE instance started.
 
Total System Global Area 1603411968 bytes
Fixed Size                  2288872 bytes
Variable Size             520094488 bytes
Database Buffers         1073741824 bytes
Redo Buffers                7286784 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: '+DATA/CDB1/DATAFILE/system.265.826666055'

The logical next step to restore service is to restore the data file from a backup. This can be a lengthy process depending on how quickly the backup medium can be loaded. Another approach is to copy the file over the network from the standby database. As before you have the option to use multiple channels and the section size clause to reduce the restore time. In this example the system tablespace is small and such optimizations were not needed. Connecting to the target database CDB1 (primary role) and auxiliary database CDB2 (physical standby) you simply transfer the file across and recover it in one go:

[oracle@server1 ∼]$ rman target sys@cdb1 auxiliary sys@cdb2
 
Recovery Manager: Release 12.1.0.1.0 - Production on Fri Sep 20 22:22:59 2013
 
Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
 
target database Password:
connected to target database: CDB1 (DBID=800340993)
auxiliary database Password:
connected to auxiliary database: CDB1 (DBID=800340993, not open)
 
RMAN> run {
2> restore datafile 1 from service cdb2;
3> recover datafile 1;
4> alter database open;
5> }
 
Starting restore at 20.09.2013 21:39
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=21 device type=DISK
 
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service cdb2
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to +DATA/CDB1/DATAFILE/system.265.826666055
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 20.09.2013 21:39
 
Starting recover at 20.09.2013 21:39
using channel ORA_DISK_1
 
starting media recovery
media recovery complete, elapsed time: 00:00:01
 
Finished recover at 20.09.2013 21:39
 
Statement processed

Although the technology is not entirely new—you could achieve a similar result in 11.2—the syntax is very straightforward, and especially with OMF the switch to the new data file name is done automatically in the control file. Both of the commands you just read about are a huge convenience boost.

Point-in-Time Table Recovery

Despite all the great tools available to the database administrator to prevent “accidental” changes to a table or set of tables, mistakes do happen. Sometimes it is impossible to recover a table by other means. This could be because a Flashback Version Query has run out of undo, or because a table has been dropped with the “purge” keyword. Or an over-eager DBA has issued a “purge dba_recyclebin” command to free up space... The list is long. In previous Oracle versions you almost always had to perform a point-in-time recovery in such cases. If your data was self-contained within a tablespace then you were lucky and could perform a Tablespace Point-in-Time Recovery, or TSPITR. But sometimes objects were scattered across multiple tablespaces, making a TSPITR impossible. In that case you had to bite the bullet and perform a full point-in-time recovery. Not necessarily in your production environment, though! A restore of a backup from before the accident on a different host may enable your application developers to extract enough data to produce a set of scripts and export dumps to rectify the situation.

The inevitable question this build-up of tension should provoke is: is there no easier way to do this? With Oracle 12c there is indeed, and it comes in the form of an extension to the RMAN recover command. Luckily the designers thought of Pluggable Databases as well, which is what this section covers; non-CDBs have the functionality available too.

The following example demonstrates the use of the “recover table” command. While connected to a PDB, additional tablespaces have been created, each containing a table plus a unique index. The scenario presents itself as follows:

USER1@PDB1> select table_name,tablespace_name from tabs;
 
TABLE_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
T_3                            PDB1_TBS_3
T_1                            PDB1_TBS_1
T_2                            PDB1_TBS_2

Each of these tables has been created using a “create table ... as select” command, with dba_objects as the source. To demonstrate the usefulness of the new feature a little chaos will now be created, simulating an inexperienced operator. The current SCN of the database has been noted as 1993594.

USER1@PDB1> truncate table t_3;
 
Table truncated.
 
USER1@PDB1> truncate table t_2;
 
Table truncated.
 
USER1@PDB1> drop table t_1 purge;
 
Table dropped.

Will we be able to get the data back as it was before? One possible command to get the tables back is shown here. It creates an auxiliary database instance using existing RMAN backups and restores it up to the SCN specified. A Data Pump job is started next to export the tables listed in the recover table clause and import them into the target database. To prevent an error message indicating that the tables to be recovered already exist, the example uses the remap table clause. Alternatively you could use the notableimport option, which only creates the Data Pump export file without importing it into the target database.

RMAN> recover table user1.t_1, user1.t_2, user1.t_3
2> of pluggable database pdb1
3> until scn 1993594
4> auxiliary destination '+DATA'
5> datapump destination '+DATA'
6> dump file 'user1.restore.dmp'
7> remap table user1.t_1:t_1_restore,user1.t_2:t_2_restore,user1.t_3:t_3_restore;

The command to recover tables has a few more options. If you are using a Fast Recovery Area, for example, then you do not need to specify the auxiliary destination; the temporary database files will be created in the FRA.

You are not limited to recovering entire tables; if you need finer granularity you can recover table partitions as well. In the above example the DBA decided to import the dump file into the tables t_{1,2,3}_restore, leaving the original tables untouched. Toward the end of the process you can clearly see the Data Pump activity:

Performing export of tables...
   EXPDP> Starting "SYS"."TSPITR_EXP_iafj_nopB":
   EXPDP> Estimate in progress using BLOCKS method...
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   EXPDP> Total estimation using BLOCKS method: 33 MB
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   EXPDP> . . exported "USER1"."T_1"                               8.540 MB   73754 rows
   EXPDP> . . exported "USER1"."T_2"                               8.540 MB   73753 rows
   EXPDP> . . exported "USER1"."T_3"                               8.540 MB   73753 rows
   EXPDP> Master table "SYS"."TSPITR_EXP_iafj_nopB" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TSPITR_EXP_iafj_nopB is:
   EXPDP>   +DATA/user1.restore.dmp
   EXPDP> Job "SYS"."TSPITR_EXP_iafj_nopB" successfully completed Sat Sep 21 22:41:23 2013 elapsed 0 00:00:33
Export completed

This dump file, which was deliberately created in ASM, is subsequently imported into the live environment and the auxiliary instance is removed. The end result is a success, as you can see for yourself:

USER1@PDB1> select count(1) from T_1_RESTORE union all
  2  select count(1) from T_2_RESTORE union all
  3  select count(1) from T_3_RESTORE union all
  4  select count(1) from t_2 union all
  5  select count(1) from t_3;
 
  COUNT(1)
----------
     73754
     73753
     73753
         0
         0
 
USER1@PDB1> select count(1) from t_1;
select count(1) from t_1
                     *
ERROR at line 1:
ORA-00942: table or view does not exist

After the import you can decide how you want to merge the contents of the restored tables with the live tables.

Taking RMAN backups

You read in the introduction that taking backups is an essential part of the daily routine. It is equally important that the backup return code is reported back so that failed backups can be re-run manually during the day, if required and possible. The backup frequency and the contents need to be defined by the owners of the database hosting service and presented to its users as a choice. There should not be too much choice so as not to confuse the users, and each option should be explained in simple terms understandable to a non-technical audience. The managers requesting a Pluggable Database most likely do not have a database background, nor do they need to have one. So instead of using cryptic-sounding technology terms such as “weekly level 0 backup on the weekend, then differential incremental backups every evening” you could say “a full backup is taken on the weekend with additional backups during the week, capturing each day’s changes individually” or similar. It is important for the person raising the request to understand the implications of the different backup techniques. If you decide to offer a logical backup of the PDB as an option, it must be made clear that there will be no point-in-time recovery, and that a completely empty PDB has to be created first. The last thing the managers of the hosting service need are outages caused by improper documentation of the backup techniques! Transparency and openness will greatly help the service gain acceptance.
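
As a minimal sketch of such an abstraction layer, in which the script name, the notification address, and the backup level are all assumptions, a wrapper could run the nightly backup and report RMAN's exit code to first line support:

#!/bin/bash
# daily_backup.sh: run the daily differential level 1 backup and report the outcome
rman target / <<EOF
backup incremental level 1 database plus archivelog;
delete noprompt obsolete;
EOF
rc=$?
if [ $rc -ne 0 ]; then
  # notify the operations team; the address is a placeholder
  echo "RMAN backup failed with exit code $rc" | mailx -s "backup failure on $(hostname)" dba-team@example.com
fi
exit $rc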

Considerations for the Recovery Catalog

Backup information can be stored either in the database’s control file or, alternatively, in a so-called catalog database. Although the use of a recovery catalog is not mandatory there are good reasons for using one. First of all, should you lose all controlfiles of a database due to a catastrophic failure, there is a good chance that you have lost the backup information with them. Recovery of the database is still perfectly possible but involves extra steps.

A Recovery Catalog is a schema in an Oracle database, initialized with the data structures required for RMAN to store information related to backups. To create the catalog you first need to create a new user in a database, and then connect to it from within the RMAN executable to create the schema. It makes sense to dedicate a tablespace to the RMAN catalog in that database. The catalog database should be considered part of the infrastructure and not be part of an operational database, for obvious reasons. Depending on your organization’s structure you could have a catalog database per region, or multiple databases per region and/or tier. Whichever way you decide, your catalog schema has to be in a dedicated database. Please do not forget to back up your catalog database at regular intervals!

Note that when you use RMAN in a Data Guard scenario and intend to take the backups from your standby database to reduce the load on the primary, the use of a catalog database is mandatory if you want to use these backups to restore or recover the primary database. Otherwise, how could the primary “know” about the backups taken on the standby database? Remember that backup information is always stored in the controlfile unless a catalog is used, and controlfiles are local to each database.

While many organizations might use a non-CDB for the recovery catalog, a CDB with multiple PDBs sounds like an interesting alternative.

Creating the catalog owner

The catalog owner is an arbitrary user account in a non-CDB database. Before creating it you should ensure that the user gets its own dedicated tablespace rather than using the database’s default. The size of the tablespace depends on the number of target databases and the amount of archived redo logs they produce. The catalog keeps track of every archived log, and if you go through a lot of them your catalog will grow accordingly. Obviously, the more backups per database you keep in the catalog, the larger the catalog.

To create the catalog owner, create the tablespace first, as shown in this example:

SQL> create tablespace rman_catalog_tbs datafile size 20G
 2   autoextend on next 1G maxsize 30G;
 
Tablespace created.

Next create the user, similar to this command:

SQL> create user rco identified by securePassword
  2  default tablespace rman_catalog_tbs
  3  temporary tablespace temp
  4  quota unlimited on rman_catalog_tbs;
 
User created.
 
SQL> grant recovery_catalog_owner to rco;
 
Grant succeeded.

This concludes the actions to be taken from a privileged account. The next step is to actually create the catalog.

Creating the catalog

The next step is the creation of the catalog itself. Exit SQL*Plus if you have not done so already and connect with rman using the “catalog” option. It is assumed that a TNS alias has already been defined for the recovery catalog database. The following example uses “rcat” as the TNS name.

[oracle@server1 ∼]$ rman catalog rco@rcat
 
Recovery Manager: Release 12.1.0.1.0 - Production on Sat Sep 21 23:20:40 2013
 
Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
 
recovery catalog database Password:
connected to recovery catalog database
 
RMAN>

Issue the “create catalog” command next to start the creation of the catalog, which will take a short while to complete. Oracle 11g introduced a new feature called the virtual private catalog. Such a virtual private catalog allows the main catalog owner (“rco” in the example) to grant access to specific objects in the base catalog. The idea is to allow for a stricter separation of duties. After all, in many enterprises dedicated teams are responsible for the operational support of databases, and that does not necessarily include the task of managing backup and recovery. The virtual catalog allows the base catalog owner to logically subdivide the main catalog into smaller units. So instead of connecting as the base catalog owner each time, you can create additional accounts in the database and grant them access to specific objects in the main catalog. The risk of these users accidentally causing damage to the main catalog is reduced, and access to databases is granted on a need-to-have basis only. The following examples however do not use virtual catalogs, but instead focus on a base catalog, which is probably the most widely used form of RMAN catalog.
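
Since the rco account created earlier has rman_catalog_tbs as its default tablespace, the catalog creation itself is a single command:

RMAN> create catalog;
 
recovery catalog created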

Registering databases in the catalog

When using an RMAN catalog database you should connect to the catalog database as well as the target database when taking backups. Before you can make use of the catalog, however, the target database(s) must be registered with it. Using the rman utility again, connect to the target database as well as to the catalog database to register the database as shown:

[oracle@server1 ∼]$ rman catalog rco@rcat target sys@cdb1
 
Recovery Manager: Release 12.1.0.1.0 - Production on Sat Sep 21 23:22:18 2013
 
Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
 
target database Password:
connected to target database: CDB1 (DBID=800340993)
recovery catalog database Password:
connected to recovery catalog database
 
RMAN> register database;
 
database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

The implicit resynchronization operation will ensure that existing backup information stored in the database’s controlfile is transferred into the recovery catalog.

Keeping controlfile and catalog in sync

Typically when you are taking backups you are connected to the recovery catalog database as well as the target database. In certain circumstances however, it might not be possible to establish a connection to the recovery catalog database, for any of the reasons DBAs are familiar with, such as network problems or maintenance. In other environments, you may find yourself generating lots and lots of redo log switches between backups. In both of these cases it might be necessary to manually resynchronize the catalog. This operation is simple and can be performed quickly by issuing the following command when connected to the target database and catalog:

RMAN> resync catalog;

As soon as you start paying attention to the RMAN output when using the catalog you will see that the resynchronization happens fairly frequently.

Configuring the RMAN environment

Recovery Manager options can either be configured globally or at runtime. Options supplied at runtime take precedence over those defined globally. To view the configuration options, connect to the target database and catalog and type the “show all” command as shown in Listing 11-1.

Listing 11-1.  Configuration options for Recovery Manager

RMAN> show all;
 
RMAN configuration parameters for database with db_unique_name CDB1 are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 8 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO
    '/u01/app/oracle/product/12.1.0.1/dbhome_1/dbs/snapcf_CDB1.f'; # default

The following sections will explain the most important of these options.

Explaining the retention policy

The backup retention policy determines when backups are considered obsolete by RMAN, and therefore eligible for removal. The first option is to keep a number of backups to satisfy a time window for recovery. The so-called recovery window is the window for which backups exist for a point in time recovery. A common case is to keep a week’s worth of backups available. The aim is to be able to perform point in time recoveries anytime between the current time and seven days before that time. If your level 0 backup is created halfway into the 7 days (maybe at day 3) then it will be necessary to keep a previous backup as well. Otherwise you would not be able to restore the database to day 1. Although a recovery window is conceptually simple to understand, it can become a headache for the backup and storage administrators since it is impossible to predict the backup volume. If, for example, you decide to take backups in addition to those planned in the backup strategy, then the projected data volume increases proportionally to the number of additional (read: unplanned) backups taken.

The use of a recovery window is not the default. To enable it you have to issue the following statement in RMAN:

RMAN> configure retention policy to RECOVERY WINDOW OF X DAYS;

Simply substitute X with the number of days you need. You should match the database’s control_file_record_keep_time initialization parameter with the recovery window, or set the parameter to a value greater than the RMAN policy.
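For example, assuming a 14-day recovery window, you could align the two settings as shown in this sketch; the values are illustrative only:

RMAN> configure retention policy to recovery window of 14 days;
 
SQL> alter system set control_file_record_keep_time = 14 scope=both;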

For a more deterministic backup volume another option is available. Instead of specifying a time window for when you want to be able to perform a point-in-time-recovery you could specify the number of backups you want to keep. The default RMAN retention policy is to have one backup available, which is not very realistic. Depending on your company’s requirements and enterprise Service Level Agreements you should increase the number of backups to a more sensible number. Your consolidation project’s product manager is the best person to ask for advice. To increase the number of backups to be preserved you need to issue the following command:

RMAN> configure retention policy to REDUNDANCY X;

As a direct result, X backups will be preserved; backup X+1 is considered obsolete by RMAN. If you are using a Fast Recovery Area then the retention policy can help you when space pressure arises. Careful though: unplanned backups can cause a really important backup to be declared obsolete and removed!

If backups (or flashback logs for that matter) are considered obsolete by the retention policy they can automatically be purged from the FRA should it come under space pressure. This can be quite useful when you need space for additional archived redo logs. On the other hand, it is something to be aware of. You can see the impact of changing the retention policy quite clearly in the view v$recovery_area_usage, column percent_space_reclaimable.
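A quick check against that view might look like this sketch:

SQL> select file_type, percent_space_used, percent_space_reclaimable
  2    from v$recovery_area_usage;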

If a backup falls outside the limits set by the retention policy then it will be considered obsolete. An obsolete backup can be removed by the media manager. In the context of RMAN it is important that the media manager lets RMAN manage the removal of backups. Removing backups outside of RMAN using operating system commands or—if stored on tape—letting the media manager remove them causes problems with RMAN as it does not know that a backup no longer exists. A backup that has been removed outside of RMAN is considered expired. Expired backups can become a problem when you need to rely on them to restore a database. Nothing is worse than being in the middle of a database restore operation when the backups you so desperately need are reported to be missing. The RMAN crosscheck command can be used at regular intervals to test if a backup is actually available. If it is not, it will be marked as expired in the catalog or control file.
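A typical housekeeping sequence, run while connected to the target (and the catalog if you use one), might look like the following sketch:

RMAN> crosscheck backup;
RMAN> crosscheck archivelog all;
RMAN> delete noprompt expired backup;
RMAN> report obsolete;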

Defining how and where backups should be stored by default

By default backups are stored on disk using the Fast Recovery Area if possible. The “CONFIGURE DEFAULT DEVICE TYPE TO DISK” setting in Listing 11-1 shows this. If you wish to change the default behavior and switch to tape backups, you can do so by issuing the “configure default device type to sbt” command. Bear in mind though that tape backups require additional configuration, both on the operating system as well as within Oracle. In most cases the system administrator will install the required software packages for interaction with the tape library. It often is the backup team’s responsibility to configure the media manager to allow communication between the database server and the tape library. Finally the database administrator will be given the necessary information to be supplied for the channel allocation. A channel in Recovery Manager is responsible for reading data into the server process’s PGA and writing it to the output device: disk or tape. Unlike in the first versions of RMAN you no longer have to explicitly allocate channels for a command to succeed. A well-balanced design however will require you to allocate channels to find a compromise between satisfying the backup window and throughput to the array or tape library. It is entirely possible to send more data down the pipe than the other end can digest!
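The exact channel configuration depends entirely on your media management software. The following is only a sketch with NetBackup-style placeholder values in the PARMS string; your backup team will supply the real settings for your media manager:

RMAN> configure default device type to sbt;
RMAN> configure device type sbt parallelism 2;
RMAN> configure channel device type sbt
2> parms 'ENV=(NB_ORA_CLIENT=server1,NB_ORA_POLICY=oracle_prod)';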

Generally speaking multiple channels will back your database up quicker, provided you are not overwhelming the server or the backup destination. During the initialization phase outstanding work is distributed to each channel. A great enhancement came with Oracle 11g, where RMAN can use multiple channels to work on a single file. Why is that useful? Using the default backup mode it was possible for one channel of a multi-channel backup to finish while another was still working on an enormous bigfile tablespace for quite a while. Using the then-new “section” keyword you can back up data files or the whole database in sections of a specific size. In the author’s opinion this makes the most sense with databases made up of many very large bigfile tablespaces or any other system with a mix of small and comparatively large data files. Alternatively you could back up individual data files using the section size clause as shown here. Two channels are manually allocated to back up datafile 1 in 250M chunks.

RMAN> run {
2> allocate channel ch1 device type disk;
3> allocate channel ch2 device type disk;
4> backup section size 250m datafile 1;
5> }
[...]
Starting backup at 21.09.2013 23:47
channel ch1: starting full datafile backup set
[...]
backing up blocks 1 through 25600 [...]
channel ch2: starting full datafile backup set
[...]
channel ch1: starting full datafile backup set
[...]
backing up blocks 25601 through 51200
[...]
channel ch2: starting full datafile backup set
[...]
backing up blocks 51201 through 76800 [...]
RMAN>

Oracle 12c introduced the ability to perform multi-section incremental backups which is very useful.
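A sketch of such a multi-section incremental backup is shown below; the section size of 1G is arbitrary:

RMAN> backup incremental level 1 section size 1g database;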

If you do not change the default option then backups will be taken as backup sets. Each backup set can comprise multiple backup pieces. Optionally you can choose to use binary compression of the backup sets. Depending on the content stored in the database, using (binary) compressed backup sets can greatly decrease the backup size.

So-called image copies are an alternative to backup sets. An image copy is an exact copy of the source file. For obvious reasons an image copy is the same size as the data file: an image copy of a completely empty 10G data file will require an additional 10G of disk space. Image copies can be useful in a scenario where you can afford a second copy of the database created as image copies in the FRA. The image copies can be rolled forward by applying incremental backups taken on the “live” data files. If the worst should happen, you can switch the database to the copy, apply the last remaining archived logs, and open the database from within the FRA.
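A common pattern based on image copies is the incrementally updated backup, where the copy in the FRA is rolled forward with every run. The following is a minimal sketch; the tag name img_upd is arbitrary. On the first execution the recover command has nothing to do and the backup command creates the initial image copy; subsequent runs apply the previous incremental and take a new one:

run {
  recover copy of database with tag 'img_upd';
  backup incremental level 1 for recover of copy with tag 'img_upd' database;
}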

Backup optimization

Using backup optimization is a way to conserve space on the backup media by skipping files that have been backed up previously while adhering to the retention policy. Using the configure backup optimization on command in RMAN enables backup optimization for subsequent backups. For files to be skipped they must be identical to a previously backed-up version, and that backup must still be valid under the retention policy. In practice you will find that read-only and offline tablespaces are most likely to be backed up only once in a recovery window. Assuming that the read-only tablespace TBS1 is already backed up on Monday in a recovery window of 7 days, it will not be backed up on Tuesday and the following days unless it changes. It will however be backed up again the following Monday, allowing the media manager to expire the tape with the backup taken a week earlier.

For a retention policy based on redundancy, a read-only or offline tablespace or an archived log is only skipped once one more backup than the configured redundancy already exists. If, for example, your retention policy is set to redundancy 5, identical files that have not changed will be skipped once six backups of them exist.

Archived redo logs can also be skipped from being backed up if they already exist in a backup set. The retention policy does not play a role when backing up archived logs with backup optimization. During testing it turned out that with backup optimization enabled, Oracle skips archived logs that have already been backed up once into an available backup set when you use the backup archivelog like or backup archivelog all commands. Assume for this example that backups exist for archived logs of thread 1, sequence 81 to 86. Additionally the archived logs have not been deleted from the FRA. Next you try to take another backup of these:

RMAN> backup archivelog all;
 
Starting backup at 22.09.2013 16:54
current log archived
using channel ORA_DISK_1
using channel ORA_DISK_2
skipping archived logs of thread 1 from sequence 81 to 86; already backed up
channel ORA_DISK_1: starting archived log backup set
[...]

You can see in the output that RMAN skips sequence numbers 81 to 86. If you specifically request a backup of archived logs they will still be taken. Continuing the previous example:

RMAN> backup archivelog sequence between 81 and 86;
 
Starting backup at 22.09.2013 16:58
channel c1: starting archived log backup set
channel c1: specifying archived log(s) in backup set
input archived log thread=1 sequence=81 RECID=89 STAMP=826760036
input archived log thread=1 sequence=82 RECID=90 STAMP=826821462
input archived log thread=1 sequence=83 RECID=91 STAMP=826822278
input archived log thread=1 sequence=84 RECID=92 STAMP=826822339
input archived log thread=1 sequence=85 RECID=93 STAMP=826822349
input archived log thread=1 sequence=86 RECID=94 STAMP=826822368
channel c1: starting piece 1 at 22.09.2013 16:58
channel c1: finished piece 1 at 22.09.2013 16:58
[...]
Finished backup at 22.09.2013 16:58

You can also use the “not backed up n times” clause to limit the number of archivelog backups, or add the “force” keyword to override backup optimization. If you ran the “backup archivelog sequence between 81 and 86” command three more times you would end up with at least four backups of the same logs, which might not be desirable.
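Two sketches illustrating these clauses are shown here:

RMAN> backup archivelog all not backed up 2 times;
RMAN> backup archivelog sequence between 81 and 86 force;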

Requesting automatic spfile and controlfile backups

For many situations it is very useful to have a recent copy of the controlfile or a server parameter file at hand. Luckily Oracle allows you to take automatic backups of the controlfile if you want. According to the documentation controlfile autobackups are taken by default in Container Databases. Instead of creating a little scriptlet and including the backup commands for server parameter file and controlfile each time, you can configure these to be backed up automatically using the following command:

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;

With this setting you will notice the following block after each backup:

Starting Control File and SPFILE Autobackup at 22.09.2013 16:58
piece handle=+RECO/CDB1/AUTOBACKUP/2013_09_22/s_826822683.260.826822683 comment=NONE
Finished Control File and SPFILE Autobackup at 22.09.2013 16:58
released channel: c1

Additionally Oracle will back up the control file after structural changes have occurred, such as adding tablespaces or online redo log groups, renaming files, and so on. In versions before Oracle 11.2 a line was added to the alert.log each time such a change occurred, and you ended up with lots of automatic controlfile backups taken immediately after the structural change happened. This proved to be inconvenient, especially if you consider massive releases to the database where data is reorganized and life cycle management starts. To mitigate this, one of Oracle’s MMON slave processes is now responsible for the creation of the controlfile autobackups. Instead of creating the controlfile autobackup immediately after each change of the database structure during an application release, the creation of the backup is deferred by a few minutes after the event. You will not see entries about controlfile autobackups in the alert.log anymore; instead you need to check the trace directory within the database’s Automatic Diagnostic Repository for files named ORACLE_SID_m000_OSPID.trc containing the information shown here:

*** 2013-09-22 17:46:06.432
*** SESSION ID:(10.23) 2013-09-22 17:46:06.432
*** CLIENT ID:() 2013-09-22 17:46:06.432
*** SERVICE NAME:(SYS$BACKGROUND) 2013-09-22 17:46:06.432
*** MODULE NAME:(MMON_SLAVE) 2013-09-22 17:46:06.432
*** ACTION NAME:(Autobackup Control File) 2013-09-22 17:46:06.432
  
Starting control autobackup
 
*** 2013-09-22 17:46:07.082
Control autobackup written to DISK device
        handle '/u01/fra/ORA12/autobackup/2013_09_22/o1_mf_n_826825566_93y7pgr3_.bkp'

In the above example lots of structural changes occurred in the time leading up to 17:46, but they finished at approximately 17:40. So do not worry if you do not see a trace file immediately after those structural changes.

The Oracle database accesses and modifies the controlfile constantly. Therefore, a way has to be found to get a consistent copy of the controlfile. The simple solution is to create a read-consistent view of the controlfile, called a snapshot controlfile. This process is completely transparent, and in most cases you do not need to modify the location of the snapshot controlfile unless you use Real Application Clusters.

image Note  The snapshot controlfile must be on shared storage for Real Application Clusters from 11.2.0.2 onwards.

If you are not on RAC and need to change the location, use the “configure snapshot controlfile name to” command to define the snapshot controlfile location.

In addition to changing the snapshot controlfile location you can also define a non-default controlfile autobackup location. The implication of changing the controlfile autobackup location is that you need to specify the exact location when you restore it later. If you use a FRA the default location for the controlfile autobackup is your Fast Recovery Area. The restore of a controlfile autobackup is explained later in the chapter in the Restore and Recovery section.

Technical aspects around RMAN backups

Backing up the production database directly is a very common strategy. In quiet periods, usually overnight for systems that do not run 24x7, a backup is initiated according to the backup schedule and internal Service Level Agreements. After completion or failure, a notification is sent to a central control system informing the database administrator of the outcome. In most cases the administrator is not interested in being paged about a successfully completed backup; a failure is more important and requires attention.

Recovery Manager allows you to take backups of all important components of the database, including data files, the control file, server parameter file etc. The Oracle binaries and log files such as the diagnostic destination, audit files, and Data Pump dump files are not backed up by RMAN. You need to request a standard file system backup for these files.

Recovery Manager has been enhanced to allow backups of individual Pluggable Databases, the root database and the whole CDB. In the author’s opinion a PDB backup should not be part of a planned backup strategy but rather an ad-hoc backup prior to important tasks and immediately after the PDB creation. The simple rationale behind this is that a full recovery of a Container Database will require a restore/recovery of the CDB$ROOT followed by a restore/recovery of all Pluggable Databases. In the heat of the battle with the outage a multi-stage recovery strategy can easily turn out to be too complex and more importantly too time consuming. Instead the whole CDB should be backed up. This operation is not different from any other backup of a database you have issued before.

image Note  The following examples assume that your database is in ARCHIVELOG mode. Where specified the use of the Flashback Database feature will be required too. Please remember that activating the Flashback Database feature requires another background process to write additional data to the Flashback logs. This can have a negative effect on performance and should only be enabled after testing or where appropriate.

Backing up the whole CDB

As you read before, there are multiple ways to back up a Container Database, the most common of which is most likely the hot backup of the database, including a backup of the controlfile—automatic or manual—followed by a backup of archived logs. On sites with high redo generation additional archived log backups are often scheduled. Archive log destinations can be multiplexed, either by using multiple distinct file system mount points or alternatively by sending files over the network using Redo Transport Services. A base backup for an incremental backup of the CDB is initiated as shown here:

RMAN> backup incremental level 0 database plus archivelog;

This is exactly the same command as you have used all these years to back up a database; there is no distinction between CDB and non-CDB when it comes to backing up everything. In the author’s example the backup goes straight to the Fast Recovery Area on disk. Following this, many sites opt to back the FRA up to tape using the backup recovery area command in RMAN. Do not forget to review redo generation and schedule additional archived log backups as and when needed.

Enhancements for Pluggable Databases

RMAN has seen many enhancements for Pluggable Databases, which was to be expected. This section covers the backup command, but numerous other commands have also been extended to take PDBs into account. To back up an individual PDB you need to issue the backup pluggable database command as shown in this example:

RMAN> backup pluggable database rcat_pdb;
 
Starting backup at 22.09.2013 17:34
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=28 device type=DISK
[...]
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00017
[...]
input datafile file number=00018 name=+DATA/CDB1/E6ED2028263F4B27E0436638A8C088A6/DATAFILE/rman_catalog_tbs.275.826759035
channel ORA_DISK_3: starting piece 1 at 22.09.2013 17:34
channel ORA_DISK_3: finished piece 1 at 22.09.2013 17:34
piece handle=+RECO/CDB1/E6[...]A6/BACKUPSET/2013_09_22/nnndf0_tag20130922t173446_0.312.826824889
tag=TAG20130922T173446 comment=NONE
[...]
 
Starting Control File and SPFILE Autobackup at 22.09.2013 17:35
piece handle=+RECO/CDB1/AUTOBACKUP/2013_09_22/s_826824903.282.826824905 comment=NONE
Finished Control File and SPFILE Autobackup at 22.09.2013 17:35
 
RMAN>

You can back up multiple Pluggable Databases in a comma-separated list if you like. In a similar way it is possible to back up only the root, using the backup database root command. Again, a full backup of the whole CDB is likely to be more efficient in many cases.
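For example, the following sketch backs up two of the PDBs used in this chapter in a single command, and then just the root:

RMAN> backup pluggable database pdb1, rcat_pdb;
RMAN> backup database root;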

Another command that can be used specifically with Pluggable Databases is backup validate. This will check for physical corruption of the files in the database, and can be instructed to check for logical corruption too. Physical corruption can be caused by write errors to disk or otherwise hard-to-detect problems such as file system glitches if you are not using ASM. The result is seen as soon as the database tries to access a physically corrupt block: it will report an error. Checks for logical corruption can be made by adding the check logical clause to the backup validate command. An example check for Pluggable Database PDB1 is shown here:

RMAN> backup validate check logical pluggable database pdb1;
 
Starting backup at 22.09.2013 19:22
[...]
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00009 name=+DATA/CDB1/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/sysaux.261.826623815
[...]
List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
9    OK     0              20027        80668           2130003
  File Name: +DATA/CDB1/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/sysaux.261.826623815
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              14246
  Index      0              7448
  Other      0              38919
[...]
 
Finished backup at 22.09.2013 19:22
 
RMAN>

Backup validate will not generate any backup set files, and any findings are recorded in v$database_block_corruption as well as the alert.log, in addition to what you see on screen. In the unfortunate event that more than one corruption is expected, you should raise the tolerance towards detected corruption using the set maxcorrupt command, which indicates how many corruptions can be reported before the command aborts.
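Here is a small sketch; the threshold of 100 corruptions is arbitrary, and set maxcorrupt has to be executed within a run block:

run {
  set maxcorrupt for datafile 9 to 100;
  backup validate check logical pluggable database pdb1;
}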

Restore and Recovery

The foundations of successful restore and recovery operations are manifold. Well-trained staff, good written procedures, and the ability to keep a cool head all help the database administrator through the difficulties of restoring a database. The previous section about backups covered another essential success factor: a usable backup to restore from.

There are many ways to restore a database—fully or in parts—and recover it. The procedure chosen depends on the failure that has occurred. Since version 11.1 Oracle provides a system for identifying failures, called the Data Recovery Advisor. In some situations it can be sufficient to invoke it to get to a solution. If, for example, a PDB “loses” a data file you will be informed by the database that a new failure has been detected. As soon as a user tries to access the data file, the failure becomes apparent. After the monitoring tool has picked the missing file up, you can connect to RMAN and list the failures:

RMAN> list failure;
 
Database Role: PRIMARY
 
List of Database Failures
=========================
 
Failure ID Priority Status    Time Detected    Summary
---------- -------- --------- ---------------- -------
8          HIGH     OPEN      22.09.2013 19:30 One or more non-system datafiles are missing

The message leaves little for interpretation: a data file is missing. Now let’s see what the advisor has to offer:

RMAN> advise failure 8;
 
Database Role: PRIMARY
 
List of Database Failures
=========================
 
Failure ID Priority Status    Time Detected    Summary
---------- -------- --------- ---------------- -------
8          HIGH     OPEN      22.09.2013 19:30 One or more non-system datafiles are missing
 
analyzing automatic repair options; this may take some time
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
analyzing automatic repair options complete
 
Mandatory Manual Actions
========================
no manual actions available
 
Optional Manual Actions
=======================
1. If file +DATA/CDB1/E6CDAD92377F667EE0436638A8C0DB6C/DATAFILE/pdb1_tbs_1.284.826754085
   was unintentionally renamed or moved, restore it
2. Automatic repairs may be available if you shutdown the database and restart it in mount mode
 
Automated Repair Options
========================
Option Repair Description
------ ------------------
1      Restore and recover datafile 13
  Strategy: The repair includes complete media recovery with no data loss
  Repair script: /u01/app/oracle/diag/rdbms/cdb1/CDB1/hm/reco_2181095021.hm
 
RMAN>

The repair script has the following contents:

# restore and recover datafile
sql 'PDB1' 'alter database datafile 13 offline';
restore ( datafile 13 );
recover datafile 13;
sql 'PDB1' 'alter database datafile 13 online';

In this particular case the suggested solution worked—but it was a trivial example. However, as with anything that is automated you are not freed from using common sense and experience to gauge the effectiveness of the solution. No machine logic is 100% free from error—admittedly the same can be said for human logic—but double-checking what the situation is and how it can be fixed is important. Some scenarios you will come across are too complex to be properly addressed by the recovery advisor, and it will not be able to present a solution.
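If, after reviewing the proposed repair, you decide to go with the advisor’s suggestion, you can preview and then execute it from within RMAN:

RMAN> repair failure preview;
RMAN> repair failure;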

The next sections deal with the recovery of a database from start to finish, detailing everything from the restore of the server parameter file all the way up to restoring data files, and explain how to get the various parts of the database back into service.

Restoring the server parameter file

Having to restore the server parameter file is very straightforward provided you followed the advice outlined earlier in the chapter and configured the controlfile autobackup in RMAN. Do not be lazy and operate the database without controlfile autobackups: one day they could well save your career. The controlfile autobackup will automatically create spfile and controlfile backups after each backup and after structural changes as explained in the section above. Additionally you can make your life very easy by using a Fast Recovery Area. With such a setup, it is a walk in the park to restore the spfile. In this example the database is shut down. In preparation for the restore a tiny parameter file helps a lot, releasing you from the need to have the database ID at hand. The file only needs to have the following minimal set of parameters, but yours may need more:

db_name=CDB1
db_recovery_file_dest    = '+RECO'
db_recovery_file_dest_size= 60g
cpu_count = 2
# not necessary but can help if the "dummy" instance won't start
# workarea_size_policy = auto
# pga_aggregate_target = 2G
# sga_target = 4G

Although strictly speaking you do not need an initialization file when starting the database for the restore of the spfile, you still may have to create one. Many Oracle parameters are calculated on the fly, and cpu_count is one that many others are based on. With today’s multi-core CPUs you can run into ORA-4031 errors if you do not set cpu_count to a low value, because the default SGA size used by the dummy instance is too small for such servers. If you want to be on the safe side set the SGA and PGA parameters as well.

Armed with this file you can start the database. After the database instance has been started in nomount mode, you can restore the spfile:

RMAN> restore spfile from autobackup;
 
Starting restore at 22.09.2013 19:56
 
recovery area destination: +RECO
database name (or database unique name) used for search: CDB1
channel c1: AUTOBACKUP +RECO/CDB1/AUTOBACKUP/2013_09_22/s_826832795.297.826832795 found in the recovery area
channel c1: looking for AUTOBACKUP on day: 20130922
channel c1: restoring spfile from AUTOBACKUP +RECO/CDB1/AUTOBACKUP/2013_09_22/s_826832795.297.826832795
channel c1: SPFILE restore from AUTOBACKUP complete
Finished restore at 22.09.2013 19:56

The file found by RMAN is indeed the latest spfile autobackup in the FRA. Be careful though: the spfile could be restored to $ORACLE_HOME/dbs rather than into ASM. If you are not using the Fast Recovery Area it is mandatory to set the dbid at the RMAN prompt, and you may have to specify the autobackup location as well in a run {} block. Such a block groups instructions together that are executed in the same context. The most common example is an incomplete recovery to a point in time: if the commands were not surrounded by the run {} block, setting the time to recover up to would not have any effect. Restart the instance to make use of the restored spfile once it is back in place.
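Without an FRA the sequence could look like the following sketch. The DBID is the one reported at connect time earlier in the chapter, and the autobackup format string is purely illustrative:

set dbid 800340993;
run {
  set controlfile autobackup format for device type disk to '/u01/backup/%F';
  restore spfile from autobackup maxdays 7;
}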

The easiest scenario for restoring the server parameter file is to use a recovery catalog. Since the database has not accessed the controlfile when in nomount mode it does not know about the backup information recorded in it. The recovery catalog however stores this information independently from the controlfile and is of great help. Start the database in nomount mode while connected to the target database and catalog. A stub-parameter file is not used in this example to show the alternative output:

[oracle@server1 ∼]$ rman catalog rco@rcat_pdb target sys@cdb1
 
Recovery Manager: Release 12.1.0.1.0 - Production on Sun Sep 22 21:01:01 2013
 
Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
 
target database Password:
connected to target database (not started)
recovery catalog database Password:
connected to recovery catalog database
 
RMAN> startup nomount
 
startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/12.1.0.1/dbhome_1/dbs/initCDB1.ora'
 
starting Oracle instance without parameter file for retrieval of spfile
Oracle instance started
 
Total System Global Area    1068937216 bytes
 
Fixed Size                     2296576 bytes
Variable Size                281019648 bytes
Database Buffers             780140544 bytes
Redo Buffers                   5480448 bytes
 
RMAN> run {
2> allocate channel c1 device type disk;
3> restore spfile;
4> }
 
allocated channel: c1
channel c1: SID=11 device type=DISK
 
Starting restore at 22.09.2013 21:01
 
channel c1: starting datafile backup set restore
channel c1: restoring SPFILE
output file name=/u01/app/oracle/product/12.1.0.1/dbhome_1/dbs/spfileCDB1.ora
channel c1: reading from backup piece +RECO/CDB1/AUTOBACKUP/2013_09_22/s_826832795.297.826832795
channel c1: piece handle=+RECO/CDB1/AUTOBACKUP/2013_09_22/s_826832795.297.826832795 tag=TAG20130922T194635
channel c1: restored backup piece 1
channel c1: restore complete, elapsed time: 00:00:05
Finished restore at 22.09.2013 21:01
released channel: c1
 
RMAN>

It does not get any easier than that. Should you be lucky enough to be on a database server with lots of CPUs then you might get an ORA-4031 error during the restore or channel allocation. In this case you have to create a minimalistic parameter file with a sufficiently sized SGA before starting the database as shown previously.

Restoring the control file

The loss of the controlfile(s) is an intrusive event, and unfortunate since a lot of useful information is contained in it. Thankfully the loss of a controlfile is no longer the tragedy it was in the 9i days, thanks to the “catalog” command introduced in Oracle 10g. It allows you to search directories for backups, which are then cataloged in the controlfile and therefore made available again for restore operations. The restore of the control file will still require you to open the database with the resetlogs option though! You may want to take a backup after the command completes.
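For example, after backup pieces have been moved or the controlfile has lost track of them, the following sketch re-catalogs everything RMAN finds below a given location; the paths shown are illustrative:

RMAN> catalog start with '/u01/backup/CDB1/';
RMAN> catalog start with '+RECO/CDB1/BACKUPSET/';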

Another great feature of recent Oracle versions is that the restore of the controlfile no longer requires editing the parameter or server parameter file when using Oracle Managed Files. In older releases Oracle complained that it could not restore a controlfile with an OMF name. Instead, one had to create a parameter file from the spfile, edit the parameter file (remove the *.control_files entry), and create an spfile from the pfile before starting the instance. These days are long over, and changes to the initialization parameters related to the control files are not normally necessary.

image Note  One of the reasons to modify initialization parameters is media failure and the loss of a mount point. You can read more about this scenario in the next section.

The process for restoring the control file is very similar to restoring the server parameter file, the main difference being the command: instead of using “restore spfile from autobackup” you use “restore controlfile from autobackup”. But before you start the restore process you should check whether you still have a usable multiplexed copy of the controlfile. The following query can help you find the other copy of the controlfile:

SYS@CDB$ROOT> select value from v$parameter where name = 'control_files';

Be careful not to copy the bad controlfile over the good one! If you are in a situation where the controlfile was not multiplexed, you have to restore it from a backup.

The example shown here assumes that all control files have been lost. Shut the database down if that has not already happened, and start it in nomount mode. If you are not using a catalog database, you may have to set the dbid after starting into the nomount state, followed by the restore controlfile from autobackup command.

Users with a recovery catalog are luckier; the command sequence to restore the controlfile is shown here, with some of the RMAN output removed for clarity:

RMAN> startup nomount
[...]
RMAN> restore controlfile;
 
Starting restore at 22.09.2013 21:05
[...]
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: reading from backup piece +RECO/CDB1/AUTOBACKUP/2013_09_22/s_826832795.297.826832795
channel ORA_DISK_1: piece handle=+RECO/CDB1/AUTOBACKUP/2013_09_22/s_826832795.297.826832795 tag=TAG20130922T194635
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
output file name=+DATA/CDB1/CONTROLFILE/current.262.826623553
Finished restore at 22.09.2013 21:

The next steps are identical for users with and without a recovery catalog database. You need to mount the database and recover it before opening it with the resetlogs option.
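In its simplest form the remaining sequence looks like this:

RMAN> alter database mount;
RMAN> recover database;
RMAN> alter database open resetlogs;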

Restoring databases

Restoring database files is nothing to be afraid of, even though many production database administrators will likely get a little bit nervous when they are tasked with partial or full restore operations. This is completely understandable, especially if one is not all too familiar with the database having a problem. There are two complementary strategies for how to make the database administrator more comfortable with the restore/recovery situation. The first is to ensure that the procedures to be taken are well documented and more importantly: understood. The second option to be taken by management is to perform restore and recover exercises throughout the year and on a regular schedule. You can read a little more about this thought later in the chapter.

The following sections explain the general process of restoring database files for Container Databases and Pluggable Databases.

Restoring a Container Database

The restore operation of a Container Database is not very different from restoring a non-CDB. In any case, any Pluggable Database in the CDB is affected, especially if incomplete recovery has to be performed. As with every database-wide restore operation you need to bring the database into mount mode before you can initiate the restore, followed by the recover command. The following little command restores and recovers the CDB named “CDB1”.

LET RMAN DO THE SYNTAX CHECKING FOR YOU!

If you are unsure if the syntax of your script is correct, you can use the checksyntax option in RMAN before trying the script in anger. Consider the following example of a script used for a full database restore and complete recovery:

[oracle@server1 (CDB1) ∼]$ cat recover.rman
shutdown immediate;
startup mount;
restore database;
recover database;
alter database open;

You can check this script for correctness without having to execute it first and run into problems while a part is in progress:

[oracle@server1 ∼]$ rman target / checksyntax @recover.rman

Recovery Manager: Release 12.1.0.1.0 - Production on Sun Sep 22 21:10:59 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CDB1 (DBID=800340993)

RMAN> shutdown immediate;
2> startup mount;
3> restore database;
4> recover database;
5> alter database open;
6>
7>
The cmdfile has no syntax errors

Recovery Manager complete.

The example is of course oversimplified, but it gives you the idea. More complex scripts with multiple run {} blocks can easily fail at runtime, checking for syntax errors is a good idea.

Using the exact same script as in the sidebar you can restore and recover your database. The remainder of the section will walk you through the output. First the database must be shut down and started into mount state:

[oracle@server1 ∼]$ rman target sys@cdb1 catalog rco@rcat_pdb @recover.rman
[...]
RMAN> shutdown immediate;
2> startup mount;
3> restore database;
4> recover database;
5> alter database open;
 
database closed
database dismounted
Oracle instance shut down
 
connected to target database (not started)
Oracle instance started
database mounted
 
Total System Global Area    1603411968 bytes
 
Fixed Size                     2288872 bytes
Variable Size                520094488 bytes
Database Buffers            1073741824 bytes
Redo Buffers                   7286784 bytes

Once it is mounted, the restore part of the script is executed. In the example the global RMAN settings defined 4 parallel RMAN channels.

Starting restore at 22.09.2013 21:16
[...]
channel ORA_DISK_3: restored backup piece 1
channel ORA_DISK_3: restore complete, elapsed time: 00:01:06
Finished restore at 22.09.2013 21:17

Once the database has been restored, it will be recovered and opened:

Starting recover at 22.09.2013 21:17
using channel ORA_DISK_1
using channel ORA_DISK_2
[...]
media recovery complete, elapsed time: 00:00:07
Finished recover at 22.09.2013 21:17

Finally the database is opened and ready for service again.

An incomplete recovery of the database is possible as well, in which case you need to wrap your commands into a so-called “run block”. The run block creates a context and the scope for operations to happen together. Consider the following example of an incomplete recovery of a CDB:

run {
        set until time="to_date('22.09.2013 21:00:00','dd.mm.yyyy hh24:mi:ss')";
        restore database;
        recover database;
        alter database open resetlogs;
}

Instead of relying on the defined NLS_DATE_FORMAT environment variable it has proved less ambiguous to use the to_date() function inside the set until clause. Instead of a human-readable date you can also use an SCN or a log sequence number if you have them documented. As you can also see, you need to open the database with the resetlogs option as with any incomplete recovery.

Restoring a Pluggable Database

One of the main contention points of the multi-schema/multi-tenancy approach in Oracle before 12c was the fact that it was very difficult from an operational point of view to perform a (point-in-time) restore of the Oracle database. Any such operation affected everyone in the database, and you can imagine how difficult it was to agree on a restore operation. Some database administrators tried to separate the different schemas as much as possible by creating dedicated tablespaces for each user, which in theory could have been recovered using Tablespace Point-In-Time Recovery, but even then this was a process that was difficult to coordinate.

Again, the notion of the Pluggable Database makes these tasks easier, allowing you to complete the recovery of individual PDBs without affecting others. Consider for example that a point in time recovery of PDB1 has been agreed between the product manager and all the other stakeholders. In this case you need to follow these steps to recover the PDB to a point in time. For any recovery operation, complete or incomplete, the PDB(s) in question must be closed, in other words in mount mode.

SYS@CDB$ROOT> select con_id,open_mode,name from v$pdbs;
 
    CON_ID OPEN_MODE  NAME
---------- ---------- ------------------------------
         2 READ ONLY  PDB$SEED
         3 MOUNTED    PDB1

Assume for the example that a user error has been identified at 10:00 AM, all other means of fixing the problem have already been exhausted, and all the approvers have given the database team the nod to restore the database to 09:00 AM, corresponding to SCN 2188563, which has been obtained by using the timestamp_to_scn() function. Table t_1 in the schema USER1 contains a column ID. Someone accidentally inserted the value “100000” into it sometime after SCN 2188563. The contents of the table—simplified for readability—are shown here:

SQL> select * from user1.t_1;
 
        ID
----------
    100000
         3
         1

The PDB Point-in-Time recovery is initiated in RMAN using the familiar set until clause followed by the magic triple “restore-recover-open resetlogs”:

RMAN> run {
2> set until scn = 2188563;
3> restore pluggable database pdb1;
4> recover pluggable database pdb1 auxiliary destination='xxx';
5> alter pluggable database pdb1 open resetlogs;
6> }

As you would have imagined, the point-in-time restore of a PDB is not all too different from a Tablespace Point-In-Time Recovery, which has been with Oracle for a while. The RMAN script above can be divided into two major steps. The first is simple and is a restore of the PDB’s data files to their respective locations. It is during the recover step that the similarity with Tablespace Point-In-Time Recovery becomes apparent. Although optional, you should really specify a location where the auxiliary instance is going to be created. A file system location worked well even with ASM; it could be a NAS mount or similar. You will notice the creation of an auxiliary instance on the host, for which you should ensure sufficient memory is available. The instance name is randomly chosen to avoid conflicts with existing Oracle instances; in the example above the auxiliary instance was named zgsF. You will see this output as part of the RMAN command output you entered.

Creating automatic instance, with SID='zgsF'
 
initialization parameters used for automatic instance:
db_name=CDB1
db_unique_name=mqol_pitr_pdb1_CDB1
compatible=12.1.0.0.0
db_block_size=8192
db_files=200
sga_target=1G
processes=80
diagnostic_dest=/u01/app/oracle
db_create_file_dest=/u01/temp
log_archive_dest_1='location=/u01/temp'
enable_pluggable_database=true
_clone_one_pdb_recovery=true
#No auxiliary parameter file used

Since a Fast Recovery Area was configured it has been used for the temporary destination for the restored files. At the end of the process the PDB was indeed restored to the SCN in question, and the table contained the correct values:

SQL> select * from user1.t_1;
 
        ID
----------
         1
         3

Full restore and recovery of a PDB is no different from restoring a non-CDB in principle. You connect to the CDB$ROOT before starting the commands. To restore a PDB you are required to add the “pluggable” keyword as shown here:

RMAN> select open_mode from v$pdbs where name = 'PDB1';
 
OPEN_MODE
----------
MOUNTED
 
RMAN> run {
2> allocate channel c1 device type disk;
3> restore pluggable database pdb1;
4> recover pluggable database pdb1;
5> alter pluggable database pdb1 open;
6> }
[...]
Starting restore at 23.09.2013 14:53
[...]
Finished restore at 23.09.2013 14:53
[...]
Starting recover at 23.09.2013 14:55
[...]
starting media recovery
[...]
media recovery complete, elapsed time: 00:01:01
 
Finished recover at 23.09.2013 14:56

You can also restore individual tablespaces and datafiles. The good news is that if you connect to the PDB directly you need not change your recovery scripts; almost everything works as before in the context of the PDB. There are a few commands that are not applicable in the context of the PDB; they are documented in the official “Backup and Recovery User’s Guide”, Chapter 4, in the section “Restrictions when Connected to a PDB”.

There are differences when you perform restore and recover operations from the context of the CDB. It is easier to explain using an example. You might remember the syntax from the introductory section about the Data Recovery Advisor. In this particular scenario tablespace PDB1_TBS_2 of Pluggable Database PDB1 needs to be restored and recovered. In the example the user is connected directly to the PDB.

RMAN> run {
2> allocate channel c1 device type disk;
3> alter tablespace PDB1_TBS_2 offline;
4> restore tablespace PDB1_TBS_2;
5> recover tablespace PDB1_TBS_2;
6> alter tablespace PDB1_TBS_2 online;
7> }
 
allocated channel: c1
channel c1: SID=30 device type=DISK
 
Statement processed
 
Starting restore at 22.09.2013 22:08
 
channel c1: starting datafile backup set restore
channel c1: specifying datafile(s) to restore from backup set
[...]
Finished restore at 22.09.2013 22:08
 
Starting recover at 22.09.2013 22:08
 
starting media recovery
 
archived log for thread 1 with sequence 92 is already on disk as file ...
[...]
media recovery complete, elapsed time: 00:00:01
Finished recover at 22.09.2013 22:08
 
Statement processed
released channel: c1

The output above shows that existing tablespace recovery scripts should work more or less unmodified in Pluggable Databases—but please test your scripts for your environment. You can also restore and recover PDB-related files from the CDB$ROOT; all you need to do is add a prefix to the tablespace name. The exact same example just shown looks as follows if you are connected to the root:

run {
allocate channel c1 device type disk;
sql 'PDB1' 'alter tablespace PDB1_TBS_2 offline';
restore tablespace pdb1:PDB1_TBS_2;
recover tablespace pdb1:PDB1_TBS_2;
sql 'PDB1' 'alter tablespace PDB1_TBS_2 online';
}

This will take the tablespace offline. Note the PDB1 prefix in the SQL command! The next command is to restore the tablespace. Note how the PDB1 prefix is used to tell RMAN to which PDB the command is to be applied. The recover command behaves identically. The last step is to take the tablespace online again.

Restoring to a different location

Sometimes it is necessary to restore parts of the database—either individual files or whole tablespaces—to a different location on the file system. This could be due to planned maintenance such as the decommissioning of a tray in the storage array, or media failure associated with a mount point. In either case, you can definitely restore to a new location, and it is not difficult to do so. Assume for example that the datafile 11 of PDB “pdb1” needs to be restored to disk group FASTDATA residing on a higher tier of storage, and it currently is stored in the generic +DATA disk group.

The commands to restore the file while connected to the PDB instead of the CDB$ROOT are shown in this example:

run {
 set newname for datafile 11 to '+fastdata';
 alter database datafile 11 offline;
 restore datafile 11;
 switch datafile all;
 recover datafile 11;
 alter database datafile 11 online;
}

The little RMAN script “moved” the datafile to the new disk group.

RMAN> select name from v$datafile where file# = 11;
 
NAME
--------------------------------------------------------------------------------
+FASTDATA/CDB1/E6CDAD92377F667EE0436638A8C0DB6C/DATAFILE/example.293.826875289

The process is identical for non-OMF/non ASM data files. In such a case you could set the new name of the datafile to the exact location where you want it to be restored.
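A sketch for a file system target is shown below; the path is purely illustrative and the remaining steps are identical to the ASM example above:

run {
 set newname for datafile 11 to '/u02/oradata/CDB1/pdb1/example01.dbf';
 alter database datafile 11 offline;
 restore datafile 11;
 switch datafile all;
 recover datafile 11;
 alter database datafile 11 online;
}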

The need for testing

With the basic steps of taking backups and restoring/recovering databases explained in the previous sections, there is one more item close to the author’s heart. Some sites appear to be happy to only take backups and never try restoring them, or they do not test the backups rigorously enough. It is the author’s firm belief that backup testing should be as automated as possible, and performed as often as possible. Setting aside a small quad-core system will be enough to restore random backups.

The benefit of this technique is that any faulty backups become apparent during a non-critical time. Many DBAs have anecdotes to tell of corrupted backups, or backups that failed silently without being noticed until the Big Day when the backup was needed for recovery. Why not try and find out before the crisis whether your backups work? The idea is simple: a small dedicated host is set aside, connected in a secure way to the backup storage location. This setup permits one random backup from the backup catalog to be restored on that host to attached SAN storage every day, to see if the restore and recovery complete successfully. There should be a log of activity, and any failures should be escalated to the backup team straight away to allow them to take corrective measures before the next scheduled backup. Urgent cases might even require them to trigger a manual backup.

If complete recovery testing is not possible then you should at least consider running RMAN’s validation commands against your backups to check for corruption. Although not the same as restoring the database from start to finish, it gives you a little more confidence that the media manager can find the backup pieces and did not introduce corruption.
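Two sketches of such validation runs are shown here; neither writes any backup pieces or touches the live data files:

RMAN> restore database validate;
RMAN> restore archivelog all validate;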

Introduction to Querying RMAN metadata

Recovery Manager has a very useful set of commands allowing you to view metadata related to backups and the target database itself. The primary commands for interrogating RMAN are list and report. For example, if you want to know which files make up a database, you can use the following command, which should be familiar to Oracle administrators. Connected to the CDB$ROOT you can expect output similar to this:

RMAN> report schema;
 
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    790      SYSTEM               ***     +DATA/CDB1/DATAFILE/system.265.826666771
3    720      SYSAUX               ***     +DATA/CDB1/DATAFILE/sysaux.277.826623415
4    135      UNDOTBS1             ***     +DATA/CDB1/DATAFILE/undotbs1.263.826623523
5    250      PDB$SEED:SYSTEM      ***     +DATA/CDB1/DD7[...]403C/DATAFILE/system.[...]6623565
6    5        USERS                ***     +DATA/CDB1/DATAFILE/users.264.826623523
7    590      PDB$SEED:SYSAUX      ***     +DATA/CDB1/DD7[...]403C/DATAFILE/sysaux.[...]623565
8    270      PDB1:SYSTEM          ***     +DATA/CDB1/DD7[...]F577/DATAFILE/system.[...].826617
9    630      PDB1:SYSAUX          ***     +DATA/CDB1/DD7[...]F577/DATAFILE/sysaux.261.8266215
10   16       PDB1:USERS           ***     +DATA/CDB1/DD7[...]F577/DATAFILE/users.258.826623817
11   358      PDB1:EXAMPLE         ***     +RECO/CDB1/E6C[...]DB6C/DATAFILE/example[...]826875289
13   20       PDB1:PDB1_TBS_1      ***     +DATA/CDB1/E6C[...]DB6C/DATAFILE/pdb1_tbs_1.[...]32151
14   20       PDB1:PDB1_TBS_2      ***     +DATA/CDB1/E6C[...]DB6C/DATAFILE/pdb1_tbs_2.[...].82089
15   20       PDB1:PDB1_TBS_3      ***     +DATA/CDB1/E6C[...]DB6C/DATAFILE/pdb1_tbs_3.[...]75409
 
List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    60       TEMP                 32767       +DATA/CDB1/TEMPFILE/temp.270.826623561
2    20       PDB$SEED:TEMP        32767       +DATA/CDB1/DD[...]403/DATAFILE/pdbseed_temp01.dbf
3    20       PDB1:TEMP            32767       +DATA/CDB1/DD7[...]F577/DATAFILE/pdb1_temp01.dbf

Oracle enhanced the report schema command by showing you information about the container each data file belongs to. In the listing above you can see data files from the root (CDB$ROOT), the seed database (PDB$SEED), and finally a user PDB (PDB1).

If you are unsure if all files of the database have been backed up and satisfy the retention policy, then you can use the report need backup clause to view any file that needs backing up. Assume for example that a new data file has been added to a database during a change window. Consequently, before the nightly backup has run, RMAN will report the file as a candidate for backups:

RMAN> report need backup;
 
RMAN retention policy will be applied to the command
RMAN retention policy is set to recovery window of 7 days
Report of files that must be backed up to satisfy 7 days recovery window
File Days  Name
---- ----- -----------------------------------------------------
21   0     +DATA/CDB1/E6C[...]436638A8C0DB6C/DATAFILE/pdb1_tbs4.299.826877399

Another useful report is the check for existing backups. If you want to know which backups of datafile 1 exist, you can query the repository as shown in this example:

RMAN> list backup of datafile 1;

List of Backup Sets
===================
 
[some other output omitted]
 
BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ----------------
105     Incr 0  665.39M    DISK        00:00:11     23.09.2013 08:24
  List of Datafiles in backup set 105
  File LV Type Ckp SCN    Ckp Time         Name
  ---- -- ---- ---------- ---------------- ----
  1    0  Incr 2210065    23.09.2013 08:24 +DATA/CDB1/DATAFILE/system.265.826666771
 
  Backup Set Copy #1 of backup set 105
  Device Type Elapsed Time Completion Time  Compressed Tag
  ----------- ------------ ---------------- ---------- ---
  DISK        00:00:11     23.09.2013 08:24 NO         TAG20130923T082446
 
    List of Backup Pieces for backup set 105 Copy #1
    BP Key  Pc# Status      Piece Name
    ------- --- ----------- ----------
    111     1   AVAILABLE   +RECO/CDB1/BACKUPSET/2013_09_23/nnndn0_tag201...46_0.345.826878287
    113     2   AVAILABLE   +RECO/CDB1/BACKUPSET/2013_09_23/nnndn0_tag20130...6_0.346.826878291
    114     3   AVAILABLE   +RECO/CDB1/BACKUPSET/2013_09_23/nnndn0_tag201...446_0.267.826878291
    112     4   AVAILABLE   +RECO/CDB1/BACKUPSET/2013_09_23/nnndn0_tag20130...6_0.296.826878293

The output shown here is the result of a multi-section backup of the data file. Four channels were allocated to back up the roughly 700 MB data file, resulting in four backup pieces within backup set 105. The listing also shows that the backup was an incremental level 0 backup, along with other items of interest such as the checkpoint SCN and the completion time.
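
For reference, a multi-section backup like this one could have been taken with commands along the following lines. The 200 MB section size and the parallelism setting are assumptions for this sketch; with a section size of 200 MB the roughly 700 MB file is split into four sections, which the four channels can then back up in parallel:

RMAN> configure device type disk parallelism 4;
RMAN> backup incremental level 0 section size 200m datafile 1;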

Refreshing an environment

Refreshing databases is part of the routine work database administrators have to go through. Once the process is understood and put into place, a database refresh is not too exciting and is a good candidate for a scripted approach. In recent years it has become more and more important to ensure that confidential information in the database (credit card data, for example) is masked before a test environment is handed over. A scripted approach should make use of scrambling technology that has been developed together with the business units. The time this takes is proportional to the number of applications making use of the database to be duplicated, a fact that will play a role when defining service level agreements with the business.
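
As a trivial illustration of the kind of statement such a scrambling step might execute after the refresh, the following overwrites all but the last four digits of a card number. Schema, table, and column names are made up for this example; the real masking rules have to be agreed with the business units:

SQL> update soe.payments set card_number = '************' || substr(card_number, -4);
SQL> commit;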

The RMAN “duplicate database” command has been available for a long time now and can be used for duplicating environments. The command can be employed for full database refreshes—test or UAT environments for example—or standby databases. The “from active database” option has been mentioned before in the introduction and the Data Guard Chapters 9 and 10. It allows the database administrator to duplicate a database even if RMAN does not have direct access to the source database backups on the destination host. Instead, the database is copied over the network, or—new with 12c—remote backup sets can be transferred. Active database duplication is not permitted at every site since it places an additional burden on the network. If possible, dedicated infrastructure should be used for active database duplication.
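
In its most basic form, an active duplication making use of the new 12c backup set transfer could be started as shown below. The connect identifiers are assumed to resolve to the source and the clone respectively, and the use of compression is optional:

RMAN> duplicate target database to CDB2
2> from active database
3> using compressed backupset;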

As you would expect, the RMAN duplicate command has been enhanced for Pluggable Databases as well: you can duplicate individual PDBs in addition to complete CDBs. Duplicating PDBs is particularly interesting when it comes to cloning them across hosts; cloning a PDB locally is covered in Chapter 7. The duplicate command is one of the most complex RMAN commands, and mastering it requires training and experience. The learning curve is smoothed somewhat by keeping a log file of the RMAN activities. You can either use the command line parameter “log” or simply redirect the output to a file and tail it. For interactive sessions the tee command can be very useful because, unlike the “log” option, it also displays the output on screen. Consider these two examples:

[oracle@server1 ~]$ rman @test.rman checksyntax log=rman.log
RMAN> 2> 3> 4> 5> 6> 7>
[oracle@server1 ~]$

Compare this to using output redirection and tee:

[oracle@server1 ~]$ rman @test.rman checksyntax 2>&1 | tee rman.log
 
Recovery Manager: Release 12.1.0.1.0 - Production on Mon Sep 23 07:23:21 2013
 
Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
 
RMAN> shutdown immediate;
2> startup mount;
3> restore database;
4> recover database;
5> alter database open;
6>
7>
The cmdfile has no syntax errors
 
Recovery Manager complete.

The log file contents are identical in both cases, of course. As you just read, an RMAN duplication can either be backup based or run from the active database. Backup-based duplication requires that RMAN has access to the backups. The easiest way to achieve this is via tape backups or NAS storage: if the destination server is connected to the same tape library or network share, the duplication is straightforward. Otherwise you have to transfer the backups to the destination host, or export them via NFS should the firewall permit it.
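
When the backups are visible on the destination host, for instance through an NFS mount, a targetless duplication pointing RMAN at the backup location is an option. The following is only a sketch; the host and directory names are made up, and RMAN is connected to the auxiliary instance only:

[oracle@server2 ~]$ rman auxiliary /

RMAN> duplicate database to CDB2
2> backup location '/nfs/rman/CDB1';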

The duplication from the active database is easier to put into place logistically: all you need is a good Oracle Net connection between the hosts. There are a few caveats to be aware of, though. First, there will be additional network traffic, which can impact regular database connections if you are not careful. Ideally the RMAN traffic would be kept separate from user traffic, but that might not be possible at every site. In addition, not every product manager will feel comfortable about the intrusion into “his” production database.

Technically speaking, it helps to statically register the database services with their respective listeners. The clone will be shut down during the duplication, and if its service is not registered with the listener you will get a “listener does not currently know of service name ...” error when trying to connect to and start it. It has also proved helpful to use the same SYS password for both databases, or even to copy the password file to the remote Oracle home. Once the listener.ora file has been changed, the listener needs to be reloaded. If you are using Oracle Restart or Real Application Clusters, edit the listener.ora file in the Grid home and use srvctl to communicate with the listener.

When you are using Oracle Managed Files—either with ASM or on a file system—you need not worry about file name conversions. In other cases you should edit the clone’s parameter file to include the db_file_name_convert and log_file_name_convert parameters. In addition, the initialization file of the CDB being created (the auxiliary instance) must have enable_pluggable_database set to true. The actual duplication is simple once these prerequisites are met, and there are notes on My Oracle Support explaining how to recover from a failed duplication.
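
The static registration and the clone's parameter file entries might look like the following sketch. The host names, the Oracle home, and the directory paths are placeholders and need to be adapted to the environment:

# excerpt from listener.ora on the destination host
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = CDB2)
      (SID_NAME = CDB2)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0.1/dbhome_1)
    )
  )

# relevant entries in the clone's (auxiliary) parameter file
*.enable_pluggable_database=true
# only required when not using Oracle Managed Files:
*.db_file_name_convert='/u01/oradata/CDB1','/u01/oradata/CDB2'
*.log_file_name_convert='/u01/oradata/CDB1','/u01/oradata/CDB2'

After reloading the listener (lsnrctl reload, or srvctl in Grid Infrastructure setups) you should connect to RMAN using service names rather than the bequeath protocol, as shown in the following code snippet: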

[oracle@server1 ~]$ rman target sys@cdb1 auxiliary sys@cdb2
 
Recovery Manager: Release 12.1.0.1.0 - Production on Mon Sep 23 08:31:24 2013
 
Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
 
target database Password:
connected to target database: CDB1 (DBID=800340993)
auxiliary database Password:
connected to auxiliary database: CDB2 (not mounted)
 
RMAN> duplicate target database to CDB2;
[...]
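
Should only a subset of the source CDB be required, the duplicate command accepts a list of PDBs to process. The following sketch assumes that only PDB1 is wanted in the clone; the root and the seed are always included:

RMAN> duplicate target database to CDB2 pluggable database PDB1;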

The list of options to the RMAN duplicate command is very long and testifies to the flexibility of this great tool. Once the duplication has been completed manually and proven to work, you should put it into a script to make it repeatable. More information, together with examples, can be found in Chapter 10 in the context of standby database duplication.
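
A very small wrapper to make the duplication repeatable could start out like the sketch below. The Oracle home, the connect identifiers, and the log destination are placeholders; the script assumes a Secure External Password Store (wallet) is configured so that no passwords have to be embedded, and the error handling is deliberately minimal:

#!/bin/bash
# minimal sketch of a scripted duplication - not production ready
export ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID=CDB2

LOG=/var/log/rman/duplicate_cdb2_$(date +%Y%m%d%H%M%S).log

# connect identifiers resolve credentials via the wallet
rman log="$LOG" <<EOF
connect target /@cdb1
connect auxiliary /@cdb2
duplicate target database to CDB2;
EOF

if [ $? -ne 0 ]; then
    echo "duplication failed, please check $LOG" >&2
    exit 1
fi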

Summary

Backup and Recovery are key elements of database operations. Nothing is as important as a backup and a proven and tested recovery strategy when it comes to a failure. And please remember that it is not a question of whether or not the failure occurs, but rather when it does so. Often the problem is not even related to your database. Even the most rigorously managed database with the best change control in the world is not immune to a bug in the storage controller’s firmware writing garbage into data files. Being able to restore service at any required moment is paramount, and provides peace of mind.

RMAN has been enhanced to deal with Pluggable Databases, and many backup and recovery scenarios involving PDBs are possible. The ability to perform a point-in-time recovery of an individual PDB is especially welcome to database administrators since it requires far less coordination between stakeholders: should a point-in-time recovery really become necessary, only the smallest possible number of users is affected by the outage. Depending on the size of the database, however, restoring and recovering it may become impractical or too time consuming, and another option has to be chosen. The most common option to consider for the Oracle database is Data Guard, which you read about in Chapters 9 and 10.
